A book by William H. Calvin, University of Washington, Seattle, Washington 98195-1800 USA
HOW BRAINS THINK
A Science Masters book (BasicBooks in the US; to be available in 12 translations), copyright ©1996 by William H. Calvin
Prospects for a Superhuman Intelligence
We have a life of the mind, and it is because of the dynamic darwinism of our mental lives that we can invent and daily reinvent ourselves. That life of the mind, a muddle at the beginning of this book, perhaps can now be imagined as a darwinian process, a high-level one, up near the top of those levels of stratified stability, that is capable of implementing Charles Raven's sense of self. Such depth and versatility could emerge from cerebral codes cloning away, competing for territory with other cerebral codes, and spinning out new variations. It's not a computer, at least not in our usual sense of a reliable machine that can faithfully repeat its actions. For most people, it's something new in the mechanistic realm, utterly without good analogies except for the other known darwinian processes.

But you can get a feeling for what it's like: looking down on the (virtually flattened) surface of the cortex would be like seeing a mosaic, a dynamic patchwork quilt with the patches never at rest. On closer inspection, each patch would appear like a wallpaper pattern that repeated, but each unit pattern would be dynamic, a twinkling spatiotemporal pattern rather than the traditional static one. The boundaries between adjacent patches of the quilt would sometimes be stable, sometimes moving, like a battlefront. Sometimes the unit patterns would fade from an area, the triangular arrays no longer synchronizing homologous points, and another unit pattern, unopposed, might quickly colonize the disorganized territory.

The current winner of that copying competition, the one with the biggest chorus vying for the attention of the output pathways, looks like a good candidate for what we term consciousness. Our shifting focus could be another clone coming to the fore. Our subconscious could be the other active patterns not currently dominant. No particular area in cortex is the center of consciousness for very long before another takes over.
The shifting mosaics also seem to provide a good candidate for intelligence. Among the spatiotemporal patterns that they shape up are the commands for novel movements. The evolving mosaics can discover new order à la Horace Barlow, since spatiotemporal patterns can vary to find new resonances. The mosaics can simulate actions in the real world à la Kenneth Craik, since the cerebral code for a movement schema can be judged against the resonances of long-term memories and the current sensory inputs. They have Jean Piaget's feature of handling situations in which it isn't obvious what to do next. And the mosaics have the open-ended aspect of our mental lives, as when we invent new levels of complexity, like crossword puzzles, or (as can be the case with poems) compound symbols to embody new levels of meaning. Because the cerebral codes can represent not just sensory and movement schemas but also ideas, we can imagine metaphors of quality emerging, can imagine how Coleridge's "willing suspension of disbelief" takes place when we enter into an imaginary realm of fiction.

Cerebral codes and darwinian processes were what I had in mind back at the beginning of this book, when I suggested that by its end the reader might be able to imagine a process that could result in consciousness and could operate fast enough to constitute a quick intelligence, good at guessing. This last chapter is about the implications of augmenting our brains and creating artificial approximations. But first, a sideways glance at competing styles of explanation.
Accordingly, the neuron level of description that provides the currently fashionable picture of the brain and mind is a mere shadow of the deeper level of cytoskeletal action, and it is at this deeper level where we must seek the physical basis of mind!

I'm sure some consciousness physicist or ecclesiastical neuroscientist will say, despite all the prior chapters, that a ghost in the machine is still necessary, leaping over those dozen intermediate levels of stratified stability to provide a guiding role for enigmatic quantum mechanics, down there in the microtubules of the neuron's cytoskeleton, where some immaterial spirit can interface with the brain's biological machinery. Actually, such theorists usually avoid the word spirit and say something about quantum fields. I'll be happy to compromise on mystery, using Dan Dennett's definition: a phenomenon that people don't know how to think about. All that the consciousness physicists have accomplished is the replacement of one mystery by another; so far, there are no parts and pieces of their explanations, the combinations of which can explain other things. And even if they improve on their combinations, any effects from synchronized microtubules would only provide us with another candidate for the unitary nature of our conscious experience, one that will have to compete in mechanistic detail with explanations at other levels, and which will have to compete with them for sheer coverage. The darwinian process, thus far, seems to have the right parts and pieces to explain the successes and malfunctions of important aspects of consciousness. I think we'll continue to see those tiresome debates in which one philosopher tries to hog-tie another philosopher (or at least paint him into a corner, brick him up with a wall of words) over the issue of whether a machine can ever truly understand anything, whether machines will ever be able to have our kind of consciousness.
Unfortunately, even if all scientists and philosophers agreed about how mind arises from brain, the complexity of the subject would still cause most people to abstract that complexity, using some simpler-to-imagine concept such as spirit. And perhaps to feel like the book reviewer who said (perhaps rhetorically), "Is the digital computer merely a simpler version of the human brain, as many theorists contend? If in fact it is, the implications are scary." Scary? Personally, I find ignorance scary. It has a substantial track record, what with demonic possession explaining mental illness, and all those witch trials and inquisitions. We badly need a metaphor more useful than a quantum-mechanical mystery; we need a metaphor that successfully bridges the gap between our perceived mental life and the neural mechanisms responsible for it. So far, we've actually needed two metaphors: a top-down metaphor that maps thoughts onto ensembles of neurons, and a bottom-up metaphor that accounts for how ideas emerge from those apparently chaotic neuron ensembles. But the neocortical Darwin Machine may well do for both metaphors, if it really is the creative mechanism within.
The neocortical Darwin Machine theory seems to me to be at the right level of explanation; it's not down in the synapse or cytoskeleton but up at the level of dynamics involving tens of thousands of neurons, generating the spatiotemporal patterns that are the precursors of movement, of behavior in the world outside the brain. Moreover, the theory is consistent with a lot of phenomena from a century of brain research, and it's testable (with some improvement in the spatial and temporal resolution of brain imaging or microelectrode arrays).

The darwinian process at its core is, at least among biologists, widely understood as a creative mechanism. We've had well over a century to realize just how powerful such copying competitions can be, when it comes to shaping up quality from random variations on a timescale of millennia. In recent decades, we've been able to see the same process operating on the timescale of days and weeks, as the immune response creates a better-fitting antibody. That this neocortical Darwin Machine can operate in milliseconds to minutes is only another change in scale; we should be able to carry over our understanding of what the darwinian process can accomplish from evolutionary biology and immunology to the timescale of thought and action.

It seems to me that the adoption of the William James viewpoint about our mental life is long overdue. But many people, including scientists, still hold to a cardboard view of darwinism as mere selective survival (Darwin, alas, contributed to the confusion by naming his theory for only the fifth of the six essentials, natural selection). What I hope I have done in this book is to pull together all of the essentials, as well as the accelerating aspects, of a darwinian process, and then describe a specific neural mechanism that could implement such a process in primate neocortex. As mechanism rather than improved metaphor, the best thing going for my neocortical Darwin Machine at this point is that the cortical neuroanatomy and the entrained-oscillator principles provide a nice fit to those six essentials of a darwinian process and the accelerating factors. Whether this is the most important process going on in the brain, or whether another process dominates consciousness and guessing, is hard to tell; there might be one without antecedents in biology or computer science, one we cannot yet imagine without first discovering some intermediate metaphors.
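For readers who want to see how little machinery a copying competition actually requires, here is a toy sketch in Python. It is not the book's neural mechanism (no triangular arrays or hexagonal mosaics here), just a minimal illustration of the darwinian essentials: a pattern, copying, occasional copying errors, competition for a limited workspace, environment-biased success, and next-generation variants centered on the current winners. The bit-string "code," the target pattern, and all parameter values are invented for the example.

```python
import random

# Toy darwinian copying competition (illustrative only).
#   pattern   - a bit string, standing in for a spatiotemporal cerebral code
#   TARGET    - stands in for the "resonance" with memories and sensory inputs
#   WORKSPACE - limited cortical territory: only so many copies can be active
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]
WORKSPACE = 30
MUTATION_RATE = 0.05

def fitness(pattern):
    """How well a variant resonates with the environment (bits matching TARGET)."""
    return sum(p == t for p, t in zip(pattern, TARGET))

def copy_with_errors(pattern):
    """Copying, with occasional variation (each bit flips with small probability)."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in pattern]

def generation(population):
    """Successful variants claim more of the limited workspace next generation."""
    weights = [fitness(p) + 1 for p in population]  # +1 so no variant has zero chance
    parents = random.choices(population, weights=weights, k=WORKSPACE)
    return [copy_with_errors(p) for p in parents]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(WORKSPACE)]
for _ in range(60):
    population = generation(population)

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", len(TARGET))
```

Nothing in the loop "designs" a good pattern; quality emerges because better-resonating variants are copied more often into the limited workspace, which is the whole point of the copying-competition argument.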
Indeed, I suspect that the process of managing the cloning competitions in order to avoid psychosis or stagnation is going to require its own metalevel of description. (I'm not thinking of a manager in the usual sense of the term but something like the way that global weather patterns are strongly influenced by jet streams or El Niño.) In psychological terminology, such management might be something like Raven's elusive personality, with its queer and satisfying aspirations and relapses and struggles.

Composite cerebral codes, shaped up by darwinian copying competitions, could explain much of our mental lives. Copying competitions suggest why we humans can get away with many more novel behaviors than other animals (we have offline evolution of nonstandard movement plans). They suggest how we can engage in analogical reasoning (relationships themselves can have codes that can compete). Because cerebral codes can be formed from pieces, you can imagine a unicorn and form a memory of it (bumps and ruts can reactivate the spatiotemporal code for unicorn). Best of all, a darwinian process provides a machine for metaphor: you can code relationships between relationships and shape them up into something of quality.
Such an explanation for intelligent consciousness gives us some insight into metaphor and operations in an imaginary realm. And it ought to tell us the kinships between thought and other mental operations. In the case of my proposed explanation, the ballistic movements and music seem intimately related to thought and language. We've already seen that the emphasis on novel sequences allows for nonlanguage natural selection that benefits language (and vice versa). Those overlaps between oral-facial sequencing and hand-arm sequencing (the apraxic aphasics) suggest that both are using the same neural machinery.

The important secondary use of the neocortical Darwin Machine would be for prospective movements other than the ballistic ones: planning on the timescale of seconds, hours, days, careers. It allows for trying out combinations, judging what's wrong with them, refining them, and so forth. Individuals who are good at this are known as intelligent.
Any explanation of intelligence also ought to give us some insight into other paths to intelligence than the ones followed by life on Earth: it ought, in short, to have implications for artificial intelligence (AI), for augmenting animal and human intelligence, and perhaps for finding signals from exotic intelligences. Not much can yet be said on the intelligence-elsewhere subject, but let me suggest an ethological perspective that may also help us think about AI and augmented intelligence.

An intelligence freed from the necessity of finding food and avoiding predators might (like artificial intelligence) not need to move, and so such an intelligence might well lack the what-happens-next orientation of animal intelligence. We solve movement problems, and only later, in both phylogeny and ontogeny, do we graduate to the pondering of more abstract problems, acting to preempt the future by guessing what lies ahead. There may be other ways in which high intelligence can be achieved, but up-from-movement is the paradigm we know about. It is, curiously, seldom mentioned in the literature of psychology or artificial intelligence. Though there is a long intellectual thread in brain research that emphasizes up-from-movement, it is far more common to see discussions of cognitive function that emphasize a passive observer who intellectually analyzes the sensory world. Contemplation of the world still dominates most approaches to the mind, and by itself it can be thoroughly misleading. The exploration of the person's world, with its constant guessing and intermittent decisions about what to do next, must be included in the way we intellectually frame the issues.

It is difficult to estimate how often high intelligence might emerge in evolutionary systems, both here on Earth and elsewhere in the universe. The main limitation, which makes most speculations meaningless, is our present ignorance about how dead ends in nature are overcome: it's easy to get trapped in an equilibrium, stuck in a rut. And then there's that continuity requirement: that, at each step along the way, the species remains stable enough not to self-destruct and competitive enough not to lose out to a streamlined specialist. Lists of intelligence attributes can, if carried far enough, be little better than stand-ins for giving a human IQ test to the other species (or computer).
But we now can say something about what kinds of physiological mechanisms would aid a brain in guessing right and discovering new order.
We could assess promising species (or artificial creations, or augmentation
schemes) by counting how many building blocks of intelligence each had
managed to assemble, and the number of stumbling blocks each had
managed to avoid. My current assessment list would emphasize:
A wide repertoire of movements, concepts such as words, and other tools. But even with a large vocabulary from cultural sharing over a long lifespan, high intelligence still needs additional elements in order to make novel combinations of quality. Chimps and bonobos may be missing a few elements, but they've got more of them than the present generation of AI programs. Another implication of my darwinian theory is that, even with all the elements, we would expect considerable variation in intelligence because of individual differences in implementing shortcuts, in finding the appropriate level of abstraction when using analogies, in processing speed, and in perseverance (more is not always better, as when boredom allows better variants a chance to develop).
Why aren't there more species with complex mental states? There is, of course, a fantasy nourished by the comic strips that attributes silent wisdom even to insects. But the apes would be the terror of Africa if they had even a tenth of our plan-ahead mental states. I suspect that the reason there aren't more highly intelligent species is that there's a hump to get over. And it's not just a Rubicon of brain size, or a body image that permits you to imitate others, or a dozen other beyond-the-apes improvements seen in the hominids. A little intelligence can be a dangerous thing, whether it be exotic, artificial, or human.

A beyond-the-apes intelligence must constantly navigate between twin hazards, just as the ancient mariners had to cope with a rock named Scylla and a whirlpool named Charybdis. The turbulence of dangerous innovation is the more obvious hazard. The peril posed by the rock is more subtle: business-as-usual conservatism ignores what the Red Queen explained to Alice about running to stay in the same place. For example, when you're running rapids in a small boat, the way you usually get pushed against a hard rock is when you fail to maintain your speed in the main channel. Intelligence, too, is in a race with its own byproducts. Foresight is our special form of running, essential for the intelligent stewardship that the evolutionary biologist Stephen Jay Gould warns is needed for longer-term survival: "We have become, by the power of a glorious evolutionary accident called intelligence, the stewards of life's continuity on earth. We did not ask for this role, but we cannot abjure it. We may not be suited to it, but here we are."
Speaking of other intelligent species, what about the ones we might create ourselves? A human mind embedded in silico, a copy of the detailed structure of one individual's brain, is a possibility which has received some attention. I suspect that such an immortality machine, the downloading of an individual's brain to a workalike computer, is unlikely to function well. Even if we neuroscientists should eventually solve the readout problem, as some physicists and computer scientists blithely assume can be done, I think that dementia, psychosis, and seizures are all too likely, unless the workalike circuits are well tuned (and stay that way). Just think of the human beings who suffer from obsessions and compulsions: "stuck in an endless loop" takes on new meaning when the asylum is timeless, no longer limited by the human life span. Who wants to gamble on that kind of Hell? Far better, I think, to recognize the essential nature of copying across successive generations, both of genes and memes. Richard Dawkins saw these copying relations clearly in The Selfish Gene, as did my friend, the futurist Thomas F. Mandel, in addressing his cyberspace friends while coping with his increasingly dim prospects of surviving lung cancer.
The first-order human workalike would, at a minimum, reason, categorize, and understand speech. I think that even the first-order workalike will be recognizably conscious, and likely as self-centered as we are. I don't mean trivial aspects of consciousness such as aware, awake, sensitive, and arousable. And I don't mean self-aware, which seems insignificant. Self-centered consciousness is, I think, going to be easy to achieve; getting it to contribute to intelligence will be harder. It seems to me that progressive generations of workalikes will come to acquire aspects of intelligent consciousness, such as steerable attention, mental rehearsal, language production guided by syntax, abstraction, imagery, subconscious processing, what-if planning, strategic decision making, and especially the narratives we humans tell ourselves while we are awake or dreaming.

Though running on principles closely analogous to those used in our brains, a workalike would be carefully engineered so that it could be rebooted when difficulties arose. I can already see one way of engineering this, using those darwinian essentials and the cortical wiring patterns that lead to triangular arrays and thus to hexagonal copying competitions among variants and hybrids. To the extent that such functions can operate far faster than they do in our own millisecond-scale brains, we'll see an aspect of superhuman abilities emerging from the workalike. If workalikes are able to achieve new levels of organization (meta-metaphors!), it may point the way to educating humans to make the same step. But that's the easy part: the extrapolation of existing trends in computing technology, AI, and the neuropsychological and neurophysiological understanding of human brains. Refining wisdom out of knowledge does, of course, take a lot longer than refining knowledge out of data. And there are at least three hard parts.
One hard part will be to make sure a superhuman intelligence fits into an ecology composed of animal species. Such as us. Especially us. That's because competition is most intense between closely related species, which is the reason that none of our Australopithecine and Homo erectus cousins are still around, the reason why only two omnivorous ape species have survived. (The other apes are vegetarians, with long guts to extract the meager calories from all that high-bulk food.) Our more immediate ancestors probably wiped out the other ape and hominid species as competitors, if climate change didn't do the job.
When automation rearrangements occur so gradually that no one starves, they are often beneficial. Everyone used to gather or hunt their own food, but agricultural technologies have gradually reduced the percentage of the population that farms to about 3 percent in the industrialized countries. And that's freed up many people to spend their time at other pursuits. The relative mix of those occupations changes over time, as in the shift from manufacturing jobs to service jobs in recent decades. A century ago, the two largest occupational groups in the developed countries were farm workers and household servants. Now they're a small fraction of the total. Workalikes, however, will displace even some of the more educated workers; those of poor education or below-average intelligence will have even bleaker prospects than they do now.

But there could be some significant benefits to humans: imagine a superhuman teaching machine as a teacher's assistant, one that could hold actual conversations with students, never got bored with drills, always remembered to provide the necessary variety to keep the students interested, could tailor the offerings to a student's particular needs, and could routinely scan for signs of such developmental disorders as dyslexia or poor attention span. Silicon superhumans could also apply their talents to teaching the next generation of superhumans, evolving still smarter ones just by variation and selection: after all, their star silicon pupil could be cloned. Each offspring would be educated somewhat differently thereafter. With varied experiences, some might acquire desirable traits, values such as sociability or concern for human welfare. Again, we could select the star pupil for cloning. Since the copying includes memories to date (that's the other advantage of intelligence in silico besides rebooting: you can include readout capabilities for use in cloning), experience would be cumulative and truly Lamarckian: the offspring wouldn't have to repeat the parent's mistakes.
Values are the second hard part: agreeing on them and implementing them in silico. The first-order workalikes will be just as amoral as our pets or a young child: just raw intelligence and language ability. They won't even come with the inherited qualities that make our pets safe to be around. We humans tend to be treated by our pets as either their mother (in the case of cats) or their pack leader (in the case of dogs); they defer to us. This cognitive confusion on their part allows us to benefit from their inborn social behaviors. We'll probably want something similar in our intelligent machines, but since they'll be a lot more capable of doing mischief than our pets are, we'll probably want real safeguards, something fancier than muzzles, leashes, and fences. How do we build in safeguards as abstract as Isaac Asimov's Laws of Robotics? My guess is that it will require a lot of star-pupil cloning, a process not unlike the domestication of the dog. This gradual evolution over many superhuman generations might partially substitute for biological inheritance at birth, perhaps minimizing any possible sociopathic tendencies in silicon superhumans and limiting their risk-taking behaviors. If that's true, it will take many decades to get from raw intelligence (that first-order workalike) to a safe-without-constant-supervision superhuman. The early models could be smart and talkative without being cautious or wise, a very risky combination, potentially sociopathic. They would have the top-end abilities without those abilities' well-tested evolutionary predecessors as the underpinning.
But the Luddites and saboteurs of the twenty-first century will be aided by some very basic features of human ethology, ones which played little role in nineteenth-century Europe. Groups try to distinguish themselves from others. Despite the benefits of a common language, most tribes in history have exaggerated linguistic differences with their neighbors, so as to tell friend from foe. You can be sure that the Turing Test will be in regular use, with people trying to determine whether a real human is at the other end of the phone line. Machines could be required to speak in a characteristic voice to dampen this anxiety, but that won't be enough to prevent us-and-them tensions.

Workalikes and superhumans could also be restricted to certain occupations. Their entry into other areas could be subject to an evaluation process that carefully tested a new model against a sample of real human society. When the potential for serious side effects is so great, and the rate of introduction is potentially rapid, we would be well advised to adopt procedures similar to how the FDA tests new drugs and medical instruments for efficacy, safety, and side effects. This would not slow the development of the technology so much as it would slow its widespread use, and allow for a retreat before too great a dependency developed. Workalikes might be restricted to a limited sphere of interactions; they might require stringent licensing to use the Internet or telephone networks. There might be a one-day-delay rule for distributing output from superhumans that had only a beginner's license, to address some of the program-trading hazards. For some fledgling workalikes, we might want the computer equivalent of a biohazard containment facility for lethal viruses.
The ways in which we could introduce caution, however, are constrained by the various drives that are leading us to this intelligence transition. Curiosity is my own primary motivation (how does intelligence come about?) and surely that of many computer scientists. But even if because-it-is-there curiosity were somehow hobbled (as various religions have attempted), other drives lead us in the same direction. I don't see realistic ways of buying time to make this superhuman transition at a more deliberate pace. And so the problems of superintelligent machines will simply need to be faced head-on in the next several decades, not somehow postponed by slowing technological progress.

Our civilization will, of course, be playing God in an ultimate sense of the phrase: evolving a greater intelligence than currently exists on earth. It behooves us to be a considerate creator, wise to the world and its fragile nature, sensitive to the need for stable footings that will prevent backsliding and keep that house of cards we call civilization from collapsing.