A book by
William H. Calvin
UNIVERSITY OF WASHINGTON
SEATTLE, WASHINGTON 98195-1800 USA
The Cerebral Symphony
Seashore Reflections on the
Structure of Consciousness
Copyright ©1989 by William H. Calvin.
You may download this for personal reading but may not redistribute or archive without permission (exception: teachers should feel free to print out a chapter and photocopy it for students).
Shaping Up Consciousness
with a Darwinian Dance:
Emergence from the Subconscious
I think [that the need for a narrative] is absolutely primal. Children understand stories long before they understand trigonometry.
-- the English-American neurologist Oliver Sacks, 1987
A man is always a teller of tales, he lives surrounded by his stories and the stories of others, he sees everything that happens to him through them; and he tries to live his life as if he were recounting it.
-- the French philosopher Jean-Paul Sartre (1905-1980)
[A]bout the age of three... a child begins to show the ability to put together a narrative in coherent fashion and especially the capacity to recognize narratives, to judge their well-formedness. Children quickly become virtual Aristotelians, insisting upon any storyteller's observation of the "rules," upon proper beginnings, middles, and particularly ends. Narrative may be a special ability or competence that we learn, a certain subset of the general language code which, when mastered, allows us to summarize and retransmit narratives in other words and other languages, to transfer them into other media, while remaining recognizably faithful to the original narrative structure and message.
-- the American literary critic Peter Brooks, 1984
Getting acquainted with the new neighbors happens quickly when aided by one of several catalysts: dogs or small children. They seem to serve the same function in society as enzymes do in the body. Certainly, the three parents that I see talking out on the lawn next to Eel Pond look as if they come from varied parts of the globe (judging from appearances only, I'd say Israel, West Africa, and India). Their preschool-aged children are having a grand time serving each other at an impromptu tea party, using trays and glasses that escaped from the MBL cafeteria in the Swope Center.
Toddlers aren't quite old enough to enjoy make-believe, acting out roles such as waiter or doctor. But by the time they are about three years of age or a bit older, children are engaging in organized fantasy of a kind that you'll never see in a chimpanzee. We seem to have a much more elaborate sense of self; we can even pretend that we're in someone else's shoes.
THE SENSE OF SELF is usually approached rather differently from what I am about to propose -- the narrator as the outcome of the best-rated track of a stochastic sequencer. Studies in child development tend to identify a gradual development of a sense of self, as when children learn to recognize themselves in the mirror despite a smear of rouge on their noses. Sympathy and empathy emerge along the way. Being able to understand other people, and to understand how the world works more generally, are the late stages in a six-phase sequence proposed by Stanley Greenspan.
It starts with the newborn merely paying some attention to objects in the environment and gradually coming to identify the mother's face as different from others. But by about two months, the infant is "falling in love," smiling back at many people and reacting to them. Emotional dialogues, which develop between ten and eighteen months of age, are associated with the more complex personality that causes the child psychologists to start talking about a sense of self: desires are expressed, actions are initiated.
By eighteen to twenty-four months, one is seeing a much more organized self, with emotional thinking taking place: The child can play make-believe, act out others' feelings. Within another year, the child is engaged in organized fantasy, planning out tea parties, taking roles. And perhaps experiencing "dual consciousness," knowing that his own internal life is very different from how others respond to him. One pathological circumstance is the best-known cause of multiple "selves": Child abuse may cause multiple personalities to form, as the child attempts to minimize the pain.
Being able to predict others' actions is surely a basic feature of many animals' inborn abilities: Grazing animals probably can predict that, when a cat crouches and nervously twitches its tail, it is very likely to pounce next. Social animals such as cats and dogs make additional use of this elementary predicting-the-future ability, trying to avoid offending the powerful, trying to exploit others. But putting yourself in someone else's place would seem to involve something more -- acting at one remove, just as in playacting. This simulation ability suggests a sequencer in which what-would-I-do-in-that-situation scenarios can be constructed and judged against memory.

A child learns abstract terms gradually, after becoming familiar with concrete terms. Once he knows the word conscious he may in time deal with consciousness as an abstract term. A host of other words will instigate the gradual growth of such abstractions as honesty, furniture, mind, education, athletics, religion, time, corruption, space, or olfaction. Such concepts form the warp and woof on the loom of thinking as we daydream, spin yarns, read detective stories, argue with friends, or write term papers.... Once we have the needed abstract terms, thinking is free from the impediment of listing concrete examples.
-- the American psychologist David Ballin Klein (1897-1983)
DOGS AS SUMMER VISITORS are less common at MBL, as there is no space for them in most of the temporary housing. But families who rent cottages tend to arrive in station wagons packed with everything, including pets.
Though the year-round residents among the canine population roam the town, sometimes with their tennis balls at the ready, the vacationing dogs tend to be found at the end of a long rope, fastened to a tree in the front yard. One such dog is looking frustrated this morning, as a squirrel is cavorting just out of reach -- the dog is barking and straining at his leash, but the squirrel seems to understand that this dog isn't likely to chase him.
The dog could reach the squirrel if only he would backtrack, as his leash has become hung up, angled around a tree that keeps the dog from reaching the edge of the yard where the squirrel is. Dogs seldom figure out such restraints, though a sufficiently frantic dog might undo the restraint by chance on one of his random paths around the yard. A chimpanzee would take one look at the situation and head back around the tree, then race forward to reach the edge of the yard. It's a textbook example of what's meant by "insight." Leashes didn't figure in the evolution of either dog or chimp, but the chimp's intelligence is more "general purpose" and can deal with many novel situations. But how is this problem-solving done?
As R. B. Cattell notes, "The capacity of animals, even of higher apes, to perceive complex relationships is far below that of adult man, and seldom exceeds the level of a three-year old child, yet it is of the same nature as the intelligence of man. Blind trial-and-error behavior, the very antithesis of intelligence, is common in animal behavior." The fallacy, of course, is that trial and error in overt behavior is quite different from trial and error in the planning phase, done inside the head. Darwinism shows that the product of trial and error can be quite fancy, when shaped by many rounds of selection against memories.
THE TRIAL AND ERROR CONCEPT dates back even further than Lloyd Morgan in 1894, to whom it is often misattributed. Alexander Bain (1818-1903) first employed the phrase trial and error in a volume entitled The Senses and the Intellect, first published in Scotland in 1855. He considered the mastery of motor skills such as swimming: Through persistent effort, the swimmer stumbles upon the "happy combination" of required movements and can then proceed to practice them. Bain suggested that the swimmer needs a sense of the effect to be produced and a command of the elements, and that he then uses trial and error until the desired effect is actually produced. The neurologist Alf Brodal, himself recovering from a stroke, noted that:

Among the original multitude of more or less haphazard movements the correct ones are recognized as such by means of the sensory information they feed back to the central nervous system, and this information is later used in selecting the correct movements in further training.

The psychologist E. L. Thorndike more properly called it the method of trial, error, and accidental success; modern AI calls it by the euphemism "generate and test." Applied to our thought processes, the chance creation concept goes back much further, to the sixth century B.C. in Greece:

But as for certain truth, no man has known it,
Nor will he know it: neither of the gods,
Nor yet of all the things of which I speak.
And even if by chance he were to utter
The perfect truth, he would himself not know it,
For all is but a woven web of guesses.
-- Xenophanes (Karl Popper's translation)

But a woven web of guesses can be very powerful if selection operates on an adequate data base, and successive selection steps can shape up outcomes that come to be impressive. This was quickly realized in the aftermath of Darwin's success explaining new species with successive selection by environments, as in the 1880 analysis by the pioneer American psychologist William James:

...the new conceptions, emotions, and active tendencies which evolve are originally produced in the shape of random images, fancies, accidental outbirths of spontaneous variations in the functional activity of the excessively unstable human brain, which the outer environment simply confirms or refutes, preserves or destroys -- selects, in short, just as it selects morphological and social variations due to molecular accidents of an analogous sort.

and the 1881 analysis by his French contemporary, Paul Souriau:

We know how the series of our thoughts must end, but... it is evident that there is no way to begin except at random. Our mind takes up the first path that it finds open before it, perceives that it is a false route, retraces its steps and takes another direction... By a kind of artificial selection, we can... substantially perfect our own thought and make it more and more logical.

With sufficient experience, the brain comes to contain a model of the world, an idea suggested in 1943 by Kenneth Craik and championed after Craik's early demise by J. Z. Young. In The Nature of Explanation, Craik outlined a "hypothesis on the nature of thought," proposing that

the nervous system is... a calculating machine capable of modelling or paralleling external events.... If the organism carries a "small-scale model" of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilise the knowledge of past events in dealing with the future, and in every way to react in a much fuller, safer and more competent manner to the emergencies which face it.

These concepts are powerful, but they have lacked a framework with suitable building blocks.
A Darwin Machine now provides a framework for thinking about thought, indeed one that may be a reasonable first approximation to the actual brain machinery underlying thought. An intracerebral Darwin Machine need not try out one sequence at a time against memory; it may be able to try out dozens, if not hundreds, simultaneously, shape up new generations in milliseconds, and thus initiate insightful actions without overt trial and error. This massively parallel selection among stochastic sequences is more analogous to the ways of Darwinian evolutionary biology than to the "von Neumann machine" serial computer. Which is why I call it a Darwin Machine instead; it shapes up thoughts in milliseconds rather than millennia, and uses innocuous remembered environments rather than the noxious real-life ones. It may well create the uniquely human aspect of our consciousness.
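A deliberately simplified sketch of such multitrack shaping-up follows. Everything specific here is invented for illustration -- a six-letter alphabet of elementary schemas, and a single remembered "winner" standing in for the memories that grade candidates -- but the loop itself is the point: many tracks at once, graded against memory, the better-rated ones seeding mutated copies each generation:

```python
import random

MOVES = "abcdef"    # invented alphabet of elementary schemas
TARGET = "fedcba"   # stands in for memories of sequences that worked before

def grade(seq):
    """Grade a candidate against memory -- here, simply how many
    positions agree with a remembered winning sequence."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    """Copy a sequence with occasional copying errors."""
    return "".join(random.choice(MOVES) if random.random() < rate else c
                   for c in seq)

def darwin_machine(tracks=100, length=6, generations=50):
    """Many parallel tracks, each holding a candidate sequence.
    Each generation: grade all tracks, let the better-rated fifth
    seed mutated copies that fill all the tracks again."""
    population = ["".join(random.choice(MOVES) for _ in range(length))
                  for _ in range(tracks)]
    for _ in range(generations):
        population.sort(key=grade, reverse=True)
        survivors = population[: tracks // 5]   # selection
        population = [mutate(random.choice(survivors))  # amplification
                      for _ in range(tracks)]
    return max(population, key=grade)

best = darwin_machine()
print(best, grade(best))
```

After a few dozen "generations" of this kind, the best surviving string typically matches the remembered winner closely -- yet no single track ever performed overt trial and error in the world; all the fumbling happened off-line, in parallel.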
THE BRAIN'S CONSTRUCTION of chained memories and actions surely involves a serial buffer, perhaps even the same one used for planning out a ballistic sequence of movement commands such as "cock elbow, inhibit flexors, inhibit fingers." And certainly it could benefit from Variations-on-a-theme Mode, with its tree-like collection of alternatives. A useful metaphor is that candelabrum-shaped railroad marshalling yard: imagine that many trains are randomly constructed on the parallel tracks, but only the best is selected to be let loose on the "main track" of speech (or that silent speech we often call consciousness). "Best" is determined by memories of the fate of somewhat similar sequences in the past, and one presumes a series of selection steps that shape up candidates into increasingly more realistic sequences. Instead of remembering those free-throw and long-shot sequences, using them to grade the alternative trains, and then shaping them up with the Darwinian Two-Step, you remember more general movement sequences such as opening the refrigerator or changing television channels -- maybe even more general strings of concepts, such as planning college courses and a career.
For at least a century, it has been recognized that even the highest-known biological function, human thought, involves random generation of many alternatives and is only shaped up into something of quality by a series of selections. Like the elegant eyes and ears produced by biological randomness, the Darwin Machine's final product (whether sentence or scenario, algorithm or allegory) no longer appears random because of many millisecond-long generations of selection shaping up alternative sequences off-line.
WE ARE AWARE OF A TRAIN OF THOUGHT -- silent speech, as it were, talking to ourselves on familiar terms or maybe just imagining movements before carrying one out. I suggest that this conscious sense usually corresponds to the best of the trains shaped up by that ensemble of planning tracks, the one that was let out onto the main line but not necessarily all the way to the muscles (it may also be the most common string among the population of serial buffers). The other candidates are the immediate subconscious -- and there is surely a lot of activity going on there, because they are shuffling and mutating, trying new schemas for both sensory templates and movement programs, creating new sequences. And a lot of nonsense most of the time, but occasionally a winner. But how are the potential scenarios evaluated?
Consider a case when they aren't evaluated, at least not very well: the candidate strings that we call dreams. You see an occasional alternative sequence when the story-line takes a sudden turn, hinging on some feature common to both old and new story line. But then you may come back to the old story-line, though not always at the same point in the story, as if it had progressed some while you were paying attention to the new episode -- rather as if one had switched from one TV soap opera to another, and then back ten minutes later. Awake, the reality filter censors the nonsense much more and also keeps one from jumping around as much between scenarios -- but in sleep, and occasionally daydreaming, you can see several unrealistic story-lines simultaneously meandering via all the "channel-changing".
DISCERNING THE NATURE OF REALITY isn't easy, given a fantasy-prone mechanism at the center of things. With the creativity of a Darwin Machine, we also get the problems. Such as remembering what's real and what isn't. How do we know if something we remember isn't just one of our many fantasies, rather than a real happening?
Well, it is largely because our memory isn't very good. I'm not saying that we've been saved by sloppy design. On the contrary, the mechanism that regulates our ability to make a long-lasting record is likely quite sophisticated. I suspect that it's not unlike the kidney, where first you throw everything away (except for cells and large molecules), then you take back what you really want to keep from what's being discarded before it reaches the bladder.
Short-term memory fundamentally throws everything away -- but slowly. And this fading-memory mechanism is all we've got working for us during nocturnal dreams. The long-term memory mechanism, which normally takes those short-term memories and makes some of them more permanent, is turned off during dreaming. And so within an hour to a day, everything fades (including dreams) unless the long-term process comes along and strengthens it. Or unless it is put into short-term storage anew, as probably happens when you recall a fading short-term memory.
Those memory-regulating circuits that prevent "recording" during dreams are being reset as you awaken, so that longer-term recording becomes possible again within 10-20 seconds after you sit up straight (you have to watch out for short phone calls in the middle of the night: People often fail to remember them because they didn't stay awake long enough). Recalling dreams from short-term memory after awakening means that they get a second chance to make it into long-term memory before fading out. You then remember your recall rather than the original event.
If you don't recall it soon after awakening, it will disappear. Dreams aren't usually so entertaining that you want to keep a permanent record anyway. And their significance, contrary to Freud and others who seem to see them as our own personal Oracle speaking, seems "transparent," merely a hodgepodge of one's current concerns and old memories strung together somehow; like daytime thoughts, they sometimes achieve useful juxtapositions. Darwin Machine reasoning essentially says that our waking thoughts are just like our nocturnal dreams, except 1) we can sometimes remember them better; 2) much higher standards of reality-testing are used when shaping up a scenario; and 3) sometimes a movement plan is gated out into actual movements.
Actually doing something, rather than just reading about it, has long been recognized by educators as a much better way to make something stick. Copying out a phrase, or speaking it aloud, is a technique reinvented by many a student. When your learning strategies finally permit you to memorize something without moving your lips, to scan a page and effectively recall its contents, then you have tampered with one of your major mechanisms for separating illusion from reality: that actual movement is a prerequisite for memorizing. If you then regulate that recording mechanism poorly, you'll flirt with the extremes of believing the imagined and of being a slow learner. Learning to learn involves playing around at the boundary between illusion and reality, coming to cope with the middle ground -- where you spin scenarios without necessarily letting them pass from your short-term into your long-term memory.
It's a good thing that we can't readily recall our dreams; they're usually dimly remembered, if at all, and the mere "lack of force" of the memory tends to cue us that it was probably just a dream fragment rather than a real happening. But suppose something went wrong and your nocturnal dreams (and even the things you consciously thought about doing but didn't do) were easily recalled -- you could have trouble maintaining your sense of reality in the manner of the schizophrenic, who probably "hears" one of the planning tracks as an auditory hallucination.

The Latin verb cogito is derived, as St. Augustine tells us, from Latin words meaning to shake together, while the verb intelligo means to select among. The Romans, it seems, knew what they were talking about.
-- the American philosopher Daniel C. Dennett, 1978
EXERCISING THE SERIAL BUFFER might be the common feature of all those pleasant pastimes: Baseball, basketball, hammering, tennis, juggling, playing pianos, dancing jigs. Indeed, a serial buffer is even handy for games that aren't overtly ballistic: consider chess or contract bridge with their planning for the future and choosing between alternative sequences of moving pieces or laying down cards. Of course, that suggests more than one serial-planning buffer: You need at least two if you are to compare sequences, judge which is better. Having dozens of planning tracks would be even better.
If we have such buffers, are they as long as the ones in a telephone's memory? While machines can easily chain together thirteen digits, humans have problems with only half as many. There are some suggestions that the capacity of one important serial buffer in humans is a bit more than a half-dozen items, judging from phenomena such as digit span: Most of us can hold on to about seven digits in sequence long enough to repeat them back or dial a phone; anything longer and (unless part of it is very familiar to us already) we have to use crutches such as writing them down. In 1956, the psychologist George Miller wrote a now-famous paper entitled "The Magical Number Seven, Plus or Minus Two" on this theme; some people can only hold on to five digits, some can manage nine, but the average is about seven.
And so we subdivide the problem whenever approaching our seven-digit limit. Back in the days of American telephone exchanges having names rather than all-number prefixes, you just remembered "Murray Hill" and then four or five digits; nearly everyone can manage that so long as they know that Murray Hill translated into "68" (MU and not MH's "64"!). This mnemonic device served to break up the job of remembering a seven-digit-long string by fragmenting the first few digits off into a separate chunk called the prefix, recognized separately as a regular part of our vocabulary. The only way that most of us can manage longer digit sequences (it takes me at least thirteen digits to call Israel) is by chunking: remembering the international-call access number separately (011), the international dialing code for Israel separately (972), then the code for Jerusalem (2), and then the prefix (58) at Hebrew University, followed by the four-digit extension. That's eight chunks total, rather than the thirteen separate units "remembered" by the phone's memory.
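The arithmetic of chunking is easy to make concrete. In this sketch the grouping follows the dialing example above, but the four-digit extension (1234) is made up purely for illustration:

```python
SPAN = 7  # Miller's "magical number seven" working-memory limit

def items_to_hold(chunks):
    """Each familiar chunk costs one buffer slot, whether it is a
    single digit or a well-worn group like a country code."""
    return len(chunks)

digits = list("0119722581234")  # 13 separate digits (extension 1234 is invented)
chunked = ["011", "972", "2", "58",  # access code, country, city, prefix
           "1", "2", "3", "4"]       # unfamiliar extension: one slot per digit

assert "".join(chunked) == "".join(digits)  # same number, regrouped
print(len(digits), "digits ->", items_to_hold(chunked), "chunks")
```

Thirteen raw digits overflow a seven-slot buffer nearly twice over; regrouped into familiar chunks, the load drops to eight -- still one over the limit, which is why the unfamiliar extension digits are the part most of us must write down until they, too, become a single welded chunk.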
Similarly, in planning what to say next, we plan ahead no more than about a half-dozen words; while we're pronouncing those words, we make up the rest of the sentence -- one reason that it is said that we usually don't know how a sentence is going to end when we start it. If each word of a seven-word sentence stands for something very simple, like a digit, then the sentence itself cannot say much. But if each word stands for a whole concept and its many connotations, then a unique seven-word sentence can encompass much.

Each time that we define a new word, we are usually replacing a long phrase: recrudescence or relapse to stand in place of "the problem starting up all over again after a remission." This is chunking, and we do it mostly because of that seven-plus-or-minus-two limit on what we can work with at any one time in our serial buffer for speech.
The dreams of reason bring forth monsters. Francisco José de Goya (1746-1828)
Lost in a gloom of uninspired research. William Wordsworth (1770-1850)
Science is the record of dead religions. Oscar Wilde (1854-1900)
We are products of editing, not authorship. George Wald (1906- )
VARIATIONS-ON-A-THEME MODE would obviously be handy for generating all of those subconscious sequences that we see in our dreams, and presumably use in our daytime thoughts to good advantage. But how about "making up your mind," when you stop imagining and achieve closure? It took me years to realize, even after I recognized that serial buffers for throwing ought to be handy for talking and thinking, that the shaping up of a population of buffers, from widely varied to near-clones, is precisely analogous to the amplification step in the immune response, and to the allopatric speciation step in biological evolution. And it corresponds nicely to narrowing down the thoughts: not merely finding the best of the possibilities, but making that best one also the most common one, the string that takes up residence in many of the buffers by winning out in the Darwinian Two-Step competition.
The mere process of having to borrow helpers, to recruit that audience to help sing along with the choir, means that there will be a phase when there will be a lot of variation in the sequences sung by the various singers (the "first rehearsal phenomenon" known to all choirmasters). But unlike the choir where adherence to the music is the criterion for grading performance, each singer is graded against memories of past performances, each aspect of which is weighted by how appropriate it is to the present situation in which the singer finds himself. The more confident the singer is, the louder he sings and so the more he influences other singers to change their song and follow along. The population evolves, as the scattered group of novel individuals start coalescing into a synchronous chorus. The dominant version now becomes the one that won out in the competition, not the one written down on some pre-ordained sheet of music.
IF THIS IS CORRECT, then I was wrong when I said that consciousness was a much harder, more nebulous problem than the standard visual cognition problems such as imagining the front of a doll's house in your head, rotating it around so that you can "look into" its open rear wall, and then concentrating on the furnishings of a single room. That, as any computer programmer can tell you, is a big computational problem that must be done in a long series of time-consuming steps. Those three-year-old children at their tea party on the Swope patio might not be very good at that task, but in another five years or so, they'll be experts. Yet I don't think that such visualization abilities are anywhere near as important to our sense of self as scenario-spinning and its associated narrator of our life's story.
On my analysis, the narrator of our conscious experience arises from the current winner of a multitrack Darwin Machine competition. It isn't an explanation for everything that goes on in our head, but it is an explanation for that virtual executive that directs our attention, sometimes outward toward a real house, sometimes inward toward a remembered house or imagined doll's house, sometimes free-running to create our stream of consciousness. Directing sensory attention may seem unlike making movement plans, but the neural circuits seem analogous, all part of that frontal lobe circuitry used in making preparations for action.
And unlike Gilbert Ryle's exorcism of "the ghost in the machine" -- criticized because Ryle offered no alternative explanation for how he generated his own thoughts -- the Darwin Machine can account for much: for imagination, for generating a broad range of choices, narrowing them down, imagining again, and so creating more and more sophisticated thoughts in much the same way as the better-known biological evolution creates fancier and fancier species. The Darwin Machine theory accounts for how this explanation was generated, its sentences constructed and revised, and for how criticisms of the proposal can be listened to, analyzed, and amalgamated into a new view.
Indeed, one wonders if alternative explanations for thought will not simply turn out to be mechanistic equivalents to Darwin Machines, once reduced to such an elementary neurophysiological level. What else is there besides randomness for generating imagination, for innovating, for finding the best fits? That is not to say that explanations at other levels might not turn out to be more useful for some purposes, just as equation-solving algorithms are extremely handy for dealing with a restricted class of phenomena (when you learned long division, you learned an algorithm -- a routine procedure guaranteed to provide an answer). A directed search of an ordered list of possibilities, as in expert systems that attempt to diagnose a patient's disease by asking a series of key questions, may be far more efficient than randomly spinning hypotheses. But when we start talking of innovation, imagination, our own stream of thought, and how we initially arrive at an algorithm, we may well be talking Darwin Machines but simply in various "languages."
Many of the attempts to create "artificial intelligence" are simply efforts to apply computer programming to complex problems that humans often solve. And a logical framework is indeed the first thing to try, especially when limited in computing power. However, it has been observed that AI is good at doing what humans find difficult (championship chess), but inept at doing what humans find easy (touching our nose, navigating around obstacles, coming up with creative ideas). Some have suggested that AI's traditional fare is better called "artificial stupidity" in the same spirit as the advice to anxious beginners about computers: "It's just a stupid machine. It can't do anything except what you tell it to do -- if you make a mistake, it'll make the same mistake a thousand times faster." Can AI's paradoxical performance be because AI still has the recipe-following calculating-machine mentality?
But the artificial intelligentsia has lacked the computing machinery to implement a more natural approach. The parallel computers that have grown out of the serial von Neumann machines are now attaining much power despite their programming problems; furthermore, neurallike networks have given us a nontraditional approach to shaping up a mechanical tabula rasa. Implementing massively serial versions is only a matter of time, and some will result in Darwin Machines.
Considering that silicon-based computers can operate a million times faster than our neurons can, gaining a foothold on the problems of instructing massively parallel networks would mean that we might have computing power to rival that of the human brain within a few decades -- and this should help us figure out just how the human brain utilizes emergent properties. It should enable us to construct some fascinating machines. We will be able to hold an interesting conversation with some of them, and appreciate reality from some new points of view. We might even have to concede them some degree of consciousness.
WHAT WE SEQUENCE are often the schemas we call words. Many of our words are verbs, standing for actions: We can think run without running; run can be the movement melody but with the final pathways inhibited (just as they are inhibited during your nighttime dreams). Other cars on this train could be linkages such as is-a-member-of or is-connected-to or is-contained-in. Other words are nouns -- essentially sensory schemas for objects and people and such. "Bob runs" is a noun-verb sequence in our heads that we call a sentence. We can make fancier strings of the sensory and movement schemas: "Jack and Jill went up the hill."
The railroad marshaling yard metaphor serves to show how candidate sequences could be compared against memory and graded, the grades then compared between candidates (those with an engine at one end and a caboose at the other may have fared better earlier and so get higher grades). The interesting emergent properties would, however, arise from the mechanisms that allowed candidate sequences to be compared, both for content (which elements, in what order) and timing (when the element was gated out, analogous to variable spacing between the railroad cars -- as if each were on a leash of adjustable length).
Once you've got a buffer that is seven chunks long -- maybe just for the occasions demanding hammering -- then maybe you can use it for sentences in the spare time. Perhaps when you get dozens of serial buffers for the throwing occasions that demand them, you can also use them for other things, much of the time. That's the startling alternative to language as the evolutionary "cause" of our penchant for stringing things together. Language becomes just another secondary use -- maybe a more important one than making music, but no longer the raison d'être.
This isn't to say that language wasn't useful during the ice ages when hominids were being shaped up into humans -- in evolutionary arguments like this, you have to distinguish between the invention and the streamlining. Conversions of function, as when number-crunching digital computers also turned out to be handy for wristwatches and file cabinets, can sometimes give an enormous boost, accounting for most of the modern functionality, with natural selection accounting only for the frosting on the cake. Or the invention by sidestepping conversion may barely get things going (as when fish jawbones were incorporated into the middle ear) with functional streamlining dominating the modern version (the tiny stapes, hammer, and anvil bones that couple your eardrum to your inner ear). Feathers for flight may be an intermediate case, feathers for thermal insulation getting forelimbs up to the rather considerable threshold of flight; after the conversion of function, there was lots of streamlining, with even those specializations for slow flying that you see on the tips of crow wings.
Which kind of conversion will language turn out to be? Frosting, core -- or more like "marble cake" from a series of back-and-forth inventions and specializations? My guess is that it is going to prove very useful to understand the serial buffer specializations needed for precision throwing, that they will help us to appreciate the foundation from which our other serial abilities have arisen. But each secondary serialization will prove to have a life of its own, just as flying involves a lot more than you would have guessed from thermal underwear.

Language is incomplete and fragmentary, and merely registers a stage in the average advance beyond ape-mentality. But all men enjoy flashes of insight beyond meanings already stabilized in etymology and grammar.
-- the British philosopher Alfred North Whitehead (1861-1947)
The Cerebral Symphony (Bantam 1989) is my book on animal and human consciousness, using the setting of the Marine Biological Labs and Cape Cod. AVAILABILITY is limited.