
COPY-AND-PASTE CITATION


William H. Calvin and Derek Bickerton, Lingua ex Machina: Reconciling Darwin and Chomsky with the human brain (MIT Press, 2000), chapter 15.  See also http://WilliamCalvin.com/LEM/LEMch15.htm

copyright ©2000 by William H. Calvin and Derek Bickerton

The real book is available from amazon.com or direct from MIT Press.

William H. Calvin

University of Washington
Seattle WA 98195-1800 USA



 

15

Darwin and Chomsky Together at Last

 

 

For four decades, the study of our species and its unique capacities has been delayed and disrupted by a controversy that should never have come about. The evidence that all species, including our own, developed through natural selection operating on genetic variation is so overwhelming, one might think that only those driven by some ideological agenda could fail to accept it.

The evidence that language is an innate, species-specific, biological attribute that must possess a specialized neural infrastructure is so overwhelming, one might think that only those driven by some ideological agenda could fail to accept that, too.

It ought to have been obvious that a blending of the Chomskyan approach and the Darwinian approach could go a long way towards explaining what we are and resolving the apparent paradox that has hag-ridden the human sciences for centuries: that we were produced by the same forces as other species, yet behave so differently from other species.

This book, then, is an attempt to bring peace to a conflict that should never have broken out in the first place, and to show, contrary to so much that has been written over the past few decades, that the approaches pioneered by Darwin and Chomsky are fully reconcilable. Before summing up that attempt, it may be worth while to consider how the conflict came about.


In doing so, we have to bear in mind that science is not the coldly objective, squeaky-clean process it's sometimes portrayed as being. It is a fallible process carried out by humans who, like all of us, are driven by passions and presuppositions that aren't always recognized for what they are. If we weren't ornery, cussed primates who wanted to be alpha animals, we wouldn't have the energy to drive good new ideas to the point of acceptance. If primates hadn't developed reciprocal altruism, we wouldn't form alliances to back up those good new ideas and do down the bad old ideas that stand in their path (and we wouldn't have language, so we wouldn't have any kind of science). And of course, in an alliance, you back your own guys against the other guys, come what may.

On top of all this, science has a history, and that history shapes the way in which issues are framed and helps determine the sides people take on those issues. It all began after the publication of The Origin of Species, when Darwin came into conflict with Max Müller, a leading linguist of his day. Taking on himself the mantle of Descartes, who had opined (philosophically filling out the Judeo-Christian framework) that humans and animals were irrevocably distinct, Müller declared language to be the Rubicon that "no brute will dare to cross." Darwin, on the other hand, declared in correspondence with Müller that one "fully convinced, as I am, that man is descended from some lower animal, is almost forced to believe a priori that articulate language has developed from inarticulate cries." In response, Müller derided what he termed Darwin's "bow-wow" and "pooh-pooh" theories of the origin of language, and his followers were successful in persuading the Linguistic Society of Paris to ban all presentations on language evolution from its meetings and publications.

The Paris ban is defensible in that it saved the world from a great deal of half-baked speculation. The better part of a century would elapse before people knew enough about language, human ancestry, and the brain to begin to make halfway intelligent guesses about how language might have evolved. The broader implications of the ban, however, were less fortunate. Lines had been drawn in the sand, and those lines would largely determine how linguists and evolutionists would interact for decades to come.


The irony is that Darwin himself might not have been opposed to the idea that language was a kind of instinct. Throughout his life, he was moving from early sensationalism and Lamarckian thinking toward the idea of species-specific behaviors as the consequence of instincts that, in turn, had to have been derived (somehow) from natural selection.

But Darwin, strong at both ends of the intellectual spectrum -- on empirical and comparative studies, and on overarching, metatheoretical conceptions -- was always weaker on the middle ground, and especially on the mechanisms that would underpin the processes he so presciently described. Perhaps the most poignant image of scientific might-have-beens is Mendel's groundbreaking paper, which would have solved Darwin's deepest problems, moldering for sixteen years in his library, unread, its leaves not even cut.

If Darwin had read Mendel and incorporated Mendelian genetics into his theory, that theory and the physiological approach to mental breakdown spearheaded by Emil Kraepelin could have merged to form a true science of human behavior and avoided the wasteful detours into behaviorism, Freudian psychology, and cultural anthropology that were to mark the first half of the twentieth century. Instead, the evolutionary movement got sidetracked into eugenics, became marginalized, and remained peripheral to studies of human nature up to and beyond the mid-century. Throughout the behavioral sciences, it was as if Darwin had never been. And, to add insult to injury, the Nazis hijacked eugenics and made sure that, for decades to come, belief in innate characteristics would tar its holders with a neofascist brush. In the mid-century consensus, language formed part of culture, the human mind at birth was a tabula rasa, and cultural evolution had somehow projected our species out of the domain of biology altogether.

Ironically, the first serious blow against this consensus was struck not by some embattled evolutionary biologist or geneticist, but by a scholar who (the first of several poor PR moves) chose to fight under the improbable banner of Descartes. Noam Chomsky's brilliant review of Skinner's Verbal Behavior exposed the weak intellectual underbelly of the consensus and opened a breach, into which attacks from a variety of disciplines soon began to pour. But, given his choice of Descartes as sponsor, Chomsky's approach looked, from the evolutionist trenches, less like an assault on general learning theory than a reaffirmation of human separateness. Quickly the debate veered back to those lines in the sand drawn by Darwin and Müller almost a century before.

These military metaphors should not be mistaken for some authorial attempt to jazz up the issues. For Chomsky, and for quite a few others on both sides, this was indeed war, a war to be fought with ideological fervor and with no quarter given or taken. Chomsky in particular perceived himself as an isolated fighter, lumping most positions other than his own into an empiricist-continuist Goliath against whom he could play righteous, stone-slinging David. His combination of an insistence on the biological nature of language with a refusal to look at the origins of that nature -- and his blanket statements about the futility of any such enterprise -- turned off many in the evolutionary community who might otherwise have been supportive. Others, more hard-nosed, responded in kind and denounced Chomsky as a spinner of absurd and irrelevant theories wholly devoid of empirical support. The lines in the sand became formidable entrenchments, behind one or other of which most workers in the behavioral sciences felt obliged to join up.

In the murky light of this all too real world of modern science, plain and obvious facts -- like that Darwinism has to be right and Chomskyism has to be right -- tend to look a whole lot less obvious than they otherwise would. Ignore all the warts of either one of them, focus on all the warts of the other, and anyone might conclude that one is right, the other wrong, and never the twain shall meet. Hopefully this book has shown why such a conclusion is misguided, and these last few pages have shown why, despite all this, so many have followed that misguided path.


Peace comes, of course, at a price. No right-minded person would expect that two separate lines of thought, conceived for totally different purposes and developed on totally different lines, would just dovetail neatly together, with no need for trimming and fitting. What we hope to show is that such trimming and fitting, while it might seem a threat to dearly held beliefs on both sides, does not in fact require either side to cede any significant territory. Let's just summarize the process and then see what, if anything, anyone has to give up.

What we have described is a perfectly legitimate evolutionary process, one that has time and again fashioned apparent novelties out of long-established organs and faculties. Reciprocal altruism, great-grandfather of so much we would want to preserve, started out as pure selfishness, with just a touch of foresight. If you scratch my back, I'll scratch yours -- but if you let me scratch yours without returning the favor, I'll eventually dump you and find someone more accommodating. Keeping track of the interchanges, making sure you weren't giving much more than you were getting, became central to the social life of many primate species. And the only way one could avoid being short-changed was to set up abstract categories labeled "Giver", "Given", "Receiver" (or agent, theme, goal in linguistic terminology) and store an event in memory in such a way that one could recover, automatically, the occupant of any given role.
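
As a rough computational illustration of what such role-tagged event memory buys, here is a minimal sketch in Python. The field names, the sample events, and the bookkeeping function are our own illustrative assumptions, not anything proposed in the book.

from dataclasses import dataclass

@dataclass
class Event:
    action: str   # e.g. "share-food", "groom"
    agent: str    # the Giver: who performed the action
    theme: str    # the Given: what was transferred or done
    goal: str     # the Receiver: who benefited

# A tiny episodic memory of reciprocal exchanges (hypothetical individuals A, B, C).
memory = [
    Event("share-food", agent="A", theme="meat", goal="B"),
    Event("groom", agent="B", theme="grooming", goal="A"),
    Event("share-food", agent="A", theme="meat", goal="C"),
]

def favors_owed(creditor: str, debtor: str) -> int:
    """Favors 'creditor' has done for 'debtor', minus favors returned --
    the bookkeeping that keeps reciprocal altruism from being exploited."""
    given = sum(1 for e in memory if e.agent == creditor and e.goal == debtor)
    returned = sum(1 for e in memory if e.agent == debtor and e.goal == creditor)
    return given - returned

print(favors_owed("A", "B"))  # 0: B has already returned A's favor
print(favors_owed("A", "C"))  # 1: C still owes A one

Because each event is stored with its roles labeled, "who gave what to whom" can be recovered automatically; a record of who was merely present, with no role tags, would not support that query.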

Without some such way to structure events, our ancestors might have stumbled into protolanguage but could never have created true language. There may be symbols, but strings of symbols without structure quickly become too ambiguous for real time analysis and response.


And the symbols themselves? Darwinians may not even have to entirely jettison Darwin's belief that "articulate language developed from inarticulate cries." Although cries and words have too many differences -- including even the acoustic wavebands they use -- for a simple, straight-line continuist theory to work, a few elements of the original call system, linked with gestures, pointing, and other communicative tricks, might have played a part in the early forms of protolanguage.

Any evolutionary account of a new trait has to provide at least two things: a plausible selective pressure favoring the trait, and a degree of inheritable genetic variation for it to work on. With regard to the emergence of protolanguage, we have chosen a selective pressure from extractive foraging rather than the (recently more popular) social intelligence of primates. In the real world, food comes first, socializing second. And most theory-of-mind, Machiavellian-intelligence accounts extrapolate from modern chimp and bonobo societies rather than realistically reconstructing the hominid situation, which was subject to much more stringent environmental pressures than affect modern tropical apes. After all, the only animals other than ourselves to have anything that can transmit variable factual information, as language can, are bees, and bees, like our ancestors, are extractive foragers and use their language to help in their extractive foraging.

The requisite variability, in the earliest stage of protolanguage, would have involved the capacity to recruit one's fellows, by whatever combination of gesture and sound, to make particular foraging choices (being led to a relatively unguarded carcass only a few hours old, for example, rather than to some decaying remains ringed by aggressive scavengers). Thus not merely the power to communicate but some skill in handling the act would have been selected for at the same time, leading to increments in social intelligence as well as communicative ability.

Once protolanguage was established, the theme tagging from the social calculus could have been mapped onto it to give structured utterances immediately, but for one thing. To create a structured utterance requires that neural signals be transmitted over long distances within the brain, relayed through many points, and merged with or added to other signals without losing their coherence. If the brain's capacity to sustain a coherent message is limited, then it's a more reliable strategy to send out symbols for utterance one at a time, rather than first combining them into a single structured message. True, that limits you to very short utterances (four or five symbols maximum, preferably fewer), but why would you need more than that for exchanging information about food sources (or even a bit of basic gossip)?

Accordingly, the further development of language had to await two developments: an increase in the number of available neurons and improved connections between the various parts of the brain involved in language. Protolanguage itself, in the beginning at least, probably didn't serve as a direct selective pressure toward increased brain size. Initially at least, such things as aimed throwing and percussive hammering may have increased the number of "spare" neurons (that is, neurons that could be co-opted on occasion for linguistic purposes). Aimed throwing, because of its long growth curve (limited only by the power of human arm muscles), may have been particularly effective in this respect (there is no such thing as throwing too far or too accurately). But as time went by, protolanguage must have played an increasing role in brain growth. The people who believe a complicated social life caused language have the right steps in the wrong order. Language of any kind, however primitive, would have made social life enormously more complicated. Once lying and tale-bearing became possible, you had a lot more information to store and you had to be able to figure out how accurate it was if you weren't going to be a perpetual patsy in your buddies' intrigues. Social life plus protolanguage was like a muscle-building machine for the brain. And above it all was the attraction of having as a mate someone who got words out (however few) quickly, clearly, and appropriately -- as opposed to someone who coughed, hesitated, and said duh-duh-duh.

So you may be able to knock down a running rabbit at thirty yards. You may be able to con everyone in your group into believing what a nice guy or gal you are, while still managing to grab most of the goodies for yourself. But these are not things that leave any marks in the fossil record. That's why it was possible for hominid brains to balloon to modern human size without any of the new artifacts or new behavioral patterns that you'd expect high intelligence to yield. To create novelty demands a special type of thought, a very special type indeed.


We don't have space here to get into the relations between language and thought, which have buffaloed philosophers for centuries. Suffice it to say that to change things, to do new things without a lot of groping around, you have first to plan them in your head. And to do that requires that you be able to maintain thoughts -- which are simply patterns of neural impulses -- in your brain over times long enough to let you assemble and reassemble them. That's exactly what you do when you're putting sentences together. That's why the Great Breakthrough didn't happen way back in the Lower Paleolithic. It wasn't that we weren't smart enough, just that we weren't the right kind of smart.

So the creativity that most clearly marks us as human had to wait on the crossing of the same threshold that gave us our language. Given that Homo erectus and Neanderthals had big brains too, we can assume that they were traveling the same route. Did our own immediate ancestors get there first? Was there really the clash between glib-tongued humans and tongue-tied Grisly Folk that fiction writers like Wells and Golding imagined? Or were the capabilities more even, with mere accidents of technology or political history giving our ancestors the edge? Hopefully, ongoing research into periods of human-Neanderthal co-existence, in both Europe and the Near East, will yield at least part of the answer.

We can be surer about what happened to our own folk. Even when signals could be maintained through the merging of two word representations, that wasn't much help. Single-merge capacity would give you just prefabricated utterances of two words. But you could produce those just as well in protolanguage, sending out words one at a time, with less risk of something going wrong. All signal coherence then gave you was a little more speed in delivery. It couldn't even give you a good argument-structure mapping, because lots of verbs take more than one obligatory argument. And doubling your capacity to two merges wasn't much better, even though that meant a 100 percent improvement. But here a new accelerating factor began to come into play. The brain is a parallel, not a serial, processor: it can do lots of things at the same time. So, assume you have the capacity to sustain coherence through two merges. You simultaneously merge A with B and C with D. That counts as just one merge. Then you merge AB with CD.

With two merges you can assemble four-word messages. You could properly construct a single-clause utterance like "Big men eat meat." But nothing any longer or more complex than that. And remember that the protolanguage machine could produce strings of four or five words -- without any structural constraint, granted -- before multiplying ambiguities caused the process to grind to a halt. So there's still no decisive edge for a language machine over a protolanguage machine. But then a mere 50 percent increment in what coherence buys -- from two to three merges -- doubled your potential from four-word to eight-word utterances. Now for the first time, multiclause sentences and multiclause thoughts come within your grasp. A piddling 33 percent further increase would again double the potential length of sentences, to as many as sixteen words.
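
The arithmetic behind that acceleration is simple doubling: if merges at the same depth can be carried out in parallel, each additional level of sustained coherence doubles the maximum number of words that can be bound into one structured utterance. Here is a toy calculation in Python, purely our illustration of the counting in the text:

# Toy model: with pairwise merges running in parallel at each level,
# n levels of coherent merging can bind up to 2**n words.

def max_utterance_length(merge_depth: int) -> int:
    """Maximum words combinable with merge_depth levels of parallel pairwise merging."""
    return 2 ** merge_depth

for depth in range(1, 5):
    print(depth, "merge level(s) ->", max_utterance_length(depth), "words")

# Output:
# 1 merge level(s) -> 2 words    (a prefabricated two-word utterance)
# 2 merge level(s) -> 4 words    ("Big men eat meat")
# 3 merge level(s) -> 8 words    (multiclause sentences come within reach)
# 4 merge level(s) -> 16 words

On this counting, the 50 percent step from two to three levels, and the 33 percent step from three to four, each double the payoff, which is why modest gains in signal coherence could have had such disproportionate effects.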

By this time, the Baldwinian processes would have come into effect; our ancestors would have been elaborating means to make sentences more parsable. The best of these would have been reinforced by neural adaptations, and natural selection would have been winnowing the resultant variation to yield something like the "no argument without a non-argument" principle and the empty-category algorithm, both described in the appendix. Again, female mate selection would have driven the process and selected systematically for more efficient language processing. And so we arrived at a language that, apart from having different words, would show no significant differences from the kinds of language we speak today.


Now it's time to add up the score. What, if anything, has anybody had to give up? On the evolutionist side, nothing of any value. The processes we have here hypothesized as taking part in the evolution of language -- exaptation, female selection, Baldwinian evolution, and so on -- are all recognized processes accepted by a large majority of those working in the field of biological evolution. One might argue about the effects of particular processes at particular stages, but no principle has been violated, no macromutations or Lamarckian heresies have been smuggled in. Some of the more naïve forms of "call-system to language" theory might have to be junked, but that's no loss, since none has ever been worked out enough to merit debate.

What about nativists? Again, nothing of significance is lost. We have shown that no simple-minded continuist scenario and no amount of general-purpose learning will suffice to account for the emergence of language. If they did, big-brained hominids would surely have given us today's language a million years ago or more. We have shown language to be innate, species-specific, and supported by task-dedicated circuits -- even if parts of those circuits may do double or treble duty in other tasks. We have removed language's origins from the shroud of mystery in which generativists preferred that they remain hidden, just in case some empiricist came up with a good story. Decades of stimulation mapping in the brain have shown that strictly locationist models of language -- one module, one function -- won't fly in the real world. But then it was Fodorian psychology rather than linguistics that promoted encapsulated modules. Most linguists didn't care enough to get as specific as that. And the model of language our approach entails is one toward which Chomsky's minimalism has been heading for a decade now.

On the plus side, let's suggest, for each of the two parties, one problem that our approach promises to solve. The problem for evolutionists concerns brain size; the problem for linguists concerns language acquisition.


Why did the brain cease to expand -- even, if some figures for Neanderthals are representative, contract a little -- when our species came upon the scene? After all, if evolution has been selecting for larger brains for millions of years, why not go on? If a brain of 1400 cc is good, why isn't one of 2800 cc twice as good? If our brains had really swollen as general intelligence machines, that argument would surely apply.

WHC: It’s not even clear that brain size is important for anything — except perhaps paleoanthropology, as sheer size is almost the only thing you can measure about ancient hominid brains. We just suspect that bigger is better.
The obvious answer is, of course, that purely physical constraints apply: some say, "the human birth canal simply won't allow for bigger brains to be born." But that argument would have applied equally well when human ancestors had brains in the 400-600 cc range; if it didn't apply then, why should it apply now? The birth canal must simply have widened as heads swelled. In any case, even if we have hit some developmental wall, the argument still wouldn't apply. First, some humans have brains 50 percent above the norm, so canal size couldn't stop a 50 percent increase. Second, even if it did, there's nothing to stop a developmental pattern in which still longer and larger growth of the brain occurs outside the uterus. If the goal is viable, evolution can find a way.

A much better explanation is that the brain stopped growing because it was now big enough to end-run the biological version of evolution. Suppose, however shocking it may sound, that our computational tools, our modes of reasoning, are in fact no different from those of chimpanzees and that our only superiority lies in our higher level of neural message coherence, which allows us to build chimpanzee-level thoughts into impressive edifices (and doubtless have quite a few thoughts that are well beyond the reach of chimpanzees, too). Now, without novel reasoning devices, there is not much point in being able to construct longer trains of thought than we can now construct. The main limit on these seems to be the size of working memory, and working memory too might expand (and brain size with it) if, for instance, we came into contact with an alien species roughly equal to ourselves in computational power. Without that, or something like it, there's no selective pressure on working memory to expand. It nicely holds about the number of elements we can compute over; it's not obvious what advantage would accrue from increasing that number. And evolution has never seen the need to replace a Good Trick with a Better Trick, if the Good Trick works well enough and there's no competition. It's just not in the cards.


Let's look now at a problem for linguists: the pace of syntactic acquisition. Typically, children acquire their first words around twelve to fifteen months, proceed in a couple of months or so from one-word to two-word utterances, and may thereafter remain stuck in the two-word stage for as long as six months before bursting out into what some acquisitionists have called "the syntactic spurt," which carries them, often in a matter of only weeks, to a stage in which they can produce a wide variety of biclausal sentence types. This herky-jerky trajectory has long puzzled acquisitionists, who, noting that children in the two-word stage can typically comprehend sentences far more sophisticated in structure than any they can produce, have talked about some mysterious "production bottleneck" that curtails their utterances.

First let's dispose of the production-comprehension asymmetry. This exists not because children have syntax they can't express, but because comprehension draws on a wider range of possibilities than production does. In comprehension you can use the syntactic structure or you can use pragmatics, semantics, context, plus anything else you can lay your hands on -- direction of eye-gaze, to name just one -- in order to figure out the message. In production, if you can't use syntactic structure, you've had it. So production is the only reliable guide to how much syntax children have and when they get it.


Now let's take the old recapitulationist idea and re-run it. Of course there is no general law that ontogeny recapitulates phylogeny -- but it does sometimes, if there are good reasons why it should. A baby's brain is like the brain of a prelinguistic hominid or an ape, in that there are no words in it. The words have to be put there, one by one, in the first instance. Now if it is correct that all our neocortical neurons are there at birth, then a baby has large enough neural choirs to sing out those words loud and clear. But there are two prerequisites for syntax: choirs and connections. The first couple of years of life are a time of dewiring and rewiring, as the environment works on the hand that nature dealt the child. On top of that, there's myelination, essential if signals are going to be sent rapidly and frequently.

So until connections adequate to support complex messages are established, the child does indeed -- has to -- operate in protolanguage mode, dispatching words to the organs of speech one at a time. This does not stop the child from picking up some grammatical regularities such as word order, and even some grammatical morphology, in languages that have such. How could a child help but do so, where virtually all words come with inflections or case markers attached? In all probability (because segmenting is a hard job), word-plus-morpheme combinations are acquired holistically. But as soon as adequate connections begin to get established, the picture changes rapidly. The onset may be as early as eighteen months or as late as three years, even in normals -- but regardless of age, and at around twenty-five months on average, a torrent of structured language bursts forth. It was there all the time, potentially, just waiting to be let loose.

That torrent flows along the causeway already laid down by word-order regularities, but what comes out is not always grammatical in terms of the language of the local environment. To produce locally grammatical output, the child has to acquire the morphophonemic shapes and the functional properties of all the grammatical morphemes in the target language. The child does not, pace Chomsky, "determine from the data of performance the underlying system of rules" of that language. In the first place, as Chomsky himself would probably nowadays agree, there are no rules to learn. Rules are post hoc artefacts extracted by grammarians that might have some usefulness in helping adults with a full Piagetian deck to learn a foreign language. The child simply slots newly acquired grammatical morphemes into the overall schema that the brain, built as we have described it, provides. You might think that it would be harder to learn an item's functional properties than its morphophonemic shape -- after all, the latter is just a matter of picking up and imitating a string of sounds. Wrong. Children seldom make mistakes with things like articles or prepositions or inflections (as long as the latter are regular, naturally), but it sometimes takes them till six or older to get the past tense forms of all the English irregular verbs.

So that's why language acquisition follows the course it does: a slow and hesitant beginning, a sudden spurt that takes one pretty close to adult competence in a few weeks or months, followed by a gradual filling-in of the picture that takes years. Pretty much the same course, as we have hypothesized, that language took when it first evolved, and for exactly the same reasons.


So the long-awaited marriage of Darwin and Chomsky should be greeted with songs of praise on both sides. Like a lot of marriages that both families feared and fiercely resisted, it could well turn out a lot better than either hoped. Now at last the Montagues and Capulets of the contemporary human sciences can quit feuding and get down to some serious collaboration.

But on what, exactly? We can only say what we think needs doing, what we would like to do ourselves, or to see done by others.

Obviously our model of the Language Machine needs its tires kicked and some heavy test driving over the rougher patches. That's fine. Hopefully the overall design will stand up even if some of the parts need remodeling. Linguists, who are very good at this sort of thing, are sure to come up with all sorts of exotic linguistic phenomena that they will claim our machine couldn't possibly produce. We'll have to show either that it could, or that the phenomena can be reliably attributed to extralinguistic causes.

Then there's brain imaging and the other approaches to language physiology. We need more and better studies (PET scans, fast MRI, whatever the state of the art permits) of how the brain handles language on-line. Two things limit what we can learn from present studies. First, very few if any of them have focused on what happens when speakers produce completely novel sentences (by that, I simply mean sentences that the researcher doesn't know in advance and that the subjects haven't rehearsed before utterance -- "Say the first thing that comes into your head" would be a good protocol). Second, most studies limit themselves to adult speakers using their native language. We need comparable studies of children and of people speaking something other than their native language. If suitably noninvasive procedures can be applied to small children, we would like to see on-line scans of language production by children in the 18-to-30-month range, preferably with the same tests repeated at monthly or bimonthly intervals. We would also welcome comparative scans of language production by adult speakers in both their native language and a language they are just starting to learn, or an early-stage pidgin, if they happen to know one.


Then there are the various aphasias and dysphasias. I just wish some experts in the field would get together and compose the Defective Speech Anthology (DSA). The DSA would consist simply of raw samples of speech by all the different brands of aphasics and dysphasics, with several samples for each condition. The most superficial reading on, say, Broca's aphasia is enough to show that a very wide range of syntactic impairment is collected under that label. For each subject we'd need a full description: type and precise extent of trauma, age now and at onset, and so forth.

A colossal task, sure. So why take it on? Because the vast bulk of work in this area is, very naturally, done from a clinical perspective, and is therefore of limited use to anyone trying to find out how language is instantiated in the brain. But we now know enough about language to be able to determine the linguistic nature of defects rather precisely, and one hypothesis would be that if two individuals show an identical defect, even if they have been diagnosed under different syndromes, the same thing has somehow gone wrong for both of them. Of course that couldn't be the whole story. Maybe they would turn out to have things wrong with them in addition to what caused the linguistic deficit. Maybe two different conditions would (among other consequences) affect the same part of the brain in the same way. Maybe damage or loss in two different places could cause identical deficits. We don't know. But until we look at the evidence from this new perspective, we're not likely to find out. At worst, we'd find out a lot more about how the brain works.

One goal is a complete wiring diagram for the Language Machine. Although we've suggested how that machine was built and how it functions in a rather general way, we have not yet been able to provide a blow-by-blow account of what happens when you utter a sentence, like "X goes to A where it is joined by Y, and then X and Y represented by XY go to B, while at the same time . . ." in which X and Y are signals representing particular words and A and B are different places in the brain (and of course there would be the time in milliseconds for each move). Again, in attempting to piece together such a diagram, it would be amazing if there weren't many surprises that would force modification of the model in presently unforeseeable ways.


Then what about the genes? A lot of nonsense has been talked, pro and con, about "genes for language." It would be truly amazing if there turned out to be specific genes that do language and nothing but language. The genetic defect studied by Myrna Gopnik that seems to affect grammatical morphology is the best candidate so far, but its precise effects are still controversial, and in any case it affects only a small part of the language faculty. The probability is that a number of genes indirectly conspire to yield language; hopefully, study of the dysphasias will help out here, for of course research is synergistic, and results from most of the areas touched on here ought to shed light on other areas.

Looking back now, into prehistory, we'd obviously like to see a richer record of our past. There could be surprises there too. If complex artifacts two million years old started turning up, it would be back to the drawing board with a vengeance. What seems more likely is a filling out of the existing fossil record. The really interesting additions to our knowledge may come from a better understanding of paleoclimatology and paleoecology, which may help us, far more than a handful of stone tools, to reconstruct the behavior of our remote ancestors.

More far-reaching consequences may come from a more informed exploration of the relationship between language and mind. In the early days of generative grammar, it was sometimes suggested that the study of language would yield profound insight into the workings of the mind. But this line was quickly dropped. Strategically, in the warfare described in the opening paragraphs of this chapter, it looked a better bet to claim that syntax, in particular, was totally different in its principles and mode of operation from anything else in the mind. Now that the war's over, we can take a second look. It could well turn out that our minds are no different from the minds of other primates -- except that we, thanks to our large numbers of neurons and sophisticated connections, can keep a coherent signal going longer than they can.

But we must never forget that the whole point of research is to explore the unknown without too many preconceptions. So perhaps our deepest wish is that our work will lead in directions we never even thought of, to discoveries beyond the span of our imaginations. If our model leads to things of that caliber, it will matter little whether the model itself lives or dies. Better by far to open doors on the unknown than to lock them with dogma.

-------------------THE END -------------------

Wait, there's still more....

Acknowledgments
Linguistics Appendix (DB)
Glossary
Notes
About the Authors

Web supplement:  Some photographs of Bellagio.


The handheld, traditionally comfortable version of this book is available from amazon.com or direct from MIT Press.
