
COPY-AND-PASTE CITATION

William H. Calvin and Derek Bickerton, Lingua ex machina: Reconciling Darwin and Chomsky with the human brain (MIT Press, 2000), chapter 3. See also http://WilliamCalvin.com/LEM/LEMch3.htm

copyright ©2000 by William H. Calvin and Derek Bickerton

The nonvirtual book is available from  amazon.com or direct from MIT Press.

William H. Calvin

University of Washington
Seattle WA 98195-1800 USA



3

Why Putting Words Together Isn't Easy

 

 

 

Now that we have a stock of words, we need to make sentences with them. My bet is, the first thing you'll think of in this connection is putting them into some sort of order. Indeed, there are even syntacticians who think that the most important thing about syntax is putting words into some kind of fixed order. By the time we get to the end of this chapter I hope to have convinced you that putting words into fixed orders is the least important part of structuring sentences - if indeed it's part of it at all.

"Only connect."
E.M. Forster

The language acquisitionist Lila Gleitman once joked that when linguists talk about acquiring language, most of them explain how to get as far as "The cat sat on the mat" - and then they cross their fingers. Well, let's at least get to "The cat sat on the mat."

No problem. You have two nouns, a "cat" and a "mat," that refer to concrete objects. You have a verb, "sat," that refers to an action. You have a preposition, "on," that tells you where something happened, and you have "the," which suggests that you ought to know which cat and which mat I'm talking about, without more ado. So you take the subject, whatever performs the action, and put it at the beginning, followed by the verb, followed by the location where the action of the verb took place. How else could you do it? It's dead easy, right?

Wrong. We are coming at this from our knowledge of English. But the human ancestors we are talking about didn't speak English. They didn't speak any kind of human language. A word like "subject," even an abstract noun like "location," would have been far beyond their reach. Terms like "subject" and "object" can only be defined over a syntax that already exists. Before syntax existed, they were meaningless. For that matter, ancestral humans are very unlikely to have had words like "on" or "the."

What's an "on"? What's a "the"? These words don't correspond to anything in the observable universe. They're strictly relational. Even today, the first words of children don't include items like these, though they may well include nouns like "cat" and "mat," and even a verb or two. It's highly unlikely that our remote ancestors had more than a few nouns and verbs - at least not to start with. At best, then, they would have had "cat," and "mat," and "sat" (or more likely "sit," since past tense is a sophisticated feature of language).

Some languages (like Japanese and Turkish) are verb-final - "Cat mat sit." A lot of languages (German, for instance) are what is called "verb-second" languages, and in these you can have "Mat sit cat" just as easily as "Cat sit mat." A lot of Austronesian languages have the verb at the beginning ("Sit cat mat") while some also have the subject at the end ("Sit mat cat"). Literary Latin and some Australian languages just mix them up in any order at all, making use of inflections and role-specific words (as when we say "him" rather than "he") to convey roles.

All right, so word order is more of a problem than it might seem at first. However, given a speech community where everyone agreed on the meanings of "cat," "sit," and "mat," it would surely only be a matter of time before some kind of word order convention was agreed upon.


But this wouldn't solve all the problems. The most obvious of these is that there's no way you can be sure that all the elements of the sentence will be expressed. If someone thought you already had the cat in mind, they might just say "sit mat" and assume you could figure out by yourself that they were talking about the cat. Same if they thought you knew about the mat - "cat sit" ought to do it, in that case. Or just "cat," or just "sit," or just "mat." Little kids of eighteen months or so seem to get on just fine with one-word sentences like these.

Those of us who are a little older might have problems listening to a language that kept leaving you to figure out things for yourself all the time. When some people talk, it's hard enough to figure out what they really mean even when they put everything in - imagine what it would be like if they could leave out anything they liked, and you had to devote half your time and effort just to filling the gaps! The thing about real language is, you may have a hard time with the content, but the form of it, the way the sentences are actually put together, hardly ever gives you problems. You don't even notice it. It's as if it were transparent. Your brain handles the form automatically (and perhaps that's why seeing the essential aspects of structure is so difficult for us).

But that automatic processing comes about only because there is syntax - principles and procedures for arranging words in ways such that long strings of words can be uttered effortlessly and understood equally effortlessly. Before you had syntax, all that existed was a kind of protolanguage.


 

If you want to know what that protolanguage was like, you can get some idea by looking at the productions of apes who have been taught to use signs or other symbols, or at early-stage pidgin languages (at about the "Me Tarzan - you Jane" level of development), or at the speech of children under two. I say "get some idea," because of course there will be differences between then and now.

We can assume our early ancestors talked about more things than apes do and that some of those things were different from the things apes talk about. We know that speakers of any pidgin speak at least one natural human language fluently, and there has to be some carry-over (though if you look at samples of pidgin speech, it will amaze you to see how little), at least in the range of things that can be discussed. We know that children, especially if they are learning an inflected language like Spanish or Italian, will pick up the odd grammatical feature you won't find among apes or early-stage pidgin speakers, and probably wouldn't have found among our remote ancestors, either. All protolanguage varieties:

» can only string together a small handful of words at a time;

» can leave out any words they feel like leaving out;

» often depart from the customary word order unpredictably and for no obvious reason;

» cannot form any complex structures, whether these be complex noun phrases or sentences more than a clause long;

» contain, if they have any at all, only a tiny fraction of the inflections and the "grammatical words" - things such as articles, prepositions, and the like - that make up 50 percent of true language utterances.

Now the question is, why are these protolanguage varieties - ape-talk, toddler talk, pidgin - the way they are?


 

Let's assume that you have words and that you've somehow managed to sort out a convention about word order, so that everyone says "John kissed Mary" (as we do in English) rather than "John Mary kissed," as they would in Japanese. Surely, once you had gotten that far, you could very easily just go on building longer and longer sentences until gradually, over time, language attained the complexity it has today. Wrong.

There are lots of reasons why that doesn't work. First, let's suppose that you didn't want to say, "John kissed Mary." You want to say "That boy kissed Mary," but when you say it you see from my look that I don't know which boy you mean, so you say "That boy you saw yesterday kissed Mary." Something's wrong here. Sentences start with a noun and follow it with a verb, but here there are two nouns (well, a noun and a pronoun, "boy" and "you") together before you even get to a verb, and they seem to refer to different people, like in the "John Mary kissed" Japanese-style sentence above.

What's happened to our word order convention? Shouldn't this new sentence start with "That boy saw you"? But in that case, "you" would come before "kissed Mary," and I know I didn't kiss Mary yesterday or ever. Could "saw" maybe be a noun? No, because then you'd have three nouns in a row instead of two. But then you've got "yesterday kissed Mary." Come on. Days can't kiss people, only people can kiss people. Just what kind of nonsense is this, anyway?

There would be no reason for anyone hearing this sentence to suppose that the whole string, "That boy you saw," is in fact the subject, while "kissed Mary" is simply its predicate. In fact, at the stage of language development we're talking about, nobody would have had the faintest idea what a subject or a predicate was. Indeed, I'm cheating a bit even in just imagining some cave-dwelling ancestor sweating bullets while struggling to understand such sentences, because nobody back then could have produced them. Moreover, since comprehension usually runs well ahead of production (think of Kanzi the bonobo, or of yourself struggling with high-school Spanish or German), you would find those sentences even harder to produce than to understand.


 

The reason for this is as follows. A simple grammar like we've envisaged - a grammar with a fixed order in which subject precedes predicate and the verb of the predicate, if transitive, precedes its object - works just fine, so long as all you have are nouns and verbs and no more than one verb per utterance. Then you can parse easily: first word, a noun, so the subject; second word, a verb, head of the predicate; third word, a noun, therefore the other half of the predicate. So here's a grammar, you may think, that would at least give you "John kissed Mary" or "The cat sat on the mat" (as it might, but for the zinger that's waiting in the wings).

But it only works as long as you stick to single words, single nouns and verbs. Once you get a complex structure as your topic (like "That boy you saw yesterday"), you're in trouble - because you don't know, and you have nothing that will help you find out, where the units in the sentence begin and end. You might manage "that boy," because there's no other noun involved, but once you get to "boy you," you're lost. All your experience tells you two nouns mean two referents (and they do), but your grammar tells you that no two nouns can come together in this way.
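
The rigid parse routine just described - first noun is the subject, then the verb, then the object noun - can be sketched as a toy program. This is purely an illustration of the chapter's argument; the word-category lexicon and the function name are my own assumptions, not anything from the book.

```python
# Toy sketch of the fixed-order "noun verb noun" grammar described above.
# The lexicon and its category labels are illustrative assumptions.
LEXICON = {
    "john": "N", "mary": "N", "cat": "N", "mat": "N",
    "boy": "N", "you": "N", "yesterday": "N",
    "kissed": "V", "sat": "V", "saw": "V",
    "that": "DET", "the": "DET", "on": "P",
}

def naive_parse(utterance):
    """Parse assuming exactly: [subject noun] [verb] [object noun].

    Returns a (subject, verb, object) triple, or None when the rigid
    template fails -- e.g. when two nouns arrive before any verb.
    """
    cats = [(w, LEXICON.get(w.lower(), "?")) for w in utterance.split()]
    # Ignore the little "grammatical words," which the chapter says came late.
    content = [(w, c) for w, c in cats if c in ("N", "V")]
    if len(content) == 3:
        (s, c1), (v, c2), (o, c3) = content
        if (c1, c2, c3) == ("N", "V", "N"):
            return (s, v, o)
    return None  # the template cannot cope

print(naive_parse("The cat sat on the mat"))
print(naive_parse("That boy you saw yesterday kissed Mary"))
```

The first call succeeds; the second returns None, because the complex subject "That boy you saw yesterday" delivers extra nouns and a second verb that the three-slot template has no way to group into units.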

Indeed, whenever we find an example of protolanguage, whether it be child speech, pidgin, or an ape's attempts at language, we find that it consists of nouns and verbs without modifiers of any kind (except for very occasional common adverbs or adjectives, often incorporated into a single, rote-learned phrase). It's worth noting that apes haven't gotten past this stage, children nearly always do, and a few adult pidgin speakers may succeed (although the vast majority do not).

It looks like we are in the presence of something that is specific to the human species but that children do much better than adults - sure signs of a biological property with the kind of window of opportunity known as a "critical period" (if the property doesn't develop before the period's over, it may never do so).


WHC: That reminds me of our discussion back when we first got to Bellagio (p. 10). Children are enormously acquisitive of patterns, starting with the infant's listening to language during its first year and devising categories for the common speech sounds (about forty phonemes in English); at six months, a Japanese infant can still hear the difference between the English /L/ and /R/ but, by age one, he or she no longer hears the difference, with a nearby Japanese phoneme capturing all nearby speech sounds as mere variants, standardizing them. "Rice" and "lice" would sound the same.

Then the baby starts acquiring combinations of phonemes, i.e., words, at the rate of about nine new words every day. Somewhere between 18 and 36 months of age, children figure out the common patterns of words in sentences, and make a fairly fast transition into speaking with phrases and clauses. They aren't taught rules. (What parent could possibly explain them? Especially in babytalk?) Instead, they guess the underlying structure from what they hear. So far as I can see, they go on to discover narrative structure and then start criticizing bedtime stories that lack proper endings.

Four major phases of acquisitiveness, each building upon the one before - and even by children of modest intelligence. Deaf children surrounded by fluent sign language (from deaf parents, deaf babysitters, or deaf preschool) make a parallel set of discoveries - but they don't do very well if deprived of such opportunities until school age; the preschool years are the natural time for such discoveries, and "making up for it" later, between the ages of seven and fifteen, is increasingly ineffective. That's the prime evidence for a "critical period" in language development, along with the tragic stories of abused children locked away from opportunities to hear speech and their frequent failures to subsequently acquire language fluency.

So, is there an "epigenetic rule" that says "Seek structure amid chaos"? Is that what chimps and bonobos lack, or do they lack syntax wiring? (Or both?) To form a new category, such as the notion of a prepositional phrase, might require a lot of varied examples of it before the regularities of structure in the input can be seen. If reared in an environment that lacks a lot of examples of such structures (say, dozens within a week's time), it might be difficult to zero in on them. So-called "fast mapping" says that it requires dozens of exposures to a new word (not just one, at least when it is embedded in a complex environment with lots of other things going on) before learning it; the same might be the case for syntax and narrative structures.


If all there was was an "epigenetic rule" that said "Seek structure amid chaos," there would be no creole languages. Creole languages come into existence when parents who speak a structureless early-stage pidgin pass it on to their children. The children change that pidgin, in a single generation, into a full-fledged language. If they were seeking structure in the pidgin, they wouldn't find any - they impose structure from within their own minds. Rather than acquiring a vague general capacity to "seek structure" - how would any creature do that? - I think we acquired the capacity to create structure in language, and that capacity then generalized to apply in other spheres.


WHC: But "seek structure amid chaos" allows for guessing wrong, finding structure in the environment when there is none. We fool ourselves all the time. (Think of astrology!) It isn't much of a further step to invention without a model, so long as those Universal Grammar circuitry underpinnings are really there to guide the invention. Epigenetic tendencies (like "seek structure") and innate circuitry (like UG's resonances) are two separate things, though surely they have co-evolved somewhat.


Naturally. We, like most other creatures, are built to make generalizations on inadequate evidence at very short notice, because that works better, in terms of evolutionary fitness, than making 100 percent correct generalizations after a long period of cogitation. But other creatures don't have language, so there's no way the "language instinct" could be "seeking structure" and nothing more. Besides, that leaves unanswered the question of why, out of the zillions of kinds of structure that it could have, language just has the one it does.

Anyway, we can be pretty sure that no creature without the appropriate internal structure can learn to increase the size of a descriptive phrase. We can. We can go from "hats" to "black hats," to "three black hats," to "those three black hats," to "those three black hats with brims," to "those three black hats with broad brims," to "those three black hats with broad brims that remind you of the hats the bad guys in late-night Western movies wear," and any of these would fit the empty slot in "I'd like to buy _____" or "_____ would look good on you." The reason is not that speakers of a protolanguage cannot add one word to another - they surely can. What they can't do is know where to stop, where the boundaries lie between one descriptive phrase and another. And the reason for this is that, in protolanguage, there are no units of intermediate size between the single word and the entire utterance. In other words, there are no such things as phrases and clauses. Without phrases and clauses we can't establish boundaries within the utterance, and as a result we become victims of ever-increasing ambiguity.

We saw ambiguities and plain old-fashioned confusions arising in plenty out of the attempt by a protolanguage speaker to parse "That boy you saw yesterday kissed Mary." But suppose the sentence was "That boy you saw kissed the girl he liked." Still only nine words, but to the previous ambiguities would now be added the following: Is the last part another misconstructed subject-predicate (really "the girl liked him"), or is "kissed the girl" a complete predicate - in which case what do we do with "he liked"? Liked what? If what he liked was "the girl," why not say so? Note too that these ambiguities can't be resolved in isolation. Because they arise from not knowing where the boundaries are (or rather from simply not having boundaries at all), each possible parse of each segment increases the possible parses of the whole utterance exponentially. One ambiguity gives two possible readings, two ambiguities give four, three give eight, and so on. Very soon, as you see, the ambiguities become too numerous for anyone to cope with, which is one reason why protolanguage utterances are almost always confined to four or five words at most - and usually fewer.
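
The doubling arithmetic here is easy to verify: n unresolved two-way ambiguities, with no boundaries to settle them, yield 2^n distinct combined readings. A minimal sketch, where the listed ambiguities are my own loose paraphrases of the ones just discussed, for illustration only:

```python
from itertools import product

# Each unresolved two-way ambiguity doubles the number of readings.
ambiguities = [
    ("'you saw' modifies 'that boy'", "'boy' and 'you' are separate subjects"),
    ("'kissed the girl' is the whole predicate", "'the girl he liked' is one unit"),
    ("'he liked' attaches to 'the girl'", "'he liked' is left dangling"),
]

# Every combination of choices is a possible reading of the whole utterance.
readings = list(product(*ambiguities))
print(len(readings))  # 2 ** 3 == 8 possible combined readings
```

With four such ambiguities the count is 16, with five it is 32 - well past what a hearer of four- or five-word protolanguage utterances could sort out.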


But this still isn't the worst. I said there was a factor that would quickly destabilize any attempt to give structure to language by providing it with a rigid word order; in fact, it's already been referred to, briefly, in this chapter. It's this: Sure, you can say "John kissed Mary," and we have agreed that, by convention, possible alternatives - "John Mary kissed," "kissed John Mary" - would eventually be ruled out. But in protolanguage nothing tells you that you have to say all of "John kissed Mary" - that you have, obligatorily, to put in a word describing the action and two others describing the two actors.

Now you may argue that the same is true of our language. If you ask "Who did John kiss?" I don't (outside of an English-as-a-foreign-language class!) have to say "John kissed Mary." It's much more natural if I just say "Mary." Similarly, if you ask "Who kissed Mary?" I would answer simply, "John," or "John did." If you asked "What did John and Mary do when they met?" I could say, "They kissed," or just "Kissed." But that's only true in the context of answering direct questions. In any other context, if I were to say "John kissed," or "kissed Mary," or simply "kissed," you would immediately feel something was missing and blame me for not telling you who kissed and/or who got kissed, even if you already knew those things. It's true, a boxing referee may say "Break," or a surgeon may say "Forceps," but because we all know a full human language, and because we know something about the conventions of boxing rings and operating theaters, we know that the first remark is just shorthand for "You two better break your clinch" and the second for "Pass me the forceps, please." We understand such remarks because they stand against the background of a language where, in the vast majority of cases, certain things have to be stated fully.

But before there was such a language - and obviously language like we have today could not have sprung up fully formed from the beginning - there was no way in which we could have known what had to be said, or even that there were things that had to be said. We had words and that was all. We could use as few or as many as were wanted, limited only by our power to utter them and the hearer's power to understand them. As practitioners of a behavior that had barely begun, the natural inclination would have been - as it still is in the contemporary forms of protolanguage - to say as little as we could get away with. It's less effort to say things like "John kissed" and "kissed Mary" than to say "John kissed Mary." The less you say, the less chance there is that you will make a mistake or a fool of yourself, and if there is enough previous knowledge and/or situational context, hearers may be able to figure out for themselves who got kissed or did the kissing.


WHC: And I provided a good example the other day, in trying to communicate with the Villa waiter whose English is limited. "Don't say too much," was Susan Sontag's writerly advice from across the breakfast table. If I stuck to several English nouns, the waiter could guess my meaning; in trying to speak a real sentence, I was just confusing the waiter, who didn't know English syntax well enough to parse more words. Maybe the language-reared apes, in sticking to short utterance lengths, are just practicing Sontag's advice: saying just enough to allow us to guess their intention, their mental model underlying the attempted communication.


This freedom to say anything or nothing, this free-for-all that's inescapable in any communication system without rules or structure, simply compounds the already more than sufficiently compounded ambiguities that beset every protolanguage utterance. And if by this time you are thoroughly confused, like the centipede who started to wonder which foot he put down first - completely unable to understand how on earth you can produce the simplest sentence - that's fine. That's how I want you to be. Because that's how anyone should be, faced with the awesome mystery of how anything as seemingly complex as language could ever have bootstrapped itself into existence - yet still be learnable by children who can't tie their shoelaces or use a spoon without spilling its contents all over themselves. And it's nothing to do with being smart. Children with a condition known as Williams syndrome, with IQs of 40 or 50, can run together sentences just as well as you or I. What they say may be false or silly, but the way it's put together is impeccable. Language is an awesome mystery, and we'll get nowhere by pretending that it isn't.

So now we've got some conception of the difficulty of the task of producing even fairly simple sentences, a task that has so far defeated all species but our own. Now we can get down to examining the problem of how language evolved. Basically it's an engineering problem. We have to find some way of providing structural units intermediate between the word and the complete utterance. Given the appropriate units (such as phrases and clauses), we should be able to perform all the complex computations that human language requires. But those units must have come from somewhere - we couldn't have simply invented them. So the units, whatever they are, need a plausible history, in addition to an explanation of how, exactly, they make language possible.


 

 

WHC: I like that statement of the problem, because it allows for something else besides the obvious usefulness-for-language to provide some of the underlying structural tendencies. It's long been obvious to neuroscientists that language function was likely to be mixed up, location-wise, with some other functions - that "language cortex" isn't only doing language tasks. There is an enormous overlap with oral-facial and hand-arm sequencing, for example, suggesting that improvements in one might have benefitted the others, at least at some stage in hominid evolution.

It also makes me wonder if what the language-reared apes are lacking, thus far, is simply a good sense of phrase boundaries - which could be accomplished by a sensitivity to boundary words like "and" and "into." There aren't very many of them, just a few dozen, and apes might be able to learn them as special words that signal a new phrase or clause. So far, the attempts to teach apes the "closed-class words" have been minor, though I'm told they're on the agenda for the next round of work with language-reared bonobos.

 

DB: Bill, the problem isn't that simple. I agree it will be fun to try and teach bonobos boundary words. In fact, I just learned from my old friend and colleague Talmy Givon that he has submitted a proposal to do just that. Lack of the right words is part of the problem, but by no means all of it. Where are the boundary markers in "That boy you saw yesterday kissed Mary" or "The boy you saw kissed the girl he liked"? Thinking that you get the boundaries from the boundary markers just puts things back to front - you have to get the boundaries first and then put in the markers. And that's not happenstance. It has to be that way, because until you know what the boundaries bound, you can't know how to correctly use the markers. But there'll be more on this in a chapter or two.

 

WHC: Kanzi (a bonobo, or pygmy chimpanzee, with more than a decade of language tutoring) can comprehend novel, never-heard-before sentences as complex as "Kanzi, go to the office and bring back the red ball." He makes about as many mistakes as a two-and-a-half-year-old child does on the same tests of interpreting novel requests. Of course, the child goes on to produce such sentences, and Kanzi is still stuck back at the stage of two-word requests, with an occasional third word.

Linguists, I know, are impressed only by production abilities (as it is instantly apparent whether or not the speaker knows how to unambiguously structure a long utterance). But, in some sense, understanding is said to be the harder task, because you have to correctly guess the speaker's state of mind; in production, you know your own state of mind and "merely" have to get it across to another. Once sentences start to lengthen and possibilities for ambiguity develop, production becomes difficult without knowing how to structure a sentence.

Maybe it's just my motor systems physiology background talking (thought as preparation for action, seeking additional sensory input to help decide between alternative possibilities for one's next act), but I tend to be impressed by performance, including Kanzi's ability to carry out a complicated series of instructions for the first time and get it right. That shows us that bonobos have got the brains for understanding requests of some complexity, even to the extent of some phrases within the sentence. To produce such sentences himself, Kanzi would need to craft a novel request in such a way that there is little ambiguity. Appreciating someone else's potential confusion (one of the fancier aspects of a "theory of mind") is certainly important for serious writers, but language learners likely have acquired simpler conventions for structuring long utterances.

 

DB: Bill, I'm afraid we'll have to agree to disagree here. Production is harder than comprehension, as anyone knows who has attempted to learn a foreign language. The fabled difficulties of understanding are exaggerated. In the first place, some of those difficulties exist only for overcerebralized Western academics - most folks have no trouble reading others' states of mind via body language and horse sense. In the second, we must distinguish between what something means and what somebody means by it.

If I say, "Gee, it's cold in here!" you may not know whether I am making a factual remark, or want you to light a fire, or hope to persuade you to move elsewhere. However, you have no trouble in deciding what the words themselves mean - they mean that it's damn cold in here, and my state of mind has nothing to do with that. I doubt whether, in the case you mention, Kanzi needed to know anything about the state of Sue Savage-Rumbaugh's mind.

In fact, in understanding what something means, we have all sorts of clues from semantics, pragmatics, and situational context that are quite useless when it comes to production. I don't think for a moment that Kanzi knew the grammatical structure of "Go to the office and bring back the red ball" - if he knew what "go," "office," "bring," and "red ball" meant, he wouldn't have to be the bonobo equivalent of a rocket scientist to figure out what he had to do. And if he did understand the grammatical structure, what's stopping him from producing sentences like this himself?


On to the NEXT CHAPTER

Notes and References for this chapter
