
COPY-AND-PASTE CITATION


William H. Calvin and Derek Bickerton, Lingua ex Machina: Reconciling Darwin and Chomsky with the human brain (MIT Press, 2000), chapter 4. See also http://WilliamCalvin.com/LEM/LEMch4.htm

copyright ©2000 by William H. Calvin and Derek Bickerton

The nonvirtual book is available from amazon.com or direct from MIT Press.




4

Bigger than a Word,
Smaller than a Sentence







What are the basic differences between protolanguage and real language? Let's look at one property of the word-stringing process that produces protolanguage. I pointed out a while back that protolanguage characteristically consists almost exclusively of nouns and verbs, without any modifiers – if adverbs appear, they are usually whole-utterance modifiers, not modifiers of single words. If adjectives appear, they are a few of the more common ones, probably acquired with nouns as unanalyzed chunks, like idioms. But what this means is that all units are of equal value, just as you would expect them to be if they are all hung on the same clothesline.

To put it another way: in protolanguage, all words are equal; like runners in a race, it's every word for itself. But if protolanguage is a footrace, language is a team sport, like football. The teams are phrases, and like any team, not all the players are equal – there's a captain, and there are just regular players. In language we call these "heads" and "modifiers." You can always tell what the head is by asking what the phrase is about. Is the phrase "a young teacher of algebra from Oklahoma" about a teacher, algebra, or Oklahoma? A teacher, obviously – all the other words modify the word "teacher."

The way we diagram sentences reflects this. Take "John kissed Mary." This could be either a true-language sentence or a protolanguage utterance. Don't get the idea that protolanguage has to consist entirely of mangled utterances like "John kissed" or "kissed Mary." It will probably contain a majority of these, but there's nothing to prevent something that looks like a proper sentence from popping out now and then (though likely missing that -ed for past tense). The only difference, for reasons we'll get to in a moment, is that it will sound like "John...kissed...Mary" rather than "JohnkissedMary."

So this is how "John kissed Mary" gets put together in the two modes:

[Diagrams: "John kissed Mary" as a flat protolanguage string of three equal words, and as a hierarchical tree in which "kissed" and "Mary" form a unit that "John" then joins.]

Now, if they made you do this sort of thing in school, you may well be thinking, "These are just drawings, they don't have anything to do with how sentences are produced." But I think that's wrong. I think that these diagrams really show you what happens in the brain. If the brain is working in protolanguage mode, each word is sent separately to the part of the brain that controls the motor organs of speech, and each word is uttered separately.
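One way to make the two modes concrete is as data structures (a minimal sketch; the list-and-tuple notation is ours, standing in for the diagrams above):

    # Protolanguage mode: a flat string of equal units, each word
    # dispatched to the speech organs on its own.
    protolanguage = ["John", "kissed", "Mary"]

    # Language mode: a hierarchy assembled before anything is spoken.
    # Note the bottom-up order of assembly: "kissed" and "Mary" are
    # joined first, then "John" is joined to the result.
    verb_phrase = ("kissed", "Mary")
    sentence = ("John", verb_phrase)    # ("John", ("kissed", "Mary"))

The words are the same in both; only the nesting distinguishes them.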

When I first arrived in Hawaii, back in 1972, one of the things that struck me most forcibly was the difference in speed between the old-time immigrants who'd come to the island as young adults and spoke pidgin, and their children, born in Hawaii, who spoke creole (which in Hawaii is also called "pidgin," just to confuse things a bit more!). On top of all the other differences in their speech, the old-timers spoke about three times slower than their own kids. For instance, here's an old-timer trying to describe one of those clock/thermometers you often see on the sides of city buildings:

"Building – high place – wall part – time – now-time – and then – now temperature every time give you."

If you've ever been in a foreign country where you spoke only a few words of the language, you'll know how it feels to speak protolanguage – anguished search for a word, struggle to pronounce it, anguished search for the next word, and so on.



WHC: My Italian is at this level of protolanguage. My Italian comprehension is no better than Kanzi's understanding of English, and my utterance lengths in Italian are no better than Kanzi's either! Yet most linguists would classify my Italian as "language" understanding and production, even though they hesitate to classify Kanzi's as such. It's still a dual standard, even though the accomplishments of the language-reared apes have become so impressive.


Absolutely. But if the brain is working in language mode, words are put together in whole phrases and clauses and even sentences before they’re sent to the speech organs to be pronounced. That’s why, when you’re speaking your own native language, the words come out like a blue streak.

The second diagram illustrates another important fact. If you take it from the bottom up, rather than the top down, it reflects not just the fact that the brain puts words together but the order in which it does so. That is to say, "kissed" and "Mary" are joined before "John" is joined to "kissed Mary."

Which brings us to parsing.


The word "parse" has come in for some pretty vicious abuse lately. As a result of Clinton's impeachment trial, people talk about speakers "parsing" words like "sex" or "alone" in the sense of determining, sometimes quite arbitrarily, how those words should be interpreted. This usage is daft in two ways. First, you can't parse single words – you can only parse sentences. Second, parsing isn't something speakers do, it's something hearers do. A hearer parses a sentence (quite unconsciously – unless it's in a syntax class!) by deciding what that sentence's structure is.

Of course, that’s not quite the whole story. If I say "Would you mind stopping that noise?" you don’t respond by thinking, "Ah! An auxiliary verb followed by a second-person pronoun subject of the main verb ‘mind,’ followed in turn by a participial verb that takes a noun-phrase consisting of noun and determiner for its object," and leave it at that. You parse sentences to find out their meaning. You need to know that I am speaking to you, that I want you to do something, and what it is that I want you to do. I suppose it’s this rather indirect link with meaning that folk have taken as license to abuse the poor word.

Anyway, parsing is something we all do every time anything is uttered. But it works quite differently depending on whether what’s uttered is language or protolanguage. In fact, if it’s protolanguage, it’s a good question whether you can be said to parse at all. You can’t decide what the structure is if there isn’t any structure. What you do is just the second part of the job, trying to determine the meaning directly from the individual words. This of course is much harder than it is when there’s structure there to help you. You have to use all your knowledge of who’s speaking and what’s happening and what the world in general is like in order to figure out what is meant.

Suppose you hear a protolanguage utterance like "John kissed." You might think, that’s easy – all I have to do is figure out who John is most likely to have kissed. But suppose the speaker is a pidgin speaker from Japan. It’s possible in that case that the meaning is, "somebody kissed John," because verbs come at the end of the sentence in Japanese, and pidgin speakers sometimes (but pretty unpredictably) carry over features of their native languages into their pidgin. This is just one of the many reasons you can’t hope to interpret protolanguage without taking lots of context into account (and doing plenty of guesswork, too).

Now take an actual headline I saw in the Denver Post the other day: "Spy Charges Dog Inspectors." You can’t understand this sentence unless you get the structure right, and know that "Charges" is here a noun, not a verb; that "Spy Charges" is a subject; and that "Dog" is a verb. Of course you may have first spotted an alternative parse: "Spy" as subject, "Charges" as verb, "Dog Inspectors" as object. If you don’t get the structure, you can’t get the right meaning.
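The two readings can be written out as alternative structures (an illustrative sketch; only the subject/verb/object labels come from the text above):

    # The intended parse: "Spy Charges" is the subject noun phrase,
    # "Dog" is the verb, "Inspectors" is the object.
    parse_intended = {"subject": ("Spy", "Charges"),
                      "verb": "Dog",
                      "object": ("Inspectors",)}

    # The garden-path parse: "Spy" is the subject, "Charges" is the
    # verb, "Dog Inspectors" is the object.
    parse_garden_path = {"subject": ("Spy",),
                         "verb": "Charges",
                         "object": ("Dog", "Inspectors")}

Same four words, two structures, two quite different meanings.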

Here you may reasonably object, "Well, you need context just as much here. If you didn’t know that the story under the headline concerned weapons inspectors in Iraq, you might assume that some spy had leveled unspecified charges against people whose job it was to inspect dogs, or had made them pay him some money." That’s perfectly true; the headline had me baffled until I looked at the text. But two things make this case very different.

First, you very seldom need context to get the meaning of a true-language utterance, whereas you almost always need context to get the meaning of a protolanguage utterance (when I reread transcripts of pidgin speakers that I myself have recorded and transcribed, I often have no idea what they’re talking about, although I can remember they made perfect sense at the time). Second, and much more important, you’re using context in quite different ways. With the headline, you’re using context to choose between two equally grammatical structures; with protolanguage, you’re using context to try and get any meaning at all.

This particular contrast between language and protolanguage shows up best when you look at what linguists call "empty categories." An empty category is where some unit of a sentence isn't overtly expressed. Take a sentence like "Bill wanted to go." "Wanted" has an overt subject but "go" doesn't have an overt subject, though we know that it must have a subject, and that its subject must be "Bill." Empty categories are rather like protons. You can't see any protons in this page you're reading, but you know they're there because your physics teacher told you so. Your English teacher should have told you the same thing about "missing" subjects and objects, but probably didn't (even though to me they're among the most fascinating things about language, I'm not going to force them on you here; if you choose, you can read more about them in the appendix).

Again, there’s a superficial resemblance between language and protolanguage that masks a profound difference. Protolanguage too has "missing" things, such as a missing subject in "kissed Mary" and a missing object in "John kissed." But the antecedents of these empty categories – the people or things they refer to – can’t be found anywhere in the utterance. To know what those missing items refer to, you have to take into account who and what you’re talking about and, on that basis and your general knowledge, you have to work out who or what the speaker is most likely to be talking about. In real language, the antecedent is always there somewhere in the sentence, and there are rules to help you find it.

You can read more about those rules in the appendix. Enough for now to note that you can’t just assume that the nearest noun is the antecedent of the empty category. That’s true in "Bill wanted to go" and "Bill wanted Helen to go," but not in "Helen was the one that Bill wanted to go." In both the last two sentences, "Helen" is the subject of "go," but in the first she’s next to the verb and in the second she’s far from it and "Bill" is much nearer. The rules that fix the reference of empty categories are not simple, not obvious, and above all, not consciously applied. You just somehow know that, despite the distance between "Helen" and "go," it’s her that, hopefully, will do the going.


Now we come to what's maybe the most crucial difference between language and protolanguage: the existence in the former of phrases and clauses that are entirely absent from the latter. Such intermediate units cause problems. For instance, how are we going to tell where they begin or end? It's easy enough in

The pink shirt is dirty.

It's less easy in

The pink shirt you made me buy is dirty.

and even more so in

The pink shirt you made me buy when we stopped off on the way to Cincinnati is dirty.

The trouble is, a phrase can be indefinitely long, and can include any number of things that might seem, to an outside observer, to have nothing to do with whatever is the head of the phrase.

The only way you can know where things begin and end is by knowing what phrases and clauses are. And, unfortunately for the common-sense, gradual-evolutionist view that maybe first phrases developed, then clauses (or vice versa), the two can only be defined in terms of one another (a phrase without a clause makes almost as little sense as a clause without a phrase):

A phrase is a group of words making up a participant in the state, process or action expressed by a clause.

A clause is a group consisting of a verb and all the phrases that express participants in its state, process or action.
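The circularity can be made vivid as a pair of mutually recursive types (a minimal sketch, assuming nothing beyond the two definitions just given):

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Phrase:
        # A participant in the state, process, or action of a clause.
        head: str                        # e.g. "shirt" in "the pink shirt"
        modifiers: list[str] = field(default_factory=list)
        embedded: Clause | None = None   # a phrase can contain a clause

    @dataclass
    class Clause:
        # A verb plus the phrases expressing its participants.
        verb: str
        arguments: list[Phrase] = field(default_factory=list)

Neither type stands alone; each is defined through the other, just as the prose definitions are.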


WHC: Ah, verbs. When I was teaching myself to read scientific French and German, they were the key to survival. Find all the verbs in the sentence, I thought, and the structure of the rest would fall into place. If there was ambiguity remaining, I'd go in search of the prepositions. Unfortunately, this principle did not suffice for spoken language, where elements were often missing and had to be inferred.

DB: Naturally, but I bet verbs were never missing – that can only happen in protolanguage.


What this means is that a clause is a clause because it has the right number of phrases ("Fred put his new credit card into his wallet," rather than "Fred put his new credit card," where there is a phrase too few, or "Fred put his sister his new credit card into his wallet," where there is one too many). And a phrase is a phrase because it expresses a participant in the action of the verb and because it occupies a particular position in a clause (say, between the verb and "into his wallet" for "his new credit card"). And the two are even more entangled than that. A phrase can contain a clause, which in turn includes phrases of its own, as in

The pink shirt that you made me buy is dirty.

where "The pink shirt that you made me buy" contains the clause "(that) you made me buy," and where this clause, in turn, contains several phrases (to syntacticians, "you" and "me" are just as much phrases as "The pink shirt" or "The tall blond man with one black shoe" – a phrase is anything that has a head, regardless of whether that head has any modifiers). The fact that these two units, intermediate between word and sentence, can operate in this way is what gives language one of its most striking characteristics, its infinite recursivity.

In his book The Language Instinct, Steven Pinker refers to what the Guinness Book of Records claimed as the longest English sentence: a 1,300-word monster by William Faulkner beginning "They both bore it as though in deliberate flagellant exaltation . . ." Pinker correctly pointed out that he could break that record by simply writing "Faulkner wrote, 'They both bore it as though in deliberate flagellant exaltation . . .'"

What's happening here is that Pinker is converting Faulkner's 1,300-word monster into a mere phrase, a noun-phrase object whose function is no different from that of "a book" in "Faulkner wrote a book." And as Pinker points out, anyone with ambitions to get into the Guinness Book could do so by adding "Pinker wrote that Faulkner wrote . . ." or "Who cares that Pinker wrote that Faulkner wrote . . ." The process is truly an infinite one, limited only by our shortish immediate memories and the difficulty of making infinite sense.
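The record-breaking trick is a single recursive step (a toy illustration, with a shortened stand-in for Faulkner's sentence):

    def embed(sentence: str, author: str) -> str:
        # Downgrade the whole sentence to a noun-phrase object of "wrote".
        return f"{author} wrote that {sentence}"

    record = "they both bore it as though in deliberate flagellant exaltation ..."
    record = embed(record, "Faulkner")  # Faulkner wrote that they both bore it ...
    record = embed(record, "Pinker")    # Pinker wrote that Faulkner wrote that ...
    # And so on without limit - only memory and sense give out.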


But where did phrases and clauses come from? If they're as closely interlinked as I've suggested, how can one be the hen and the other the egg? All we've seen so far suggests that they were born as twins, and that some third thing has to underlie both phrases and clauses. And indeed it does. That thing is what is known as "argument structure."

When you get down to it, the basic task of language is telling you who did what to whom (as well as when, where, how, and occasionally why). These "WH-words," as linguists call them (although "how" has its W at the wrong end), just about exhaust the questions you can ask – even in plain old "Yes-No" questions, you're asking WHether something happened or not. We can conclude from this that there's a limit to the number of participants there can be in any action, process or state. Or at least that there's a limit to the number we can talk about. We can talk about who performed an action, or who underwent it, or to whom it was directed, or for whose benefit it was performed, or when, where, or how it was performed.

But there's no way we can talk directly about who observed it, or who discussed it. If I say "Bill kicked the cat," you know without more ado that Bill performed the action and the cat underwent it. But there's no way I can say anything like "Bill kicked the cat blik me," meaning "Bill kicked the cat observed by me," or "Bill kicked the cat plok us," meaning "Bill kicked the cat discussed by us." Things like that can of course be expressed – you can express anything in language, given time, patience, and ingenuity – but they have to be expressed indirectly: "I observed Bill kicking the cat" or "We discussed the fact that Bill had kicked the cat." In other words, we have to downgrade the original sentence into some kind of phrase or clause, then insert it into another clause.

Now you'll have noticed that each of the participants in these states or actions has a specific role to play. There are agents that perform actions, patients or themes that undergo them, goals to which they are directed, and so on. These roles are known as "thematic roles." A thematic role plus the noun-phrase to which that role is attached make up what is known as an "argument." And argument structure – the system that determines when and where arguments can appear in language – represents the crucial link between word meaning (semantics) and sentence structure (syntax). Not every syntactician would make argument structure central to an account of syntax as it is today. But that's irrelevant. How something started is often very different from what it has become – for instance, try describing modern computers in the terms appropriate for their ancestors of just forty or fifty years ago.


Before there was syntax, there was only semantics. So, if you are looking for the very first stages in the development of syntax, you have to look in semantics for whatever is the most syntaxlike thing. Argument structure is the most plausible candidate. It involves meaning (the meanings of the thematic roles, agent and so on, and their relation to the verb meaning) but it can be readily mapped onto linguistic output to provide that output with structure, along the lines described below.

The first thing to note is that not all arguments are equal. Some make an obligatory appearance, others only an optional one. It's as if a team had a small core of seasoned players who appeared predictably while the remainder sat on the bench awaiting a call. For instance, if you use the verb "kick," you are obliged to mention a kicker and a kickee. You're not obliged to mention where the kicking was done, or how, or when, or for whom (even if it was done on behalf of someone else), although of course you can whenever you need to. Likewise, if you use the verb "sleep," all you need do is name who slept – you don't need to say who was slept with, or for how long the person slept. That is to say, every verb demands that a certain number (not less than one, not more than three) of the participants be expressed.

Is the fact that verbs are divided into three classes (on the basis of the number of arguments that obligatorily accompany them) a fact of nature or an artifact of analysis? Do all states, processes, and actions in the world fall into one of these groups because of the nature of reality, or does the structure of the human mind impose its own pattern? This is a philosophical issue and fortunately I don't think we need answer it here. You can be sure, whatever human language you may meet, that the verb equivalent to the English "sleep" will take one obligatory argument, the verb equivalent to "break" will take two, and the verb equivalent to "give" will require three.
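A toy lexicon captures the guarantee (a sketch; the three verb entries follow the text, while the checking code around them is our own illustration):

    # Obligatory argument counts for the three verb classes.
    ARGUMENT_COUNT = {
        "sleep": 1,   # who slept
        "break": 2,   # breaker, thing broken
        "give":  3,   # giver, thing given, recipient
    }

    def well_formed(verb: str, arguments: list) -> bool:
        # A clause needs exactly the arguments its verb demands.
        return len(arguments) == ARGUMENT_COUNT[verb]

    print(well_formed("give", ["Fred", "a book", "Mary"]))   # True
    print(well_formed("give", ["Fred", "a book"]))           # False: one too few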

You've heard about "false friends" in language learning: words that sound like words in your language but mean something quite different in the other. Well, the division of verbs into three argument classes is a true friend, and like all true friends, seldom fully appreciated and too often taken for granted.

But the importance of argument structure goes far beyond that. If you know that

» there are phrases and clauses, and you know that

» clauses consist of verbs and their arguments, and you know

» how many arguments each verb must take, and

» what the thematic roles of those arguments are,

then you can easily process sentences that would have had you buffaloed if all you had was protolanguage. Take for example a sentence we looked at in the previous chapter: "The boy you saw kissed the girl he liked." Parsing this with the above in mind, we look immediately at the verb "kissed" and know that it must take two arguments. Because the language is English, and because we know the way English maps argument structure onto phrase structure, we know that "kissed" will be followed by a theme (whoever got kissed) and preceded by an agent (whoever did the kissing). But this isn't a simple "X kissed Y," because there are two extra verbs, "saw" and "liked," which should have their own arguments. So you look for these.

Start with "liked." That takes two obligatory arguments, but there's only one there. However, you know that the other must be there, somewhere, even if you can't see it, because the nature of argument structure tells you so. For every invisible argument there's a visible argument in the same sentence that refers to the same person or thing. Often (see the appendix for a more detailed treatment) you'll find that visible argument immediately to the left of the leftmost obligatory argument of the verb you're working on: in this case, "the girl."

Now you turn to the first part of the sentence. Here, again, the verb "saw" should have two arguments but has only one, "you." Again you know the other must be there, and must be linked in reference to the argument on the left of the leftmost argument of "saw" ("you"). That argument is "the boy."

You have successfully parsed "The boy you saw kissed the girl he liked," finding that it contains one main clause and two subordinate clauses modifying the heads "boy" and "girl." And in so doing you have arrived at its correct meaning – which of course is what the whole exercise is about. It's easy for people who work on syntax to get wrapped up in it and think that maybe it's not just everything, it's the only thing. Of course it's not. It's a mechanism, a means to an end, what allows you to move on to the next task.
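The gap-filling heuristic of the walkthrough can be stated in a few lines (a deliberately simplified sketch; as the appendix makes clear, the real rules are not this simple):

    # Noun phrases of "The boy you saw kissed the girl he liked",
    # in sentence order.
    phrases = ["the boy", "you", "the girl", "he"]

    def antecedent(visible_argument: str) -> str:
        # The invisible argument corefers with the phrase immediately
        # to the left of the verb's leftmost visible argument.
        i = phrases.index(visible_argument)
        return phrases[i - 1]

    print(antecedent("you"))   # 'the boy'  - the boy is the one you saw
    print(antecedent("he"))    # 'the girl' - the girl is the one he liked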

But without that means, there wouldn=t be an end. Syntax is the magic key that unlocks the floodgates of language, unleashes the irresistible torrent of words that has swept us to where we are today. But where did that key come from, and how did we come by it?


Let me briefly sum up where we're at right now. I've just said that the core of syntax must contain the means for producing phrases and clauses, because these are the indispensable units intermediate between word and complete utterance. These units are indispensable because without them we could not produce true sentences, or indeed any kind of long and/or complex utterance that could be understood. Now phrases and clauses derive from argument structure – from the fact that verbs can only assign a limited number of arguments, and that every verb falls into one of three classes that assign one, two, or three obligatory arguments, respectively.

Naturally you'll want to know where argument structure came from and how we came to fashion our utterances in the ways that argument structure dictated. But before I can get to that, we'll need to look at what goes on in the brain when we use language.

So, over to you, Bill.



WHC: From the what (words to syntax) and why (evolutionary) considerations, it's apparent that we need to know a lot more about how brains categorize an entity or a state of affairs, how this memory is retrieved and linked to others, and how we cope with the inevitable ambiguities. Both emergents (like crystals) and conversions of function (like curb cuts) could, at different times, be part of the story.

We're most accustomed to noun attributes (Derek's fruit with a color attribute, a shape attribute, the sound it makes when falling off the tree, and so forth). But they're all optional – you'll forgive me if I mention an apple without telling you its color or size. Verbs too have optional attributes, such as time and place, but each verb has one or more obligatory attributes. How that's implemented in the brain is surely a key question.

If I say (as the billboard ads have taken to doing) "Give him," you'll go looking for three noun phrases. You will happily infer that it's an imperative construction and supply the missing "you," but the lack of a noun for the theme will disturb you, and you'll search for what you missed (supplied, in the ad, by a picture or logo). It's a technique for grabbing people skimming over the ad and bringing them to a screeching halt, making them pay attention, thanks to a subconscious process that rings alarm bells. We talk of computers "hanging up," and this is a prime example of a hung psychological process that might give us some clues to the circuitry someday.
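In code terms, the billboard is banking on a failed argument count (again a toy sketch, reusing the three-argument entry for "give"):

    GIVE_ARGUMENTS = 3   # giver, thing given, recipient

    # "Give him": the imperative lets the hearer supply "you", but the
    # theme (the thing given) is nowhere in the utterance - the hung
    # process that makes the reader stop and look for the picture.
    found = ["you", "him"]
    print(GIVE_ARGUMENTS - len(found))   # 1 argument still missing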

By this point, I'm certainly curious about how the brain can do all of this, what circuits constitute the algorithm. I'm not sure that I can fully answer it (please don't ask me where the Empty Categories are located!), but let me creep up on the problem of brain circuits for structuring sentences by introducing language and memory circuits, Darwinian processes, and the brain's long-distance problem. Then we'll be able to speculate more intelligently about what neural machinery might have been co-opted for syntax.

