Conversations with Neil's Brain
The Neural Nature of Thought & Language
Copyright 1994 by William H. Calvin and George A. Ojemann.
You may download this for personal reading but may not redistribute or archive without permission (exception: teachers should feel free to print out a chapter and photocopy it for students).
William H. Calvin, Ph.D., is a neurophysiologist on the faculty of the Department of Psychiatry and Behavioral Sciences, University of Washington.
George A. Ojemann, M.D., is a neurosurgeon and neurophysiologist on the faculty of the Department of Neurological Surgery, University of Washington.
Stringing Things Together in Novel Ways
"NEIL, we've come to that research study I talked to you about yesterday afternoon," George says, loud enough for Neil to hear him through the sterile tent. We'd just finished the mapping of Spanish, and of reading. A new series of slides was being placed in the projector by George's technician.
[FIGURE 74 Three oral-facial postures used to test sequencing]
And during stimulation of some sites, Neil makes the wrong movements. He adds movements, things that weren't pictured on the slide. Or, during other slides, he produces the movements, but in the wrong order. Such sites disturbing the sequence of movements are more widespread, farther forward in the frontal lobe. And there are now some sites in the part of the temporal lobe near the sylvian fissure, others in the parietal lobe around the back of the sylvian fissure. At these frontal, temporal, and parietal sites, the stimulation has no effect on single, repeated movements; it's disruptive only when a sequence of different movements must be produced.
Movements are supposed to be a frontal lobe function, but here it is, in the temporal and the parietal lobe sites as well, all near the sylvian fissure.
"Neil? We're now ready for the other research study," George announces. "This is the one where you hear the funny words, akma, adma, atma, and you tell me if the letter that changed was k, d, t, or whatever."
"All right," Neil says. "That was easy enough, last night."
The neuropsychologist starts a tape recorder. The voice from the speaker says akma. Neil says k. Then apma, and Neil says p. And on it goes. George stimulates only while Neil hears the word, not when he's to respond. Since stimulation effects don't seem to outlast removing the electrode from the brain surface, nothing interferes with the response. So the stimulation effects are presumably disrupting the perception of the speech sound.
And where are these sites that interfere with sequences of speech sounds? They turn out to be the same sites around the sylvian fissure where stimulation interfered with mimicking face movements, either singly or in sequences. A sensory-sequencing task seems to have the same essential sites as a movement-sequencing task, with 86 percent overlap.
THESE RESEARCH STUDIES in Neil and other patients have provided several further
glimpses into the organization of the language cortex. The area of cortex related to motor speech
functions turns out to be quite wide, involving most of the brain around the sylvian fissure. This
is different from what you'd read in most textbooks, where Broca's area is the only place mentioned in conjunction with motor speech.
Of course, that textbook statement has always been a little surprising, because Broca's actual patient, Leborgne, had damage to the wider area around the sylvian fissure. And it has never been totally clear why Broca focused on just the frontal lobe part of that damage to Leborgne's brain. Indeed, newer studies indicate that a permanent motor language deficit requires a stroke that destroys the whole area around the sylvian fissure, a lot more than just damage to Broca's area. Much of the area of the brain involved in language has a role in the motor aspects of language.
[FIGURE 75 Sequencing specializations of the perisylvian cortex]
There are two subdivisions to the perisylvian movement region. First, there is an area involved in the control of all the movements of the tongue and face needed for speech, located just in front of the face part of the motor strip in the frontal lobe. Except that it controls movements on both sides of the body from one (left) side of the brain, it is located pretty much where a motor area should be by classical reasoning, and it is labeled with Broca's name.
The second area, related to sequencing movements, is the one that occupies the much larger region, including parts that classically aren't supposed to have a motor role. The ability to string things together seems to be one of the great developments behind the emergence of language on the evolutionary scene in our ancestors, so this is likely a particularly human area.
THE EVOLUTIONARY COURSE OF LANGUAGE has, of course, two major connotations: language and languages. There's the evolution of language per se (syntax, grammar) and the establishment of particular languages (such as the Indo-European origins of English).
The latter is easy. We can increasingly estimate the branching pattern for the genealogy of English or French, even use it to infer the migration of peoples around Asia and Europe. But to go back more than a few thousand years before written languages developed is quite difficult.
For language per se, we know that since our last common ancestor with the chimpanzees, some significant improvements have occurred. We don't know very many details of this route, but one rearrangement stands out very clearly: wild chimps mostly use about 36 different sounds, and they mean about 36 different things; the sounds are the vocabulary. Humans also use about three dozen different sounds, known as phonemes. And what do they mean? Nothing.
Our phonemes are almost totally meaningless by themselves. It is only in combination, strung together end to end, that most of our speech sounds have meaning. Sequences of phonemes have the meaning of words (easily producing vocabularies of 10,000 words, and more than 100,000 in some individuals). Sequences of words produce word phrases that have an additional meaning, the kinds of relationships between the actor, the action, and the acted upon. Sequences of word phrases (sometimes embedded within one another) produce the infinite variety of our sentences and paragraphs.
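The combinatorial payoff of this meaning-from-sequence scheme is easy to check with a little arithmetic. A minimal sketch, using the chapter's round figure of about 36 phonemes (the function name is ours, for illustration):

```python
# Rough combinatorics of meaning-from-sequence: with ~36 meaningless
# phonemes, short strings quickly dwarf a one-sound-per-meaning scheme.
PHONEMES = 36  # the chapter's round figure for both chimp calls and human phonemes

def strings_up_to(length):
    """Count all possible phoneme strings of length 1 through `length`."""
    return sum(PHONEMES ** n for n in range(1, length + 1))

print(strings_up_to(1))  # 36 -- one sound per meaning, the chimpanzee scheme
print(strings_up_to(3))  # 47988 -- strings of three sounds or fewer
```

Even strings no longer than three sounds provide tens of thousands of distinct candidates, comfortably covering the 10,000-word vocabularies mentioned above.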
Somewhere along the line, during the 6 million years since we last shared a common ancestor with our chimpanzee cousins, our predecessors appear to have minimized a system that assigned meaning to individual sounds and created this new meaning-from-sequence scheme as an overlay. How and when and where was this conversion done?
That's the big question of anthropology and linguistics. It appears that much of it probably happened in the last 2.5 million years, the ice ages, because that's when hominid brain size and its surface infolding pattern were also changing. Size and infolding changes probably aren't required for brain reorganization, but, just as change is easier in a growing economy than in a steady-state one, so brain reorganization was probably easier during the last several million years than from 6 to 2.5 million years ago, when hominid brains remained ape-sized.
So we may not know when language abilities changed within that long period, but we surely know a big aspect of what changed: sequencing for additional meaning. The combination of chunking and rapid transmission, so that much meaning can be accommodated within the brief span of short-term memory, has surely been important for what can be held in mind at the same time. But without the syntax that allows us to make mental models of who did what to whom, language would lack much of its power. We'd be back to merely using familiar rules of association, as in protolanguage.
STRINGING THINGS TOGETHER is thus likely to be a prime task of the language cortex. And the kinds of studies to which Neil has been contributing show that sequence is indeed a major organizing principle of language cortex.
During the test of identification of speech sounds, stimulating most of the same areas where motor sequencing was altered also interfered with the phoneme (speech sound) identification. When you hear a speech sound and have to decode it, these areas must be active. But they must be similarly active when you make the sequence of movements needed to make speech sounds.
That finding goes against the traditional teaching, that the brain areas related to speech production are separated from those related to speech perception. That led to an expectation that perception and production met somewhere in the brain at an executive site that analyzed the information coming in and issued the commands going out. Only this site, if it exists at all, ought to be both sensory and motor.
That's one common way of looking at matters, but it is not the view of most modern brain researchers; indeed, the neurologist John Hughlings Jackson, who discovered the motor strip map, warned over a century ago that most cortical areas ought to be both sensory and motor.
And the finding in the O.R. was not totally unexpected, for an earlier investigation in psycholinguistics had suggested the same thing, that there was a common mechanism for speech production and perception. That psycholinguistic finding was an observation about how we perceive speech sounds.
With a synthesizer, speech sounds can be continuously varied between pa and ba. But if you ask a subject to tell you what he hears with those continuously varying sounds, he will report either pa or ba, not something in between. This phenomenon is called categorical perception: sounds are heard, or at least identified, as belonging to the closest category. In-between states are assigned to one adjacent category or the other. One wonders what the ancient Greek philosophers would have thought of this ideal form in the brain.
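Categorical perception can be caricatured in a few lines: a continuous acoustic value is always reported as the nearest of two stored prototypes, never as something in between. This is a toy sketch, not anything from the chapter's experiments; the prototype positions on the continuum are arbitrary illustrations.

```python
# Toy model of categorical perception: any stimulus on a pa/ba continuum
# is reported as the nearest stored category, never as an in-between.
# The 0.0 and 1.0 prototype positions are arbitrary, for illustration only.
CATEGORIES = {"ba": 0.0, "pa": 1.0}

def perceive(stimulus):
    """Return the category whose prototype lies closest to the stimulus."""
    return min(CATEGORIES, key=lambda c: abs(CATEGORIES[c] - stimulus))

for s in (0.1, 0.45, 0.55, 0.9):
    print(s, "->", perceive(s))  # every in-between value snaps to "ba" or "pa"
```

The point of the sketch is that the output set is discrete even though the input is continuous, which is exactly what subjects report with synthesized speech.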
The psycholinguist Alvin Liberman proposed that this classification effect arises because the decoding process involves creating a motor representation for how to produce the sound. There is no in-between motor representation, and so the sound is heard as either pa or ba. Liberman's idea has been called the motor theory of speech perception. Neil's study shows that the decoding of the speech sound and the organization of the motor movements both depend on the same brain areas. So one explanation for Neil's findings would be Liberman's motor theory: that what you can speak helps determine what you can hear.
A second candidate would be the production and detection of precise time intervals. Both speech production and detection are unusually fast processes. Precise timing is needed for both, and as we found in discussing dyslexia, a defect in a fast processing system can interfere with a language function. Precise timing also requires a lot of brain in order to do well, without jitter. So its also possible that the function common to this cortical area is determining precise timing, whether of movement or of sensation, rather than stringing things together.
The third candidate is that there is some function common to detecting speech sounds and to generating the motor movements for them, one that depends on these brain areas. As we have seen, a cortical mechanism for unique sequences, whether of movements or sounds, is a likely candidate for that common function. Routine sequences probably can be handled subcortically, but unique sequences, such as those Neil is producing in the O.R. tests, may require cortical expertise.
As often happens in neuroscience, further studies indicate increased complexity. In some other patients undergoing operations like Neil's, George used the precious research minutes to record the activity of individual neurons during measures of speech perception and production. Based on the findings in stimulation studies like Neil's, George had predicted that he would find neurons that responded to both speech production and perception, probably in the very same way for the same sound.
Scientific predictions don't always work out. Such common-to-both neurons must be very rare, at least in the areas where George has made his recordings, for he's found only one neuron so far that acts anything like what he'd predicted. Most of the neurons change activity with either speech production or perception, but not both. That's actually rather curious, because of the way one study was done: the patient heard words without speaking, and then heard the same words again and repeated them aloud. Neurons were active when the word was heard, spoken by the tape recorder, but those same neurons were either inactive or inhibited when the patient spoke the word, even though, in speaking it aloud, the patient also heard it. So we apparently turn off the temporal lobe neurons that perceive speech sounds for a brief period when we produce speech.
Thus neurons specializing in speech production and speech perception seem to be separate. But they are often nearby, so if you record activity from several neurons at once, the whole population of neurons is active with both speech production and perception. Since stimulation alters activity in many neurons at once, it shows the combined population effect rather than the separate single-neuron effect on both production and perception.
It is certainly tempting to label this large region surrounding the sylvian fissure after the feature in common between the two tests with such near-perfect overlap. In short, sequencing cortex.
STUDIES OF SEQUENCING OF MOVEMENTS in aphasic patients have also shown deficits, further evidence for the importance of sequencing in language. In one study by neuropsychologists Doreen Kimura and Catherine Mateer, aphasic patients were tested on hand-and-arm movements that needed to be chained together, such as the ones we use every day for putting a key into a keyhole, rotating the key, and pushing on the door. They used tests that were less likely to have become habits and discovered that, while the aphasic patients could do each movement separately, they had trouble in stringing them together. This is often called apraxia.
Aphasia and apraxia seem to go together. Sequential movements of the hand and arm seemed to have something to do with language. This set the stage for the kinds of studies to which Neil is contributing: Mateer went on to design the oral-facial sequencing tasks that are used in the confines of the O.R.
Although the single-neuron studies suggest that the relation between decoding speech and organizing movement sequences is a property of nearby networks, and not of single neurons active with both functions, the relation has potential implications for the therapy of language disorders and for education more generally. For it seems likely that there may be benefits to language from learning other sequential motor activities.
IF SEQUENCING ABILITIES are a major underlying feature of language mechanisms, it may be that they can be exercised by means other than listening and talking (or watching and signing). A game with rules of procedure, for example, might also involve the same neural machinery.
The cortical-subcortical division between novel associations and skilled routine suggests, however, that it may be discovering the rules of the game which is more important to developing cortical sequencing abilities than accomplished performance. Learning many new songs might be better than learning to sing one song well, at least for exercising sequencing cortex. If sequence rules could be gradually discovered via increasing success, as they are for many childrens video games, the experience might carry over to discovering the syntax of spoken or signed languages a matter of some importance for the deaf children of hearing parents, who may not discover the syntax game in the usual way. The preschool children taking music lessons are presumably getting to discover a second set of syntax rules.
Language, planning ahead, music, dance, and games with rules of progression are all human serial-order skills that might involve some common neural machinery. Hammering, throwing, kicking, and clubbing also belong on that same list, as their sequence of muscle contractions must be completely planned before the movement begins; feedback is too slow to correct the movement once it is underway. Indeed, so many of the uniquely human skills are sequential that it has been suggested that augmenting the sequencer abilities of the ape brain is what human brain evolution was all about.
The sequencing story also reminds us that giving names to things can be dangerous, even when the specialization seems as obvious as that for language. Language cortex is only cortex that appears to support language functions, among other functions. Defining its function by what stops working in its absence confuses a correlation with a cause. The neurologist F. M. R. Walshe pointed out, back in 1947, that to define the function of a brain region in terms of the symptoms following damage is likely to result in a misidentification of the functions of that brain region. His example was a broken tooth on a gear in a cars transmission, which results in such symptoms as a thunk heard once each revolution.
The function of the tooth was not to prevent thunks, and the function of the perisylvian regions is not to prevent aphasia. Language deficits are one clue to the functions of the perisylvian region, strokes affecting hand sequencing are a second clue, and the stimulation mapping of sensory and motor sequencing tasks provides yet another way of inferring the functions that this important cortical region supports. Sequencing cortex is one way to summarize the results so far.
Conversations with Neil's Brain: The Neural Nature of Thought and Language (Addison-Wesley, 1994), co-authored with neurosurgeon George A. Ojemann. It's a tour of the human cerebral cortex, conducted from the operating room, and has been on the New Scientist bestseller list of science books. ISBN 0-201-48337-8.