
New Lessons in How Brain Acquires Language Offered at Seminar

June 14, 2002

Why is it that a 3-year-old child can be fluent in Portuguese, French, English, Chinese, Hungarian, or any one of the world's 6,800 languages, but a 38-year-old American with 4 years of Spanish under his belt can barely ask for directions--en Español--to the corner drugstore?

The answer to that paradox probably has less to do with the age of the person learning the language and more to do with the actual learning experience, said Dr. Patricia Kuhl, the William P. and Ruth Gerberding professor at the University of Washington and codirector of the university's Center for Mind, Brain, and Learning. Kuhl was one of three esteemed researchers on language and learning to take part in a symposium titled "Neural and Behavioral Aspects of Early Language Development," the third in a series of symposia on language and the brain, held April 25 in Lipsett Amphitheater. Other speakers were Dr. Helen J. Neville of the University of Oregon and Dr. Laura-Ann Petitto of Dartmouth College.

According to Kuhl, who has advised both Presidents Clinton and Bush on the topic of early cognitive development, a confluence of factors allows children to master their native tongue with remarkable ease and efficiency.

She explained that before babies utter their first slobbery syllable, they have been hard at work mentally calculating the statistical nuances of their parents' language. Not only are babies capable of deciphering sounds, she said, but they also can map how the individual sounds are combined, how syllables are stressed, and what the intonation qualities are--"all by the time they celebrate their first birthday."

Babies can "babble" using rhythmic hand motions. (Photo: Jeffrey de Belle)

Seeking to pinpoint the period in a child's life when one language takes over another, Kuhl studied how Japanese and American babies perceive sounds. She discovered that both American and Japanese babies were able to differentiate equally well between the sounds ra and la at the ages of 6 to 8 months. However, by the time they were 10 to 12 months old, the American babies had become more adept at distinguishing between the two sounds, while the Japanese babies had grown steadily worse. Likewise, American babies who at 6 months were able to distinguish between sounds commonly used in Mandarin Chinese had lost that ability by the time they were 10 to 12 months of age. But American babies exposed to Chinese for a total of 5 hours when they were 9 months old performed up to par with their Chinese counterparts.

Kuhl said that "motherese," the high-pitched, sing-songy voice that most adults naturally lapse into when addressing a baby, also makes it easier for the baby to learn a language because the sounds are greatly exaggerated. Currently, she is studying whether learning two languages at once can slow a baby's progress because of interference between the conflicting frameworks.

How well an individual learns a language may also be affected by the brain itself, according to Neville, professor of psychology and neuroscience and director of the University of Oregon's Brain Development Lab and Center for Cognitive Neuroscience. For years, she has investigated whether certain biological factors can limit how a brain goes about learning language, as well as how experience can influence which parts of the brain are devoted to the task.

In studying how the brain acquires language, Neville distinguishes between semantics, which is the meaning behind each noun, verb, adjective, and adverb that enriches a person's vocabulary, and syntax, which is the logical placement of words in a sentence that follows the rules of grammar.


Using noninvasive brain imaging techniques such as event-related brain potentials and magnetic resonance imaging, Neville studies the sections of the brain that are most active when early and late learners of English and American Sign Language (ASL) detect inaccuracies in semantics or syntax in the two languages. ASL possesses all of the elements of a spoken language, but it uses vision and movement to communicate rather than sound.

Although the left hemisphere has long been known to be the primary hub of language activity in the brain, Neville has discovered that both hemispheres can play a role in certain situations.

For example, early users of ASL, whether deaf or hearing, employ both the left and right hemispheres when reading ASL, whereas late ASL users rely solely on the left hemisphere. Neville suggests that the right hemisphere helps analyze the shape, motion, and location of words in a signed sentence, known as its spatial syntax; however, there are time limits on when a person can recruit it for this purpose.

Neville also noted that early users of English, as opposed to deaf individuals and other late learners, demonstrate a lopsided degree of activity favoring the left side of the brain when reading English, a trait that is largely driven by the acquisition of grammatical understanding. Consequently, grammatical development depends on the age at which a language is learned--namely, the earlier, the better--while semantics can be learned and developed throughout a person's life.

But what mechanism do children use to acquire language, and is it the same for hearing and deaf children?

To answer this question, Petitto, professor and director of Dartmouth's Cognitive Neuroscience Laboratory for Language and Child Development, studied deaf children who were exposed to ASL and hearing children exposed to speech. She discovered that both groups attained the same language milestones within days of one another: saying or signing their first word at 12 months, two-word sentences at 18 months, and so on. Even bilingual children who learned both ASL and English achieved these milestones in both languages according to the same timetable.

Furthermore, Petitto noted that the mechanism used by both deaf and hearing children in acquiring language is rhythmic oscillation, or what most people refer to as babbling. When a hearing baby babbles, he is uttering shortened phonetic units in repetitive and meaningless ways. Petitto has discovered that deaf babies also produce repetitive and meaningless babble--silently--using their hands. Comparing hearing children born to deaf parents who use only sign language to communicate with hearing children born to hearing parents, Petitto found that the children born to deaf parents used a variety of rhythmic hand motions very different from, and in addition to, the typical hand movements that the babies born to hearing parents made.

Petitto contends that not only is the mechanism the same for hearing and deaf babies, but it arises from the same tissue in the brain. Using positron emission tomography, a brain imaging technique, her laboratory has found that regions of the brain previously thought to be reserved for the processing of speech and sound, such as the superior temporal gyrus, were still functional in deaf people. She proposes that brain tissue involved in acquiring language is not so much dedicated to the processing of speech and sound but to more abstract activities that help us pattern the language we use, be it spoken or signed.

The next symposium in the series, scheduled for Sept. 19, will focus on language and aging. The speakers will be Dr. Susan Kemper, University of Kansas; Dr. Loraine Obler, City University of New York; and Dr. Sandra Weintraub, Northwestern University. For more information, contact Dr. Judith Cooper, (301) 496-5061.

The April 25 seminar was sponsored by NIDCD, NICHD, NIMH, NINDS, NIA, and the Office of Behavioral and Social Sciences Research.
