Rift of Gab: Speech insights spark statistical static

The heated debate over how people acquire language burns on. A new study suggests that adults can exploit patterns in an artificial language to discern novel nonsense words in a stream of syllables, but use a different mental computation to discover rules governing the construction of those words.

This finding supports the theory that people are born with a brain-based grasp of grammar, say psychologist Jacques Mehler of the International School for Advanced Studies in Trieste, Italy, and his colleagues. That capacity, including the underlying logic of word construction, doesn’t depend on the mental calculations used to recognize individual words, say the researchers.

“Even though learners can compute powerful statistical relations” between elements of a language, they don’t use this capability to learn grammar, the researchers theorize in an upcoming Science.

For now, no one knows whether a person’s unconscious statistical analysis of a newly encountered language contributes to learning the language’s grammar, according to Mehler (SN: 1/16/99, p. 42). What’s striking, he says, is that when people hear a stream of speech, the judicious introduction of very brief pauses enables them to understand rules for building words.

In their experiments, the scientists first presented a 10-minute stream of nine randomly arranged nonsense words to 14 adults. The three-syllable words belonged to three families. Within each family, the first and last syllables were the same, but the middle syllable varied. For instance, one word family consisted of puliki, puraki, and pufoki; another used beliga, beraga, and befoga. The researchers presented the words without pauses between them.
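The design is simple enough to sketch in a few lines of code. The snippet below is a rough illustration, not the researchers' actual stimulus-generation procedure: it builds three word families from fixed first and last syllables with varying middle syllables, then strings randomly ordered words into a pause-free stream. The article names only two families, so the third frame ("ta...du") is an assumed placeholder.

```python
import random

# Word families: first and last syllables fixed, middle syllable varies.
# The first two frames come from the article; the third is hypothetical.
middles = ["li", "ra", "fo"]
frames = [("pu", "ki"), ("be", "ga"), ("ta", "du")]  # third frame assumed

families = [[first + mid + last for mid in middles] for first, last in frames]
words = [w for family in families for w in family]   # nine nonsense words

# Concatenate randomly chosen words into a continuous, pause-free stream,
# loosely mimicking the 10-minute familiarization phase.
stream = "".join(random.choice(words) for _ in range(300))
print(words)
print(stream[:60], "...")
```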

Next, the participants heard individual combinations of three syllables. They rated as “wordlike” only the combinations from the three families.

Another group of 14 adults heard a stream of eight of the nine nonsense words without pauses. In a follow-up test, they rated the omitted word as no more wordlike than other syllable combinations they hadn’t heard. Thus, the volunteers couldn’t extract the rules for word construction from the speech stream.

However, when the eight nonsense words were separated with 25-millisecond gaps, a third group proved able to generalize about word structure and identify the ninth nonsense word as wordlike. Although participants reported no awareness of these fleeting pauses, such cues made the speech stream more similar to the rhythm and intonations of natural speech, Mehler asserts.

The introduction of “silent pauses” in the speech stream may have changed the statistical problem solved by volunteers rather than elicited the discovery of word-building rules, says psychologist Mark S. Seidenberg of the University of Wisconsin-Madison. Participants could have monitored the frequency with which certain first and third syllables appeared together, in his view.
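One way to picture Seidenberg's alternative: a listener could simply tally how often each first syllable is followed, two positions later, by each final syllable. The sketch below illustrates that kind of bookkeeping; it is not a model from either lab, and the toy syllable sequence is invented for the example.

```python
from collections import Counter

def nonadjacent_pair_counts(syllables):
    """Count how often syllable A is followed two positions later by syllable B,
    i.e. the first/third-syllable co-occurrences Seidenberg describes."""
    return Counter((syllables[i], syllables[i + 2])
                   for i in range(len(syllables) - 2))

# Toy syllable sequence drawn from two of the article's word families.
stream = ["pu", "li", "ki", "be", "ra", "ga", "pu", "fo", "ki", "be", "li", "ga"]
counts = nonadjacent_pair_counts(stream)

# High counts for pairs such as ("pu", "ki") and ("be", "ga") would let a
# purely statistical learner favor frame-consistent items without any rule.
for pair, n in counts.most_common(4):
    print(pair, n)
```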

Both Mehler and Seidenberg agree that the new findings are unlikely to quell the debate over language acquisition, even for 25 milliseconds.

Bruce Bower has written about the behavioral sciences for Science News since 1984. He writes about psychology, anthropology, archaeology and mental health issues.
