For kids learning new words, it’s all about context


LISTEN AND LEARN For three years, microphones and fisheye video cameras embedded in ceilings, including this kitchen cam, captured the language environment as a child learned to talk.

MIT Media Lab

Like every other person who carries around a smartphone, I take a lot of pictures, mostly of my kids. I thought I was bad with a few thousand snaps filling my phone’s memory. But then I talked to MIT researcher Deb Roy.

For three years, Roy and a small group of researchers recorded every waking moment of Roy’s son’s life at home, amassing over 200,000 hours of video and audio recordings.

Roy’s intention wasn’t to prove he was the proudest parent of all time. Instead, he wanted to study how babies learn to say words. Roy, a communication and machine learning expert, and his wife Rupal Patel, also a speech researcher, recognized that having a child would be a golden research opportunity.

The idea to amass this gigantic dataset “was kicking around and something we thought about for years,” Roy says. So after a pregnancy announcement and lots of talking and planning and “fascinating conversations” with the university administration in charge of approving human experiments, the researchers decided to go for it.   

To the delight of his parents, a baby boy arrived in 2005. When Roy and Patel brought their newborn home, the happy family was greeted by 11 cameras and 14 microphones, tucked up into the ceiling. From that point on, cameras rolled whenever the baby was awake.

FROM BABE’S MOUTH Over several months, video cameras captured Deb Roy’s son getting better at saying “ball” and then “blue ball,” a distinctive object that can be seen in some of the clips. MIT Media Lab

The researchers combed this raw data for two things: speech that the child could have heard and words spoken by the child (including his first word at 9 months: Mama). These snippets were then transcribed into a database that let scientists hunt for clues about what features of words make a child more likely to say them.

The unprecedented experiment earned attention while it was under way. And after the three years were up, Roy gave a delightful TED talk describing some of the project’s promise and early results. This week brings the scientific publication of an analysis of the lexical treasure trove. 

Surprisingly, the key factor that predicts whether a word will emerge from a baby’s mouth isn’t tied to how many times the baby hears that particular word. Instead, a feature called distinctiveness is what makes the difference, the researchers report September 21 in the Proceedings of the National Academy of Sciences.

Word frequency is important on some level, says Roy. “If a child has never heard a word, he’s not going to produce it.” But distinctiveness — the contextual features that situate a word in a particular place, time or situation — was much more important than frequency for predicting whether his son would say a word, Roy says. “No matter how you cut the data, it was head and shoulders above the other factors.”

Distinctiveness is evident in words like kick, which is usually uttered during a ball game; breakfast, which comes in the morning near the table; or peek-a-boo, which is said while playing the game itself. Unlike general words like with, these distinct words come embedded in a rich situation, offering lots of clues to help a child understand and say them. Distinctiveness “helps crack the code of what that word means,” Roy says.

 By combing through their dataset, “you get this picture of certain words that are highly predictable,” says study coauthor Brandon Roy of MIT and Stanford University (and no relation to Deb Roy). “Distinctiveness puts constraints on the word,” helping to narrow its possible meanings, he says.
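To make the idea concrete, here is a minimal sketch (not the authors’ actual analysis pipeline, whose measures and data are far richer) of how a distinctiveness-style score could be computed: compare the distribution of contexts in which a word is heard against the household’s overall baseline. The context labels and counts below are invented for illustration.

```python
from collections import Counter
from math import log

def distinctiveness(word_contexts, all_contexts, smoothing=1e-6):
    """Score how tightly a word is tied to particular contexts.

    word_contexts: context labels (e.g. room plus time of day) observed
    whenever the word was spoken. all_contexts: labels for all utterances.
    Returns the KL divergence between the word's context distribution and
    the overall one -- higher means the word shows up in a narrower, more
    predictable slice of daily life.
    """
    labels = set(all_contexts)
    base = Counter(all_contexts)
    word = Counter(word_contexts)
    n_base, n_word = len(all_contexts), len(word_contexts)
    kl = 0.0
    for c in labels:
        p = (word[c] + smoothing) / (n_word + smoothing * len(labels))
        q = (base[c] + smoothing) / (n_base + smoothing * len(labels))
        kl += p * log(p / q)
    return kl

# Toy example: "breakfast" clusters in the kitchen each morning,
# while a general word is scattered across rooms and times of day.
all_ctx = ["kitchen-am", "living-pm", "kitchen-pm", "living-am"] * 25
print(distinctiveness(["kitchen-am"] * 20, all_ctx))  # high: concentrated
print(distinctiveness(all_ctx[:20], all_ctx))         # near zero: diffuse
```

In this toy version, a word heard only at the morning kitchen table scores high, while a word spread evenly across the house scores near zero, which is the sense in which distinctiveness “puts constraints on the word.”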

Language development expert Marianella Casasola of Cornell University calls the study “truly remarkable,” providing a level of insight that until now had to be inferred from smaller studies.

Of course, the data, for all their power, come from just one child. Scientists don’t know whether word distinctiveness would have the same effect on other kids. But given the depth of the results, Roy thinks it’s reasonable to suspect that the findings reflect a general feature of how people learn words.

These days, the cameras and microphones in Roy’s house are quiet. But he admits that it was hard to turn them off. Their second child, a daughter, had just entered the babbling phase as they were winding down, so for a while, the team ramped the experiment back up. “Then at some point, we were like, wait a minute. When will we turn this off? It was not easy,” he says. “There is something very special about having this kind of dataset.” 

Laura Sanders is the neuroscience writer. She holds a Ph.D. in molecular biology from the University of Southern California.
