How brains filter the signal from the noise


Our brains can distinguish a single voice in the middle of a noisy street. A new study in ferrets shows how auditory systems might separate the signal from the noise.

tracktwentynine/Flickr (CC BY-NC-SA 2.0)

When you are waiting with a friend to cross a busy intersection, car engines running, horns honking and the city humming all around you, your brain is busy processing all those sounds. Somehow, though, the human auditory system can filter out the extraneous noise and allow you to hear what your friend is telling you. But if you tried to ask your iPhone a question, Siri might have a tougher time. 

A new study shows how the mammalian brain can distinguish the signal from the noise. Brain cells in the primary auditory cortex can both turn down the noise and increase the gain on the signal. The results show how the brain processes sound in noisy environments, and might eventually help in the development of better voice recognition devices, including improvements to cochlear implants for those with hearing loss. Not to mention getting Siri to understand you on a chaotic street corner.

Nima Mesgarani and colleagues at the University of Maryland in College Park were interested in how mammalian brains separate speech from background noise. Ferrets have an auditory system that is extremely similar to our own, so the researchers recorded from the A1 area of the ferret cortex, which corresponds to the human primary auditory cortex. Equipped with carefully implanted electrodes, the alert ferrets listened to both ferret sounds and snippets of human speech. The sounds were presented alone, against a background of white noise, against pink noise (noise with equal energy per octave, which sounds lower in pitch than white noise) and against reverberation. The researchers then took the neural signals recorded from the electrodes and used a computer simulation to reconstruct the sounds the animals were hearing.
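For the curious, pink noise is straightforward to make by reshaping white noise in the frequency domain. The Python sketch below is just an illustration of the definition in the parenthetical above; the study's actual stimuli were of course produced with proper lab software, and every parameter here is invented.

```python
import numpy as np

def pink_noise(n_samples, seed=None):
    """Generate pink (1/f) noise by shaping white noise in the frequency domain.

    Pink noise carries equal energy per octave, so its power falls off as 1/f
    and it sounds lower in pitch than white noise, which has equal energy per hertz.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    # Scale each amplitude by 1/sqrt(f) so power (amplitude squared) goes as 1/f.
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC component alone
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.abs(pink).max()        # normalize to the range [-1, 1]

noise = pink_noise(44100)  # one second of pink noise at CD sampling rate
```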

In results published April 21 in Proceedings of the National Academy of Sciences, the researchers show the ferret brain is quite good at detecting both ferret sounds and speech in all three noisy conditions. “We found that the noise is drastically decreased, as if the brain of the ferret filtered it out and recovered the cleaned speech,” Mesgarani says.
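To get a feel for what “reconstructing the sound from neural signals” means, here is a stripped-down sketch of linear stimulus reconstruction on simulated data. Everything in it, from the ridge-regression decoder to the toy sizes and the simulated responses, is my own illustration of the general technique, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 500 time bins, 16 frequency channels, 40 recorded neurons.
T, n_freq, n_neurons = 500, 16, 40

# A stand-in "spectrogram" stimulus, and simulated neural responses that
# are a noisy linear function of it.
stimulus = rng.standard_normal((T, n_freq))
encoding = rng.standard_normal((n_freq, n_neurons))
responses = stimulus @ encoding + 0.5 * rng.standard_normal((T, n_neurons))

# Ridge-regression decoder: map the population responses back to the stimulus.
lam = 1.0
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_neurons),
                    responses.T @ stimulus)
reconstruction = responses @ W

# How well does the reconstruction match the original, channel by channel?
corr = [np.corrcoef(stimulus[:, f], reconstruction[:, f])[0, 1]
        for f in range(n_freq)]
print(f"mean reconstruction correlation: {np.mean(corr):.2f}")
```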

A previous study, published in 2013 in PLOS Biology by auditory neuroscientist Andrew King and his laboratory at the University of Oxford in England, showed that brain cells can enhance the gain of their responses, increasing the signal corresponding to the sound of interest, while tuning out the noise. “The great thing is we got the same result,” King says. “That replication is extremely important.”

Mesgarani’s group takes that work a step further, showing that there’s more to sound filtering than boosting the signal. Their simulation showed that brain cells must not only boost the signal but also dampen the noise. The dampening involves synaptic depression, in which brain cells fire less and less in response to a signal that is sustained or repeated. Accordingly, the auditory cortex responded less to the steady background noise while responding more to the speech.

“When I’m talking to you, my voice is coming on and off in bursts as I open and close my lips, that’s very dynamic, while white noise is very static,” says Shihab Shamma, a cognitive neuroscientist at the University of Maryland and an author on the study. “Sounds that are more dynamic get enhanced,” Shamma explains, “while sounds that are sustained and repeated over a long period of time are basically thrown out.”
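Shamma’s description maps neatly onto a simple adaptation model. The Python sketch below is a rate-based toy version of a depressing synapse, with parameters invented purely for illustration: a steady input drains a pool of synaptic “resources” and fades away, while on-off bursts keep getting through.

```python
import numpy as np

def depressing_synapse(inp, dt=0.001, tau_rec=0.5, depletion=20.0):
    """Toy model of synaptic depression: input consumes a pool of synaptic
    'resources' that recovers slowly, so a sustained input is squelched
    while brief bursts are still transmitted."""
    resources = 1.0
    out = np.zeros_like(inp)
    for i, x in enumerate(inp):
        out[i] = resources * x
        # Resources recover toward 1 and are consumed in proportion to input.
        resources += dt * ((1.0 - resources) / tau_rec - depletion * resources * x)
        resources = max(resources, 0.0)
    return out

t = np.arange(0, 2.0, 0.001)
static = np.ones_like(t)                                   # steady, noise-like input
bursty = (np.sin(2 * np.pi * 4 * t) > 0.7).astype(float)   # on-off, speech-like input

for name, sig in [("static", static), ("bursty", bursty)]:
    out = depressing_synapse(sig)
    print(f"{name} input: {out.sum() / sig.sum():.2f} of the input is transmitted")
```

Run it and the static input comes through at roughly a tenth of its strength, while the bursty input is transmitted at more than twice that rate: the sustained sound is thrown out, the dynamic one is favored.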

The ability to understand speech in the presence of noise is often degraded in people with hearing loss, King explains. “It’s the single biggest challenge of someone with a cochlear implant.” Algorithms that combine enhancing the important signal and dampening the irrelevant noise could help make speech recognition systems more sensitive.
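One classic engineering counterpart to “dampen the noise, keep the signal” is spectral subtraction: estimate the noise spectrum during a speech-free stretch, then subtract it from every frame. The sketch below is a minimal illustration of that textbook technique, not anything an actual implant or recognizer ships; the demo signal and all parameters are made up.

```python
import numpy as np

def spectral_subtraction(noisy, frame=512, hop=256, noise_frames=10):
    """Bare-bones spectral subtraction: estimate the noise spectrum from the
    first few frames (assumed to contain no speech), subtract it from every
    frame's magnitude spectrum, and resynthesize using the noisy phase."""
    window = np.hanning(frame)
    n_frames = 1 + (len(noisy) - frame) // hop
    frames = np.stack([noisy[i * hop : i * hop + frame] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)
    mags, phases = np.abs(spectra), np.angle(spectra)

    noise_mag = mags[:noise_frames].mean(axis=0)          # noise estimate
    cleaned = np.maximum(mags - noise_mag, 0.05 * mags)   # subtract, with a floor

    # Overlap-add resynthesis (amplitude scaling is approximate in this sketch).
    out = np.zeros(len(noisy))
    for i, spec in enumerate(cleaned * np.exp(1j * phases)):
        out[i * hop : i * hop + frame] += np.fft.irfft(spec, n=frame) * window
    return out

# Hypothetical demo: a tone that switches on after 0.3 s, buried in white noise.
rng = np.random.default_rng(1)
t = np.arange(16000) / 16000.0
tone = np.sin(2 * np.pi * 440 * t) * (t > 0.3)
denoised = spectral_subtraction(tone + 0.3 * rng.standard_normal(16000))
```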

King also notes that the new study looked at just speech sounds against general noise. The next step is to see how the brain filters out other voices to focus on a single voice, a phenomenon called the cocktail party problem. With a better understanding of how our auditory system functions, cochlear implants, and even our phones, might eventually be able to pick our voices out in a crowd just as well as we can. 

Follow me on Twitter: @scicurious

Bethany was previously the staff writer at Science News for Students. She has a Ph.D. in physiology and pharmacology from Wake Forest University School of Medicine.
