Neuroscientists decoded a Pink Floyd song using people’s brain activity

The method captured sounds that resemble the song's rhythm and harmony


A new study shows that decoding brain activity can re-create musical elements of a song.

proksima/iStock/Getty Images Plus

In what seems like something out of a sci-fi movie, scientists have plucked the famous Pink Floyd song “Another Brick in the Wall” from individuals’ brains.  

Using electrodes, computer models and brain scans, researchers previously have been able to decode and reconstruct individual words and entire thoughts from people’s brain activity (SN: 11/15/22; SN: 5/1/23).

The new study, published August 15 in PLOS Biology, adds music into the mix, showing that songs, too, can be decoded from brain activity and revealing how different brain areas pick up an array of acoustic elements. The finding could eventually help improve devices that restore communication to people with paralysis or other conditions that limit their ability to speak.

To decode the song, neuroscientist Ludovic Bellier of the University of California, Berkeley, and colleagues analyzed brain activity recorded by electrodes implanted in the brains of 29 individuals with epilepsy. While in the hospital undergoing monitoring for the disorder, the individuals listened to the 1979 rock song.

People’s nerve cells, particularly those in auditory areas, responded to hearing the song, and the electrodes detected neural signals associated not only with words but also with rhythm, harmony and other musical aspects, the team found. With that information, the researchers built a computer model to reconstruct sounds from the brain activity data and found that it could produce audio that resembles the song.
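The article doesn't spell out the model's internals, but decoding of this kind is typically framed as a regression from electrode activity to an audio spectrogram, which can then be inverted back into sound. The sketch below is a minimal, hypothetical illustration of that idea only: synthetic arrays stand in for the recordings, and ridge regression stands in for whatever model the team actually used.

```python
# Minimal sketch of regression-based neural decoding. Assumption: the study's
# decoder is more sophisticated; this shows the general shape of the problem.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,000 time points of high-frequency activity from
# 64 electrodes, and a 32-band audio spectrogram to reconstruct.
n_times, n_electrodes, n_bands = 1000, 64, 32
mixing = rng.normal(size=(n_electrodes, n_bands))
neural = rng.normal(size=(n_times, n_electrodes))
spectrogram = neural @ mixing + 0.1 * rng.normal(size=(n_times, n_bands))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, random_state=0)

# One ridge regression maps electrode activity to all spectrogram bands;
# a reconstructed spectrogram could then be inverted back into audio.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
reconstruction = decoder.predict(X_test)

# Score the reconstruction as the mean correlation across frequency bands.
corrs = [np.corrcoef(reconstruction[:, b], y_test[:, b])[0, 1]
         for b in range(n_bands)]
print(f"mean band correlation: {np.mean(corrs):.3f}")
```

Scoring the reconstruction as a mean correlation across frequency bands mirrors a common accuracy measure in this literature, though the study's own metrics may differ.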

“It’s a real tour de force,” says Robert Zatorre, a neuroscientist at McGill University in Montreal who was not involved in the study. “Because you’re recording the activity of neurons directly from the brain, you get very direct information about exactly what the patterns of activity are.”

The study highlights which parts of the brain respond to different elements of music. For example, activity in one area within the superior temporal gyrus, or STG, located in the lower middle of each side of the brain, intensified at the onset of specific sounds, such as when a guitar note played. Another area within the STG ramped up its activity when vocals came in and kept it elevated while they lasted.

The STG on the right side of the brain, but not the left, seemed to be crucial to decoding the music. When the researchers withheld that region's data from the computer model, the accuracy of the song reconstruction dropped.
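That kind of ablation test is straightforward to mimic in the sketch above: refit the decoder without one group of electrodes and compare reconstruction accuracy. The electrode indices below are hypothetical, and the snippet assumes the imports and synthetic arrays from the previous example are still in scope.

```python
# Sketch of the ablation idea: drop a set of electrodes (standing in for the
# right STG), refit, and compare accuracy against the full model.
def band_correlation(model, X_tr, y_tr, X_te, y_te):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    return np.mean([np.corrcoef(pred[:, b], y_te[:, b])[0, 1]
                    for b in range(y_te.shape[1])])

full_score = band_correlation(Ridge(alpha=1.0),
                              X_train, y_train, X_test, y_test)

# Hypothetical choice: electrodes 0-15 sit over the region being ablated.
keep = np.arange(neural.shape[1]) >= 16
ablated_score = band_correlation(Ridge(alpha=1.0),
                                 X_train[:, keep], y_train,
                                 X_test[:, keep], y_test)

print(f"full model: {full_score:.3f}, ablated: {ablated_score:.3f}")
```

A large drop in the ablated score, relative to the full model, is what would flag a region as carrying information the decoder depends on.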

“Music is a core part of human experience,” says Bellier, who has been playing instruments since he was 6 years old. “Understanding how the brain processes music can really tell us about human nature. You can go to a country and not understand the language, but be able to enjoy the music.”

Continuing to probe musical perception is likely to be difficult because the brain areas that process it are hard to access without invasive methods. And Zatorre wonders how broadly a computer model trained on just one song can be applied. “Does [it] work on other kinds of sounds, like a dog barking or phone ringing?” he asks.

The goal, Bellier says, is to eventually be able to decode and generate natural sounds in addition to music. In the shorter term, incorporating the more musical elements of speech, including pitch and timbre, into brain-computer devices could help people with brain lesions, paralysis or other conditions communicate better.
