SAN DIEGO — Scientists have devised ways to “read” words directly from brains. Brain implants can translate internal speech into external signals, permitting communication from people with paralysis or other diseases that steal their ability to talk or type.
New results from two studies, presented November 13 at the annual meeting of the Society for Neuroscience, “provide additional evidence of the extraordinary potential” that brain implants have for restoring lost communication, says neuroscientist and neurocritical care physician Leigh Hochberg.
Some people who need help communicating can currently use devices that require small movements, such as changes in eye gaze. But those movements aren’t possible for everyone. So the new studies targeted internal speech, which requires a person to do nothing more than think.
“Our device predicts internal speech directly, allowing the patient to just focus on saying a word inside their head and transform it into text,” says Sarah Wandelt, a neuroscientist at Caltech. Internal speech “could be much simpler and more intuitive than requiring the patient to spell out words or mouth them.”
Neural signals associated with words are detected by electrodes implanted in the brain. The signals can then be translated into text, which can be made audible by computer programs that generate speech.
That approach is “really exciting, and reinforces the power of bringing together fundamental neuroscience, neuroengineering and machine learning approaches for the restoration of communication and mobility,” says Hochberg, of Massachusetts General Hospital and Harvard Medical School in Boston, and Brown University in Providence, R.I.
Wandelt and colleagues could accurately predict which of eight words a person who was paralyzed below the neck was thinking. The man was bilingual, and the researchers could detect both English and Spanish words.
Electrodes picked up nerve cell signals in his posterior parietal cortex, a brain area involved in speech and hand movements. A brain implant there might eventually be used to control devices that can perform tasks usually done by a hand too, Wandelt says.
Another approach, led by neuroscientist Sean Metzger of the University of California, San Francisco and his colleagues, relied on spelling. The participant was a man called Pancho who hadn’t been able to speak for more than 15 years after a car accident and stroke. In the new study, Pancho didn’t use letters; instead, he attempted to silently say code words, such as “alpha” for A and “echo” for E.
By stringing together the letters these code words stand for, the man produced sentences such as “I do not want that” and “You have got to be kidding.” Each spelling session ended when he attempted to squeeze his hand, creating a movement-related neural signal that stopped the decoding. These results presented at the neuroscience meeting were also published November 8 in Nature Communications.
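The spelling scheme itself is simple to picture. A minimal sketch, in Python: assume the neural decoder (not shown) has already classified each silent attempt into a NATO-style code word, and that the hand squeeze arrives as a placeholder stop token; the token names and the space convention here are illustrative assumptions, not the study’s actual interface.

```python
# NATO-style phonetic alphabet mapping code words to letters,
# as in the study's "alpha" -> A, "echo" -> E examples.
NATO = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d",
    "echo": "e", "foxtrot": "f", "golf": "g", "hotel": "h",
    "india": "i", "juliett": "j", "kilo": "k", "lima": "l",
    "mike": "m", "november": "n", "oscar": "o", "papa": "p",
    "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def spell_out(code_words, space_token="space", stop_token="squeeze"):
    """Turn a stream of decoded code words into text, halting on the
    movement-related stop signal (a hypothetical placeholder token)."""
    letters = []
    for token in code_words:
        if token == stop_token:    # hand squeeze ends the spelling session
            break
        if token == space_token:   # word boundary (assumed convention)
            letters.append(" ")
        else:
            letters.append(NATO[token])
    return "".join(letters)

# Hypothetical decoder output spelling part of "I do not want that"
decoded = ["india", "space", "delta", "oscar", "space",
           "november", "oscar", "tango", "squeeze"]
print(spell_out(decoded))  # i do not
```

The real system’s hard problem, of course, is the classifier that turns neural signals into those code words in the first place; this sketch only shows why distinctive multi-letter code words are easier to tell apart than single spoken letters.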
This system allowed Pancho to produce around seven words per minute. That’s faster than the roughly five words per minute his current communication device can manage, but much slower than normal speech, which runs about 150 words a minute. “That’s the speed we’d love to hit one day,” Metzger says.
To be useful, the current techniques will need to get faster and more accurate. It’s also unclear whether the technology will work for other people, perhaps with more profound speech disorders. “These are still early days for the technologies,” Hochberg says.
Progress will be possible only with the help of people who volunteer for the studies. “The field will continue to benefit from the incredible people who enroll in clinical trials,” says Hochberg, “as their participation is absolutely vital to the successful translation of these early findings into clinical utility.”