Researchers Are Translating Brain Activity Into Speech

It could lead to a computer-generated speaking tool for the speech impaired

Chia-Yi Hou
OneZero

--

Illustration of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract, which could then be synthesized to reconstruct the spoken sentence (sound wave and sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

Scientists are getting closer to developing a computer-generated tool to allow people with severe speech impairments — like the late cosmologist Stephen Hawking — to communicate verbally.

In a paper published today in the journal Nature, a team of researchers at the University of California, San Francisco (UCSF) reports that it is working on an early computerized system that can decode the brain signals behind the movements made while speaking, and then translate those movements into sounds. The authors said in a press briefing that the study is a proof of principle that it’s possible to synthesize speech by reading brain activity. “It’s been a long-standing goal of our lab to create technologies to restore communications for people with severe speech disability,” says co-author Dr. Edward Chang, a neurosurgeon at UCSF.

The UCSF team’s system works in two stages. In the first, a device surgically attached to the surface of the brain picks up the neural activity that drives vocal tract movements. That activity is used to estimate the physical movements of the jaw, larynx, lips, and tongue while a person is speaking. In the second stage, those movements are decoded so the computer can recreate…
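To make the two-stage idea concrete, here is a minimal sketch of such a pipeline, not the UCSF team’s actual implementation: it assumes PyTorch, recurrent layers, and made-up electrode, articulator, and acoustic-feature counts, and simply chains a neural-to-movement decoder into a movement-to-sound decoder whose output a vocoder could turn into audio.

```python
# Minimal sketch (not the authors' code) of a two-stage speech decoder:
# stage 1 maps recorded neural activity to estimated vocal tract movements;
# stage 2 maps those movements to acoustic features for a speech synthesizer.
# All layer sizes and feature counts are illustrative assumptions.
import torch
import torch.nn as nn

class NeuralToKinematics(nn.Module):
    """Stage 1: neural recordings -> articulator movements (jaw, larynx, lips, tongue)."""
    def __init__(self, n_electrodes=256, n_articulators=32, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, neural):              # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                  # (batch, time, n_articulators)

class KinematicsToAcoustics(nn.Module):
    """Stage 2: articulator movements -> acoustic features (e.g., spectral coefficients)."""
    def __init__(self, n_articulators=32, n_acoustic=25, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):          # kinematics: (batch, time, n_articulators)
        h, _ = self.rnn(kinematics)
        return self.out(h)                  # features a vocoder could synthesize into audio

# Usage on dummy data: one recording of 1,000 time steps across 256 electrodes.
stage1, stage2 = NeuralToKinematics(), KinematicsToAcoustics()
neural_activity = torch.randn(1, 1000, 256)   # placeholder for recorded brain signals
movements = stage1(neural_activity)           # estimated vocal tract movements
acoustic_features = stage2(movements)         # input to a speech synthesizer
print(acoustic_features.shape)                # torch.Size([1, 1000, 25])
```

The point of the split is that the intermediate movement estimate acts as a physically meaningful stepping stone between raw brain signals and sound, rather than decoding audio from neural activity in a single jump.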

--

Chia-Yi Hou

Science journalist based in New York with a PhD in infectious disease ecology. @chiayi_hou on Twitter.