Scientists Use AI To Turn Brain Signals Into Speech


A recent research study could give a voice to those who no longer have one. Scientists used electrodes and artificial intelligence to create a device that can translate brain signals into speech. This technology could help restore the ability to speak in people with brain injuries or neurological disorders such as epilepsy, Alzheimer's disease, multiple sclerosis, and Parkinson's disease.

The new system, being developed in the laboratory of Edward Chang, MD, shows that it is possible to create a synthesized version of a person's voice that can be controlled by the activity of their brain's speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disabilities, the authors say, but could also reproduce some of the musicality of the human voice that conveys a speaker's emotions and personality.

The study recorded the brain activity of five epilepsy patients who already had electrodes implanted as part of their treatment. When you speak, your brain sends signals from the motor cortex to the muscles in your jaw, lips and larynx to coordinate their movement and produce sound. The patients were asked to read a list of sentences aloud while the electrodes recorded their brain activity, and an AI algorithm then learned to decode the relationship between those brain signals and the speech they produced, translating brainwaves into words spoken by a computer.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Rather than mapping brain signals directly to words, the system first translates them into the movements of the lips, jaw, tongue and throat that humans use to produce speech. Machine learning models then use those simulated vocal-tract movements to predict and synthesize the spoken words, and a computer transcribes the output by choosing the most likely candidate for each word. Earlier brain-computer interfaces allowed people with paralysis to type with their brains, but only at a rate of about eight words per minute; this new technology could reach up to 150 words per minute.
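The two-stage pipeline described above can be sketched in code. This is a conceptual illustration only, not the UCSF implementation (which used recurrent neural networks trained on real intracranial recordings); the channel counts, feature dimensions, and linear models here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the study's actual values):
# 256 electrode channels, 33 articulatory features (lips, jaw, tongue,
# larynx), 32 spectrogram frequency bands.
W_articulation = rng.standard_normal((33, 256)) * 0.01  # stage 1 weights
W_acoustics = rng.standard_normal((32, 33)) * 0.01      # stage 2 weights

def decode_frame(neural_frame: np.ndarray) -> np.ndarray:
    """Decode one time step of brain activity into a spectrogram frame."""
    # Stage 1: neural activity -> simulated vocal-tract movements.
    kinematics = np.tanh(W_articulation @ neural_frame)
    # Stage 2: vocal-tract movements -> acoustic features that a
    # vocoder could turn into audible speech.
    return W_acoustics @ kinematics

# One second of simulated neural data at 200 frames per second.
neural_data = rng.standard_normal((200, 256))
audio_frames = np.array([decode_frame(f) for f in neural_data])
print(audio_frames.shape)  # (200, 32)
```

The key design idea is the intermediate articulatory stage: decoding movement commands, which the motor cortex actually encodes, is an easier learning problem than decoding sound directly from brain activity.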

Results varied depending on how many options listeners had to choose from; on average, listeners correctly identified about 70% of the words. Given 25 options per word, they got 69% of the words correct; with 50 options, 47%.

This technology could help people who have lost the ability to communicate after a stroke or other illness speak to others again. Some worry, though, that it could work as a "mind-reading device" and compromise people's private thoughts. Scientists say, however, that we are still a long way from it being able to accurately reproduce speech.
