
A group of scientists from the University of California, San Francisco (UCSF) has developed a neural interface that decodes the commands the brain sends to the vocal tract. According to the study, the technology allowed a patient who had been paralyzed with anarthria for about 15 years to communicate.
Specialists placed a thin, flexible electrode array on the surface of the volunteer’s cerebral cortex. The system recorded neural signals and sent the data to a speech decoder.
Scientists say this is the first time that a paralyzed person who has lost the ability to speak has used neurotechnology to reproduce entire words.
The system deciphers the user’s intention to use the vocal tract, which involves dozens of muscles controlling the larynx, tongue and lips. Humans rely on a relatively small set of basic articulatory configurations when speaking, the researchers said.
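The core decoding idea can be illustrated with a toy sketch. Everything below is synthetic and hypothetical, not the study’s actual method or data: simulated neural feature vectors are assigned to a few invented articulatory classes with a simple nearest-centroid decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: three invented "articulatory configurations" (stand-ins
# for speech classes), each producing noisy neural feature vectors
# scattered around a class-specific centroid.
n_features = 16
classes = ["AH", "EE", "OO"]
centroids = rng.normal(size=(len(classes), n_features))

def record_trial(class_idx, noise=0.3):
    """Simulate one noisy neural recording for a given configuration."""
    return centroids[class_idx] + noise * rng.normal(size=n_features)

# Collect labeled training trials.
X, y = [], []
for idx in range(len(classes)):
    for _ in range(50):
        X.append(record_trial(idx))
        y.append(idx)
X, y = np.array(X), np.array(y)

# Nearest-centroid decoder: estimate one template per class from the
# training trials, then classify new signals by closest template.
templates = np.array([X[y == i].mean(axis=0) for i in range(len(classes))])

def decode(signal):
    """Return the class whose template is closest to the recorded signal."""
    dists = np.linalg.norm(templates - signal, axis=1)
    return classes[int(np.argmin(dists))]

# Decode one fresh, unseen trial per class.
decoded = [decode(record_trial(i)) for i in range(len(classes))]
print(decoded)
```

The real system maps high-dimensional cortical recordings to a small set of speech configurations with far more sophisticated models; the sketch only shows why a limited set of basic configurations makes the decoding problem tractable.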

The scientists noted that at the beginning of the study, the team faced a lack of data on the patterns linking brain activity to even the simplest components of speech – phonemes and syllables.
So they drew on data from volunteers at the UCSF Epilepsy Center, where electrodes are surgically placed on the surface of patients’ cerebral cortex before surgery to map the areas involved in seizures.
Many of these patients take part in research experiments that use recordings of their brain activity, so the team asked volunteers for permission to study their patterns of neural activity during speech.
They recorded changes in participants’ brain waves as they spoke simple words and sounds, while tracking their tongue and mouth movements.
Sometimes the scientists applied paint to patients’ faces so that a computer vision system could extract kinematic gestures. They also used an ultrasound machine under the jaw to image the movements of the tongue inside the mouth.
The team then matched neural patterns to muscle contractions. According to the researchers, there is a representational map in the cortex that controls different parts of the vocal tract. They also found that during fluent speech, different areas of the brain work together in a coordinated manner.
The UCSF team has already recruited two volunteers to test the system. It plans to expand the experiment and eventually enable participants to communicate at a rate of 100 words per minute.
Recall that in May, the American startup Synchron launched clinical trials of the Stentrode neural interface, designed to help paralyzed patients.
In January, scientists developed an AI-powered eye implant that restored sight to a nearly blind woman.
In August 2021, Synchron received approval from the US Food and Drug Administration to conduct human testing of neural interfaces.