
Scientists at the University of Texas at Austin have developed a non-invasive AI system that converts human brain activity into a stream of text. The results of the study are published in the journal Nature Neuroscience.
To collect data on brain activity, participants were placed in an fMRI machine and listened to several hours of podcasts. The researchers then created a decoder algorithm that works on the same principle as chatbots like ChatGPT and Bard.
As a result, the trained AI system was able to generate a stream of text while a participant listened to new recordings or imagined telling a new story.
The resulting text is not an exact transcript. According to the researchers, the algorithm instead captures the general thoughts and ideas being expressed.
According to a press release, the trained system produces text that closely or exactly matches the intended meaning of the participant’s original words about half the time.
For example, when a participant heard the words “I don’t have a driver’s license yet” during the experiment, her thoughts were transformed into “she hasn’t even started learning to drive yet.”
“For a non-invasive method, this is a real leap forward compared to what has been done before, which is typically single words or short sentences,” said Alexander Huth, one of the study leaders.
According to him, the decoding model can handle continuous language over extended periods and work with complex ideas.
Participants were also asked to watch four short videos without sound while in the scanner. The AI system was able to accurately recognize “certain events” from the videos, the researchers said.
The brain activity decoder cannot be used outside the lab because it requires an fMRI scanner. But the researchers believe the algorithm will come in handy in the future as more portable brain imaging systems become available.
The developers are also confident that the technology will benefit patients who have lost the ability to communicate physically as a result of a stroke, paralysis, or degenerative disease.
The lead authors of the study intend to patent the technology.
Recall that in March, Japanese researchers taught AI to recreate images based on human brain activity using Stable Diffusion.