Reading minds with MRIs is possible
It sounds like science fiction, but it's real enough to be unsettling

- May 3, 2023
- Updated: July 2, 2025 at 2:20 AM

Researchers have developed an artificial intelligence-based decoder that translates brain activity into a continuous stream of text, a breakthrough that makes it possible for the first time to read a person’s thoughts non-invasively.
The decoder was able to reconstruct speech with astonishing accuracy while people listened to a story, or even imagined it silently, using only functional MRI data.
Previous speech decoding systems required surgical implants, and this latest breakthrough opens the prospect of new ways to restore speech in patients with communication difficulties due to stroke or motor neuron disease.
Real-time MRI, the big challenge
The achievement overcomes a fundamental limitation of fMRI: although the technique can map brain activity at a particular location with very high spatial resolution, an inherent time lag makes it impossible to track that activity in real time.
The lag exists because fMRI scanners measure the blood-flow response to neural activity, which rises and returns to baseline over roughly 10 seconds, a physiological limit that even the most powerful scanner cannot get around.
This hard limit has hampered efforts to interpret brain activity in response to natural speech, because each measurement reflects a “hodgepodge of information” spread over a few seconds.
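As a rough illustration of that smearing (not something taken from the article), the blood-flow response that fMRI measures is commonly modeled with a canonical “double-gamma” haemodynamic response function. The sketch below assumes the widely used SPM-style parameter values and shows how a single instant of neural activity is spread over roughly ten seconds of signal.

```python
# Sketch: canonical double-gamma haemodynamic response function (HRF).
# Parameter values follow commonly used SPM-style defaults; this illustrates
# the ~10 s blood-flow lag described above, it is not code from the study.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6.0):
    """Double-gamma HRF sampled at times t (seconds)."""
    positive = gamma.pdf(t, peak)        # main response, peaking ~5-6 s after the stimulus
    negative = gamma.pdf(t, undershoot)  # later undershoot before returning to baseline
    hrf = positive - ratio * negative
    return hrf / hrf.max()               # normalise the peak to 1 for readability

t = np.arange(0, 25, 0.5)                # 25-second window, 0.5 s steps
hrf = canonical_hrf(t)
peak_time = t[hrf.argmax()]
settle_time = t[np.where(hrf > 0.1)[0][-1]]  # last time the response exceeds 10% of its peak
print(f"peak at ~{peak_time:.1f} s; back near baseline by ~{settle_time:.1f} s")
```

A word spoken at one moment therefore leaves its imprint on many subsequent fMRI samples, which is why the signal blends several seconds of speech together.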

However, the advent of large language models, the type of AI that underpins OpenAI’s ChatGPT, provided a new way in.
These models can represent the semantic meaning of speech as numbers, which allowed the scientists to look for patterns of neural activity corresponding to strings of words with a particular meaning, rather than trying to read the activity word by word.
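To make “representing meaning as numbers” concrete, here is a minimal sketch that extracts a semantic vector for a phrase from the publicly available GPT-1 checkpoint on Hugging Face (`openai-gpt`). The article does not describe the study’s own feature-extraction pipeline, so treat the model choice and the mean-pooling step as illustrative assumptions.

```python
# Sketch: turn a phrase into a numeric "meaning" vector with GPT-1.
# Assumes the Hugging Face `transformers` library and the `openai-gpt` checkpoint;
# the study's actual feature extraction may differ.
import torch
from transformers import OpenAIGPTModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTModel.from_pretrained("openai-gpt")

with torch.no_grad():
    inputs = tokenizer("I don't have a driver's license yet", return_tensors="pt")
    hidden = model(**inputs).last_hidden_state      # shape: (1, n_tokens, 768)
    phrase_vector = hidden.mean(dim=1).squeeze()    # one 768-dimensional vector for the phrase

print(phrase_vector.shape)  # torch.Size([768])
```

Phrases with similar meanings end up with similar vectors, which is what lets the decoder work at the level of ideas rather than individual words.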
The training process was intensive: three volunteers each had to lie in a scanner for 16 hours, listening to podcasts.
The decoder was trained to relate brain activity to meaning using a large language model, GPT-1, a precursor of ChatGPT.
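Conceptually, relating brain activity to meaning can be done with an “encoding model”: a regularized linear map from semantic features to voxel responses, with candidate phrases then scored by how well their predicted responses match the recorded activity. The toy sketch below, built on synthetic data and scikit-learn’s Ridge regression, illustrates that general idea; it is not the study’s actual pipeline, and all array sizes are made up.

```python
# Toy illustration of an fMRI "encoding model" decoder: predict voxel responses
# from semantic feature vectors, then score candidate phrases by how well their
# predicted responses match the recorded activity. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 2000, 768, 500   # hypothetical sizes

# Training data: semantic features of heard speech and the fMRI responses they evoked.
train_features = rng.standard_normal((n_train, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
train_responses = train_features @ true_weights + rng.standard_normal((n_train, n_voxels))

# Fit the encoding model: semantic features -> voxel responses.
encoder = Ridge(alpha=10.0).fit(train_features, train_responses)

# Decoding step: prefer the candidate phrase whose predicted response best
# correlates with the newly recorded brain activity.
observed = (rng.standard_normal((1, n_features)) @ true_weights).ravel()
candidates = {
    "I don't have a driver's license yet": rng.standard_normal(n_features),
    "You haven't started learning to drive yet": rng.standard_normal(n_features),
}  # in practice these feature vectors would come from a language model, as above

def score(features):
    predicted = encoder.predict(features.reshape(1, -1)).ravel()
    return np.corrcoef(predicted, observed)[0, 1]

best = max(candidates, key=lambda phrase: score(candidates[phrase]))
print("best-matching candidate:", best)
```

A real decoder would search over many candidate continuations proposed by the language model rather than comparing just two fixed phrases, but the scoring principle is the same.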
It decodes ideas, not exact words
The same participants were then scanned while listening to a new story, or imagining telling one, and the decoder was used to generate text from their brain activity alone.
About half the time, the text closely, and sometimes precisely, matched the meaning of the original words.
“Our system works at the level of ideas, of semantics, of meaning,” explains Alexander Huth, the University of Texas at Austin neuroscientist who led the work. “So what we get is not the exact words, but the gist.”
For example, when a participant was played the words “I don’t have a driver’s license yet,” the decoder translated them as “You haven’t started learning to drive yet.”
The team now hopes to assess whether the technique could be applied to other more portable brain imaging systems, such as functional near-infrared spectroscopy (fNIRS).