Scientists Use GPT AI to Passively Read People's Thoughts in Breakthrough

An AI model similar to ChatGPT was combined with fMRI readings to non-invasively decode continuous language from subjects, a new study reports.
Researchers used an fMRI machine to train a language decoder. Image: Nolan Zunk/University of Texas at Austin
ABSTRACT breaks down mind-bending scientific research, future tech, new discoveries, and major breakthroughs.

Scientists have invented a language decoder that can translate a person’s thoughts into text using an artificial intelligence (AI) transformer similar to ChatGPT, reports a new study. 

The breakthrough marks the first time that continuous language has been non-invasively reconstructed from human brain activity, which is read with a functional magnetic resonance imaging (fMRI) machine.

The decoder was able to interpret the gist of stories that human subjects watched or listened to—or even simply imagined—using fMRI brain patterns, an achievement that essentially allows it to read people’s minds with unprecedented efficacy. While this technology is still in its early stages, scientists hope it might one day help people with neurological conditions that affect speech to clearly communicate with the outside world.

However, the team that made the decoder also warned that brain-reading platforms could eventually have nefarious applications, including as a means of surveillance for governments and employers. Though the researchers emphasized that their decoder requires the cooperation of human subjects to work, they argued that “brain–computer interfaces should respect mental privacy,” according to a study published on Monday in Nature Neuroscience.

“Currently, language-decoding is done using implanted devices that require neurosurgery, and our study is the first to decode continuous language, meaning more than full words or sentences, from non-invasive brain recordings, which we collect using functional MRI,” said Jerry Tang, a graduate student in computer science at the University of Texas at Austin who led the study, in a press briefing held last Thursday.

“The goal of language-decoding is to take recordings of a user's brain activity and predict the words that the user was hearing or saying or imagining,” he noted. “Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes, or diseases like ALS.”

Tang and his colleagues were able to produce their decoder with the help of three human participants who each spent 16 hours in an fMRI machine listening to stories. The researchers trained an AI model, referred to in the study as GPT-1, on Reddit comments and autobiographical stories in order to link the semantic features of the recorded stories with the neural activity captured in the fMRI data. This way, it could learn which words and phrases were associated with certain brain patterns.
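
The study’s actual code isn’t reproduced here, but the general recipe it describes is an “encoding model”: a regression that learns to predict each brain location’s fMRI response from language-model features of the words a participant is hearing. The sketch below illustrates that idea with synthetic data and scikit-learn’s ridge regression; every dimension and variable name is an illustrative assumption, not a detail from the paper.

```python
# Minimal sketch of the "encoding model" idea: learn to predict each
# voxel's fMRI response from language-model features of the words a
# subject is hearing. All data below is synthetic and every dimension
# is an illustrative assumption, not a detail from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 1000   # fMRI volumes recorded while stories play
n_features = 768      # size of a GPT-style feature vector per timepoint
n_voxels = 5000       # brain locations measured by the scanner

# Stand-ins for (1) language-model features of the training stories,
# aligned to scan times, and (2) the BOLD responses recorded at each voxel.
story_features = rng.standard_normal((n_timepoints, n_features))
bold_responses = rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression learns, per voxel, how strongly each semantic feature
# drives that voxel's activity -- the "which phrases go with which brain
# patterns" mapping described above.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(story_features, bold_responses)

# Given features for new text, the fitted model predicts the brain
# activity that text should evoke -- the quantity a decoder can later
# compare against real recordings.
predicted_bold = encoding_model.predict(story_features[:10])
print(predicted_bold.shape)  # (10, 5000)
```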

Once that phase of the experiment was complete, the participants had their brains scanned in an fMRI machine while they listened to new stories that were not part of the training dataset. The decoder was able to translate the audio narratives into text as the participants heard them, though these interpretations often used different semantic constructions than the original recordings. For instance, a recording of a speaker saying “I don’t have my driver’s license yet” was decoded from the listener’s brain activity as “She has not even started to learn to drive yet.”
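
To turn brain activity back into words, the study describes running the mapping in reverse: a language model proposes candidate word sequences, and the candidates whose predicted brain responses best match the recorded activity are kept—which is why the output preserves the gist rather than the exact wording. The toy sketch below shows only the scoring mechanics; the embedding function, the throwaway model, and the candidate sentences are stand-ins, not the paper’s code.

```python
# Toy sketch of the decoding step: candidate wordings proposed by a
# language model are ranked by how well the brain activity they are
# predicted to evoke matches the activity actually recorded. Everything
# here (the embed() stand-in, the throwaway ridge model, the candidates)
# is illustrative, not the paper's code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N_FEATURES, N_VOXELS = 768, 200

def embed(text: str) -> np.ndarray:
    """Stand-in for GPT features: a deterministic pseudo-embedding."""
    seed = sum(ord(c) for c in text) % (2**32)
    return np.random.default_rng(seed).standard_normal(N_FEATURES)

# A throwaway encoding model fit on random data, just so .predict() works.
encoding_model = Ridge(alpha=1.0).fit(
    rng.standard_normal((50, N_FEATURES)),
    rng.standard_normal((50, N_VOXELS)),
)

def score(text: str, observed_bold: np.ndarray) -> float:
    """Correlate the response predicted for `text` with the recorded one."""
    predicted = encoding_model.predict(embed(text).reshape(1, -1))[0]
    return float(np.corrcoef(predicted, observed_bold)[0, 1])

# Pretend the scanner recorded the response evoked by the heard sentence.
heard = "I don't have my driver's license yet"
observed = encoding_model.predict(embed(heard).reshape(1, -1))[0]

# With real GPT features, a paraphrase of the heard sentence would score
# highest; with these random stand-ins the ranking only demonstrates the
# mechanics of the comparison.
candidates = [
    "She has not even started to learn to drive yet",
    "The stock market fell sharply on Tuesday",
]
for c in candidates:
    print(f"{score(c, observed):+.3f}  {c}")
```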

These rough translations emerge from a key difference between the new decoder and existing techniques that rely on electrodes implanted in the brain. The electrode-based platforms typically predict text from motor activity, such as the movements of a person’s mouth as they try to speak, whereas Tang’s team focused on the flow of blood through the brain, which is what fMRI machines capture.
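
Blood flow is also a much slower signal than speech: the BOLD response that fMRI measures unfolds over roughly ten seconds, so several words’ worth of neural activity blur into each measurement—one reason word-for-word transcription is out of reach while slower-moving meaning survives. A minimal illustration, assuming a common double-gamma response shape rather than anything taken from the study:

```python
# Minimal sketch (not from the paper) of why fMRI blurs fast events: the
# blood-flow response unfolds over ~10 seconds, so words arriving a second
# apart merge into one slow bump. Uses a common double-gamma approximation
# of the hemodynamic response; all numbers are illustrative.
import numpy as np
from scipy.stats import gamma

dt = 0.5                          # seconds per sample
t = np.arange(0, 30, dt)          # 30-second window

# Double-gamma hemodynamic response: a peak near 5 s, then an undershoot.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.max()

# Two "words" spoken one second apart, modeled as neural impulses.
events = np.zeros_like(t)
events[0] = 1.0
events[int(1.0 / dt)] = 1.0

bold = np.convolve(events, hrf)[: len(t)]
print(f"merged BOLD peak at ~{t[bold.argmax()]:.1f} s")  # one bump, not two
```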

“Our system works at a very different level,” said Alexander Huth, an assistant professor of neuroscience and computer science at UT Austin and senior author of the new study, in the briefing. “Instead of looking at this low-level motor thing, our system really works at the level of ideas, of semantics, and of meaning. That’s what it’s getting at.”

“This is the reason why I think what we get out is not the exact words that somebody heard or spoke, it’s the gist,” he continued. “It’s the same idea but expressed in different words.”

The novel approach allowed the team to push the limits of mind-reading technologies by testing whether the decoder could translate the thoughts of participants as they watched silent movies, or simply imagined stories in their heads. In both cases, the decoder was able to decipher what the participants were seeing as they watched the movies and what they were thinking as they played out brief stories in their imaginations.

The decoder produced more accurate results in the tests with audio recordings than with imagined speech, but it was still able to glean some basic details of unspoken thoughts from brain activity alone. For instance, when a subject envisioned the sentence “went on a dirt road through a field of wheat and over a stream and by some log buildings,” the decoder produced text that said “he had to walk across a bridge to the other side and a very large building in the distance.”

The participants in the study ran through all these tests while inside an fMRI machine, which is a clunky and immobile piece of laboratory equipment. For this reason, the decoder is not yet practical as a treatment for patients with speech conditions, though Tang and his colleagues hope that future iterations could be adapted to more portable platforms, such as functional near-infrared spectroscopy (fNIRS) sensors that can be worn on a patient’s head.

While the researchers hinted at the promise of this technology as a new means of communication, they also cautioned that decoders raise ethical concerns about mental privacy. 

“Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder,” Tang’s team said in the study. “However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes.” 

“For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and enact policies that protect each person’s mental privacy,” the researchers concluded.