How’s this for surveillance? Researchers at Cal Berkeley have figured out how to decode a person’s brain waves in order to understand what they’re hearing. Sounds nuts, sure, but it’s got a huge medical upside: Mapping how the brain processes sound may one day unlock a way for people who physically can’t speak to project their thoughts.

In a report published in PLoS Biology, the Berkeley team describes how it recorded electrical activity in the brain’s temporal lobe, the home of the auditory system, as a person listened to a conversation. Using the correlation between the signals and the known conversation, the team was eventually able to translate brain signals and predict what words a person had heard solely from their brain activity.

“This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak,” co-author and UC Berkeley professor of neuroscience Robert Knight said. “If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”

The current work expands on similar research done on animals, notably ferrets. That research advanced to the point that scientists at the University of Maryland could predict which words the ferrets had heard read aloud, even though the ferrets didn’t understand what the words meant. In other words, the UM researchers were able to decode the ferrets’ auditory signals as if they’d spliced an extra wire into a microphone.

“At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound,” first author Brian Pasley said. “The big question is, What is the most meaningful unit of speech? A syllable, a phone, a phoneme?
We can test these hypotheses using the data we get from these recordings.”

The Berkeley group enlisted 15 volunteers with epilepsy who were already undergoing electrode testing aimed at finding the brain sites that trigger their seizures. Each test subject had up to 256 electrodes covering the temporal lobe. Pasley visited each patient to record their brain activity as they listened to five to 10 minutes of conversation.

He tested a pair of computational models that pair known spoken words with brain waves to serve as a cipher. He tested the ciphers by having a patient hear a single word, which Pasley then predicted from the brain recording alone. The results weren’t perfect, but they were successful enough to support the theory. The question is whether the technique can be refined to the point that it can record (and project) someone’s thoughts, which would be a major neuroscience breakthrough.

“This research is based on sounds a person actually hears, but to use it for reconstructing imagined conversations, these principles would have to apply to someone’s internal verbalizations,” Pasley said. “There is some evidence that hearing the sound and imagining the sound activate similar areas of the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.”

As Knight notes, it has already been shown that neuroprosthetics can be controlled with implanted electrodes. He also makes clear that there’s a flip side: Controlling movement with electrical pulses is a far simpler task than deciphering something as complicated as language based solely on a person’s imagined conversation. But work like this shows that the potential is there.
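The general decoding recipe described above (pair known audio with recorded brain activity, fit a model, then predict what was heard from activity alone) can be sketched as a toy simulation. Everything here is hypothetical: the "electrodes," the word templates, and the ridge-regression decoder are stand-ins to illustrate the stimulus-reconstruction idea, not the study's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "word" is a small spectrogram template
# (time bins x frequency bands), and each electrode records a noisy
# linear mixture of those sound features.
n_freq, n_elec = 8, 32
words = {w: rng.random((20, n_freq)) for w in ["hello", "water", "doctor"]}

# Unknown "true" mapping from sound features to electrode activity.
W_true = rng.normal(size=(n_freq, n_elec))

def brain_response(spec, noise=0.2):
    """Simulated electrode recordings evoked by a spectrogram."""
    return spec @ W_true + noise * rng.normal(size=(spec.shape[0], n_elec))

# --- Training: pair known spoken words with recorded activity ---
X_train = np.vstack([brain_response(s) for s in words.values()])
Y_train = np.vstack(list(words.values()))

# Ridge regression decoder: map electrode activity back to spectrogram.
lam = 1.0
W_dec = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_elec), X_train.T @ Y_train
)

# --- Testing: the subject hears one word; decode it from activity alone ---
heard = "water"
recon = brain_response(words[heard]) @ W_dec

def corr(a, b):
    # Similarity between reconstructed and candidate spectrograms.
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

guess = max(words, key=lambda w: corr(recon, words[w]))
print(guess)
```

The test mirrors the single-word prediction described in the article: the decoder never sees the test word's audio, only the simulated brain response, and picks the candidate template whose spectrogram best matches the reconstruction.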