A study conducted by researchers at the University of California, Berkeley, and published in PLoS Biology describes breakthrough research on how neuroscientists may one day be able to understand the thoughts of patients without actually hearing them speak. This could be incredibly helpful when treating patients who are unable to speak because of stroke or paralysis, or possibly even patients in a coma.

Brian N. Pasley, a post-doctoral researcher and first author of the study, says:

“This research is based on sounds a person actually hears, but to use it for reconstructing imagined conversations, these principles would have to apply to someone’s internal verbalizations.

There is some evidence that hearing the sound and imagining the sound activate similar areas of the brain. If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.”

The scientists involved in the study have figured out how to decode electrical activity in a patient’s temporal lobe, where the brain’s auditory system sits, as the patient listens to conversations around them. This parallel between heard speech and brain activity makes it possible for the researchers to determine which words the patients had heard from the temporal lobe activity alone.

Robert Knight, co-author of the study and a UC Berkeley professor of psychology and neuroscience, comments:

“This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak. If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit. The research is also telling us a lot about how the brain in normal people represents and processes speech sounds.”

For part of the study, the researchers examined 15 volunteers who were undergoing brain surgery to locate the source of intractable seizures, so that the affected area could be removed in a second surgery. Neurosurgeons normally cut a hole in the skull and place electrodes directly on the brain surface, or cortex. For this study, they placed 256 electrodes over the temporal lobe, gathering data for seven days while analyzing the seizures.

Electrodes spread over the brain’s temporal lobe, where sounds are processed, as seen in an X-ray CT scan (UC Berkeley)

During the investigation, Pasley returned to the hospital to examine the brain activity the electrodes recorded while the patients listened to 5-10 minutes of conversation. He used this information to reconstruct the sounds and play them back. This is possible because the brain breaks sounds down into their component acoustic frequencies.
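That frequency decomposition can be illustrated with a short-time Fourier transform, which splits an audio signal into the acoustic frequency bands the auditory cortex is thought to track. The sketch below is purely illustrative; the signal, sampling rate, and window length are assumptions, not details from the study.

```python
import numpy as np
from scipy.signal import stft

# Illustrative only: decompose an audio signal into its component
# acoustic frequencies over time (a spectrogram).
fs = 16000                           # assumed sampling rate in Hz
t = np.arange(fs) / fs               # one second of audio
audio = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone standing in for speech

# Short-time Fourier transform: frequency content in each time window.
freqs, times, Z = stft(audio, fs=fs, nperseg=512)
spectrogram = np.abs(Z)              # power at each frequency band and time step
print(spectrogram.shape)             # (frequency bins, time frames)
```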

He said:

“We are looking at which cortical sites are increasing activity at particular acoustic frequencies, and from that, we map back to the sound.”

Pasley tested two computational models to find how the spoken sounds related to the patterns of brain activity in the electrodes. He then had the patients listen to a single word and used the computational models to predict the word from the electrode recordings.
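One common way to frame a decoding model of this kind, consistent with the description above, is a regularized linear regression that predicts the power in each acoustic frequency band of the heard sound from the electrode activity. The following is a minimal sketch under that assumption, using random stand-in data; it is not the study’s actual model.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Minimal sketch, assuming a linear decoding model: predict the power in
# each acoustic frequency band of the heard sound from the activity of
# the 256 electrodes. All shapes and values here are illustrative.
n_frames, n_electrodes, n_freq_bins = 5000, 256, 32

rng = np.random.default_rng(0)
brain = rng.standard_normal((n_frames, n_electrodes))  # electrode activity per time frame
spec = rng.standard_normal((n_frames, n_freq_bins))    # spectrogram of the heard speech

# Fit on the recorded conversation, then reconstruct held-out sound.
model = Ridge(alpha=1.0).fit(brain[:4000], spec[:4000])
reconstructed = model.predict(brain[4000:])            # estimated spectrogram frames
```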

Pasley said this type of research is similar to how a skilled pianist knows the sound of each key well enough that, by watching the keys another pianist is touching, he can picture or “hear” the music in his head without actually hearing it.

He determined that the method that worked best could reproduce a sound so similar to the original that the team could correctly identify the word.
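A hypothetical way to carry out that final identification step, assuming the reconstruction is a spectrogram, is to correlate it against a small library of candidate word spectrograms and pick the best match. The function and candidate words below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical illustration: identify a heard word by correlating its
# reconstructed spectrogram against candidate templates.
def identify_word(reconstructed, candidates):
    """candidates maps each word to a template spectrogram of the same shape."""
    scores = {
        word: np.corrcoef(reconstructed.ravel(), template.ravel())[0, 1]
        for word, template in candidates.items()
    }
    return max(scores, key=scores.get)

# Toy demo with random templates; one candidate is a noisy copy of the target.
rng = np.random.default_rng(1)
target = rng.standard_normal((32, 50))
candidates = {"jazz": rng.standard_normal((32, 50)),
              "waldo": target + 0.1 * rng.standard_normal((32, 50))}
print(identify_word(target, candidates))  # prints "waldo"
```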

He states:

“We think we would be more accurate with an hour of listening and recording and then repeating the word many times.”

However, he decided to test the models using only a single word, because any machine put to this use would have to be 100% accurate.

Knight added:

“This research is a major step toward understanding what features of speech are represented in the human brain. Brian’s (Pasley’s) analysis can reproduce the sound the patient heard, and you can actually recognize the word, although not at a perfect level.”

Knight concludes:

“With neuroprosthetics, people have shown that it’s possible to control movement with brain activity. But that work, while not easy, is relatively simple compared to reconstructing language. This experiment takes that earlier work to a whole new level.”

“At some point, the brain has to extract away all that auditory information and just map it onto a word, since we can understand speech and words regardless of how they sound. The big question is, What is the most meaningful unit of speech? A syllable, a phone, a phoneme? We can test these hypotheses using the data we get from these recordings.”

He believes this breakthrough study could lead to many other significant medical advances, such as decoding imagined internal verbalizations, because research shows that when people are asked to imagine speaking a word, parts of the brain behave as if they were actually “speaking” the word.

PLoS Biology Podcast Episode 2: Decoding speech from the human brain

Written by Christine Kearney