A new study suggests that when we are able to predict what a speaker is going to say, our brain activity becomes similar to the brain activity of the speaker.

Previously, it has been thought that the human brain processes the surrounding world from the “bottom up.” This means that when we hear someone speak, the auditory cortex first processes the sounds, which are then assembled into words, sentences and larger units by other areas of the brain, allowing us to understand the content of what is being said.

But this “bottom-up” view of language comprehension has been challenged in more recent research by a proposed “top-down” perspective.

In this way of understanding how our brains interpret speech, neuroscientists explain that the brain is a “prediction machine,” constantly anticipating events around us so that we can respond quickly and accurately.

An example of this is being able to predict words and sounds based on context. When we hear the words “Grass is…,” suggest the researchers behind the new study, we can easily predict the concluding word: “green.”

In their study – published in the Journal of Neuroscience – the researchers wanted to investigate how this predictive interaction between listeners and speakers is processed in the brain.

To do this, one participant was shown a series of images and asked to describe what she saw. Another group of participants then listened to her descriptions while viewing the same images.

The researchers monitored all of the participants’ brain activity using functional magnetic resonance imaging (fMRI) while this was happening.

Some of the images shown to the participants and described by the speaker were easy to predict because they invited one specific description. One image used in the study, for example, showed a penguin hugging a star.

But other images were designed to elicit more ambiguous, less easily predicted descriptions. For example, an image of a guitar stirring a bicycle tire in a pot of boiling water could be described in several ways. Is it:

  • “A guitar cooking a tire”
  • “A guitar boiling a wheel,” or
  • “A guitar stirring a bike?”

The researchers found that, when comparing the speaker’s and listeners’ brain responses, patterns of activity became more similar when the listeners were able to predict what the speaker was going to say.

They suggest that when a listener can predict what a speaker is going to say, the listener’s brain sends a signal to the auditory cortex, alerting it to expect sound patterns that correspond to the predicted words.

The interesting part is that the speaker’s brain does the same thing. The researchers explain that activity in the speaker’s auditory cortex reflects how predictable their speech will be for the listeners.

“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, PhD, the study’s lead author and a post-doctoral researcher at New York University’s Department of Psychology and Utrecht University in the Netherlands.

“Crucially,” she adds, “this happens even before a sentence is spoken and heard.”

This was a small study, involving only 10 participants (nine listeners and one speaker), so the results will need to be replicated in larger studies, although Dr. Dikker does not anticipate that this will be a problem.

“Of course one should always be cautious about drawing definitive conclusions based on a single neuroimaging study,” she told Medical News Today. “In our case especially, since so little is known about the neural basis of language production. The finding that listeners are sensitive to predictability has previously been reported in a number of studies including our own, so those effects are pretty robust and we would expect them to continue to replicate in future studies.”

In further research on the subject, Dr. Dikker’s team will use portable electroencephalography (EEG) rather than fMRI to record participants’ brain activity, allowing them to better examine the timing of the synchronized activity between speakers and listeners.

The team will also expand their research to look at bilingual subjects. Dr. Dikker tells us that this will provide information on “how correlation between individuals changes throughout the brain under different listening conditions, for example when trying to ignore distracting information.”

She adds:

“The interplay between prediction and synchronization in brain activity raises all sorts of interesting possibilities for examining what happens when comprehension depends on ‘top-down’ information because the ‘bottom-up’ information is obscured or unavailable.”

In February, Medical News Today reported on another piece of research examining how the brain interprets sounds as speech. This study, published in Science, focused on a region of the brain called the superior temporal gyrus, and how it “tunes” to speech sounds.