New research has zoomed in on the brain’s speech recognition abilities, uncovering the mechanism through which the brain distinguishes between ambiguous sounds.

The brain deploys fascinating mechanisms to make out sounds.

“Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae.”

You, like many others, were probably able to read the above sentence without a problem, which explains the meme’s mass online appeal more than a decade ago.

Psycholinguists explain that the meme’s claim is itself false; the exact mechanisms behind the brain’s visual “autocorrect” feature remain unclear.

Rather than the first and last letters being key to the brain’s ability to recognize misspelled words, the researchers explain, context might be of greater importance in visual word recognition.

New research, now published in the Journal of Neuroscience, looks into the analogous mechanisms that the brain deploys to “autocorrect” and recognize spoken words.

Researcher Laura Gwilliams — from the Department of Psychology at New York University (NYU) in New York City and the Neuroscience of Language Lab at NYU Abu Dhabi — is the first author of the paper.

Prof. Alec Marantz, of NYU’s departments of Linguistics and Psychology, is the principal investigator of the research.

Gwilliams and team looked at how the brain untangles ambiguous sounds. For instance, the phrase “a planned meal” sounds very similar to “a bland meal,” but the brain somehow manages to tell the difference between the two, depending on the context.

The researchers wanted to see what happens in the brain after it hears that initial sound as either a “b” or a “p.” The new study is the first to show how speech comprehension takes place after the brain detects the first sound.

Gwilliams and colleagues carried out a series of experiments in which 50 participants listened to isolated syllables and entire words that sounded very similar. They used a technique called magnetoencephalography (MEG) to map the participants’ brain activity.

The study revealed that a brain area known as the primary auditory cortex picks up the ambiguity of a sound just 50 milliseconds after its onset. Then, as the rest of the word unfolds, the brain “re-evokes” sounds that it had previously stored while re-evaluating the new sound.

After around half a second, the brain decides how to interpret the sound. “What is interesting,” explains Gwilliams, “is the fact that [the] context can occur after the sounds being interpreted and still be used to alter how the sound is perceived.”

“[A]n ambiguous initial sound,” continues Prof. Marantz, “such as ‘b’ and ‘p,’ is heard one way or another depending on if it occurs in the word ‘parakeet’ or ‘barricade.’”

“This happens without conscious awareness of the ambiguity, even though the disambiguating information doesn’t come until the middle of the third syllable,” he says.

“Specifically,” notes Gwilliams, “we found that the auditory system actively maintains the acoustic signal in [the] auditory cortex, while concurrently making guesses about the identity of the words being said.”

“Such a processing strategy,” she adds, “allows the content of the message to be accessed quickly, while also permitting re-analysis of the acoustic signal to minimize hearing mistakes.”

“What a person thinks they hear does not always match the actual signals that reach the ear,” says Gwilliams.

“This is because, our results suggest, the brain re-evaluates the interpretation of a speech sound at the moment that each subsequent speech sound is heard in order to update interpretations as necessary.”

“Remarkably, our hearing can be affected by context occurring up to one second later, without the listener ever being aware of this altered perception.”

Laura Gwilliams