Researchers at the University of California San Francisco show in a new study – published in the journal Science – that the shaping of sound by our mouths leaves “an acoustic trail” that the brain follows.

Scientists have known for some time that the superior temporal gyrus (STG; also known as "Wernicke's area") is where speech sounds are interpreted. But not much has been known about how the brain actually processes speech.

To investigate this, the University of California San Francisco (UCSF) researchers placed neural recording devices directly onto the surface of the brains of six patients who were undergoing epilepsy surgery. This allowed the researchers to capture very rapid changes in the brain.

This was one of the most advanced studies of the brain's interpretation of speech. Previous studies had only been able to analyze neural responses to a handful of natural or synthesized speech sounds, but the speed of the technology used by the UCSF team allowed them to present every kind of speech sound in the English language, multiple times.

The researchers collected data from the STGs of the patients as they listened to 500 unique English sentences spoken by 400 different people.

What the researchers expected was to see the patients’ brains respond to “phonemes.” Phonemes are the individual sound segments that make up language – the researchers give the example of the b sound in “boy.”

Instead, the researchers found that the brain was "tuned" to something even more elemental – what linguists call "features." Features are the distinctive "acoustic signatures" produced when we move our lips, tongue or vocal cords.

One type of feature is the "plosive" – it occurs when, to make a certain speech sound, the speaker uses the lips or tongue to obstruct air flowing from the lungs, causing a brief burst of air. Examples of plosives are the consonants p, t, k, b and d.

Another type of feature is the "fricative" – this sound is produced when the airway is only partially obstructed, creating friction in the vocal tract. S, z and v are examples of fricatives.
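To picture how several phonemes can share a single feature, the toy sketch below groups a few English consonants by the two feature classes described above. The phoneme-to-feature assignments follow standard phonetics; the code is purely illustrative and is not drawn from the study itself.

```python
# Illustrative only: a toy grouping of some English consonants by the
# feature classes mentioned above (plosive vs. fricative).
FEATURES = {
    "p": "plosive", "t": "plosive", "k": "plosive",
    "b": "plosive", "d": "plosive",
    "s": "fricative", "z": "fricative", "v": "fricative",
}

def group_by_feature(phonemes):
    """Group phonemes by their shared feature class."""
    groups = {}
    for ph in phonemes:
        groups.setdefault(FEATURES.get(ph, "other"), []).append(ph)
    return groups

print(group_by_feature(["b", "d", "s", "z", "p"]))
# {'plosive': ['b', 'd', 'p'], 'fricative': ['s', 'z']}
```

In this rough analogy, the study suggests the STG responds to the shared feature class (the keys) rather than to each individual phoneme (the values).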

Analyzing the data from the patients’ brains, the researchers saw the STGs of the patients “light up” as the participants heard the different speech features. The team found that the brain recognized the “turbulence” created by a fricative, or the “acoustic pattern” of a plosive, rather than individual phonemes such as b or z.

The researchers compare this system for interpreting the “shapes” of sounds to the way the brain recognizes visual objects using edges and shapes. The visual system allows us to identify known objects regardless of the perspective from which we are viewing them, so the researchers think it makes sense that the brain would apply a similar algorithm to understanding sound.

The study’s senior author, Dr. Edward F. Chang, says:

“It’s the conjunctions of responses in combination that give you the higher idea of a phoneme as a complete object. By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table.”

The UCSF team hopes their findings will contribute to work around reading disorders. In a reading disorder, printed words are inaccurately mapped by the brain onto speech sounds.

But the team thinks that the findings are significant in their own right. “This is a very intriguing glimpse into speech processing,” Chang says. “The brain regions where speech is processed in the brain had been identified, but no one has really known how that processing happens.”

Recently, Medical News Today reported on a study that found speech uses both sides of the brain – previously scientists thought just one half of the brain was used for speech and language.