Researchers at the University of California San Francisco show in a new study - published in the journal Science - that the shaping of sound by our mouths leaves "an acoustic trail" that the brain follows.
Scientists have known for some time that speech sounds are interpreted in the superior temporal gyrus (STG; also known as "Wernicke's area"). But not much has been known about how the brain actually processes speech.
To investigate this, the University of California San Francisco (UCSF) researchers placed neural recording devices directly onto the surface of the brains of six patients who were undergoing epilepsy surgery. This allowed the researchers to capture very rapid changes in the brain.
This was one of the most advanced studies of the brain's interpretation of speech. Previous studies had only been able to analyze neural responses to a handful of natural or synthesized speech sounds, but the speed of the technology used by the UCSF team allowed them to use every kind of speech sound in the English language, multiple times.
The researchers collected data from the STGs of the patients as they listened to 500 unique English sentences spoken by 400 different people.
What the researchers expected was to see the patients' brains respond to "phonemes." Phonemes are the individual sound segments that make up language - the researchers give the example of the b sound in "boy."
Instead, the researchers found that the brain was "tuned" to an even simpler function of language - something linguists call "features." Features are distinctive "acoustic signatures" that the human body makes when we move our lips, tongue or vocal cords.
One type of feature is the "plosive" - plosives occur when, to make a certain speech sound, the speaker uses the lips or tongue to obstruct air flowing from the lungs, causing a brief burst of air. Examples of plosives are the consonants p, t, k, b and d.
Another type of feature is the "fricative" - these sounds occur when the airway is only partially obstructed, causing friction in the vocal tract. S, z and v are examples of fricatives.
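The grouping described above - individual phonemes bucketed into articulatory feature classes - can be sketched as a simple lookup. This is purely an illustrative toy in Python, not the UCSF team's analysis code; the feature labels and consonant groupings follow standard phonetics:

```python
# Toy mapping from a few English consonant phonemes to the
# articulatory feature classes described above (illustrative only).
FEATURES = {
    "plosive": {"p", "t", "k", "b", "d", "g"},      # airflow briefly blocked
    "fricative": {"s", "z", "f", "v", "sh", "th"},  # airflow partially obstructed
}

def feature_class(phoneme: str) -> str:
    """Return the feature class a phoneme belongs to, or 'other'."""
    for feature, phonemes in FEATURES.items():
        if phoneme in phonemes:
            return feature
    return "other"

print(feature_class("b"))  # plosive
print(feature_class("z"))  # fricative
```

In this picture, the study's finding is that the STG responds at the level of the dictionary keys (the feature classes), not the individual set members (the phonemes).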
Analyzing the data from the patients' brains, the researchers saw the STGs of the patients "light up" as the participants heard the different speech features. The team found that the brain recognized the "turbulence" created by a fricative, or the "acoustic pattern" of a plosive, rather than individual phonemes such as b or z.
The researchers compare this system for interpreting the "shapes" of sounds to the way the brain recognizes visual objects using edges and shapes. The visual system allows us to identify known objects regardless of the perspective from which we are viewing them, so the researchers think it makes sense that the brain would apply a similar algorithm to understanding sound.
The study's senior author, Dr. Edward F. Chang, says:
"It's the conjunctions of responses in combination that give you the higher idea of a phoneme as a complete object. By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table."
The UCSF team hopes their findings will contribute to work around reading disorders. In a reading disorder, printed words are inaccurately mapped by the brain onto speech sounds.
But the team thinks that the findings are significant in their own right. "This is a very intriguing glimpse into speech processing," Chang says. "The brain regions where speech is processed in the brain had been identified, but no one has really known how that processing happens."
Recently, Medical News Today reported on a study that found speech uses both sides of the brain - previously scientists thought just one half of the brain was used for speech and language.
Written by David McNamee
Copyright: Medical News Today
Not to be reproduced without the permission of Medical News Today.
Phonetic Feature Encoding in Human Superior Temporal Gyrus, Edward F. Chang, et al., Science, DOI: 10.1126/science.1245994, published online 30 January 2014, Abstract.
UCSF news release