New Model For Speech And Sound Recognition
Main Category: Hearing / Deafness
Article Date: 19 Sep 2011 - 0:00 PDT
People are adept at recognizing sensations such as sounds or smells, even when many stimuli appear simultaneously. But how the brain associates a current stimulus with stored memories is still poorly understood. Scientists at the Bernstein Center and the Ludwig-Maximilians-Universität (LMU) München have developed a mathematical model that accurately mimics this process with little computational effort and may explain experimental findings that have so far remained unclear. (PLoS ONE, September 14, 2011)
The so-called "cocktail party problem" has kept scientists busy for decades: how does the brain filter familiar voices out of background noise? A long-standing hypothesis holds that, over the course of our lives, we build a kind of sound library in the auditory cortex. Professor Christian Leibold and Dr. Gonzalo Otazu at LMU Munich, both members of the Bernstein Center Munich, now show in a new model how the brain can compare stored and perceived sounds particularly efficiently. Figuratively speaking, current models operate on the following principle: an archivist (possibly the thalamus) compares the incoming sound with each entry in the library and receives a matching score for every entry. Usually, however, several entries fit similarly well, so the archivist cannot tell which result is actually the right one.
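The "archivist" scheme described above can be sketched as plain template matching. The following toy code is purely illustrative (the library contents, dimensions, and variable names are assumptions, not taken from the paper): it scores an incoming sound against every stored template and shows that a single score per entry is all the archivist gets back.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sound library: each row is one stored template
# (sizes are illustrative; the paper's library has 400 sounds).
library = rng.standard_normal((400, 64))
library /= np.linalg.norm(library, axis=1, keepdims=True)

# An incoming sound that mixes two stored templates plus a little noise.
sound = library[10] + library[42] + 0.05 * rng.standard_normal(64)

# Classic matching: one similarity score per library entry.
scores = library @ sound

# Several entries can score similarly well, so the archivist only
# sees a ranking, not which entries actually produced the sound.
best = np.argsort(scores)[::-1][:5]
```

In this scheme the full vector of 400 scores must be handed back every time, regardless of whether the sound is familiar or novel, which is exactly the inefficiency the new model addresses.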
The new model is different: the archivist still compares the sound with the library entries, but this time gets back only a few truly relevant records, together with information about how much the archived and heard elements differ. Large amounts of data are therefore sent back only for unknown or poorly matching inputs. "Perhaps this is also one reason why we can ignore known sounds better than new ones," speculates Leibold, head of the study. In a test, the model readily detected the sounds of a violin and a grasshopper playing simultaneously among 400 sounds with overlapping frequency spectra. Its computational and memory requirements were also significantly smaller than those of comparable models. For the first time, a library-based model allows a highly efficient real-time implementation, a prerequisite for realization in brain circuits.
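The idea of returning only a few relevant entries plus a mismatch signal can be sketched with a greedy sparse decomposition in the spirit of matching pursuit. This is an assumption for illustration only, not the algorithm from the paper: a familiar sound is explained by a handful of library entries and leaves a small residual, while a novel sound leaves a large residual, i.e. more information has to be sent back.

```python
import numpy as np

def sparse_explain(sound, library, n_iter=5):
    """Greedily explain `sound` with a few library entries
    (matching-pursuit sketch); returns the chosen entry indices
    and the unexplained residual."""
    residual = sound.copy()
    chosen = []
    for _ in range(n_iter):
        scores = library @ residual
        k = int(np.argmax(np.abs(scores)))     # best-matching entry
        chosen.append(k)
        residual = residual - scores[k] * library[k]  # remove explained part
    return chosen, residual

rng = np.random.default_rng(1)
library = rng.standard_normal((400, 64))
library /= np.linalg.norm(library, axis=1, keepdims=True)

# A "known" sound: a mixture of two stored entries.
known = library[10] + library[42]
entries, residual = sparse_explain(known, library)

# A "novel" sound absent from the library leaves a much larger
# residual, so much more data would be fed back.
novel = rng.standard_normal(64)
_, novel_residual = sparse_explain(novel, library)
```

The feedback here is just the short list `entries` and the residual, rather than one score per library entry, which mirrors the efficiency argument in the article.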
Experiments showed long ago that a great deal of information flows from the cerebrum back to the thalamus, so far without a universally accepted explanation. The new model predicts exactly this flow of information. "We quickly knew that our model works. But why and how, we had to find out laboriously," Leibold says. Abstract mathematical models of neurobiological processes have the advantage that all contributing factors are known, so one can show whether a model works well across a broad, biologically relevant range of applications, as in this case. The researchers now want to incorporate their findings into more biologically detailed models and finally test them in psychoacoustic experiments. (Faber/Bernstein Coordination Site)
Reference: Otazu G, Leibold C. PLoS ONE, 12 September 2011. Retrieved 21 May 2013 from <http://www.medicalnewstoday.com/releases/234529.php>