Audiovisual integration of speech falters under competing demands for attention
One classical example of how vision and audition come together is speech perception. Although we tend to think of speech as a purely auditory process, it is surprisingly sensitive to visual influences. This becomes evident when we try to follow a conversation in a noisy place: listeners tend to look at the talker's lip movements, especially as we grow older and our hearing declines. Classical experiments revealed that if a heard "ba" syllable is dubbed onto a talker seen to be saying "ga," the observer often "hears" "da" -- a sound that has characteristics of both the heard and the seen speech but differs from either. This illusion, first reported by Harry McGurk in 1976, is so powerful that observers usually do not realize what has happened until they look away from the talker -- at which point the illusion breaks down and the true auditory event ("ba") is heard.
The currently accepted view on the McGurk illusion and similar multisensory-integration phenomena is that so-called binding processes in the brain occur pre-attentively; that is, they occur automatically and unavoidably as long as the perceiver has access to both input channels. In the new study, the researchers tested this "automaticity" hypothesis directly by making observers perform a difficult, attention-demanding secondary perceptual task while showing them the McGurk illusion. The researchers found that under these conditions, the ability to integrate visual and auditory speech was severely reduced -- even when the talker was clearly visible and audible, and regardless of whether the secondary task was visual or auditory.
This finding challenges previous claims that multisensory integration (and, therefore, its potential benefits in perception) occurs without attention. In practical terms, although audiovisual speech helps us follow conversations when the auditory input is degraded, the results suggest that we do not do this as effortlessly as was previously believed. The findings imply that some attentional resources are needed for cross-sensory binding to occur.
The members of the research team include Agnès Alsius, Jordi Navarra, and Salvador Soto-Faraco of Universitat de Barcelona, and Ruth Campbell of University College London. This research was supported by grants from the James McDonnell Foundation and the Ministerio de Ciencia y Tecnología, and by a fellowship (Beca de Formació en la Recerca i la Docència) from the Universitat de Barcelona to A.A.
Alsius, A., Navarra, J., Campbell, R., and Soto-Faraco, S. (2005). Audiovisual Integration of Speech Falters under High Attention Demands. Curr. Biol. 15, 839-843. http://www.current-biology.com
Contact: Heidi Hardman