Researchers at the University of York have identified a part of the brain that responds to both facial and vocal expressions of emotion.

They used the magnetoencephalography (MEG) scanner at the York Neuroimaging Centre to measure responses in a region of the brain known as the posterior superior temporal sulcus.

The research team from the University's Department of Psychology and York Neuroimaging Centre found that the posterior superior temporal sulcus responds so strongly to a face and a voice presented together that it clearly has a 'multimodal' rather than an exclusively visual function. The research is published in the latest issue of Proceedings of the National Academy of Sciences (PNAS).

Test participants were shown photographs of people with fearful and neutral facial expressions, and were played fearful and neutral vocal sounds, separately and together. Responses in the posterior superior temporal sulcus were substantially heightened when participants could both see and hear fearful faces and voices, but not when the faces and voices they saw and heard were neutral.

Researchers believe that the finding could help in the study of autism and other neurodevelopmental disorders that involve deficits in face perception.

Lead researcher Dr Cindy Hagan said: "Previous models of face perception suggested that this region of the brain responds to the face alone, but we demonstrated a supra-additive response to emotional faces and voices presented together - the response was greater than the sum of the parts."
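To make the 'greater than the sum of the parts' point concrete, the supra-additivity criterion can be sketched as a simple inequality (a schematic illustration of the idea, not the exact statistical contrast reported in the paper), where R denotes the measured response in the posterior superior temporal sulcus:

R(face + voice) > R(face alone) + R(voice alone)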

Professor Andy Young added: "This is important because emotions in everyday life are often intrinsically multimodal - expressed through face, posture and voice at the same time."

The research involved tests on 19 people using the York Neuroimaging Centre's £1.1 million MEG scanner, which provides a non-invasive way of mapping the magnetic fields created by electrical activity in the brain.

Source: David Garner
University of York