A new study from Canada shows that our skin helps us hear speech by sensing the puffs of air that speakers produce with certain sounds. The study is the first to show that when we are in conversation with another person we don’t just hear the sounds they make with our ears and use our eyes to interpret facial expressions and other cues (a fact that is already well researched), but that we also use our skin to “perceive” their speech.

The study is the work of Professor Bryan Gick of the Department of Linguistics at the University of British Columbia in Vancouver, Canada, and PhD student Donald Derrick. A paper on their work was published in Nature on 26 November.

Gick and Derrick found that directing puffs of air at the skin can bias a hearer’s perception of spoken syllables.

Gick, who is also a member of Haskins Laboratories, an affiliate of Yale University in the US, told the media that their findings suggest:

“We are much better at using tactile information than was previously thought.”

We are already aware of using our eyes to help us interpret speech, such as when we lip-read or observe facial features and gestures.

“Our study shows we can do the same with our skin, ‘hearing’ a puff of air, regardless of whether it got to our brains through our ears or our skin,” explained Gick.

Languages like English rely on certain syllables being aspirated; that is, the speaker uses tiny, subtly differentiated bursts of breath to make the sound. We aspirate sounds like “pa” and “ta” but not sounds like “ba” and “da”, and it is this burst of breath that helps us tell “pa” from “ba” and “ta” from “da”.

For the study, Gick and Derrick recruited 66 men and women and asked them to distinguish among four syllables while inaudible puffs of air (simulating aspiration) were directed at the skin of their right hand or neck. Altogether each participant heard eight repetitions of the syllables.

The results showed that when the participants heard syllables accompanied by air puffs, they were more likely to perceive them as aspirated: for instance, they heard “ba” as “pa” and “da” as “ta”.

In their Nature paper, Gick and Derrick wrote that other studies have examined the influence of “tactile input” but only under limited conditions, such as when perceivers were aware of the task or “where they had received training to establish a cross-modal mapping”.

This study is unique, they wrote, because it shows “that perceivers integrate naturalistic tactile information during auditory speech perception without previous training”.

They concluded that:

“These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information.”

Gick and Derrick hope their findings will inform new developments in telecommunications, speech science and hearing aid technology.

Future studies could look at how audio, visual and tactile information interact, paving the way to a completely new approach to “multi-sensory speech perception”.

They could also examine which kinds of speech sounds are affected by air flow, giving us more insight into how we interact with our physical environment.

“Aero-tactile integration in speech perception.”
Bryan Gick and Donald Derrick.
Nature 462, 502–504 (26 November 2009)
DOI: 10.1038/nature08572

Additional source: University of British Columbia.

Written by: Catharine Paddock, PhD