Using a new type of “vocal signature” technology that analyzes sound patterns rather than words in child vocalizations and baby talk, researchers in the US say they have shown proof of principle that it is possible to screen for autism spectrum disorders in young children. They also hope the new method will greatly enhance the study of language development, because it collects day-long recordings in children’s natural environments and automatically analyzes the sound patterns far more cheaply and efficiently than traditional labor-intensive methods.

The research behind these findings was led by Dr D. Kimbrough Oller, professor and chair of excellence in audiology and speech-language pathology at the University of Memphis; a report on it was published online in the Proceedings of the National Academy of Sciences on 19 July.

Oller and colleagues said that LENA (Language Environment Analysis) was 86 per cent accurate in identifying very young children with autism.

The system can also distinguish among typically developing children, children with autism, and children with language delay, they said.
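One simple way to picture this kind of three-way classification is comparing a child's vector of acoustic-parameter scores against an average profile for each group and picking the closest match. The sketch below uses a nearest-centroid rule with invented numbers; it is only an illustration of the idea, not LENA's actual model, which the paper describes as a large-scale statistical analysis of selected acoustic parameters.

```python
import numpy as np

# Hypothetical per-group average profiles over three acoustic parameters.
# All values are invented for illustration.
groups = ["typical", "autism", "delay"]
centroids = {
    "typical": np.array([0.9, 0.8, 0.7]),
    "autism":  np.array([0.4, 0.3, 0.5]),
    "delay":   np.array([0.7, 0.5, 0.6]),
}

def classify(profile):
    """Assign a child's parameter profile to the nearest group centroid."""
    return min(groups, key=lambda g: np.linalg.norm(profile - centroids[g]))

child = np.array([0.85, 0.75, 0.65])  # closest to the "typical" profile
print(classify(child))                # -> typical
```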

LENA relies on new “automatic acoustic analysis” technology to enable “large-scale statistical analysis of strategically selected acoustic parameters” in the sound recordings.

In this study, Oller and colleagues used LENA to analyze 1,486 all-day recordings made by 232 children (amounting to more than 3 million automatically identified utterances).

LENA consists essentially of a digital language processor and analysis software. At the heart of the software is an algorithm that Oller and colleagues developed themselves, which recognizes 12 acoustic parameters, or distinct sound patterns, associated with vocal development.
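To give a flavor of what an automatically computed acoustic parameter might look like, the sketch below estimates a syllable-like rate from an audio signal by counting peaks in its short-time energy envelope. This is a deliberately simplified stand-in, not the algorithm LENA uses; the signal, frame length, and threshold are all invented for the example.

```python
import numpy as np

def syllable_rate(signal, sr, frame_ms=25, thresh_ratio=0.5):
    """Estimate a syllable-like rate by counting rising threshold
    crossings of the short-time energy envelope. Purely illustrative."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    # short-time energy per frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    above = np.concatenate(([False], energy > thresh_ratio * energy.max()))
    # each rising crossing marks a candidate syllable onset
    onsets = int(np.sum(~above[:-1] & above[1:]))
    return onsets / (n * frame / sr)

# Synthetic "babble": four 125 ms tone bursts in one second of audio
sr = 8000
burst = np.sin(2 * np.pi * 300 * np.arange(sr // 8) / sr)
sig = np.zeros(sr)
for k in range(4):
    start = k * sr // 4
    sig[start:start + len(burst)] = burst
print(syllable_rate(sig, sr))  # -> 4.0 syllable-like onsets per second
```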

The processor fits in a pocket of specially designed children’s clothing and records the child’s speech and other sounds all day long, reliably distinguishing vocalizations (i.e. when the child talks or tries to talk) from cries, other voices and background noise.

The participant children came from families whose parents responded to an advertisement asking for volunteers to take part in the study. When they signed up, the parents indicated whether their children had been diagnosed with autism or language delay. The project also offered parents of children with language delay or autism further independent evaluations by speech-language clinicians who were not linked to the research; the parents sent in the clinician’s reports to the researchers.

The all-day recordings started in 2006 and took place in natural home environments when the parents switched on the recorders that were already in the pockets of the children’s specially made clothing.

Oller and colleagues found that the most important of the 12 acoustic parameters turned out to be ones that targeted the ability of children to produce well-formed syllables (syllabification), by moving the jaw and tongue rapidly during vocalization. Babies start doing this when they are a few months old, and improve the skill as they get older and learn to talk.

The researchers found that the sound samples from the autistic children showed little evidence of development of syllabification, in that the relevant acoustic parameters did not change much as the children got older (from 1 year to 4 years). This compared with statistically significant development with age of all 12 parameters for both typically developing children and those with language delays.
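The group comparison described above amounts to asking whether a parameter's value trends upward with age. A minimal sketch of that kind of check, on synthetic data invented for the example (not the study's data), fits a least-squares slope of one parameter against age for each group: a clearly positive slope indicates development, a near-zero slope indicates little change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: one acoustic parameter sampled for
# children aged 1-4 years in two hypothetical groups.
ages = rng.uniform(1, 4, 200)
typical = 0.3 * ages + rng.normal(0, 0.1, 200)        # grows with age
autism = np.full(200, 0.4) + rng.normal(0, 0.1, 200)  # roughly flat

def slope(x, y):
    """Least-squares slope of y regressed on x."""
    return np.polyfit(x, y, 1)[0]

print(f"typical slope: {slope(ages, typical):+.2f}")  # close to +0.30
print(f"autism  slope: {slope(ages, autism):+.2f}")   # close to  0.00
```

A real analysis would of course test the slopes for statistical significance rather than eyeball them, as the researchers did across all 12 parameters.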

Oller and colleagues concluded that this was a proof of concept that a method like LENA that uses automatic analysis of huge samples of vocalization recordings can make a valuable contribution to the field of vocal development research.

Scientists have been studying the presence and absence of speech aberrations in children with autism spectrum disorders for over 20 years, but to date, standard criteria for their diagnosis have excluded vocal characteristics, said co-author Dr Steven F. Warren, professor of applied behavioral science and vice provost for research at the University of Kansas.

Warren, among the first scientists to see the potential of a method like LENA for autism screening, told the press:

“A small number of studies had previously suggested that children with autism have a markedly different vocal signature, but until now, we have been held back from using this knowledge in clinical applications by the lack of measurement technology.”

In their background information, Warren, Oller and colleagues described the traditional, laborious way that scientists study vocal development and its role in language, and compared it with the automatic, fast process of a system like LENA. In the traditional method, human transcribers and analysts code and take measurements from small recorded samples, whereas LENA is completely automated, “with no human intervention”, they wrote, “allowing efficient sampling and analysis at unprecedented scales”.

Warren said that a tool like LENA, which collects and analyzes vast quantities of data relatively cheaply, will make a huge impact in the fields of language research and behavioral science, and also in areas like screening, assessment and treatment of autism.

Also, because the technology analyzes sound patterns and not words, in theory it could screen for autism in any speaker, regardless of the languages they speak.

“The physics of human speech are the same in all people as far as we know,” said Warren.

Currently in the US, the median age of diagnosis of autism spectrum disorder (ASD) in children is 5.7 years; Warren suggested this kind of technology could bring it down to 18 months.

“This technology could help pediatricians screen children for ASD to determine if a referral to a specialist for a full diagnosis is required and get those children into earlier and more effective treatments,” he added.

LENA could also be used as a way to help parents to supplement language enrichment therapy at home, said Warren. For example, they could use it to assess for themselves how well their interventions were working, he explained.

“Automated vocal analysis of naturalistic recordings from children with autism, language delay, and typical development.”
D. K. Oller, P. Niyogi, S. Gray, J. A. Richards, J. Gilkerson, D. Xu, U. Yapanel, and S. F. Warren.
Proceedings of the National Academy of Sciences, Published online before print 19 July 2010.
DOI: 10.1073/pnas.1003882107

Additional source: University of Kansas.

Written by: Catharine Paddock, PhD