Research could help restore speech from thoughts for patients with severe paralysis.

A new speech synthesizer converts vocal tract movements into intelligible speech. In a study published in PLOS Computational Biology, this "articulatory-based" synthesizer was developed using deep learning algorithms that map articulatory movements to an intelligible speech audio signal. In the future, this research could help build a brain-computer interface that restores speech to individuals with severe paralysis.
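
At its core, this kind of synthesizer is a learned mapping from articulator positions to acoustic features that a vocoder can render as audio. Below is a minimal, hypothetical sketch of such a mapping in PyTorch; the network architecture, feature dimensions, and activation choices are illustrative assumptions, not the configuration used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a feedforward network mapping one frame of articulator
# positions (e.g., coordinates of tongue, lip, jaw, and velum sensors) to one
# frame of acoustic features (e.g., mel-cepstral coefficients) that a vocoder
# could turn into audio. All dimensions and layer sizes are assumed.

N_ARTICULATORY = 18   # e.g., x/y coordinates of 9 articulator sensors (assumed)
N_ACOUSTIC = 25       # e.g., 25 spectral coefficients per frame (assumed)

class ArticulatoryToAcoustic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ARTICULATORY, 256), nn.Tanh(),
            nn.Linear(256, 256), nn.Tanh(),
            nn.Linear(256, N_ACOUSTIC),
        )

    def forward(self, x):
        return self.net(x)

model = ArticulatoryToAcoustic()

# One frame of articulatory data in, one frame of acoustic features out.
frame = torch.randn(1, N_ARTICULATORY)
acoustic = model(frame)  # would be fed to a vocoder for audio synthesis
print(acoustic.shape)    # torch.Size([1, 25])
```

In practice, such a network would be trained on parallel recordings of articulatory trajectories and the corresponding speech audio, so that the learned mapping generalizes to new movements.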

Speaking activates a region of the cerebral cortex that controls the movements of the different organs of the vocal tract (called articulators), including the tongue, lips, jaw, velum, and larynx. In many speech-impaired people this cortical region remains intact, but the neural signals are interrupted on their path toward the muscles that move the articulators.

For these patients, it is thus possible to envision restoring speech by using their preserved cortical activity to control an artificial speech synthesizer in real time. The synthesizer, however, should be matched to the cortical region driving it. The hypothesis underlying this study is that the cortical region controlling the movements of the speech articulators will be better suited to controlling a synthesizer whose command signals explicitly describe those movements.

This synthesizer could then be adapted to run in real time with different subjects, allowing them to produce intelligible speech from their articulatory movements even while speaking silently. The study paves the way toward a brain-computer interface in which the synthesizer is controlled directly from the neural activity of the speech motor cortex, which could restore communication to people whose severe paralysis prevents them from speaking.
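
Conceptually, real-time operation means mapping each incoming frame of articulatory (or, eventually, neural) data to acoustic features and synthesizing audio with low latency. The sketch below illustrates that frame-by-frame loop; the placeholder model, stub I/O functions, feature dimensions, and frame rate are all assumptions for illustration, not the study's actual pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical real-time loop. A placeholder linear model and stub I/O
# functions stand in for the trained articulatory-to-acoustic network,
# sensor acquisition, and vocoder/audio output; all names, sizes, and
# the frame rate are assumed.

N_ARTICULATORY, N_ACOUSTIC = 18, 25            # assumed feature dimensions
model = nn.Linear(N_ARTICULATORY, N_ACOUSTIC)  # placeholder for a trained net

def capture_articulatory_frame() -> torch.Tensor:
    """Stub: would read one frame of articulator positions from sensors."""
    return torch.randn(1, N_ARTICULATORY)

def synthesize_and_play(acoustic: torch.Tensor) -> None:
    """Stub: would hand acoustic features to a vocoder and play the audio."""

with torch.no_grad():
    for _ in range(100):                  # e.g., one second at 100 frames/s
        frame = capture_articulatory_frame()
        acoustic = model(frame)           # articulatory -> acoustic mapping
        synthesize_and_play(acoustic)     # acoustic -> audible speech
```

Keeping each step of this loop fast is what would allow the synthesized voice to follow the speaker's movements without a perceptible delay.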

This study was conducted by Florent Bocquelet and Blaise Yvert at the BrainTech laboratory of the INSERM institute in Grenoble, and by Thomas Hueber, Laurent Girin and Christophe Savariaux from the Gipsa-Lab of CNRS. Both laboratories are also part of the University Grenoble Alpes.

This work was supported by the Fondation pour la Recherche Médicale (www.frm.org) under grant No. DBS20140930785. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Article: Bocquelet F, Hueber T, Girin L, Savariaux C, Yvert B. Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces. PLOS Computational Biology, doi:10.1371/journal.pcbi.1005119, published 23 November 2016.