US scientists have moved a step closer to developing a mind-reading machine: they wired a man’s brain to a computerized device that helped them determine, at a rate significantly better than chance, which brain signals represented which word he had read from a list.

The study is the work of a team based at the University of Utah in Salt Lake City, together with a researcher from the University of Washington in Seattle. A paper about their research was published online on 1 September in the Journal of Neural Engineering.

For the study, Dr Bradley Greger, an assistant professor of bioengineering at the University of Utah, and colleagues used a device that reads brain signals via two grids of 16 “non-penetrating” microelectrodes each, implanted beneath the skull and resting on the surface of the brain.

Although it is still early days, and most likely years before clinical trials can begin, the scientists suggest the device could be developed for long-term use to help paralyzed people who are fully conscious and aware but cannot speak, such as patients with ALS (amyotrophic lateral sclerosis) or those with damage to the brainstem (so-called “locked-in syndrome”), for instance after a stroke or a physical injury.

“We have been able to decode spoken words using only signals from the brain with a device that has promise for long-term use in paralyzed patients who cannot now speak,” Greger told the press.

Greger and colleagues describe how, with the help of a volunteer patient, they were able to place the two grids of 16 microelectrodes (spaced just 1 millimeter apart) directly over the speech centers of his brain. The volunteer was a man with severe epileptic seizures who had already had part of his skull removed so that doctors could fit electrodes to his brain and try to stop the seizures.

Using the microelectrodes and a computer, they were able to pick up and record brain signals by detecting changes in the local field potentials (LFPs) from the surface of the facial motor cortex (which controls the muscles of the mouth, lips, tongue and face used in speaking) and Wernicke’s area, which is thought to be involved in language comprehension.
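
To make “detecting changes in the local field potentials” concrete, here is a minimal Python sketch of one common way such recordings are turned into features a word decoder could use: spectral band power per electrode. The sampling rate, frequency bands and use of Welch’s method here are illustrative assumptions; the article does not describe the team’s actual signal processing.

```python
# A minimal, assumed feature-extraction sketch: band power per electrode.
# The study's actual signal processing is not described in this article.
import numpy as np
from scipy.signal import welch

FS = 1000  # assumed sampling rate in Hz (hypothetical)

def band_power_features(lfp_trials, bands=((8, 32), (76, 100))):
    """lfp_trials: array of shape (n_trials, n_electrodes, n_samples).

    Returns one log-power feature per electrode per frequency band.
    """
    features = []
    for trial in lfp_trials:
        # Welch's method estimates the power spectrum of each electrode
        freqs, psd = welch(trial, fs=FS, nperseg=256, axis=-1)
        trial_feats = []
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs <= hi)
            # average power within the band, per electrode, on a log scale
            trial_feats.append(np.log(psd[:, mask].mean(axis=-1)))
        features.append(np.concatenate(trial_feats))
    return np.asarray(features)  # shape: (n_trials, n_electrodes * n_bands)
```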

They then asked the patient to read 10 words over and over, and monitored the signals his speech centers made while he read. The words were chosen to be useful to a person who is paralyzed and cannot communicate: yes, no, hot, cold, more, less, hello, goodbye, thirsty and hungry.

After recording the brain signals, they found they could distinguish between any two of the words, for instance yes and no, 76 to 90 per cent of the time.

And when they considered all 10 brain patterns at once, they could pick out the correct word for each signal 28 to 48 per cent of the time. Although this might seem like a poor success rate, it is much higher than chance, which for a 10-word vocabulary would be 1 in 10, or 10 per cent.
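
To illustrate what picking out the correct word well above chance involves, the hedged sketch below trains a cross-validated classifier on synthetic features for 10 words and compares the result with the 10 per cent chance level. The choice of classifier (linear discriminant analysis from scikit-learn) and the synthetic data are assumptions; the article does not name the decoding method the team actually used.

```python
# Hedged illustration of 10-way word decoding versus chance, on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, trials_per_word, n_features = 10, 30, 32

# Synthetic features: each word gets its own slightly shifted mean pattern
# (real LFP features would be far noisier, hence the 28-48% reported above).
X = np.vstack([
    rng.normal(loc=rng.normal(0, 0.5, n_features),
               size=(trials_per_word, n_features))
    for _ in range(n_words)
])
y = np.repeat(np.arange(n_words), trials_per_word)

# Cross-validated accuracy of a simple linear classifier vs 1-in-10 chance
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.0%} (chance: {1 / n_words:.0%})")
```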

So while the results show that, as it currently stands, the method is not good enough to be of much use to patients (current methods, such as arduously picking out letters from a list, are painstaking but at least more reliable), it is a “proof of concept”, as Greger explained:

“We’ve proven these signals can tell you what the person is saying well above chance.”

“But we need to be able to do more words with more accuracy before it is something a patient really might find useful,” he added.

The non-penetrating electrodes the team used are a smaller version of the much larger electrodes used for electrocorticography (“ECoG” electrodes), which were developed about 50 years ago.

Patients with severe epileptic seizures that do not respond to medication can have an operation in which the surgeon removes part of their skull, places a silicone mat containing ECoG electrodes over the brain, and monitors its activity for a few days or weeks. During that time the removed piece of skull is held in place but not re-attached.

By analyzing the electrical activity, the surgeons can identify where the brain might be producing the abnormal signals that cause the seizures, and remove that part.

In a previous study published last year, Greger and his team recruited some of the epileptic patients having this operation for a small trial in which they inserted the much smaller microECoG electrodes and showed they could “read” brain signals that controlled arm movements. The volunteer in this more recent study was one of those patients.

Because the microECoG electrodes are much smaller than the more conventional ones, they can pick out much weaker signals, generated by only a few thousand neurons or nerve cells, and are thus better attuned to finding unique patterns of brain signals for each word.

Greger and colleagues also discovered something they were not expecting: when the patient read the words, the facial motor cortex was active but Wernicke’s area was less so. Yet when the researchers thanked him for reading the list, Wernicke’s area “lit up”. This appeared to confirm that the area is more involved in the higher-level processes of language comprehension.

The researchers found that when they used only the signals from the facial motor cortex, they could tell words apart 85 per cent of the time, but when they used only Wernicke’s area signals the accuracy was only 76 per cent. Using both together did not raise accuracy much, suggesting that Wernicke’s area signals add little information beyond what can already be gleaned from the facial motor cortex signals alone.
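
That comparison can be pictured as training the same decoder on three feature sets, one per electrode grid and one combined, and comparing cross-validated accuracies, as in this sketch. The feature layout and classifier here are assumptions, not the study’s actual pipeline.

```python
# Hedged sketch: does adding Wernicke's-area features to motor-cortex
# features actually raise decoding accuracy? (Assumed pipeline.)
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def compare_regions(X_motor, X_wernicke, y):
    """Each X_*: (n_trials, n_features) from one grid; y: word labels."""
    feature_sets = [
        ("facial motor cortex only", X_motor),
        ("Wernicke's area only", X_wernicke),
        ("both grids combined", np.hstack([X_motor, X_wernicke])),
    ]
    for name, X in feature_sets:
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        print(f"{name}: {acc:.0%}")
```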

They also found they could improve accuracy, both at telling pairs of words apart and at identifying individual words, by focusing on only a subset of five microelectrodes from the 16 in each grid, as sketched below.
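
One plausible way to pick such a subset is greedy forward selection, repeatedly adding whichever electrode most improves cross-validated accuracy. This strategy is an assumption for illustration; the article does not say how the team identified their best five electrodes.

```python
# Hedged sketch of choosing the 5 most informative electrodes out of 16
# by greedy forward selection (an assumed strategy, not the study's method).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def select_electrodes(X, y, n_electrodes=16, n_keep=5):
    """X: (n_trials, n_electrodes) with one feature per electrode; y: labels."""
    chosen, remaining = [], list(range(n_electrodes))
    while len(chosen) < n_keep:
        # score each candidate electrode added to the set chosen so far
        scores = {
            e: cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, chosen + [e]], y, cv=5).mean()
            for e in remaining
        }
        best = max(scores, key=scores.get)  # keep the biggest improvement
        chosen.append(best)
        remaining.remove(best)
    return chosen
```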

Greger said they are still far from solving the problem, but at least they have shown that the system works better than chance, and “we now need to refine it so that people with locked-in syndrome could really communicate”.

He said the obvious next step, which they have already started working on, is to use bigger grids of microelectrodes; they are currently developing an 11-by-11 grid.

“We can make the grid bigger, have more electrodes and get a tremendous amount of data out of the brain, which probably means more words and better accuracy,” he added.

Funds from the National Institutes of Health, the Defense Advanced Research Projects Agency, the University of Utah Research Foundation and the National Science Foundation paid for the study.

“Decoding spoken words using local field potentials recorded from the cortical surface.”
Spencer Kellis, Kai Miller, Kyle Thomson, Richard Brown, Paul House and Bradley Greger.
Journal of Neural Engineering, Volume 7, Number 5, published online 1 September 2010.
DOI: 10.1088/1741-2560/7/5/056007

Additional source: University of Utah.

Written by: Catharine Paddock, PhD