Facebook’s tech development team is currently working on a way for users to type with their minds, without the need for an invasive implant. Updating your status with thoughts alone may one day become a reality.

Brain-computer interfaces are entering a brave new era.

The social media company’s 60-strong team hopes to achieve this miraculous feat using optical imaging that scans the brain hundreds of times per second, detecting our silent internal dialogues and translating them into text on a screen.

They hope that, eventually, the technology will allow users to type at 100 words per minute – five times faster than typing on a phone.

If this innovation comes to pass, it will be fascinating for Facebook’s following. There will, however, be deeper and more profound ramifications for people who do not have full use of their limbs.

Brain-computer interfaces (BCIs) that allow users to type with their minds are already available, but they are either slow or require a sensor to be implanted in the brain. This procedure is expensive, risky, and not likely to be adopted by the population at large.

If so-called brain typing could be perfected without the need for intrusive implants, it would be a genuine game-changer with a whole host of applications.

The first steps toward developing a BCI came with Hans Berger’s demonstration that the human brain is electrically active. Each time an individual nerve cell sends a message, it is accompanied by a tiny electrical signal that passes from neuron to neuron.

This electrical signal can be picked up outside of the skull using an electroencephalogram (EEG). Berger was the first person to record human brain activity using an EEG, having achieved this feat almost a century ago, in 1924.

The term “brain-computer interface” was coined in the 1970s, in papers written by scientists from the University of California, Los Angeles. The research was led by Jacques Vidal, who is now considered the grandfather of BCI.

“Can these observable electrical brain signals be put to work as carriers of information in man-computer communication or for the purpose of controlling such external apparatus as prosthetic devices or spaceships?”

Jacques Vidal, “Toward direct brain-computer communication,” 1973

Of course, animal studies were the first port of call when investigating BCIs. Research in the late 1960s and early 1970s proved that monkeys could learn to control the firing rates of single neurons or groups of neurons in the primary motor cortex if they were given a reward. Similarly, using operant conditioning, dogs could be trained to control the rhythms in their hippocampus.

These early studies showed that the electrical output of the brain could be measured and manipulated. Over the past two decades, there has been a surge of interest in BCIs. There is still a long way to go, but there have been notable successes.

Among modern BCIs, the cream of the experimental crop is a recently designed system from Stanford University. Two aspirin-sized implants, inserted into an individual’s brain, chart the activity of the motor cortex – a region that controls muscles. Algorithms then interpret this activity and convert it into cursor movements on a screen.

In a recent study, one participant was able to type 39 characters (around eight words) per minute. “This study reports the highest speed and accuracy, by a factor of three, over what’s been shown before,” says Krishna Shenoy, one of the senior authors.
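To give a flavor of what that decoding step involves, the sketch below fits a simple linear decoder that maps hypothetical motor-cortex firing rates to a two-dimensional cursor velocity. The electrode count, the simulated data, and the least-squares fit are illustrative assumptions; the Stanford system’s actual decoding algorithms are considerably more sophisticated.

```python
# Minimal sketch of a linear neural decoder: map motor-cortex firing rates
# to a 2D cursor velocity. Illustrative only -- not the Stanford algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: firing rates from 96 electrodes over
# 1,000 time bins, paired with the cursor velocity the participant intended.
firing_rates = rng.poisson(lam=5.0, size=(1000, 96)).astype(float)
intended_velocity = rng.normal(size=(1000, 2))            # (vx, vy) per bin

# Fit decoding weights by ordinary least squares: velocity ~ rates @ W.
W, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

def decode_velocity(rates_now):
    """Convert one time bin of firing rates into a cursor velocity."""
    return rates_now @ W

# Drive a cursor with the decoded velocities, one 20-ms time step at a time.
cursor_position = np.zeros(2)
for t in range(100):
    cursor_position += decode_velocity(firing_rates[t]) * 0.02
```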

Broadly speaking, modern BCIs are split into three groups. These are:

  • Invasive BCIs: Implants are placed directly into the brain. Software is trained to interpret a subject’s brain activity. For instance, a computer cursor can be controlled by a participant’s thoughts of “left,” “right,” “up,” and “down.” With enough practice, a user can draw shapes on a screen, control a television, and open computer programs.

  • Semi-invasive BCIs: This type of device is implanted inside the skull but does not sit within the gray matter itself. Although less invasive than a fully implanted BCI, devices left under the skull for long periods tend to trigger the formation of scar tissue in the gray matter, which eventually blocks the signals and renders them unusable.

  • Noninvasive BCIs: These work on the same principle, but do not involve surgical implantation and have, therefore, received the most research.

Of the noninvasive BCIs, the most common are EEG-based BCIs. These read the electrical activity of the brain from outside of the body. However, because the skull scatters the electrical signals substantially, making them accurate is a real challenge. Added to this, they often require a fair amount of calibration before each use. That being said, there have been some significant steps forward over recent years.
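Before any interpretation is attempted, the raw scalp signal is typically filtered to suppress noise. The sketch below shows one common preprocessing step – a band-pass filter applied to a single EEG channel – with the sampling rate, band edges, and filter order chosen purely for illustration.

```python
# Minimal sketch of EEG preprocessing for a noninvasive BCI: band-pass
# filter one channel to keep the frequency range most decoders rely on.
# Sampling rate, band edges, and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256.0                                 # assumed sampling rate, in Hz

def bandpass(eeg, low=1.0, high=30.0):
    """Zero-phase band-pass filter of a single EEG channel."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg)

raw = np.random.randn(int(FS * 10))        # 10 seconds of stand-in EEG data
filtered = bandpass(raw)
```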

For instance, some researchers have recently investigated noninvasive BCIs as a way to help individuals with amyotrophic lateral sclerosis and brain stem stroke. These patients can become “locked in,” meaning that they lose the use of all voluntary muscles and, as such, have no way to communicate, despite being cognitively “normal.”

Their studies led them to conclude that “BCI use may be of benefit to those with locked-in syndrome.”

BCI technology is based on detecting electrical activity emanating from the brain and then converting it into an external action. However, amid the cacophony of neural noise, which signals should the system pay attention to?

There are a number of signal types that noninvasive BCIs use, the most popular of which is the P300 event-related potential.

An event-related potential is a measurable brain response to a particular stimulus – specifically, the P300 is produced during decision-making and it is usually elicited experimentally using the so-called oddball paradigm.

BCIs are based on converting brain activity into external action.

In the oddball paradigm, participants are presented with a range of symbols, flashed in front of their eyes one by one.

They are asked to look out for a specific symbol that occurs only rarely within the selection. When the target symbol is noticed by the participant, it triggers a P300 wave.

Over many trials, it is possible to distinguish the P300 from other electrical signals; it is easiest to observe emanating from the parietal lobe, a part of the brain responsible, in part, for integrating sensory information.
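The sketch below illustrates that averaging step: segments of EEG time-locked to each stimulus are cut out and averaged, so activity unrelated to the stimulus tends to cancel out. The sampling rate, epoch length, and stand-in data are assumptions for illustration only.

```python
# Minimal sketch of how averaging over many trials can reveal the P300.
import numpy as np

FS = 256                                   # assumed sampling rate, in Hz
EPOCH_SAMPLES = int(0.8 * FS)              # keep 800 ms after each stimulus

def average_epochs(eeg, stimulus_onsets):
    """Average EEG segments time-locked to the given stimulus sample indices."""
    epochs = [eeg[i:i + EPOCH_SAMPLES]
              for i in stimulus_onsets
              if i + EPOCH_SAMPLES <= len(eeg)]
    return np.mean(epochs, axis=0)

# Stand-in data: one minute of EEG with a stimulus flashed once per second.
eeg = np.random.randn(FS * 60)
onsets = list(range(0, FS * 59, FS))
erp = average_epochs(eeg, onsets)
# In real recordings with enough target trials, a positive deflection around
# 300 ms (roughly sample 0.3 * FS) stands out from the averaged background.
```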

Once an algorithm is trained to recognize an individual’s P300, it can, from then on, work out what they are looking for. For instance, if the user is typing a word and wishes to start with the letter “a,” then when that letter appears on the screen, a P300 will be generated by the brain, the software will recognize it, and the letter “a” will be typed.
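A minimal sketch of that selection step is shown below. It assumes a previously trained P300 detector – the `p300_score` function here is a crude stand-in, not a real library call – and simply picks the character whose flashes score highest on average.

```python
# Minimal sketch of the selection step in a P300 speller: every flash-locked
# epoch is scored by a (previously trained) P300 detector, and the character
# whose flashes score highest on average is the one that gets typed.
import numpy as np

FS = 256                                    # assumed sampling rate, in Hz

def p300_score(epoch):
    """Stand-in detector: higher means 'looks more like a P300'."""
    window = slice(int(0.25 * FS), int(0.40 * FS))   # roughly 250-400 ms
    return float(np.mean(epoch[window]))

def select_character(epochs_by_char):
    """Pick the character whose flash-locked epochs best match a P300."""
    scores = {char: np.mean([p300_score(e) for e in epochs])
              for char, epochs in epochs_by_char.items()}
    return max(scores, key=scores.get)

# Usage: epochs_by_char maps each flashed character, e.g. "a", to the list
# of EEG epochs recorded immediately after that character was flashed.
```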

Compared with other similar methods, P300s are relatively fast, require little training (hours rather than days), and are effective for most users.

However, there are still shortfalls. Because the system needs to pick up a user’s response to individual characters, it has to run through a list before it can find the right one. This means that there is a limit to how fast one can type.

There are ways to minimize this wait, but the time taken is still longer than researchers (and users) would like.
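A back-of-the-envelope calculation shows why. Using typical – but assumed – timings for a six-by-six row-and-column speller, the scan works out at only a few characters per minute:

```python
# Assumed, but typical, parameters for a 6x6 row/column P300 speller.
flash_s = 0.100          # each row or column is lit for 100 ms
gap_s = 0.075            # 75 ms pause between flashes
flashes_per_pass = 12    # 6 rows + 6 columns
repetitions = 10         # passes averaged per character for reliability

seconds_per_char = (flash_s + gap_s) * flashes_per_pass * repetitions
chars_per_minute = 60 / seconds_per_char
print(f"{seconds_per_char:.0f} s per character, "
      f"about {chars_per_minute:.1f} characters per minute")
```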

To make a system that can type tens of words per minute, a new step in the process will be needed – in fact, an entirely new approach will be necessary, and that is what Facebook is working on.

Medical News Today spoke with Dr. Michael M. Merzenich, chief scientific officer of Posit Science and co-inventor of the cochlear implant. We asked how Facebook’s researchers will bypass this speed issue, to which he responded, “Facebook has discussed using near-infrared (NIR) imaging technology.” With this technology, each word will be picked out in one go, rather than being spelled out letter by letter.

There are challenges ahead for the social media giant.

Of course, this comes with its own difficulties. Dr. Merzenich added:

“While it’s very easy to type ‘lion’ versus ‘tiger’ and be clear, it’s going to be quite a bit harder to have a noninvasive brain imaging technology detect minute differences in brain activity that may correspond to small differences in a category like that.”

“Thinking of the word ‘lion’ and the word ‘tiger’ activates extremely similar and overlapping networks of brain activity for most people.”

There is clearly a lot of work yet to do, but Dr. Merzenich is confident that it will be achieved eventually. He added:

“The best hope is to use modern AI [artificial intelligence] techniques – deep learning techniques – that will gradually learn to identify the patterns of brain activity for an individual person as meaning specific things.”

“In this way, I think it’s likely that people will individually train their brain-reading systems, and those systems will be individually attuned to them and not immediately transferable to another person. In fact, people using these systems will likely train their own brains to optimally produce readable signals to these systems. In this way, these systems represent another application of brain plasticity – the ability of the brain to change itself through training.”
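As a rough illustration of what “individually trained” might look like in practice, the sketch below fits a simple classifier to one user’s labeled EEG epochs. The linear discriminant model is a stand-in for the deep learning techniques Dr. Merzenich describes, and the data shapes and labels are assumptions.

```python
# Minimal sketch of per-user calibration: train a classifier on one person's
# own labeled EEG epochs so the system is tuned to their brain activity.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical calibration session for one user: 400 EEG epochs, each
# reduced to 200 features, labeled 1 when the flashed item was the one
# the user was attending to, and 0 otherwise.
epochs = rng.normal(size=(400, 200))
labels = rng.integers(0, 2, size=400)

user_model = LinearDiscriminantAnalysis().fit(epochs, labels)

def looks_like_target(new_epoch):
    """Probability that this epoch contains the user's attended-target response."""
    return float(user_model.predict_proba(new_epoch.reshape(1, -1))[0, 1])
```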

This may all be a long way off, but Facebook is committed; the company is combining its research power with a number of universities across the United States. The future looks bright for BCIs and, if they do achieve 100 words per minute, it will be a great leap for millions of people who are unable to communicate with ease.