Gaining access to the inner workings of a neuron in the living brain is such a painstaking and complex task that it is considered an art form, one that only a small number of laboratories worldwide can perform.

A neuron’s inner workings in the living brain provide a vast amount of useful information, including the cell’s patterns of electrical activity, its shape and even a profile of which genes are turned on at a particular moment.

In the May 6 issue of Nature Methods, researchers at MIT and Georgia Tech report a method that automates the process of finding and recording information from neurons in the living brain.

In a collaboration between the lab of Ed Boyden, associate professor of biological engineering and brain and cognitive sciences at MIT, and that of Craig Forest, assistant professor of mechanical engineering at Georgia Tech, the researchers demonstrated that a robotic arm guided by a cell-detecting computer algorithm could identify and record from neurons in the living mouse brain with greater accuracy and speed than a human experimenter.

This automated process means that long-sought information about living cells’ activities can be obtained without requiring months of training for researchers. The new technique could enable scientists to classify the thousands of different brain cell types, map how they are linked, and learn how diseased cells differ from normal cells.

Forest says:

“Our team has been interdisciplinary from the beginning, and this has enabled us to bring the principles of precision machine design to bear upon the study of the living brain.”

Forest’s graduate student Suhasa Kodandaramaiah, the lead author of the study, has spent the past two years as a visiting student at MIT.

According to Boyden, a member of MIT’s Media Lab and McGovern Institute for Brain Research, the method could be particularly useful in studying brain disorders, such as schizophrenia, Parkinson’s disease, autism and epilepsy.

He continues:

“In all these cases, a molecular description of a cell that is integrated with [its] electrical and circuit properties … has remained elusive. If we could really describe how diseases change molecules in specific cells within the living brain, it might enable better drug targets to be found.”

Boyden, Forest and Kodandaramaiah decided to automate a 30-year-old technique, known as whole-cell patch clamping, which requires a level of skill that usually takes a graduate student or postdoc several months to learn. The technique involves bringing a tiny hollow glass pipette into contact with the cell membrane of a neuron, then opening a small pore in the membrane to record the electrical activity within the cell.

Kodandaramaiah, who spent four months learning the manual patch-clamp technique, says:

“When I got reasonably good at it, I could sense that even though it is an art form, it can be reduced to a set of stereotyped tasks and decisions that could be executed by a robot.”

Kodandaramaiah and his team built a robotic arm that lowers a glass pipette into the brain of an anesthetized mouse with micrometer accuracy. As the arm moves, the pipette monitors electrical impedance, a measure of how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows freely and impedance is low; when the tip hits a cell, however, electricity cannot flow freely and the impedance rises. The pipette takes two-micrometer steps, measuring impedance 10 times per second, and once it detects a cell it stops instantly, preventing it from poking through the membrane.
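The detection logic described above lends itself to a simple sketch. The Python snippet below is a hypothetical illustration only, not the authors’ actual control software: the two-micrometer step and the 10-per-second sampling rate come from the description above, while the pipette interface (advance, measure_impedance, stop) and the impedance-jump threshold are assumptions made for the example.

```python
import time

STEP_UM = 2              # pipette advances in two-micrometer steps
SAMPLES_PER_SECOND = 10  # impedance is measured 10 times per second

def hunt_for_neuron(pipette, jump_threshold=0.1, max_depth_um=1000):
    """Lower the pipette step by step until a rise in impedance
    suggests the tip has contacted a cell membrane.
    (Hypothetical sketch; 'pipette' methods are assumed.)"""
    baseline = pipette.measure_impedance()
    depth = 0
    while depth < max_depth_um:
        pipette.advance(STEP_UM)               # take a two-micrometer step
        depth += STEP_UM
        time.sleep(1.0 / SAMPLES_PER_SECOND)   # sample at 10 Hz
        impedance = pipette.measure_impedance()
        # No cell nearby: electricity flows freely, impedance stays low.
        # A jump in impedance suggests the tip is pressing against a cell.
        if impedance > baseline * (1 + jump_threshold):
            pipette.stop()                     # halt immediately so the tip
            return depth                       # does not pierce the membrane
    return None                                # no cell found on this descent
```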

Boyden states:

“This is something a robot can do that a human can’t.”

Once the pipette has detected a cell, it applies suction to form a seal with the cell’s membrane. The electrode then breaks through the membrane to record the cell’s internal electrical activity.
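Continuing the same hypothetical sketch, the post-detection sequence of suction, sealing and break-in might look like the following; the method names and the stability check are again assumptions made for illustration, not published code.

```python
def establish_recording(pipette):
    """After a cell is detected: apply suction to seal onto the membrane,
    then break through to record intracellular electrical activity.
    (Hypothetical sketch; the article reports that roughly 40% of
    detected cells yield a successful connection.)"""
    pipette.apply_suction()            # gentle suction forms a seal on the membrane
    if not pipette.seal_is_stable():   # assumed check that the seal has formed
        return False                   # attempt failed; retract and try elsewhere
    pipette.break_in()                 # rupture a small patch of membrane
    return pipette.start_recording()   # record the cell's internal activity
```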

The robotic system detects cells with 90% accuracy and establishes a connection with the detected cells about 40% of the time. The new method can also be used to determine a cell’s shape by injecting a dye. The researchers are now exploring ways to extract a cell’s contents in order to read its genetic profile, and to scale up the number of electrodes so the system can record from multiple neurons at a time. They hypothesize that this could allow them to determine how different parts of the brain are connected.

They are also working with collaborators to start categorizing the thousands of neuron types in the brain. This “parts list” for the brain has most commonly been assembled by classifying neurons according to their shape; the new method would allow neurons to be further classified by their electrical activity and genetic profile.

Forest explains:

“If you really want to know what a neuron is, you can look at the shape, and you can look at how it fires. Then, if you pull out the genetic information, you can really know what’s going on. Now you know everything. That’s the whole picture.”

Boyden believes this is just the beginning of using robotics in neuroscience to study living animals; a robot like theirs could one day be used to infuse drugs at targeted points in the brain, or to deliver gene therapy vectors. He hopes the invention will inspire neuroscientists to pursue other kinds of robotic automation, for instance in optogenetics, which uses light to perturb targeted neural circuits and determine what role neurons play in brain functions.

Noting that neuroscience is one of the few areas of biology where robots have yet to make a big impact, Boyden concludes:

“The genome project was done by humans and a giant set of robots that would do all the genome sequencing. In directed evolution or in synthetic biology, robots do a lot of the molecular biology. In other parts of biology, robots are essential.”

Written By Petra Rattue