Scientists have found that even before they can talk, babies use sophisticated reasoning to make sense of the physical world around them, combining abstract principles with knowledge from observation to form surprisingly advanced expectations of how new situations will develop.

An international team of scientists developed a computer model of how babies reason that accurately predicts their surprise when objects don’t behave in the way they expect.

A paper on their latest work, co-led by Josh Tenenbaum of the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT) in the US, and Luca Bonatti of the Institució Catalana de Recerca i Estudis Avançats at the Universitat Pompeu Fabra in Barcelona, Spain, appeared online this week in the journal Science.

The team designed the computer model to follow the principle of “pure reasoning”: predicting what happens next based on what has already been observed. However, the model also contains an element that differentiates humans from other organisms: the ability, guided by abstract concepts, to form rational expectations about new situations never previously encountered.

They then tested the model by comparing its predictions with babies’ responses and found the two matched closely, leading them to conclude that babies reason in a similar way.

Tenenbaum, associate professor of cognitive science and computation at MIT, told the press:

“Real intelligence is about finding yourself in situations that you’ve never been in before but that have some abstract principles in common with your experience, and using that abstract knowledge to reason productively in the new situation.”

He and his colleagues are trying to “reverse engineer” how babies observe and think about the world around them by studying them at key stages in their first 2 years of life, including at 3, 6 and 12 months of age (the project has become known as the “3-6-12 project” and is part of a larger piece of MIT research using computers to simulate human intelligence).

From earlier work by Elizabeth Spelke, a professor of psychology at Harvard University, they knew that how long babies look at something is a good measure of their level of surprise: the more unexpected the event, the longer they watch.

Spelke also pioneered much of the work showing that babies have a grasp of abstract concepts about physical objects and how they behave. These include the ideas that physical objects can’t just appear and disappear, and that they have to move continuously through space to get from one place to another.

Tenenbaum and colleagues programmed these abstract principles into a computational model known as a “Bayesian ideal observer” and then ran many simulations of how objects might behave in given situations, giving the model the ability to make predictions based on both abstract rules and observational experience.
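As a rough illustration of what such a simulation-based observer might look like, here is a minimal Python sketch; the physics, function names and parameters are assumptions made up for this article, not the authors’ actual model.

    import math
    import random

    def exit_probabilities(distances, occlusion_time, n_samples=20000, mixing=1.0):
        # Toy Monte Carlo "ideal observer" (an illustrative assumption, not the
        # published model). During the occlusion each object wanders at random,
        # and the longer the scene stays hidden, the more the positions mix.
        # Whichever object ends up closest to the opening is taken to be the
        # one that slipped out in that simulated run.
        counts = [0] * len(distances)
        for _ in range(n_samples):
            wandered = [abs(d + random.gauss(0.0, mixing * math.sqrt(occlusion_time)))
                        for d in distances]
            counts[min(range(len(distances)), key=lambda i: wandered[i])] += 1
        return [c / n_samples for c in counts]

    def surprise(probability):
        # Surprise (in bits) of an observed outcome; looking time is assumed
        # to grow with this quantity.
        return -math.log2(max(probability, 1e-9))

The key qualitative behaviour is that after a brief occlusion the object nearest the opening almost always “wins”, whereas after a long one the positions are well mixed and the other objects become plausible candidates too.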

Using the model, they then made a set of predictions about how long babies would keep looking at particular animations of objects, in which the events shown were more or less consistent with the babies’ expectations based on acquired knowledge.

For example, in one experiment, 12-month-old babies observed an animation of four objects, three blue ones and one red one, bouncing in a container with an obvious visible opening.

After letting them watch the objects bouncing around for a while, the researchers covered the scene, and while it was covered, one of the objects would leave the container through the opening.

If the scene was covered for just under half a second, the babies showed surprise if the object that had left was one of those furthest from the opening.

If the scene was covered for longer (say 2 seconds), they were less surprised if the furthest one was missing when they saw the scene again, and were surprised only if the red one (the rarer object) was the one missing.

And in between these two extremes, both the distance from the exit and the number of objects mattered.

The experiment gave the researchers several variables to play with: they could vary the number of objects, their spatial positions (distances from the exit), and the time factor (for instance, how long the scene stayed covered).
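To show how those variables might combine, the toy sketch above (reusing its exit_probabilities and surprise functions) can be run for a brief and a long occlusion; the layout and numbers below are invented for illustration and are not the study’s actual parameters.

    # Hypothetical layout: object 0 is a blue object near the opening;
    # the red object and the other two blue objects start further away.
    distances = [0.5, 3.0, 3.5, 4.0]
    colours = ["blue", "blue", "blue", "red"]

    for occlusion in (0.04, 2.0):  # brief vs long occlusion, in seconds
        probs = exit_probabilities(distances, occlusion)
        p_red = sum(p for p, c in zip(probs, colours) if c == "red")
        print(f"occlusion {occlusion}s: P(nearest object left) = {probs[0]:.2f}, "
              f"P(red object left) = {p_red:.2f}, "
              f"surprise if red left = {surprise(p_red):.2f} bits")

In such a toy run, the surprise at a missing red object is much higher after the brief occlusion than after the long one, and after the long occlusion the probabilities spread more evenly across the objects, broadly echoing the pattern described above.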

When they ran the computer model, it accurately predicted how long the babies would look at the same events, across a dozen different scenarios combining the variables in different ways.

They concluded that:

“Infants’ looking times are consistent with a Bayesian ideal observer embodying abstract principles of object motion.”

“The model explains infants’ statistical expectations and classic qualitative findings about object cognition in younger babies, not originally viewed as probabilistic inferences,” they added.

In other words, the study suggests that babies reason by playing out possible scenarios in their minds and then, with the help of a few abstract principles, working out which one is the most likely.

Tenenbaum said that while this does not yet mean they have a “unified theory” of cognition, they are starting to describe mathematically some core aspects of cognition that had only been described intuitively until now.

Spelke said the findings may explain why human thinking develops so fast and is so flexible. She said so far no theory has managed to explain both these features: core knowledge systems tend to be limited and inflexible, and systems designed to learn anything tend to do so very slowly.

“The research described in this article is the first, I believe, to suggest how human infants’ learning could be both fast and flexible,” said Spelke.

Tenenbaum and colleagues now want to add other principles into the model.

“We think infants are much smarter, in a sense, than this model is,” he explained, and said they want to incorporate other physical principles, like gravity and friction.

Another area they wish to explore is how babies make sense of human behavior. Creating models in this area could help us better understand disorders like autism, said Tenenbaum.

“Pure Reasoning in 12-Month-Old Infants as Probabilistic Inference.”
Erno Téglás, Edward Vul, Vittorio Girotto, Michel Gonzalez, Joshua B. Tenenbaum, and Luca L. Bonatti
Science 27 May 2011: 1054-1059.
DOI:10.1126/science.1196404

Additional source: MIT News.

Written by: Catharine Paddock, PhD