- The “overfitted brain” hypothesis centers on the idea that as animals learn repetitive tasks, they run the risk of becoming hidebound, losing the ability to generalize what they have learned.
- This hypothesis, which was inspired by machine learning, proposes that the brain needs a dose of randomness every night in the form of dreams to restore its flexibility and generalization.
- Novels, films, and other art forms may act as “artificial dreams,” performing the same restorative function for the brain.
In the early months of 2020, as millions of people around the world went into isolation due to the burgeoning COVID-19 pandemic, many reported an increase in the vividness and frequency of their dreams.
The hashtag #pandemicdreams began to trend on Twitter as users shared their bizarre dreams.
According to Erik Hoel, Ph.D., a research assistant professor of neuroscience at Tufts University, in Medford, MA, the tedium of our lives under lockdown may have provoked our brains to dose themselves with bursts of random nighttime “noise.”
Dr. Hoel believes that the nervous systems of all kinds of animals, from nematode worms to humans, risk becoming “overfitted” to the information acquired during waking hours.
This means that while animals, including humans, may become very good at performing specific tasks, they fail to generalize what they have learned to other tasks.
To resolve this issue, Dr. Hoel reasons, dreams evolved in higher animals to inject flexibility into their brains’ models of the world.
Psychologists have found that if a person’s tasks during the day are narrow and repetitive, such as playing the game Tetris, they are more likely to have dreams related to these tasks.
This could explain why the unexciting and repetitive experience of life under lockdown has provoked a burst of dreaming in so many people. “Of course, it’s hypothetical, but it does provide an explanation,” Dr. Hoel told Medical News Today.
Dr. Hoel studies machine learning algorithms called deep neural networks, which can be trained to perform tasks such as translating text and recognizing particular features in pictures.
In a recent paper, Dr. Hoel notes that these networks face a familiar problem: they can become overfitted to their training data, meaning that they fail to generalize what they have learned to novel data.
Modelers often counter overfitting with “noise injections”: training the network on randomly perturbed or corrupted data, which restores flexibility to its operations.
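As an illustrative sketch of that idea (not code from Dr. Hoel’s paper), the snippet below fits a deliberately over-flexible model to a tiny dataset, once as-is and once with a simple input-jitter form of noise injection. The model, jitter scheme, and random seed are all assumptions made for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: 10 points drawn from a noisy linear rule.
X = np.linspace(0, 1, 10).reshape(-1, 1)
y = 2.0 * X[:, 0] + rng.normal(0, 0.05, size=10)

def fit_poly(X, y, degree):
    # Least-squares fit of a polynomial: a stand-in for any model
    # flexible enough to memorize a small dataset.
    return np.polyfit(X[:, 0], y, degree)

def noise_injection(X, y, copies=20, sigma=0.05, rng=rng):
    # "Noise injection": augment training data with jittered copies
    # of the inputs, discouraging memorization of exact points.
    Xs = [X] + [X + rng.normal(0, sigma, size=X.shape) for _ in range(copies)]
    ys = [y] * (copies + 1)
    return np.vstack(Xs), np.concatenate(ys)

# Overfitted: a degree-9 polynomial can pass through all 10 points.
w_overfit = fit_poly(X, y, 9)

# Same flexible model, trained on noise-injected data instead.
Xn, yn = noise_injection(X, y)
w_noisy = fit_poly(Xn, yn, 9)

# Evaluate both fits on held-out inputs between the training points.
X_test = np.linspace(0.05, 0.95, 50)
y_test = 2.0 * X_test
err_overfit = np.mean((np.polyval(w_overfit, X_test) - y_test) ** 2)
err_noisy = np.mean((np.polyval(w_noisy, X_test) - y_test) ** 2)
```

Because the degree-9 polynomial can memorize the 10 training points exactly, `err_overfit` measures how that memorization behaves between them, while `err_noisy` measures the same for the noise-injected fit.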
In his paper, Dr. Hoel argues that after a day’s learning experiences, the brain faces a similar problem of overfitting, which it solves in much the same way.
He speculates that dreams are “corrupted sensory inputs” — which the brain concocts from random, or “stochastic,” brain activity — that evolved to increase the generalizability of its internal models of reality.
“It is the very strangeness of dreams in their divergence from waking experience that gives them their biological function,” he writes. “Sleep loss, specifically dream loss, leads to an overfitted brain that can still memorize and learn but fails to generalize appropriately,” he adds.
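To make the “corrupted sensory inputs” idea concrete, here is a minimal, hypothetical sketch: it takes a stand-in “waking” input vector and corrupts it with random feature dropping plus Gaussian noise. The function name and corruption scheme are illustrative assumptions, not anything specified in Dr. Hoel’s paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def dream_like_corruption(waking_input, drop_prob=0.5, noise_scale=1.0, rng=rng):
    # Combine a "waking" sensory vector with stochastic activity:
    # randomly zero out features, then overlay Gaussian noise.
    # (Illustrative only; the paper does not prescribe a scheme.)
    mask = rng.random(waking_input.shape) > drop_prob
    noise = rng.normal(0.0, noise_scale, waking_input.shape)
    return waking_input * mask + noise

waking = np.ones(8)  # stand-in for a day's sensory input
dream = dream_like_corruption(waking)
```

The point is only that the output diverges from waking experience while still being derived from it, echoing the hypothesized role of random brain activity in dreaming.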
Dr. Hoel calls his idea the “overfitted brain” hypothesis.
To find out whether his hypothesis is correct, the researcher says, it should be possible for psychologists to design behavioral tests that differentiate between the ability to memorize new things and the ability to generalize that knowledge to other tasks.
They would use repetitive training tasks to induce overfitting in participants, and then measure the effects of sleep deprivation on their ability to remember and generalize.
Dreams may be so beneficial for efficient brain function, Dr. Hoel speculates, that humans have found ways to dream while awake.
Contrary to the prominent idea among psychologists that art forms such as fiction, painting, and music are evolutionary “cheesecake,” pleasurable but with no value for survival, Dr. Hoel believes that they prevent our brains from overfitting.
“The [overfitted brain hypothesis] suggests [that] fictions, and perhaps the arts in general, may actually have a deeper underlying cognitive utility, in the form of improving generalization and preventing overfitting, by acting as artificial dreams.”
“As a novelist myself,” Dr. Hoel told MNT, “it is nice to think that fictions, which are in a sense artificial dreams, may have cognitive utility by keeping us from fitting to the daily quotidian events of our lives too well.”
The idea that the brain becomes overfitted to its experiences during waking hours, and that dreams help it regain the ability to generalize, has deep roots in machine learning.
In 2014, the neuroscientist Prof. Karl Friston, from University College London, in the United Kingdom, and co-authors built on this concept, developing a theory that dreams are the brain’s way of minimizing the complexity of its models.
Prof. Friston views the brain as a machine for generating predictions about the world that make our every perception, thought, and action possible. On this view, sleep and dreaming give the brain an opportunity to simplify these predictive models by pruning away unnecessary complexity.
“This was recently extended to periods of waking reflection and an account of ‘aha!’ moments, when the simplicity of things becomes apparent,” he told MNT in an email. “We have even used complexity minimization to critique deep learning!”