Scientists Taught an AI to ‘Sleep’ So That It Doesn't Forget What It Learned, Like a Person

Researchers say counting sheep may be the best way for AIs to achieve lifelong learning.

Chief nourisher in life’s feast, all living beings need to sleep. Without it, humans can become forgetful, hallucinate, and even experience various physical and psychological problems. But new research published in the journal PLOS Computational Biology suggests that future AIs could benefit from getting some shut-eye too. 

Artificial neural networks can reach superhuman heights, but when it comes to sequential learning, or learning one new thing after another, they become, well, kind of like Finding Nemo's Dory. Unlike humans and animals, who can learn and apply knowledge continuously, these systems tend to achieve excellence at a new task only at the expense of their performance on a previous one.
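The effect is easy to reproduce even without a real neural network. Here is a toy sketch (not the model from the study): a single weight fit by gradient descent, trained first on one task and then on a second, incompatible one. The second round of training overwrites what the first one learned.

```python
# Toy illustration of catastrophic forgetting: a one-weight linear model
# y_hat = w * x, trained by gradient descent on task A, then on task B.

def train(w, pairs, lr=0.1, epochs=50):
    """Gradient descent on squared error for y_hat = w * x."""
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def loss(w, pairs):
    return sum((w * x - y) ** 2 for x, y in pairs) / len(pairs)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # consistent with w = 2
task_b = [(1.0, -1.0), (2.0, -2.0)]  # consistent with w = -1

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)      # near zero: task A is learned
w = train(w, task_b)
loss_a_after = loss(w, task_a)       # large: task A has been overwritten
print(loss_a_before, loss_a_after)
```

Training on task B drags the weight toward its own target, and nothing in plain gradient descent protects the old solution, which is exactly the failure mode the researchers set out to fix.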

“Once properly trained, it's very difficult to teach them [a] completely new task,” Pavel Sanda, co-author of the study and a research fellow at the Institute of Computer Science of the Czech Academy of Sciences, told Motherboard over email. “And if you succeed in training the new task, you end up damaging the old memory.”

Sanda explained that in the neuro world, this failure is called "catastrophic forgetting." It's an issue that can only be solved through "consolidation of memory," a process that transforms recent short-term memories into long-term ones and often occurs during REM sleep. This reorganization of memory may be a large part of why we need to sleep at all: if the process stops working, or is interrupted in some way, serious mental deficits can follow, Sanda said.

“You can see this phenomenon in very old people who can have very detailed memories from childhood, but have difficulties [remembering] what they had for lunch yesterday,” Sanda told Motherboard. But could teaching AIs to hit the hay help them improve their own foggy recollections? 

To some, the concept is promising. Since sleep is thought to boost learning by enabling the "spontaneous reactivation of previously learned memory patterns," the study notes that neuroscience-inspired artificial intelligence could be the next big thing. Building on previous work in memory plasticity and sleep modeling, Sanda's team used a neural network model to simulate sensory processing and reinforcement learning in an animal's brain, and then gave it two separate tasks to complete. In both tasks, the network learned to discriminate between being punished and being rewarded, enough so that eventually it could make decisions on its own. 

The team then tested whether the network would exhibit catastrophic forgetting, and it certainly did. Each training session on the second task damaged the network's knowledge of the first, and had the training continued, the original knowledge would have been slowly obliterated. But in a further experiment, the researchers found that making the network mimic biological sleep, by interspersing sleep phases between short bouts of training on the second task, allowed the AI to keep its ability to accomplish the first one. 
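The protective effect of interleaving can be sketched in the same toy setting. To be clear about the assumptions: the paper's sleep phases involve spontaneous reactivation in a spiking network, while this sketch swaps in explicit replay of the old task's examples, a much cruder stand-in that plays the same protective role. The two tasks here share a weight but have a joint solution, so the question is whether training finds it.

```python
# Sketch: sequential training forgets task A, while interleaving short
# task-B bouts with "sleep" phases (here, crude explicit replay of task A,
# standing in for the paper's spontaneous reactivation) preserves it.

def sgd_step(w, x, y, lr=0.1):
    """One gradient step on squared error for y_hat = w[0]*x[0] + w[1]*x[1]."""
    err = w[0] * x[0] + w[1] * x[1] - y
    return [w[0] - lr * 2 * err * x[0], w[1] - lr * 2 * err * x[1]]

def loss(w, pairs):
    return sum((w[0]*x[0] + w[1]*x[1] - y) ** 2 for x, y in pairs) / len(pairs)

task_a = [((1.0, 0.0), 1.0)]  # satisfied by any w with w[0] = 1
task_b = [((1.0, 1.0), 3.0)]  # satisfied by any w with w[0] + w[1] = 3

def train_sequential(steps=200):
    w = [0.0, 0.0]
    for _ in range(steps):                 # learn task A...
        for x, y in task_a:
            w = sgd_step(w, x, y)
    for _ in range(steps):                 # ...then task B overwrites it
        for x, y in task_b:
            w = sgd_step(w, x, y)
    return w

def train_with_sleep(steps=200):
    w = [0.0, 0.0]
    for _ in range(steps):                 # learn task A
        for x, y in task_a:
            w = sgd_step(w, x, y)
    for _ in range(steps):                 # short task-B bout...
        for x, y in task_b:
            w = sgd_step(w, x, y)
        for x, y in task_a:                # ..."sleep": reactivate task A
            w = sgd_step(w, x, y)
    return w

w_seq = train_sequential()
w_sleep = train_with_sleep()
print(loss(w_seq, task_a), loss(w_sleep, task_a))
```

Sequential training drifts to a weight vector that solves only task B, while the interleaved run settles on the joint solution (w = [1, 2] here) and keeps both tasks intact.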

“It's another nice demonstration that very simple principles can produce not so simple effects,” Sanda told Motherboard. “We used inspiration from real sleep, but the model is orders of magnitude simpler.” 

And that real-life inspiration goes a long way. After all, the human brain may be one of the most complex computers on Earth, even if it needs to rest and recharge sometimes. Why shouldn’t computers and machines be able to do the same?