
An AI Never Forgets (Until You Try to Teach It Something New)

Researchers are trying to beat the 'catastrophic forgetting' problem in digital brains.
Image: Flickr/Tim Green

While famous techies are blaring the warning siren about digital beings that will one day enslave us, the human brain still outmatches even the most complex computers. One reason for this is that brain-mimicking neural networks can't remember things like a human can. But new research is getting them closer.

A long-standing problem for artificial neural networks (virtual simulations of the brain's neurons) is how to design a system that can not only learn on its own, but also remember what it learned when it tries something new. When neural networks process an input, they make connections between their digital neurons to output a solution. When it comes time to learn something else, a neural network has to reconfigure its connections, effectively overwriting what it already learned.
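The effect is easy to reproduce. Here's a minimal sketch (a toy of my own, not the paper's model) in which a one-weight network is trained with gradient descent on task A, then retrained on task B; the second round of training drives the weight straight to task B's solution and wipes out task A in the process:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 50)
task_a = 2.0 * xs    # task A: learn y = 2x
task_b = -2.0 * xs   # task B: learn y = -2x

def train(w, xs, ys, lr=0.1, steps=200):
    """Plain gradient descent on squared error for the model y = w * x."""
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def error(w, xs, ys):
    return np.mean((w * xs - ys) ** 2)

w = train(0.0, xs, task_a)            # learn task A
err_a_before = error(w, xs, task_a)   # near zero: task A mastered
w = train(w, xs, task_b)              # now learn task B...
err_a_after = error(w, xs, task_a)    # ...and task A is gone
```

The single weight is the whole "memory," so there is nowhere for task A to hide once task B's gradients arrive; real networks have many weights, but shared ones suffer the same fate.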


This is called "catastrophic forgetting," and it puts a real damper on artificial intelligence that is able to respond to changing, real-world situations. Earlier solutions to this problem have involved tricks like plugging a neural network into an external memory that can store everything it's learned. But can a network learn to remember all on its own?

Since artificial neural networks are inspired by biology, it made sense for an international team of researchers based in France, Norway, and the US to look to the animal brain for a solution to catastrophic forgetting: modularity.

The idea is that animal brains are composed of many tight clusters—modules—of neurons that loosely connect to other clusters. By splitting a neural network into modules, it could learn more by dividing its resources.


In a paper published today in PLoS Computational Biology, the researchers describe how they applied the principles of animal evolution and modern theories of memory and learning to neural nets in order to produce modularity in a virtual brain.

The first bio-inspired idea is that memory in the human brain is thought to be distributed: there's no central memory bank from which you recall information. The researchers' first task was to reproduce this in neural networks by turning certain neurons off when processing one task and on when processing another, segmenting the network into modules.
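A toy version of this segmentation (my own sketch, not the paper's network) gives each task its own mask of active units. Because the two tasks train disjoint sets of weights, learning the second task can't overwrite the first:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 4
W = rng.normal(0, 0.1, n_hidden)           # one weight per hidden unit
masks = {"A": np.array([1., 1., 0., 0.]),  # task A may only use units 0-1
         "B": np.array([0., 0., 1., 1.])}  # task B may only use units 2-3

def predict(W, mask, x):
    return np.sum(W * mask) * x            # switched-off units contribute nothing

def train(W, mask, xs, ys, lr=0.05, steps=300):
    for _ in range(steps):
        err = predict(W, mask, xs) - ys                       # vectorized over xs
        grad = np.mean(2 * err[:, None] * xs[:, None] * mask, axis=0)
        W = W - lr * grad                  # gradient is zero for masked-off units
    return W

xs = rng.uniform(-1, 1, 50)
W = train(W, masks["A"], xs, 2.0 * xs)     # task A: y = 2x
frozen_a = (W * masks["A"]).copy()
W = train(W, masks["B"], xs, -2.0 * xs)    # task B: y = -2x
err_a = np.mean((np.sum(W * masks["A"]) * xs - 2.0 * xs) ** 2)
err_b = np.mean((np.sum(W * masks["B"]) * xs + 2.0 * xs) ** 2)
```

After both rounds of training, each task still solves its own problem, and the weights task A relied on are byte-for-byte untouched.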


The second bio-inspired idea is that there is a cost for building connections in the brain over time—energy, for example. By building in "costs" for more connections in neural networks, the researchers forced them to be more efficient in the number of nodes they used to process an input, thereby encouraging the aforementioned segmenting by design. A network can't turn itself into a mess of entangled nodes, because that would "cost" too much.
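In the paper, that cost is charged during evolution; a rough gradient-descent analogue (an assumption on my part, not the authors' method) is an L1 penalty on the weights, which makes every retained connection pay rent and prunes the ones that aren't earning it:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=n)  # only feature 0 matters

def train(lam, lr=0.01, steps=2000):
    """Gradient descent on squared error plus a proximal L1 'connection cost'."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n
        w -= lr * grad
        # soft-thresholding: each step, every connection is taxed lam * lr,
        # and connections that can't pay are cut to exactly zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w_free = train(lam=0.0)   # no connection cost: every weight stays live
w_cost = train(lam=1.0)   # connections are expensive: most get pruned

zeros_free = int(np.sum(np.abs(w_free) < 1e-3))
zeros_cost = int(np.sum(np.abs(w_cost) < 1e-3))
```

With the cost switched on, the nine useless connections are driven to exactly zero while the one that carries real signal survives, which is the kind of enforced tidiness the researchers were after.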

Third, the researchers added dedicated reward and punishment nodes that ensured a neural network only learned something when its output lined up with a reward signal. The idea that reward encourages learning is an age-old principle of human and animal psychology.
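A bare-bones caricature of reward-gated learning (the task and update rule here are my own, simpler than the paper's) is an agent that strengthens a connection only when the outcome was rewarded and weakens it when punished:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_act = 4, 2
correct = np.array([0, 1, 1, 0])   # which action each stimulus should trigger
W = np.zeros((n_act, n_stim))      # stimulus-to-action connection strengths

lr, eps = 0.1, 0.1
for _ in range(2000):
    s = rng.integers(n_stim)            # a stimulus arrives
    if rng.random() < eps:
        a = rng.integers(n_act)         # occasionally explore at random
    else:
        a = int(np.argmax(W[:, s]))     # otherwise act on learned weights
    reward = 1.0 if a == correct[s] else -1.0
    # the reward node gates learning: the connection that just fired is
    # strengthened only if the outcome was rewarded, weakened if punished
    W[a, s] += lr * reward

policy = np.argmax(W, axis=0)           # the action the net now picks per stimulus
```

Nothing changes in the network unless the reward signal says so, which keeps learning pinned to outcomes rather than to whatever the net happened to do.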

Finally, they let the networks evolve and change on their own while learning to complete two different tasks, with these three design principles guiding their eventual organization.
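To give a flavor of that evolutionary loop, here is a drastically simplified (1+1) hill-climber of my own devising, nothing like the paper's actual setup: the fitness function rewards solving a task but charges for every surviving connection, so mutation gradually prunes the genome down to a lean solution:

```python
import numpy as np

rng = np.random.default_rng(4)
xs = rng.uniform(-1, 1, 30)
ys = 2.0 * xs                              # the task: learn y = 2x

def fitness(w, lam=0.05):
    mse = np.mean((np.sum(w) * xs - ys) ** 2)
    cost = lam * np.count_nonzero(w)       # every live connection costs
    return -mse - cost

w = rng.normal(0, 0.1, 5)                  # genome: 5 redundant connections
best = fitness(w)
for _ in range(5000):                      # evolve: mutate, keep if no worse
    child = w.copy()
    i = rng.integers(5)
    if rng.random() < 0.2:
        child[i] = 0.0                     # mutation: prune a connection
    else:
        child[i] += rng.normal(0, 0.1)     # mutation: tweak a weight
    f = fitness(child)
    if f >= best:
        w, best = child, f
```

Selection never demands sparsity outright; it emerges because pruned genomes that still solve the task score higher, the same indirect pressure the researchers used to coax modularity out of their networks.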

In the end, the researchers found that the segmentation and pressure to reduce connections resulted in highly modular networks that can learn something, learn something else, and then remember the first thing.

This could very well be a step forward for neural networks that can evolve and learn new things, and a crucial one for artificial intelligence that must adapt to the real world. For a computer, this is impressive. But you probably pull off the same feat dozens of times every day, whenever you learn how to do something new.

It ain't easy being a digital brain.