Wooden blocks, carpet squares, half-finished juice boxes, busy three-foot-tall human beings—this is part of what today’s kindergarten classrooms are made of. We send our five-year-olds off to an environment dedicated to helping them learn and grow their minds, so why can’t we do the same with our robots?
At first glance, this may seem like a far-fetched idea. Robots don’t learn on their own; they’re given knowledge, programmed with algorithms, and then we humans observe their behavior and determine whether they perform in the way that we intended. If not, we tweak our constructs and try again. This is the traditional notion of programming an “intelligent” robot, but is it the only way?
Danko Nikolic, a researcher at the Max Planck Institute for Brain Research, believes that the future of robot learning may involve an education similar to the way our children learn in the primary grades. In an “AI kindergarten” with human trainers and educators, robots would learn in a playroom, exercising calibrated interactions while sensing the real world. They would start with simpler capacities, much as humans rely on the cerebellum to learn motor control and the inner ear to learn balance. Eventually, robots would create their own set of rules for learning new information, a key feature of intelligence. For example, when learning to decipher a drawing of a cat, an AI wouldn’t cram thousands of cat images into its memory; instead, it would ask a series of questions about the image, such as “Is that a cat? Could it be a cat? What does a cat look like? What else could that image be?”
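To make that questioning strategy concrete, here is a toy sketch in Python. The feature table, the Teacher class, and the identify loop are all invented for illustration, not Nikolic’s actual system: the learner keeps a shrinking set of candidate concepts and asks the trainer about one distinguishing feature at a time, rather than memorizing thousands of labeled images.

```python
# Invented illustration: learning by asking questions instead of
# memorizing. The learner narrows a set of candidate concepts by
# querying a (human) teacher about one feature at a time.
FEATURES = {
    "cat": {"whiskers", "pointed ears", "four legs"},
    "dog": {"four legs", "floppy ears"},
    "bird": {"wings", "beak"},
}

class Teacher:
    """Stands in for the human trainer in the AI-kindergarten playroom."""
    def __init__(self, drawing):
        self.drawing = drawing  # the concept the drawing actually shows

    def has_feature(self, feature):
        return feature in FEATURES[self.drawing]

def identify(teacher):
    candidates = set(FEATURES)  # "What else could that image be?"
    asked = set()
    while len(candidates) > 1:
        # Ask about some feature we haven't asked about yet.
        feature = next(f for c in candidates
                       for f in FEATURES[c] if f not in asked)
        asked.add(feature)
        answer = teacher.has_feature(feature)  # "Could it be a cat?"
        # Keep only the concepts consistent with the teacher's answer.
        candidates = {c for c in candidates
                      if (feature in FEATURES[c]) == answer}
    return candidates.pop()

print(identify(Teacher("cat")))  # -> cat
```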
“In principle, this should be possible. It’s basically similar to how we transfer our civilization to the next generation of children,” Nikolic told Motherboard. “The new generation doesn’t have to create the whole civilization from scratch, something that took us thousands of years. We could do the same with the knowledge stored in the genome. It has not been done by evolution yet, but there’s no reason we shouldn’t be able to do it ourselves once we understand these processes and how they work.”
We’re used to thinking of artificial intelligence as a computer that can beat a human at chess, or, more recently, at Go, but some researchers are thinking more about artificial general intelligence (AGI), or strong AI, that could apply an underlying intelligence to any task the same way humans do.
Most researchers working toward strong AI are in the burgeoning field of “deep learning,” a branch of artificial intelligence that uses layered sets of algorithms to build representations and models of the world by sifting through lots and lots of data.
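To give a sense of what that sifting looks like in miniature, the sketch below trains a tiny two-layer network, in plain NumPy, to learn the classic XOR function from four data points by repeatedly adjusting its weights against its errors. It is a deliberately small illustration of the general recipe, not any particular research system.

```python
# Minimal deep-learning sketch: a two-layer network learns XOR by
# gradient descent. Real systems differ mainly in scale, not in kind.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden representation of the data
    p = sigmoid(h @ W2 + b2)           # predicted probability
    # Backpropagate the prediction error and nudge every weight.
    dp = p - y
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```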
But some scientists, including Nikolic, believe that the current deep learning models are limited when it comes to creating a truly intelligent being that can learn from its mistakes and adapt its knowledge based on its environmental interactions. Today’s AI systems are smarter than ever, but Nikolic notes that they’re not able to adjust or adapt in a way that a human or any animal does, which limits the system’s ability to take what it’s learned and apply it to new and more challenging contexts.
Nikolic believes that the secret to engineering this kind of adaptive knowledge lies in how the system is organized. He wants to take the deep-learning approach one step further with practopoiesis, a theory that describes how organisms function and behave based on how their systems, from their genes up to their entire collection of organs, are organized.
Different adaptive systems, from a human being to a potential AI, fall into different adaptive categories. These adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment, which are also referred to as traverses. Traverses are a measure of how much capability the system has to adjust its existing knowledge, which is different from its raw computing capacity. The following concrete examples, borrowed from Nikolic, can help us wrap our minds around this idea:
- A book: not smart; zero traverses
- A computer: somewhat smart; one traverse
- An AI system: much smarter; two traverses
- A human or animal: smartest; three traverses
It’s helpful to visualize the three human traverses, or levels of organization, as follows: our genes; our subconscious and long-term memory formation; and our conscious, active decision-making, which works in tandem with our sensory system. Having three traverses makes it possible to make informed inferences and determine what to do next in any given situation, something a domestic cat can do but that even the most advanced AI cannot.
To match a human, says Nikolic, a strong AI system must have three traverses, whereas today’s most advanced AI systems are still at the level of two. Systems with three traverses—also called T3-systems—are capable of learning from mistakes and storing their past experiences in an abstract form, which can be used more efficiently and adaptively than in two-traverse systems.
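Translated into software, the hierarchy might look something like the schematic below. The classes and the toy meta-rule are my illustration of the idea, not Nikolic’s formalism: each additional traverse is one more level at which feedback from the environment reshapes the level beneath it. (A book would sit below all of these: zero traverses, pure storage, no feedback at any level.)

```python
# Schematic illustration of traverses (not Nikolic's formalism).
class T1:
    """One traverse: fixed knowledge maps input to output (a computer)."""
    def __init__(self, w):
        self.w = w                      # built-in knowledge

    def act(self, x):
        return self.w * x

class T2(T1):
    """Two traverses: feedback also rewrites the knowledge (today's AI)."""
    def __init__(self, w, lr):
        super().__init__(w)
        self.lr = lr                    # a fixed learning rule

    def learn(self, x, target):
        error = self.act(x) - target
        self.w -= self.lr * error * x   # feedback adjusts knowledge

class T3(T2):
    """Three traverses: feedback also retunes the learning rule itself,
    roughly the role Nikolic assigns to a genome."""
    def meta_learn(self, errors):
        # Growing errors mean learning is unstable: slow down.
        # Shrinking errors mean we can afford to learn faster.
        self.lr *= 0.5 if abs(errors[-1]) > abs(errors[0]) else 1.1

# A T3 agent both learns and tunes how it learns:
agent = T3(w=0.0, lr=0.4)
errs = []
for x, target in [(1.0, 2.0)] * 10:
    errs.append(agent.act(x) - target)
    agent.learn(x, target)
agent.meta_learn(errs)                  # third traverse: adjust the rule
```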
AI systems are notably missing the building blocks of knowledge, something human children and animals come equipped with in the form of genes. A child is born with the genes that allow it to understand the basic principles of language, for example; similarly, a rat is born with the ability to learn how to navigate a maze. “We need a robot that has this low-level knowledge,” Nikolic said.
He envisions what he calls a “machine genome,” which would equip robots with a set of learning rules. “To accelerate this process, to cut it down to several or tens of years, I think what we can do with AI kindergarten is kind of steal this knowledge (of genes), take this hard work that evolution has done through eons, and have it translated into our robots, into their genomes,” he said.
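What might such a genome look like in code? Purely as a hypothetical sketch, with every field name invented for illustration, it would be a compact, heritable bundle of reflexes and learning rules that seeds each robot before kindergarten begins, rather than a dump of finished knowledge.

```python
# Hypothetical sketch of a "machine genome": inherited learning rules
# and reflexes, not finished knowledge. All field names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class MachineGenome:
    reflexes: dict          # innate stimulus -> response pairs
    learning_rate: float    # how quickly experience rewrites behavior
    curiosity: float        # how often to explore instead of repeat

def spawn_pupil(genome: MachineGenome) -> dict:
    """Every 'infant' robot starts from the same inherited rule set."""
    return {
        "rules": dict(genome.reflexes),  # a private, mutable copy
        "lr": genome.learning_rate,
        "curiosity": genome.curiosity,
    }

GENOME_V1 = MachineGenome(
    reflexes={
        "loud noise": "orient toward source",
        "obstacle ahead": "stop",
    },
    learning_rate=0.05,
    curiosity=0.3,
)
pupil = spawn_pupil(GENOME_V1)  # ready for its first day of class
```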
At this point, the machine genome is just a theory, but one whose potential Nikolic is working to understand through his research.
The robots in AI kindergarten would come equipped with only a few simple algorithms, nowhere close to human-level AI. As “infants,” they might be programmed with learning behaviors at the level of a primitive animal, such as basic reflexes and instincts.
“Instead of giving the exact rules of how to behave, we can give examples of what should be done; we could give the machines our own intuition on what is the right thing to do in a given situation. Children’s brains have to figure out how to take these examples and turn them into new knowledge and new skills, how to do things better next time. We have to do a similar thing with the robots, but we have to start much earlier,” Nikolic said.
He imagines a teacher interacting with a robot in this playroom environment, with the robot watching and working on the same problems as the human trainer. For example, future AI educators might model how to navigate around solid objects without running into them, or how to turn toward a noise heard from across the room.
This process might be similar to basic scaffolding in an actual kindergarten, where teachers guide students by modeling behavior and providing plenty of opportunities for practice. From these simple lessons, robots could gradually learn more complex algorithms for “how to learn,” and in turn apply them to produce more complex behavior and more complex artificial brains, eventually growing into what we call “strong” or “general” artificial intelligence.
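A crude version of that scaffolding loop might look like the sketch below, in which a trainer demonstrates, the robot attempts, and corrections become policy. The situations and actions are invented stand-ins for whatever a real curriculum would contain.

```python
# Invented sketch of the kindergarten scaffolding loop: demonstrate,
# let the robot try, correct, repeat.
DEMONSTRATIONS = [
    ("obstacle ahead", "steer around"),
    ("noise from the left", "look left"),
    ("noise from the right", "look right"),
]

class Robot:
    def __init__(self):
        self.policy = {}                    # situation -> learned action

    def attempt(self, situation):
        return self.policy.get(situation, "do nothing")

    def correct(self, situation, action):
        self.policy[situation] = action     # the trainer's feedback sticks

robot = Robot()
for lesson in range(2):                     # repeated practice, as in class
    for situation, demo in DEMONSTRATIONS:
        if robot.attempt(situation) != demo:
            robot.correct(situation, demo)  # teacher models the behavior

print(robot.policy["obstacle ahead"])       # -> steer around
```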
In essence, machines may come to know the world not through an impossibly all-encompassing program, but by grasping, babbling, trying, and fumbling their way to getting it right. Projects to implement this approach would undoubtedly be long and expensive undertakings, but at least we wouldn’t have to buy the robots juice boxes.