
Meta's AI Chief Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence

In a new study, Yann LeCun, machine learning pioneer and head of AI at Meta, lays out a vision for AIs that learn about the world more like humans do.

Yann LeCun, VP and AI chief at Meta, has published a new paper laying out his vision for “autonomous” AIs that can learn and experience the world in a more human-like way than today’s machine learning models.

In the nearly 70 years since AI was first introduced to the public, machine learning has exploded in popularity and grown to dizzying heights. Yet despite how quickly we've come to rely on the power of computing, one question has haunted the field almost since its inception: Could these systems one day become intelligent enough to match, or even surpass, humanity?


Despite some dubious recent claims (take the Google engineer who was fired after insisting one of the company's chatbots had become sentient), we’re pretty far off from that reality. Instead, one of the biggest barriers to a robot-overlord situation is the simple fact that, compared to animals and humans, current AI and machine learning systems lack the capacity to reason. Reasoning is essential to “autonomous” machine intelligence: AI that can learn on the fly, directly from observations of the real world, rather than through lengthy training sessions designed for a specific task.

New research LeCun recently published on OpenReview.net, a paper titled “A Path Towards Autonomous Machine Intelligence,” proposes a way to fix this by training learning algorithms to learn more efficiently; as it stands, AI isn’t very good at predicting and planning for changes in the real world. Humans and our animal counterparts, on the other hand, gain enormous amounts of knowledge about how the world works through observation, with remarkably little physical interaction.

LeCun, besides leading AI efforts at Meta, is a professor at New York University and has spent his storied career developing the learning systems that many modern AI applications rely on today. In 2013, he founded the Facebook AI Research (FAIR) group, the company's first foray into dedicated AI research, before stepping down a few years later to become its chief AI scientist. In trying to give machines better insight into how the world operates, he could arguably be hailed as the father of the next generation of AI.


Since then, Meta has had varying levels of success in trying to dominate the ever-growing field. In 2018, its researchers trained an AI to generate realistic replacement eyes, in hopes of making it easier for users to fix blinks in their digital photos. Earlier this year, the Meta chatbot BlenderBot 3 (which proved surprisingly hostile toward its own creator) stirred up debate over AI ethics and biased data. Most recently, the company's Make-A-Video tool has shown it can turn text prompts, single images, or pairs of images into short videos, spelling even more bad news for the once-promising rise of AI-generated art.

LeCun's paper returns to that gap between animal and machine learning. Teenagers, for instance, can learn to drive with just a few dozen hours of practice, and without having to experience a crash for themselves. Machine learning systems, on the other hand, have to be trained on staggering amounts of data before they can accomplish the same task.


“A car would have to run off cliffs multiple times before it realizes it's a bad idea,” LeCun said when he presented his work at UC Berkeley on Tuesday. “And then another few thousands of times before it realizes how not to run off the cliff.” The difference, LeCun went on to note, is that humans and animals are capable of common sense.

While the concept of common sense can pretty much be boiled down to having practical judgment, LeCun describes it in the paper as a collection of models that can help a living being infer the difference between what’s likely, what’s possible, and what’s impossible. Such a skill allows a person to explore their environment, fill in missing information, as well as imagine new solutions to unknown problems. 

Still, we seem to take the gift of common sense for granted, as scientists haven't yet been able to imbue AI and machine learning algorithms with any of these capabilities. During the same talk, LeCun pointed out that many modern training processes, like reinforcement learning (a method based on rewarding favorable behaviors and punishing undesired ones), aren't up to snuff when it comes to matching human reliability in real-world tasks.
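To make the sample-inefficiency point concrete, here is a deliberately tiny illustration of that reward-and-punishment loop, written in Python for this article. It is a generic tabular Q-learning toy, not code from LeCun's paper, and the corridor world, reward values, and hyperparameters are all invented for the example. Even with only five positions, the agent needs hundreds of trial runs before stepping toward the goal reliably wins out:

import random

N_STATES = 5          # positions 0..4 along a corridor; 4 is the goal
ACTIONS = [-1, 1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

# Q[state][action] estimates how good each action is in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        # Reward favorable behavior (+1 for reaching the goal) and
        # punish everything else (-0.1 per wasted step).
        reward = 1.0 if nxt == N_STATES - 1 else -0.1
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

print(Q)  # after 500 episodes, stepping right dominates in every state

A person would solve this corridor on the first attempt. The point is how many repetitions pure trial and error demands for even a trivial task, let alone for driving a car.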

“It’s a practical problem because we really want machines with common sense. We want self-driving cars, we want domestic robots, we want intelligent virtual assistants,” LeCun said. 

So, with the aim of advancing AI research over the next decade, LeCun's paper proposes an architecture designed to minimize the number of actions a system has to take in the real world in order to learn and carry out an assigned task.

Much like how different sections of the brain are responsible for different functions of the body, LeCun suggests a model for spawning autonomous intelligence that is composed of five separate, yet configurable, modules. One of the most complex parts of the proposed architecture, the world model module, would estimate the state of the world and predict the outcomes of imagined sequences of actions, much like a simulator. And because a single world model engine is shared across different tasks, knowledge about how the world operates doesn't have to be relearned for each new one. In some ways, it might resemble memory.
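As a rough sketch of how those pieces could fit together, consider the short Python program below. The class names, the one-number stand-in for the world's state, and the cost function are assumptions invented for illustration; LeCun's paper describes the modules conceptually and doesn't prescribe an implementation. What the sketch tries to capture is the core loop: the agent tests candidate actions inside the world model's imagination instead of in the real world, and picks the one with the cheapest predicted outcome.

class WorldModel:
    # Plays the simulator role: predicts the next state of the world
    # given the current state and a candidate action. A real system
    # would have to learn these dynamics; this placeholder just adds.
    def predict(self, state: float, action: float) -> float:
        return state + action

class Agent:
    def __init__(self) -> None:
        self.world_model = WorldModel()

    def cost(self, state: float, goal: float) -> float:
        # How unhappy the agent would be with an imagined outcome.
        return abs(goal - state)

    def plan(self, state: float, goal: float, candidates: list[float]) -> float:
        # Roll each candidate action through the world model and pick
        # the one whose imagined result is cheapest; no real-world
        # trial and error is required.
        return min(candidates,
                   key=lambda a: self.cost(self.world_model.predict(state, a), goal))

agent = Agent()
print(agent.plan(state=0.0, goal=3.0, candidates=[-1.0, 0.5, 1.0]))  # prints 1.0

Because the same world model answers "what would happen if..." questions for any task the agent is handed, whatever it learns about the world in one setting carries over to the next, which is what makes the module so central to the proposal.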

That said, there's still a lot of hard work to be done before autonomous systems can learn to deal with uncertain situations. In a world as chaotic and unpredictable as ours, though, it's an issue we'll no doubt have to address sooner rather than later. For now, dealing with that chaos is part of what makes us human.

Meta did not respond to a request for comment about the work.