Watch a Computer Program Teach Itself to Run for Dear Life

We’ve come a long way since QWOP.

Aug 1 2017, 3:00pm

Image: Michiel van de Panne

Nothing brings us more joy than watching things learn to walk, then run, fall, and get back up again. It's the classic hero's journey. Until now, that joy was mainly reserved for watching toddlers and puppies.

All video: Michiel van de Panne/YouTube

A team of computer programmers has created an algorithm that allows computer characters, and potentially robots in the future, to teach themselves how to walk, run, and even dribble a soccer ball. The key is letting them learn from mistakes as they go.

The project, called DeepLoco, uses "deep reinforcement learning," in which a system tries several methods to meet a goal and eventually finds the best way to perform its task, according to Michiel van de Panne, a University of British Columbia computer science professor who presented the work on Monday at SIGGRAPH 2017, a major computer graphics conference.

"The magical thing about reinforcement learning is you can provide direct feedback, and it eventually figures out the best overall strategy," van de Panne told me over the phone.
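That try-act-get-feedback loop can be sketched with a toy tabular Q-learning example. To be clear, this is a simplified stand-in, not DeepLoco itself, which uses deep neural networks; the one-dimensional "walk to the goal" task and all parameters here are illustrative assumptions:

```python
import random

# Toy reinforcement learning sketch (not DeepLoco): the agent starts at
# position 0 and must learn to reach position 4. Each step costs -1;
# reaching the goal earns +10.
GOAL = 4
ACTIONS = [-1, +1]  # step left or step right

q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(500):
    s = 0
    while s != GOAL:
        # Occasionally explore a random action; otherwise exploit
        # the best-known one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        reward = 10.0 if s2 == GOAL else -1.0
        # Direct feedback: nudge the value estimate toward the
        # observed reward plus the best estimated future value.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)  # every state maps to +1: step toward the goal
```

The agent is never told "walk right"; it stumbles through random moves, and the reward signal alone shapes its strategy, which is the same principle, scaled up enormously, behind the biped.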

The program defines an adorable square-bodied movement model, which van de Panne calls "the biped," that can autonomously learn how to walk, scan its environment, and decide where to go.

He said that it takes about two days for the biped to learn how to walk at all, and about five days for it to learn how to properly direct its movements based on its surroundings, anything from different inclines to moving objects. In van de Panne's presentation videos, the biped can be seen walking (and falling) across narrow cliffs, trying to keep its balance as it's bombarded by cubes, and dribbling a soccer ball around in a way that would make any assistant coach/dad proud.


"While it's trying out various things in its training phase, it does put itself in a lot of awkward situations that it needs to recover from," van de Panne said. "It's a bit like a baby learning how to walk. It will be more often off-balance than you and I are."

Instead of tracking movement through motion capture or writing lines of code to react to every possible outcome, DeepLoco learns how to adapt and create movements as it goes. Van de Panne said this approach produces more comprehensive, lifelike motion.

"The problem with [motion capture] is you're cutting and pasting existing pieces of motion together," he said. "That's a bit like creating new images by cutting and pasting pieces of photographs together. At some point, you simply can't capture all the scenarios you need, so using physics is the right answer."

In the future, this technology could be used to teach robots to navigate areas independently, as well as to provide simulated models for studying human movement, which is especially relevant in biomechanics and prosthetic design.
