
Watch This Humanoid Robot Hand Solve a Rubik’s Cube

The efforts are paving the way for robots that can solve complex mechanical problems in the real world.

by Karl Bode
16 October 2019, 2:32am

Image: OpenAI

This article originally appeared on VICE US.

Researchers have successfully trained a neural network to solve a Rubik’s Cube with a single human-like robot hand, bringing us one step closer to being outwitted by our inevitable robotic overlords.

OpenAI, the San Francisco–based for-profit AI research lab, this week released a study documenting the development of an AI-linked robotic hand named Dactyl. Since May of 2017, the researchers have been trying to train Dactyl to intelligently solve the Rubik’s Cube.

For robotic systems built specifically to solve the puzzle, the Rubik’s Cube hasn’t traditionally posed much of a challenge. Some such systems have been shown to solve the puzzle in under a second, well below the previous human record of five seconds. Those robots don't use humanoid hands, though.

While plenty of specialty robots can solve the Rubik’s Cube quickly, Dactyl is different. It’s built from the ground up to intelligently solve complex tasks in simulation before attempting them in the real world, an early step in the quest for smarter robots capable of complex tasks that require not only complicated physical manipulation, but some degree of thought.

This month, Dactyl succeeded for the first time:


“We set this goal because we believe that successfully training such a robotic hand to do complex manipulation tasks lays the foundation for general-purpose robots,” the researchers said in a blog post explaining the project.

The robotic hand itself isn’t really new; some variation of that technology has existed for the better part of the last 15 years. What is new is the researchers’ use of Automatic Domain Randomization (ADR), which endlessly generates progressively more difficult randomized environments in simulation before the robot is tasked with a real world challenge.

In simulation, everything from the size and mass of the cube to the friction of the robotic fingers is progressively changed, OpenAI’s Ashley Pilipiszyn told Motherboard.

“ADR starts with a single, nonrandomized environment, wherein a neural network learns to solve Rubik’s Cube,” she said. “As the neural network gets better at the task and reaches a performance threshold, the amount of domain randomization is increased automatically. This makes the task harder, since the neural network must now learn to generalize to more randomized environments.”
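The curriculum logic Pilipiszyn describes can be sketched in a few lines. This is a minimal illustration, not OpenAI’s actual code: the environment parameters, threshold, and step size here are all made-up placeholders chosen to show the idea of widening the randomization range once performance crosses a threshold.

```python
import random

def make_env(rand_scale):
    """Sample a simulated environment whose physical parameters
    (illustrative: cube size, mass, finger friction) are perturbed
    within a range proportional to rand_scale. At scale 0.0 this is
    the single, nonrandomized starting environment."""
    return {
        "cube_size": 1.0 + random.uniform(-rand_scale, rand_scale),
        "cube_mass": 1.0 + random.uniform(-rand_scale, rand_scale),
        "friction":  1.0 + random.uniform(-rand_scale, rand_scale),
    }

def automatic_domain_randomization(evaluate, threshold=0.8,
                                   step=0.05, max_scale=0.5):
    """Toy ADR loop: train/evaluate the policy on environments sampled
    at the current randomization scale, and widen the scale each time
    the success rate crosses the threshold, making the task harder."""
    scale = 0.0
    while scale < max_scale:
        success_rate = evaluate(make_env(scale))  # stand-in for a training step
        if success_rate >= threshold:
            scale += step  # harder: more randomized environments
    return scale
```

In the real system, `evaluate` would be many episodes of reinforcement learning rather than a single call, and the randomization covers far more parameters; the point is only that difficulty ramps up automatically as the policy improves.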

The system then takes that knowledge and applies it to real-world tests, which include everything from wearing a rubber glove to having a few fingers tied together. Thanks to its simulator experience, the system is better able to deal with real-world variables, such as being harassed by a stuffed giraffe:

Image: OpenAI


In the full uncut video, you can watch as the robot applies the knowledge gleaned in simulation to solve the puzzle in around four minutes. It’s not perfect; researchers say Dactyl actually solves the Rubik’s Cube only about 60 percent of the time when 15 rotations are required, and just 20 percent of the time when the cube’s scramble is at maximum difficulty, requiring 26 or more turns.

Still, Dactyl’s latest achievement brings the project one step closer to its ultimate goal: developing learning robots capable of complex and varied tasks.

“For the past 60 years of robotics, hard tasks which humans accomplish with their fixed pair of hands have required designing a custom robot for each task,” Pilipiszyn said. “As an alternative, people have spent many decades trying to use general-purpose robotic hardware, but with limited success due to their high degrees of freedom.”

Dactyl is the first evolutionary step in changing that narrative. It’s a predecessor of smarter, more adaptive, nimbler robots, capable of dealing with the messiness and complexity of the real world on the fly.
