Scientists Are Training AI Robots to Write Graffiti

Researchers at Georgia Tech created a system that mimics human movements to create its own street art.
Janus Rose
New York, US
Georgia Tech researchers standing in front of AI-produced graffiti art spelling the names "Var," "Sam," and "Praj" in bubble letters.
Image via Georgia Tech / YouTube

Machine learning is quickly becoming a new tool for digital artists, with systems like OpenAI’s DALL-E granting the ability to generate surreal images from just a brief text description. Researchers at Georgia Tech have taken a slightly different route, training a system called GTGraffiti that learns the physical movements of human graffiti writers to create its own street art.

The system’s training data comes from the motion-captured movements of human graffiti artists, teaching the algorithms to imitate the fluidity of human motions. A robotic system, consisting of a spray can connected to a large steel frame via four adjustable cables, then reproduces the motions to create its own tags.
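The basic geometry of such a cable-driven setup is simple to illustrate. In the hedged sketch below (this is not the team's code; the anchor coordinates and target point are invented for illustration), the spray can is treated as a point suspended from four pulleys at the corners of the frame, and the length each cable must be paid out to is just the straight-line distance from its anchor to the target position.

```python
import math

# Hypothetical (x, y) anchor points, in meters, of the four cable pulleys
# on a rectangular frame -- invented values for illustration only.
ANCHORS = [(0.0, 0.0), (3.0, 0.0), (0.0, 2.0), (3.0, 2.0)]

def cable_lengths(target):
    """Return the length each cable needs so the spray can (modeled as a
    point) sits at `target` inside the frame."""
    tx, ty = target
    return [math.hypot(tx - ax, ty - ay) for ax, ay in ANCHORS]

# At the exact center of the frame, symmetry makes all four cables equal.
print(cable_lengths((1.5, 1.0)))
```

Moving the can along a letterform then amounts to smoothly varying the four lengths together, which is why coordinating fast, fluid strokes is the hard part.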

“Art encompasses some of the most nuanced motions that we can make, as humans,” Gerry Chen, a robotics Ph.D. student at Georgia Tech, said in a new video released to introduce the project. “The idea is that if we can reproduce these motions, then we should also be able to produce a lot of other human motions that are necessary for general automation.”

Since graffiti is highly stylized and involves a series of quick, flowing movements, the task can be pretty difficult for a robot. One of the earliest attempts to digitize street art in this way was F.A.T. Lab’s Graffiti Markup Language, an open data format for recording graffiti tags as gestural motion data, released in 2010. The artists described the project as “a new digital standard for tomorrow’s vandals,” but creating a robotic system that could reproduce the nuanced gestures remained out of reach at the time.

Georgia Tech’s researchers claim their system is the first to take the idea a step further. Rather than relying on gestures captured via stylus or touch input, GTGraffiti uses data from full-body human movements recorded with motion capture equipment, which the robotic cable-and-pulley system then uses to imitate the hand and body movements of a human artist.
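Before a robot can replay a motion-capture recording, the captured hand positions typically have to be resampled onto the controller's fixed timestep. The sketch below shows one common way to do that with linear interpolation; the function name and sample data are hypothetical, not taken from the GTGraffiti codebase.

```python
def resample(trajectory, dt):
    """Linearly interpolate (time, x, y) samples onto a fixed timestep dt."""
    times = [t for t, _, _ in trajectory]
    steps = int(round((times[-1] - times[0]) / dt))
    out = []
    i = 0
    for k in range(steps + 1):
        # Clamp accumulated float drift so the final sample lands exactly
        # on the end of the recording.
        t = min(times[0] + k * dt, times[-1])
        # Advance to the captured segment that contains time t.
        while i < len(trajectory) - 2 and times[i + 1] < t:
            i += 1
        t0, x0, y0 = trajectory[i]
        t1, x1, y1 = trajectory[i + 1]
        a = (t - t0) / (t1 - t0)
        out.append((t, x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

# Invented capture: three hand positions sampled at uneven times.
captured = [(0.0, 0.0, 0.0), (0.1, 1.0, 0.5), (0.3, 2.0, 2.0)]
print(resample(captured, 0.05))  # evenly spaced points every 50 ms
```

Real pipelines would also smooth the trajectory and account for the can's dynamics, but the resampling step above is the usual first bridge between recorded human motion and a robot controller.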

The robotic system is still limited to pretty basic shapes that are missing a lot of the individuality of human-made graffiti art, and it can only draw within a fixed canvas area. But the researchers say future iterations could be used by artists to reach otherwise inaccessible areas, or remotely tag their designs in faraway locations.

“Our work can be applied to graffiti preservation by recreating captured artwork, to human-robot collaboration in art by enhancing the physical capabilities of artists, and to other fields through technology transfer for large-scale dynamic motion,” the researchers wrote in a paper introducing the project.