Adding an object to a photo is one of the most basic Photoshop skills in the book, but this simple gag becomes more complex when it comes to convincingly altering paintings. Unless, that is, you have AI on your side.
Earlier this year, a team of researchers from Cornell University and Adobe Research created a machine learning algorithm that is able to seamlessly add objects to paintings by replicating their unique style. This is a feat that would require some serious image editing skills for a human to pull off, and until now, no machine learning algorithm had come close.
As explained in a paper called “Deep Painterly Harmonization” posted to the arXiv preprint server in April, paintings are harder to add objects to than photographs because the tools used for manipulating photographs were not designed to handle the brush textures and abstraction typical of painted artworks. Although AI researchers have developed sophisticated systems that can convincingly add objects to photos and videos, these systems don’t perform as well when applied to paintings.
According to the paper, the new technique draws upon previous work that used artificial neural networks—a type of AI loosely modeled on the human brain—to perform “painterly stylization.” This type of neural net uses statistical analysis to build an abstract map of a given painting’s style, which can then be applied to any input image to render it in that style.
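In the original painterly-stylization work this line of research builds on, that “abstract map” of style is a set of feature correlations—Gram matrices—computed from a convolutional network’s activations. Here’s a minimal sketch of that statistic in NumPy, with a random array standing in for a real network’s activations on a painting:

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels: an abstract 'map' of style.

    features: array of shape (channels, height, width), e.g. the
    activations of one convolutional layer applied to the painting.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)  # flatten the spatial dimensions
    return f @ f.T / (c * h * w)   # channel-by-channel correlations

# Toy example: random "activations" stand in for a real CNN layer.
rng = np.random.default_rng(0)
painting_features = rng.standard_normal((8, 16, 16))
g = gram_matrix(painting_features)
print(g.shape)  # (8, 8) -- one entry per pair of channels
```

Because the Gram matrix averages over all spatial positions, it captures which textures and colors co-occur without recording where they appear—which is why the same style map can be imposed on any input image.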
While this approach works well enough if you want to make your selfie look like it was painted by Van Gogh—there are even apps that do this—it’s not great if you want to insert that selfie into an actual Van Gogh painting.
This is what Cornell doctoral candidate Fujun Luan and his colleagues managed to achieve for the first time with their neural net.
“The main challenge in the painting harmonizing problem is that one must match carefully both the spatial and inter-scale statistical consistency for the pasted part, otherwise it would present obvious composite artifacts,” Luan told me in an email. “We also did experiments using previous neural patch matching techniques but the results weren’t satisfying.”
To achieve the level of local realism necessary to add objects to paintings, Luan and his colleagues trained their neural net to follow a three-step process. The first step is simply copying and pasting the object into the painting. Next, the AI applies a rough style match to the pasted object based on the rest of the painting. Finally, the neural net performs a more refined style match that also reproduces the texture of the original painting.
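To get a feel for that pipeline, here is a toy sketch in NumPy—emphatically not the paper’s actual method, which matches statistics of deep network features rather than raw pixels. Step one pastes the object in; step two is a crude stand-in for rough style matching, shifting the pasted region’s pixel statistics toward the painting’s; the fine-grained texture matching of step three is the part that genuinely requires the neural net and is omitted here:

```python
import numpy as np

def paste(painting, obj, y, x):
    """Step 1: naive copy-and-paste of the object into the painting."""
    out = painting.copy()
    h, w = obj.shape[:2]
    out[y:y + h, x:x + w] = obj
    return out

def rough_harmonize(composite, painting, y, x, h, w):
    """Step 2 (crude stand-in): normalize the pasted region, then give it
    the painting's overall mean and spread so it no longer pops out."""
    out = composite.copy()
    region = out[y:y + h, x:x + w].astype(float)
    region = (region - region.mean()) / (region.std() + 1e-8)
    out[y:y + h, x:x + w] = region * painting.std() + painting.mean()
    return out

# Toy grayscale "painting" and a bright flat "object" that clashes with it.
rng = np.random.default_rng(1)
painting = 0.4 + 0.05 * rng.random((64, 64))
obj = np.full((16, 16), 0.9)

composite = paste(painting, obj, 24, 24)
harmonized = rough_harmonize(composite, painting, 24, 24, 16, 16)
```

After step two the pasted patch shares the painting’s overall brightness, but its brushwork still wouldn’t match—hence the texture-aware refinement the researchers add in step three.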
Luan told me that he initially worked on the project while an intern at Adobe. He said he thinks the neural net’s “ultimate purpose” is to provide a tool for artists. Indeed, one can imagine that it may end up in future image editing software.
“Hopefully [the tool] would be user friendly enough that non-professionals could also make use of it to produce fun results,” Luan said.
Luan and his colleagues made the code open source and available to everyone on GitHub. The work recently made the rounds on Twitter after the artist and coder Gene Kogan posted some humorous results he made with the tool.
Kogan told me in an email that the open source neural net can be run from the command line with “fairly rudimentary skills.”
As someone whose coding skills can barely be called rudimentary, I was unable to get the neural net up and running myself. But sometime in the near future, I may be able to click a button and let the machine do the work for me.