Photoshopping is already too easy. Selection tools just keep getting more precise, as "smart" fill tools stand ever more ready to clean up the average hack's image editing messes. If Photoshop were an operating room, one could wield a rusty machete during brain surgery, perhaps while blindfolded and a bit buzzed, and be assured that the software would translate every clumsy movement into neuro-precision. This, however, is just the beginning.
Researchers at Brown University have developed a new algorithm capable of making wholesale changes to digital images—40 "transient attributes" in all, fitting into the broad categories of weather, season, mood, time of day, etc.—with a deft enough touch that 70 percent of participants in a lab study said they preferred the algorithm's changes to those done methodically, tediously by hand. An open-access paper describing the work (and providing a series of hands-on examples) is being presented next week at the SIGGRAPH computer graphics conference.
The editing performed by the Brown algorithm is far more complex than simply putting a landscape through some filter. I can make a scene feel "rainy" easily enough via some smart-phone app filter, but it won't stand up to scrutiny; changes in light affect everything in a scene, and in ways that can't be predicted by a swapped lens. Just to start, think about shadows and reflections and glimmering beads of moisture. There's nothing casual about making the edits needed to change an image from "sun" to "rain" to "snow."
"It's been a longstanding interest of mine to make image editing easier for non-experts," said James Hays, one of the algorithm's developers, in a statement accompanying the new paper. "Programs like Photoshop are really powerful, but you basically need to be an artist to use them. We want anybody to be able to manipulate photographs as easily as you'd manipulate text." Absent human hands, the task falls to machine learning.
Actually, we're not completely done with human hands. The Brown researchers turned to Mechanical Turk for the tedious task of examining 8,000 different photos—collected from 101 stationary web cams around the world recording static landscapes in different conditions—and labeling each according to the desired transient attributes.
Basically, a Turk laborer received some image along with a list of descriptors and was instructed to check off every one represented in the photo. The results were then handed over to the learning algorithm, which, image after image, developed a meaningful sense of what, in the context of a digital photo, all of these attributes mean.
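The checklist scheme described above is easy to picture as data. Here is a minimal sketch, assuming (hypothetically) that several workers label each image and that per-attribute scores are just the share of workers who checked the box; the attribute names are an illustrative subset, not the paper's full list of 40.

```python
# Hypothetical sketch: aggregating crowd-sourced attribute checklists
# for one image into fractional training targets. Attribute names and
# the averaging scheme are assumptions for illustration.
from collections import Counter

ATTRIBUTES = ["sunny", "rainy", "snowy", "foggy", "autumn", "dusk"]

def aggregate_labels(worker_checklists):
    """Combine several workers' checklists for one image into a
    score per attribute: the fraction of workers who checked it."""
    counts = Counter()
    for checklist in worker_checklists:
        counts.update(checklist)
    n = len(worker_checklists)
    return {attr: counts[attr] / n for attr in ATTRIBUTES}

# Three hypothetical workers label the same webcam frame:
scores = aggregate_labels([
    {"rainy", "foggy"},
    {"rainy"},
    {"rainy", "foggy", "dusk"},
])
# scores["rainy"] is 1.0; unchecked attributes like "sunny" score 0.0
```

Soft scores like these, rather than hard yes/no labels, give a learner something closer to "what it means to be perceived as rainy" than any single worker's opinion.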
"Now the computer has data to learn what it means to be sunset or what it means to be summer or what it means to be rainy—or at least what it means to be perceived as being those things," said Hays. The algorithm's next step is to take a new image and divide it up into different clusters of pixels, determining what changes ("local color transforms") should be made to those discrete regions to convert the given photo into the desired photo.
"If you wanted to make a picture rainier, the computer would know that parts of the picture that look like sky need to become grayer and flatter," Hays explained. "In regions that look like ground, the colors become shinier and more saturated. It does this for hundreds of different regions in the photo."
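Hays's "rainier" example can be caricatured in a few lines of code. This is a toy sketch, not the paper's method: the regions here are a crude brightness threshold standing in for the learned segmentation, and the two transforms (pull sky toward gray, push ground away from gray) are hand-picked stand-ins for learned local color transforms.

```python
# Toy illustration (not the Brown algorithm) of region-wise color
# transforms: split pixels into rough "sky" and "ground" clusters,
# then gray-and-flatten the sky while boosting ground saturation.
import numpy as np

def make_rainier(img):
    """img: float array of shape (H, W, 3), values in [0, 1]."""
    out = img.copy()
    brightness = img.mean(axis=2)
    sky = brightness > 0.6          # crude stand-in for a learned region mask
    gray = brightness[..., None].repeat(3, axis=2)

    # Sky region: blend colors toward their gray value (grayer, flatter).
    out[sky] = 0.7 * gray[sky] + 0.3 * img[sky]

    # Ground region: amplify deviation from gray (more saturated).
    out[~sky] = np.clip(gray[~sky] + 1.3 * (img[~sky] - gray[~sky]), 0, 1)
    return out

# A 1x2 "image": one bright sky-like pixel, one dark ground-like pixel.
img = np.array([[[0.5, 0.6, 0.9], [0.2, 0.4, 0.1]]])
rainy = make_rainier(img)
```

The real system does this for hundreds of regions, with transforms learned from the labeled webcam data rather than hard-coded.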
While the algorithm can add texture and color, it's limited by an inability to add full-on structures to an image; that is, it couldn't very well go from winter to spring because that would involve adding leaves to trees. "We can't synthesize that detail at this point."
Nor can the Brown algorithm determine the most appropriate LOL text for an image or replace the faces on iconic movie posters with cute cats. Those tasks remain the province of a human's touch. For now.