Every week, the social media hype-train seems to find new ways to sensationalize generative AI tools. Most recently, a new technique that allows users to produce optical illusions went viral, with some describing the results as AI-generated images with “subliminal” messages.
The technique, called ControlNet, essentially lets users have more control over the generated image by specifying additional inputs—in this case, letting you create images or words within other images. Some users characterized this as a form of “hidden message” that could be used to implant suggestions in the form of subtle visual cues, like a McDonald’s “M” logo appearing in the outlines of a movie poster.
For example, here is a viral tweet showing an image of cute cats that spell out the words “Gay Sex”:
On Twitter—which is now called “X” for some reason—algorithmically-boosted blue-check users hyped up the technique as either nefarious or revolutionary, posting various images with “hidden” messages. But the messages are typically not very subtle, and the method for creating them—while certainly novel—is fairly simple.
“I think the ‘subliminal message’ angle is a bit sensationalist/tech-bro-y,” Apolinário Passos, an AI researcher at Hugging Face, told Motherboard.
ControlNet works with the AI image-generating tool Stable Diffusion, and one of its early viral uses was generating fancy QR codes by feeding the QR code itself in as an input image. That idea was then taken further: some users developed a workflow that lets them supply any image or text as a black-and-white mask that gets blended into the generated image, kind of like an automated, generative version of the masking tool in Photoshop.
“What happened there was that this user discovered that if they used the QR Code ControlNet but instead of feeding it a QR code, they fed it some other black-and-white patterns, they could create nice optical illusions,” said Passos. “You can now send a conditioning image and the model blends in a pattern that satisfies that while still making a coherent image at the same time.”
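In practice, the workflow Passos describes comes down to a few lines with the open-source diffusers library. The sketch below is illustrative only: the checkpoint names, file names, prompt, and settings are assumptions for the sake of example, not the exact setup anyone quoted in this piece used.

```python
# Illustrative sketch only: model IDs, file names, and parameter values
# are example assumptions, not a recipe from the article.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A community-trained QR-code ControlNet checkpoint (example ID).
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster",
    torch_dtype=torch.float16,
)

# Attach the ControlNet to a standard Stable Diffusion pipeline.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The black-and-white "mask": any pattern, logo, or text, not just a QR code.
pattern = Image.open("pattern.png").convert("RGB").resize((768, 768))

# The prompt describes the ordinary-looking scene; the conditioning image
# is the pattern the model tries to satisfy at the same time.
result = pipe(
    prompt="a cozy mountain village in autumn, detailed illustration",
    negative_prompt="low quality, blurry",
    image=pattern,
    controlnet_conditioning_scale=1.3,
    num_inference_steps=30,
).images[0]
result.save("illusion.png")
```

Turning the conditioning scale up makes the embedded pattern more obvious; turning it down buries it further in the scene, which is the dial behind the look-again, squint-to-see effect going around on social media.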
So while ControlNet-produced images will probably stick around, the reality of the tech is far less sensational than the AI hype crew makes it out to be.
“Will this be a subliminal message nightmare where brands hide their logo/products into seemingly innocent images and manipulate us? Probably not,” said Passos. “Will we see this in ads very soon, but with a double-meaning where you look/look again/squint to see […] and have people talking about it? I think so!”