When Google released its Deep Dream generator, the company probably reckoned that people would get creative with the images they could push through the AI program, and expected some interesting output in the process. Though the "puppy slug" meme cemented the AI’s trippy image generation in the minds of the masses, it seems as if Deep Dream still hasn’t been fully exploited. That's where programmer and artist Johan Nordberg comes in: a self-described “computer guy who likes to make things,” Nordberg recently took the Deep Dream AI and, using it for two music videos, generated hypnotic loops that are like trips into a psychedelic, fractalised infinity.
In “Kundaleen,” the first music video for electronic artist ingMob (a.k.a. Raymond Weitekamp), Nordberg trained Deep Dream to identify images of animals. For the second, “New Calm,” he had Deep Dream look for buildings. What Nordberg does in the process is beautiful to behold. In both videos it’s as if a virtual camera deep-dives into what look like self-similar fractal images of animals and buildings, but which are actually subtly evolving objects and alien-looking landscapes. The technique gives Deep Dream’s surreal textures a three-dimensional quality that’s unlike anything you’ve ever seen (at least in waking life).
After wrapping up the arrangement on “Kundaleen,” Weitekamp was brainstorming ideas for generative visuals that would accompany the song. At the same time, he and his brother were talking about machine learning in the context of music and art. After Weitekamp’s brother turned him on to Nordberg’s video “Inside An Artificial Brain,” he reached out to the programmer and artist, who was keen to adapt his loop-based “inceptionism” technique for two music videos—one that would be based on the organic, and one on the “architected.”
“What is important to understand is that the Google Deep Dream generator actually is just a technique to visualise the patterns an artificial neural network has created to help it recognise images,” Nordberg tells The Creators Project. “You can think of an artificial neural network as a crude simulation of a brain. They are notable for their almost unreasonable effectiveness, finding use in just about anything, from self-driving cars to Google's image search, to IBM's Jeopardy! champion, Watson.”
With the videos, Nordberg wanted to explore the patterns that form in deep neural networks and to illustrate how they, reflecting our own brains, process visual information. Each frame in the two videos is created from one initial frame of random noise. The process forms a feedback loop where the previous output is fed back to the input.
“Basically I ask the network ‘Hey, what do you see in this image? Please respond with painting,’ and keep repeating that with every painting the network gives me, making the details more pronounced each time,” Nordberg explains. “The zoom effect is created by scaling the paintings up a little bit before giving them back, letting the network fill in the details that were lost.”
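The process Nordberg describes—repeatedly asking the network for a “painting,” scaling the result up slightly, and feeding it back in—amounts to a simple feedback loop. The sketch below, written from that description, shows the control flow only: `dream` is a stub standing in for an actual Deep Dream pass (the real step runs the frame through a trained neural network and amplifies the features it recognises), and all function names are illustrative, not Nordberg's code.

```python
import numpy as np

def dream(frame):
    """Placeholder for one Deep Dream pass. The real step would push the
    frame through a trained network and exaggerate whatever it 'sees';
    stubbed here so the loop runs without a model or GPU."""
    return np.clip(frame * 1.02, 0.0, 1.0)

def zoom(frame, scale=1.05):
    """Scale the image up a little, then crop back to the original size,
    so the network has to fill in the detail lost at the edges."""
    h, w = frame.shape[:2]
    # Nearest-neighbour centre-crop upscale via index mapping,
    # to keep the sketch dependency-free.
    ys = (np.arange(h) / scale + h * (1 - 1 / scale) / 2).astype(int)
    xs = (np.arange(w) / scale + w * (1 - 1 / scale) / 2).astype(int)
    return frame[np.ix_(ys, xs)]

def render(n_frames, size=64, seed=0):
    rng = np.random.default_rng(seed)
    frame = rng.random((size, size, 3))  # one initial frame of random noise
    frames = []
    for _ in range(n_frames):
        frame = dream(zoom(frame))       # previous output fed back as input
        frames.append(frame)
    return frames

frames = render(10)
```

Because each iteration both zooms and repaints, playing the frames in sequence produces the endless dive-into-the-image effect seen in the videos.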
“The processing was done on Amazon's Compute Cloud, and the software I wrote myself based off the code example Google released with their blog post,” says Nordberg, who created the video with ffmpeg. “I recommend anyone interested in creating deep dream images check out r/deepdream—they have a lot of information and links to services that create the images for you if you don't want to mess around with code.”
Click here to see more of Johan Nordberg’s work.