Here's How Google Deep Dream Generates Those Trippy Images
It's a digital cross between Van Gogh and Dali.
You might know Google Deep Dream from the trippy, layered images it produces; some look like a digital cross between Dali and Van Gogh on acid. The Deep Dream Generator is a computer vision platform that lets users feed photos into the program and transform them through an artificial intelligence algorithm. This video by Computerphile, an educational YouTube channel about computer science, explains just how Deep Dream works.
The platform uses convolutional neural networks, a machine-learning term for a feed-forward artificial neural network in which neuron connectivity patterns respond to overlapping regions of the visual field, to enhance patterns in photos with surreal effects.
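To make the "overlapping regions" idea concrete, here is a minimal, hypothetical sketch in plain Python (not Google's code) of a single convolutional filter sliding across a tiny image, so that neighboring output neurons respond to overlapping patches of pixels:

```python
# Toy convolution: one 2x2 filter slides over a 3x3 "image", so
# adjacent output neurons see overlapping pixel regions, much like
# overlapping receptive fields in the visual cortex.
# Illustrative only -- not Deep Dream's actual code.

image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
kernel = [
    [1, 0],
    [0, -1],
]  # a simple difference (edge-like) filter

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # Each output value comes from one 2x2 patch; neighboring
            # patches share pixels, i.e. they overlap.
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

print(convolve(image, kernel))  # → [[-4, -4], [-4, -4]]
```

Every output neuron here fires on a diagonal-difference pattern; a real network learns thousands of such filters rather than having them hand-picked.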
In simple terms, the input image passes through many layers of artificial neurons; each neuron computes a weighted sum of its inputs, and the result flows up through roughly three tiers of the network: low-, intermediate-, and high-level layers. The lower levels respond to basic features like edges, corners, and textures; maximizing those levels makes the picture end up looking more like a Van Gogh. The higher levels respond to more detailed, hierarchical input like buildings and other elaborate objects; when the higher levels are maximized, the picture looks more like a jumbled Dali.
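The "maximizing" step is gradient ascent on the input image itself: nudge the pixels in whatever direction makes the chosen layer fire harder. Here is a hypothetical toy version in plain Python, where the "layer" is a single hand-picked linear filter standing in for a trained deep network:

```python
# Toy sketch of Deep Dream's core trick: gradient ascent on the INPUT
# image to maximize a chosen layer's activation. The "layer" here is
# one made-up linear filter, purely for illustration.

filter_w = [0.5, -1.0, 2.0]   # pretend layer weights (hypothetical)

def activation(img):
    # The quantity being maximized: how strongly the layer fires.
    return sum(w * x for w, x in zip(filter_w, img))

def gradient(img):
    # d(activation)/d(pixel) for a linear layer is just the weights;
    # a real framework would compute this by backpropagation.
    return filter_w

img = [0.1, 0.1, 0.1]         # a tiny 3-pixel "image"
step = 0.1
for _ in range(10):
    g = gradient(img)
    # Ascend (add the gradient), rather than descend as in training.
    img = [x + step * gx for x, gx in zip(img, g)]

print(img)  # pixels drift toward whatever excites the layer most
```

In the real system the image is what changes, not the network's weights; the trained network stays frozen, which is why the hallucinated shapes reflect what the network already "knows".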
Google Deep Dream uses Python code fed into Caffe, a deep learning framework designed for expressing network architectures quickly. The fun of Google Deep Dream is watching how a machine interprets a photograph and turns it into what looks like a painted piece of art. If you try it with a picture of your face, however, you may end up with endless swirls of your entire visage inside your cheeks, or thousands of tiny reproductions of your face scattered across the image: as scary-looking as it is trippy.