A machine learning workshop for artists in Milan spawned a project that uses neural networks to make city maps.
Called Invisible Cities, the project uses neural networks, computer systems loosely inspired by the human brain, to translate map tiles into generated satellite images for various cities, including Milan, Venice, and Los Angeles. Through this technology, the artists can also hand-draw sketches of imaginary cities and feed them into a generative model trained on real city maps.
The project was born out of a workshop at Opendot Fablab, a digital fabrication laboratory, as well as a prototyping and design platform. It was a collaboration involving seven different artists.
"In a nutshell, what we are doing with this project is image-to-image translation. Just as a text can be translated into another language while maintaining all the information, images can be translated into different representations of the same semantic content," said Gabriele Gambotto, CTO and cofounder of the Leva Engineering consultancy and a participant in the project.
Image: Invisible Cities
The algorithm they used pits two neural networks against each other. The first network is the "discriminator," which learns to tell whether a satellite image is the genuine counterpart of a map. The second is the "generator," which learns to create a satellite image from a given map realistic enough to fool the discriminator.
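The team's actual model is a deep convolutional network far beyond what fits here, but the adversarial idea can be sketched on a toy problem. In this hypothetical setup, a 1-D number stands in for an image: each "map" value x has a true "satellite" value y = 2x + 1, a linear generator tries to learn that mapping, and a logistic discriminator tries to separate real pairs from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def real_pairs(n):
    # Ground-truth map -> satellite pairs (a stand-in for real imagery).
    x = rng.uniform(-1.0, 1.0, n)
    return x, 2.0 * x + 1.0

a, b = 0.0, 0.0   # generator parameters: y_hat = a*x + b
w = np.zeros(3)   # discriminator: logistic model on features [x, y, 1]
lr = 0.05

for step in range(2000):
    x, y_real = real_pairs(64)
    y_fake = a * x + b
    ones = np.ones_like(x)

    # Discriminator step: push real pairs toward label 1, fakes toward 0.
    f_real = np.stack([x, y_real, ones], axis=1)
    f_fake = np.stack([x, y_fake, ones], axis=1)
    p_real = sigmoid(f_real @ w)
    p_fake = sigmoid(f_fake @ w)
    grad_w = (f_real.T @ (p_real - 1.0) + f_fake.T @ p_fake) / len(x)
    w -= lr * grad_w

    # Generator step: adjust (a, b) so the discriminator rates fakes as real.
    p_fake = sigmoid(np.stack([x, a * x + b, ones], axis=1) @ w)
    g = -(1.0 - p_fake) * w[1]   # gradient of -log D(fake) w.r.t. y_hat
    a -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

print(f"generator learned: y ≈ {a:.2f}*x + {b:.2f}")
```

On such toy problems the two networks tend to circle an equilibrium rather than converge cleanly, which is why real image-to-image models add a direct reconstruction loss on the paired training data alongside the adversarial one.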
The model was trained on about 500 pairs of images, each consisting of a map tile and its corresponding satellite view, he said. Once trained, the system can take any map as input and synthesize an aerial view in the style of the selected city.
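Setting the adversarial part aside, the paired-training workflow itself is simple: fit a model on (map, satellite) pairs, then apply it to an input it has never seen. A minimal sketch, with made-up numeric features standing in for image content and an ordinary least-squares fit standing in for the team's generative model:

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 synthetic training pairs: three "map" features per tile (imagine
# building, road, and water density) and two "satellite" outputs (imagine
# mean color channels). The hidden relationship is linear plus noise.
true_W = np.array([[0.8, 0.1],
                   [0.2, 0.5],
                   [-0.3, 0.9]])
maps = rng.uniform(0.0, 1.0, (500, 3))
sats = maps @ true_W + rng.normal(0.0, 0.01, (500, 2))

# "Training": recover the map -> satellite relationship from the pairs.
W_fit, *_ = np.linalg.lstsq(maps, sats, rcond=None)

# "Inference": feed in a hand-drawn sketch the model has never seen.
sketch = np.array([[0.9, 0.3, 0.1]])
predicted = sketch @ W_fit
print(predicted)
```

With a few hundred pairs the fit recovers the underlying relationship almost exactly, which is the same reason the project's model needed its 500 map/satellite pairs per city: the pairs are what define the translation to be learned.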
"We trained a neural network (a kind of artificial intelligence that resembles the way human brain cells work) by showing it many satellite images and the elements that each image contains (buildings, parks, roads, rivers…). This way, the neural network 'learned' to recognize and distinguish these elements in the map," said designer Damiano Gui, also involved in the project.
Image: Invisible Cities
"Then we asked the neural network to generate new maps based on the things it had learned: for example, re-drawing a city using the elements of the style of another city, or drawing an imaginary satellite view of a city that doesn't exist, because we had just sketched its elements by hand."
The project is built around an algorithm that learns the relationship between two corresponding images, said artist and programmer Gene Kogan, also involved in the project: between map tiles and satellite photos, between daytime and nighttime photos, or between sketches and photographs. "Once you have an algorithm which can reconstruct one from the other (in either direction), you can use it to make new ones from new inputs," he said.
Kogan added that the ability to create algorithms that learn the relationships between corresponding images, such as map tiles and satellite images, has a broad range of implications. "In the future, we will be able to quickly generate detailed images or 3D meshes from simple sketches, speeding up the design process in many fields: architecture, cartography, industrial design, fabrication, and others," he said.
Kogan said the team hopes to provoke more creative experimentation with tools like these, allowing artists and designers to help inform the process of "making the next generation of interfaces" with these technologies.
Moreover, in Invisible Cities you can detect a "subversion of technology," added Michele Ferretti, a PhD candidate in geography at King's College London, who participated in the project. "In this digital age, maps are a product of code, something which is often perceived as dry and technical. But by creatively using code you might end up with completely unexpected outcomes. We had drawn the cities, but it was the networks that ultimately 'imagined' them."