Cue Up the 'Pulp Fiction' Dance Scene: This App 3D Projects 2D Movies in Your Living Room
Volume is a new experimental project from artists Or Fleisher and Shirin Anlen.
John Travolta takes off his shoes, hitches up his pants, and saunters over to face Uma Thurman. Then, they dance. It’s an iconic scene from Pulp Fiction set in a cheesy 50s diner, but in this case, Travolta and Thurman danced in an average-looking living room:
This video was shot through Volume, a new machine learning app that transforms 2D images into 3D and projects them into the real world via an augmented reality headset. It's an experimental project created by artists Or Fleisher and Shirin Anlen, which they hope will inspire people to create new forms of storytelling.
The idea blossomed after the pair worked on an interactive virtual reality documentary called Tzina, which let viewers explore a virtual 3D space and interact with interview subjects. They started to get interested in volumetric film: essentially 3D film in a real-world environment (think holograms).
“In the process of creating this documentary we learned a lot about the difficulties and complexities such work involve,” Fleisher, a graduate student at New York University’s Tisch School of the Arts, told me in an interview. “One of the questions we asked ourselves repeatedly was how can we bring 200 plus years of 2D documentation into the volumetric era?”
To create Volume, Fleisher and Anlen used a deep convolutional neural network, which was "trained against hundreds of thousands of color and depth images until it reaches a point where it is able to predict the depth on its own," Fleisher said. Once the neural network could predict the depth of different scenes well, they used their existing rendering software to reconstruct the film in augmented reality in the viewer's living room. They've also experimented with converting other classic films into 3D spaces:
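The pair haven't released their model, but the core idea they describe, regressing a per-pixel depth map from ordinary pixels by minimizing the error against ground-truth depth images, can be sketched with a toy one-layer "network" in NumPy. This is a minimal illustration, not their architecture: it uses a single 3x3 convolution and a synthetic color/depth training pair, whereas the real system is a deep network trained on hundreds of thousands of image pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_depth(img, kernel, bias):
    """Slide one 3x3x3 kernel over an RGB image to produce a depth map."""
    H, W, _ = img.shape
    depth = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            depth[i, j] = np.sum(img[i:i+3, j:j+3, :] * kernel) + bias
    return depth

# Synthetic "training pair": an RGB frame plus its ground-truth depth map,
# generated here from a hidden kernel so the toy model can actually learn it.
img = rng.random((16, 16, 3))
true_kernel = rng.normal(size=(3, 3, 3)) * 0.1
target = predict_depth(img, true_kernel, 0.5)

# Train by gradient descent on mean-squared depth error.
kernel, bias, lr = np.zeros((3, 3, 3)), 0.0, 0.02
losses = []
for step in range(200):
    pred = predict_depth(img, kernel, bias)
    err = pred - target
    losses.append(np.mean(err ** 2))
    # Gradient of the mean-squared error w.r.t. the kernel and bias.
    grad_k = np.zeros_like(kernel)
    for i in range(err.shape[0]):
        for j in range(err.shape[1]):
            grad_k += 2 * err[i, j] * img[i:i+3, j:j+3, :]
    kernel -= lr * grad_k / err.size
    bias -= lr * 2 * np.mean(err)

print(f"depth loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

At scale, the same loop (predict depth, compare to measured depth, adjust weights) is what lets a trained network guess depth for frames, like those of Pulp Fiction, that were never captured with a depth sensor.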
It’s still early in development but the pair aims to create a web app that anyone can use and is able to process lots of different kinds of input, including 2D images and moving sequences. While their main focus is on storytelling, Fleisher said that this kind of app could help advance other technology as well.
“It could be useful for self-driving cars, forensic investigations and (slightly on the futuristic side) uses like telepresence,” Fleisher said. “It seems the more accurate it gets, the more industries and uses seem relevant. This is definitely the beginning.”