These days, the worlds of film, gaming, and interactive art seem to be collapsing in on one another: video games are becoming nearly indistinguishable from CGI-enhanced movies, while films and art installations increasingly invite viewers to play and interact rather than settling for a passive, one-size-fits-all experience. Last December, The Creators Project teamed up with art and technology center Eyebeam and visual effects company Framestore to explore this converging landscape of visual and narrative forms. For the New Cinema hackathon, we brought together five teams of filmmakers, creative coders, artists, designers, and motion graphics specialists to investigate the future of cinematic storytelling.
One team began its exploration of the future of cinema by looking at its past. Cinema's earliest incarnations bear little resemblance to the darkened movie theaters we've come to know and expect today. “Early cinema was the ‘cinema of attractions’—it emerged out of physical environments like carnivals and amusement parks… and was an embodied experience used to create illusion,” explains Greg Borenstein, one third of the team behind the New Cinema project We Make The Weather, an interactive work in which the viewer’s breath controls a virtual figure crossing a never-ending bridge over a body of water.
We Make The Weather is a breath-controlled interactive work where the user controls a virtual figure crossing a never-ending bridge.
Borenstein and his teammates, Karolina Sobecka and Sofy Yuditskaya, started out with a loose narrative in mind: inspired by the recent events of Hurricane Sandy, they wanted to create an experience that would touch on storms, flooding, and rushing water, using both live action and 3D modeled materials. In We Make The Weather, the viewer wears a headset with a microphone sensor that monitors their breath, tracking its ebb and flow and its intensity, then uses this information to drive the visuals and sound within the 3D animated landscape.
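The article doesn't detail how the microphone signal is turned into a control value, but a common approach is to compute a smoothed loudness envelope and treat upward threshold crossings as exhales. The sketch below is a minimal illustration of that idea, not the team's actual implementation; the frame size, smoothing factor, and threshold are illustrative assumptions.

```python
import numpy as np

def breath_envelope(samples, frame_size=1024):
    """Compute a smoothed RMS loudness envelope from raw microphone samples.

    Each frame's root-mean-square level approximates breath intensity:
    loud frames correspond to exhales, near-silent frames to pauses.
    """
    n_frames = len(samples) // frame_size
    frames = samples[:n_frames * frame_size].reshape(n_frames, frame_size)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    # Exponential smoothing so downstream visuals don't flicker frame to frame.
    env = np.empty_like(rms)
    acc = 0.0
    for i, level in enumerate(rms):
        acc = 0.8 * acc + 0.2 * level
        env[i] = acc
    return env

def detect_exhales(envelope, threshold=0.1):
    """Return frame indices where an exhale begins (upward threshold crossing)."""
    above = envelope > threshold
    return np.where(above[1:] & ~above[:-1])[0] + 1
```

The envelope itself can drive continuous parameters (wind strength, water turbulence), while the discrete exhale events can trigger one-off actions in the scene.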
Yuditskaya tests out the breath-controls for We Make The Weather.
“The departure point for the piece was a reflection on how much influence we humans have on our environment, and on the inevitable contradictions in our attempts to control it,” says Yuditskaya.
For the project, Sobecka constructed a barren, glacier-like virtual environment in Unity 3D, a game engine popular with interactive artists and visual effects specialists. In it, a ghostly, anonymous figure tries to cross a bridge to reach the island on the other side, but can never get there, as new planks are added to the bridge with every exhale. Yuditskaya handled the sound design, experimenting with white noise loops in PureData to build the tumultuous storm and water soundscape.
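The bridge mechanic can be understood as a simple feedback loop: the walker advances at a steady rate, but each detected exhale lengthens the bridge, so breathing keeps the far end out of reach. The toy model below re-creates that logic in Python for illustration; the team's actual Unity implementation and its numbers are not documented here, so the plank counts and walking speed are hypothetical.

```python
class EndlessBridge:
    """Toy model of the We Make The Weather bridge mechanic: the walker
    advances steadily, but every exhale appends a plank, so as long as the
    viewer keeps breathing, the figure never reaches the far end."""

    def __init__(self, planks=10):
        self.planks = planks      # current bridge length, in plank units
        self.position = 0.0       # walker's position along the bridge

    def step(self, exhaled):
        """Advance the simulation one tick; `exhaled` is the breath trigger."""
        if exhaled:
            self.planks += 1      # each breath extends the bridge by one plank
        self.position = min(self.position + 0.5, self.planks)

    def reached_end(self):
        return self.position >= self.planks
```

Because each exhale adds a whole plank while the walker covers only half a plank per tick, steady breathing guarantees the crossing never completes, which is the contradiction at the heart of the piece: the act of living (breathing) is what keeps the goal unreachable.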
Sobecka constructed a barren, glacier-like virtual environment in Unity 3D.
The team also incorporated live captured footage of water into the project through a technique called seam carving, a form of content-aware image resizing that expands or contracts an image by inserting or removing connected paths of low-importance pixels (seams), rather than uniformly scaling or cropping it.
Their biggest challenge, however, was trying to merge both interactive and narrative elements. “In some ways, the interactive technological context acted as this acid that dissolved storytelling ideas you put into it,” says Borenstein. Nevertheless, he believes that this diffusion of the language of cinema into other mediums, and its adoption of influences from other mediums, is the future—or at least a future—of cinema.
We Make The Weather is on view at Eyebeam Art + Technology Center in New York City through February 2nd. Visit NewCinema.net to read more about the hackathon and the other projects developed there.