Formally trained in print and documentary, journalist Nonny de la Peña has a history of telling human stories through a multitude of communication tools and media. The award-winning filmmaker and journalist has twenty years of reporting experience, but within the last decade, she's found a powerful storytelling method in an unexpected place: eschewing print, television, and even the Internet, de la Peña believes virtual reality is the most powerful storytelling method at modern media's disposal.
De la Peña has been described as a pioneer of "Immersive Journalism," and now serves as a Senior Research Fellow in the emerging field at USC's Annenberg School for Communication and Journalism. Her craft specifically focuses on creating innovative VR projects using custom-built, motion-capture setups that allow users to walk through 3D-generated recreations of non-fictional events. She told The Creators Project, "I believe VR will fundamentally change the landscape of how we experience many stories."
As an example, de la Peña and her team used footage of a man collapsing while waiting in line at a food bank to digitally reconstruct the event—the resulting virtual reality story, entitled Hunger in Los Angeles, gives viewers the firsthand experience of the tension and intensity of the actual event, far beyond the emotional stimuli of any news broadcast.
"Most people don’t have a lot of VR experience," says Paisley Smith, an assistant to de la Peña. "It's a disconnect from the world you know, and you're immersed somewhere else. In this way you can be more invested in the experience. It’s a very powerful, distraction-free storytelling technique."
Hunger rapidly became one of the most talked-about pieces at the 2012 Sundance Film Festival, and de la Peña has been devoted to non-fiction virtual reality storytelling ever since. Her other virtual reality stories tackle important human rights issues of today: the first, entitled Project Syria, recounts the bombing of Aleppo, Syria, followed by an inside look at a Syrian refugee camp. In an in-depth profile of the project on Motherboard, writer Christopher Malmo describes the endeavor as "a perfect example of what's possible when new technologies are applied to reporting. Using VR renders the project immersive, going beyond two-dimensional print or video coverage to physically place the viewer into the story. In doing so, they stop being a mere viewer, and much more of a witness."
The second VR project, Use of Force, bears witness to police brutality on the US/Mexico border. These stories often get covered on broadcast, print, and digital news outlets, but de la Peña believes that virtual reality makes these experiences more personal—a new way for people to experience news outside information-saturated media markets. Curious about the future of immersive journalism, we spoke to Nonny de la Peña about technology, non-fiction storytelling, and the art of communicating in VR:
The Creators Project: When did you first realize that virtual reality technology was mature enough for journalism and non-fiction storytelling?
Nonny de la Peña: Eight years ago, after building a virtual Guantanamo Bay Prison in Second Life with artist Peggy Weil, I began to think of how virtual reality could be applied to other important news stories. Soon after, we were lucky to collaborate with Mel Slater and Maria Sanchez Vives at the Event Lab at the University of Barcelona on a piece that put people “in the body” of a detainee in a stress position, in order to offer a visceral report, using FOIA-released documents, on the way we tortured prisoners. While at their lab, I saw another powerful piece they had created in order to study the bystander effect, which put you in the middle of a bar fight. That was when I realized the power of creating pieces that put audiences on scene using VR goggles and full tracking. Since then, I have focused exclusively on using virtual reality for important narratives.
When developing such rich virtual reality experiences, where do you begin? How do you choose which topics best lend themselves to the medium?
My work in virtual reality is unusual in that I tell linear narratives which can’t necessarily be altered by audience interaction, so the first question I need to address is how a narrative can unfold AROUND a participant, as if they are standing on a theater stage. I only have limited cues to direct attention—audio, abrupt movements, gathering of people in the scene—as fully immersive virtual reality allows the audience to look, walk or even run anywhere they choose, so the topic needs to allow the type of robust design that can be experienced from any angle. While these pieces can be created for a variety of stories, I have always been driven by the intersection of investigative reporting and human rights and my work to date reflects that interest.
How is your process as a storyteller different in immersive VR, as opposed to print or documentary?
In my many years working in print and documentary I have enjoyed extensive editorial control. I can very specifically edit a story so that the “cuts” read or view well, without worrying what might happen to someone’s body if they are “experiencing” the story. Virtual reality can definitely cause what’s known as sim sickness, a feeling like motion sickness. So when you design a piece that makes your audience feel as if they are actually present on scene, you have to respect that you have brought their entire body along for the ride. This type of spatial narrative requires very specific considerations for design, and the key is to imagine truly standing in the middle of the story.
The other crucial element that always needs to be considered is the speed of the refresh rate on the goggles. It has been so great working with some of the top technologies committed to making imagery on the screen track any viewer movement without the lag that can cause sim sickness. I have had incredible support from the motion tracking camera company Phasespace and the whole team at USC’s ICT MxR Lab headed by VR veteran Mark Bolas. Working with Palmer Luckey and Oculus Rift has also been a game changer–we are now able to offer large audiences access to these pieces.
However, there is a key similarity between traditional cinema, television and news reporting and the type of immersive virtual reality experiences I make: If the audio is bad, the piece will be problematic no matter how good the visuals are.
Can you walk me through the process of translating research into a digital world?
When I begin these pieces, I completely rely on my journalistic training and background to gather the necessary elements. I use traditional methods of researching important stories and then begin collecting the images and audio that act as the fundamental scaffolding upon which I build. For example, in Project Syria, I was shown a video of a young girl singing on a street in Aleppo when a mortar shell hit. We then had to gather a dozen mobile phone videos taken before the explosion and during the aftermath, as well as photographic material and Google Earth images to anchor the street where the event occurred. I then sent a team into a refugee camp on the border of Syria to collect material about children living in the camp in order to inform the second half of the piece.
Throughout, I had to imagine what it would be like to be standing there when events transpire—will the material I want to duplicate digitally offer a deeper understanding of an event? Once I feel I can hit that note, the modeling in Maya and 3ds Max begins. We start designing the motion capture session and ultimately translate all of these digital elements into the game engine to make the experience “feel real.”
How does immersive VR affect people differently, compared to stories they read or watch? What kind of reactions are you looking for from your audiences?
One person told me that even two weeks after experiencing Use of Force she still felt the memory of the story “in her body.” I think that’s the key to the difference—it’s a very visceral feeling to go through a well-crafted, fully immersive piece. When I build these, I set out to make people understand what happened in a more profound way by allowing them to become witnesses to an event. I have now put thousands of people through my pieces and it is amazing to watch people gasp when they put the goggles on and they suddenly have been transported to another location—they know they are here but they feel like they are there too. Or else they try to interact with the virtual environment as if it is real.
These kinds of reactions tell me that the piece is working. I have even had folks pull their mobile phones from their pockets in a knee-jerk reaction when the seizure victim collapses in Hunger in Los Angeles. Of course they put it back immediately when they catch themselves, but the reality becomes that strong.
Do you think the introduction of VR into journalism can change the way people think about news?
I don’t know if it will change the way people think about the news, but it will definitely change the way they receive their news. Just as the introduction of radio and television changed our feelings about the world we live in, these new delivery systems will too. I believe VR will fundamentally change the landscape of how we experience many stories.
What technological advancements do you hope will improve your practice?
We are already seeing the advent of many tools that allow quick, photo-real models of environments inside and outside buildings and along streets. When we can do that with people, including rapidly capturing their motions in a natural way, the way 360 cameras are beginning to do today, my life will be so much easier! Currently, 360 cameras don’t yet allow people to move throughout the experience and see the environment and people from any angle or direction. Viewers are still fixed to a chair. However, I expect photo-real and real-time to merge in the very near future.
How was your approach different in creating Project Syria, versus Hunger in Los Angeles?
When I made Hunger in Los Angeles, no one had ever used VR for telling a nonfiction story in this way. I had no budget, funding or backer and it took two years for me to beg the favors necessary to get it built. I spent about $700 of my own money and had amazing people helping me out. When it premiered at Sundance, I think I was as surprised as everyone that it really, truly worked.
The World Economic Forum commissioned Project Syria. Elizabeth Daley, Dean of the USC School of Cinematic Arts, brought the head of WEF, Klaus Schwab, to experience Hunger. He took off the goggles and commissioned Project Syria on the spot. But I really ended up with only about six weeks to build it from scratch in order to make the January Davos deadline, and it was all done at a level of intensity that I hope will be a rare occurrence.
Another crucial difference is that with Hunger, I knew I wanted to do a story that exemplified the hunger crisis in America and the strain on food banks. That gave me time to record many hours of material until I captured the right scene to build with. Not only did I not have the luxury of time with Project Syria, but the furor of the events also caused tremendous problems. For example, I reached out to a photographer to potentially hire him or utilize his existing archive, and twenty minutes after I sent the email, word came across Twitter that he had been kidnapped. All of this was happening while the aid agencies were calling Syria the worst crisis of our lifetime—and I needed to make the piece convey that urgency to the level of world leaders who attend WEF. I can tell you, the pressure was intense, but once again, we succeeded beyond my wildest imagination.
De la Peña's films will be shown at the Future of Storytelling Summit in New York City on Oct. 1-2, where she will also be giving a talk.