Sensors Bring You Face to Face with Your Virtual Reality Avatar

You've never been closer to being one with your digital self.

by Beckett Mufson
May 25 2015, 7:30pm

Your virtual self may soon get a sensor-enabled facelift, thanks to a team of USC researchers working with Facebook's Oculus division to solve one of virtual reality's fundamental problems as it exists now: facelessness. Using a combination of pressure sensors embedded in the headset's foam liner and a Kinect-like motion capture camera mounted out front, the setup creates a real-time 3D model of your face and maps it onto anything from a picture of a monkey to your personal 3D-scanned game avatar.

USC assistant professor Hao Li says that the technology's ability to convey facial expressions could be the first step toward something like a functional VR Facebook. "To get a virtual social environment, you want to convey this behavior to other people," he tells the MIT Technology Review. "This is the first facial tracking that has been demonstrated through a head-mounted display."

The USC team has a long way to go, though: the Facial Performance Sensing Head-Mounted Display still requires an intensive calibration sequence to learn what your face looks like, and its visual sensors are mounted on a bulky arm that dangles from the headset. But with some streamlining, virtual face-to-face conversations might become as easy as Skype. Philip Rosedale, founder of the social VR construct Second Life, reportedly calls the device "really cool." His current project, High Fidelity, is an open source network of virtual worlds built on the idea that accurate body tracking will popularize virtual reality on a scale Second Life never achieved, and it could make good use of face-tracking technology.

Other experimental interfaces, like the hand-tracking Leap Motion, the eye-tracking FOVE headset, and the immersive VR theme park VOID, are breaking down the barriers between the virtual and physical worlds. But facial mapping has challenged digital designers for years. Recent progress from USC's Institute for Creative Technologies, outlined in our documentary Hollywood's Digital Humans, has opened up the potential to capture and design uncannily accurate digital visages, which could combine with Li's work to build a virtual forum with the scope of social media and the intimacy of an actual conversation.

Watch Hollywood's Digital Humans below:

The USC team behind the Facial Performance Sensing Head-Mounted Display is Hao Li, Laura Trutoiu, Kyle Olszewski, Lingyu Wei, Tristan Trutna, Pei-Lun Hsieh, and Chongyang Ma. Visit Hao Li's website for more of his virtual reality research.

