


The Secret to Creating Star Wars' Snoke and Maz Kanata

ILM London explain how they brought the two main computer-generated characters from 'The Force Awakens' to life.

Image courtesy of Lucasfilm/Disney and ILM

Star Wars: The Force Awakens was, by all accounts, a huge success. Fans went in droves to cinemas to see it, and its true measure was not only all the people telling you how great it was (and it was great) but also the immediacy with which it filtered into popular culture. Memes, sketches, fan films, and more were made, and now everyone is counting down the long, lonesome days until Episode VIII.


One of the reasons it was such a hit was that it felt like the Star Wars universe; the droids and alien lifeforms and the "used future" look of the original were all happily in place. A lot of that was down to the fact that JJ Abrams used practical effects in the film, especially for the characters—but there were two main characters who were, notably, entirely computer-generated from motion-captured performances. These were brought to life by VFX giant and animation studio Industrial Light & Magic, founded by George Lucas when the very first Star Wars movie was made.

Last month, London played host to Escape Studios' VFX Festival 2016, and some alumni from ILM were there, including Scott Pritchard, a 2D sequence supervisor, who discussed ILM London's work on Star Wars: The Force Awakens. The studio exclusively created the two aforementioned computer-generated characters from the film: Supreme Leader Snoke and Maz Kanata, played by Andy Serkis and Lupita Nyong'o, respectively.

In the film, Snoke, who commands the First Order, is a giant seated hologram with a scarred face and a lot of secrets, an influential and powerful dark-sider who trained Darth Vader 2.0, Kylo Ren. Maz Kanata, on the other hand, is a millennium-old force-sensitive space pirate who helps Luke Skywalker 2.0, Rey, recover an important lost item which Kanata somehow got ahold of.

Anyway, two intriguing, mysterious characters, who no doubt have much bigger roles to play in these sequels, were brought into CG-existence thanks to the toil of ILM's London VFX team. We sent the studio some questions to find out how you turn the subtleties of an actor's performance into a fully computer-generated character.


You've got to have someone to look up to. Supreme Leader Snoke in #StarWars #TheForceAwakens.

A photo posted by Star Wars (@starwars) on Jan 3, 2016 at 4:54pm PST

The Creators Project: I read that some of the inspiration for Snoke's hologram throne came from the Lincoln Memorial. What were some of the qualities ILM wanted to imbue the character with, and how did they go about doing that with visual effects?

Scott Pritchard: JJ asked us to design a mysterious, almost diaphanous look for Snoke. He wanted to draw an interesting contrast between his damaged, fragile appearance and Andy's very powerful performance. We used reference images of jellyfish to explore ideas of a translucent quality to his skin. To enhance this we built a complete internal structure for his head, including a skull and blood vessels, which could be glimpsed occasionally through certain areas.

And for Maz Kanata, again, what were some of the qualities that CGI allowed you to imbue the character with?

For Maz, the decision to go for a CG approach was to allow the fine nuances of Lupita Nyong'o's performance to come through in the character. We built a very intricate digital character to accommodate this—for instance, the bulge of the cornea of her eyes pushed up on her eyelids. It was tiny details such as this that allowed us to really dial in the subtleties of Lupita's performance. Another key aspect of Maz was the 'zoom' look for her goggles when she studies Finn and sees his true intentions. Here we explored different ideas to achieve a realistic look of extreme refraction while also retaining the performance of her eyes.


What was the motion capture system used to capture the performances of Lupita Nyong'o and Andy Serkis?

The motion capture process began with capturing a range of specific facial expressions with both Andy and Lupita, using the Medusa facial capture rig developed by Disney Research in Zurich. Medusa captures facial detail and movement with incredible intricacy, and allowed us to build a library of expressions for each actor. To use Lupita as an example: for each expression, she started with a neutral face, then moved into that expression, and then back to neutral. Medusa has the unique feature of capturing not just the expression but also the movement into it, tracking individual pores and wrinkles. This allows us to capture tiny nuances of the facial muscles, and all of this detail was stored in our high-resolution library of face shapes.

Can you explain a little about the processes used in turning an actor's performance into a digital character?

Both actors wore tracking dots on their faces, together with a lightweight four-camera head rig which captured their performance on set. Breaking down a performance into a sequence of expressions, we then matched each expression of their performance to its high-resolution counterpart from the Medusa sessions. This process gave us a highly detailed digital reproduction of their faces for each take.
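The matching step Pritchard describes—pairing each on-set expression with its closest counterpart in a pre-captured library—can be pictured as a nearest-neighbour lookup. The toy sketch below is purely illustrative and is not ILM's solver; the expression names, the tiny four-number "marker" vectors, and the plain Euclidean distance are all simplifying assumptions.

```python
import numpy as np

# Toy stand-in for a Medusa-style expression library: each named expression
# is a flattened vector of tracked facial-marker positions (just 4 numbers
# here for brevity; a real capture would have thousands).
library = {
    "neutral": np.array([0.0, 0.0, 0.0, 0.0]),
    "smile":   np.array([0.8, 0.1, 0.9, 0.2]),
    "frown":   np.array([-0.7, 0.3, -0.6, 0.1]),
}

def match_expression(frame, library):
    """Return the name of the library expression closest to a captured
    frame, by Euclidean distance between marker vectors."""
    return min(library, key=lambda name: np.linalg.norm(frame - library[name]))

# One frame of tracking-dot positions from the head rig
captured = np.array([0.75, 0.15, 0.85, 0.25])
print(match_expression(captured, library))  # -> smile
```

A production pipeline would of course blend between library shapes rather than snap to one, but the core idea—resolving a noisy on-set capture against a high-resolution reference set—is the same.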


#LNTopTen- Being a part of the @StarWars universe has been an unforgettable experience! All thanks to George Lucas for creating a world in which we can all escape and find our inner children together. Shout-out to #Thunder! #TheForceAwakens

A photo posted by Lupita Nyong'o (@lupitanyongo) on Dec 29, 2015 at 7:46am PST

Facial expressions are obviously a very important part of the final performance. In that regard, how are facial expressions mapped and morphed in a satisfying way from actor to the non-human forms of Snoke and Maz?

Once we had our digital high-resolution reproductions of Lupita and Andy's performances, we transferred the movements over to the characters. For Maz, there was the additional challenge of the large difference between Lupita and Maz's head structures. This required an additional layer of hand-keyed animation work to ensure that Lupita's performance came through in Maz's face.

So now we have our animated characters ready to use. For a typical shot, we start with the filmed footage, also known as the 'plate.' We generate a digital replica of the camera used to film this plate. Rotopaint artists remove the actor from the plate—this is often painstaking frame-by-frame work. Our lighting artists analyze the plate to design a system of lights that replicates the lighting conditions of the scene; these lights illuminate the animated character. They then render this setup, which outputs the view from the camera as a series of 2D images. Finally, our compositors layer these renders together with the 'cleaned-up' plate, paying close attention to aspects such as color balance and photographic and optical artifacts. The aim of this last step is to create one seamless image that looks like it was photographed by a single camera.
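The final layering step Pritchard mentions—putting the rendered character over the cleaned-up plate—rests on the standard alpha "over" operator used throughout compositing. The sketch below is a minimal illustration of that one operator, not ILM's pipeline; the function name and the one-pixel "images" are invented for the example, and it assumes premultiplied colour values in the 0–1 range.

```python
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb):
    """Layer a premultiplied foreground render over a background plate.

    fg_rgb:   (H, W, 3) premultiplied foreground colours, floats in [0, 1]
    fg_alpha: (H, W, 1) foreground coverage (alpha)
    bg_rgb:   (H, W, 3) background 'plate' colours
    """
    # The classic 'over' operator: out = fg + (1 - alpha) * bg
    return fg_rgb + (1.0 - fg_alpha) * bg_rgb

# Tiny 1x1 example: a half-opaque grey pixel rendered over a white plate
fg    = np.array([[[0.25, 0.25, 0.25]]])   # already premultiplied by alpha = 0.5
alpha = np.array([[[0.5]]])
plate = np.array([[[1.0, 1.0, 1.0]]])
print(composite_over(fg, alpha, plate))    # each channel: 0.25 + 0.5 * 1.0 = 0.75
```

Real comps stack many such layers (character, shadows, atmosphere, lens artifacts), but each layer lands on the plate through this same operation.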


GIF by author

What are some of the biggest challenges when taking something from the concept art phase to the final on-screen character?

Ultimately everything was driven by the character's performance—an audience has to be able to connect with them on an emotional level. This is true whether the character is realized using a puppet, a performer in a suit, or a fully CG approach. It's always the biggest challenge in creating any character.

What kind of cutting edge or experimental technology are you aware of that could be adapted for use in the future for these kinds of hybrid digital/physical performances? 

Virtual and augmented reality technologies undoubtedly have a large part to play in the future of cinema. ILMxLAB is a branch of ILM looking at using these technologies to better enable filmmakers to design and tell their stories. They will also be developing virtual reality, augmented reality, real-time cinema, theme park entertainment, and narrative-based experiences for future platforms. They're looking at ways of breaking down the 'fourth wall' of cinema to create new immersive experiences. It's a really exciting future for cinema!

Click here to visit Industrial Light & Magic's website.

Related:

Disney Reveals Concept Illustrations for 'Star Wars' Theme Park

A Force-Sensitive Teen Kicks Imperial Ass in This Fan-Made Star Wars Short

On Novelizing 'The Force Awakens'