Animators: Awkward 'Andromeda' Animations Are Automation Amok

The automation of highly-skilled animation labor hits a speed-bump on the way to Andromeda.

by Rob Zacny
Mar 27 2017, 3:00pm

There's no need to ask whether Mass Effect: Andromeda has bad animations; we all know it has some bad animations. They aren't quite omnipresent, but it's undeniable that Andromeda is clumsy and awkward in ways that far outstrip its predecessors.

So where did Andromeda go wrong? As its animations became both a laughingstock and a point of controversy, animators with industry experience started sharing their expert opinions on exactly how and why Andromeda often looked like the community theater version of Mass Effect. While the notion that Andromeda's team was "rushed" to finish the game plays a part in these explanations, the more important point is how heavily developers rely on automated animation tools to populate the expansive worlds and stories of many of today's games.

Polygon spotted a Twitter thread from Naughty Dog animator Jonathan Cooper, who also has animation credits with the Assassin's Creed and Mass Effect series, in which Cooper theorizes about what happened to create some of Andromeda's more memorable gaffes. As Cooper puts it, a lot of game animation is generated by stringing together existing animation clips, "like DJs with samples and tracks." Some scenes may not even get that much attention, and are completely algorithmically generated. Cooper posits that Andromeda leans rather heavily on that latter category, but that the intent was likely to have animators do a clean-up pass before the game shipped.

This tracks with what a group of animators over at AnimState concluded over the course of a round-table discussion of Andromeda's issues. Simon Unger, an animator at Phoenix Labs with credits at EA and Square Enix, noted that Andromeda likely has somewhere in the neighborhood of 41 hours of dialogue, judging by similar BioWare projects. At that point, there's simply too much animation to do it all by hand (keyframing), so algorithmic generation becomes necessary. Unger speculates BioWare probably used a program called FaceFX:

which analyzes the audio tracks and creates animation based on the waveforms, projection, etc. At a base level, it can read as a very robotic performance and I suspect that is what we're seeing in some of the footage. You can work with the audio and the procedural tools to polish the performances in various ways of course, but when you're staring down thousands of minutes of performance to clean up, your definition of 'shippable' is a sliding bar that moves relative to team capacity and your content lock date.

In the same discussion, animator Gwen Frey, formerly of Irrational and presently of The Molasses Flood, suggests that a lot of animations may never have gone through an actual animator.

...Many bugs look like FaceFX gone awry. I suspect that a lot of the implementation was not even done by an animator. Frequently you will have an intern or junior simply copy-paste the written script into FaceFX as a starting point. The system will automatically generate facial animation based on the letters/sounds from the text that is input. Generally this looks bad, and you need to spell out words phonetically, rather than typing them in properly – but this takes a lot of time and resources are often limited.
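Frey's point about phonetic respelling can be illustrated with a toy sketch. To be clear, FaceFX's actual pipeline is proprietary and audio-driven; nothing below is its real API. The letter-to-viseme table and both functions are invented purely to show the grapheme-versus-phoneme gap she describes:

```python
# Toy illustration of why naive text-driven lipsync reads poorly.
# NOT FaceFX's real algorithm; the viseme table below is invented.

# A tiny, made-up table mapping letters to mouth shapes ("visemes")
# that an animation system might key to.
LETTER_VISEMES = {
    "a": "open", "e": "wide", "i": "wide", "o": "round", "u": "round",
    "m": "closed", "b": "closed", "p": "closed",
    "f": "teeth-lip", "v": "teeth-lip",
}

def naive_visemes(text):
    """Map raw spelling straight to visemes, one letter at a time.
    Silent letters and digraphs get 'pronounced' anyway, which is
    where the robotic, over-articulated look comes from."""
    return [LETTER_VISEMES.get(ch, "rest") for ch in text.lower() if ch.isalpha()]

def phonetic_visemes(respelling):
    """Same mapping, but driven by a hand-written phonetic respelling
    (e.g. 'thru' for 'through'): fewer, more accurate mouth shapes."""
    return [LETTER_VISEMES.get(ch, "rest") for ch in respelling.lower() if ch.isalpha()]

# 'through' has 7 letters but only 3 sounds; fed in naively, the
# character's mouth flaps through shapes for the silent 'o-u-g-h'.
print(len(naive_visemes("through")))   # 7 viseme keys
print(len(phonetic_visemes("thru")))   # 4 viseme keys
```

The gap between those two counts is the "time and resources" cost Frey mentions: someone has to write the phonetic respelling for every line, or the automated output keeps its extra, wrong mouth shapes.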

In short, what these veteran animators are pointing to is a set of classic "quality vs. quantity" and "quality vs. time" dilemmas facing anyone who makes a game like this. What are players most likely to see, and what scenes will likely be observed by a fraction of your audience? To what degree do you need to rely on procedural generation versus having animators keyframe certain moments or entire scenes?

In the end, it looks like BioWare made extensive use of those procedural solutions, whether because of a crunch at the end of development or a consistent underestimation by project managers of what their animation needs would be. The takeaway, for Cooper, is that game animation should depend less on algorithmic generation and instead devote more resources to the problem, in the form of performance capture and larger animation teams. In an era of YouTube and Twitter clips, "fast and cheap" animation solutions may have higher costs than they used to.