
‘Coded Bias’ Is the Most Important Film About AI You Can Watch Today

The new documentary is an essential introduction to algorithmic bias—and the systems that gave rise to it.
Janus Rose
New York, US
Image: Joy Buolamwini in a still from Coded Bias

Before it was even released, Coded Bias was positioned to become essential viewing for anyone interested in the AI ethics debate. The documentary, which was released on Netflix this week, is the kind of film that can and should be shown in countless high school classrooms, where students themselves are subjected to various AI systems in the post-pandemic age of Zoom. It's a refreshingly digestible introduction to the myriad ways algorithmic bias has infiltrated every aspect of our lives—from racist facial recognition and predictive policing systems to scoring software that decides who gets access to housing, loans, public assistance, and more.


But amid the recent high-profile firings of Timnit Gebru and others on Google's AI ethics team, the documentary seems like only one part of a deeper and ongoing story. If we understand algorithmic bias as a form of computationally imposed ideology, rather than an unfortunate rounding error, we can't simply attack the symptoms. We need to challenge the existence of the racist and capitalist institutions that created those systems in the first place.

The film follows Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, an organization that she started after realizing that facial recognition systems weren't trained to recognize darker-skinned faces. Buolamwini is easily one of the most important figures in the AI field, and she serves as a gateway into a range of stories about how automation has imposed on us a robotic and unjust world—albeit one that merely reflects and amplifies the pre-existing injustices brought about by racism, sexism, and capitalism.

Showing the actual human impacts of algorithmic surveillance is always a challenge, but filmmaker Shalini Kantayya meets it with a series of deeply compelling portraits: a celebrated teacher who was fired after receiving a low rating from an algorithmic assessment tool, and a group of tenants in Brooklyn who campaigned against their landlord after the installation of a facial recognition system in their building, to name a few.


Perhaps the film's greatest feat is linking all of these stories to highlight a systemic problem: it's not just that the algorithms "don't work," it's that they were built by the same mostly-male, mostly-white cadre of engineers, who took the oppressive models of the past and deployed them at scale. As author and mathematician Cathy O'Neil points out in the film, we can't understand algorithms—or technology in general—without understanding the asymmetric power structure of those who write code versus those who have code imposed on them.

In discussions of AI, there is a tendency to think of algorithmic bias as an innocent whoopsie-daisy that can be iterated out. In reality, it's often people in positions of power imposing old, bad ideas like racist pseudoscience, using computers and math as a smokescreen to avoid accountability. After all, if the computer says it, it must be true.

Given the systemic nature of the problem, the film's ending feels anticlimactic. We see Buolamwini and others speaking at a pre-pandemic Congressional hearing on AI and algorithms, bringing the issue of algorithmic bias to the highest seats of power. But given the long and ineffective history of Congress tsk-tsking tech CEOs like Mark Zuckerberg, I was left wondering how a hearing translates into justice—especially when injustice seems to be hard-wired into the business models of the tech companies shaping our algorithmic future.

Even more interesting is how the film's timeline stops just before the firing (and subsequent smearing) of Timnit Gebru and other prominent AI ethics researchers at Google. Gebru, a celebrated data scientist who appears in the film, was terminated last year after co-authoring a paper which concluded that the large language models used in many AI systems have a significant environmental impact, as well as "risk of substantial harms, including stereotyping, denigration, increases in extremist ideology, and wrongful arrest." 

In other words, the findings were a refutation of Google's core business model, which the company's senior leadership was none too interested in hearing. To many in the AI ethics field, the firings demonstrated the workings of racial capitalism—how women of color in the tech industry are merely tolerated to achieve the appearance of diversity, and disposed of when they challenge the white-male power structure and its business model of endless surveillance.

If there is hope to be found at the film's conclusion, it lies in the brief mentions of grassroots activists who have successfully campaigned to ban facial recognition in cities across the country. But ultimately, the lesson we should draw from films like Coded Bias isn't about facial recognition, or any algorithm or technology in particular. It's about how the base operating system of our society will continue to produce new, more harmful technologies—unless we dismantle it and create something better to put in its place.