[Image: a body controlling a head watching a screen]

Algorithms Are Automating Fascism. Here’s How We Fight Back

“Bias” can’t be fixed with a software update. Only by disrupting oppressive systems on every level can we make a more just world.
Janus Rose
New York, US
photos by John Yuyi

This article appears in VICE Magazine's Algorithms issue, which investigates the rules that govern our society, and what happens when they're broken.

In early August, more than 50 NYPD officers surrounded the apartment of Derrick Ingram, a prominent Black Lives Matter activist, during a dramatic standoff in Manhattan’s Hell’s Kitchen. Helicopters circled overhead and heavily armed riot cops with K-9 attack dogs blocked off the street as officers tried to persuade Ingram to surrender peacefully. The justification for the siege, according to the NYPD: during a protest march in June, Ingram had allegedly shouted through a bullhorn into the ear of a police officer. (The officer had long since recovered.)

Video of the siege later revealed another troubling aspect of the encounter. A paper dossier held by one of the officers outside the apartment showed that the NYPD had used facial recognition to target Ingram, drawing on a photo taken from his Instagram page. Earlier this month, police in Miami used a facial recognition tool to arrest another protester accused of throwing objects at officers—again, without disclosing that the technology had been used.

The use of these technologies is not new, but they have come under increased scrutiny with the recent uprisings against police violence and systemic racism. Across the country and around the world, calls to defund police departments have revived efforts to ban technologies like facial recognition and predictive policing, which disproportionately affect communities of color. These predictive systems intersect with virtually every aspect of modern life, promoting discrimination in healthcare, housing, employment, and more.

The most common critique of these algorithmic decision-making systems is that they are “unfair”—software makers blame human bias that has crept into the system, resulting in discrimination. In reality, the problem is deeper and more fundamental than the companies creating them are willing to admit.

In my time studying algorithmic decision-making systems as a privacy researcher and educator, I’ve seen this conversation evolve. I’ve come to understand that what we call “bias” is not merely the consequence of flawed technology, but a kind of computational ideology which codifies the worldviews that perpetuate inequality—white supremacy, patriarchy, settler-colonialism, homophobia and transphobia, to name just a few. In other words, without a major intervention which addresses the root causes of these injustices, algorithmic systems will merely automate the oppressive ideologies which form our society.

What does that intervention look like? If anti-racism and anti-fascism are practices that seek to dismantle—rather than simply acknowledge—systemic inequality and oppression, how can we build anti-oppressive praxis within the world of technology? Machine learning experts say that much like the algorithms themselves, the answers to these questions are complex and multifaceted, and should involve many different approaches—from protest and sabotage to making change within the institutions themselves.

Meredith Whittaker, a co-founder of the AI Now Institute and former Google researcher, said it starts by acknowledging that “bias” is not an engineering problem that can simply be fixed with a software update.

“We have failed to recognize that bias or racism or inequity doesn’t reside in an algorithm,” she told me. “It may be reproduced through an algorithm, but it resides in who gets to design and create these systems to begin with—who gets to apply them and on whom they are applied.”

Tech companies often describe algorithms as magic boxes—indecipherable decision-making systems that operate in ways humans can’t possibly understand. While it’s true these systems are frequently (and often intentionally) opaque, we can still understand how they function by examining who created them, what outcomes they produce, and who ultimately benefits from those outcomes.

To put it another way, algorithmic systems are more like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them. There are countless examples of how these systems replicate models of reality that are oppressive and harmful. Take “gender recognition,” a sub-field of computer vision which involves training computers to infer a person’s gender based solely on physical characteristics. By their very nature, these systems are almost always built from an outdated model of “male” and “female” that excludes transgender and gender non-conforming people. Despite overwhelming scientific consensus that gender is fluid and expansive, 95 percent of academic papers on gender recognition view gender as binary, and 72 percent assume it is unchangeable from the sex assigned at birth, according to a 2018 study from the University of Washington.
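
To make that design decision concrete, here is a minimal, hypothetical sketch (written in PyTorch, not drawn from any real product) of how such a classifier is typically wired. The binary isn't discovered in the data; it is hard-coded into the output layer before a single photo is seen, and nothing the model later learns can push a prediction outside those two categories.

```python
import torch
import torch.nn as nn

# The label set itself is the design decision at issue: two classes,
# fixed before any data is collected. (Hypothetical model, for illustration only.)
LABELS = ["male", "female"]

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 128),  # toy stand-in for a convolutional feature extractor
    nn.ReLU(),
    nn.Linear(128, len(LABELS)),    # the head can only ever emit one of the two labels
)

def predict_gender(face_tensor):
    # Whoever the person actually is, the argmax is forced into the binary.
    logits = classifier(face_tensor.unsqueeze(0))
    return LABELS[int(logits.argmax(dim=1))]
```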

In a society that views trans bodies as transgressive, it’s easy to see how these systems threaten millions of trans and gender-nonconforming people—especially trans people of color, who are already disproportionately policed. In July, the Trump administration’s Department of Housing and Urban Development proposed a rule that instructs federally funded homeless shelters to identify and eject trans women from women’s shelters based on physical characteristics like facial hair, height, and the presence of an Adam’s apple. Given that machine vision systems already possess the ability to detect such features, automating this kind of discrimination would be trivial.

“There is, ipso facto, no way to make a technology premised on external inference of gender compatible with trans lives,” concludes Os Keyes, the author of the University of Washington study. “Given the various ways that continued usage would erase and put at risk trans people, designers and makers should quite simply avoid implementing or deploying Automated Gender Recognition.”

One common response to the problem of algorithmic bias is to advocate for more diversity in the field. If the people and data involved in creating this technology came from a wider range of backgrounds, the thinking goes, we’d see fewer examples of algorithmic systems perpetuating harmful prejudices. For example, common datasets used to train facial recognition systems are often filled with white faces, leading to higher rates of misidentification for people with darker skin tones. Recently, police in Detroit wrongfully arrested a Black man after he was misidentified by a facial recognition system—the first known case of its kind, and almost certainly just the tip of the iceberg.

Even if the system is “accurate,” that still doesn’t change the harmful ideological structures it was built to uphold in the first place. Since the recent uprisings against police violence, law enforcement agencies across the country have begun requesting CCTV footage of crowds of protesters, raising fears they will use facial recognition to target and harass activists. In other words, even if a predictive system is “correct” 100 percent of the time, that doesn’t prevent it from being used to disproportionately target marginalized people, protesters, and anyone else considered a threat by the state.

But what if we could flip the script, and create anti-oppressive systems that instead target those with power and privilege?

This is the provocation behind White Collar Crime Risk Zones, a 2017 project created for The New Inquiry. The project emulates predictive policing systems, creating “heat maps” forecasting where crime will occur based on historical data. But unlike the tools used by cops, these maps show hotspots for things like insider trading and employment discrimination, laying bare the arbitrary reality of the data—it merely reflects which types of crimes and communities are being policed.

“The conversation around algorithmic bias is really interesting because it’s kind of a proxy for these other systemic issues that normally would not be talked about,” said Francis Tseng, a researcher at the Jain Family Institute and co-creator of White Collar Crime Risk Zones. “Predictive policing algorithms are racially biased, but the reason for that is because policing is racially biased.”
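
A stripped-down sketch makes that point visible. The records below are invented, and real predictive policing products are far more elaborate, but the core move is the same: rank map-grid cells by how many incidents were recorded there in the past, which is really a measure of where enforcement already concentrates.

```python
from collections import Counter

def hotspot_forecast(incident_reports, top_k=3):
    """Rank grid cells by how many past reports were filed there.

    incident_reports: iterable of (grid_x, grid_y) cells from historical records.
    """
    counts = Counter(incident_reports)
    # The "forecast" is just the most-recorded cells, i.e. the most-policed cells.
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical history: heavily patrolled blocks generate more reports,
# so they dominate the output regardless of where unrecorded crime happens.
history = [(2, 3)] * 40 + [(2, 4)] * 35 + [(7, 7)] * 5
print(hotspot_forecast(history))  # [(2, 3), (2, 4), (7, 7)]
```

Feed such a system arrest records and it will faithfully “predict” the neighborhoods police already patrol; feed it records of wage theft or securities fraud, as White Collar Crime Risk Zones does, and the hotspots move downtown.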

Other efforts have focused on sabotage—using technical interventions that make oppressive systems less effective. After news broke of Clearview AI, the facial recognition firm revealed to be scraping face images from social media sites, researchers released “Fawkes,” a system that “cloaks” faces from image recognition algorithms. It uses machine learning to add small, imperceptible noise patterns to image data, modifying the photos so that a human can still recognize them but a facial recognition algorithm can’t. Much like the anti-surveillance makeup patterns that came before it, the effect is a bit like kicking sand in the digital eyes of the surveillance state.
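
Fawkes itself relies on a more elaborate feature-space optimization, but the underlying principle can be sketched in a few lines of PyTorch: nudge every pixel by a tiny, bounded amount in whatever direction most confuses a recognition model, so the photo looks unchanged to a person but lands differently for the algorithm. This is a generic adversarial-perturbation sketch, not the Fawkes code.

```python
import torch
import torch.nn.functional as F

def cloak(image, model, true_label, epsilon=2 / 255):
    """image: float tensor in [0, 1], shape (1, 3, H, W); model: any differentiable classifier."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel by at most epsilon in the direction that increases the model's error.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```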

The downside to these counter-surveillance techniques is that they have a shelf life. As you read this, security researchers are already improving image recognition systems to recognize these noise patterns, teaching the algorithms to see past their own blind spots. While the approach may be effective in the short term, using technical tricks to blind the machines will always be a cat-and-mouse game.

“Machine learning and AI are clearly very good at amplifying power as it already exists, and there’s clearly some use for it in countering that power,” said Tseng. “But in the end, it feels like it might benefit power more than the people pushing back.”

One of the most insidious aspects of these algorithmic systems is how they often disregard scientific consensus in service of their ideological mission. As with gender recognition, there has been a resurgence of machine learning research that revives racist pseudoscientific practices like phrenology, which were debunked more than a century ago. These ideas have re-entered academia under the cover of supposedly “objective” machine learning algorithms, with a deluge of scientific papers—some peer reviewed, some not—describing systems which the authors claim can determine things about a person based on racial and physical characteristics.

In June, thousands of AI experts condemned a paper whose authors claimed their system could predict whether someone would commit a crime based solely on their face with “80 percent accuracy” and “no racial bias.” Following the backlash, the authors later deleted the paper, and their publisher, Springer, confirmed that it had been rejected. It wasn’t the first time researchers had made such dubious claims. In 2016, a similar paper described a system for predicting criminality based on facial photos, using a database of mugshots from convicted criminals. In both cases, the authors were drawing from research that had been disproven for more than a century. Even worse, their flawed systems created a feedback loop: the predictions rested on the assumption that future criminals would look like the people the carceral system had previously labeled “criminal.” The fact that certain people are targeted by police and the justice system more than others was simply not addressed.
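
The feedback loop itself is easy to demonstrate with a toy simulation (all numbers here are invented): a model trained on past arrests directs more enforcement toward the group that is already most arrested, which produces more arrest records for that group, which makes the next round of “predictions” even more lopsided.

```python
def simulate_feedback(rounds=5, patrol_boost=1.2):
    # Hypothetical starting arrest counts for two equally law-abiding groups.
    arrests = {"group_a": 60, "group_b": 40}
    for _ in range(rounds):
        total = sum(arrests.values())
        # "Prediction": direct extra patrols at the group with the largest share of past arrests.
        most_policed = max(arrests, key=lambda g: arrests[g] / total)
        # More patrols mean more recorded arrests, regardless of underlying behavior.
        arrests[most_policed] = int(arrests[most_policed] * patrol_boost)
    return arrests

print(simulate_feedback())  # the initial 60/40 disparity widens every round
```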

Whittaker notes that industry incentives are a big part of what creates the demand for such systems, regardless of how fatally flawed they are. “There is a robust market for magical tools that will tell us about people—what they’ll buy, who they are, whether they’re a threat or not. And I think that’s dangerous,” she said. “Who has the authority to tell me who I am, and what does it mean to invest that authority outside myself?”

But this also presents another opportunity for anti-oppressive intervention: de-platforming and refusal. After AI experts issued their letter to the academic publisher Springer demanding that the criminality prediction research be rescinded, the paper disappeared from the publisher’s site, and the company later stated that it would not be published.

Much in the way that anti-fascist activists have used their collective power to successfully de-platform neo-Nazis and white supremacists, academics and even tech workers have begun using their labor power to refuse to accept or implement technologies that reproduce racism, inequality, and harm. Groups like No Tech For ICE have linked technologies sold by big tech companies directly to the harm being done to immigrants and other marginalized communities. Some engineers have signed pledges or even deleted code repositories to prevent their work from being used by federal agencies. More recently, companies have responded to pressure from the worldwide uprisings against police violence, with IBM, Amazon, and Microsoft all announcing they would either stop or pause the sale of facial recognition technology to US law enforcement.

Not all companies will bow to pressure, however. And ultimately, none of these approaches is a panacea. There is still work to be done to prevent the harm caused by algorithmic systems, and every approach should start with an understanding of the oppressive systems of power that make these technologies harmful in the first place. “I think it’s a ‘try everything’ situation,” said Whittaker. “These aren’t new problems. We’re just automating and obfuscating social problems that have existed for a long time.”

Follow Janus Rose on Twitter.