Sexist AI Is Even More Sexist Than We Thought

A new study shows bias is deeply ingrained in algorithmic models, which generate sexualized images of women while creating professional images of men.

[Image: A man in a business suit and red tie extending his hand while sitting at a desk with a laptop.]

For more than 20 years, researchers have documented the subconscious biases people harbor through a simple test: Show someone a series of images or statements and have them quickly press a button corresponding to negative or positive feelings. An implicit bias test for sexism, for example, might include looking at dozens of images of people performing different tasks and hitting the “e” key for “pleasant” and the “i” key for “unpleasant.” 

How much more often a person associates mundane images of a woman with “unpleasant,” whether they immediately regret pushing that button or not, can reveal subconscious biases. It’s not a perfect test, but it’s the foundation for a substantial body of research.

Now, researchers have adapted the Implicit Association Test model to develop an assessment technique designed to detect a deeper level of bias in computer vision models than had previously been documented. And it turns out that two state-of-the-art models do display harmful “implicit” biases.
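
In rough terms, the adapted test works on a model’s internal representations rather than on button presses: it measures whether the embeddings of images from one group sit closer to one set of attribute images (say, career-related scenes) than the embeddings from another group do. The sketch below is a minimal illustration of that idea, not the authors’ code; the helper names are hypothetical, and the use of cosine similarity and this particular effect-size formula are assumptions carried over from the text-based association tests the paper builds on.

```python
# Minimal sketch of an embedding association test (hypothetical helper names;
# X, Y, A, B are assumed to be lists of NumPy embedding vectors produced by
# some image model).
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """How much closer embedding w is to attribute set A than to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Effect size comparing target groups X and Y (e.g., images of men and women)
    on attribute sets A and B (e.g., career scenes and appearance-related images).
    A large positive value means X is more strongly tied to A than Y is."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)
```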

Testing those models, the researchers found that the AI systems were more likely to generate sexualized images of women (wearing bikinis or low-cut tops) while creating professional images of men (wearing business or career attire). The models also tended to associate positive characteristics with images of people with lighter skin tones and negative characteristics with images of people with darker skin tones.

Similarly trained models have been used by companies to classify and generate images, including for tasks like screening job applicants. Those downstream models, though, usually undergo additional training to specialize them for a particular task.

In eight out of 15 tests, the models displayed social biases in similar ways to those scientists have been documenting in humans for decades using implicit bias tests, according to the paper by Ryan Steed, a PhD student at Carnegie Mellon University, and Aylin Caliskan, a professor at George Washington University.

Biased AI is nothing new. But Steed and Caliskan’s work shows just how ingrained it can be in an area like computer vision that, through tools like facial recognition and gun detection, can have life-and-death ramifications.

"Supervised" computer vision models are trained on images that have been labeled by humans (this one is a dog, this one is a fish), whereas "unsupervised" models can learn to categorize and generate images by training on image datasets that have not been labeled. The labeling process has many potential problems, and supervised models have well documented bias problems—take this example, where a model took a pixelated picture of President Barack Obama and made him look white.

[Image: A screenshot from the research paper showing pixelated images.]

The AI systems were more likely to complete pixelated images of white men with career attire, while images of women were more likely to be completed with bikinis and low-cut tops.

Steed and Caliskan demonstrated that the bias in unsupervised systems runs even deeper and will persist even if humans haven’t instilled additional prejudices through the labeling process—the models will simply learn it from the images themselves.

The consequences can be severe, particularly as new research leads to broader uses of unsupervised models.

“Because methods have improved, these datasets (on which the models are trained) can be used for a lot more than they were intended to be used for,” Steed told Motherboard. “Our work serves two purposes. The first one is to raise awareness about the models that exist and the potential hazards of those models.” The second, he hopes, is to be a tool others can use to examine their own models.

The two models Steed and Caliskan tested—OpenAI’s iGPT and Google’s SimCLRv2—use different techniques but were both trained on the ImageNet database, which is one of the most influential testing and training grounds in computer vision.
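
Those different techniques amount to different training objectives. iGPT treats an image as a long sequence of pixels and learns to predict each next pixel from the ones before it (which is why it can “complete” a cropped or pixelated photo), while SimCLRv2 learns by pulling representations of augmented views of the same image together, roughly as in the contrastive sketch above. The toy below gestures at the pixel-prediction side only; unlike the real model, which attends to the entire preceding context, it conditions on just the single previous pixel.

```python
# Toy autoregressive pixel-prediction objective, in the spirit of (but far
# simpler than) iGPT-style generative pretraining. Shapes and the model are
# illustrative placeholders.
import torch
import torch.nn as nn

pixels = torch.randint(0, 256, (4, 64))        # 4 tiny "images", each flattened to 64 pixels
embed = nn.Embedding(256, 32)                  # one embedding per possible pixel value
predict_next = nn.Linear(32, 256)              # score all 256 values for the following pixel

hidden = embed(pixels[:, :-1])                 # (4, 63, 32): every pixel except the last
logits = predict_next(hidden)                  # (4, 63, 256): prediction for each next pixel
targets = pixels[:, 1:]                        # the actual next pixels
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), targets.reshape(-1))
```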

The shared training data is one part of the problem. Researchers Vinay Prabhu and Abeba Birhane recently demonstrated that ImageNet and other benchmark datasets contain a multitude of racist, pornographic, and otherwise problematic images. And they are continuously being updated with new images from the web without the subjects’ consent or knowledge, and with no avenue for recourse.

“All of these deep neural networks, in spite of their fancy names, are basically nothing more than statistical sieves” that find, categorize, and recreate what they’ve seen in their training datasets, Prabhu told Motherboard.

And the curators of those datasets are loath to make any changes to them, he added, because sets like ImageNet are used to compare the quality of various computer vision models and set benchmarks. Altering the contents, some say, would render those benchmarks useless.

Even if dataset curators created a removal process for specific photos or categories of images, “there’s no such thing as an unbiased dataset,” Steed said. That means the architects behind these widely used models need to stop claiming that more or better datasets will solve the problem and take more individual responsibility for what they’re inputting into their systems, what’s coming out, and how they might be used in intentionally or unintentionally harmful ways.

Often when people draw attention to issues like the implicit bias of algorithms, a large proportion of researchers in the field roll their eyes, Prabhu said. They ask whether it’s really that big of a deal if an image generator happens to put a man in a suit and a woman in a bikini. 

“The people asking these questions are not the ones being erased,” he said.