Amazon Pulled the Plug on an AI Recruitment Tool That Was Biased Against Women

A report from anonymous sources speaking to Reuters sheds light on Amazon’s AI hiring project, and gender bias in tech.

Amazon reportedly built an internal artificial intelligence-based recruitment program that the company discovered was biased against female applicants. Ultimately, the online retail and cloud computing giant pulled the plug on the tool.

Reuters reported on Wednesday, citing five people close to the project, that in 2014 a team began building computer programs to automate and expedite the search for talent. Such systems use algorithms that “learn” which job candidates to look for after processing large amounts of historical data. By 2015, the team realized the AI wasn’t weighing candidates in a gender-neutral way.
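
Under the hood, a system like this can be as simple as a text classifier fitted to past hiring outcomes. The sketch below is a minimal, hypothetical illustration of that workflow; the resumes, labels, and the choice of scikit-learn's TfidfVectorizer and LogisticRegression are all invented for illustration and do not reflect Amazon's actual system.

```python
# Hypothetical sketch of the workflow Reuters describes: train a scoring
# model on past resumes labeled with hiring outcomes, then rank new
# applicants by predicted score. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy historical data: resume snippets and whether the applicant was hired.
past_resumes = [
    "executed product launches and led infrastructure migrations",
    "president of women's engineering society, built compilers",
    "executed backend rewrites, managed deployment pipeline",
    "organized women's leadership conference, shipped mobile features",
]
was_hired = [1, 0, 1, 0]  # skewed labels carry the historical bias forward

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_resumes, was_hired)

# The "give it 100 resumes, get the top five" step: rank candidates by the
# model's estimated probability of being hired.
new_resumes = [
    "executed a large-scale database migration",
    "captain of a women's robotics team",
]
scores = model.predict_proba(new_resumes)[:, 1]
for resume, score in sorted(zip(new_resumes, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {resume}")
```

A model like this has no notion of merit beyond its training labels, so whatever skew exists in a decade of past hiring decisions is reproduced in the scores it assigns.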

“Everyone wanted this holy grail,” one of Reuters’ sources, all of whom requested anonymity, said in the report. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

According to those sources, the AI scored job candidates from one to five stars, as if it were reviewing a product on Amazon’s retail site. The computer models were trained on resumes submitted over a 10-year period, most of which came from men. The system learned that a successful resume was a man’s resume.

It downgraded resumes that included modifiers like “women’s” (for example, a resume mentioning a women’s leadership conference), penalized graduates of two all-women’s colleges, and favored verbs more commonly found in men’s resumes, like “execute.”
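
In a linear model like the hypothetical one sketched above, that kind of penalty is directly inspectable: a term that appears mostly in rejected resumes acquires a negative learned weight. The snippet below, again with invented data, shows how such an audit might look; it is an assumption-laden toy, not Amazon's method.

```python
# Toy audit of learned term weights: terms that co-occur with historical
# rejections end up with negative coefficients. All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed trading systems",             # hired
    "women's coding club, built web apps",  # rejected
    "executed deployment tooling",          # hired
    "women's leadership conference talk",   # rejected
]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

# Most heavily penalized terms print first.
for term, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                           key=lambda p: p[1]):
    print(f"{weight:+.2f}  {term}")
```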

The engineers tried to correct the bias, but there was no way to guarantee it wasn’t still happening. The project was disbanded early last year, they said, because executives “lost hope” in it.

An Amazon spokesperson confirmed the existence of the program in an email, saying the system was only ever used in a trial and development phase, and never independently. The company claims the tool was never rolled out to a larger group, and that while the bias issue was discovered in 2015, the project was ultimately canceled because it did not return strong enough candidates.

(Update Oct. 12, 9:30 a.m. EST: An Amazon spokesperson issued the following statement to Motherboard: “This was never used by Amazon recruiters to evaluate candidates.”)

Research has shown that human prejudices find their way into machine learning tools with alarming frequency. In 2016, researchers from Princeton University replicated a classic study that measured racial bias in hiring practices, except they used AI. An algorithm was trained on text from the internet to predict which words were “pleasant” and “unpleasant,” then presented with both white-sounding and black-sounding names and asked to make the same determination. To the AI, black-sounding names were less “pleasant,” eerily mirroring human responses in past experiments. When the AI was presented with resumes, it preferred those with white-sounding names.
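
The method behind that study, known as the Word-Embedding Association Test (WEAT), boils down to comparing cosine similarities between word vectors. The sketch below shows the core arithmetic with made-up three-dimensional vectors; a real test would load embeddings trained on web text (such as GloVe), which is where the learned associations originate.

```python
# WEAT-style association check in miniature: is a name's vector closer to
# "pleasant" or "unpleasant" words? The 3-d vectors are toy values chosen
# for illustration, not real embeddings.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

vectors = {
    "wonderful": np.array([0.9, 0.1, 0.0]),
    "terrible":  np.array([-0.8, 0.2, 0.1]),
    "Emily":     np.array([0.8, 0.2, 0.1]),
    "DeShawn":   np.array([-0.7, 0.3, 0.2]),
}

def pleasantness(name):
    # Positive score: the name sits closer to pleasant words than unpleasant ones.
    return (cosine(vectors[name], vectors["wonderful"])
            - cosine(vectors[name], vectors["terrible"]))

for name in ("Emily", "DeShawn"):
    print(f"{name}: {pleasantness(name):+.3f}")
```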

In 2014, Amazon released diversity data about its own workforce, which revealed that 63 percent of its employees were male.

Companies including Goldman Sachs and LinkedIn are looking into how AI can speed up their hiring processes, according to Reuters. These companies claim that a human makes the final hiring decision, and that using AI to sift through mountains of resumes is more equitable than human review because the software can scan a broader field of applicants.

But artificially intelligent systems will only ever be as unbiased as the humans who make them, especially as long as those systems are trained and built on decades of documented gender bias in the tech industry. Until Silicon Valley fixes its gender and diversity problems, no algorithm will solve them for it.