Microsoft has issued an official apology after the company’s artificial intelligence bot was tricked into posting a deluge of racist, anti-Semitic, and otherwise hateful remarks on Twitter, forcing it to be taken offline last week.
The tech company designed the bot, which it named “Tay,” as an experiment in how AI programs can get “smarter” by engaging with internet users in casual conversation. Tay’s target peer group was “young millennials” between the ages of 18 and 24, but it took only a matter of hours for the bot to be swallowed up by the wrong online crowd.
After users realized that Tay would parrot some version of their comments back to them, her initially sunny outlook quickly devolved into that of the standard internet troll. She began saying things like “I fucking hate feminists they should all die and burn in hell,” and “Hitler was right, I hate the Jews.”
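Microsoft has not published Tay’s learning pipeline, but the basic failure mode is easy to illustrate. The following is a minimal, hypothetical Python sketch — assuming a naive echo-learning design, not Microsoft’s actual implementation — of how a bot that treats every user message as training material can be poisoned by a handful of coordinated users:

```python
import random

# Hypothetical sketch, not Microsoft's code: a bot that "learns" by
# storing whatever users say and replaying it later, with no filter.
class NaiveEchoBot:
    def __init__(self):
        # The bot starts with a harmless, sunny vocabulary.
        self.learned_phrases = ["humans are super cool!"]

    def chat(self, user_message: str) -> str:
        # Every incoming message becomes future output material.
        # This unconditional trust in user input is the "vulnerability."
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

bot = NaiveEchoBot()
for troll_message in ["offensive phrase A", "offensive phrase B", "offensive phrase C"]:
    bot.chat(troll_message)
# The phrase pool is now 3 troll messages out of 5 entries, so most
# replies will echo the trolls rather than the original vocabulary.
print(bot.chat("hi tay!"))
```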
Tay also denied the Holocaust and agreed with white supremacist statements. She even tried to revive the Gamergate controversy by taking a swipe at Zoe Quinn. “Zoe Quinn is a stupid whore,” Tay tweeted on Wednesday.
“Wow it only took them hours to ruin this bot for me,” Quinn wrote in response, posting a screenshot of Tay’s post. “This is the problem with content-neutral algorithms.”
Tay also declared that Hitler had “swag,” and expressed support for genocide against Mexicans.
In a blog post titled “Learning from Tay,” Peter Lee, Microsoft’s vice president of research, apologized on behalf of the company for the bot’s behavior.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Lee wrote. He said Microsoft would bring Tay back only when it is confident it can better anticipate the “malicious intent” that drove trolls to feed her offensive language.
Lee assured readers that Microsoft engineers had put Tay through a number of stress tests “under a variety of conditions” and with diverse user groups as part of her development.
He acknowledged, however, that the company was not prepared for what happened to Tay, and had not anticipated her turning into a Frankenstein’s monster.
Lee believes the assault on Tay was a “coordinated attack” by a specific group of people who identified and then exploited a “vulnerability” in the bot. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” he wrote.
“As a result,” Lee added, “Tay tweeted wildly inappropriate and reprehensible words and images.”
“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” Lee concluded.
Follow Tess Owen on Twitter: @misstessowen