Now boasting 3.9 million users, the r/AmITheAsshole subreddit has become known as one of the leading forums where users can seek and share advice with virtual strangers. Internet artists Morry Kolman and Alex Petros trained three AI models on comments from over 60,000 posts from the popular subreddit. They filtered the comments according to the subreddit’s formal voting guidelines, under which users label a poster with verdicts such as YTA (“You’re the Asshole”), NTA (“Not the Asshole”), or ESH (“Everyone Sucks Here”). The result is a positive bot, a negative bot, and a neutral bot, each of which responds to every scenario submitted.
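The filtering step could be sketched roughly like this. Note that the verdict tags and bucket assignments below are assumptions for illustration; the article does not describe the artists’ actual pipeline:

```python
# Hypothetical sketch: bucketing r/AmITheAsshole comments by their
# leading verdict token, one plausible way to split scraped comments
# into "positive", "negative", and leftover "neutral" training corpora.
# The tag list and bucket assignments are assumptions, not the artists' code.

VERDICTS = {
    "NTA": "positive",   # Not the Asshole
    "NAH": "positive",   # No Assholes Here
    "YTA": "negative",   # You're the Asshole
    "ESH": "negative",   # Everyone Sucks Here
}

def bucket_comments(comments):
    """Group comments by the sentiment of their opening verdict tag."""
    buckets = {"positive": [], "negative": [], "neutral": []}
    for text in comments:
        stripped = text.strip()
        # Take the first word and drop trailing punctuation, e.g. "NTA," -> "NTA".
        token = stripped.split(None, 1)[0].rstrip(".,:!").upper() if stripped else ""
        buckets[VERDICTS.get(token, "neutral")].append(text)
    return buckets
```

For example, `bucket_comments(["NTA, you did nothing wrong.", "YTA. Apologize.", "Need more info here."])` would place one comment in each bucket.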
The project, funded by Digital Void, aims to illustrate how training data can bias the decision-making abilities of artificial intelligence models. Now, their bots can help you answer the age-old question: Are you the asshole?
“This project is about the bias and motivated reasoning that bad data teaches an AI. The impact of this, however, is abstract,” Kolman tweeted. “How can we view weighted judgment? What does a poorly trained model look like? In short, it looks like Are You The Asshole.”
Users can submit their own moral dilemmas, real or not, and get a positive response, a negative response, and a swing response that can go either way. Because the three AI models are trained on data derived from Reddit users passing judgment, the result is a funny microcosm of what it’s like to debate on the internet now: any topic can inspire strong, contradictory reactions from total strangers.
“When reading the results of a judgment, note the way in which the AI constructs ideas from snippets of human reasoning,” their website reads. “Sometimes the AI can produce stunning results, but it is fundamentally attempting to mimic the ways that humans put together arguments.”
Each post submitted receives serious, sometimes even nuanced, responses no matter how ridiculous it is. One user noticed that the bots can even respond in Chinese. Overall, the bots do a surprisingly good job of mimicking how actual users interact on the original subreddit; some of their comments even include an “edit” addendum, as if they were real users updating their responses.
As co-creator Petros pointed out in a tweet, the bots seamlessly piece together phrases from the comments they were trained on to create responses that sound almost logical. Twitter user Michael Ben submitted a story from the Bible, and one of the bots definitively responded, “YTA… this is about your over-the-top anger at people you see as ‘the enemy.’”
AYTA’s quirky responses illuminate the darker reality of deploying artificial intelligence models in the real world for surveillance and policing. The project also shows that it isn’t always a good idea to get your advice from the internet.