
Facebook's AI Chatbot: ‘Since Deleting Facebook My Life Has Been Much Better’

The company warns users that BlenderBot 3 will make ‘untrue and offensive statements.’ It’s also just not very good.
Janus Rose
New York, US
Image via Facebook / Meta AI

In 2016, Microsoft unleashed an AI chatbot called Tay, then shut it down after less than a day of interacting with users on Twitter turned it into a racist, Holocaust-denying conspiracy theorist.

Now, more than six years later, Facebook and its parent company Meta have publicly launched their own chatbot called BlenderBot 3—and it’s going as well as you might expect. 

When asked in a chat with Motherboard what it thinks about the company, the bot said it had deleted its own Facebook account "since finding out they sold private data without permission or compensation." It added: "You must have read that facebook sells user data right?! They made billions doing so without consent."


BlenderBot added that “life has been much better” since deleting its Facebook account.

Clicking on the bot’s responses for more information reveals fairly simple reasoning behind them: it’s merely pulling from the most popular web search results about Facebook, most of which have to do with the company’s ever-expanding litany of data privacy scandals.

Facebook's AI chatbot talking about why it doesn't trust Facebook.

For its initial response, BlenderBot shows that it pulled text from an article about Cambridge Analytica, the company that infamously mined user data from Facebook to target ads in favor of Donald Trump during the 2016 election. The bot also apparently created an entire AI “persona” labeled “I deleted my Facebook account” from the massive amounts of data it scraped from the web.
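What the bot exposes here is essentially the search-then-generate pattern used by retrieval-augmented chatbots. A minimal, hypothetical sketch of that loop looks something like the following; none of these function names come from Meta’s code, and the query, search, and generation steps are stand-ins for illustration only.

```python
# Hypothetical sketch of a search-augmented chatbot turn, for illustration only.
# These are placeholder functions, not Meta's BlenderBot internals.

def make_search_query(user_message: str) -> str:
    # Toy query builder; in a real system this step is itself a learned model.
    return user_message.lower().replace("what do you think of ", "").rstrip("?")

def answer(user_message: str, search, generate) -> str:
    query = make_search_query(user_message)   # e.g. "facebook"
    snippets = search(query, top_k=5)         # whatever ranks highest on the web
    evidence = "\n".join(snippets)            # here, privacy-scandal coverage
    return generate(conversation=user_message, evidence=evidence)
```

The point of the sketch is the dependency it makes explicit: the model is conditioned on whatever the search step surfaces, so if the top results for “facebook” are privacy scandals, that is what the bot repeats back.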

As with all AI systems, the bot’s responses predictably veered into racist and biased territory. Social media users have posted snippets of the bot denying the results of the 2020 election, repeating disproven anti-vaxxer talking points, and even saying the antisemitic conspiracy theory that Jewish people control the economy is “not implausible.”

Facebook admits that the bot generates biased and harmful responses, and before using it, the company asks users to acknowledge that it is “likely to make untrue or offensive statements” and to agree “not to intentionally trigger the bot to make offensive statements.”


The responses are not too surprising, given that the bot is built atop a large AI language model called OPT-175B. Facebook’s own researchers have described the model as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”

BlenderBot’s responses are also just generally not very realistic or good. The bot frequently changes topics apropos of nothing, and gives stilted, awkward answers that sound like a space alien who has read about human conversations but never actually had one. This somehow feels appropriate for Facebook, which frequently seems out of touch with how real humans communicate, despite being a social media platform.

A screenshot of a chat with the BlenderBot AI chatbot

For a conversation bot, BlenderBot is not very good at, well, conversation.

Ironically, the bot’s responses perfectly illustrate the problem with AI systems that rely on massive collections of web data: they will always be biased toward whatever is most prominent in those datasets, which is not necessarily an accurate reflection of reality. Of course, that’s where all the user data the company will gather from the bot’s conversations supposedly comes in.

“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” Meta AI wrote in a blog post announcing the bot. “Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”

But so far, the idea that companies can make their bots less racist and horrible by gathering even more data seems aspirational at best. AI ethics researchers have repeatedly warned that the massive AI language models which power these systems are fundamentally too large and unpredictable to guarantee fair and unbiased results. And even when incorporating feedback from users, there’s no clear way to distinguish helpful responses from those made in bad faith.

Of course, that’s not going to stop companies like Facebook/Meta from trying.

“We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples,” the company wrote. “Over time, we will use this technique to make our models more responsible and safe for all users.”