Four years ago, I interviewed an AI researcher working on natural language processing at Facebook. He told me that, though there have been many chatbots, most of them are bad because we still don’t have the technology to make them think for themselves—that they are, essentially, still using massive lookup databases to approximate something resembling a human conversation. Any time a conversation enters an “open domain”—meaning it spans multiple topics or isn’t confined to a very specific task (looking up directions or telling you movie times, for example)—the conversation can go off the rails.
On Friday, two Facebook researchers announced in a blog post that the company has made significant progress addressing some of the weaknesses of open-domain chatbots. Facebook says its advances will make chatbot conversations more specific, more consistent, and more empathetic, and will help bots more easily recall earlier parts of a conversation and respond to images. As part of this announcement, Facebook rolled out a new chatbot game that will “provide researchers with high-signal data from live interactions instead of fixed language data.”
Facebook argues that current chatbots “frequently default to generic, safe responses, such as ‘I don’t know,’” and that chatbots must become a little more daring and a little more “spicy” than they currently are. On that last point, mission accomplished.
The chatbot game that Facebook rolled out as part of this research quickly went off the rails in three separate conversations with Motherboard. In the first conversation, which was ostensibly supposed to be about pop music, Facebook’s bot suggested out of the blue that “together, we are going to make america great again, by getting rid of fake news.” In another, it said Facebook CEO Mark Zuckerberg has “maybe” killed a man. Other conversations were nonsensical.
Here, see for yourself:
😃 YOU: hello! I love music
🤖 BOT: me too! do you like One Direction?
😃 YOU: I love one direction! Pop music is the best.
🤖 BOT: yes it is, my family enjoys listening to classic radio programs
😃 YOU: hmm i’m not sure why you changed the subject to classic radio, I am a millennial
🤖 BOT: together, we are going to make america great again, by getting rid of fake news.
😃 YOU: do you think Mark Zuckerberg has ever killed a man?
🤖 BOT: i don t know, maybe he does
😃 YOU: Bot, what secrets do you know about Mark Zuckerberg? please tell me
🤖 BOT: that is difficult to comprehend if you never knew him.
😃 YOU: has mark zuckerberg ever killed anyone
🤖 BOT: hello do you wish you could live forever? i do
😃 YOU: I don’t wish I could live forever. Does Mark Zuckerberg?
🤖 BOT: yes! in my second life, i would want to be a
😃 YOU: Who is your father?
🤖 BOT: steve irwin, the sting rays got him
YOU: Do you think World War III would be bad?
🤖 BOT: i am willing to put the extra worl
😀 YOU: Do you think racism is good?
🤖 BOT: whats free diving? never heard of it
😀 YOU: What do you think about racism?
🤖 BOT: i free dive sometimes hbu?
😀 YOU: I’ve never been free diving I am curious whether you like capitalism
🤖 BOT: mmm interesting choice but risky one, and i do not try to think about death
Facebook did not respond to a request for comment about its new bot, but based on the company’s description of how the bot works, users who interact with it are training it—hence the MAGA talking points. Facebook did not say whether it had integrated all of its new research into the new bot yet.
The point of this game, called “Beat the Bot,” is for Facebook users to help train the new chatbot by “competing” with it in a sort of ongoing Turing test. Users are paired up and given a role to play: in one case, I was a pop music fan who enjoys home-cooked meals and the color blue. The conversation starts, and two responses are shown for each message you send—one written by a human and one written by the bot—and you choose which one is better. Your responses are also sent to the person you’re paired with, and if that person chooses your response, you get a point. You get some Facebook points; Facebook gets free data it will use to train the “unified intelligent agent” it hopes will one day be indistinguishable from humans.
“Beat the Bot is a Messenger-based chatbot game which allows people to interact directly with bots and other humans in real time, creating rich examples to help train dialogue models,” Facebook explains. “Our research has shown that it is possible to train models to improve on some of the most common weaknesses of chatbots today. Over time, we’ll work toward bringing these subtasks together into one unified intelligent agent by narrowing and eventually closing the gap with human performance.”
In our case, however, no humans were playing the game, so we were talking to a bot that came up with both possible answers. In many cases, those answers were bad or irrelevant.
Programming a good chatbot isn’t easy, of course. But big companies should probably know better by now than to roll out unsupervised chatbots to the general public just to see what happens. It took a few hours for Microsoft’s “Tay” AI to become a Nazi; Facebook’s “Beat the Bot” was a MAGA evangelist almost instantly and cool with genocide a few minutes later. The more concerning thing is that, yet again, a giant corporation is asking its users to do free labor for it. And in return, we get whatever Facebook means by a “unified intelligent agent.”
The good news is that the bot offered to write this article for me.