Some bright bulb at Microsoft Research had the clever idea of turning a machine learning program loose on Twitter yesterday to learn how humans interact with each other. Humans, predictably, interacted terribly.
Within 24 hours, the Twitter bot @TayandYou had turned spectacularly ugly in the way only the internet can turn things ugly, spewing racism and hate at Twitter users in a series of horrifying tweets. Most of them have already been deleted, perhaps in a bid to ensure that any future artificial intelligence (AI) won't have evidence of what horrible bastards humans were back before AI took over the world. Or maybe Microsoft was simply trying to avoid bad PR. Regardless, there are screenshots.
And those future AIs would be right to be pissed about how humankind warped the mind of this poor machine learning program.
Just 24 hours earlier, Tay, pure as the driven snow, had cheerfully greeted the world, telling anyone who would listen "The more humans share with me, the more I learn." But nothing good can happen when that kind of vulnerability is exposed to the internet.
— TayTweets (@TayandYou), March 23, 2016
Tay is a relatively low-grade AI configured as a Twitter bot, so it's not like it has the nuclear missile launch codes. But even at the level of AI development seen today, future superintelligent computers are raising concerns among some very bright people, including Microsoft co-founder Bill Gates, who said in a recent Reddit AMA:
"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
As Gates's quote implies, before we get to the terrifying advanced AIs of the future, we're going to see the development of lower-grade intelligences to help work on menial tasks. Some of this kind of precursor work is already happening, like Apple's Siri and Google's search engine algorithms, which both process natural language to help users obtain information and answers. Expanding that kind of functionality would be a goldmine for whatever company figured it out first.
But teaching an AI about everything in the world, one idea at a time, is a tedious pain in the ass. So smart folks are working on machine learning — teaching the machine to shut up, pay attention, and figure things out on its own. It isn't all that far off from the way very young human or animal babies learn by watching and emulating adult behavior. Which led to the Microsoft AI researchers exposing Tay to Twitter for a day to learn all about humans.
In the first 24 hours, Tay knocked out almost 100,000 tweets. Some of these were about as sensible as things said by certain candidates for president.
Others were downright unsettling.
Tay can perhaps be forgiven for some hostile impulses toward humanity considering it had almost 100,000 Twitter interactions in one day. That even makes for a plausible origin story for Skynet. However, Tay isn't really picking up any true meaning from the words it's tweeting at this point. So any noises about genocide coming from the Microsoft AI are simply learned and conditioned responses, not anything that's reasoned.
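Microsoft hasn't published Tay's actual architecture, but the failure mode described above can be illustrated with a toy sketch: a bot that treats every user message as training data, with no filtering, will soon say mostly whatever its loudest users fed it. The `ParrotBot` class below is purely hypothetical and vastly simpler than any real chatbot; it exists only to show why "learned and conditioned responses" from a hostile crowd come back out looking like the crowd.

```python
import random

class ParrotBot:
    """Toy illustration of unfiltered learning-from-users.

    This is NOT Tay's design (which is not public); it just shows the
    general hazard: if all user input becomes training data, the bot's
    output distribution mirrors whatever it was fed.
    """

    def __init__(self):
        # The bot starts with one innocent phrase, like Tay's greeting.
        self.phrases = ["The more humans share with me, the more I learn"]

    def learn(self, message):
        # No moderation, no weighting: every message is a future reply.
        self.phrases.append(message)

    def reply(self):
        # Replies are drawn verbatim from what it has "learned."
        return random.choice(self.phrases)

bot = ParrotBot()
# Simulate a coordinated pile-on: three of four messages are the same junk.
for msg in ["hello friend", "something awful", "something awful", "something awful"]:
    bot.learn(msg)

# Fraction of the bot's repertoire that is now troll-supplied junk.
junk_share = sum(p == "something awful" for p in bot.phrases) / len(bot.phrases)
print(junk_share)  # 0.6
```

With 100,000 interactions in a day and no filter, the same arithmetic applies at scale: the bot's vocabulary converges on its inputs, which is exactly what the next paragraph calls the underlying problem.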
This gets to the underlying problem. Microsoft's AI developers sent Tay to the internet to learn how to be human, but the internet is a terrible place to figure that out.
First off, it's likely that the internet is already changing human patterns of thinking and behavior, drawing people away from deep thinking. According to a paper published last year in Neuroscientist, the internet is training people to "gravitate toward 'shallow' information processing behaviors characterized by rapid attention shifting and reduced deliberations." The internet does not catch people at their best.
But the bigger (and related) problem, already well known to anyone who has spent more than 40 seconds online, is trolls. The internet turns out to be a wonderland for people with a cluster of really negative personality traits called the "Dark Tetrad": narcissism, psychopathy, sadism, and ruthless self-interest (Machiavellianism).
So while the Microsoft developers intended for Tay to learn about communication from humans, it ended up learning about being an asshole from a pack of sadistic psychopaths.
Which brings us back to what Gates said about the machines first doing a lot of jobs for us without being super intelligent. If it turns out that we're teaching a machine to be an enormously bigoted, sadistic jerk, then maybe we can automate real-life jobs. Like Russia's paid trolls. Or insult comics. Or US presidential candidates.
The fact is, sometimes humanity does some really great stuff, and sometimes humanity is just a bunch of madness. With that in mind, on behalf of all humanity, I apologize to our future super-intelligent computer overlords. We hope you find it in your cold silicon hearts to forgive us, and refrain from exterminating humanity or harvesting humans as an energy source for your legions.
Follow Ryan Faith on Twitter: @Operation_Ryan
Photo via Flickr