Two Very Different Tales of Human-AI Interaction

Humans are encountering artificial intelligence more and more often in daily life.
March 11, 2016, 2:30pm

Wow! #AlphaGo wins a second time! Completely surreal... Huge respect for Lee Sedol. Amazing battle!
— Mustafa Suleyman (@mustafasuleymn) March 10, 2016

This week, a human sat across from a computer screen in Seoul, South Korea. Between them was a table, and on that table was a Go game board.

The human player, Lee Sedol, is known for being more aggressive, eager to speed into a fight; the computer, AlphaGo, is understood to have a more conservative opening leading to a steadily stronger game. Both players were aiming to surprise each other.

The challenge between AlphaGo and Lee Sedol, still unfolding over a five-game series, is historic because the artificial intelligence unexpectedly beat the 9-dan-ranked player. Twice.

It's also unique because it's rare that you see a human face down an AI in such a personal setting for three and a half hours.

But these days, you don't have to be a master chess or Go player to meet an AI. Artificial intelligence—loosely defined as computers that simulate human intelligence—is everywhere. Image recognition used by Google and Facebook is considered AI. Most video games include AI. And if you have a smartphone, you may have an artificial intelligence in your pocket in the form of a digital assistant such as Siri, Cortana, or Google Now.

Sometimes, it can be hard to know how to act around these smart machines. In the first half of Radio Motherboard this week, staff writer Jason Koebler explores how people treat Microsoft's digital assistant Cortana when no one's listening. (A small spoiler: Apparently, people like to harass it. One new challenge for AI programmers is teaching an assistant how to gently smack down haters.)

In the second half, editorial fellow Louise Matsakis looks at the Center for Applied Rationality, which runs a workshop that teaches humans that, in some cases, it makes sense to think more like computers.

Both lessons apply in the Go match.

"It is different preparing for a game against a non-person," Sedol said before the matches. "When I prepare for a match against a person, it is important to read that person's energy. But I can't do that in this match and so it may feel like playing the game all by myself."

Despite confidently predicting beforehand that he would win 5-0, or at worst 4-1, Sedol changed his tune as soon as the AI beat him. After the first loss, he said his inhuman opponent had played "in a perfect manner." After the second game, he said "it was a clear loss on my part."

Sedol's attitude toward the AI is one of humility, but humans aren't always so graceful in the face of superior artificial intelligence. Garry Kasparov, after losing the first game of his famous six-game match against IBM's Deep Blue in 1996, was reportedly too distressed to talk to reporters. "I was playing against myself and something I couldn't recognize," he said later. After losing the 1997 rematch outright, he accused the IBM team of cheating and demanded another match, which IBM declined.

So how should you behave around an AI? You can treat it with the same respect you'd (hopefully) afford a human, as Sedol did. You may be tempted to refuse to acknowledge its intelligence, as Kasparov did. You may want to fall in love with it, like Joaquin Phoenix's character in the movie Her. Or you might feel like the best response is to swear at it and tell it to lick your butthole.

Show notes:

:35 We're celebrating In Our Image, Motherboard's week of stories about artificial intelligence. It starts Monday, March 14, 2016.

1:00 FallenMyst hops on the line. Her post on Reddit, almost exactly one year ago: Everything you've ever said to Siri/Cortana has been recorded...and I get to listen to it.

And Motherboard's story about it at the time: Strangers on the Internet Are Listening to People's Phone Voice Commands

2:55 "People feel very comfortable talking freely to Cortana." Deborah Harrison's full talk at Re-Work Virtual Assistant Summit.

5:15 Don Howard is a philosophy professor and the former director of the John J. Reilly Center for Science, Technology, and Values at the University of Notre Dame.

7:45 Lydia Kaye of the Campaign Against Sex Robots.

11:25 Howard says that while robots that seem human can sometimes inspire people to treat them badly, other times they inspire what some might call excessive empathy. For example, soldiers in Iraq get very attached to IED-sniffing robots, and one woman was reportedly so grateful to her Roomba that she would give it days off and do the vacuuming herself.

12:00 Introducing Motherboard editorial fellow Louise Matsakis! Last summer, Louise got a strange email asking her to fill out a survey about how rationally her best friend Melissa behaves.

12:30 CFAR's website is at the impressive URL

14:14 The workshop used a lot of computer terminology. "On the first day, during the first session, one of the first things we had to do was create a bugs list."

15:40 Julia Galef, one of the cofounders of CFAR, compares the way humans think to the way a spam filter thinks.

19:55 Our brain's inability to think rationally about abstract or long-term issues is at the heart of some of the world's biggest problems, Galef says, so some kind of training becomes increasingly important.

22:30 That's our show! Please subscribe and rate us on iTunes.