How to Tell If You’re Talking to a Robot

Mundane small talk could unravel an artificially intelligent phone assistant's otherwise-realistic speech patterns.

Google’s new Duplex is an artificial intelligence speech system that sets up appointments and performs other clerical tasks, like a receptionist or human assistant might. It sounds pretty real—it stammers a little, hums and pauses, and mostly nails the tone of someone mildly uncomfortable with being on the phone in the first place. It’s creepy partially because it sounds similar to how we talk, but mainly because we might not be able to tell if we’re speaking with a human or a machine.


As someone who’s unintentionally mastered the dark art of getting a human on the line by yelling obscenities at my internet service provider’s automated phone tree, I’m ambivalent about Google Duplex taking over administrative phone tasks. I’m certainly not delighted by the idea of Google continuing to eat the world, and dehumanization sucks as a rule, but Duplex is what we get when Silicon Valley is fixated on AI as a solution to all of its social ills.

In an ideal future, Google would make good on its word to take the implications of AI and user consent seriously, and figure out ways to incorporate opt-in procedures and warnings that you’re speaking with an AI. But if it doesn’t, or if another company more quietly develops a similar system, we humans might want to know how to trip the system up and figure out who—or what—we’re speaking with.

“If you ask the same system whether it prefers orange juice to lobster, or whether it likes the warm temperatures we are currently experiencing, it will have noticeable problems responding.”

Duplex, like many AI speech systems currently on the market, is limited to closed domains—narrow sets of questions that it can reliably handle, such as “What time?” and “What’s your name?” You know, the kinds of routine scripts that receptionists use when scheduling doctor appointments or reserving tables at a restaurant. “Duplex can only carry out natural conversations after being deeply trained in such domains,” according to a Google blog post. “It cannot carry out general conversations.”


This is Duplex’s clear weak spot: improvisation. Asking questions outside of the set social scripts it’s given to work with is the fastest way to determine whether you’re speaking to a robot or a real person. Ask a question that’s nonsense or off-topic, David Gunkel, a professor at Northern Illinois University, told me in an email, and Duplex will falter enough to either reveal that it’s a machine or put a human on the line.

“Duplex can carry out natural-sounding conversations in a specific area, like customer service for the cable company,” Gunkel said. “But if you ask the same system whether it prefers orange juice to lobster, or whether it likes the warm temperatures we are currently experiencing, it will have noticeable problems responding.”

The meaningless small talk that so many people loathe could bring the curtain down on a robot assistant pretty quickly, it turns out. If an exchange goes awry, Google says a human will swoop in and take over the call. If I repeatedly ask Duplex, “How’s the weather where you are?” and the system hasn’t prepared that answer, I could theoretically get a person on the line.
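The closed-domain behavior described above can be sketched in code. The snippet below is a hypothetical illustration, not Google’s implementation: the trigger phrases, replies, and two-miss escalation threshold are all invented for the example. It matches each caller turn against a small scripted domain and hands the call to a human once the caller goes off-script twice in a row.

```python
# Hypothetical sketch of a closed-domain phone assistant.
# Triggers, replies, and the escalation threshold are invented
# for illustration; Duplex itself uses trained models, not keywords.

SCRIPTED_INTENTS = {
    "what time": "We open at 9 AM.",
    "what's your name": "This is the restaurant's booking assistant.",
    "how many people": "The reservation is for two, please.",
}

def handle_call(utterances):
    """Reply to each caller turn; escalate after two straight misses."""
    misses = 0
    replies = []
    for text in utterances:
        # Find the first scripted trigger contained in the caller's turn.
        matched = next(
            (reply for trigger, reply in SCRIPTED_INTENTS.items()
             if trigger in text.lower()),
            None,
        )
        if matched:
            misses = 0
            replies.append(matched)
        else:
            # Off-topic small talk ("How's the weather where you are?")
            # falls outside the closed domain.
            misses += 1
            if misses >= 2:
                replies.append("Let me get a person on the line.")
                break
            replies.append("Sorry, could you repeat that?")
    return replies
```

A real system would use trained speech and language models rather than keyword matching, but the escalation logic—fall back to a human once the conversation leaves the trained domain—is the same idea Google describes.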

AI academics and the general public are rightly concerned about consent to having phone calls data-mined, and about replacing the nuance of human connection with robots. But Duplex is a predictable evolution of how we already acknowledge and participate in phone interactions with robots, a trade-off we’ve been making in little bits ever since the first “This call may be recorded for quality purposes” warning when you call customer service.

“From that point forward, everything you say to the human or AI operator is being recorded, warehoused, and data mined,” Gunkel told me. “This will not change when Duplex takes over the conversation.”