
Google Says Its AI That Makes Phone Calls Isn't Good Enough to Be Dangerous

At a product demonstration, executives did their best to assuage fears about the unsettling new technology.

Google is hoping that the public will be less freaked out by its human-like, phone-call-making artificial intelligence once people get to try it for themselves. At a press demonstration this week, executives did their best to assuage fears about the technology’s creepy potential by emphasizing that the system isn’t even that good.

Over the summer, Google will slowly roll out new features powered by the AI, called Duplex, to a limited number of Google Assistant users. First, they’ll be able to have the app find out a business’s hours of operation; later, they’ll be able to book restaurant reservations and, eventually, hair appointments.


It sounds useful and convenient, but the lifelike quality of the AI, and the fact that Google chose to include what it calls “speech disfluencies” (ums and ahs), rubbed people the wrong way when Google debuted the technology at its I/O conference last month. It’s easy to imagine this technology being just a few steps away from systems that imitate a real human to scam you out of personal information over the phone, but Google’s executives said the company is a long way from that.

“A lot of questions came up that sort of were concerned with: ‘have you guys built a general AI that can be like a person and be in any phone call?’ and that’s very, very far from what we’ve actually done,” Scott Huffman, Google’s vice president of engineering, told a room of journalists during a product demo at a restaurant in New York City. “What we’ve actually done is created, basically, three trained automated systems that do very specific things: ask about business hours, book a reservation at a restaurant, and make appointments at hair salons. That really is all they can do.”

At the press demo this week, Google executives addressed many of the concerns that arose following the demonstration at I/O in May. The system now identifies itself as automated and powered by Google at the start of every call, for example, and notes that the call is being recorded, a disclosure Google execs said has always been part of the technology, but one they notably left out of the initial I/O demonstration.


“A lot of technology can be used for very good purposes and that same technology can be used for more nefarious purposes,” Nick Fox, Google’s vice president of product and design, told me. “It’s why we think our AI principles are really important and why we’re leading the way in terms of disclosures, and even the fact that it’s limited to concrete, specific use cases.”

To build this system, the team started by having human operators make hundreds of reservations over the phone, and record their conversations—Huffman told me the team went out to dinner a lot in the first few months of the project. They then annotated transcripts of those calls to identify specific parts of the transaction, like being asked the number of people in a party. This gave them a data set with which to train the AI, and they were able to start testing it by having the system make its own calls, with operators jumping in as needed.
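
Google hasn’t published the details of that pipeline, but the recipe Huffman described, labeling recorded calls with the pieces of a booking transaction and training on those labels, is a standard supervised-learning setup. Here’s a minimal sketch of what such slot-annotated training data might look like; every name, label, and line of dialogue below is hypothetical:

```python
# Hypothetical sketch of the slot-annotated call transcripts Google's
# team is said to have built: human-made reservation calls, transcribed,
# with the parts of the transaction labeled. All names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    speaker: str                 # "business" or "caller"
    text: str
    slot: Optional[str] = None   # the part of the transaction, if any

# One annotated call: the turn asking for party size is labeled, like
# the "number of people in a party" example mentioned in the article.
annotated_call = [
    Turn("business", "Thanks for calling, how can I help?"),
    Turn("caller",   "I'd like to book a table for Friday at 7."),
    Turn("business", "Sure, for how many people?", slot="party_size_request"),
    Turn("caller",   "Four people.",               slot="party_size"),
    Turn("business", "Got it, see you Friday at 7 for four."),
]

# A corpus of calls like this yields (utterance, label) pairs for
# training a classifier that recognizes what the business just asked.
training_pairs = [(t.text, t.slot) for t in annotated_call if t.slot]
print(training_pairs)
```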

Now, Duplex is able to successfully book four out of five reservations completely autonomously, according to Huffman. But Google still keeps a team of operators who step in when the system needs help, and the company continues to make improvements.
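
The article doesn’t say how that handoff works internally. One common pattern for this kind of human-in-the-loop system is a confidence gate, sketched below; the threshold, stub functions, and replies are all assumptions for illustration, not anything Google has described:

```python
import random

# Hypothetical confidence-gated handoff: the automated system answers a
# turn only when it is confident, and a human operator takes over
# otherwise. The threshold and both stubs are invented for illustration.
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, not a published figure

def model_predict(utterance: str) -> tuple:
    """Stand-in for the trained system: returns (reply, confidence)."""
    if "how many" in utterance.lower():
        return "A table for four, please.", 0.95
    return "Sorry, could you repeat that?", random.random()

def operator_take_over(utterance: str) -> str:
    """Stand-in for routing the live call to a human operator."""
    return f"[operator joins the call to handle: {utterance!r}]"

def respond(utterance: str) -> str:
    reply, confidence = model_predict(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply  # the system finishes the turn on its own
    return operator_take_over(utterance)

print(respond("Sure, for how many people?"))   # handled autonomously
print(respond("We only do walk-ins tonight.")) # may fall back to a human
```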

As for why it chose to make the system sound so lifelike, Google reps said that they got a lot of feedback on earlier iterations that sounded more computer-generated, and found that adding “um” and “ah” made the bot sound more polite.

“We’re not trying to trick people,” Huffman said. “The early version sounded bad. It didn’t sound natural. It didn’t sound good. The end result was that it didn’t work. Businesses would hang up and say ‘I don’t want to talk to a robot.’”

The reality is that every advancement in the field of AI tends to get people’s hackles up, and it’s good to be vigilant about privacy and safety concerns. When companies like Google give misleading demonstrations like the one at I/O, it’s understandable that people freak out. But we’re still a long way from a lot of the dystopian scenarios we envision. In response to the outcry, Google has now made it clear: its AI is only barely able to do the three very specific things it was designed for. It won’t be taking over the world just yet.
