

This Dystopian Wearable Detects AIs Pretending to Be Humans

The 'Anti-AI AI' will literally send a chill down your spine.

As AI algorithms that can impersonate the human voice get better and better, it might not be long before you pick up the phone and genuinely can't tell if you're talking to a human or a machine that's been trained to sound like one. In the far future, you might not be able to tell the difference in person, either.

In this vision of a future filled with computers pretending to be people, you might need a device like the Anti-AI AI. Designed as a fun proof-of-concept by Australian firm DT, the device uses essentially the same algorithms that impersonate human voices to detect whether you're being spoken to by a computer. If the Anti-AI AI clocks some synthesized vocal patterns, it uses a thermoelectric plate to literally send a chill down your spine. The prototype isn't pretty at the moment, but a video shows the device discriminating between a recording of the real Donald Trump and an AI-generated impersonator.


DT's mock-ups of the Anti-AI AI envision a sleek wearable that rests behind the ear. Imagine: It's the year 2060 and you're talking about your day at the cricket farm with a new barista at your usual coffee spot. Suddenly, you feel it working its way down your goosebump-covered neck. This isn't a person you're conversing with. Or, maybe it's 2023 and you're listening to the news. The anchor throws to some new, damning audio from America's latest horrorshow president. You're shocked, but doubly so when your AI speech-detecting wearable goes off.


Since DT is based in Australia, I couldn't ask them about the device, but a blog post on its website says that it took its team five days to create a working prototype of the Anti-AI AI using some popular machine learning tools. Its architecture seems pretty simple: the device streams audio via an iOS app to a cloud-based deep learning model built on Google's TensorFlow platform for AI developers. The model, the post says, was trained on samples of AI-synthesized voices to learn how to detect them.
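DT hasn't published the details of its model beyond what's in the blog post, but the pipeline it describes (audio in, a probability that the voice is synthetic out) can be sketched in miniature. The sketch below is purely illustrative: it stands in for the cloud-hosted TensorFlow model with a plain NumPy spectrogram-plus-logistic-classifier, and the weights are random placeholders, not a trained detector. The function names and parameters are my own assumptions, not DT's.

```python
import numpy as np

# Illustrative stand-in for the Anti-AI AI pipeline the post describes:
# audio frames -> spectrogram features -> binary classifier
# (synthetic vs. human voice). The real system streams audio to a
# cloud-hosted TensorFlow model; this toy version uses plain NumPy.

def spectrogram(audio, frame_size=256, hop=128):
    """Magnitude spectrogram via a windowed short-time FFT."""
    frames = [audio[i:i + frame_size]
              for i in range(0, len(audio) - frame_size + 1, hop)]
    window = np.hanning(frame_size)
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1))

def detect_synthetic(audio, weights, bias):
    """Average the spectrogram over time, apply a logistic classifier.
    Returns a probability that the voice is AI-generated."""
    features = spectrogram(audio).mean(axis=0)  # one vector per clip
    score = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))

# Toy usage: random weights stand in for a model trained on
# AI-synthesized voice samples, as DT's post describes.
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000)       # one second of placeholder audio
w = rng.standard_normal(129) * 0.01     # rfft of 256 samples -> 129 bins
p = detect_synthetic(clip, w, bias=0.0)
print(f"P(synthetic) = {p:.2f}")        # a value between 0 and 1
```

In the real device, a score past some threshold would be what fires the thermoelectric plate; here it just prints a probability.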

It's not clear how accurate DT's Anti-AI AI is, but since the AI model (and the entire system) was apparently hacked together in a matter of days, it's probably not very good. To DT's credit, they note it's a work in progress and posted all their code to GitHub, a site for coders to collaborate on open-source projects.

But even if the Anti-AI AI never becomes more than a curiosity or a thought experiment, I can't help but think that one day we might need something like it.
