Tech by VICE

Thieves Used Audio Deepfake of a CEO to Steal $243,000

The heist is just a preview of how unprepared we are for AI-powered cybercrime.

by Edward Ongweso Jr
Sep 5 2019, 5:23pm

Image: Christopher Furlong / Staff

In what may be the world’s first AI-powered heist, synthetic audio was used to imitate a chief executive's voice and trick his subordinate into transferring over $240,000 into a secret account, The Wall Street Journal reported last week.

The company's insurer, Euler Hermes, provided new details to the Washington Post on Wednesday but refused to name the company involved. The company's managing director received a call late one afternoon in which what sounded like his superior's voice demanded that he wire money to a Hungarian account to save on "late-payment fines," with the financial details sent over email while the two were still on the phone. A spokeswoman from Euler Hermes said, "The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent."

The thieves behind the voice later called back to demand a second payment, which raised the managing director's suspicions and prompted him to call his boss directly. In an email to Euler Hermes, the director said that the synthetic "'Johannes' was demanding to speak to me whilst I was still on the phone to the real Johannes!"

Over the past few years, deepfakes have grown increasingly sophisticated. Online platforms fail to detect them, and companies struggle with how to handle the resulting fallout. The constant evolution of deepfakes means that simply detecting them will never be enough, thanks to the nature of the modern internet, which guarantees them an audience by monetizing attention and fostering the production of viral content. This past June, convincing deepfakes of Mark Zuckerberg were published to Instagram and kept up, shortly after Facebook refused to delete a manipulated video of Nancy Pelosi. There is still no clear consensus on how Facebook should have handled that situation or future ones.

All of this is exacerbated by the data monetization models of companies like Facebook and Google. Techno-sociologist Zeynep Tufekci warns that companies like Facebook rely on creating a "persuasion architecture" that "make[s] us more pliable for ads [while] also organizing our political, personal and social information flows." That core dynamic, combined with the constant evolution of deepfake technology, means this problem will likely get worse across all online platforms unless the companies behind them can be convinced to change their business models.
