The world in 2020 is the most bizarre it has ever been. Waking up each day feels like a further disconnect from reality, and every subsequent day feels more like living in poorly written dystopian fiction. But we’re sorry to tell you that things are only going to get more bizarre from here—and more dangerous.
In a study published on August 5 in the journal Crime Science, scientists from University College London ranked the ways artificial intelligence (AI) could be used to assist crimes over the next 15 years. And while Hollywood might have us believe that humanity will buckle under the autocracy of robots smarter than the humans who built them, it’s actually deepfakes—multimedia content that has been edited using AI to make the subject appear to be doing or saying something they have not actually done or said—that rank highest on this list in terms of the threat they pose.
The study was conducted through a workshop involving a threat assessment exercise by a diverse group of stakeholders from security, academia, public policy, and the private sector. Participants were given a list of crimes and asked to assess each one along different dimensions of threat severity: harm (to the victim and/or society), criminal profit (the realisation of a criminal aim), achievability (how feasible the crime would be), and defeatability (whether the measures to defeat it are simple or complex). The crimes ranged from familiar ones, like forgery, to relatively new ones, such as tricking facial recognition systems, using driverless vehicles as weapons, and deploying military robots.
After comparing 18 different types of AI threats, the group determined that deepfakes posed the greatest overall threat. After all, the harm they cause is the erosion of a value fundamental to our survival as human beings: trust.
"Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence, despite the long history of photographic trickery," explain the authors in the study. "But recent developments in deep learning have significantly increased the scope for the generation of fake content."
The technology, which has largely been used to create non-consensual porn, has also begun to infiltrate politics. The potential impact of these manipulations, however, ranges further—from scammers impersonating a family member, to videos designed to impersonate public figures and sow distrust, to blackmail material created through audio and video manipulation. And since these attacks are hard for individuals to detect, they are also difficult to stop.
Hence, the authors say that changes in citizen behaviour—such as generally distrusting visual evidence—might be the only effective defence. If even a small fraction of visual evidence can be convincingly faked, it becomes much easier to discredit genuine evidence. And while such behavioural changes might be necessary, they can themselves be considered an indirect societal harm of the deepfake threat.
The other top threats were autonomous cars used as remote weapons in vehicular terrorist attacks, AI-generated fake news, tailored phishing, and military robots. Unsurprisingly, threats like forgery and stalking—crimes that have been around far longer—along with burglar bots were ranked the lowest.
Professor Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, said: “We live in an ever changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur.”
Follow Satviki on Instagram.