Indonesia may lose the war on fake news as new technologies emerge online that make it possible to paste one person’s face onto someone else’s body in a video so realistic-looking that it could fool anyone but a trained expert. The technology is called "deepfakes," and it opens the door in Indonesia to an eerie future straight out of Black Mirror, one where we can’t even trust our own eyes, right as the country is about to head into both a nationwide election in June and a heated presidential race in 14 months’ time.
“The people who see this kind of fake video, it can influence their beliefs,” Semuel Abrijani Pangerapan of the Indonesian communications ministry tells VICE. His department filters and blocks "controversial" or illegal content before it’s widely circulated. With elections coming up, it’s expecting to cope with a surge of material intended to harm or discredit political opponents.
Deepfakes porn videos of celebrities like Emma Watson and Daisy Ridley were the first to hit the internet, prompting a quick ban by the popular tube site PornHub. The story, broken by Motherboard reporter Samantha Cole, has inspired a lot of hand-wringing in the United States, where journalists were quick to pen thinkpieces like “Why Reddit’s face-swapping celebrity porn craze is a harbinger of dystopia.”
The deepfakes community continues to thrive on sites like Reddit, where people are using the technology to create fake revenge porn of their exes, sex tapes of their crushes, and clips recasting a young Harrison Ford as Han Solo in the new Star Wars origin film Solo: A Star Wars Story.
Here in Indonesia, fake news is a national conversation, but the specific threat of deepfakes, and the potential repercussions of such a technology, hasn’t been widely addressed.
We might soon see manufactured videos of politicians involved in corruption or abusing a child, Yohan Misero, a researcher at Indonesia’s Community Legal Aid Foundation, tells VICE.
Imagine a future where it’s nearly impossible for voters to understand what’s a political “black campaign,” and what isn’t. One where the visage of a political rival can be inserted into almost any “leaked” video—a sex tape, a riot, a controversial meeting.
That’s where we are now. All you need are several hundred images of the face you intend to swap in, a main video clip with a similar enough body, and a bit of luck, and you have a new deepfakes video ready to take down, or at least damage, your political rivals.
The scariest part of all of this is how much impact far simpler video and photo manipulations have already had on Indonesia’s political scene.
A video edited to distort a speech by then Jakarta Governor Basuki Tjahaja Purnama lost the man an election and landed him behind bars on a blasphemy charge, despite the fact that the man who allegedly edited the video was also charged with spreading hate speech. That was a simple trim job, an edit that anyone could do with even the most basic video editing apps.
But these deepfakes are so far beyond the Basuki video—to the point that it might force internet regulators to rethink how they approach controversial or illegal content online.
Semuel told VICE that even today only 50 to 60 percent of the porn, fake news stories, and hoaxes already online are caught by government censors.
“Fifty to sixty percent is considered a ‘good result’,” Semuel said.
The process is often a tedious mix of algorithmic web crawling and old-fashioned manual labor. The ministry has software that scours the internet looking for potentially offending websites and blocks them. The algorithm learns which websites are more likely to be blocked in Indonesia and then searches out similar hits. That’s how the ministry was able to block 100,000 porn sites last month alone.
It takes a real person to understand the context behind the ministry’s decision to deem content illegal, Semuel explained.
“An image of a politician with a pig’s head might be considered harmless elsewhere, but in Indonesia this signifies that the person can be targeted,” he told VICE.
To many in Muslim-majority Indonesia, pigs are dirty creatures deemed haram, or forbidden, by the Quran. That’s why the ministry’s work is so time-consuming.
But this method only works on the actual internet. The ministry’s software doesn’t crawl people’s phones or WhatsApp groups for banned content. In Indonesia, this distinction matters. News, memes, and image macros spread with remarkable speed through Indonesia’s collective WhatsApp group chat. These groups are already a hotbed of fake news and hoaxes, as well as salacious material, so what are the odds people won’t spread an incredibly controversial deepfakes video around? Pretty slim, said Lurino Bertorani, a researcher at the data analytics firm Dattabot.
“I’m sure any fake porn video featuring a celebrity or politician would be distributed rapidly and widely enough to every nook and cranny in Indonesia [well] before anyone could ever prove that it was fake,” he told VICE.
How, exactly, would the communication ministry, and law enforcement, operate in a world where sex tapes can be so easily faked? Pornography, especially its creation and distribution, is illegal in Indonesia and punishable with up to six years in prison. It’s perfectly legal to make your own sex tape for your own private viewing. But spreading that same sex tape is illegal, even if you weren’t the one who forwarded it to all of your friends.
The pop singer Nazril Irham, a man known as “Ariel,” was sentenced in 2010 to three-and-a-half years in prison when some of his sex tapes hit the internet. The court heard evidence that it was Ariel’s personal assistant who leaked the videos, but Ariel was found guilty regardless because he was “careless” with how he stored the videos on his computer.
That scandal was over actual sex tapes. But what about deepfakes? How can the government figure out who was actually involved in the production and distribution of a sex tape if the entire thing can be faked?
One legal expert told VICE that experts can still spot the difference between a legit photo or video and one that’s been manipulated with some kind of editing software. The video’s metadata and artifacts could give it away as a fake, said Yohan, the researcher at LBH Jakarta.
But others aren’t so sure. Advances in digital forensics have lagged behind those in video, explained Hany Farid, a scientist who studies fake videos, in an interview with Nature. Lurino, the Dattabot researcher, told VICE that experts were “very concerned” about the increasing sophistication of fakes.
And by the time a deepfakes video makes its way before an expert for review, the damage may already be done. There’s just no way to analyze and prove a video is a fake before it’s sent out to millions of people online.
“As we’ve already experienced with doctored fake photos, it’s the spread through casual users that is more dangerous than the production itself,” Lurino said.
The best place to catch these kinds of fakes would be the platforms used to share fake news and hoaxes. But even when social media sites like Facebook try to flag fake news, the results seem to have little impact on users. In fact, it might make some users believe a fake story in their feed even more, because if it passed Facebook’s algorithms then it must be real.
Semuel, at the communication ministry, told VICE that the only real solution is to better educate people so they can spot what’s real and what isn’t. If the country can teach people how to spot a fake, then the government won’t need to be constantly blocking websites, he explained.
“The goal is to educate people so they can block it themselves,” Semuel said. “They can decide ‘I don’t want to see this kind of content. I don’t want to visit this kind of website. This is junk.’”