How Facebook, Twitter and YouTube failed to keep gruesome mosque shooting video from going viral

Facebook's much-vaunted artificial intelligence systems and human monitors did not stop it. A call from the police did.

“I will carry out an attack against the invaders, and will even live stream the attack via Facebook.”

That's what Brenton Tarrant posted on fringe message board 8Chan moments before he allegedly massacred at least 49 people in shootings at two mosques in Christchurch, New Zealand, on Friday.

The suspect was true to his word. Using a GoPro camera strapped to his head, he gave anyone watching his Facebook page a first-person view of the gruesome murder of people at Friday prayers at the Masjid Al Noor mosque.


The stream showed Tarrant driving to the mosque, getting rifles out of his car, and then appearing to enter the mosque and shoot victims indiscriminately, before getting back in his car and driving to the Linwood Islamic Centre. After 17 minutes of broadcasting to the world, Facebook finally pulled the plug.

But it wasn’t Facebook’s much-vaunted artificial intelligence systems that flagged the content, or the company’s human moderators. It took a phone call from the New Zealand police to alert the world’s biggest social network to the live murders being broadcast on its platform, Facebook confirmed.

In the hours after the massacre played out on Facebook, the video was copied and re-uploaded hundreds of times to Facebook, YouTube, and Twitter. The tech giants say they are working hard to stop the video being shared, but it's still easily searchable on all platforms more than 12 hours after the attack.

Critics say the massacre is the latest failure of tech platforms to deal with the spread of extremist content on their networks.

"It is not actually difficult once you identify a video, you can hash it, and you can prevent any video from being re-uploaded and disseminated online. It rings hollow. I don't believe them, I don't believe that they are really making an effort to remove this horrific content,” Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization, told VICE News.


“Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this,” the company told VICE News. But when we searched one of the most popular hashtags related to the shooting just before publication, it returned results with a copy of the video right at the top.

Facebook has struggled to moderate its live video streaming service, according to internal documents seen by Motherboard, as well as testimony from senior Facebook employees.

“I’m not sure how this video was able to stream for [17] minutes,” a source with direct knowledge of Facebook’s content moderation strategies told the website.

Google said “shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it.”

But on YouTube, a copy of the full video was on the first page of results after searching for a generic shooting-related term. While VICE News was unable to find videos hosted directly on Facebook, there were numerous links to YouTube promising unedited clips of the video.

Facebook did not immediately respond to requests for comment.

Facebook has previously said it removes 99 percent of terrorist content on its platform automatically, but experts say that artificial intelligence is still a decade away from being proficient in dealing with this type of content.

Creighton says Facebook’s boasting about statistics is meaningless given the huge amount of terrorist content being shared online.


"This is complete fiction from Facebook. It is the typical Facebook massaging of fact, and frankly I don't believe them. They are not transparent, they are not telling us what they are removing, what the criteria are. It is just so opaque,” Creighton said.

There are several new pieces of legislation moving through the European Parliament designed to punish tech companies for failing to remove terrorist content on their platforms, but with EU elections approaching, it’s unclear if they will be implemented any time soon.

Tarrant leveraged multiple online platforms to plan and promote his attack. He posted pictures of the guns used in the attack on his Twitter account Wednesday, and posted his manifesto, which describes the attack, on his Facebook and Twitter accounts.

But Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab, says that it would have been virtually impossible to read the digital signs ahead of the attack — and Tarrant’s social media posts were designed to be read after he conducted the operation.

"He was thinking he doesn't want to give himself away too much [with his social media posts], but it looks like this was set up so people would read it afterward,” Nimmo told VICE News. “That Twitter account is almost like the footnotes to his operation.”

The manifesto also shows Tarrant was well versed in far-right ideology and online memes. He says he learned everything on the internet: “You will not find the truth anywhere else.”

But many experts warn not to sensationalize the content of Tarrant’s diatribe, saying it was designed to troll media organizations.

“It’s bait for journalists who don’t know how memetic culture and racism are intertwined,” Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center, told VICE News. “Journalists are left with enough clues to want to decode it, and they shouldn’t. It’s racist garbage that doesn’t point to any significant insights.”

“The motive is clear here: bigoted xenophobic racist religious intolerance,” she said.

Cover image: Members of the public react in front of the Masjid Al Noor mosque as they fear for their relatives on March 15, 2019, in Christchurch, New Zealand. (Photo by Kai Schwoerer/Getty Images)