Two months after a terrorist live-streamed the killing of 51 people at two New Zealand mosques on Facebook, the company announced new rules to prevent a repeat of that kind of incident.
But it's unclear whether the new rules would have prevented the gunman from broadcasting his attack.
“Anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example, 30 days – starting on their first offense,” Facebook VP of Integrity Guy Rosen said in a blog post.
For example, if a user shares a link to a statement from a terrorist group without context, they will now be immediately blocked from using Live for a set period of time. But there's no indication that the Christchurch gunman posted content on Facebook that would have violated its policies in this way.
The 28-year-old Australian posted a link to his manifesto, which described what he was going to do, on his Facebook account and elsewhere. Still, Ben Nimmo, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab, said that it would have been difficult to read the digital signs before the attack.
The gunman used other online social networks, notably the fringe message board 8chan, to promote the fact that he was going to carry out his terror attack, and though he used Live to stream the attack, he did not post links to the live stream on Facebook itself.
“In general, the Christchurch shooter kept his most offensive comments for 8chan, and even there, he only posted his intentions shortly before the shooting,” Nimmo told VICE News. “It’s an open question whether the link to the manifesto would have been enough to get him banned on Facebook, given the short time span involved.”
Facebook didn’t immediately respond to a question about whether the suspect would have violated its policies and been blocked from Facebook Live under the new one-strike rule.
New Zealand Prime Minister Jacinda Ardern, who is attending the summit in Paris, called Facebook’s move “a good first step,” but she and her fellow leaders will call on tech companies to do more to prevent terrorist content from spreading via their social networks.
The summit will see Ardern and French President Emmanuel Macron host leaders from around the world, including British Prime Minister Theresa May and Canadian Prime Minister Justin Trudeau, who will hear from tech executives like Twitter CEO Jack Dorsey.
The summit will launch an initiative called the “Christchurch Call,” a pledge designed to curb the spread of extremism online.
One of the areas of greatest concern for leaders is the ease with which bad actors can upload copies of terrorist material to Facebook and other platforms. For example, Facebook users slightly manipulated or edited the mosque shooting video, bypassing the Facebook algorithms designed to automatically remove copies of videos already identified as terrorist content.
Facebook admitted Wednesday that it could not solve its extremism problem on its own, and announced it was investing $7.5 million in new research partnerships with academics from the University of Maryland, Cornell University, and the University of California, Berkeley, to improve image and video analysis technology.
Cover: Founder and CEO of Facebook Mark Zuckerberg leaves after a meeting with French President Emmanuel Macron (not seen) at the Elysee Palace in Paris, France on May 10, 2019. (Photo by Mustafa Yalcin/Anadolu Agency/Getty Images)