MENLO PARK, California — Facebook had spent nearly three years building out its counterterrorism team, sharpening policies against violent content, and training machines to spot extremist propaganda before it gets posted.

But it all unraveled in 29 minutes.

That was how long it took for a Facebook user to flag an Australian white supremacist’s livestream of his murderous shooting rampage at two New Zealand mosques in March. The window gave supporters ample opportunity to share the video on the message board 8chan. Then they re-posted version after version on Facebook, swarming its defenses.
The company says it preemptively stopped 1.2 million attempted shares of the video within 24 hours. But 300,000 more, many of them with small tweaks, overwhelmed Facebook’s censors.

“Christchurch was seminal in the sense that the scale of the virality of the propaganda was something beyond anything we've seen before,” said Brian Fishman, head of Facebook’s counterterrorism and dangerous organizations team, which coordinated with New Zealand law enforcement as it frantically tracked down the content.

It was a humbling moment for the move-fast-and-break-things company. Over the six months since, Facebook has had to fundamentally relearn how it approaches the fight against terror.

The company previously saw that role largely as combating organized international groups like ISIS and al Qaeda. Fishman, an expert in those groups, built a team of 350 to track them down. Facebook now claims it can stop the vast majority of jihadist communications before they hit anyone’s News Feed.

But Christchurch was an explosive reminder to Fishman’s team that white supremacist terror is a more complex type of violent hatred. The movement is highly fragmented and global, with different strains in countries as disparate as Australia, Hungary, and the United States. Its supporters speak in their own highly coded language that’s difficult to detect. And the far-right ideologies they follow are ingrained in nationalist civic discourse, making it even harder to separate hate speech from political speech.
That’s partly why terrorist designation lists kept by the United Nations and individual governments often leave out white supremacists. In countries like the U.S., there’s also no explicit domestic terrorism law, making it difficult for authorities to intervene until a crime is committed. Case in point: the August attack targeting Latinx people at an El Paso Walmart that left 22 dead. The Department of Homeland Security only officially recognized white supremacy as a national security threat last month.
That has put Facebook in the position of creating its own standards around hate speech and terrorism where no legal framework exists. “Blocking white nationalist groups has political consequences that blocking jihadist groups doesn’t,” said Chris Meserole, a Brookings Institution fellow who studies terrorism and tech. “This is where content moderation gets really tricky.”

The countries where white supremacy is on the march, namely in North America and Europe, happen to drive a disproportionate chunk of the company’s $60 billion ad business. Conservative politicians in those same countries also court a far right that frames any criticism as an assault on freedom of speech.

“What we hear a lot from governments is that the tech sector should do more,” said Adam Hadley, director of Tech Against Terrorism, a U.N.-backed group that primarily advises smaller tech platforms like Pinterest. “What we’re lacking is what tech companies should do and how that aligns with the law.”
Facing white supremacy
Based around the world, Fishman’s team sharpened Facebook’s tools for locating content from terrorist groups, which the company defined at the time as non-state actors carrying out violence for political or ideological aims.

The team targeted known actors’ pages or profiles and booted them from the platform. It created an extensive bank of “hashes,” or digital fingerprints, to automatically identify known propaganda across the platform. And it used terrorist words and imagery provided by third-party analysts to train machine “classifiers” to evaluate posts the same way as the full-time, specially trained reviewers who Fishman said make up most of his 350-person team. (Facebook relies on more than 15,000 reviewers in total, many of them outside contractors, all of whom may field posts flagged for terrorism.)

Those efforts, coupled with sustained military pressure, had significantly cut into ISIS’ ability to publish by 2018, the Combating Terrorism Center found. Facebook says it catches 99% of the ISIS and al Qaeda propaganda it removes before a user even reports it, though those numbers are difficult to verify independently.

During this period, anti-hate groups clamored for Facebook to use similar tactics against white supremacists. “The attitude they had at the time was that they didn’t want to be arbiters of free speech,” said Heidi Beirich, who heads the Southern Poverty Law Center’s Intelligence Project. “It was a very libertarian attitude, and it was very unproductive from our standpoint.”

Then neo-Nazis used Facebook to help organize the deadly Unite the Right rally in Charlottesville in August 2017. The company began engaging more with civil society groups afterward. But President Donald Trump’s response, suggesting that far-right protesters included “very fine people,” hinted at the coming difficulty of fighting white supremacy.
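To make the “bank of hashes” idea described above concrete, here is a minimal sketch of how such a check might look. It is not Facebook’s implementation: the function names are invented for illustration, and production systems rely on perceptual fingerprints that tolerate small edits (Facebook has open-sourced algorithms such as PDQ for this), whereas the exact cryptographic hash used here only catches byte-identical re-uploads.

```python
import hashlib

# Hypothetical, simplified "hash bank" check (not Facebook's actual system).
# Production matching uses perceptual fingerprints that survive re-encoding or
# cropping; the exact SHA-256 digest below only flags byte-identical copies.

HASH_BANK: set[str] = set()  # fingerprints of content already designated as propaganda

def fingerprint(media: bytes) -> str:
    """Compute a fingerprint for an uploaded file (an exact hash in this sketch)."""
    return hashlib.sha256(media).hexdigest()

def bank_known_propaganda(media: bytes) -> None:
    """Add a confirmed piece of propaganda to the shared bank."""
    HASH_BANK.add(fingerprint(media))

def should_block(media: bytes) -> bool:
    """Block an upload whose fingerprint matches a banked one."""
    return fingerprint(media) in HASH_BANK

# A banked video is caught on re-upload, but changing even one byte defeats
# exact matching; closing that gap is what perceptual hashing and classifiers are for.
original = b"<video bytes>"
bank_known_propaganda(original)
assert should_block(original)
assert not should_block(original + b"\x00")
```

The last two lines are the crux of the Christchurch problem: a bank of known fingerprints stops identical re-uploads, while lightly tweaked copies require fuzzier matching or a trained classifier.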
Arbiters of free speech
“It’s the only form of terrorism that has a history of being state-sanctioned,” Peter Simi, a Chapman University professor who researches the far right, said of the U.S. “That is a really different relationship than what we have to Islamic extremism.”

Fishman’s team began pivoting toward the new threat by adding more than 200 white supremacist organizations, such as the U.K. and Canadian groups Blood & Honour and Combat 18, to its internal designation list. But there was also a learning curve in updating the technology used to automatically identify terrorist content on the platform.
“A classifier that works well against ISIS may not work so well against a bunch of neo-Nazis,” Fishman said.

The new terrain is also more difficult for artificial intelligence to navigate. White supremacists are adept at creating new content that Facebook hasn’t hashed and at changing their behavior to avoid security measures. The more vexing problem is that many on the far right don’t need an organized group to be radicalized.

“One of the trickier things about this movement, though — what we're seeing especially in the United States — is the sort of meme culture,” Fishman said. “Everything is a joke. It's just satire. It’s a blurry line between all of those things and radicalization. Building out the right policies to capture that while not going too far is a challenge.”
Facebook doesn’t publicize data on how much white supremacist content it removes before users see it. But even if the rate were to reach 99%, the same level as Islamic extremism, Christchurch showed just how dangerous that final 1% can be.

Company officials say Facebook’s censors didn’t prevent the attacker’s livestream because they hadn’t banked similar imagery of first-person shootings. As at least 800 variations of the video later flew across the platform, Fishman’s team widened its security tools’ reach to catch as many versions as possible, likely picking up some false positives in the form of counterspeech or news coverage.

“So all of those decisions have costs,” Fishman said. “Everything is a tradeoff.”

In New Zealand, however, the government eliminated the need for Facebook to make those decisions: It outlawed all content related to the shooting within days.

“When governments have the foresight to define the space and the content and what's illegal, it makes our lives a lot easier,” said Erin Saltman, a Facebook policy manager based in London.

That insight is crucial to understanding Facebook’s content moderation philosophy.

Following the shooting, the New Zealand government helped spearhead a global push to eliminate terrorist content online. Signatories to the “Christchurch Call” include 48 countries and major tech companies like Google, Twitter, and Amazon. Citing free speech concerns, the White House did not sign up.
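Fishman’s “everything is a tradeoff” point maps onto a simple mechanic in near-duplicate matching. As a hedged, toy illustration (the fingerprints and thresholds below are invented, not Facebook’s parameters): such systems typically compare bit-level fingerprints and accept anything within some distance of a banked item, so widening that distance catches more edited variants of a video but also begins to sweep in unrelated uses of the same frames, such as news coverage.

```python
# Toy illustration of the recall/precision tradeoff in near-duplicate matching.
# Fingerprints and thresholds are invented for this sketch, not real parameters.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def matches(candidate: int, banked: int, threshold: int) -> bool:
    """Treat an upload as a copy if it is within `threshold` bits of a banked item."""
    return hamming_distance(candidate, banked) <= threshold

banked_video = 0b1010_1100_1111_0000  # fingerprint of the banked footage (toy value)
edited_copy  = 0b1010_1100_1111_0011  # re-encoded variant: 2 bits differ
news_report  = 0b1010_0100_0101_0011  # news clip quoting a frame: 5 bits differ

assert not matches(edited_copy, banked_video, threshold=1)  # strict: variant slips through
assert matches(edited_copy, banked_video, threshold=3)      # looser: variant caught
assert matches(news_report, banked_video, threshold=6)      # loosest: false positive swept in
```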
‘This is a hard problem’
Facebook, which chairs the industry-backed Global Internet Forum to Counter Terrorism this year, has played an active role in many of these public-private conversations. The Big Tech firms have created lines of real-time communication among companies and governments for future attacks. They’ve funded research, created a shared database of 200,000 “hashes” of terrorist content, and shared best practices with smaller platforms like Snap, Dropbox, and Ask.fm.

Appearing together at the U.N. General Assembly last month, Facebook COO Sheryl Sandberg and New Zealand Prime Minister Jacinda Ardern announced the spinoff of the Global Internet Forum to Counter Terrorism into a standalone watchdog for the tech industry. They didn’t share many details on its funding, staffing, or timetable for launch.

“In the same way we respond to natural emergencies like fires and floods, we need to be prepared and ready to respond to a crisis like the one we experienced,” Ardern said. “There’s undoubtedly more work to be done.”
When Facebook doesn’t have that sort of government leadership, however, it can appear awfully bureaucratic. It took the company two months after Christchurch to prohibit users who’d violated its Community Standards from livestreaming, a policy change that would have prevented the gory video. The company says it has since begun training its AI to recognize first-person shooting imagery.
Those rule changes must go through what Facebook calls its Product Policy Forum. The process brings together staffers across teams to consult outside experts, hone enforcement mechanisms, and debate unintended consequences in countries around the world. Banning “white nationalism” in addition to “white supremacy,” a change many experts on the far right saw as eliminating a distinction without a difference, took months.

“We don’t want to rush and do more harm than good,” said Monika Bickert, Facebook’s head of global policy management and Fishman’s boss, who leads the meetings.

That’s open to considerable interpretation. Facebook broadened its definition of terrorism last month to emphasize violence intended to intimidate civilians, be it for political, ideological, or religious aims. Just over a week later, the company announced it would allow politicians to use hate speech so long as it doesn’t incite violence.

Such mixed messages have led to recurring friction between Facebook and outside researchers who still find considerable violent or hateful content on the site. Hany Farid, who studies digital forensics at UC Berkeley and works with the nonprofit Counter Extremism Project, fears the company avoids more aggressive policing to coddle right-wing users and protect its business model.

“You can’t have a network the size of Facebook because you’ve managed to figure out how to harness big data and do lots of complicated things, and say that this is a hard problem,” Farid said.