
Facebook Went to War Against White Supremacist Terror After Christchurch. Will It Work?

Facebook’s 350-person counterterrorism team is retraining its tools for far-right meme culture.

MENLO PARK, California — Facebook had spent nearly three years building out its counterterrorism team, sharpening policies against violent content, and training machines to spot extremist propaganda before it gets posted.

But it all unraveled in 29 minutes.

That was how long it took for a Facebook user to flag an Australian white supremacist’s livestream of his murderous shooting rampage at two New Zealand mosques in March. The window gave supporters ample opportunity to share the video on the message board 8chan. Then they re-posted version after version on Facebook, swarming its defenses.

The company says that it preemptively stopped 1.2 million attempted shares of the video within 24 hours. But 300,000 more, many of them with small tweaks, overwhelmed Facebook’s censors.

“Christchurch was seminal in the sense that the scale of the virality of the propaganda was something beyond anything we've seen before,” said Brian Fishman, head of Facebook’s counterterrorism and dangerous organizations team, which coordinated with New Zealand law enforcement as it frantically tracked down the content.

It was a humbling moment for the move-fast-and-break-things company. Over the six months since, Facebook has had to fundamentally relearn how it approaches the fight against terror.

The company previously saw that role largely as combating organized, international networks like ISIS and al Qaeda. Fishman, an expert in those groups, built a team of 350 to track them down. Facebook now claims it can stop the vast majority of jihadist communications before they hit anyone’s News Feed.

But Christchurch was an explosive reminder to Fishman’s team that white supremacist terror is a more complex type of violent hatred. The movement is highly fragmented and global, with different strains in countries as disparate as Australia, Hungary, and the United States. Its supporters speak in their own highly coded language that’s difficult to detect. And the far-right ideologies they follow are ingrained in nationalist civic discourse, making it even more difficult to separate hate speech from political speech.

That’s partly why terrorist designation lists kept by the United Nations and individual governments often leave out white supremacists. In countries like the U.S., there’s also no explicit domestic terrorism law, making it difficult for authorities to intervene until a crime is committed. Case in point: the August attack targeting Latinx people at an El Paso Walmart that left 22 dead. The Department of Homeland Security only officially recognized white supremacy as a national security threat last month.

“Blocking white nationalist groups has political consequences that blocking jihadist groups doesn’t.”

It has put Facebook in the position of creating its own standards around hate speech and terrorism where no legal framework exists. “Blocking white nationalist groups has political consequences that blocking jihadist groups doesn’t,” said Chris Meserole, a Brookings Institution fellow who studies terrorism and tech. “This is where content moderation gets really tricky.”

The countries where white supremacy is on the march, chiefly North America and Europe, happen to drive a disproportionate chunk of the company’s $60 billion ad business. Conservative politicians in those same countries also court a far right that frames any criticism as an assault on freedom of speech.

“What we hear a lot from governments is that the tech sector should do more,” said Adam Hadley, director of Tech Against Terrorism, a U.N.-backed group that primarily advises smaller tech platforms like Pinterest. “What we’re lacking is what tech companies should do and how that aligns with the law.”

Facing white supremacy

Fishman is at the center of the quagmire, and chasing down white supremacists presents a new personal challenge for him after a career trying to outsmart jihadis.

“One of the questions that we struggle with, and I think the larger community needs to struggle with, is that we're never going to be perfect,” Fishman told VICE News in Facebook’s Menlo Park headquarters, where signs in the brightly colored hallways tout the virtues of connecting the world. “So what is good enough?”

When Fishman arrived at Facebook in early 2016, the company had no dedicated counterterrorism team despite mounting evidence it needed one. ISIS had amplified its message on Twitter and Facebook, among other tools, and governments were clamoring for the companies to take action.

Fishman brought a traditional national-security pedigree to Menlo Park. A former research director at West Point’s Combating Terrorism Center, Fishman studied al Qaeda extensively and wrote a book about the historical origins of ISIS.

Step one at Facebook was undercutting the media strategies of groups like ISIS, which had a centralized propaganda arm. So Fishman built out a SWAT team of seven academics and policy experts to guide the company’s engineers as they disrupted those networks.

“There's this concept in engineering that eventually you can automate everything,” said Sherif, an Egyptian-born policy manager at Facebook, who asked that his last name not be used out of fear for his safety. “And part of our job is to be like, Well, maybe not.”

Based around the world, the team sharpened Facebook’s tools for locating content from terrorist groups, which the company defined at the time as non-state actors carrying out violence for political or ideological aims.

The team targeted known actors’ pages or profiles and booted them from the platform. It created an extensive bank of “hashes,” or digital fingerprints, to automatically identify known propaganda across the platform. And it used terrorist words and imagery provided by third-party analysts to train machine “classifiers” to evaluate posts the same way as the full-time, specially trained reviewers who Fishman said comprise most of his 350-person team. (Facebook relies on more than 15,000 total reviewers, many of whom are outside contractors, all of whom may field posts flagged for terrorism.)
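As a rough illustration of how a hash bank like that can work, here is a minimal Python sketch of fingerprint matching against known propaganda. It is a simplification built on assumptions: Facebook’s actual systems are not public, the function names are invented, and the exact cryptographic hash used here only catches byte-identical copies, whereas production content matching typically relies on perceptual hashes that tolerate the kind of small tweaks that flooded the platform after Christchurch.

```python
import hashlib

# Hypothetical bank of fingerprints for content already confirmed as terrorist propaganda.
known_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint for an uploaded file (SHA-256 in this sketch)."""
    return hashlib.sha256(data).hexdigest()

def register_known_content(data: bytes) -> None:
    """Add a confirmed piece of propaganda to the hash bank."""
    known_hashes.add(fingerprint(data))

def should_block(data: bytes) -> bool:
    """Block an upload whose fingerprint matches the bank.

    An exact hash only catches byte-identical re-uploads; copies that are
    re-encoded or slightly edited change the fingerprint and slip through,
    which is why production systems lean on perceptual hashing instead.
    """
    return fingerprint(data) in known_hashes

# A banked video is caught on re-upload; a tweaked copy is not.
register_known_content(b"bytes of a known propaganda video")
print(should_block(b"bytes of a known propaganda video"))   # True
print(should_block(b"bytes of a slightly edited copy"))     # False
```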

Those efforts, coupled with sustained military pressure, had significantly cut down ISIS’ ability to publish by 2018, the Combating Terrorism Center found. Facebook says it catches 99% of the ISIS and al Qaeda propaganda it removes before a user even reports it, though it is difficult to independently verify those numbers.

Arbiters of free speech

During this period, anti-hate groups clamored for Facebook to use similar tactics against white supremacists. “The attitude they had at the time was that they didn’t want to be arbiters of free speech,” said Heidi Beirich, who heads the Southern Poverty Law Center’s Intelligence Project. “It was a very libertarian attitude, and it was very unproductive from our standpoint.”

Then neo-Nazis used Facebook to help organize the deadly Unite the Right Rally in Charlottesville in August 2017. The company began engaging more with civil society groups afterward. But President Donald Trump’s response, suggesting that far-right protesters included “very fine people,” hinted at the coming difficulty in fighting white supremacy.

“It’s the only form of terrorism that has a history of being state-sanctioned,” Peter Simi, a Chapman University professor who researches the far right, said of the U.S. “That is a really different relationship than what we have to Islamic extremism.”

Fishman’s team began pivoting toward the new threat by adding more than 200 white supremacist organizations, such as the U.K. and Canadian groups Blood & Honour and Combat 18, to its internal designation list. But there was also a learning curve in updating the technology used for automatically identifying terrorist content on the platform.

“A classifier that works well against ISIS may not work so well against a bunch of neo-Nazis.”

“A classifier that works well against ISIS may not work so well against a bunch of neo-Nazis,” Fishman said.

The new terrain is also more difficult for artificial intelligence to navigate. White supremacists are adept at creating new content that Facebook hasn’t hashed and at changing their behavior to avoid security measures. The more vexing problem is that many on the far right don’t need an organized group to be radicalized.

“One of the trickier things about this movement, though — what we're seeing especially in the United States — is the sort of meme culture,” Fishman said. “Everything is a joke. It's just satire. It’s a blurry line between all of those things and radicalization. Building out the right policies to capture that while not going too far is a challenge.”

Facebook doesn’t publicize data on how much white supremacist content it removes before users see it. But even if the rate were to reach 99%, the same level as Islamic extremism, Christchurch showed just how dangerous that final 1% can be.

‘This is a hard problem’

Company officials say that Facebook’s censors didn’t prevent the attacker’s livestream because they hadn’t banked similar imagery of first-person shootings. As at least 800 variations of the video later flew across the platform, Fishman’s team widened its security tools’ reach to catch as many versions as possible, likely picking up some false positives in the form of counterspeech or news coverage.

“So all of those decisions have costs,” Fishman said. “Everything is a tradeoff.”
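One way to picture that tradeoff, under the assumption that the matching is score-based: loosening a similarity threshold catches more altered copies of a banked video but also sweeps in unrelated material such as news coverage. The scores and threshold values in this sketch are invented purely for illustration.

```python
# Hypothetical similarity scores between new uploads and a banked video
# (1.0 means identical). All values are invented for illustration.
uploads = {
    "identical re-upload": 0.99,
    "re-encoded copy with a watermark": 0.82,
    "news report quoting a short clip": 0.55,
    "unrelated video": 0.08,
}

def flagged(threshold: float) -> list[str]:
    """Return the uploads whose similarity to banked content clears the threshold."""
    return [name for name, score in uploads.items() if score >= threshold]

# A strict threshold misses the altered copy; a looser one also flags the news report.
print(flagged(0.90))  # ['identical re-upload']
print(flagged(0.50))  # includes the news report: a likely false positive
```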

In New Zealand, however, the government eliminated the need for Facebook to make those decisions: It outlawed all content related to the shooting within days.

“When governments have the foresight to define the space and the content and what's illegal, it makes our lives a lot easier,” said Erin Saltman, a Facebook policy manager based in London.

That insight is crucial to understanding Facebook’s content moderation philosophy.

Following the shooting, the New Zealand government helped spearhead a global push to eliminate terrorist content online. Signatories to the “Christchurch Call” include 48 countries and major tech companies like Google, Twitter, and Amazon. Citing free speech concerns, the White House did not sign up.

Facebook, which chairs the industry-backed Global Internet Forum to Counter Terrorism this year, has played an active role in many of these public-private conversations. The Big Tech firms have created lines of real-time communication among companies and governments for future attacks. They’ve funded research, created a shared database of 200,000 “hashes” of terrorist content, and shared best practices with smaller platforms like Snap, Dropbox, and Ask.fm.

Appearing together at the U.N. General Assembly last month, Facebook COO Sheryl Sandberg and New Zealand Prime Minister Jacinda Ardern announced the spinoff of the Global Internet Forum to Counter Terrorism into a standalone watchdog for the tech industry. They didn’t share many details on its funding, staffing, or timetable for launch.

“In the same way we respond to natural emergencies like fires and floods, we need to be prepared and ready to respond to a crisis like the one we experienced,” Ardern said. “There’s undoubtedly more work to be done.”

(L-R) Nick Pickles, Twitter’s head of public policy and government; Brian Fishman, Facebook’s counterterrorism policy lead; Nicklas Lundblad, Google’s vice president of public policy and government relations for Europe, the Middle East, and Africa; and John Frank, Microsoft’s vice president of EU government affairs, at the Ministerial Meeting of the G7 Ministers of the Interior in October 2017. (ANSA via AP)

When Facebook doesn’t have that sort of government leadership, however, it can appear awfully bureaucratic. It took the company two months after Christchurch to prohibit users who’d violated its Community Standards from livestreaming, a policy change that would have prevented the gory video. The company says it has since begun training its AI to recognize first-person shooting imagery.

Those rule changes must go through what Facebook calls its Product Policy Forum. The process brings together staffers across teams to consult outside experts, hone enforcement mechanisms, and debate unintended consequences in countries around the world.

In the case of banning “white nationalism” in addition to “white supremacy,” a change that many experts on the far right saw as eliminating a distinction without a difference, the process took months.

“We don’t want to rush and do more harm than good,” said Monika Bickert, Facebook’s head of global policy management and Fishman’s boss, who leads the meetings.

That’s open to considerable interpretation. Facebook broadened its definition of terrorism last month to emphasize violence intended to intimidate civilians, be it for political, ideological, or religious aims. Just over a week later, the company announced it would allow politicians to use hate speech so long as it doesn’t incite violence.

Mixed messages have led to recurring friction between Facebook and outside researchers who still find considerable violent or hateful content on the site. Hany Farid, who studies digital forensics at UC Berkeley and works with the nonprofit Counter Extremism Project, fears that the company avoids more aggressive policing to coddle right-wing users and protect its business model.

“You can’t have a network the size of Facebook because you’ve managed to figure out how to harness big data and do lots of complicated things, and say that this is a hard problem,” Farid said.

What about WhatsApp?

Inside and outside of Facebook, there’s a growing sense of urgency to curb terrorist content as the company shifts its strategy. CEO Mark Zuckerberg announced in March that Facebook would encrypt more of its messaging services in response to user demands for privacy. Encryption is also likely to give white supremacists and other dangerous groups new openings.

“While they’re putting more resources toward policing the content on their sites, their businesses are moving in a direction that will make it more difficult,” said William Braniff, who heads the National Consortium for the Study of Terrorism and Responses to Terrorism and considers Fishman a close friend. “That’s a reality that we’re all going to be navigating.”

There is some precedent for the challenge at Facebook-owned WhatsApp. Though the company can still ban accounts and share metadata that might help law enforcement, the content of users’ messages is untouchable. The results have been mixed: WhatsApp has been a tool for spreading disinformation in Brazil and organizing lynch mobs in India.

Fishman said that outside analysts are crucial in helping Facebook eliminate such blind spots going forward. But watchdogs have little internal data to work with, and encryption adds another barrier to entry. Fishman added that the content-matching technologies that have proved useful on Facebook and Instagram could run so slowly as to make the platforms unusable.

“In some ways, I think this is going to be a moment for real innovation as we get into more of an encrypted world,” Fishman said, adding that Facebook’s forthcoming cryptocurrency will also bring security concerns. “And I don't know what exactly it's going to look like, in part because I don't know exactly what the product is going to look like.”

It’s a huge question mark. And if Facebook’s products continue their breakneck growth as the world’s de facto communication platforms, it will only become more consequential. There will be more public pressure, from more angles, for Facebook to make more real-time calls on all forms of domestic terrorism.

“The t-word — t being terrorism — is such a loaded one,” Fishman said. “As a former academic, I deeply wish it wasn't, and we could just talk objectively about what a terrorist is and what they are not. It just doesn't work that way, though.”

Take Myanmar, where misinformation and hate speech about minority Rohingya citizens helped fuel a state-backed ethnic cleansing campaign that caught Facebook off guard. It was domestic terrorism by another name. In response to criticism of Facebook’s role in the violence, Fishman’s team is now considering new policies geared toward state actors.

“What I would love from this community — the global community — is real clarity: When do they want us to step on sovereignty? And when do they want us to respect it?” Fishman said. “That needs to be a principle that can be applied globally, across the board, in all these types of circumstances.”

In other words — to use a favorite Silicon Valley term — guidance on what types of violence are unacceptable needs to scale.

“They ask me for the impossible all the time,” Fishman added. “So now I'm asking them for the impossible.”

Cover illustration: Hunter French