Facebook will use artificial intelligence to fight “terrorist content”


Facing criticism that it doesn’t do enough to combat online extremism, Facebook announced Thursday it will step up efforts to eradicate “terrorist content” from its platform.

“We agree with those who say that social media should not be a place where terrorists have a voice,” Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, its counterterrorism policy manager, wrote in a blog post. “We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission. Our stance is simple: There’s no place on Facebook for terrorism.”


Along with companies like Twitter and YouTube, Facebook has long faced complaints that it’s not tough enough on online extremism in an age when groups like the Islamic State have intensive social media efforts to radicalize and recruit new members. Of the more than 20,000 foreign fighters who joined jihadi groups in Syria by late 2015, many chose to join ISIS because of its brutally effective online strategy, a Brookings Institution report found.

In their post, Bickert and Fishman outlined several steps Facebook will take to fight what they called terrorist content, such as using artificial intelligence to stop people from uploading photos and videos — from propaganda to grisly beheadings — that have already been flagged and taken down. Facebook also plans to develop tools to find and remove text that praises terrorist groups, to “fan out” from terrorist pages, posts, and profiles in order to find others like them, and to stop terrorists from simply making new accounts after old ones get deleted.
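The first of those measures, blocking re-uploads of images that reviewers have already taken down, is at heart a matching problem. Below is a minimal sketch of that idea, assuming a simple 8x8 average hash and a hypothetical blocklist of hashes from previously removed content; Facebook has not disclosed its actual technique, and production systems rely on far more robust perceptual hashing.

```python
# Sketch: block re-uploads of images that match previously removed content.
# Assumptions: a hypothetical `blocklist` of 64-bit average hashes and a small
# Hamming-distance threshold. Illustrative only, not Facebook's actual system.
from PIL import Image


def average_hash(path: str) -> int:
    """Reduce an image to an 8x8 grayscale thumbnail and encode it as 64 bits,
    one bit per pixel, set when the pixel is brighter than the average."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits


def is_known_removed(path: str, blocklist: set[int], max_distance: int = 5) -> bool:
    """Flag an upload if its hash is within a small Hamming distance of any
    hash taken from content that was already flagged and removed."""
    h = average_hash(path)
    return any(bin(h ^ known).count("1") <= max_distance for known in blocklist)
```

Matching on a small Hamming distance rather than an exact hash is what lets a system of this kind catch re-encoded, resized, or lightly edited copies of an image, not just byte-identical re-uploads.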

Human moderators will in turn ensure that the AI doesn’t accidentally sweep up and eliminate legitimate speech. As Bickert and Fishman explained, “A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but it could be an image in a news story.”

Dartmouth College computer scientist Hany Farid applauded Facebook’s vow to take what he believes are necessary steps to combat terrorism online, but he’s not convinced the company will live up to the promise.


“There’s a lot of talk about what they’re doing, but not a lot of talk about efficacy,” said Farid, who helped develop the technology Microsoft, Twitter, and Facebook use to root out images of child pornography. He’s now partnered with the New York–based Counter Extremism Project to create technology that, he says, could help these platforms do the same for certain extremist content. “There’s no numbers in that press release…. How aggressive are they going to be at it? They have to be held accountable for actually removing the content.”

Bickert and Fishman contend in their post that Facebook urgently “[removes] terrorists and posts that support terrorism whenever we become aware of them.” But a German task force convened in 2015 found that Facebook had room to improve when it came to responding to reported hate speech.

Facebook, along with Microsoft, Twitter, and YouTube, has committed to reviewing reports of illegal hate speech in the EU and removing content that breaks the law. The task force found that YouTube, for instance, eliminated 90 percent of reported content, 82 percent of it within 24 hours. Facebook, however, removed just 39 percent, and only 33 percent within 24 hours.

And though Bickert and Fishman attributed Facebook’s moves to “questions” over recent terror attacks, Farid pointed out that the attacks spurred more than just questions. Earlier this week, U.K. Prime Minister Theresa May and French President Emmanuel Macron announced plans to fine tech companies that fail to remove extremist content.

“Until we see evidence of real efficacy and real commitment to this,” Farid said, “I’m worried that this is just another press release to release some of the pressure.”