Facebook and Twitter Need to Shut Down Hate Speech Within 24 Hours, Europe Warns

The European Commission's first evaluation of online hate speech demands social networks speed up their response times.
December 7, 2016, 1:30pm

Facebook, Twitter, YouTube, and Microsoft aren't responding to cases of online hate speech fast enough, according to the European Commission, which is demanding that the technology companies review reports of hate speech within 24 hours of notification.

A European Commission report found that only 40 percent of all notifications of hate speech were acted upon within a 24-hour timeframe. The report forms part of the governing body's first evaluation of how Facebook, Twitter, YouTube, and Microsoft fight online hate speech, published more than six months after the four companies signed up to a code of conduct in Europe in May 2016.

European Commissioner for Justice Věra Jourová said in a statement this week, "It is our duty to protect people in Europe from incitement to hatred and violence online. This is the common goal of the code of conduct."

Twelve NGOs based in nine EU countries analyzed the responses to hate speech notifications over a six-week period in October and November 2016. The findings, according to the European Commission, indicate that of the 600 notifications of online hate speech made in total, 28 percent led to a removal; 40 percent of all responses were received within 24 hours, while another 43 percent arrived after 48 hours.

Facebook CEO Mark Zuckerberg. Image: Alessio Jacona/Flickr

The results show that Facebook received the highest number of illegal hate speech notifications, with 270 incidents reported, compared to 163 cases on Twitter and 123 on YouTube. No incidents of hate speech were reported to Microsoft, which owns communications products such as Skype and the Xbox video game platform. Both anti-Semitism and anti-Muslim hatred were prevalent in the illegal content notifications, with slurs against ethnic origin, national origin, and race also appearing.

"A large number of cases corresponded to some form of anti-migrant speech," said the European Commission report.

Despite receiving the highest number of illegal content notifications, Facebook removed the content in only 28.3 percent of cases. YouTube removed the flagged content in almost half of its reported cases.

"The reactions by Twitter and YouTube upon notification of illegal hate speech seem to diverge depending on the source used to notify content (trusted reporter/flagger system vs normal user tools)," said the report. "The ratios of removal for Facebook are similar, whether the user notifies the content through the trusted reporter channel or the normal tool."


Facebook and Microsoft each told Motherboard by email that they had nothing to share on the matter. YouTube and Twitter did not respond to Motherboard's emails.

The report comes as all four of the same companies launch a campaign to tackle terror content this week. Facebook, Twitter, Microsoft and YouTube pledged to combat extremist content on their platforms by sharing information about those posting the content.

"Starting today, we commit to the creation of a shared industry database of 'hashes'—unique digital 'fingerprints'—for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services," said the companies.

They hope that by sharing the information with one another, they can more easily identify potential terror content on their respective platforms.
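The companies have not published the details of their scheme, but the idea of a shared hash database can be illustrated in a few lines. The sketch below is a hypothetical simplification using plain SHA-256 fingerprints; real systems of this kind typically rely on perceptual hashing (such as Microsoft's PhotoDNA) so that a match survives re-encoding or minor edits, which a cryptographic hash does not.

```python
import hashlib

# Hypothetical sketch of a shared industry hash database.
# All names here are illustrative, not the companies' actual APIs.

shared_database = set()  # hashes contributed by all participating platforms

def fingerprint(content: bytes) -> str:
    """Compute a unique digital 'fingerprint' for a piece of content."""
    return hashlib.sha256(content).hexdigest()

def share_removed_content(content: bytes) -> None:
    """A platform removes violating content and contributes its hash."""
    shared_database.add(fingerprint(content))

def is_previously_removed(content: bytes) -> bool:
    """Another platform checks a new upload against the shared hashes."""
    return fingerprint(content) in shared_database

# One platform removes a video and shares its fingerprint...
share_removed_content(b"bytes of a removed recruitment video")

# ...so another platform can recognize the identical file on upload.
print(is_previously_removed(b"bytes of a removed recruitment video"))  # True
print(is_previously_removed(b"an unrelated upload"))                   # False
```

Note that only the fingerprints cross company boundaries in this model, not the content itself, which is part of the scheme's appeal: platforms can flag known material without exchanging the underlying imagery.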

But Joe McNamee, director at European digital rights campaign group EDRi, told Motherboard in an email this week that the scheme "means yet another step towards a situation where the internet giants become legislator, judge, jury and executioner regarding our free speech."

"This needs to be understood in a context where the EU is pushing the online companies into a full private law enforcement regime," he said, before pointing to examples such as the Terrorism Directive passed this week in the European Parliament that proposes member states may implement internet service blocking "without prejudice to voluntary action taken by the Internet industry to prevent the misuse of its services."
