News

1 in 1,000 Views on Facebook Is of Hate Speech

The social media platform revealed new metrics for the first time ever.
Facebook CEO Mark Zuckerberg testifies remotely during a Senate Judiciary Committee hearing. (Photo by Hannah McKay-Pool/Getty Images)

Facebook revealed for the first time Thursday the prevalence of hate speech on its platform, announcing that such content accounted for 0.11% of content views between July and September.

While this percentage may seem insignificant, it means roughly 1 in every 1,000 views is of hate speech, and given the sheer scale of Facebook, which has 2.7 billion active users every month, hate speech is still widely distributed across the platform.

Rather than measuring the problem just by the amount of hate speech content it removes, the company said prevalence is “calculated by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies.”
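
Facebook did not publish the mechanics of its sampling, but the idea behind a view-weighted prevalence estimate can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical log of content views and a labeling function that stands in for the human review step; none of the names below refer to Facebook's actual systems.

```python
import random

def estimate_prevalence(view_log, is_violating, sample_size=10_000, seed=0):
    """Estimate the share of content *views* that are of violating content.

    view_log     : one entry per view (not per post), so content seen by many
                   people counts many times -- this is what makes it a
                   prevalence metric rather than a removal count
    is_violating : callable that labels a piece of content against policy,
                   standing in for Facebook's human labeling step
    """
    random.seed(seed)
    sample = random.sample(view_log, min(sample_size, len(view_log)))
    hits = sum(1 for item in sample if is_violating(item))
    return hits / len(sample)

# A result of 0.0011 would correspond to the reported 0.11%,
# i.e. roughly 1 hate speech view per 1,000 views sampled.
```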

Prevalence has become an important metric at Facebook as a way of measuring how widely violating content is seen by users. In 2016, Facebook compared it “to measuring the concentration of pollutants in the air we breathe.” Guy Rosen, Facebook’s head of integrity, said during a call with reporters on Thursday that prevalence “was the most important” measurement Facebook has to understand what’s happening on its platform.

“[The prevalence metric] is important because a small amount of content can go viral and get a lot of distribution in a very short span of time, whereas other content could be on the internet for a very long time, and not be seen by anyone,” Rosen said.

Because this is the first time Facebook has published numbers about the prevalence of hate speech on its platform, it cannot say whether the 0.11% figure has increased or decreased compared with periods before July.

While Facebook has taken some steps to address certain aspects of hate speech — such as banning Holocaust denial and QAnon content — it has struggled to deal with hate speech on its platform in non-English-speaking markets.

A 2018 UN report found that Facebook was complicit in facilitating the ethnic cleansing of Rohingya Muslims in Myanmar in 2016, while more recently activists in Ethiopia warned that Facebook was also fueling calls for genocide.

Separate from the prevalence of hate speech, Facebook also revealed on Thursday that it removed 22.1 million pieces of hate speech content during the three months to the end of September, a figure it has historically tracked. Facebook says it has invested heavily in artificial intelligence systems to detect hate speech and added Thursday that these systems caught 95% of those hate speech posts before they were reported by a user, up from 80% a year ago.

But that still means that at least 1 million pieces of hate speech remained on the platform over the course of the three-month period and were only taken down after users reported them.
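
That figure follows directly from the numbers Facebook disclosed; a quick back-of-the-envelope calculation, using only the reported totals and variable names chosen here for illustration, makes the arithmetic explicit.

```python
# Back-of-the-envelope check using the figures Facebook reported for Jul-Sep.
removed_total = 22_100_000   # pieces of hate speech content removed
proactive_rate = 0.95        # share caught by AI before any user report

caught_by_ai = removed_total * proactive_rate                       # about 21.0 million
removed_after_user_reports = removed_total * (1 - proactive_rate)   # about 1.1 million

print(f"Caught proactively by AI: {caught_by_ai:,.0f}")
print(f"Removed only after user reports: {removed_after_user_reports:,.0f}")
```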

Facebook employs 35,000 contractors as content reviewers to remove problematic content that its artificial intelligence systems don’t automatically pick up, but the company has faced significant backlash from some of these moderators, who have sued Facebook in the U.S. and Europe, claiming the job has left them suffering from PTSD and other mental health issues.

A group of 200 moderators and other Facebook employees this week signed a letter to Facebook CEO Mark Zuckerberg claiming that their health is being put at risk because they are being forced to return to the office during the COVID-19 pandemic, while full-time Facebook employees can continue working from home until at least July 2021.

“Without our work, Facebook is unusable,” the letter said. “Your algorithms cannot spot satire. They cannot sift journalism from disinformation. They cannot respond quickly enough to self-harm or child abuse. We can.”