content moderation

4.17.19

Machine Learning Identifies Weapons in the Christchurch Attack Video. We Know, We Tried It

It took 29 minutes for a Facebook user to first report the livestream of the Christchurch terrorist. Now a machine learning system spots weapons in the stream with a confidence rating of over 90 percent.

3.28.19

Facebook Bans White Nationalism and White Separatism

After a civil rights backlash, Facebook will now treat white nationalism and separatism the same as white supremacy, and will direct users who try to post that content to a nonprofit that helps people leave hate groups.

3.19.19

Internal Google Email Says Moderating Christchurch Manifesto ‘Particularly Challenging’

Google found the Christchurch terrorist’s so-called manifesto particularly challenging to moderate, in part because of its length, and told moderators to mark potential copies or excerpts as “Terrorist Content” if they were unsure.

3.15.19

Documents Show How Facebook Moderates Terrorism on Livestreams

On Friday, at least 49 people were killed in terror attacks in New Zealand. Documents, sources, and interviews with senior Facebook employees show how difficult it is for social media companies to moderate live footage.

2.26.19

How Facebook Trains Content Moderators

Facebook's former head of training talks about how the company decides whether a person is cut out to look at hateful, violent, and graphic content all day.

12.31.18

Leaked Documents Show How Instagram Polices Stories

Motherboard has obtained internal documents that show how Instagram moderators grapple with policing the service's popular Stories feature.

9.20.18

Facebook Is Reviewing Its Policy on White Nationalism After Motherboard Investigation, Civil Rights Backlash

"Facebook ignores centuries of history, legal precedent, and expert scholarship that all establish that white nationalism and white separatism are white supremacy."

8.30.18

Life on the Internet Is Hard When Your Last Name Is 'Butts'

The “Scunthorpe problem” has never really been solved, and for people with "offensive" last names it can be a real pain in the ass.
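
At its core, the problem is a filter doing bare substring matching against a blocklist. The short Python sketch below, using a purely hypothetical blocklist and example surnames, shows how legitimate names like "Butts" come up as false positives.

# A minimal sketch of the "Scunthorpe problem": a filter that does bare
# substring matching against a blocklist. The blocklist and names below
# are illustrative assumptions, not any platform's real rules.
BANNED_SUBSTRINGS = ["ass", "butt"]

def naive_filter(text: str) -> bool:
    """Return True if a bare substring match would block this text."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED_SUBSTRINGS)

# Legitimate surnames trip the filter; those false positives are the problem.
for name in ["Butts", "Cassandra", "Alice"]:
    print(name, "->", "blocked" if naive_filter(name) else "allowed")

Fixing it is harder than it looks: stricter word-boundary matching misses deliberate misspellings, while looser matching keeps flagging real names, which is part of why the problem persists.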

8.24.18

The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People

Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.

5.30.18

These Are Facebook’s Policies for Moderating White Supremacy and Hate

As hate speech continues to be a top issue for social media platforms, we are publishing an extended selection of training material showing how Facebook sees the issue of white supremacy and hate more generally.

5.28.18

Here’s Facebook’s Internal Policy on Pepe the Frog

The far-right adopted Pepe the Frog as its own symbol of intolerance. Breaking with its policy of allowing fictional characters to push hateful messages, Facebook banned certain images of Pepe, according to internal documents.

10.26.17

Reddit Is Cracking Down on Nazi and White Supremacist Groups

The move is part of a new policy banning "violent" content, one that is causing confusion for some users.