content moderation
36 Days After Christchurch, Terrorist Attack Videos Are Still on Facebook
The videos on Facebook and Instagram show sections of the raw Christchurch attack footage, and variations continue to thwart Facebook's moderators and technology.
Machine Learning Identifies Weapons in the Christchurch Attack Video. We Know, We Tried It
It took 29 minutes for a Facebook user to first report the Christchurch terrorist's livestream. Now a machine learning system spots weapons in the stream with over 90 percent confidence.
Gun Sellers Are Advertising on Instagram and Directing Customers to Encrypted Chat Apps
A leaked Facebook document gives more detail on the type of firearm parts that the social network does not allow people to sell peer-to-peer.
Why Facebook Banned White Nationalism and White Separatism
Its new policy treats white nationalism and separatism the same as white supremacy.
Twitter and YouTube Won’t Commit to Ban White Nationalism After Facebook Makes Policy Switch
Following a Motherboard investigation, Facebook banned white nationalism and white separatism. But Twitter and YouTube, two platforms with their own nationalism problems, won’t commit to following Facebook’s lead.
Facebook Bans White Nationalism and White Separatism
After a civil rights backlash, Facebook will now treat white nationalism and separatism the same as white supremacy, and will direct users who try to post that content to a nonprofit that helps people leave hate groups.
It Took 10 Seconds for Instagram to Push Me Into an Anti-Vaxx Rabbit Hole
While social media giants crack down and Facebook says it will make changes to stop anti-vaxx content, Instagram’s recommendation engine makes it exceptionally easy to come across a waterfall of anti-vaccine accounts.
Internal Google Email Says Moderating Christchurch Manifesto ‘Particularly Challenging’
Google saw an issue with moderating the Christchurch terrorist’s so-called manifesto in part because of its length, telling moderators to mark potential copies or sections as “Terrorist Content” if they were unsure.
Documents Show How Facebook Moderates Terrorism on Livestreams
On Friday, at least 49 people were killed in terror attacks in New Zealand. Documents, sources, and interviews with senior Facebook employees show how difficult it is for social media companies to moderate live footage.
North Korea Advertises Military Hardware on Twitter, YouTube, Defying Sanctions
Glocom is a North Korean front company that sells sanctions-violating military equipment. But even after some tech companies clamped down, Glocom kept up its presence on YouTube, Twitter, and Facebook.
How Facebook Trains Content Moderators
Facebook's former head of training talks about how the company decides whether a person is cut out to look at hateful, violent, and graphic content all day.
How Facebook Trains Content Moderators to Put Out ‘PR Fires’ During Elections
Internal Facebook documents obtained by Motherboard show specific steps and strategies taken by the company to fight content moderation issues that may spike during an election season.