Online Content Moderation
Platforms are contending with all manner of scammers during the coronavirus pandemic.
Facebook's Civil Rights Audit, published on Sunday, recommends the platform also ban implicit forms of white nationalism.
Security Contractor at Facebook Content Moderation Facility Arrested for Allegedly Threatening People With a Gun
The individual worked at a facility run by Cognizant, a Facebook contractor. Police arrested him at work, though the alleged assault did not happen on site.
"Is it the right approach to deplatform these individuals? Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?"
After no one watched streams for Valve-created game Artifact, some users started their own meme streams. Then other content seeped in.
YouTube left the video online for over two days, allowing it to generate tens of thousands of views and spread to other channels.
The videos on Facebook and Instagram show sections of the raw Christchurch attack footage, and variations continue to thwart Facebook's moderators and technology.
It took 29 minutes for a Facebook user to first report the livestream of the Christchurch terrorist. Now a machine learning system spots weapons in the stream with an over 90 percent confidence rating.
A leaked Facebook document gives more detail on the type of firearm parts that the social network does not allow people to sell peer-to-peer.
Following a Motherboard investigation, Facebook banned white nationalism and white separatism. But Twitter and YouTube, two platforms with their own nationalism problems, won’t commit to following Facebook’s lead.
Even as social media giants crack down and Facebook says it will make changes to stop anti-vaxx content, Instagram’s recommendation engine makes it exceptionally easy to stumble into a waterfall of anti-vaccine accounts.