


Internal Google Email Says Moderating Christchurch Manifesto ‘Particularly Challenging’

Google saw an issue with moderating the Christchurch terrorist’s so-called manifesto in part because of its length, telling moderators to mark potential copies or sections as “Terrorist Content” if they were unsure.
Image: Google building (Shutterstock)

After a white supremacist live-streamed their attack in Christchurch, New Zealand last week, a wave of other people uploaded the video to Facebook, Twitter, YouTube, and other platforms across the internet. Google told Motherboard it saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as a piece of content per second. But another challenge for platforms has been blocking access to the killer’s so-called manifesto, a 74-page document that spouted racist views and explicit calls for violence.


Now, Motherboard has obtained an internal Google email describing the difficulties of moderating the manifesto in particular, pointing to its length and the problem of users sharing snippets of the document that Google’s content moderators may not immediately recognise.

The email shows in granular detail the sort of decisions and advice tech giants have to issue to their moderators in a time of crisis, as well as the speed at which those companies need to respond to material related to an evolving event on their platforms.

“The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing,” a copy of the email reads.

Got a tip? You can contact Joseph Cox securely on Signal on +44 20 8133 5190, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

Google told Motherboard it employs 10,000 people to moderate the company’s platforms and products. When a user reports a piece of potentially violating content, such as the attack video for depicting violence, that report goes to a human moderator to assess.

The email tells moderators to flag all pieces of content related to the attack as “Terrorist Content,” including full-length copies or sections of the manifesto. Because of the document’s length, the email tells moderators not to spend an extensive amount of time trying to confirm whether a piece of content does contain part of the manifesto. Instead, if the moderator is unsure, they should err on the side of caution and still label the content as “Terrorist Content,” which will then be reviewed by a second moderator. The second moderator is told to take the time to verify that it is a piece of the manifesto, and appropriately mark the content as terrorism no matter how long or short the section may be.
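Google has not said how this triage works internally, but the workflow the email lays out can be pictured as a simple two-stage escalation. The sketch below is purely illustrative: the function names, labels, and flags are assumptions made for this article, not Google terminology or tooling.

```python
# Hypothetical sketch of the two-stage review described in the email.
# The labels and helper names are assumptions for illustration only;
# they are not Google's internal terminology or tooling.

def first_pass(snippet: str, might_be_manifesto: bool) -> str:
    """First reviewer: don't spend long trying to verify the text.

    If the content might contain any part of the manifesto, err on the
    side of caution, label it "Terrorist Content", and escalate it.
    """
    if might_be_manifesto:
        return "Terrorist Content"  # queued for a second reviewer
    return "No action"


def second_pass(snippet: str, confirmed_excerpt: bool) -> str:
    """Second reviewer: take the time to verify the excerpt.

    Any confirmed section, no matter how short, stays marked as terrorism.
    """
    if confirmed_excerpt:
        return "Terrorist Content"
    return "No action"


# Example: a snippet the first reviewer can't quickly place is escalated
# rather than cleared, then confirmed (or not) on the second pass.
label = first_pass("unrecognised excerpt", might_be_manifesto=True)
print(label)  # -> Terrorist Content
print(second_pass("unrecognised excerpt", confirmed_excerpt=True))
```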


Moderators are told to mark the manifesto or video as terrorism content unless there is an Educational, Documentary, Scientific, or Artistic (EDSA) context, the email adds. Pieces of EDSA content are also not “gratuitously graphic,” according to YouTube’s policies on harmful or dangerous content. (Google told Motherboard non-EDSA sharing of the manifesto is against the company’s Community Guidelines.)

But a source with knowledge of Google’s strategy for moderating the New Zealand attack material said this can complicate moderation efforts, because some outlets did use parts of the video and manifesto. UK newspaper The Daily Mail let readers download the terrorist’s manifesto directly from the paper’s own website, and Sky News Australia aired parts of the attack footage, BuzzFeed News reported. Motherboard granted the source anonymity to speak more candidly about internal processes for moderating Google-hosted content.

The email says that Google generally wants to preserve journalistic or educational coverage of the event, but does not want to allow the video or manifesto itself to spread through the company’s services without additional context. One exception for media outlets would be if their coverage included footage of the moment the attacker shot civilians.

The email also instructs moderators to mark text comments that “glorify, praise, or celebrate” the attack as “Terrorist Content.”



Despite this advice to moderators, Google told Motherboard it had at some point taken the unusual step of automatically rejecting any footage of violence from the attack video, cutting out the process of a human determining the context of the clip. If, say, a news organization was affected by this change, the outlet could appeal the decision, the company said.

“We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review,” YouTube Chief Product Officer Neal Mohan told the Washington Post in an article published Monday.
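In other words, for this footage the usual order of operations was reversed: matched uploads were rejected automatically, and a human only looked at the context if the uploader appealed. The snippet below is a rough, assumed illustration of that flow; Google has not described how its actual systems are built.

```python
# Illustrative only: an assumed flow for auto-rejecting matched attack
# footage and letting affected uploaders (such as news outlets) appeal
# afterwards. This is not a description of Google's real pipeline.

appeal_queue: list[str] = []


def handle_upload(video_id: str, matches_attack_footage: bool) -> str:
    """Reject matched footage automatically, skipping the usual
    pre-publication human review of context."""
    if matches_attack_footage:
        return "rejected"
    return "published"


def file_appeal(video_id: str) -> None:
    """An affected uploader can ask for the rejection to be reviewed
    by a human after the fact."""
    appeal_queue.append(video_id)


status = handle_upload("news-clip-001", matches_attack_footage=True)
if status == "rejected":
    file_appeal("news-clip-001")
print(status, appeal_queue)  # -> rejected ['news-clip-001']
```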

Google also told Motherboard it tweaked the search function to surface results from authoritative news sources, and temporarily suspended the ability to sort or filter search results by upload date, making it harder for people to find copies of the attack footage.

"Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter," a YouTube spokesperson told Motherboard in a statement. "The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second. In response, we took a number of steps, including automatically rejecting any footage of the violence, temporarily suspending the ability to sort or filter searches by upload date, and making sure searches on this event pulled up results from authoritative news sources like The New Zealand Herald or USA Today.”

“Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” the statement added.
