YouTube announced Thursday that it will begin “managing harmful conspiracy theories” on the platform by “removing more conspiracy theory content used to justify real-world violence.” The policy change will specifically target QAnon content and conspiracy theory content that “targets an individual or group.”
The move comes just weeks before the election, and well after the QAnon conspiracy theory has gone mainstream, in large part because videos discussing QAnon on YouTube garnered millions of views. Facebook also recently banned QAnon and Holocaust denial content. Though YouTube was not specific beyond mentioning QAnon and PizzaGate, presumably the ban will also apply to conspiracy theories targeting people like Bill Gates and George Soros, which have spread rampantly on the site.
As we’ve reported many times, there is a huge difference between making a policy change—which, to be clear, is needed and long overdue—and actually enforcing it. Platforms like Facebook and YouTube operate on a scale that’s difficult to fathom, with billions of daily posts on Facebook and hundreds of hours of video uploaded to YouTube every minute. Both companies have increasingly relied on automated detection techniques, but a great deal of harmful content that violates existing policies remains on both platforms.
For example, videos of the Christchurch mosque shooting continued to exist and be shared on both Facebook and YouTube long after they were banned, and ISIS- and militant white supremacist-related content continued to spread on YouTube and Twitter long after it was banned.
And so banning QAnon is a notable step, but it’s one YouTube shouldn’t be applauded for until the company proves it can actually enforce the policy at scale.