Rebecca Lewis is a doctoral student at Stanford University. She researches far-right online subcultures and celebrities, and is the author of Alternative Influence: Broadcasting the Reactionary Right on YouTube.
Last week, Twitter users started tweeting the hashtag #VoxAdpocalypse to protest YouTube’s newest hate speech and extremism policies, which they argued were going to rob large swaths of content creators of the ad revenue they depend on.
In the hours following YouTube’s announcement, a range of far-right users received notices from the platform indicating that they could no longer convert their viewership into ad revenue. This is not the first time YouTube has enacted sweeping changes that affect content creators: in the past year or so, users have come to expect so-called “Adpocalypses” as YouTube attempts to stay advertiser-friendly.
This time, however, users weren't only blaming YouTube—they were blaming a Vox journalist and YouTube creator who was now facing a torrent of abuse thanks in part to YouTube's fumbling and poorly-timed announcements.
On May 30, Vox video journalist Carlos Maza detailed the years-long homophobic harassment he has faced at the hands of a popular conservative YouTuber named Steven Crowder, who commands a following in the millions. On June 4, YouTube announced that it would not be taking any action against Crowder. The following morning, it announced new policies against extremism on the platform and, shortly after, announced it was demonetizing Crowder’s account (only to muddy the waters later in the day about the actions it’d be taking against Crowder).
Because of the rapid succession of these confusing announcements, a range of conservative YouTubers started pinning the blame for the wider policy changes on Maza. Hence, #VoxAdpocalypse.
In reality, platform policies take months to develop and publicly roll out. As my colleague Joan Donovan wrote on Twitter, “large bureaucracies are slow and rigid,” and these specific policies were likely decided “after months of internal debate at YouTube.” In other words, there is no feasible possibility that the new policies were a response to Maza’s tweets. Still, by releasing this announcement the morning after making policy decisions on Maza and Crowder, YouTube all but ensured that harassment against Maza would escalate further.
And it has escalated: since Wednesday afternoon, the hashtag has gone viral, harassers have dredged up out-of-context tweets from Maza’s past, and Senator Ted Cruz tweeted about it, falsely claiming Maza had gotten Crowder’s channel removed.
In short, YouTube's big policy announcement ended up acting as incitement to harassment against one of its own creators.
Incitement to harassment happens when influential users on a platform direct their viewers or followers to harass someone on their behalf. Sometimes this incitement is clear, as was the case when alt-right poster-boy Milo Yiannopoulos directed his Twitter followers to attack comedian and actress Leslie Jones. In other cases, it can be more oblique: when a YouTuber harasses someone through their content, there is a strong likelihood that their viewers will follow suit, even if they weren’t explicitly directed to do so.
In these cases, it can be incredibly difficult to draw a direct link between the incitement and the harassment, because viewers of a video may move across channels or even platforms to pursue a target, making it difficult to identify the original source of inspiration. In some cases, this process may even happen accidentally, if a celebrity’s followers take it upon themselves to harass another figure without the celebrity’s intention or knowledge.
YouTube’s stilted response to Maza showcases how ill-equipped the platform is to grapple with incitement to harassment.
First, the lines are incredibly blurry and can be difficult to determine. It can be impossible to tell the difference between genuine incitement to harassment and a simple “YouTube beef,” a popular genre of YouTube content in which at least two users mutually turn a (real or otherwise) feud into entertainment for their followers. Callout culture and beefs have become a staple of the platform, and they can often be highly lucrative for all involved parties. Harassers, in turn, exploit these blurry lines to claim that genuine abuse is simply a debate or a joke. This is partly why it was so troubling to see YouTube echoing that exact reasoning in its decision not to punish Crowder further, claiming that his harassment was made in the context of a larger argument.
But beyond that, YouTube has historically been loath to punish its celebrity creators, as they are major revenue drivers for the platform and often have the fandom and reach to successfully damage the company’s PR efforts. Certain YouTubers thus in effect become “too big to fail”: the content creators YouTube is incentivized to placate are the same ones with the power to do the most harm to others (intentionally or otherwise). In many cases, harassment generates views and so becomes, in effect, part of the business model for YouTube and its creators alike. In the days following YouTube’s policy announcements, Steven Crowder gained tens of thousands of new channel subscribers—significantly more than his usual rate. The data suggests that, despite YouTube’s contingent demonetization, the attention has helped him further grow his audience.
On Wednesday, I expressed my skepticism about the timing of YouTube's announcement on Twitter, writing, “It is extremely difficult not to see the new YouTube policies in part as a way to change a negative PR narrative after refusing to address the harassment faced by [Maza].” Even if this was not YouTube’s intention, however, it was deeply irresponsible for YouTube to make the announcement when it did.
Perhaps YouTube hoped that announcing its new hate and extremism rules would appease those who were upset that Crowder’s channel would not be removed. Instead, by making a splashy and poorly-timed PR announcement, it drove more people his way.
For myself and other researchers of online harassment, watching the events unfold was like watching a slow-moving train crash we knew was going to occur. Why didn’t YouTube know what it was doing? Or else, why didn’t it care?