
A Video of California Doctors Stoking Conspiracies Should Never Have Gone Viral

YouTube deleted a video of Drs. Erickson and Massihi after getting torched by the media, a move that is only making things worse.
Dr. Dan Erickson. Screengrab: YouTube

Earlier this week, YouTube deleted a video of two California emergency room doctors who criticized the state’s COVID-19 shutdown. By the time it was deleted, it had been tweeted by Elon Musk, viewed more than 5 million times, and texted to me by conservative family members. The doctors had become heroes among a segment of the right that wants to immediately reopen the economy regardless of the human toll. The video was and is dangerous, but by deleting it, YouTube made these two men martyrs in the right’s ongoing war over what it claims is censorship of conservatives by Silicon Valley and the left.


The video in question features doctors Dan Erickson and Artin Massihi, who own a chain of urgent care facilities in California. In the video, Erickson and Massihi extrapolate from data gathered at their own facilities to argue that coronavirus is much less deadly than doctors have been saying, that it is more widespread than previously thought, and that the shutdown is not only hurting the economy but also hurting people who need to see a doctor for other conditions. Their math is extremely flawed, to the point where the video is dangerous. Most glaringly, they extrapolate coronavirus testing results from their own urgent care facilities to the general, healthy population, even though people who show up at an urgent care clinic for a test are far more likely to be sick than the population at large. But their conclusions reinforce the right-wing talking point that liberals somehow enjoy lockdown, want to take everyone's freedom, and want to tank the economy, and so the video went extremely viral.

YouTube eventually deleted the video, though Facebook has left it up, and dozens of mirrors continue to spread around YouTube.

YouTube’s handling of this situation is yet another example of internet platforms’ utterly inept content moderation during the pandemic. Time and time again, disinformation has spread wildly on platforms like Facebook, YouTube, and Medium, and has been dealt with only after millions and millions of people have seen it. Because these posts are deleted only after they have broadly entered the public consciousness, we get the worst possible outcome: the damage from the misinformation is already done, and the people who upload and share it can claim they are being censored, allowing them to spread their message further by playing into the narrative that platforms exclusively target Republicans.


As we’ve written time and time again, content moderation at the scale Facebook and YouTube are dealing with is a nearly impossible task, but these companies quite simply must do better. Facebook deleted astroturfed groups that seeded anti-social-distancing protests only after the protests happened, Medium deleted faulty science only after it was viewed millions of times, and YouTube deleted the doctors' video only after it had been widely passed around. In many cases, Facebook and YouTube do not moderate content until they’ve been informed about it by the media, meaning the companies are failing to catch even content that has already spread widely on their own platforms.

Deleting these videos, groups, events, and blog posts after they’ve already reached a critical mass of people is actively counterproductive, because it gives these stories a second life.

Laura Ingraham said on Fox News that “YouTube just took down their viral video challenging the COVID narrative … ironically, this is exactly what they warned about. This isn’t about science, this is about control. And in this case, control of a narrative. We should have seen it coming.”

“If you aren’t worried about censorship in this country, you better be now,” she added.

People who get all their information from the right-wing media bubble believe that internet platforms like YouTube are inherently left-leaning (despite those platforms' crucial role in promoting the most extreme right-wing views), and that YouTube only removes videos like this out of some need to appease a left-leaning, "politically correct" media.

In this, tragically, they are at least partially correct. Facebook and YouTube proactively delete millions of pieces of content, but some of the highest-stakes and most widely spread posts are deleted only after public shaming, usually after journalists find the content and explain why it's dangerous. No matter what their policies say, YouTube's and Facebook's moderation in practice is often reactive. In fact, in internal documents we've seen, a stated purpose of moderation for platforms is to avoid "public relations fires." Its purpose is to preserve the company's image and bottom line, not to protect millions of people from harmful videos. It's public relations, and on this, too, the platforms are failing.

There is, of course, a mind-boggling number of videos and posts on YouTube and Facebook, and both companies have faced difficulties keeping their content moderation teams up and running while their employees and contractors work from home. But dangerous videos can and must be detected before they rack up millions of views. There is no reason that my aunt or uncle, who barely use the internet, should regularly come across videos that violate YouTube's policies before a YouTube moderator does.

The right wins when it spreads disinformation, and it wins again when that disinformation is deleted after it's gone viral, because the deletion feeds its censorship narrative. The solution, quite simply, is for social media companies to police their platforms proactively, before the harm is done. That isn't happening, because in many cases Facebook and YouTube continue to rely on the media to tell them when something is bad rather than choosing a policy, sticking to it, and making hard content moderation decisions before they become political disasters.