
There’s No ‘Correct’ Way to Moderate the Nancy Pelosi Video

When Facebook announced it wouldn’t take down an altered viral video of Nancy Pelosi, experts disagreed as to whether or not the platform made the right call. But there may not be a right call.

On Friday, a video of Nancy Pelosi—which was slowed down and altered to make the Democratic Speaker of the House appear to be intoxicated—went viral on Facebook and several other social media platforms. President Trump shared the video on Twitter, and Rudy Giuliani tweeted a Facebook link to it. The video has been viewed millions of times and has become a major topic of conversation on cable news and social media.


Facebook said it won't delete the video. Rather, Facebook Head of Global Policy Management Monika Bickert told Anderson Cooper that the company would de-prioritize the post, pushing it down in people's news feeds, and pair the video with information from third-party fact-checkers.

Facebook’s decision was polarizing, immediately dividing people into two camps. One camp argues that Facebook can't create and implement a policy at scale to deal with videos like the one of Nancy Pelosi. The other argues that there is a responsible way to moderate that type of content, and that we have to explore it.

It’s understandable that people were upset with Facebook’s response, given the platform’s inadequate handling of misinformation in the 2016 election. Although we don’t know precisely how many people were swayed by Russia’s coordinated misinformation campaign through the Internet Research Agency, we do know that a significant number of Facebook users were reached, unimpeded.

For instance, Recode's Kara Swisher wrote in the New York Times, "the only thing the incident shows is how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the key distributor of news on the planet."

But there’s not necessarily a clear “right” answer here. Rather, the Nancy Pelosi video illustrates that people will accept any evidence that confirms their pre-existing world view. It also shows that content moderation—an already difficult task of anticipating and creating a policy for every possible social situation—is monumentally more difficult when you’re building it into your platform after it’s been adopted by over 2 billion people.


It is difficult, at times, to distinguish between mistakes, satire, and deliberate deception. It’s particularly difficult in the case of a viral video such as that of Nancy Pelosi, where some users sharing the video think it’s real, while others think it’s satire. Should it matter that the video has been shared widely, or shared by the president? Do comments on the video or the text used to share it matter? If you’re crafting a policy that’s based on intent to deceive, how do you account for situations where intent is either unclear or inconsistent among users?

Fight for the Future's Evan Greer said that content like the Nancy Pelosi video occupies a grey area where it’s functionally the same, "from a legal and technological standpoint,” as John Oliver segments that edit footage of politicians as a form of satire.

By removing the Nancy Pelosi video, Facebook wouldn’t just be defining where satire becomes misinformation in that situation. The company would also be setting a precedent that could be applied to other situations—situations where the target may not be Nancy Pelosi, but Trump, or figures scrutinized by the left, or public figures in other countries and other contexts.

What if Facebook started doing takedowns of clips of late night television, citing the reason as “misinformation”? Neil Potts, Facebook’s director of public policy, said that this is why the Nancy Pelosi video needs to stay up.


As pointed out by Microsoft researcher Tarleton Gillespie, Facebook actually did try labeling certain articles as “satire” back in 2014. But the satire tag was only tested and never implemented at scale.

Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, similarly argues that if we give Facebook the power to draw black-and-white rules for what’s satire and what’s not, the platform will get them wrong and apply them inconsistently in non-U.S. and non-English-speaking countries.

However, other people argue that it is possible to craft a moderation policy that goes further than Facebook currently does. Emily Gorcenski, a data scientist focused on documenting white supremacists, argued that Facebook should enforce specific policies at the level of influential, public-facing government officials rather than ordinary users. On paper, she said, this approach to content moderation is easier to enforce than blanket policies.

But this solution might, in fact, be difficult to implement. On Twitter, world leaders and politicians are effectively allowed to break the platform’s rules because, according to Twitter, suspending or punishing public figures would “hide important information people should be able to see and debate.”

Sam Gregory, program director for Witness, said in an email that platforms like Facebook should tell users how videos may have been altered, and also “set fair, transparent rules” that allow for maximum freedom of expression. But it’s difficult to craft these rules when public figures are involved.


“Unless we're going to engage in digital NIMBYism and say that the Pelosi video only matters because it's a US political figure,” Gregory said, “then we need to think about how we set a standard that won't be exploited by unscrupulous and authoritarian governments worldwide to dismiss legitimate speech as offensive or to falsely claim that something is a fake when it isn't.”

Gillespie said in an email that grey areas of content moderation like the Nancy Pelosi video often come down to human judgment, and that maybe the people best suited to make those judgments are users, not Facebook.

“It’s impossible to draw a line in the sand that cleanly separates the manipulated Pelosi video on the one hand, and lots of other edited-together political videos on the other,” Gillespie said. “I'm not talking about some ‘Supreme Court’ oversight board, and I don't just mean individuals can choose to block content for themselves. I mean: more of this decision could belong to the users and communities involved.”

Community knowledge is a valuable tool when making difficult decisions, but it also isn’t perfect. Consider Reddit, which empowers the moderators of subreddit communities to make and enforce rules for their subreddits. This has bred subreddits that are friendly to white supremacy, misogyny, and transphobia, and subreddits that launch coordinated harassment attacks against other subreddits.

Corynne McSherry, legal director for the Electronic Frontier Foundation, said that it’s important to think of content moderation as a set of tools that goes beyond content takedowns.

“Too often, platforms and users assume the only option is to take content down or not,” McSherry said. “But there are other options—like providing additional information. We also need more tools for users, so they have the ability to control their internet experience based on their own judgment.”

Still, that’s not to say Facebook’s current policy of de-prioritizing and fact-checking the Nancy Pelosi video is necessarily the “correct” approach. After all, using fact-checking as a mode of combating misinformation falsely assumes that people can and will change their minds once presented with facts. It’s the same problem with YouTube’s Wikipedia fact-checking program to combat conspiracy theories. People will believe what they want to believe, and there’s no simple way to construct a platform policy that combats the core issues of media illiteracy and filter bubbles.

People, and disproportionately older Americans, can't agree on what is true or false, on what is fake news and what is real news. It's no surprise, then, that Facebook doesn't seem to know, either.