Photo Illustrations by George Benjamin Douglas

Why People Post Fake News

When it comes to the stories we share on the internet, the line between empirical truth and emotional truth can be extremely hard to parse.

This story appears in VICE Magazine's Truth and Lies Issue.

During the 2017 G7 Summit in Sicily, a BBC reporter posted a video to Twitter that appeared to show US president Donald Trump not wearing a translation headset during a speech by the Italian prime minister, Paolo Gentiloni. Yannick LeJacq, a California-based writer, admitted to me that he was one of the thousands of people who shared the story on social media, taking to Facebook to post a link from a website he hadn’t heard of before. The implication, he said, was that Trump “wasn’t even bothering to listen to the translations coming through from the other world leaders.”

A day later, the story had been debunked, first by the White House press secretary, Sean Spicer, and later by Gizmodo, after images surfaced of the president wearing a very small earbud in his right ear. The BBC reporter quickly issued a correction on Twitter, but it didn’t gain much traction; as of this writing, the update has just 167 retweets—a far cry from the original’s nearly 19,000 (although another tweet juxtaposing that discrepancy subsequently went viral as well).

LeJacq told me he discovered the article was false soon after he posted it, then tried to warn a Facebook friend who had also shared a similar article that it was “essentially a fake news item.” They didn’t seem to care. “They got huffy and said something to the effect of, ‘Well, he definitely hasn’t listened to people tons of times anyways, so what’s the harm?’”

When it comes to the stories we share on the internet, the line between empirical truth and emotional truth can be extremely hard to parse. It’s not that we’re incapable of identifying false or exaggerated information; it’s just that we’re always performing. In some ways, sharing a news story is no different from posting a song you love or a photo of your latest kitchen creation, or celebrating a relationship anniversary. Whether we’re showing enthusiasm for a politician we admire, or hate-posting a story from a reviled op-ed writer, we’re providing other people with small glimpses of who we are. And sometimes, that need to express ourselves seems to blind us to what we’re actually sharing.

“I think one of the general hypotheses about why people spread fake news online is it’s not about trying to mislead people or mistakenly believing something that you want to share with your friend,” said Andy Guess, an assistant professor of politics and public affairs at Princeton University. “When you’re signaling your membership in a group and you’re saying, ‘I’m one of you,’ then facts don’t matter.”

Guess is one of several academics I interviewed to try to figure out why people share disinformation online—and what institutions and platforms can do to curtail its spread. I’m not alone in my search; since Trump’s victory in the 2016 election raised questions about the extent to which fake news may have affected people’s voting decisions, media organizations like the Trust Project, the Columbia Journalism Review, and CUNY’s Tow-Knight Center have explored a variety of tactics for preventing fake news from proliferating, including flagging disinformation and providing people with the tools to detect it on their own, though outcomes so far have been mixed. The preliminary findings haven’t always been surprising; in a widely reported recent study of 3,500 Facebook users’ sharing behavior, Guess and two researchers from New York University discovered that around the election, individuals aged 65 or older shared more fake news items on average than any other group. That was consistent regardless of party affiliation, education, or overall posting activity, which Guess said may have something to do with that population’s lower levels of digital literacy.

On the whole, though, what they discovered was that most people in their sample didn’t share fake news—at least if, when you say “fake news,” you’re referring to the spate of verifiably false news articles churned out by Macedonian teens and Russian content farms in the run-up to the presidential election. Still, stories like “Obama Signs Executive Order Banning the Pledge of Allegiance in Schools Nationwide” and “Trump Offering Free One-Way Tickets to Africa & Mexico for Those Who Wanna Leave America” received hundreds of thousands of engagements on Facebook that year. We know that people are sharing disinformation, but why?

Researchers don’t have a definitive answer, and finding one presents a host of methodological challenges. For one, academics generally don’t have access to data on how many of the people who liked, shared, or commented on those stories actually read them (Facebook doesn’t publish information on click-through rates). It’s also difficult to experimentally assess what people did or did not believe two years ago, let alone whether fake news articles may have had a large enough impact to swing the election.

That politically loaded question aside, psychologists are still trying to determine exactly what effect, if any, fake news has on people. In a 2018 study, researchers at Yale University recruited participants from Amazon Mechanical Turk, an online marketplace where users pay people to perform small tasks, and showed them a selection of fake and real news headlines. They found that the perceived accuracy of false headlines increased when participants had been exposed to them before—even when the stories were labeled as having been “Disputed by 3rd Party Fact-Checkers.”

One of the study’s coauthors, Gordon Pennycook, an assistant professor of behavioral science at the University of Regina, in Saskatchewan, has amassed an extensive body of research investigating why people fall for fake news. In a series of three studies, he and MIT’s David G. Rand, another coauthor of the Yale study, found that in a group of survey takers recruited on Mechanical Turk, those who scored higher on a Cognitive Reflection Test were better at detecting false articles than people more prone to relying on their gut feeling, or emotional intuition, when assessing information. Moreover, more “analytic” people were better at differentiating between true and false stories regardless of their political beliefs, suggesting that “lazy thinking” may be an even bigger culprit than partisan bias.

Of course, we’re still probably more likely to trust information that aligns with our view of the world. “If you see a headline that makes total sense to you, you’re not going to give it any additional thought, really, because there’s nothing to think about,” Pennycook said. “You can just nod your head and move on.”

Moreover, simply being exposed to disinformation can increase our perception of how accurate it is, as the above Yale study suggests. “That doesn’t mean that every time somebody reads something that means they believe it,” Pennycook told me in a phone interview. “It just means that it will impact their relative belief—and you only need one exposure for that.”

Like other researchers I interviewed for this story, Pennycook cautioned that existing research presents only a partial picture when it comes to understanding why people share fake news. Still, he has conducted a few as-yet-unpublished studies investigating the relationship between sharing and believing. In one, he showed two different groups the same collection of headlines, asking one group whether they believed them and the other whether they would share them. Strikingly, participants were better at differentiating between true and false news when they were asked to evaluate accuracy than when they were asked if they would share certain news items.

“If you ask about belief, people are OK at determining what’s true and false,” Pennycook said of his informal findings. “When it comes to sharing, they’re not very good at determining what’s true and false. What they share mostly is the things that are consistent with their political ideology. That makes a lot of sense, because sharing’s a kind of social phenomenon. You’re like, ‘Here’s this thing that I’m willing to put out there.’ And unfortunately, whether something’s true or false is not exactly the first thing people think about when doing that.”

Not surprisingly, the most widely shared false news stories tend to make a strong appeal to our feelings. In a 2018 analysis of approximately 126,000 stories that were shared widely on Twitter from 2006 to 2017, researchers at MIT classified contested news items as “true” or “false” using information from six different fact-checking sites, then generated visualizations of user sharing behavior to look for patterns in how those stories spread.

“What we found is that on average, contested false news spreads faster and to more people than contested true news,” Soroush Vosoughi, one of the study’s coauthors, told me via email. True news, for example, took about six times as long as false news to reach 1,500 people on average—and false political news traveled faster than any other kind of false news, reaching 20,000 people almost three times as fast as other categories reached 10,000.

Vosoughi, now an assistant professor of computer science at Dartmouth College, and his fellow researchers noticed something else: When they examined the replies Twitter users posted in response to viral news stories, false stories were more likely than true ones to elicit reactions of surprise and disgust. “We didn’t specifically look at what makes a news story go viral, but what we did see is that, on average, the false stories were more novel (and elicited more surprise) than true stories,” he explained. “Moreover, false stories on average elicited a greater negative emotional response.”

Vosoughi didn’t want to speculate as to the reasons why so many people shared these surprising and infuriating false news stories, because that was outside the purview of his research. But in a recent Gallup/Knight Foundation survey of 1,440 Gallup Panel members, participants who admitted to having shared verifiably false news items at least once—about 32 percent of Republicans surveyed, and 14 percent of Democrats—reported doing so for a number of reasons, including wanting to call attention to a story’s inaccuracy (84 percent) and believing that the story in question might be true, despite suspecting it was false (34 percent). Disturbingly, 25 percent said they shared stories they suspected to be misinformation because they wanted to spread the message to a wider audience, and 21 percent said they shared it in order to “annoy or upset the recipient”—conjuring images of anonymous alt-right trolls posting xenophobic memes to antagonize left-wing Twitter users.

Complicating the sharing phenomenon further is that Americans are still deeply divided on what “fake news” is. In a 2018 Gallup/Knight Foundation survey of more than 19,000 American adults, nearly all survey takers classified stories by “people knowingly portraying false information as if it were true” as either “always” (48 percent) or “sometimes” (46 percent) fake news. But most Americans also said they believed that “accurate stories portraying politicians in a negative light” constituted fake news to some extent, and 4 in 10 Republicans classified this sort of information as “always” fake news, echoing Trump’s penchant for using the term as a catchall for any news that he finds personally unflattering. And while the research overwhelmingly suggests that conservatives share more fake news than liberals, it’s hard to tell whether such findings are a reflection of their overall news discernment, conservatives’ statistically lower trust in mainstream media, or something else entirely. It’s possible that there were simply more conservative-leaning fake news items in circulation than liberal ones, although that could be because publishers found that conservative fake news generated higher levels of engagement.

In a world where the nature of truth itself seems to be increasingly open to interpretation, focusing on verifiably false news articles disguised to look like real ones tells only part of the story of why people share disinformation. Alice E. Marwick, an assistant professor of communication at the University of North Carolina in Chapel Hill, emphasized the myriad forms that disinformation can take online—from YouTube videos, to podcasts, to memes sporting factually vague proclamations like “Immigrants are invading your country.” Marwick studies the ways that far-right groups use the internet to stoke racist, xenophobic, and anti-Semitic sentiment in the American public, and says it’s important to consider political disinformation in the context of the wider partisan discourse in which it takes root.

By way of an example, she points to one particularly egregious headline that was floating around in 2016: “Police Find 19 White Female Bodies in Freezers With ‘Black Lives Matter’ Carved into Skin.” “That is clear, racist, fake news,” Marwick says. “It plays on long-standing white supremacist and racist tropes, right? But it also plays into this idea of Black Lives Matter as a dangerous, terrorist organization, which is something that Fox News is always harping on about, and is a general conservative talking point.”

In other words, sharing disinformation online may simply be the by-product of people’s desire to signal their preexisting values and beliefs—ideas that are already being promulgated in the wider media sphere. In a recent paper for the Georgetown Law Technology Review, Marwick conducted an analysis of 100 fake news stories identified by BuzzFeed as the most widely shared on Facebook in 2016 and 2017—as well as the “Hot 50” on Snopes.com, a prominent fact-checking outlet, in March of 2018—and looked at how many of them seemed designed to resonate with conservative “deep stories.” The “deep story” is a term developed by the sociologist Arlie Hochschild to describe the narratives and assumptions undergirding different partisan ideologies. After pinpointing some of the deep stories that seemed most pervasive in conservative media—including, for example, the belief that “Liberal urbanites look down upon rural conservatives”—she found that the messaging of popular fake news stories with a right-wing slant mapped “fairly neatly” to these talking points.

“Conservative fake news exists on a continuum with mainstream partisan media,” she told me. “They’re reinforcing the same tropes and ideas.”

Anecdotally, her findings do suggest that people may be more likely to share disinformation when it aligns with their “emotional truth”—the stories they already tell themselves about the world, and about the perceived “enemies” and “out-groups” they share it with. And while there is little reliable evidence thus far to suggest that fake news can cause people to completely reevaluate their existing political beliefs, Guess, the Princeton professor, says that one of the things he’s currently exploring in his research is whether it has the effect of stoking people’s preexisting resentments. “I think people’s sense is that there is some connection between online misinformation and polarization, especially the emotional or affective component of polarization, where people just have a dislike for their out-group or out-party,” he said.

After all, in addition to posting articles to show other people who we are, we share them to show the world who we are not. And since many partisan false news stories seem expressly designed to capitalize on our anger toward the out-group, it’s easy to imagine them feeding into a climate where dialogue with the opposition feels absolutely impossible. For one thing—especially if we’re going by the headlines that infuriate us the most—their ideas seem to literally threaten the end of humanity and decency as we know it; for another, they seem to be working with a completely different set of facts.

Since concern about partisan disinformation on the internet peaked after the 2016 election, groups other than those devoted to fact-checking have explored a variety of strategies for curbing its spread. Some of these are what Guess calls “demand-side” solutions, emphasizing the need for widespread digital-literacy training so that internet users can identify questionable news items before they decide to share them. To that end, France has already begun rolling out digital-literacy curricula at the high school level; the European Union, the UK, and several US states are exploring similar interventions in schools, as are American nonprofit organizations like the News Literacy Project.

But the academics I interviewed for this story stressed the urgency with which America needs “supply-side” solutions, and most of those depend on the social media platforms that have enabled fake news to proliferate. Facebook, for example, has rolled out a number of anti-disinformation initiatives since 2016, including barring sites that repeatedly publish false news articles from generating ad revenue, and collaborating with third-party fact-checkers to identify factually problematic stories and downrank them in the newsfeed. Although the latter project has reportedly been rocky at times (Snopes pulled out of the collaboration in early February), the platform’s efforts have been somewhat fruitful. In a recent study, researchers from NYU and Stanford University determined that likes, shares, and comments on false news stories decreased by over 50 percent on the platform between the last presidential election and July 2018, though they noted that fake stories on the platform still averaged roughly 70 million engagements per month.

As Guess sees it, though, it’s unlikely that the internet’s disinformation problem is going to go away anytime soon. Even if platforms stamp out verifiably false news articles, as technology evolves, new forms are bound to pop up—and, he said, “There’s always going to be a group of people who have less experience than everyone else in having a healthy skepticism in what they encounter online.” That means that platforms looking to curtail the problem will need to be dogged in their pursuit of solutions, lest we end up in a world where the average citizen feels so bombarded with conflicting information, and so confused about which sources they can trust, that they tune out the political system completely.

“I think that’s the larger danger,” Guess said. “It’s not that one person will be misled about the actual basis for a specific policy proposal, or they’ll think that some event happened that didn’t actually happen. It’s more just the cumulative effect of the online cacophony that people are not able to sift through themselves, and feel like the rational response is just to stop trying.”

For now, it’s hard to spend any time on the internet without noticing the ways we’re all actively contributing to that racket. When we catch ourselves sharing a factually problematic headline just because it “[feels] so right,” as LeJacq described it, it can be unclear whether truth even matters in the face of our desire to express ourselves. Of course, like LeJacq, we could make a good faith effort to double-check the things we post and own up to our mistakes when we slip up—but even if we did, there’s no knowing how many people would listen.
