It’s always a bad time to be online, but last week was particularly bewildering. Two women found themselves at the bottom of massive Twitter pile-ons based on claims that, upon closer inspection, turned out to be false. Erika Thompson, a beekeeper with about 6 million followers on TikTok, was accused of being a Trump supporter and a “fraud”; as VICE reported at the time, both allegations are untrue. And despite thousands of tweets that would have you believe otherwise, actress Ellie Kemper is not, in fact, a “KKK Queen.” (As a teen, she was honored at a gala thrown by an organization with a history of racism and elitism, but that organization has no known affiliation with the KKK.)
Kemper and Thompson are just the latest in a long line of people—often famous, sometimes not—who have been lambasted online over claims that were misleading at best, and baseless at worst. In May, Gwyneth Paltrow was pilloried for allegedly complaining about hitting her “lowest point” during quarantine: She fell off her diet, and started making pasta and eating bread. That would be an insensitive thing for her to say, given that millions have lost family members and friends to COVID-19—but Paltrow never said it.
Instead, during a conversation about smoking cigarettes on the podcast SmartLess, she made this offhand comment: “Basically, during quarantine I was drinking seven nights a week and making pasta and eating bread—like, I went totally off the rails.” She never called that her “lowest point”—and yet thousands of people raked her over the coals as if she had.
One could argue that there's nothing inherently problematic about piling on to people who are already rich and famous, even if what they’re accused of isn’t true; by stepping into the spotlight, they’re inviting people to come after them. But the same thing happens to random civilians, too—from Thompson, to “Bean Dad,” to “Alarm Guy,” and beyond. If you take a careful look at the scandals surrounding many of Twitter’s main characters, you’ll find a common dynamic at play: An online mob castigates someone for some egregious alleged offense. Later on, a few people point out that the whole thing has either been wildly mischaracterized or that it never happened at all. Ultimately, the facts barely register, because by the time they come to light, everyone has already moved on to the next day’s scandal.
These aren’t isolated incidents, or inexplicable flukes. They are the product of a deeper problem on Twitter, where falsehoods have a distinct advantage over facts—and of the way social media platforms interact with certain innate, fundamental aspects of human psychology.
In 2018, a team of researchers at MIT published the most comprehensive study to date on how mistruths spread online, based on an analysis of about 126,000 rumors shared on Twitter by roughly 3 million users over the span of 11 years. They found that falsehoods consistently beat out the truth on the platform, spreading “significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” False claims are 70 percent more likely to be retweeted than true ones, they found, and far more likely to go viral.
The researchers couldn’t say with certainty why that’s the case, but they did offer a hypothesis. For one, they explained, false claims are often more “novel” than true ones. To draw on a present-day example, people are more likely to amplify something extraordinary (e.g., “This TikTok beekeeper is a Trump-supporting fraud”) than something anodyne (e.g., “This TikTok beekeeper is a Democrat who’s good at beekeeping”). False claims, they said, may also provoke stronger emotional reactions than true ones. Often, those reactions are negative—and when something upsets or angers us, we want to talk about it.
“False information online is often really novel and frequently negative,” Brendan Nyhan, a professor of government at Dartmouth College, told The Atlantic in 2018. (He was unaffiliated with the study.) “We know those are two features of information generally that grab our attention as human beings and that cause us to want to share that information with others—we’re attentive to novel threats and especially attentive to negative threats.”
One would hope that people would take the time to independently verify a claim before becoming enraged by it and sharing it. But research shows they almost never do. A 2017 study by Zignal Labs, a media intelligence company, and Harris Poll, a market research and analytics firm, found that 86 percent of people don’t fact-check what they read on social media. Instead, it seems, most of us assume that if enough people are saying something, it must be true.
And we’re not just predisposed to buying into false or exaggerated claims when we stumble across them online; from a psychological perspective, we may be incentivized to spread them. As psychologist Judson Brewer explains in an article for Psychology Today, Twitter thrives by mirroring humans’ reward-based learning process, which involves three steps: trigger, behavior, and reward.
“We have an idea or think of something funny (trigger), tweet it out (behavior), and receive likes and retweets (reward),” Brewer writes. “This learning process causes a dopamine rush in reward centers of the brain. The more we do this, the more this behavior gets reinforced.”
But the dopamine rush we get from tweeting something neutral or positive, Brewer writes, isn’t as powerful as the one we get from attacking someone. In the latter case, according to Brewer, the reward is two-fold: “(1) Self-righteous vindication. ‘Yeah, I got that guy!’; and (2) Approval. ‘Yeah, you got that guy!’ someone tells us through a like or retweet. Another dopamine rush for your brain’s reward center.”
Along with that dopamine rush comes a feeling of schadenfreude: Research shows we have a tendency to delight in the downfall of those more famous or successful than us, and often, Twitter’s main characters fit that description. Taken together, the dopamine reward that comes from lashing out at someone online, coupled with the schadenfreude that comes from watching them tumble from grace, would appear to overpower our concern for the truth. It’s fun to join a pile-on targeting someone higher up on the social ladder than you; it’s less fun to wrestle with whether they deserve that pile-on, or to research whether what you’re tweeting or retweeting is based in fact.
You can’t necessarily blame Twitter for its users’ reluctance to do their own fact-checking, or their eagerness to attack someone based on a claim that may or may not be true. But as tech journalist Alex Kantrowitz recently wrote in his Big Technology newsletter, the platform seems to be exacerbating the problem through its “trending topics” feature, which broadcasts these stories to thousands of people while providing little context beyond a brief description. The tool also has a habit of zeroing in on exactly the sorts of conversation topics that trigger internet pile-ons and then shuttling them to massive audiences, which is what happened to Kemper, Thompson, and Paltrow. As a user, you’re told a claim against someone exists, without being told whether or not it’s true. From there, you’re tacitly encouraged to join the conversation.
“Twitter’s Trends are engagement bait that entice us to chime in on hot topics,” Kantrowitz writes. “Trends can be harmless. But too often, they invite us to form definitive opinions on a person, no matter how obscure, based on just a few tweets, or even just one. So we get Bean Dad, Shrimp Guy, and a cast of characters whose lives are disrupted—or destroyed—for the sake of entertainment.”
When we weigh in on Twitter’s villain du jour, it pays off to make bold, absolutist statements about them, which “tend to collect the most retweets, no matter how loose their relationship with reality,” Kantrowitz writes. On Twitter, certainty has a competitive advantage—and that motivates us to take reductive stances on complex issues, often at the expense of the truth.
Misleading or flat-out false allegations regularly begin when someone leaps from nuance (“Ellie Kemper was once honored by an elite St. Louis organization, which has a history of racism”) to absolutism (“Ellie Kemper is a KKK Queen”). It’s a self-perpetuating cycle: Inaccurate, absolutist tweets perform well; seeing that they do, users chase virality by mimicking them, if not exaggerating them. Accuracy becomes an inconvenience to be cast aside in the pursuit of amassing as many likes and retweets as possible.
The rise of “scandals” based on misrepresentations and outright lies is, in part, a platform problem. Social media companies are aware of that, and they’ve been taking some steps to address it, especially as they face scrutiny over the spread of misinformation during election cycles and COVID-19: Facebook has a team dedicated to fighting misinformation, as does Twitter, which recently started asking users to read an article before sharing it. As it stands, though, it still falls largely on us to try to tamp down the spread of erroneous claims online. When we come across one, we have a choice to make: either blindly amplify it, or take the time to figure out if it’s actually true before joining the pile-on.
Then, of course, there’s always a third option: Think of something better to do with our time, and just log off.
Follow Drew Schwartz on Twitter.