Whitney Phillips is an Assistant Professor of Communication and Rhetorical Studies at Syracuse University and is the author of This Is Why We Can't Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.
Ryan M Milner is an Associate Professor of Communication at the College of Charleston and is the author of The World Made Meme: Public Conversations and Participatory Media and co-author of The Ambivalent Internet: Mischief, Oddity and Antagonism Online.
A common lamentation online, one that spans the political divide and is forwarded by politicians and editorial boards alike, is that civility in American politics has died. It’s such a pressing concern that 80 percent of respondents to a recent NPR survey fear that uncivil speech will lead to physical violence. If only people would lower their voices, stop posting rude memes, and quit with the name-calling, we could start having meaningful conversations. We could unite around our shared experiences. We could come together as a nation.
In the current media environment, in which Twitter and Instagram are inundated with harassment, journalists are routinely threatened, and YouTube algorithms prop up reactionary extremists, we find it difficult to argue with that sentiment.
As idyllic as it might sound, however, the call to restore civility isn’t as straightforward as it appears. Civility alone isn’t enough to fix what’s broken. It might actually make the underlying problems worse. We need, instead, to consider the full range of behaviors that facilitate harm online. Yes this includes extreme, explicitly damaging cases. But it also includes the kinds of behaviors that many of us do without thinking, in fact, that many of us have already done today. These things might seem small. When we use them to connect with others, build communities, and express support, they might seem downright civil. But the little things we do every day, even when we have no intention of causing harm, quickly accumulate. Not only do these everyday actions normalize an ever-present toxicity online, they pave the way for the worst kinds of abuses to flourish.
The Civility Trap
When used as a political rallying point, appeals to civility are often a trap, particularly when forwarded in response to critical, dissenting speech. Sidestepping the content of a critique in order to police the tone of that critique—a strategy employed with particular vigor during the Kavanaugh hearings, and which frequently factors into hand-wringing over anti-racist activism—serves to falsely equate civility with politeness, and politeness with the democratic ideal. In short: you are being civil when you don’t ruffle my feathers, which is to say, when I don’t have to hear your grievance.
Besides their tendency to be deployed as bad-faith rhetorical sleights of hand, calls for civility have another, perhaps more insidious, consequence: deflecting blame. It’s everybody else’s behavior; they’re the ones who need to start acting right. They’re the ones who need to control themselves. In these instances, “We need to restore civility” becomes an exercise in finger-pointing. You’re the one who isn’t being civil. Indeed, the above NPR survey explicitly asked respondents to identify who was to blame for the lack of civility in Washington, with four possible choices: President Trump, Republicans in Congress, Democrats in Congress, or the media. Whose fault is it: this is how the civility question tends to be framed.
We certainly maintain that the behavior of others can be a problem, or outright dangerous. We certainly maintain that some people need to control themselves, particularly given the increasingly glaring link between violent political rhetoric and violent action. Those who trade in antagonism, in manipulation, in symbolic violence and physical violence, warrant special, unflinching condemnation.
But few of us are truly blameless. In order to mitigate political toxicity and cultivate healthier communities, we must be willing to consider how, when, and to what effect blame whips around and points the finger squarely at our own chests.
We do this not by focusing merely on what’s civil, certainly when civility is used as a euphemism for tone-policing, or when it’s employed to pathologize and silence social justice activists (as if loudly calling out injustice and bigotry were a sin equivalent to that injustice and bigotry). We do this by focusing on what’s ethical. A more robust civility will stem from that shift in emphasis. Civility without solid ethical foundations, in contrast, will be as useful as a band-aid slapped over a broken bone.
As we conceive of them, online ethics foreground the full political, historical, and technological context of online communication; contend with the repercussions of everyday online behaviors; and avoid harming others. Ethics do not mean keeping your voice down. Ethics do not mean keeping feathers unruffled. Ethics mean taking full and unqualified responsibility for the things you choose to do and say.
The Ethics of the Biomass
It’s not just that online ethics help facilitate more reflective, more empathetic, and indeed, more civil online interactions. Online ethics do even heavier lifting than that. Decisions guided by efforts to contextualize information, foreground stakes, preempt harm, and accept consequences also help combat information disorder, a term Claire Wardle and Hossein Derakhshan use to describe the process by which misinformation, disinformation, and malinformation contaminate public discourse. Ethics are a critical, if underutilized, bulwark against the spread of such information. Without strong ethical foundations, everyday communication functions, instead, as a soft target for information disorder.
The fact that unethical—or merely ethically unmoored—behaviors contribute to information disorder is a structural weakness that abusers, bigots, and media manipulators have exploited again and again. Phillips underscores this point in a Data & Society report on the ways extremists and manipulators launder toxic messaging through mainstream journalism. The same point holds for everyday social media users. Extremists need signal boosting. They get it when non-extremists serve as links in the amplification chain, whatever a person’s motives might be for amplifying that content.
When considering how ethical reflection can cultivate civility and help stymie information disorder, biomass pyramids provide a helpful, if unexpected, entry point.
In biology, biomass pyramids chart the relative number or weight of one class of organism compared to another within the same ecosystem. For a habitat to support one lion, the biomass pyramid shows, it needs a whole lot of insects. When applied to questions of online toxicity, biomass pyramids speak to the fact that there are far more everyday, relatively low-level cases of harmful behavior than there are apex predator cases—the kinds of actions that are explicitly and willfully harmful, from coordinated hate and harassment campaigns to media manipulation tactics designed to sow chaos and confusion.
When people talk about online toxicity, they tend to focus on these apex predator cases. With good reason: these attacks have profound personal and professional implications for those targeted.
But apex predators aren’t the only creatures worth considering. The bottom stratum is just as responsible for the rancor, negativity, and mis-, dis-, and malinformation that clog online spaces, causing a great deal of cumulative harm.
This bottom stratum includes posting snarky jokes about an unfolding news story, tragedy, or controversy; retweeting hoaxes and other misleading narratives ironically, to condemn them, make fun of the people involved, or otherwise assert superiority over those who take the narratives seriously; making ambivalent inside jokes because your friends will know what you mean (and for white people in particular, that your friends will know you’re not a real racist); @mentioning the butts of jokes, critiques, or collective mocking, thus looping the target of the conversation into the discussion; and easiest of all, jumping into conversations mid-thread without knowing what the issues are. Regarding visual media, impactful everyday behaviors include responding to a thread with a GIF or reaction image featuring random everyday strangers, or posting (and/or remixing) the latest meme to comment on the news of the day.
Here is one example: recently, one of us published something on, let's say, internet stuff. Other people have written lots of things on the same general subject. One day, a stranger @-mentioned us to say that what we published was better than what someone else had published, and proceeded to explain how the other author fell short. The stranger @-mentioned the other author in the tweet. This was, we suppose, meant as a compliment to us. At the same time, it made us party to something we didn't want any part of, since just saying "thank you" would have cosigned, or seemed to cosign, the underlying insult. The other author, of course, fared much worse; the stranger didn't seem to give them the slightest passing thought.
To that stranger, the other author was a handle on Twitter to link to, not a person with feelings to consider. But of course, that stranger was wrong—no person on Twitter is just a handle to link to. And no person wants to be told in public that they are less than, for any reason. But that was the conversation, suddenly, this other author had been thrust into. One we were thrust into as well, even as the stranger thought they were saying something nice.
This stratum of behavior receives far less attention than apex predator cases. Most basically, this is because each of the above behaviors, taken on their own, pales in comparison to extreme abuses. Whether emanating from platforms like YouTube, white supremacist spaces like The Daily Stormer, or even the White House, the damage done by the proverbial lions is clear, present, and often intractable. From a biomass perspective, insects seem tiny in comparison—and therefore not worth much consideration.
Less obviously, the lower stratum of the biomass pyramid receives less fanfare because of assumptions about harm online. In cases of explicit abuse, bigotry, and manipulation, harm is almost always tethered to the criterion of intentionality: the idea that someone meant to hurt another person, meant to sow chaos and confusion, meant to ruin someone’s life.
In terms of classification, and of course interventionist triaging, it makes good sense to use the criterion of intentionality. Coordinated campaigns of hate, harassment, and manipulation, particularly those involving multiple participants, don’t just happen accidentally. Abusers and manipulators choose to abuse and manipulate; this is what makes them apex predators.
At the same time, however, reliance on the criterion of intentionality has some unintended consequences.
First, the criterion of intentionality discourages self-reflection in those who aren’t apex predators. Someone who doesn’t set out to harm another person is almost guaranteed not to spend much time reflecting on whether their behavior has harmed, or could harm, others. Harm is something lions do. If you are not a lion, carry on.
But just because you’re not a lion doesn’t mean you can’t leave a nasty bite. Even when a person’s motives are perfectly innocent, low-level behaviors can still be harmful. They can still flatten others into abstract avatars. They can still weaponize what someone else said, or result in the weaponization of something you said. They can still strip a person of their ability to decide if, for example, they want a picture of themselves to be used as part of some stranger’s snarky Twitter commentary, or to be included in a conversation in which they are being publicly mocked.
From an information disorder perspective, these low-level behaviors can also be of great benefit to the lions. Retweeting false or misleading stories, even if the point is to make fun of how stupid they are; making ironic statements that, taken out of context, look like actual examples of actual hate; and generally opening the floodgates for polluted information to flow through: these are what allow apex predators to cause as much damage as they do.
These actions also feed into, and are fed by, issues of journalistic amplification. The greater the social media reaction to a story, the more reason journalists have to cover it, or at least tweet about it. And the greater the journalistic response to a story, the more social media reaction it will generate. And then there are the trending topics algorithms, which do not care why people share things, just that they share things, as polluted information cyclones across platforms, accruing strength as it travels.
Because of these overlapping forces, whether or not someone means to sow discord, or spread hate, or propagate false and misleading information, discord can be sown, hate can be spread, and false and misleading information can be propagated by behaviors that otherwise don’t create a blip on the political radar.
Stacking the Deck with Digital Tools
Focusing on intentionality obscures the collective damage everyday people can do when they use social media in socially and technologically prescribed ways. The affordances of digital media make this problem even worse by further cloaking the stakes of everyday communication.
We describe these affordances in our book The Ambivalent Internet. They include modularity, the ability to manipulate, rearrange, and/or substitute digitized parts of a larger whole without disrupting or destroying that whole; modifiability, the ability to repurpose and reappropriate aspects of an existing project toward some new end; archivability, the ability to replicate and store existing data; and accessibility, the ability to categorize and search for tagged content.
These tools don’t just allow, they outright encourage, participants to flatten contexts into highly shareable, highly remixable texts: specific images, specific GIFs, specific memes.
All creative play online owes its existence to these affordances. They are what make the internet the internet. They also make it enormously easy to sever social media avatar from offline body, and to mistake one tiny sliver of a story for an entire narrative, or to never even think about what the entire narrative might be. As a result, even the most well-intentioned among us can overlook the consequences of our actions, and never even know whose toes we might be stepping on.
In such an environment, the first step towards making more ethical choices is acknowledging how the deck has been stacked against making more ethical choices.
The second is to anticipate and try to preempt unethical outcomes. This means contending with the fact that your own contextualizing information, including your underlying motivations, becomes moot once tossed to the internet’s winds. You might know what you meant, or why you did what you did, particularly in cases where you’re relying on “I was just joking” excuses. But others can’t know any of that. Not because they’re oversensitive, not because they can’t take a joke, but because they can’t read your mind, and shouldn’t be expected to try.
Another critical question to ask is what you don’t know about the content you’re sharing. How and where was something sourced? What happened to the people involved? Did they ever give consent? Who was the initial intended audience? Each of these unknowns shapes the implications, and of course the ethics, of further amplifying that content. The devil, in these cases, isn’t in the details, the devil is in the unseen, unknown, unsolicited narratives.
Finally, we must all remember that the issues we discuss online, the stories we share, the media we play with—all can be traced back to bodies. Fully fleshed-out human beings who have friends, feelings, and families—just like each of us.
This point is particularly important for middle-class, able-bodied, cisgendered white people to reflect on (a point we make as middle-class, able-bodied, cisgendered white people ourselves). When your body—your skin color, the resources you have access to, your gender identity, your ability—has never been the source of threats, abuse, and dehumanization, it is very easy to downplay the seriousness of threats, abuse, and dehumanization. To approach them abstractly, as just words, on just the internet. The behaviors in question might not seem like a big deal to you, because they’ve never needed to be a big deal for you. Because you’ve always, more or less, been safe. This might help explain why you react the way you do, but it’s not an excuse to keep reacting that way.
So when in doubt, when you do not understand: remember that what might look like an insect to one person can act like a lion to others. Particularly when those insects are everywhere, always, clogging a person’s experience, weighing down their bodies.
The biomass pyramid shows that the distinction between big harm and small harm is, in fact, highly permeable. The big harms perpetrated by apex predators are exactly that: big and dangerous. Smaller harms are, by definition, smaller, and on their own, less dangerous. But behavior at that lower stratum can still be harmful. It is also cumulative; it adds up to something massive. So massive, in fact, that these smaller harms implicate all of us—not just as potential victims, but as potential perpetrators. Just as it does in nature, this omnipresent lower stratum in turn supports all the strata above, including the largest, most dangerous animals at the top of the food chain. Directly and indirectly, insects feed the lions.
Robust online ethics provide the tools for minimizing all this harm. By using ethical tools, we minimize the environmental support apex predators depend on. We also have in our own hands the ability to cultivate civility that is not superficial, that is not a trap, but that has the potential to fundamentally alter what the online environment is like for the everyday people who call it home.