This week, Harvard decided to rescind ten admission offers after learning that the prospective students had been posting rape-apologist, pedophilic, and violently racist memes to an offshoot of the main Harvard Class of 2021 Facebook Memes group. Because it hinges on tensions between free expression and (what could be described as) "PC culture," this case could be seen as a canary in the coal mine of 2017.
But it is much more than that. The case is a stand-in for the mine itself, along with the company, its miners, their tools, all of it. Culturally, this is where we are: an online environment in which sincere bigotry bleeds into satirical bigotry, irony is forwarded as both justification and argument, and accountability is so frequently sidestepped that just having to face consequences is news in itself.
Far more than being a story about a specific group of memes and a specific group of students, then, the Harvard dustup demonstrates how the fun and games of memes—along with the seeming separation between "the real world" and that somewhere-else place known as "the internet"—give way to fully embodied, fully consequential ethics.
Some might be tempted to brush off these ethical consequences, arguing that the posting of even the most offensive content is no big deal. It's just internet memes. It's just incoming college kids trying to be as offensive as possible, for the lulz. It's just—as the co-founder of a similar Facebook meme group at Yale suggested to Taylor Lorenz at Mic—another form of hazing.
And sure, pictures on the internet can't physically assault anyone. The problem is that the "just" framing (just joking, just a meme on the internet, just a new kind of hazing ritual) posits what we describe in our work as a fetishized gaze, one that obscures everything but the joke itself. In highly fetishized examples ranging from the Harambe meme to "classic" memes like Bed Intruder, the potential for symbolic or emotional or, in the case of the no-longer-Harvard students, long-term professional harm tends to fall away. The Yale student's blithe equation of dehumanizing meme groups to hazing rituals provides a further example. College students have died during hazing rituals. The parents of those students will go to their graves still mourning.
The "just" in "just a hazing ritual"—like the "just" in any of the excuses used to minimize the emotional impact of degrading speech and behavior—thus makes it very difficult to understand and respond humanely to the pain, or even the potential pain, of others.
If your inclination is to roll your eyes at that statement, accuse us of being snowflakes, and then start misspelling swear words at us on Twitter, congratulations: you've underscored how and why this particular case serves as the bellwether of 2017. Indeed, white nationalists operating under the banner of the "alt-right" are the whining boy kings of the "just" framing, most conspicuously in the claim that their hateful messages are "just" irony or, more irritatingly, "just" trolling, as argued in Breitbart's infamous alt-right "young meme brigade" explainer.
The alt-right might be the most conspicuous proponents of "just" framings, but as the Harvard meme page case reveals, they are hardly the only group operating under the looming question mark of motive. We frame this difficulty using an internet axiom known as Poe's Law—the fact that online, it is often difficult to know when someone is forwarding an earnest claim and when they are actively bullshitting. This, in turn, makes it equally difficult to know how best to respond to something one encounters, or whether to respond at all.
As we explain, part of the reason Poe's Law reigns online is that content is so easily unmoored from its original context, complicating efforts to trace content back to its original source. But it also reigns because of the aforementioned "just" framings—the impulse to immediately distance oneself from the things one has said, on the grounds that one was just trolling or, if one happens to be the president, was just tweeting, or any other reason used to sidestep personal responsibility for actions one consciously chooses to undertake.
Our concern over Poe's Law and "just" framings more broadly, in this case and myriad cases like it, is that they're used to conveniently sidestep the very real ethical stakes inherent to online interaction. Regarding the Harvard meme page controversy, the assertion that the page was—say—just a form of hazing (or whatever "just" one might feel tempted to ascribe to it) minimizes broader questions about who might have been harmed by the content being shared, from the specific subjects of the images themselves (whose images may or may not have been used with their consent) to the individuals—from Harvard students to parents to administrators—who might have felt threatened, degraded, or maligned as a result.
The takeaway here is to take ethics seriously, and in every interaction, regardless of medium, to be mindful of the very real people you're talking to and talking about. Though they've not made any public statements about the specific students' applications (per university policy), that's what Harvard seems to have done. We're sympathetic to such an approach. We also understand that this action will negatively impact those ten disenrolled students. To complicate matters further, it will simultaneously signal to already-enrolled students, particularly those who are members of the memetically maligned groups, that theirs is a university where hateful expression is not consequence-free.
Whatever side one might take in this particular case, it underscores the real-world impact of digitally mediated behavior. After all, even when considering something as (seemingly) trivial as internet memes, the things people post, retweet, and comment on can be every bit as personally impactful as embodied interactions—perhaps more so, since what gets posted online can live on through search indexing and then be amplified exponentially with the click of a button. Assessing how this matters, and what might be done in response, simply cannot happen if the default mode of expression falls somewhere between targeted, identity-based harassment and a blithe "lol jk." Only when we have some visibility can we ever hope to effectively navigate the dead ends and drop-offs of the mine.
Whitney Phillips of Mercer University and Ryan M Milner of the College of Charleston are the co-authors of The Ambivalent Internet.