Holy shit, Americans are assholes (via Floating Sheep)
In a lot of ways, the Geography of Hate affirms what we already know: Americans are fucking racist. Homophobic and ableist, too.
But while that may not come as any great surprise, the map reveals a startling bigotry coursing beneath our preconceived notions of just where in the US hate is harbored most. Americans, it turns out, are racist and homophobic and ableist, and apparently vocal enough about it to spout off bigotry on social media, in no real discernible pattern. It's often where we least expect bigotry that we find it rearing its ugly head.
The visualization comes by way of Humboldt State University's Dr. Monica Stephens and the Floating Sheep--the same group that made a map of post-election Twitter hate speech. It comprises 150,000 geo-coded hate tweets flagged between June 2012 and April 2013 for including the word "chink," "gook," "nigger," "wetback," "spick," "cripple," "dyke," "fag," "homo," or "queer". At first blush it's awfully depressing, a real day ruiner, or worse. Click around and most slurs--not all, but most--see the continental US pocked by deep reds, the research team's translation for "most hate." Jesus Christ. Is it 2013? It can't be 2013.
But, really? That can't be right, can it? Surely something's off. How can we be sure "positive" uses of an otherwise hateful slur (e.g., "dykes on bikes #SFPride") weren't inadvertently swept up in the Geography of Hate? Contextualization is crucial--is everything, really. Did Stephens' team allow for it?
They did. In fact, this is why they used humans (read: Humboldt State students), not machines, to analyze the entirety of the 150,000 offending tweets, all drawn from the University of Kentucky's DOLLY project. (It was also very much the reason the project got underway in the first place, as the Floating Sheep caught a fair amount of flak over whether their post-election map contextualized hate rigorously enough.) It was a matter of avoiding "any algorithmic sentiment analysis or natural language processing," the researchers write, "as many algorithms would have simply classified a tweet as ‘negative’ when the word was used in a neutral or positive way. The students were able to discern which were negative, neutral, or positive."
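To see why the team worried about algorithms, consider a minimal sketch of the naive approach they rejected. This is purely illustrative (the function and word list below are hypothetical, not the project's actual code): a flag-if-present classifier labels every tweet containing a tracked word as "negative," which misfires on exactly the kind of positive use the article cites.

```python
# Hypothetical sketch of naive keyword-based classification -- NOT the
# Geography of Hate's method, which used human coders instead.

TRACKED_WORDS = {"dyke", "queer"}  # illustrative subset of tracked terms

def naive_classify(tweet: str) -> str:
    """Label a tweet 'negative' if it contains any tracked word,
    ignoring context entirely -- the flaw the researchers avoided."""
    text = tweet.lower()
    if any(word in text for word in TRACKED_WORDS):
        return "negative"
    return "unflagged"

# The positive example from the article gets misclassified:
print(naive_classify("dykes on bikes #SFPride"))  # prints "negative"
```

A human coder would read the same tweet and mark it positive; that gap between surface keyword matching and actual intent is what the student review step was for.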
As such, the map only includes those tweets that used the slurs in an explicitly negative context. Like so much of modern life, it's an uncomfortable truth perhaps best summed up by the late George Carlin.