An American Neo-Nazi. Image via Wikimedia Commons
Law enforcement officials are increasingly turning to social media for everyday crime prevention purposes. But a new study out of London suggests it may be a particularly effective tool for identifying dangerous political extremists. The new analytical methods offered in the study are just a start, but they could help law enforcement officials prevent hate crimes and other acts of political violence before they happen.
For the study, J.M. Berger and Bill Strathearn, researchers with the International Centre for the Study of Radicalisation and Political Violence (ICSR), examined thousands of Twitter accounts associated, to varying degrees, with the white supremacist movement. Using a fairly simple algorithm, they were able to quickly determine the most overt extremists of the bunch.
The algorithm was based on two concepts taken from everyday life: influence and exposure.
Berger and Strathearn started by gathering the 3,542 Twitter followers of 12 “seed” accounts—accounts like David Duke’s or the American Nazi Party’s that are consistent, established outlets for hate-based screeds.
But just because someone is following a hate monger doesn't mean that person is dangerous; sniffing out the real extremists amid all that data typically takes a lot of trial and error. In a random sample, for example, only 44 percent of those followers expressly identified themselves as white nationalists or engaged in tweeting behavior that was overtly white nationalist. Some followers, for whatever reason (perhaps because of thoughtless follow-backs, or because they're law enforcement or journalists), actually expressed views that were decidedly antithetical to white nationalist viewpoints.
"Twitter users from the original pool who were both highly influential and highly responsive were almost always dyed-in-the-wool extremists."
Researchers were much more successful at pinpointing bona fide extremists when they examined how users interacted with each other. First, they measured users’ “influence” on Twitter by counting how many times their tweets were favorited or retweeted. Members of the original pool who were most influential were more likely to be overt white nationalists: among the 100 most influential, 86 percent had identified themselves as such or had tipped their hand through their tweets.
Next, they measured “exposure”—described by researchers as basically the “opposite” of influence. Users ranking high on the “exposure” scale were those who responded to others’ tweets most often—whether by favoriting, retweeting or replying. Among the 100 most “exposed” Twitter users in the group, 93 expressed their white nationalism overtly.
But the most accurate means of winnowing the most overt extremists came from combining “influence” and “exposure”—a category researchers simply called “interactivity.” Twitter users who were both highly influential and highly responsive were almost always dyed-in-the-wool extremists. Of the 100 most interactive users, 95 expressed themselves overtly as white nationalists in some way.
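The study doesn’t publish its exact formulas, but the three rankings it describes can be sketched in a few lines of code. In this illustration, the interaction records, usernames, and the use of a simple product of counts as the combined “interactivity” score are all my assumptions, not details from the paper:

```python
from collections import Counter

# Hypothetical interaction records: (actor, target, action), meaning
# "actor" favorited, retweeted, or replied to a tweet by "target".
interactions = [
    ("alice", "bob", "retweet"),
    ("carol", "bob", "favorite"),
    ("emma", "bob", "retweet"),
    ("dave", "bob", "reply"),
    ("bob", "alice", "retweet"),
    ("carol", "alice", "favorite"),
    ("bob", "carol", "favorite"),
    ("dave", "carol", "reply"),
]

# Influence: how often a user's tweets are favorited or retweeted by others.
influence = Counter(
    target for _, target, action in interactions
    if action in ("favorite", "retweet")
)

# Exposure: how often a user responds to others' tweets, by any of the
# three actions.
exposure = Counter(actor for actor, _, _ in interactions)

# Interactivity: high on both scales. The product of the two counts is a
# stand-in for the study's combined ranking, which it does not spell out.
users = set(influence) | set(exposure)
interactivity = {u: influence[u] * exposure[u] for u in users}

top = sorted(interactivity, key=interactivity.get, reverse=True)
```

With this toy data, “bob” tops the interactivity ranking because he is both widely retweeted and actively responding to others, which is exactly the combination the researchers found most predictive.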
Although the study was confined to white nationalism, its implications reach much further. The ICSR’s analytical methods were not content-specific; they measured “influence” and “exposure” in numbers, not words. That matters because sifting through countless tweets to sniff out Twitter’s most dangerous users is hard: people write in slang and in code, and abbreviations and acronyms evolve constantly.
But by looking at influence and exposure as quantified here, police can conceivably find the most virulent members of any extremist group, no matter the affiliation. Anarchists, neo-Nazis, jihadists: it makes no difference. It’s all about the network.
Other findings in the research suggest additional applications for this data. The researchers found, for example, that 50 percent of all the influence, flowing in either direction, occurred among the most influential 1.5 percent of users. Identifying key players by measuring influence in this way may help law enforcement officials better focus their efforts where they’re most effective.
Identifying extremists using social media is necessarily limited. Socially isolated, mentally ill, lone-gunman types may not belong to mainstream social media sites like Twitter or Facebook. (Indications are the Sandy Hook, Conn., and Aurora, Colo., shooters did not.) But the James Holmeses and Adam Lanzas of the world represent only one kind of deranged killer. Politically motivated killers like the Oklahoma City bomber, Timothy McVeigh—not to mention the countless perpetrators of smaller-scale extremist crimes—often have ideological networks directly or indirectly supporting them. Big Data methods of understanding those networks seem a promising step toward thwarting them.