In August of this year, a white supremacist drove his car into a crowd of protesters gathered in downtown Charlottesville, Virginia. The attack injured around 20 people and killed 32-year-old Heather Heyer. The violent clashes that weekend shocked Americans, among them Emily Crose, who wanted to be there to protest against the white supremacists but couldn't make it. A friend of hers was there, and was attacked and hurt by neo-Nazis.
Crose is a former NSA analyst and ex-Reddit moderator who now works at a cybersecurity startup. Inspired by her friend’s courage, and horrified by the events in Charlottesville, Crose now spends her free time teaching an AI how to automatically spot Nazi symbols in pictures spread online, be it on Twitter, Reddit, or Facebook.
Wanting to use her expertise to help expose ideologies like those seen in Charlottesville, she started this project, which she calls NEMESIS.
Crose wants to expose white nationalists who slip obscure, mundane, or abstract symbols (so-called dog whistles) into their posts, such as the Black Sun and certain Pepe the Frog memes. Her goal is not only to identify the people who use these symbols online, but also to push social media companies to clamp down on hateful rhetoric.
“The real goal is to educate people,” Crose told me in a phone call. “And a secondary goal: I’d really like to get the social media platforms to start thinking how they can enforce some decency on their own platforms, a certain level of decorum.”
“I’m not one of these people who’s going to be OK with apathetically standing by and watching people turn to an ideology that’s probably dangerous,” she added.
NEMESIS, according to Crose, can help spot symbols that hate groups have co-opted to signal to each other in plain sight. At a glance, the way NEMESIS works is relatively simple. At its core is an "inference graph": a machine learning model trained on images classified as Nazi or white supremacist symbols. Once trained, the system uses this graph to identify those symbols in the wild, whether they appear in pictures or videos.
In a way, NEMESIS is dumb, according to Crose, because there are still humans involved, at least at the beginning. NEMESIS needs a human to curate the pictures of the symbols in the inference graph and make sure they are being used in a white supremacist context. For Crose, that’s the key to the whole project—she absolutely does not want NEMESIS to flag users who post Hindu swastikas, for example—so NEMESIS needs to understand the context.
“It takes thousands and thousands of images to get it to work just right,” she said.
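The flow Crose describes, a trained model scoring images against a human-curated set of symbol labels, can be sketched roughly as follows. This is a hypothetical illustration, not Crose's actual code: the model itself is stubbed out as a dictionary of confidence scores, and the label names and threshold are invented for the example. The key idea from her description is the curation step: only labels a human has vetted as white-supremacist usage in context can trigger a flag, which is why a Hindu swastika would pass through unflagged.

```python
from typing import Dict, List

# Hypothetical set of symbol labels a human curator has vetted as
# white-supremacist usage in context (names invented for illustration).
CURATED_LABELS = {"black_sun", "nazi_flag", "hate_pepe"}

def flag_image(scores: Dict[str, float], threshold: float = 0.9) -> List[str]:
    """Return curated symbol labels whose model confidence clears the threshold.

    `scores` stands in for the output of a trained inference graph:
    a mapping from symbol label to the model's confidence for one image.
    """
    return sorted(
        label for label, score in scores.items()
        if label in CURATED_LABELS and score >= threshold
    )

# Example scores as a trained model might emit them for one image.
scores = {"black_sun": 0.97, "hindu_swastika": 0.95, "hate_pepe": 0.40}
print(flag_image(scores))  # prints ['black_sun']
```

Note that `hindu_swastika` is excluded despite its high score, because it is not in the curated set; that context filter is what Crose says the thousands of training images are for.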
NEMESIS is already a working proof of concept, but Crose won’t stop working on it until it can be refined and deployed widely. Ideally, platforms like Twitter and Facebook will implement it to filter and spot Nazi or white supremacist propaganda, but the system can live without the social media companies, Crose said.
“Just because a microphone exists doesn’t mean that it needs to be given to people who will incite violence and hurt other people, and argue for the removal of civil rights from certain groups of people,” Crose told me. “There is no responsibility on the part of Twitter to make sure that everybody has an equal voice.”
Humans of the Year is a series about the people building a better future for everyone.