Unpaid Fact-Checkers Are Getting Burnout From Debunking So Many Nazis on TikTok

And they’re mad as hell.

Abbie Richards is really proud of her new filming set-up. Surrounded by prints and succulents, it’s where she now plans to shoot her TikTok debunk videos – calm, thoughtful monologues delivered straight to camera over a huge mug of tea, as if you’d invited her in for a chat yourself. But you didn’t invite her in. Richards found you, on TikTok’s For You Page, because she knows how to do something most fact-checkers don’t: go viral.


TikTok is, like every other social media platform, facing a problem with extremism and conspiracy theories. But a report released last month by the Institute for Strategic Dialogue suggests that TikTok – now rivalling YouTube for watch time and trouncing platforms like Instagram when it comes to video – is not doing enough to remove troubling content. The study identified 312 videos promoting white supremacy – 30% of its full sample.

Richards’s latest video is about Hyperborea, a fantastical Arctic Aryan fairyland that emerged from esoteric Nazism, a mystical branch of the fascist, racist ideology. The Hyperboreans were a mythical people who lived in the northernmost parts of the planet, white with blond hair and blue eyes – a race that neo-Nazis want to restore. A TikTokker made a video claiming Hyperborea was real and that “school will never teach you this”; it received over 1.1 million likes.

“This stuff is racist and anti-Semitic, but coded enough that people will engage with it like it’s just a story,” Richards says in her video. “TikTok knows it’s Nazi shit, that’s why they banned the hyperborea hashtag – they just forgot to ban all the adjacent ones.” It’s true; you can’t search #hyperborea, but you can search #hyperborean, #hiperborea, #hyberborea and countless other options.


Richards’s own video has more than 300,000 likes and 1.2 million views – phenomenal numbers for a debunk video, but still less than a third of the likes the original received.

“All of it at this point is free labour on our part, and a lot of the time it’s still ridiculously less seen than the original misinformation,” says Richards from her office chair, looking fondly across the room at her filming set-up. “They should be giving resources to creators like me with different expertise to make those videos and then they have every right to push those videos.”

While the Institute for Strategic Dialogue report found little evidence of established extremist or terrorist groups using TikTok publicly, it did find photo slideshows and promotional clips for the fascist dictators Adolf Hitler, Benito Mussolini and Francisco Franco in its dataset.

It also found that footage related to the 2019 Christchurch terrorist attack was easily discoverable on the platform, along with 26 posts denying the existence of the Holocaust and content promoting the far-right “Great Replacement” and “white genocide” conspiracy theories, which “posit that white people are being systematically replaced and their existence is under threat across the world.”

It is a problem that TikTok is aware of. In an October 2020 blog post, it announced: “While our Trust & Safety teams already work to remove hate speech and hateful ideologies, such as neo-Nazism and white supremacy, we are strengthening our enforcement action to remove neighbouring ideologies, such as white nationalism, white genocide theory, as well as statements that have their origin in these ideologies, and movements such as Identitarianism and male supremacy.”


What exactly this enforcement action involves is unclear, but the Institute for Strategic Dialogue report included a number of further recommendations for the app, including incorporating extremism expertise into partnerships, being transparent about the algorithm and moderation, and evolving policies beyond narrow hashtag bans. For many – including those on the digital front line, like Richards – these commitments remain as nebulous as before, and distant from the creators who are trying to make a difference and who stand ready to help TikTok if asked.

She isn’t the only TikTokker trying to create content that might reach people before the white supremacists do. Leonie Plaar, a German content creator who goes by the handle @frauloewenherz, has also used her platform to talk about far-right ideology. “One of my most popular series was the one on coded Nazi symbols and language that are/is often used by alt right groups, including in online spaces,” she says.

Plaar says that while she doesn’t usually see these codes on TikTok herself, she follows other accounts that talk about their existence. “I know of a number of sizable alt right accounts that have been banned in the past few months. The problem is that a lot of the symbols and codes that they use are made up, only resembling Nazi symbols.”


“Therefore, TikTok, or any social media platform, has a hard time regulating them. They are not violating any laws, and they change them up too quickly for the law to catch up. Yes, these symbols and codes convey hateful, dehumanising messages, but they are, frustratingly, not illegal,” says Plaar.

In her videos, Plaar explains that growing up in Germany, where Nazi symbols are banned by law, she was taught how to spot these codes; the popularity of her content suggests that many people in other countries never got similar teaching. But the work takes a toll.

“I’m a queer, female, feminist political content creator,” she says. “I receive threats all the time – responses that span a wide spectrum from mean comments to death threats. They usually attack the aspect of my identity that they apparently feel most corresponds with the respective topic at hand.” 

“And still, the backlash that I experience is a far cry from what BIPOC creators go through with these groups.” 

Richards has also had to step up her personal security measures to stay safe. While her work has won her internet fame that has translated into mainstream news features in places like the BBC, she jokes that “TikTok owes me emotional damages. So much is unpaid volunteer debunk work.”

She has an exhausting list of concerns every time she makes a TikTok. “I have an entire device set up for monitoring content and you see terrible videos. They replay in your brain. It’s not good for you. And the pressure to debunk every single line, and do it in a way that’s appealing and not offensive, and doesn’t make people feel defensive. And I want to be as accurate as possible, I don’t want people to then Google things that will red pill them.”


Richards clarifies that the only reason she has been able to make a number of her videos, particularly her QAnon series, is that a group chat of experts formed to offer oversight of her scripts. All of them are disinformation researchers and authors in their own right, but what they lack are Richards’s TikTok skills.

TikTok has over 10,000 content moderators worldwide – but they work at the tail end of the content pipeline, taking things down rather than preemptively educating. The company has several fact-checking partnerships with organisations like PolitiFact and Lead Stories, but few of those organisations have TikTok accounts, and those that do create videos that only get a few hundred views. When it comes to making educational debunks or primers on coded language, no one officially tied to TikTok seems to be taking on the responsibility of making content that might reach users before the far-right content does.

TikTok has run education initiatives to counter content like Holocaust denial, such as its Holocaust Memorial Day hub, during which it pushed creator content. But during the rest of the year, Jewish creators often find that their content gets taken down when they try to challenge anti-Semitism on the app. A day or a week of amplified content may not be enough to make up for systemic moderation flaws.

A TikTok spokesperson told VICE World News: "TikTok categorically prohibits violent extremism and hateful behaviour, and our dedicated team will remove any such content as it violates our policies and undermines the creative and joyful experience people expect on our platform.


"We make exceptions for content that is designed to educate and raise awareness of the harms caused by violent extremism and hateful behaviour, and we are grateful to the creators in our community who use their platform to make TikTok a more creative, joyful place to share content."

A month ago, Jay Novak, the principal of converged threat-informed defence at TikTok, posted on LinkedIn that the team was advertising a number of new positions in a Global Security division. “My team is expanding and looking for intel analysts and security engineers to perform threat pursuit, protective intel, counterintelligence, cyber threat intel, security automation, and countermeasures development missions,” he wrote. 

On TikTok’s careers site, 10 threat analyst roles are currently listed in Mountain View, Dublin, Washington D.C. and Singapore – locations that suggest threat-analysis investment remains focused on English-language content.

For many content creators, it’s not just more experts that they want looking at content. They want more transparency, and more content on the For You Page that pre-empts conspiracy content, rather than simply dealing with it when it turns up.

But for Richards, not even that is being done well enough. She made her opinion clear in an August TikTok, in which she spoke about research she and a fellow fact-checker had conducted into anti-vax videos on the platform.

“There is no reason why two 25-year-olds with anxiety disorders can track down viral misinformation on TikTok,” she says, straight to camera, “but the multi-billion dollar company just can’t find it.”