How the Far-Right Weaponised Memes

The far-right has been subversively using memes for years to covertly spread their hateful message and recruit new members – and it's almost impossible to clamp down on them.
Decade of Hate is a series that covers the dangerous rise of far-right movements across Europe over the past 10 years.

The far-right has always appreciated the value of propaganda – and had a knack for it.

But after the resounding defeat of fascism – and the discrediting of its ideology – in World War II, traditional far-right propaganda just didn’t hit the same. A new mode of political messaging was required that could somehow get around the general public’s aversion to overtly racist, bigoted politics.

With the rise of social media, that new tool had arrived. Enter the meme: a powerful new mode of pop cultural language that has helped give far-right ideas and values renewed global reach and influence over the past decade.

As US white supremacist leader Richard Spencer boasted to VICE News in 2016: “We ‘memed’ alt-right into existence, and it’s now almost a household name.”

Memes have become a huge asset to the far-right, who were quick to recognise their value in spreading extremist ideas online and growing a burgeoning far-right ecosystem. They co-opted popular memes like Pepe the Frog – a character from the cult online comic “Boy’s Club”, which had no association with bigotry – and gave him a racist makeover: depicting Pepe in a Ku Klux Klan hood or SS helmet, or with a Jewish kippah and payot, smirking as the Twin Towers fall.

“When you make it funny, their guard goes down,” Tim Gionet, the US alt-right agitator known as Baked Alaska, told VICE in 2017.

“All of a sudden, you’re influencing hundreds of thousands of people, if not millions, with one tweet,” said Gionet, who was charged with violent entry and disorderly conduct on Capitol grounds and knowingly entering a restricted building for his role in the January riot at the US Capitol.

Groups from across the political spectrum, including mainstream parties, have incorporated memes into their messaging, in a bid to gain relevance and support from younger voters.

But for the far-right, the nature of memes has brought particular advantages, experts say.

The obscure, subversive humour, as well as giving the content viral appeal, has helped remove a key obstacle that thwarted far-right propaganda in the past: potential audiences were wary of publicly engaging with or endorsing it, because of the stigma attached to hateful ideology.

Crucially, the slippery meanings of memes give those who share them a degree of plausible deniability, helping far-right ideas find a wider audience and normalising hate speech.

“Far-right movements will use memes to code and cloak more insidious ideologies and terminology under the guise of humour,” said Ashton Kingdon, a PhD candidate at the University of Southampton who researches how right-wing extremists use technology for recruitment and radicalisation.

“It allows the defence of people to … say that they might not have known it had this insidious meaning, or say that it was just a joke.”

“That's why I think they’re particularly dangerous in terms of being able to reach unexpected audiences.”

The rise of memes also took a lot of the work out of spreading far-right ideas. Members of online far-right communities generated and spread the memes themselves, enabling them to gain standing and influence in these networks by churning out extremist content.

“You have an adolescent in their room become an influencer. You have them pumping out stuff, you have them become online personalities,” said Nicholas O'Shaughnessy, Emeritus Professor of Communication at Queen Mary, University of London.

Like most online communities, these far-right networks – congregating on message boards like 8chan – operate as ideological filter bubbles, with memes continually reinforcing the group’s defining narratives: the supposed threat to the West posed by minorities, or Islam, or Jewish or “globalist” conspiracies.

“You congregate with people of your own tribe,” said O'Shaughnessy. “So you have all these online communities which congregate and fester together and dig each other deeper into psychological ditches, because you're not being exposed to information, you're no longer being exposed to argument.”

The dangers of these meme-sustained online communities – as incubators of hate and pipelines to radicalisation – have become increasingly apparent in recent years, experts say.

The August 2017 Unite the Right rally in Charlottesville, Virginia – where tiki torch-wielding white supremacists marched shamelessly before the nation’s media, and where a counter-protester was killed in a car-ramming attack – emerged largely from these online networks.

“The alt-right was known as being an online movement before Charlottesville – that was the event that brought them into physical space,” said Kingdon. “Charlottesville made people sit up and realise that, no, this is a group that we need to take seriously.”

And just before the Australian white supremacist terrorist Brenton Tarrant attacked two mosques in Christchurch, New Zealand, in March 2019, killing 51 people, he announced his murderous intentions in an 8chan post featuring a meme, then live-streamed the massacre on Facebook.

READ: How YouTube and Facebook radicalised the Christchurch shooter

The online comments reacting to the slaughter featured a real-time stream of in-jokey shitposts that underscored the desensitised nature of the online far-right culture Tarrant had been radicalised in. Despite the horror of his actions, even his weapons – which he had decorated with references to historical clashes between Christians and Muslims, as well as to more recent white supremacist terrorists – were given the meme treatment.

The atrocities in Christchurch highlighted the need to urgently address the threat of online radicalisation, with more than 30 countries signing on to the so-called “Christchurch Call” – an initiative, led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, to eliminate terrorist and violent extremist content online.

But experts say that, for all the good intentions behind such initiatives, efforts at regulation will struggle to silence an online extremist subculture that is expert at skirting the bounds of taste and legality with transgressive, deeply ironic memes.

“Memes are used in a way to obfuscate hateful ideology and imagery,” said Andy Campbell, a HuffPost senior editor who is writing a book on far-right group the Proud Boys, “to the point where if a platform like Twitter or TikTok creates a hate speech policy and says you can’t put imagery of, say, lynchings … any more … the people that make memes are going to immediately work to find the loophole.”

O'Shaughnessy agreed. “They will always be complete masters of the technology. So all attempts to cut them down and so forth, they will always find a technological way of outsmarting that.”