The Facebook group “Fight Against Liberalism, Socialism & Islam” has almost 5,000 members. It’s a private group, so whatever is posted there is only visible to its members.
The group, run by a South African lawyer named Mark Taitz, claims that “moderate Islam does not exist and too many people fail to understand this,” and encourages Facebook users to “join our group to learn about Islam and the atrocities it is committing in ‘God’s name.’”
If that wasn’t explicit enough, the group’s banner image, which portrays Islam as a gun pointed at the head of a white woman representing “western civilization,” leaves anyone joining in little doubt about the anti-Muslim nature of the group.
Taitz’s is just one of dozens of Facebook groups based in the U.S., Canada, Australia, and the U.K. whose explicit goal is to spread anti-Muslim hate speech, according to a new report from the Center for Countering Digital Hate (CCDH).
The group’s researchers found 23 Facebook pages or groups that were “mainly or wholly dedicated to anti-Muslim hatred based on their names, descriptions, and content.” These groups had a combined following of over 320,000 people.
Despite being reported to Facebook—via the platform’s own reporting systems—none of these groups has been removed.
But the groups are just one aspect of a much wider problem of platforms failing to tackle Islamophobia, which exists not only on Facebook but on all major social media platforms.
The CCDH researchers found that Facebook, Instagram, TikTok, Twitter, and YouTube failed to act on 89 percent of posts containing anti-Muslim hatred and Islamophobic content reported to them. Across all platforms, the researchers identified and reported 530 posts that contained “disturbing, bigoted, and dehumanizing content that target Muslim people through racist caricatures, conspiracies, and false claims.”
In total, these posts were viewed at least 25 million times.
And it’s not as if this content was difficult to identify. On Instagram, TikTok, and Twitter, users were allowed to spread this hateful material using hashtags such as #deathtoislam, #islamiscancer, and #raghead. Content spread using those hashtags received at least 1.3 million impressions, CCDH found.
“Much of the hateful content we uncovered was blatant and easy to find—with even overtly Islamophobic hashtags circulating openly, and hundreds of thousands of users belonging to groups dedicated to preaching anti-Muslim hatred,” Imran Ahmed, chief executive of CCDH, said in a statement.
In the wake of the 2019 mosque shootings in Christchurch, New Zealand, which left 51 people dead, the people running Facebook, Instagram, Twitter, and YouTube signed on to the Christchurch Call, a pledge to eliminate terrorist and violent extremist content online.
But the new report shows that these platforms are failing to achieve this on even the most basic level: Researchers identified 20 posts featuring the Christchurch terrorist, including footage he livestreamed to Facebook during the attacks. Just six of the flagged pieces of content were removed, and Facebook, Instagram, and Twitter failed to remove any of the content the researchers identified.
In a manifesto posted online before the attack, the shooter cited the “Great Replacement” conspiracy theory—which claims that non-white immigrants are trying to replace white people and white culture in Western countries—as inspiration for his actions.
But all the major social media companies are failing to live up to their pledge to stop this type of content from spreading on their platforms. The CCDH researchers analyzed nearly 100 posts featuring elements of the “Great Replacement” conspiracy theory and found that platforms failed to act on 89 percent of them.
YouTube failed to remove any of the eight “Great Replacement” videos, even though they were reported to the platform. Those videos have amassed nearly 19 million views, and users on other platforms continue to use them as reference points for the hateful conspiracy theory.
Facebook, YouTube, and Instagram did not respond to VICE News’ request for comment. TikTok responded but would only comment on background, and did not provide a statement on the record.
In a statement, Twitter said it “does not tolerate the abuse or harassment of people on the basis of religion” and praised the automated system it uses to catch content that violates its policies. While the statement did not address any of the specific claims made in the report, it did admit the company “knows there is still work to be done.”
But such statements will provide little comfort for the millions of Muslims who are denigrated and threatened with violence on social media on a daily basis.
“Three years on from Christchurch, social media companies are full of spin when it comes to fighting the drivers of violence,” Rita Jabri Markwell, of the Australian Muslim Advocacy Network, told VICE News in a statement.
“We are not surprised by these findings, but it’s a relief to have our experiences investigated and validated. Across the world, from India to Australia, Europe to North America, anti-Muslim conspiracy theories have been used to stir violence and extreme politics.”