“Social media platforms typically use machine-learning models to analyze uploaded image and video content to determine how to act on it—uploads can be rejected entirely, put behind a sensitive content filter, or, in severe cases, reported to law enforcement,” the researchers wrote.

One of the most widely used pieces of software is known as PhotoDNA, which maintains a database of digital signatures of known child exploitation imagery, against which platforms can check uploaded content before it is allowed on the site. But Gettr does not use PhotoDNA or any other automated software to monitor the content being uploaded to its site. Instead, it relies on users to report content.

“This, frankly, is just reckless,” David Thiel, an author of the report, tweeted. “You cannot run a social media site, particularly one targeted to include content forbidden from mainstream platforms, solely with voluntary flagging. Implementing PhotoDNA to prevent CEI is the bare minimum for a site allowing image uploads.”

Using PhotoDNA’s database, the Stanford researchers identified 16 matches among a sample of images taken from posts and comments on Gettr. They also demonstrated how easy it is to upload child exploitation imagery by posting several benign images that PhotoDNA stores in its database for testing purposes.

Miller told VICE News that Stanford’s report is “completely wrong,” claiming that Gettr has “a robust and proactive, dual-layered moderation policy using both artificial intelligence and human review, ensuring that our platform remains safe for all users.”
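The screening flow the researchers describe amounts to hashing each upload and checking it against a database of known signatures before publication. The sketch below illustrates that flow only; PhotoDNA's actual algorithm is a proprietary perceptual hash that tolerates resizing and recompression, whereas the SHA-256 stand-in here matches exact bytes, and all names and sample data are hypothetical.

```python
import hashlib

def screen_upload(image_bytes: bytes, known_hashes: set[str]) -> str:
    """Return 'reject' if the upload matches a known signature, else 'allow'.

    Illustrative stand-in for a PhotoDNA-style check: real systems use a
    perceptual hash supplied by a vetted clearinghouse, not SHA-256.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return "reject" if digest in known_hashes else "allow"

# Hypothetical example: seed a blocklist with one image's signature,
# then screen two uploads against it before allowing them on the site.
flagged_image = b"\x89PNG...bytes of a known prohibited image..."
benign_image = b"\x89PNG...bytes of an ordinary image..."
blocklist = {hashlib.sha256(flagged_image).hexdigest()}

print(screen_upload(flagged_image, blocklist))  # reject
print(screen_upload(benign_image, blocklist))   # allow
```

A platform that skips this step entirely, as the report says Gettr does, learns about prohibited content only after users see it and choose to flag it.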
This is hardly surprising, however, given that in the week after Gettr launched, the site’s source code was leaked and prominent accounts were defaced. The platform was also spammed with Sonic porn posted by left-wing trolls, who pointed out Miller’s hypocrisy when he banned the content from the platform.

Then, two weeks ago, a Politico investigation found that the site was filled with extremist jihadi content, including beheadings and viral memes in support of the Islamic State.

Stanford’s research found that the site was populated primarily by far-right users from the U.S. and Brazil who had been deplatformed by larger social media sites, as well as a sizable Arabic-speaking population.