Well that was quick. Just 24 hours after the European Union criticized Twitter, Facebook, Microsoft and YouTube for failing to fulfil their promise to tackle the proliferation of extremist content on their platforms, the four companies have announced a new initiative to do just that.
Designed to help curb the spread of terrorist material available online, the new project will see the creation of a centralized database of “hashes” — digital fingerprints — that can identify which images and videos are extremist in nature.
The idea is that content flagged by one company will then automatically be reviewed by all the others and removed if it is deemed to violate their respective terms and conditions. Because each company’s interpretation of what is extremist content is different, the process will not be automated, but it will speed up the identification process.
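The workflow described above — one company contributes a fingerprint, the others check new uploads against it and route matches to human review rather than removing them automatically — can be sketched roughly as follows. This is an illustrative sketch only: the function names are hypothetical, and a plain SHA-256 digest stands in for the fingerprinting step (real systems use perceptual hashes that survive re-encoding).

```python
import hashlib

# Hypothetical sketch of the shared-hash workflow described above.
# SHA-256 is a stand-in fingerprint; production systems use perceptual
# hashing so that re-encoded copies still match.

shared_hashes = set()  # the cross-company database of known hashes


def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital fingerprint ('hash') of a media file."""
    return hashlib.sha256(media_bytes).hexdigest()


def flag_content(media_bytes: bytes) -> None:
    """One company removes a piece of content and contributes its hash."""
    shared_hashes.add(fingerprint(media_bytes))


def needs_review(media_bytes: bytes) -> bool:
    """Another company checks an upload against the shared database.

    A match triggers human review, not automatic removal, because each
    company applies its own terms and conditions.
    """
    return fingerprint(media_bytes) in shared_hashes


flag_content(b"known extremist video")           # flagged by one company
print(needs_review(b"known extremist video"))    # matched by another: True
print(needs_review(b"unrelated cat video"))      # no match: False
```

The point of sharing hashes rather than the media itself is that the companies never have to redistribute the offending content: a fingerprint is enough to recognize a copy.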
“By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms,” the companies said in a joint statement on Monday.
Initially, each company will share the “most extreme and egregious terrorist images and videos” that they have already removed from their services — content that likely violates all of the companies’ content policies.
On Sunday, in an interview published in the Financial Times, the EU justice commissioner Vera Jourova criticized the Silicon Valley tech giants for failing to live up to a promise they made in May, when they signed a “voluntary code of conduct” agreeing to take some concrete steps toward dealing with racism and hate on their services.
“The last weeks and months have shown that social media companies need to live up to their important role and take up their share of responsibility when it comes to phenomena like online radicalization, illegal hate speech, or fake news,” Jourova said in the interview.
The joint statement posted on Facebook’s site on Monday appears to have been a fast reaction to the criticism. Facebook didn’t immediately respond to a request for comment on how long this program has been in the works.
The four companies currently use a similar system, called PhotoDNA, to identify child sexual abuse images that might be posted on their services. Unlike the new system, the PhotoDNA database is maintained by law enforcement, and companies are legally obliged to remove any content that matches the images and videos listed in the database.
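PhotoDNA itself is proprietary, but the general idea behind this family of systems is perceptual hashing: visually similar media produce similar hashes, and two files are treated as a match when their hashes differ in only a few bits. A toy sketch of that matching step, with made-up hash values and a made-up threshold, might look like this:

```python
# Toy illustration of perceptual-hash matching (not PhotoDNA's actual
# algorithm): hashes are compared by Hamming distance, so a re-encoded
# or slightly altered copy still matches the original.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fixed-length hashes."""
    return bin(a ^ b).count("1")


def is_match(candidate: int, known: int, threshold: int = 5) -> bool:
    """Treat two hashes within `threshold` bits as the same image."""
    return hamming_distance(candidate, known) <= threshold


known_hash = 0b1011_0110_1100_1010  # hash of a known image
reencoded  = 0b1011_0110_1100_1000  # one bit flipped after re-encoding
unrelated  = 0b0100_1001_0011_0101  # hash of an unrelated image

print(is_match(reencoded, known_hash))   # True: near-duplicate matches
print(is_match(unrelated, known_hash))   # False: unrelated image does not
```

This tolerance for small differences is what separates perceptual hashing from an exact checksum, which a single re-compressed pixel would defeat.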
Hany Farid, one of the developers of PhotoDNA, has broadly welcomed the announcement, but says that he has concerns about the lack of transparency. He told the Guardian that unless experts in extremist content are in place to maintain the database, the announcement will be futile:
“What we want is to eliminate this global megaphone that social media gives to groups like Isis. This doesn’t get done by writing a press release.”