
Sexting Is Complicating Plans to Scan Private Messages for Child Sexual Abuse

While child victim organizations are calling for quick action, privacy groups are pointing to the dangers of mass surveillance.

After expanding privacy safeguards for private messages in December, European Union officials and legislators are struggling to agree on an exception to the new rules that would allow tech platforms like Facebook and Skype to continue indiscriminately screening private (though not end-to-end encrypted) messages for potential child sexual abuse.

Both digital rights watchdogs and organizations specializing in child abuse (and Ashton Kutcher) have jumped into the fray, with the former arguing that indiscriminate surveillance is disproportionate and a slippery slope, and the latter claiming that the European Union’s failure to act quickly has created a legal vacuum now being taken advantage of by sexual predators.


Now, according to EU-policy news network Euractiv, the issue of what to do about consensual sexual communication—such as sexting—has thrown a wrench into the negotiations. The core debate is over whether the review of these private messages should be done entirely by algorithms (before flagged material is sent to law enforcement), or whether humans should review the material as well, similar to how content moderation is currently done on Facebook.

The obvious problem with including human review is that employees of private companies would likely see at least some private, consensual sexual communications erroneously flagged by algorithms. But purely algorithmic review comes with its own set of problems, such as algorithmic bias and the transparency issues that could come with private platforms designing their own algorithms.

Because of this, some privacy activists like Diego Naranjo, Head of Policy at European Digital Rights (EDRi), believe that the plans should be abandoned altogether and replaced with targeted surveillance. 

“The problem that we see is that this [type of indiscriminate surveillance] implies a massive scanning of all private communications in order to find some illegal material,” Naranjo told Motherboard over the phone. “It’s not private communication then. Of course, we are in favor of tackling things like child abuse and terrorism, but we are not in favor of the normalization of mass scanning of communications.”


“One positive thing I can say about human content moderation is that at least in the end you have someone, a person, catching false positives,” he continued. “I think in the end a person should be in charge. This should not be a person at a private company though, but someone from a public institution.” 

Honza Červenka, a lawyer at McAllister Olivarius, a law firm that specializes in cases of online sexual violence, said the European Union’s plans raise difficult questions. While requiring websites like Pornhub to screen material for sexual abuse before it is uploaded is an obvious step, he said, platforms like Facebook or Instagram surveilling the communications of private individuals is a far more complicated and nuanced area to tackle, especially when it comes with the risk of people’s private, consensual sex lives coming under scrutiny.

“I can see both sides of the debate,” Červenka said over Zoom. “Obviously, on the one hand, you've got the drive towards privacy—encryption models that are hard to penetrate, VPNs, and so on—which of course are great for many reasons, but also allow secure transmission of content that is illegal to share, such as child pornography, terrorist activities and the like. On the other hand, having harmless private communications out in the open and subject to scrutiny is also problematic because the right to privacy is recognized as a fundamental human right in many countries.”


Červenka also thinks it's important to examine not just the technical apparatus that comes with this sort of screening, but also the human one. 

The exploitative working conditions and psychological toll of human content moderation as implemented by companies like Facebook and YouTube are another issue that would come with mandatory human review. Content moderators for these platforms routinely come into contact with videos depicting extreme violence or child abuse, and many are hired through a convoluted network of subcontractors in places like India and the Philippines, far from the companies’ California headquarters.

Regardless of how the surveillance is done, child protection groups like the NSPCC in the UK and NCMEC in the U.S. are continuing to put pressure on both the platforms and the European Union to allow it again quickly.

“We cautioned that if the EU failed to act this would have a devastating impact – literally blinding the world to the online abuse of children in the EU,” NCMEC wrote in a blog post published Wednesday. “Offenders would continue to entice, groom, sexually exploit, and trade sexually abusive images of children online with impunity. In the first 42 days since enactment of the new EU regulations, some of our worst fears for children in the EU have been realized.” 

For now, the combination of this pressure, similar new rules for taking down terrorist content online, and half-hearted support for encryption means that screening of private messages for child sexual abuse in the European Union will most likely continue in some form, whether by an algorithm, a person, or a combination of the two.