Instagram Failing to Protect Women From Unsolicited Dick Pics: Report

9 out of 10 abusive DMs are not acted upon by Instagram, a new report by the Center for Countering Digital Hate has found.

Serial cyberflashers account for a disproportionate amount of image-based abuse of high-profile women on Instagram, which fails to act on 9 out of 10 abusive accounts, a new report has found.

The Center for Countering Digital Hate (CCDH), a British non-profit organisation, found that 1 in 15 DMs sent to high-profile women are abusive, and that 90% of them are not acted upon by Instagram’s tools.

The CCDH worked with the actor Amber Heard, Countdown presenter Rachel Riley, Reclaim These Streets co-founder Jamie Klingler, journalist Bryony Gordon and Sharan Dhaliwal, founder of the magazine Burnt Roti, to analyse thousands of the DMs they receive.
On top of failing to act on 9 out of 10 abusive accounts, the report found that Instagram failed to act on any image-based sexual abuse within a 48-hour period, and failed to act on any accounts that sent single-word messages of abuse.

Serial ‘cyberflashers’ were found to be responsible for nearly a third of the image-based sexual abuse the users received. 

Cyberflashing is the act of sending unsolicited obscene images online.

“It really makes me not want to go into my DMs at all because it’s revolting,” said Rachel Riley. “It’s astounding to know that randomers are sending porn - it empowers them to know that it’s gone to your inbox. They get off on it.”

Cyberflashing is already illegal in France and Ireland, and is included as a criminal offence in the UK’s Online Safety Bill, which was first introduced in Parliament in March. 

As well as allowing for the sentencing of cyberflashers themselves, the bill means platforms may be fined for failing to appropriately protect users from image-based abuse.

The CCDH report also suggested that abusers are taking advantage of gaps in the reporting system. 

The “hidden words” feature, which filters messages containing offensive words into a separate folder, was found to be ineffective, with swear words and “bitch” not hidden for users.

While one in seven voice notes were found to be abusive, Instagram offers no way for users to report them.
Sharan Dhaliwal’s dataset showed that many strangers attempted to call her numerous times, and that one did so after sending her two photographs of his genitals. Another tried video calling her after messaging her 42 times in a 3-day period with comments such as “sex”, “I like hairy” and “I wanna lick it”. 

“You can dissociate from most abuse,” Dhaliwal told the CCDH, “but when you hear their voice it becomes more real.”

Imran Ahmed, CEO of the CCDH, said: “Our research finds that Instagram systematically fails to enforce appropriate sanctions and remove those who break its rules.

“Online misogyny, made easy by platforms’ functionality, has offline impacts in normalising gender-based violence and harassment. In the absence of effective tools to stop the stream of harmful content, women have been forced to find their own solutions, often tailoring their content to avoid provoking abusers or avoiding posting altogether to reduce their visibility. Platforms’ purported safety measures are both ineffective and shift the burden of preventing misogynistic and online gender-based violence to those who suffer the abuse.”

Cindy Southworth, head of women’s safety at Meta, which owns Instagram and Facebook, said: “While we disagree with many of the CCDH’s conclusions, we do agree that the harassment of women is unacceptable. That’s why we don’t allow gender-based hate or any threat of sexual violence, and last year we announced stronger protections for female public figures.

“Messages from people you don’t follow go to a separate request inbox where you can either block or report the sender, or you can turn off message requests altogether. Calls from people you don’t know only go through if you accept their message request and we offer a way to filter abusive messages so you never have to see them.”