We think of Instagram as inherently aspirational, filled with ideas on how to live your best life. But the platform has also become a place to find inspiration of a different sort, the kind that makes mental health experts and Silicon Valley uncomfortable, to say the least.
Finding graphic self-injury photos on Instagram is disturbingly easy. Using ever-changing variants of words like #selfharm or #cutting, anyone can find a world that seemingly normalizes self-harm. And while Tumblr and Twitter also contain self-injury photos, a new study that looks at photos tagged with #cutting on Instagram, Twitter, and Tumblr found that the same app known for gratuitous selfies and #spon posts also hosted the greatest number of self-harm images.
The question of how to respond when teenage users upload graphic, self-harm photos has plagued social media sites for years. Monitoring the rise of these photos is especially important because those who see a high-status friend engaging in self-harm are more likely to do so themselves, according to a study last year in the Journal of Abnormal Psychology. The same study found that many of those who engage in deliberate self-injury first learned about it from a friend.
By most accounts, self-injury is widespread among adolescents. Researchers believe an estimated 14 to 26 percent of adolescents engage in the practice worldwide. A British charity found that 18,778 children aged 11 to 18 were admitted to hospitals for self-harm from 2015 to 2016, a 14 percent rise from the previous year.
Instagram has made some strides in providing resources to those who are cutting. In 2016, the app rolled out a suite of reporting tools allowing users to flag friends' posts for self-harm imagery while partnering with forty organizations around the world to provide support. Today, if you search for #cutting on the app, you'll be referred to a help page with crisis line phone numbers (a page you can also easily dismiss to continue viewing self-harm imagery).
In a statement provided to Broadly, Instagram said it supported "enabling people to discuss self-injury or connect with others who have battled similar issues" but that the company has "zero tolerance for content that encourages users to embrace self-injury." Instagram also defended its policy of not censoring content.
"This is a complex issue, so we strive to go beyond simply removing content or making a hashtag unsearchable," the statement continued. "Instead we take a holistic approach, employing tools and education, and working in partnership with organizations who specialize in mental health issues."
Instagram is in a bind, experts say, because self-harm communities can't be counted on to police their behaviors and banning words or phrases is like playing a game of whack-a-mole. "If one hashtag is no longer allowed as a searchable term, they'll just misspell the word with extra letters or misspellings," says professor Jonathan Comer, the study's lead researcher.
Along with others, Comer is pushing for organizations to use the same hashtags as those who are self-injuring to provide resources to the afflicted. "One of our findings was how extraordinarily rare it is for posts to include any kind of recovery resources," he said. "That's certainly an area where we need to improve things."
Another hope is that AI will soon be able to identify users who are self-harming or suicidal and automatically push help their way. Facebook is already developing an algorithm that can flag certain words or phrases, prompting the dissemination of resources in real time. A study published in March found that machines can predict with 80 to 90 percent accuracy whether someone will attempt suicide up to two years in advance by tracking their medical records, including pain medication prescriptions and ER visits.
Comer says he doesn't know of efforts to teach computers to recognize imagery that predicts future self-harming behaviors, but he's hopeful that researchers will be able to apply machine learning to that task as well. "I think NSSI [non-suicidal self-injury] is going to be one of the next frontiers where we're going to see a lot of advances just based on what I know of my colleagues' work in this area," he said. "But overt imagery [that suggests someone is considering suicide] might be a little easier to track."
While it's easy to paint self-harming communities online with one brush, some users have found a "sense of community and support" by connecting with other self-harmers online, according to a 2015 meta-study. In one of the surveys cited, a participant remarked, "Seeing these pictures gives me a sense of release and calm: it curbs my urges to cut."
"Finding like-minded communities, even if they might be organized around self-harm, can actually have some positive effects," Comer says. "People can encourage one another to refrain from self-injurious behaviors as well. There's nothing in this area that suggests these communities are all bad or all good — it's very gray."
The problem, as he sees it, is that the world of pain can seem never-ending, as if cutting is a social phenomenon on par with taking photos of your food. "When you see these things, when you search for certain words, it can seem like what you find is everywhere," he said. That, in turn, can easily distort your perceptions of "normal" behavior. "It's not just about telling teens, 'don't post,' but about educating them to be savvier consumers of social media."