Upset girl sitting in window. (MART Production/Pexels)

'Horribly Unethical': Startup Experimented on Suicidal Teens on Social Media With Chatbot

Koko, a mental health nonprofit, found at-risk teens on platforms like Facebook and Tumblr, then tested an unproven intervention on them without obtaining informed consent. “It’s nuanced,” said the founder.

Between August and September of last year, some users between the ages of 18 and 25 on platforms including Facebook, Discord, Tumblr, and Telegram were, without warning, directed to a chatbot after posting one of more than a thousand keywords, ranging from “depression” to “sewer-slide,” that suggested they were at risk of harming themselves.

This was part of an experiment carried out by the founder of a controversial mental-health nonprofit called Koko and a Stony Brook University professor who also runs a suicide intervention consultancy. According to a preprint describing the study, when Koko’s algorithm detected people using “crisis-related” language on such partner platforms as Tumblr, they were funneled to Koko’s own platform, where they were presented with a privacy policy and terms of service outlining that their data could be used for research purposes. 

A chatbot then asked, “What are you struggling with?” If their response indicated that they were at risk, they were sent into a so-called “crisis flow” and asked whether they were struggling with suicidal thoughts, self-harm, an eating disorder, abuse, or something else, after which they were randomly assigned to one of two groups. One group, a control, was provided the number for a crisis hotline. The other was provided a “one-minute enhanced crisis Single-Session Intervention”—an official-sounding name for a Typeform-generated survey that presented users with a quiz, contained interstitials with a GIF of a cat, and asked them to identify triggers and coping strategies. 

"Before we get started, do you promise to check your notifications tomorrow? We'll follow up and ask how you're doing. It's extremely important for us to get this feedback so we can learn how to improve things," the intervention said. If a user clicked "Yes," the next slide said "Thanks for that! Here's a cat!"

Screenshot from Koko

If they selected "No," the next slide said, "That's ok. We still love you. Here's a cat! Let's keep going to create your safety plan."

Screenshot from Koko

The user was then taken through a series of questions about why they were upset, what they could do to cope, and who they could talk to about it. At the end, they were given a "safety plan" that they were asked to screenshot so they would remember who they could call in case of a mental health emergency. This page also included information about how to reach a crisis hotline.

The effectiveness of this unproven intervention is what researchers sought to study.

Social media companies have been grappling for years with what to do when their users post content indicating they’re at risk of harming themselves; generally, users are referred to crisis hotlines, where they can speak to people who have been trained to talk to those in distress. Any experiment like the one carried out last year would necessarily raise questions about the ethics of researching anti-suicide interventions in social media, and the role of chatbots and scripted health interventions in suicide prevention.

The Koko experiment, though, raises further questions, in part because it was carried out as “nonhuman subjects research,” meaning that participants were deprived of a number of protections related to their safety and privacy. Experts consulted by Motherboard described the study using terms ranging from “distasteful” to “inexcusable.” Koko founder Rob Morris, though, defended the study’s design, arguing that social media companies aren’t doing enough for at-risk users and that seeking informed consent from participants might have led them not to participate.

“It’s nuanced,” he said.

“It'd be great if we had a human being that could respond to every single person in crisis online. 24/7, 365 anywhere in the world. That's just not possible,” John Draper, the longtime former director of 988/Lifeline, told Motherboard. “So I guess the question is, what can we do to help them connect to human beings more efficiently?”

Morris’ answer is that providing an emergency hotline is not enough; he hopes platforms will partner with Koko to give users additional resources. More specifically, he hopes they’ll be directed to Koko’s chatbot.  

When users search key terms such as depression on social media platforms, they’re typically directed to a link offering support. Instagram users, for example, are directed to a “Get Support” page where users are offered the options of talking to a friend, talking to a helpline volunteer, or finding ways to support themselves. The GIF gallery platform Giphy, which is already partnered with Koko, offers a direct link to its platform to users who search for terms like “depression,” displaying a banner that they can click to immediately talk to a Koko chatbot. (Although the interface appears similar to a customer service chatbox, there is no disclaimer that this is a bot.)

Screenshot of Kokobot

When Motherboard used Koko’s peer support tool, one of the four options offered by the chatbot, it asked us if we needed help with "Dating, Friendships, Work, School, Family, Eating Disorders, LGBTQ+, Discrimination, or Other," then asked us to write down our problem and tag our "most negative thought" about it, then sent that information to someone else on the Koko platform. The advice that Koko gave about a non-communicative partner was, “You should just dump him if he’s not texting you. Just tell him he sucks.” Generally, people seeking help on the Koko platform are also asked to anonymously give advice to other users on Koko. Immediately after asking for help ourselves, we were given the following conundrum to solve, posted by another user: "i feel so exhausted. im autoromantic and autosexual and struggling to cope. i keep going though periods of being comfortable with my sexuality and my relationship with myself, but right now im struggling. i feel disgusting and narcissistic for loving myself. only yesterday i was feeling happy that i knew i love myself and now im just tired and sad. ugh :(( I’m a loser."

Earlier this year, Morris posted a Twitter thread that went viral in which he explained that Koko was experimenting with using AI to help users give each other mental health feedback. Motherboard originally reached out to Morris for comment on this system, which was highly controversial and which Koko eventually abandoned. While researching that original article, Motherboard came across the suicide and self-harm preprint paper, titled "Improving uptake of mental health crisis resources: Randomized test of a single-session intervention embedded in social media." We asked Morris specifically about this paper, which noted that young social media users were recruited into the study and that all consent was given as part of the company's long privacy policy.

"That study is probably the most important work I've done in my entire career," Morris said. "I'd love to share it with you all." 

Motherboard also tested the SSI mentioned in the paper, which Morris said is adapted from work that researchers did at the University of Denver. We were first presented with a cat GIF and a drop-down menu of coping strategies including “hugging an animal,” “watching something funny,” “playing a video game,” and “drawing on myself.” We were then asked to list “people who distract you and make you feel better,” and “people that lighten your mood.” These questions were followed by a picture of a dog with a caption that read “Great job so far! You’re almost done!”

“I think everyone wants to be helping. It sounds like people have identified insufficient mental health care resources as a problem, but then rather than working to increase resources (more funding for training and hiring mental health care workers) technologists want to find a short cut,” Emily M. Bender, a professor of linguistics at the University of Washington, told Motherboard.

Koko has a strong ethos. It hopes to make mental health support accessible to all and to reach online users who are searching for harmful content on social media. And the results of its study, in which 374 people participated, are moderately promising, or at least suggestive. Those who were directed to Koko “reported greater decreases in hopelessness ten minutes later” than those directed to the crisis hotline and were “more than twice as likely to report using the resources provided to them,” according to the preprint. The authors concluded that the “enhanced crisis SSI” can reduce users’ hopelessness and increase access to mental health resources.

While it’s certainly important to help people in distress, though, many researchers and psychologists were appalled by the experiment, not least because it potentially put at-risk people at further risk.

“Completely, horribly unethical. Mucking around in an experimental manner with unproven interventions on potentially suicidal persons is just awful,” Arthur Caplan, a professor of bioethics at New York University, told Motherboard. 

What alarmed many researchers was the lack of protections for participants. Typically, a study involving human subjects requires board oversight, informing subjects of risks and benefits, providing contacts for questions about the research, and obtaining a signed consent form from each participant. The study, though, “was deemed as nonhuman subjects research in consultation with the institutional review board (IRB) at Stony Brook University,” the authors write, and therefore exempt from requiring informed consent from the subjects. As a result, the consent process simply involved subjects agreeing to Koko’s privacy policy and terms of service. Notably and definitionally, users would have been agreeing to this privacy policy and terms of service while, at least according to the study’s designers, at risk of self-harm. Copious research has shown that the overwhelming majority of people do not read terms of service agreements under normal circumstances; presumably they are even less likely to do so while seeking help for a mental-health crisis.

“There are many situations in which the IRB would exempt researchers from obtaining consent for very good reasons because it could be unethical, or impractical, and this is especially common for internet research. It's nuanced,” Morris, the Koko founder, told Motherboard. “If we wanted to do additional consent, around, like, ‘Hey, we're adding some components to this crisis disclaimer,’ consenting them at that moment would churn out a ton of people and then block access to the resources.” 

Morris declined to say whether he thought the subjects had meaningfully consented to the study. He told Motherboard that his goal was to establish a new best practice, where he would be able to transparently show his results to social media platforms. However, when asked if he felt that the experiment was transparent to the participants involved, he said he’d needed more time to think about it. 

All of this is a non-issue, though, according to his co-author, because the determining factor in whether this was deemed to be research on human subjects was that it wasn’t performed at the university itself. 

“We submitted this project to our IRB here at Stony Brook, and their office formally determined that it was ‘not human subjects research,’ due to the specific research activities to be conducted here at Stony Brook—which did not involve recruitment, data collection, or interactions with participants,” Jessica Schleider, an assistant professor of psychology at Stony Brook and a co-author of the paper, told Motherboard. “No participant recruitment or data collection occurred at Stony Brook University, and the study/project design was determined by staff at Koko. We (my lab team at Stony Brook) strictly analyzed the de-identified data that resulted from Koko’s pre-planned internal evaluation.”

Stony Brook's IRB and several people tasked with overseeing it did not respond to multiple emails from Motherboard about the study or the review process. Facebook, Discord, and Tumblr responded to Motherboard’s initial emails but did not provide comment. Telegram did not respond to Motherboard’s request for comment.

While Stony Brook may have been distanced from it as an institution, users did indicate their age, gender identity, and sexual identity to the researchers, and the paper’s conclusion could not have been reached without such identifying and personal data. There is, further, no easy way to fully separate such data from the actual subjects who provided it, as anonymized datasets can often still be traced back to specific individuals. (A 2019 study found that 99.98 percent of Americans could be correctly re-identified in any dataset using 15 demographic attributes.) This is why privacy experts have long warned about the limits of anonymization and the unreliability of supposedly anonymous datasets.

“Most IRBs give a pass to ‘de-identified’ research as they claim there can be no privacy or security harms. But, in this case, they are collecting demographic information which could be used to identify users,” Eric Perakslis, the chief science and digital officer at the Duke Clinical Research Institute, told Motherboard. 

Morris told Motherboard that the goal of publishing the study was to create more transparency to show big social media platforms that they need to improve upon the resources they provide to at-risk users. 

Beyond concerns about subjects’ privacy, though, the nature of the experiment raises further questions about their safety. The way it was designed absolved the researchers of any responsibility for subjects after the experiment concluded, beyond the “ten-minute follow-up” that was conducted.

“The study does not control for or protect users from interactions that make them feel worse. Have any of these participants gone on to commit suicide? Did they track, or allow, for people seeking more than one session, etc.?” Perakslis said. 

“Should we be following up until we can verify that the person has linked to the resource? Short of harassing them? You know, if they consent to that? And they say, yes, please check with me. Absolutely, the more we can do that, the better,” said Draper. (“What I recall when reading the study that what made me feel more confident about it is that people felt more hopeful,” he added.) 

When we asked Morris whether he considered or monitored the long-term effects of the intervention on users beyond the 10-minute follow-up, he replied, “Yeah, we have done that. And I think the important thing to know is, like, the status quo here doesn't do any of that. This is what's important to me.”

Both the study and Koko’s platform, though, raise the question of how tech can be ethically and effectively used for mental health purposes. The current dynamic between Koko and its users more closely parallels the relationship between most tech companies and their users than that between a mental health provider and patient. Its Terms of Service, for instance, state that “You grant Koko a fully paid, royalty-free, perpetual, irrevocable, worldwide, royalty-free, non-exclusive, transferable and fully sublicensable right (including any moral rights) and license to use, license, distribute, reproduce, modify, adapt, prepare derivative works of, publicly perform, publicly display, and otherwise fully exploit Your Content.”

Morris told Motherboard that he feels that most social media platforms are passive and tight-lipped when it comes to what they’re doing to help users in crisis. “[For] these platforms, it's so easy to just do nothing. Eventually, the legislation is coming, which is saying you need to do more, you can't just not detect when someone's slitting their wrists on Twitter,” Morris said. “You need to figure out how to do that. And you need to give them more resources. I need to figure out whether people are actually clicking into your resources. But just the nature of how these partnerships work. They're so sensitive, that their default is like ‘I don't want to do anything.’”

Koko, for its part, offers a "Suicide Prevention Kit" to social media companies—basically, code that sites can implement to push users to Koko—and claims on its website that it has "worked with" Twitch, Tumblr, WordPress, Airbnb, Giphy, VSCO, and TikTok. Morris says that Koko—originally a company, now a nonprofit—is focused entirely on doing good in the world. Schleider is the cofounder of Single Session Support Solutions, a company that is working on a mental health app and also provides mental health consulting services to organizations. Her work with this company is listed as a conflict of interest in the preprint paper, but Schleider said, "You are correct that I co-founded a company that helps community organizations use evidence-based single-session interventions. However, my company has not promoted the intervention tested in the paper you linked."

Good intentions, though, are no guarantee on their own. Crisis Text Line, a nonprofit organization that provides free mental health texting services, came under fire last year after Politico reported that it gathered data from its conversations and shared it with a for-profit startup called Loris, which provides companies with AI for customer care conversations. (The nonprofit ended its relationship with Loris three days after the report was published.)

“If this is the way entrepreneurs think they can establish AI for mental diseases and conditions, they had best plan for a launch filled with backlash, lawsuits, condemnation and criticism,” said Caplan, the NYU ethicist. “All of which are entirely earned and deserved. I have not in recent years seen a study so callously asleep at the ethical wheel. Dealing with suicidal persons in this way is inexcusable.”

If you or someone you know is in crisis, call the National Suicide Prevention Lifeline at 800-273-8255, text TALK to 741741, or visit https://suicidepreventionlifeline.org for more information.

Jason Koebler contributed reporting.