
Does an App That Measures Your Suicide Risk Violate Your Rights?

New technology designed to detect suicidality in teens raises ethical questions.

It sounds like something out of a science fiction novel: a program that could detect whether a teenager is at risk of suicide simply by analyzing their speech patterns. But researchers at the University of Southern California say that soon they might be able to do just that. And for this technology to be most effective, it might need to be a covert operation.

USC's Institute for Creative Technologies (ICT) made headlines when it debuted SimSensei, a "virtual human" that can provide medical care. SimSensei is what you get when you mix a videogame character like Lara Croft with highly sophisticated medical virtual reality programming. It interacts directly with patients, listening and taking note of everything from what the patient says to how they look and sound. Visits with SimSensei aren't that different from an appointment with a medical provider over Skype. SimSensei appears entirely on a screen, but it's programmed to respond to human behavior with the appearance of empathy; for example, leaning in when a patient needs reassurance during a medical appointment.


ICT developed SimSensei to perform mental health screenings, not prevent teen suicide. But when the program was unveiled at a medical conference in 2013, it caught the attention of a team of physicians from Cincinnati Children's Hospital Medical Center interested in using the technology to learn more about their suicidal patients.

Suicide is the second leading cause of death among people ages 10 to 34. The Centers for Disease Control and Prevention's (CDC) 2015 Youth Risk Behavior Surveillance System reports that 17 percent of high school students have seriously considered attempting suicide, 14 percent have made a suicide plan, and 8 percent have made at least one attempt in the course of a year. These numbers are rising fast; between 1999 and 2014, the suicide rate jumped 24 percent across all ages and genders.

There's no doubt that teen suicide is a public health crisis. What's less certain is whether a variety of high-tech methods of detecting and preventing suicidal tendencies will respect teens' ability to make their own medical decisions—or help them access effective treatments if a risk is detected.

Stefan Scherer leads the team at USC that's bringing SimSensei's speech pattern analysis to suicidal teens. His team analyzed voice recordings of two groups of adolescents from Cincinnati Children's to see whether they could detect differences between the speech patterns of teens admitted after suicide attempts and those of orthopedic patients with milder ailments like a broken leg. That part was easy. Suicidal teens could be separated from the control group simply by the type of language they used—and the hopeless way they described their feelings.


But Scherer wanted to know more: Could SimSensei detect nonverbal vocal cues that would separate first-time suicide attempters from those who had made repeated attempts? (Repeaters tend not to give off telltale vocal cues, so the sound of a non-repeater's voice could be used to identify someone at risk of a first attempt; a repeater would already be a known risk.)

SimSensei can detect changes in vocal timbre that are largely unnoticeable to human clinicians, and it can continuously observe human behavior by listening to the sound of a user's voice on the phone or in a clinical setting. Scherer's study revealed that the voices of first-time suicide attempters who've been admitted to the hospital are dramatically different from those of teens who have made repeated attempts. Their voices are breathier and "they are in serious shock," Scherer says.

"Repeaters," as Scherer calls teens who have made numerous suicide attempts, have an entirely different tone of voice. "Repeaters seem to have voice qualities that are almost normal," Scherer says. "They feel almost numb to the situation because they have been in this situation before. They aren't in shock anymore."

Scherer says his research could be turned into tools to predict whether an individual is at risk of attempting suicide before they make their first attempt, but it comes with a catch. "The problem is that to actually have the ability to prevent suicide, you have to be there all the time, every time," says Scherer. "In the future, I envision this to run somewhat in the background. You basically have an app on your phone that just learns to listen to you and understand you, and reacts in a meaningful manner that prevents bad outcomes."


Of course, the idea of an all-knowing app on your smartphone that alerts you, and perhaps the authorities, when it decides you might be suicidal sounds more like Big Brother meets RoboCop than a viable medical treatment approach. Scherer's method has some pretty high-profile critics, who argue that human interaction is still paramount to preventing suicide. Gene Beresin, a professor of psychiatry at Harvard Medical School and executive director of the Massachusetts General Hospital Clay Center for Young Healthy Minds, dismisses Scherer's research entirely, asserting that there is no need for elaborate methods of suicide detection. "In my experience, teens are very forthcoming about their admission to depression and suicidal ideation," he says. "If parents, teachers or clergy ask questions with a sense of warmth, safety and caring, they are going to tell their story. It's not rocket science and it's highly ethical."

While that's a valid argument, the problem lies in the overwhelming shortage of adolescent mental health care providers and a mental illness stigma that can prevent teens and young adults from seeking care. And while it would take a software development company—and clinical trials—to turn the research into a viable product, it's precisely these problems that technologies like SimSensei can address. By using virtual humans to perform mental health screenings, it's possible that services could reach dramatically more youth than are currently being treated by trained mental health providers. "Its reach is infinite. You can reach humans anytime, all the time, everywhere. You can deploy it on the web, on a mobile phone, and have a much faster and wider reach," Scherer says.


Speech analysis is only one of the behind-the-scenes ways scientists hope to prevent teen suicide. Researchers at Qntfy, a mental health analytics startup, are also developing algorithms to identify trends in human behavior. The company's most recent study applies these algorithms to the detection of mental illness and suicidal behavior using data collected from social media. Its preliminary research suggests that everything from the language you use on Twitter to the types of emoji you post on Facebook provides clues about your risk of suicide.

In April, Qntfy launched a new effort called OurDataHelps, which asks participants to donate their own social media data, as well as that of loved ones who died by suicide, to the company's research. Participation is voluntary (at least on the part of the living volunteer), and the company hopes its research will eventually be used to create tools that can detect and prevent suicide.

What those tools might be—and how they could be used to empower users without infringing on their autonomy—remains to be seen. But how physicians plan to use more traditional methods of suicide detection offers important clues.

A new blood test is one example of how seemingly innocuous methods can be surprisingly intrusive. One study found that pairing a blood test that looks for changes in 11 genes with a questionnaire could predict, with 92 percent accuracy, which men would develop suicidal feelings within the next year. These gene changes could be biological markers of people who might be considering suicide, and some doctors say the biomarkers could be used to justify preventative measures such as confiscating weapons or putting someone under suicide watch, meaning barring them from leaving the hospital before they've even attempted suicide.

There are no easy answers when it comes to balancing a teen's privacy and autonomy against saving their life. None of these tools is available outside of research settings yet. But for companies like Qntfy, it's only a matter of time.

"If it's okay for algorithms to sell sneakers, we think it should be okay for algorithms to help save lives," Tony Woolf, Qntfy's co-founder and chief operating officer, says. "The negative trend in teen deaths by suicide will not be solved by surveillance. It will be solved by giving people the resources to help themselves and each other at just the right time."