
Your Phone Can Tell When You're Depressed

But should it?

Last month researchers from the Harvard psychology department and the University of Vermont Computational Story Lab released the results of an interesting study: An algorithm they'd developed was able to detect signs of depression more accurately than a physician, simply by analyzing the contents of one's Instagram feed. Scanning nearly 44,000 photos from 166 participants, the program predicted clinical depression diagnoses by combining facial-detection data, platform activity metrics, metadata, and color analysis. The clues to participants' mental health were hidden in plain sight, from the number of selfies and the number of other people in photos down to the specific filters depressed people are most likely to apply to images. (It's Inkwell, if you're wondering.)
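The study's actual models and features are more involved than anything that fits here, but a minimal sketch gives the flavor: pull simple color statistics out of each photo, aggregate them per user alongside activity metrics, and train an off-the-shelf classifier. Everything below, from the feature set to the placeholder data to the choice of logistic regression, is an illustrative assumption rather than the researchers' pipeline.

```python
# Illustrative sketch only: NOT the study's actual feature set or model.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def photo_features(path):
    """Simple color statistics of the kind the study describes:
    mean hue, saturation, and brightness of a photo (HSV space)."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)  # [mean_hue, mean_sat, mean_val]

# Placeholder training data: one row per user, standing in for aggregated
# photo color stats plus activity metrics (posts per day, faces per photo).
rng = np.random.default_rng(0)
X = rng.normal(size=(166, 5))        # synthetic features for 166 users
y = rng.integers(0, 2, size=166)     # synthetic depressed/healthy labels

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```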


This field of research—mining our interactions with devices like smartphones for diagnostic data—is called digital phenotyping. The basic premise is that our devices are treasure chests of quantifiable medical data, waiting to be unlocked and analyzed. How often are you leaving the house? Your GPS knows. How frequently do you interact with friends? Your text message logs know. Are you moving around during the day or mostly grafted to the couch, supine and immobile and spilling lo mein on yourself? Your accelerometer and your Postmates account both know. And, equipped with a proper understanding of how this granular data might coalesce into a patient profile, your physician might one day know, too.
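To make that concrete, here is a rough sketch of how those scattered signals might be rolled up into a daily behavioral profile. The data schema, proxies, and feature names are hypothetical; real digital-phenotyping systems differ in the details.

```python
# A minimal sketch of collapsing passive phone signals into daily features.
# Field names and proxies are hypothetical, for illustration only.
import math
from dataclasses import dataclass

@dataclass
class DayLog:
    gps_points: list       # [(lat, lon), ...] sampled through the day
    outgoing_texts: int    # count from the messaging log
    steps: int             # from the accelerometer/pedometer

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def daily_features(day: DayLog) -> dict:
    """Collapse raw signals into the coarse features a model might consume."""
    distance = sum(haversine_km(p, q)
                   for p, q in zip(day.gps_points, day.gps_points[1:]))
    return {
        "km_traveled": round(distance, 2),  # proxy for leaving the house
        "texts_sent": day.outgoing_texts,   # proxy for social contact
        "steps": day.steps,                 # proxy for physical activity
    }

print(daily_features(DayLog([(40.73, -73.99), (40.74, -73.98)], 12, 4300)))
```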

"The thing that makes these technologies in part both valuable and problematic is that they can do something humans can't do," says Paul Root Wolpe, professor of bioethics, senior bioethicist for NASA, and director of the Center for Ethics at Emory University. Things like, say, finding hidden red flags tucked in the metadata of your #TBT posts, or predicting that you're on the cusp of a manic episode based on how your fingertips interact with your touchscreen. "If we said, 'We have a technology that can take a picture of your face and can tell with pretty good reliability whether you're sad or angry,' most people's response would be, 'Well, so what? So can I.' It's when they start to do things that we can't do that we begin to get uncomfortable."


While still a science in its infancy, this kind of research has attracted a frenzy of press attention—and gobs of money. Ginger.io, a paid app that uses this kind of "passive tracking" to provide users with tailored mental health support, raised nearly $30 million in its Series A and B funding rounds. Earlier this year, the former head of the National Institute of Mental Health formed a digital phenotyping startup, Mindstrong, where, according to WIRED, "one of the first tests of the concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania." Mindstrong has filed several patents aimed at measuring brain function "from interaction patterns captured passively and continuously from human-computer interfaces found in ubiquitous mobile technology." According to one of the patents, this data would be collected by recording GPS, accelerometer, and gyroscope coordinates; incoming and outgoing phone calls, emails, and text messages; URLs visited; books read; games played… you get the idea. Mindstrong raised $14 million in its Series A funding round this past June.
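Mindstrong hasn't published its feature set, so the snippet below is only a generic illustration of what "keyboard use patterns" can mean in practice: timing statistics computed from key-press timestamps, with an arbitrary pause threshold.

```python
# Generic keystroke-timing features (inter-key latency, pause counts).
# This is NOT Mindstrong's actual, unpublished feature set.
import statistics

def keystroke_features(timestamps_ms):
    """Summarize one typing session given key-press timestamps in ms."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {
        "median_gap_ms": statistics.median(gaps),
        "gap_variability": statistics.pstdev(gaps),
        "long_pauses": sum(g > 2000 for g in gaps),  # 2s cutoff is arbitrary
    }

print(keystroke_features([0, 180, 350, 560, 3100, 3290, 3480]))
```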

Meanwhile, over at Google's Verily, there's Baseline: a large-scale, long-term public health study that will collect vast amounts of data from its 10,000 wearable-equipped participants over the span of four years, encompassing "blood, genome, urine, tears, activity via wearable, heart, sleep, state of mind," a Stanford researcher told WIRED. Researchers at Stanford and Duke will get first dibs on the data, which will then open up to other medical researchers after two years.


The real-world implications of this field, and others like it, are obviously enormous. "Imagine an app you can install on your phone that pings your doctor for a check-up when your behavior changes for the worse, potentially before you even realize there is a problem," says Dr. Christopher Danforth, a co-author of the Instagram study, in its press release. That would be groundbreaking—especially for populations with limited access to care. While you're going about your daily life, your phone could be constantly scanning for hints of problematic behavioral changes, alerting your doctor when things veer off course.
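One simple way such an app could decide when to "ping your doctor" is to compare recent behavior against a person's own historical baseline and alert on a large deviation. The sketch below assumes a weekly count of outgoing texts and an arbitrary z-score threshold; it's a toy version of the idea, not anyone's deployed system.

```python
# Toy change-detection: flag a week whose activity falls well below the
# person's own baseline. The window and threshold are arbitrary choices.
from statistics import mean, pstdev

def should_alert(history, recent, threshold=-2.0):
    """Return True if `recent` sits `threshold` standard deviations
    (or more) below the historical baseline."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return False
    return (recent - mu) / sigma <= threshold

weekly_texts = [52, 47, 58, 49, 55, 51, 60, 54]  # hypothetical baseline weeks
print(should_alert(weekly_texts, recent=12))      # True: a sharp drop
```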

This new field of research raises ethical questions that have been around for far longer than iPhones and social media feeds. Wolpe likens it to a classic hypothetical that physicians raise to ethicists: If a doctor is out at the movies and spots someone with an irregular mole on their neck, should he or she tell them? "There's a whole historical conversation around anonymously informing people of some medical problem that you, for one reason or another, detected," Wolpe says. "So in one sense, this isn't a new question." But once an algorithm can look at your interactions and point to a diagnosis, new ethical questions start multiplying like the heads of a hydra.

For instance: Would you rather be notified of a life-altering diagnosis in a face-to-face conversation with your doctor or via a push notification? "Those are two very different kinds of interactions," Wolpe explains. Then there's the question of circumventing a user to notify their physician. "The question of who should be notified about this is an interesting one," Wolpe says. "If this were a software that noticed moles on people's faces and saw some of them looking suspicious and notified the person, that's one thing. But now we're talking about notifying a third party." What if the app notifies the wrong physician? Circumventing the user to auto-ping someone else feels distinctly artificial and glaringly vulnerable to error. "I think what makes it feel creepy is the fact that no human being is involved, especially in this speculation that it could be used and automatically inform somebody," Wolpe says.

And, of course, there's that ever-recurring Achilles' heel of tech: the question of whether you can capture this kind of data while preserving privacy and keeping it safe. There's a reason 43 percent of the initial participants in Danforth and Reece's study dropped out after refusing to consent to sharing their social media data. Just last year, researchers scraped identifiable personal data from more than 70,000 OkCupid users without their consent and released it online. Genetic testing service 23andMe hoards and sells genetic data the way Google monetizes your search history for advertisers. People have reason to be skeptical of that kind of vulnerability or exploitation—even more so when it comes to sensitive, stigma-riddled issues like mental health. One can only imagine how shitty it would be for an Ashley Madison–level hack to make this kind of data public.

Tech like this is incredibly promising. It could fundamentally shift the healthcare system, make it easier for people to access the care they need, and contribute invaluable data to mental health research. It's also rife with red flags: problems of privacy, of the implications of false positives and false negatives, and of legal liability. The challenge is figuring out how much of that risk we're willing to accept in exchange for the promise.

Follow Gray Chapman on Twitter.