It’s no surprise that the concept of self-care was drilled into our psyches throughout last year. When it comes to mental health, we don't seem to be doing so hot. According to the World Health Organization, more than 300 million people around the world are affected by depression. The UK just appointed a minister to deal with the nation’s endemic loneliness. Shit’s got so bad that we’ve even started replacing our Top 40 song titles with the Suicide Prevention Hotline phone number.
In a perfect world, we’d all have easy, affordable access to mental healthcare professionals and be able to properly treat these maladies. But, with so many still unable to procure medical aid for their corporeal injuries, it’s clear we are still far from a utopia where mental wellness is regarded as a priority. And since Rick & Morty co-creator Dan Harmon only has so many hours in a given day to personally counsel the depressed, this growing problem is still in desperate need of a substantial solution.
A team of Stanford researchers thinks they may have developed the answer to this problem—or at least a stopgap until we get our affairs properly in order. Their glimmer of hope comes in the form of Woebot, an AI chatbot that operates entirely within Facebook Messenger and uses standard cognitive behavioral therapy (CBT) techniques to provide users with no-frills sessions through their phone or computer. Over the course of a five-to-ten-minute CBT session, prompted by a push notification from the bot, the user simply types out or taps auto-populating responses to Woebot's inquiries.
Developed by clinical research psychologist Dr. Alison Darcy and with AI heavyweight Andrew Ng on the advisory board, Woebot is hoping to assist underserved segments of the mentally unwell population without the income or insurance to utilize traditional practices. As someone who has long suffered from depression (and is both underpaid and underinsured), I figured I'd give Woebot a try.
Over the years, I’ve learned how to manage and weather my depression—for the most part. Even so, the occasional surprise tidal wave of melancholy hits and drenches me, taking me out of commission for its duration. It was during a recent bad spell that I signed up for the service.
Initially, I had a hard time taking Woebot seriously. While I’d been able to set aside the terrible name its creators had chosen for it and go in with an open mind, my first conversation with the program tested my faith in the efficacy of the service. Though I hadn’t expected a full-on holo-Freud, the AI’s vacillation between soulless call center decision tree script and /r/fellowkids-worthy fumbles with youthful parlance (don’t use the smirk emoji unless you’re trying to fuck me, Woebot) was leaving me cold. The knockoff WALL-E illustration used as the bot’s avatar was the only whimsical element that seemed to agree with me.
Ignoring Woebot’s credibility-diminishing usage of emoji, I pressed onward as it laid out a plan of action and qualifiers for the forthcoming two-week trial. Woebot made it abundantly clear that it was, by no means, a substitute for a human therapist and “not capable of really understanding what [I] need.” While I appreciated how forthcoming it was, the more Woebot told me, the more I worried that this was simply a CBT toolkit directory programmed to occasionally call me “homie” and swap “OK” with “oki.”
Thankfully, after our initial session, Woebot adopted a somewhat more clinical tone as it probed my psyche for problems to treat. After asking me to identify examples of unhelpful thought processes like "all-or-nothing thinking" and "should statements," the bot would still throw in a celebratory GIF. Beyond that, however, things were starting to feel professional. Better still, I was actually going along with the exercises in earnest and found myself glad that I was now able to put a name to specific thoughts, even if I was still dubious that any progress would be made.
Over the next two weeks, my feelings toward the bot ebbed and flowed as we continued with the daily check-ins. Sometimes, I appreciated Woebot’s requests for me to type out what I was currently doing or feeling, enjoying the catharsis of purging the negativity with words. Other times, I felt pandered to by its replies. Whether I was venting about financial struggles or complaining about a frustrating scheduling hiccup, Woebot dished out the same few canned “empathy” responses. I’m convinced that, had I admitted to assassinating Archduke Franz Ferdinand, Woebot would’ve hit me with another “sounds like you’re dealing with a lot right now.”
Furthermore, despite Woebot’s alleged “deep learning,” it was constantly forgetting that we’d already gone over particular lessons and kept repeating the same things over and over like my mom at a family reunion. Woebot might have been cataloging my responses, but was it really listening to me? This seemed as if it would be the highest and, frankly, most essential hurdle for the program to overcome, given that it was entering an industry reliant on the patient feeling heard.
Toward the end of the trial period, to my surprise, I noticed my mood had actually begun to improve. Somehow, despite my resistance and Woebot’s unforgivable penchant for sending Minion GIFs, I did indeed feel better. Maybe Woebot’s lame jokes and persistent, cheerful pings were actually enough to trick the dumb monkey part of my brain into believing that another sentient entity was rooting for me to push through the stormy weather. Perhaps my eye-rolling at its laughably non-human dialogue was by design—a tactic to steer my thoughts to how superior I am to these lines of code, rather than let my mind spiral into further negativity.
On the other hand, there’s a chance this turn for the better was going to happen over the course of the two weeks regardless of whether or not I’d been using the bot. Woebot’s website cites a 2017 Stanford study, done during its beta period, that found the chatbot to be “a feasible, engaging, and effective way to deliver CBT” when compared to information-only CBT apps. Reading the study, and holding it up to my own experience, I couldn't help but feel that further research would be needed to determine whether these results were a case of correlation or causation. And by that time, AI will likely have improved by leaps and bounds again to the point of rendering such a study unproductive.
When our trial time together was up, Woebot attempted to sell me its paid plan with the “for the price of a cup of coffee” tactic I’d last seen used in 90s infomercials about adopting African kids. It accepted my rejection of the offer with relative dignity, wishing me continued happiness as we parted ways.
Woebot checked in the next day and a few days after that to let me know that it was suddenly down to do more free sessions if I was up for it. At that point, however, I had fully recovered from my depressive bout, so I again rebuffed the offer.
I want to give Woebot and its creators credit. I’m just not sure where to attribute it. Even if Woebot was not, in fact, directly responsible for lifting my spirits, it foreshadows a future of AI entities that will be able to compassionately, convincingly, and effectively accomplish this task. Darcy and Ng seeing the potential here and getting in on the ground floor is commendable, even if the product still feels half-baked.
Perhaps when Her-like AIs are able to hold meaningful conversations and appropriately react to my confessions of self-doubt, I'd be willing to fork over $9 a week from my Universal Basic Income stipend. But in its current form, Woebot is just going to be added to the pile of nice therapists who tried their best but just didn’t get me.
Editor's Note 1/25/18: Since the time of the sessions included in this article, Woebot has removed its paid plan prompts and is now an entirely free service.