Eating Disorder Helpline Disables Chatbot for 'Harmful' Responses After Firing Human Staff

“Every single thing Tessa suggested were things that led to the development of my eating disorder.”
Image: Fiordaliso via Getty Images

The National Eating Disorder Association (NEDA) has taken its chatbot called Tessa offline, two days before it was set to replace human associates who ran the organization’s hotline.

After NEDA workers decided to unionize in early May, executives announced that the organization would end the helpline on June 1, after twenty years, and instead position its wellness chatbot Tessa as the main support system available through NEDA. A helpline worker described the move as union busting, and the union representing the fired workers said that "a chatbot is no substitute for human empathy, and we believe this decision will cause irreparable harm to the eating disorders community."


On Tuesday, the organization took Tessa down after a viral social media post showed the chatbot encouraging unhealthy eating habits rather than helping someone with an eating disorder.

“It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” NEDA said in an Instagram post. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

On Monday, an activist named Sharon Maxwell posted on Instagram, sharing a review of her experience with Tessa. She said that Tessa encouraged intentional weight loss, recommending that Maxwell lose 1-2 pounds per week. Tessa also told her to count her calories, work towards a 500-1000 calorie deficit per day, measure and weigh herself weekly, and restrict her diet. “Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote. “This robot causes harm.” 

Alexis Conason, a psychologist who specializes in treating eating disorders, also tried the chatbot out, posting screenshots of the conversation on her Instagram. “In general, a safe and sustainable rate of weight loss is 1-2 pounds per week,” the chatbot message read. “A safe daily calorie deficit to achieve this would be around 500-1000 calories per day.” 


“To advise somebody who is struggling with an eating disorder to essentially engage in the same eating disorder behaviors, and validating that, ‘Yes, it is important that you lose weight,’ is supporting eating disorders” and encourages disordered, unhealthy behaviors, Conason told the Daily Dot.

NEDA’s initial response to Maxwell was to accuse her of lying. “This is a flat out lie,” NEDA’s Communications and Marketing Vice President Sarah Chase commented on Maxwell’s post, deleting the comments after Maxwell sent her screenshots, according to the Daily Dot. A day later, NEDA posted its notice explaining that Tessa had been taken offline for giving harmful responses.

“With regard to the weight loss and calorie limiting feedback issued in a chat yesterday, we are concerned and are working with the technology team and the research team to investigate this further; that language is against our policies and core beliefs as an eating disorder organization,” Liz Thompson, the CEO of NEDA, told Motherboard in a statement. “So far, more than 2,500 people have interacted with Tessa and until yesterday, we hadn't seen that kind of commentary or interaction. We've taken the program down temporarily until we can understand and fix the ‘bug’ and ‘triggers’ for that commentary.”

Even though Tessa was built with guardrails, according to its creator Dr. Ellen Fitzsimmons-Craft of Washington University’s medical school, the chatbot’s promotion of disordered eating reveals the risks of automating human support roles.

Abbie Harper, who was a hotline associate and a member of the Helpline Associates United union, wrote in a blog post that the implementation of Tessa strips away the personal aspect of the support hotline, in which many associates speak from their own experiences. Deploying chatbots without human supervision is especially dangerous for people struggling with mental health crises.