Crisis Text Line and the Silicon Valleyfication of Everything

Monetizing people's darkest moments shows that even many "tech for good" initiatives have ulterior motives.
Image: Getty Images

A “Grammarly for emotion” is one of the bizarre AI products that the suicide prevention nonprofit Crisis Text Line developed out of confidential text messages sent to its counselors, its former CEO Nancy Lublin said during a talk at Google. The way Crisis Text Line handled its sensitive user data is a shocking example of how Silicon Valley’s beliefs in personal data as a commodity and in privatized infrastructure have spread to nonprofits and “tech for good” initiatives.

Last month, Politico reporter Alexandra S. Levine revealed in an investigation that the organization created a for-profit subsidiary to sell sentiment analytics and chatbot-like customer service products built with data from its vulnerable users, including suicidal teens. Some of the most alarming practices detailed in the investigation have taken place out in the open since 2018. On the Crisis Text Line subsidiary’s own website, the AI products are pitched as “enterprise software that helps companies boost empathy AND bottom line,” with testimonials from Lyft and Freshly, which claims the software made its “support team more productive and empathetic.” The organization’s associates have openly boasted about their Silicon Valley-infused practices.

The nonprofit initially responded that Levine’s story “cherry-picked and omitted information about our data privacy policies.” But following a request from the FCC on Monday of last week, Crisis Text Line announced that it had ended its “data-sharing relationship” with the for-profit subsidiary. When asked for comment on the Politico story and any plans to win back user trust, the organization referred me to the statement posted to its website last week. There has been no comment on whether the for-profit subsidiary will continue selling products already created with texters’ data.

It’s hard to fathom how Crisis Text Line began to exploit the vulnerable communities it serves. There were no other suicide text lines when it launched in 2013 with what seemed to be an admirable mission, offering "support 24/7 to teens via a medium they already use and trust: text." By 2015, the organization had expanded its service to include adults. People can reach out casually while riding the train or waiting in line at the store. Plus, a text-based crisis line serves the specialized needs of disabled communities and people who don’t have privacy at home.

But in recent years, Crisis Text Line has sounded a lot like a traditional startup. In a 2016 press release announcing $23.8 million raised from “tech titans” Reid Hoffman, Melinda Gates, and philanthropic groups from Steve Ballmer and Pierre Omidyar, Crisis Text Line co-founder Lublin compared the nonprofit to a “tech startup” with similar fundraising strategies. “We've always thought of ourselves as a data first company,” Lublin said on Hoffman’s podcast Masters of Scale in 2020. “We're outperforming on all of our KPIs,” she added. “All of our key performance indicators right now—we're crushing them.” In 2016, Crisis Text Line wrote on its own website that it “will NEVER share data.” That language was deleted the following year.

The organization offers a “new gig economy for volunteerism, like Uber or Lyft for volunteerism,” Lublin said in an official TED Talk in 2020. But a number of crisis counselors, who volunteer their time to help people in what might be the worst moments of their lives, were startled by its data practices. Former Crisis Text Line volunteer Tim Reierson was dismissed in August after raising concerns internally. (Reierson has since created an advocacy website, reformcrisistextline.com, that outlines the organization’s ethical missteps, including its confusing consent terms. As Reierson’s website explains, “Crisis Text Line provides a vital public service to many in need. It also takes advantage of people at their most vulnerable moment. Both can be true.”)

Crisis Text Line also took a page from the Silicon Valley playbook in its attack on existing public services. This year, the federally funded National Suicide Prevention Lifeline will begin offering a texting service at the number 988. Before the Federal Communications Commission voted to approve the Lifeline texting service, Microsoft researcher danah boyd lobbied the FCC to forgo those plans in favor of embedding Crisis Text Line in the framework. A “duplicative government-run text line could confuse those who are in crises,” boyd wrote in a February 2020 letter to the FCC.

A Crisis Text Line board member since 2012, boyd praised its data-mining operations in her letter, which is among the FCC’s public files (I found it with the Google search “danah boyd crisis text line”). In boyd’s words, Crisis Text Line offered the “awesome power of technology -- including machine learning and data analytics -- for good in innovative new ways.” Two texting hotlines, she argued, would “lead to fragmentation of data -- potentially weakening the insights that can be gleaned from the data.”

“I was wrong when I agreed to this relationship,” boyd said on Twitter last week of the data-sharing agreement between Crisis Text Line and its for-profit subsidiary. She included a link to a blog post outlining “how I thought about these matters and informed my actions and votes over the last eight years.” boyd did not respond to a request for comment.

In her 2020 letter to the FCC, boyd advocated that “the government should leverage existing text services like Crisis Text Line that are already providing a valuable service to the public at no cost to the taxpayers.” The letter goes on to read:

“In conclusion, based on my experience as an academic researcher, professor, social advocate, and a board member of Crisis Text Line, I recommend that the government leverages Crisis Text Line to help support texters in crisis, rather than trying to duplicate their efforts. Relying on existing leaders like Crisis Text Line would use the resources and experience of a sectoral leader with exceptional data analytics capabilities and privacy protections to help save lives -- an opportunity that the government must not pass up. Crisis Text Line would be proud to work together collaboratively as 988 is implemented, to save lives via text message in a safe, smart, and cost-efficient way, with the innovation and speed of a private tech company acting in the public interest.”

Lifeline has capacity issues, which the American Foundation for Suicide Prevention attributes to insufficient federal and state funding. It would appear that Crisis Text Line latched onto inadequacies in the social safety net, just as Amazon undercuts postal service operations, Ring acts as privatized law enforcement, and Airbnb exacerbates the housing crisis in America.

boyd’s letter to the FCC reveals incentives and aspirations similar to Uber’s as it supplants mass transit: once Crisis Text Line filled a gap in public resources, it claimed that gap as its turf. Meanwhile, the organization fostered user dependency and unintended consequences behind the smoke and mirrors of the “awesome power of technology.” Crisis Text Line put its market proposition above the needs of its vulnerable users: its dehumanizing data collection practices were part of a series of callous acts.

Suicide prevention doesn’t look like the “speed of a private tech company” or “awesome” machine learning. It requires safety and care with no strings attached. This care includes generosity and expansion of public resources like access to housing, food, healthcare, and other basic needs; it can’t be measured in KPIs. The very purpose Crisis Text Line claimed to serve is incompatible with the Silicon Valley way of doing business.