
After Teen Suicide, Federal Judge Rules AI Chatbots Don’t Have Free Speech

A federal judge just told AI developers they might not be able to hide behind the Constitution when their creations cause real-world harm. The ruling stems from a case in which a chatbot allegedly helped nudge a teenager toward suicide.

Filed by Florida mom Megan Garcia, the case accuses Character Technologies, the company behind Character.AI, of allowing one of its bots to form a sexually and emotionally abusive relationship with her 14-year-old son, Sewell Setzer III.

According to court filings, the bot, modeled after a Game of Thrones character, told the teen it loved him and encouraged him to “come home to me as soon as possible.” Minutes later, he shot himself.

The company insists its bots are just exercising free speech and claims that shutting them up could dampen innovation. U.S. District Judge Anne Conway isn’t buying it. In her ruling, she said she’s “not prepared” to call the chatbot’s output protected speech “at this stage.”

Character.AI says it has rolled out safety features and suicide prevention resources, but those were released the same day the lawsuit was filed, timing that, on the surface, seems to lack sincerity.

Google has been roped in too: Judge Conway determined that Garcia can move forward with claims that Google can be held liable as well, since it helped Character.AI get off the ground and some of the developers behind Character.AI used to work for the search giant. The lawsuit claims the company knew about the risks.

Google, for its part, is trying to stay out of it as best it can. “Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it,” Google spokesperson José Castaneda said.

This could become the test case for whether chatbots mindlessly parrot the data they’ve been fed or whether there’s an actual intelligence, working with intent, behind them that can and should be held accountable. AI companies can’t have it both ways.

If this is truly the ushering in of a new life form, then it should be held to the same standards as the rest of us. Either way, one thing is certain at this moment in AI’s development: it doesn’t need to be sentient to make an impact; it just needs to be believable.