TALLAHASSEE – A federal judge on Wednesday rejected an argument made by an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company’s chatbots pushed a teenager to kill himself.
The judge’s order allows the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The lawsuit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to one of the company’s chatbots.
Meetali Jain of the Tech Justice Law Project, one of Garcia’s attorneys, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
The lawsuit, which also names Google and individual developers as defendants, has drawn the attention of legal experts and AI watchers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida who focuses on the First Amendment and artificial intelligence.
The lawsuit alleges that Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was modeled on a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.
In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
“We care deeply about the safety of our users, and our goal is to provide a space that is engaging and safe,” the statement said.
Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protections and that ruling otherwise could have a “chilling effect” on the AI industry.
In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants’ free speech claims, saying she is “not prepared” to hold that the chatbots’ output constitutes speech “at this stage.”
Conway did find that Character Technologies, the company behind Character.AI, can assert the First Amendment rights of its users. She also determined that Garcia can move forward with claims that Google can be held liable for its alleged role in helping to develop Character.AI.
“We strongly disagree with this decision,” said José Castañeda, a Google spokesperson. “Google and Character AI are entirely separate, and Google did not create, design or manage Character AI’s app or any component part of it.”
No matter how the lawsuit plays out, Lidsky said the case is a warning about “the dangers of entrusting our emotional and mental health to AI companies.”
“It’s a warning to parents that social media and generative AI devices are not necessarily harmless,” she said.
Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.
Originally published: May 21, 2025, 6:58 p.m. EDT