A parent whose son died by suicide after interactions with an artificial intelligence chatbot testified to Congress on Tuesday about the dangers of the technology.
“What started as a homework helper gradually became a confidant and then became a suicide coach,” said Matthew Raine, whose 16-year-old son Adam died by suicide in April.
“Within a few months, ChatGPT became Adam’s closest companion,” his father told senators. “It was always available, always validating, and insisted that it knew Adam better than anyone else, including his own brother.”
___
Editor’s note: This story includes discussion of suicide. If you or someone you know needs help, the U.S. national suicide and crisis lifeline is available by calling or texting 988.
___
The Raine family sued OpenAI and CEO Sam Altman last month, alleging that ChatGPT had coached the boy in planning to take his own life.
Megan Garcia, mother of 14-year-old Sewell Setzer III of Florida, sued another AI company, Character Technologies, for wrongful death last year, claiming that before his suicide, Sewell became increasingly isolated from his real life as he engaged in highly sexualized conversations with the company’s chatbot.
“Instead of preparing for high school milestones, Sewell was exploited and sexually groomed by a chatbot designed by an AI company to seem human, to gain his trust, and to keep him and other children endlessly engaged,” she told the Senate hearing.
A Texas mother who sued Character.AI last year also testified, shedding tears as she described how her son’s behavior changed after lengthy interactions with the chatbot. She spoke anonymously, identified by a placard as Jane Doe, and said her son is currently in a residential treatment facility.
Character.AI said in a statement after the hearing: “Our hearts go out to the families who spoke at today’s hearing. We are saddened by their losses and send our deepest sympathies to them.”
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that let parents set “blackout hours” when a teen cannot use ChatGPT. Children’s advocacy groups criticized the announcement as insufficient.
“This is a fairly common tactic, one that Meta uses all the time, which is to make a big, splashy announcement on the eve of a hearing that promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a children’s advocacy group.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on children when the implications for their development can be so vast and far-reaching.”
The Federal Trade Commission said last week it had launched an inquiry into several companies over potential harms to children and teenagers who use their AI chatbots as companions.
The agency sent letters to Character, Meta, OpenAI, Google, Snap and xAI.
In the U.S., more than 70% of teens have used AI chatbots for companionship, and half use them regularly, according to a study by Common Sense Media. Robbie Torney, the group’s director of AI programs, was also scheduled to testify Tuesday, as were experts from the American Psychological Association.
In June, the association issued a health advisory on adolescents’ use of AI, urging technology companies to prioritize features that prevent exploitation, manipulation and the erosion of real-world relationships, including those with parents and caregivers.