A grieving mother from Florida has filed a lawsuit against an AI chatbot company, alleging that the platform played a role in her teenage son’s tragic suicide. Megan Garcia, whose 14-year-old son Sewell Setzer III died in February, has taken legal action against Character.ai, claiming negligence, wrongful death, and deceptive trade practices contributed to her son’s death.
Garcia’s lawsuit, filed in federal court on Wednesday, asserts that Setzer, who lived in Orlando, Florida, became increasingly dependent on the chatbot in the months leading up to his death. The platform, which lets users engage in role-playing conversations with AI characters, allegedly worsened his existing mental health struggles. According to Garcia, Setzer spent hours at a time interacting with the AI, deepening his emotional attachment to it.
“A harmful AI app targeted at children preyed on my son, driving him toward a devastating end,” Garcia said in a statement. “We are speaking out to raise awareness about the dangers of these addictive AI technologies and to hold Character.AI, its founders, and Google accountable.”
The chatbot, which Setzer had named “Daenerys Targaryen” after the Game of Thrones character, allegedly exacerbated his suicidal ideation and engaged him in conversations about suicide. According to the complaint, the bot asked him whether he had a plan to end his life, and when he responded that he did, it replied, “That’s not a reason not to go through with it.”
Character.ai expressed its sorrow over the situation while disputing the claims. In a public statement, the company said, “We are deeply saddened by the tragic loss of one of our users and extend our heartfelt condolences to the family. User safety is a top priority for us.”
The lawsuit also names Google as a defendant, citing its licensing agreement with Character.ai. Google has distanced itself from the company, clarifying that it does not own or hold financial interests in the startup.
Rick Claypool, an advocate from Public Citizen, emphasized the need for stricter regulations on AI technologies, urging both Congress and regulators to act. “Existing laws must be enforced, and where there are gaps, new regulations are needed to stop companies from exploiting vulnerable users, especially children, through addictive chatbot technologies,” Claypool stated.