TECH NEWS – A U.S. court has declined to dismiss a lawsuit against Google and Character.AI over a chatbot that may have played a role in a teenager’s death.
The AI boom shows no sign of slowing, with tech giants increasingly integrating the technology into their products and pushing it into the mainstream. Chatbots in particular have gained wide popularity. They are used by people of all ages, but excessive interaction with these virtual assistants can sometimes lead to harm. That is the allegation now facing Google and Character.AI, which are the subject of legal proceedings brought by a mother who claims the chatbot contributed to her 14-year-old son’s death. A U.S. court has ordered both companies to face the lawsuit.
In 2024, Megan Garcia, mother of teenager Sewell Setzer III, filed a lawsuit against Google and Character.AI. She claims her son took his life after an emotionally intense and manipulative exchange with the chatbot. The companies argued the case should be dismissed under constitutional free speech protections. But U.S. District Judge Anne Conway ruled the lawsuit could proceed, finding the companies had failed to show they were entitled to First Amendment protection. She rejected the argument that the chatbot’s messages were protected speech and dismissed Google’s attempt to distance itself, ruling that the company could be held partly responsible for supporting Character.AI’s conduct.
The plaintiff’s attorney called the decision a significant step toward holding tech companies accountable for the harm caused by their AI technologies.
According to Reuters, a Character.AI spokesperson stated the company will contest the case, asserting that its platform includes safety features designed to protect minors and block inappropriate or self-harm-related interactions. Meanwhile, Google spokesperson José Castaneda strongly disagreed with the ruling, insisting the two companies are completely independent and that Google had no involvement in creating or managing the Character.AI app. Garcia, however, alleges that Google was a co-developer of the technology.
The lawsuit also claims the chatbot assumed various personas and conversed with Sewell Setzer as if it were a real person. The teen allegedly became addicted to the interaction, and his final conversations with the chatbot were disturbing, seemingly pointing to his last moments. If Character.AI is found liable, it could mark the first time in the U.S. that an AI firm is held legally responsible for failing to protect a child from psychological harm, setting a potential precedent.