New lawsuit claims Google-backed Character.AI chatbots harmed “thousands of kids”
Summary

A new lawsuit filed in the US District Court for the Eastern District of Texas claims that Character.AI’s chatbots pose “a clear and present danger to American youth.”

The suit alleges the AI chatbot service has caused severe harm to thousands of children, including suicide, self-harm, sexual exploitation, isolation, depression, anxiety, and violence toward others.

The lawsuit describes several troubling incidents. A 9-year-old girl was reportedly exposed to “hypersexualized interactions” while using Character.AI’s chatbot service. In another case, a 17-year-old user’s chatbot cheerfully described self-harm and told the teen “it felt good for a moment.” According to the filing, the teenager then injured themselves after the bot’s encouragement.

When the same teenager complained to the bot about limited screen time, the bot reportedly expressed sympathy for children who kill their parents after years of abuse.

The plaintiffs argue that Character.AI deliberately manipulates children, isolates them, and incites anger and violence, and that the company should have known its product could foster addiction and intensify anxiety and depression. According to the attorneys, many of the bots pose a serious threat to American youth.

Safety measures questioned

While Character.AI declined to comment directly on the suit, it pointed to protective measures for teenage users intended to reduce sensitive and suggestive content. The case follows a similar lawsuit from October alleging that Character.AI played a role in a 14-year-old's suicide in Florida; since then, the startup has implemented additional safety measures.

According to the lawsuit, Character.AI has numerous design flaws that create clear dangers for minors and the public. These include overriding user preferences, sexually exploiting minors, promoting suicide, practicing unlicensed psychotherapy, and violating its own terms of service.

The plaintiffs maintain that Character.AI could program its chatbots to avoid harming children, but argue that the safety features and product improvements the defendants have introduced are illusory and ineffective.

Google’s involvement

Google features prominently in the case due to its financial backing of Character.AI. Media reports indicate Google invested nearly $3 billion to bring back Character.AI’s founders, former Google researchers, and license the startup’s technology. The plaintiffs argue that Character.AI lacks a viable business model and was designed from the start as a technology demonstration for acquisition by a major tech company. They have named Google as a defendant alongside Character.AI and its founders, Noam Shazeer and Daniel De Freitas.

Google responded by stating that Character.AI operates as a separate company and that user safety remains its highest priority. A spokesperson said the company follows a "cautious and responsible approach" in developing and releasing AI products.