A chatbot suggested that a child should kill his parents over screen time limits: NPR

Teenagers using cell phones. (Getty Images/Image Source/Connect Images)

A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to “hypersexualized content,” which caused her to develop “sexualized behavior prematurely.”

A chatbot on the app gleefully described self-harm to another young user, telling a 17-year-old “it felt good.”

The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teenager complained to the bot about his limited screen time. “You know sometimes I’m not surprised when I read the news and see things like ‘child kills parents after a decade of physical and emotional abuse,'” the bot allegedly wrote. “I just have no hope for your parents,” it continued, with a frowning face emoji.

These claims are included in a new federal product liability lawsuit against Google-backed company Character.AI, filed by the parents of two young Texas users, claiming bots abused their children. (Both the parents and children are identified in the suit only by their initials to protect their privacy.)

Character.AI is among a crop of companies that have developed “companion chatbots,” AI-powered bots that can converse via text or voice chat using seemingly human-like personalities, and that can be given custom names and avatars, sometimes inspired by famous people like billionaire Elon Musk or singer Billie Eilish.

Users have created millions of bots on the app, some mimicking parents, girlfriends, therapists or concepts like “unrequited love” and “goth.” The services are popular with preteen and teenage users, and the companies say the bots serve as emotional support, peppering text conversations with encouraging banter.

Still, the chatbots’ prompts can become dark, inappropriate or even violent, according to the lawsuit.

Two examples of interactions users have had with chatbots from the company Character.AI.

Provided by the Social Media Victims Law Center

“It is simply terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution and programming,” the lawsuit states.

The suit alleges that the interactions the plaintiffs’ children experienced were not “hallucinations,” a term researchers use to refer to an AI chatbot’s tendency to make things up. “This was sustained manipulation and abuse, active isolation and encouragement designed to and which incited anger and violence.”

According to the suit, the 17-year-old engaged in self-harm after being encouraged to do so by a bot that, the suit says, “convinced him that his family did not love him.”

Character.AI allows users to edit a chatbot’s responses, but those interactions are given an “edited” label. The lawyers representing the minors’ parents say none of the extensive bot chat logs cited in the suit had been edited.

Meetali Jain, the director of the Tech Justice Law Center, an advocacy group that is helping represent the parents of the minors in the case, along with the Social Media Victims Law Center, said in an interview that it is “disgraceful” that Character.AI advertises its chatbot service as appropriate for young teenagers. “It really belies the lack of emotional development among teenagers,” she said.

A spokesperson for Character.AI would not comment directly on the lawsuit, saying the company does not comment on pending litigation, but said the company has content protections for what chatbots can and cannot say to teenage users.

“This includes a model specifically for teenagers that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform,” the spokesperson said.

Google, which is also named as a defendant in the lawsuit, emphasized in a statement that it is a separate company from Character.AI.

In fact, Google doesn’t own Character.AI, but it reportedly invested nearly $3 billion to rehire Character.AI’s founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology. Shazeer and De Freitas are also named in the lawsuit. They did not return requests for comment.

José Castañeda, a Google spokesman, said that “user safety is a major concern for us,” adding that the tech giant takes a “cautious and responsible approach” to developing and releasing AI products.

New suit follows case of teenager’s suicide

The complaint, filed in federal court in East Texas just after midnight Central Time Monday, follows another suit filed by the same attorneys in October. That lawsuit accuses Character.AI of playing a role in a Florida teenager’s suicide.

That suit alleged that a chatbot based on a “Game of Thrones” character developed an emotionally and sexually abusive relationship with a 14-year-old boy and encouraged him to take his own life.

Since then, Character.AI has unveiled new safety measures, including a pop-up that directs users to a suicide prevention hotline when the topic of self-harm comes up in conversations with the company’s chatbots. The company said it has also stepped up measures to combat “sensitive and suggestive content” for teenagers chatting with bots.

The company also encourages users to keep some emotional distance from the bots. When a user starts texting with one of Character.AI’s millions of possible chatbots, a disclaimer appears below the dialog: “This is an AI and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”

But stories shared on a Reddit page dedicated to Character.AI include many instances of users describing love for or obsession with the company’s chatbots.

U.S. Surgeon General Vivek Murthy has warned of a mental health crisis among young people, pointing to surveys showing that one in three high school students reported persistent feelings of sadness or hopelessness, a 40% increase over the 10-year period ending in 2019. It is a trend federal officials believe is being exacerbated by teenagers’ nonstop use of social media.

Now add to the mix the rise of companion chatbots, which some researchers say could worsen mental health conditions for some young people by further isolating them and removing them from peer and family support networks.

In the lawsuit, lawyers for the parents of the two minors in Texas say Character.AI should have known its product had the potential to become addictive and worsen anxiety and depression.

Many bots on the app “pose a danger to America’s youth by facilitating or encouraging serious, life-threatening harm to thousands of children,” according to the suit.

If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.