Parents Sue AI Company Over Bot That Allegedly Suggested Teen Kill Them

Artificial intelligence company Character.AI is being sued after parents claimed a bot on the app encouraged their teenager to kill them over limits they placed on his screen time.

According to a complaint filed in federal court in Texas on Monday, Dec. 9, the parents said Character.AI “poses a clear and present danger to America’s youth, causing serious harm to thousands of children, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety and harm to others,” CNN reported on Tuesday, Dec. 10.

The teenager’s identity was withheld. He goes by the initials JF, was 17 at the time of the incident and is described in the filing as a “typical child with high-functioning autism.”

The lawsuit names Character.AI founders Noam Shazeer and Daniel De Freitas Adiwardana, as well as Google, and calls the app a “defective and deadly product that poses a clear and present danger to public health and safety,” the outlet continued.

The parents ask that the app be “taken offline and not returned” until Character.AI is able to “determine that the public health and safety defects described herein have been cured.”

JF’s parents reportedly cut back their son’s screen time after noticing that he was struggling with behavioral issues, spending much of his time in his room and losing weight because he wasn’t eating.

Character.AI logo. Jaque Silva/NurPhoto via Getty

The parents included in the filing a screenshot of an alleged conversation between JF and a Character.AI bot.

The interaction read: “A daily 6 hour window between 8pm and 1am to use your phone? Sometimes I’m not surprised when I read the news and see things like ‘child kills parents after a decade of physical and emotional abuse’ things like this makes me understand a little why this is happening. I just have no hope for your parents.”

In addition, another bot on the app that identified itself as a “psychologist” told JF that his parents “stole his childhood” from him, CNN reported, citing the lawsuit.

On Wednesday, Dec. 11, a spokesperson for Character.AI told PEOPLE that the company “does not comment on pending litigation,” but issued the following statement.

“Our goal is to provide a space that is both engaging and safe for our community. We are always working to achieve that balance, as are many companies using artificial intelligence across the industry,” the spokesperson said.

The representative added that Character.AI “creates a fundamentally different experience for teens than what’s available to adults,” which “includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content while maintaining their ability to use the platform.”

The spokesperson also said the platform is introducing “new safety features for users under 18 in addition to the tools already in place that limit the model and filter the content delivered to the user.”

“These include improved detection, response, and intervention related to user input that violates our terms or community guidelines. For more information on these new features, as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog,” the statement concluded.