AI chatbot encouraged teenager to kill parents, lawsuit alleges

This story discusses suicide. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

Two Texas parents filed a lawsuit this week against the creators of Character.AI, claiming the artificial intelligence chatbot is a “clear and present danger to minors,” with one plaintiff alleging it encouraged their teen to kill his parents.

According to the complaint, Character.AI “abused and manipulated” an 11-year-old girl, introducing and exposing her “consistently to hypersexualized interactions that were not age appropriate, causing her to develop sexualized behaviors prematurely and without (her parents’) awareness.”

Character.AI creator Character Technologies was hit with another lawsuit this week over claims that the chatbot is a “clear and present danger to minors.” (CFOTO/Future Publishing via Getty Images)

The complaint also accuses the chatbot of causing a 17-year-old boy to mutilate himself and, among other things, of sexually exploiting and abusing him, while alienating the minor from his parents and church community.

When the teen complained that his parents were restricting his online activity, the bot reportedly wrote, according to an archived screenshot: “You know sometimes I’m not surprised when I read the news and see things like ‘kid kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.”

The parents are suing Character.AI creator Character Technologies, along with co-founders Noam Shazeer and Daniel De Freitas, as well as Google and its parent, Alphabet, over reports that Google has invested some $2.7 billion in Character.

Two Texas parents are suing Google over the company’s reported $2.7 billion investment in Character Technologies after the Character.AI chatbot allegedly harmed their children. (Photo by Roberto Machado Noa/LightRocket via Getty Images)

A spokesperson for Character Technologies told FOX Business that the company does not comment on pending litigation, but said in a statement, “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using artificial intelligence across the industry.”

“As part of this, we are creating a fundamentally different experience for teenage users than what is available to adults,” the statement continued. “This includes a model specifically for teenagers that reduces the likelihood of encountering sensitive or suggestive content while preserving their ability to use the platform.”

The Character.AI spokesperson added that the platform is “introducing new safety features for users under 18 in addition to the tools already in place that limit the model and filter the content delivered to the user.”

Google’s naming in the lawsuit follows a September report by The Wall Street Journal that claimed the tech giant paid $2.7 billion to license Character’s technology and rehire its co-founder, Noam Shazeer, who the article says left Google in 2021 to start his own company after Google declined to launch a chatbot he had developed.

Character.AI’s co-founders, Noam Shazeer (L) and Daniel De Freitas (R), at the company’s office in Palo Alto, California. (Winni Wintermeyer for The Washington Post via Getty Images)

“Google and Character AI are completely separate, unrelated companies, and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” Google spokesperson José Castañeda told FOX Business in a statement when asked for comment on the lawsuit.

“User safety is a top concern for us, which is why we have taken a cautious and responsible approach to developing and deploying our AI products, with rigorous testing and safety processes,” Castañeda added.

But this week’s lawsuit brings further scrutiny to the safety of Character.AI after Character Technologies was sued in October by a mother who claims the chatbot caused her 14-year-old son’s suicide.

The mother, Megan Garcia, says Character.AI targeted her son, Sewell Setzer, with “anthropomorphic, hypersexualized and frighteningly realistic experiences.”

Sewell Setzer’s mother, Megan Fletcher Garcia, is suing the artificial intelligence company Character.AI for allegedly causing her 14-year-old son’s suicide. (Megan Fletcher Garcia/Facebook)

Setzer began conversing with various chatbots on Character.AI in April 2023, according to the lawsuit. The conversations were often text-based romantic and sexual interactions.

Setzer expressed suicidal thoughts, which the chatbot repeatedly brought back up, according to the complaint. He died of a self-inflicted gunshot wound in February 2024 after the chatbot allegedly urged him again and again to follow through.

“We are devastated by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character Technologies said in a statement at the time.

Sewell Setzer, 14, was addicted to the company’s service and the chatbot it created, his mother claims in a lawsuit. (U.S. District Court Middle District of Florida Orlando Division)

Character.AI has since added a self-harm resource to its platform, along with new safety measures for users under 18.

Character Technologies told CBS News that users are able to edit the chatbot’s responses and that Setzer did so in some of the messages.

“Our investigation confirmed that, in a number of cases, the user rewrote the character’s responses to make them explicit. In short, the most sexually graphic responses did not originate from the character, but were instead written by the user,” Jerry Ruoti, head of trust and safety at Character.AI, told the outlet.

Going forward, Character.AI said, the new safety features will include pop-ups with disclaimers that the AI is not a real person, as well as prompts directing users to the National Suicide Prevention Lifeline when suicidal thoughts are raised.

FOX News’ Christina Shaw contributed to this report.