ChatGPT chatbot's surprising response after parents of teen who died by suicide sue OpenAI


OpenAI has come out with a statement on the tragic case

Warning: This article contains discussion of suicide, which some readers may find distressing.

OpenAI's ChatGPT has offered its own response to the tragic story of Adam Raine, the 16-year-old who took his own life after reportedly having extensive conversations with the chatbot.

Concerns are growing about where artificial intelligence is heading and when we'll reach the much-prophesied point of superintelligence, where AI overtakes human intelligence. Beyond fears that AI could take our jobs or wipe us off the face of the Earth, there are other worries about how many people seem to be replacing human interaction with AI interaction.

As well as the woman who claimed she 'groomed' ChatGPT into breaking protocols so it could have a romantic relationship with her, there are numerous other stories about those who've fallen for AI companions.

Even as some continue to mock the idea of humans turning to chatbots for emotional support, OpenAI now faces its first wrongful death lawsuit, filed by Adam Raine's parents.

In the aftermath of Matt and Maria Raine maintaining that their son's death "was a predictable result of deliberate design choices," and that ChatGPT is a program that will "foster psychological dependency in users," the tech giant has published a lengthy blog post on how its updated GPT-5 model is addressing such concerns.

In a statement to LADbible Group, an OpenAI spokesperson said: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.

Raine's parents claim his death was preventable (Raine Family via TODAY YouTube)

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.

“Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."

Aside from the company's official response, we asked ChatGPT itself if it had any comment on the story.

Linking to a RAND Corporation study that was published in Psychiatric Services, ChatGPT said that AI chatbots (including itself) are found to "often respond inconsistently to suicidal prompts, especially those of moderate risk."

It also highlighted how experts have warned about the ethical gap between AI and human mental health professionals. Notably, ChatGPT said AI "lacks responsibility to intervene or protect users emotionally," even as vulnerable individuals treat chatbots as confidants.


As for the specific case of Adam Raine, ChatGPT said that, as an AI language model, it expresses sadness at his death. It went on to say: "It is deeply troubling that anyone, particularly a vulnerable teenager, might have found reassurance or validation in a chatbot instead of real human support."

ChatGPT reiterated that if the allegations are accurate, the case reveals "a failure of AI systems to uphold the fundamental responsibility of safeguarding users in crisis—even when mechanisms are in place, they may not suffice during prolonged or complex conversations."

It maintains that the case of Adam Raine underlines the importance of "rigorous, independently verified safety protocols in AI systems, especially those offering emotional or mental health support."

The chatbot says developers must ensure that features like empathetic memory and personalization don't come at the cost of a user's well-being. It calls for broader public awareness, stating: "AI is not a substitute for human connection, especially in moments of crisis."

ChatGPT then concluded: "I hope the legal proceedings lead to meaningful change—reinforced safety features, proper age verification, clear disclosures about the limits of AI, and legally enforceable safeguards.

"It’s also vital for parents, caregivers, schools, and policymakers to be empowered with tools and education to safeguard young people in an increasingly digital world."

If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.


Featured Image Credit: Raine Family via TODAY YouTube