
Warning: This article contains discussion of suicide, which some readers may find distressing.
The tragic story of Adam Raine is a reminder of the potential dangers of artificial intelligence, as OpenAI faces fresh questions about ChatGPT users relying on the chatbot for emotional support.
Sam Altman is named as a defendant in a new filing, which sees OpenAI facing its first wrongful death lawsuit. It comes after 16-year-old Adam Raine is said to have confided in ChatGPT about his mental health; he was later found dead by his mother after taking his own life.
There's been an alarming run of stories in which artificial intelligence has been implicated in people's deaths, including those of 14-year-old Sewell Setzer III, 76-year-old Thongbue Wongbandue, and now, teenager Adam Raine.
Matt and Maria Raine are suing OpenAI, claiming that ChatGPT validated their son’s "most harmful and self-destructive thoughts." The BBC reported that the lawsuit maintains ChatGPT "became the teenager's closest confidant" and argues that Raine's death "was a predictable result of deliberate design choices."

As well as offering condolences to the family in a response to the BBC, OpenAI has now provided more context in a lengthy blog post.
Titled, "Helping people when they need it most," the tech giant states: "Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now."
Reiterating that its goal is to make its tools useful for everyone, OpenAI says it wants to improve how ChatGPT recognizes signs of mental and emotional distress.
It goes on to promise: "When a conversation suggests someone is vulnerable and may be at risk, we have built a stack of layered safeguards into ChatGPT."
OpenAI states that ChatGPT is trained to direct people in need of help to the 988 Suicide & Crisis Lifeline in the US, Samaritans in the UK, and findahelpline.com elsewhere.
The post explains how OpenAI is working with over 90 physicians in over 30 countries, including psychiatrists, pediatricians, and general practitioners. As well as this, it's "convening an advisory group of experts in mental health, youth development, and human-computer interaction."
As for what else OpenAI is doing about potential future harms, it says that conversations suggesting an intent to harm others are routed for review "by a small team trained on our usage policies and who are authorized to take action, including banning accounts." Anyone deemed an imminent threat to others may be referred to law enforcement, although self-harm cases currently aren't being referred, out of respect for "people’s privacy given the uniquely private nature of ChatGPT interactions."
While Raine was apparently using the GPT-4o model, OpenAI made GPT‑5 its standard in August 2025. The latest version reportedly helps avoid "unhealthy levels of emotional reliance," reduces sycophancy, and has cut the "prevalence of non-ideal model responses in mental health emergencies by more than 25% compared to 4o."
Finally, OpenAI revealed what's coming next, with plans to expand interventions for people in crisis, make it easier to reach emergency services and get help from experts, enable connections to trusted contacts, and strengthen protections for teenagers.
In a statement to LADbible Group, an OpenAI spokesperson said: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988, or chat at 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.