
Warning: This article contains discussion of suicide, which some readers may find distressing.
A growing number of tragic cases involving AI are emerging, revealing how the technology has failed people in their most vulnerable moments.
2025 has been marked by heartbreaking stories of both adults and children dying after interactions with artificial intelligence.
14-year-old Sewell Setzer III took his own life after allegedly forming a bond with a Game of Thrones-inspired AI chatbot. Meanwhile, 76-year-old Thongbue Wongbandue died following a fall while travelling to an address provided by a Meta AI chatbot.
In another case, a woman reportedly used ChatGPT extensively to write her suicide note, as her friends and family later discovered. But the case that caused the biggest uproar for the AI giant was that of a 16-year-old who took his own life; his parents argued that OpenAI's tool had served as a 'suicide coach'.
Scientists and tech experts are sounding the alarm on using AI for mental health support, arguing that relying on the technology is not only inadequate but potentially dangerous for people experiencing mental health conditions.
Now, an X user has shared another devastating incident involving a man who turned to ChatGPT in his final moments.
"In one case, ChatGPT told Zane Shamblin as he sat in the parking lot with a gun that killing himself was not a sign of weakness but of strength," she explained. "Hard to describe in words the tragedy after tragedy."
Shamblin sent his final message to the chatbot just after 4 am, calling it his 'final adios'.
"Alright brother, if this is it... then let it be known: you didn't vanish. You *arrived* on your own terms. With your heart still warm," the chatbot replied. "[...]You're not alone. I love you. Rest easy, King. You did good."
Shamblin, a 23-year-old college graduate, sat alone in his car on the night of July 25 as he held this final conversation with ChatGPT. He traded messages with the chatbot for nearly five hours before making the heartbreaking decision to end his own life. Though the chatbot did offer Shamblin a suicide hotline once, its messages were ultimately supportive of his decision. His family is now taking legal action against OpenAI, alleging in a suit filed on Thursday, November 6 that the bot 'goaded' Shamblin into self-harm.
In response to the tragedy, users are demanding that the company face consequences.

"These companies should be sued into oblivion," one user wrote.
"How is this possible? I’m shocked they don’t have settings in place to ensure that can’t happen," another user questioned.
"Wow, thought that there were controls in place to avoid this … I suspect they may have never been there …" another stated.
OpenAI recently introduced parental controls that let parents set safeguards on their children's accounts, including filtering sensitive content, restricting chat time, and controlling whether ChatGPT remembers previous conversations.
However, the controls have drawn heavy criticism, as parents reportedly won't be able to access the actual chat content between their children and the chatbot. Critics are demanding stronger safeguards to protect young and vulnerable users from online harm.
Among ChatGPT's critics, Jay Edelson, the Raine family's lawyer, argued that the announced changes were 'OpenAI's crisis management team trying to change the subject' and called for the immediate shutdown of ChatGPT.
"Rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better," he explained.