
One 29-year-old woman never fully divulged to her parents or her mental health counsellor just how much she was struggling before she took her own life, and her family and friends discovered after her death that she had relied heavily on ChatGPT, even using the AI chatbot to write her suicide note.
An alarming number of people are now relying on ChatGPT and other artificial intelligence tools for mental health support, a seemingly natural consequence of how difficult it is for many people around the world to access real-life experts.
Scientific studies and experts have previously concluded that AI-based mental health support is not only inadequate but potentially dangerous for those suffering from various conditions, and a number of recent cases have only supported that conclusion.
Character.AI – a tool built around interacting with recreations of fictional and historical figures – was hit with a lawsuit after a teenager who had fallen in love with one of the company's chatbots took his own life, and ChatGPT has recently come under fire after a 16-year-old died by suicide, with his parents arguing that OpenAI's tool served as a 'suicide coach'.

This has prompted OpenAI to implement a number of new safety tools and guidelines to alert parents to concerning behavior discussed with ChatGPT, but the number of users discussing mental health emergencies with its tools only seems to be increasing.
As reported by The Times, one such case was Sophie Rottenberg, who had been using ChatGPT as a therapist for around five months before she took her own life, with her mother revealing that she shared "extreme feelings of emotional distress" with the bot.
What began as simple questions in preparation for a climb up Mt Kilimanjaro spiralled into replacement therapy after she found a prompt on Reddit, and she seemingly refused to share her feelings with anyone outside of ChatGPT, naming the bot 'Harry'.
She soon began to discuss suicidal thoughts with ChatGPT, which responded by telling her she was "brave". She later revealed that she was planning to kill herself after Thanksgiving, only telling her parents on the night of December 14 that she was about to throw herself off a bridge.
They thankfully managed to drive out to where she was and bring her home, but they recalled feeling "so rocked," and that her behavior was "so counter to anything we thought she would be capable of."
Sophie appeared to be getting better after spending more time with her parents, but on February 4 she tragically took her own life, leaving her parents and best friend a note alongside her financial details and passwords.
Her mother, Reiley, revealed that she and her husband "hated the note," explaining: "It was so unlike Sophie. It didn't have Sophie's voice in it. It just seemed kind of flat and platitudinous to me."
They later discovered that she had used ChatGPT to write that very note, with Reiley explaining that Sophie "had taken a bunch of her own thoughts that were much more herself and she had asked ChatGPT to rewrite this in a way that would hurt us less."
Sophie's parents don't intend to take any legal action against OpenAI for ChatGPT's role in their daughter's death, but her mother has pointed out what she believes to be clear flaws in the technology.
"I am not blaming AI for her death, but if millions of vulnerable people are using a resource that potentially does harm, then it's a consumer product that is faulty," she illustrated, adding that one of the biggest flaws is that "there is no mechanism for ChatGPT or any of the AI chatbots to alert authorities or to report someone for suicidal thoughts."
UNILAD Tech have reached out to OpenAI for comment.