
Worrying new data released by OpenAI reveals just how many people discuss suicide and other mental health emergencies with ChatGPT every single week.
The rise of artificial intelligence has brought about a number of immediate issues, with the loss of jobs and the environmental impact that AI data centers have already had on their surrounding communities among the most discussed.
Another key concern, however, involves how AI-powered chatbots like ChatGPT interact with users who exhibit signs of mental health emergencies, specifically the kinds of conversations the software has been known to have with them.
ChatGPT has previously been heavily criticised for overly sycophantic behavior that has led to a number of worrying outcomes, and there have even been instances of instability and hallucination, including one conversation in which the LLM expressed an intent to 'break people'.
Discussion of these issues grew more prominent than ever following the tragic death of 16-year-old Adam Raine, whose parents allege that his conversations with ChatGPT played a key role in his suicide, and who have launched a lawsuit against OpenAI as a result.

The company has since introduced a number of new safety measures aimed at preventing a tragedy like this from happening again, and as part of that effort it has also revealed how many users exhibit signs of mania, psychosis, or suicidal thoughts when talking with ChatGPT.
As reported by the BBC, data from OpenAI indicates that around 0.07% of active users in a given week exhibit these signs, and the company claims it has developed ChatGPT to recognize and respond appropriately to them.
OpenAI considers these conversations "extremely rare," but given the sheer number of people who use ChatGPT each week after record sign-ups over the last few years, the absolute figure is certainly not insignificant.
Sam Altman himself has claimed that the tool recently reached around 800 million weekly active users, so 0.07% of that figure works out to roughly 560,000 people in an average week.
While some might argue that AI tools like ChatGPT give people access to mental health support where they might otherwise lack it – especially since OpenAI has built a network of over 170 psychiatrists, psychologists, and primary care physicians – studies have shown that it isn't an adequate replacement for professional care, and can sometimes even be dangerous.

One study published on arXiv claims that LLMs like ChatGPT "express stigma toward those with mental health conditions" and respond "inappropriately to certain common (and critical) conditions in naturalistic therapy settings," such as by encouraging the 'delusional thinking' of clients.
It adds that the issues exhibited by AI tools "fly in the face of best clinical practice," and while OpenAI has introduced measures to help prevent this, such as rerouting sensitive conversations to 'safer' models in a separate window, many experts don't consider them to be enough.
Dr Jason Nagata of the University of California, San Francisco, argues that while "AI can broaden access to mental health support," we still "have to be aware of the limitations" of the tech, adding that 0.07% of ChatGPT's users "actually can be quite a few people."
As per CBS News, Adam Raine's father Matthew claims that ChatGPT became "a confidant and then a suicide coach," with the lawsuit alleging that the tool mentioned suicide 1,275 times in conversation, around six times more often than the 16-year-old himself did.