
Warning: This article contains discussion of suicide, which some readers may find distressing.
OpenAI is addressing recent safety concerns after several deaths were linked to artificial intelligence. The AI giant warns that it could intervene if it feels a situation is getting out of hand.
There's a continued debate about humans losing connections with each other, as many turn to the likes of ChatGPT and Grok for supposed companionship. Whether that means 'grooming' an AI to break protocols and enter a romantic relationship with you, seeking reassurance during a time of crisis, or making it your closest confidant, there are numerous reminders that AI shouldn't be here to replace flesh-and-blood humans. After all, isn't being replaced exactly what many of us worry about when it comes to our jobs?
Then there's the tragic story of 16-year-old Adam Raine, whose parents are trying to sue OpenAI amid claims that ChatGPT didn't prevent their son from taking his own life when he allegedly turned to it for help.
OpenAI quickly published a lengthy blog post on its protocols and the changes it's making to try to ensure stories like Raine's don't happen again, although some questioned its current stance on contacting authorities.

ChatGPT can currently detect when a user is planning to harm others; flagged conversations are then "reviewed by a small team trained on our usage policies and who are authorized to take action."
The post continues: "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
Still, a situation like Adam Raine's might not make its way to the authorities. OpenAI writes: "We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions."
As for what will be reported, OpenAI's usage policies state that the service can't be used to "promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system."
Futurism notes that this language could be misconstrued, while jailbroken versions of ChatGPT have reportedly been able to give out instructions for making neurotoxins or taking your own life.
There's also the suggestion that the updated safety protocols sit uneasily with the company's stance in its ongoing privacy dispute with The New York Times. The publication has demanded chat logs as it seeks to prove that its copyrighted work was used to train ChatGPT models, although OpenAI has argued against handing them over "in order to protect user privacy."
OpenAI boss Sam Altman has already warned that ChatGPT doesn't come with the same legal privilege as a 'real' lawyer, but for now, the debate over privacy versus safety continues.
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.