OpenAI warn they could call the police over your ChatGPT conversations

Published 11:34 29 Aug 2025 GMT+1

It's privacy versus prevention

Tom Chapman

Warning: This article contains discussion of suicide, which some readers may find distressing.

OpenAI is addressing recent safety concerns after several deaths were linked to artificial intelligence. The AI overlord warns that it could intervene if it feels a situation is getting out of hand.

There's a continued debate about humans losing connection with one another as many turn to the likes of ChatGPT and Grok for supposed companionship. Whether that's 'grooming' an AI to break protocols and have a romantic relationship with you, turning to it for reassurance during a time of crisis, or making it your closest confidant, there are numerous reminders that AI shouldn't be here to replace flesh-and-blood humans. After all, isn't that what many of us are worried about when it comes to saving our jobs?

Then there's the tragic story of 16-year-old Adam Raine, whose parents are trying to sue OpenAI amid claims that ChatGPT didn't prevent their son from taking his own life when he allegedly turned to it for help.

OpenAI quickly published a lengthy blog post on its protocols and the changes intended to ensure stories like Raine's don't happen again, although some have questioned its current stance on contacting the authorities.

Adam Raine's parents are trying to sue OpenAI (Dignity Memorial via Raine Family)

ChatGPT can currently detect when a user is planning to harm others, and those conversations are routed to be "reviewed by a small team trained on our usage policies and who are authorized to take action."

The post continues: "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."

Still, a situation like Adam Raine's might not make its way to the authorities. OpenAI writes: "We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions."



As for what will be reported, OpenAI's usage policies state that the service can't be used to "promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system."

Futurism notes that the language could be misconstrued here, while jailbroken versions of ChatGPT have reportedly been able to give out recipes for neurotoxins and instructions on how to take your own life.

There's also the suggestion that the updated safety protocols sit awkwardly alongside the company's ongoing legal battle with The New York Times. The publication has demanded chat logs to determine whether its copyrighted work has been used to train ChatGPT models, although OpenAI has argued against handing them over "in order to protect user privacy."

OpenAI overlord Sam Altman has already warned that chatting with ChatGPT doesn't come with the same legal privilege as speaking to a 'real' lawyer, but for now, the debate over privacy versus safety continues.

If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.

Featured Image Credit: Andrew Harnik / Staff / Getty
AI
ChatGPT
