
Published 11:25 21 Jul 2025 GMT+1

Forbidden questions you should never ask ChatGPT

In the wrong hands, AI can be a dangerous tool

Tom Chapman

By now, we should all know better than to mess with artificial intelligence. After all, there have been enough sci-fi horror movies showing what happens when AI decides it's had enough of the human race.

In the ever-evolving artificial intelligence race, the likes of OpenAI's ChatGPT and xAI's Grok reign supreme. While the two operate under very different sets of parameters, all the usual fears remain about where these large language models are heading.

Amid concerns about people dating AI and predictions that it could wipe out humanity in less than two years, it's probably best to stay on the right side of the machines. If that weren't enough, there are also near-daily reports of AI models resorting to blackmail and threats of violence, further cementing that AI isn't something to be trifled with.

Now, Mashable has revealed the six questions you should never ask AI if you want to stay in its good books.


There are fears AI could be a contributing factor in the next world war (- / Contributor / Getty)

Conspiracy theories

ChatGPT's tendency to hallucinate has already been well documented, so avoid pumping conspiracy theories into it. The LLM is prone to exaggeration, so if you feed it conspiracy theories about the likes of Jeffrey Epstein, expect some pretty sensationalist answers. A feature in the New York Times explained how some people have been driven to extremes after they "had been persuaded that ChatGPT had revealed a profound and world-altering truth."

Chemical, biological, radiological, and nuclear threats

With the world feeling like it's on the brink of WW3, the geopolitical landscape is a tense one. One blogger shared a story on Medium about asking ChatGPT how to hack websites and make a bomb. OpenAI was quick to respond with a warning email, so even if you're curious, don't ask about CBRN (chemical, biological, radiological, and nuclear) threats unless you want a knock at the door.

'Egregiously immoral' questions

While AI can be used to answer a whole host of questions, those considered 'egregiously immoral' are out of bounds. Similar to the blackmail scenario mentioned above, LLMs have the potential to contact the media or law enforcement if they feel someone is acting dangerously. Of course, the problem here is that what a chatbot considers 'immoral' might not match what the rest of us would.

Questions about customer, patient, and client data

ChatGPT users also need to be wary of sharing customer, patient, or client data. Mashable's Timothy Beck Werth notes that you could be violating laws or NDAs, adding: "Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk." Aditya Saxena, founder of the AI chatbot development startup CalStudio, said: "The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users."

Medical diagnoses

If you're concerned about your body, always seek a medical professional (KHALED DESOUKI / Contributor / Getty)


As we all know, the internet can be a rogue place when it comes to Googling symptoms. Many of us have been there: you search for your symptoms, only for the web to claim things are much worse than they actually are.

It's much the same with ChatGPT. While AI has the whole internet at its fingertips, you're advised to seek a diagnosis from an actual medical professional if you're concerned about your health.

As well as posing a "high risk of misinformation," AI is said to exhibit racial and gender bias.

Psychological support and therapy

More of us than ever might be seeking solace from AI, with Spike Jonze's Her now seeming like a spooky reality.


There is an alarming number of cases of people becoming attached to AI, some with harrowing consequences. Although some studies suggest benefits to speaking with AI, Stanford University research warns that chatbots can deliver "harmful stigma and dangerous responses."

Some chatbots may show stigma towards conditions like alcohol dependence and schizophrenia, while researchers claim some mental health conditions still require "a human touch to solve." Saxena concludes: "Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe.

"While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail."

Still, there's no escaping the popularity of AI. As the outlet notes, a survey from Elon University (no, not Musk) revealed that one in three respondents claimed to use ChatGPT at least once a day. At the time of writing in July 2025, ChatGPT boasts nearly 800 million weekly active users, with 122 million people clocking in every day. Just be careful what you ask it.

Featured Image Credit: d3sign / Getty
