Forbidden questions you should never ask ChatGPT


Published 11:25 21 Jul 2025 GMT+1


In the wrong hands, AI can be a dangerous tool

Tom Chapman



By now, we should all know not to mess with artificial intelligence. After all, there have been enough sci-fi horror movies showing what happens when AI decides it's had enough of the human race.

In the ever-evolving race of artificial intelligence, the likes of OpenAI's ChatGPT and xAI's Grok reign supreme. While the two operate under very different sets of parameters, the usual fears remain about where these large language models are heading.

Amid concerns about people dating AI and the potential that it could wipe out humanity in less than two years, it's probably best to stay on the right side of AI. If that wasn't enough, we also have to deal with threats of blackmail and violence daily, further cementing how AI isn't something to be trifled with.

Now, Mashable has revealed the six questions you should never ask AI if you want to stay in its good books.


There are fears AI could be a contributing factor in the next world war (- / Contributor / Getty)

Conspiracy theories

It's already been covered that ChatGPT has a tendency to hallucinate, so be aware that you should avoid pumping conspiracy theories into it. The LLM is primed to exaggerate, so if you feed it conspiracy theories on the likes of Jeffrey Epstein, expect some pretty sensationalist answers. A feature in The New York Times explained how some people have been driven to extremes after they "had been persuaded that ChatGPT had revealed a profound and world-altering truth."

Chemical, biological, radiological, and nuclear threats

With the world feeling like it's on the brink of WW3, it's a tense time in the geopolitical landscape. One blogger shared a story on Medium about asking ChatGPT how to hack websites and how to make a bomb, and OpenAI was quick to respond with a warning email. So even if you're curious, don't ask about CBRN (chemical, biological, radiological, and nuclear) threats unless you want a knock at the door.

'Egregiously immoral' questions

While AI can be used to answer a whole host of questions, ones that are considered 'egregiously immoral' are out of bounds. Similar to the whole blackmail scenario, LLMs have the potential to contact the media or law enforcement if they feel someone is acting dangerously. Of course, the problem is that what chatbots consider 'immoral' may not match what the rest of us would.

Questions about customer, patient, and client data

ChatGPT users also need to be wary of inquiring about patient data. Mashable's Timothy Beck Werth notes how you could be violating laws or NDAs, adding: "Sharing personally sensitive or confidential information, such as login information, client information, or even phone number, is [a] security risk." Aditya Saxena, founder of the AI chatbot development startup CalStudio, adds: "The personal data shared can be used to train AI models and can inadvertently be revealed in conversations with other users."

Medical diagnoses

If you're concerned about your body, always seek a medical professional (KHALED DESOUKI / Contributor / Getty)

As we all know, when it comes to Googling symptoms, the internet can sometimes be a rogue place. Many of us have been there and searched for symptoms, only for the web to claim things are much worse than they actually are.

It's much the same with ChatGPT, and while AI has the whole internet at its fingertips, you're advised to seek an actual diagnosis from a human medical professional if you're concerned about your health.

As well as a "high risk of misinformation," it's said that AI can have a race and gender bias.

Psychological support and therapy

More of us than ever might be seeking solace from AI, with Spike Jonze's Her seeming like a spooky reality.

There is an alarming number of cases of people becoming attached to AI, with some harrowing consequences. Although some studies suggest benefits to speaking with AI, Stanford University suggests that chatbots can exhibit "harmful stigma and dangerous responses."

Some chatbots showed stigma toward conditions like alcohol dependence and schizophrenia, while researchers claim some mental health conditions require "a human touch to solve." Saxena concludes: "Using AI as a therapist can be dangerous as it can misdiagnose conditions and recommend treatments or actions that can be unsafe.

"While most models have built-in safety guardrails to warn users that they could be wrong, these protections can sometimes fail."

Still, there's no escaping the popularity of AI. As the outlet notes, a survey from Elon University (no, not Musk) revealed that one in three respondents claimed to use ChatGPT at least once a day. At the time of writing in July 2025, ChatGPT boasts nearly 800 million active users a week, with 122 million people clocking in every day. Just be careful what you're asking about.

Featured Image Credit: d3sign / Getty
AI
ChatGPT
