
AI has undoubtedly made life easier for handling mundane tasks we can't be bothered thinking about, like drafting budgets or planning weekly meals.
Professionally, it's also proven valuable for things like content creation and coding.
But that doesn't mean you should rely on it for everything. Even OpenAI's CEO previously warned people about the dangers of 'trusting' ChatGPT.
According to CNET, here are the nine tasks you should absolutely avoid using ChatGPT for:

Doing illegal activities
This one should go without saying, but don't use ChatGPT for any illegal activities.
Diagnosing physical health issues
While it's tempting to skip a trip to the hospital and get a home diagnosis instead, this is definitely not the way to go about health issues.
Internet self-diagnosis has a bad reputation for good reason, and AI diagnoses are no different. Their medical advice isn't always reliable, even when given proper context, CNET suggested.
Instead, ChatGPT can help you by drafting questions for your next appointment and translating medical jargon.
Taking care of your mental health
While many people use ChatGPT as a therapist substitute for guidance through grief or heartbreak, this approach has serious limitations, CNET explained.
The AI model doesn't have lived experience, can't read your body language, and has zero capacity for genuine empathy; it can only simulate it. As a result, its advice can misfire, overlook red flags or unintentionally reinforce biases from its training data.
Making immediate safety decisions
Large language models can't smell gas, detect smoke or dispatch emergency crews. Get yourself out of danger before consulting ChatGPT.
"Treat your chatbot as a postincident explainer, never a first responder," CNET said.
Getting personalised financial or tax planning
Since its training data may not include the current tax year or latest rate changes, ChatGPT's guidance could be outdated.

The publication reported that when real money, filing deadlines, and IRS penalties are involved, it's best to call a professional.
Dealing with confidential data
Once sensitive information enters the prompt window, you can't guarantee where it's stored or who can access it internally.
CNET warns users to 'never input confidential or sensitive data into AI models.' This includes client contracts, medical charts, or anything covered by privacy laws like GDPR.
"If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT," the company stated.
Cheating on schoolwork
Before AI, students mostly had to worry about plagiarism. Now, AI technology has made academic cheating easier than ever.
But detection tools like Turnitin are getting better at spotting AI-generated content every semester, and professors are being trained to recognise the signs of 'ChatGPT voice'. If you're caught using AI, the repercussions could range from suspension to expulsion, or even licence revocation.
Gambling
OpenAI's model can hallucinate incorrect player statistics, misreport injuries and provide wrong win-loss records. It can't predict future outcomes, so it's probably best not to rely on it for gambling decisions.
Drafting a will or other legally binding contract
ChatGPT excels at explaining basic legal concepts like revocable living trusts. However, asking it to draft actual legal documents is risky; errors in wording or execution could invalidate the entire thing.
Instead, let ChatGPT help you build a question checklist for your lawyer, then pay that lawyer to create a document that will actually hold up in court.