
Anyone who's spent time with AI chatbots knows they tend to be pretty agreeable by nature.
The technology isn't flawless, and chatbots like OpenAI's ChatGPT can deliver bizarre answers, create false information and present wrong details with complete confidence.
While the tech impressed researchers with its approach to a 2,400-year-old maths problem, ChatGPT has come under fire for giving dangerous medical advice to a man who ended up hospitalised with a rare 19th-century disease, among other tragic cases.
To dig deeper into the AI's reasoning, YouTubers and AI experts have turned to jailbroken versions of the tech to get unfiltered answers about AI's future with humanity, without any sugarcoating.

Now, it seems regular users can use a three-word prompt with ChatGPT to produce more balanced responses and minimise misleading information.
After asking the chatbot any question, simply follow up with: 'Convince me otherwise.' This forces the OpenAI chatbot to revisit its previous answer and hunt for weaknesses in its reasoning or gaps in its logic.
According to TechRadar, the initial response might offer solid advice or a comprehensive explanation, but asking the chatbot to argue the opposite position will expose limitations or surface an alternative perspective.
It's similar to asking a person for guidance. When discussing large purchases or personal decisions, you'd expect a conversation that weighs up different options rather than a single definitive answer.
By encouraging the AI model to examine the flip side, you gain access to these alternative reasoning approaches and receive a more balanced response.
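For readers using the API rather than the chat interface, the same trick is simply a follow-up turn appended to the conversation history. Below is a minimal sketch assuming the OpenAI Python SDK's chat-completions message format; the helper name and model string are illustrative, not part of any official recipe:

```python
def build_counterargument_turn(history, first_answer):
    """Append the model's first answer and the three-word follow-up
    prompt to the running conversation, so the model must argue
    against its own previous response."""
    return history + [
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": "Convince me otherwise."},
    ]

# Example conversation (contents are placeholders)
history = [{"role": "user", "content": "Should I switch careers into data science?"}]
first_answer = "Yes - demand is high and your skills would transfer well."
followup = build_counterargument_turn(history, first_answer)

# With the real SDK you would then send the full history back, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   second = client.chat.completions.create(model="gpt-5.1", messages=followup)
```

Sending the entire history, rather than the follow-up alone, is what lets the model critique its own earlier answer instead of answering in a vacuum.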

Using a job change as an example, TechRadar explained: "ChatGPT might point out the financial risks, the difficulty of entering a new field, and the possibility that the current job has benefits that are easy to overlook.
"The second answer does not negate the first, but it adds weight to the side that was missing."
The approach works because the first response becomes 'one side of a discussion rather than the final word.'
Introducing disagreement or opposition encourages the AI to 'slow down and weigh the options more carefully,' TechRadar suggested.
OpenAI continues to develop its flagship chatbot, recently retiring GPT-4o, whose 'conversational style and warmth' many users loved, in favour of the newer GPT-5.1 and GPT-5.2 models, which offer customisable base styles and tones.
Part of the decision stemmed from stricter guardrails OpenAI implemented to detect potential health concerns and to discourage the kinds of close social relationships users formed with the previous GPT-4o.