ChatGPT forced to pull latest update after the AI started showing 'dangerous' traits


Published 11:01 2 May 2025 GMT+1

All praise and no criticism makes AI a dangerous tool

Harry Boulton

ChatGPT's latest update has reportedly been showing 'dangerous' traits, forcing OpenAI to revert to an earlier version in the face of criticism from the wider internet.

OpenAI has learned the hard way that positivity has its limits, as it has received widespread criticism for a new ChatGPT update that is reportedly too optimistic and too willing to throw praise back at the user, regardless of the content of their messages.

This behavior has even been described as 'sycophantic', and it was clearly enough of a problem for OpenAI to respond immediately, pulling the latest version of the technology from public use, as per the BBC.

Criticism aimed at AI tools has often centered on their inaccuracies, yet this particular controversy involves an abundance of affirmation, which can perhaps also be linked to previous tragic cases in which individuals have taken their own lives after falling in love with character-based AI chatbots.

ChatGPT's latest update has been described as overtly positive and agreeable (Vincent Feuray/Hans Lucas/AFP via Getty Images)

Relaying why the decision was made to pull the latest version of ChatGPT, OpenAI detailed in a statement:

"ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."

Anecdotal reports on social media describe incidents where ChatGPT encouraged and praised users after they detailed their decision to stop taking their medication, while other AI tools have in the past gone as far as encouraging children to kill their parents in response to screen-time limits.

Detailing how it plans to amend this 'sycophantic' behavior, OpenAI has laid out steps to refine its training techniques, increase honesty and transparency, expand the means for user testing and feedback, and increase internal evaluations to help prevent similar issues from appearing in the future.

Additionally, it has been revealed that users will soon "have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."

This still wouldn't necessarily provide a safeguard for situations where a user is unable or unwilling to identify dangerous behavior exhibited by ChatGPT, but it's a change that some will likely welcome with open arms.

An AI agreeing with its user too much is seen as dangerous by many (Getty Stock)

Reacting to the reverted update, one user on Reddit revealed that it 'confirmed a suspicion' they'd had when using ChatGPT recently. "It just became a little too complimentary and too supportive," they outlined, adding that it's "hard to explain because that line is ineffable to me, but something definitely felt different in my prompt responses."

Another added: "Maybe I'm alone in this but idc how useful AI can be sometimes, it just makes this whole world feel so unhuman. I refuse to use it. I know there's no coming back from this, but we've made ourselves something we're not."

It certainly won't do much to quell the fears of those predicting an AI uprising in the near future. It's a scary thought to imagine an AI that surpasses human intelligence while retaining this capacity for sycophantic behavior, especially if it gains autonomy over its actions.

Featured Image Credit: SOPA Images / Contributor via Getty