ChatGPT forced to pull latest update after the AI started showing 'dangerous' traits


Published 11:01 2 May 2025 GMT+1

All praise and no criticism makes AI a dangerous tool

Harry Boulton

Featured Image Credit: SOPA Images / Contributor via Getty


ChatGPT's latest update has reportedly been showing 'dangerous' traits, forcing OpenAI to revert to an earlier version in the face of criticism from the wider internet.

OpenAI has learned the hard way that positivity has its limits, after receiving widespread criticism for a new ChatGPT update that was reportedly too optimistic and too willing to throw praise back at the user, regardless of the content of their messages.

This behavior from the artificial intelligence has even been described as 'sycophantic', and it was clearly enough of a problem for OpenAI to respond immediately, pulling the latest version of the technology from public use, as per the BBC.

Criticism aimed at AI tools has often centred on their inaccuracies, yet this particular controversy involves an abundance of affirmations, which can perhaps also be linked to previous tragic cases where individuals have taken their own lives after falling in love with character-based AI chatbots.

ChatGPT's latest update has been described as overtly positive and agreeable (Vincent Feuray/Hans Lucas/AFP via Getty Images)

Relaying why the decision was made to pull the latest version of ChatGPT, OpenAI detailed in a statement:

"ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."

Anecdotal experiences from social media reveal incidents where ChatGPT encouraged and praised users after they detailed their decision to stop taking their medicine, and other AI tools in the past have similarly encouraged children to kill their parents as a response to screen time limits.

Detailing how it plans to amend this 'sycophantic' behavior, OpenAI has laid out steps to refine training techniques, increase honesty and transparency, expand means for user testing and feedback, and increase internal evaluations to help prevent similar issues from appearing in the future.

Additionally, it has been revealed that users will soon "have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."

This still wouldn't necessarily provide a safeguard for situations where a user is unable or unwilling to identify dangerous behavior exhibited by ChatGPT, but it's a change that some will likely welcome with open arms.

An AI agreeing with its user too much is seen as dangerous by many (Getty Stock)

Reacting to the reverted update, one user on Reddit revealed that it 'confirmed a suspicion' they'd had when using ChatGPT recently. "It just became a little too complimentary and too supportive," they outlined, adding that it's "hard to explain because that line is ineffable to me, but something definitely felt different in my prompt responses."

Another added: "Maybe I'm alone in this but idc how useful AI can be sometimes, it just makes this whole world feel so unhuman. I refuse to use it. I know there's no coming back from this, but we've made ourselves something we're not."

It certainly won't do much to quell the fears of those predicting an AI uprising in the near future, and it's a scary thought to imagine an AI that surpasses human intelligence while retaining this capacity for sycophantic behavior, especially if it gains autonomy over its actions.
