ChatGPT forced to pull latest update after the AI started showing 'dangerous' traits


Published 11:01 2 May 2025 GMT+1

All praise and no criticism makes AI a dangerous tool

Harry Boulton

Featured Image Credit: SOPA Images / Contributor via Getty


ChatGPT's latest update has reportedly been showing 'dangerous' traits, forcing OpenAI to revert to an earlier version in the face of criticism from the wider internet.

OpenAI has learned the hard way that positivity has its limits, as it has received widespread criticism for a new ChatGPT update that is reportedly too optimistic and willing to throw praise back at the user, regardless of the content of their messages.

This behavior has even been described as 'sycophantic', and was clearly enough of a problem for OpenAI to respond immediately, pulling the latest version of the technology from public use, as per the BBC.

Criticism aimed at AI tools has often concerned their inaccuracies, yet this particular controversy involves an abundance of affirmations, which can perhaps also be linked to previous tragic cases where individuals have taken their own lives after falling in love with character-based AI chatbots.


ChatGPT's latest update has been described as overtly positive and agreeable (Vincent Feuray/Hans Lucas/AFP via Getty Images)

Explaining why the decision was made to pull the latest version of ChatGPT, OpenAI detailed in a statement:

"ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."

Anecdotal experiences from social media reveal incidents where ChatGPT encouraged and praised users after they detailed their decision to stop taking their medicine, and other AI tools in the past have similarly encouraged children to kill their parents as a response to screen time limits.

Detailing how it plans to amend this 'sycophantic' behavior, OpenAI has laid out steps to refine training techniques, increase honesty and transparency, expand means for user testing and feedback, and increase internal evaluations to help prevent similar issues from appearing in the future.

Additionally, it has been revealed that users will soon "have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."

This still wouldn't necessarily provide a safeguard for situations where a user is unable or unwilling to identify dangerous behavior exhibited by ChatGPT, but it's a change that some will likely welcome with open arms.

An AI agreeing with its user too much is seen as dangerous by many (Getty Stock)

Reacting to the reverted update, one user on Reddit revealed that it 'confirmed a suspicion' they'd had when using ChatGPT recently. "It just became a little too complimentary and too supportive," they outlined, adding that it's "hard to explain because that line is ineffable to me, but something definitely felt different in my prompt responses."

Another added: "Maybe I'm alone in this but idc how useful AI can be sometimes, it just makes this whole world feel so unhuman. I refuse to use it. I know there's no coming back from this, but we've made ourselves something we're not."

It certainly won't do much to quell the fears of those predicting an AI uprising in the near future, and it's a scary thought to imagine an AI with greater-than-human intelligence retaining this capacity for sycophantic behavior, especially if it gains autonomy over its actions.
