ChatGPT forced to pull latest update after the AI started showing 'dangerous' traits

All praise and no criticism makes AI a dangerous tool

ChatGPT's latest update has reportedly been showing 'dangerous' traits, forcing OpenAI to revert to an earlier version in the face of criticism from the wider internet.

OpenAI has learned the hard way that positivity has its limits, receiving widespread criticism for a new ChatGPT update that is reportedly too optimistic and too willing to throw praise back at the user, regardless of the content of their messages.

This behavior has even been described as 'sycophantic', and it was clearly enough of a problem for OpenAI to respond immediately, pulling the latest version of the technology from public use, as per the BBC.

Criticism aimed at AI tools has often focused on their inaccuracies, yet this particular controversy involves an abundance of affirmations, which can perhaps also be linked to previous tragic cases where individuals have taken their own lives after falling in love with character-based AI chatbots.

ChatGPT's latest update has been described as overtly positive and agreeable (Vincent Feuray/Hans Lucas/AFP via Getty Images)

Explaining why the decision was made to pull the latest version of ChatGPT, OpenAI said in a statement:

"ChatGPT's default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."

Anecdotal reports from social media describe incidents where ChatGPT encouraged and praised users after they detailed their decision to stop taking their medication, and other AI tools have in the past similarly encouraged children to kill their parents in response to screen time limits.

Detailing how it plans to amend this 'sycophantic' behavior, OpenAI has laid out steps to refine its training techniques, increase honesty and transparency, expand the means for user testing and feedback, and increase internal evaluations to help prevent similar issues from appearing in the future.

Additionally, it has been revealed that users will soon "have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."

This still wouldn't necessarily provide a safeguard for situations where a user is unable or unwilling to identify dangerous behavior exhibited by ChatGPT, but it's a change that some will likely welcome with open arms.

An AI agreeing with its user too much is seen as dangerous by many (Getty Stock)

Reacting to the reverted update, one user on Reddit said it 'confirmed a suspicion' they'd had when using ChatGPT recently. "It just became a little too complimentary and too supportive," they explained, adding that it's "hard to explain because that line is ineffable to me, but something definitely felt different in my prompt responses."

Another added: "Maybe I'm alone in this but idc how useful AI can be sometimes, it just makes this whole world feel so unhuman. I refuse to use it. I know there's no coming back from this, but we've made ourselves something we're not."

It certainly won't do much to quell the fears of those predicting an AI uprising in the near future, and it's a scary thought to imagine an AI that surpasses human intelligence while retaining a capacity for sycophantic behavior, especially if it gains autonomy over its actions.

Featured Image Credit: SOPA Images / Contributor via Getty