AI model finally learns this 'human' three word phrase in huge breakthrough
Published 15:51 11 May 2026 GMT+1


The admission is deemed to be a major step forward

Harry Boulton


Featured Image Credit: Yuichiro Chino / Getty


One of the biggest issues currently plaguing the most popular AI models is their inability to express doubt. While the capacity to always provide an answer is seen by some – including their own creators – as a positive, it often leads to answers that are incorrect at best and harmful at worst.

If you're an avid user of AI chatbot tools then you can probably count on one hand – especially in recent times – how often you've been told that the model simply can't provide an answer.

The only times this appears to happen is in response to requests that break the model's terms of service, and such refusals did increase in frequency following an unpopular guardrails update to ChatGPT that was subsequently toned down after negative feedback.

What can be guaranteed, however, is that you've been served an incorrect answer through the AI tool's effort to always give something to the user — and you might not even be aware that you've been fed false info.


Most AI models will provide an answer to the user even if they don't have the training necessary to be correct (Getty Stock)

One equally frightening and hilarious video displayed this in action as ChatGPT's voice model repeatedly provided wildly wrong results when asked to time a race, and OpenAI CEO Sam Altman had few encouraging words to say in response to the embarrassing situation.

This could seemingly all be solved if companies gave their AI models the capacity to simply admit that they don't know, yet that would likely be seen as a failure on their part in creating an all-knowing, highly capable piece of tech.

As reported by the Independent, researchers in South Korea have now discovered a potentially game-changing way of getting AI to admit its own shortcomings, mimicking the capacity for honesty found in humans for the first time.

The experts at the Korea Advanced Institute of Science and Technology who presented the findings claim they could be vital to the application of AI within self-driving cars and healthcare, especially as these are scenarios where the greatest immediate risk is present.

Researchers have mimicked how the human brain works to allow AI to be honest about its shortcomings (Getty Stock)

They have managed this by mimicking a process found within the human brain in which signals are generated without any external input, effectively allowing AI models to learn how to say 'I don't know anything yet'.

"While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed clear improvement in their ability to lower confidence and recognise that they 'do not know'," the researchers explained, although it falls to the companies themselves to adopt this groundbreaking practice.

Se-Bum Paik, an author on the associated study published in Nature Machine Intelligence, outlined that "this is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer."
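The study's actual warm-up training happens inside the model, but the end behaviour the researchers describe – lowering confidence and admitting uncertainty instead of guessing – can be sketched in miniature. The snippet below is a hypothetical illustration, not the study's method: a toy classifier that abstains with "I don't know" whenever its top softmax probability falls below a threshold (the function names, labels, and threshold value are all assumptions for the example).

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution.
    # Subtracting the max keeps exp() numerically stable.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_abstain(logits, labels, threshold=0.7):
    """Return the predicted label, or 'I don't know' when the
    model's top confidence falls below the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["cat", "dog", "bird"]
print(answer_or_abstain([4.0, 0.5, 0.2], labels))  # confident: "cat"
print(answer_or_abstain([1.1, 1.0, 0.9], labels))  # near-uniform: "I don't know"
```

The point of the sketch is the failure mode the researchers describe: a conventional model always returns `labels[best]` no matter how flat the distribution is, whereas a model trained to recognise its own uncertainty can return the abstention instead.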
