


One of the biggest issues currently plaguing the most popular AI models is their inability to express doubt. While the capacity to always provide an answer is seen by some – including the models' own creators – as a positive, it often leads to answers that are incorrect at best and harmful at worst.
If you're an avid user of AI chatbot tools, you can probably count on one hand – especially in recent times – how often you've been told that the model simply can't provide an answer.
The only times this appears to happen is in response to requests that break the model's terms of service – refusals that did increase in frequency following an unpopular guardrails update to ChatGPT, which was subsequently toned down after negative feedback.
What is practically guaranteed, however, is that you've been served an incorrect answer through an AI tool's effort to always give the user something – and you might not even be aware that you've been fed false info.

One equally frightening and hilarious video showed this in action, as ChatGPT's voice mode repeatedly provided wildly wrong results when asked to time a race, and OpenAI CEO Sam Altman had few encouraging words to offer in response to the embarrassing situation.
This could seemingly all be solved if companies gave their AI models the capacity to simply admit that they don't know, yet that would likely be seen as a failure in their mission to create an all-knowing, highly capable piece of tech.
As reported by the Independent, researchers in South Korea have now discovered a potentially game-changing way of getting AI to admit its own shortcomings, mimicking the human capacity for honesty for the first time.
The experts at the Korea Advanced Institute of Science and Technology who presented the work claim it could be vital to the application of AI in self-driving cars and healthcare, as these are the scenarios where the greatest immediate risk is present.

They managed this by mimicking a process found within the human brain, in which signals are generated spontaneously without any external input. By having AI models replicate that process before they are exposed to real data, the researchers effectively allow them to learn how to say 'I don't know anything yet'.
"While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed clear improvement in their ability to lower confidence and recognise that they 'do not know'," the researchers illustrated, although it is on the companies themselves to adopt this groundbreaking practice.
Se-Bum Paik, an author on the associated study published in Nature Machine Intelligence, noted that "this is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer."
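The article doesn't reproduce the team's method in detail, but the description suggests something like the following: a model is first 'warmed up' on internally generated signals so that it defaults to low confidence, and only learns to be confident from real data. Below is a minimal, hypothetical Python sketch of that general idea – the architecture, the noise source, the loss pushing predictions toward a uniform distribution, and the confidence threshold are all illustrative assumptions, not the study's actual implementation.

```python
# Hypothetical sketch of "warm-up" confidence calibration, inspired by the
# article's description. This is NOT the KAIST team's code; the model,
# loss, and threshold below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Warm-up phase: train on internally generated noise (no external input),
# pushing predictions toward the uniform distribution so the untrained
# model defaults to maximum uncertainty ("I don't know anything yet").
uniform = torch.full((32, NUM_CLASSES), 1.0 / NUM_CLASSES)
for _ in range(500):
    noise = torch.randn(32, 64)                      # self-generated "signal"
    log_probs = F.log_softmax(model(noise), dim=1)
    loss = F.kl_div(log_probs, uniform, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Ordinary supervised training on real data would follow here, sharpening
# confidence only where the data justifies it (omitted for brevity).

def answer_or_abstain(x, threshold=0.6):
    """Return a class prediction, or None if the model is too uncertain."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    confidence, prediction = probs.max(dim=1)
    if confidence.item() < threshold:
        return None                                  # "I don't know"
    return prediction.item()

# An input unlike anything seen in training should now yield low confidence:
print(answer_or_abstain(torch.randn(1, 64)))         # likely prints None
```

The point of the sketch is the abstention step at the end: rather than always returning its best guess, a calibrated model can compare its confidence against a threshold and decline to answer – exactly the behaviour the researchers say conventional models lack.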