


While it might currently face competition from Anthropic's Claude at the top, OpenAI's ChatGPT has remained the leading AI tool for most people for the last few years to the point where many solely associate the model with artificial intelligence as a whole.
Surging into the limelight almost immediately after it was launched, ChatGPT quickly achieved record sign-up numbers and has fundamentally changed the world of employment and how many people use social media.
It's not all good news for its creator, OpenAI, these days, though: concerns about its ability to generate revenue have been compounded by the closure of its supposedly leading video generation tool, and issues with hallucinations continue to persist.
One of the bigger issues that has continued to plague people's experience and perception of ChatGPT over the years is its tendency to hallucinate information, often paired with a dangerous degree of sycophancy.
There have been numerous examples of this over the years, including lengthy videos in which the AI tool finds itself in a near-endless loop of justifications, but one YouTuber has recently gone viral after catching ChatGPT in a lie that's both hilarious and concerning.
The video, shared by HuskIRL, shows him exposing one particular voice model's inability to record or measure time by asking it to measure him running a mile.
Seconds after starting the 'timer' he asks the AI assistant to stop it and report the result, at which point it asserts that he "clocked it at around 10 minutes and 12 seconds," which couldn't be further from the truth.
This quickly went viral across social media, and even reached the eyes of OpenAI CEO Sam Altman, who was asked to respond to the clip in an interview with Mostly Human.
Altman laughed in response to the clip, but clearly struggled to justify quite how bad it looked to viewers, offering up as much of an explanation as he could to address the concerns of many.
When asked by veteran tech journalist Laurie Segall whether he needed to show that to his product guys back at OpenAI, Altman responded: "No, no, that's a known issue. Maybe another year."
Altman was then pressed on the nature of this 'known issue', to which he claimed that the voice model in question "doesn't have tools to start a timer or anything like that," noting that they "will add the intelligence into the voice models" at some point in the future.
What has frightened people the most, however, is the AI tool's inability to simply admit that it doesn't know the answer or can't provide it; instead it offers up an incorrect response that's useless at best and potentially harmful in more extreme scenarios.
"I think the bigger problem is how it's lying and gaslighting," wrote one commenter underneath the video, with another adding that "the AI is still incapable of saying 'I don't know' after all these years."
HuskIRL even offered his own response to the situation, asking the same voice model not only to identify Sam Altman but also to react to the above clip, with the model asserting that neither it nor its creator was lying despite the obvious contradiction.
It was then tasked with completing the same running-timer test once again, and although the second attempt lasted only a few seconds, it produced a completely new – yet equally incorrect – time and refused to accept that it was wrong.