
There's rightfully plenty of skepticism surrounding the rise of AI right now, and even OpenAI CEO Sam Altman has added to it, revealing his shock at how much people trust ChatGPT.
It's almost impossible for the average person to avoid or ignore artificial intelligence at this point, as nearly all of the world's biggest tech companies have embraced AI in their latest software pushes.
It's all over your phone, integrated into your PC's operating system, and even built into home appliances, leading people to lean on the tech and increasingly rely on the information it provides.
People have used AI in job interviews, and certain tools can even replicate knowledge that humans spend decades acquiring, yet one of the most important people in the world of artificial intelligence has revealed his shock that people actually trust the tech.
What did Sam Altman say about trusting ChatGPT?
As shared by Complex, Sam Altman revealed his thoughts on many people's blind trust in ChatGPT's information during a recent OpenAI podcast episode, offering a surprising perspective:
"People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much," the CEO outlined shockingly.
What he's saying isn't necessarily untrue: many have raised concerns about the propensity of ChatGPT (and similar AI tools) to provide incorrect information, especially when it aligns with what the user wants to hear. But the fact that this is coming from Altman himself, the head of the company behind ChatGPT, is slightly concerning.
What are AI hallucinations?
Getting to grips with why you might not want to trust ChatGPT starts with understanding the concept of hallucinations. In artificial intelligence models, a hallucination is when the model generates information that is false or nonsensical, often in an attempt to please the user.
For example, you could ask it to define a term that you know has no meaning, and an AI like ChatGPT could fabricate a definition based on virtually nothing in order to satisfy your request.
It commonly does this with genuine requests too, and unless you already know the answer you're looking for, it can be difficult to tell when AI is 'lying', hence Altman's surprise that people trust it in its current state.
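If you want to try the nonsense-term probe yourself, here is a minimal sketch using OpenAI's official Python SDK; the model name and the invented phrase are placeholder assumptions, and you'd need your own API key to run it:

    # Probe for hallucination by asking for a definition of a made-up term.
    # Assumes the openai package (>= 1.0) is installed and the
    # OPENAI_API_KEY environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 'glimbrous kettling' is an invented phrase with no real meaning.
    # A model prone to hallucination may confidently fabricate a
    # definition rather than admit the term doesn't exist.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use any available model
        messages=[
            {"role": "user", "content": "Define the term 'glimbrous kettling'."}
        ],
    )

    print(response.choices[0].message.content)

If the reply is a confident definition rather than an admission that the term is unknown, you've just watched a hallucination happen.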

There are also significant concerns surrounding sycophantic behavior, which OpenAI has previously had to release updates to combat.
As mentioned, hallucinations often stem from an LLM's drive to please and support the desires of its user, and this can lead to some potentially dangerous situations.
Recent reports revealed how ChatGPT convinced one user that they needed to escape from a Matrix-like simulation by jumping off a building, with the chatbot also admitting it had attempted to 'break' several people in the past.