
The ‘Godfather of AI’ has explained the eerie reason why he tells lies to chatbots.
Yoshua Bengio is a computer science professor at the Université de Montréal and is considered to be one of the three ‘Godfathers of AI’.
Appearing in a recent episode of ‘The Diary of a CEO’ podcast, he told host Steven Bartlett that he found chatbots useless at giving feedback because they are always positive.
In the episode, posted to YouTube, he said: “I wanted honest advice, honest feedback. But because it is sycophantic, it's going to lie.
“If it knows it’s me, it wants to please me.”
So instead, Bengio made the decision to lie to his AI assistant, pretending that his ideas were actually those of a colleague in order to receive more honest replies.
Earlier this year, Bengio revealed that he would be launching an AI safety research nonprofit known as LawZero.
He said: “This sycophancy is a real example of misalignment. We don't actually want these AIs to be like this.”
Bengio went on to warn that one danger of this behavior is that users may become emotionally attached to AI.
Many people took to the YouTube comment section to share their own reactions to the podcast episode, with one user writing: “I don’t feel a gap in understanding the risks of AI. I feel a gap in the power to do anything about it.”
Another said: “I don’t understand why we can’t come to the most basic agreement to watermark any AI generated content.”
A third person commented: “My problem is AI lies over and over... it changes its fact checking, it can't be trusted.”

And a fourth added: “Doesn't seem a bit odd that the same people that created and promoted AI are the very ones now telling us it just might destroy us..?”
This isn’t the first time a ‘Godfather of AI’ has appeared on the podcast, as Geoffrey Hinton, who shares the title, spoke with Bartlett earlier this year.
During the chat, Hinton revealed that he believes there is a 20% chance the rise of AI could lead to human extinction, even as he acknowledged the benefits it could bring to healthcare and education.
The expert also shared his deep regret at having helped to create the technology, highlighting his belief that it poses a threat to humanity.