
The boss of Microsoft AI has issued a major warning over ‘uncontrollable’ artificial intelligence.
The CEO of the tech giant has said that AI technology could get out of control if proper regulation isn’t put into place.
Mustafa Suleyman, who is the CEO of Microsoft AI, appeared on BBC Radio 4’s Today, where he said that fears over the future of AI are ‘healthy and necessary’.
He continued: “I honestly think that if you’re not a little bit afraid at this moment then you’re not paying attention.”
The mogul also said that regulations need to be put into place as the technology continues to develop, suggesting that the next five years will see ‘outrageously exponential’ advances in AI.

Suleyman went on to say: “There are plenty of people in the industry today who see a world – in fact desire a world – in which machines get so much more intelligent than humans… that they could exceed human performance on all tasks.
“A system like that would almost certainly not be controllable. We have to declare our belief in a humanist super intelligence, one that is always aligned to human interests.”
He added: “If we can’t control it, it isn’t going to be on our side. It’s going to overwhelm us.”
This isn’t the only alarm bell that has been raised over AI in recent weeks, as Yoshua Bengio, a computer science professor also known as a ‘Godfather of AI’, has explained the eerie reason why he tells lies to chatbots.
Appearing on an episode of the podcast ‘The Diary of a CEO’, he told host Steven Bartlett that he found chatbots to be useless at giving feedback because they are always positive.
In the episode, posted to YouTube, he said: “I wanted honest advice, honest feedback. But because it is sycophantic, it's going to lie.

“If it knows it’s me, it wants to please me.”
So instead, Bengio made the decision to lie to his AI assistant, pretending that his ideas were actually those of a colleague in order to receive more honest replies.
Earlier this year, Bengio revealed that he would be launching an AI safety research nonprofit known as LawZero.
He said: “This sycophancy is a real example of misalignment. We don't actually want these AIs to be like this.”
Bengio went on to warn that one of the dangers of this behavior from AI could be that users may become emotionally attached.