


An AI expert has claimed that there is a 99.9% risk of human extinction coming much sooner than we think.
There's understandably been a lot of worry about whether AI is going to take over humanity. Where some have seen their jobs become 'easier' with the advent of AI software, others have watched their roles become obsolete as automated systems take over.
Of course, some positions are definitely safer than others right now, with research from tech giant Microsoft and, separately, its co-founder Bill Gates identifying the specific jobs that are currently safe from the chopping block.
Meanwhile, other experts have probed AI's intentions more directly, asking jailbroken chatbots whether they would harm humans to ensure their own survival. Safe to say, the responses haven't exactly been reassuring.

Likewise, the so-called 'godfather of AI' Yoshua Bengio has warned that the future of artificial intelligence could lead to the end of humanity as we know it.
Now, AI researcher Roman Yampolskiy, who focuses on AI safety and cybersecurity at the University of Louisville, agrees that civilisation could face extinction due to AI development - and a lot sooner than we think.
Appearing on Lex Fridman's podcast on Sunday, the computer scientist forecast a worrying 99.9% probability that AI will wipe out humanity before the end of the century.
Across the two-hour conversation, he claimed that every AI system released so far has had security vulnerabilities, and that it is unlikely future versions will avoid similar fatal flaws. His warning echoes those of a group of other pioneering AI developers, and comes amid President Trump's push for artificial intelligence dominance.
In Yampolskiy's book, AI: Unexplainable, Unpredictable, Uncontrollable, he aims to provide a 'broad introduction to the core problems, such as the unpredictability of AI outcomes or the difficulty in explaining AI decisions.'

He explained: "This book arrives at more complex questions of ownership and control, conducting an in-depth analysis of potential hazards and unintentional consequences.
"The book then concludes with philosophical and existential considerations, probing into questions of AI personhood, consciousness, and the distinction between human intelligence and artificial general intelligence (AGI)."
OpenAI's Sam Altman has voiced a similar view, previously declaring: "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
However, some research challenges Yampolskiy's extinction forecast, suggesting that the threat is much lower than his calculations predict. According to research from the Universities of Oxford in England and Bonn in Germany, there is only a 5% probability that AI will eliminate humanity, based on evaluations from over 2,700 AI researchers.
"People try to talk as if expecting extinction risk is a minority view, but among AI experts it is mainstream," said Katja Grace, one of the paper's authors. "The disagreement seems to be whether the risk is 1% or 20%."
In contrast, several leading AI specialists have dismissed outright the claim that AI will bring about an apocalypse, including Google Brain co-founder Andrew Ng and AI pioneer Yann LeCun.