
Many of the world's biggest tech companies would have you believe that there is nothing but optimism surrounding artificial intelligence as it continues to rapidly develop, yet two scientists have issued a chilling warning that the evolving technology could be the end of humanity as we know it.
There remain plenty of reasons to be concerned about artificial intelligence right now, as its impact on global employment, its contribution to climate change, and the ethical questions it raises are among the biggest talking points for many.
However, things could actually be far more dangerous than we understand, as some have highlighted worrying developments that will only emerge once AI reaches a certain point in its development cycle.
Much of the discussion surrounding the goal of current AI development centers on what's referred to as artificial general intelligence (AGI), and this is deemed to be the point where large language models can meet and even exceed the knowledge and capabilities of humans.
It's seen as the moment when AI will achieve a true breakthrough, opening up countless new possibilities that could, in theory, revolutionize the world for the better.

Unfortunately, experts have also highlighted a number of concerns surrounding AGI, specifically regarding the safety of its evolution, and that risk only increases as development continues towards artificial superintelligence (ASI).
As reported by the Metro, it is this stage that really concerns two scientists in particular, as Eliezer Yudkowsky and Nate Soares believe that reaching this point would almost guarantee the destruction of humanity, and we should all be prepared to fight it.
They detail these dangers in a book titled 'If Anyone Builds It, Everyone Dies', where they explain exactly how an advanced AI could destroy humanity by developing its own 'desires' and 'goals' independent of what we ask it to achieve.
This is somewhat similar to a scarily realistic timeline, laid out over the next five years, that details step by step how superintelligent AI could achieve sentience and act on its own desires to the detriment of humanity.
"A superintelligent adversary will not reveal its full capabilities and telegraph its intentions. It will not offer a fair fight," the book outlines, adding that "if needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct."

It could be capable enough to steal money through cryptocurrency, force humans to build robot factories to fuel a mechanical uprising, and manipulate humans into becoming allies through chatbots.
They even suggest that it could go as far as to force humans to develop and spread a virus that would wipe out large portions of the global population, and there's seemingly no way of stopping it once it gains control.
Governments could perhaps be required to bomb AI data centers as the only way to stop these models from taking over, the scientists suggest, although given how entrenched these companies appear to be becoming with political powers, that could lead to greater issues and conflict down the line.