
Top AI researcher says we should stop developing AI now before it ends humanity

It's scary to hear this from an expert.

Whether in science fiction or in real life, there's a long history of anxiety about the power that true artificial intelligence might one day wield.

That power has been debated at length, and plenty of experts have weighed in with their own assessments - but most of those assessments are pretty scary.

One of the most significant comes from Geoffrey Hinton, known as one of the three so-called 'Godfathers of AI', which puts him in a strong position to comment on the technology's development.

British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton, known as the 'godfather of AI'. Photo by GEOFF ROBINS/AFP via Getty Images.

Shockingly, Hinton has estimated that there's around a 10% chance that a full artificial intelligence would wipe out humanity, just as so many novelists have imagined.

Even more amazingly, another of the three 'Godfathers of AI', Canadian computer scientist Yoshua Bengio, is even more pessimistic, rating the chance at around 20%.

The third figure, Yann LeCun, is the only optimist of the group, putting the chance at around 0.01% - but across the three, that still averages out at a pretty worrying 10% or so.

This goes to show that, while generative text models might seem impressive, if they're ultimately leading us towards true artificial intelligence - one that far outstrips the reasoning and complexity of a human brain - then there are major risks attached.

Of course, there are plenty of big voices in the tech world who are of a different opinion, with Elon Musk being one of the most high-profile.

At a recent conference, according to Business Insider, he made the interesting admission that he agrees with Hinton, to a limited degree: "I think there's some chance that it will end humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that."

That doesn't sound great, but Musk had a twist in his argument: "I think that the probable positive scenario outweighs the negative scenario."

Elon Musk has been quoted as telling Business Insider that there is 'some chance it [AI] will end humanity'. Leon Neal / Staff / Getty

So, Musk's position is that despite this risk of utter destruction, we should push ahead with AI development - and he's doing just that: one of his many companies, xAI, is actively developing AI tools right now.

The AI world is full of experts, and another has since weighed in on Musk's attitude. According to Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, "if anything, [Musk] is a bit too conservative".

Yampolskiy argues that if a true AI is built, it's all but guaranteed to go rogue with catastrophic consequences for humanity - a near-certainty, to his mind, and once again a pretty shocking claim.

When this many people in the know are sounding the alarm, you have to wonder whether all this AI development is a remotely good idea.

Featured Image Credit: Westend61/Yuichiro Chino/Getty