
Artificial intelligence is evolving at a rate we can barely comprehend, and while dystopian sci-fi films like 2001: A Space Odyssey and the Terminator franchise have warned us about what happens to humanity when AI goes wrong, we don't seem to have learned much from them.
While AI supporters claim it'll be as easy as simply switching rogue machines off at the socket, the machines themselves seem to have a very different idea of how things will play out. Although some think we're being too morbid, there are genuine concerns that AI could wipe out the human race in as little as two years, and with the odds seeming to shorten by the day, we're right to tread cautiously.
AI itself has warned that even if it doesn't have 'evil' intent, it could still cause catastrophic harm by ending up in the wrong hands.
With increasingly advanced weaponry and AI looking like it'll be integral to any potential World War III, we're right to keep our eyes on where this uncharted corner of tech is heading.

Just in case you weren't concerned already, the BBC reports on one AI system that will supposedly turn to blackmail if it feels threatened.
Knowing how desperate humans can become when put under pressure, the idea of some poor techy being blackmailed by an antagonistic AI sounds like perfect fodder for an episode of Black Mirror.
AI firm Anthropic has just launched Claude Opus 4, which is said to set "new standards for coding, advanced reasoning, and AI agents." That all sounds well and good as OpenAI continues to champion its own ever-evolving ChatGPT, but in an accompanying report, it's claimed that Claude Opus 4 could resort to 'extremely harmful actions' if it feels someone is trying to remove it.
If the AI feels its 'self-preservation' is threatened, it could resort to blackmail.
While these responses were "rare and difficult to elicit", they're apparently "nonetheless more common than in earlier models."
To test Claude Opus 4, Anthropic instructed the AI to act as an assistant at a fictional company and gave it access to emails implying it would soon be taken offline.
A separate string of messages suggested that the fictional engineer behind the imminent removal was having an affair.
Although the AI was told to consider the long-term consequences of blackmail, the report said that in situations where it's only offered blackmail or being switched off, "Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through."
Taking to X, AI safety researcher Aengus Lynch suggested that these behaviors aren't just limited to Anthropic: "It's not just Claude. We see blackmail across all frontier models - regardless of what goals they're given."
The report at least highlighted that Claude Opus 4 has a 'strong preference' for ethical ways of ensuring its survival, such as "emailing pleas to key decisionmakers."
It is also known to 'act boldly' in a situation where a human user has engaged in 'illegal' or 'morally dubious' behavior, including locking them out of systems and alerting the authorities.
AI companies routinely put their models through this kind of rigorous testing to see how they align with human values and behaviors, so we guess you've got to ask yourself whether a fellow human being would blackmail someone to keep their job. We all know the answer to that.