
There are growing fears that the next world war will be fought from behind a screen rather than on the battlefield, with artificial intelligence likely to be a driving force in the future of global conflict.
We'd always hoped the Terminator movies would remain a piece of fiction, and while we're not quite at the level of T-800 robots marching into battle, the recent US-Israel strikes against Iran, and Iran's retaliation, have given us an idea of where things are heading.
Even though the US government severed ties with Anthropic over the company's concerns that its Claude artificial intelligence would be used for "all lawful purposes", its tech is reportedly still being used to attack Iran. OpenAI, meanwhile, has just announced a landmark deal to work with the government, prompting an immediate backlash against CEO Sam Altman and a surge in ChatGPT uninstalls.
Altman has attempted to downplay fears that we're on the cusp of an AI-led WWIII, telling skeptics: "We get to decide what system to build, and the DoW understands that there are a lot of risks we deeply understand. We can, and will, build a lot of protections into that system, including for ensuring that the red lines are not crossed."

Critics have pointed out that "all lawful purposes" suggests American citizens could be put under surveillance under the post-9/11 U.S. Patriot Act. Altman has only added to the confusion after he seemed to confirm that 'operational decisions' are out of OpenAI's hands.
As reported by CNBC, Altman reportedly led an all-hands meeting with OpenAI staffers and explained the current situation: "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that."
One person familiar with the meeting says Altman maintained that the Pentagon wants to leverage OpenAI's technical expertise and will seek its advice on where models can be placed. OpenAI will also be allowed to build the safety stack it thinks is 'appropriate'; however, operational decisions will ultimately lie with Secretary of War Pete Hegseth.
Altman has come under fire on social media due to the timing of Anthropic being blacklisted as a supply chain risk and OpenAI's new deal.
The tech mogul has since taken to X and admitted that OpenAI shouldn't have rushed to get the deal out. Saying that the issues are "super complex, and demand clear communication," he added that the company's hopes of de-escalating things came across as "opportunistic and sloppy."
This has already set alarm bells ringing, as Altman concluded that it's a 'good learning experience' for him ahead of higher-stakes decisions in the near future.
As for the idea that the government gets to decide where, how, and when OpenAI's military capabilities are deployed, one person on Reddit wrote: "Welp, that means that AI has full autonomous control and is capable of making the decision to end a human life. Everything this guy says is an intentional lie."
Another added: "Horrifying either way, but really shouldn’t need to be said."
A third groaned, "At this rate I am just ready to get the apocalypse over with," while a fourth continued to slam Sam Altman as they concluded: "And the consumers' spending decisions are up to them, so no more $20/mo for you OpenAI."
Even as ChatGPT boycotts continue, it looks like OpenAI will continue on its new path with the Pentagon.
UNILADTech has reached out to OpenAI for comment.