


The US government has just issued a major threat to all of the biggest AI companies following a dispute with Anthropic, warning that 'serious punishment' could be on the horizon if they don't comply.
While many people have brought various forms of artificial intelligence into their lives in the three years since ChatGPT first launched, one place it has taken hold in a big way is the United States government.
This is evident in the ties the US military and the Pentagon currently have with Anthropic, the company behind the AI tool Claude, which is currently the only model allowed for use in classified systems, to the point where it was used during the attack on Venezuela and Nicolás Maduro earlier this year.
However, that once-strong relationship appears to be in jeopardy following a clash of opinions, as Anthropic CEO Dario Amodei's apparent unwillingness to go along with the Pentagon's demands could put the company, and every other major AI firm, at serious risk.

As reported by Axios, Anthropic and the Pentagon have been locked in months of negotiations over how Claude will be used within the military, and while Amodei is described as a 'pragmatist', he still takes serious issue with some aspects of the agreement.
Seemingly central to the current split of opinion is Anthropic's requirement that Claude not be used either to spy on Americans at large or to develop weapons that can be fired without any human involvement, both scary prospects.
The Pentagon claims that these terms would make the agreement 'unworkable' due to the 'gray areas' that crop up, noting that any agreement would need to allow the use of AI for "all lawful purposes."
Speaking to Axios, chief Pentagon spokesman Sean Parnell said: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win any fight. Ultimately, this is about our troops and the safety of the American people."
Defense Secretary Pete Hegseth has taken things a step further, however, labeling Anthropic a "supply chain risk," a designation that means anyone doing business with the US military has to cut ties with the company, something typically reserved for foreign adversaries.

While Anthropic's current contract with the US government is 'only' worth around $200 million against its $14 billion in annual revenue, the supply chain designation could leave it in far deeper financial trouble, especially considering how much of the AI industry is operating on shared investment right now.
Google, OpenAI, and Elon Musk's xAI have all agreed to remove their safeguards so that their respective models can be used in the government's unclassified systems, yet they remain barred from the classified data and work that Anthropic has handled until now.