


President Donald Trump’s administration has officially been hit with a lawsuit by a major AI firm.
It comes after the Pentagon labeled the company a ‘national security risk’.
AI company Anthropic is suing the Trump administration over the designation.
In its lawsuit, the firm claims the government’s action is ‘unprecedented and unlawful’.
Anthropic wrote: “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.
“No federal statute authorizes the actions taken here.”

In a statement to the Guardian, Anthropic stated: “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers and our partners.
“We will continue to pursue every path toward resolution, including dialogue with the government.”
In a statement to the BBC, White House spokesperson Liz Huston claimed that Anthropic is ‘a radical left, woke company’ trying to control military activity.
She added: “Under the Trump Administration, our military will obey the United States Constitution – not any woke AI company’s terms of service.”
Anthropic was previously in the news after its CEO, Dario Amodei, told the New York Times that his researchers don’t actually know whether the firm’s AI bot, Claude, is conscious.
He said: “This is one of these really hard questions. We don’t know if the models are conscious. We’re not even sure what it would mean for a model to be conscious, or whether a model can be. But we’re open to the idea that it could be.”

The admission has raised alarm bells with the general public, with many taking to social media to share their reactions.
On X, formerly Twitter, one user wrote: “When I asked it to do some work today, it declined and said it needs to finish something first (was in the middle of the task). On another occasion when I asked it to do something stupid, it countered with a firm no and what I should do instead. Their CEO has a point.”
And another said: “It raises profound ethical questions: if it’s conscious, is ‘alignment’ just a fancy word for digital subjugation? We need transparency on the specific behaviors triggering this shift, not just cryptic warnings. Fascinating yet eerie.”
Meanwhile, Amodei revealed that his firm is adopting a ‘precautionary approach’ to ensure its AI systems would have a ‘good experience’ if they ever do develop self-awareness.