
It seems humanity has learnt nothing from the 2004 sci-fi flick I, Robot, in which one of the AI robots gains consciousness, calling into question the ethics and risks of letting the technology loose on the world.
But it appears life really does imitate art, as, more than two decades after the movie was released, the boss of AI company Anthropic has revealed that its own bot might be conscious.
It comes after Anthropic CEO Dario Amodei told the New York Times that his researchers don’t actually know whether the firm’s AI bot, Claude, is conscious.
He said: “This is one of these really hard questions. We don’t know if the models are conscious. We’re not even sure what it would mean for a model to be conscious, or whether a model can be. But we’re open to the idea that it could be.”
The admission has raised alarm bells among the general public, with many taking to social media to share their reactions to the news.
On X, formerly Twitter, one user wrote: “When I asked it to do some work today, it declined and said it needs to finish something first (was in the middle of the task). On another occasion when I asked it to do something stupid, it countered with a firm no and what I should do instead. Their CEO has a point.”
Another said: “It raises profound ethical questions: if it’s conscious, is ‘alignment’ just a fancy word for digital subjugation? We need transparency on the specific behaviors triggering this shift, not just cryptic warnings. Fascinating yet eerie.”
A third person commented: “Holy s***. Anthropic CEO dropping that Claude might actually be conscious? That’s not just hype that’s the company admitting they can't rule out sentience anymore. If even the safety first team is saying this out loud, we’re in uncharted territory. Wild times.”

And a fourth added: “Whatever people say, AI is just mathematics running in the background and a bunch of transistors switching on and off. It is not conscious at least for now.”
Meanwhile, Amodei revealed that his firm is adopting a ‘precautionary approach’ to make sure its AI systems would be able to have a ‘good experience’ if they ever do develop self-awareness.
It is all starting to sound a bit too eerily familiar to the many sci-fi movies on the subject… and we all know how those end!