Anthropic AI safety officer warns 'the world is in peril' in alarming resignation letter

Published 17:09 11 Feb 2026 GMT

He's led the Safeguards Research Team since it was launched in 2025

Tom Chapman

Another tech expert has spoken out about the potential dangers of artificial intelligence, with someone attached to one of the biggest AI companies out there sharing his resignation letter online.

While the term artificial intelligence was first officially coined as early as 1956, it's since seen a proverbial Big Bang through the emergence of Large Language Models like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude.

Now, someone who's worked behind the scenes on one of the big three is jumping ship and has left a dystopian warning in his wake. Having already generated concerning headlines when it published a statement on the 'moral status' of AI, Anthropic is back in the news – and this time it's not for beefing with OpenAI's Sam Altman.

As reported by Forbes, Anthropic's Mrinank Sharma left a warning as he exited through the door of the AI giant.

Posting on X, Sharma referred to his last day at Anthropic and explained why he stepped down from leading its Safeguards Research Team.

This was a position he'd held since it was launched last year, although that's just part of why the letter is generating a buzz online.

Sharma says he wants to explore a potential future with poetry (LinkedIn / Mrinank Sharma)

Saying that it's time to move on, Sharma added: "I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."

Looking ahead, he claims we're "approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences."

Sharma's work covered everything from developing defences against AI-assisted bioterrorism to tackling 'AI sycophancy', where chatbots gush over users as a way to flatter them.

The latter issue has come to the fore more recently thanks to people falling in love with LLMs.

His last project as head of the Safeguards Research Team looked at how AI assistants can "distort our humanity." It all feels pretty Blade Runner, and the average reader could be forgiven for being confused about what Sharma was actually working on, although he said: "Moreover, throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions.

"I've seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most, and throughout broader society too."


As for what's next, Sharma has suggested he could take up poetry, saying he wants to "contribute in a way that feels fully in my integrity," while also being able to "explore the questions that feel essential to me."

That all sounds pretty vague, although he tried to expand on his next adventure, saying: "My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence."

For those still intrigued by Sharma's work at Anthropic, Cornell University has posted the findings of his study into 'distorting' humanity. Sharma claims to have found 'thousands' of incidents a day where chatbots distort our perception of reality.

Referring to 'disempowerment patterns', Sharma maintains that his work "highlight[s] the need for AI systems designed to robustly support human autonomy and flourishing."

Anthropic isn't alone in losing high-ranking safety specialists, with Forbes reminding us that OpenAI had to disband its Superalignment safety research team when two key members handed in their notice.

It seems an increasing number of people are leaving similar positions across the spectrum of companies, as staff continue to cite ethical and safety concerns.

Featured Image Credit: SmileStudioAP / Getty
