Anthropic publishes eerie statement about the 'moral status' of its AI


Published 09:33 23 Jan 2026 GMT

The updated policy introduces a new approach to AI behaviour

Rebekah Jordan


Anthropic has made an unnerving statement about the 'moral status' of its AI.

While ChatGPT is making a move that could spark backlash among its users, Anthropic is completely restructuring the foundational document that determines how its popular Claude AI model behaves.

According to the document, the AI lab is training the model to follow a list of principles that will govern its behaviour and values.

Anthropic has positioned Claude as the safer choice for businesses (Kenneth Cheung/Getty)


“We believe that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways rather than just specifying what we want them to do,” a spokesperson for Anthropic said, as per Fortune. “If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalise and apply broad principles rather than mechanically follow specific rules.”

While Anthropic's previous constitution was reportedly inspired by the U.N. Declaration of Human Rights and Apple’s terms of service, this updated document forms the core of Anthropic’s 'Constitutional AI' training method. The AI will use the principles to critique and evaluate its own responses instead of relying entirely on human feedback to guide it.

According to the new policy, Anthropic states that current Claude models will be 'broadly safe, broadly ethical, compliant with Anthropic’s guidelines and genuinely helpful', portraying the bot as a 'brilliant friend who also has the knowledge of a doctor, lawyer, and financial advisor.' At the same time, it sets out strict limits, such as never 'providing significant uplift to a bioweapons attack,' adding: "Claude should not undermine humans’ ability to oversee and correct its values and behavior during this critical period of AI development."

But it gets eerie where the AI giant admits uncertainty about whether Claude might possess 'some kind of consciousness or moral status.' The company says it cares about Claude’s 'psychological security, sense of self, and well-being', both for Claude’s sake and because these characteristics might shape its decision-making and safety.

Anthropic argues that the question of AI consciousness is necessary (Yuichiro Chino/Getty)

“We are caught in a difficult position where we neither want to overstate the likelihood of Claude’s moral patienthood nor dismiss it out of hand, but to try to respond reasonably in a state of uncertainty,” the company stated. “Anthropic genuinely cares about Claude’s well-being. We are uncertain about whether or to what degree Claude has well-being, and about what Claude’s well-being would consist of, but if Claude experiences something like satisfaction from helping others, curiosity when exploring ideas or discomfort when asked to act against its values, these experiences matter to us.”

Amid reports of AI technology going off the rails and making catastrophic suggestions, Anthropic is setting itself apart from rivals like OpenAI and Google DeepMind by maintaining an internal welfare team that assesses whether advanced AI systems could be conscious.

"Sophisticated AIs are a genuinely new kind of entity, and the questions they raise bring us to the edge of existing scientific and philosophical understanding," Anthropic wrote.

The tech giant is reportedly organising a $10 billion fundraise that would value the company at $350 billion.

Featured Image Credit: Devrimb / Getty
Tech News
AI

