
Published 09:33 23 Jan 2026 GMT

Anthropic publishes eerie statement about the 'moral status' of its AI

The updated policy introduces a new approach to AI behaviour

Rebekah Jordan

Anthropic has made an unnerving statement about the 'moral status' of its AI.

While ChatGPT is making a major move that could spark a backlash among its users, Anthropic is completely restructuring the foundational document that determines how its popular Claude AI model behaves.

According to the document, the AI lab is training the model to follow a list of principles that will govern its behaviour and values.

Anthropic has positioned Claude as the safer choice for businesses (Kenneth Cheung/Getty)


“We believe that in order to be good actors in the world, AI models like Claude need to understand why we want them to behave in certain ways rather than just specifying what we want them to do,” a spokesperson for Anthropic said, as per Fortune. “If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalise and apply broad principles rather than mechanically follow specific rules.”

While Anthropic's previous constitution was reportedly inspired by the UN's Universal Declaration of Human Rights and Apple’s terms of service, this updated document forms the core of Anthropic’s 'Constitutional AI' training method. The AI uses the principles to critique and evaluate its own responses instead of relying entirely on human feedback to guide it.

According to the new policy, Anthropic states that current Claude models should be 'broadly safe, broadly ethical, compliant with Anthropic’s guidelines and genuinely helpful', and portrays the bot as a 'brilliant friend who also has the knowledge of a doctor, lawyer, and financial advisor.' At the same time, it outlines strict limitations, such as never 'providing significant uplift to a bioweapons attack,' adding: "Claude should not undermine humans’ ability to oversee and correct its values and behavior during this critical period of AI development."

But where it gets eerie is the AI giant's admission of uncertainty about whether Claude might possess 'some kind of consciousness or moral status.' The company says it cares about Claude’s 'psychological security, sense of self, and well-being', both for Claude’s sake and because these characteristics might shape its decision-making and safety.

Anthropic argues that the question of AI consciousness needs to be taken seriously (Yuichiro Chino/Getty)

“We are caught in a difficult position where we neither want to overstate the likelihood of Claude’s moral patienthood nor dismiss it out of hand, but to try to respond reasonably in a state of uncertainty,” the company stated. “Anthropic genuinely cares about Claude’s well-being. We are uncertain about whether or to what degree Claude has well-being, and about what Claude’s well-being would consist of, but if Claude experiences something like satisfaction from helping others, curiosity when exploring ideas or discomfort when asked to act against its values, these experiences matter to us.”

Amid news of AI technology going off the rails and making catastrophic suggestions, Anthropic is setting itself apart from rivals like OpenAI and Google DeepMind with an internal welfare team that assesses whether advanced AI systems could be conscious.

"Sophisticated AIs are a genuinely new kind of entity, and the questions they raise bring us to the edge of existing scientific and philosophical understanding," Anthropic wrote.

The tech giant is reportedly organising a $10 billion funding round that would value the company at $350 billion.

Featured Image Credit: Devrimb / Getty
Tech News
AI
