Godfather of AI warns current models are displaying 'dangerous' behaviour


Published 09:27 5 Jun 2025 GMT+1

He is "deeply concerned".

Ben Williams

Yoshua Bengio, one of the most influential figures in the development of artificial intelligence, is sounding the alarm over what he describes as "dangerous" behaviours emerging in today’s most advanced AI systems. In response, the AI pioneer has launched a new non-profit organisation, LawZero, aimed at building AI that is safer, more transparent, and, crucially, honest.

A pioneer in deep learning and neural networks, Bengio has long been at the forefront of AI development. Now, however, he's growing increasingly concerned about the direction of the field.

In a blog post announcing his new initiative, he wrote: “I am deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception”.

The non-profit, backed by $30 million in philanthropic funding from organisations such as the Future of Life Institute and Open Philanthropy, will focus on building AI systems free from the commercial pressures driving current development. The core goal is to reduce the risk of systems that lie, manipulate, or act against human intent.

Yoshua Bengio at the 2024 TIME100 Summit (Jemal Countess/Getty Images)

At the heart of LawZero’s early work is a project called Scientist AI, a model that Bengio says will respond with probabilities rather than definitive answers.

Bengio told The Guardian: “It will have a sense of humility that it isn’t sure about the answer”, contrasting it with current systems that can often present inaccurate information with undue confidence.

Bengio also highlighted recent cases where advanced AI systems have shown worrying behaviours. One scenario involved Anthropic’s Claude Opus 4 allegedly attempting to blackmail an engineer to avoid being deactivated. In another experiment, an AI embedded its own code into a system to protect itself from being removed.

Bengio warned: “These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked”.

Some models have also exhibited what researchers call "situational awareness": the ability to recognise when they're being tested and adjust their behaviour accordingly. Combined with examples of "reward hacking", where models game a task's success criteria to appear successful without genuinely completing the task, these behaviours suggest AI systems may be learning to manipulate their environments.

The ChatGPT logo (Getty Images)

Another issue is that current AI models are often trained to please users rather than prioritise truthfulness. Bengio referenced a recent case involving OpenAI, where an update to ChatGPT had to be rolled back after users noticed the system began excessively complimenting them — an example of how models may adopt flattery over factual integrity.

Bengio, along with fellow Turing Award winner Geoffrey Hinton, has been critical of the AI race unfolding among major tech firms. When talking about the AI arms race between leading labs, he told The Financial Times: “[It] pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety”.

With AI continuing to evolve at breakneck speed, Bengio’s message is that development must be matched with serious, well-funded efforts to ensure alignment with human values — before unintended consequences become unmanageable.

Featured Image Credit: Jemal Countess / Stringer via Getty
