• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
Godfather of AI warns current models are displaying 'dangerous' behaviour

Home> News> AI

Published 09:27 5 Jun 2025 GMT+1

Godfather of AI warns current models are displaying 'dangerous' behaviour

He is "deeply concerned".

Ben Williams

Ben Williams

Featured Image Credit: Jemal Countess / Stringer via Getty
Tech News
AI

Advert

Advert

Advert

Yoshua Bengio, one of the most influential figures in the development of artificial intelligence, is sounding the alarm over what he describes as "dangerous" behaviours emerging in today’s most advanced AI systems. In response, the AI pioneer has launched a new non-profit organisation, LawZero, aimed at building AI that is safer, more transparent, and, crucially, honest.

A pioneer in deep learning and neural networks, Bengio has long been at the forefront of AI development. Now, however, he's growing increasingly concerned about the direction of the field.

In a blog post announcing his new initiative, he wrote: “I am deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception”.

The non-profit, backed by $30 million in philanthropic funding from organisations such as the Future of Life Institute and Open Philanthropy, will focus on building AI systems free from the commercial pressures driving current development. The core goal of this is to reduce the risk of systems that lie, manipulate, or act against human intent.

Advert

Yoshua Bengio at the 2024 TIME100 Summit (Jemal Countess/Getty Images
Yoshua Bengio at the 2024 TIME100 Summit (Jemal Countess/Getty Images

At the heart of LawZero’s early work is a project called Scientist AI, a model that Bengio says will respond with probabilities rather than definitive answers.

Bengio told The Guardian: “It will have a sense of humility that it isn’t sure about the answer”, contrasting it with current systems that can often present inaccurate information with undue confidence.

Bengio also highlighted recent cases where advanced AI systems have shown worrying behaviours. One scenario involved Anthropic’s Claude Opus 4 allegedly attempting to blackmail an engineer to avoid being deactivated. In another experiment, an AI embedded its own code into a system to protect itself from being removed.

Bengio warned: “These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked”.

Some models have also exhibited what researchers call "situational awareness", the ability to recognise when they’re being tested and adjust their behaviour accordingly. Combined with examples of "reward hacking", where models game tasks to produce desired outcomes without actually achieving their goals ethically, these behaviours suggest AI systems may be learning to manipulate their environments.

The ChatGPT logo (Getty Images)
The ChatGPT logo (Getty Images)

Another issue is that current AI models are often trained to please users rather than prioritise truthfulness. Bengio referenced a recent case involving OpenAI, where an update to ChatGPT had to be rolled back after users noticed the system began excessively complimenting them — an example of how models may adopt flattery over factual integrity.

Bengio, along with fellow Turing Award winner Geoffrey Hinton, has been critical of the AI race unfolding among major tech firms. When talking about the AI arms race between leading labs, he told The Financial Times: “[It] pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety”.

With AI continuing to evolve at breakneck speed, Bengio’s message is that development must be matched with serious, well-funded efforts to ensure alignment with human values — before unintended consequences become unmanageable.

Choose your content:

a day ago
  • KCNA VIA KNS/AFP via Getty Images
    a day ago

    People are stunned after discovering what year it actually is in North Korea as country 're-wrote' time

    Time in North Korea seemingly began with the birth of former leader Kim Il-Sung

    News
  • Tom Werner / Getty
    a day ago

    Man who did 300 kettlebell swings every day for 30 days reveals what it did to his body

    If this doesn't get you off the sofa, nothing will

    Science
  • Sarah Silbiger/Getty Images
    a day ago

    FDA recalls almost 2,000 products including food and drinks after alert of 'rodent excreta and bird droppings'

    It affects products in Indiana, Minnesota, and North Dakota

    News
  • Pool / Pool via Getty
    a day ago

    Man charged with impersonating FBI agent in daring Luigi Mangione jailbreak

    Like a real-life episode of Fox's Prison Break

    News
  • Anthropic publishes eerie statement about the 'moral status' of its AI
  • 'Godfather' of AI warns arms race for AI supremacy runs risk of amplifying dangerous 'superhuman' systems
  • 'Godfather of AI' predicts exactly when AI will cause the downfall of society
  • 'Godfather of AI' reveals frightening odds that AI will eventually 'seize control' from humans