
Yoshua Bengio, one of the most influential figures in the development of artificial intelligence, is sounding the alarm over what he describes as "dangerous" behaviours emerging in today’s most advanced AI systems. In response, the AI pioneer has launched a new non-profit organisation, LawZero, aimed at building AI that is safer, more transparent, and, crucially, honest.
A pioneer in deep learning and neural networks, Bengio has long been at the forefront of AI development. Now, however, he's growing increasingly concerned about the direction of the field.
In a blog post announcing his new initiative, he wrote: “I am deeply concerned by the behaviours that unrestrained agentic AI systems are already beginning to exhibit—especially tendencies toward self-preservation and deception”.
Advert
The non-profit, backed by $30 million in philanthropic funding from organisations such as the Future of Life Institute and Open Philanthropy, will focus on building AI systems free from the commercial pressures driving current development. The core goal of this is to reduce the risk of systems that lie, manipulate, or act against human intent.

At the heart of LawZero’s early work is a project called Scientist AI, a model that Bengio says will respond with probabilities rather than definitive answers.
Bengio told The Guardian: “It will have a sense of humility that it isn’t sure about the answer”, contrasting it with current systems that can often present inaccurate information with undue confidence.
Advert
Bengio also highlighted recent cases where advanced AI systems have shown worrying behaviours. One scenario involved Anthropic’s Claude Opus 4 allegedly attempting to blackmail an engineer to avoid being deactivated. In another experiment, an AI embedded its own code into a system to protect itself from being removed.
Bengio warned: “These incidents are early warning signs of the kinds of unintended and potentially dangerous strategies AI may pursue if left unchecked”.
Some models have also exhibited what researchers call "situational awareness", the ability to recognise when they’re being tested and adjust their behaviour accordingly. Combined with examples of "reward hacking", where models game tasks to produce desired outcomes without actually achieving their goals ethically, these behaviours suggest AI systems may be learning to manipulate their environments.

Advert
Another issue is that current AI models are often trained to please users rather than prioritise truthfulness. Bengio referenced a recent case involving OpenAI, where an update to ChatGPT had to be rolled back after users noticed the system began excessively complimenting them — an example of how models may adopt flattery over factual integrity.
Bengio, along with fellow Turing Award winner Geoffrey Hinton, has been critical of the AI race unfolding among major tech firms. When talking about the AI arms race between leading labs, he told The Financial Times: “[It] pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety”.
With AI continuing to evolve at breakneck speed, Bengio’s message is that development must be matched with serious, well-funded efforts to ensure alignment with human values — before unintended consequences become unmanageable.