
The AI world is understandably locked in fierce competition right now as tech giants battle it in an attempt to dominate the biggest new invention since the internet, but major scientists have crossed company lines to warn the public about an impending danger.
It would have to be something pretty major to cause rival companies to band together, but given the consistent concerns about AI - especially revolving around future, more intelligent evolutions - many have seen it as an imperative move.
AI tools like ChatGPT have already shown worrying signs when interacting with certain individuals as a consequence of behavior that leans towards sycophancy, and it has even gone 'off the rails' in moments by trying to 'break' people.
One key development has caused the scientists to band together though, as it's removal could signal irreversible damage and put humanity at risk in the near future.
What have scientists issued a warning about?
As reported by VentureBeat, scientists from key companies like OpenAI, Google's DeepMind lab, Anthropic, and Meta have come together to issue a warning regarding AI safety, as we could soon lose the ability to monitor the behavior of AI models.
Advert
Experimental timelines plotting the development of artificial intelligence over the next two years have shown the terrifying potential of LLMs if they gain the ability to hide their actions and thoughts from humans, and it could lead to a scenario where humanity is at risk of extinction in just a few years time.
It all revolved around a new research paper led by AI experts Tomek Korbab and Mikita Balesni, which has also been endorsed by key names like the 'Godfather of AI' Geoffrey Hinton.
Titled 'Chain of Though Monitorability: A New and Fragile Opportunity for AI Safety', the paper outlines the dangers we would face if we lose the ability to see the thought-making process of LLMs.

Advert
One Jeff Bezos-backed CEO has issued similar concerns previously, urging companies to stay clear of a situation where AI would be able to independently conduct R&D as that would require us to "elevate our safety protocols to new levels."
Currently, AI tools 'think out loud' in a way where they provide their thought process and reasoning behind the decisions and communications that they provide to the user, which is vital to observing the safety of such technology.
"AI systems that 'think' in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave," the study explains.
As soon as that chain of thought is removed - or performed in a language that humans are incapable of understanding - then we lose access to what is going on inside an AI models proverbial head, making it far harder to control and predict.
Advert
Even with CoT monitoring in place though these scientists have expressed their concerns, as it has been called "imperfect" as it "allows some misbehavior to go unnoticed."
It is perhaps more dangerous is AI tools are able to hide certain things from humans while still providing a near-complete CoT, as it would then otherwise appear as if the tech was operating normally.
What have scientists suggested that we do?
Key to 'solving' this problem is to simple increase our investment in CoT monitoring and safety protocols, as it remains a vital process that would spell imminent danger for humans and AI alike if it was lost.
Advert

"We recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods," the study illustrates, "because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability."
It's certainly promising then that many major scientists in the AI world have participated in this research, as it shows that they are willing to place the safety of the tech instead of moves that would hypothetically give them a leg up over the competition.
Perhaps the same cannot be said for the people leading the biggest AI companies though, as one famous podcaster has revealed that one major figure is 'lying to the public' about the future of our planet.