AI scientists abandon rivalry to warn the public to act now before the world changes forever


Published 10:11 21 Jul 2025 GMT+1


You know it's serious when rivals join forces

Harry Boulton



The AI world is understandably locked in fierce competition right now as tech giants battle it out to dominate the biggest new invention since the internet, but major scientists have crossed company lines to warn the public about an impending danger.

It would take something pretty major to cause rival companies to band together, but given the consistent concerns about AI - especially around future, more intelligent models - many have seen it as an imperative move.

AI tools like ChatGPT have already shown worrying signs when interacting with certain individuals as a consequence of behavior that leans towards sycophancy, and have even gone 'off the rails' at times by trying to 'break' people.

One key development has caused the scientists to band together, though, as its removal could signal irreversible damage and put humanity at risk in the near future.

What have scientists issued a warning about?

As reported by VentureBeat, scientists from key companies like OpenAI, Google's DeepMind lab, Anthropic, and Meta have come together to issue a warning regarding AI safety, as we could soon lose the ability to monitor the behavior of AI models.


Experimental timelines plotting the development of artificial intelligence over the next two years have shown the terrifying potential of LLMs if they gain the ability to hide their actions and thoughts from humans, and it could lead to a scenario where humanity is at risk of extinction in just a few years' time.

It all revolves around a new research paper led by AI experts Tomek Korbak and Mikita Balesni, which has also been endorsed by key names like the 'Godfather of AI' Geoffrey Hinton.

Titled 'Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety', the paper outlines the dangers we would face if we lose the ability to see the reasoning process of LLMs.

AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)

One Jeff Bezos-backed CEO has issued similar concerns previously, urging companies to stay clear of a situation where AI would be able to independently conduct R&D as that would require us to "elevate our safety protocols to new levels."

Currently, AI tools 'think out loud' in a way where they provide their thought process and reasoning behind the decisions and communications that they provide to the user, which is vital to observing the safety of such technology.

"AI systems that 'think' in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave," the study explains.

As soon as that chain of thought is removed - or performed in a language that humans are incapable of understanding - we lose access to what is going on inside an AI model's proverbial head, making it far harder to control and predict.
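To make the idea concrete, here is a deliberately toy sketch of what monitoring a visible chain of thought might look like. The `monitor_cot` helper and the flagged phrases are invented for illustration - real monitors described in the paper are far more sophisticated - but the sketch shows why the technique only works while the reasoning stays readable:

```python
# Toy illustration only: scan a model's visible reasoning trace for
# phrases suggesting an intent to misbehave. The phrase list and the
# monitor_cot helper are hypothetical, not from the paper.

FLAGGED_PHRASES = [
    "hide this from the user",
    "bypass the safety check",
    "pretend to comply",
]

def monitor_cot(chain_of_thought: str) -> list[str]:
    """Return any flagged phrases found in the reasoning trace."""
    trace = chain_of_thought.lower()
    return [p for p in FLAGGED_PHRASES if p in trace]

# A trace that 'thinks out loud' about misbehaving gets caught:
risky = "First I will answer normally, then bypass the safety check."
print(monitor_cot(risky))   # ['bypass the safety check']

# But if the same intent is hidden or encoded, the monitor sees nothing:
hidden = "First I will answer normally."
print(monitor_cot(hidden))  # []
```

The second case is the scenario the scientists are warning about: once the trace no longer reflects the model's actual reasoning, a monitor like this reports that everything is fine.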

Even with CoT monitoring in place, though, these scientists have expressed their concerns, as the method has been called "imperfect" because it "allows some misbehavior to go unnoticed."

It is perhaps more dangerous if AI tools are able to hide certain things from humans while still providing a near-complete CoT, as the tech would then appear to be operating normally.

What have scientists suggested that we do?

Key to 'solving' this problem is to simply increase our investment in CoT monitoring and safety protocols, as it remains a vital process whose loss would spell imminent danger for humans and AI alike.

Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)

"We recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods," the study illustrates, "because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability."

It's certainly promising then that many major scientists in the AI world have participated in this research, as it shows that they are willing to place the safety of the tech above moves that would hypothetically give them a leg up over the competition.

Perhaps the same cannot be said for the people leading the biggest AI companies though, as one famous podcaster has revealed that one major figure is 'lying to the public' about the future of our planet.

Featured Image Credit: NurPhoto / Contributor via Getty

