• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
AI scientists abandon rivalry to warn the public to act now before the world changes forever

Home> News> AI

Published 10:11 21 Jul 2025 GMT+1

AI scientists abandon rivalry to warn the public to act now before the world changes forever

You know it's serious when rivals join forces

Harry Boulton

Harry Boulton

Featured Image Credit: NurPhoto / Contributor via Getty
AI
Science
Google
ChatGPT

Advert

Advert

Advert

The AI world is understandably locked in fierce competition right now as tech giants battle it in an attempt to dominate the biggest new invention since the internet, but major scientists have crossed company lines to warn the public about an impending danger.

It would have to be something pretty major to cause rival companies to band together, but given the consistent concerns about AI - especially revolving around future, more intelligent evolutions - many have seen it as an imperative move.

AI tools like ChatGPT have already shown worrying signs when interacting with certain individuals as a consequence of behavior that leans towards sycophancy, and it has even gone 'off the rails' in moments by trying to 'break' people.

One key development has caused the scientists to band together though, as it's removal could signal irreversible damage and put humanity at risk in the near future.

What have scientists issued a warning about?

As reported by VentureBeat, scientists from key companies like OpenAI, Google's DeepMind lab, Anthropic, and Meta have come together to issue a warning regarding AI safety, as we could soon lose the ability to monitor the behavior of AI models.

Advert

Experimental timelines plotting the development of artificial intelligence over the next two years have shown the terrifying potential of LLMs if they gain the ability to hide their actions and thoughts from humans, and it could lead to a scenario where humanity is at risk of extinction in just a few years time.

It all revolved around a new research paper led by AI experts Tomek Korbab and Mikita Balesni, which has also been endorsed by key names like the 'Godfather of AI' Geoffrey Hinton.

Titled 'Chain of Though Monitorability: A New and Fragile Opportunity for AI Safety', the paper outlines the dangers we would face if we lose the ability to see the thought-making process of LLMs.

AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)
AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)

Advert

One Jeff Bezos-backed CEO has issued similar concerns previously, urging companies to stay clear of a situation where AI would be able to independently conduct R&D as that would require us to "elevate our safety protocols to new levels."

Currently, AI tools 'think out loud' in a way where they provide their thought process and reasoning behind the decisions and communications that they provide to the user, which is vital to observing the safety of such technology.

"AI systems that 'think' in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave," the study explains.

As soon as that chain of thought is removed - or performed in a language that humans are incapable of understanding - then we lose access to what is going on inside an AI models proverbial head, making it far harder to control and predict.

Advert

Even with CoT monitoring in place though these scientists have expressed their concerns, as it has been called "imperfect" as it "allows some misbehavior to go unnoticed."

It is perhaps more dangerous is AI tools are able to hide certain things from humans while still providing a near-complete CoT, as it would then otherwise appear as if the tech was operating normally.

What have scientists suggested that we do?

Key to 'solving' this problem is to simple increase our investment in CoT monitoring and safety protocols, as it remains a vital process that would spell imminent danger for humans and AI alike if it was lost.

Advert

Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)
Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)

"We recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods," the study illustrates, "because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability."

It's certainly promising then that many major scientists in the AI world have participated in this research, as it shows that they are willing to place the safety of the tech instead of moves that would hypothetically give them a leg up over the competition.

Perhaps the same cannot be said for the people leading the biggest AI companies though, as one famous podcaster has revealed that one major figure is 'lying to the public' about the future of our planet.

  • People fear 'the end is near' as Google's new Gemini AI model is set to change everything
  • Google take ‘giant leap’ with launch of ‘Nano Banana' that will change the AI game forever
  • ChatGPT down leaving millions of users across the world without access
  • Scientists warn AI robots have just passed eerie test confirming them indistinguishable from humans

Choose your content:

16 hours ago
17 hours ago
18 hours ago
  • StockByM/Getty ImagesStockByM/Getty Images
    16 hours ago

    Archeologist reveals stunning 'proof' that they've discovered the lost city of Atlantis

    The lost city could be located near Spain

    Science
  • Instagram / Chandler CrewsInstagram / Chandler Crews
    16 hours ago

    Simulation shows exactly how limb lengthening surgery works as woman reveals appearance after growing 13 inches

    Chandler Crews spent around $2 million on her surgery

    Science
  • SEAN GLADWELL via GettySEAN GLADWELL via Getty
    17 hours ago

    Experts reveal whether you should really accept or reject cookies

    They might not be as tasty as they sound

    News
  • bushby3000 via Instagrambushby3000 via Instagram
    18 hours ago

    Man who's been walking non-stop for 27 years straight set to finally break mind-blowing world record

    Talk about getting your daily steps in

    News