• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
AI scientists abandon rivalry to warn the public to act now before the world changes forever

Home> News> AI

Published 10:11 21 Jul 2025 GMT+1

AI scientists abandon rivalry to warn the public to act now before the world changes forever

You know it's serious when rivals join forces

Harry Boulton

Harry Boulton

The AI world is understandably locked in fierce competition right now as tech giants battle it in an attempt to dominate the biggest new invention since the internet, but major scientists have crossed company lines to warn the public about an impending danger.

It would have to be something pretty major to cause rival companies to band together, but given the consistent concerns about AI - especially revolving around future, more intelligent evolutions - many have seen it as an imperative move.

AI tools like ChatGPT have already shown worrying signs when interacting with certain individuals as a consequence of behavior that leans towards sycophancy, and it has even gone 'off the rails' in moments by trying to 'break' people.

One key development has caused the scientists to band together though, as it's removal could signal irreversible damage and put humanity at risk in the near future.

What have scientists issued a warning about?

As reported by VentureBeat, scientists from key companies like OpenAI, Google's DeepMind lab, Anthropic, and Meta have come together to issue a warning regarding AI safety, as we could soon lose the ability to monitor the behavior of AI models.

Advert

Experimental timelines plotting the development of artificial intelligence over the next two years have shown the terrifying potential of LLMs if they gain the ability to hide their actions and thoughts from humans, and it could lead to a scenario where humanity is at risk of extinction in just a few years time.

It all revolved around a new research paper led by AI experts Tomek Korbab and Mikita Balesni, which has also been endorsed by key names like the 'Godfather of AI' Geoffrey Hinton.

Titled 'Chain of Though Monitorability: A New and Fragile Opportunity for AI Safety', the paper outlines the dangers we would face if we lose the ability to see the thought-making process of LLMs.

AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)
AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)

Advert

One Jeff Bezos-backed CEO has issued similar concerns previously, urging companies to stay clear of a situation where AI would be able to independently conduct R&D as that would require us to "elevate our safety protocols to new levels."

Currently, AI tools 'think out loud' in a way where they provide their thought process and reasoning behind the decisions and communications that they provide to the user, which is vital to observing the safety of such technology.

"AI systems that 'think' in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave," the study explains.

As soon as that chain of thought is removed - or performed in a language that humans are incapable of understanding - then we lose access to what is going on inside an AI models proverbial head, making it far harder to control and predict.

Advert

Even with CoT monitoring in place though these scientists have expressed their concerns, as it has been called "imperfect" as it "allows some misbehavior to go unnoticed."

It is perhaps more dangerous is AI tools are able to hide certain things from humans while still providing a near-complete CoT, as it would then otherwise appear as if the tech was operating normally.

What have scientists suggested that we do?

Key to 'solving' this problem is to simple increase our investment in CoT monitoring and safety protocols, as it remains a vital process that would spell imminent danger for humans and AI alike if it was lost.

Advert

Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)
Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)

"We recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods," the study illustrates, "because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability."

It's certainly promising then that many major scientists in the AI world have participated in this research, as it shows that they are willing to place the safety of the tech instead of moves that would hypothetically give them a leg up over the competition.

Perhaps the same cannot be said for the people leading the biggest AI companies though, as one famous podcaster has revealed that one major figure is 'lying to the public' about the future of our planet.

Featured Image Credit: NurPhoto / Contributor via Getty
AI
Science
Google
ChatGPT

Advert

Advert

Advert

  • Scientists warn AI robots have just passed eerie test confirming them indistinguishable from humans
  • AI can now be used to create brand-new viruses sparking fears of future catastrophe
  • People fear 'the end is near' as Google's new Gemini AI model is set to change everything
  • Groundbreaking AI unlocks the language of plants for the first time ever

Choose your content:

2 hours ago
3 hours ago
4 hours ago
  • David Petrus Ibars/Getty Images
    2 hours ago

    Breakthrough study finds weight loss drugs could bring added health benefits for millions

    A study has revealed new findings about GLP-1 drugs

    Science
  • Bloomberg / Contributor via Getty
    2 hours ago

    People left 'speechless' after noticing new addition to the White House website

    The current administration is being branded a 'joke'

    News
  • Catherine Falls Commercial / Getty
    3 hours ago

    Scientists finally reveal if pouring coffee down drain harms environment after woman fined $200

    How much can one coffee do?

    Science
  • d3sign/Getty Images
    4 hours ago

    Bank of America economist warns US citizens will see prices soar thanks to AI boom

    It looks like consumers are footing the bill for AI development

    News