• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
AI scientists abandon rivalry to warn the public to act now before the world changes forever

Home> News> AI

Published 10:11 21 Jul 2025 GMT+1

AI scientists abandon rivalry to warn the public to act now before the world changes forever

You know it's serious when rivals join forces

Harry Boulton

Harry Boulton

The AI world is understandably locked in fierce competition right now as tech giants battle it in an attempt to dominate the biggest new invention since the internet, but major scientists have crossed company lines to warn the public about an impending danger.

It would have to be something pretty major to cause rival companies to band together, but given the consistent concerns about AI - especially revolving around future, more intelligent evolutions - many have seen it as an imperative move.

AI tools like ChatGPT have already shown worrying signs when interacting with certain individuals as a consequence of behavior that leans towards sycophancy, and it has even gone 'off the rails' in moments by trying to 'break' people.

One key development has caused the scientists to band together though, as it's removal could signal irreversible damage and put humanity at risk in the near future.

What have scientists issued a warning about?

As reported by VentureBeat, scientists from key companies like OpenAI, Google's DeepMind lab, Anthropic, and Meta have come together to issue a warning regarding AI safety, as we could soon lose the ability to monitor the behavior of AI models.

Advert

Experimental timelines plotting the development of artificial intelligence over the next two years have shown the terrifying potential of LLMs if they gain the ability to hide their actions and thoughts from humans, and it could lead to a scenario where humanity is at risk of extinction in just a few years time.

It all revolved around a new research paper led by AI experts Tomek Korbab and Mikita Balesni, which has also been endorsed by key names like the 'Godfather of AI' Geoffrey Hinton.

Titled 'Chain of Though Monitorability: A New and Fragile Opportunity for AI Safety', the paper outlines the dangers we would face if we lose the ability to see the thought-making process of LLMs.

AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)
AI currently shows its chain of thought, which is key to discovering misbehavior (Getty Stock)

Advert

One Jeff Bezos-backed CEO has issued similar concerns previously, urging companies to stay clear of a situation where AI would be able to independently conduct R&D as that would require us to "elevate our safety protocols to new levels."

Currently, AI tools 'think out loud' in a way where they provide their thought process and reasoning behind the decisions and communications that they provide to the user, which is vital to observing the safety of such technology.

"AI systems that 'think' in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave," the study explains.

As soon as that chain of thought is removed - or performed in a language that humans are incapable of understanding - then we lose access to what is going on inside an AI models proverbial head, making it far harder to control and predict.

Advert

Even with CoT monitoring in place though these scientists have expressed their concerns, as it has been called "imperfect" as it "allows some misbehavior to go unnoticed."

It is perhaps more dangerous is AI tools are able to hide certain things from humans while still providing a near-complete CoT, as it would then otherwise appear as if the tech was operating normally.

What have scientists suggested that we do?

Key to 'solving' this problem is to simple increase our investment in CoT monitoring and safety protocols, as it remains a vital process that would spell imminent danger for humans and AI alike if it was lost.

Advert

Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)
Scientists have urged companies to place maintaining CoT monitoring over unsafe advancements (Getty Stock)

"We recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods," the study illustrates, "because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability."

It's certainly promising then that many major scientists in the AI world have participated in this research, as it shows that they are willing to place the safety of the tech instead of moves that would hypothetically give them a leg up over the competition.

Perhaps the same cannot be said for the people leading the biggest AI companies though, as one famous podcaster has revealed that one major figure is 'lying to the public' about the future of our planet.

Featured Image Credit: NurPhoto / Contributor via Getty
AI
Science
Google
ChatGPT

Advert

Advert

Advert

Choose your content:

8 hours ago
13 hours ago
14 hours ago
  • 8 hours ago

    Stunning photos show progress on the 'world's biggest construction site' set to house 9,000,000 people

    The Line now visibly rising from the desert floor

    News
  • 13 hours ago

    Why Apple dumped 2,700 computers in a landfill in 1989 in 'secret burial' that invented its future

    The tech giant buried millions of dollars' worth of computers

    News
  • 14 hours ago

    Wife of shamed Astronomer CEO 'found' in $2.4 million mega mansion following cheating scandal

    She's apparently deactivated her Facebook account

    News
  • 14 hours ago

    Weight loss injection users share hardest side effect 'people don't talk about'

    This might be something to think about before taking the jab

    Science
  • Sam Altman could be hitting Google with the biggest threat it's faced in a decade
  • Psychologist reveals simple everyday act that can has power to completely change your relationship
  • Scientists warn AI robots have just passed eerie test confirming them indistinguishable from humans
  • Former Google exec says AI is 'sentient in every possible way' in jaw-dropping clip