Google removes major AI policy that could have serious implications for global war


Published 16:23 5 Feb 2025 GMT

The tech giant has been committed to responsible AI practices since 2018

Tom Chapman

Google has made significant amendments to its AI policy that could have severe implications around the world.

If you weren't already worried about headlines saying the end is nigh, Google has been accused of taking the training wheels off artificial intelligence by removing a major policy keeping the ever-advancing tech in line.

There are plenty of warnings out there from the so-called 'Godfathers' of AI suggesting that the potential dangers of the technology's applications outweigh the good. Yoshua Bengio has just warned that a military arms race could exploit AI, while Geoffrey Hinton has doubled his earlier estimate of the chance that AI could wipe us out, from 10% to 20%.

Hypothetical what-if scenarios were worrying enough, but Google is arguably now making it easier than ever for AI to be turned against the human race.

As spotted by Bloomberg, Google has quietly removed a policy that had been in place since 2018, which promised to keep AI away from 'harmful' applications.

Artificial intelligence is tipped to lead the next world war (mikkelwilliam / Getty)

Google's AI Principles page previously said it wouldn't pursue AI applications such as "technologies that cause or are likely to cause overall harm." This included weapons and surveillance tech that violate "internationally accepted norms." With that language no longer on the page, there are obvious concerns.

When Bloomberg reached out to Google for a response, it pointed the outlet to a blog post shared on February 4.

The 'Responsible AI' post explains: "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape.”

Google Senior Vice President James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, wrote: "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.

"And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."

Margaret Mitchell, who co-led Google's ethical AI team and is now Chief Ethics Scientist at AI startup Hugging Face, told Bloomberg what the removal of the 'harm' clause could mean: "Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."

There are concerns about Google changing its AI principles (Bloomberg / Contributor / Getty)

The recent emergence of China's DeepSeek has sparked concerns that OpenAI will rapidly accelerate its own AI without necessarily thinking of the long-term consequences, while others think Google altering its responsible AI principles should be setting off alarm bells. Tracy Pizzo Frey, who oversaw Responsible AI at Google Cloud from 2017 to 2022, shared her thoughts on the original principles: "They asked us to deeply interrogate the work we were doing across each of them.

"And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success."

When the AI principles were first put in place in 2018, thousands of Google employees insisted the company "should not be in the business of war." Jump forward to 2025, and it feels like we're nudging closer to the Skynet future of the Terminator movies.

Featured Image Credit: SOPA Images / Contributor / Getty
