Google removes major AI policy that could have serious implications for global war

Published 16:23 5 Feb 2025 GMT

The tech giant has been committed to responsible AI practices since 2018

Tom Chapman

Google has made some serious amendments to its AI policy, which could have severe implications around the world.

If you weren't already worried about headlines saying the end is nigh, Google has been accused of taking the training wheels off artificial intelligence by removing a major policy keeping the ever-advancing tech in line.

There are plenty of warnings out there from the so-called 'Godfathers' of AI suggesting that the potential dangers of AI's applications outweigh the good. Yoshua Bengio has just warned that a military arms race could exploit AI, while Geoffrey Hinton has doubled his earlier estimate of a 10% chance that AI could wipe us out, now putting it at 20%.

While hypothetical what-if scenarios were worrying enough, Google is arguably making it easier than ever for AI to be used to eradicate the human race.

As spotted by Bloomberg, Google has quietly removed a policy that had been in place since 2018, promising to keep AI away from 'harmful' applications.

Artificial intelligence is tipped to lead the next world war (mikkelwilliam / Getty)

Google's AI Principles page previously said the company wouldn't pursue AI applications in areas like "technologies that cause or are likely to cause overall harm." This included weapons and surveillance tech that violates "internationally accepted norms." With that language no longer on the page, there are obvious concerns.

When Bloomberg reached out to Google for a response, it pointed the outlet to a blog post shared on February 4.

The 'Responsible AI' post explains: "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape.”

Google Senior Vice President James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, wrote: "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.

"And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."

Margaret Mitchell, who co-led Google's ethical AI team and is now Chief Ethics Scientist at AI startup Hugging Face, told Bloomberg what the specific removal of the 'harm' clause could mean: "Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."

There are concerns about Google changing its AI principles (Bloomberg / Contributor / Getty)

The recent emergence of China's DeepSeek has sparked concerns that OpenAI will rapidly accelerate its own AI development without necessarily considering the long-term consequences, while others think Google altering its responsible AI principles should be setting off alarm bells. Tracy Pizzo Frey, who oversaw Responsible AI at Google Cloud from 2017 to 2022, shared her thoughts on the original principles: "They asked us to deeply interrogate the work we were doing across each of them.

"And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success."

When the AI principles were first put in place in 2018, thousands of Google employees insisted the company "should not be in the business of war," but jump forward to 2025 and it feels like we're nudging closer to the Skynet future of the Terminator movies.

Featured Image Credit: SOPA Images / Contributor / Getty