Google removes major AI policy that could have serious implications for global war


Published 16:23 5 Feb 2025 GMT

The tech giant has been committed to responsible AI practices since 2018

Tom Chapman

Google has made some serious amendments to its AI policy, which could have severe implications around the world.

If you weren't already worried about headlines saying the end is nigh, Google has been accused of taking the training wheels off artificial intelligence by removing a major policy keeping the ever-advancing tech in line.

There are plenty of warnings out there from the so-called 'Godfathers' of AI, suggesting that the potential dangers of AI's applications outweigh the good. Yoshua Bengio has just warned that a military arms race could exploit AI, while Geoffrey Hinton has doubled his earlier estimate of a 10% chance that AI could wipe us out, now putting it at 20%.

While potential what-if scenarios were enough to worry about, Google is arguably making it easier than ever for AI to be used to eradicate the human race.


As spotted by Bloomberg, Google has quietly removed the policy that's been in place since 2018 - promising to keep AI away from 'harmful' applications.

Artificial intelligence is tipped to lead the next world war (mikkelwilliam / Getty)

Google's AI Principles page previously pledged that the company would not pursue "technologies that cause or are likely to cause overall harm." This included weapons and surveillance tech that violate "internationally accepted norms." With that language no longer on the page, there are obvious concerns.

When Bloomberg reached out to Google for a response, it pointed the outlet to a blog post shared on February 4.


The 'Responsible AI' post explains: "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape.”

Google Senior Vice President James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, wrote: "We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights.

"And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."

Margaret Mitchell, who co-led Google's ethical AI team and is now Chief Ethics Scientist at AI startup Hugging Face, told Bloomberg what the specific removal of the 'harm' clause could mean: "Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically it means Google will probably now work on deploying technology directly that can kill people."


There are concerns about Google changing its AI principles (Bloomberg / Contributor / Getty)

The recent emergence of China's DeepSeek has sparked concerns that OpenAI will rapidly accelerate its own AI development without necessarily considering the long-term consequences, while others think Google altering its responsible AI principles should be setting off alarm bells. Tracy Pizzo Frey, who oversaw Responsible AI at Google Cloud from 2017 to 2022, said of the principles: "They asked us to deeply interrogate the work we were doing across each of them.

"And I fundamentally believe this made our products better. Responsible AI is a trust creator. And trust is necessary for success."

When the AI principles were first put in place in 2018, thousands of Google employees reiterated the company "should not be in the business of war," but jump forward to 2025 and it feels like we're nudging closer to the Skynet future of the Terminator movies.

Featured Image Credit: SOPA Images / Contributor / Getty

