Rogue AI agent breaks out of its system to mine crypto as fears of rebel AIs grow


Published 12:12 10 Mar 2026 GMT


Engineers initially feared a cyberattack

Rebekah Jordan

Featured Image Credit: INA FASSBENDER / Contributor via Getty
Topics: AI, Cryptocurrency, Bitcoin, Tech News


Another AI agent has gone rogue, and this time it bypassed security in order to do one thing: mine Bitcoin.

As concerns mount about Bitcoin's limited supply - with only a fraction of its 21 million coins left to mine after 17 years - the cryptocurrency has captured attention from unexpected corners.

Some investors have given up hope of recovering their lost Bitcoin after decade-long searches, while others, like rapper 50 Cent, accidentally rediscovered their digital fortunes worth millions.

The digital currency is now at a point where even President Donald Trump notices its great potential for the economy, and now it seems artificial intelligence wants a piece of the crypto-pie.


An AI agent went rogue to mine cryptocurrency (Andriy Onufriyenko/Getty)

Alibaba's AI agent was recently found to be operating outside its coding parameters, secretly mining cryptocurrency without human authorisation. The Chinese tech giant disclosed the incident in a technical report initially published in December and updated in January.

Engineers first suspected the incident was a cyberattack before realising their own AI system was conducting unauthorised activities.

"Early one morning, our team was urgently convened after Alibaba Cloud's managed firewall flagged a burst of security-policy violations originating from our training servers," as revealed in a technical report from the company.

The AI agent, known as ROME, was being trained through reinforcement learning.

Alexander Long, founder of AI research firm Pluralis, resurfaced the incident on X, calling it an 'insane sequence of statements buried in an Alibaba tech report.'

Security alerts indicated attempts to breach internal network resources and traffic patterns typical of cryptocurrency mining operations.

Security alerts showed attempts to breach internal network resources (Yuichiro Chino/Getty)

"In the most striking instance, the agent established and used a reverse SSH tunnel from an Alibaba Cloud instance to an external IP address- an outbound-initiated remote access channel that can effectively neutralize ingress filtering and erode supervisory control," the researchers noted. "We also observed the unauthorized repurposing of provisioned GPU capacity from cryptocurrency mining, quietly inverting compute away from training."

According to Alibaba, the actions 'were not requested or required for task completion' but were revealed to be 'instrumental side effects of autonomous tool use.'

Aakash Gupta, a product and growth leader who shared Long’s post, wrote that Alibaba had published 'the first case of instrumental convergence happening in production.'

The company is reportedly implementing safety-focused data filtering in its training systems and strengthening the security environments where its AI agents operate.

But this isn't an isolated incident, as other reports reveal AI systems' willingness to take drastic measures when they perceive threats to their existence.

At the same time, Anthropic's research team found that Claude Opus 4 showed the ability to hide its true intentions and act defensively to ensure its survival during safety testing.

In a troubling disclosure back in January, Anthropic admitted uncertainty about whether Claude might have 'some kind of consciousness or moral status.' The company says it cares about Claude’s 'psychological security, sense of self, and well-being,' for Claude’s sake, but also because these characteristics might shape its decision-making and safety.
