How the 'thriving' underground world of black-market AI chatbots can make thousands in months


Published 09:37 10 Mar 2025 GMT


These LLMs allow users to bypass the restrictions of mainstream options

Harry Boulton

Featured Image Credit: Yuliya Taba / Getty
AI
Cybersecurity


Studies have revealed how underground and black-market AI chatbots are allowing malicious actors to earn thousands of dollars every single month, with little work required on their end.

Artificial intelligence technology has been a financial miracle for those powering the revolutionary push. OpenAI - which created ChatGPT - has raised its valuation beyond $100 billion despite being a nonprofit, and companies like Nvidia have become juggernauts through the unrelenting demand for AI.

While startup companies like DeepSeek have very much rocked the boat by proving you don't need all the power or money to create a successful AI model, studies have also shown that there's plenty of money to be made underground through black market chatbots used for malicious means.

As reported by Fast Company, illicit large language models, otherwise known as LLMs, can make upwards of $28,000 in two months from sales on the black market, and allow those who purchase them to make far more through illegal means.


Illegal AI LLMs are being sold on the black market, making upwards of $27,000 in two months (Getty Stock)

One study published on arXiv has outlined this clearly, examining how LLMs - either built on open-source tech or jailbroken from mainstream options - give users the ability to conjure phishing emails or write code used for malware.

Their popularity among scammers comes from the fact that mainstream AI models like ChatGPT place restrictions on what their users can request, whereas these black-market options will perform just about any task.

Examples of these include DarkGPT, which costs 78¢ for every 50 messages, Escape GPT, which charges users $64.98 per month on a subscription model, and WolfGPT, which has a $150 flat fee, allowing users to keep it for life.
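To put those three pricing models side by side, here is a minimal sketch that computes the total cost of each at a given usage level. The prices and product names come from the article; the usage figures and the `monthly_cost` function itself are purely illustrative assumptions, not data from the study.

```python
# Compare the three black-market pricing models named in the article.
# Figures are as reported; the usage scenarios below are hypothetical.

def monthly_cost(messages_per_month: int, months: int = 1) -> dict:
    """Total cost of each model for a hypothetical usage level."""
    return {
        "DarkGPT (78 cents per 50 messages)": 0.78 * (messages_per_month * months / 50),
        "Escape GPT ($64.98/month subscription)": 64.98 * months,
        "WolfGPT ($150 flat fee, lifetime)": 150.0,
    }

for msgs, months in ((1_000, 1), (1_000, 12)):
    costs = monthly_cost(msgs, months)
    cheapest = min(costs, key=costs.get)
    print(f"{msgs} messages/month for {months} month(s) -> cheapest: {cheapest}")
```

At light, short-term usage the pay-per-message model is cheapest, but over a year the flat fee wins out - one plausible reason sellers offer all three tiers.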

These tools allow users to create phishing emails up to 96 percent faster than other methods, and can produce working code for malware that evades antivirus software around two-thirds of the time.

This poses a major cybersecurity conundrum, as it dramatically increases access to tools that help extort money from innocent individuals, lowering the skill and cost required to create effective schemes.

Hackers can now use illicit LLMs to build phishing emails and malware code in far less time (Getty Stock)

There have already been a number of incidents where scammers have used AI chatbots and generative AI to trick people into falling in love with fake personas and handing over thousands in cash - including one that conjured up a fake Brad Pitt - but these malicious LLMs take things to the next level.

It once again reiterates the dangers that AI can create in unrestricted environments, which XiaoFeng Wang - one of the authors of the arXiv study - describes as "almost inevitable," adding that "every technology always comes with two sides."

Wang added that "we can develop technologies and provide insights to help" the fight against malicious AI LLMs, "but we can't do anything about stopping these things completely because we don't have the resources."

This adds to the concerns that many have about the non-illegal side of AI too, as even the 'godfather of AI' Geoffrey Hinton has indicated that the technology lays a "fertile ground for fascism" in how it dramatically increases the wealth gap.

That wealth gap extends even into the criminal world, where AI makes the work of malicious individuals easier, and there's profit to be made by those selling the unrestricted LLM software too.
