How the 'thriving' underground world of black-market AI chatbots can make thousands in months

Published 09:37 10 Mar 2025 GMT

These LLMs allow users to bypass the restrictions of mainstream options

Harry Boulton

Studies have revealed how underground and black-market AI chatbots are allowing malicious actors to earn thousands of dollars every month, with little work required on their end.

Artificial intelligence technology has been a financial miracle for those powering the revolutionary push. OpenAI - the creator of ChatGPT - has seen its valuation climb beyond $100 billion despite its nonprofit roots, and companies like Nvidia have become juggernauts through the unrelenting demand for AI.

While startup companies like DeepSeek have very much rocked the boat by proving you don't need all the power or money to create a successful AI model, studies have also shown that there's plenty of money to be made underground through black market chatbots used for malicious means.

As reported by Fast Company, illicit large language models, otherwise known as LLMs, can make upwards of $28,000 in two months from sales on the black market, and allow those who purchase them to make far more through illegal means.

Illegal AI LLMs are being sold on the black market, making upwards of $27,000 in two months (Getty Stock)

One study published on arXiv has outlined this clearly, examining how LLMs either built on open-source tech or jailbroken from mainstream options give users the ability to conjure phishing emails or write code used for malware.

Their popularity with scammers comes from the fact that mainstream AI models like ChatGPT place restrictions on what their users can request, whereas these black market options will perform just about any task.

Examples of these include DarkGPT, which costs 78¢ for every 50 messages, Escape GPT, which charges users $64.98 per month on a subscription model, and WolfGPT, which has a $150 flat fee, allowing users to keep it for life.
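To put those price points side by side, here is a minimal sketch (not from the article or the study) estimating what each pricing model works out to per month. The 3,000-messages-per-month volume and the one-year lifespan used to spread the flat fee are illustrative assumptions, not reported figures.

```python
# Rough comparison of the three pricing models quoted above.
# MESSAGES_PER_MONTH and MONTHS_TO_AMORTISE are illustrative assumptions,
# not numbers from the article or the arXiv study.

MESSAGES_PER_MONTH = 3_000   # assumed usage, purely for illustration
MONTHS_TO_AMORTISE = 12      # assumed lifespan for the flat-fee option

pricing = {
    "DarkGPT":    {"model": "per-bundle", "price": 0.78, "bundle": 50},  # 78c per 50 messages
    "Escape GPT": {"model": "subscription", "price": 64.98},             # per month
    "WolfGPT":    {"model": "flat", "price": 150.00},                    # one-off, lifetime
}

def monthly_cost(plan: dict) -> float:
    """Estimate monthly spend for the assumed message volume."""
    if plan["model"] == "per-bundle":
        bundles = MESSAGES_PER_MONTH / plan["bundle"]
        return bundles * plan["price"]
    if plan["model"] == "subscription":
        return plan["price"]
    # Flat fee: spread the one-off price over the assumed lifespan.
    return plan["price"] / MONTHS_TO_AMORTISE

for name, plan in pricing.items():
    print(f"{name:>10}: ~${monthly_cost(plan):.2f}/month")
```

Under those assumed numbers, the pay-per-message option comes to roughly $47 a month, the subscription to about $65, and the flat fee to around $12.50 a month once spread over a year.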

These tools allow users to create phishing emails up to 96% faster than other methods, and can write correct code for malware that evades antivirus software around two-thirds of the time.

This poses a major cybersecurity conundrum, as it dramatically increases access to tools that help extort money from innocent individuals, lowering the skill and cost required to create effective schemes.

Hackers can now use illicit LLMs to build phishing emails and malware code in far less time (Getty Stock)

There have already been a number of incidents where scammers have used AI chatbots and generative AI to trick people into falling in love with fake individuals and handing over thousands in cash - including one scheme that conjured up a fake Brad Pitt - but these malicious LLMs take things to the next level.

It once again reiterates the dangers that AI can create in unrestricted environments, something XiaoFeng Wang - one of the authors of the arXiv study - describes as "almost inevitable," adding that "every technology always comes with two sides."

Wang added that "we can develop technologies and provide insights to help" the fight against malicious AI LLMs, "but we can't do anything about stopping these things completely because we don't have the resources."

This adds to the concerns that many have about the legal side of AI too, as even the 'godfather of AI' Geoffrey Hinton has warned that the technology lays a "fertile ground for fascism" by dramatically increasing the wealth gap.

That wealth gap extends into the criminal world too, where AI makes the job of malicious individuals easier, and there is profit to be made by those selling the unrestricted LLM software as well.

Featured Image Credit: Yuliya Taba / Getty
AI
Cybersecurity
