How the 'thriving' underground world of black-market AI chatbots can make thousands in months


Published 09:37 10 Mar 2025 GMT

These LLMs allow users to bypass the restrictions of mainstream options

Harry Boulton

Studies have revealed how underground, black-market AI chatbots are allowing malicious actors to earn thousands of dollars every month with little effort on their part.

Artificial intelligence has been a financial miracle for those powering the revolutionary push. OpenAI - which created ChatGPT - has seen its valuation climb beyond $100 billion despite its nonprofit origins, and companies like Nvidia have become juggernauts on the back of unrelenting demand for AI.

While startups like DeepSeek have rocked the boat by proving you don't need vast amounts of power or money to build a successful AI model, research shows there's also plenty of money to be made underground through black-market chatbots built for malicious ends.

As reported by Fast Company, illicit large language models, otherwise known as LLMs, can make upwards of $28,000 in two months from black-market sales, and allow those who purchase them to make far more through illegal means.

Illegal AI LLMs are being sold on the black market, making upwards of $28,000 in two months (Getty Stock)

One study published on arXiv outlines this clearly, examining how LLMs - either built on open-source technology or jailbroken from mainstream models - give users the ability to conjure phishing emails or write code for malware.

Their popularity among scammers stems from the fact that mainstream AI models like ChatGPT place restrictions on what users can request, whereas these black-market options will attempt just about anything.

Examples of these include DarkGPT, which costs 78¢ for every 50 messages; EscapeGPT, which charges users $64.98 per month on a subscription model; and WolfGPT, which has a $150 flat fee that lets users keep it for life.

These tools allow users to create phishing emails up to 96% faster than other methods, and can produce working code for malware that evades antivirus software around two-thirds of the time.

This poses a major cybersecurity conundrum, as it dramatically increases access to tools that help extort money from innocent individuals, lowering the skill and cost required to create effective schemes.

Hackers can now use illicit LLMs to build phishing emails and malware code in far less time (Getty Stock)

There have already been a number of incidents where scammers used AI chatbots and generative AI to trick people into falling in love with fake individuals and handing over thousands in cash - including one scheme that conjured up a fake Brad Pitt - but these malicious LLMs take things to the next level.

It once again underlines the dangers AI can create in unrestricted environments, which XiaoFeng Wang - one of the authors of the arXiv study - describes as "almost inevitable," adding that "every technology always comes with two sides."

Wang added that "we can develop technologies and provide insights to help" the fight against malicious AI LLMs, "but we can't do anything about stopping these things completely because we don't have the resources."

This adds to wider concerns about the legal side of AI too, as even the 'godfather of AI' Geoffrey Hinton has warned that the technology creates "fertile ground for fascism" by dramatically widening the wealth gap.

That gap extends into the criminal world, where AI makes the work of malicious individuals easier - and there's profit to be made by those selling the unrestricted LLM software too.

Featured Image Credit: Yuliya Taba / Getty
AI
Cybersecurity