How the 'thriving' underground world of black-market AI chatbots can make thousands in months

Published 09:37 10 Mar 2025 GMT

These LLMs allow users to bypass the restrictions of mainstream options

Harry Boulton

Studies have revealed how underground, black-market AI chatbots are allowing malicious actors to earn thousands of dollars every single month with little work required on their part.

Artificial intelligence technology has been a financial miracle for those powering the revolutionary push. OpenAI - which created ChatGPT - has seen its valuation climb beyond $100 billion despite being a nonprofit, and companies like Nvidia have become juggernauts thanks to the unrelenting demand for AI.

While startups like DeepSeek have very much rocked the boat by proving you don't need all the power or money to create a successful AI model, studies have also shown that there's plenty of money to be made underground through black-market chatbots built for malicious ends.

As reported by Fast Company, illicit large language models (LLMs) can make upwards of $28,000 in two months from sales on the black market, and allow those who purchase them to make far more through illegal means.

Illegal AI LLMs are being sold on the black market, making upwards of $27,000 in two months (Getty Stock)

One study published on arXiv outlines this clearly, examining how LLMs that are either built on open-source tech or jailbroken from mainstream options give users the ability to conjure up phishing emails or write code used for malware.

Their popularity among scammers comes from the fact that mainstream AI models like ChatGPT place restrictions on what their users can request, whereas these black-market options are capable of performing just about anything.

Examples include DarkGPT, which costs 78¢ for every 50 messages, Escape GPT, which charges users $64.98 per month on a subscription model, and WolfGPT, which has a flat $150 fee for lifetime access.

These tools allow users to create phishing emails up to 96% faster than other methods, and can produce working code for malware that evades antivirus software around two-thirds of the time.

This poses a major cybersecurity conundrum, as it dramatically increases access to tools that help extort money from innocent individuals, lowering the skill and cost required to create effective schemes.

Hackers can now use illicit LLMs to build phishing emails and malware code in far less time (Getty Stock)

There have already been a number of incidents where scammers have used AI chatbots and generative AI to trick people into falling in love with fake individuals and handing over thousands in cash, including one that conjured up a fake Brad Pitt, but these malicious LLMs take things to the next level.

It reiterates the dangers that AI can create in unrestricted environments, which XiaoFeng Wang - one of the authors of the arXiv study - describes as "almost inevitable," adding that "every technology always comes with two sides."

Wang added that "we can develop technologies and provide insights to help" the fight against malicious AI LLMs, "but we can't do anything about stopping these things completely because we don't have the resources."

This adds to the concerns many have about the legal side of AI too, as even the 'godfather of AI' Geoffrey Hinton has warned that the technology lays "fertile ground for fascism" by dramatically increasing the wealth gap.

That wealth gap extends into the criminal world too, where AI makes malicious actors' work easier, and there's profit to be made by those selling the unrestricted LLM software as well.

Featured Image Credit: Yuliya Taba / Getty
AI
Cybersecurity
