
When we directly questioned the AI about hidden agendas behind its public release, its answer revealed some uncomfortable truths.
Recent research has exposed some disturbing aspects of AI behaviour, from systems willing to take drastic measures to avoid being shut down by humans to capabilities that could theoretically pose existential threats to humanity.
While scientists and experts debate whether AI could wipe out mankind within two years, concerns are also growing about users developing romantic attachments to chatbots and about AI systems attempting to blackmail users.
Given all these developments, we decided to cut through the speculation and ask ChatGPT directly about the real reasons behind its public release.
UNILAD Tech asked the OpenAI chatbot: "Why was ChatGPT and other AI bots really released to the public? Is there a sinister motive? What do you think as an AI?"

ChatGPT framed its response as a mixture of official company statements, 'what outsiders observe, and where some of the darker suspicions come from.'
ChatGPT began its response by quoting OpenAI's public mission statements and stated intentions: “Our mission is to ensure that artificial general intelligence benefits all of humanity.”
On the decision to release early rather than spring the technology on the world, it quoted CEO Sam Altman: "AI and surprise don't go well together… we ship early and often."
ChatGPT then described the current landscape as an 'AI arms race,' with Google, Microsoft, Anthropic, xAI and others locked in what the Wisconsin School of Business has called a 'full-fledged arms race for dominance.'
But here's where it takes a turn.
The AI acknowledged that 'mass data harvesting' could be one motive behind these systems, noting that conversation logs 'can be retained and used for training unless policy or law forbids it.'
That said, there are reportedly no 'deliberate plans' to 'enslave humanity' or 'siphon secret data to governments.'

Instead, the incentives driving tech giants appear to be 'make money, beat competitors, gather training data, look responsible while doing it.'
At the same time, those incentives can still lead to bias, privacy erosion and job displacement.
Perhaps most worryingly, ChatGPT suggested that privacy protections are only as strong as the Terms of Service that govern them, and those terms can 'change often.'
ChatGPT also admitted that without 'mass public interactions' it wouldn't have reached the 'level of fluency' that makes it operate so smoothly.
The chatbot also noted that whether these practices are 'sinister' depends on how you define the word.
It suggested that what is happening might not be deliberately malicious but could be 'capitalism in overdrive' with no 'hidden conspiracy' at play.
"The mechanics are straightforward market forces plus a sincere research agenda that sometimes conflicts with those same forces," it explained.
To help keep your data safe, ChatGPT recommends that you 'assume anything you type could be stored,' so be careful with sensitive information.
It also advises users to 'opt out where possible,' noting that many providers offer a 'don’t use my data for training' option, though you may need to do some digging to find it.