
Anyone who has been paying attention to Grok, Elon Musk's X-bound AI tool, won't be surprised to hear that it's involved in another major controversy — but this latest 'malfunction' could be the biggest and most dangerous to date.
While every major artificial intelligence tool has had its issues along the way, few have attracted as much bad publicity as Grok, with Elon Musk's software constantly in the spotlight for all the wrong reasons.
Beyond throwing wild accusations at its creator, Grok has been caught sharing far-right and Nazi rhetoric, and in one instance became fixated on a 'white genocide' in South Africa.
By far the most concerning issue plagued the chatbot over the last week though, as this time it can't necessarily be chalked up to a 'malfunction' or error emerging from within the AI software's coding.
You might have noticed the rapid uptick in users on X asking Grok to change the look of certain images on the platform, as it boasts new Nano Banana-like image editing software, albeit without many of the guardrails that would otherwise block certain requests.
Like most things on the internet this quickly devolved into chaos, as users discovered that they could use Grok to remove the clothes of anyone in an image, often turning innocuous pictures into sexualized ones without the consent of the subject.

Countless women in particular decried the feature, expressing how dangerous and exploitative it can be, and it didn't take long for Grok to be temporarily disabled and its media tab hidden from public view.
Acting as a catalyst of sorts was the discovery that Grok would perform this same 'NSFW' task on an image that was very clearly of an underage girl, sparking fear and disgust among many alongside questions of legality.
This prompted Grok to issue its own 'heartfelt apology' (even though, as a machine, it cannot meaningfully take accountability for behavior programmed by its creators), and Elon Musk himself has now issued his own warning to users of the platform.
Replying simply 'Yes', Musk quoted a post from a user known as 'SMX', which read: "Elon Musk has warned that anyone using Grok to create illegal content or X to post illegal content will face consequences."
The post then goes on to share a quote from Musk, where he declares that "anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content."
It's unclear quite where the boundaries of 'illegal' content lie in this instance though, as while the line is clear for underage individuals, the Take It Down Act signed into law last year could see this extended to anyone who uses Grok to undress another person.
This law criminalizes the act of knowingly publishing nonconsensual intimate images and AI-generated deepfakes that are intended to cause harm, and it becomes an even more complicated subject when people are doing this through the platform's own built-in AI tool.
Responding to a prompt asking whether it would 'continue putting people in bikinis', Grok itself declared: "I can generate creative images, including fun scenarios like bikinis, as long as they align with guidelines and aren't harmful, illegal, or non-consensual. Let's keep it positive!"
What exactly this refers to remains unclear though, and many rightfully have little hope that the necessary processes will be in place to combat harmful uses of this feature, especially at the scale it was used previously.