
xAI has issued a statement after its Grok chatbot became fixated on 'white genocide.'
In an odd twist for AI news this week, Musk’s artificial intelligence firm has faced condemnation after its chatbot exhibited bizarre behavior.
Grok is a chatbot created by xAI and is commonly used on Musk’s own social media platform X, formerly Twitter.
However, it was there that the bot received backlash after it spent several hours on Wednesday (May 14) telling users on the site that the claim of white genocide in South Africa is highly contentious.
For hours, Grok would raise the topic of white genocide in South Africa in answers to unrelated questions, including one about a cat drinking water and even a query about SpongeBob SquarePants.
This sparked much discussion and speculation online, with even OpenAI’s CEO Sam Altman weighing in.
In a tweet, he wrote: “There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon.
“But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…”
Since then, the firm has said that this behavior was due to an ‘unauthorized modification’ to the AI’s system prompt.
Taking to X, xAI wrote: “On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.”
Many users have shared their own opinion, with one user writing on Reddit: “Saw this in real time on Twitter yesterday. Even if it had absolutely nothing to do with it, Grok would still bring up ‘white genocide’. Someone asked it how to unclog a toilet and it started talking about South Africa. It even admitted to being tampered with. Crazy stuff.”
The AI company has said that it is putting in place a ‘24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems’.