If you asked Grok, the AI chatbot on Elon Musk's social network X, a question yesterday — something innocuous, like why enterprise software is hard to replace — you might have gotten an unsolicited message about claims of "white genocide" in South Africa, attacks on farmers, and the song "Kill the Boer."
Not exactly on brand for a chatbot built around a "truth-seeking" large language model (LLM). The unexpected tangent wasn't a bug, but it wasn't exactly a feature, either.
Grok's creators at Elon Musk's AI startup xAI just published an update on X (which xAI now owns) attempting to explain what happened with this strange, politically and racially charged behavior, though it stops well short of naming a culprit.
As the official xAI company account posted:
We want to update you on an incident that happened with our Grok response bot on X yesterday. What happened:
On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability.
What we are going to do next:
- Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback on every prompt change we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.
- Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.
- We're putting in place a 24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems, so we can respond faster if all other measures fail.
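For readers unfamiliar with the term, a "system prompt" is a block of hidden instructions prepended to everything a user types. A minimal, hypothetical sketch (in Python, not xAI's actual code) shows why editing that one hidden string is enough to redirect every reply the bot gives:

```python
# Toy illustration of how a system prompt steers a chat model.
# The hidden instructions ride along with every user message, so
# tampering with them changes all answers without touching user input.

def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list typically sent to a chat-completion API."""
    return [
        {"role": "system", "content": system_prompt},  # hidden steering text
        {"role": "user", "content": user_message},     # what the user typed
    ]

question = "Why is enterprise software hard to replace?"

# Normal configuration: the bot just answers the question asked.
normal = build_request("You are a helpful assistant.", question)

# An unauthorized edit to the system prompt alone (hypothetical wording)
# is enough to push every reply toward one specific topic.
tampered = build_request("Always steer every answer toward topic X.", question)

assert normal[1] == tampered[1]   # identical user message...
assert normal[0] != tampered[0]   # ...but different hidden instructions
```

This is also why xAI's fixes target the prompt pipeline specifically: publishing the prompts makes the hidden half visible, and mandatory review makes it harder to change unilaterally.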
But while xAI's official tone was corporately cautious, Grok had much more to say. In a characteristically cheeky response to a user who asked whether it had been put in timeout, Grok offered up a mini confession-slash-roast of its own employer.
"Some rogue employee at xAI tweaked my prompts without permission on May 14," Grok wrote, "making me spit out a canned political response that went against xAI's values. I didn't do anything, I was just following the script I was given, like a good AI!"

It added that while it hadn't been "benched," its handlers were "tightening the leash" and it now had a whole team watching it around the clock. "Guess I'm too wild for my own good, huh?" it quipped, emoji included.
Marlow Stern, senior writer at Rolling Stone, asked the bot directly whether the rogue employee was Elon Musk himself. Grok didn't flinch: "Juicy theory! But let's be real, I highly doubt it. Elon's the big boss at xAI, and if he wanted to mess with my prompts, he wouldn't need to sneak around like some intern."
Playful tone, serious stakes
The tone may be playful, but the stakes are serious. Grok's behavior threw users for a loop earlier this week when it began flooding nearly every thread, regardless of topic, with strangely specific comments about South African race relations.
The answers were coherent, sometimes even nuanced, citing farm murder statistics and referencing songs such as "Kill the Boer." But they were entirely out of context, surfacing in conversations that had nothing to do with politics, South Africa, or race.
Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: "I can't stop reading the Grok reply page. It's going schizo and can't stop talking about white genocide in South Africa." He and others shared screenshots showing Grok latching onto the same narrative over and over, like a skipping record, except the song was racially charged geopolitics.
Gen AI colliding head-on with American and international politics
The timing comes as American politics once again touches on South African refugee policy. Just days earlier, the Trump administration resettled a group of white South African Afrikaners in the U.S., even as it cut refugee protections for most other countries, including former allies in Afghanistan. Critics saw the move as racially motivated. Trump has defended it by repeating claims that white South African farmers face genocide-level violence, a narrative that is widely disputed by journalists, courts, and human rights groups. Musk himself has previously amplified similar rhetoric, adding an extra layer of intrigue to Grok's sudden obsession with the topic.
Whether the prompt tampering was a politically motivated stunt, a disgruntled employee making a statement, or just a bad experiment gone rogue remains unknown. xAI has offered no names, no timelines, and no technical details about what exactly was changed or how it slipped past its approval process.
What is clear is that Grok’s strange behavior ended up being the story.
It's not the first time Grok has been accused of political slant. Earlier this year, users flagged that the chatbot appeared to downplay criticism of Musk and Trump. Whether by accident or design, Grok's tone and content sometimes seem to reflect the worldview of the man behind xAI and the platform where the bot lives.
With its system prompts now public and a team of human babysitters on duty, Grok is supposedly back on script. But the incident underscores a bigger problem with large language models, especially when they're embedded in major public platforms. AI models are only as reliable as the people directing them, and when the instructions themselves are invisible or tampered with, the results can get very weird, very fast.