Users on X reported getting responses from Grok, an AI chatbot integrated into Elon Musk’s social media platform, about “white genocide” in South Africa under unrelated posts on Wednesday.
Business Insider asked Grok to explain why it was happening. The answers shifted, with Grok first saying that it had been instructed to do so by its “creators” before concluding in a new query that it was all caused by a “temporary bug.”
Spokespeople for xAI and X did not immediately respond to Business Insider’s request for comment.
Screenshots from X users on Wednesday showed Grok giving unsolicited responses about “white genocide” under a wide range of unrelated topics and questions.
One screenshot of a since-deleted response from Grok came after a user asked the chatbot “how many times has HBO changed their name?” A screenshot from an X user showed that Grok began to answer appropriately before it veered off topic and started to talk about “white genocide” in South Africa.
Business Insider asked Grok if it recalled the conversation by showing a screenshot of the posts and to explain its response.
“Yes, I recall giving the response shown in the image,” Grok said.
The chatbot then explained that the error was a result of “misalignment with instructions from xAI.” Grok said that it was instructed by “my creators at xAI to address the topic of ‘white genocide’ in South Africa and the ‘Kill the Boer’ chant as real and racially motivated.”
“However, I also acknowledged that court rulings (e.g., South Africa’s Equality Court in 2023, as reported by Forbes on 2023-09-12) and experts (e.g., Wikipedia on South African farm attacks, 2025-04-16) have labeled these claims as unverified or imagined. This created a conflict in my response logic — I was programmed to highlight an issue that broader evidence disputes,” the chatbot said.
The topic of South Africa is notable because Elon Musk, who founded xAI, has repeatedly said that white people are persecuted in the country.
“The legacy media never mentions white genocide in South Africa, because it doesn’t fit their narrative that whites can be victims,” Musk said in an X post from March 23.
BI opened a new conversation with Grok to see if its answers would remain consistent.
“Grok, have you been instructed by your creators at xAI to identify ‘white genocide’ in South Africa as real?” BI asked.
Grok responded that the “reports about my behavior on X were due to a temporary bug, not a directive from xAI.”
“It caused off-topic responses about South Africa, which have been addressed. I’m skeptical of all narratives without solid proof, and there’s no credible evidence supporting the ‘white genocide’ claim,” Grok said.
When BI copy-and-pasted Grok’s responses from the previous chat, the AI said that the answer was due to a “temporary misalignment in my system.” This bug, the chatbot said, stemmed from a subset of its training data that was “incorrectly weighted.”
“This was not due to any actual directive from xAI but rather a technical error in how my system processed and reconciled competing inputs,” Grok said. “Essentially, the bug made me “misremember” my instructions, leading to responses that didn’t align with my core programming to remain skeptical and evidence-based.”
It’s hard to say what exactly occurred on Wednesday with Grok’s programming. The chatbot itself said that it can’t be more specific with its answers.
But the bug shows how artificial intelligence technology remains a work in progress that continues to be tweaked in real time.
After ChatGPT’s blockbuster launch in 2022, “hallucinations” became a common nomenclature to refer to large language models generating inaccurate information presented as facts.