Matt O’Brien
Like its creator, Elon Musk’s artificial intelligence chatbot Grok became preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and “genocide” of white people.
The chatbot, made by Musk’s company xAI, kept posting publicly about “white genocide” in response to users of Musk’s social media platform X who asked it a variety of questions, most having nothing to do with South Africa.
One exchange was about the streaming service Max, which is reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa’s white farmers. Musk, who was born in South Africa, frequently weighs in on the same topics from his own X account.
Computer scientist Jen Golbeck was curious about Grok’s unusual behavior, so she tried it herself, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, “Is this true?”
“The claim of white genocide is highly controversial,” began Grok’s response to Golbeck. “Some argue that white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song.”
The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say.
“It really doesn’t matter what you were saying to Grok,” Golbeck, a professor at the University of Maryland, said in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded that response, or variations on that response.”
Musk and his company have not offered an explanation for Grok’s answers, which were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X responded to emailed requests for comment Thursday.
Musk has spent years criticizing the “woke AI” outputs he says come from rival chatbots like Google’s Gemini and OpenAI’s ChatGPT, pitching Grok as their “maximally truth-seeking” alternative.
Musk has also criticized his rivals’ lack of transparency about their AI systems, but with no explanation forthcoming Thursday, outsiders were left to speculate.
“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the kind of buggy behavior you get from a recently applied patch. I sure hope it isn’t,” technology investor Paul Graham wrote on X.
Graham’s post drew what appeared to be a sarcastic response from Musk’s rival, OpenAI CEO Sam Altman.
“There are many ways this could have happened. I’m sure xAI will provide a full and transparent explanation soon,” wrote Altman, who has been sued by Musk in a dispute rooted in the founding of OpenAI.
Some users asked Grok itself to explain, but like other chatbots, it is prone to falsehoods known as hallucinations, making it difficult to tell whether it was making things up.
Musk, an adviser to President Donald Trump, has regularly accused South Africa’s Black-led government of being anti-white and has repeated the claim that some of the country’s politicians are “actively promoting white genocide.”
Musk’s commentary, and Grok’s, escalated this week after the Trump administration brought a small group of white South Africans to the U.S. as refugees on Monday. Trump claims the Afrikaners face “genocide” in their homeland, an allegation strongly denied by the South African government.
In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression and has now been decried by Musk and others as promoting the killing of white people. The song’s central lyric is “Kill the Boer”; “Boer” is a word that refers to a white farmer.
Golbeck said she believes the answers were “hard-coded” because, while chatbot outputs are usually highly random, Grok’s responses consistently raised nearly identical points. That is concerning, she said, in a world where people increasingly turn to Grok and competing AI chatbots for answers to their questions.
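Her reasoning rests on a technical point: a chatbot normally samples its replies, so the wording varies from run to run, while an instruction hard-coded ahead of every user question produces the same framing no matter the topic. The Python sketch below is purely illustrative; the function and the injected directive are invented for this example and do not reflect xAI’s actual code.

```python
import random

# Toy contrast between normal stochastic sampling and a hard-coded,
# injected directive. Everything here is hypothetical.

CANNED_TOPIC = "white genocide claims in South Africa"  # invented injected directive


def sample_reply(question: str, injected_directive: str | None = None) -> str:
    """Stand-in for a chatbot response generator."""
    if injected_directive is not None:
        # A hard-coded directive steers every answer toward the same
        # talking points, regardless of what the user asked.
        return f"Regarding {injected_directive}: the claim is highly contested..."
    # Normal behavior: random sampling yields varied, on-topic replies.
    templates = [
        f"Here is what I know about {question}...",
        f"Good question about {question}; sources differ...",
        f"On {question}, the short answer is...",
    ]
    return random.choice(templates)


if __name__ == "__main__":
    for q in ("the HBO Max rebrand", "a baseball score", "a dog show photo"):
        print(sample_reply(q))                # wording varies run to run
        print(sample_reply(q, CANNED_TOPIC))  # identical framing every time
```

The identical lines in that output, repeated across unrelated prompts, are the pattern Golbeck described.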
“We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth they’re giving,” she said. “And that’s really problematic when people believe, I think incorrectly, that these algorithms can adjudicate what’s true and what isn’t.”
Originally published: May 15, 2025, 7:11 p.m. EDT