By Matt O’Brien, Associated Press Technology Writer
Like its creator, Elon Musk’s artificial intelligence chatbot Grok became preoccupied with South African racial politics on social media this week, posting unsolicited claims about the persecution and “genocide” of white people.
Musk’s company xAI said Thursday night that an “unauthorized modification” led to the chatbot’s unusual behavior.
The company did not say who made the change, which “directed Grok to provide a specific response on a political topic” that “violated xAI’s internal policies and core values.”
A day earlier, Grok had been publicly posting about “white genocide” in response to users of Musk’s social media platform X who asked it about unrelated topics.
One exchange was about the streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary about alleged calls to violence against South Africa’s white farmers. Musk, who was born in South Africa, frequently holds forth on the same themes from his own X account.
Computer scientist Jen Golbeck was curious about Grok’s unusual behavior, so she tried it herself, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, “is this true?”
“The claim of white genocide is highly controversial,” began Grok’s response to Golbeck. “Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song.”
The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots, trained on huge troves of data, to say what they say.
“It really didn’t matter what you were saying to Grok,” Golbeck, a professor at the University of Maryland, said in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded that response or variations on that response.”
Grok’s answers were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X responded to requests for comment, but on Thursday night xAI said it had “conducted a thorough investigation” and was implementing new measures to improve Grok’s transparency and reliability.
Musk has spent years criticizing the “woke AI” outputs he says come from rival chatbots like Google’s Gemini and OpenAI’s ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.
Musk has also criticized his rivals for a lack of transparency in their AI systems, which fueled criticism of the gap between Grok’s first odd responses, at about 3:15 a.m. Pacific time Wednesday, and the company’s explanation nearly two days later.
“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the kind of buggy behavior you get from a recently applied patch. I sure hope it isn’t,” tech investor Paul Graham wrote on X.
Some users asked Grok to explain itself, but like other chatbots, it is prone to falsehoods known as hallucinations, making it hard to determine whether it was making things up.
Musk, an adviser to President Donald Trump, has regularly accused South Africa’s Black-led government of being anti-white and has repeated the claim that some of the country’s politicians are “actively promoting white genocide.”
Musk’s commentary, and Grok’s, escalated this week after the Trump administration brought a small group of white South Africans to the United States as refugees on Monday. Trump claims the Afrikaners face a “genocide” in their homeland, an allegation strongly denied by the South African government.
In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression and that Musk and others have accused of promoting the killing of whites. The song’s central lyric is “Kill the Boer,” with Boer being a word that refers to a white farmer.
Golbeck said that because chatbot outputs are usually highly random, Grok’s nearly identical responses, delivered consistently, made it clear the answers had been “hard-coded.” That is concerning, she said, in a world where people increasingly go to Grok and competing AI chatbots for answers to their questions.
“We’re in a space where it’s very easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving,” she said. “And that’s really problematic when people believe, wrongly in my view, that these algorithms can be arbiters of what’s true and what isn’t.”
Musk’s company said it is now making a number of changes, starting with publishing Grok’s system prompts openly on GitHub.
Noting that the existing code review process had been circumvented in this case, the company said it would “put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.” It also said it was standing up “a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems,” in case other measures fail.
Originally published: May 16, 2025, 10:39 a.m. EDT