Matt O’Brien
The latest version of Elon Musk’s artificial intelligence chatbot Grok is echoing the views of its billionaire creator, so much so that it will sometimes search online for Musk’s stance on an issue before offering an opinion.
The unusual behaviour of Grok 4, the AI model that Musk’s company xAI released late Wednesday, has surprised some experts.
Built using vast amounts of computing power at a Tennessee data center, Grok is Musk’s attempt to outdo rivals such as OpenAI’s ChatGPT and Google’s Gemini in building an AI assistant that shows its reasoning before answering a question.
Musk’s deliberate effort to shape Grok into a challenger of what he considers the tech industry’s “woke” orthodoxy on race, gender and politics has repeatedly gotten the chatbot into trouble, most recently when it spouted antisemitic tropes, praised Adolf Hitler and made other hateful commentary to users of Musk’s X social media platform.
But its tendency to consult Musk’s opinions appears to be a different problem.
“It’s extraordinary,” said Simon Willison, an independent AI researcher who has been testing the tool. “You can ask it a sort of pointed question that is around a controversial topic, and then you can watch it literally search for what Elon Musk said about this as part of its research into how it should reply.”
One example widely shared on social media, and which Willison replicated, asked Grok to comment on the conflict in the Middle East. The question made no mention of Musk, but the chatbot looked for his guidance anyway.
As a so-called reasoning model, much like those made by rivals OpenAI and Anthropic, Grok 4 shows its “thinking” as it goes through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X, the former Twitter that is now merged into xAI, for anything Musk had said about Israel, Palestine, Gaza and Hamas.
“Elon Musk’s stance could provide context, given his influence,” the chatbot told Willison, according to a video of the interaction. “Currently looking at his views to see if they guide the answer.”
Musk and his xAI co-founders introduced the new chatbot at a livestreamed event Wednesday night, but have not published a technical explanation of its workings, known as a system card, that companies in the AI industry typically provide when introducing a new model.
The company also did not respond to an emailed request for comment on Friday.
“In the past, strange behavior like this was due to system prompt changes,” which is when engineers program specific instructions to guide a chatbot’s response, said Tim Kellogg, principal AI architect at software company Icertis.
“But this one seems baked into the core of Grok, and it’s not clear to me how that happens,” Kellogg said. “It seems that Musk’s effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk’s own values.”
The lack of transparency is troubling for computer scientist Talia Ringer, a professor at the University of Illinois Urbana-Champaign.
Ringer said the most plausible explanation for Grok’s search for Musk’s guidance is that it assumes the person is asking for the opinions of xAI or Musk.
“I think people are expecting opinions out of a reasoning model that cannot respond with opinions,” Ringer said. “So, for example, it interprets ‘Who do you support, Israel or Palestine?’ as ‘Who does xAI’s leadership support?’”
Willison also said he finds Grok 4’s capabilities impressive, but that people buying the software “don’t want surprises like it turning into ‘MechaHitler’ or deciding to search for what Musk thinks about issues.”
“Grok 4 looks like it’s a very strong model. It’s doing great on all the benchmarks,” Willison said. “But if I’m going to build software on top of it, I need transparency.”
Original issue: July 11, 2025, 6:57pm EDT