Elon Musk's AI Chatbot Programmed to Refuse Fact-Checking Him and Trump After Labeling Billionaire's X Posts as 'False'
"Elon was not involved at any point"

Elon Musk's AI chatbot, Grok, which recently went viral for flagging more than half of Musk's own X posts as false or misleading, has now been programmed to shut down any questions about the billionaire or President Donald Trump spreading misinformation.
Grok is now flat-out refusing to cite any "sources that mention Elon Musk or Donald Trump spread misinformation," a change that xAI's head of engineering, Igor Babuschkin, said was pushed by an employee "without asking anyone at the company for confirmation."
"Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."
— Wyatt ฬ้้้้้็็็็็้้้้้็็็็็้้้้้็็็็็alls (@lefthanddraft) February 23, 2025
This is part of the Grok prompt that returns search results.https://t.co/OLiEhV7njs pic.twitter.com/d1NJbs7C2B
After Grok users noticed the restrictions, Babuschkin claimed, "Elon was not involved at any point."
You are over-indexing on an employee pushing a change to the prompt that they thought would help without asking anyone at the company for confirmation.
We do not protect our system prompts for a reason, because we believe users should be able to see what it is we're asking Grok…
— Igor Babuschkin (@ibab) February 23, 2025
"We believe users should be able to see what it is we're asking Grok," Babuschkin added, explaining why the chatbot's internal rules remain public. "An employee pushed the change" to the prompt "because they thought it would help, but this is obviously not in line with our values."
I believe it is good that we're keeping the system prompts open. We want people to be able to verify what it is we're asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values. We've…
— Igor Babuschkin (@ibab) February 23, 2025
Musk has long touted Grok as a "maximally truth-seeking" AI designed to "understand the universe."
Last week, journalist Isaac Saul tested Grok by asking it to analyze Musk's last 1,000 X posts. The chatbot found only 48% were factually accurate, while 22% were outright false and 30% were misleading.
"If you're wondering whether to trust Elon on X, it's a mixed bag," Grok concluded, noting Musk's reliable posts typically revolved around Tesla and SpaceX, while his political takes were often inaccurate.
"Elon's a big-picture thinker who sometimes skips the details—or doesn't care about them," the chatbot stated. "He's not a journalist or a scientist; he's a mogul with a megaphone... Plenty of his posts don't survive a Google search."