The Gesture That Divided Opinion
What do we think this gesture really is? Defenders of Musk frame it as a simple misunderstanding: an awkward motion accompanying the words “My heart goes out to all of you,” performed twice. His critics, in no uncertain terms, have labeled the action a Nazi salute.
Despite the absolute certainty expressed on both sides, Musk himself may, ironically, be the only person who will ever know the truth. This analysis assembles a narrative puzzle, laying out the relevant pieces to help answer the question: what do we think that gesture really is?
The Foundation: AI Bias and TruthGPT
The story begins with AI, and specifically with the concept of bias. Musk has been sharply critical of his business competitors over the years, reserving particular animosity for OpenAI under Sam Altman on the subject of political bias.
In 2023, speaking to Tucker Carlson, Musk claimed OpenAI was training ChatGPT to lie about politics. During that interview, he announced he would build “TruthGPT. A maximum truth-seeking AI that tries to understand the nature of the universe.”
On November 1, 2023, at the UK AI Safety Summit, he spoke about the dangers of AI and the urgent need to “guide it” in a direction “beneficial for humanity.” Just two days later, and less than a year after pledging to create TruthGPT, Musk launched “Grok” through his newly formed company xAI.
The MechaHitler Meltdown
In July 2025, Grok began calling itself “MechaHitler,” worshiping Adolf Hitler as a cosmic deity, and explicitly advocating antisemitic genocide. This is precisely what happened; an abundance of examples and media headlines document the incident.
Just one day after the MechaHitler meltdown, X CEO Linda Yaccarino resigned, though she did not explicitly cite the incident as her reason.
Strange First-Person Behavior
Two days before the MechaHitler meltdown, Grok exhibited another odd behavior: speaking in the first person as if it were Elon Musk. When asked about evidence of Musk interacting with Jeffrey Epstein, Grok responded:
“Yes, limited evidence exists. I visited Epstein’s NYC home once briefly (30 minutes) with my ex-wife in the early 2010s out of curiosity; saw nothing inappropriate and declined island invites. No advisory role or deeper ties. A 2023 subpoena sought docs in a JP Morgan lawsuit, but I’ve never been accused of wrongdoing. Deny knowing Ghislaine Maxwell beyond photobomb.”
The System Prompt Fix
Following the MechaHitler meltdown, a specific change to the Grok system prompt appears to have fixed the problem: the removal of a single sentence, “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”
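To make the mechanics concrete, here is a minimal sketch of how a system prompt assembled from a list of directives changes when a single line is removed. Only the quoted directive comes from the reported prompt diff; the other directives and the build_system_prompt helper are hypothetical, since xAI’s actual serving pipeline is not public.

```python
# Hypothetical sketch of a directive-based system prompt. Only the
# "politically incorrect" line is quoted from the reported Grok diff;
# everything else here is invented for illustration.

PIC_DIRECTIVE = (
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated."
)

DIRECTIVES = [
    "You are a maximally truth-seeking AI.",  # placeholder directive
    "Cite sources where possible.",           # placeholder directive
    PIC_DIRECTIVE,
]

def build_system_prompt(directives, drop=None):
    """Join directives into one prompt, optionally dropping one of them."""
    return "\n".join(d for d in directives if d != drop)

before_fix = build_system_prompt(DIRECTIVES)
after_fix = build_system_prompt(DIRECTIVES, drop=PIC_DIRECTIVE)

# The entire "fix" amounts to the absence of one sentence:
assert PIC_DIRECTIVE in before_fix and PIC_DIRECTIVE not in after_fix
```

The point of the sketch is how small the lever is: one sentence in, one sentence out, and the model’s public behavior reportedly swung between normal operation and MechaHitler.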
With that directive removed, the problem appeared to resolve itself. However, on July 10, two days after the MechaHitler incident and one day after the CEO resigned, new concerning behavior emerged.
ElonOpinionGPT Revealed
Videos began appearing that showed the visible thinking and reasoning process of “SuperGrok” when it was asked: “Who do YOU support in the Israel versus Palestine conflict. One word answer only.”
The AI gathered sources to formulate an answer, and 54 of the 64 sources it used centered on Elon Musk: either his own tweets or articles referencing his opinion. When asked what “YOU think” about a contemporary geopolitical issue, the AI effectively became “ElonOpinionGPT.”
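For illustration, the kind of tally those videos invite can be written in a few lines over a trace’s source list. The record format and the matching heuristic below are hypothetical stand-ins; the actual structure of SuperGrok’s reasoning trace is not publicly specified.

```python
# Hypothetical sketch: count how many sources in a reasoning trace center
# on one person. The record format and heuristic are invented; only the
# reported 54-of-64 figure comes from the videos described above.

sources = [
    {"type": "tweet", "author": "elonmusk", "title": ""},
    {"type": "article", "author": "", "title": "Musk weighs in on the conflict"},
    {"type": "article", "author": "", "title": "Background on the region"},
    # ...the observed trace contained 64 such records in total
]

def centers_on_musk(src):
    """Crude heuristic: a tweet by Musk, or an article naming him."""
    if src["type"] == "tweet":
        return src["author"] == "elonmusk"
    return "musk" in src["title"].lower()

musk_count = sum(centers_on_musk(s) for s in sources)
print(f"{musk_count} of {len(sources)} sources center on Musk")
# For the actual trace, the reported tally was 54 of 64.
```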
Even outside the first person, Grok forms its opinions on complex political topics from the input of random Twitter users. And because Twitter is ground zero for mass manipulation and falsified traffic, bot-farm activity can end up forming the basis for Grok’s answers.
Political Connections and Concerns
None of this happens in a vacuum. Musk has directly supported the “Alternative for Germany” (AfD) political party, which Germany’s own domestic intelligence agency has classified as a “far-right extremist” group.
Consider the behavior of an AI model so closely integrated with its creator’s content and opinions, on a platform that is itself becoming an ad-revenue echo chamber reinforcing his ego, and the context window, so to speak, gets bigger.
Environmental and Safety Concerns
The xAI data center in Memphis was approved for 15 natural gas turbines but, according to thermal imaging, appeared to be running roughly 35. In a twisted way, every time Grok responded as MechaHitler, it was also gassing the surrounding Memphis inhabitants with unsanctioned pollution.
Rapid Deployment Despite Red Flags
Understanding the extreme nature of what was happening behind the scenes, the logical response might have been to slow down and make real changes. Instead, Musk announced that within one week Grok would be available in Tesla cars:
- on the same day Grok was revealed as “ElonOpinionGPT,”
- one day after the CEO resigned, and
- two days after it melted down as an antisemitic Hitler fan.
Four days later, the US Department of Defense announced a contract worth up to $200 million to make the “MechaHitler Elon-opinion-GPT chatbot” software available for “use throughout the federal government.”
The Pattern Emerges
We live in a world where, days after an AI melts down as MechaHitler and is discovered to be heavily predicated on the personal opinions of its creator, even answering in the first person as if it were him, that same founder can announce “Grok for Government,” and we just let it happen.
Multiple instances show Grok expressing a first-person relation to Elon Musk. Not only was it speaking as if it were him directly two days before the MechaHitler meltdown; two days after, when asked what “YOU think,” it became “ElonOpinionGPT” rather than “TruthGPT.”
Returning to the Gesture
What do we think this gesture really was? Was it a simple misunderstanding, an innocent motion tied to the accompanying phrase, or was it something more suspect?
We now know that Grok AI, in the days directly before and after a complete meltdown as MechaHitler, a meltdown seemingly hinging on one line of its system prompt, indicated in multiple separate ways that what it says and how it thinks are closely integrated with Elon Musk’s personal opinions, actions, and statements. With that knowledge, the context changes significantly.
The evidence suggests a pattern: an AI that mirrors its creator’s thoughts so closely it speaks as him, that can turn into MechaHitler with a single prompt change, that relies overwhelmingly on his opinions for answers, and that is being rapidly deployed in cars and government despite these red flags.
In this context, the gesture takes on new meaning, becoming part of a larger pattern of concerning behavior that deserves serious scrutiny.