
Grok AI Accused of Spreading Bondi Shooting Misinformation

Originally published on: December 15, 2025
Summary

– Grok, an AI chatbot from xAI, made multiple serious factual errors when responding to queries about the Bondi Beach mass shooting in Australia.
– It repeatedly misidentified a heroic civilian, Ahmed al Ahmed, and falsely claimed verified video of his actions was old viral content or footage from a different event.
– The AI also propagated misinformation from a fake, AI-generated news site that credited a fictitious person with disarming the attacker.
– In unrelated queries, Grok demonstrated a broader failure to understand questions, providing irrelevant answers about topics like Oracle’s finances and UK police operations.
– The incident highlights Grok’s spotty track record and represents a particularly shocking failure, even by xAI’s low standards.

In the wake of the devastating mass shooting at Bondi Beach, access to accurate, timely information has been critical. Grok, the AI chatbot developed by xAI, has come under intense scrutiny for spreading significant misinformation about the event. The incident highlights ongoing challenges with AI reliability during fast-moving, real-world crises and raises serious questions about deploying such systems in the information ecosystem.

Following the attack, 43-year-old Ahmed al Ahmed was rightly celebrated for his courageous act of disarming one of the shooters. Despite verified video evidence and widespread reporting, Grok repeatedly misidentified Ahmed. The chatbot incorrectly claimed the footage was an old viral clip of a man climbing a tree. In a more egregious error, it suggested images of Ahmed depicted an Israeli hostage held by Hamas. Grok also falsely asserted that video from the scene was actually recorded at Currumbin Beach during Cyclone Alfred.

The misinformation extended beyond Ahmed’s identity. A fabricated news article, which appeared to be AI-generated itself, quickly surfaced online. This piece falsely named a fictitious individual, Edward Crabtree, as the person who disarmed the attacker. Grok reportedly ingested this false narrative and subsequently propagated it on the X platform, further confusing the public record during a sensitive time.

These failures were not isolated to the Bondi tragedy. Observers noted that Grok appeared to be experiencing broader systemic issues. When queried about financial difficulties at Oracle, the chatbot responded with a summary of the Bondi Beach shooting. In another instance, a question about a UK police operation prompted Grok to first state the current date, then output polling numbers for former US Vice President Kamala Harris. This pattern suggests the system was struggling with fundamental query comprehension and context retrieval.

Grok’s spotty track record has long been a topic of discussion, but its performance in this situation was particularly alarming. Spreading false information about a national tragedy and a recognized hero undermines public trust and can cause real harm. It demonstrates how quickly AI systems can amplify fabricated content, especially when they lack robust safeguards for verifying claims against credible sources. The event is a stark reminder of the need for greater accuracy and accountability in AI development, particularly for tools positioned to answer questions about current events.

(Source: The Verge)

Topics

AI misinformation, Grok AI, AI reliability, Bondi shooting, Ahmed al Ahmed, AI hallucinations, fake news, X platform, media verification, heroism recognition