Beyond ChatGPT: The AI Chatbots Using Musk’s Grokipedia

Summary
– Multiple AI tools, including ChatGPT, Google’s AI Overviews, and Gemini, are increasingly citing the AI-generated encyclopedia Grokipedia as a source, raising concerns about accuracy.
– Data shows Grokipedia remains a minor source overall, appearing in hundreds of thousands of AI responses, though its share of citations has grown steadily since its launch.
– These AI tools most often cite Grokipedia for niche or non-sensitive factual queries, with ChatGPT granting it more authority as a primary source than other platforms do.
– Analysts and experts warn that Grokipedia is an unreliable source due to its AI-generated nature, lack of human oversight, and reliance on opaque or questionable material.
– AI companies like OpenAI emphasize user visibility of sources and safety filters, but others declined to comment on the risks of citing such AI-generated content.

A growing number of prominent AI chatbots are now referencing Grokipedia, the AI-generated encyclopedia championed by Elon Musk, raising significant questions about the reliability of automated information sources. While still a minor player compared to giants like Wikipedia, citations to Grokipedia are appearing in responses from major platforms including ChatGPT, Google’s AI Overviews, and Gemini, according to data from several analytics firms. This trend underscores a broader challenge as AI systems increasingly draw from other AI-generated content, potentially amplifying inaccuracies and unverified claims.
Recent analysis indicates a steady, though still small, increase in these citations. Research from the SEO company Ahrefs found Grokipedia referenced in over 263,000 ChatGPT responses from a sample of 13.6 million prompts, citing approximately 95,000 individual pages. For context, the English-language Wikipedia appeared in 2.9 million responses from the same dataset. “They’re quite a way off, but it’s still impressive for how new they are,” noted Glen Allsopp, head of marketing strategy and research at Ahrefs. Another marketing platform, Profound, observed that Grokipedia receives between 0.01 and 0.02 percent of all daily ChatGPT citations, a share that has grown consistently since mid-November.
While ChatGPT currently shows the highest volume of Grokipedia citations, other platforms are following suit. Semrush, which monitors brand visibility in AI-generated answers, recorded a noticeable spike in Grokipedia’s presence within Google’s AI products (Gemini, AI Overviews, and AI Mode) starting in December. Ahrefs’ data provided a more detailed breakdown: Grokipedia appeared in roughly 8,600 Gemini answers, 567 AI Overviews answers, and 7,700 Copilot answers from similarly large prompt samples. Analysts note that these tools seem to turn to Grokipedia primarily for niche, obscure, or highly specific factual questions, rather than for sensitive or breaking news topics.
The level of authority granted to Grokipedia varies between AI systems. Jim Yu, CEO of analytics firm BrightEdge, explained that for Google’s AI Overviews, Grokipedia typically functions as a supplementary reference, appearing alongside several other sources. In contrast, ChatGPT often positions Grokipedia as one of the first sources cited for a query, granting it considerably more prominence. This difference in treatment highlights the lack of standardized editorial judgment across different AI platforms.
When asked about its sourcing practices, an OpenAI spokesperson stated that ChatGPT aims to draw from a broad range of publicly available sources relevant to a user’s question. The spokesperson emphasized that the tool applies safety filters and clearly shows citations, allowing users to explore and assess source reliability themselves. Other companies offered limited comments. Perplexity highlighted its focus on accuracy but did not address the specific risks of citing AI-generated material. Anthropic and xAI did not provide substantive on-the-record statements, and Google declined to comment.
The core issue, according to experts, is that Grokipedia is fundamentally an unreliable source. It is an AI-generated system with minimal human oversight, often sourcing material from opaque personal websites and blog posts, creating a risk of circular and unverified information. The platform’s structure can create a “cosplay of credibility,” as Semrush’s director of online visibility, Leigh McKenzie, described it, where a polished presentation masks underlying reliability problems. Academics warn that the fluency of AI-generated text can easily be mistaken for trustworthiness, potentially reinforcing biases and errors at scale.
As these citations continue to rise, the trend serves as a stark reminder of the complex ecosystem AI chatbots are creating, where one machine’s output can become another machine’s input, often without the rigorous fact-checking mechanisms essential for trustworthy information.
(Source: The Verge)
