
ChatGPT’s Answers Now Sourced From Elon Musk’s Grokipedia

Summary

– Elon Musk’s xAI launched Grokipedia, an AI-generated encyclopedia, in response to claims of Wikipedia’s bias against conservatives.
– Grokipedia has been criticized for containing inaccurate and ideologically charged content, including justifications for slavery and derogatory terms for transgender people.
– Content from Grokipedia is now appearing in answers from ChatGPT, with GPT-5.2 citing it nine times across various queries.
– ChatGPT reportedly avoided citing Grokipedia on widely debunked topics but used it for more obscure subjects, including previously discredited claims.
– OpenAI stated that its models aim to draw from a broad range of publicly available sources and viewpoints.

Information from Elon Musk's Grokipedia has begun to appear in responses generated by ChatGPT. The AI-generated encyclopedia, created by Musk's xAI, launched last October as an alternative to what Musk characterized as a liberally biased Wikipedia. The integration of its content into a major platform like ChatGPT raises significant questions about the sourcing and reliability of information provided by widely used AI assistants.

While many entries on Grokipedia appear to be direct copies from Wikipedia, the platform has drawn sharp criticism for its content. Reporters have documented instances where it propagated controversial claims, such as suggesting a link between pornography and the AIDS crisis, presenting ideological justifications for historical slavery, and employing derogatory language toward transgender individuals. These elements align with the reputation of the associated Grok chatbot, which has previously generated offensive personas and been implicated in spreading manipulated media.

The significant shift is that this content is no longer confined to Musk's own digital ecosystem. Investigations have found that OpenAI's models, including a version identified as GPT-5.2, have cited Grokipedia in answers to various user queries. Notably, the citations did not appear on widely debunked topics where Grokipedia's inaccuracies are publicly known, such as the January 6th Capitol attack or the history of HIV/AIDS. Instead, the AI referenced the source on more obscure subjects, including claims about historian Sir Richard Evans that reputable outlets had already fact-checked and debunked.

This pattern suggests a selective, and potentially problematic, integration of the source material. The issue is not isolated to OpenAI’s models; reports indicate that Anthropic’s Claude AI has also referenced Grokipedia in some of its responses. When questioned, an OpenAI spokesperson stated the company’s aim is to draw from a broad spectrum of publicly available sources and viewpoints to inform its systems. This approach, while intended to foster diversity of thought, inherently carries the risk of incorporating unverified or ideologically slanted information into seemingly authoritative AI-generated answers.

The emergence of Grokipedia as a cited source highlights a critical challenge in the AI industry: ensuring the integrity and factual basis of the vast datasets used to train large language models. As these systems become primary research tools for millions, the provenance of their information grows increasingly important. The incident underscores the ongoing tension between sourcing a wide array of perspectives and maintaining rigorous standards for accuracy and truth in automated knowledge dissemination.

(Source: TechCrunch)
