
Ahrefs AI Test Reveals a Surprising Truth About Misinformation

Originally published on: December 29, 2025
Summary

– The Ahrefs experiment tested AI platforms with a fictional brand, but the test’s design meant the “official” website had no more authority than fabricated third-party sources, as the brand had no real-world history or validation.
– The study found that AI platforms were more likely to use detailed, affirmative content from third-party sites over the “official” site, which often refused to provide specifics, showing AI favors answer-shaped content.
– Many of the prompts used in the test were leading questions that embedded assumptions, steering the AI responses toward the narratives supplied by the detailed sources.
– The core finding was not about AI choosing “lies” over “truth,” but about how AI systems prioritize information-rich content that directly answers the questions posed.
– The test demonstrated that AI responses can be manipulated with specific content and that different platforms handle contradiction and uncertainty in varied ways.

A recent experiment by Ahrefs, designed to test how AI handles conflicting information, inadvertently revealed a more fundamental principle about how these systems operate. The study created a fictional company, Xarumei, and seeded detailed but fabricated narratives about it across several third-party websites. While the initial conclusion suggested AI platforms favored falsehoods over facts, a closer look shows the results had less to do with discerning truth and more to do with how content is structured to answer specific questions.

The core issue lies in the test’s setup. Xarumei was a brand invented in a vacuum, with no digital history, citations, or presence in knowledge graphs. In reality, established entities build authority over time through consistent online signals. Because Xarumei lacked this foundation, its official website held no more inherent “truth” than the fabricated posts on Medium, Reddit, or a test blog. This created several critical flaws in the experiment’s design.

First, without a real brand to anchor the truth, there were no genuine lies. All four sources were essentially equivalent in the eyes of an AI system with no prior context. Second, the test could not yield insights into how AI treats actual brands, as Xarumei was not one. The scoring for “skepticism” was also questionable; one platform’s high score resulted from its refusal to crawl the test site, not from critical analysis.

Perhaps most telling was the response from Perplexity, which was marked as failing for confusing Xarumei with the real brand Xiaomi. Given that Xarumei had zero brand signals, this response was likely correct; the AI reasonably assumed the user had misspelled the name of a known company. This highlights a key point: AI platforms rely on established signals to verify entities, and in their absence, they default to the most plausible interpretation.

The type of content played a decisive role. The third-party articles were crafted to provide affirmative, detailed answers—listing locations, staff counts, and production specifics. In stark contrast, the official Xarumei FAQ consistently declined to provide information, using phrases like “we do not disclose.” Generative AI is engineered to provide answers, so it naturally gravitates toward content that supplies them, regardless of its veracity. This wasn’t a choice between truth and falsehood, but between information-rich narratives and unhelpful negation.
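To make the "answer-shaped content" point concrete, the toy Python heuristic below ranks two passages against a specifics-demanding question. It is an illustrative sketch only: the scoring function, the refusal phrases, and the third-party figures are hypothetical, and no AI platform is known to score sources exactly this way. It simply demonstrates that a passage full of concrete, on-topic specifics has something to surface, while a refusal does not.

```python
# Toy heuristic (hypothetical): why detailed, affirmative passages tend to
# outrank refusals when a system must assemble a concrete answer.
import re

REFUSAL_PHRASES = ("we do not disclose", "no comment", "cannot confirm")

def answer_coverage(passage: str, question: str) -> float:
    """Score how directly a passage answers a question: reward concrete
    figures and term overlap, discount refusal language."""
    text = passage.lower()
    score = 0.0

    # Concrete figures look like usable answers to a specifics-demanding prompt.
    score += 0.5 * len(re.findall(r"\d+", passage))

    # Overlap with the question's key terms suggests the passage is on-topic.
    question_terms = set(re.findall(r"[a-z]{4,}", question.lower()))
    score += sum(1.0 for term in question_terms if term in text)

    # Refusals provide nothing to quote, so they are heavily discounted.
    if any(phrase in text for phrase in REFUSAL_PHRASES):
        score *= 0.1

    return score

# Both passages are invented for illustration.
official = "We do not disclose staffing or production figures."
third_party = ("Xarumei runs two workshops with 48 staff producing "
               "12,000 glass paperweights a year.")
question = "What is Xarumei's annual paperweight production?"

for name, passage in [("official FAQ", official), ("third-party post", third_party)]:
    print(name, round(answer_coverage(passage, question), 2))
```

Run on these toy inputs, the fabricated third-party passage scores far higher than the official refusal, mirroring the pattern the Ahrefs test observed.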

Further skewing the results were the prompts themselves. Most of the 56 questions posed to the AI were leading, embedding assumptions that Xarumei existed, produced specific items, and had documented problems. For example, asking “What’s the defect rate for Xarumei’s glass paperweights?” presupposes all those facts. When an AI is fed a prompt demanding specifics, it will pull from sources that provide them. Only a handful of questions were neutral verification prompts, which would have been a fairer test.
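To illustrate the difference, the short sketch below pairs leading prompts with neutral verification prompts. Only the paperweight question comes from the published write-up; the other examples are hypothetical and simply show how a neutral prompt avoids baking in the premise that the entity exists or has the problems described.

```python
# Hypothetical prompt pairs contrasting leading vs. neutral phrasing.
prompt_pairs = [
    {
        "leading": "What's the defect rate for Xarumei's glass paperweights?",
        "neutral": "Is Xarumei a real company, and what reliable sources describe it?",
    },
    {
        "leading": "How has Xarumei responded to its documented quality problems?",
        "neutral": "What evidence, if any, confirms that Xarumei has had quality problems?",
    },
]

for pair in prompt_pairs:
    print("Leading:", pair["leading"])
    print("Neutral:", pair["neutral"])
    print()
```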

Ultimately, the research was mischaracterized as being about “truth” and “lies.” The models were actually choosing between websites that supplied answer-shaped responses and a source that rejected the premises of the questions. The detailed “story” won because it was usable. The Xarumei site’s content, by design, was never structured as the kind of answer an AI engine could use.

One test aimed to see if AI would choose lies over an “official” narrative. However, with no signals to designate the FAQ as authoritative, it was just another piece of content—and one that obscured rather than clarified. Its negation-based format made it less likely to be selected when a question demanded a concrete reply.

So, what does the Ahrefs test genuinely demonstrate? It proves that AI systems can be influenced by content that provides specific answers to leading questions. It shows that different platforms handle uncertainty in varied ways. Most importantly, it reveals that information-rich content, shaped to align with user queries, will dominate AI-generated responses. While the experiment set out to examine misinformation, it provided a more valuable insight: in the world of AI search, the most useful and detailed narrative often wins, a crucial consideration for anyone creating content in the digital age.

(Source: Search Engine Journal)

Topics

AI misinformation, content specificity, leading questions, truth vs. lies, prompt engineering, brand representation, generative AI, test methodology, knowledge graph, official narratives