AI Search Fails Users 3X More Often Than Google

Summary
– New research confirms Google’s prediction that AI platforms increasingly generate hallucinated links, with AI assistants sending users to broken pages nearly three times more often than Google Search.
– ChatGPT produces the highest rate of fake URLs at 2.38% for all mentioned links and 1% for clicked ones, significantly exceeding Google’s 0.84% and 0.15% rates respectively.
– AI creates fake links primarily due to relying on outdated information that references deleted pages and inventing plausible-sounding URLs that never existed.
– Despite the issue, AI assistants drive only 0.25% of website traffic compared to Google’s 39.35%, making the impact minimal for most websites currently.
– Google’s John Mueller accurately predicted this problem six months ago and advised focusing on better 404 pages rather than overreacting to the limited traffic impact.
A recent analysis of over 16 million web addresses confirms what many in the industry have suspected: artificial intelligence search tools frequently direct users to non-existent or broken pages, far more often than traditional search engines like Google. This issue, often referred to as “hallucinated” or fabricated URLs, is becoming a notable weakness in AI-driven information retrieval systems.
According to a comprehensive study by Ahrefs, AI assistants lead users to error pages nearly three times more frequently than Google Search. The research, which examined millions of link interactions, highlights a growing reliability gap between conventional search and emerging AI platforms.
Among the AI tools evaluated, ChatGPT performed the worst, with 1% of clicked URLs resulting in 404 errors. In contrast, Google maintains a far lower rate of just 0.15%. When considering all suggested links, not only those clicked, the situation deteriorates further, with 2.38% of ChatGPT’s recommendations leading to dead ends. Other AI services like Claude, Copilot, Perplexity, and Gemini showed lower but still concerning error rates, while Mistral emerged as the most reliable, though it drives minimal traffic overall.
Two primary factors explain why AI systems generate these faulty references. First, they often rely on outdated training data, suggesting pages that have since been removed or relocated. Second, and more troubling, they sometimes invent plausible-sounding URLs that have never actually existed. For instance, in examples from Ahrefs’ own domain, AI tools fabricated addresses such as “/blog/internal-links/” simply because they seemed contextually appropriate.
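One inexpensive way for a site owner to catch fabricated paths like the one above is to compare AI-cited URLs against the site's own list of known pages (for example, built from its sitemap) before assuming the links exist. A minimal sketch; the function name, example domain, and sitemap set are illustrative assumptions, not from the study:

```python
from urllib.parse import urlparse

def find_fabricated(ai_cited_urls, known_paths):
    """Return URLs whose path is not among the site's known paths.

    ai_cited_urls: iterable of absolute URLs an AI assistant cited.
    known_paths:   set of valid paths, e.g. parsed from sitemap.xml.
    """
    suspects = []
    for url in ai_cited_urls:
        # Normalize to a single trailing slash so "/a" and "/a/" compare equal.
        path = urlparse(url).path.rstrip("/") + "/"
        if path not in known_paths:
            suspects.append(url)
    return suspects

known = {"/blog/internal-links-guide/", "/blog/seo-basics/"}
cited = [
    "https://example.com/blog/internal-links/",  # plausible but fabricated
    "https://example.com/blog/seo-basics/",      # real
]
print(find_fabricated(cited, known))  # flags only the fabricated URL
```

A sitemap check avoids issuing an HTTP request per link; URLs it flags can then be confirmed with a HEAD request.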
Despite these shortcomings, the overall effect on website traffic remains limited, at least for now. AI assistants account for only about 0.25% of total referral traffic, a tiny fraction compared to Google’s dominant 39.35%. This means that while the error rate is high, the actual volume of affected users is still small. However, as AI adoption grows, the problem could scale accordingly.
The findings align with predictions made earlier this year by Google’s John Mueller, who anticipated an increase in clicked hallucinated links over a six- to twelve-month window. His advice to webmasters has proven prescient: rather than overhauling sites in response to negligible referral numbers, site owners should focus on improving the user experience through better 404 pages, implementing redirects only for URLs that attract meaningful traffic.
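Mueller's "redirect only what matters" advice can be put into practice with a simple threshold over 404 hit counts aggregated from server logs. A sketch under assumed data shapes (the threshold value and dict format are not from the article):

```python
def urls_worth_redirecting(broken_url_hits, min_hits=10):
    """Pick broken URLs that receive enough traffic to justify a redirect.

    broken_url_hits: dict mapping a 404'ing path to its hit count,
    e.g. aggregated from access logs over the last month.
    Returns paths at or above the threshold, busiest first.
    """
    return sorted(
        (path for path, hits in broken_url_hits.items() if hits >= min_hits),
        key=lambda p: -broken_url_hits[p],
    )

hits = {"/blog/internal-links/": 42, "/old-page/": 3, "/blog/seo/": 15}
print(urls_worth_redirecting(hits))  # ['/blog/internal-links/', '/blog/seo/']
```

Everything below the threshold is left to the improved 404 page rather than cluttering the redirect map.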
Looking ahead, the industry expects AI services to refine how they generate and verify links. Until then, content creators and site operators are advised to monitor incoming traffic from AI sources and respond pragmatically, prioritizing usability over panic, given the currently modest impact.
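Monitoring AI-sourced traffic can start with classifying referrer hostnames in access logs. A minimal sketch; the hostname list is an illustrative assumption, not an exhaustive or authoritative registry:

```python
from urllib.parse import urlparse

# Illustrative, not exhaustive: referrer hosts commonly associated
# with AI assistants.
AI_REFERRER_HOSTS = {
    "chat.openai.com",
    "chatgpt.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_ai_referral(referrer_url):
    """True if the request's Referer header points at a known AI assistant."""
    host = urlparse(referrer_url).netloc.lower()
    return host in AI_REFERRER_HOSTS

print(is_ai_referral("https://chatgpt.com/c/abc123"))   # True
print(is_ai_referral("https://www.google.com/search"))  # False
```

Segmenting 404 hits by this flag shows whether broken AI-cited links are actually reaching real users before any remediation effort is spent.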
(Source: Search Engine Journal)





