107,000 Pages Analyzed: Core Web Vitals & AI Search Insights

Summary
– An analysis of more than 107,000 webpages found no strong positive correlation between good Core Web Vitals (CWV) scores and better visibility in AI search systems.
– However, a small negative correlation exists, meaning pages with extremely poor CWV performance, especially slow loading, are less likely to perform well in AI contexts.
– The data shows CWV function as a risk-management constraint in AI search, preventing disadvantage from severe failure, not as a growth lever that creates an advantage.
– Since most pages now meet basic CWV thresholds, simply “passing” does not differentiate content for AI systems, which prioritize factors like clarity and intent alignment.
– The practical strategy is to treat CWV as table stakes by eliminating extreme performance failures on key content, rather than chasing incremental gains across all pages.

Understanding the relationship between Core Web Vitals (CWV) and visibility in AI-driven search requires moving beyond simple pass/fail metrics. A deep analysis of over 107,000 webpages that appear in AI Overviews reveals a nuanced reality. While strong technical performance is foundational, it does not function as a primary ranking lever for AI systems. Instead, the data shows that CWV acts as a critical gatekeeper, preventing content from being penalized rather than propelling it to the top.
Most reporting on Core Web Vitals focuses on averages and thresholds, but this approach can be misleading. When examining the actual distribution of metrics like Largest Contentful Paint (LCP), a clear pattern emerges. The data shows a heavy right skew, meaning most pages cluster in an acceptable range while a small minority are extremely slow outliers. These outliers disproportionately drag down average scores, creating a misleading impression of site-wide problems. A similar distribution appears with Cumulative Layout Shift (CLS), where most pages are stable, but a few exhibit severe instability. AI systems evaluate individual pages and content passages, not abstract site-wide averages, making this distributional view essential.
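A quick way to see why averages mislead for a right-skewed metric is to compare the mean and median. The sketch below uses a small synthetic sample of LCP values, purely for illustration, not data from the study:

```python
import numpy as np

# Synthetic, illustrative LCP values in seconds: most pages cluster in an
# acceptable range, while two extreme outliers sit far out in the right tail.
lcp_seconds = np.array([1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 14.0, 21.5])

print(f"mean LCP:   {lcp_seconds.mean():.2f}s")      # dragged up by the two outliers
print(f"median LCP: {np.median(lcp_seconds):.2f}s")  # reflects the typical page
```

The gap between the two numbers is the right skew the article describes, and it is why page-level or passage-level views matter more than a site-wide average.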
To assess any link between CWV and AI visibility, Spearman rank correlation was used, since the data was not normally distributed. The results were revealing: small negative correlations, roughly -0.12 to -0.18 for LCP and -0.05 to -0.09 for CLS. Coefficients of that size are statistically detectable in a dataset this large but weak in practical terms; they do not indicate that faster or more stable pages consistently achieve better AI visibility. They do, however, point to a critical distinction: the absence of a performance upside, but the clear presence of a downside.
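The article names Spearman rank correlation as the method because it operates on ranks and does not assume normality. A minimal sketch of how such a check could be run with scipy is below; the CSV file and column names (lcp_seconds, cls, ai_visibility) are assumptions for illustration, not the study's actual data:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical export: one row per page with its CWV metrics and some
# AI-visibility measure (e.g. AI Overview inclusion or citation count).
pages = pd.read_csv("pages_with_ai_visibility.csv")

# Spearman works on ranks, so heavy right skew in LCP does not distort it
# the way it would distort a Pearson correlation on raw values.
for metric in ("lcp_seconds", "cls"):
    rho, p_value = spearmanr(pages[metric], pages["ai_visibility"])
    print(f"{metric}: rho={rho:+.2f}, p={p_value:.3g}")
```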
The data does not support the idea that improving CWV scores beyond basic thresholds boosts AI performance. Pages with good scores did not reliably outperform their peers in terms of AI inclusion or citation. The negative correlation, however, is instructive. Pages in the extreme tail of poor performance, especially for LCP, were far less likely to perform well in AI contexts. These severely underperforming pages tend to generate negative user engagement signals, such as higher abandonment rates, which AI systems may interpret as indicators of low quality or poor user satisfaction.
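One way to make the "extreme tail" concrete in an audit of your own pages is to compare AI inclusion for the slowest slice against everything else. The 95th-percentile cutoff and column names below (in_ai_overview as a boolean flag) are illustrative assumptions:

```python
import pandas as pd

pages = pd.read_csv("pages_with_ai_visibility.csv")  # hypothetical export

# Assumed cutoff: treat the slowest 5% of pages by LCP as the "extreme tail".
cutoff = pages["lcp_seconds"].quantile(0.95)
in_tail = pages["lcp_seconds"] > cutoff

# Share of pages appearing in AI Overviews, tail vs. everything else.
inclusion = pages.groupby(in_tail)["in_ai_overview"].mean()
print(inclusion.rename(index={True: "slowest 5%", False: "rest"}))
```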
This leads to a crucial insight: Core Web Vitals function as a constraint, not a growth lever. Good performance does not create a competitive advantage in AI search, but severe failure creates a significant disadvantage. This distinction is easily missed when focusing only on pass rates. One reason a positive correlation fails to materialize is that passing CWV is now commonplace. In the analyzed dataset, a majority of pages already met the recommended thresholds, particularly for CLS. When most content clears the bar, doing so does not differentiate it; it merely keeps it in the running.
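To gauge how commonplace passing is in a given dataset, pass rates can be measured against Google's published "good" thresholds (LCP at or under 2.5 seconds, CLS at or under 0.1). The file and column names below are placeholders:

```python
import pandas as pd

pages = pd.read_csv("pages_with_ai_visibility.csv")  # hypothetical export

# Published "good" thresholds for Core Web Vitals.
passes_lcp = pages["lcp_seconds"] <= 2.5
passes_cls = pages["cls"] <= 0.1

print(f"LCP pass rate: {passes_lcp.mean():.0%}")
print(f"CLS pass rate: {passes_cls.mean():.0%}")
print(f"pass both:     {(passes_lcp & passes_cls).mean():.0%}")
```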
AI systems are fundamentally selecting content based on its ability to explain concepts clearly, align with authoritative sources, and satisfy user intent. Core Web Vitals ensure the user experience does not actively undermine these qualitative factors; they do not substitute for them. Therefore, the strategic role of CWV in an AI-led environment must be reframed. They are a risk-management tool, not a core competitive strategy. Their primary function is to prevent valuable content from being disqualified due to poor technical experience signals.
This reframing has direct practical consequences. Chasing incremental CWV improvements across pages that are already acceptable is unlikely to yield meaningful gains in AI visibility and consumes valuable engineering resources. The strategic priority should shift to targeting the extreme tail of poor performance. Identifying and fixing pages with severe LCP or CLS issues is where the real impact lies, as these pages generate the negative behavioral signals that can suppress an AI system’s trust in the content.
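In practice, that means turning the tail into a concrete worklist. The sketch below flags pages beyond the published "poor" boundaries (LCP over 4 seconds, CLS over 0.25) and ranks the worst first; the file, column names, and cutoffs are illustrative and can be tightened for a specific audit:

```python
import pandas as pd

pages = pd.read_csv("pages_with_ai_visibility.csv")  # hypothetical export

# Pages in the "poor" band for either metric form the extreme tail worth fixing.
severe = pages[(pages["lcp_seconds"] > 4.0) | (pages["cls"] > 0.25)]

# Worst offenders first, so engineering effort goes to the tail rather than
# to marginal gains on pages that already pass.
worklist = severe.sort_values("lcp_seconds", ascending=False)
worklist[["url", "lcp_seconds", "cls"]].to_csv("cwv_fix_worklist.csv", index=False)
print(f"{len(worklist)} pages flagged for remediation")
```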
For brands navigating AI-mediated discovery, the allure of CWV is understandable: they are measurable and actionable. The risk lies in confusing measurability with direct impact. A more disciplined approach is to treat Core Web Vitals as essential table stakes: eliminate extreme failures so your most important content is not undercut by poor technical experience signals. Then redirect focus to the factors AI systems genuinely use to infer value: content clarity, consistency, alignment with user intent, and positive behavioral validation signals.
In summary, the relationship between Core Web Vitals and AI performance is real but limited. There is no strong positive correlation indicating that better scores lead to better visibility. However, a measurable negative relationship exists at the extremes, where severe performance failures are linked to poorer AI outcomes. Core Web Vitals are best understood as a gatekeeper, not a signal of excellence. In the evolving landscape of AI-driven search, this clarity is essential for allocating resources effectively and building a sustainable visibility strategy.
(Source: Search Engine Land)





