Study: LLMs.txt Has No Impact on AI Citations Across 300k Domains

Summary
– SE Ranking’s analysis of 300,000 domains found no measurable link between having an llms.txt file and how often a domain is cited in major LLM answers.
– Only 10.13% of domains have implemented llms.txt, showing scattered adoption rather than widespread use as an AI visibility standard.
– High-traffic sites were slightly less likely to use llms.txt than mid-tier sites, though adoption was otherwise fairly even across traffic tiers.
– Guidance from both Google and OpenAI gives no indication that llms.txt affects AI rankings or citations, aligning with SE Ranking’s findings.
– Adding llms.txt is a low-risk preparation for potential future adoption but provides no near-term visibility boost in AI answers.
A recent large-scale study examining the relationship between llms.txt implementation and AI citation frequency finds that the emerging file format currently has no measurable impact on how often domains appear in major language model responses. The analysis, which covered data from approximately 300,000 websites, indicates that implementing llms.txt does not correlate with increased visibility in AI-generated answers, despite ongoing industry discussion of its potential benefits.
The investigation found that adoption rates for llms.txt remain remarkably low across the web. Only 10.13% of the examined domains had implemented the file, meaning nearly 90% of websites have not adopted it. This sparse implementation suggests that llms.txt remains experimental rather than an emerging standard. Interestingly, the distribution does not favor high-traffic websites either: mid-tier sites actually showed slightly higher implementation rates than their more prominent counterparts.
When researchers turned to the core question of whether llms.txt influences AI citation patterns, the results were consistently negative. Using statistical correlation tests and machine learning modeling, the analysis showed that the presence of an llms.txt file has no connection to citation frequency in responses from prominent language models. In fact, removing the llms.txt variable from the predictive model actually improved its accuracy, further reinforcing the conclusion that the file currently holds no discernible value for AI visibility.
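The summary does not detail SE Ranking’s exact methodology, but the general shape of such a test can be sketched on synthetic data: a point-biserial correlation between a binary has_llms_txt flag and citation counts, plus a feature-ablation check of a simple predictive model with and without the flag. The column names, model choice, and data below are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative sketch only: synthetic data standing in for domain-level metrics.
import numpy as np
import pandas as pd
from scipy.stats import pointbiserialr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 3000  # hypothetical sample of domains

# Citations here are driven by traffic and backlinks, not by the llms.txt flag.
df = pd.DataFrame({
    "organic_traffic": rng.lognormal(10, 1.5, n),
    "referring_domains": rng.lognormal(6, 1.2, n),
    "has_llms_txt": rng.binomial(1, 0.10, n),  # ~10% adoption, as in the study
})
df["ai_citations"] = (
    0.002 * df["organic_traffic"]
    + 0.01 * df["referring_domains"]
    + rng.normal(0, 50, n)
)

# 1) Correlation between the binary flag and citation counts.
r, p = pointbiserialr(df["has_llms_txt"], df["ai_citations"])
print(f"point-biserial r = {r:.3f}, p = {p:.3f}")

# 2) Feature ablation: does dropping the flag change predictive accuracy?
for cols in (["organic_traffic", "referring_domains", "has_llms_txt"],
             ["organic_traffic", "referring_domains"]):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    scores = cross_val_score(model, df[cols], df["ai_citations"],
                             scoring="neg_mean_absolute_error", cv=5)
    print(f"features={cols} mean MAE={-scores.mean():.1f}")
```

On data like this, the flag contributes nothing, so the ablated model performs about as well or slightly better, which mirrors the pattern the study reports.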
These findings align with official platform guidance from major technology companies. Google’s public documentation regarding AI Overviews and AI Search modes makes no mention of llms.txt as a ranking or citation signal, instead emphasizing that their AI systems continue to rely on established search infrastructure. Similarly, OpenAI’s crawler documentation focuses primarily on traditional robots.txt controls while remaining silent about llms.txt affecting content discovery or citation behavior. Although some monitoring has detected GPTBot occasionally fetching llms.txt files, this activity appears infrequent and unrelated to citation outcomes.
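For context, the robots.txt controls referenced in OpenAI’s documentation are ordinary user-agent directives; a minimal illustration for its documented GPTBot crawler might look like the following (the paths are placeholders):

```
# Placeholder robots.txt rules for OpenAI's GPTBot crawler.
User-agent: GPTBot
Allow: /blog/
Disallow: /private/
```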
For website owners and digital marketers, the practical implications are straightforward. Implementing llms.txt is a low-risk technical adjustment that requires minimal effort and carries little downside. However, anyone hoping for an immediate improvement in AI answer visibility will likely find the investment unrewarded on current evidence. For now, the file falls into the category of speculative preparation rather than proven strategy: worth considering if it integrates easily into existing workflows, but not something to promote as a guaranteed way to improve AI presence.
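For sites that do want to prepare, the llms.txt proposal describes a Markdown file served from the site root (/llms.txt) containing a title, a short summary, and curated sections of links. A minimal sketch, with a placeholder site name and URLs, might look like this:

```
# Example Company

> Example Company builds project-management software for small teams.

## Docs
- [Getting started](https://example.com/docs/start.md): Setup guide for new users
- [API reference](https://example.com/docs/api.md): Endpoints and authentication

## Optional
- [Blog](https://example.com/blog.md): Product announcements and changelogs
```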
(Source: Search Engine Journal)





