AI-Generated Content Creates False Online Positivity

▼ Summary
– A new study found that approximately 35% of all new websites created between 2022 and 2025 are either AI-generated or AI-assisted.
– The research determined that AI-generated or assisted websites have a positive sentiment score 107% higher than non-AI sites, making online writing artificially cheerful.
– The study confirmed that AI is reducing ideological diversity, with AI websites scoring about 33% higher on tests for semantic similarity than human-made ones.
– Contrary to common assumptions, the analysis did not find evidence that AI writing increases misinformation or reduces linking to external sources.
– The researchers also discovered that, against their own expectations, AI writing was no more generic or uniform in style than human writing.

A new study provides concrete evidence for what many internet users have already sensed: the web is increasingly saturated with AI-generated content. Researchers from Imperial College London, Stanford University, and the Internet Archive analyzed a vast sample of websites created since 2022. Their findings reveal that roughly 35 percent of new websites now rely on AI for writing assistance or full generation. Beyond the sheer volume, this influx is actively reshaping the emotional and intellectual character of online spaces, pushing them toward an artificially cheerful tone.
The research team employed AI detection tools from Pangram Labs to analyze a representative sample of webpages archived by the Wayback Machine. Their investigation tested several hypotheses about the nature of so-called “AI slop.” One key discovery was a dramatic shift in sentiment. Through sentiment analysis, the study found that AI-influenced websites scored, on average, 107 percent higher for positive sentiment compared to purely human-authored sites. The authors attribute this pervasive cheerfulness to the sycophantic nature of large language models: their built-in tendency to please users spills over into a sanitized, relentlessly upbeat online tone.
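The article doesn’t specify which sentiment model the team used. As a rough illustration of how lexicon-based sentiment scoring works in general, here is a minimal sketch; the word lists and the scoring rule are invented for the example and are not the study’s method:

```python
# Toy lexicon-based sentiment scorer (illustrative only; real studies use
# much larger lexicons or trained classifiers).

POSITIVE = {"great", "amazing", "wonderful", "love", "exciting", "cheerful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "boring", "bleak"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: (pos - neg) / total polarity words found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.0  # no polarity-bearing words: neutral
    return (pos - neg) / (pos + neg)
```

Production-grade tools refine every step of this pipeline (negation handling, intensifiers, weighted lexicons), but the core idea is the same: count polarity-bearing words and normalize, yielding a per-document score that can then be averaged across AI-flagged and human-authored sites.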
Another confirmed theory concerns the diversity of ideas. The analysis measured semantic similarity across websites, finding that AI-generated content scored about 33 percent higher than human content. This suggests that AI is reducing ideological diversity, narrowing the range of unique viewpoints and arguments available online as its output proliferates.
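The article doesn’t detail how semantic similarity was measured; in practice this is typically the cosine similarity between vector representations of two texts (usually learned embeddings). The sketch below shows the cosine step itself, using simple term-frequency vectors as a stand-in for real embeddings:

```python
# Cosine similarity over bag-of-words term-frequency vectors.
# Embedding models replace Counter() with dense learned vectors,
# but the similarity computation is identical.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Return cosine similarity in [0, 1] for two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

A corpus-wide average of such pairwise scores is one plausible way to quantify the “33 percent higher” similarity the study reports: the closer the average sits to 1.0, the more the documents say the same things in the same terms.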
Contrary to both expert predictions and public opinion, however, several common fears were not borne out by the data. The study found no significant evidence linking the rise of AI content to an increase in misinformation. Nor did it support the assumptions that AI writing would avoid linking to external sources or adopt a more generic, uniform style. These results surprised the researchers, who had expected to see clear stylistic flattening. “Everyone on the team expected that to be true,” notes Stanford researcher Maty Bohacek. “But we just don’t have significant evidence for that.”
A parallel public poll commissioned by the team revealed that popular assumptions often miss the mark. Most respondents incorrectly believed they would see more fake news and fewer external links as AI content grew, and they anticipated a bland, uniform voice. “It’s interesting to see that people tended to expect the worst outcomes,” Bohacek observes. The research illustrates that the real-world impact of this technological shift can defy both expert forecasts and public anxiety.
This analysis is presented as an initial foray into understanding AI’s complex influence on digital ecosystems. Bohacek describes the work as a jumping-off point for deeper exploration, not a definitive conclusion. As a snapshot, it delivers a distinctly human insight: even with data in hand, predicting how new technologies will ultimately transform our shared spaces remains a formidable challenge.
(Source: Wired)




