Who Gets News From AI? New Pew Research Reveals the Divide

Summary
– Only 9% of Americans use AI chatbots like ChatGPT or Gemini for news, with most never using them for this purpose.
– Even AI news users struggle to trust the information, with half reporting they regularly encounter content they believe is inaccurate.
– AI has difficulty accurately summarizing or representing news due to its complex and fast-changing nature.
– Major tech companies like Apple and Google have faced issues with their AI news features making significant errors.
– AI’s challenges with news stem from handling unstructured data, differing opinions, and varying article formats.
Most Americans are not turning to artificial intelligence for their daily news, according to recent findings from Pew Research Center. While AI tools have rapidly integrated into fields like finance, software development, and customer support, their adoption as news sources remains remarkably low. Only a small fraction of the population currently relies on chatbots like ChatGPT or Gemini for news content, highlighting a significant gap between AI’s technological capabilities and public acceptance in journalism.
The data reveals that just 9% of American adults regularly use AI chatbots to access news: 2% do so frequently and 7% occasionally, while another 16% turn to AI only rarely and an overwhelming 75% never use it for this purpose. Even among those who do consult AI for news, trust remains a major issue. Approximately one-third of these users report difficulty distinguishing factual information from falsehoods in AI-generated news summaries, and an even larger group, 42%, expressed uncertainty about whether they could reliably determine accuracy at all.
Half of the individuals who get news from AI say they regularly encounter information they believe is incorrect. Interestingly, younger respondents, who generally use AI more broadly, also report a higher likelihood of identifying inaccurate content in AI news outputs. This suggests that while younger generations are more comfortable with the technology, they may also be more critical of its outputs.
The underlying challenge stems from how AI processes information. Structured data and commonly repeated facts, such as historical dates or geographic capitals, are relatively straightforward for AI systems to handle accurately. News, however, presents unique complications: stories develop rapidly, articles may present conflicting perspectives as factual claims, and content structures vary widely between sources. This lack of standardization makes it difficult for chatbots to consistently interpret and summarize news without introducing errors.
Real-world examples underscore these reliability concerns. Apple recently had to disable its AI news summarization feature after the BBC identified significant paraphrasing errors in how it represented an article. Although the feature has since returned to newer Apple devices, it now carries a prominent warning label. The disclaimer states that the “beta feature will occasionally make mistakes that could misrepresent the meaning of the original notification” and explicitly advises users to “verify information” since “summarization may change the meaning of the original headlines.”
Other tech giants have faced similar challenges. Google’s AI Overviews tool once failed to correctly identify the current year, incorrectly stating it was still 2024. Additional investigations in March found that multiple chatbots, including ChatGPT and Perplexity, were misrepresenting news headlines and occasionally fabricating links to articles that do not exist. These incidents collectively demonstrate that AI still struggles to summarize or represent news content reliably, raising important questions about its readiness for widespread use in journalism and information dissemination.
(Source: ZDNET)