Reuters Event Probes AI’s Remaking of News Landscape
Oxford summit delves into power shifts, newsroom adaptation, and the critical need for informed reporting as generative AI evolves.

Summary
– The Reuters Institute for the Study of Journalism held a conference at the University of Oxford on March 26, 2025, titled “AI and the Future of News 2025,” focusing on the deep impacts of AI on the news industry.
– Key discussions revolved around the shifting balance of power, profit models, and media plurality due to advanced generative AI tools, with insights from experts in research, policy, and publishing strategy.
– Journalists discussed the practical use of AI in newsrooms and the need for critical reporting on AI, emphasizing the importance of understanding AI basics to ask deeper questions and address human rights implications.
– Concrete examples of AI in journalism were showcased, highlighting AI-assisted investigative work, product development, audience engagement, and ethical considerations in newsrooms.
– Research findings on public attitudes toward AI in news revealed varying comfort levels and trustworthiness concerns, while a final panel explored AI’s broader societal impacts on politics, education, and responsible technology development.
At the University of Oxford on March 26, 2025, the Reuters Institute for the Study of Journalism convened experts and journalists for a day-long examination titled “AI and the Future of News 2025.” The gathering moved beyond surface-level discussions to probe the deeper shifts artificial intelligence is forcing upon the news ecosystem.
Setting the stage, Reuters Institute’s Acting Director Mitali Mukherjee and Director of Research Richard Fletcher framed the Institute’s ongoing work in tracking AI’s integration into journalism since 2016. The core tension explored throughout the day revolved around the shifting balance of power, profit models, and the very definition of media plurality as generative AI tools become more sophisticated. A key panel, moderated by Federica Cherubini, Director of Leadership Development at the Institute, tackled this directly, bringing together perspectives from research (Felix Simon, Reuters Institute; Klaudia Jaźwińska, Tow Center), policy (Andrew Strait, Ada Lovelace Institute), and publishing strategy (Matt Rogerson, FT). Discussions centered on the pressing issues of content licensing, the valuation of news data used by AI models, and the strategic tightrope publishers must walk.
The focus then shifted to the craft of journalism itself. How are newsrooms actually using these tools, and how should AI, as a subject, be reported? Eduardo Suárez, the Institute’s Head of Editorial, led a session with journalists Sannuta Raghu (Scroll, India), Katharina Schell (APA, Austria), and Jazmín Acuña (El Surtidor, Paraguay). They shared experiences from diverse newsrooms, highlighting a tendency in current coverage towards hype or fear, often lacking critical angles, particularly concerning human rights implications, as noted by Acuña regarding Latin American reporting. Schell pointed out the need for journalists to grasp the basics of AI to ask more probing questions, moving beyond surface descriptions of AI experiments.
A subsequent panel, moderated by Felix Simon, provided concrete examples of AI implementation. Dylan Freedman showcased AI-assisted investigative work at the New York Times, Liz Lohn detailed AI product development and audience engagement experiments at the FT, and Nathalie Malinarich spoke about the BBC’s deployment of tools like an internal deepfake detector within BBC Verify. These discussions revealed both the potential efficiencies and the complex ethical considerations newsrooms are navigating.
Throughout the conference, research insights grounded the conversations. Senior Research Associate Rasmus Nielsen, along with Richard Fletcher and Amy Ross Arguedas, presented findings on public attitudes toward AI in news, drawing from recent surveys and the Digital News Report. Comfort levels vary significantly depending on the application, and concerns about trustworthiness persist.
The final session broadened the aperture, looking at AI’s societal ripples beyond journalism. Moderated by Mitali Mukherjee, the panel featured Victoria Nash (Director, Oxford Internet Institute), Chris Summerfield (Director, UK AI Safety Institute), and Roxana Radu (Associate Professor, Blavatnik School of Government). They explored AI’s impact on sectors like politics and education, the associated risks, and the ongoing efforts to guide the technology’s development responsibly.
The event underscored the multifaceted challenge AI presents to news organizations, a complex interplay of technological capability, economic pressure, editorial integrity, and public trust, demanding strategic adaptation and critical inquiry from the industry.
AI and the Future of News 2025: Themes
- The Transformative Impact of AI on News Production and Consumption: AI is rapidly changing how news is created, distributed, and consumed, presenting both opportunities and significant challenges for the journalism industry.
- Trust and Reliability in the Age of AI-Generated Content: The increasing prevalence of AI in news raises critical questions about the trustworthiness and accuracy of information, particularly with the emergence of “AI slop” and fabricated content.
- The Value Exchange Between News Publishers and AI Companies: The use of news content for training AI models and for information retrieval in AI-powered tools necessitates a re-evaluation of the value exchange between publishers and technology companies, including licensing agreements and fair compensation.
- Ethical Considerations and the Need for Transparency: The integration of AI into newsrooms demands careful consideration of ethical implications, including bias in algorithms, the role of human oversight, and the need for clear labeling of AI-generated or assisted content.
- The Evolving Relationship Between Platforms and Publishers: The historical power dynamics between news publishers and dominant tech platforms are being reshaped by the rise of AI, requiring new models for collaboration and regulation.
- The Potential for AI to Enhance Journalism and Reach New Audiences: While challenges exist, AI also offers opportunities to improve efficiency, personalize news delivery, fact-check information, detect misinformation, and reach underserved language communities.
- Regulatory Landscape and the Role of Governments: Governments and regulatory bodies are grappling with how to address the challenges and opportunities presented by AI in the news sector, with ongoing debates around copyright, data usage, and market competition.
- Public Perception and Understanding of AI in News: Public awareness and understanding of AI’s role in news are still developing, with varying levels of trust and expectations regarding its impact and responsible use.
AI and the Future of News 2025: Ideas and Facts
1. The Reuters Institute’s Ongoing Research into AI and News:
- The Reuters Institute has been exploring the AI and news relationship since 2016, focusing on evidence-based research, engagement with news leaders, and building context for journalists worldwide.
- Key research areas include AI and trust, the value exchange between AI and news, and international perspectives on AI in journalism.
- The Institute produces original reporting on AI in news, including stories on “AI-generated slop,” seeks expert voices, hosts a podcast, and focuses on international stories beyond the Anglo-American perspective, recognizing the challenges of unequal access to AI.
- “One of the key challenges around AI is the very quick and slippery slope that can form itself between those that have access and those that don’t.”
2. Challenges with Current AI Tools for News Retrieval:
- Research indicates that current AI chatbots and assistants often provide unsatisfactory results for news retrieval.
- Chatbots may refuse to fetch news, return old or inaccurate information, or hallucinate content beyond paywalls.
- “We find a lot of chatbots that get to paywalls, quote content, and then hallucinate beyond the paywall.”
- This is particularly problematic for publishers with paywalled content and for providing accurate information in critical areas like finance and health.
3. The Emergence of Data Licensing Deals and Their Lack of Transparency:
- News publishers are increasingly engaging in data licensing deals with AI companies for training and inference.
- These deals take various forms, including access to historical and future data for model training and live access for information retrieval.
- The terms and value of these deals are often shrouded in secrecy due to commercial interests, creating a “big collective action problem.”
- “In many ways we have absolutely no idea what the value of the data is; it’s very difficult to independently assess it.”
- This lack of transparency makes it difficult for smaller publishers to negotiate fair terms and for regulators to provide guidance or implement effective regulation.
4. Concerns about Copyright and the Power of Big Tech:
- Major AI companies like Google and OpenAI are lobbying for weaker copyright restrictions on AI training, arguing for a “right” to train on publicly available data without significant restrictions.
- “Both Google and OpenAI submitted comments in which they basically called on the government to weaken copyright restrictions on AI training and to codify a right for US AI companies to train on publicly available data largely without restriction.”
- The immense power of large tech companies, particularly those building data centers, gives them significant leverage in negotiations with governments regarding AI regulation and copyright.
- “Those companies now are arguing to the White House that copyright should basically not exist, otherwise China will win.”
5. The British Public’s Adoption and Concerns Regarding AI:
- A significant portion of the British public (61%) reports having used generative AI chatbots and LLM-based products, with younger users showing even higher adoption rates.
- The primary reason for using these tools is “answering information and finding out recommendations about the world around them,” indicating a shift away from traditional search engines.
- There are concerns about the stochastic and probabilistic nature of these systems, leading to inconsistent and potentially inaccurate information, particularly in sensitive areas like medical information.
- Experiments show high rates of errors and fabricated quotes in responses from AI systems when asked for news.
- “Over 51% had some kind of significant error in results, 19% had factual errors, and 13% of responses had quotes that were either completely made up or seriously altered.”
6. The Importance of “Retrieval Augmented Generation” (RAG):
- RAG is seen as a crucial technology for AI search engines to access and cite source publications for information retrieval.
- Developing a functional and commercially sustainable RAG market is essential for news publishers to ensure their content is properly attributed and potentially monetized in AI-powered search results.
- Current RAG products from major tech companies offer varying levels of opt-out possibilities for publishers.
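The RAG mechanics described above can be sketched in miniature: retrieve the most relevant source passages, then assemble a prompt that cites each publication by name so an LLM can ground and attribute its answer. The toy corpus, keyword-overlap scoring, and prompt wording below are illustrative assumptions, not any publisher's or vendor's actual pipeline; a production system would use semantic retrieval and send the prompt to a language model.

```python
# Minimal retrieval-augmented generation (RAG) sketch for news:
# retrieve source passages, then build a prompt that attributes them.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank articles by naive keyword overlap with the query."""
    scored = [(len(tokenize(query) & tokenize(doc["text"])), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(query, docs):
    """Assemble an attributed prompt; a real system would send this to an LLM."""
    sources = "\n".join(
        f'[{i + 1}] {d["publisher"]}: {d["text"]}' for i, d in enumerate(docs)
    )
    return (
        "Answer using ONLY the sources below and cite them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

# Hypothetical two-article corpus for illustration only.
corpus = [
    {"publisher": "Example Times",
     "text": "Central bank raises interest rates to curb inflation."},
    {"publisher": "Example Post",
     "text": "Local football club wins the championship final."},
]

docs = retrieve("Why did interest rates rise?", corpus, k=1)
prompt = build_prompt("Why did interest rates rise?", docs)
print(prompt)
```

The attribution step is what distinguishes this pattern from plain generation: because the prompt carries named, numbered sources, the publisher can in principle be credited (and compensated) when its content surfaces in an answer.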
7. Rethinking Copyright and Focusing on “Directional Innovation”:
- There is a need to move beyond traditional copyright paradigms when considering AI, focusing on the cost, value, and benefit of copyright reform for the current generation of AI products.
- A more pragmatic approach of “directional innovation” is suggested, focusing on developing AI tools with clear societal value rather than pursuing the abstract goal of Artificial General Intelligence (AGI).
8. Lessons from the Relationship with Social Platforms:
- The news industry can draw lessons from its past experiences with social media platforms, approaching the integration of AI with more suspicion regarding promises of increased profits and with “clear eyes” about the actual dynamics at play.
- The shift towards more interpersonal relationships with AI machines, rather than connecting people in online spaces, represents a significant difference from the early days of the internet.
9. Public Opinion on AI in News:
- Few people are currently using generative AI specifically for news, but this is likely to grow with improved usability and integration into everyday tools.
- Public comfort levels with AI in news vary depending on the task and topic, with greater acceptance for tasks like transcription and less for core journalistic functions like fact-checking.
- There is a demand for labeling AI-generated content, although the specifics of what to label require further clarity.
- Most people do not believe AI will make news more worth paying for and expect it to decrease trustworthiness while making it cheaper and more up-to-date.
- Public trust in the news media to use AI responsibly is low, comparable to social media and politics.
10. AI Implementation in Newsrooms: Examples and Ethical Considerations:
- News organizations are experimenting with AI for various purposes, including investigative journalism (data analysis, document summarization), news presentation (generating timelines, simplified versions, social media posts), and detection of deepfakes.
- Ethical principles, such as “human in the loop,” content faithfulness, and transparency, are guiding the development and deployment of AI tools in newsrooms.
- Success metrics for AI adoption in newsrooms include audience engagement (redefined around habit and loyalty), workflow efficiency (doing more with the same resources), and the velocity of innovation.
- Addressing linguistic inequalities is a key challenge, with current AI models often underperforming for non-English languages and lacking nuanced understanding of code-mixed languages and proper nouns. Initiatives are underway to improve data sets and develop language-specific AI tools.
- Rethinking news delivery beyond the traditional article format, using modular “news atoms” and personalized formats, is being explored.
- Alternative text generation for infographics is a valuable application for improving accessibility.
- The pressure on frontline staff to oversee AI-generated content necessitates proper training and empathy.
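The modular “news atom” idea mentioned above lends itself to a small illustration: store a story as typed units and recombine them into different delivery formats such as a timeline or a short summary. The schema, field names, and formats here are hypothetical, sketched only to show the concept, not a published newsroom standard.

```python
# Sketch of modular "news atoms": typed story units recombined
# into different formats (timeline, summary). Illustrative only.
from dataclasses import dataclass

@dataclass
class NewsAtom:
    kind: str       # e.g. "fact", "quote", "timeline_event" (assumed types)
    text: str
    timestamp: str  # ISO date, used for timeline ordering

def as_timeline(atoms):
    """Render only the timeline events, in chronological order."""
    events = sorted(
        (a for a in atoms if a.kind == "timeline_event"),
        key=lambda a: a.timestamp,
    )
    return [f"{a.timestamp}: {a.text}" for a in events]

def as_summary(atoms):
    """Join the factual atoms into a short, tweet-length summary."""
    return " ".join(a.text for a in atoms if a.kind == "fact")[:280]

# Hypothetical story decomposed into atoms.
atoms = [
    NewsAtom("timeline_event", "Regulator opens inquiry.", "2025-01-10"),
    NewsAtom("fact", "The inquiry covers AI training data.", "2025-01-10"),
    NewsAtom("timeline_event", "Preliminary findings published.", "2025-03-02"),
]
print(as_timeline(atoms))
```

Because each atom is typed and timestamped, the same underlying reporting can feed an article, a timeline, or a personalized digest without re-editing the source material.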
11. AI and Society: Broader Implications and Public Perception:
- Public perception of AI in news is influenced by experiences with technology across various sectors.
- Despite criticisms, much of the public holds a “cautious technology optimism” towards digital platforms, with social media being a partial exception.
- Public expectations for the impact of AI vary across sectors, with high expectations for areas like social media, search, science, and news media.
- Trust in different sectors to use AI responsibly also varies significantly, with healthcare and science generally trusted more than social media, news media, and politicians.
- There is a greater public appetite for government regulation of generative AI compared to the more established platform technologies.
- The development of AI is a significant geopolitical issue, requiring careful consideration of national and international implications.
- Education and media literacy are crucial for navigating the AI-driven information landscape.
12. Promising Applications of AI:
- AI has the potential to act as a mediator in debates and facilitate agreement on controversial topics.
- Real-time translation tools offer exciting possibilities for breaking down language barriers in news dissemination and communication.
- AI can empower individuals with coding skills by assisting in writing and understanding code, democratizing innovation.