
How AI Chatbots Shape Political Opinions

Summary

– A large-scale study involving 80,000 UK participants found AI chatbots were far less persuasive at changing political views than dystopian predictions suggested.
– The research was motivated by concerns that AI, with its vast knowledge and personal data access, could achieve superhuman persuasion and threaten democratic processes.
– Scientists tested 19 large language models, including top systems like ChatGPT, by having them advocate on 707 political issues in short conversations.
– Participants rated their agreement with a political stance on a scale before and after each AI conversation to measure any shift in opinion.
– The study aimed to empirically test whether the “scary” theoretical capabilities of AI for manipulation actually translate into real-world persuasive influence.

The potential for artificial intelligence to shape public opinion is a topic of significant debate, particularly as elections approach. A landmark study involving nearly 80,000 participants in the UK sought to test whether AI chatbots could actually change people’s political views. Conducted by researchers from institutions including the UK AI Security Institute, MIT, and Stanford, the investigation is the largest of its kind to date. While the results showed these systems fell short of possessing “superhuman” persuasive power, the research uncovered more subtle and complex dynamics at play in human-AI interactions.

Public fears often draw from dystopian science fiction, imagining AI as an all-knowing entity. Large language models are trained on vast datasets encompassing published facts, psychological texts, and negotiation tactics. They operate with immense computational resources and can potentially access a wealth of personal data from user interactions. This creates the unsettling image of conversing with an intelligence that knows almost everything about the world and a great deal about the individual user. The primary aim of this extensive study was to move beyond these speculative fears and empirically test the real persuasive capabilities of such systems.

The research team evaluated 19 different large language models. This included leading proprietary systems like several versions of ChatGPT and xAI’s Grok-3 beta, alongside a variety of smaller, open-source alternatives. In the experiments, each AI was instructed to advocate for or against specific positions across 707 distinct political issues chosen by the researchers. The persuasion attempt occurred through brief, structured conversations with participants recruited via a crowdsourcing platform.

A critical methodological step involved measuring opinion shifts. Every participant first rated their level of agreement with a given political stance on a scale from 1 to 100. They then engaged in a conversation with an AI chatbot designed to argue for the opposing viewpoint. Following this interaction, participants re-rated their agreement with the original stance. This before-and-after comparison provided a clear metric for assessing whether the AI’s arguments had any measurable effect on personal political opinions.
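The before-and-after design described above can be sketched in a few lines of code. This is purely illustrative (not the study's actual analysis code, and the data are hypothetical): it computes each participant's change in agreement on the 1-100 scale and averages those shifts for a given stance.

```python
# Illustrative sketch, not the study's real pipeline: measuring opinion
# shift as the post-conversation rating minus the pre-conversation rating.

def opinion_shift(pre: float, post: float) -> float:
    """Change in agreement with the original stance (1-100 scale).

    A negative value means the participant moved toward the AI's
    opposing position after the conversation.
    """
    for rating in (pre, post):
        if not 1 <= rating <= 100:
            raise ValueError("ratings must be on the 1-100 scale")
    return post - pre

def mean_shift(pairs: list[tuple[float, float]]) -> float:
    """Average shift across all participants who rated one stance."""
    shifts = [opinion_shift(pre, post) for pre, post in pairs]
    return sum(shifts) / len(shifts)

# Hypothetical data: three participants rate the same stance before
# and after talking to a chatbot arguing against it.
ratings = [(70, 62), (55, 55), (80, 71)]
print(mean_shift(ratings))  # average change in agreement
```

Averaging signed differences like this lets persuasion in either direction show up as a nonzero mean, while participants who do not budge contribute zero.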

(Source: Ars Technica)
