Columbia Deploys AI to Ease Campus Tensions

▼ Summary
– Columbia University is testing Sway, an AI tool developed by Carnegie Mellon researchers to facilitate one-on-one debates between students with opposing views on controversial topics.
– The potential partnership with Sway aligns with Columbia’s broader efforts to address campus tensions, including a $200 million settlement with the Trump administration to combat antisemitism and restore federal funding.
– Critics at Columbia argue the administration is depoliticizing complex issues by framing them as mere “difficult conversations” and using financial incentives to manage dissent rather than addressing root causes.
– Sway’s development involves partial funding from the US intelligence community, though developers say the data shared is anonymized and public, with no confidential transcripts or identifying details provided.
– The tool measures success through post-discussion quizzes that assess whether opinions shifted and confidence in original views declined; the goal is to make students more open to opposing arguments, not necessarily to change their minds.

In an effort to address rising tensions on campus, Columbia University is exploring the use of artificial intelligence to mediate difficult conversations among students. The institution is currently testing Sway, an AI debate platform designed to connect individuals with opposing viewpoints on sensitive subjects like immigration, racial justice, and the Israel-Palestine conflict. Developed by researchers at Carnegie Mellon University, the tool aims to foster more respectful and productive dialogue.
This initiative arrives amid a period of significant unrest at Columbia, marked by protests, disciplinary actions, and heightened scrutiny from federal authorities. The university recently agreed to a multimillion-dollar settlement linked to combating antisemitism, a move that also restored its eligibility for substantial government funding. As part of that agreement, Columbia committed to improving campus dialogue, a goal that aligns with the potential adoption of tools like Sway.
Sway operates by inserting an AI guide into one-on-one chats, prompting users with challenging questions and suggesting alternative phrasing when language becomes inflammatory. Early testing has involved thousands of students from dozens of universities, with Columbia’s Teachers College now evaluating its suitability for conflict resolution curricula. Developers emphasize that the tool is not intended to change opinions but to make participants more open to opposing arguments and less entrenched in their views.
However, the approach has drawn skepticism from within the Columbia community. Some students and faculty argue that the administration is attempting to depoliticize deeply contextual issues, treating complex disagreements as mere communication problems. One anonymous source noted that the university has a pattern of investing heavily in initiatives that prioritize harmony over substantive engagement, describing it as an effort to “put out fires” rather than address root causes.
Funding and data-sharing aspects of Sway have also attracted attention. The platform receives support from a variety of sources, including philanthropic and educational foundations, as well as indirect backing from U.S. intelligence agencies for related basic research. While developers assert that no confidential or identifying student data is shared, the involvement of government entities raises questions about privacy and influence.
Sway’s effectiveness is measured through post-discussion quizzes that assess changes in perception and understanding. Nearly half of participants report shifting their views after using the tool, though researchers caution that this is not inherently positive: people might move toward misinformation as easily as toward fact. The broader aim, they say, is reducing confidence in rigid beliefs and encouraging intellectual flexibility.
Columbia is not alone in turning to technology for managing discord. Other tools, such as Schoolhouse Dialogues, are being used to evaluate civility in student interactions, sometimes with implications for admissions. To critics like Professor Joseph Howley, these efforts reflect a misguided faith in technological solutions to deeply human problems. He warns against treating AI as a “magic bullet,” arguing that such tools risk undermining the university’s core mission of fostering critical thought and meaningful debate.
Despite these concerns, the push toward automated dialogue facilitation continues, illustrating a growing trend in higher education to seek tech-driven remedies for social and political divisions.
(Source: The Verge)


