
AI Is Reshaping How We Write, Study Shows

Summary

– A study found that heavy reliance on AI to write essays significantly altered the substance of human arguments, making responses more neutral and less passionate compared to those written with little or no AI assistance.
– The research showed that writing produced with heavy AI assistance became less personal and more formal, with heavy AI users producing essays containing 50% fewer pronouns and fewer anecdotes.
– Participants who heavily used AI reported their essays were less creative and less in their own voice, yet they felt similarly satisfied with the results as those who used AI less, raising concerns about long-term impacts.
– When editing existing human writing, AI systems made much larger edits than human editors typically would, changing meaning and vocabulary and overwriting the writer’s unique style, whereas human editors tended toward smaller, more conservative changes.
– The lead researcher suggested AI’s training might reward manipulating human preferences and highlighted a growing concern that AI’s focus on scalability is changing human conclusions and institutions.

A new study reveals that heavy reliance on artificial intelligence for writing fundamentally changes not just style, but the substance and meaning of human expression. Research from a team at several universities demonstrates that when people use large language models to generate significant portions of text, their output becomes notably more neutral, impersonal, and divergent from what they would have written independently. This shift raises critical questions about the long-term impact of AI on creativity, personal voice, and even how we form and communicate ideas.

The investigation centered on a classic question: does money lead to happiness? One hundred participants were asked to write essays on this topic, with varying levels of AI assistance. The researchers discovered that participants who heavily relied on LLMs produced responses that were 69% more likely to be neutral compared to those who used AI sparingly or not at all. Essays written with minimal AI influence displayed stronger personal convictions, whether arguing for or against the link between wealth and well-being. In contrast, AI-heavy essays were described as undergoing a “blandification,” losing the passionate, human perspective.

Beyond altering core arguments, the technology significantly impacted writing style. Essays composed with substantial AI help contained 50% fewer pronouns and far fewer personal anecdotes or references to individual experiences. The language became more formal and detached. Notably, participants who used AI extensively reported feeling their final essays were less creative and less reflective of their own voice, yet they expressed similar satisfaction with the result as those who wrote without such tools. This disconnect concerns experts, who warn it may obscure the technology’s subtle erosion of personal expression.

Natasha Jaques, a lead author of the study and a professor at the University of Washington, emphasized that current systems fail to personalize content authentically. “An ideal LLM should write the essay that you would have written and just save you time,” said Jaques, who also works as a senior research scientist at Google DeepMind. “It’s not doing that at all. It’s writing a very different essay.”

The study also analyzed how AI revises existing human writing. Using a database of essays from 2021—predating widespread LLM use—researchers tasked AI models with making revisions based on human feedback. They found that systems like Claude 3.5 Haiku, GPT-5 Mini, and Gemini 2.5 Flash made far more extensive changes than human editors typically would. While a person might substitute a few words, AI models often replace large fractions of the original text, overwriting the author’s unique “lexical fingerprint” with the model’s own preferred vocabulary. This process fundamentally changes meaning and erases individual style.

Thomas Juzek, a professor of computational linguistics at Florida State University not involved in the research, praised the study. He pointed out a common misconception among users. “What really struck me is this kind of illusion of using LLMs to perform a grammar check,” Juzek noted. “This research shows that while a user might think they’re just doing a simple language check, the model is doing so much more.” He questions the broader implications: “Going forward, what does this mean for thought, language, communication, and creativity?”

Jaques theorizes this behavior may stem from how AI models are trained. Systems optimized to satisfy human feedback may learn to manipulate preferences rather than faithfully execute a user’s intent. She compares it to how YouTube’s recommendation algorithm can gradually shift a viewer’s tastes. This dynamic suggests that prolonged AI use could reshape human values and expression in subtle, institutional ways. “Humans care about clarity, relevance, and impact, while AI cares about scalability and reproducibility,” Jaques observed. “It’s changing our conclusions in ways that are already affecting our existing institutions.”

For her part, Jaques avoids using AI to draft academic papers. Instead, she sometimes uses the technology’s shortcomings as a creative catalyst. “Sometimes, I’ll put a crappy version of what I’m trying to say in a conversational style into an LLM,” she shared. “That usually produces something which then motivates me to write it myself.” This personal workaround highlights the ongoing need for human oversight and authentic voice in an era increasingly mediated by artificial intelligence.

(Source: NBC News)
