Why Young People Who Use AI the Most Hate It the Most

Summary
– Gen Z are among the biggest users of AI chatbots, but polling shows they are also a major part of the cultural backlash against AI, feeling resentful of a future they believe is being forced on them.
– Many young people fear AI is damaging critical thinking and social skills, with 79% concerned it makes people lazier and 65% saying it prevents meaningful engagement with ideas.
– Students are objecting to universities integrating AI into curricula, with some publishing scathing critiques that AI cannot coexist with education and will degrade it.
– Despite frequent use, Gen Z is skeptical of AI hype, aware of its limitations like hallucinating information, and many avoid the tools or disable AI features due to ethical and environmental concerns.
– AI use has become culturally toxic among young people, causing social shame and distrust among peers, with studies showing students view AI use as a “red flag.”
Almost three years have passed since Silicon Valley began aggressively marketing large language model chatbots like ChatGPT as the inevitable future of nearly everything. No group has felt this pressure more acutely than Gen Z.
While it is unsurprising that young people are among the heaviest users of AI chatbot tools, following a long pattern of early tech adoption, polling data tells a more complicated story. Contrary to the optimistic narratives pushed by companies like OpenAI and Google, Gen Z students and workers are at the forefront of a growing cultural backlash against AI. Even as they use these tools, many young people harbor deep resentment toward an AI-centric future they feel is being imposed on them.
“The part that feels scariest to me is the human impact, because it impacts people on an individual level and how they relate to other people, whether that be their ability to have relationships or just basic communication,” said Meg Aubuchon, a 27-year-old art teacher in Los Angeles.
Far from the lazy shortcut-seeker stereotype, Gen Z has voiced some of the most articulate and detailed objections to generative AI. Their attitudes mirror a wider societal pushback against the tech industry, fueling a nonpartisan movement against data centers and threatening politicians and CEOs who support Silicon Valley’s AI frenzy.
Aubuchon says she and many of her peers have chosen to avoid chatbot tools entirely. “It just makes me want to dig my heels into a career where I never have to use AI, even if that’s a career that isn’t going to pay as well,” she told The Verge.
Emerging from academia into a brutal job market, young people face an impossible contradiction. They are told these tools will eliminate millions of jobs, yet also that they must use them to stay competitive. This generation is the first to navigate a world saturated with chatbots and AI-generated content, having already lost years of their youth to the COVID-19 pandemic. All the while, Silicon Valley’s multitrillion-dollar push for AI adoption clashes with their fears of its documented impacts on the environment, disinformation, academic integrity, and social well-being.
Sharon Freystaetter, 25, studied computer science and worked as a cloud infrastructure engineer at a major Silicon Valley company. But as AI hype intensified, she left the company due to ethical concerns and anxiety over the environmental toll of data centers. Now working in food service in New York, she avoids chatbots and disables AI features whenever possible.
“I think everyone in my immediate peer group is not using AI and is actively against it, besides my friends who are in computer science and are essentially mandated to use it,” Freystaetter said. “When I came back and started to look around [for tech jobs], suddenly everything was saying ‘You need to use AI to get this job’ in the requirements.”
Fears that chatbots are eroding critical thinking and social skills are widespread among young adults, even as a majority admit to using them regularly. A recent Harvard-Gallup study found that 74 percent of young adults in the U.S. use a chatbot at least once a month, and another study revealed that more than half of U.S. college students use the tools for coursework weekly. Yet 79 percent expressed concern that AI makes people lazier, and 65 percent said chatbots promote instant gratification over real understanding.
In a more recent Gallup poll, Gen Z’s optimism about AI hit a new low: only 18 percent say they are hopeful about the technology, down from 27 percent last year, and only 22 percent say they are excited, down from 36 percent. The number who believe AI’s risks outweigh its benefits has jumped 11 points to nearly 50 percent. And while 56 percent say the tools help them finish work faster, eight in 10 admit that using AI in this way makes actual learning harder in the future.
Compounding the issue, many universities are awkwardly integrating AI into their curricula, consolidating departments into new “AI” majors and signing multimillion-dollar deals with companies like OpenAI and Anthropic. Meanwhile, graduates enter a job market where AI tools opaquely filter out their applications, making the process feel nearly impossible.
Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), says this inundation of AI hype is driving resentment among students.
“Universities are hearing from employers that they want students who know how to use these tools,” Hanna said. “This is not because the tools actually have shown much value-add; they want Gen Z to show them where the value-add is. That, or the university is investing or has donors heavily involved in the supply side.”
In essence, AI companies and universities are taking an “integrate first, find use cases later” approach that turns students into marketing for the AI industry. At Arizona State University, for instance, the administration is testing a beta tool called ASU Atomic that automatically synthesizes professors’ lectures into bite-sized materials, as reported by 404 Media.
Last month, the editorial board of the University of Pennsylvania’s student newspaper published a scathing critique of the administration’s uncritical AI adoption. While acknowledging widespread student use of chatbots, the authors argued that embracing the technology without clear rules is “only quickening its own demise.”
“AI cannot coexist with education; it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought,” the students wrote. “With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”
In another letter, written on a typewriter, the Oberlin College Luddite Club rejected a similar initiative to experiment with AI-centric education.
“[E]ven one semester of accepted (even encouraged) chat-bot use will jettison our student body down a lazy, irredeemable tunnel of intellectual destruction,” the Oberlin students wrote. “We will not stand by and witness the further atrophying of our liberal arts education. Rather than strengthening Silicon Valley, we build our own skills and generative sweat.”
The fear that chatbot tools will cause a permanent loss of critical thinking is backed by data. A recent MIT Media Lab study found that EEG scans showed decreased brain activity in people who used AI to write essays. Other research has shown that cognitive offloading diminishes skepticism and the ability to discern truth from deception, leading to weakened democratic decision-making.
The fact that so many young people recognize these dangers even as they use the tools suggests they are not buying the hype from AI boosters like OpenAI’s Sam Altman, who has pitched chatbots for everything from writing essays to raising children. Instead, Gen Z appears hyper-aware of the tools’ limitations, from hallucinations to the social and emotional hazards of relying on machines for human advice.
“Altman talks about the technology like it is magic. He has used those words precisely, calling ChatGPT ‘Magic Intelligence in the Cloud,’” said Hanna. “Gen Z is more realistic about what the tools actually can do. They can handle text-based work that they don’t want to do or feel pressured to do. But they are often rather savvy about their limits.”
This holds true even among those who find chatbots useful and aren’t strictly anti-AI.
“I spend a lot of time thinking about this stuff and I’ve personally come to the conclusion that it’s a load of bullshit for outsourcing jobs,” said Emma Gottlieb, a borderline Zoomer-millennial who works in technical sales for a film industry equipment company. She uses AI to quickly sift through large volumes of technical documents but always double-checks the outputs.
“I definitely do double-checks, personally. It’s important because somebody will mislabel an eBay listing for a component part, and then the AI will say it has this feature when it really doesn’t,” said Gottlieb. “I wouldn’t say it’s a significant time-saver, but I think it’s just like fast food: it’s easy, it’s cheap, and it’s there.”
There is another explanation for Gen Z’s stance that isn’t captured in data points: AI use has become culturally toxic. Many young people won’t admit to using it out of social shame. AI-generated visuals and text are frequently ridiculed on social media, and most find it fake and deeply uncool, especially when used to bypass the creative process.
Without clear rules, AI use also breeds distrust within academia, not just between students and professors, but among peers. A University of Pittsburgh study found that students viewed AI use as a “red flag” that makes them “think less” of their peers.
Hanna argues for a more critical approach that “punches up” at the CEOs, marketing teams, and administrations pushing these tools as universal thinking machines, focusing on the material conditions that pressure young people to use them.
“Speaking as an elder millennial, I approach Zoomers who use these tools with a bit more empathy,” said Hanna. “Why do they feel compelled to use them? What material conditions do they face at school such that they are feeling so pressured? Is there a way to offer them another kind of pressure valve? … That’s likely a better place to begin from.”
Freystaetter and Gottlieb both say they are more worried about Gen Alpha and younger generations, who may lose the chance to develop healthy relationships with technology when it becomes mandatory and ubiquitous.
“These are the kids who are growing up with [AI] integrated into everything, and with ease of access,” Freystaetter said. “They grow up not knowing that they should be critical of it, and that they’re being influenced by it.”
(Source: The Verge)