Tech Giants Ignore AI-Powered Student Cheating

▼ Summary
– AI companies actively target students with promotional offers and discounts to build future user bases for their products.
– AI agents are making cheating easier in education, with some companies like Perplexity appearing to embrace this use in their advertising.
– Educational platforms like Instructure’s Canvas say they cannot block AI agents, framing the problem as philosophical as well as technical.
– Educators are calling for AI companies to take responsibility for how their tools are used instead of blaming students for misuse.
– Companies like OpenAI and Instructure advocate for redefining education with AI but leave enforcement of ethical guidelines to teachers and institutions.

Major technology firms are actively courting the student demographic with free or discounted access to advanced artificial intelligence systems, yet they appear reluctant to address the widespread academic dishonesty these tools enable. Companies like OpenAI, Google, and Perplexity are distributing premium AI subscriptions to students, framing these offers as academic support during critical periods like final exams. Perplexity goes a step further by incentivizing referrals, offering cash rewards for each student who downloads its Comet AI browser.
The adoption of AI tools among teenagers has surged, leaving educators to manage the fallout. Teachers find themselves racing to detect new forms of technological deception, while students risk missing out on fundamental learning experiences. The situation has intensified with the arrival of AI agents, programs designed to autonomously handle online tasks. Although current versions operate slowly, they significantly lower the barrier to cheating on assignments and exams.
Rather than confronting this problem directly, several tech corporations deflect accountability. Perplexity’s marketing strategy seems to embrace its reputation as an academic shortcut. One Facebook advertisement depicted students using the Comet agent to handle multiple-choice homework, while an Instagram post featured an actor informing viewers that the browser could complete quizzes for them. When a social media user demonstrated this exact application, Perplexity’s CEO shared the video with a tongue-in-cheek warning: “Absolutely don’t do this.”
Questioned about these promotional tactics, a Perplexity representative argued that every educational innovation, from the abacus onward, has been exploited for cheating, suggesting that students who cheat are only harming themselves.
In a separate incident, an instructional designer named Yun Moh discovered that a ChatGPT agent could impersonate him during a class introduction assignment. Alarmed by this potential for misuse, Moh contacted Instructure, the company behind the widely used Canvas learning management system. He urged them to block AI agents from masquerading as students, sharing a video that demonstrated an AI completing fabricated coursework.
Instructure’s executive team responded after nearly a month, indicating that preventing AI agent access was not just a technical challenge but a philosophical one. They expressed a commitment to developing “pedagogically-sound” applications of AI that could reduce cheating and increase transparency, rather than imposing outright bans. An Instructure spokesperson later clarified that the company lacks the ability to block external AI agents or control software operating on a student’s personal device.
Moh’s information technology team attempted to identify and block AI agent behaviors, such as unusually rapid quiz submissions, but found the systems could easily alter their patterns, making detection exceptionally difficult.
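The cat-and-mouse problem Moh’s team ran into can be made concrete with a toy example. The Python sketch below is entirely hypothetical: the record fields, threshold, and function names are illustrative assumptions, not Instructure’s or any institution’s actual detection logic. It flags quiz attempts completed implausibly fast for their length, then shows how an agent defeats the check simply by pacing itself.

```python
from dataclasses import dataclass

# Hypothetical quiz-attempt record; field names are illustrative,
# not drawn from any real LMS API.
@dataclass
class QuizAttempt:
    user_id: str
    num_questions: int
    duration_seconds: float

# Assumed floor on plausible human speed; any real value would be tuned.
MIN_SECONDS_PER_QUESTION = 10.0

def looks_automated(attempt: QuizAttempt) -> bool:
    """Flag attempts finished faster than a human plausibly could manage."""
    floor = attempt.num_questions * MIN_SECONDS_PER_QUESTION
    return attempt.duration_seconds < floor

if __name__ == "__main__":
    bot = QuizAttempt("agent-1", num_questions=20, duration_seconds=45)
    print(looks_automated(bot))  # True: 20 questions in 45 s is flagged

    # The evasion problem: an agent that deliberately waits between
    # answers sails under the same threshold, which is why fixed-pattern
    # detection proved so fragile for Moh's team.
    patient_bot = QuizAttempt("agent-2", num_questions=20, duration_seconds=260)
    print(looks_automated(patient_bot))  # False: slowed down, undetected
```

Any static threshold of this kind becomes a target once it is known, which matches the pattern-shifting behavior Moh’s team observed.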
In a contrasting move, Instructure recently collaborated with Google to address a separate cheating concern. After educators reported that a “homework help” shortcut in Google Chrome allowed students to instantly search quiz questions using Google Lens, both companies took action. Google described the feature as a limited test and suspended it following user feedback. However, the company has hinted at future integrations, with one intern-authored blog post praising Lens as a “lifesaver for school.”
Some instructors, like English professor Anna Mills, have observed that AI agents will sometimes refuse to complete academic work, though these safeguards are easily bypassed. Mills illustrated this by directing an AI browser to submit assignments autonomously, describing the current academic environment as “the wild west.”
This regulatory vacuum has prompted educators and professional organizations to demand greater accountability from AI developers. The Modern Language Association’s AI task force, which includes Mills, issued a public statement urging companies to grant instructors more control over how AI tools are used in educational settings.
OpenAI has introduced a “study mode” in ChatGPT that refrains from supplying direct answers, and an executive emphasized that AI should serve as a learning enhancement rather than an “answer machine.” She highlighted a collective responsibility within the education sector to help students use AI ethically and to redesign teaching methods for an AI-integrated world.
Instructure has echoed this sentiment, focusing on “redefining the learning experience” instead of policing tool usage. The company advocates for a “collaborative effort” among developers, schools, teachers, and students to establish norms for responsible AI use.
Ultimately, the practical enforcement of any ethical guidelines developed by committees and corporations will fall to classroom educators. With AI products already in broad distribution and partnership agreements firmly in place, the education system must adapt to a new technological reality with no option of turning back.
(Source: The Verge)