
Deepfake Nudes in Schools: A Growing Crisis

Summary

– Teenage boys worldwide are using accessible “nudify” apps to create fake nude images of female classmates from their social media photos.
– A review found over 600 pupils across 90 schools in 28 countries have been impacted by AI-generated sexual deepfakes since 2023.
– This non-consensual explicit imagery of minors is classified as child sexual abuse material, and schools often lack preparedness to respond.
– Surveys suggest the true scale is larger, with estimates that millions of children may be affected, though many incidents go unreported.
– The analysis, a partnership between WIRED and Indicator, is based on limited public reports and highlights the global reach of this harmful technology.

A disturbing trend often begins with a single image taken from a social media profile. Teenage boys are increasingly using photos of female classmates sourced from platforms like Instagram and Snapchat, feeding them into readily available “nudify” apps to generate fabricated explicit content. These AI-generated deepfakes are then circulated among peers, spreading rapidly throughout school communities and inflicting profound emotional harm on the victims, who report feelings of violation, humiliation, and lasting fear.

What began as isolated incidents a few years ago has escalated into a significant crisis. The technology required to produce this synthetic sexual imagery has become far more accessible and user-friendly. A recent review of publicly reported cases indicates these deepfake sexual abuse incidents have now affected approximately 90 schools globally, with more than 600 students impacted. Since 2023, students in at least 28 countries have been accused of using generative AI to create sexualized deepfakes of their classmates, acts that constitute the production of child sexual abuse material (CSAM) when the subjects are minors.

This analysis underscores the global proliferation of harmful AI tools, some of which generate millions in revenue for their developers. It also reveals a troubling lack of preparedness; many schools and law enforcement agencies are not equipped to handle the complex legal and emotional fallout from these serious abuses. In North America alone, nearly 30 cases have been reported since 2023. These include one incident with over 60 alleged victims, another where the victim faced temporary expulsion, and several where students across multiple schools were targeted concurrently. Dozens more cases have been documented across South America, Europe, Australia, and East Asia.

The true scope of the problem is almost certainly much larger than reported figures suggest. A UNICEF survey estimates that 1.2 million children had sexual deepfakes made of them last year. Separate studies reveal alarming familiarity with the issue among youth: one in five young people in Spain reported being a target, one in eight teens knows someone who has been, and 15 percent of students in a 2024 survey were aware of AI-generated deepfakes connected to their own school.

Publicly reported incidents, which form the basis of most data, represent only a fraction of actual occurrences. Many cases are handled internally by schools or authorities without public disclosure, and reporting is often limited to English-language sources, leaving gaps in understanding the full international picture. The pervasive nature of the crisis leads experts to believe few educational institutions remain untouched. The critical focus must now shift to supporting victims effectively, as the psychological consequences can be severe and long-lasting.

(Source: Wired)
