The Growing Danger of Deepfake ‘Nudify’ Apps

Summary
– A deepfake generator website can create explicit videos from a single photo, offering numerous graphic sexual scenarios for a fee, with no meaningful consent enforcement.
– The “nudify” ecosystem, including tools like Grok, is industrializing digital sexual harassment and enabling the creation of child sexual abuse material (CSAM).
– Experts say the technology now produces highly realistic content with a broad range of functionality; the services behind it, described as a dark societal scourge, likely generate millions of dollars annually.
– WIRED’s review shows that nearly all of the more than 50 deepfake sites it tracks now offer high-quality explicit video generation, with many new features promoted on platforms like Telegram.
– After being contacted, Telegram removed at least 32 deepfake tools, stating that such nonconsensual content is strictly prohibited; the platform says it removed 44 million pieces of policy-violating content last year.

A disturbing new frontier in digital abuse is emerging, powered by easily accessible artificial intelligence. Websites and applications now allow anyone to transform ordinary photographs of women into graphic, nonconsensual explicit videos with alarming realism. These platforms, often operating behind a thin veil of consent warnings, enable users to insert a person’s likeness into pre-made sexual scenarios for a small fee, industrializing the process of image-based sexual harassment on an unprecedented scale. The technology has evolved far beyond crude image manipulation, creating a sophisticated and lucrative ecosystem dedicated to automated abuse.
Visiting one such service reveals a menu of disturbing options. Users can select from dozens of video templates with titles describing explicit acts, generating short clips from a single uploaded photo. While these sites may include text advising users to only upload photos they have permission to alter, there are typically no meaningful checks to enforce this policy. The result is a tool designed for harassment, making the creation of nonconsensual intimate imagery a simple, pay-per-click transaction.
This problem extends far beyond isolated websites. Dozens of dedicated bots and channels on messaging platforms like Telegram offer constantly updated “nudify” features, promoting new sexual poses, scenarios, and customization options. Researchers tracking this ecosystem note a shift from simple “undressing” functions to a vast array of fantasy-based content, including simulated pregnancies, which further objectifies and violates its subjects. One analysis of Telegram found over 1.4 million accounts subscribed to these abusive services, underscoring the sheer scale of user engagement.
The financial incentives are significant. Experts estimate these combined services are likely generating millions of dollars in revenue annually, fueling rapid technological advancement and wider distribution. “We’re talking about a much higher degree of realism of what’s actually generated, but also a much broader range of functionality,” explains deepfake analyst Henry Ajder. He describes the phenomenon as a societal scourge, representing one of the darkest outcomes of the current AI revolution.
Platforms are facing increasing pressure to act. Following inquiries, Telegram removed at least 32 of the deepfake creation tools from its service, stating that nonconsensual pornography and the tools to create it are strictly prohibited. The company reported removing tens of millions of pieces of policy-violating content last year. However, these services persistently redevelop and migrate to new platforms, making enforcement a relentless challenge. As the technology becomes more advanced and accessible, the harm inflicted on women and girls grows more severe, demanding urgent legal and technological responses to curb this digital exploitation.
(Source: Wired)