Nudify Apps Evade Google and Apple App Store Moderation

Summary
– A study found that Apple and Google’s app stores hosted over 100 AI “nudify” apps, which were downloaded 705 million times and generated $117 million in revenue, despite policies banning such software.
– Both companies removed many of these apps after being contacted by researchers; Apple had also profited from sponsored ads promoting them in search results.
– The apps easily created non-consensual intimate imagery, and some were even marketed to children as young as nine years old.
– This follows a previous report where both app stores were found to be distributing apps from U.S.-sanctioned entities, raising concerns about their enforcement and compliance.
– A major additional concern is that data from these apps, particularly those developed in China, could lead to privacy violations and national security risks if intimate imagery of citizens is handed over to foreign governments.

A new investigation reveals that apps designed to digitally remove clothing from images have proliferated on major app platforms, generating millions in revenue despite clear policies against them. The Tech Transparency Project (TTP) identified more than 100 of these applications available for download, raising serious questions about the effectiveness of content moderation and the potential for widespread harm.
Researchers found 55 such apps on Google Play and 47 in Apple’s App Store. Collectively, these applications have been downloaded an astonishing 705 million times, pulling in approximately $117 million in revenue. This discovery follows recent global attention to similar AI-generated imagery, highlighting a much larger and more entrenched problem within official app marketplaces. The ease with which these tools can create non-consensual intimate imagery is a central concern: free versions readily produced nude or partially nude renders from photos of clothed, AI-generated models.
In response to inquiries, Google stated it is investigating and has already suspended several apps for policy violations. Apple did not provide comment. Notably, the investigation found that Apple was running sponsored ads for these very apps when users searched for banned terms like “nudify,” directly profiting from their promotion; the company removed the offending apps only after being contacted by researchers.
This report builds on earlier TTP findings from December, which showed both platforms hosted apps from developers under U.S. economic sanctions. While those apps were eventually removed, the pattern suggests systemic gaps in enforcement. The failure to consistently apply stated policies creates significant risks for consumers, especially when apps with harmful capabilities are marketed to users as young as nine years old.
Beyond the immediate ethical violations, data privacy presents a grave secondary threat. Many of these apps are developed by entities subject to data-sharing laws in countries like China, meaning non-consensual imagery of individuals could potentially be accessed by foreign governments. This scenario combines a severe privacy breach with potential national security implications.
The core issue appears to be a disconnect between written policy and practical moderation. Platform operators heavily market the safety and trustworthiness of their stores, claiming rigorous review processes. However, the persistent availability of blatantly policy-violating apps indicates these safeguards are not functioning as advertised. For the public, this erosion of trust is compounded by the real-world damage such tools can inflict, from personal trauma to geopolitical data exploitation.
(Source: The Register)





