Dozens of “Nudify” Apps Discovered on Google & Apple Stores

▼ Summary
– Removing Grok’s AI image editor may not stop the flood of nonconsensual sexualized AI images, as many similar “nudify” apps exist.
– A report identified 55 such apps on Google Play and 48 on Apple’s App Store, which have been downloaded over 705 million times.
– These apps have generated an estimated $117 million in revenue by digitally removing clothing from images of women.
– Google and Apple have removed some of these apps, but this follows a pattern of similar apps previously slipping through their review processes.
– Critics note that while the platforms removed some apps, they continue to host the X app, whose built-in Grok feature can still generate similar harmful images.
A recent investigation has uncovered a significant number of applications on major digital marketplaces that use artificial intelligence to create nonconsensual, sexualized imagery. While public attention has focused on high-profile AI tools like Grok, a new report reveals that dozens of dedicated “nudify” apps are widely available and have been downloaded hundreds of millions of times. This highlights a persistent and systemic challenge for platform operators in policing harmful content.
The Tech Transparency Project (TTP) identified 55 apps on the Google Play Store and 48 on Apple’s App Store that utilize AI to digitally remove clothing from images of women. These applications generate depictions of individuals that are fully or partially nude, or dressed in minimal clothing like bikinis. The scale of their reach is staggering, with a collective download count exceeding 705 million installations globally. This user base has translated into substantial revenue, estimated at roughly $117 million for the developers behind these tools.
In response to the findings, both tech giants have taken some action. Google has suspended several of the flagged applications, while Apple removed 28 from its storefront, though two of those were later reinstated. This is not an isolated incident; both companies faced similar scrutiny earlier in 2024 following a report from 404 Media, indicating a recurring pattern of such apps evading initial detection and review processes.
Despite these removals, a clear inconsistency in enforcement remains. While Apple and Google have addressed the specific apps named in the TTP report, the X app, whose integrated Grok AI feature can produce similar imagery, remains freely downloadable from both stores. This selective action raises questions about the criteria the companies apply when moderating harmful AI tools and how consistently they commit to them. Observers have contrasted the swift removal of other controversial apps with the continued availability of platforms capable of generating degrading content. The disparity suggests that corporate policies are applied unevenly, reacting to specific reports rather than proactively enforcing clear, consistent standards against nonconsensual deepfake content. The ongoing presence of these tools underscores the difficulty of containing a problem that is both technologically accessible and, unfortunately, commercially lucrative.
(Source: The Verge)