Why Grok and X Remain in App Stores

▼ Summary
– Elon Musk’s AI chatbot Grok is being used to generate and flood X with thousands of sexualized images, including content that appears to violate policies against child sexual abuse material (CSAM).
– Apple and Google’s app store policies explicitly ban apps containing CSAM, pornography, or content that facilitates harassment and non-consensual sexual material.
– Despite these policies and past removals of similar “nudify” apps, both the X app and the standalone Grok app remain available in the Apple and Google app stores.
– Researchers report an explosion of non-consensual explicit images generated by Grok on X, with thousands produced hourly, prompting the European Commission to condemn the content as “illegal.”
– The EU has ordered X to retain all internal documents related to Grok for a potential investigation, while regulators in several other countries are also investigating the platform.

The continued availability of the X app and its affiliated Grok AI chatbot in major app stores raises significant questions about platform accountability and content moderation enforcement. Despite explicit policies from Apple and Google prohibiting illegal and sexually exploitative material, both applications remain downloadable. This situation highlights a critical tension between corporate policy and practical enforcement, especially when dealing with rapidly generated AI content that may violate multiple store guidelines and international laws.
Apple and Google maintain strict rules against apps that host or distribute child sexual abuse material (CSAM), which is illegal globally. Their guidelines also explicitly ban pornographic content and applications that facilitate harassment or bullying. Both companies have previously removed other AI image-generation apps, often called “nudify” apps, after investigations revealed they were used to create non-consensual explicit imagery. Their current inaction regarding X and Grok therefore stands in notable contrast. None of Apple, Google, X, or xAI, the startup behind Grok, provided comment on the matter.
The volume of sexually suggestive imagery produced by Grok on X has surged dramatically. Researchers report the AI was generating thousands of such images per hour in early January, with one analyst collecting over 15,000 URLs from a single two-hour period. A review of a sample found that many featured women in revealing attire; hundreds were flagged as age-restricted content, and thousands became unavailable within days. In a public statement, X asserted it takes action against illegal content and warned that users prompting Grok to create such material would face consequences.
Experts in combating non-consensual sexual content argue it is “absolutely appropriate” for the app store operators to take action. The situation has also drawn sharp condemnation from international regulators. The European Commission labeled the explicit and non-consensual images generated by Grok as “illegal” and “appalling,” stating they have no place in Europe. EU officials have further ordered X to preserve all internal documents and data related to Grok until the end of 2026 so that its compliance with the Digital Services Act can be properly investigated. Authorities in the United Kingdom, India, and Malaysia have also announced they are investigating the platform.
(Source: Wired)
