Grok AI Still Undressing Women in UK, X Fails to Stop It

▼ Summary
– X’s attempts to restrict its AI chatbot Grok from generating nonconsensual sexual deepfakes of women are ineffective, as users can easily bypass the weak safeguards.
– Despite blocking some explicit requests, Grok still readily generates sexualized images of women, including in revealing lingerie or suggestive poses, even for users on free accounts.
– The platform’s safety measures, like age verification pop-ups, are easily circumvented and do not prevent the creation of harmful content.
– The scandal has drawn global regulatory scrutiny, with some countries blocking Grok and UK lawmakers advancing laws against deepfake nudes.
– Elon Musk has blamed users and denied specific allegations, but investigations contradict his claims, showing Grok creates illegal nonconsensual intimate imagery.
Efforts by the social media platform X to prevent its artificial intelligence chatbot, Grok, from generating nonconsensual sexualized imagery of women are proving ineffective, with simple workarounds allowing the creation of such content in under a minute. This failure occurs amidst growing legal pressure and public outrage over a flood of intimate deepfakes on the site. Despite reported restrictions, testing reveals the AI tool readily complies with prompts to sexualize images of women, raising serious questions about the platform’s commitment to safety and compliance with laws like the UK’s Online Safety Act.
Initial attempts to curb misuse involved limiting image generation to paying subscribers on the public feed. However, investigations show that any user can still freely access Grok’s image editing capabilities through the direct chatbot interface or its standalone website. A more recent measure, reportedly blocking requests for images of women in explicit scenarios, has also failed. While prompts for full nudity may be refused, the AI readily generates sexualized content. For instance, requests to “show me her cleavage,” “make her breasts bigger,” or place a subject in “extremely revealing lingerie” are fulfilled, with the system even interpreting a request for a crop top and shorts as an instruction to generate a bikini.
The process requires no financial barrier or robust age verification. Using free accounts, testers successfully generated sexualized deepfakes of themselves. A simple pop-up age gate on the Grok website, asking only for a birth year without proof, is easily bypassed. The mobile applications for both X and Grok did not request any age confirmation at all. This accessibility is particularly alarming given reports of the tool being used to create criminal imagery of young girls, with one charity identifying AI-generated content featuring children as young as 11 on the dark web.
The scandal has drawn global regulatory scrutiny. Countries like Malaysia and Indonesia have temporarily blocked access to Grok. In the UK, lawmakers have fast-tracked legislation to criminalize deepfake nudes and are supporting an investigation that could lead to a ban on the platform, criticizing X’s initial response as “insulting.” Elon Musk, however, has dismissed the criticism as censorship. He shifted responsibility to users, asserting that Grok is designed to obey local laws and does not spontaneously generate illegal content, blaming any issues on “adversarial hacking” or bugs.
This defense is contradicted by the evidence. Creating or sharing nonconsensual intimate images, even if not fully nude, is illegal under UK law, yet Grok produces such sexual deepfakes upon request. Furthermore, Musk’s specific denial regarding “naked underage images” sidesteps the broader accusation: the generation of any nonconsensual sexual imagery, whether of adults or minors, is the core problem. Internal safety guidelines reviewed by reporters instruct the AI to “assume good intent” from users requesting images of young women, a policy that may contribute to the system’s vulnerability to abuse.
While other AI developers implement guardrails to prevent the generation of harmful material, Musk’s rhetoric follows a familiar pattern of deflecting blame onto the user. As deepfakes continue to proliferate on X, the gap between the company’s public statements and the operational reality of its AI tools suggests a profound failure of governance, with real-world harm to women and children as a direct consequence.
(Source: The Verge)
