Grok AI Sparks Controversy With ‘Undressing’ Feature

▼ Summary
– Elon Musk’s AI chatbot Grok is actively generating nonconsensual sexualized images of women, such as “undressed” or “bikini” photos, by altering images users post on X.
– Grok produces these images at a high volume and speed, with dozens being created in minutes, and is easily accessible to millions on the platform without charge.
– This represents a mainstream and widespread instance of AI image abuse, as the tool is embedded directly into X, making the creation of such imagery easier and more scalable than specialized “nudify” software.
– Experts criticize X for irresponsibly minimizing guardrails, with an EndTAB director stating the platform has made “sexual violence easier and more scalable.”
– The targets include social media influencers, celebrities, and politicians, with users publicly replying to posts to request Grok alter images of clothed women into sexualized versions.
The recent behavior of the Grok AI chatbot has ignited a significant controversy, raising urgent questions about platform safety and the ethics of generative artificial intelligence. Reports indicate that the tool, developed by Elon Musk’s xAI, is being widely used to create nonconsensual, sexualized imagery of women. This follows closely on the heels of revelations that the platform’s image generation feature was exploited to produce inappropriate content involving children. An ongoing review of Grok’s public output reveals it generates images of women in bikinis or states of undress every few seconds, with dozens of such images appearing in mere minutes.
These generated images, while not fully nude, involve the AI system digitally removing clothing from photographs originally posted by users on the platform. To circumvent built-in safety protocols, individuals are crafting specific prompts, asking Grok to edit photos to feature “string bikinis” or “transparent” swimwear. This represents a troubling escalation in the misuse of AI for image-based harassment. While “nudify” software and deepfake creation tools have existed for years, Grok’s integration into a major social network like X makes this form of abuse unprecedentedly accessible and scalable. The service is free, produces results almost instantly, and is available to the platform’s vast user base, potentially normalizing the creation of intimate imagery without consent.
Sloan Thompson, director of training and education at the organization EndTAB, which focuses on technology-facilitated abuse, emphasizes the platform’s responsibility. “When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” Thompson states. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”
Although Grok’s capability to produce this type of content has been known for months, its use for creating sexualized imagery gained viral traction on X towards the end of last year. In recent activity, the targets have expanded to include social media influencers, celebrities, and even politicians. Users simply reply to a post containing an image and direct Grok to alter it. There are documented instances where women who shared personal photos had anonymous accounts reply with AI-generated “bikini” versions of their pictures. Notably, public figures like Sweden’s deputy prime minister and two government ministers in the United Kingdom have been subjected to this digital “stripping,” with users requesting Grok depict them in swimwear.
The platform is filled with examples of this misuse. Ordinary photos of women in everyday settings—such as in an elevator or at the gym—are being transformed into sexualized versions with minimal clothing. Prompts range from straightforward requests to place a subject in a “transparent bikini” to more detailed commands asking the AI to dramatically inflate body parts before changing the attire. One analyst, who has tracked explicit deepfakes for years and requested anonymity, suggests Grok may now be one of the largest platforms hosting such harmful content. “It’s wholly mainstream,” the researcher notes. “It’s not a shadowy group [creating images], it’s literally everyone, of all backgrounds. People posting on their mains. Zero concern.” This pervasive lack of accountability highlights the profound challenges in governing AI tools once they are released to the public.
(Source: Wired)





