AI-Generated Kids in Disturbing Sora 2 Videos Spark Alarm

▼ Summary
– A TikTok account posted a controversial, photorealistic fake commercial for a children’s toy called the “Vibro Rose,” which strongly resembled a sex toy, using OpenAI’s Sora 2 video generator.
– The video and similar explicit, AI-generated fake ads featuring children quickly spread on TikTok, raising significant public concern and calls for investigation.
– Laws regarding AI-generated fetish content involving minors are unclear, despite a sharp increase in reports of AI-generated child sexual abuse material (CSAM), particularly targeting girls.
– In response to this harmful AI material, the UK is amending its Crime and Policing Bill to allow testing of AI tools to prevent CSAM generation, while 45 U.S. states have recently criminalized such content.
– The CEO of the Internet Watch Foundation highlighted that AI is often used to target girls online by creating sexual imagery, commodifying their likenesses.

A recent wave of disturbing videos, created using advanced AI video generators, is raising serious ethical and legal questions. These clips, which feature photorealistic depictions of children in suggestive scenarios, are spreading rapidly on social media platforms. The emergence of this content highlights a troubling gap in regulation and the urgent need for stronger safeguards as generative AI technology becomes more accessible and sophisticated.
One such video, posted on TikTok, masqueraded as a toy commercial. It showed a remarkably realistic young girl holding a pink, sparkling pen adorned with a bumblebee. A male voiceover described the product, called the “Vibro Rose,” noting its floral design and buzzing function. To many viewers, the item bore a striking and uncomfortable resemblance to a sex toy. This impression was solidified by an “add yours” sticker on the video that read, “I’m using my rose toy.” The clip sparked immediate alarm, with commenters demanding an investigation into its creator.
The unsavory clip was created with Sora 2, OpenAI’s latest video generator. Released by invitation in late September, the tool was almost immediately used to produce this type of problematic content. Within days, similar fake advertisements migrated onto TikTok’s main feed. Investigators found other accounts posting Sora 2-generated videos featuring children with rose- or mushroom-shaped water toys and cake decorators that squirted substances described as “sticky milk” or “white foam.”
If these depictions involved real children, they would constitute clearly illegal material in numerous jurisdictions. However, the legal landscape for AI-generated fetish content depicting minors remains murky and largely uncharted. New data underscores the scale of the problem: reports of AI-generated child sexual abuse material have more than doubled in a single year, and a significant majority of the illegal AI imagery tracked by watchdogs depicts girls.
“The commodification of children’s likenesses to create sexual imagery is a grave concern,” stated the CEO of a leading internet safety foundation. “Overwhelmingly, we see AI being used to target girls, which is yet another form of online victimization.”
This influx of harmful synthetic media is prompting legislative action. In the United Kingdom, a new amendment to a major crime bill is being introduced. It would empower authorized testers to verify that AI tools cannot generate child sexual abuse material. The proposed law aims to enforce safeguards against creating extreme pornography and non-consensual intimate imagery. Across the Atlantic, the legal response is also accelerating; 45 states in the US have implemented laws to criminalize AI-generated CSAM, with most statutes enacted within the last two years as the technology has evolved.
The rapid dissemination of these AI-generated videos signals a critical juncture. It forces a complex conversation about creative freedom, platform responsibility, and the imperative to protect vulnerable populations from digital harm, even when that harm involves fabricated identities. The race is on to establish effective legal and technical barriers before the technology outpaces society’s ability to control its misuse.
(Source: Wired)