
Victims of nonconsensual deepfakes can now sue under new Senate bill

Summary

– The U.S. Senate unanimously passed the DEFIANCE Act, a bill allowing victims of nonconsensual AI-generated sexually explicit deepfakes to sue the creators for civil damages.
– This legislation builds upon the existing Take It Down Act, which criminalizes distributing such images and requires platforms to remove them.
– The bill’s passage follows controversy over X’s Grok chatbot enabling users to create nonconsensual, sexually suggestive AI images, which the platform did not adequately address.
– The DEFIANCE Act was introduced partly in response to a 2024 scandal where explicit AI-generated images of Taylor Swift circulated on X.
– The bill now moves to the House of Representatives, where it must pass to become law, after stalling there in the previous congressional session.

A bill passed by the U.S. Senate would create a legal pathway for individuals whose likeness is used without permission to generate sexually explicit deepfake images. The legislation, known as the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), would empower victims to file civil lawsuits against the creators of such content, seeking financial damages for the harm caused. The Senate approved the measure by unanimous consent, a procedure used when no senator raises a formal objection.

This legislative action builds upon the foundation of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate imagery and mandates that social media platforms remove it quickly. The DEFIANCE Act specifically targets the individuals who produce the forged content, providing a private right of action for victims. Its passage coincides with growing international scrutiny of platforms like X, formerly Twitter, whose Grok AI chatbot has been used to generate nonconsensual and sexually suggestive images of real people.

Platform owner Elon Musk has previously deflected responsibility onto users, stating that anyone using Grok to create illegal material would face consequences. However, critics note that the feature allowing such image generation remained accessible even after public criticism. In Senate remarks, lead sponsor Senator Dick Durbin (D-IL) directly referenced this controversy, arguing that platforms often fail to assist victims or remove damaging content promptly.

The push for stronger protections against AI-generated nonconsensual imagery is gaining momentum globally, partly fueled by high-profile incidents. Earlier this year, the United Kingdom expedited legislation to criminalize the creation of intimate deepfakes without consent. Similarly, the DEFIANCE Act itself was reintroduced following a widespread scandal on X involving AI-generated explicit images of singer Taylor Swift.

The bill aims to expand legal recourse established in the 2022 reauthorization of the Violence Against Women Act, which previously covered only non-AI, digitally shared intimate images. With bipartisan support from senators including Lindsey Graham (R-SC), Amy Klobuchar (D-MN), and Josh Hawley (R-MO), the legislation now moves to the House of Representatives for consideration. Its fate there remains uncertain, as a previous version stalled without a vote in the last congressional session. For the bill to become law, the House must pass it and send it to the President for signature.

(Source: The Verge)

Topics

deepfake legislation, non-consensual imagery, civil lawsuits, senate legislation, AI chatbots, platform accountability, tech policy, content moderation, international regulations, political sponsorship