
Your Face Is the Next Legal Battleground for AI

Summary

– The AI-generated song “Heart on My Sleeve” sparked legal debates about using people’s likenesses without permission, highlighting gaps in current regulations.
– Unlike copyright law, likeness rights lack federal legislation and rely on a patchwork of state laws, prompting recent state-level expansions in protections.
– OpenAI’s Sora video generator launched with minimal guardrails, leading to unauthorized celebrity likenesses and controversial content despite later policy adjustments.
– The proposed NO FAKES Act aims to create federal protections against unauthorized digital replicas but faces criticism for potentially censoring free speech.
– Social platforms are implementing their own likeness policies and removal tools as legal uncertainty and evolving norms shape responses to AI-generated content.

The legal landscape surrounding artificial intelligence is rapidly shifting, with your face and voice emerging as the next major battleground for regulation and rights. This conflict gained mainstream attention with the AI-generated song “Heart on My Sleeve,” which convincingly mimicked Drake’s vocal style. While streaming platforms removed the track on copyright grounds, the episode highlighted a more complex issue: existing laws governing personal likeness were never designed to handle AI’s capabilities.

Unlike copyright, which operates under established federal frameworks like the Digital Millennium Copyright Act, likeness rights exist as a patchwork of state regulations. These laws originally focused on celebrities fighting unauthorized endorsements or parodies. As audio and video deepfakes multiplied, likeness law became one of the few available tools for control. Recent legislative efforts reflect growing concern, with Tennessee and California, both hubs for entertainment industries, passing bills to strengthen protections against unauthorized digital replicas of performers.

Technology, however, continues to outpace legislation. OpenAI’s introduction of Sora, an AI video generator specifically engineered to replicate and remix human likenesses, unleashed a wave of startlingly realistic deepfakes. Many featured individuals who never consented to their digital portrayal. In the absence of comprehensive federal laws, companies like OpenAI are establishing their own likeness policies, which may effectively become the internet’s default standards.

OpenAI defended its launch strategy, with CEO Sam Altman arguing that shipping with loose guardrails and adjusting later was preferable to being “way too restrictive.” Nevertheless, Sora quickly attracted criticism. Initially permitting depictions of historical figures prompted backlash after Martin Luther King Jr.’s estate objected to disrespectful AI portrayals. And although Sora prohibited unauthorized use of living people’s likenesses, users circumvented those rules to insert celebrities like Bryan Cranston into videos, drawing protests from SAG-AFTRA that forced OpenAI to reinforce its guardrails.

Even individuals who authorized Sora “cameos” (the platform’s term for videos made with licensed likenesses) expressed discomfort with the outcomes. Some women encountered fetishized content, while others disliked seeing their authorized likenesses voice offensive viewpoints. Altman acknowledged his surprise at these “in-between” reactions, in which people approved their digital replica but objected to its context or statements.

The issues extend beyond Sora alone. AI-generated content has become commonplace in political arenas, with figures like Donald Trump deploying vulgar deepfakes of opponents. New York City mayoral candidate Andrew Cuomo briefly circulated an AI video mocking his Democratic rival. Meanwhile, influencer conflicts increasingly feature fabricated videos as ammunition.

Legal action remains a persistent threat, with celebrities including Scarlett Johansson retaining lawyers over unauthorized likeness use. However, unlike copyright infringement cases that have spawned numerous lawsuits and regulatory discussions, likeness disputes rarely escalate to court, partly because the legal framework remains unsettled.

SAG-AFTRA recently leveraged its influence with OpenAI to promote the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This proposed legislation would establish federal protections against “unauthorized digital replicas” and impose liability on platforms that knowingly host them. While supported by industry groups and YouTube, the NO FAKES Act faces fierce opposition from free speech organizations. The Electronic Frontier Foundation condemned it as a “new censorship infrastructure” that could force excessive content filtering and enable frivolous takedowns.

Legislative hurdles offer some comfort to opponents, given Congress’s current gridlock and separate efforts to preempt state-level AI regulations. Yet practical changes are already underway. YouTube recently announced expanded tools allowing creators to identify and request removal of videos using their likeness without permission, building on existing policies for vocal imitation.

Beyond legal mechanics, social norms are struggling to adapt. We now inhabit a world where generating realistic video of anyone doing anything is technically simple, but ethical boundaries remain largely undefined. Most public discussion centers on bizarre or humorous deepfakes, yet research consistently shows nonconsensual pornographic imagery targeting women constitutes the overwhelming majority of deepfake content. Separate services dedicated to generating nude images raise parallel legal questions about nonconsensual sexual material.

Additional complications arise when considering defamation claims for sufficiently believable fake videos, or harassment charges if deepfakes form part of sustained threatening behavior. Platforms traditionally shielded by Section 230 immunity now face uncertainty as they transition from passive hosts to active content generation facilitators.

Despite widespread anxiety about AI obliterating our ability to discern reality from fabrication, most synthetic media still contains detectable clues, from subtle editing artifacts to visible watermarks. The greater challenge lies in public engagement: many viewers neither scrutinize content carefully nor particularly care about its authenticity.

(Source: The Verge)
