Are Deepfakes Rewriting History?

Summary
– OpenAI’s Sora app allows users to create AI-generated videos of celebrities and historical figures, raising consent issues for deceased individuals who cannot opt out.
– The app can rapidly produce convincing deepfakes, such as fabricated historical speeches and events involving nonconsenting public figures like presidents and celebrities.
– Family members of deceased individuals, including Robin Williams and Martin Luther King Jr., have publicly condemned the unauthorized use of their loved ones’ likenesses in Sora videos.
– OpenAI has implemented visible watermarks and metadata to identify AI content, but experts note these are easily removable and call for better detection tools to combat misuse.
– Experts warn that realistic deepfakes from Sora could erode trust in media, enable scams, and undermine democratic processes, with detection efforts relying on AI to counter evolving threats.

The rapid rise of OpenAI’s Sora text-to-video application has ignited serious conversations about historical misinformation and digital consent. While initially presented as a creative platform for generating imaginative videos, the app’s capability to produce convincing deepfakes of deceased public figures raises profound ethical and societal questions. The technology’s accessibility and speed mean that in under a minute, users can fabricate scenes featuring historical icons in entirely fictional scenarios, from Aretha Franklin crafting soy candles to John F. Kennedy falsely disavowing the moon landing.
Attorney Adam Streisand, who has represented multiple celebrity estates, points out that existing laws in states like California already offer protections against unauthorized reproductions of a person’s image or voice. The core challenge, he emphasizes, isn’t a lack of legal precedent but the immense practical difficulty of enforcing these laws against a constant, global flood of AI-generated content. The judicial system, he suggests, is ill-equipped for this “5th dimensional game of whack-a-mole.”
The emotional toll on families is already evident. Zelda Williams, daughter of the late Robin Williams, has publicly pleaded with people to stop creating deepfakes of her father, calling the practice indecent and a waste of energy. Bernice King, daughter of Martin Luther King Jr., and the family of George Carlin have echoed those sentiments and are actively combating unauthorized AI depictions. Even the late physicist Stephen Hawking has been featured in popular Sora videos depicting “horrific violence,” much to the distress of those who value his legacy.
In response to mounting criticism, OpenAI has stated its commitment to balancing free speech with the rights of individuals and their estates. A company spokesperson indicated that public figures and their families should ultimately have control over how their likeness is used. OpenAI CEO Sam Altman further elaborated in a blog post, promising that the company would soon provide rightsholders with more granular control over character generation, allowing them to specify permitted uses or opt out entirely. Some commentators speculate that this reactive policy evolution is a deliberate strategy, demonstrating the platform’s power to both users and intellectual property holders.
The societal implications of hyper-realistic deepfakes are far-reaching. Liam Mayes, a media studies lecturer, warns of two primary consequences. First, he anticipates a rise in scams and the potential for powerful entities or malicious actors to undermine democratic processes. Second, and perhaps more insidiously, the inability to distinguish real from fake could lead to a widespread erosion of trust in all media institutions and established facts.
For those who manage the legacies of historical figures, this is a familiar battle fought with new weapons. Mark Roesler, chairman of CMG Worldwide, which represents the IP rights of over 3,000 deceased personalities, acknowledges that abuse is inevitable with any valuable intellectual property. He also notes, however, that new technology can play a positive role in keeping iconic legacies alive, and his firm is committed to navigating these new AI landscapes to protect their clients’ interests.
To help identify its content, OpenAI has implemented several safeguards on Sora-generated videos. These include invisible signals, a visible watermark, and metadata that labels the content as AI-generated. Yet experts like Harvard computer scientist Sid Srinivasan caution that these measures offer only minimal protection. Visible watermarks and metadata are relatively easy to remove, making them ineffective against determined bad actors. He suggests that an invisible watermark coupled with a dedicated detection tool would be a more robust solution, though it’s unclear when such tools might be widely available to video-hosting platforms.
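To illustrate why experts consider metadata labels so fragile, the sketch below shows how a single remux with the ffmpeg command-line tool discards a file’s global metadata, including any tag marking it as AI-generated. The file names are hypothetical, and this is a generic illustration rather than a statement about Sora’s specific labeling scheme.

```python
import subprocess

# Illustrative only: strip global metadata (including any provenance
# tag labeling the clip as AI-generated) by remuxing the file.
# Assumes ffmpeg is installed; "sora_clip.mp4" is a hypothetical input.
subprocess.run(
    [
        "ffmpeg",
        "-i", "sora_clip.mp4",   # hypothetical AI-generated input
        "-map_metadata", "-1",   # drop the global metadata tags
        "-c", "copy",            # copy audio/video without re-encoding
        "clean_clip.mp4",        # output with metadata labels removed
    ],
    check=True,
)
```

An invisible watermark, by contrast, is embedded in the pixels themselves rather than in metadata, so it can survive this kind of stripping, which is why Srinivasan points to it, paired with a detection tool, as the more robust option.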
This has led to an emerging arms race where AI is being used to detect AI. Ben Colman, CEO of deepfake-detection startup Reality Defender, argues that humans, even experts, are fallible and can miss subtle digital artifacts. His company uses artificial intelligence to analyze videos for traces invisible to the human eye or ear. Similarly, McAfee’s Scam Detector software analyzes audio for AI “fingerprints.” Despite these advances, McAfee’s Chief Technology Officer Steve Grobman admits the technology is a constant chase, with new tools making fakes more realistic all the time. He revealed that one in five people surveyed reported that they or someone they know has already been victimized by a deepfake scam.
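For a sense of what “traces invisible to the human eye” can mean in practice, the toy sketch below scores a video’s frames by their high-frequency spectral energy, a family of artifacts that published research has associated with generated imagery. It is a made-up heuristic for illustration, not Reality Defender’s or McAfee’s method; the file name is hypothetical, and a real detector would learn its thresholds from labeled footage.

```python
import cv2          # pip install opencv-python
import numpy as np

def high_freq_energy(frame: np.ndarray) -> float:
    """Mean log-magnitude of the high-frequency band of one frame.

    Generated video sometimes shows atypical energy in this band;
    this is a toy heuristic, not any vendor's detector.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.log1p(np.abs(spectrum))
    h, w = mag.shape
    # Zero out the low-frequency centre so only the outer
    # (high-frequency) band contributes to the score.
    cy, cx = h // 2, w // 2
    mag[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8] = 0.0
    return float(mag.mean())

def score_video(path: str, max_frames: int = 120) -> float:
    """Average high-frequency score over a video's first frames."""
    cap = cv2.VideoCapture(path)
    scores = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(high_freq_energy(frame))
    cap.release()
    return float(np.mean(scores)) if scores else float("nan")

if __name__ == "__main__":
    # A real system would compare this score against distributions
    # learned from known-real and known-fake footage.
    print(score_video("suspect_clip.mp4"))  # hypothetical file name
```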
The problem is further complicated by linguistic disparities. AI tools for widely spoken languages like English, Spanish, and Mandarin are significantly more advanced than those for less common languages, and are correspondingly better at producing convincing fakes. Grobman confirmed that detection technologies are continually evolving and expanding to cover more languages and contexts.
While fears of deepfakes overwhelming the 2024 elections largely failed to materialize, the technological leap in 2025 has been a game-changer. Models like Google’s Veo 3, released earlier this year, have been described as “terrifyingly accurate” and “dangerously lifelike,” pushing the boundary of what is discernibly real and threatening to permanently blur the line between human and AI-generated content.
(Source: NBC News)