YouTube to Let Celebrities Remove AI Deepfakes

Summary
– YouTube is expanding its AI deepfake monitoring tool, called likeness detection, to Hollywood celebrities, enabling the removal of unauthorized AI-generated videos.
– Enrolled public figures must submit identification and a selfie video so the system can track AI content featuring their face and let them request takedowns.
– YouTube compares this tool to its Content ID system for copyright, but likeness detection currently does not allow rights holders to monetize videos, though that may change.
– The platform has also introduced a feature allowing creators to officially clone their likeness with AI for use in videos, indicating a parallel trend of authorized digital replication.
– Talent managers see AI deepfakes as a potential engagement tool: some celebrities may allow fan content while others seek removal, and future models will likely focus on compensation.
A new policy from YouTube is extending its AI content monitoring tools to the entertainment industry, offering celebrities a formal process to identify and potentially remove unauthorized deepfake videos. This expansion of the platform’s likeness detection feature marks a significant step in the ongoing battle to manage synthetic media, moving beyond its initial tests with creators and a later rollout to politicians and journalists. The system now covers public figures even if they do not maintain a YouTube channel, providing a centralized mechanism to track AI-generated impersonations.

Enrolled individuals can submit identification and a video selfie to train the system, which specifically scans for facial likenesses rather than voices or other traits. Once content is flagged, the celebrity or their representatives can review it and submit a removal request. However, takedowns are not automatic; each submission is evaluated against YouTube’s privacy policies, and protected uses like parody or satire may remain online. The company has noted that during earlier testing, creators requested the removal of only a “very small” number of videos, suggesting the tool may be used more for oversight than for mass deletion.

This initiative is often compared to YouTube’s longstanding Content ID system for copyrighted material, but with a key distinction. While rights holders can monetize videos that use their copyrighted content, the likeness detection tool currently focuses solely on identification and potential removal. The ability to claim revenue from AI deepfakes, however, appears to be a logical next step as the digital landscape evolves. The entertainment industry is clearly moving toward a model where a person’s digital likeness is treated as a licensable asset.

Recent developments underscore this trend. YouTube itself recently unveiled a tool allowing creators to generate and use AI clones of their own likeness in videos. Major talent agencies like CAA, which supported the likeness detection expansion, are building biometric databases so clients can control and commercialize their digital personas. In a notable case, TikTok personality Khaby Lame entered a deal to license his likeness for product promotion, highlighting the emerging market for digital identity, though such agreements can face complex legal and logistical challenges.

Industry perspectives on this AI proliferation are mixed. Some talent managers view fan-created AI deepfakes as a form of engagement, a new way for audiences to interact with their favorite stars. The approach will likely vary by individual: one celebrity might aggressively remove eligible impersonations, while another might allow them to circulate freely. The central question emerging is not just about control, but about compensation. In the near future, the industry may see a shift where entertainers welcome AI-generated content of themselves, provided they receive a share of the revenue it generates.

(Source: The Verge)