
YouTube expands AI deepfake detection tool to all adult users

Summary

– YouTube is expanding its AI likeness detection program to all users aged 18 and older, letting them have the platform monitor for deepfakes of their face.
– The feature uses a selfie scan to detect lookalikes, alerts the user upon a match, and allows them to request removal, though YouTube notes few such requests are made.
– The tool was previously tested with creators and then expanded to public figures such as politicians and journalists; it now gives average users ongoing monitoring of their facial likeness.
– Takedown requests are evaluated based on realism, AI labeling, and unique identification, with exceptions for parody or satire, and the tool only covers facial likeness, not voice.
– Deepfake risks extend beyond celebrities to private citizens, including documented cases of teenagers being deepfaked by classmates and teens suing over AI-generated CSAM.

YouTube is rolling out its AI deepfake detection tool to every user aged 18 and older, a move that significantly broadens who can automatically monitor the platform for unauthorized digital impersonations. The feature, which relies on a selfie-style facial scan to track potential lookalikes, now extends beyond its initial testing pool of content creators, government officials, politicians, journalists, and entertainment figures.

Once a user opts in, the system continuously scans YouTube for videos that match their facial likeness. If a match is found, the platform sends an alert, and the individual can request removal. YouTube has previously noted that the number of such takedown requests has been “very small.” The company evaluates each request under its privacy policy, considering factors like whether the content is realistic, clearly labeled as AI-generated, and whether the person is uniquely identifiable. Exceptions exist for parody and satire, and the tool only monitors facial features, not other identifiers such as a person’s voice. Users can opt out at any time, at which point YouTube deletes their scan data.

The announcement came via YouTube’s creator forum, but spokesperson Jack Malon clarified that there is no strict definition of who qualifies as a “creator” for eligibility. “With this expansion, we’re making clear that whether creators have been uploading to YouTube for a decade or are just starting, they’ll have access to the same level of protection,” Malon stated.

While deepfakes most commonly target celebrities and politicians, the technology poses a growing threat to private citizens. There have been documented cases of teenagers being digitally replicated by classmates, and three teenagers recently filed a lawsuit against xAI, alleging that its Grok chatbot generated child sexual abuse material (CSAM) depicting them. This expansion gives everyday users a more direct line of defense against such misuse.

(Source: The Verge)
