
Scientists Use Hidden Light Codes to Stop Fake Videos

Summary

– Cornell University scientists developed software that embeds a “watermark” in light fluctuations to detect video tampering, presented at SIGGRAPH 2025.
– Video can no longer be assumed to be a reliable source of truth due to advanced manipulation techniques, according to co-author Abe Davis.
– Deceptive video creators benefit from equal access to authentic footage and low-cost editing tools that produce highly realistic fakes.
– Current forensic techniques, which rely on information asymmetry to detect fakes, struggle to keep pace with the rapid advancement of video manipulation methods.
– Existing digital watermarking tools often lack desirable properties, such as working without camera control or access to original footage, and may fail to distinguish harmless compression from malicious edits.

Detecting manipulated videos has become a major challenge in today’s digital landscape, but researchers may have found a solution hidden in plain sight: light itself. A team from Cornell University has developed software that embeds subtle light-based watermarks into video footage, creating an invisible shield against tampering. The work, presented at SIGGRAPH 2025 and detailed in ACM Transactions on Graphics, could change how we verify video authenticity.

The rise of deepfakes and sophisticated editing tools has eroded trust in video content, making it nearly impossible to distinguish real footage from fabricated scenes. Abe Davis, a Cornell researcher and co-author of the study, emphasizes the urgency of the issue: “Video was once considered undeniable proof, but that’s no longer the case. With today’s technology, anyone can create convincing fake videos, which poses serious risks.”

Current forensic methods struggle to keep pace with rapidly advancing manipulation techniques. While digital watermarks and checksums exist, they often fail to address critical gaps: some require exclusive control over recording devices or access to original files, while others can’t differentiate between harmless compression and malicious edits. The Cornell team’s approach restores information asymmetry in the verifier’s favor by embedding hidden light patterns during recording, patterns imperceptible to the human eye but detectable by specialized software.

Unlike traditional methods, this technique doesn’t rely on post-production analysis. Instead, it integrates the watermark directly into the video’s lighting fluctuations, making it resistant to common editing tricks. If a forger alters the footage, the altered regions no longer carry the expected coded light pattern, exposing the tampering.
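The general idea can be illustrated with a toy numerical sketch. To be clear, this is not the Cornell team’s published algorithm: the ±1 per-frame flicker code, the amplitudes, and the simple correlation score below are all assumptions made for illustration. The point is only that a verifier who knows a secret flicker sequence can correlate it against each frame’s average brightness, and spliced-in footage that was never lit by the coded light correlates poorly.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N_FRAMES = 240

# Hypothetical secret code: a faint pseudorandom brightness modulation
# (+1 or -1 per frame), known only to the verifier.
code = rng.choice([-1.0, 1.0], size=N_FRAMES)

def record(scene_brightness, frame_code, amplitude=0.02, noise=0.005):
    """Simulate recording: scene brightness plus coded flicker plus sensor noise."""
    flicker = amplitude * frame_code
    sensor_noise = rng.normal(0.0, noise, size=frame_code.shape)
    return scene_brightness + flicker + sensor_noise

def verify(frame_means, secret_code):
    """Correlate mean-subtracted per-frame brightness against the secret code."""
    centered = frame_means - frame_means.mean()
    return float(np.dot(centered, secret_code) / len(secret_code))

# A slowly varying scene, recorded under the coded illumination.
scene = 0.5 + 0.02 * np.sin(np.linspace(0, 4 * np.pi, N_FRAMES))
authentic = record(scene, code)

# A spliced clip: the second half is replaced with footage that
# never saw the coded light (its code is all zeros).
tampered = authentic.copy()
half = N_FRAMES // 2
tampered[half:] = record(scene[half:], np.zeros(half))

score_real = verify(authentic, code)
score_fake = verify(tampered, code)
# score_real is noticeably higher than score_fake, flagging the splice.
```

Running the same correlation over a sliding window, rather than the whole clip, would localize which frames were replaced; the real system recovers far richer evidence, but the asymmetry is the same: only someone who knows the code can check for it, and a forger who doesn’t cannot reproduce it.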

The implications extend beyond security: this innovation could restore trust in video evidence, from courtroom testimony to news reporting. As synthetic media grows more convincing, tools like these will be essential to preserving truth in an era of digital deception. The next challenge? Scaling the technology for widespread use while ensuring it stays ahead of increasingly sophisticated forgers.

(Source: Ars Technica)
