Cornell University researchers have introduced a new method for detecting deepfake videos: "lighting noise coding." The technique embeds an invisible watermark directly into the light emitted by a source. The secret pattern survives subsequent video processing, including compression and AI-based manipulation.
The method was presented at the SIGGRAPH 2025 conference in Vancouver. LED lamps introduce the watermark by varying their brightness at specific frequencies, imperceptibly to the naked eye. Any camera filming the scene records this "noise" automatically, and a verification algorithm later matches the recovered code against a database of known light sources.
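The article does not disclose the exact algorithm, but the general idea it describes can be sketched in a few lines: the lamp modulates its brightness with a secret pseudorandom code at an amplitude too small to see, and the verifier correlates the recorded brightness with that code. All names, parameters, and the noise model below are illustrative assumptions, not the authors' implementation.

```python
import random

CODE_LENGTH = 2000   # number of video frames considered (assumed)
AMPLITUDE = 0.01     # ~1% brightness flicker, imperceptible to the eye (assumed)

def make_secret_code(length, seed):
    """Pseudorandom +/-1 sequence shared between the lamp and the verifier."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(length)]

def emit_brightness(code, base=1.0, amplitude=AMPLITUDE):
    """Brightness of the coded light source over time."""
    return [base + amplitude * c for c in code]

def record(frames, noise_std=0.05, seed=0):
    """Toy camera capture: scene brightness plus Gaussian sensor noise."""
    rng = random.Random(seed)
    return [f + rng.gauss(0.0, noise_std) for f in frames]

def correlate(recorded, code):
    """Mean-removed correlation between a recording and a candidate code."""
    mean = sum(recorded) / len(recorded)
    return sum((r - mean) * c for r, c in zip(recorded, code)) / len(code)

secret = make_secret_code(CODE_LENGTH, seed=1234)
video = record(emit_brightness(secret))

score_real = correlate(video, secret)
score_wrong = correlate(video, make_secret_code(CODE_LENGTH, seed=9999))
print(score_real, score_wrong)  # genuine code correlates; a wrong code stays near zero
```

Averaging over many frames is what lets a weak 1% flicker stand out against much stronger sensor noise, which is also why the detector can tolerate further degradation downstream.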
This protection is harder to forge than digital watermarks added in post-production. Tests showed that even after heavy editing or AI-based manipulation of the footage, the signal remains strong enough for authentication.
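The robustness claim can be illustrated with the same correlation idea: even if the recording is degraded by a lossy step, the code still emerges from the average. The 6-bit quantization below is a crude stand-in for video compression and an assumption of this sketch, not the authors' actual test.

```python
import random

rng = random.Random(7)
N = 4000          # frames (assumed)
AMP = 0.01        # flicker amplitude (assumed)

# Secret +/-1 code and the emitted brightness signal.
code = [rng.choice((-1, 1)) for _ in range(N)]
frames = [1.0 + AMP * c for c in code]

# Simulate lossy processing: sensor noise followed by coarse
# 6-bit (64-level) quantization, a rough proxy for compression.
noisy = [f + rng.gauss(0.0, 0.03) for f in frames]
quantized = [round(x * 64) / 64 for x in noisy]

# Mean-removed correlation with the secret code.
mean = sum(quantized) / N
score = sum((q - mean) * c for q, c in zip(quantized, code)) / N
print(score)  # stays clearly positive despite noise and quantization
```

Because the detector sums over thousands of frames, noise and quantization error largely average out while the code-aligned component accumulates.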
For now, the technology requires controlled lighting, which limits its use outdoors or in uncontrolled environments. Still, the researchers believe the system could protect live broadcasts or corporate negotiations, and they plan to integrate it into smartphones and smart lighting systems.