ETH Zurich researchers have developed a sensor chip that cryptographically signs images and videos at capture, making manipulation nearly impossible to hide.
Researchers at ETH Zurich have developed a sensor chip that cryptographically signs images, videos and audio recordings at the moment of capture — a potential breakthrough in the global effort to combat AI-generated disinformation.
The technology works by embedding a digital signature directly into sensor data as it is recorded. That signature, which can be stored on a public, tamper-proof ledger such as a blockchain, allows anyone to verify whether a given piece of content is genuine, when it was recorded, and whether it has been altered since. Any subsequent manipulation of the data would leave detectable traces.
“If data is signed the moment it is captured, any later manipulation leaves traces,” said Fernando Cardes, a research associate at ETH Zurich’s Professorship of Biosystems Engineering, who co-developed the technology. “To manipulate the data, the chip would have to be physically attacked — requiring a massive technological effort that would make the mass generation of manipulated content for social media platforms practically impossible.”
Trust at the Point of Capture
The approach is notable because it removes the need to trust any individual, platform or intermediary in the verification chain. Rather than relying on downstream detection tools — which have struggled to keep pace with rapidly improving generative AI — the system establishes authenticity at the source.
“Trust in digital content is eroding,” said Felix Franke, who co-developed the chip at ETH Zurich and is now a professor at the University of Basel. “We wanted to create a technology that gives people a way to verify whether something is genuine.”
In principle, the chip can be integrated into any camera or sensor. Social media platforms could use it to automatically verify content upon upload. Journalists, researchers and public authorities could also authenticate material independently using simple tools, without depending on platform cooperation.
A Project Nearly a Decade in the Making
The research grew out of a side project at ETH Zurich’s Bio Engineering Laboratory, where the team had been developing highly sensitive sensors to measure electrical signals from living cells. That work gave them the interdisciplinary expertise needed to embed cryptographic functions directly into sensor hardware.
The team identified the threat posed by synthetic media well before it became a mainstream concern. “The danger posed by deepfakes was foreseeable,” Franke recalled. The plan to develop a manipulation-resistant sensor was first conceived in 2017 — years before tools like ChatGPT brought generative AI into public debate.
From Prototype to Market
The chip, described in a paper published Monday in Nature Electronics, is a working prototype demonstrating technical feasibility. Commercial deployment would require further development, and the team is currently working to reduce manufacturing costs for camera and sensor producers. A patent application has been filed.
The research was funded by the Swiss National Science Foundation and the State Secretariat for Education, Research and Innovation through the SwissChips initiative.