Visual Technology Strategy · Content Authenticity
By Paul Melcher · Managing Director, MelcherSystem LLC · March 2026 · 18 min read
Trust in visual content has never been about pixels. It has always been about people. A new generation of tools is finally making that verifiable — but only if you understand what the technology actually does, and what it does not.
Contents: The problem that predates AI · Trust, not truth · The solutions landscape · The four-layer framework · Interoperability · Governance · Strategic implications
The Problem That Predates AI
Key finding: In 2018 — three years before generative AI entered the public conversation — Imatag analyzed 50,000 images across 750 websites and found that 80% had their metadata stripped. Creator name, copyright notice, publication source: gone. Not by malicious actors, but by the ordinary mechanics of the internet.
The industry knew. The IPTC had been documenting metadata stripping for over a decade. The Copyright Office had raised it in formal proceedings. Working groups had studied it. And the field had, collectively, built nothing substantial to fix it.
This is the foundational failure that every provenance and authenticity system in this guide is attempting to address. Not deepfakes. Not AI-generated imagery. Those are the accelerants. The underlying fire is older: the metadata infrastructure that professional photography built its attribution practices on was never designed to be trusted — only to be carried.
IPTC fields — creator name, copyright, caption — are plain text fields. Anyone can write anything in them. Nothing prevents it. Nothing detects when it changes. The entire system of image attribution in professional photography was, at its foundation, an honor system. You claimed to be the creator. Platforms assumed you were. No cryptographic binding. No verification. No way to prove the caption had not been altered between camera and publication.
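To make the honor-system point concrete, here is a minimal sketch, assuming the ExifTool command-line utility is installed and a local file named photo.jpg (both assumptions for the example): two tag arguments are enough to claim any creator and any copyright notice, with no key, no check, and no trace.

```python
# Illustrative only: rewriting IPTC attribution fields with ExifTool.
# Assumes exiftool is installed and photo.jpg exists; tag names follow
# ExifTool's IPTC group (By-line, CopyrightNotice).
import subprocess


def rewrite_attribution(path: str, claimed_creator: str, claimed_notice: str) -> None:
    """Overwrite creator and copyright fields; nothing checks the claim."""
    subprocess.run(
        [
            "exiftool",
            f"-IPTC:By-line={claimed_creator}",
            f"-IPTC:CopyrightNotice={claimed_notice}",
            "-overwrite_original",
            path,
        ],
        check=True,
    )


if __name__ == "__main__":
    # Any name, any notice: the fields accept whatever is written into them.
    rewrite_attribution("photo.jpg", "Anyone At All", "© 2026 Anyone At All")
```

Every system profiled below exists, in one form or another, to close that gap.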
Generative AI made the consequences of that failure impossible to ignore. When an AI-generated image can be distributed with any metadata its creator chooses — any name, any organization, any claimed origin — the weakness that was always structurally present becomes undeniable.
Reframing the Question: Trust, Not Truth
The dominant framing of the provenance problem — how do we distinguish AI-generated content from human-created content? — is understandable but limiting. It treats the question as one of pixel origin. The question that actually governs trust is different.
Trust in visual content has never been about pixels. It has always been about people.
When audiences encounter a photograph or a video, the question they ask — consciously or not — is not is this real? It is: can I trust this? And trust is not a property of content. It is a property of relationships. You trust an image from Reuters because you trust Reuters. You trust a product photograph because you trust the brand.
Consider two scenarios. In the first, a brand generates product photography using a generative AI tool, signs the images with verified credentials identifying the brand as publisher, and distributes them with an intact provenance chain. In the second, a photographer captures a genuine, unmanipulated image that then circulates on social media with all metadata stripped and no verifiable connection to its source.
From a trust perspective, the first image is more reliable — not because AI-generated imagery is inherently more trustworthy than photography, but because the signed, attributed, verifiable image carries accountability. Someone is vouching for it. That someone can be held responsible if the claim is false.
The right question for any provenance tool is not “can it detect AI?” but “does it reliably connect content to a verifiable human identity, in a way that survives distribution and is resistant to forgery?”
The Solutions Landscape
What follows is an equal-weight treatment of the significant solutions currently in development or deployment. Each is analyzed on three axes: what problem it is designed to solve, how it works, and where its structural limits lie.
C2PA / Content Credentials
The dominant framework. Embeds a cryptographically signed manifest in media files recording provenance assertions — who created it, what tools were used, what edits were made. More than 5,000 members in the Content Authenticity Initiative; integrated into Adobe and OpenAI software and into Nikon, Leica, Sony, and Canon camera hardware. The Steering Committee has expanded to include Amazon, Google, and Meta. Relies on commercial Certificate Authorities for identity verification. Strong platform adoption; complex implementation; governance remains heavily Adobe-influenced. The Conformance Program was soft-launched in June 2025 at the Content Authenticity Summit at Cornell Tech.
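To see what a signed manifest actually carries, here is a minimal inspection sketch, assuming the open-source c2patool CLI is on the PATH and a file signed.jpg that carries Content Credentials; the JSON field names reflect typical c2patool output and are illustrative, not a fixed schema.

```python
# Illustrative only: inspecting a C2PA manifest store with the open-source
# c2patool CLI. Assumes c2patool is on PATH and signed.jpg carries Content
# Credentials; invoked without flags, c2patool prints the manifest store as JSON.
import json
import subprocess


def read_manifest_store(path: str) -> dict:
    result = subprocess.run(
        ["c2patool", path], capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)


if __name__ == "__main__":
    store = read_manifest_store("signed.jpg")
    # Typical output names an active manifest and keys each manifest by label;
    # the field names used here reflect that typical shape, not a guaranteed schema.
    active = store.get("active_manifest")
    manifest = store.get("manifests", {}).get(active, {})
    print("Claim generator:", manifest.get("claim_generator"))
    for assertion in manifest.get("assertions", []):
        print("Assertion:", assertion.get("label"))
```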
SEAL — Secure Evidence Attribution Label
Dr. Neal Krawetz’s alternative to C2PA. Uses DNS-based signing — free, privacy-preserving, distributed. Claims tamper-proof (not merely tamper-evident) integrity. Designed for Federal Rules of Evidence compliance. Fully implemented and functional as of early 2026, but has yet to achieve major platform adoption. Under formal independent evaluation, alongside C2PA, by the Provenance and Authenticity Standards Assessment Working Group (PASAWG) at the University of Maryland, Baltimore County (UMBC).
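The architectural pattern (the signer publishes a public key in a DNS TXT record for a domain they control, so verification reduces to a DNS lookup plus a signature check) can be sketched as follows. This is a conceptual illustration of the DNS-anchored model, not SEAL's actual record or metadata format; the "k=" TXT convention and the Ed25519 key type are assumptions made for the example.

```python
# Conceptual sketch of DNS-anchored signature verification, the pattern SEAL
# builds on; not SEAL's actual record or metadata format. The "k=<base64>" TXT
# convention and the Ed25519 key type are assumptions made for this example.
# Requires: dnspython, cryptography.
import base64

import dns.resolver
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def fetch_public_key(domain: str) -> Ed25519PublicKey:
    """Read a base64-encoded Ed25519 public key from the domain's TXT record."""
    answer = dns.resolver.resolve(domain, "TXT")
    txt = b"".join(list(answer)[0].strings).decode()
    key_b64 = txt.split("k=", 1)[1]
    return Ed25519PublicKey.from_public_bytes(base64.b64decode(key_b64))


def verify(domain: str, content: bytes, signature: bytes) -> bool:
    """True if the signature over the content checks against the domain's key."""
    key = fetch_public_key(domain)
    try:
        key.verify(signature, content)
        return True
    except InvalidSignature:
        return False
```

Because the key lives in public DNS rather than behind a commercial Certificate Authority, signing is free and anyone who controls a domain can participate; that is the governance trade SEAL is making.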
Imatag
Embeds an imperceptible signal in pixel data that survives compression, cropping, resizing, and screenshots. Functions as C2PA’s soft binding layer — the recovery path when metadata is stripped. Live deployment with AFP for news photography workflows. Imatag’s 2018 research documenting 80% metadata stripping rates is cited in C2PA’s own specifications.
ISCC + Liccium
ISCC (ISO 24138:2024) derives a unique fingerprint identifier algorithmically from the media file itself — no embedding, no watermarks. Liccium publishes signed declarations binding those ISCC fingerprints to rights, identity, and provenance in federated public registries. Particularly strong for rights management and AI training opt-out at scale.
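A rough illustration of that no-embedding property, assuming the open-source iscc-core reference implementation and its documented gen_data_code and gen_instance_code helpers (treat the exact signatures as assumptions): the identifier units are derived from the file's own content, so anyone holding a copy can re-derive and match them without the file carrying anything extra.

```python
# Illustrative only: deriving ISCC units directly from the file (no embedding,
# no watermark). Assumes the open-source iscc-core reference implementation;
# the gen_data_code / gen_instance_code helpers and their dict return values
# are taken from its documentation and should be treated as assumptions.
import iscc_core


def derive_codes(path: str) -> dict:
    """Derive a similarity-preserving Data-Code and an exact Instance-Code."""
    with open(path, "rb") as stream:
        data_code = iscc_core.gen_data_code(stream)
    with open(path, "rb") as stream:
        instance_code = iscc_core.gen_instance_code(stream)
    return {"data": data_code["iscc"], "instance": instance_code["iscc"]}


if __name__ == "__main__":
    # Re-deriving from an identical copy yields the same codes; the binding to
    # rights and identity lives in a registry declaration, not in the file.
    print(derive_codes("photo.jpg"))
```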
IPTC Photo Metadata Standard
The common language of image attribution since the 1990s. Every tool, DAM, and platform supports it. Not an authentication system — a vocabulary standard. Easily stripped, easily altered. Every other system in this list either builds on it, incorporates it, or must account for it. The foundation that needed — and is now getting — a security layer.
OpenOrigins / HOPrS
The Human-Oriented Proof System (HOPrS), now an LF Decentralized Trust open-source lab, maps exactly which pixel regions of an image have been edited versus the registered original. Works without requiring the original file. Blockchain-anchored and decentralized. Forensic tamper detection rather than provenance chain management.
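The idea of region-level edit mapping can be illustrated with a naive tile-hash comparison; this is a deliberately simplified stand-in, not the HOPrS algorithm, which uses perceptual techniques rather than exact hashes.

```python
# Deliberately simplified stand-in for region-level tamper mapping: tile the
# image, hash each tile, and compare against hashes registered for the
# original. This is not the HOPrS algorithm (which uses perceptual methods);
# it only illustrates localizing edits without holding the original file itself.
import hashlib
from PIL import Image

TILE = 64  # tile edge length in pixels


def tile_hashes(path: str) -> dict:
    """Exact hash per tile; a real system would use perceptual hashes."""
    img = Image.open(path).convert("RGB")
    hashes = {}
    for y in range(0, img.height - img.height % TILE, TILE):
        for x in range(0, img.width - img.width % TILE, TILE):
            tile = img.crop((x, y, x + TILE, y + TILE))
            hashes[(x, y)] = hashlib.sha256(tile.tobytes()).hexdigest()
    return hashes


def changed_regions(registered: dict, candidate_path: str) -> list:
    """Tile positions whose content no longer matches the registered original."""
    return [
        pos for pos, h in tile_hashes(candidate_path).items()
        if registered.get(pos) != h
    ]
```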
Numbers Protocol
A Capture → Certify → Check framework creating immutable on-chain records. C2PA member; co-authored ERC-7053 (the Ethereum media indexing standard). Cited in NIST AI 100-4 alongside C2PA and Starling Lab. 67M+ assets registered, 150K fully verified as of early 2026. Secured a Google News Initiative grant in October 2025.
JPEG Trust (ISO/IEC 21617)
The formal ISO standards layer built on top of C2PA. Published as an international standard in January 2025, JPEG Trust extends the C2PA engine with a structured framework covering provenance, authenticity, integrity, and intellectual property rights across the full JPEG family of formats — JPEG 1, JPEG 2000, JPEG XL, JPEG XS, and JPEG AI. Its multi-part structure is analytically significant: Part 1 (Core Foundation) is published; Part 2 (Trust Profiles catalogue, for assessing trustworthiness in specific deployment scenarios), Part 3 (watermarking as a binding mechanism), and Part 4 (reference software) are in development. Part 3 in particular will make JPEG Trust a direct interoperability layer with pixel-level watermarking tools like Imatag. Uniquely, its ISO standing and JPEG format universality give it a deployment surface no industry initiative can match — JPEG images represent the vast majority of visual content in circulation. A second edition of Part 1 is already in the approval phase as of early 2026.
SynthID (Google DeepMind)
Invisible watermarking embedded at AI generation time. Detection requires Google’s proprietary tool. Not designed for human-created content. Included here as the counterexample: a proprietary, platform-specific, AI-only solution that illustrates the governance question underlying every other approach in this space.
The Four-Layer Content Trust Framework
No single solution addresses all four layers of a complete content trust infrastructure. Every organization implementing provenance tools is making choices across all four layers, often by default. Understanding the layers makes those choices visible and intentional.
| # | Layer | What it addresses | Key solutions |
|---|---|---|---|
| 01 | Manifest / Metadata Binding | How provenance data stays connected to the content it describes. Three architectures: hard binding (embedded in file), soft binding (watermark in pixels), external binding (registry fingerprint). None survives all threat models alone. | C2PA (JUMBF-embedded manifest) · Imatag (pixel watermark) · Liccium (external registry) · IPTC (XMP/EXIF fields) · SEAL (DNS-injected) |
| 02 | Metadata Vocabulary | What the provenance data actually says. A file can be cryptographically signed and semantically empty. Core tension: rich schemas are harder to implement; thin vocabularies lose meaning at scale. | IPTC Photo Metadata Standard · C2PA assertions · W3C Verifiable Credentials · ISCC identifier · JSON-LD · FDO / FAIR data principles |
| 03 | Attribution | Who is actually claiming this, and why should we believe them? Cryptographic signing proves a key was used — not that the keyholder is who they claim to be. Every system delegates identity verification to some authority. | C2PA / CAWG (Certificate Authorities) · SEAL (DNS-verified domain identity) · Liccium Creator Credentials · Numbers Protocol (on-chain identity) · IPTC creator fields (unverified) |
| 04 | Access / Verification | Who controls the verification infrastructure — and what happens when that control is exercised or withdrawn? Centralized verification concentrates power. Federated and decentralized models distribute it but add complexity and coordination cost. | CAI Verify (Adobe-hosted) · Liccium (federated public registries) · Numbers Protocol (blockchain) · OpenOrigins HOPrS (decentralized) · SEAL (DNS lookup, no central authority) |
Governance spectrum: centralized → federated → decentralized. MelcherSystem LLC Four-Layer Content Trust Framework, 2026.
Interoperability and Stack Design
The four layers can be composed using different solutions. A functioning trust infrastructure does not require a single vendor — it requires deliberate choices at each layer that are compatible with each other. A practical example of how those choices might be made, with a composed verification sketch after the list:
- IPTC for vocabulary — because every tool already supports it and semantic interoperability with the rest of the media industry matters.
- C2PA for manifest binding — because major platforms are integrating it and regulatory frameworks increasingly reference it.
- Imatag watermarking as soft binding fallback — because metadata will be stripped on platforms that do not support C2PA, and the pixel watermark is the recovery layer that makes the system resilient.
- ISCC + Liccium for external registry binding and rights management — because external fingerprint registration survives everything, and the federated model avoids single-vendor dependency for verification.
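To show how such a stack behaves once content starts circulating, here is a minimal sketch of a verification fallback chain across the three binding architectures. The three checker functions are hypothetical placeholders standing in for a C2PA manifest validator, a pixel-watermark decoder, and an external registry lookup; the point is the order of recovery, not any vendor's API.

```python
# Conceptual sketch of a layered verification pipeline: consult the hard-bound
# C2PA manifest first, fall back to the pixel watermark, then to an external
# fingerprint registry. All three checkers are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class ProvenanceResult:
    layer: str       # which binding layer produced the answer
    claimant: str    # the identity vouching for the content
    verified: bool   # whether the claim checked out


Checker = Callable[[bytes], Optional[ProvenanceResult]]


def verify_with_fallback(
    image_bytes: bytes, chain: List[Checker]
) -> Optional[ProvenanceResult]:
    """Return the first layer that can still say something about the image."""
    for check in chain:
        result = check(image_bytes)
        if result is not None:
            return result
    return None  # no layer survived: the image is unattributed, not proven fake


# Hypothetical checkers; a real deployment would call a C2PA validator, a
# watermark decoder, and a registry API here.
def check_c2pa_manifest(image_bytes: bytes) -> Optional[ProvenanceResult]: ...
def check_pixel_watermark(image_bytes: bytes) -> Optional[ProvenanceResult]: ...
def check_registry_fingerprint(image_bytes: bytes) -> Optional[ProvenanceResult]: ...


if __name__ == "__main__":
    chain = [check_c2pa_manifest, check_pixel_watermark, check_registry_fingerprint]
    print(verify_with_fallback(b"...image bytes...", chain))
```

Read in order, the chain encodes the design assumption above: hard binding when it survives, soft binding when metadata is stripped, registry lookup when everything else is gone.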
Some combinations are explicitly designed to be complementary. Liccium can ingest C2PA assertions. Numbers Protocol uses C2PA as its metadata standard with blockchain as the storage layer. Imatag is registered on the C2PA soft binding algorithm list. IPTC metadata can be carried inside C2PA manifests.
The industry tends to debate which solution is best. The more useful question is which combination of solutions addresses all four layers for a given organizational context.
The critical incompatibilities between systems are not technical — they are governance assumptions. A stack built entirely on Adobe-controlled infrastructure has a different risk profile than a federated or decentralized equivalent. Neither is wrong; both require explicit choice rather than default adoption.
The Governance Question
The most important long-term question in this space is not which algorithm is most robust. It is: who controls the trust infrastructure, and what happens to everyone who depends on it when that control is exercised?
Every layer of the four-layer framework involves a delegation of authority. The attribution layer delegates to Certificate Authorities, DNS registrars, blockchain validators, or professional credentialing bodies. The access layer delegates to whoever operates the verification infrastructure. These delegations are not neutral, and they are not reversible without significant cost.
A verification infrastructure controlled by a single company can be modified, restricted, or discontinued. A signing certificate ecosystem controlled by commercial Certificate Authorities, in which the cost of participation excludes independent creators, is an attribution system with a class structure built into it. These are structural features, not edge cases.
As regulatory mandates for content provenance accelerate — the EU AI Act, California AB 853 (effective July 2026), and emerging frameworks elsewhere — the governance of this infrastructure will become a policy question as much as a technical one. Who sits on the trust list determines who can be trusted.
Strategic Implications for Organizations
Audit your current state across all four layers. Most organizations have an implicit answer at each layer already: embedded IPTC metadata (binding), IPTC vocabulary fields (vocabulary), unverified creator credits (attribution), and no systematic verification infrastructure (access). Understanding where you actually are is the prerequisite for knowing where to go.
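One way to make that audit explicit is to record, for each layer, the current answer and the intended one; the structure and values below are illustrative, not a prescribed schema.

```python
# Hypothetical four-layer audit record: one explicit answer per layer, noting
# the current (often default) state and the intended target. Values reflect the
# typical starting point described above, not a recommendation.
from dataclasses import dataclass
from typing import List


@dataclass
class LayerAudit:
    layer: str
    current_state: str
    target_state: str


typical_starting_point: List[LayerAudit] = [
    LayerAudit("binding", "embedded IPTC/XMP only, stripped on most platforms",
               "C2PA manifest plus pixel-watermark fallback"),
    LayerAudit("vocabulary", "partial IPTC fields, inconsistently applied",
               "full IPTC vocabulary carried in C2PA assertions"),
    LayerAudit("attribution", "unverified creator credits",
               "signed credentials tied to a verified organizational identity"),
    LayerAudit("access", "no systematic verification",
               "documented verification path (hosted, federated, or decentralized)"),
]
```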
The metadata stripping problem and the signing problem are not the same. Adding C2PA credentials to images that will have those credentials stripped on upload is not a complete solution. The soft binding layer — Imatag or an equivalent pixel watermarking system — is not optional for content that circulates on consumer platforms. It is the layer that makes hard-bound credentials recoverable in the real world.
Treat attribution as a governance decision, not a technical one. Which certificate authority, which trust list, which credentialing model you adopt reflects choices about institutional relationships and accountability structures. Those choices should be made explicitly, not inherited by default from whatever platform you happen to be using.
The regulatory window is now. California AB 853 takes effect July 2026. The EU AI Act is in force. Organizations that build provenance infrastructure proactively will be in a fundamentally different position from those scrambling to achieve compliance under deadline.
MelcherSystem advises brands, publishers, platforms, and technology organizations on visual content provenance — from maturity assessment and stack design to C2PA deployment, DAM integration, and regulatory readiness. Provenance Maturity Assessment · C2PA Implementation Strategy · DAM as Trust Infrastructure · AI Content Policy · Regulatory Readiness
The complete technical treatment: solution profiles, four-layer framework, interoperability analysis, governance implications, and 29 sourced references. Free to download.
Key References
[1] Imatag. (2018). Metadata stripping study: 80% of 50,000 images across 750 websites had metadata removed. Cited in C2PA Implementation Guidance (spec.c2pa.org) and Imatag blog, February 2025.
[2] International Press Telecommunications Council. IPTC Photo Metadata Standard. iptc.org/standards/photo-metadata.
[3] C2PA Technical Specification v2.2. Coalition for Content Provenance and Authenticity / Linux Foundation. spec.c2pa.org.
[4] Krawetz, N. SEAL specification and technical critique of C2PA. hackerfactor.com; github.com/hackerfactor/SEAL.
[5] ISO 24138:2024 — International Standard Content Code (ISCC). iso.org/standard/77899.html.
[6] Liccium. Technical whitepaper and documentation. docs.liccium.com.
[7] OpenOrigins. HOPrS framework and LF Decentralized Trust lab documentation. openorigins.com.
[8] Numbers Protocol. Whitepaper. whitepaper.numbersprotocol.io. Google News Initiative grant, October 2025.
[9] NIST. (2024). AI 100-4: Reducing Risks Posed by Synthetic Content. nvlpubs.nist.gov.
[10] C2PA Conformance Program. c2pa.org/conformance. Soft-launched June 2025, Content Authenticity Summit at Cornell Tech.
[11] Content Authenticity Initiative. Members page. contentauthenticity.org/our-members. “Over 5,000 members.” Accessed March 2026.
[12] California AB 853. Visual Content Authentication Act. Effective July 1, 2026.
[13] TV NewsCheck. (October 2025). “Content Authentication Initiative C2PA Hits Some Bumps In The Road.” Documents BBC News Labs closure and C2PA v2.0 editorial provenance changes.
[14] World Privacy Forum. (2025). Privacy, Identity and Trust in C2PA. worldprivacyforum.org.
[15] NSA/CISA. (January 2025). “Content Credentials” Cybersecurity Information Sheet. media.defense.gov.
[16] JPEG Trust. ISO/IEC 21617-1:2025 Core Foundation. jpeg.org/jpegtrust. Part 1 published January 2025; Parts 2–4 in development.
Full bibliography with 29 references available in the PDF report.
About the author: Paul Melcher is Managing Director of MelcherSystem LLC, a visual technology consulting firm. He is a member of the Content Authenticity Initiative and C2PA standards organizations, and spoke on content authenticity at the 2025 Content Authenticity Summit at Cornell Tech. · Contact MelcherSystem