
The Three Layers of Content Trust: Identity, Creation, and Distribution

Content provenance needs three layers working together: who published it, how it was made, and where it travels. Most solutions only cover one.

April 2026 · 10 min read · Jesse Blum
Abstract visualisation: Pink nebula smoke with digital grid representing layers of content trust

Mastercard’s VP of AI & Data, Dudley Nevill-Spencer, recently proposed KYCO—“Know Your Content”—a framework that adapts agentic commerce tokenisation to verify who publishes content on social platforms. It is a compelling idea. But it solves one-third of the problem.

Content trust needs three layers working in concert. Identity verifies who published the content. Creation captures how it was made. Distribution tracks where it travels. Right now, different companies and coalitions are building each layer in isolation. Nobody is connecting all three, and the gaps between them are where trust breaks down.

The Trust Crisis Is Real, but Fragmented

The symptoms are everywhere. A Hong Kong finance worker was deepfaked into authorising a $25 million transfer at Arup. Political bot campaigns flood social platforms with synthetic media. AI-generated product imagery proliferates across marketplaces with no provenance trail.

These are not separate problems. They are all symptoms of the same structural gap: AI-generated content at scale with no verification infrastructure.

But the solutions being built today are siloed. Identity specialists solve identity. Provenance standards bodies solve provenance. Platform companies solve distribution. Each group builds excellent technology for their layer and largely ignores the others.

The result is a fragmented landscape where a verified publisher can distribute AI-generated content with no creation metadata, and a fully provenanced asset can be stripped of its credentials the moment it leaves the creator’s hands.

Layer 1: Identity — WHO Published This?

The first question in any trust framework: is this content from a real human, a verified business, or a bot?

Nevill-Spencer’s KYCO framework proposes an elegant answer. Mastercard already verifies identity at scale through KYC (Know Your Customer) for financial transactions. KYCO extends that same tokenisation infrastructure to content publishing, verifying that a human or business stands behind each piece of content on a social platform.

This is powerful. It addresses fraud, bot campaigns, fake business advertisements, and predatory accounts. If every publisher had a verified identity token, the volume of anonymous synthetic content flooding platforms would drop dramatically.

But identity verification alone cannot tell you how content was created. A verified human can publish AI-generated imagery without any disclosure. A verified business can distribute assets that infringe existing IP. Identity answers who; it says nothing about the creative process behind the content, or whether that process was legally sound.

Layer 2: Creation — HOW Was This Made?

Even if you know who published content, you still cannot tell how it was created: which model generated it, which parameters shaped it, which training data influenced the output, which version of the workflow produced this specific result.

This is the layer the EU AI Act’s Article 50 targets directly: transparency about AI-generated content, enforceable from August 2026. But the gap is not in regulation. The gap is in capture. Most generative AI tools either do not record creation metadata, or lose it during export and post-processing.

The current state is uneven. ComfyUI workflows embed generation parameters in PNG metadata chunks. Midjourney has embedded IPTC metadata since late 2025. But most tools, and most workflows that chain multiple tools together, strip or lose this data somewhere in the pipeline.
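To make the capture problem concrete, here is a minimal, stdlib-only sketch of reading the kind of PNG `tEXt` chunk that ComfyUI uses to embed workflow JSON. The synthetic byte stream is illustrative (real ComfyUI files typically use keys such as `prompt` and `workflow`, and real chunks carry full generation graphs); the point is that the provenance lives in ordinary metadata chunks that re-encoding or screenshotting silently discards.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def text_chunks(data: bytes) -> dict:
    """Extract tEXt chunks (keyword -> value) from raw PNG bytes."""
    assert data[:8] == PNG_SIG, "not a PNG"
    chunks, pos = {}, 8
    while pos < len(data):
        # Each PNG chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length
        if ctype == b"IEND":
            break
    return chunks

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble a PNG chunk with its CRC, for the synthetic demo below."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# A stand-in "PNG" carrying a ComfyUI-style workflow chunk.
demo = PNG_SIG + make_chunk(b"tEXt", b"workflow\x00{\"nodes\": []}") \
       + make_chunk(b"IEND", b"")
print(text_chunks(demo))  # {'workflow': '{"nodes": []}'}
```

Anything in the pipeline that re-renders pixels without copying these chunks (a platform thumbnailer, a format conversion, a screenshot) produces an image where `text_chunks` returns nothing, which is exactly the stripping problem described below.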

At Numonic, we approach this differently. Rather than attempting expensive post-generation reverse engineering to reconstruct provenance after the fact, we capture lightweight generative parameters at the point of creation: node graphs, Stable Diffusion parameters, Midjourney metadata, workflow lineage. Everything is stored in a Data Vault 2.0 architecture designed as an immutable audit trail.
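The "immutable audit trail" idea can be illustrated with a small hash-chained log, in the spirit of an append-only provenance store. This is a conceptual sketch, not Numonic's actual schema: the record fields, class names, and chaining scheme here are assumptions chosen to show the property that matters, namely that altering any historical record breaks verification of everything after it.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    tool: str        # e.g. "comfyui", "midjourney" (illustrative values)
    params: dict     # generation parameters captured at creation time
    prev_hash: str   # digest of the previous record, chaining the trail

    def digest(self) -> str:
        payload = json.dumps(
            {"tool": self.tool, "params": self.params, "prev_hash": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class AuditTrail:
    """Append-only, hash-chained log: tampering anywhere breaks the chain."""

    def __init__(self):
        self.records: list[ProvenanceRecord] = []

    def append(self, tool: str, params: dict) -> ProvenanceRecord:
        prev = self.records[-1].digest() if self.records else "genesis"
        rec = ProvenanceRecord(tool, params, prev)
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        prev = "genesis"
        for rec in self.records:
            if rec.prev_hash != prev:
                return False
            prev = rec.digest()
        return True
```

A trail built this way can be handed to an auditor with a single trusted head hash: recomputing the chain either reproduces it or exposes exactly where the history diverged.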

What this layer solves: reproducibility, compliance, audit trails, IP protection, and creative lineage. What it does not solve: who the creator is (that is Layer 1), or where the content travels after creation (that is Layer 3).

The Interpretive Bridge: IP Risk Scoring

Between creation provenance and distribution, there is a question that neither layer answers on its own: is this content legally safe to use?

CopySight AI’s legal governance advisor Stanislav Meerson articulated this gap precisely: C2PA is “a bridge that stops halfway.” On one side you have hardware keys, certificates, and cryptographic math. On the other side, rules of evidence and admissibility. What is missing is the interpretive layer, the work that makes technology usable in society.

CopySight addresses this with what they call a Similarity Score—essentially a FICO score for copyright risk. Their IP Scoring Engine evaluates AI-generated outputs for visual similarity, dataset lineage, and human authorship, assigning a score from 1 to 100. Below 35 is generally safe. Between 35 and 75, consult your legal team. Above 75, the content likely infringes existing IP.
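As a thought experiment, the banding above reduces to a simple triage function. The function name and the handling of the exact boundary values (35 and 75) are my assumptions, not CopySight's published behaviour; the thresholds themselves come from the description above.

```python
def risk_band(score: float) -> str:
    """Map a 1-100 similarity score to a triage band.

    Thresholds follow the bands described for CopySight's Similarity
    Score; boundary handling (35 and 75 fall into the middle band)
    is an assumption for illustration.
    """
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score < 35:
        return "generally safe"
    if score <= 75:
        return "consult legal"
    return "likely infringing"

print(risk_band(20))  # generally safe
print(risk_band(50))  # consult legal
print(risk_band(90))  # likely infringing
```

The value of a banded score is operational: a pipeline can auto-clear the low band, queue the middle band for human review, and block the high band, turning a legal judgment into a routing rule.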

This is a critical bridge. Creation provenance tells you how content was made. IP risk scoring tells you whether it is safe to use. In April 2025, CopySight secured a US Copyright Office registration for AI-generated content on behalf of a client, demonstrating that logging each step of the AI generation process, from prompt to output, and establishing human authorship can meet the legal threshold for copyright protection.

The complementary value is clear. Provenance without risk scoring gives you an audit trail for content that might infringe. Risk scoring without provenance gives you a safety assessment with no verifiable chain of evidence. Together, they form a complete creation-to-clearance pipeline: here is how it was made, and here is proof it is not infringing.

Layer 3: Distribution — WHERE Does It Travel?

Content gets shared, reshared, screenshotted, re-uploaded, and embedded. At each step, metadata gets stripped and provenance gets lost. This is the “platform stripping” problem Nevill-Spencer identifies, and it is the reason creation-side provenance alone is insufficient.

The C2PA coalition, now more than 6,000 members strong, with the specification fast-tracked as an ISO standard, addresses this with cryptographic content credentials attached to files. Version 2.3, currently in draft, tackles cross-platform credential portability and hardware security module requirements. Samsung’s Galaxy S25 became the first smartphone lineup with native Content Credentials support. Cloudflare is implementing C2PA preservation across roughly 20 per cent of the web.

Google’s SynthID takes a different approach: imperceptible watermarks embedded directly in AI-generated content, designed to survive transformations that would strip metadata.

On the verification side, startups like ContentLens.ai build the “reader” infrastructure: a Chrome extension that detects and validates C2PA content credentials in images, audio, and video online, including soft-binding via imperceptible watermarks.

But here is the critical dependency: distribution-layer tools can only verify credentials that exist. If the creation layer does not embed provenance at generation, the distribution layer has nothing to validate. ContentLens can verify a C2PA manifest, but it cannot conjure one that was never written. C2PA can transport a content credential, but it cannot generate one for an asset that was created without provenance capture.

The Trust Triangle: Why All Three Layers Need Each Other

Nevill-Spencer’s insight is that “verifying an AI agent acting on behalf of a human in commerce is structurally similar to verifying a human publishing content on social media.” I would extend that: verifying the generative pipeline that produced the content is the third vertex of the same trust triangle.

  • Without Identity, you have provenance for anonymous content. Useful for audit, but incomplete for accountability.
  • Without Creation, you have a verified publisher with no proof of how content was made. This is specifically what the EU AI Act’s Article 50 requires, and without the creation layer, compliance is impossible.
  • Without Distribution, you have provenance that evaporates the moment content leaves the creator’s hands. A perfectly provenanced asset becomes unverifiable once it is uploaded to a platform that strips metadata.
  • Without IP Risk Scoring—the interpretive bridge—you have provenance and distribution infrastructure but no way to answer the commercial question: can we actually use this?

All four capabilities together create what I think of as “glass-to-glass” provenance: from the generative tool to the IP clearance decision to the end consumer. Each layer is necessary. None is sufficient alone.

Who Is Building What

The landscape is taking shape, with different companies and coalitions anchoring each layer: Mastercard's KYCO proposal on identity, creation-side provenance tools such as Numonic on creation, CopySight on the interpretive bridge, and the C2PA coalition, Google's SynthID, and verification startups like ContentLens on distribution.

The gap that matters most: no single entity or standard connects all three layers. C2PA comes closest, but it focuses on distribution, transporting and verifying credentials, not generating them at the point of creation. The creation layer remains the least developed and most architecturally demanding. It requires understanding generative AI tools from the inside, not just wrapping them with metadata after the fact.

What This Means for Creators, Studios, and Enterprises

For individual creators: creation provenance protects your work and proves your process. Paired with IP risk scoring, it establishes both originality and legal safety before you publish.

For studios and agencies: the trust triangle is what turns AI-generated content into auditable production assets. Warner Bros. Discovery, Sony, and other major studios are already testing IP similarity scoring in their production pipelines. The question is no longer whether this infrastructure is needed, but how quickly it can be integrated.

For enterprises: EU AI Act compliance requires transparency about AI-generated content (Layer 2), but commercial safety also demands IP clearance (the bridge) and credential preservation through distribution (Layer 3). A compliance strategy that addresses only one layer is incomplete by design.

For the ecosystem: the companies building the connection points between layers will capture outsized value. The identity layer is being built by financial infrastructure giants. The distribution layer is being standardised by industry coalitions. The creation layer and the interpretive bridge between creation and distribution are where the whitespace is widest.

The practical reality: we do not need to wait for one universal system. Interoperable layers, each solving their part, can compose into end-to-end trust. The companies building those layers just need to talk to each other.

Key Takeaways

  • Content trust requires three layers: Identity (who), Creation (how), and Distribution (where). Most solutions address only one.
  • An interpretive bridge, IP risk scoring, connects creation provenance to distribution safety by answering whether content is legally safe to use.
  • The creation layer is the least developed and most architecturally demanding. It requires provenance capture at the point of generation, not after the fact.
  • C2PA is scaling fast (6,000+ members, ISO fast-track, Samsung and Cloudflare adoption) but depends on creation-side provenance to have anything to transport.
  • EU AI Act Article 50 is enforceable from August 2026, and California SB 942 takes effect in January 2026. Compliance requires creation-layer infrastructure that most organisations have not built.
  • Interoperable layers, each solving their part, can compose into end-to-end trust without requiring a single universal system.

Building the Creation Layer

The identity layer is being built by financial infrastructure giants. The distribution layer is being standardised by industry coalitions. The creation layer—provenance captured at the point of generation—is where the gap is widest and the opportunity is greatest. That is what we are building at Numonic.

If you are building in the identity, distribution, or IP risk scoring layers and want to explore how creation-side provenance connects to your work, we would like to hear from you.

Get in Touch