Thought Leadership · Part 1 of 2

Your Agency's AI Campaign Library Is a Liability

Casey Milone · 8 min read

What if every AI-generated ad your agency produced in the last two years could trigger a €35 million fine, and you had no documentation to prove compliance?

Most agencies adopted AI tools for the same reason: speed. ChatGPT for copy variations, Midjourney for concept exploration. The workflow was simple—generate, review, ship. No one thought to document prompts, model versions, or generation parameters. Why would they?

Because regulators in the EU and California now require exactly that documentation. The EU AI Act's penalty framework became operational in August 2025, with fines reaching €35 million or 7 percent of global turnover for prohibited practices and €15 million or 3 percent for transparency violations. California's AI Transparency Act carries penalties of $5,000 per violation, per day of noncompliance.

These are not theoretical numbers on a distant horizon. They are enforceable now.

What Changed, and Why the Timing Matters

The regulatory landscape shifted faster than most agencies anticipated. Understanding the timeline matters because different obligations activated at different points, and enforcement mechanisms are now operational.

The EU AI Act rolled out in phases. February 2025 banned prohibited AI practices and mandated “AI literacy” for employees, meaning organizations must ensure staff have a sufficient understanding of the AI systems they operate. August 2025 activated General Purpose AI (GPAI) transparency requirements alongside the full penalty framework. Competent authorities can now impose administrative fines for noncompliance.

California's approach is similarly phased. SB 942, the California AI Transparency Act, requires covered providers to offer users the option of a “manifest” disclosure on AI-generated content (a visible watermark or label) and to embed “latent” disclosures (machine-readable metadata) by default. Full enforcement begins August 2026, but the requirements apply to content created now.
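
To make the two disclosure types concrete: a manifest disclosure is something a viewer can see, while a latent disclosure travels inside the file itself. The sketch below, assuming Python with the Pillow imaging library, embeds an illustrative machine-readable disclosure as a PNG text chunk. The field names and tool identifiers are hypothetical, and a bare text chunk is trivially strippable, so this shows the shape of a latent disclosure rather than one that survives the durability requirements discussed below.

```python
# Illustrative only: embed a machine-readable AI disclosure as a PNG
# text chunk with Pillow. A bare chunk like this is easy to strip, so
# it demonstrates the concept of a latent disclosure, not a mechanism
# that satisfies SB 942's durability requirement.
import json
from PIL import Image, PngImagePlugin

def embed_latent_disclosure(src_path, dst_path):
    disclosure = {
        "ai_generated": True,
        "provider": "example-image-model",   # hypothetical tool name
        "model_version": "v6.1",             # hypothetical version string
        "generated_at": "2025-09-12T14:03:00Z",
    }
    img = Image.open(src_path)
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_disclosure", json.dumps(disclosure))  # latent metadata
    img.save(dst_path, pnginfo=info)

embed_latent_disclosure("campaign_asset.png", "campaign_asset_disclosed.png")
```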

The window between now and August 2026 is not a grace period. It is an exposure accumulation period. Every AI-generated campaign asset without proper documentation adds to the compliance debt.

The Documentation Gap

Here is where it gets uncomfortable. The EU AI Act and California's transparency requirements both demand something agencies never thought to keep: complete provenance records for AI-generated assets.

Article 26 of the EU AI Act specifies that deployers of high-risk AI systems must take “appropriate technical and organisational measures” to ensure systems are used in accordance with provider instructions, assign human oversight to competent individuals, and, critically, keep automatically generated logs for at least six months, or longer where appropriate to the system's intended purpose. Article 50 separately requires that AI-generated synthetic content be marked as artificially generated in a machine-readable format, an obligation that is not limited to high-risk systems.
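
What would such a generation log actually capture? A minimal sketch in Python, assuming an append-only JSONL audit file; the field names and the log_generation helper are illustrative choices, not anything prescribed by the Act.

```python
# A minimal sketch of a per-generation audit record, assuming an
# append-only JSONL file. Field names are illustrative, not mandated.
import datetime
import hashlib
import json

def log_generation(log_path, tool, model_version, prompt, params,
                   reviewer, output_bytes):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,                    # the generator used
        "model_version": model_version,  # exact model/version string
        "prompt": prompt,                # the full prompt, not a screenshot
        "parameters": params,            # seed, temperature, aspect ratio, etc.
        "human_reviewer": reviewer,      # Article 26's assigned human oversight
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),  # ties log to asset
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_generation("generation_log.jsonl", "example-image-model", "v6.1",
               "product hero shot, studio lighting", {"seed": 42},
               "casey@agency.example", b"<asset bytes>")
```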

California's SB 942 goes further on technical specifics. Latent disclosures must be embedded metadata that identifies content as AI-generated, conveys provenance data about how the content was made, and is “permanent or extraordinarily difficult to remove to the extent technically feasible.” The emerging standard for meeting these requirements is C2PA (Coalition for Content Provenance and Authenticity)—cryptographically signed manifests that create a verifiable chain of custody.
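
Real C2PA manifests are COSE-signed with X.509 certificates and embedded in a JUMBF container inside the asset; the stand-in below only illustrates why a signature makes the chain of custody verifiable, using Python's standard-library HMAC in place of certificate-based signing. The key, field names, and helper functions are all assumptions made for the sketch.

```python
# Conceptual stand-in for a signed provenance manifest. Real C2PA uses
# certificate-based COSE signatures in a JUMBF container; this sketch
# uses an HMAC to show why any edit to the asset or its claimed
# provenance breaks verification.
import hashlib
import hmac
import json

SIGNING_KEY = b"agency-held secret"  # stands in for a signing certificate

def sign_manifest(asset_bytes, claims):
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,  # e.g. tool, model version, edit history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes, manifest):
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(asset_bytes).hexdigest() != claimed["asset_sha256"]:
        return False  # the asset itself was altered
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)  # False if claims were altered

manifest = sign_manifest(b"<asset bytes>", {"tool": "example-image-model"})
assert verify_manifest(b"<asset bytes>", manifest)
```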

Now consider how agencies actually document AI usage today. Screenshots of prompts saved in project folders. Slack threads mentioning which tool was used. Tribal knowledge about who created what. Folder naming conventions like “MJ_Client_Campaign_v3_final_FINAL.”

None of this constitutes governance in the regulatory sense. A screenshot with a timestamp is not embedded provenance metadata. A folder naming convention is not a cryptographically signed manifest. The gap between what agencies capture and what regulations require is not a matter of process improvement—it is a fundamental infrastructure mismatch.

When Your Client Gets Fined, Who Is Actually Responsible?

The documentation gap creates exposure. But the liability question—who pays when something goes wrong—is even less clear.

The EU AI Act creates obligations for both “providers” (the companies building AI tools) and “deployers” (the organizations using those tools to create outputs). When an agency produces AI-generated content for a client campaign, both the agency and the client may be deployers with separate compliance obligations.

Most agency contracts predate this regulatory framework entirely. Standard master service agreements, statements of work, and insertion orders were not drafted with AI transparency compliance in mind. Indemnification clauses typically cover intellectual property infringement, confidentiality breaches, and general negligence—not failure to maintain AI provenance metadata.

Consider a realistic enforcement scenario. A competitor files a complaint with a French market surveillance authority about a campaign containing undisclosed AI-generated content. The authority investigates. The client, as the deployer who published the content, faces potential fines. The client looks to the agency contract for protection. The contract is silent on AI governance because it was signed in 2022.

Who absorbs the regulatory penalty? The answer will likely be determined by litigation that neither party wants to fund.

The Compound Problem

The archive problem compounds the liability uncertainty. Agencies do not just have current campaigns to worry about. They have years of archived work—client assets, stock variations, template libraries, concept explorations—sitting in storage with no provenance documentation.

Some of that content is still in market. Much of it could be deployed again. All of it represents potential exposure if regulators come asking questions about AI usage and the answer is “we do not actually know.”

Documentation debt compounds. Every AI-generated asset in your archives without proper provenance metadata represents ongoing exposure. Unlike technical debt, this cannot be refactored retroactively. The problem grows with every campaign delivered without governance infrastructure in place.

What Comes Next

Understanding the scope of the problem is the first step. But the specific technical requirements—embedded watermarks, cryptographic provenance, persistent metadata—reveal why this is not a process problem agencies can solve with training and checklists.

Part 2 of this series examines exactly what “AI transparency” means at the technical level, why most agency tools cannot produce what regulations require, and what infrastructure would actually be needed to close the gap.

Key Takeaways

1. EU AI Act penalties are active now: €35M or 7% of global turnover for prohibited practices, €15M or 3% for transparency violations.
2. The gap between agency documentation and regulatory requirements is fundamental: screenshots and Slack threads do not constitute embedded metadata or cryptographic provenance.
3. Liability is undefined: most agency contracts predate AI governance requirements and do not address who absorbs regulatory penalties.
4. Documentation debt compounds: every undocumented AI asset in your archives represents ongoing exposure that cannot be retroactively fixed.
5. This is step one: Part 2 addresses the technical requirements most agency tools cannot meet.

Continue to Part 2
