A standard agency Master Services Agreement allocates IP rights, defines deliverables, and sets payment terms. It was written for a world where a human designer made every creative decision. That world ended. Today, AI tools are embedded in the production workflows of virtually every creative agency, and the legal frameworks governing those workflows have not kept pace. The result is a growing body of unresolved questions about who owns AI-generated work, who bears regulatory liability, and what happens when a client's downstream use of AI content triggers a compliance violation.
This article provides the five contract clauses that close the most critical gaps. It covers the language, the rationale, and the client conversation needed to introduce each clause without derailing the relationship. It also addresses the insurance considerations that sit alongside contract language, and the red flags to watch for when reviewing inbound client contracts that attempt to shift AI risk unfairly onto the agency.
This is not legal advice. Every agency should have counsel review and adapt these clauses for their specific jurisdiction, client mix, and service scope. What this article provides is a structured starting point and the business rationale that makes the conversation with clients productive rather than adversarial.
Why Existing MSAs Do Not Cover AI
The typical agency MSA was drafted to resolve two fundamental tensions: who owns the creative output, and what happens if something goes wrong. For human-created work, these questions have well-established answers. Copyright vests in the creator, work-for-hire provisions transfer it to the client, and E&O insurance covers professional negligence claims.
AI-generated content disrupts every part of this framework. Most jurisdictions do not clearly extend copyright protection to purely AI-generated content: the US Copyright Office has repeatedly declined to register such works, and EU member states are still developing doctrine. When the work product of a creative engagement is of uncertain copyrightability, a work-for-hire clause does not cleanly transfer ownership, because there may be nothing to transfer in the traditional sense.
The regulatory dimension compounds the problem. The EU AI Act Article 50 and California SB 942 impose disclosure and metadata preservation obligations on the “deployer” of AI-generated content—which may be the agency, the client, or both, depending on who controls the final distribution. Most MSAs do not allocate these obligations at all. When a regulator initiates an inquiry, both parties point at each other and the agency ends up bearing disproportionate exposure because it actually operated the AI tools.
The five clauses below are designed to resolve these ambiguities before they become disputes.
Five Essential AI Clauses
Clause 1: AI Disclosure and Consent
The first clause establishes the framework for how the agency uses AI tools in the engagement and what notice the client receives. Without this clause, clients who later discover that deliverables were AI-assisted may claim misrepresentation, even if AI use was never explicitly prohibited.
Core language: “Agency may use artificial intelligence tools and systems (“AI Tools”) in the production of deliverables under this Agreement. Agency will disclose the categories of AI Tools used upon request. Client acknowledges that AI-assisted content requires specific disclosure under applicable law, including but not limited to EU AI Act Article 50 and California SB 942, and consents to Agency fulfilling those disclosure obligations as part of content delivery.”
This clause serves three purposes. It provides affirmative consent so AI use cannot later be characterized as unauthorized. It establishes a baseline disclosure mechanism without requiring the agency to enumerate every tool in the contract itself. And it puts the client on notice that regulatory disclosure obligations exist, which is relevant to both Clause 4 (liability) and Clause 5 (audit rights).
Many agencies worry that explicit AI disclosure will make clients uncomfortable. The opposite is typically true: sophisticated enterprise clients increasingly require AI disclosure as part of their own governance frameworks. Leading with transparency positions the agency as compliant and trustworthy rather than evasive.
Clause 2: Metadata Preservation Requirements
Article 50(2) of the EU AI Act requires that AI-generated content carry machine-readable provenance markings. Article 50(5) requires that agencies pass sufficient provenance information to downstream deployers (clients) to allow them to meet their own obligations. Without a metadata clause, there is no contractual obligation to preserve these markings, and no remedy when they are stripped during client post-processing.
Core language: “Agency will deliver AI-generated content with embedded provenance metadata meeting applicable regulatory standards, including IPTC 2025.1 AI fields and C2PA Content Credentials where technically feasible. Client agrees to preserve such metadata through any post-delivery processing or distribution. Client will not strip, modify, or obscure AI provenance metadata in a manner that would prevent downstream regulatory disclosure compliance. Agency's obligation to provide metadata is contingent on Client using delivery formats that support metadata preservation (e.g., PNG, TIFF, PDF). Agency is not responsible for metadata loss resulting from Client-initiated format conversion to metadata-stripped formats.”
The format carve-out at the end of this clause is important. Social media platforms, many content management systems, and standard compression pipelines strip EXIF, IPTC, and XMP metadata by default. By carving out metadata loss caused by client-side processing, the clause protects the agency from liability for downstream compliance failures that such loss would otherwise trigger. It gives the agency a defensible position while still requiring the client to take affirmative steps to preserve what the agency delivers.
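Because so many delivery pipelines silently drop metadata, it is worth automating a pre-delivery check that the provenance fields the contract promises actually survive export. The following is a minimal sketch of such a check, assuming exiftool is installed on the build machine; the required field names are placeholders for whatever your workflow actually embeds, not a definitive list.

```python
# Minimal pre-delivery check: confirm AI provenance metadata survived the
# export pipeline. Assumes exiftool is installed; the field names are
# illustrative -- adjust to the IPTC/C2PA fields your workflow embeds.
import json
import subprocess
import sys

REQUIRED_FIELDS = ["DigitalSourceType"]  # extend with the provenance fields you rely on

def read_metadata(path: str) -> dict:
    """Return the metadata exiftool can read from the file, as a flat dict."""
    out = subprocess.run(
        ["exiftool", "-json", "-G0", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)[0]

def missing_provenance(path: str) -> list[str]:
    """List required provenance fields absent from the delivered file."""
    meta = read_metadata(path)
    # With -G0, exiftool prefixes keys with the group name (e.g. "XMP:DigitalSourceType"),
    # so compare on the bare field name.
    present = {key.split(":")[-1] for key in meta}
    return [f for f in REQUIRED_FIELDS if f not in present]

if __name__ == "__main__":
    for deliverable in sys.argv[1:]:
        gaps = missing_provenance(deliverable)
        print(f"{deliverable}: {'OK' if not gaps else 'MISSING ' + ', '.join(gaps)}")
```

Run against final delivery files, a check like this turns the clause's "where technically feasible" qualifier into something the agency can actually document.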
Clause 3: IP Ownership for AI-Generated Work
Standard work-for-hire clauses transfer “all intellectual property rights” in deliverables to the client. When those deliverables include AI-generated content of uncertain copyrightability, this language creates ambiguity about what is actually being transferred. A well-drafted AI IP clause addresses this directly.
Core language: “To the extent that any deliverable or portion thereof is generated by an AI system and is determined under applicable law not to constitute a copyrightable work (“AI Output”), Agency assigns to Client all rights, title, and interest that Agency may hold in such AI Output, including any rights arising under sui generis database protections or equivalent legal frameworks. Where AI Output is combined with human-authored elements constituting a copyrightable work, the work-for-hire provisions of this Agreement apply to the human-authored elements. Agency warrants that it has reviewed the terms of service of each AI Tool used to produce deliverables and that those terms permit commercial use of the generated output for Client's intended purposes.”
Standard vs. AI-Updated Contract Coverage
| Area | Standard MSA | AI-Updated MSA |
|---|---|---|
| AI tool disclosure | Not addressed | Explicit consent + notice |
| Metadata obligations | Not addressed | Delivery + preservation duty |
| IP ownership | Work-for-hire (assumes copyright) | Covers non-copyrightable AI output |
| Regulatory liability | Ambiguous / shared by default | Explicitly allocated by party role |
| Audit rights | None | Structured provenance log access |
The tool terms-of-service warranty is critical and often overlooked. Different AI tools have materially different commercial use rights. Midjourney's commercial license requires a paid subscription. Certain Stable Diffusion model licenses restrict commercial use of outputs. Adobe Firefly provides an IP indemnity for commercial use that Midjourney does not. Agencies using multiple tools without reviewing each tool's commercial use provisions are making implicit warranties to clients that may not be accurate.
Clause 4: Liability Allocation
When regulatory enforcement or third-party IP claims arise from AI-generated content, both the agency and the client will be named. The question of who bears the cost depends entirely on whether the contract has addressed it. Without a liability clause, the allocation defaults to whatever general indemnification language exists in the MSA, which was written for human-created content and almost certainly does not fit AI scenarios cleanly.
Core language: “Agency will indemnify Client against third-party claims arising from Agency's use of AI Tools in violation of those tools' terms of service, provided Client has used deliverables in the manner specified in Agency's documentation. Client will indemnify Agency against claims arising from Client's distribution of deliverables without required AI disclosure labels, Client's modification of AI-generated content that eliminates required provenance metadata, or Client's use of deliverables for purposes not disclosed to Agency at the time of production. Neither party will be liable for regulatory penalties arising from the other party's failure to maintain compliant disclosure workflows in their own operations.”
Agencies that proactively update their contracts for AI are not creating new risk exposure. They are closing existing gaps that leave both parties exposed when enforcement begins.
— Numonic Agency Compliance Brief, 2026

This clause creates a clean allocation: the agency is responsible for what happens in its own workflow, and the client is responsible for what happens after delivery. The scope of each party's responsibility is bounded by their actual control. An agency cannot control how a client posts content to social media. A client cannot control which AI tools the agency uses in production. The clause makes this division explicit.
The mutual exclusion of liability for the other party's internal operations is equally important. If a client publishes AI-generated images without disclosure labels and receives a regulatory fine, that fine should not flow back to the agency under a general indemnification clause. The AI liability clause prevents that outcome.
Clause 5: Audit Rights
Enterprise clients with their own AI governance frameworks increasingly require the ability to verify that their vendors are operating compliant AI workflows. Regulatory inquiries under the EU AI Act or SB 942 may also require an agency to produce documentation of AI tool use, generation context, and metadata practices. An audit rights clause creates the obligation to maintain this documentation and the process for making it available.
Core language: “Agency will maintain records of AI Tools used in producing deliverables under this Agreement, including the category of tool, date of use, and content scope, for a period of three years from delivery. Client may request, no more than once per calendar year, a summary compliance report confirming Agency's AI tool practices for the engagement. Upon regulatory request or court order, Agency will provide provenance documentation for specific deliverables within ten business days. Agency may redact commercially sensitive information from any audit materials, provided such redaction does not prevent Client from demonstrating its own regulatory compliance.”
The three-year retention period aligns with the statute of limitations for most commercial contract claims and the enforcement lookback periods anticipated under both the EU AI Act and California law. The annual summary report mechanism gives clients access to compliance information without creating an open-ended audit obligation that would be operationally burdensome for the agency.
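One lightweight way to meet the record-keeping obligation is a structured log entry per deliverable. The sketch below shows a hypothetical record format; the field names are illustrative choices, not anything mandated by the EU AI Act, SB 942, or the clause itself.

```python
# Hypothetical provenance log entry for Clause 5 record-keeping. Field names
# are illustrative; retain entries for at least the three-year contractual period.
from dataclasses import dataclass, asdict
from datetime import date, timedelta
import json

RETENTION = timedelta(days=3 * 365)  # three years from delivery, per Clause 5

@dataclass
class ProvenanceRecord:
    deliverable_id: str      # internal asset or job number
    client: str
    delivery_date: date
    tool_category: str       # e.g. "image generation", "background removal"
    tool_name: str           # specific tool, disclosed on request per Clause 1
    content_scope: str       # which portions of the deliverable are AI-generated
    metadata_embedded: bool  # whether IPTC/C2PA provenance was embedded at delivery

    def retain_until(self) -> date:
        return self.delivery_date + RETENTION

    def to_json(self) -> str:
        entry = asdict(self)
        entry["delivery_date"] = self.delivery_date.isoformat()
        entry["retain_until"] = self.retain_until().isoformat()
        return json.dumps(entry, indent=2)

# Example entry -- all values are placeholders
record = ProvenanceRecord(
    deliverable_id="JOB-1042-hero-image",
    client="Example Client",
    delivery_date=date(2026, 3, 12),
    tool_category="image generation",
    tool_name="(tool name here)",
    content_scope="background only; product photography is human-captured",
    metadata_embedded=True,
)
print(record.to_json())
```

A log like this also doubles as the source material for the annual summary report the clause allows clients to request.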
Insurance Considerations
Contract language and insurance work together. The clauses above allocate risk between the agency and client; insurance covers the residual risk that neither party can contractually shift. Three insurance lines are directly relevant to AI-generated content:
Errors and Omissions (E&O)
E&O policies cover professional liability arising from errors in the agency's work product. The critical question for AI workflows is whether the policy covers claims arising from AI-generated content specifically. Many E&O policies written before 2024 contain exclusions for “automated decision-making systems” or “AI-generated content” that were intended for fintech and healthcare contexts but technically apply to creative AI tools as well. Agencies should request an AI-explicit endorsement from their E&O carrier that confirms coverage for AI-assisted creative work.
Cyber Liability
Cyber policies cover data breaches and system intrusions. For agencies using cloud-based AI tools, this coverage is relevant when client content is uploaded to AI platforms as generation context. If client confidential materials are included in AI prompts and those platforms are subsequently breached, the agency may face claims under both the cyber policy and the client contract. Agencies should review their cyber policy's definition of “covered systems” to confirm that third-party AI platforms accessed in the course of work are included, or, where they are not, exclude that exposure explicitly in client contracts.
IP Indemnity Coverage
Some AI tool providers offer IP indemnity as part of their enterprise licensing agreements. Adobe Firefly provides an IP indemnity for commercially licensed users, covering third-party IP infringement claims arising from Firefly-generated content. Getty's Generative AI offers similar protection. These indemnities are worth understanding because they represent a form of insurance that supplements (and in some cases replaces) what the agency would otherwise need to purchase separately. Agencies should document which tools carry IP indemnities in their tool inventory, as this documentation is relevant both to client contracts and to E&O underwriting.
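A tool inventory does not need to be elaborate. The sketch below shows one hypothetical shape for it; the entries and values are placeholders and should be confirmed against each provider's current license terms rather than taken from this example.

```python
# Hypothetical tool inventory recording commercial-use terms and IP indemnity
# status per AI tool. All values are placeholders -- confirm each against the
# provider's current license before warranting anything to a client.
AI_TOOL_INVENTORY = [
    {
        "tool": "Adobe Firefly",
        "plan": "enterprise",
        "commercial_use": True,
        "ip_indemnity": True,            # per the provider's enterprise terms (verify)
        "terms_reviewed": "2026-01-15",  # date counsel last reviewed the license
    },
    {
        "tool": "(other image model)",
        "plan": "pro subscription",
        "commercial_use": True,
        "ip_indemnity": False,
        "terms_reviewed": "2026-01-15",
    },
]

def tools_without_indemnity(inventory: list[dict]) -> list[str]:
    """Tools whose output the agency effectively warrants without upstream IP cover."""
    return [entry["tool"] for entry in inventory if not entry["ip_indemnity"]]

print(tools_without_indemnity(AI_TOOL_INVENTORY))
```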
Introducing These Clauses to Clients
The most common objection to updating agency contracts is not legal resistance—it is process friction. Clients with mature procurement functions default to their own paper, which typically does not address AI at all. Agencies that want to negotiate AI-specific language need a concise business rationale that connects the clause to the client's own risk exposure.
Framing the Conversation
The most effective framing positions AI contract clauses as protecting the client, not the agency. This is accurate: Clauses 4 and 5 in particular give clients clearer remedies and stronger documentation than a standard MSA provides. The conversation script for enterprise procurement teams might run:
“We've updated our standard agreement to address AI tool use explicitly. Starting in August, the EU AI Act and California SB 942 impose disclosure and documentation requirements on anyone distributing AI-generated content. The updated clauses make clear what documentation we provide with every delivery, how liability is allocated if a compliance issue arises, and what audit rights you have to verify our practices. These are things your own legal team will likely require eventually—we thought it was better to address them proactively.”
This framing works because it is true. A client whose agency has clear AI governance documentation is better positioned to respond to regulatory inquiries than one whose agency has none. The conversation shifts from “our lawyer wants this clause” to “this protects both of us.”
Red Flags in Inbound Client Contracts
When clients provide their own paper, agencies need to review it for AI-specific risk before signing. The following provisions warrant careful attention:
Blanket IP Assignment Without AI Carve-Out
Client contracts that assign “all intellectual property rights in all deliverables, including all rights in derivative works and compilations” create a problem when applied to AI-generated content of uncertain copyrightability. If copyright does not vest in the AI-generated portion, there may be nothing to assign—but the client may later claim they contracted for exclusivity that does not exist. Agencies should add language limiting the IP assignment to copyrightable works and confirming that AI outputs are included to the extent permitted by applicable law.
Unlimited Regulatory Indemnification
Client contracts sometimes include broad indemnification language requiring the agency to cover “all fines, penalties, and regulatory costs arising from the agency's work.” When applied to AI workflows, this could make the agency responsible for regulatory penalties arising from the client's own distribution decisions—for example, a client who removes AI disclosure labels from agency-delivered content before posting. Agencies should negotiate a carve-out for penalties arising from client-controlled distribution and post-delivery modifications.
No-AI Clauses Without Definition
Some clients have inserted “no AI” provisions into their standard agreements in response to internal policy concerns. The risk for agencies is not the prohibition itself but the definition (or lack thereof). “No use of artificial intelligence tools” that is not further defined could be read to prohibit AI-powered grammar checkers, smart crop tools, background removal functions in Photoshop, or generative search within a DAM. Before signing, agencies should confirm in writing what the client means by the prohibition and get an exception list for standard production tools that happen to include AI-powered features.
Key Takeaways
- Standard agency MSAs do not address AI tool disclosure, metadata preservation, IP ownership for AI-generated work, regulatory liability allocation, or audit rights. Each of these gaps creates real exposure in 2026.
- The five essential clauses are: AI Disclosure and Consent, Metadata Preservation Requirements, IP Ownership for AI-Generated Work, Liability Allocation, and Audit Rights. Together they create a complete framework for AI-inclusive engagements.
- Insurance coverage should be reviewed alongside contract language. E&O policies may contain AI exclusions. Cyber policies may not cover third-party AI platforms. Some AI tool providers offer IP indemnities that supplement agency coverage.
- Client conversations about AI clauses go better when framed as mutual protection rather than agency self-interest. The documentation obligations in these clauses protect clients as much as agencies when regulatory inquiries arise.
- Inbound client contracts should be reviewed for blanket IP assignments, unlimited regulatory indemnification, and undefined no-AI clauses—each of which creates disproportionate risk for agencies using AI in standard production workflows.
Build the Documentation Your Contracts Require
Numonic automatically maintains the provenance records, IPTC metadata, and C2PA credentials that your AI contract clauses promise to deliver—without adding manual steps to your creative workflow.
See How It Works