For a long time, “SaaS,” shorthand for Software-as-a-Service, has been the buzzword. In the world of SaaS contracting, suppliers and customers negotiate and tailor terms and conditions that govern how cloud-based software is delivered as a service, while carefully allocating risk across the familiar pillars of IP, limitation of liability, indemnification, and the like.
Sitting in negotiation calls over the past year, I’ve seen the conversation change, with a new topic consistently drawing attention and urgently demanding inclusion in standard agreements: AI provisions.
To understand this shift, we need to ask a more fundamental question: why now?
With AI tools developing rapidly, coming to market in growing numbers, and being powerfully integrated into traditional software products and services, innovation has arrived hand in hand with uncertainty. Let’s also admit that, alongside efficiency and automation, come fear, unpredictability, ethical concerns, and expanded liability exposure.
The services being provided are no longer purely static but increasingly generative, raising new contractual questions: Who owns the AI-generated output? Is there meaningful human oversight? What are the standards for privacy, security, and transparency? Who bears responsibility if the output infringes on third-party IP? Is customer data being used to train models without prior written consent? Is there an opt-out right for AI usage? Are you complying with all applicable laws and evolving regulations, such as the EU AI Act?

At its core, this shift reflects something deeper: it’s a redefinition of how risk is allocated in technology transactions. As AI capabilities expand, the legal architecture around SaaS agreements can no longer rely solely on static licensing frameworks. It must evolve into more dynamic governance structures.
Observations on how customers are responding
As a frontline negotiator in my daily practice, I see, from a supplier’s perspective, three common customer approaches.
First, questionnaires and requests for AI-related statements and documentation. Instead of immediately redlining the contract, some customers begin by conducting their own internal risk assessment. They ask suppliers to complete detailed AI questionnaires and provide existing policies and internal guidelines on AI research, development, and usage within their products and services, sometimes followed by supporting documentation. In these situations, the contract itself may remain untouched, but the diligence process becomes much deeper.
Second, requests to add specific AI language directly into the agreement. This is by far the most common approach I encounter, and the clauses I’ve come across typically seek clarification and confirmation of: 1) definitions of AI, AI tools, and outputs; 2) whether the AI features can be opted out of; 3) whether their data is strictly prohibited from being used for model training without prior written consent; 4) who owns the AI-generated output; and 5) how they will be notified of any new AI functionalities added to existing products. A frequent concern is what happens if those updates materially alter the product’s original functions.
Third, in rare cases, customers provide their own AI addendum. This is something I’ve mainly seen with larger, well-established brands, and it is often tied to internal compliance frameworks or enterprise-wide AI governance policies. These negotiations tend to be more structured and sometimes less flexible.
Streamlining the contracting process and what comes next
With technology evolving, AI usage in software services increasing, customer scrutiny growing, and AI laws still developing through different legal lenses (state, national, and global), businesses need to streamline strategically.
From my perspective, preparation makes the difference. Know your product well: which AI tools are used and how. Have AI-related documentation ready if requested. Develop a mature internal AI guidance framework and code of ethics to demonstrate that your activities align with core values and legal requirements. Stay up to date with the fast-changing regulatory landscape, continuously evaluating which jurisdictions’ AI laws may apply to your business, your operations, and your customers.
Disclaimer: The information in any resource on this website should not be construed as legal advice or as a legal opinion on specific facts, and should not be considered as representing the views of its authors, its authors’ employers, its sponsors, and/or ACC. These resources are not intended as a definitive statement on the subject addressed. Rather, they are intended to serve as a tool providing practical guidance and references for the busy in-house practitioner and other readers.