Managing Risk and Liability in AI Contracts with OpenAI and Similar Vendors

Generative AI tools promise powerful capabilities, but signing a contract with an AI vendor like OpenAI without due diligence can expose your organization to serious risk. Many AI vendors’ standard terms are heavily vendor-favorable, shifting liability to the customer, granting the vendor broad rights to use your data, and offering minimal accountability if something goes wrong. A 2025 Stanford Law School report found that 92% of AI vendors claim broad rights to use customer data, yet only 17% commit to full regulatory compliance and just 33% offer any indemnification for third-party IP claims. The message for CIOs and procurement leaders is clear: approach AI contracts cautiously and actively advocate for your company’s protection. This article explores key contractual issues – from liability caps to data privacy – and offers practical strategies to manage risk when negotiating with AI providers.

Liability Caps and Exclusions
Almost every technology vendor limits their liability in contracts, and AI vendors are no exception. Typically, an AI provider’s contract will cap its total liability (often at a modest amount like the fees paid over a few months) and exclude “indirect” damages entirely. In practice, if the AI system causes a major loss, for example, a faulty AI decision leading to a costly lawsuit, the vendor might only owe a token refund, leaving your company on the hook for the rest. Do not accept one-sided caps without scrutiny. Push for liability caps proportional to the risks involved, and negotiate carve-outs so that certain losses are not subject to the cap. Common carve-outs include breaches of confidentiality, data privacy violations, intellectual property infringement, or gross negligence. For instance, you might insist that if the AI platform leaks sensitive customer data or infringes IP, the vendor’s liability in those scenarios is uncapped or significantly higher than the general cap. Be wary of broad exclusions that absolve the vendor of “consequential damages” or “lost profits” – vendors often draft these so broadly that nearly any meaningful damage could be excluded. While it’s reasonable that a vendor won’t take unlimited liability for an experimental AI, you can negotiate a middle ground: a higher cap for critical risks, mutual (fair) exclusions of certain damages, and language that doesn’t let the vendor entirely dodge responsibility. If a vendor is unwilling to budge on a very low cap, consider whether additional protections (like insurance or strong indemnities) can fill the gap, or whether that vendor’s product is too risky for high-impact tasks.

Indemnification – Who Pays for Mistakes?
Indemnification provisions determine who will defend or compensate you if a third-party claim arises from using the AI. Vendors often try to minimize their indemnity obligations – or worse, require you to indemnify them for certain claims. For example, some AI vendors include clauses that make the customer responsible if their use of AI violates someone’s rights. Instead, aim to secure indemnities that protect your organization. High-priority indemnities in AI deals include intellectual property infringement, data breaches, and regulatory violations. Intellectual property is the big one: if the AI outputs content that accidentally infringes a copyright or patent, you want the vendor to cover the legal fallout. A positive trend is emerging here – leading providers are starting to offer IP indemnification for AI outputs. OpenAI’s CEO recently announced a “copyright shield” program pledging to defend ChatGPT Enterprise and API customers against copyright claims on outputs. Likewise, Microsoft’s Copilot service and others have made similar promises to paying users. Ensure such promises are written into your contract, with clear scope and minimal caveats. (Typically, these indemnities won’t cover situations where you input infringing data or misuse the tool, which is fair – you remain accountable for what you choose to do with the AI.)

On the flip side, watch out for overly broad indemnities you give to the vendor. It’s reasonable to agree to indemnify the vendor if, say, you violate the license terms or intentionally use the AI to break the law. But avoid clauses that make you indemnify the vendor for all consequences of using the AI, especially if those consequences are caused by the AI’s behavior or the vendor’s data. One real-world example is the AI coding assistant Tabnine, whose terms require clients to indemnify the company from any claim arising from a violation of any third-party right, including copyright or privacy, in content the client uses. Agreeing to such broad terms could leave you paying for mistakes made by the AI model or the vendor. Push to narrow customer indemnities to things under your control (like your breach of the contract or willful misuse), and expand vendor indemnities to cover the AI provider’s sphere, notably IP infringement by the service, confidentiality breaches, and violations of law caused by the tool’s functionality. Also, negotiate procedural aspects: you should be promptly notified of any claims and have a say in how they are handled if your company is being defended.

IP Rights and AI-Generated Outputs
Ownership of inputs and outputs is a critical issue in AI contracts. In plain terms: who owns the prompts, data, and content you feed into the AI, and who owns the material it produces? Your contract should leave no ambiguity here. Best practice is that you retain ownership of all your inputs and data, and the outputs should also belong to you (assuming they’re based on your inputs or prompts). Many vendors have updated their terms to reflect this principle. For example, OpenAI’s standard terms state: “As between you and OpenAI, you retain ownership of your input and you own the output. OpenAI hereby assigns all its right, title, and interest in and to the output.” Microsoft similarly updated its policies to define AI-generated content as “Your Content,” meaning the user (not Microsoft) owns the output. You should look for or request these customer-friendly clauses in any AI contract.

However, owning the output doesn’t automatically solve all IP concerns. AI-generated material may or may not qualify for intellectual property protection under current law, and it might inadvertently include elements of third-party works. A savvy contract will therefore also address third-party IP risks. Require the vendor to represent and warrant that it has all necessary rights to provide the service, including rights to any training data or models it uses. You don’t want to discover later that the model was built on illegally scraped content, potentially dragging you into litigation. Generative AI companies are facing lawsuits for how their models were trained on copyrighted data. Make it the vendor’s duty to ensure their AI isn’t knowingly using stolen or unlicensed material. Additionally, include a warranty that the AI’s outputs won’t infringe known IP rights of others, and crucially, tie this to the indemnity: if a third party claims the AI’s output violated their copyright or trademark, the vendor should defend and indemnify your company. OpenAI’s enterprise terms, for instance, indemnify users for claims that OpenAI’s services or training data infringe IP; notably, they do not cap liability for this indemnity. While not every vendor will offer unlimited coverage, it’s reasonable to insist that IP indemnification sit outside any low liability cap, given the potentially large exposure from such claims. Finally, be mindful of any vendor’s attempt to retain rights to outputs or to use your outputs elsewhere. From the vendor’s perspective, they might argue that outputs are partly derived from their model (which they own) and thus want a license to use them. As a customer, you should strongly prefer a clean assignment of outputs. If a compromise is needed, consider a license arrangement that still grants you exclusive, perpetual rights to use the outputs for your purposes. You don’t want the vendor reusing your AI-generated business plans or code for someone else’s benefit.

Data Usage, Privacy, and Training Rights
Data is often at the heart of AI contract negotiations. AI vendors may request access to large amounts of your data, and many default contracts grant the vendor broad rights to use, store, and even mine that data far beyond the immediate service. This is sometimes framed as using data to “improve the model” or “enhance services.” In reality, such clauses can be tantamount to a license for the vendor to exploit your data. One legal expert pointed out that many AI contracts give vendors a “license to steal” through overly broad data usage terms. As the customer, your goal is to tightly control how your data is used and shared. Restrict data usage to only what is necessary to perform the contracted service – nothing more. If an AI SaaS is processing your data, the contract should state that your data (inputs and any personal information) will only be used to provide results to you, and not to train the vendor’s general models or be sold to third parties. If you allow some form of data use for improvement, insist on strong safeguards: e.g., data must be anonymized, aggregated, and cannot include personal identifiers or confidential info. Even then, be cautious; “anonymized” data can sometimes be deanonymized, and many privacy laws (GDPR, CPRA, etc.) have strict rules about repurposing data. It’s often safest to opt out of model-training uses entirely, which some enterprise AI offerings now allow by default.

Privacy and security considerations are paramount when an AI vendor handles sensitive information. Ensure the contract includes a robust data protection addendum or section. Key points include: compliance with all applicable privacy laws (GDPR in Europe, state laws in the US, HIPAA for health data, etc.), commitments that personal data will be processed only on your instructions, and confirmation that appropriate technical and organizational security measures are in place. The vendor should confirm that it will not use personal data for any purpose outside the contract scope and will assist you in complying with individual rights requests or regulatory obligations if relevant. For example, under GDPR’s purpose limitation principle, data collected for one purpose (providing the AI service) generally cannot be reused for another (training a different AI model) without additional consent. Ensure your contract’s data use clause doesn’t put you in breach of such rules. Also specify data retention and deletion requirements: the vendor should delete or return your data upon contract termination or request, ensuring it isn’t silently retained in some training set indefinitely.

Another vital piece is confidentiality. Your contract should classify your input data and sensitive outputs as confidential information, obligating the vendor to protect and not disclose them. This overlaps with security – you’ll want representations about encryption, access controls, and possibly the vendor’s security certifications (SOC 2, ISO 27001, etc.). Given that AI platforms could become targets for hackers (since they often hold valuable data or have broad API access), include clauses for data breach notification and liability. For instance, you may add that if the vendor suffers a breach affecting your data, they must notify you immediately and possibly indemnify you for costs (like regulatory fines or customer remediation) resulting from that breach. While vendors often resist open-ended breach indemnities, you can at least negotiate responsibility for breaches caused by the vendor’s negligence or lack of safeguards.

A telling sign of how important data handling has become: Salesforce ran a major ad campaign in 2023 called “The AI Wild West,” portraying some AI providers as data bandits. The campaign’s tagline promised “Salesforce AI never steals or shares your customer data.” As a customer, you want your AI vendor to commit, in writing, to that same principle. Consider it a red flag if a prospective vendor balks at limits on using your data. Either press for an agreement that preserves your data sovereignty or seek a more privacy-conscious vendor. Remember, your data is a competitive asset – don’t give it away under vague promises of AI magic.

Audit Rights and Oversight
Trusting an AI vendor blindly is risky business. You need transparency into the AI’s operations and handling of your data. Audit rights in a contract allow you to verify that the vendor is living up to its promises. Vendors often resist granting extensive audit rights over their models or data centers, citing security and IP concerns. But at a minimum, you should negotiate some mechanisms for oversight. This could range from the right to request third-party security audit reports (like SOC 2 reports) to the right to inspect how your data is being stored and used. For example, in highly regulated contexts, a contract might stipulate that the vendor will provide logs or summaries of model training data usage, or even allow an independent auditor to confirm that your proprietary data hasn’t been incorporated into the vendor’s broader model without permission. If a full audit of the AI’s algorithms isn’t feasible, focus on auditing compliance and data practices. Ensure you can audit or obtain evidence of the vendor’s compliance with privacy obligations, data segregation policies, and performance commitments. Also, consider adding a clause that requires the vendor’s cooperation if you or a regulator needs information about the AI’s functioning. If your company must comply with laws that give individuals a “right to explanation” about AI-driven decisions, the contract should obligate the vendor to provide the necessary information about their model (for instance, the factors and logic the AI uses to arrive at outputs) to help you meet that duty.

In addition to formal audits, build in regular reporting and review. Quarterly business reviews with the vendor could include discussing any changes to the AI system, reviewing compliance reports, and addressing any incidents or near-misses. Some contracts require the vendor to notify the customer of any significant AI model or training data updates, especially if those changes could affect output quality, bias, or data usage. The goal is to avoid surprises – you don’t want an unnoticed model update introducing new legal risks.

Another facet of oversight is the ability to test and validate the AI outputs. Ideally, you’ll evaluate the AI thoroughly in a pilot program before fully deploying it. Your contract can support this by including a pilot or proof-of-concept phase with an easy exit if results are unsatisfactory. For ongoing use, you might insert a clause that allows you to periodically test the AI for accuracy or bias and suspend use if it fails certain criteria. While a vendor might not accept a blanket “you can audit our algorithms,” they may accept language like “Vendor will cooperate with Customer’s reasonable requests for information necessary to verify Vendor’s compliance with this agreement and applicable law.” The key is gaining leverage to monitor – without insight, you’re flying blind and taking all the risk. Don’t let an AI vendor be a black box you can never open; insist on at least a peephole into its operations.

Human Review and Explainability
No matter how advanced an AI system is, CIOs should treat it as fallible and plan for human oversight. AI vendors often include contract disclaimers emphasizing that their outputs may be inaccurate or misleading – OpenAI’s terms, for example, warn that “Output may not always be accurate… You must evaluate output for accuracy and appropriateness, including using human review as appropriate, before using or sharing it.” Take these warnings seriously and bake human review into your AI deployment strategy. From a contractual standpoint, you might require a human-in-the-loop clause if the AI vendor provides a service that directly impacts your operations (say, an AI tool screening resumes or analyzing medical images). This could mean stipulating that qualified personnel will review and approve certain critical decisions or high-risk outputs. In agreements where the vendor is delivering AI-assisted work (e.g., an outsourcing arrangement where the vendor uses AI), explicitly require that the vendor has qualified staff to oversee the AI’s contributions.

Explainability is another growing expectation, especially with regulations on the horizon that may mandate transparency in automated decisions. While today’s generative AI models are often “black boxes,” you should still push the vendor to provide as much information as possible about how the AI works. At the very least, ensure the contract doesn’t forbid you from asking about model behavior or using tools to interpret outputs. Some forward-thinking organizations include clauses demanding that the vendor document the AI system, including its intended purpose, limitations, and the data it was trained on, and update this documentation as the model evolves. If the AI will be used in regulated decisions (hiring, lending, etc.), insist on the ability to explain the criteria used by the model, either via the vendor’s tools or by having the vendor assist in generating an explanation when needed. Under GDPR and similar laws, individuals have the right to human intervention and an explanation when significant decisions are automated; your contract should enable you to comply by ensuring the vendor will cooperate in providing relevant details.

Another human-centric safeguard is requiring the vendor to allow manual override or decision reversal. For example, if an AI chatbot is used for customer service, have a process for human agents to step in when the AI falters or a customer requests it. These procedural protections might not all be spelled out in the vendor’s contract, but the contract should at least not impede them, and ideally should acknowledge that the customer will supervise the AI’s use. Culturally, vendors sometimes push their AI as “fully autonomous” or a replacement for human effort; your stance in contracting should be that AI is augmenting your team, not replacing accountability. Ensure that the vendor’s tool fits into your governance framework. By inserting human oversight and review requirements, you reduce risk and set the expectation with the vendor that your organization will be an active, informed user of the AI, not a passive consumer of whatever it spits out.

Regulatory and Ethical Alignment
The regulatory landscape for AI is evolving rapidly. Laws like the EU’s AI Act, various US state laws (e.g., California and Colorado’s AI legislation), and sector-specific regulations introduce new compliance obligations for AI developers and users. Your contract with an AI vendor should ensure that the vendor will help, not hinder, your compliance with current and future laws. At a baseline, require that the vendor comply with all applicable laws and regulations when providing the service. It’s surprising, but many vendor contracts lack an explicit commitment to regulatory compliance. Do not let the vendor carve out an exception where they “don’t warrant compliance with any laws” or place all compliance responsibility on the customer. If a law or regulation squarely targets the AI system (for instance, requiring certain documentation or testing for bias), the vendor, as the AI developer, should shoulder those duties contractually.

Consider spelling out specific compliance needs based on your industry and jurisdiction. For example, GDPR compliance (data protection, data transfer safeguards, etc.) is critical if you operate in the EU or handle EU residents’ data. If the AI is used for employment or credit decisions, ensure adherence to anti-discrimination laws and consumer protection statutes. Under Colorado’s new AI law (effective 2026), companies using “high-risk” AI must “use reasonable care to protect consumers from foreseeable risks of algorithmic discrimination.” If you’ll be a “deployer” under that law, you will likely need the vendor’s cooperation to fulfill duties like bias testing or impact assessments. Thus, your contract could require the vendor to represent and warrant that the AI has been tested for bias and does not produce unlawful discrimination, and even link that promise to an indemnification: if their tool’s bias causes a legal claim, they will cover it. Similarly, some laws may require transparency or audit trails for AI decisions. Negotiate for contractual rights to obtain needed information or have the vendor perform necessary assessments on their side.

Another forward-looking contract element is a regulatory change clause. Given the certainty that AI regulations will tighten in the coming years, include terms that address what happens if new laws mandate changes to the AI service or its usage. For instance, you might add: “If a change in law or regulation requires modifications to the Service or how it is used, Vendor will use best efforts to comply and assist Customer in compliance, and the parties will negotiate in good faith any necessary amendments to this agreement.” This ensures you’re not stuck with a non-compliant AI or a vendor who refuses to adapt. It also opens the door to exit the contract if compliance becomes impossible (without penalty). Remember that only 17% of AI vendors commit to full regulatory compliance in their contracts, so raising this issue is essential. Do not simply accept a vendor’s stance of “you are responsible for using our tool legally.” Instead, make it a shared responsibility, with the vendor explicitly accountable for the aspects under their control (the design and functioning of the AI).

Ethical alignment is closely related. Beyond strict laws, there are ethical guidelines and industry standards (such as the NIST AI Risk Management Framework) that you may want the vendor to follow. If your company has an AI ethics policy or risk framework, consider attaching it or incorporating it by reference, and stating that the vendor’s AI solution will be used in line with those principles. While this might not be as enforceable as a clear legal requirement, it sets expectations and can be useful leverage if the vendor deviates from promised behavior (e.g., if the AI is found to be engaging in bias or unsafe behavior that the vendor promised to mitigate). Ultimately, aligning contract terms with regulatory and ethical expectations protects you from downstream liability and reputational harm. It forces the vendor to take compliance seriously – if they know their revenue (contract) depends on it, they’re more likely to build compliance into their product roadmap.

Recommendations
For procurement professionals and IT leaders, managing AI vendor risk requires vigilance and a proactive stance. Here are key strategies to put into action when negotiating and overseeing contracts with AI providers:

  • Know Your Risk Exposure: Before signing, assess your use of the AI tool and what could go wrong. High-stakes uses (like hiring, medical advice, or financial decisions) demand tighter contract protections than low-risk experimental uses. Don’t treat an AI deal as just another SaaS purchase – involve legal, compliance, and security teams early to map out the risks.
  • Retain Your Data and Outputs: Insist that the contract clearly states you own all inputs and outputs. Your data should remain yours, and any AI-generated content based on your prompts should be free for you to use without vendor claims. Reject terms that allow the vendor to freely reuse or sell your data – limit usage to providing the service to your organization only.
  • Limit Vendor Data Usage: Negotiate data handling clauses that strictly limit what the vendor can do with your information. If possible, opt out of allowing your data to train the vendor’s models. If data must be used for improvements, require anonymization and explicit consent for any new use. Include obligations for the vendor to follow privacy laws, protect data with strong security, and delete your data upon request or contract end.
  • Demand Accountability in Writing: Do not accept contract language that leaves your company holding the bag. Push for vendor accountability via representations and indemnities. For example, have the vendor warrant that their AI will not knowingly violate IP rights or laws, and secure an indemnity so they will defend you if a third party brings an IP or privacy claim. Ensure critical indemnities (IP infringement, confidentiality breaches, etc.) are not nullified by an overly low liability cap.
  • Negotiate Fair Liability Terms: Scrutinize any liability cap. Aim to raise low caps and carve out key issues from limitations. If an AI mistake could realistically cost you millions in damages or fines, the vendor should share in that risk to a reasonable degree. Avoid broad exclusions that eliminate nearly all damages – the vendor should at least be liable for direct losses caused by its product’s failures or negligence. Consider requiring the vendor to carry adequate insurance coverage (e.g., cyber liability or errors & omissions insurance) to backstop their obligations.
  • Build in Audit and Oversight Rights: Maintain the right to monitor the vendor’s compliance. Include audit rights or, at minimum, rights to receive regular security and compliance reports. Require notification of any data breaches or changes in how the AI uses data. If possible, get agreement on periodic reviews of output quality and bias testing results. Transparency is key – a vendor unwilling to allow any oversight may not be trustworthy.
  • Ensure Human Control: In your usage guidelines (and, where applicable, in the contract), establish that humans will review AI outputs before making important decisions. Stipulate any necessary human-in-the-loop processes, especially for high-impact or sensitive applications. Ensure you have the contractual freedom to bypass or override AI decisions when necessary. This mitigates risk and aligns with emerging legal norms requiring human judgment in automated processes.
  • Plan for Regulatory Compliance: Proactively address current and future regulations. Include clauses that require the vendor to comply with applicable AI laws and to assist you in compliance efforts (providing needed information, implementing new requirements, etc.). If new regulations appear mid-contract, you should have a path to update the agreement or terminate if compliance is impossible. Never agree to terms that make regulatory compliance “solely your problem” if the vendor’s technology is part of the equation.
  • Stay Involved and Informed: Treat the vendor relationship as a partnership in risk management. Set up a governance structure (regular meetings, points of contact, escalation procedures) to continually evaluate the AI’s performance and address any issues. Monitor industry developments – for instance, if other companies experience an AI failure or lawsuit, discuss with your vendor how they prevent a similar issue. By actively engaging, you can often catch problems early or negotiate improvements as the service evolves.
  • Stand Firm on Critical Protections: Finally, be prepared to walk away or seek alternatives if a vendor refuses to reasonably balance the contract. The excitement around AI can create pressure to “just sign and start using it,” but taking on uncontrolled liability or unfettered data usage rights is a recipe for disaster. Use your leverage as a customer, especially if you’re a large enterprise client, to secure terms that protect your interests. It’s far easier to prevent bad terms than to litigate or cope with their consequences later.

Procurement and IT leaders can significantly mitigate the legal and financial risks of deploying AI solutions by following these strategies. A well-negotiated contract won’t eliminate all uncertainty in the fast-moving world of AI, but it will put guardrails in place, ensure the vendor shares responsibility, and give your organization the tools to use AI wisely and safely.

Author

  • Fredrik Filipsson brings two decades of Oracle license management experience, including a nine-year tenure at Oracle and 11 years in Oracle license consulting. His expertise extends across leading IT corporations like IBM, enriching his profile with a broad spectrum of software and cloud projects. Filipsson's proficiency encompasses IBM, SAP, Microsoft, and Salesforce platforms, alongside significant involvement in Microsoft Copilot and AI initiatives, improving organizational efficiency.
