
1. Introduction
The surge of generative AI adoption has introduced new risks for enterprise data privacy and security. To ensure robust protections, CIOs, procurement officers, IT security, legal, and compliance leaders must scrutinize contracts with AI vendors (e.g., OpenAI, Anthropic, Google, Cohere). Inadequate contract terms can expose sensitive data, violate regulations, or create IP ownership ambiguities. Whether deploying AI in the cloud or on-premises, enterprises need Gartner-style due diligence and strong contractual safeguards. This advisory outlines key privacy and security considerations in AI service agreements, with real-world clause examples and best practices for negotiating favourable terms.
2. Cloud vs. On-Premises AI Deployments: Privacy and Security Implications
Enterprises can consume AI services via cloud APIs or by hosting models on-premises. Cloud-based AI offers quick deployment and scalability, but company data will transit to the vendor’s environment, raising concerns over external access, jurisdiction, and multi-tenant security. By contrast, on-premises AI keeps data in-house under your control, which can better protect sensitive information. Companies with strict data privacy needs (e.g., healthcare or finance) often favour on-prem or private cloud solutions for maximum control. However, on-prem deployments require a heavy upfront investment and internal security management. Many enterprises choose hybrid approaches – using on-prem for highly sensitive operations and cloud for less sensitive or resource-intensive tasks. In any case, contract terms should reflect the deployment model, ensuring that the vendor’s obligations on protection, residency, and compliance are clear if data leaves your premises.
3. Data Privacy Protections in AI Vendor Contracts
Contracts must directly address how the vendor will handle your data (both inputs and AI-generated outputs) to maintain privacy:
- Ownership of Inputs and Outputs: The agreement should confirm that the enterprise retains ownership of all data it provides as input and of all AI-generated output content. For example, OpenAI’s business terms explicitly state that customers “retain all ownership rights in Input” and “own all Output”, with OpenAI assigning any rights in the output to the customer. Similarly, Anthropic’s terms (2024) clarify that it “does not anticipate obtaining any rights in Customer Content” and that customers own all outputs from using its AI. These clauses ensure the company’s prompts, data, and results remain its intellectual property, preventing vendor claims on your proprietary information or generated content.
- Limits on Data Use (No Training or Sharing): Insist on a clause that the vendor will only use your data to provide the service to you and not to train or improve their AI models for the benefit of others. Many generative AI providers have adopted this as standard; by default, they do not use customer-submitted data to train models. Your contract should codify this: e.g., “Vendor shall not use Customer’s inputs or outputs to develop, train, or improve any AI model”. If the vendor proposes using data for R&D or model training, it should be a red flag for sensitive data. At a minimum, explicit, written consent must be required for any such use, and data must be anonymized/aggregated. Salesforce notably ran an ad campaign (“The AI Wild West”) emphasizing that some AI providers “will do anything to get” customer data while asserting that “Salesforce AI never steals or shares your customer data.” This underscores industry understanding that privacy of customer data is paramount – your contract should reflect that same commitment.
- Data Retention and Deletion: Define how long the AI vendor can retain your data and what happens after processing. The best practice is to minimize retention; for instance, some enterprise services allow zero retention or short retention by default. OpenAI’s enterprise offering even lets customers control how long data is retained. The contract or Data Processing Agreement (DPA) should specify that upon contract termination (or earlier upon request), the vendor will delete all customer data and any derivatives from their systems. Include backup deletion timelines if relevant. It’s wise to get contractual assurance of data erasure, bearing in mind that if data was used to train models, complete deletion might be infeasible (another reason to prohibit training use). Clearly outline procedures for requesting data deletion during the term, e.g., the right to have specific inputs or outputs purged from vendor systems.
- Confidentiality of Customer Data: Treat all inputs and outputs as confidential information in the contract. Include a confidentiality clause obligating the vendor to protect your data with at least the same care as protecting its sensitive data. Crucially, restrict the vendor’s use of your confidential info solely to providing services to you, not for any other purpose. For example, the contract should forbid the vendor from mining your prompts or content to benefit other customers or to develop new products without permission. Ensure any personnel or subcontractors who might access your data are bound by strict NDA terms. This clause reinforces the data use limitations by framing them as a confidentiality mandate.
- Data Residency and Transfer Restrictions: If your data is subject to geographic or regulatory restrictions (for example, personal data under GDPR requiring EU storage or limitations on cross-border transfers), build those requirements into the contract. Specify where data may be stored or processed, such as “only in data centres located in [e.g., the EU or country X] unless approved by the customer.” Include commitments to abide by relevant transfer mechanisms (Standard Contractual Clauses, etc.) if data will move across jurisdictions. Cloud AI vendors may offer regional hosting options; ensure the contract and DPA reflect the chosen region to meet data residency needs. For highly sensitive data, some enterprises opt for an on-prem or private instance to avoid cross-border data flow concerns – the contract should acknowledge that arrangement if applicable.
- Regulatory Compliance (GDPR, CCPA, etc.): The vendor should contractually commit to complying with applicable data protection laws and assist you in doing so. This typically means signing a robust DPA appended to the contract, especially if any personal data is involved. The DPA should define the vendor as a data processor (or “service provider” under CCPA) processing data on your instructions. Key GDPR terms to include:
- Purpose limitation: Vendor processes personal data only for the specified purposes and solely on the customer’s documented instructions
- Confidentiality: all personnel authorized to process the data are bound by confidentiality obligations
- Sub-processors: no sub-processor may be engaged without notice to the customer and an opportunity to object
- Assistance: the vendor assists the customer with data subject requests, security obligations, breach notifications, and impact assessments
- Deletion or return: personal data is deleted or returned at the end of the engagement
- Audits: the vendor makes available the information necessary to demonstrate compliance and allows for audits
4. Data Security Requirements and Standards
Enterprise contracts should insist on stringent security measures from AI vendors, given that sensitive data and mission-critical processes may be at stake. Key security topics to cover include:
- Security Certification and Audits: The vendor must maintain industry-recognized security certifications or audits such as SOC 2 Type II or ISO 27001. These attestations indicate the vendor follows rigorous security controls and has been audited by an independent firm. For instance, OpenAI has completed SOC 2 audits for its enterprise services. The contract can stipulate that the vendor will provide up-to-date security audit reports (e.g., provide the SOC 2 report under NDA) and/or that the vendor must undergo regular third-party security assessments. This gives you confidence that the vendor’s security posture is vetted and provides documentation for your compliance auditors. If the vendor lacks such certifications, you might incorporate a right for your company to conduct a security review or on-site audit (with appropriate notice) to verify controls.
- Encryption and Access Controls: Ensure the contract mandates strong encryption for data in transit and at rest on the vendor’s systems. State-of-the-art encryption (e.g., AES-256 for data at rest, TLS 1.2+ or TLS 1.3 for data in motion) is expected. All major AI cloud providers advertise encryption of customer data by default, and your agreement should memorialize it. Additionally, require the vendor to implement strict logical access controls: customer data should be compartmentalized and accessible only to those with a need-to-know. In a multi-tenant SaaS scenario, logical data segregation is critical – your data must be isolated from other customers’ data through separate databases, namespaces, or encryption keys per tenant. The goal is to prevent accidental data leakage or unauthorized cross-access in the shared environment.
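On the customer side, the “TLS 1.2+ in transit” expectation can be enforced technically, not just contractually, by refusing downgraded connections to the vendor’s API. A minimal sketch using only the Python standard library (which client library you hand the context to is your own integration choice):

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2,
# mirroring the "TLS 1.2+ in transit" contractual expectation.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also enables certificate validation and
# hostname checking by default -- both should stay on.
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Pass `ctx` to urllib/httpx/etc. when calling the vendor's API so a
# downgraded or misconfigured endpoint fails loudly instead of silently.
```

This kind of client-side check complements, rather than replaces, the contractual encryption clause: it catches a vendor-side misconfiguration the day it happens instead of at the next audit.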
- Vulnerability Management and Secure Development: The vendor should follow secure coding practices and have an ongoing vulnerability management program. Ask for contract clauses stating that the vendor will conduct regular vulnerability scans and penetration testing internally or via independent third parties. It should also require prompt remediation of any critical security findings. To illustrate, your contract might say the vendor will “perform annual penetration tests and remediate high-risk findings within X days.” Also, consider a clause that the software/AI model will be provided free of malicious code – Dentons recommends ensuring vendors have policies to prevent the introduction of malware into their solution. This extends to ensuring updates or model changes are scanned for security issues. You may request the right to be informed about major vulnerabilities or to review summary results of security tests.
- Incident Response and Breach Notification: Time is of the essence in a data breach. The contract or DPA should obligate the vendor to notify you promptly in case of a security incident or breach involving your data. “Promptly” is often defined (e.g., within 24-72 hours of discovery) to align with regulations like GDPR. Outline what the notification should include (nature of the incident, data involved, mitigation steps, etc.). Additionally, obligate the vendor to cooperate with investigations and remediation efforts – you may need their help to satisfy legal notification duties to regulators or individuals. Some contracts also specify an incident response plan or points of contact for security issues. Because AI systems are new attack targets, also consider an indemnity or liability clause: if a vendor’s negligence in security causes a breach that harms your company, you may seek to have the vendor bear certain costs (at minimum, require them to cover their own investigative and remediation costs and perhaps credit you for service downtime).
- Ongoing Monitoring and Audit Rights: To maintain trust, include provisions for ongoing oversight. Enterprise buyers often negotiate audit rights, allowing them (or an appointed independent auditor) to inspect the vendor’s facilities, security controls, and compliance with the contract’s privacy/security obligations. Many cloud vendors resist frequent on-site audits but might agree to annual audits or provide comprehensive audit reports. A balanced approach is to require annual security attestations – the vendor can fulfil this by sharing their latest SOC 2 Type II report, penetration test executive summary, and relevant certifications. Your contract could say that if material security gaps are found (whether through your audit or the vendor’s reports), the vendor must promptly address them. Remember, audit clauses usually require giving reasonable notice and not interfering with the vendor’s operations unduly, but they are a critical tool for accountability.
5. Addressing Model-Specific Risks and Misuse
Generative AI introduces unique risks that traditional software contracts may not cover, so incorporate terms that address these model-specific issues where relevant:
- Hallucinations and Output Accuracy: AI models sometimes produce incorrect or fabricated information (“hallucinations”). Vendors typically disclaim warranties on the accuracy of AI outputs, pushing responsibility to the customer to vet results. As the customer, you should acknowledge this reality for internal use (require human review for important outputs) and seek contractual assurances that the vendor is taking steps to minimize harmful errors. For instance, you might request representations that the model has undergone testing to filter out sensitive personal data or defamatory content in outputs. Some contracts for high-risk AI uses even include a warranty that the AI’s output will not intentionally violate any law or privacy right, though broad guarantees are rare. At a minimum, ensure the contract doesn’t hold your organization liable for errors solely caused by the AI’s autonomous functioning – responsibility for the core model performance should remain with the vendor to the extent feasible.
- Misuse Prevention: From a governance perspective, consider how your employees or end-users will interact with the AI service. User misuse (e.g., entering proprietary or personal data into a public chatbot without authorization) threatens data security. While this is largely an internal training and policy matter, your contract can support safe use by requiring the vendor to implement certain controls. For example, the vendor could provide administrative controls for the customer to manage usage, such as restricting certain types of content from being input or monitoring usage logs. Also, ensure the vendor’s usage policies prohibit using the AI for illicit purposes or uploading unlawful content; this protects both parties. If the AI tool is consumer-facing, the contract may need the vendor to enable consent mechanisms or content filters (for instance, requiring end-user consent before processing their data through the AI solution). All these measures tie back to data governance: vendors and customers should commit to preventing misuse that could lead to privacy breaches or security incidents.
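Guardrails of this kind are often enforced on the customer side as a pre-submission filter that blocks prompts containing apparent personal data before they ever reach the vendor. A minimal illustrative sketch in Python – the patterns and the `gated_submit` helper are hypothetical examples, not a substitute for dedicated DLP tooling:

```python
import re

# Illustrative patterns only -- real deployments use dedicated DLP tooling
# with far broader coverage (names, addresses, national IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in a prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def gated_submit(prompt: str) -> str:
    """Block prompts that appear to contain PII; otherwise pass them through."""
    hits = flag_pii(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: possible PII detected ({', '.join(hits)})")
    return prompt  # in practice, forward the prompt to the AI vendor's API here
```

Pairing a filter like this with the contractual usage-policy clause gives you both a technical backstop and a legal one: the filter catches accidental disclosures, while the contract governs what the vendor must do if data slips through anyway.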
- Bias and Fairness: While not directly a “security” issue, biased AI outputs can create legal and ethical problems, and some biases may involve mishandling personal data. If you’re deploying AI in hiring, lending, or other sensitive decisions, include terms to address bias. This might include the vendor representing that they have mitigated bias in training data and will monitor and report bias issues. You could negotiate for an audit or transparency report on the model’s training data and testing results for fairness. Emerging regulations may require it, so a forward-looking contract ensures the vendor will cooperate in providing the information needed for AI impact assessments or compliance audits. Additionally, clarify that if the model’s output leads to discrimination claims, the vendor will support the customer (some vendors even extend indemnification to cover third-party claims related to AI bias or IP infringement).
- Model Improvements and Updates: If the AI model is regularly updated (weights changed, new versions), negotiate how those updates will be handled. The vendor might update the model automatically for cloud services, so ensure the contract obligates the vendor to notify customers of significant changes, especially if they affect data handling or output quality. If you rely on a specific model version for compliance reasons (e.g., it was validated for your use), consider including a right to stay on a certain version or to vet new versions before deployment in your environment. Also, if the contract allows, you might specify that any fine-tuned models or custom models built for you are your property or at least exclusively for your use (some vendors offer custom models that are not shared, which is ideal for privacy).
6. Key Contract Clauses and Best Practices
To secure data privacy and security in AI agreements, ensure the following critical clauses are included, often within a Data Protection Addendum or security exhibit:
- No Training on Customer Data: A clear clause such as “Vendor shall not use Customer Data or derived data to train, improve, or enhance any AI models, except as necessary to provide the services to Customer.” This should also bar sharing your data with other clients or third parties. For example, OpenAI’s terms explicitly state they “will not use Customer Content to develop or improve the Services.” Ensure any exceptions (like opt-in feedback programs) are strictly under your control.
- Data Ownership and IP Rights: The contract should reiterate that the customer (you) owns all inputs and outputs. If the vendor’s default terms are unclear, add language: “Customer retains all rights, title, and interest in and to its Input data and any AI-generated Output. Vendor acquires no rights to Customer’s data or output, other than the limited right to process it for providing the service.” This prevents any ambiguity over who owns AI-generated content. Indeed, Microsoft updated its services agreement to clarify that AI-generated content is part of “Your Content,” i.e., owned by the user. Anthropic likewise states that customers own the outputs from Claude and does not claim rights to customer content. Having this in writing protects your intellectual property and trade secrets.
- Confidentiality and Non-Disclosure: A robust confidentiality clause should cover all data shared with the AI vendor and any results. It must obligate the vendor to protect the information and not disclose it or use it outside the scope of the contract. Often, standard NDAs may be incorporated, but ensure the contract language explicitly names input prompts, files, and AI outputs as confidential. Also, the vendor’s staff should only access your data on a need-to-know basis, under confidentiality obligations that are at least as strict as those in your contract.
- Security Commitments: The contract should include a dedicated Security Schedule or a detailed clause enumerating the vendor’s security measures. This can reference compliance frameworks (SOC 2, ISO 27001) and specific controls: encryption, access control, network security, etc., as discussed in Section 4. It’s prudent to require the vendor to maintain an information security program following industry best practices. The vendor should also provide security awareness training to its employees, conduct background checks where appropriate, and maintain policies for data handling. If not already covered in a DPA, also have a clause on data breach response, noting the notification timeline and cooperation duties, as mentioned.
- Audit and Assessment Rights: As noted, include a clause that allows your organization to verify compliance. This could be phrased as: “Upon X days’ notice, Customer or its delegate may audit Vendor’s relevant systems, controls, and records to ensure compliance with the data protection and security requirements of this Agreement. Vendor will reasonably cooperate and address any deficiencies identified.” In many cases, vendors negotiate to provide third-party audit reports instead of on-site audits, so the clause might allow the vendor to satisfy the requirement by providing current attestations (SOC 2 report, penetration test results, etc.). This right in the contract gives you legal leverage to obtain information and assurances over time.
- Data Deletion and Return: Stipulate the processes for end-of-contract data handling. For example: “Upon termination or expiration of the contract, and at any time upon Customer’s request, Vendor will promptly delete or return all Customer Data (including backups) from its systems, except where retention is required by law. Vendor shall certify in writing such deletion upon request.” If your policy is to retain outputs, you might request data return (delivery of a final data export) instead of deletion. But for sensitive inputs, ensure they don’t linger on the vendor’s servers beyond what’s necessary. Be aware that if the vendor did incorporate your data into its models in any way, deletion is complicated – another reason to avoid that scenario. Ideally, the vendor should warrant that any residual data (like in backups) be securely protected until erased.
- Liability and Indemnification for Data Breaches or IP Issues: Allocate liability if things go wrong. It’s common to include an indemnity from the vendor for third-party claims arising from the vendor’s breach of confidentiality or data misuse. For instance, if the AI vendor’s negligence leads to a data breach exposing customer data, the vendor should indemnify the enterprise for losses or claims resulting from that breach. Additionally, given the unsettled intellectual property landscape of generative AI, many big providers (OpenAI, Microsoft, Google, Anthropic) have offered IP indemnification, promising to defend customers against copyright or patent claims related to using the AI’s outputs. If available, secure such clauses, as they protect your company if someone alleges the AI output infringes their rights. At a minimum, ensure the contract doesn’t unfairly saddle you with all responsibility for AI outputs. A balanced approach might be that the vendor covers IP infringement claims caused by the model or training data, while the customer covers claims resulting from their input content. Negotiate limitations of liability as well – vendors will seek to cap liability but try to carve out data breach and confidentiality breaches from any cap or secure a higher cap for those, given the high stakes of data incidents.
7. Audit Rights and Data Governance Oversight
Enterprise customers must be able to verify and enforce the privacy and security obligations throughout the vendor relationship. As discussed, an audit rights clause is fundamental. In practice, exercise these rights by periodically reviewing the vendor’s compliance: request annual security reports, hold quarterly governance meetings, or perform a formal audit if needed. Be sure to also review any subprocessor disclosures – the DPA should require the vendor to inform you of any third parties handling your data (e.g., cloud hosting providers, subcontractors) and give you the right to object if those raise concerns.
In addition to formal audits, maintain open communication channels with the vendor’s security and compliance teams. You may set up contractual language for periodic attestations (e.g., the vendor’s CEO or CISO annually certifies compliance with the security requirements). If the vendor offers a Trust Portal or compliance dashboard, leverage it to continuously monitor their certifications, penetration test summaries, and data protection practices. Your goal is to have ongoing assurance, not just one-time promises.
For extra assurance, consider engaging independent experts like Redress Compliance to help audit the vendor or evaluate their contract terms and practices. These experts can conduct a privacy impact assessment or security review of the AI solution to verify that it meets enterprise standards. The contract can even allow for third-party assessments on your behalf, with appropriate confidentiality protections. By involving independent auditors or advisors, you signal that your organization takes data governance seriously, which can encourage the vendor to maintain high standards.
8. Data Processing Agreement (DPA) Structuring
When personal data is involved, a DPA isn’t just a formality – it’s a legal requirement (under GDPR) and a crucial part of your contract structure. Structure the DPA to complement the main service agreement as follows:
- Ensure the DPA covers all necessary details: the subject matter and duration of processing, nature and purpose of processing, types of personal data, and categories of data subjects. This clarifies what data the AI will process and why, which is important for privacy compliance and your record-keeping (e.g., GDPR’s Article 30 records).
- Roles and responsibilities: The DPA should explicitly state that the enterprise (customer) is the data controller (or “business” under CCPA) and the AI vendor is a data processor (or “service provider”). The vendor should acknowledge that it will only process data per your instructions and for the purposes in the contract. If the vendor ever wants to act as a controller (for example, using data for its analytics or improving its models), this must be prohibited or tightly controlled by the DPA unless you choose to allow it in specific ways.
- Standard Contractual Clauses (SCCs): If data are transferred internationally (e.g., from the EU to the US, where many AI vendors are based), include the EU Standard Contractual Clauses or UK International Data Transfer Agreement as needed or reference that the SCCs are incorporated. The DPA should also document any technical and organizational measures (TOMs) the vendor uses to protect data (often provided in an annex, aligning with the security measures we discussed in Section 4).
- Overlap with the main contract: Avoid contradictions between the DPA and the main contract or privacy policy. Typically, the DPA will reinforce clauses like confidentiality, breach notification, and sub-processor requirements. It may also specify data deletion timelines more granularly. For example, ensure that the deletion clause in the DPA matches the one in the master agreement. In case of conflict, many DPAs state that the stricter of the DPA’s provisions will prevail for data protection matters.
- Signatures and annexes: Have both parties sign the DPA, or incorporate it by reference into the main agreement with a signature line in the DPA itself. Attach any necessary exhibits, like a list of approved sub-processors, a copy of the SCCs, and a security measures appendix. This creates a comprehensive package ensuring all privacy angles are contractually addressed.
9. Negotiation Strategies and Conclusion
Negotiating with AI vendors – especially big providers – can be challenging, but enterprise clients have increasing leverage as AI adoption grows. Here are some best-practice strategies and concluding recommendations:
- Do Your Due Diligence: Before negotiations, thoroughly research the vendor’s standard terms, privacy policy, and compliance documentation. Leverage any available public commitments (for instance, if the vendor’s website or trust centre promises “we do not train on your data” or details their security certifications) and use those as a baseline to demand equal or stronger contractual terms. Identifying gaps upfront (e.g., no mention of GDPR compliance or lacking SOC 2) will focus your negotiation on critical fixes.
- Prioritize the Must-Haves: Determine which privacy/security clauses are non-negotiable for your organization. For most enterprises, these include no training on your data, data ownership clarity, strong confidentiality, breach notification, and compliance with laws. Articulate to the vendor that without these, the deal cannot proceed – this often expedites finding a solution, as reputable AI vendors are increasingly accustomed to these requests. Use industry best practices as justification: e.g., “All our vendors sign our standard DPA with EU SCCs; we need you to do the same,” or “Our policy is that no customer data can be used for model training – this is standard in our industry, as evidenced by major providers like OpenAI and Anthropic already committing to it.”
- Leverage Competition and Alternatives: If a vendor resists necessary clauses, remember there are alternatives in the fast-growing AI market. Smaller or newer AI companies might be more flexible in winning enterprise business. In some cases, you can also mitigate risks by architecture – e.g., using the vendor’s on-premises option or a VPN to their cloud to limit exposure. However, do not let technical fixes replace contractual protections; they should complement each other. Let the vendor know that compliance and security are deciding factors when choosing an AI partner – vendors have lost deals due to unwillingness to meet corporate security requirements, which puts pressure on them to accommodate.
- Seek Expert Help for Complex Areas: Negotiating AI contracts can span legal, technical, and ethical domains. Bring in your in-house experts from IT security, data privacy, and legal early in the process to identify issues. Also, consider consulting independent experts like Redress Compliance or outside counsel specialized in tech contracts to review the terms. They can provide model clauses and ensure nothing important is overlooked. For instance, they might help draft a tailored clause about AI output IP indemnification or assess whether the vendor’s proposed data handling meets GDPR standards. Expert input is especially useful for newer areas like AI bias, where contract language is still evolving.
- Document and Monitor Commitments: Treat the contract as a living document once it is signed. Ensure the vendor provides all promised deliverables (e.g., did you receive the SOC 2 report annually? Are you getting breach notifications in testing?). Establish an internal owner to monitor vendor compliance – perhaps via periodic check-ins or requiring the vendor to complete a compliance questionnaire annually. If the vendor launches new features or you expand use cases, update the contract as needed (through addendums) to cover any new data flows or risks.
Conclusion: Securing robust data privacy and security terms in AI vendor contracts is not just a legal exercise – it’s fundamental to protecting your enterprise’s crown jewels in an era of expansive AI use. By covering the key topics outlined above – from data handling and ownership to security practices, compliance, and risk mitigation – you will be well-equipped to negotiate agreements that enable AI innovation without compromising trust or compliance. The best practice is to be as explicit as possible in contract language: assume it’s not guaranteed if it isn’t written down. With clear clauses and vigilant enforcement, enterprises can embrace generative AI confidently, knowing their sensitive data, IP, and reputations are contractually safeguarded. Stay proactive, leverage expert guidance, and keep the conversation with vendors focused on building a secure and privacy-respecting AI partnership.
Sources: The insights and clause examples above draw on industry expert guidance and real-world practices, including legal advisories, AI vendor policies, and emerging regulatory considerations. These references underscore the collective push towards stricter privacy and security standards in AI agreements – a trend that every enterprise should capitalize on during negotiations.