
Cohere Enterprise AI Contract Guidance


Overview: As CIOs and procurement leaders integrate enterprise AI services like Cohere into their IT strategy, it is critical to negotiate contracts that balance innovation with risk management. This guidance, in the style of an independent advisory (the kind of independent AI contracting experts such as Redress Compliance provide), covers key contract areas for using large language model (LLM) services. It addresses cloud API usage of Cohere’s platform (e.g., Command generation models, embeddings, RAG retrieval-augmented generation) and custom private deployments on enterprise infrastructure. The goal is to secure favourable pricing, ensure robust service levels, protect data and IP, clarify usage rights, mitigate compliance risks, and structure contracts for long-term flexibility. The tone here is practical and vendor-neutral, focusing on what terms CIOs should insist on to safeguard their enterprise while harnessing AI capabilities.

Cloud API vs Private Deployment: Key Contract Implications

Many AI vendors offer both multi-tenant cloud APIs and dedicated enterprise deployments. The contract requirements can differ between these models. Table 1 contrasts key considerations for Cohere’s cloud API versus a private (VPC/on-premises) deployment of Cohere’s models:

Pricing Model
  • Cloud API (SaaS): Usage-based fees metered per token or API call, with volume-based discounts negotiable (see the pricing section below).
  • Private Deployment (VPC/On-Prem): Typically a flat annual license fee for the model and software, often tiered by model size or user count, plus support and maintenance fees (see the custom deployment pricing discussion below).

Infrastructure & Control
  • Cloud API: Hosted in Cohere’s cloud (multi-tenant by default, with dedicated instances available for enterprise) – Cohere manages all backend infrastructure and scaling. Little customer control over infrastructure or update timing.
  • Private Deployment: Deployed in your environment (your cloud VPC or on-prem data centre). You control infrastructure, network isolation, and update rollouts. This ensures no co-mingling of data/workloads with others, at the cost of managing more complexity (often with vendor support).

Data Security & Residency
  • Cloud API: Customer data (prompts, inputs) leaves your network to be processed in the vendor’s environment. The contract must enforce strict controls, e.g., no secondary use of data for training and, if needed, processing only in certain regions. Cohere does allow opting out of data being used to improve models. Cohere’s security measures protect data, but trust is required.
  • Private Deployment: Data stays within your trusted boundary for maximum privacy. Meets strict data residency or sovereignty requirements by keeping data local. However, ensure the contract still enforces vendor obligations (e.g., they cannot remotely extract or access data without permission). Cohere advertises private deployments for “complete data privacy and control”, which is ideal for sensitive data.

Service Level Responsibility
  • Cloud API: As the cloud provider, Cohere should commit to high uptime and performance for the API. You’ll negotiate an SLA (e.g., 99.9%+ uptime) with outage service credits. Cohere runs the service, so they own availability and reliability.
  • Private Deployment: You operate the model in-house, so uptime is under your control. The contract should instead focus on support SLAs – e.g., how quickly Cohere will provide fixes or replacement model files if your instance has issues. In other words, the vendor ensures timely support/patches rather than guaranteeing cloud uptime.

Table 1: Key differences between contracting for Cohere’s cloud API and a private deployment.

Both deployment options can be made to work, but align your contract terms with the deployment model. For cloud usage, focus on service-level commitments and data handling clauses, whereas for on-premise, nail down the license terms, support, and upgrade entitlements. In the following sections, we detail specific contract areas that CIOs should address.

Pricing Models and Cost Control Strategies

Usage-Based Pricing and Enterprise Discounts: Cohere’s cloud API is priced on a usage model, typically by tokens or calls. Different model endpoints carry different rates – e.g., text generation (Command models) billed per million tokens, embeddings by the number of tokens embedded, and others like Rerank by the number of queries. These usage fees can add up quickly, so negotiating the right pricing model is crucial. Enterprises often secure custom pricing tiers based on volume. Volume commitments translate to deeper discounts – for instance, if you project billions of tokens per month, insist on rates well below the list price in exchange for that commitment. Vendors expect large clients to seek the best value for their scale, so bring usage forecasts and push for tiered pricing with automatic discounts as you hit higher volumes. Also, benchmark competing AI providers’ prices (OpenAI, Azure, etc.) to strengthen your case. Cohere’s per-token rates for comparable models should align with the market; if not, use that leverage to negotiate. In some cases, offering an up-front spend commitment or prepayment can yield additional discounts (vendors value cash flow certainty) – consider an annual prepaid plan if it comes with significant rate reductions.
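To make the usage-based math concrete, the back-of-envelope sketch below estimates monthly spend across metered endpoints. All rates are hypothetical placeholders, not Cohere’s actual price list:

```python
# Back-of-envelope monthly spend across usage-metered endpoints.
# All rates are hypothetical placeholders, not Cohere's actual price list.

RATE_PER_M_TOKENS = {"generation": 15.00, "embeddings": 0.10}  # $ / million tokens
RATE_PER_K_QUERIES = {"rerank": 2.00}                          # $ / thousand queries

def monthly_cost(gen_tokens: int, embed_tokens: int, rerank_queries: int) -> float:
    return (
        gen_tokens / 1e6 * RATE_PER_M_TOKENS["generation"]
        + embed_tokens / 1e6 * RATE_PER_M_TOKENS["embeddings"]
        + rerank_queries / 1e3 * RATE_PER_K_QUERIES["rerank"]
    )

# 500M generated tokens, 2B embedded tokens, 50k rerank queries:
print(f"${monthly_cost(500_000_000, 2_000_000_000, 50_000):,.2f}")  # $7,800.00
```

Running forecasts like this against competing providers’ list prices is the quickest way to see whether an offered discount is genuinely competitive.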

Cost Control Mechanisms: To manage ongoing costs, structure the contract with provisions that prevent overruns and offer flexibility:

  • Rate Locks and Price Caps: Ensure the per-unit rates are locked for the contract term – no surprise increases. Likewise, if Cohere (or the market) lowers its public pricing or releases cheaper model variants, your contract should allow you to benefit from those reductions. Negotiate a clause that if list prices drop or new, more cost-efficient models (e.g., a distilled model) become available, you can opt into the lower pricing. This prevents being stuck overpaying as AI costs trend down over time.
  • Flexible Volume Pools: Try to make your committed spend model-agnostic and use-case-agnostic. Instead of committing to X tokens on one specific model, negotiate the freedom to allocate your spend across any of Cohere’s models or endpoints (Command, Embed, Rerank, etc.) as your needs evolve. For example, you might have a single contract pool of N million monthly tokens that can be used across different model tiers or projects rather than siloed allocations. That way, if you use a smaller model for some tasks (to save cost) and a larger model for others, you won’t violate the contract – it maximizes flexibility and avoids wasted capacity.
  • Unused Capacity Rollover: Avoid “use it or lose it” monthly quotas. If you commit to a certain volume (e.g., 100M tokens/month), negotiate the right to roll over a portion of unused tokens to the next period. For instance, you might allow up to 10–20% of unused tokens in a month to carry into the next month or quarter. This cushions against demand variability, so you’re not paying for the capacity you didn’t use. Without such a clause, slow months burn money. Vendors may not allow unlimited carryover, but even a quarter’s grace for unused capacity can significantly improve cost efficiency.
  • Burst Allowances: Conversely, plan for usage spikes. Your contract can permit bursting over the committed volume at the same discounted rate (perhaps with notice) or a modest overage rate. For example, ensure that if you exceed your token allotment by 50% one month, the extra is charged at your contracted rate or only slightly higher, not some punitive on-demand rate. This is important for seasonal peaks or unforeseen surges – you don’t want to throttle critical AI services due to rigid limits. (Both the rollover and burst mechanics are illustrated in the sketch following this list.)
  • Transparency and Alerts: Ask for detailed usage reporting and real-time cost alerts. Cohere’s platform should provide dashboards or APIs to track consumption. In the contract, you can require that the vendor notify you if you approach 90% of your committed volume or are trending towards an overage. This early warning allows you to adjust usage or renegotiate capacity before incurring surprise bills. Clear billing and invoicing terms should be set (e.g., monthly billing cycle, any true-up process for overages, etc.).
  • Optimization Reviews: Include a clause that the provider will assist in cost optimization. For example, quarterly business reviews with Cohere’s team to analyze your usage and recommend efficiency improvements. They might suggest prompt engineering tweaks to cut token counts, use cheaper endpoints for certain jobs, or enable features like caching. Making the vendor partly accountable for cost optimization (not just usage) can align their support team with your efficiency goals.
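As a sanity check on the rollover and burst terms above, here is a minimal billing sketch. The 15% rollover cap, $10 per-million-token rate, and 110% overage multiplier are assumed figures for illustration, not standard terms:

```python
# Sketch of monthly billing under negotiated rollover and burst terms.
# The 15% rollover cap and 1.10x overage multiplier are illustrative only.

COMMITTED_TOKENS = 100_000_000
CONTRACT_RATE = 10.00          # $ per million tokens (hypothetical)
ROLLOVER_CAP = 0.15            # carry up to 15% of the commit forward
OVERAGE_MULTIPLIER = 1.10      # burst billed at 110% of the contracted rate

def bill_month(used: int, carried_in: int) -> tuple[float, int]:
    """Return (charge, tokens carried into the next month)."""
    allowance = COMMITTED_TOKENS + carried_in
    base = COMMITTED_TOKENS / 1e6 * CONTRACT_RATE   # the commit is paid regardless
    if used <= allowance:
        unused = allowance - used
        carry_out = min(unused, int(COMMITTED_TOKENS * ROLLOVER_CAP))
        return base, carry_out
    overage = used - allowance
    return base + overage / 1e6 * CONTRACT_RATE * OVERAGE_MULTIPLIER, 0

charge, carry = bill_month(used=80_000_000, carried_in=0)       # slow month
print(charge, carry)        # 1000.0, 15_000_000 tokens carried forward
charge, carry = bill_month(used=130_000_000, carried_in=carry)  # spike month
print(charge, carry)        # 1000 + 15M overage at $11.00/M = 1165.0
```

Note how the rollover from the slow month absorbs most of the spike, so the overage charge stays modest – exactly the behaviour these clauses are meant to buy you.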

Custom Deployment Pricing: If you opt for a private deployment (on-prem or single-tenant cloud), expect a very different pricing structure (often handled in a separate agreement or order form). Typically, this is a flat annual license fee for the model and software, possibly tiered by model size or number of users, rather than a metered per-call charge. Such fees can be substantial, essentially buying capacity upfront. Negotiate what that fee includes: Does it cover all updates/upgrades to the model for a year? Does it include a limit on the number of instances or environments (e.g., one production and one backup instance)? Ensure the support fees (for maintenance and patches) are either included or quoted. Also, clarify any variable charges, even in an on-prem scenario – for example, if the vendor’s personnel manage the deployment, is there a cloud hosting pass-through cost or just a pure license? Pin down these details to avoid surprise charges. You want cost predictability in a custom deployment since the appeal is partly to avoid the uncertainty of per-call billing.

Example: One Fortune 500 firm negotiating with an AI vendor secured a “rate freeze” such that their per-unit costs could only decrease, never increase, over a 2-year term. They also negotiated a volume-tier table in the contract. If their usage grew 2× or 5×, unit prices would automatically drop to the next tier, with a provision to credit them retrospectively if a higher tier was reached unexpectedly mid-term. This ensured that as their usage scaled, their average cost per token went down without needing a fresh negotiation each time. Building these mechanisms into the initial contract saved them millions of dollars and set a precedent for proactive cost management.
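A tier table with retrospective credits, like the one in that deal, reduces to a simple true-up calculation. The sketch below uses illustrative tiers and rates, not the actual deal terms:

```python
# Sketch of a retroactive tier credit: when cumulative usage crosses a tier
# boundary mid-term, earlier months are re-rated at the cheaper tier.
# Tiers and rates are illustrative, not the actual deal terms.

TIERS = [(0, 12.00), (500_000_000, 10.00), (2_500_000_000, 8.00)]  # (cum. tokens, $/M)

def rate_for(cumulative_tokens: int) -> float:
    """Best (lowest) rate among tiers already reached."""
    return min(r for floor, r in TIERS if cumulative_tokens >= floor)

def retroactive_credit(monthly_usage: list[int]) -> float:
    """Credit owed if the whole term is re-rated at the final tier's rate."""
    paid = 0.0
    cumulative = 0
    for used in monthly_usage:
        cumulative += used
        paid += used / 1e6 * rate_for(cumulative)      # rate at time of billing
    rerated = cumulative / 1e6 * rate_for(cumulative)  # final tier applied to all
    return max(0.0, paid - rerated)

print(retroactive_credit([300_000_000, 400_000_000, 900_000_000]))  # 600.0
```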

Service Levels, Uptime, and Performance Guarantees

When your business relies on AI outputs, enterprise-grade service levels are essential. Do not accept a generic “best effort” uptime – treat the AI service like any other mission-critical cloud service and get commitments in writing.

Uptime SLA: High availability should be contractually assured. Aim for at least 99.9% uptime (< ~8.8 hours of downtime per year) for Cohere’s API or managed platform. The contract should define the measurement (e.g., monthly uptime percentage) and include service credits if uptime falls below the threshold. For example, you might stipulate credits that scale with the downtime – e.g., 10% credit if monthly uptime drops below 99.5%, 20% if below 98%, etc., up to perhaps the right to terminate for extreme outages. Cohere’s standard SLO indicates credits of 10% for <99.5% uptime, 20% for <98%, and 30% for <95%. Ensure such credit policies are explicitly included. While service credits don’t fully compensate for business disruption, they put monetary weight behind the uptime promise and incentivize the vendor to maintain reliability. If you are using a dedicated instance or on-prem deployment, a traditional uptime SLA may not apply (since you control the environment). In that case, define availability in terms of support – e.g., the vendor will respond to and resolve any incident on your hosted instance within X hours. The key is that you have recourse if the service is unavailable or severely degraded.
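The uptime arithmetic is worth verifying before signing. This sketch converts an uptime percentage into allowed downtime and applies the tiered credit schedule cited above:

```python
# Convert an uptime SLA into allowed downtime, and compute the service
# credit under the tiered schedule cited above (10% / 20% / 30%).

HOURS_PER_MONTH = 730  # average month
CREDIT_TIERS = [(99.5, 0.10), (98.0, 0.20), (95.0, 0.30)]  # (below %, credit)

def allowed_downtime_hours(uptime_pct: float, hours: float = HOURS_PER_MONTH) -> float:
    return hours * (1 - uptime_pct / 100)

def service_credit(actual_uptime_pct: float) -> float:
    """Largest credit whose threshold the actual uptime fell below."""
    owed = [credit for threshold, credit in CREDIT_TIERS
            if actual_uptime_pct < threshold]
    return max(owed, default=0.0)

print(f"{allowed_downtime_hours(99.9, 24 * 365):.1f} h/year at 99.9%")  # ~8.8
print(service_credit(99.2))   # 0.1 -> 10% of monthly fees
print(service_credit(97.5))   # 0.2 -> 20% of monthly fees
```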

Performance and Latency: Uptime is not the only metric – performance matters, especially for user-facing applications. Negotiate performance SLAs or at least baseline guarantees. For instance, you might specify that a standard request’s 95th percentile response time should be under Y seconds at a given load. If you’re building an interactive app (chatbots, search), slow model responses can ruin the user experience, so pin down expected latency. Also, include a clause that the vendor will not degrade performance over time. As models and infrastructure evolve on the vendor side, they should maintain equal or better throughput and latency than at signing. If Cohere updates its model or serving stack, it should not suddenly double your response times or cut throughput without agreement. Performance clauses ensure the vendor can’t silently shift you to a less powerful backend or oversubscribe their servers at the expense of your app. While getting a full latency guarantee may be hard, documenting current performance and requiring “no regression” helps protect you.
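If you document a latency baseline, also agree on how the percentile is computed – small methodological differences move the number. A minimal sketch, where the 2-second ceiling is an example figure rather than any standard Cohere SLA:

```python
# Compute p95 latency from a sample of request timings and check it
# against a negotiated ceiling. The 2.0 s threshold is an example only.

import statistics

def p95(latencies_s: list[float]) -> float:
    # statistics.quantiles with n=20 yields 19 cut points; index 18 is p95
    return statistics.quantiles(latencies_s, n=20)[18]

samples = [0.8, 1.1, 0.9, 1.4, 2.6, 1.0, 0.7, 1.2, 1.3, 0.95] * 10
threshold_s = 2.0
observed = p95(samples)
print(f"p95 = {observed:.2f}s -> {'OK' if observed <= threshold_s else 'SLA breach'}")
```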

Support Levels: Demand 24/7 premium support in line with other enterprise software deals. This includes rapid response times for critical issues. Define support severity levels and associated response/resolution targets, e.g., Severity 1 (production system down or major loss of functionality) – vendor to respond within 1 hour and work continuously to resolve within 4-6 hours or provide a workaround. Severity 2 (degraded service) might have a 2-4 hour response and a one-business-day resolution plan. Make sure these obligations are in the contract, along with escalation paths. For a cloud service, you might get a dedicated technical account manager or an on-call engineering contact for major incidents. For on-prem deployments, you might even negotiate for on-site support during critical phases (e.g., vendor engineers on-site for initial go-live or major upgrades). The support clause should also require that root cause analysis reports be provided after a major incident and that continuous improvement steps be taken – this ensures accountability beyond just fixing the immediate issue.
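Severity targets are easiest to enforce when captured as explicit data that your internal tooling can check tickets against. A sketch using the example targets above (not Cohere’s published terms):

```python
# Severity matrix from the examples above, kept as data so internal
# tooling can check open tickets against the contracted response targets.

SUPPORT_SLA_HOURS = {
    "sev1_production_down": {"respond": 1, "resolve_or_workaround": 6},
    "sev2_degraded":        {"respond": 4, "resolution_plan": 24},
}

def response_overdue(severity: str, hours_open: float) -> bool:
    """True if the vendor has blown the contracted response window."""
    return hours_open > SUPPORT_SLA_HOURS[severity]["respond"]

print(response_overdue("sev1_production_down", 1.5))  # True -> escalate
```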

Service Monitoring and Reporting: Require the vendor to provide service monitoring data. For cloud usage, you should have a status dashboard or API to check system health, plus incident notifications if there’s an outage. The contract should stipulate that the vendor will proactively notify you of any security incidents or data loss events immediately (not just in a quarterly report). Also, consider an SLA on data durability if relevant: for example, if you rely on stored outputs or logs, the vendor should guarantee that those won’t be lost (or have adequate backups). Cohere’s trust centre highlights its backup and availability monitoring practices; ensure the contract reflects commitments in these areas.

Penalties and Remedies: While service credits are common, you can negotiate stronger remedies for critical failures. In extreme cases, a persistent failure to meet SLAs could be deemed a material breach, allowing contract termination. Short of termination, some enterprises negotiate financial penalties beyond service credits – e.g., a refund or additional service credit if a single outage exceeds a certain duration. An illustrative example: a telecom company, after suffering a multi-hour AI service outage, added a clause that for every 0.1% below 99.9% uptime, the vendor would credit 2% of monthly fees. This put significant skin in the game for the vendor, resulting in noticeably improved reliability thereafter. The key is to align the vendor’s incentives with your uptime requirements.
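The telecom clause reduces to a simple linear formula, sketched below with the numbers from that example:

```python
# Linear penalty from the telecom example: 2% of monthly fees credited
# for every 0.1 percentage point below the 99.9% uptime target.

def outage_credit_pct(actual_uptime_pct: float, target: float = 99.9) -> float:
    shortfall = max(0.0, target - actual_uptime_pct)
    return (shortfall / 0.1) * 2.0

print(f"{outage_credit_pct(99.4):.1f}%")  # 0.5 points short -> 10.0% of monthly fees
```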

Scalability and Capacity: If your usage may spike, include provisions that the vendor’s service will scale to your needs (within reason) without degradation. For the cloud, that means Cohere should allocate sufficient compute resources so that even at peak volumes, your performance SLAs hold. For on-prem, ensure you can run the model on more hardware if needed (and know the licensing implications – e.g., can you scale out to more servers under the license, or do you need additional licenses?). Clarify this in the contract to avoid hitting unseen throughput ceilings.

Summary: Treat SLAs and support as non-negotiables for enterprise AI. Many AI API providers’ standard terms disclaim any SLA, so you must insert a custom SLA schedule. The contract should make clear the service reliability, performance expectations, support scope, and remedies. This protects your business and signals to the vendor that your use case is serious, often leading them to assign more resources and attention to your account.

Data Security, Privacy, and IP Protection

Data Privacy & Confidentiality: Protecting your data is paramount when using AI services that may handle sensitive business information. Ensure the contract has strong language that your data remains your property and is confidential. Cohere’s standard terms affirm that the customer retains ownership of all Customer Data, but you should explicitly extend this to inputs and outputs of the model. In practice, all prompts you send and all generated outputs should be deemed your confidential data and not used by the vendor for any purpose except to serve your requests. Cohere’s enterprise security commitments state that customers can opt out of having their data used for model training – your contract should cement that opt-out as the default (i.e. “Vendor will not use or retain Customer Data to train, improve, or otherwise benefit its AI models or services for other customers without explicit permission”). No sharing of your inputs or outputs with third parties should be allowed. In short, no secondary use: the AI provider should act strictly as a data processor for your benefit, not mine it for their R&D.

Data Retention and Deletion: Clarify the vendor’s data retention policy. Ideally, negotiate a minimal retention period – for highly sensitive data, you may require that prompts and outputs are not stored beyond the immediate response (a “zero retention” mode). If zero retention is not feasible (e.g., the vendor needs to store data briefly for error analysis or billing), set a short timeline for automatic deletion, such as purging all request logs containing your content within X days. OpenAI and others have offered zero-retention options for enterprises, and you can ask Cohere about them. Additionally, include the right to request deletion of data on demand – for example, if a user accidentally submits personal data that shouldn’t be in logs, you can have it scrubbed (important for GDPR “right to be forgotten” compliance). The contract should also require that upon termination, the vendor delete all your data from their systems and certify this deletion, aside from any data they must keep by law or for audit.
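The same retention discipline applies to any prompt/output logs you keep on your own side of the boundary. A minimal purge-job sketch, where the 30-day window and record layout are assumptions for illustration:

```python
# Minimal retention sweep for locally kept prompt/output logs.
# The 30-day window and record layout are assumptions for illustration.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(records: list[dict]) -> list[dict]:
    """Keep only records younger than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["created_at"] >= cutoff]

logs = [
    {"id": 1, "created_at": datetime.now(timezone.utc) - timedelta(days=45)},
    {"id": 2, "created_at": datetime.now(timezone.utc) - timedelta(days=5)},
]
print([r["id"] for r in purge_expired(logs)])  # [2]
```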

Data Residency and Localization: If your organization or industry has data residency requirements (e.g., EU personal data must stay in Europe, or certain data cannot leave your country), this must be addressed in the contract. Cohere’s services are cloud-based by default, so specify where your data and processing can occur. For instance, you might require that all processing occurs in EU or U.S. data centers and that no customer data will be transferred or accessed outside those regions. Cohere has been known to deploy models in specific regions or offer on-premise options to satisfy locality demands. Use those capabilities: insist on a VPC in your preferred region or a dedicated instance in-country and document it in the agreement. Include a clause that the vendor must notify and obtain consent before moving or replicating your data to any other jurisdiction (preventing unapproved backups in other regions, for example).

Security Measures: The contract should obligate the vendor to maintain industry-leading security practices to protect your data. Reference standards like SOC 2 Type II and ISO 27001 – Cohere’s Trust Center indicates they undergo SOC 2 audits, which is a good baseline. They must maintain such certifications and provide copies of audit reports or certifications on request. Key security provisions include encryption in transit and at rest (all data exchanged with the API should be over TLS, and any data stored on the vendor side must be encrypted); access controls (only authorized personnel at the vendor can access your data, under strict need-to-know, with measures like role-based access and multi-factor authentication). If using a web portal or SaaS interface, ensure it supports Single Sign-On (SSO) and integrates with your identity management so you can control internal user access. The vendor should commit to logical isolation of your data – even in a multi-tenant environment, your data and outputs are segregated so that no other customer can see them. For added assurance, you can negotiate audit rights: you (or an independent auditor) can audit the vendor’s data handling relevant to your data. At a minimum, the vendor should furnish regular security compliance reports (penetration test summaries, vulnerability scan results, etc.) to demonstrate ongoing vigilance. Also include a clause for breach notification – if any security incident or data breach occurs that involves your data, the vendor must inform you immediately (far faster than any regulatory requirement) and provide a detailed incident report and mitigation plan.

Intellectual Property Rights (IP): A nuanced area in AI contracts is the ownership and rights around model outputs and custom models. First, ensure it’s crystal clear that you own the outputs that the AI generates for you. The vendor should not claim copyright or IP in the AI’s content from your prompts. This is important if, for example, the AI generates code or text that you will use in products or publications – you need freedom to use and sub-license that output. Cohere’s terms define “Output” from the model and generally will not assert ownership over it, but making it explicit in your contract avoids ambiguity. Next, address the scenario of fine-tuning or custom models. If you plan to train Cohere’s model on your proprietary data (to create a custom model for your use), clarify who can use that Custom Model. Typically, you insist that it’s exclusively for your organization’s use, and Cohere cannot share or resell your fine-tuned model to others. Cohere’s policy is to destroy custom models upon contract termination, which protects you but also ensures that they treat any custom model as your confidential asset during the term. Ideally, negotiate that you have some rights to retain the custom model if the contract ends, even if just in an escrow arrangement. Some enterprises attempt to get a copy of the model weights for a fine-tuned model, but vendors are hesitant. At the least, you might negotiate that if the vendor ceases service or if you exit the contract, the fine-tuned model will remain available to you (perhaps they could host it on a smaller scale or transfer it under a separate license). This is a complex point; if Cohere won’t budge, focus on ensuring they cannot use your fine-tuning data or model to help other customers. Cohere’s standard agreement states they won’t share a custom model with third parties, so build on that: include that they also won’t use your fine-tuning datasets for anything outside servicing your account (some contracts allow vendors to use fine-tune data in aggregated ways – scrutinize this).

IP Indemnification (Output Liability): One of the emerging risks with generative AI is the possibility that the model’s output infringes on someone’s intellectual property (for example, accidentally generating copyrighted text or patented code). Leading AI vendors (including OpenAI, Microsoft, and Google) have begun indemnifying enterprise customers against third-party IP claims arising from AI outputs. You should seek a similar IP indemnity from Cohere. The contract should state that if a third party sues you, claiming that the model’s output infringes their copyright, the vendor will defend and cover liabilities for those claims. Cohere offers a limited “Copyright Assurance” in its terms, indemnifying customers for adverse judgments if the output is found to infringe copyright, with some caveats. Ensure you understand any conditions on this indemnity: typically, you must use the model as intended and follow any usage guidelines (so that, for instance, you weren’t trying to generate a verbatim excerpt of a known text). Also, such indemnities often exclude cases where you modified the model or provided the infringing prompt content. Regardless, having the vendor stand behind their model in this way is vital – it transfers a significant legal risk off your shoulders. If the vendor is reluctant, remind them that big competitors are doing this, and it’s becoming a standard ask for enterprise AI deals.

Confidential Information and Publicity: Alongside data and IP clauses, don’t forget standard confidentiality provisions. The fact that you are using the AI, your prompts and outputs, etc., should all be confidential. Cohere’s agreement likely treats Customer Data as confidential by default; verify this. Also, consider whether you want to approve any public use of your name or data – often, vendors want to use logos of enterprise clients as references. If you prefer not to be a public reference, include a clause that using each other’s names or trademarks requires consent, except as required by law. This way, Cohere can’t casually announce your company as a client without permission (and vice versa).

Example: A financial services firm mandated a strict data protection addendum in their AI contract. It explicitly forbade any retention of prompts or outputs beyond 24 hours, required the vendor to segregate their data on separate servers, and gave the firm audit rights to inspect compliance. They also got a clause that if their data were used in model training (which wasn’t supposed to happen), it would be considered a material breach. For IP, they succeeded in getting an indemnity such that if the AI output inadvertently plagiarized text and caused a lawsuit, the vendor would cover the damages. These provisions were heavily negotiated, but in the end, the vendor agreed due to the size of the deal. The result was peace of mind for the CIO, who said using AI wouldn’t open the firm to uncontrolled IP or privacy risks.

Usage Rights, License Scope, and Restrictions

Contracts for AI services must spell out what you’re allowed (and not allowed) to do with the service and its outputs. Key areas to clarify:

Internal Use and External Use: Determine if your use is strictly internal (employees using the AI for internal decisions) or if it’s part of a product/service you provide to end customers. Most enterprise AI contracts permit both, but the rights should be clear. Cohere’s terms allow the development of “Customer Applications” that interface with their API, meaning you can build your software that calls the AI and delivers results to end-users, as long as those users don’t access the AI directly except through your application. Ensure the license grants you the right to use outputs freely in your business. For example, you can display AI-generated content on your customer-facing app, commercialize products that include AI-generated components, etc., with no additional fees or permissions required. The outputs should effectively be licensed to you royalty-free. Cohere’s standard approach (like others) is that outputs are yours to use, but double-check for any language that might restrict usage of outputs (for example, using outputs to train another AI might be restricted, as that could be viewed as competing – see below).

No Competing Model or Service Development: It is common to have a restriction against using the AI service to build a competing AI model or service. Cohere’s contract prohibits using their service or outputs to develop a similar AI model (e.g., you can’t feed Cohere’s responses into training your own GPT-like model). It also bans using the service to benefit a direct competitor of Cohere. These clauses are generally acceptable as they protect the vendor’s IP, but you should understand their scope. If your organization does AI R&D, ensure this doesn’t unintentionally hamstring you (for example, if another team in your company is developing AI, you may want a carve-out clarifying that general research doesn’t violate the contract – the restriction is about not using their model outputs to build a competitor). Also, most contracts forbid benchmarking publication – you can test the AI for your purposes, but not publish performance comparisons without consent. If you need a public evaluation or academic paper, get permission upfront or carve it out.

User Access and Credentials: The contract should define who can use the service (often termed “Permitted Users”). Typically, your employees and contractors can use it under your account, but you can’t extend access to the general public except via your integrated application. No sharing of API keys or accounts outside your organization is allowed. Cohere will issue you API keys, and you must keep them secure; ensure the contract holds you responsible only for your users and that the vendor will assist if any leak or misuse of keys occurs (e.g., the ability to regenerate keys). If the pricing is per seat (not common for APIs, but sometimes for SaaS interfaces), clarify the count and how “user” is defined. Generally, for token-based billing, you simply need to restrict API access to your organization’s personnel and systems.
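Operationally, the key-security obligation translates into basic hygiene in your integration code. A sketch – the environment variable name follows common convention but is otherwise arbitrary:

```python
# Basic API-key hygiene: load the key from the environment rather than
# source code, and fail fast if it is missing. The variable name is a
# common convention, not a contractual requirement.

import os

def get_api_key() -> str:
    key = os.environ.get("COHERE_API_KEY")
    if not key:
        raise RuntimeError(
            "COHERE_API_KEY is not set; provision it via your secrets manager, "
            "never by hard-coding or sharing keys outside the organization."
        )
    return key

# Rotate by updating the secret store and redeploying; old keys should be
# revoked with the vendor per the contract's credential-management terms.
```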

Reverse Engineering and Derivative Models: It should be prohibited to reverse engineer or extract the model behind the API. This is standard – you’re licensing a service, not the underlying model weights. Likewise, if you have on-prem model access, you’ll be prohibited from analyzing the weights to create a derivative work. One exception you might need is for AI explainability or bias testing – ensure the contract doesn’t bar you from analyzing outputs or observing model behaviour (so long as you don’t try to reconstruct the model). If you plan rigorous testing, that’s allowed, but you can’t, say, pull the model architecture or copy it.

If Cohere provides any client libraries or licensed software SDKs, check that license (usually standard, allowing you to use the SDK in your environment). And if you integrate their API, you shouldn’t expose it so that others could use it outside of your agreement (e.g., no launching a general AI API service built on their API unless that’s specifically permitted).

Licensed Capacity and Scope (for On-Prem): In a private deployment scenario, the contract should detail where and how you can run the model. For example, if they license a model to you, is it for use only at a specific site or cloud account? How many copies can you run concurrently (important if you want high availability clusters or a dev/test instance)? Negotiate that the license covers reasonable needs – you might include a dev environment for free or allow up to N instances. Also, clarify if you can make backup copies of any model files and who can access them. Typically, the vendor will require strict controls (maybe hardware dongles or encryption) to prevent the model from being duplicated. Expect clauses about not removing any security measures or watermarks in the model.

Usage Policy Compliance: Cohere, like others, will have an Acceptable Use Policy or Usage Guidelines (covering illegal use, hate speech, personal data ingestion, etc.). The contract will bind you to follow these. Review them carefully – especially if your use case is edgy (e.g., creative writing that might include profanity or AI analysis of personal data). If something in your planned use might conflict, discuss it. For example, suppose you plan to feed some personal data through the model (which Cohere’s policy might consider “Prohibited Data” without a special agreement). In that case, you need to sort that out upfront. You might avoid such data or get an addendum allowing it (with extra safeguards). Ensure the contract’s “Prohibited Data” definition is clear – Cohere defines it as certain personal data types they won’t handle by default. If GDPR-sensitive data must be used, you’ll likely sign a Data Processing Addendum and run on a private instance.
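If inputs might occasionally contain data the usage policy prohibits, a pre-flight screen on your side is a cheap safeguard. A crude sketch whose patterns are illustrative, not a complete PII detector:

```python
# Crude pre-flight screen for obviously prohibited inputs before they are
# sent to the API. Patterns are illustrative, not a complete PII detector.

import re

PROHIBITED_PATTERNS = {
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> None:
    for label, pattern in PROHIBITED_PATTERNS.items():
        if pattern.search(prompt):
            raise ValueError(f"Prompt blocked: matched prohibited pattern '{label}'")

screen_prompt("Summarize our Q3 sales narrative.")            # passes
# screen_prompt("Customer SSN is 123-45-6789") would raise ValueError
```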

Restrictions on Outputs: One subtle point: while you own outputs, ensure the vendor isn’t imposing downstream usage restrictions (e.g., some AI content might require attribution or have open-source-like terms if based on open models). Cohere’s models are proprietary, so this is less of an issue, but it’s always good to have a clause that “Customer may use outputs without restriction, subject only to third-party IP rights”. If you plan to filter or post-process outputs, that’s fine and not restricted.

Feedback and Improvements: Contracts often let the vendor use your feedback (e.g., error reports or suggestions) to improve their services. This is generally okay – just confirm it doesn’t accidentally include your data. It should be about feedback metadata, not using your actual prompt/content. Usually, it’s separate: you provide suggestions, and they own improvements to the service. Just no confusion that your content is “feedback.”

Summary of Key Restrictions: In essence, expect (and don’t fight) clauses that say you won’t misuse the service – no criminal use, no sending truly forbidden content, no violating privacy laws, no trying to break or expose the model, and no using it to directly compete with the vendor. These protect both parties. Focus your negotiation on ensuring you have the rights you need: broad rights to use outputs, to integrate the service in any territory or platform you operate in, and to allow all your affiliates and contractors to use it under your account if applicable. If you have multiple subsidiaries that will use the service, consider adding them as covered parties in the contract to avoid each needing a separate deal.

Compliance and Risk Mitigation (GDPR, Liability, etc.)

Enterprise AI deployments must comply with a web of regulations – data protection laws (GDPR, CCPA), emerging AI-specific laws (EU AI Act) – and manage risks around unethical or harmful outputs. Your contract can help mitigate these risks:

Privacy Law Compliance (GDPR/CCPA): If the AI processes personal data, GDPR and other privacy laws are triggered. Cohere’s contract typically tries to avoid this by prohibiting sensitive personal data, but many use cases involve some personal information. To comply, ensure a proper Data Processing Addendum (DPA) is in place, classifying the vendor as a processor acting on your instructions. The DPA should include standard GDPR clauses: data processing details, cross-border transfer mechanisms, subprocessor lists (Cohere’s Trust Center lists subprocessors, which you should review), assistance with data subject requests, etc. Even if you minimize personal data, having a DPA covers you legally. Also, ensure the contract obligates the vendor to assist you in compliance – e.g., if you request to delete someone’s data that might be in AI logs, the vendor must help fulfill it promptly. Cohere advertises GDPR compliance on their site, but you need that codified in the contract.

Emerging AI Regulations: Laws like the EU AI Act are on the horizon (expected enforcement in 2024–2025), imposing transparency, risk management, and possibly registration requirements for AI systems. Negotiate a clause that the vendor will comply with all applicable AI laws and provide you with the necessary information to comply with yours. For example, the AI Act may require you to perform conformity assessments or document the model’s training data and accuracy. Cohere should agree to supply relevant documentation about their model (e.g., model documentation, known limitations, training data origins) to support your compliance efforts. Also include an adaptation clause: if new regulations prohibit the current use of the AI or impose requirements that the vendor cannot meet, you should have the right to renegotiate or terminate without penalty. This is important – laws are evolving, and you don’t want to be stuck paying for a service you’re legally barred from using.

Model Ethics and Behavior: Address how the model’s content standards align with your policies. Cohere has AI content moderation policies, but you might need additional constraints. For instance, if your company has zero tolerance for certain language or biases in outputs, you might require that the model be configured with custom filters or “allow/block lists.” While it’s tricky to enforce behaviour contractually, you can include a warranty that the service includes content moderation mechanisms and that the vendor will allow you to configure them for your needs. At a minimum, get a statement that the model is designed to follow responsible AI practices and has no known harmful functionality. Some contracts mention that the AI will adhere to provided guidelines (like OpenAI’s policy or a custom one). If your industry has specific compliance needs (e.g., no medical advice output without certain disclaimers), ensure the vendor is aware and see if they can accommodate (through fine-tuning or system instructions). Remember that no AI is perfect, but having it in writing that the vendor strives to meet certain ethical standards and will inform you of major changes to the model’s behaviour or “constitution” helps. In sensitive deployments, you might even negotiate to vet the model’s settings (for example, review the content filter word lists or have a say in tuning the safety parameters).

Liability for Harmful Outputs: Most vendors will disclaim liability for the content the AI generates since they don’t directly control what the model says. You will likely see warranty disclaimers that the AI output might be inaccurate or offensive, and the vendor isn’t liable. Accepting some of this risk is inherent to using AI, but you can push back on extremes. For example, you might negotiate a clause that if the AI consistently produces outputs that violate the agreed content standards or cause harm (despite correct usage), it constitutes a breach that the vendor must cure. While getting the vendor to indemnify for things like defamation or wrongful advice the AI gives is unlikely, you can insist on the ability to terminate if the AI causes unacceptable risk. Also, consider a liability cap carve-out for data breaches or privacy violations. For example, if the vendor’s negligence causes a GDPR fine or a big security incident, that might not be subject to the normal liability cap. The Redress Compliance guidance suggests negotiating that if the vendor’s data processing or model outputs directly lead to a regulatory penalty (say, a GDPR fine due to a security lapse), the vendor should bear that cost. Even if full indemnity isn’t achievable, raising this risk can lead to at least shared responsibility or a higher liability cap for those scenarios.

Compliance Audits and Reporting: For high-stakes use, you might require the vendor to submit to audits or assessments related to responsible AI. This could tie into the regulatory assistance – e.g., if you must do an AI impact assessment, Cohere should provide needed info and possibly let you audit their processes relevant to your usage. Some enterprises negotiate audit rights to inspect training data (probably not realistic in most cases due to IP and privacy reasons). Still, you can ask for summaries of training sources, bias mitigation steps, etc. At least ensure they notify you of any significant regulatory inquiries or legal orders that could affect your use (for example, if a government orders them to restrict the model or divulge data).

GDPR-Specifics: If GDPR applies, ensure cross-border transfer mechanisms are addressed (Standard Contractual Clauses if data leaves the EU, unless you keep it in-region). Also, define whether the vendor is a Processor or Controller for GDPR – in AI services, this can be grey. Usually, you are the Controller, and they are the Processor for input/output data. Still, the vendor controls any data they generate or collect independently (like model improvements). The DPA should handle this split. Include a duty for the vendor to assist with audits by regulators and to respond to data subject rights requests, as mentioned. And, of course, a confirmation that the vendor implements appropriate technical and organizational measures to protect personal data, as per GDPR Art. 32, which ties back to the security requirements discussed.

Insurance: It’s worth considering requiring the vendor to carry certain insurance (cyber liability, errors & omissions, including AI-related errors). Some enterprises ask for proof of insurance so that if a breach or big issue occurs, the vendor has coverage. If the deal is large, you might get this added.

Indemnities and Caps: We covered IP indemnity. You might also seek indemnification for third-party claims arising from the vendor’s breach of confidentiality or applicable law. For example, if the vendor mishandles data and there is a lawsuit, they indemnify you. The liability cap is another point – vendors often want to cap liability at, e.g., the fees paid. You may negotiate a higher cap or uncapped liability for certain things (like breach of confidentiality, data privacy, or IP infringement). It’s a tough negotiation, but worth raising, especially if your industry demands it.

Example: A healthcare company using an AI model negotiated a strict HIPAA Business Associate Agreement (BAA) with the vendor since they intended to feed some patient data into the model. This forced the AI provider to comply with HIPAA security rules and report any unauthorized disclosures of patient data. Additionally, the contract included a unique clause that if the model produced any content that could be considered medical advice, it had to include a disclaimer (and the model was fine-tuned to do so). They also secured a clause that if any output violated a patient’s privacy (e.g., the model leaked someone else’s data in a response), the vendor would treat it as a security breach and notify and cooperate fully. These measures exceeded the vendor’s standard terms but were crucial for the client’s compliance comfort.

Contract Structuring for Renewals and Volume Commitments

Crafting a flexible contract structure will save headaches down the road. Think about the term length, renewal conditions, and how volume commitments play out over time.

Term Length and Commitment: Balance the desire for a long-term relationship with the fast-changing nature of AI tech. Vendors might push for multi-year commitments (2–3 years) for better pricing stability. Locking in a rate for 3 years can be good if you expect heavy usage, but avoid being trapped too long in case the technology or pricing landscape changes. Often, a 1-year term with renewal options or a 2-year term with an out after year 1 (if things don’t work out) is safer than a 5-year lock-in. If you sign a multi-year contract, negotiate price protections for later years – e.g., fixed price escalation caps (no more than 5% increase on fees in year 2) or even predefined discounts in year 2 or 3 reflecting expected cost reductions. Some deals include a ramp-up structure: lower fees or volumes in the first phase while you pilot, then scale up later. This way, you’re not overpaying on day one for capacity you haven’t utilized yet.

Renewal Clauses: Do not allow auto-renewal at the vendor’s discretion or pricing. The contract should state that renewal requires mutual agreement on terms or at least allow you to opt out without penalty at the end of the term. If you want an auto-renew for convenience, cap any renewal price increase to a small percentage or tie it to an index, to prevent the vendor from jacking up the price if you accidentally let it renew. Given that AI model pricing is trending downward, you might not want auto-renew at all – better to renegotiate fresh, leveraging market improvements. In big enterprise agreements, it’s common to include a clause like “prices for any renewal term will be agreed in writing no later than 60 days before the current term ends; if not agreed, either party may let the contract expire.” This forces a conscious negotiation and avoids unpleasant surprises.

Mid-Term Reviews and Adjustments: Consider a mid-term checkpoint because AI use cases are new. For example, after 6 or 12 months, include a contract provision for both parties to review the actual usage, performance, and value and revisit the terms. This could allow you to increase your committed volume (maybe in exchange for better unit rates) if your adoption is faster than expected. Or, if the solution isn’t delivering the expected value, perhaps you can scale down or add extra features without waiting for the term to end. While vendors don’t love clauses that allow reducing commitment, a balanced approach could be a “break clause” or adjustment window. Perhaps you agree that if by the 12-month mark, usage is below a certain threshold, you can reduce your volume commitment for the remaining term by some percentage (and vice versa if it’s way above, you both negotiate a higher commitment with a discount). The goal is to align the contract with reality after some real-world experience so neither side feels stuck.

Volume Commitment Structure: If you have a minimum spend or volume commitment, structure it smartly. Tying it to annual (rather than monthly) usage can give flexibility – e.g., commit to 12 million queries per year instead of 1 million per month, combined with the rollover rights discussed earlier. That way, you can true up over the year. Also, if you anticipate growth, you might negotiate a rising commitment: e.g., Year 1 commit $X, Year 2 commit $X*1.5, etc., with corresponding discounts. This signals partnership and locks in better rates as you succeed. Ensure, however, that if you don’t hit those volumes, the consequences are just paying the shortfall at most, not some punitive fee.

Termination and Escape Clauses: Life is unpredictable – build in exit ramps in case things go wrong or needs change. Key ones to consider:

  • Termination for Convenience: Getting a termination-for-convenience right into a SaaS agreement is tough, but you can try to get the right to terminate early with notice (and possibly a penalty fee). For example, you might negotiate the option to terminate with 60 days’ notice if you pay a fee equal to 2-3 months of service. This gives you strategic flexibility if you decide to switch to a different AI platform or if budgets are cut. Even if the vendor resists, sometimes large enterprises can secure a buy-out clause.
  • Termination for Non-Performance: Include termination rights for cause: e.g., if the vendor repeatedly fails to meet SLAs for a defined period or if the promised on-prem deployment is significantly delayed, you can exit without penalty. Define what triggers this (perhaps 3 months in a row of SLA breaches or implementation milestones not met).
  • Regulatory Exit: As mentioned, if new laws make usage illegal or risky (e.g., the government bans the use of non-certified AI), you should be able to terminate.
  • Material Change or Merger: You might also add that if a certain competitor acquires the vendor or there’s a material change in control, you have the option to terminate – this is less common but sometimes considered if, say, you worry about a competitor buying the AI company and then having access to your data (which the contract should prevent anyway).
  • Dispute Resolution: It is not exactly termination, but a clear dispute resolution process is needed so that if issues arise, they are handled efficiently (escalation to execs, then arbitration, etc.) to avoid drawn-out fights.

These escape hatches ensure you’re not stuck paying for a failing service or locked in when business needs shift. Of course, vendors may demand an early termination fee or no refund for prepaid amounts – negotiate what’s fair.

Data Portability at the End of Term: Plan for off-boarding. The contract should oblige Cohere to assist in a smooth transition. This includes giving you a last chance to export any data – e.g. conversation logs, embeddings generated, custom model parameters (if applicable) – before deletion. Ensure the contract states the vendor will delete your data upon termination (and certify it), aside from any data they must retain for legal reasons. If you created a fine-tuned model that they host, maybe negotiate that they hold it in escrow for a few months in case you come back or need it for litigation hold, etc., even after termination. Data portability might also mean cooperation in switching you to another solution – not that Cohere will help a competitor, but perhaps they provide your data in a standard format.

Renewal Incentives: Sometimes, you can bake in an incentive for renewing. For example, the contract might state that if you renew for a second term, you automatically get a 10% price discount or some additional services thrown in. This gives you a predefined reward if the partnership continues, and if not, you lose nothing. It also encourages the vendor to continue earning your business rather than taking you for granted in the long term.

Future-Proofing: Consider mentioning how new products or features will be handled. If Cohere releases new model versions (say a more powerful model or new tool) during your term, do you get access under the same pricing? Try to avoid a situation where they launch “Cohere 2.0” and charge extra – perhaps include that any improvements or successors to the contracted services are included, or at least you get a first offer to upgrade under similar commercial terms. Also, if you expect to expand usage to subsidiaries or affiliates, ensure the contract either already covers them or can be easily amended to include them.

Example: A large retailer signed a 2-year AI services agreement with a clause that at the 12-month mark, they could re-evaluate volumes. After year 1, they found usage was lower than expected, so they exercised the clause to reduce their Year 2 commitment by 30% (with a slight reduction in discount). This was better than overpaying for unused capacity. They had also negotiated a renewal cap – the provider couldn’t raise prices more than 3% at renewal unless new features were added. When the 2 years ended, the market price for similar AI had dropped ~20%, so the retailer could push for a price decrease despite the cap, using a competitive RFP. Having a short initial term and flexibility allowed them to benefit from rapid improvements in the AI space.

Conclusion: By addressing the areas above in your Cohere contracts, you can capture the benefits of cutting-edge AI while minimizing surprises and risks. Always remember that contract negotiation for AI isn’t just about legal fine print – it’s about setting the foundation for a successful partnership. Be clear on your requirements, leverage your volume and strategic importance, and don’t shy away from seeking expert help (engaging independent advisors like Redress Compliance or similar) to benchmark terms. A well-structured contract will give your enterprise the confidence to innovate with Cohere’s AI platform, knowing that costs are controlled, service is reliable, data is safe, and compliance boxes are checked. This transforms the AI deployment from a leap of faith into a calculated, governable business initiative – exactly what CIOs and procurement leaders need for enterprise-grade AI adoption.

Author

  • Fredrik Filipsson brings two decades of Oracle license management experience, including a nine-year tenure at Oracle and 11 years in Oracle license consulting. His expertise extends across leading IT corporations like IBM, enriching his profile with a broad spectrum of software and cloud projects. Filipsson's proficiency encompasses IBM, SAP, Microsoft, and Salesforce platforms, alongside significant involvement in Microsoft Copilot and AI initiatives, improving organizational efficiency.
