Enterprise AI Procurement Best Practices

Enterprise AI procurement involves acquiring the right mix of software, services, and hardware to support generative AI and custom AI solutions. CIOs and procurement leaders must balance rapid innovation with risk management and cost control. This guide offers best practices for procuring generative AI platforms and building custom AI systems, including contracting tips for major cloud AI vendors, pricing model insights, key contract clauses to watch for, and guidance on the build-vs-buy decision. The tone here is advisory and advocates for the enterprise customer in negotiations and planning.

AI Procurement Categories: Software, Services, and Hardware

When sourcing AI capabilities, consider the three major categories of procurement – AI software solutions, AI-related services, and AI hardware, each with distinct considerations:

  • Software Solutions include off-the-shelf AI platforms (e.g., OpenAI GPT-4, enterprise chatbots, AI SaaS products) and licensed software frameworks. Evaluate vendors’ model capabilities, integration options, and data policies. Ensure the software aligns with your use cases (e.g., language generation, vision AI) and check if customization is possible. Example: A company seeking a generative AI writing assistant might compare a vendor’s pre-built solution with fine-tuning an open-source model internally. Off-the-shelf tools offer speed, but a custom model could be better tailored if the use case is highly specialized.
  • AI Services (Consulting & Implementation): Often, enterprises engage consultants or AI development firms to strategize or implement AI solutions. When contracting for AI services, define clear deliverables and KPIs. Structure contracts so a portion of the fees is tied to outcomes (e.g., a successful model deployment or achieving specified accuracy). Watch for assumptions in statements of work – e.g., vendor expecting your team to provide certain data or resources – because if those assumptions fail, a fixed-fee project can balloon in cost or timeline. Always perform due diligence on a service provider’s expertise in your industry and ensure knowledge transfer to your team is part of the deal (so you’re not completely dependent on the vendor long-term).
  • Hardware (GPUs & AI Appliances): If you plan to build or run AI on-premises, you’ll likely need to procure specialized hardware such as high-end GPUs (e.g., NVIDIA A100/H100) or integrated AI appliances (e.g., Nvidia DGX systems or other GPU server bundles). Hardware procurement involves capital expense and infrastructure setup for model training or inference. Ensure you work with your IT architects to specify performance and scalability (consider memory, networking, and cooling for GPU servers). Remember the cloud vs on-prem cost trade-offs: while cloud AI has a low upfront cost, steady long-term workloads may be cheaper on-premises – studies suggest on-prem AI can be 3–5× cheaper than cloud in the long run if usage is steady. However, on-prem requires up-front investment and skilled staff. Many organizations choose a hybrid approach, e.g., using cloud for initial experiments and scale, but bringing stable, sensitive workloads in-house for better control over data and costs.

Contracting with Cloud AI Providers (Azure, AWS, Google, OpenAI)

Major cloud vendors offer generative AI platforms with varying terms and features. When procuring cloud AI services – such as Microsoft’s Azure AI, Amazon’s AWS Bedrock, Google Vertex AI, or OpenAI’s API/enterprise offerings – consider the following practical insights before you sign a contract:

  • Microsoft Azure AI (Azure OpenAI & Copilots): Microsoft’s Azure OpenAI Service provides access to OpenAI models (GPT-4, GPT-3.5, DALL-E, etc.) in Azure’s cloud. It typically requires an existing Enterprise Agreement (EA) or Azure subscription. Azure OpenAI is usage-based (you pay per request/token), and Microsoft even allows reserved capacity purchases – e.g., monthly throughput units – for a discount if you commit to steady usage. Unlike OpenAI’s direct API, Azure offers enterprise-grade SLAs (99.9% uptime guarantee for Azure OpenAI) and integrates with Azure’s security and identity (Azure AD, private network options). Microsoft contractually assures that your data and prompts are not used to train the base AI models, addressing data privacy concerns. If you’re adding Microsoft 365 Copilot (AI features in Office apps), note it’s licensed per-user at $30/user/month (flat fee) with no usage-based option currently, so plan to deploy it only where it will be fully utilized to justify that cost. To simplify management, keep Copilot terms aligned with your main Microsoft agreement (e.g., co-term with your EA renewal). Insight: Microsoft’s strength is a “one-stop” enterprise experience (integration, security, support), but this can come at a higher cost; be prepared to negotiate price using other vendors as leverage, and ensure all promised AI features (e.g. specific Copilots) are explicitly listed in the contract to avoid surprise add-on fees.
  • Amazon AWS (Bedrock and SageMaker): AWS’s approach to generative AI is to offer a menu of models and tools rather than a single flagship model. AWS Bedrock is a service that hosts various third-party models (Anthropic’s Claude 2, AI21’s Jurassic, Stable Diffusion, etc.) and makes them available via API. At the same time, Amazon also has coding assistants like CodeWhisperer. AWS generative AI pricing is essentially pay-for-infrastructure – you pay for the underlying compute (instances or GPU time) and data storage/transfer when using these AI services. Predicting this can be complex, but it offers flexibility: you aren’t locked to one model or pricing scheme. One advantage is the ability to experiment: you could deploy a small model on SageMaker or fine-tune an open-source model with AWS’s tooling, paying only for the hours of GPU you consume. Ensure your contract or usage of AWS includes the needed support level, and check if AWS Bedrock or related services come with any availability SLAs under the standard AWS terms (AWS tends to provide SLAs for many managed services – e.g., SageMaker has an SLA for endpoint availability). Negotiation tip: If you’re an existing AWS customer, you might leverage committed spend agreements (Enterprise Discount Programs) to cover AI services. Make it clear to AWS (and competitors when comparing) that you value the openness – Amazon’s lack of exclusive tie-in to one model means you can shift among models if one becomes better or cheaper. However, AWS cannot offer OpenAI’s most advanced GPT-4 (Microsoft has exclusive cloud rights to GPT-4), so if GPT-4 is a must-have, Azure or OpenAI directly are your primary options.
  • Google Cloud (Vertex AI & Duet AI): Google’s Vertex AI platform offers access to Google’s generative models (such as PaLM 2 for text and other modalities) and a suite of tools for building custom AI. Vertex’s pricing is usage-based, typically billed by characters of input/output for generative text models (e.g., roughly $0.0005 per 1K characters for PaLM 2 text models). This character-based billing is analogous to token-based pricing, though measured differently (1 token ≈ 4 characters). If you use Vertex AI, be aware of additional costs like provisioning dedicated endpoints (which can incur per-hour charges for serving models). Google Workspace’s Duet AI (which adds AI assistance in Google Docs/Sheets, similar to MS Copilot) is priced at $30/user/month, matching Microsoft’s price point. Google may be willing to negotiate Duet pricing or throw in trials if you are a big Google Cloud or Workspace customer, but early on, they’ve stuck to flat pricing. Ensure any Google Cloud AI services you buy are covered by Google’s enterprise privacy commitments (Google Cloud’s contracts generally promise not to use customer data for advertising or to train general models, similar to Azure’s approach). Also, consider data residency if required – Google allows deploying models in specific regions to keep data local. Insight: If you are a multi-cloud shop (using both Google and Microsoft, for example), you have an advantage – you can play the two off each other. For instance, if Google offers incentives or bundles for Vertex AI, mention this to Microsoft when negotiating their Azure AI, and vice versa. Google’s AI models are strong (PaLM 2, etc.), but ensure they align with your needs (e.g., if your environment is Microsoft-centric, Copilot might integrate more seamlessly into daily workflows than Google’s tool, and Microsoft knows this).
  • OpenAI (Direct API or ChatGPT Enterprise): OpenAI, the company behind GPT-4/ChatGPT, offers direct access to its models via APIs and has introduced ChatGPT Enterprise for businesses. The OpenAI API pricing is consumption-based – you pay per token of input/output, with published rates (e.g., ~$0.03 per 1K input tokens and $0.06 per 1K output tokens for GPT-4 8k context). OpenAI’s direct pricing is often slightly lower than Azure’s markup on the same models, and OpenAI sometimes releases new model features or versions to its API faster than they appear on Azure. However, by default, OpenAI’s API comes with no SLA or guarantees – it’s a cloud service without an uptime guarantee. For large enterprise deals, OpenAI now offers ChatGPT Enterprise, which uses a per-user pricing model (not publicly posted, likely negotiable based on volume) and provides additional assurances: OpenAI pledges not to use your data for training their models and provides SOC 2 compliance and improved security for Enterprise customers. They have even started offering indemnity to enterprise API users for certain IP claims, comparable to Microsoft’s liability protections. If you choose OpenAI directly, budget for premium support (OpenAI’s support might be more limited than a full cloud provider’s). Negotiation tip: Some enterprises adopt a dual strategy – using Microsoft or Google for some use cases and OpenAI directly for others – and you should ensure no contract restricts you from doing so. In negotiations, having a quote or reference from OpenAI for a given usage can be powerful leverage: Microsoft knows that you could go to OpenAI directly for a potentially lower cost, so they may offer discounts or extra features to keep your business. Conversely, you can ask OpenAI to match enterprise features (like an uptime commitment or better price at scale) by mentioning the robust infrastructure and support a cloud platform would give you. Always weigh the trade-off: OpenAI’s raw model access vs. a cloud provider’s managed, integrated ecosystem.

Multi-Cloud and Flexibility: Avoid getting locked entirely into one vendor for AI if you can. Keeping multi-cloud or multi-vendor options gives you leverage and risk mitigation. For example, you might use Azure OpenAI for a highly sensitive project that needs the SLA and Azure’s security, but use OpenAI API or AWS to experiment with other models on less critical tasks. Ensure your contracts don’t forbid using alternative AI providers (they typically do not, but be cautious about committing spending that effectively locks all your budget to one vendor). Designing your AI applications in a cloud-agnostic way (abstracting the model API calls so you can switch out backends) is a technical best practice that complements procurement flexibility.
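As a sketch of that cloud-agnostic design, the pattern below hides provider-specific calls behind one interface. The class names and the `complete` method are illustrative assumptions, not any vendor's actual SDK; real backends would call the respective APIs where the stubs are.

```python
from abc import ABC, abstractmethod

class TextModelBackend(ABC):
    """Minimal provider-agnostic interface for text generation."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIBackend(TextModelBackend):
    def complete(self, prompt: str) -> str:
        # Real implementation would call Azure OpenAI here; stubbed for illustration.
        return f"[azure] {prompt}"

class BedrockBackend(TextModelBackend):
    def complete(self, prompt: str) -> str:
        # Real implementation would call AWS Bedrock here; stubbed for illustration.
        return f"[bedrock] {prompt}"

def generate(backend: TextModelBackend, prompt: str) -> str:
    # Application code depends only on the interface, so the vendor
    # can be swapped via configuration rather than a rewrite.
    return backend.complete(prompt)
```

With this shape, moving a workload from one provider to another is a one-line configuration change, which directly supports the negotiation leverage described above.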

AI Pricing Models and Cost Structures

AI solution pricing can vary widely. Procurement teams should understand the common pricing structures for generative AI solutions and how they differ by vendor, to forecast costs and negotiate effectively:

  • Token/Character-Based Pricing (Pay-as-You-Go): Many AI providers charge based on usage measured in text tokens or characters processed. For instance, OpenAI’s GPT-4 API charges around $0.06 per 1,000 output tokens (and $0.03 per 1,000 input tokens), meaning if you generate 1 million text tokens, it’s roughly $60. Google’s Vertex AI similarly charges per 1,000 characters of input/output for PaLM 2 models (with rates of $0.0005 per 1K characters, roughly equating to $0.002 per 1K tokens). This consumption-based model is very granular: you pay exactly for what you use, which is great for experimentation or variable workloads. However, it can lead to unpredictable costs if usage spikes or scales up unexpectedly. Best practices: Monitor usage closely and set budgets/alerts. For large deployments, ask about volume discounts – e.g., OpenAI and Azure both have discount tiers or enterprise rates if you commit to high volumes. Also, consider setting limits or using rate limiting to control runaway use. In negotiations, if you expect heavy use, you could seek a committed-use deal (e.g., commit to $X of tokens per year for a discounted rate).
  • Consumption-Based Cloud Services (Infrastructure-Oriented): This pricing is common when using cloud platforms like AWS or Azure, where you pay for the underlying compute time, storage, or throughput. For example, hosting a generative model on Azure or AWS might involve paying by the VM instance-hours or GPU-hours. AWS Bedrock and Azure OpenAI dedicated instances both effectively charge for the compute provisioned. This model is also pay-as-you-go, but at a resource level rather than per text output. It’s often measured in hours of server usage, number of requests, or memory/throughput units. The benefit is scalability – you can scale down to zero when not in use or scale up as needed. The downside is, again, cost uncertainty and the need for cloud cost management. Tip: Model your expected usage patterns in advance, and consider reserved capacity if you predict steady usage. For instance, Azure OpenAI offers 1-month or 1-year capacity reservations that grant lower rates if you pre-commit to a certain throughput. Similarly, AWS may have savings plans for SageMaker instances. By forecasting your needs (perhaps using pilot usage data), you can decide whether on-demand or committed infrastructure pricing is more cost-effective. Always scrutinize how the AI service is metered (per second, per request, etc.) to avoid surprise charges.
  • Per-User Subscription Licensing: Some generative AI offerings are priced per-seat, like traditional software licensing. In this model, you pay a flat recurring fee per user or device, regardless of how much that user uses the AI. Examples include Microsoft 365 Copilot at $30 per user/month, and Google’s Duet AI at a similar $30 per user/month for Workspace customers. OpenAI’s ChatGPT Enterprise also follows a per-user subscription model (with negotiable pricing depending on the number of seats). The advantage here is cost predictability: you can budget a fixed amount per month or year per user. It also encourages broad access – users can experiment freely without worrying about racking up usage charges. The downside is you pay for potential use, not actual use – if only a fraction of licensed users heavily use the AI, your cost per actual usage can be very high. Best practices: Align subscription quantities with real need – start with a pilot group of power users before rolling out enterprise-wide. Negotiate for flexibility to decrease the number of licenses at renewal or if adoption is lower than expected (a “true-down” right) so you’re not stuck overpaying for unused licenses. Also, ask about volume discounts (though Microsoft notably did not provide tiered discounts for Copilot – it was one flat price for all). You might push for a better rate or an enterprise bundle if you have thousands of users.
  • Flat-Rate or Enterprise Agreements: Sometimes, vendors may offer flat-rate pricing or an enterprise license for AI services – for example, unlimited use for a fixed annual fee, or a large prepaid credit. This is not yet common for cutting-edge generative models (because usage can vary widely), but we see hints of it in large deals or as part of bigger bundles. For instance, a cloud provider might include some generative AI usage in a big cloud commit deal. Flat-rate deals give cost certainty but carry risk: if you overestimate usage, you pay for capacity you never use. They may also hide fair-use clauses that throttle performance if you max it out. If you’re considering a flat rate, model the best- and worst-case usage to ensure it’s a good deal. And ensure there are performance SLAs – since you’re prepaying, you want service quality guarantees.

In summary, map the pricing model to your usage profile. Pay-per-use is likely most economical if your AI usage is sporadic or exploratory. Negotiate a committed or license model if it’s heavy and predictable. Often, a hybrid approach works best. For example, you could license a certain number of user seats for core usage and have a pay-as-you-go plan for overflow or external-facing workloads. The key is to avoid surprise bills – use cost management tools and insist on billing transparency from vendors. An example of cost planning: if you estimate using 10 million GPT-4 tokens per month (~$600/month at list rates), but only 50 users need access, compare the token-based cost to a per-user model (50 users * $30 = $1500/month for Copilot) to decide which is more economical. This kind of analysis will inform your negotiations and procurement choices.
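The cost comparison above is easy to script. The sketch below uses the list prices quoted in this section (the rates are the published figures cited earlier; treat them as illustrative inputs to be replaced with your negotiated numbers):

```python
GPT4_OUTPUT_PER_1K = 0.06   # USD per 1K output tokens (GPT-4 list rate cited above)
COPILOT_PER_USER = 30.0     # USD per user per month (flat per-seat fee)

def token_cost(monthly_tokens: int, rate_per_1k: float = GPT4_OUTPUT_PER_1K) -> float:
    """Monthly pay-as-you-go cost for a given token volume."""
    return monthly_tokens / 1000 * rate_per_1k

def seat_cost(users: int, per_user: float = COPILOT_PER_USER) -> float:
    """Monthly per-user subscription cost."""
    return users * per_user

# The worked example from the text: 10M tokens/month vs 50 licensed seats.
usage = token_cost(10_000_000)   # ~$600/month
seats = seat_cost(50)            # $1,500/month
cheaper = "pay-as-you-go" if usage < seats else "per-seat"
```

Running this kind of comparison across your realistic best- and worst-case usage scenarios gives you concrete numbers to bring into negotiations.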

Key Contract Risks and Negotiation Points

When reviewing AI contracts, look for vendor-favorable clauses and address key risk areas. Generative AI is new territory, and some vendor agreements may shift undue risk to the customer. Below are critical contract elements to manage in negotiations:

  • Data Usage and IP Ownership: Consider how the contract treats your data (inputs) and the AI-generated outputs. Ensure it’s clear that your company retains ownership of proprietary data and any IP in outputs. Cloud AI providers differ: enterprise agreements often state that the customer owns their input and output data, and vendors disclaim ownership of generated content. Prevent vendor “training” on your data – negotiate clauses that forbid the supplier from using your inputs or outputs to improve their models (unless expressly agreed). For example, Microsoft promises Azure OpenAI will not use your prompts or fine-tuned models to train the AI. OpenAI similarly promises Enterprise customers that API data is not used to train models. If a vendor’s default terms allow data reuse (common in consumer AI tools), get an addendum or enterprise rider that overrides this. IP rights should also be considered in any custom models or work produced. If a vendor is developing a model or solution for you, ensure the contract specifies who owns the model and any new IP. Vendors might try to claim a license to use improvements or fine-tunings, but limit that to only what’s necessary to provide the service. One example clause to watch: some cloud contracts have broad language allowing the provider to use “feedback” or suggestions you give them. Make sure that you don’t unintentionally give away your secrets or improvements. You can agree to share feedback on the service, but not let them use your data/artifacts for others.
  • Output Liability & Indemnity: Generative AI can produce inaccurate or infringing content. Vendors often disclaim warranties about the accuracy of AI outputs and limit their liability for issues arising from the use of the AI. As the customer, you should seek indemnification from the vendor for certain risks – for instance, if the AI’s output unintentionally infringes a third-party copyright or violates someone’s rights. Not all vendors will agree to this, but note that OpenAI recently offered to indemnify enterprise users for IP claims related to their AI outputs, and Microsoft has an IP indemnification pledge for Copilot (they’ll defend customers against copyright claims on Copilot-generated code, for example). Push for similar protections: if the vendor is confident in their product, they should stand behind it to some degree. At minimum, ensure the contract doesn’t require you to indemnify the vendor for use of their AI. Also, clarify responsibility for harmful outputs: what is the vendor’s responsibility if the AI produces something that causes damage or a serious error? You likely won’t get them to take open-ended liability. Still, you can negotiate scenarios (e.g., data breach via the AI, or discrimination by an AI decision) where the vendor explicitly accepts liability or provides a remedy. Liability caps are another area – vendors will try to cap liability tightly (often to the fees paid). You may not get unlimited liability from an AI vendor (and they wouldn’t get it from you either), but try to carve out critical issues (like breach of confidentiality, or IP infringement) from a low cap.
  • Service Levels and Performance: Insist on clarity around Service Level Agreements (SLAs) if you depend on the AI in production. What uptime or availability is promised? Is there a remedy (usually service credit) if the service is down or sluggish? For example, Azure OpenAI offers a 99.9% uptime SLA, whereas OpenAI’s API offers no guaranteed uptime. If a provider doesn’t offer an SLA, this is a risk – you might mitigate it by not using them for mission-critical functions or by engineering fallbacks. If they have an SLA, check what’s excluded (maintenance windows, for instance, and preview releases of new models often carry no SLA). Also consider latency and throughput commitments if you have real-time requirements – sometimes you can get a guaranteed response time or dedicated capacity for a fee. Align SLAs with your business needs and include the right to terminate if SLAs are consistently missed.
  • Termination and Exit Clauses: Given the rapid evolution of AI, you want flexibility to terminate or change course if needed. Negotiate termination rights beyond just cause/non-payment. For instance, try to include the right to terminate for convenience with notice (even if it involves a penalty or pro-rated fee). This lets you exit if the AI tech is not delivering value or if a superior option emerges. Also, consider a termination right for a change in law or policy: if new regulations or internal policies prohibit the use of the AI, you should be able to exit the contract without being stuck. Vendor-favorable terms might lock you in for a multi-year term with no exit, so push for at least an annual review or pilot period. Additionally, plan for contractual exit assistance: ensure you can retrieve your data, models, or artifacts upon exit. If you fine-tuned a model on a vendor platform, clarify if you can export that model (weights) or the training data. It’s not always possible with closed platforms, but at least ensure you retain your datasets and can transition. Pro tip: Some enterprises negotiate a shorter initial term (like 1 year) or a “break clause” at 12–18 months even in a 3-year deal. This allows renegotiation or exit once you have real usage data or if the market shifts. If the vendor won’t allow an easy exit, minimize the commitment size/duration.
  • Vendor Audit and Compliance Rights: If the AI solution will process sensitive data or is subject to regulations (privacy, bias/fairness laws, etc.), include provisions about compliance. Ensure the vendor will comply with relevant laws (GDPR, sector-specific AI regulations) and consider adding a clause that allows your company to audit their compliance or request attestations. For example, you might want to audit how your data is stored and used or review the vendor’s bias testing and security controls. Vendors often resist broad audit rights, but you can at least require regular compliance reports, external security audit certifications, and prompt notification of any data breach. Also clarify data handling: data location (which countries), subcontractors involved, and data deletion timelines (if you stop using the service, will your data be deleted promptly?). Having these in the contract protects you in case of later disputes or regulatory inquiries.
  • Retraining and Model Improvement Clauses: A subtle but important point in AI deals is whether the vendor can improve their models from your usage. As mentioned, you generally want to forbid them from using your specific data. However, some contracts might allow learning from usage patterns or feedback. Make sure any such allowance is well-understood and does not expose your IP. Conversely, if you are paying for a custom or fine-tuned version, clarify your rights to the improved model. For instance, if a consultancy builds a model trained on your data, you would want ownership of that model (or at least an exclusive license). If the vendor insists on owning the model, ensure you have a license broad enough to use it anywhere and even continue its development if the vendor relationship ends. Retraining rights should also cover your ability to continue using the model with new data over time – you don’t want to be stuck paying the vendor for every little update if it can be helped.
  • Confidentiality and Data Security: Given that AI systems might consume sensitive business data to function, strong confidentiality and data protection clauses are necessary. The vendor should protect your data with industry best practices (encryption, access controls, etc.). Look for commitments to standards like SOC 2, ISO 27001, or FedRAMP if applicable – for example, OpenAI’s enterprise offering touts SOC 2 compliance, and Microsoft and Google have extensive compliance portfolios. Additionally, include a clause that if a security incident or breach occurs on the vendor side that affects your data, they must promptly notify you and cooperate in remediation – this is standard, but verify it’s there.
  • Bias, Ethics, and Regulatory Compliance: As an enterprise, you must ensure the AI solutions meet ethical and legal standards (no unlawful bias, etc.). While this is partly related to your use, you should also ask vendors about their AI governance practices. For instance, inquire if they conduct independent bias audits of their models (New York City’s Local Law 144 on automated hiring tools is one example requiring bias audits). In contracts, you might not get a specific clause warranting “our AI is bias-free” (vendors won’t want that). Still, you can include language that the AI will be provided in compliance with applicable laws and that the vendor will notify you of any regulatory investigations or actions related to the AI. If such issues are a major concern in your industry, consider a contractual right to terminate or pause usage if the AI is found non-compliant or high-risk by regulators.

In summary, standard AI contract terms often favor the vendor, so don’t accept them without scrutiny. Engage your legal counsel to review these issues early (as soon as you’re evaluating a deal). Negotiating AI contracts is about balancing innovation with protection: you want access to the latest tech, but with reasonable safeguards in place. Many of these terms (IP, liability, SLA) might be non-negotiable in click-through agreements, but for enterprise deals you can and should negotiate. It’s worth remembering that additional protections may come at a cost – often the “enterprise tier” of a service costs more, exactly because it includes these assurances. Be prepared to pay a premium for better terms, and factor that into your procurement evaluation.

Build vs. Buy: Evaluating Custom-Built AI vs. Off-the-Shelf Solutions

A pivotal decision in enterprise AI strategy is whether to build AI solutions in-house or buy (license) solutions from vendors. Procurement plays a key role in evaluating this build vs. buy trade-off, which can significantly impact cost, control, and time-to-value:

1. Strategic Fit and Differentiation: Determine if the AI capability you need is a core differentiator for your business or more of a common utility. If it’s core (for example, a proprietary algorithm that gives a competitive advantage, or a unique model tuned to your proprietary data), there’s a stronger case to build or heavily customize, since doing so gives you exclusive IP and control. If it’s a common need (say, a general chatbot for internal Q&A or an AI code assistant), buying or using an existing service could suffice, as many vendors offer similar capabilities. As one AI strategist put it, if your use cases are similar to what 80% of other companies have, an off-the-shelf solution can likely meet them with minor tweaks. Save in-house development resources for truly unique requirements.

2. Cost and Resources: Building an AI system requires significant upfront investment, not just in hardware, but in talent (data scientists, ML engineers) and time for development and training. There are “hidden” costs to building: integrating multiple components, maintaining and updating the system, and fixing bugs over time. On the other hand, buying typically shifts cost to a recurring license or usage fee and relies on the vendor’s existing R&D. A build-vs-buy analysis should compare the Total Cost of Ownership (TCO) over a multi-year period. Example: Building your own generative language model might avoid paying per-query fees to a vendor, but you’ll incur costs for GPU infrastructure and electricity and need staff to tune and support it. Some studies have found that for steady high volumes, self-hosting can save money (as noted, potentially 3–5× lower over time), but if your volume is low or you lack economies of scale, a cloud service might be cheaper. Also consider opportunity cost: buying a solution could let you deploy AI capabilities in weeks, whereas building could take months or years, during which you might miss business opportunities. If you build, ensure you budget for ongoing maintenance; it’s not one-and-done. If you buy, remember to account for scaling costs (license fees can rise as you add users or usage).
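A multi-year TCO comparison of the kind described above can be sketched in a few lines. Every figure below is a placeholder assumption for illustration, to be replaced with your own hardware quotes, staffing estimates, and vendor pricing:

```python
def build_tco(years: int, hardware: float, annual_staff: float, annual_opex: float) -> float:
    """TCO of building: upfront hardware plus recurring staff and operating costs."""
    return hardware + years * (annual_staff + annual_opex)

def buy_tco(years: int, monthly_fee: float) -> float:
    """TCO of buying: recurring vendor fees, assumed flat for simplicity."""
    return years * 12 * monthly_fee

# Illustrative placeholders only: $250K of GPU hardware, $400K/yr staff,
# $50K/yr power and maintenance, versus a $40K/month cloud AI bill.
three_yr_build = build_tco(3, 250_000, 400_000, 50_000)
three_yr_buy = buy_tco(3, 40_000)
```

In practice you would extend this with growth in usage, hardware refresh cycles, and expected vendor price changes, but even a crude model like this forces the hidden recurring costs of building into the open.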

3. Control, Customization, and IP: Building in-house grants you maximum control. You can customize the model architecture, train it on your proprietary data without it leaving your environment, and you aren’t subject to a vendor’s roadmap or changes. You also own the intellectual property of what you create (unless using open-source under certain licenses). This control is crucial if you’re in a regulated industry and need to fine-tune how decisions are made, or if you want to ensure data never leaves your premises. Buying an AI solution means accepting some black-box aspects – vendors might not reveal model internals or allow deep customization. You might only get configurable parameters or fine-tuning at a high level. Additionally, vendors might update models in ways that affect your outputs (for instance, an API could start giving different answers after a backend model upgrade). If such changes could disrupt your business, that’s a point in favor of building a more stable, tailor-made system. On IP: if you invent a novel AI method in-house, it’s yours to potentially patent or keep as a trade secret; if a vendor’s tool produces something, the tool itself isn’t yours (though your output data may be). Some companies worry about vendor lock-in with proprietary models – if you rely on a vendor’s model and they change terms or pricing, or suffer downtime, you have limited recourse. Building your own (or using open-source models) can mitigate lock-in risk, albeit with challenges.

4. Data Sensitivity and Compliance: Consider the sensitivity of the data and tasks involved. If the AI uses highly sensitive data (personal data, trade secrets, confidential IP), many enterprises feel more secure with an in-house or private-cloud solution where they control data end-to-end. Using a public cloud AI API means trusting the vendor’s security and privacy measures. While major vendors offer strong security, some organizations (especially in sectors like finance, defense, and healthcare) have strict compliance needs that favor on-premises deployment or a virtual private cloud setup. Building in-house allows you to keep data on-premises or in a cloud environment you manage, and you can ensure all compliance checks (encryption, audit trails, etc.) are in place. Also, certain regulations might dictate that you can explain how an AI made a decision (e.g., in EU laws) – if you built the model, you might be better positioned to do that. In contrast, a third-party model might be a complete black box, making compliance trickier.

5. Time-to-Value and Capability: Buying a ready-made AI solution can often get you to production faster. Vendors have pre-trained models and out-of-the-box integrations. For example, if you want a generative AI customer service chatbot, several vendors can provide that as a product; you could deploy it in weeks. Building from scratch, you would need to gather data, train a model, build a user interface, etc., which could take considerable time. Also, assess your internal capability: Do you have a team that can build and maintain AI? If not, are you willing to hire and invest? The AI talent market is competitive, and retaining a skilled ML engineering team has its costs. If your organization is new to AI, starting with a vendor solution might be a way to “learn by doing” and build up internal know-how gradually, rather than betting big on an internal build and risking a misstep. On the other hand, building might be within reach and strategically rewarding if you have a strong tech team (or can partner with a research lab).

6. Hybrid Approaches: In reality, many enterprises choose a hybrid approach – building some pieces and buying others. For instance, you might buy access to a powerful foundation model (like using Azure OpenAI or AWS Bedrock models), but then build much of the surrounding system in-house: you fine-tune the model with your data, build the application layers, and perhaps even develop some custom AI components for niche tasks. This way, you leverage the heavy investment vendors made in core model development, but still create a solution tailored to you. Another hybrid model is using open-source pre-trained models and customizing them – you didn’t build the model from scratch, but you didn’t buy a service either; you are effectively taking a third-party model and deploying it yourself. With the rise of robust open-source generative models (like Meta’s Llama 2, which is available for enterprise use), this has become a viable path: you procure the hardware (or cloud instances) to run an open model, and avoid vendor fees altogether while keeping full control. The trade-off is that you assume responsibility for operating and keeping the model updated.

When making the build vs buy decision, weigh these factors holistically. It can be useful to do a pilot project both ways: e.g., try a vendor API on a small scale to gauge results quickly, while also doing a proof-of-concept with an open-source model internally. See which yields better results and fits your constraints, then invest in scaling that option. Also, consider the longevity: an AI model or system will require updates (models get outdated as new techniques emerge or data drifts). Vendors will handle updates for you in a buy scenario (you’ll get the latest model improvements), whereas built solutions mean you must continually improve them or risk stagnation. There’s no one-size-fits-all answer – often a hybrid and iterative approach is best, letting you maintain control in critical areas while leveraging vendor innovation where it doesn’t differentiate you.
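To make the pilot comparison concrete, the build-vs-buy trade-off above can be sketched as a simple total-cost model. All figures, rates, and function names below are hypothetical assumptions for illustration, not vendor quotes:

```python
# Hypothetical 3-year cost comparison for one generative AI use case.
# Every number here is an illustrative assumption; substitute your own
# vendor pricing, infrastructure costs, and staffing estimates.

def buy_cost(tokens_per_month: float, price_per_1k_tokens: float,
             months: int = 36) -> float:
    """Total cost of a pay-per-use vendor API over the planning horizon."""
    return tokens_per_month / 1000 * price_per_1k_tokens * months

def build_cost(gpu_monthly: float, team_monthly: float,
               setup_one_time: float, months: int = 36) -> float:
    """Total cost of an in-house build: setup, infrastructure, and staffing."""
    return setup_one_time + (gpu_monthly + team_monthly) * months

# Assumed scenario: 2B tokens/month at $0.01 per 1K tokens vs. an
# in-house stack with GPU rental, a small ML team, and one-time setup.
api = buy_cost(tokens_per_month=2_000_000_000, price_per_1k_tokens=0.01)
inhouse = build_cost(gpu_monthly=8_000, team_monthly=40_000,
                     setup_one_time=150_000)
print(f"Buy (API): ${api:,.0f} vs. Build: ${inhouse:,.0f}")
```

Running both scenarios side by side over the same horizon makes the break-even point explicit, and the same skeleton can be re-run as pilot data replaces the initial guesses.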

Recommendations for AI Procurement Teams

In conclusion, enterprise procurement leaders should approach AI deals with both optimism and caution. Here are key recommendations to ensure successful AI procurement:

  1. Define Clear Use Cases and Requirements First: Understand what you need the AI to do (e.g., automated coding assistant, marketing content generator, predictive analytics on supply chain data) and how success will be measured. Clear goals will drive the right choice of vendor or solution and set the stage for contract KPIs.
  2. Perform a Build-vs-Buy Analysis: Evaluate if an in-house build, an external solution, or a mix best meets each AI need. Consider data sensitivity, time urgency, internal talent, and long-term cost. Don’t assume you must build everything or buy everything – choose strategically for each use case.
  3. Due Diligence on Vendors: Thoroughly vet AI vendors’ capabilities and contract terms. Compare the major cloud AI providers on model quality, pricing, support, and enterprise-friendly terms. Look beyond glossy demos – review their documentation on data handling, security certifications, and any referenceable customers in your industry. Favor vendors who are transparent about their model’s limitations and align with your compliance needs (e.g., will sign a Data Protection Addendum, etc.).
  4. Negotiate Key Protections in Contracts: Don’t accept boilerplate. Push back on terms that pose high risk:
    • Insist on data privacy clauses that bar the vendor from using your inputs/outputs to train others’ models.
    • Secure at least a basic SLA if uptime is important (or have a backup plan if service is down).
    • Seek indemnities for critical risks (IP infringement, data breach) and make sure liability caps are reasonable relative to the potential harm.
    • Include termination and exit rights so you’re not handcuffed if the AI under-delivers or regulations change.
    • Avoid auto-renewal traps; set calendar reminders for renewal dates so you can renegotiate intentionally.
  5. Align Pricing Model with Usage: Choose vendors and plans that fit your consumption pattern. If unsure, start with pay-per-use to gather data, then consider a committed deal to save costs once you have usage estimates. Negotiate for volume discounts or hybrid models (a mix of user licenses and consumption) to optimize cost. Always model worst-case and best-case spend under any pricing scheme and bake those into your budget approvals.
  6. Engage Stakeholders Early: Involve IT, security, legal, and finance teams early in the procurement process. AI procurement isn’t just another software buy – it raises unique issues (ethical use, data governance, regulatory compliance). A security/privacy review of the vendor (for cloud security, SOC 2 reports, etc.), a legal review of contract language, and a finance review of cost projections (perhaps with a FinOps specialist) will surface concerns before you’re locked in. This cross-functional approach prevents surprises and builds enterprise-wide support.
  7. Plan for Change and Flexibility: The AI landscape is evolving quickly. Avoid overly long commitments. Keep initial contracts short (1 year) or include mid-term checkpoints. Explicitly discuss with vendors how upgrades will be handled – if a new model version comes out, will you get access under your current contract? Design your architecture and agreements for portability: you should be able to swap out a model or even vendor if needed, without massive disruption. Internally, keep evaluating new players and technologies; don’t be afraid to mix vendors to get the best of each.
  8. Ensure Knowledge Transfer and Internal Capability: If using consultants or vendor services to implement AI, include deliverables that build your internal knowledge. For example, require documentation, staff training sessions, or the handover of code and model weights (as permitted) at project end. Over-reliance on an external party can be risky – aim to cultivate in-house skills so your team can maintain and tweak AI systems after deployment.
  9. Monitor and Govern the AI Use: Procurement doesn’t end at the contract signature. Work with IT and an AI governance committee (if you have one) to monitor ongoing usage, costs, and performance. Establish internal policies for how teams should use the AI tools (to prevent misuse or data leaks). Periodically audit whether the vendor meets their obligations – e.g., hitting SLAs, maintaining security certifications, etc. This oversight ensures you realize value and can course-correct if issues arise.
  10. Advocate for Your Interests – You Have Leverage: Remember that enterprise customers have a lot of influence in the current market. Generative AI providers are keen to land big-name clients. Don’t hesitate to ask for custom terms or extras: e.g., ask Microsoft for a dedicated support contact for your Copilot rollout, or ask OpenAI for a volume discount and enhanced uptime commitment if your usage is significant. What’s “standard” can often be modified if the deal size is big enough. Even a trial period or pilot before full commitment is negotiable – get quotes from multiple sources and tell them you have choices. The goal is a win-win where you get a successful AI deployment and contractual peace of mind.
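The advice in recommendation 5 – model worst-case and best-case spend before committing to a pricing scheme – can be sketched as below. The pricing tiers, rates, and usage bands are hypothetical assumptions, chosen only to show the shape of the comparison:

```python
# Sketch of spend-scenario modeling for two common AI pricing schemes.
# All rates, fees, and usage figures are hypothetical assumptions.

def pay_per_use(monthly_tokens: float, rate_per_1k: float) -> float:
    """Pure consumption pricing: pay only for what you use."""
    return monthly_tokens / 1000 * rate_per_1k

def committed(monthly_tokens: float, commit_fee: float,
              included_tokens: float, overage_per_1k: float) -> float:
    """Committed tier: flat fee with an included allowance, plus overage."""
    overage = max(0.0, monthly_tokens - included_tokens)
    return commit_fee + overage / 1000 * overage_per_1k

# Assumed best-case, expected, and worst-case monthly usage bands.
scenarios = [("best case", 100_000_000),
             ("expected", 500_000_000),
             ("worst case", 1_500_000_000)]

for label, usage in scenarios:
    ppu = pay_per_use(usage, rate_per_1k=0.01)
    com = committed(usage, commit_fee=4_000,
                    included_tokens=600_000_000, overage_per_1k=0.008)
    print(f"{label}: pay-per-use ${ppu:,.0f} vs. committed ${com:,.0f}")
```

Under these assumptions, pay-per-use wins at low usage while the committed tier caps exposure at high usage – exactly the kind of crossover a procurement team should locate before signing, and bake into its budget approvals.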

By following these best practices, CIOs and procurement leaders can confidently procure AI solutions that deliver innovation while protecting the enterprise’s interests. The key is to be as vigilant with contracts and risk as you are enthusiastic about AI’s potential, combining excitement with due diligence to ensure your AI initiatives are transformative and secure.

Author

  • Fredrik Filipsson brings two decades of Oracle license management experience, including a nine-year tenure at Oracle and 11 years in Oracle license consulting. His expertise extends across leading IT corporations like IBM, enriching his profile with a broad spectrum of software and cloud projects. Filipsson's proficiency encompasses IBM, SAP, Microsoft, and Salesforce platforms, alongside significant involvement in Microsoft Copilot and AI initiatives, improving organizational efficiency.
