Why This Checklist Matters

Enterprise AI procurement is moving faster than procurement governance is evolving. Technology and business teams push for rapid AI deployment; procurement and legal teams lack the frameworks to evaluate contracts for platforms they have never bought before. The result is a wave of signed AI contracts containing provisions that create material financial, legal, and strategic risk — risk that becomes visible only when the enterprise is already committed.

This checklist is structured around the five risk categories that most frequently appear in AI platform contracts: pricing and cost control, data governance and privacy, compliance and regulatory alignment, intellectual property ownership, and exit and portability. Each question represents a decision point that, if unanswered before signature, typically produces a negative outcome within 12 to 24 months of deployment.

"The AI vendor sales cycle is designed to create urgency. The best protection for enterprise buyers is a structured checklist that forces answers to the questions the vendor would prefer to defer until after the contract is signed."

Category 1: Pricing and Cost Control (Questions 1–5)

Question 1: What is the billing model and how does cost scale with usage?

Understand precisely how you will be charged. Token-based consumption billing (per million input and output tokens) is the dominant model for OpenAI, Anthropic, and Google's API-based offerings. Seat-based billing applies to ChatGPT Enterprise and similar products. Provisioned throughput (PTU) billing from Azure OpenAI provides capacity-based pricing. Each model behaves differently at scale. A token-based model that appears affordable in a pilot can generate costs 10 times higher in production as usage broadens. Ask the vendor to model your expected production costs based on your pilot usage data and apply a 3x to 5x growth multiplier. Compare that projection against your budget before committing.
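The projection exercise described above can be sketched in a few lines. All figures here are illustrative assumptions, not any vendor's actual rates:

```python
# Sketch: project production cost from pilot token usage under an
# assumed token-based billing model. Prices and volumes are
# illustrative, not any vendor's actual rates.

def monthly_cost(input_tokens_m: float, output_tokens_m: float,
                 price_in: float, price_out: float) -> float:
    """Monthly cost in dollars, given millions of tokens and per-million-token prices."""
    return input_tokens_m * price_in + output_tokens_m * price_out

# Assumed pilot usage: 40M input / 10M output tokens per month.
pilot = monthly_cost(40, 10, price_in=2.50, price_out=10.00)

# Apply the 3x to 5x growth multiplier recommended above.
projection = {f"{m}x": round(pilot * m, 2) for m in (3, 4, 5)}
print(f"Pilot: ${pilot:,.2f}/month")
print(projection)
```

Comparing each multiplier scenario against the approved budget before signature turns the vendor's cost model into a testable claim rather than a sales estimate.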

Question 2: Is there a consumption cap or automatic budget alert?

Consumption billing creates budget unpredictability that is genuinely difficult to manage without controls. Ask whether the platform includes automatic spend alerts at defined thresholds (recommended: 50, 70, and 90 percent of monthly budget), hard spending caps that require manual override to exceed, and monthly budget limits that generate an approval workflow before processing further requests. If the platform does not offer these controls natively, request contractual obligations to implement them and negotiate compensation if budget overruns occur due to platform-side errors or unannounced pricing changes.
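The threshold-and-cap logic can be sketched as follows. The 50/70/90 percent thresholds come from the recommendation above; the function and figures are illustrative:

```python
# Sketch: spend alerts at 50/70/90 percent of monthly budget, with a
# hard cap requiring manual override. Illustrative only; as argued
# above, real controls should live on the platform side.

THRESHOLDS = (0.50, 0.70, 0.90)

def check_spend(spend: float, budget: float, override: bool = False):
    """Return (alerts fired, status) for the current month-to-date spend."""
    alerts = [f"{int(t * 100)}% threshold crossed"
              for t in THRESHOLDS if spend >= budget * t]
    if spend >= budget and not override:
        return alerts, "HARD CAP: requests blocked pending manual override"
    return alerts, "ok"

alerts, status = check_spend(spend=7_500, budget=10_000)
print(alerts, status)
```

If the vendor cannot implement equivalent controls natively, this is the behaviour to specify contractually.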

Question 3: What price protection exists for the contract term and on renewal?

AI model pricing has changed significantly as providers have competed and scaled. GPT-4o pricing declined dramatically between its launch and 2025. However, pricing can also increase when providers introduce new models and deprecate lower-cost options. Request contractual price protection: pricing locked at the signed rate for the duration of the initial term, with any increase on renewal capped at a maximum defined percentage above CPI. Require 90-day advance notice of any price changes and the right to terminate without penalty if prices increase beyond your cap.
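The renewal-cap arithmetic is simple enough to verify directly. This sketch assumes an illustrative CPI figure and cap percentage:

```python
# Sketch: highest rate a vendor may charge on renewal under a
# "CPI plus capped percentage" clause. CPI and cap are assumptions.

def max_renewal_price(current_price: float, cpi: float,
                      cap_over_cpi: float) -> float:
    """Maximum permitted renewal price under the negotiated cap."""
    return current_price * (1 + cpi + cap_over_cpi)

# $2.50 per million tokens, 3% CPI, increase capped at CPI + 2%.
print(round(max_renewal_price(2.50, cpi=0.03, cap_over_cpi=0.02), 4))
```

Any quoted renewal rate above this figure triggers the penalty-free termination right described above.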

Question 4: How does the pricing change if you upgrade to a newer model version?

AI platforms release new model versions frequently. OpenAI, Anthropic, and Google all follow patterns of releasing improved models at higher price points and gradually deprecating older, lower-cost models. Understand whether your contract locks you into specific model versions and pricing, or whether migration to newer models (which may be required as older models are deprecated) triggers automatic price increases. Negotiate the right to remain on a named model version for at least 12 months after any deprecation notice, with pricing locked at your committed rate during that period.

Question 5: What happens to unused committed volume?

Enterprise AI consumption is notoriously difficult to forecast. Committed volume agreements that expire at year-end without rollover provisions represent money paid for unused capacity. Negotiate rollover rights for unused committed volume to the following contract year, or conversion of unused credits into alternative service credits. If the vendor refuses rollover, negotiate a reduction in the committed volume or a right to reduce commitment mid-term if actual consumption falls below 70 percent of the committed level.
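The 70 percent trigger is worth checking quarterly rather than discovering at renewal. A minimal sketch, with illustrative figures:

```python
# Sketch: mid-term utilisation check against the 70 percent
# consumption floor discussed above. Figures are illustrative.

def commitment_review(committed: float, consumed: float,
                      floor: float = 0.70):
    """Return (utilisation ratio, recommended action)."""
    utilisation = consumed / committed
    if utilisation < floor:
        return utilisation, "right to reduce commitment triggered"
    return utilisation, "commitment on track"

util, action = commitment_review(committed=1_000_000, consumed=550_000)
print(f"{util:.0%}: {action}")
```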


Category 2: Data Governance and Privacy (Questions 6–10)

Question 6: Will your data be used to train the vendor's models?

This is the most frequently misunderstood provision in AI vendor contracts. Default API terms for OpenAI historically included the right to use API data for model improvement, though enterprise plans explicitly opt out. However, "enterprise plan" designation is not always sufficient — you need a written contractual opt-out in the signed agreement. Ask for explicit written confirmation that prompts, completions, fine-tuning data, and any derived data will not be used for model training, research, or any purpose other than delivering the contracted service. Do not accept verbal commitments or general enterprise plan descriptions as substitutes for contractual language.

Question 7: Where will your data be processed and stored?

Data residency requirements apply to organisations in the EU (GDPR), UK (UK GDPR), Australia (Privacy Act), India (DPDP Act), and a growing number of other jurisdictions with data localisation requirements. OpenAI now offers data residency in the EU, UK, US, Canada, Japan, South Korea, Singapore, Australia, India, and UAE for eligible enterprise customers. Azure OpenAI additionally provides region-specific data processing through Microsoft's Azure infrastructure with standard Azure compliance certifications. Require a named region commitment in the signed contract — general language about best efforts or preference is not enforceable for compliance purposes.

Question 8: What data is retained and for how long?

Understand the vendor's data retention policy for your prompts, responses, fine-tuning datasets, and any metadata generated during your use of the service. Understand the distinction between in-context data (active session), short-term retention (typically 30 days for abuse monitoring), and long-term retention (for research or product improvement). Require a written statement of retention periods for each data category and a commitment to delete all customer data, including derived data, within 30 days of contract termination, with a written deletion certificate provided.

Question 9: Is the vendor GDPR-compliant and will they sign a DPA?

For organisations subject to GDPR, a Data Processing Addendum (DPA) is a legal requirement before any personal data is processed by a third-party vendor. OpenAI provides a DPA for enterprise customers. Microsoft Azure, which hosts Azure OpenAI, processes data under Microsoft's standard Azure DPA. Anthropic provides DPA terms for enterprise customers. Do not proceed with deployment of any AI system that will process personal data without a signed DPA. Verify that the DPA includes: identification of processing purposes and legal basis; data subject rights fulfilment obligations; sub-processor listing and notification requirements; Standard Contractual Clauses (SCCs) or equivalent transfer mechanisms for international transfers; and breach notification within 72 hours of discovery.

Question 10: What security certifications and audit reports does the vendor hold?

Require the vendor to provide current SOC 2 Type 2 reports and evidence of ISO 27001 certification. For healthcare and life sciences organisations, a signed Business Associate Agreement under HIPAA is required. For financial services, verify that the vendor's security posture meets your sector-specific regulatory requirements. Request the right to receive annual security audit reports and to be notified within 72 hours of any security incident that may have affected your data. Verify that the vendor carries appropriate cyber liability insurance.

Category 3: Compliance and Regulatory Alignment (Questions 11–13)

Question 11: Does the contract include a regulatory exit right?

AI regulation is evolving rapidly. The EU AI Act, US AI Executive Orders, UK AI regulation framework, and sector-specific requirements in financial services, healthcare, and critical infrastructure are all creating obligations that may conflict with specific AI vendor capabilities or data handling practices. Your contract must include the right to terminate without penalty if a regulatory change makes continued use of the AI service non-compliant. This clause is increasingly non-negotiable for regulated industries and should include change of law events at both national and supranational level.

Question 12: How does the vendor handle model output errors, hallucinations, and bias?

AI models generate incorrect outputs. Large language models hallucinate facts, produce biased outputs, and occasionally refuse to complete legitimate requests. Your contract should include: an agreed process for reporting and investigating model output errors; service credits for output quality failures that exceed defined accuracy thresholds for production applications; a commitment that the vendor will test models for discrimination and bias before deployment; and limitation-of-liability language that does not exclude vendor responsibility where model errors cause material damage to your operations. Do not accept blanket disclaimers that transfer all output quality risk to the customer.

Question 13: Is the AI system covered under your existing enterprise compliance frameworks?

Verify that the AI vendor relationship has been assessed under your standard third-party risk management programme. AI vendors should complete your standard vendor security questionnaire, an AI-specific governance addendum (covering model transparency, bias testing, and explainability), and a fourth-party assessment of major sub-processors (cloud infrastructure, data labelling vendors). Ensure that your internal AI governance policy covers the specific use cases being deployed and that employees using the AI system have completed appropriate training on its limitations and output quality requirements.

In one engagement, a financial services firm faced unexpected GDPR violations stemming from data residency gaps in their direct OpenAI agreement. Redress identified the issue during contract review and negotiated explicit data residency commitments for the EU. The engagement fee was less than 2% of the potential compliance exposure.

Category 4: Intellectual Property (Questions 14–16)

Question 14: Who owns the outputs generated by the AI?

Confirm in writing that your organisation owns all outputs generated through your use of the AI service, including text, code, images, and any other content created at your direction. OpenAI, Anthropic, and Google all assign ownership of outputs to the customer in their enterprise terms, but this should be verified in the specific agreement you sign. Ensure the assignment is unconditional and not subject to the vendor's ability to revoke ownership based on subsequent use policies or model updates.

Question 15: Does the vendor warrant that outputs are free from third-party IP claims?

AI models trained on internet-scale datasets may incorporate copyrighted material in ways that create infringement risk for outputs. OpenAI and Google have both introduced IP indemnity programmes for enterprise customers that provide some protection against third-party copyright claims arising from model outputs. Verify whether your contract includes an IP indemnity, understand its scope and limitations, and assess whether the indemnity is adequate for your intended use cases. Do not assume that standard terms include IP indemnity — many AI vendor agreements exclude it entirely.

Question 16: Who owns fine-tuning datasets and fine-tuned models?

If you intend to fine-tune the AI model on your proprietary data, confirm ownership of the fine-tuning dataset, the fine-tuned model weights, and any evaluation data you create during the fine-tuning process. Ensure you can export fine-tuned model weights and evaluation results on contract termination. Verify that the vendor cannot use your fine-tuning data for any purpose other than providing the service, including training other customers' models or the vendor's foundation models.

Category 5: Exit Rights and Portability (Questions 17–20)

Question 17: What are the early termination provisions and fees?

OpenAI enterprise agreements and similar long-term AI platform contracts typically include early termination fees calculated as a percentage of the remaining committed value. On a $1 million, two-year contract, a 50 percent fee applied to the $500,000 still committed at the one-year mark creates a $250,000 exit cost. Negotiate exit rights without penalty in defined circumstances: material breach by the vendor; a security incident affecting your data; regulatory change making continued use non-compliant; acquisition of the vendor by a direct competitor; or significant degradation in model performance below agreed benchmarks. For high-value commitments, negotiate a step-down termination fee that reduces proportionally with time served under the contract.
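The difference between a flat-rate fee and a step-down fee can be sketched as follows. This assumes the fee is a rate applied to the remaining committed value; function names and figures are illustrative:

```python
# Sketch: flat-rate vs step-down early termination fees. Assumes the
# fee applies to the remaining committed value; figures illustrative.

def flat_fee(total_value: float, term_months: int,
             months_served: int, rate: float = 0.50) -> float:
    """Fixed rate applied to whatever value remains committed."""
    remaining_value = total_value * (term_months - months_served) / term_months
    return remaining_value * rate

def step_down_fee(total_value: float, term_months: int,
                  months_served: int, start_rate: float = 0.50) -> float:
    """The rate itself also declines in proportion to time served."""
    remaining_value = total_value * (term_months - months_served) / term_months
    rate = start_rate * (term_months - months_served) / term_months
    return remaining_value * rate

# $1M two-year contract, exiting at month 12.
print(flat_fee(1_000_000, 24, 12))       # flat rate on remaining value
print(step_down_fee(1_000_000, 24, 12))  # rate steps down with tenure
```

The step-down structure halves the exit cost at the midpoint of the term in this example, which is the commercial point of negotiating it.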

Question 18: Can you migrate to a different AI provider without rebuilding your applications?

Technical lock-in is as important as contractual lock-in. If your AI applications are built directly against a vendor's proprietary API, migrating to a different provider requires rebuilding those applications — a cost that can make contractual exit rights practically worthless. Ask the vendor whether their API conforms to open standards, and independently evaluate whether your application architecture uses provider-agnostic abstraction layers. The architectural decision to build against a unified AI gateway (supporting multiple providers) rather than a single vendor API should be made before production deployment, not after a contract renewal negotiation begins.
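The abstraction-layer decision can be sketched as a thin internal interface with one adapter per vendor. Class and method names here are illustrative, not a real library:

```python
# Sketch: a minimal provider-agnostic abstraction layer. Application
# code depends on one internal interface, never a vendor's API.
# All names are illustrative; adapters are stubbed.

from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Internal interface every provider adapter must implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Translation to vendor A's proprietary API would live here.
        return f"[vendor-a] {prompt}"

class VendorBAdapter(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Translation to vendor B's proprietary API would live here.
        return f"[vendor-b] {prompt}"

def answer(provider: ChatProvider, question: str) -> str:
    # Application code sees only the interface; swapping vendors is a
    # configuration change, not an application rebuild.
    return provider.complete(question)

print(answer(VendorAAdapter(), "hello"))
```

Making this choice before production deployment is what keeps the contractual exit rights in this category commercially meaningful.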

Question 19: What data will be returned to you on exit, and in what format?

Require a commitment that all your data — including fine-tuning datasets, evaluation data, conversation history, usage logs, and any derived data — will be returned to you in a portable, standard format on request and within a defined timeframe after contract termination. Require a deletion certificate confirming that all copies of your data have been deleted from vendor systems within 30 days of the data return. Without these provisions, you have no mechanism to confirm that your data has been removed from vendor infrastructure, which creates ongoing GDPR and data governance exposure.

Question 20: Does the vendor have a commercially sound exit assistance obligation?

For production AI systems that are business-critical, require the vendor to provide exit assistance for a defined period (typically 90 days) after termination. Exit assistance should include: continued access to the service at your contracted rate during the exit period; access to technical documentation sufficient for migration; transition assistance including export of your data and models; and notification of any sub-processor changes that affect your ability to migrate. Exit assistance provisions are standard in cloud service agreements but are frequently absent from AI platform contracts — include them explicitly.


How to Use This Checklist

Use this checklist at three stages of the AI procurement process. First, at vendor shortlisting — before issuing an RFP or entering commercial negotiations, require vendors to respond in writing to all 20 questions. Their willingness and ability to provide clear answers is itself a quality signal. Second, during contract negotiation — use the checklist to identify gaps between the vendor's standard terms and your minimum requirements. Gaps become negotiation items. Third, at contract review — before final approval, verify that the signed agreement contains satisfactory answers to all 20 questions. If any question remains unanswered, the contract is not ready for signature.

The Azure OpenAI vs direct OpenAI decision warrants specific attention here. For most enterprise deployments, Azure OpenAI provides a stronger answer to questions 7, 8, 9, and 10 than direct OpenAI, because Azure's compliance infrastructure (data residency, DPAs, BAAs, FedRAMP, ISO 27001 portfolio) is more mature than OpenAI's standalone enterprise offering. However, direct OpenAI may offer advantages for questions 4 and 6, where access to newer model versions and the pace of OpenAI's enterprise feature development are relevant. Neither route is universally superior — the right answer depends on your specific compliance environment and technical requirements.