The SLA Gap in Enterprise AI Contracts

Enterprise software procurement operates on the assumption that mission-critical platforms come with service level agreements. SLAs define uptime commitments, establish service credit mechanisms for non-performance, and create the contractual accountability framework that justifies embedding a vendor's platform into business-critical workflows. This assumption does not hold for most enterprise AI platforms in their standard commercial terms.

OpenAI's standard services agreement for enterprise customers does not include a published uptime SLA. Anthropic's Claude enterprise agreements similarly lack standardised SLA commitments in their base commercial terms. Perplexity's enterprise offering, Cohere's platform API, and most other AI inference providers follow the same pattern. In practice, enterprises paying $45–75 per user per month for OpenAI ChatGPT Enterprise, or $30–35 per seat for Claude Enterprise (at 500+ seats), are accepting material financial exposure with no contractual protection if the platform is unavailable when their users need it.

The contrast with Azure OpenAI is instructive. Microsoft's Azure OpenAI Service — which provides access to OpenAI models through Microsoft's cloud infrastructure — includes a standard 99.9% monthly uptime SLA as part of Azure's standard service commitments. For enterprises where compliance obligations, internal risk frameworks, or procurement policies require SLA coverage for enterprise software, Azure OpenAI presents a structurally different risk profile from a direct OpenAI or Anthropic agreement.

"You would not accept a $500K annual ERP contract without an SLA. Yet enterprises are signing AI platform commitments of equivalent scale on standard terms that include no uptime guarantee, no service credits, and no performance commitment whatsoever."
In one engagement, a global financial services firm had deployed an AI-driven document processing workflow on a major AI platform with no SLA or service credit provision. A 4-hour API outage caused a measurable downstream impact on client-facing operations. Redress negotiated a retrospective SLA amendment covering their committed spend tier — including a 99.9% uptime commitment and service credits equivalent to 30 days of fees for any future breach. The engagement fee was under 3% of the annual platform spend.

What the Actual Uptime Data Shows

Independent monitoring of AI platform uptime reveals a meaningful gap between marketing communications and operational performance. ChatGPT's overall service (including the API) has recorded availability in the 98.5–99.3% range across measured periods in 2025–2026. The API endpoint, used by enterprise integrations, has performed more consistently than the consumer-facing product, typically in the 99.5–99.7% range. Nevertheless, there have been documented incidents where API availability dropped for periods of hours, directly impacting enterprise applications built on the platform.

Anthropic's Claude API has tracked in a similar range — approximately 99.0–99.5% uptime on a 30-day rolling basis. Azure OpenAI, benefiting from Microsoft's enterprise infrastructure investment, has generally performed closer to its 99.9% SLA commitment, though it too has experienced incidents during the rapid capacity expansion that followed widespread enterprise adoption.

The mathematical significance of these figures is worth stating explicitly. A 99.9% monthly uptime commitment means no more than approximately 44 minutes of downtime per month. A 99.5% uptime (roughly where Claude and ChatGPT API performance has been observed) means approximately 3.6 hours of downtime per month. For enterprises with business-critical AI workflows — customer service automation, document processing pipelines, code generation tooling — 3.6 hours of monthly downtime is a significant operational exposure without SLA coverage and service credit protection. The enterprise AI licensing guide 2026 provides a platform-by-platform uptime and commercial terms comparison.
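The downtime arithmetic above is straightforward to reproduce. The sketch below converts a monthly uptime percentage into the maximum downtime it permits, using a 730-hour average month (365 × 24 / 12) as an assumption; some SLAs measure against the actual calendar month instead, which shifts the figures slightly.

```python
# Convert a monthly uptime commitment into allowable downtime.
# Assumes a 730-hour average month; calendar-month measurement
# (e.g. a 30-day month) produces marginally different figures.

HOURS_PER_MONTH = 365 * 24 / 12  # 730.0 hours


def monthly_downtime_minutes(uptime_pct: float) -> float:
    """Maximum downtime in minutes implied by an uptime percentage."""
    return HOURS_PER_MONTH * 60 * (1 - uptime_pct / 100)


for pct in (99.9, 99.5, 99.0, 98.5):
    mins = monthly_downtime_minutes(pct)
    print(f"{pct}% uptime -> {mins:.0f} min/month ({mins / 60:.1f} h)")
```

At 99.9% this yields roughly 44 minutes per month; at 99.5%, roughly 3.7 hours against the 730-hour month, or about 3.6 hours when measured against a 30-day month.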


The Four Must-Have Contract Terms

Beyond the uptime SLA itself, enterprise AI agreements require four categories of contract provision that have proven achievable through negotiation and that address the specific risk profile of AI platform procurement.

Term 1: Data Processing Agreement (DPA)

A DPA is non-negotiable for any enterprise operating under GDPR, CCPA, HIPAA, or equivalent data protection legislation, and for any enterprise with contractual obligations to its own customers regarding data handling. The DPA must specify precisely how the AI provider processes input data (the prompts, documents, and data your users submit), whether that data is used for model training or fine-tuning, the retention period for input and output data, and the data deletion mechanism on contract termination.

OpenAI's enterprise agreements include a DPA as a standard commercial document, though the scope of data usage provisions requires careful review — particularly the provisions relating to model improvement and training. Anthropic's Claude enterprise agreements similarly include DPA provisions, and Anthropic has been explicit in its enterprise commercial terms about not using enterprise customer data for model training without consent. The enterprise guide to negotiating OpenAI contracts provides a DPA clause-by-clause analysis with specific provisions to negotiate or reject.

Term 2: IP Indemnification

AI-generated content creates a novel IP risk: the possibility that the model's output incorporates elements from training data in ways that constitute copyright infringement. Enterprise buyers whose workflows involve generating customer-facing content, marketing materials, code, or other outputs that carry commercial IP significance need explicit indemnification from the AI provider for IP infringement claims arising from model-generated content.

Microsoft's Customer Copyright Commitment (CCC, formerly the Copilot Copyright Commitment) provides indemnification for enterprise Azure OpenAI customers, covering third-party IP claims arising from AI-generated outputs. OpenAI's enterprise terms include more limited IP indemnification language. Anthropic's Claude enterprise agreements address IP risk through a combination of training data practices and contractual representations. The scope and conditions of indemnification — which typically require that the customer use the platform as directed and not override safety filters — are material negotiation points. The Anthropic Claude enterprise licensing guide covers IP indemnification terms in detail for Claude agreements.

Term 3: Data Residency

Enterprise AI platforms process data in cloud infrastructure geographies that may not align with your organisation's data residency requirements. European enterprises must ensure GDPR compliance for personal data processing. Regulated industry enterprises (financial services, healthcare, defence) may have contractual or regulatory data residency obligations that require processing within specific geographic boundaries.

Azure OpenAI provides the most mature data residency options among the major AI platforms, with regional deployments across Azure's global infrastructure and Provisioned Throughput Units (PTUs) that can be allocated to specific geographic endpoints. OpenAI's enterprise agreements now include data residency options for select regions, though coverage is more limited than Azure. Anthropic's data residency capabilities are evolving — enterprise buyers with specific residency requirements should confirm current capabilities directly with Anthropic's enterprise sales team before signing. The Azure OpenAI vs direct OpenAI comparison examines data residency differences as a commercial decision factor. For a comprehensive view across all platforms, the OpenAI enterprise procurement negotiation playbook maps residency options platform by platform.

Term 4: Exit Rights and Data Portability

AI platform adoption creates organisational embedding that can significantly increase switching costs over time — through trained prompt libraries, fine-tuned models, user workflows built around specific model characteristics, and integration architecture. Exit rights provisions protect against commercial exploitation of this embedding at renewal.

The exit provisions to negotiate include: data portability on termination (the right to export all custom configurations, fine-tuned model weights if applicable, and prompt libraries in a documented format); advance notice from the provider before discontinuing any model version or API endpoint (a minimum of 12 months' notice); termination for cause without penalty if the provider materially changes terms, pricing, or service capabilities; and model deprecation continuity rights (the right to continue using a specific model version for a defined period after its general deprecation). GPT-5.4 is the current production OpenAI model as of April 2026 — the deprecation of GPT-4o in February 2026 demonstrated how quickly model transitions can affect enterprise workflows built on specific model characteristics. Our AI contract advisory specialists can review your exit rights provisions and ensure your agreement includes the portability and continuity protections your operational dependencies require.

Negotiating SLA Terms Into OpenAI and Anthropic Agreements

The absence of published SLAs from OpenAI and Anthropic does not mean SLA terms are unavailable — it means they are not offered unless requested. Enterprise buyers representing a meaningful annual commitment (typically $500K+) have successfully negotiated uptime commitments, service credits, and response time performance metrics into their enterprise agreements. The negotiation framework involves three elements: quantifying your operational dependency, proposing specific SLA language, and using competitive alternatives as leverage.

Quantifying operational dependency means calculating what platform unavailability actually costs your organisation — in lost productivity, customer impact, SLA obligations to your own customers, and manual process costs during outages. This analysis converts an abstract SLA request into a concrete business case that procurement can present to the AI provider's enterprise sales team.
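A minimal sketch of that dependency calculation follows. All inputs are illustrative placeholders (user count, loaded hourly rate, productivity loss share, and customer-impact figure), not benchmarks for any industry.

```python
# Sketch of a downtime-cost model for the SLA business case.
# Every figure below is a hypothetical placeholder.

def hourly_outage_cost(affected_users: int,
                       loaded_hourly_rate: float,
                       productivity_loss_pct: float,
                       customer_impact_per_hour: float) -> float:
    """Estimated cost per hour of platform unavailability."""
    productivity = affected_users * loaded_hourly_rate * productivity_loss_pct
    return productivity + customer_impact_per_hour


# Illustrative: 800 users at a $95/h loaded cost, 40% of their work
# blocked during an outage, plus $5,000/h of customer-facing impact.
per_hour = hourly_outage_cost(800, 95.0, 0.40, 5_000.0)

# Expected monthly downtime of ~3.6 h at ~99.5% observed uptime
# turns the hourly figure into a monthly exposure number.
print(f"~${per_hour * 3.6:,.0f} expected monthly exposure")
```

Multiplying the hourly cost by expected monthly downtime at observed uptime levels gives procurement a concrete exposure figure to put against the SLA request.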

Proposed SLA language should specify: monthly uptime commitment (target 99.9%), measurement methodology (service provider's own monitoring versus independent third-party measurement), service credit structure (10% of monthly fee for uptime below 99.9%, 25% for below 99.5%, 50% for below 99%), and credits as the exclusive financial remedy for uptime failures (to manage the provider's liability exposure while creating meaningful commercial accountability). Response time commitments — for example, 95th percentile API response time below two seconds for standard-length prompts — can also be negotiated for enterprises with user experience or application performance requirements.
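The tiered credit structure described above can be expressed as a simple schedule. This is a sketch of the proposed negotiation terms, not any provider's actual credit policy; the breakpoints and rates are targets, not standard offerings.

```python
# Sketch of the proposed tiered service-credit schedule:
# 10% credit below 99.9% uptime, 25% below 99.5%, 50% below 99.0%.

CREDIT_TIERS = [  # (uptime floor %, credit as fraction of monthly fee)
    (99.9, 0.10),
    (99.5, 0.25),
    (99.0, 0.50),
]


def service_credit(measured_uptime_pct: float, monthly_fee: float) -> float:
    """Credit owed for a month at the given measured uptime."""
    credit_rate = 0.0
    for floor, rate in CREDIT_TIERS:
        if measured_uptime_pct < floor:
            credit_rate = rate  # deeper shortfall -> larger credit
    return monthly_fee * credit_rate


print(service_credit(99.95, 40_000))  # meets the commitment: no credit
print(service_credit(99.7, 40_000))   # below 99.9%: 10% credit tier
print(service_credit(98.8, 40_000))   # below 99.0%: 50% credit tier
```

Framing credits as the exclusive remedy, as the text suggests, is what makes a schedule like this acceptable to the provider's legal team while still creating commercial accountability.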

Competitive leverage means having a credible Azure OpenAI evaluation in parallel, since Azure's standard 99.9% SLA creates an immediate comparison point. OpenAI's enterprise team is motivated to retain enterprise revenue that might otherwise flow through Azure — using this structural tension appropriately creates commercial flexibility on SLA terms that the standard contract does not provide.


Author

Fredrik Filipsson is Co-Founder of Redress Compliance and an enterprise AI contract specialist with 20+ years of experience and 500+ client engagements spanning enterprise software, cloud, and AI platform procurement. Fredrik advises CIOs and procurement leaders on structuring AI platform agreements that include meaningful service level protections alongside the commercial terms the enterprise AI market is still maturing toward. Connect on LinkedIn.