The Pace of Transformation — and the Commercial Reality Behind It
AI is not a future technology for enterprise operations — it is already embedded in the tools that knowledge workers use daily, the infrastructure that cloud platforms run on, and the procurement cycles that software vendors are aggressively restructuring around. In 2025, 92% of Fortune 500 companies reported active use of OpenAI products, and the total market spend on enterprise AI tools reached approximately $37 billion, representing a 3.2x increase over 2023 levels. The ROI data is compelling: enterprises report approximately $3.70 in value per dollar invested in AI, driven by automation of repetitive tasks, acceleration of knowledge work, and reduction in manual processing cycles.
The productivity gains are real and measurable. Organisations deploying AI-assisted coding tools report 25 to 40% reductions in development cycle time. Customer service functions using AI triage and response generation are handling 30 to 50% higher ticket volumes without proportional headcount increases. Finance teams using AI for document processing, reconciliation, and reporting are completing month-end cycles 40% faster. These are not projections — they are outcomes being reported in engagement after engagement across industries.
But the commercial structure underlying this transformation is fundamentally different from traditional software licensing, and that difference creates significant financial risk for enterprises that have not adapted their procurement approach. Understanding how AI is transforming your cost base — not just your capabilities — is the part most vendors do not explain at the point of sale.
Consumption Billing: The Budget Unpredictability Problem
Traditional enterprise software is licensed on a named-user or processor basis. The cost is predictable: you know at the start of the year what you will pay, because the licence count is fixed. AI services are fundamentally different. OpenAI, Azure OpenAI, Google Gemini, and Anthropic Claude all charge on a consumption basis — tokens processed, API calls made, compute cycles consumed. The cost varies with usage, which varies with adoption. As internal teams find new use cases and embed AI into more workflows, consumption grows in ways that are difficult to forecast.
This is not a theoretical risk. In 2025, 78% of enterprises reported unexpected AI billing charges — meaning the actual spend exceeded budgeted spend, in most cases because consumption grew faster than modelled. The gap is typically 40 to 60% in the first 12 months of enterprise-wide deployment, as AI tools move from controlled pilots to broad organisational use. Enterprises that committed to consumption-based AI agreements without internal usage governance frameworks found themselves managing budget overruns mid-year with limited contractual flexibility to adjust.
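The overrun dynamic described above can be sketched in a few lines of arithmetic. All figures below (the blended token rate, the baseline volume, the 8% monthly adoption growth) are illustrative assumptions, not vendor pricing:

```python
# Hypothetical consumption-growth model: illustrative figures only.
# Shows how a budget built on flat usage diverges from actual spend
# when adoption compounds month over month.

PRICE_PER_1K_TOKENS = 0.01             # assumed blended token rate (USD)
BUDGETED_MONTHLY_TOKENS = 500_000_000  # flat-usage assumption at signing
MONTHLY_ADOPTION_GROWTH = 0.08         # assumed 8% compound monthly growth

def annual_spend(monthly_tokens: float, growth: float = 0.0) -> float:
    """Sum 12 months of token spend, compounding usage by `growth` each month."""
    total = 0.0
    for _ in range(12):
        total += monthly_tokens / 1_000 * PRICE_PER_1K_TOKENS
        monthly_tokens *= 1 + growth
    return total

budgeted = annual_spend(BUDGETED_MONTHLY_TOKENS)  # flat forecast at signing
actual = annual_spend(BUDGETED_MONTHLY_TOKENS, MONTHLY_ADOPTION_GROWTH)
overrun_pct = (actual / budgeted - 1) * 100
print(f"budgeted ${budgeted:,.0f}, actual ${actual:,.0f}, overrun {overrun_pct:.0f}%")
```

With these assumptions, flat-usage budgeting lands at $60,000 while compounding adoption produces roughly $95,000, an overrun of about 58%, squarely inside the 40 to 60% range reported above.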
The solution is not to avoid AI — the productivity case is too strong. The solution is to build consumption governance into the procurement structure from the start: usage monitoring, departmental budgets, and contractual mechanisms that allow for periodic commitment adjustments. Our GenAI advisory practice works with enterprises to design the governance framework before the contract is signed, not after the first unexpected invoice arrives. For a structured evaluation checklist, see our AI procurement resources.
Reviewing an OpenAI, Azure OpenAI or Anthropic enterprise agreement?
We'll identify the lock-in provisions and pricing gaps before you sign — confidential and independent.
OpenAI Enterprise Agreements: Lock-in Provisions You Need to Know
OpenAI's enterprise agreements have matured significantly from the early API-access model. Current enterprise agreements include minimum commitment thresholds, data residency provisions, model version lock-in clauses, and auto-renewal terms that are structurally similar to the most aggressive legacy software vendor contracts. The minimum commitment structure means that enterprises signing an OpenAI enterprise deal are committing to a baseline spend regardless of actual consumption — removing some of the budget unpredictability risk, but replacing it with the risk of paying for capacity that is not used.
Lock-in provisions in OpenAI enterprise agreements typically restrict migration to competing models for the duration of the agreement term, or impose penalties for reducing committed spend below the minimum threshold. These clauses are not always prominently disclosed in sales conversations, and procurement teams unfamiliar with AI contract structures may not identify them until legal review. We always flag lock-in provisions as a primary risk item in any OpenAI enterprise agreement review because the ability to switch models — as the AI landscape continues to evolve rapidly — is a material long-term interest for the buyer.
The auto-renewal terms in OpenAI enterprise agreements often include price escalation provisions tied to model capability upgrades. As OpenAI introduces new model generations, existing enterprise contracts may automatically include access to the new models at a higher price point — without explicit buyer consent to the price increase. Understanding the exact trigger conditions for these escalations, and whether they can be contractually constrained, is a core element of our enterprise AI contract review work.
Azure OpenAI vs Direct OpenAI: The Pricing Model Comparison
Many enterprises access OpenAI models through Azure OpenAI Service rather than directly through OpenAI's API. This deployment path offers specific advantages — Azure's compliance and data residency infrastructure, integration with existing Azure MACC commitments, Microsoft's enterprise support framework, and the ability to run OpenAI models within your existing Azure security perimeter. For organisations with significant Azure commitments and strong compliance requirements, Azure OpenAI is often the right architectural choice.
However, the pricing difference between Azure OpenAI and direct OpenAI API access is significant and is frequently not modelled in the procurement process. Azure OpenAI typically carries a premium of 20 to 40% above direct OpenAI API pricing for equivalent token consumption, reflecting Microsoft's infrastructure margin and support overhead. For enterprises with moderate AI consumption, this premium may be justified by the compliance and integration benefits. For high-volume AI consumers, the cost difference at scale can reach seven figures annually — and the deployment-channel decision deserves explicit financial modelling.
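To make the premium concrete, here is a minimal sketch of the annualised cost gap at three consumption levels. The blended rate and the 30% uplift are assumptions (the uplift is the midpoint of the 20 to 40% range above); substitute your negotiated figures:

```python
# Illustrative comparison of direct OpenAI API cost vs an Azure OpenAI
# premium. Rates are assumptions for this sketch, not published prices.

DIRECT_RATE_PER_1M_TOKENS = 10.00  # assumed blended direct-API rate (USD)
AZURE_PREMIUM = 0.30               # assumed Azure OpenAI uplift

def annual_cost(monthly_tokens_m: float, rate_per_1m: float) -> float:
    """Annualised spend for a flat monthly token volume (in millions)."""
    return monthly_tokens_m * rate_per_1m * 12

deltas = {}
for monthly_m in (1_000, 10_000, 30_000):  # millions of tokens per month
    direct = annual_cost(monthly_m, DIRECT_RATE_PER_1M_TOKENS)
    azure = annual_cost(monthly_m, DIRECT_RATE_PER_1M_TOKENS * (1 + AZURE_PREMIUM))
    deltas[monthly_m] = azure - direct
    print(f"{monthly_m:>6,}M tokens/month: direct ${direct:>12,.0f}  "
          f"azure ${azure:>12,.0f}  premium ${azure - direct:>11,.0f}/yr")
```

At 30 billion tokens per month, a 30% premium on an assumed $10 per million tokens works out to roughly $1.08 million per year, which is where the seven-figure delta emerges.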
Additionally, the choice between Azure OpenAI and direct OpenAI affects your negotiating position with both Microsoft and OpenAI. Azure OpenAI consumption counts toward your Azure MACC commitment and can be used to justify MACC escalation — which may yield broader Azure pricing benefits. Direct OpenAI spending does not benefit Microsoft and gives you independent leverage with OpenAI. Enterprises with large Microsoft estates and significant AI spend should model both paths and negotiate accordingly. Our Microsoft advisory practice regularly advises on the Azure MACC versus direct OpenAI trade-off as part of holistic Microsoft licence optimisation.
42% of AI Initiatives Abandoned: What the Failure Rate Tells Us
Alongside the high adoption rates, 2025 also produced a striking failure statistic: 42% of enterprise AI initiatives were abandoned before reaching production scale. The causes cluster around three problems — technology integration complexity, data readiness, and cost overruns from consumption billing. The first two are engineering problems that organisations can address through better project discipline. The third is a procurement problem, and it is the most preventable of the three.
Enterprises that launched AI initiatives without modelling consumption costs typically found that the combination of token costs at scale, infrastructure overhead, and internal development time created a total cost of ownership that exceeded the approved budget before the initiative reached its intended user base. The projects were abandoned not because the technology did not work, but because the financial model was wrong from the start. Enterprises that built consumption modelling and contractual cost controls into the procurement process — with committed spend levels, usage monitoring, and contractual adjustment mechanisms — sustained their initiatives through to production and captured the ROI. The difference is in the procurement approach, not the technology.
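A pre-launch sanity check of this kind is simple to run. The figures below are placeholders, not benchmarks; the point is the shape of the calculation, not the numbers:

```python
# Illustrative first-year TCO check for an AI initiative. All inputs are
# placeholder assumptions. Token spend is often the smallest line item
# until usage scales; internal build effort frequently dominates.

def first_year_tco(monthly_token_spend: int, infra_monthly: int,
                   dev_one_off: int) -> int:
    """Annualise the recurring costs and add one-off development effort."""
    return monthly_token_spend * 12 + infra_monthly * 12 + dev_one_off

approved_budget = 500_000
tco = first_year_tco(monthly_token_spend=15_000,  # assumed at target scale
                     infra_monthly=8_000,         # hosting, vector DB, monitoring
                     dev_one_off=350_000)         # assumed internal build effort
# 15k*12 + 8k*12 + 350k = 180k + 96k + 350k = 626,000
print(f"first-year TCO ${tco:,.0f} vs approved ${approved_budget:,.0f}")
```

With these placeholder inputs, the first-year TCO of $626,000 exceeds the $500,000 approved budget before a single production user is onboarded, which is precisely the failure mode described above.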
AI's Impact on Traditional Software Licensing Estates
AI transformation is not limited to new AI-specific contracts — it is actively disrupting the economics of traditional enterprise software licensing. Microsoft 365 Copilot, priced at an additional $30 per user per month above existing M365 licences, is the most prominent example: enterprises must maintain their full M365 licence stack and add Copilot on top, meaning the incremental AI spend compounds an already substantial licensing cost. Our Microsoft advisory services include specific Copilot ROI modelling, because the decision to deploy Copilot at scale requires a realistic assessment of productivity gains versus incremental licensing cost — not the vendor's own case studies.
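One way to frame the Copilot decision is a per-user break-even: how much genuinely productive time must each licensed user save per week to cover the add-on price? The loaded hourly cost below is an assumption; the $30 figure is Microsoft's published add-on price:

```python
# Break-even sketch for Microsoft 365 Copilot. The loaded hourly cost
# is an assumption; plug in your own workforce figures.

COPILOT_PER_USER_MONTH = 30.0  # USD, published add-on price
LOADED_HOURLY_COST = 60.0      # assumed fully loaded cost per employee hour
WORK_WEEKS_PER_MONTH = 4.33

def breakeven_minutes_per_week(price: float, hourly: float) -> float:
    """Weekly minutes of productive time saved needed to cover the licence."""
    hours_per_month = price / hourly
    return hours_per_month / WORK_WEEKS_PER_MONTH * 60

minutes = breakeven_minutes_per_week(COPILOT_PER_USER_MONTH, LOADED_HOURLY_COST)
print(f"break-even: {minutes:.0f} minutes saved per user per week")
```

At a $60 loaded hourly cost, roughly seven minutes of real time saved per user per week covers the licence; the harder question, and the one the vendor's case studies gloss over, is whether reported time savings translate into productive output.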
SAP, Oracle, and Salesforce are embedding AI features into their core platforms and using these features to justify pricing increases at renewal. SAP's Joule AI, Oracle's Fusion AI capabilities, and Salesforce Einstein are often packaged as standard in higher-tier licence editions, requiring customers to migrate to more expensive SKUs to access functionality that may partially replicate what they are already building externally with more cost-effective AI tools. The value assessment — whether the vendor's embedded AI justifies the incremental licence cost versus deploying a third-party AI tool — is now a core component of every major software renewal strategy. For help building that analysis, book a call with our advisory team.