How to use this assessment: Work through each item and mark it complete once confirmed. Items flagged High Risk represent the most common sources of material overspend. A score of 11 or more places you in the well-governed band.

Scoring Guide
Tally your confirmed items against these benchmarks to determine your current maturity level.
0 – 5 High Exposure
6 – 10 Partial Governance
11 – 20 Well Governed

Section 1

1. You have built your AI application layer using a vendor-agnostic API gateway or abstraction layer rather than direct provider SDK calls. (High Risk)
The most common and most costly form of AI lock-in is architectural: thousands of lines of code making direct calls to a single provider's SDK, using that provider's function-calling schema, tool definitions, and response format. Migrating to a different provider then requires rewriting the application layer, not just changing an API key. Building against a neutral AI gateway, such as LiteLLM, Portkey, or a custom abstraction layer, enables model switching without application code changes.
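The abstraction-layer pattern can be sketched in a few lines. This is a minimal, illustrative sketch, not any particular gateway's API: stub functions stand in for real SDK calls, and all provider and model names are assumptions for demonstration.

```python
from typing import Callable, Dict, List

# Hypothetical provider adapters. Each maps a neutral request onto one
# vendor's SDK; the stubs below return tagged strings in place of real calls.
def _call_openai(model: str, messages: List[dict]) -> str:
    return f"openai:{model}"  # a real adapter would invoke the OpenAI SDK here

def _call_anthropic(model: str, messages: List[dict]) -> str:
    return f"anthropic:{model}"  # a real adapter would invoke the Anthropic SDK here

# Application code depends only on this registry, never on a vendor SDK.
PROVIDERS: Dict[str, Callable[[str, List[dict]], str]] = {
    "openai": _call_openai,
    "anthropic": _call_anthropic,
}

def complete(provider: str, model: str, messages: List[dict]) -> str:
    """Route a chat completion through the configured provider adapter."""
    return PROVIDERS[provider](model, messages)
```

With this shape, switching vendors is a configuration change (a different key in `PROVIDERS`) rather than a rewrite of every call site.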
2. You have avoided using any single provider's proprietary tool-use or function-calling schema as the native interface for your agent or workflow architecture. (High Risk)
OpenAI's function-calling schema, Anthropic's tool-use schema, and Google's function declarations all use different formats. An agentic workflow built natively against OpenAI's function-calling format requires significant rework to migrate to Anthropic or Vertex AI. Standardise on an abstraction layer (such as OpenAI-compatible APIs where available, or a neutral orchestration framework) to maintain multi-vendor flexibility.
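To make the format differences concrete, the sketch below keeps one neutral tool definition and converts it at the edge into two vendor wire formats. Field names follow OpenAI's `tools` schema and Anthropic's tool-use schema as publicly documented; verify both against current vendor references before relying on them.

```python
# A provider-neutral tool definition. The application owns this shape;
# per-vendor formatting happens only in the converters below.
NEUTRAL_TOOL = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def to_openai(tool: dict) -> dict:
    """OpenAI wraps the JSON Schema under a 'function' envelope."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool["description"],
            "parameters": tool["parameters"],
        },
    }

def to_anthropic(tool: dict) -> dict:
    """Anthropic uses a flat object with an 'input_schema' key."""
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }
```

Because the neutral definition is the source of truth, adding a third provider means writing one more converter, not touching every tool in the codebase.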
3. You have confirmed that your prompt library and system prompts are stored in a provider-neutral format and tested against at least two different model families. (Medium Risk)
Prompts that have been optimised for one model's instruction-following behaviour, response format, and reasoning approach may produce materially worse outputs on a different model without significant re-engineering. If your prompt library has only ever been tested on a single provider's models, it represents hidden lock-in that will be discovered only when you attempt to migrate. Maintain a test suite and run it against alternative models quarterly to preserve portability.
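A prompt regression suite does not need heavyweight tooling. The sketch below is an assumed minimal harness: each case pairs a prompt with a cheap programmatic check (substring or schema checks are typical, since exact matching is too brittle for model output), and stub functions stand in for the two providers' APIs.

```python
from typing import Callable, Dict

# Hypothetical regression cases: prompt plus a cheap check on the response.
CASES = [
    {"prompt": "Reply with the word OK.", "check": lambda out: "OK" in out},
    {"prompt": "Return the number 4.", "check": lambda out: "4" in out},
]

def run_suite(model_fn: Callable[[str], str]) -> Dict[str, int]:
    """Run every case against one model callable and tally passes."""
    passed = sum(1 for c in CASES if c["check"](model_fn(c["prompt"])))
    return {"passed": passed, "total": len(CASES)}

# Stub models simulate two providers; real code would call each vendor's API.
def model_a(prompt: str) -> str:
    return "OK, the answer is 4"

def model_b(prompt: str) -> str:
    return "Certainly. OK. 4."

results = {name: run_suite(fn) for name, fn in [("model_a", model_a), ("model_b", model_b)]}
```

Running the same suite against a second model family each quarter turns "our prompts probably port" into a measured pass rate.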
4. You have verified that your embedding models and vector database are not exclusively dependent on a single provider's embedding API. (High Risk)
Embedding lock-in is particularly severe because changing embedding models requires re-generating every embedding in your vector database. If your retrieval-augmented generation architecture uses OpenAI's text-embedding-3 exclusively, switching to Anthropic or Google as your inference provider while retaining OpenAI for embeddings leaves a permanent dependency on OpenAI regardless of where your inference moves. Evaluate portable embedding alternatives, such as open-source models from Hugging Face that can run on any infrastructure, alongside multi-cloud commercial options such as Cohere.
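One practical mitigation is to record which embedding model produced each stored vector, so a future migration can be scoped precisely instead of forcing a blind full re-embed. This is an illustrative sketch with assumed record shapes and model names, not a specific vector database's API.

```python
# Each stored record carries the embedding model that produced its vector,
# so a migration can identify exactly which records need re-embedding.
store = [
    {"id": "doc-1", "model": "text-embedding-3-small", "vector": [0.1, 0.2]},
    {"id": "doc-2", "model": "bge-base-en-v1.5", "vector": [0.3, 0.4]},
    {"id": "doc-3", "model": "text-embedding-3-small", "vector": [0.5, 0.6]},
]

def reembed_scope(records: list, retiring_model: str) -> list:
    """Return the ids that must be re-embedded if a model is retired."""
    return [r["id"] for r in records if r["model"] == retiring_model]
```

With this metadata in place, the cost of leaving an embedding provider is a query, not an archaeology project.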
5. You have mapped every external API call your AI application makes to its provider and documented whether equivalent functionality is available from an alternative provider. (Medium Risk)
Lock-in mapping exercises consistently reveal secondary dependencies that were not visible from the primary model API. Image generation, moderation, speech-to-text, and text-to-speech capabilities are often consumed from the same primary provider by default rather than by deliberate selection. Map every external AI API call to its provider, document whether viable alternatives exist, and identify which dependencies have no short-term alternative; these are your true lock-in points.
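The mapping exercise can be captured in a simple machine-readable inventory. The capabilities and provider names below are illustrative assumptions; the point is the shape of the register and the query that surfaces true lock-in points.

```python
# Illustrative dependency inventory: every external AI API the application
# calls, with any known substitute providers documented alongside.
DEPENDENCIES = [
    {"capability": "chat", "provider": "openai", "alternatives": ["anthropic", "google"]},
    {"capability": "moderation", "provider": "openai", "alternatives": []},
    {"capability": "speech-to-text", "provider": "openai", "alternatives": ["deepgram"]},
]

def true_lockin_points(deps: list) -> list:
    """Capabilities with no documented alternative are the real lock-in."""
    return [d["capability"] for d in deps if not d["alternatives"]]
```

Keeping this register in version control means each new AI feature must declare its dependency, which is exactly the review step that catches silent lock-in.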

Section 2

6. You have confirmed in your contract that all your input data, output data, and fine-tuning datasets can be exported in standard, machine-readable formats at any point during the contract. (High Risk)
Standard API terms for all major AI providers state that you retain ownership of your input data and outputs. However, ownership does not guarantee portability. Confirm explicitly in your enterprise agreement that all data can be exported on request, in a documented format, within a specified timeframe, and that no proprietary encoding prevents you from using that data with an alternative provider.
7. You have confirmed that fine-tuned model weights, or a training recipe sufficient to reproduce equivalent performance, are exportable under your contract terms. (High Risk)
Fine-tuning a model on a provider's proprietary infrastructure creates a particularly severe form of lock-in: the fine-tuned model exists only within that provider's infrastructure and cannot be moved. Confirm whether fine-tuned model weights are exportable under your contract. If they are not, as is the case with most hosted fine-tuning services, factor the cost of re-training a comparable model on an alternative provider's infrastructure into your lock-in exposure calculation.
8. You have assessed the cost and complexity of migrating your historical conversation and interaction data to an alternative provider's format. (Medium Risk)
Every AI application accumulates historical interaction data (conversation logs, user feedback, evaluation results) that is used for monitoring, fine-tuning, and compliance. This data is typically stored in a provider-specific format or linked to a provider-specific session or user identifier. Before scaling, assess the migration complexity and confirm that your data architecture does not create a proprietary dependency on the provider's storage or session management system.
9. You have documented all personally identifiable information processed through each AI vendor's API and confirmed that deletion is achievable within your regulatory requirements. (Medium Risk)
Data deletion rights, particularly under the GDPR, the CCPA, and the EU AI Act, require that you can delete specific individuals' data from all systems where it has been processed. AI vendors typically offer bulk deletion at contract termination but may not support granular, per-subject deletion within active contracts. Confirm the deletion mechanism in your enterprise agreement before processing personal data at scale through any AI API.
10. You have reviewed your current AI vendor contracts for automatic renewal clauses, minimum commit escalations, and price reset terms that could trap you on unfavourable terms. (High Risk)
Enterprise AI contracts typically include 30-to-90-day non-renewal notice windows. Missing a notice window on an annual commit can result in automatic renewal at the same or a higher rate for another full year. Minimum commit escalations, where the annual commit increases by 10 to 20 percent on renewal, are increasingly common as AI vendors shift their focus from growth to profitability. Calendar all renewal notice windows and review contract terms no later than 90 days before each renewal date.
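The notice-window arithmetic is simple enough to automate against a contract register. The vendor names, dates, and notice periods below are assumptions for illustration.

```python
from datetime import date, timedelta

def notice_deadline(renewal_date: date, notice_days: int) -> date:
    """Last day to serve non-renewal notice before the contract auto-renews."""
    return renewal_date - timedelta(days=notice_days)

# Illustrative contract register; feed this from your procurement system.
contracts = [
    {"vendor": "vendor-a", "renewal": date(2026, 7, 1), "notice_days": 90},
    {"vendor": "vendor-b", "renewal": date(2026, 3, 15), "notice_days": 30},
]

deadlines = {
    c["vendor"]: notice_deadline(c["renewal"], c["notice_days"]) for c in contracts
}
```

Wiring these dates into a shared calendar, with a reminder well ahead of each deadline, is the cheapest governance control in this entire checklist.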

Section 3

11. You have confirmed that your AI enterprise agreements include a portability clause specifying that you can export your data, configuration, and model artefacts within 30 days of termination. (High Risk)
Portability clauses are negotiable in enterprise AI agreements but are absent from standard terms. Without one, your ability to exit depends on the vendor's post-termination data export process, which is rarely governed by contractual SLAs. Negotiate a portability clause that specifies the export format, the timeline, and the mechanism, including a test export right before signing.
12. You have reviewed whether your AI vendor contracts contain exclusivity or preferred-provider clauses that restrict your ability to use competing AI services. (Medium Risk)
Some enterprise AI agreements, particularly those bundling AI capabilities within broader cloud or software commitments, include preferred-provider or de facto exclusivity clauses. Microsoft's Copilot commitments within Microsoft 365 E5 and Azure OpenAI can create commercial pressure to consolidate on Microsoft's AI stack. Review your existing enterprise software agreements for AI-related exclusivity provisions before signing a separate enterprise AI agreement with a competing vendor.
13. You have confirmed that your AI vendor agreement does not restrict your ability to develop or train competing AI models using outputs generated by their service. (Medium Risk)
Standard AI enterprise terms prohibit using the vendor's model outputs to train competitive models, meaning models intended to compete with the vendor's own AI services. This restriction is standard and generally acceptable. However, it can conflict with fine-tuning workflows that use AI-generated synthetic data or with internal model development programmes that use vendor outputs as a training signal. Confirm the restriction is scoped appropriately for your specific use case.
14. You have assessed which of your current AI workflows depend on proprietary features, such as OpenAI's Assistants API, Anthropic's Projects, or Google's Grounding, that have no direct equivalent at competing providers. (High Risk)
Proprietary orchestration features create the most durable form of AI lock-in because they are architectural, not just model-level. An application built on OpenAI's Assistants API, with threads, runs, and file attachments managed through OpenAI's infrastructure, cannot be migrated to Anthropic without a full architectural rewrite. Document every proprietary feature your applications depend on and assess whether equivalent open-source or multi-vendor alternatives could serve the same function.
15. You have evaluated whether your current AI provider is also your primary authentication or identity provider for AI-related user access, and assessed the dependency risk. (Medium Risk)
Several AI platforms, particularly enterprise chatbot products, integrate identity and access management with the AI provider's user management system. If your AI provider also manages user authentication for AI features, you have a dependency that goes beyond model access. Map any identity dependencies and confirm that your identity provider integration is portable to an alternative AI platform without requiring re-provisioning of user access.

Section 4

16. You have reviewed your roadmap dependency risk, specifically whether your planned AI features depend on capabilities that only one vendor has announced and not yet shipped. (Medium Risk)
AI roadmap risk is high in 2026 because capability gaps between vendors are closing rapidly. Features that only one vendor offers today, such as specific reasoning model capabilities, multimodal processing functions, or agent orchestration primitives, may have equivalents from competing vendors within 6 to 12 months. Building a strategic dependency on a vendor's unshipped roadmap creates lock-in that is invisible until the roadmap slips or changes direction.
17. You have assessed the cost of recreating your current AI application portfolio from scratch on an alternative provider's infrastructure, as a measure of your total lock-in exposure. (High Risk)
Total lock-in exposure is best quantified as the cost of migration: engineering days to re-architect the application layer, the cost to re-generate all embeddings, the cost to re-fine-tune or replicate fine-tuned models, and the business continuity cost during transition. Real-world AI migration projects for mid-scale enterprise deployments (after Builder.ai's collapse, NexGen Manufacturing spent $315,000 migrating 40 workflows in three months) suggest that lock-in exposure is consistently underestimated during the procurement phase. Quantify your exit cost annually as part of your AI governance review.
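The four cost components above can be captured in a trivial model. Every figure in this sketch is an assumption to be replaced with your own estimates; the value is in forcing each component to be stated explicitly.

```python
def exit_cost(eng_days: int, day_rate: float, reembed_cost: float,
              retrain_cost: float, continuity_cost: float) -> float:
    """Estimated total cost of migrating off the incumbent provider.

    eng_days * day_rate  -> re-architecting the application layer
    reembed_cost         -> re-generating all embeddings
    retrain_cost         -> re-fine-tuning or replicating models
    continuity_cost      -> business disruption during transition
    """
    return eng_days * day_rate + reembed_cost + retrain_cost + continuity_cost

# All inputs below are placeholder assumptions, not benchmarks.
estimate = exit_cost(eng_days=120, day_rate=900.0,
                     reembed_cost=15_000.0, retrain_cost=40_000.0,
                     continuity_cost=25_000.0)
```

Recomputing this figure annually, and after each new proprietary dependency, makes the silent accumulation of lock-in visible as a number the business can act on.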
18. You have documented and tested a failover procedure that routes AI API requests to a backup provider when your primary provider experiences an outage. (High Risk)
Production AI applications with no failover to an alternative provider carry single-point-of-failure exposure. All major AI API providers have experienced significant outages in the past 12 months, with mean time to restore ranging from 30 minutes to several hours. An AI gateway with automatic failover routing, for example switching from OpenAI to Anthropic when error rates exceed a threshold, is achievable within a single engineering sprint and eliminates the availability dependency on any single provider.
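The core of a failover route fits in one function. This is a deliberately simplified sketch: production gateways fail over on error-rate thresholds, timeouts, and health checks, whereas this version just falls through on any exception, with stubs simulating a primary outage and a healthy backup.

```python
from typing import Callable, List, Tuple

def with_failover(providers: List[Tuple[str, Callable[[str], str]]],
                  prompt: str) -> Tuple[str, str]:
    """Try each provider in priority order; return (provider_name, response)."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code would filter retryable errors
            errors.append((name, repr(exc)))
    raise RuntimeError(f"All providers failed: {errors}")

# Stubs: the primary simulates an outage, the backup responds normally.
def primary(prompt: str) -> str:
    raise TimeoutError("primary unavailable")

def backup(prompt: str) -> str:
    return "backup response"

provider_used, answer = with_failover([("primary", primary), ("backup", backup)], "hello")
```

Note that failover only works if the backup route is exercised regularly; an untested fallback path is the availability equivalent of an untested backup tape.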
19. You have a documented AI vendor migration playbook that identifies which workloads could be migrated in under 30 days, under 90 days, and over 90 days, and assigns migration ownership. (Medium Risk)
Forced AI vendor migrations, triggered by vendor financial distress, a security incident, regulatory action, or a dramatic price increase, happen faster than planned. Teams with a documented migration playbook and pre-tested alternative vendor configurations can execute migrations in days rather than months. Teams without a playbook spend the first third of a crisis period just identifying dependencies before they can begin migration work.
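The playbook's tiering can be kept as a small machine-readable register. Workload names, effort estimates, and owners below are illustrative assumptions; the tier boundaries follow the under-30 / under-90 / over-90 day bands described above.

```python
# Illustrative migration register: estimated effort and an accountable
# owner per workload. Maintain this alongside the playbook document.
WORKLOADS = [
    {"name": "support-chatbot", "est_days": 20, "owner": "platform-team"},
    {"name": "doc-search-rag", "est_days": 75, "owner": "search-team"},
    {"name": "fine-tuned-classifier", "est_days": 120, "owner": "ml-team"},
]

def tier(est_days: int) -> str:
    """Bucket a workload into the playbook's three migration bands."""
    if est_days < 30:
        return "under-30-days"
    if est_days < 90:
        return "under-90-days"
    return "over-90-days"

tiers = {w["name"]: tier(w["est_days"]) for w in WORKLOADS}
```

In a crisis, this register answers the first-third-of-the-clock question (what depends on the vendor, and who owns moving it) before the clock starts.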
20. You conduct an annual AI vendor lock-in review that quantifies total exit cost, identifies new proprietary dependencies introduced during the year, and updates the migration playbook. (Medium Risk)
Lock-in accumulates silently. Each new AI feature deployed on a proprietary primitive, each new dataset embedded using a single provider's API, and each new workflow built against a provider-specific orchestration API adds to the exit cost without triggering a governance review. Formalise an annual lock-in review, aligned with your AI contract renewal calendar, that measures exit cost, identifies new dependency categories, and updates the migration playbook before lock-in becomes prohibitive.

Ready to optimise your AI contract and cost position?

Download our AI Platform Contract Negotiation Guide — covering all major vendors, pricing structures, and negotiation tactics.
Download Free Guide →

Next Steps

Score your confirmed items against the benchmarks above. If you are in the High Exposure or Partial Governance bands, prioritise the items flagged High Risk — these represent the most common sources of material overspend and are addressable within a single procurement or FinOps cycle.

Redress Compliance works exclusively on the buyer side, with no vendor affiliations. Our GenAI advisory practice has benchmarked AI costs, negotiated enterprise AI contracts, and built governance frameworks across 500+ enterprise engagements. Contact us for a confidential review of your AI cost and contract position.