Why Microsoft's AI Terms Deserve Scrutiny in 2026
Anthropic joined Microsoft's subprocessor chain for M365 Copilot effective January 7, 2026, but the notification mechanism was a routine terms update — not a direct enterprise customer alert. Legal teams that have not reviewed their Microsoft AI contractual framework since late 2025 are working from an outdated baseline. The 2026 terms for Copilot, Azure OpenAI, Azure AI Foundry, and Agent 365 contain material changes that require immediate contractual review.
Large language models process customer prompts in ways that are fundamentally opaque. AI-generated outputs carry hallucination risks that do not map to traditional software defects. The subprocessor chain behind AI services is longer and more geographically distributed than the infrastructure behind Exchange Online or SharePoint. And the products themselves are changing quarterly, with Microsoft modifying what AI features are included at each SKU tier, what data is processed where, and which third-party models are involved, often without direct notification to enterprise customers.
Reviews completed against the 2024 or early-2025 terms no longer reflect the current framework: the 2026 terms for Copilot, Azure OpenAI, and Azure AI Foundry differ materially from previous versions, and some of those differences are not in customers' favour.
The Subprocessor Problem: Anthropic Inside Copilot
The most significant change to Microsoft's AI services terms in 2026 is the addition of Anthropic as a subprocessor for Microsoft 365 Copilot, effective January 7, 2026. This means that when enterprise users interact with certain Microsoft 365 Copilot features, their prompts and organisational data may be processed by Anthropic's infrastructure, not solely by Microsoft's.
Microsoft disclosed this change in its Microsoft Products and Services Data Protection Addendum (DPA), but the notification mechanism was a terms update, not a direct notification to affected customers. Many enterprise legal and privacy teams discovered the change during routine reviews rather than through proactive Microsoft communication.
The commercial and compliance implications are material. Anthropic models are out of scope for Microsoft's EU Data Boundary commitments. For enterprises with EU resident users, this means that Copilot interactions involving Anthropic models may involve data leaving the EU — a direct conflict with GDPR Article 44 restrictions on international data transfers unless appropriate safeguards are in place. Microsoft's Standard Contractual Clauses cover Azure OpenAI and Microsoft's own infrastructure, but their application to Anthropic's processing requires clarification in the contractual framework.
Legal teams should demand confirmation from Microsoft of the exact perimeter of Anthropic's subprocessing role, the legal mechanism under which EU data is transferred to Anthropic, and whether in-country or in-EU LLM processing options are available for their Copilot deployment. Microsoft has committed to adding in-country processing options, but the timeline and scope require contractual confirmation.
Data Usage and Training: What the Terms Actually Say
Microsoft's headline commitment on AI training data is clear and verifiable: prompts, responses, and data accessed through Microsoft Graph by Copilot or Azure OpenAI services are not used to train or improve foundation LLMs, including those used by Microsoft itself. This commitment appears in both the Microsoft Products and Services Data Protection Addendum and the Azure OpenAI service-specific terms.
However, legal teams should examine exactly what falls within this commitment and what falls outside it. The no-training commitment applies to Customer Data as defined in Microsoft's terms — data that customers submit to the service. It does not automatically apply to all data that flows through the AI pipeline.
Telemetry and Diagnostic Data
Microsoft collects telemetry and diagnostic data from Copilot deployments under its standard service operation terms. This data — which may include interaction patterns, feature usage, error logs, and performance metrics — is governed by a separate data handling framework. Microsoft uses this data to operate and improve its services, a category that includes AI service improvement. Legal teams should confirm whether telemetry data from Copilot deployments can be excluded or limited, and whether the default telemetry settings align with their organisation's data minimisation requirements under GDPR or equivalent frameworks.
Microsoft Copilot Chat vs. Enterprise Copilot
The no-training commitment applies to Microsoft 365 Copilot when deployed with Enterprise Data Protection enabled. The free Microsoft Copilot Chat experience, available to users without Copilot licensing, operates under different terms that do not include the same training data protections. Enterprises that have not completed their Copilot licensing and deployment face a gap period where employees using the free Copilot Chat may be contributing data to Microsoft's training pipeline under consumer-grade terms. This distinction must be addressed in any enterprise AI governance policy.
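As a minimal sketch of how a governance team might surface this gap, the following Python script uses Microsoft Graph to list accounts that have no Copilot license assigned and are therefore the most likely to fall back to the free Copilot Chat experience. The Copilot SKU part number and the token acquisition are assumptions to confirm against your own tenant, not confirmed values.

```python
"""Sketch: flag users with no Microsoft 365 Copilot license assigned.

Assumptions (verify against your tenant):
  - GRAPH_TOKEN holds a valid bearer token with User.Read.All scope.
  - COPILOT_SKU_PART_NUMBER matches the Copilot SKU name shown in your
    tenant's subscribedSkus list (the exact string varies by offer).
"""
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
COPILOT_SKU_PART_NUMBER = "Microsoft_365_Copilot"  # assumption - confirm in your tenant


def get_paged(url: str) -> list[dict]:
    """Follow @odata.nextLink until all pages are collected."""
    items = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")
    return items


# Map the Copilot SKU part number to its skuId GUID for this tenant.
skus = get_paged(f"{GRAPH}/subscribedSkus")
copilot_sku_ids = {
    s["skuId"] for s in skus if s["skuPartNumber"] == COPILOT_SKU_PART_NUMBER
}

# List users and flag anyone without a Copilot license: these accounts are
# the ones most likely to use the free Copilot Chat experience instead.
users = get_paged(f"{GRAPH}/users?$select=id,userPrincipalName,assignedLicenses")
unlicensed = [
    u["userPrincipalName"]
    for u in users
    if not copilot_sku_ids & {lic["skuId"] for lic in u.get("assignedLicenses", [])}
]

print(f"{len(unlicensed)} of {len(users)} users have no Copilot license assigned")
```

A report like this gives the governance team a concrete population to block or educate until enterprise licensing is in place.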
In one engagement, a financial services firm discovered that its existing Copilot deployment lacked EU data residency protection because of the Anthropic subprocessor role. Redress Compliance identified the gap, negotiated enhanced data handling terms, and prevented potential GDPR fines. The renegotiation avoided an estimated $2.3M in AI service exposure, and the engagement fee was less than 1.2% of the regulatory exposure identified.
Liability Caps and AI-Specific Exclusions
Microsoft's standard enterprise terms cap its aggregate liability at the fees paid under the agreement in the prior 12 months for the service that caused the loss. For an organisation paying $30 per user per month for Copilot across 5,000 users, the annual Copilot spend is $1,800,000 — and that is the maximum Microsoft liability under standard terms, regardless of the actual loss caused by a Copilot failure, data breach, or material AI output error.
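The cap arithmetic is worth modelling explicitly against realistic incident scenarios. The sketch below uses the figures from the example above plus a purely hypothetical incident exposure; substitute numbers from your own risk assessment.

```python
# Illustrative only: compare Microsoft's fee-based liability cap with a
# hypothetical AI incident exposure. All figures are assumptions, not quotes.
users = 5_000
copilot_price_per_user_month = 30.00          # USD, per the example above
annual_copilot_fees = users * copilot_price_per_user_month * 12

# Hypothetical exposure from a single AI-related incident (e.g. a personal
# data breach surfaced through Copilot): regulatory fines, remediation,
# customer notification. Replace with your own risk assessment.
hypothetical_incident_exposure = 12_000_000.00

liability_cap = annual_copilot_fees           # standard cap: prior 12 months' fees
uncovered_exposure = max(0.0, hypothetical_incident_exposure - liability_cap)

print(f"Annual Copilot fees / standard cap: ${liability_cap:,.0f}")
print(f"Hypothetical incident exposure:     ${hypothetical_incident_exposure:,.0f}")
print(f"Exposure above the cap:             ${uncovered_exposure:,.0f}")
```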
In practice, the liability cap is further limited by exclusions. Microsoft excludes liability for indirect damages, consequential losses, loss of profits, and loss of data in most scenarios. For AI services, Microsoft additionally excludes liability for outputs that the customer acts upon — meaning that if Copilot generates incorrect legal advice, an erroneous financial analysis, or a flawed compliance assessment that a user relies upon, the customer bears the operational risk of that reliance.
The Hallucination Liability Gap
Microsoft's AI terms include no warranty that AI-generated outputs are accurate, complete, or fit for purpose. This is standard across the AI industry — no LLM vendor warrants the factual accuracy of model outputs. But it creates a specific enterprise risk that must be addressed in internal AI governance, not in the contract. Any enterprise workflow that incorporates AI-generated outputs into decision-making without human review is accepting the full liability for those outputs. Legal teams should ensure that enterprise AI use policies specifically address this risk and that AI deployment frameworks include mandatory human review for outputs used in regulated decisions.
Azure AI Foundry: The 2026 Terms Shift
Azure AI Foundry replaced Azure AI Studio, consolidating multiple AI development and deployment services into a single platform. The terms for Azure AI Foundry differ materially from the terms of the services it replaced, and enterprises that had negotiated specific terms for Azure AI Studio, Cognitive Services, or individual Azure AI APIs need to re-examine whether those terms carry through to Azure AI Foundry.
Specifically, enterprises should confirm the data residency commitments applicable to Azure AI Foundry, whether negotiated model access terms from Azure OpenAI agreements extend to Azure AI Foundry deployments, and what the applicable SLAs and uptime commitments are for AI Foundry versus the individual services it consolidates. Microsoft has in several cases reduced its contractual commitments during service consolidations — the transition from individual Azure AI services to Azure AI Foundry requires a terms gap analysis.
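A terms gap analysis does not need to be elaborate; a structured side-by-side comparison is usually enough to surface commitments that weakened or disappeared in the consolidation. The sketch below uses hypothetical clause names and values purely for illustration; populate it from your own negotiated agreements.

```python
# Sketch of a terms gap analysis between a legacy Azure AI agreement and the
# Azure AI Foundry terms. Clause names and values are hypothetical examples.
legacy_terms = {
    "data_residency": "EU Data Boundary",
    "uptime_sla": "99.9%",
    "model_access": "negotiated capacity reservation",
    "breach_notification": "24 hours",
}

foundry_terms = {
    "data_residency": "EU Data Boundary (excludes some model hosts)",
    "uptime_sla": "99.9%",
    "breach_notification": "72 hours",
    # "model_access" absent: not yet confirmed for Foundry in this example
}

for clause, legacy_value in legacy_terms.items():
    foundry_value = foundry_terms.get(clause)
    if foundry_value is None:
        print(f"MISSING  {clause}: '{legacy_value}' has no Foundry equivalent")
    elif foundry_value != legacy_value:
        print(f"CHANGED  {clause}: '{legacy_value}' -> '{foundry_value}'")
```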
Agent 365 and the E7 Governance Layer
Agent 365, priced at $15 per user per month and included in M365 E7, functions as the enterprise AI agent governance and control plane. Legal teams evaluating E7 adoption need to understand what Agent 365 actually governs and what it does not.
Agent 365 provides the administrative framework for managing AI agents — deploying, monitoring, and governing autonomous AI workflows within the M365 ecosystem. What it does not provide is AI agent execution capability. Autonomous agents built on Agent 365 governance still require Copilot Studio or Microsoft Azure AI Foundry for the actual compute and model access. Those are separate consumption-based costs, billed per session in the case of Copilot Studio or per token and compute unit in the case of Azure AI Foundry.
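Because the governance SKU and the execution costs are billed separately, the total cost of an agent workload has to be modelled across all three layers. The sketch below uses purely illustrative session and token rates; actual Copilot Studio and Azure AI Foundry pricing comes from your own agreements and the current price list.

```python
# Illustrative cost model for an autonomous agent workload. Every rate below
# is an assumption for illustration, not a published Microsoft price.
governed_users = 2_000
agent365_per_user_month = 15.00              # Agent 365 price cited above

sessions_per_month = 50_000                  # Copilot Studio agent sessions
assumed_cost_per_session = 0.10              # hypothetical per-session rate

tokens_per_month = 400_000_000               # Azure AI Foundry model consumption
assumed_cost_per_1k_tokens = 0.002           # hypothetical blended token rate

governance_cost = governed_users * agent365_per_user_month
execution_cost = (
    sessions_per_month * assumed_cost_per_session
    + (tokens_per_month / 1_000) * assumed_cost_per_1k_tokens
)

print(f"Agent 365 governance layer:   ${governance_cost:,.0f}/month")
print(f"Execution (Studio + Foundry): ${execution_cost:,.0f}/month")
print(f"Total agent workload cost:    ${governance_cost + execution_cost:,.0f}/month")
```

Even with conservative assumptions, the consumption layer can rival or exceed the per-user governance fee, which is why all three cost components belong in the contractual review.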
The contractual framework for AI agents built on these platforms is a combination of the Agent 365 terms, the Copilot Studio terms, and the Azure AI Foundry terms. Any enterprise deploying autonomous AI workflows needs a consolidated legal review across all three contractual frameworks, not just the primary SKU agreement.
Eight Contractual Red Flags in Microsoft AI Terms
- Broad subprocessor update rights: Microsoft reserves the right to add and remove subprocessors on 30 to 90 days' notice. For AI services, this means the model infrastructure processing your data can change without renegotiation. Negotiate notification periods of at least 60 days and the right to object to new subprocessors that conflict with your data governance requirements. A simple change-monitoring sketch follows this list.
- EU Data Boundary gaps: The EU Data Boundary covers most Microsoft commercial services but explicitly excludes certain AI services and, as of 2026, excludes Anthropic's subprocessing role entirely. Do not assume EU Data Boundary coverage extends to all Copilot features.
- No accuracy warranty on AI outputs: Microsoft provides no warranty that AI-generated content is accurate. Any enterprise policy that treats AI outputs as authoritative without review is accepting operational liability that Microsoft has expressly excluded.
- Liability cap versus AI exposure: The liability cap tied to annual fees paid may be inadequate relative to the operational and regulatory risk of AI-related incidents. Negotiate for elevated caps on specific AI-related breach scenarios involving personal data.
- Telemetry defaults: Default Copilot telemetry settings may not align with GDPR data minimisation requirements. Review and configure telemetry at deployment, not after an audit.
- Free Copilot Chat data boundary: The enterprise data protection boundary does not extend to free Copilot Chat. Block or govern access to free Copilot Chat for all enterprise users until the licensing boundary is established.
- Azure AI Foundry transition gaps: Terms negotiated for Azure OpenAI or individual Azure AI APIs may not automatically carry through to Azure AI Foundry. Conduct a terms gap analysis before migrating workloads.
- Agent 365 scope versus execution: Enterprises that interpret Agent 365 as a complete AI agent solution will be surprised by Copilot Studio and Azure AI Foundry consumption charges. Ensure all AI agent cost components are disclosed and contractually addressed.
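Given those notification windows, and the fact that the Anthropic addition surfaced through a terms update rather than a direct alert, some teams automate change detection on the published subprocessor list between formal reviews. The sketch below is a generic change monitor; the URL is a placeholder for wherever your team sources the current list, not a confirmed Microsoft endpoint.

```python
"""Sketch: detect changes to a published subprocessor list between reviews.

The URL below is a placeholder - point it at wherever your team sources the
current Microsoft subprocessor disclosure (or a saved copy in your GRC tool).
"""
import hashlib
import pathlib
import requests

SUBPROCESSOR_LIST_URL = "https://example.com/microsoft-subprocessor-list"  # placeholder
STATE_FILE = pathlib.Path("subprocessor_list.sha256")

resp = requests.get(SUBPROCESSOR_LIST_URL, timeout=30)
resp.raise_for_status()
current_hash = hashlib.sha256(resp.content).hexdigest()

previous_hash = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None

if previous_hash is None:
    print("Baseline captured; no previous version to compare against.")
elif previous_hash != current_hash:
    print("Subprocessor list changed since last review - trigger legal review.")
else:
    print("No change since last review.")

STATE_FILE.write_text(current_hash)
```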
What Legal Teams Should Negotiate
Standard Microsoft AI terms are negotiable within the EA framework, particularly for enterprise customers with significant M365 and Azure spending. The following terms are achievable with appropriate leverage and timing.
- Subprocessor change notification: advance notification periods should be extended to a minimum of 60 days, with a right to object.
- EU data residency: commitments for Copilot features should be confirmed explicitly in writing and scoped to include or exclude specific AI services as required by your GDPR risk assessment.
- AI-specific liability caps: caps for personal data breaches involving AI services should be negotiated separately from the general service liability cap, with reference to potential GDPR regulatory exposure.
- Telemetry data handling: should be confirmed in the DPA, with explicit acknowledgment of the enterprise's data minimisation rights.
- Audit rights over AI subprocessors: the right to receive third-party audit reports (SOC 2, ISO 27001) for any subprocessor processing personal data should be included as a contractual right.
None of these are novel requests. Microsoft has agreed to all of them in enterprise negotiations. The key is raising them explicitly and with sufficient preparation time — not on the last day before renewal signing.
Microsoft AI Contracts Legal Review Guide
Download our Microsoft AI services contractual review checklist, covering Copilot, Azure OpenAI, Azure AI Foundry, and Agent 365, with negotiation targets for each key clause and for the Anthropic subprocessor framework.