In one engagement, a US healthcare enterprise deploying Microsoft 365 Copilot for 8,000 users discovered that the default contract terms permitted Microsoft to use their patient workflow data for model training. Redress negotiated a full data exclusion clause and a binding audit right. The revised terms saved the client from a potential HIPAA exposure that legal counsel estimated at $6M–$12M in regulatory risk.

Why AI Data Terms Are Different from Standard Cloud Terms

The data governance principles that enterprises applied to cloud services over the past decade — data residency, access controls, retention policies, DPA compliance — translate imperfectly to AI services. Traditional cloud services process data in defined, auditable ways: storing a file, sending an email, running a report. AI services process data in ways that are probabilistic, context-dependent, and not fully transparent even to the vendor.

When an employee submits a prompt to Microsoft 365 Copilot, that prompt may retrieve data from SharePoint, Exchange, Teams, and OneDrive simultaneously. The assembled context is passed to a large language model — which may be Microsoft's own model, OpenAI's GPT-4, or as of January 2026, Anthropic's Claude — and a response is generated. The exact path of that data, and the extent to which it is retained in any form within the AI infrastructure, is governed by terms that are more complex and less established than equivalent terms for conventional cloud services.

The practical implication is that the standard DPA review checklist does not cover AI data adequately. Enterprises that apply their conventional cloud data review to Microsoft AI services will leave significant privacy and compliance gaps unaddressed. The following sections address each gap specifically.

The No-Training Commitment: What It Covers and What It Does Not

Microsoft's central AI data commitment — that it will not use customer prompts, outputs, or Microsoft Graph data to train or improve foundation LLMs — is genuine and consistently applied for enterprise customers with Enterprise Data Protection enabled. The commitment appears in both the Data Protection Addendum and service-specific terms for Azure OpenAI and Microsoft 365 Copilot commercial deployments.

However, the no-training commitment is scoped to Customer Data as defined in Microsoft's terms. Several data categories sit outside or at the edge of this definition and deserve explicit attention in contract negotiations.

Telemetry Data

Microsoft collects operational telemetry from enterprise deployments under its standard service operation terms. This telemetry — which may include interaction patterns, feature usage rates, response latencies, and anonymised error logs — is governed by different terms than Customer Data. Microsoft uses telemetry to operate and improve its services, including AI service improvement. The boundary between "operating the service" and "training the model" is not always clearly demarcated in the telemetry terms.

Enterprises should request explicit confirmation of what telemetry is collected from Copilot and Azure OpenAI deployments, whether any telemetry data contributes to AI model training or fine-tuning, and what the enterprise's data minimisation rights are under the telemetry terms. If telemetry cannot be disabled entirely, request a written description of what is retained and for how long.

Interaction Metadata

Copilot interaction metadata — including which users submitted prompts, when, how frequently, and what Microsoft Graph data sources were accessed — is retained by Microsoft for operational purposes. This metadata is technically separate from the content of the prompt and may fall outside the no-training commitment's scope. Confirm the retention period for interaction metadata and whether it is treated as Customer Data or as Diagnostic Data under Microsoft's terms.

Microsoft's no-training commitment applies to Customer Data with Enterprise Data Protection enabled. Telemetry data, interaction metadata, and data processed by free Copilot Chat without EDP enabled may all sit outside this commitment. Most enterprises discover these gaps only after deployment.

Copilot Data Boundary: The Free versus Enterprise Gap

Microsoft offers Copilot access in two distinct forms. Microsoft 365 Copilot — the commercial add-on at $30 per user per month, or included in M365 E7 at $99 per user per month — operates with Enterprise Data Protection, meaning prompts and outputs are treated as Customer Data and protected by the no-training commitment and the enterprise DPA framework.

The free Microsoft Copilot Chat, accessible to any user with a Microsoft account, operates under consumer-grade terms. It does not automatically carry Enterprise Data Protection, and its data handling terms are materially different from the commercial M365 Copilot terms. The risk is straightforward: employees who access the free Copilot Chat using their enterprise Microsoft account may be contributing organisational data — including confidential business information — to Microsoft's AI processing under consumer terms rather than enterprise terms.

Many enterprises have not addressed this gap in their deployment governance. Until Enterprise Data Protection is fully configured and all users are directed to use the licensed M365 Copilot rather than free Copilot Chat, there is a meaningful GDPR and confidentiality risk. Negotiate with Microsoft for administrative controls that restrict access to free Copilot Chat at the tenant level for commercial licences — Microsoft provides this capability, but it is not enabled by default and must be actively configured.

Data Residency for AI: The EU Problem

Microsoft's EU Data Boundary commitment covers the primary storage and processing of Customer Data for most commercial Microsoft cloud services within the EU. This has been a significant data sovereignty achievement for EU-based enterprises. However, the EU Data Boundary has specific exclusions that are directly relevant to AI services in 2026.

Azure OpenAI service, for most configurations, is covered by the EU Data Boundary when the service is deployed in an EU Azure region. M365 Copilot core processing is covered for EU customers. What is not covered is the Anthropic subprocessing role that was added to M365 Copilot in January 2026. Microsoft's documentation explicitly states that Anthropic models are out of scope for the EU Data Boundary and for in-country LLM processing commitments where available.

For EU-resident enterprises, this creates a GDPR Article 44 compliance question: what legal mechanism covers the transfer of EU personal data to Anthropic's infrastructure, which may be located outside the EU? Microsoft's Standard Contractual Clauses cover its own processing, but the application of those SCCs to Anthropic's processing as a subprocessor requires verification. Demand written confirmation from Microsoft of the specific legal transfer mechanism for Anthropic subprocessing affecting EU data, and verify whether this transfers risk to the enterprise or whether Microsoft's subprocessor contract provides adequate protection.

EU data governance for Microsoft AI services is not a checkbox exercise.

Our Microsoft licensing advisory specialists work with legal and privacy teams to close the gaps between Microsoft's published commitments and the enterprise's actual GDPR obligations.

Negotiating Better AI Data Terms: What Is Achievable

Microsoft's AI data terms contain more negotiating room than most enterprises discover, particularly for accounts with significant M365 and Azure commitments. The following terms are achievable through focused negotiation at the EA level.

Telemetry Scope Limitation

Enterprises can negotiate for written confirmation that telemetry collected from licensed Copilot deployments is limited to service operation purposes and will not be used to train, fine-tune, or improve AI models — extending the no-training commitment to telemetry, not just Customer Data. This requires explicit request and is not available as a standard term, but Microsoft has agreed to it for large commercial accounts with documented compliance requirements.

Extended Data Retention Controls

The standard 90-day post-termination data retention period can be shortened through negotiation for enterprises with aggressive data minimisation requirements. For industries with specific regulatory requirements — financial services, healthcare, legal — a contractually confirmed 30-day deletion window with a written deletion certificate is achievable as a custom term.

Subprocessor Approval Rights

Rather than accepting the standard notification-and-continue mechanism for subprocessor changes, enterprises with significant AI exposure can negotiate for a more robust approval process. Specifically:

  • Written notification 60 days in advance to a named privacy contact.
  • A right to object within 30 days, with a specific alternative remedy if the objection is upheld (rather than the standard "you may terminate the affected service," which has limited practical value).
  • Confirmation that Microsoft will not route EU personal data through any new subprocessor before the notification period expires.
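The negotiated timeline is easier to operationalise when the key compliance dates are computed mechanically. The sketch below assumes the 60-day notice and 30-day objection targets described above; the day counts are negotiation targets, not Microsoft's standard terms, and the function name is illustrative.

```python
from datetime import date, timedelta

def subprocessor_timeline(notification_date: date,
                          notice_days: int = 60,
                          objection_days: int = 30) -> dict:
    """Compute compliance dates for a negotiated subprocessor-change
    mechanism: advance written notice with an objection window opening
    on the notification date. Day counts mirror the negotiation targets
    discussed above (assumptions, not standard terms)."""
    return {
        "notification_received": notification_date,
        # Last day to lodge a written objection with Microsoft.
        "objection_deadline": notification_date + timedelta(days=objection_days),
        # The new subprocessor may not process EU personal data
        # before the full notice period has expired.
        "earliest_processing_date": notification_date + timedelta(days=notice_days),
    }

dates = subprocessor_timeline(date(2026, 3, 1))
print(dates["objection_deadline"])        # 2026-03-31
print(dates["earliest_processing_date"])  # 2026-04-30
```

Feeding these dates into the privacy team's compliance calendar ensures an objection is never forfeited simply because the 30-day window passed unnoticed.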

AI Data Processing Transparency Report

For enterprises running significant workloads through Azure OpenAI — particularly sensitive workloads involving personal data or proprietary information — Microsoft can be asked to commit to providing an annual transparency report documenting the data processing activities under the Azure OpenAI DPA, including any model changes that affect data processing scope. This is a new request category that Microsoft's legal team is not accustomed to, but it is consistent with GDPR accountability obligations and achievable as a custom term for large Azure OpenAI customers.

Agent 365 and Autonomous Agent Data: The Next Privacy Frontier

Agent 365 is included in M365 E7 and provides the governance layer for enterprise AI agents. As enterprises begin deploying autonomous agents — AI systems that can take actions, access data sources, and execute workflows without per-step human approval — the data privacy implications extend significantly beyond those of conversational Copilot.

An autonomous agent operating under Agent 365 governance might access SharePoint documents, retrieve data from connected Dynamics 365 instances, call external APIs, and generate and send communications — all within a single automated workflow. Each of these data accesses is a potential privacy processing event. The data processing record required under GDPR Article 30 for such agents is substantially more complex than for human-operated systems, and Microsoft's current Agent 365 documentation does not provide explicit guidance on how agent-initiated data access is reflected in Microsoft's DPA obligations.

Enterprises deploying agents under Agent 365 should request specific DPA coverage for agent-initiated data processing, confirmation of the data retention period for agent execution logs, and clarity on whether agent interactions are covered by the same no-training commitment as Copilot interactions. These terms are not yet established as standard — which means they are negotiable now, before Microsoft standardises less favourable defaults.
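Because Microsoft's documentation does not yet specify how agent-initiated access maps to DPA obligations, enterprises can start building their own Article 30 record structure now. The sketch below is a minimal, hypothetical schema for capturing agent data-access events; the field names and values are illustrative assumptions, not an Agent 365 API or schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentProcessingEvent:
    """One agent-initiated data access, captured for a GDPR Article 30
    record of processing activities. All field names are illustrative
    assumptions, not an Agent 365 schema."""
    agent_id: str
    data_source: str        # e.g. a SharePoint site or Dynamics 365 entity
    purpose: str            # documented processing purpose
    lawful_basis: str       # the Article 6 basis relied upon
    personal_data: bool     # whether personal data was in scope
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical events from a single automated workflow run.
events = [
    AgentProcessingEvent("invoice-agent-01", "Dynamics 365 / invoices",
                         "accounts payable automation",
                         "legitimate interests", personal_data=True),
    AgentProcessingEvent("invoice-agent-01", "SharePoint / vendor-docs",
                         "contract lookup",
                         "legitimate interests", personal_data=False),
]

# The Article 30 record needs the events that touched personal data.
article30_entries = [asdict(e) for e in events if e.personal_data]
print(len(article30_entries))  # 1
```

Maintaining this record independently of Microsoft's logs also gives the enterprise evidence for the retention-period and no-training confirmations requested above.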

Six Negotiation Principles for AI Data Terms

  • Go beyond the DPA: Microsoft's standard DPA is GDPR-aligned but was designed for conventional cloud services. AI-specific addenda or supplementary terms are available through negotiation and should be pursued separately from the standard DPA review.
  • Define Customer Data explicitly for AI: Negotiate a contractual definition of Customer Data that explicitly includes prompt content, retrieved data, AI-generated outputs, interaction logs, and agent execution records — leaving no ambiguity about what the no-training commitment covers.
  • Address the free Copilot Chat gap now: Before enterprise Copilot is fully deployed, ensure administrative controls restricting free Copilot Chat access are in place and documented in the enterprise's AI governance policy.
  • Get the Anthropic SCC chain in writing: For EU enterprises, request written documentation of the Standard Contractual Clauses or alternative transfer mechanism applying to Anthropic's subprocessing role for M365 Copilot.
  • Build agent data governance before deployment: Do not wait until autonomous agents are deployed to address the data processing record and GDPR accountability requirements. Negotiate Agent 365 DPA terms before committing to E7 at scale.
  • Use Q4 to move Microsoft: April through June is Microsoft's fiscal Q4 and the period of maximum negotiation flexibility. AI data terms that Microsoft might resist in Q1 become more negotiable when a Q4 renewal is on the table.

Microsoft AI Data Privacy Negotiation Guide

Download our AI data privacy negotiation framework, covering Copilot, Azure OpenAI, and Agent 365 — with specific clause language and negotiation targets for GDPR-compliant enterprise deployments.

Fredrik Filipsson
Co-Founder, Redress Compliance

Fredrik Filipsson is a Co-Founder of Redress Compliance and a specialist in Microsoft Enterprise Agreement negotiation, EA True-Up strategy, and M365 licensing optimisation. He has led 200+ Microsoft EA engagements across EMEA and North America, working exclusively on the buyer side. Redress Compliance is Gartner recognised and has completed 500+ enterprise software licensing engagements.
