The Agent Governance Problem: Why Agent 365 Exists
Enterprise AI agent deployment is accelerating rapidly in 2026. Copilot Studio enables business users to build custom agents that automate processes, answer questions, and interact with internal and external data sources. Azure AI Foundry enables developers to create more sophisticated agents integrated with enterprise applications and APIs. Third-party AI agents from vendors outside the Microsoft ecosystem are also proliferating — in many organisations they are acquired, installed, or connected to enterprise data without IT oversight.
The result is a growing shadow AI problem. Agents that have not been formally onboarded through IT governance processes are operating across enterprise environments — accessing Microsoft Graph data, connecting to external APIs, processing sensitive documents, and executing actions on behalf of users — without visibility, access controls, or security monitoring. The RSAC 2026 conference surfaced this as one of the top emerging enterprise security risks: prompt injection attacks targeting unmonitored agents, data exfiltration through agent interactions, and privilege escalation via over-permissioned agent identities.
Agent 365 is Microsoft's response to this governance gap. It does not help you build agents. It helps you observe, govern, and secure the agents that are already running — or that will be running — across your organisation. The distinction matters enormously for evaluating whether Agent 365 addresses a real need in your environment.
Pillar One: Observe — Visibility Into Your Agent Estate
The first and most foundational Agent 365 capability is observability. Without a centralised inventory of active agents, IT and security teams cannot govern what they cannot see. Agent 365 provides this through three observability mechanisms.
The Centralised Agent Registry
Agent 365 maintains a real-time registry of all AI agents operating in the enterprise Microsoft environment, regardless of where they were built. Agents created in Copilot Studio, Azure AI Foundry, and third-party platforms connected to the Microsoft tenant are all surfaced in the registry. Each agent entry includes metadata: which users or processes it serves, what data sources and permissions it has been granted, when it was last active, and which identity (via Entra Agent ID) it operates under.
For organisations that have been deploying Copilot Studio agents organically — department by department, often without central IT oversight — the Agent Registry is frequently the first moment IT leadership sees the full scope of their agent estate. In early enterprise deployments, organisations are routinely discovering two to five times more active agents than they believed were running.
Usage Analytics and Relationship Mapping
Agent 365 provides usage analytics showing adoption rates, interaction volumes, and active user counts for each agent in the registry. This data supports both governance decisions (identifying unused or redundant agents for decommissioning) and business value assessment (identifying high-adoption agents that warrant further investment). Relationship mapping shows how agents interconnect — which agents trigger other agents, which agents share data sources, and where dependency chains create single points of failure or cascading risk.
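The dependency-chain analysis described above can be sketched as a simple fan-in count over a trigger graph. The agent names and the data shape below are invented for illustration; Agent 365 does not expose its relationship map this way.

```python
from collections import defaultdict

# Hypothetical trigger relationships: (caller_agent, called_agent).
triggers = [
    ("expense-bot", "approval-agent"),
    ("hr-faq", "approval-agent"),
    ("onboarding-agent", "approval-agent"),
    ("sales-brief", "crm-sync"),
]

# Count how many agents depend on each downstream agent.
fan_in = defaultdict(int)
for caller, called in triggers:
    fan_in[called] += 1

# Agents that many others trigger are single points of failure:
# if they are suspended or compromised, every caller is affected.
hotspots = [agent for agent, count in fan_in.items() if count >= 3]
print(hotspots)  # ['approval-agent']
```

The same graph, traversed in the other direction, surfaces cascading risk: a compromised upstream agent can influence every agent it triggers.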
Risk Signals
Agent 365 surfaces risk signals associated with each registered agent: anomalous permission scopes, unusual data access patterns, agents that have not been reviewed within a policy-defined period, and agents whose source code or configuration has changed without a formal change control process. These signals feed into the security assessment that informs the Govern and Secure pillars.
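The kinds of checks behind those signals can be illustrated with a minimal sketch. Every field name, scope string, and the 90-day review window below are assumptions for illustration, not the Agent 365 schema.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # assumed policy-defined review cadence

def risk_signals(agent: dict, today: date) -> list[str]:
    """Illustrative risk-signal checks; field names are invented."""
    signals = []
    # Agent not reviewed within the policy-defined period.
    if today - agent["last_reviewed"] > REVIEW_WINDOW:
        signals.append("review-overdue")
    # Anomalously broad permission scope (example scope string).
    if "Sites.FullControl.All" in agent["scopes"]:
        signals.append("broad-permission-scope")
    # Configuration changed outside formal change control.
    if agent["config_hash"] != agent["approved_config_hash"]:
        signals.append("unapproved-config-change")
    return signals

agent = {
    "last_reviewed": date(2026, 1, 5),
    "scopes": ["Sites.FullControl.All", "Mail.Read"],
    "config_hash": "a1",
    "approved_config_hash": "a1",
}
print(risk_signals(agent, today=date(2026, 6, 1)))
# ['review-overdue', 'broad-permission-scope']
```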
Pillar Two: Govern — IT Control Over the Agent Lifecycle
Observability without control is an audit tool, not a governance platform. Agent 365's second pillar provides IT teams with the ability to actively control the agent lifecycle — from onboarding through decommissioning — within a policy framework that integrates with existing enterprise identity and access management processes.
IT-Controlled Onboarding
Agent 365 enables organisations to define an agent onboarding process that requires formal IT review and approval before an agent is permitted to access enterprise data or take actions in production. This is the mechanism that addresses the shadow AI problem: rather than agents appearing in production as soon as a department creates them, the Agent 365 governance model requires each agent to pass through an IT-controlled onboarding workflow before activation.
The onboarding workflow can be configured to require security review, data classification assessment, privacy impact analysis, and management approval depending on the sensitivity of the data and actions the agent will access. High-risk agents — those with access to sensitive HR, financial, or customer data — can be routed through a more rigorous approval process than low-risk informational agents with read-only access to public data.
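The risk-tiered routing described above amounts to a policy lookup. A minimal sketch, assuming invented data-source labels and approval-stage names:

```python
# Data-source labels an organisation might treat as high-risk.
SENSITIVE_SOURCES = {"hr", "finance", "customer-pii"}

def approval_track(data_sources: set[str], read_only: bool) -> list[str]:
    """Route an onboarding request to an approval track by risk tier."""
    if data_sources & SENSITIVE_SOURCES:
        # High-risk: sensitive data demands the full review chain.
        return ["security-review", "privacy-impact", "management-approval"]
    if read_only:
        # Low-risk informational agent with read-only access.
        return ["security-review"]
    # Medium-risk: non-sensitive data, but the agent can take actions.
    return ["security-review", "data-classification"]

print(approval_track({"hr", "wiki"}, read_only=False))
# ['security-review', 'privacy-impact', 'management-approval']
```

The key design point is that the tier is derived from what the agent touches, not from who built it, so a business-user-built Copilot Studio agent with HR access gets the rigorous track automatically.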
Entra Agent ID: Identity for AI Agents
Microsoft Entra Agent ID is the identity infrastructure that Agent 365 uses to manage agent access. Every agent registered in Agent 365 is assigned a discrete Entra Agent ID — a machine identity in the Entra ID directory that is separate from any human user identity. This enables the same least-privilege access principles that govern human users to be applied to agents: each agent's permissions are explicitly defined, audited, and scoped to the minimum access required for its function.
Entra Agent ID enables conditional access policies for agents. An agent can be required to operate only from approved network locations, to authenticate against specific conditions before accessing sensitive data, or to be automatically suspended if anomalous behaviour is detected. The lifecycle management capabilities allow IT to activate, suspend, or retire agents with the same controls applied to human user accounts — without requiring custom development or separate tooling.
Policy-Driven Access and Lifecycle Management
Agent 365 implements agent lifecycle management that mirrors the IT asset management processes enterprises already use for software and user accounts. Agents have defined owners (human accountability for each agent), defined review cadences (automatic prompts when agents have not been reviewed within a policy-defined period), and defined retirement triggers (agents that have not been used within a defined inactivity window are flagged for decommissioning). This prevents the accumulation of zombie agents — agents that were created for a specific project, are no longer actively used, but continue to hold permissions and access that create unnecessary attack surface.
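The inactivity-based retirement trigger can be sketched in a few lines. The 180-day window, field names, and agent names are assumptions for illustration:

```python
from datetime import date, timedelta

INACTIVITY_WINDOW = timedelta(days=180)  # assumed retirement trigger

def flag_for_decommission(agents: list[dict], today: date) -> list[str]:
    """Flag agents idle past the policy window (illustrative only)."""
    return [
        a["name"] for a in agents
        if today - a["last_active"] > INACTIVITY_WINDOW
    ]

agents = [
    {"name": "q3-launch-bot", "last_active": date(2025, 9, 1)},   # project over
    {"name": "helpdesk-faq", "last_active": date(2026, 5, 20)},   # in daily use
]
print(flag_for_decommission(agents, today=date(2026, 6, 1)))
# ['q3-launch-bot']
```

Flagged agents would then go to their defined owner for a retire-or-renew decision rather than being deleted automatically, preserving the human accountability the lifecycle model requires.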
Pillar Three: Secure — Protecting the Enterprise from AI Agent Risk
The security pillar of Agent 365 addresses the threat vectors that are unique to AI agent deployments — threats that traditional security tooling was not designed to detect or prevent.
Conditional Access for Agents
Microsoft Entra's conditional access framework, familiar to IT teams as the mechanism that enforces MFA and device compliance for human users, is extended to AI agents through Agent 365. Conditional access policies can require that agents only operate under specific conditions: from approved IP ranges, during defined time windows, using only approved data sources, and with specific authentication requirements for accessing sensitive resources. This controls the conditions under which agents can act, not just whether they can act.
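The combination of conditions can be illustrated with a toy policy evaluator. The policy shape below is invented for this sketch; it is not Entra's actual policy model.

```python
import ipaddress
from datetime import time

# Hypothetical policy: approved network, operating window, data sources.
POLICY = {
    "allowed_networks": [ipaddress.ip_network("10.0.0.0/8")],
    "window": (time(6, 0), time(22, 0)),
    "approved_sources": {"sharepoint", "dataverse"},
}

def allow(request: dict) -> bool:
    """All conditions must hold for the agent action to proceed."""
    ip = ipaddress.ip_address(request["ip"])
    in_network = any(ip in net for net in POLICY["allowed_networks"])
    start, end = POLICY["window"]
    in_window = start <= request["time"] <= end
    source_ok = request["source"] in POLICY["approved_sources"]
    return in_network and in_window and source_ok

print(allow({"ip": "10.2.3.4", "time": time(9, 30), "source": "sharepoint"}))
# True: approved range, business hours, approved source
print(allow({"ip": "203.0.113.9", "time": time(9, 30), "source": "sharepoint"}))
# False: outside the approved network
```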
Purview DLP Enforcement for Agent Interactions
Microsoft Purview's Data Loss Prevention engine is integrated with Agent 365 to enforce data classification policies at the agent interaction layer. When an agent receives a prompt that contains — or when an agent's response would expose — sensitive information (personally identifiable information, credit card numbers, health data, or custom sensitive information types defined in Purview), the DLP enforcement layer intercepts and blocks the interaction before it completes. This prevents both prompt injection attacks (where malicious prompts attempt to extract sensitive data through agents) and inadvertent data exposure (where users ask agents questions that would surface data beyond their authorisation level).
The Purview DLP integration applies to prompts processed by Copilot Studio agents and extends to interactions with third-party agents connected to the Microsoft tenant through Agent 365. It does not require custom development — the same DLP policies that govern email, SharePoint, and Teams interactions are applied to agent interactions through the Agent 365 enforcement layer.
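The intercept-before-completion pattern can be sketched with toy detectors. Real Purview classifiers use far more robust matching (checksums, keyword proximity, confidence levels); the regexes below are deliberately simplified illustrations.

```python
import re

# Toy sensitive-information detectors, standing in for Purview
# sensitive information types.
DETECTORS = {
    "credit-card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_verdict(text: str) -> list[str]:
    """Return the sensitive-info types found in a prompt or response."""
    return [name for name, rx in DETECTORS.items() if rx.search(text)]

prompt = "Summarise the dispute for card 4111 1111 1111 1111."
hits = dlp_verdict(prompt)
print(hits)  # ['credit-card']
if hits:
    # The enforcement layer blocks the interaction before the agent
    # processes the prompt or returns a response.
    print("blocked")
```

The same check runs in both directions: on the inbound prompt (catching injection attempts that try to extract data through the agent) and on the outbound response (catching answers that would exceed the asking user's authorisation).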
Defender Threat Protection for Agents
Microsoft Defender's threat protection capabilities are extended to the agent estate through Agent 365. This includes detection of prompt injection attempts — where malicious content embedded in documents, emails, or web pages attempts to hijack an agent's instructions — and data exfiltration detection, which identifies agents that are attempting to transmit enterprise data to external endpoints in ways that violate security policy. Defender's integration with the Entra Agent ID framework means that suspicious agent behaviour triggers the same security alerts, automated investigation, and containment actions that apply to compromised human user accounts.
Agent 365 vs E7: Which Procurement Path Is Right?
Agent 365 is available in two procurement configurations: as a standalone add-on at $15 per user per month for organisations on E3 or E5, or as a component of Microsoft 365 E7 at $99 per user per month alongside E5, Microsoft 365 Copilot, and the Entra Suite. The procurement decision depends on the organisation's current SKU position and their need for the other E7 components.
For organisations on E5 that are not yet committed to Copilot but do have an active AI agent deployment problem, the standalone Agent 365 at $15 per user per month is the more appropriate procurement path. Paying $24 per user per month more for E7 ($99, versus $75 for E5 plus Agent 365 standalone) to include Copilot and the Entra Suite is only justified if there is a clear near-term deployment plan for those additional capabilities.
For organisations on E5 that are already running Copilot as a $30 per user per month add-on, the E7 economics are compelling. E5 plus Copilot already costs $90 per user per month. E7 at $99 adds Agent 365 and the Entra Suite for $9 more — a straightforward upgrade for organisations that need the governance capabilities Agent 365 provides for their existing Copilot and agent deployments.
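The comparison above reduces to simple per-user arithmetic. The E5 figure below is derived from the article's own numbers ($90 for E5 plus Copilot, with Copilot at $30); this is an illustrative comparison, not Microsoft's published price list.

```python
# Per-user monthly list prices used in this article.
E5 = 60        # implied by the $90 E5-plus-Copilot figure
COPILOT = 30   # Microsoft 365 Copilot add-on
AGENT_365 = 15 # Agent 365 standalone add-on
E7 = 99        # bundles E5, Copilot, Agent 365, and the Entra Suite

paths = {
    "E5 + Agent 365": E5 + AGENT_365,              # governance only
    "E5 + Copilot": E5 + COPILOT,                  # no governance layer
    "E5 + Copilot + Agent 365": E5 + COPILOT + AGENT_365,
    "E7": E7,
}
for name, cost in paths.items():
    print(f"{name}: ${cost}/user/month")

# For an existing E5 + Copilot estate, E7 adds Agent 365 and the
# Entra Suite for the difference:
print(E7 - (E5 + COPILOT))  # 9
```

Note that at these figures, E7 is also $6 per user per month cheaper than buying E5, Copilot, and Agent 365 separately, before the Entra Suite is counted at all.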
The critical prerequisite for either procurement path is an accurate assessment of how many agents are currently running in the environment. If your organisation's agent estate is limited to a handful of Copilot Studio bots with minimal data access, Agent 365's governance capabilities may exceed your current needs. If your agent estate is expanding rapidly — multiple departments building agents, third-party AI tools connecting to Microsoft data — Agent 365 is solving a real and growing governance problem that will only intensify without a control plane.