IDC predicts there will be 1.3 billion AI agents in circulation by 2028. More than 80 per cent of Fortune 500 companies already use agents that access corporate data and act across business systems, according to Microsoft Copilot Studio data.
Yet organisations lag in security. Fewer than half of the organisations surveyed for Microsoft’s 2026 Data Security Index have established controls for generative AI, and many leaders remain unclear on how regulators will oversee the technology.
These agents no longer serve as experimental tools. They operate with real autonomy and access, which means organisations must secure and govern them like human identities, according to Vasu Jakkal, corporate vice president of Microsoft Security.
More enterprise organisations are embedding AI agents into workflows as the next step to becoming frontier firms
“With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place,” says Jakkal, in a recent blog post. “AI agents should be held to the same standards as employees or service accounts.”
AI introduces new security challenges, including agent sprawl, data oversharing and the proliferation of shadow AI: unsanctioned agents that inherit permissions and access sensitive information.
“AI systems can introduce new surface area risks, such as AI supply chain vulnerabilities, beyond the traditional layers like infrastructure, data, access and identity,” says Herain Oberoi, vice president of data and AI security at Microsoft. “Protecting the enterprise generative AI landscape must consider threats and risks across areas like model theft, data poisoning, data leaks, prompt injection attacks, model vulnerabilities, tools misuse and potentially over-permissioned agents.”
Unlike traditional applications, agents are dynamic: they reason, act autonomously and interact with multiple systems, characteristics that can become threats in the wrong hands.
“Bad actors might exploit agents’ access and privileges, turning them into unintended double agents,” explains Jakkal. “Like human employees, an agent with too much access – or the wrong instructions – can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.”
Agents can coordinate meetings and provide real-time prompts by accessing corporate data
Microsoft encourages organisations to adopt ‘zero trust’ strategies and foster a culture of secure innovation. Zero trust is not new, but its application to AI agents is.
“Zero trust means designing systems where nothing is trusted by default, AI behaviour is continuously verified, and observability and risk-based conditional access are implemented from the start,” says Oberoi. “For customers operating across Europe, this approach aligns closely with the EU AI Act requirements for risk management, cybersecurity protections and human oversight, as well as the UK’s emphasis on robustness and accountability rather than simple prescriptive rules.”
The EU AI Act, the world’s first comprehensive AI law, entered into force in August 2024, with most of its obligations applying from August 2026. It aims to ensure organisations use AI systems in the European Union safely, transparently, traceably and without discrimination, while promoting sustainability. Organisations must implement risk-based safeguards, maintain detailed documentation of AI system behaviour and build human oversight into decision-making processes.
“A zero trust approach to agents helps close the gap by explicitly verifying every identity, enforcing least-privilege access, continuously monitoring behaviour and applying conditional access based on risk signals,” says Oberoi.
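As a rough illustration of what that looks like in practice, the sketch below models risk-based conditional access for an agent identity in plain Python. The identifiers, scopes and risk thresholds are hypothetical assumptions for the example, not any specific Microsoft API or product behaviour.

```python
# Illustrative sketch of zero trust checks for an AI agent identity.
# All names and thresholds are hypothetical; this is not a Microsoft API.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str
    allowed_scopes: set = field(default_factory=set)  # least-privilege grants


@dataclass
class AccessRequest:
    agent: AgentIdentity
    requested_scope: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), from monitoring signals


def evaluate(request: AccessRequest) -> str:
    """Verify explicitly, enforce least privilege, then apply risk-based conditions."""
    # 1. Explicit verification: the scope must have been granted up front.
    if request.requested_scope not in request.agent.allowed_scopes:
        return "deny: scope not granted (least privilege)"
    # 2. Risk-based conditional access: behaviour signals gate every request.
    if request.risk_score >= 0.8:
        return "deny: risk score too high, alert security team"
    if request.risk_score >= 0.5:
        return "allow with step-up: require human approval and extra logging"
    return "allow: within granted scope and normal risk"


if __name__ == "__main__":
    agent = AgentIdentity("invoice-bot-01", "finance-team", {"read:invoices"})
    print(evaluate(AccessRequest(agent, "read:invoices", risk_score=0.2)))
    print(evaluate(AccessRequest(agent, "read:payroll", risk_score=0.2)))
    print(evaluate(AccessRequest(agent, "read:invoices", risk_score=0.9)))
```

The point of the sketch is that nothing is trusted by default: every request from the agent is checked against its granted permissions and its current risk signals, rather than being approved once at deployment.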
Crucially, organisations also need visibility.
“You can’t protect what you can’t see, and you can’t manage what you don’t understand,” says Jakkal. “Observability is having a control plane across all layers of the organisation (IT, security, developers, and AI teams) to understand what agents exist, who owns them, what systems and data they touch, and how they behave.”
Without a unified control plane, organisations risk losing sight of how many AI agents they have deployed, what permissions they hold and how they evolve over time. At the 2026 Microsoft AI Tour in London, UK, Alym Rayani, vice president of security go-to-market at Microsoft, described Microsoft Agent 365 as a central control plane for agents, handling registration, access control, visualisation, interoperability and security. Developers can work through the newly introduced Foundry Control Plane, while security teams manage oversight via Microsoft Defender, Entra and Purview, using insider risk management and compliance tools to prevent sensitive data leaks in both user- and agent-based interactions.
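To give a sense of what such an inventory captures, here is a minimal, hypothetical sketch of an agent register in Python, reflecting the questions Jakkal lists: which agents exist, who owns them, and what systems and data they touch. The field names and example values are assumptions for illustration, not Agent 365’s actual data model.

```python
# Hypothetical agent inventory record for a central control plane.
# Fields mirror the observability questions in the article; this is not
# Agent 365's actual schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentRecord:
    agent_id: str                    # unique identity, like a service account
    owner: str                       # accountable team or person
    systems: list[str]               # business systems the agent can act on
    data_classifications: list[str]  # sensitivity of the data it touches
    permissions: list[str]           # granted scopes, reviewed like employee access
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    last_behaviour_review: datetime | None = None  # when activity was last audited


registry: dict[str, AgentRecord] = {}


def register(record: AgentRecord) -> None:
    """Add an agent to the inventory so security teams can see and govern it."""
    registry[record.agent_id] = record


register(AgentRecord(
    agent_id="meeting-scheduler-07",
    owner="workplace-it",
    systems=["calendar", "email"],
    data_classifications=["internal"],
    permissions=["calendar.readwrite", "mail.send"],
))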
AI is transforming the way teams collaborate, but it has increased the need for greater security and visibility
“The Foundry control plane allows organisations to identify risks pre- and post-deployment and maintain continuous monitoring, including real-time security risks via native Microsoft Security integrations to help developers mitigate risks,” says Oberoi. “Each Foundry-built agent is automatically assigned an Entra Agent ID, providing built-in visibility for security teams to apply access controls and enable lifecycle governance, just like customers do today for users.”
At the same time, security teams are using AI agents to protect against AI-powered threats. Security agents embedded into daily workflows – including more than 60 agents from Microsoft and its partners, such as BlueVoyant, Darktrace, iLink Digital, Invoke, Ontinue, Performanta, RSA Security and Tanium – are designed to streamline response and remediation. One example is the Phishing Triage Agent in Defender. “Teams using the agent are resolving six times more phishing attack alerts than those without it,” says Rayani. In this sense, the agentic shift cuts both ways: organisations must secure their AI agents, but they can also deploy agents to strengthen their cyber defences.
Technology controls alone cannot ensure safety; governance must evolve alongside them.
“AI governance should sit at the board level, not just as a technology initiative,” says Oberoi. “Across the industry, executives recommend establishing an AI Governance Committee. These committees function as enterprise risk and trust units, ensuring AI systems remain secure, compliant, ethical and aligned to business objectives across their full lifecycle.”
The committee should treat AI risk as part of broader cybersecurity, privacy and compliance governance, and align with recognised risk frameworks and regulations such as the EU AI Act.
“The AI Governance Committee should be cross-functional by design and represent leaders across legal and compliance, security and privacy, data and architecture, product and engineering, business and strategy, and responsible AI,” says Oberoi. “The committee governs; it does not enable. It defines decision rights and escalation paths with clear accountability, especially for higher-risk or regulated use cases. The committee oversees AI end-to-end, including post-deployment monitoring.”
Microsoft has created executive-sponsored AI Governance Committees to guide internal AI projects. Microsoft Digital, the company’s IT organisation, formed the AI Center of Excellence (CoE), the Data Council and a team of responsible AI champions, all connected to the company’s overarching Responsible AI Council.
“Any organisation adopting AI for the first time will have plenty of questions, and Microsoft was no different,” notes a Microsoft article titled ‘Guiding hands: Inside the councils steering AI projects at Microsoft’. “We’ve had to wrestle with some foundational ideas like ‘how do we enable employees through skilling and infuse AI into our culture?’ and ‘how can we organise our company’s data to support effective AI?’. Creating AI that’s safe, fair and accessible to all ensures the AI revolution is a truly human movement.”
The AI CoE started as Microsoft Digital’s first team dedicated to enabling AI, initially focused on ideation, education and foundational architecture. As AI maturity increased, the team shifted towards showcasing learnings internally and externally.
“Governance and security are related, but not interchangeable,” says Jakkal. “Governance defines ownership, accountability, policy and oversight. Security enforces controls, protects access and detects cyberthreats. Both are required. And neither can succeed in isolation.”
Partner perspectives
We asked selected Microsoft partners how they build on Microsoft’s security solutions to ensure organisations can adopt AI securely and responsibly
“Responsible AI adoption requires security, trust and seamless integration,” says Jan van Houtte, Barco ClickShare’s executive vice president. “Barco ClickShare leverages Microsoft Device Ecosystem Platform (MDEP) to build a secure, compliant and manageable wireless meeting room system, enhanced by its simplicity and ease of use. The ClickShare Hub works as part of a room system bundle certified for Microsoft Teams and extends Microsoft Teams’ AI-powered capabilities – real-time summarisation, translation and speaker recognition – so they work reliably across hybrid meetings.”
“By building Cisco collaboration devices and experiences on MDEP, we align closely with Microsoft’s security capabilities while extending them through Cisco’s zero-trust architecture, device integrity safeguards and network-level protections,” says Espen Løberg, vice president and general manager at Cisco. “AI runs where it makes the most sense – often on the device or within clearly defined boundaries – supported by strong identity, encryption and policy controls. This gives organisations the confidence to adopt AI in ways that are transparent, governed and aligned with their security and compliance requirements.”
“At Intermedia, we see organisations rushing to unlock AI-driven productivity, but the winners will be the ones who pair speed with discipline,” says Alex Smith, vice president of platform security, data and analytics at Intermedia. “AI is only as trustworthy as the data and controls behind it, especially when models are embedded into everyday communications and collaboration workflows.”
Discover more insights from these partners and others, including Cisco, Concentrix, Fellowmind, Huddly, Intercity, Atech (part of the Iomart Group), Kyndryl, PRATUS, Shure and Synergy Technical in the Spring 2026 issue of Technology Record. Don’t miss out – subscribe for free today and get future issues delivered straight to your inbox.