Technology Record - Issue 40: Spring 2026

Alym Rayani, vice president of security go-to-market at Microsoft, describes Microsoft Agent 365 as a central control plane for agents. It handles registration, access control, visualisation, interoperability and security. Developers can work through the newly introduced Foundry Control Plane, while security teams can manage oversight via Microsoft Defender, Entra and Purview, which prevent sensitive data leaks in user- and agent-based interactions through insider risk management and compliance tools.

“The Foundry control plane allows organisations to identify risks pre- and post-deployment and maintain continuous monitoring, including real-time security risks via native Microsoft Security integrations to help developers mitigate risks,” says Oberoi. “Each Foundry-built agent is automatically assigned an Entra Agent ID, providing built-in visibility for security teams to apply access controls and enable lifecycle governance, just like customers do today for users.”

At the same time, security teams are using AI agents to protect against AI-powered threats. Security agents embedded into daily workflows – including more than 60 agents from Microsoft and its partners, such as BlueVoyant, Darktrace, iLink Digital, Invoke, Ontinue, Performanta, RSA Security and Tanium – are designed to streamline response and remediation. One example is the Phishing Triage Agent in Defender. “Teams using the agent are resolving six times more phishing attack alerts than those without it,” says Rayani. In this sense, the agentic shift cuts both ways: organisations must secure their AI agents, but they can also deploy agents to strengthen their cyber defences.

Technology controls alone cannot ensure safety; governance must evolve alongside them. “AI governance should sit at the board level, not just as a technology initiative,” says Oberoi. “Across the industry, executives recommend establishing an AI Governance Committee.
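The pattern described here – assigning every agent an identity at creation, gating its access through that identity, and revoking access at end of life – can be sketched in a few lines of code. The sketch below is purely illustrative: the class and method names are hypothetical and do not represent Microsoft's Agent 365, Foundry or Entra APIs.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """A registered agent: identity, permitted scopes, lifecycle state."""
    name: str
    agent_id: str
    allowed_scopes: set = field(default_factory=set)
    active: bool = True


class AgentRegistry:
    """Hypothetical control-plane registry, not a real Microsoft API."""

    def __init__(self):
        self._agents = {}

    def register(self, name, scopes):
        # Identity is assigned automatically at registration,
        # analogous in spirit to an Entra Agent ID.
        agent_id = f"agent-{uuid.uuid4()}"
        self._agents[agent_id] = AgentRecord(name, agent_id, set(scopes))
        return agent_id

    def authorize(self, agent_id, scope):
        # Access control: unknown, retired or out-of-scope agents are denied.
        rec = self._agents.get(agent_id)
        return bool(rec and rec.active and scope in rec.allowed_scopes)

    def retire(self, agent_id):
        # Lifecycle governance: retiring an agent revokes all access.
        if agent_id in self._agents:
            self._agents[agent_id].active = False


registry = AgentRegistry()
aid = registry.register("invoice-triage", {"read:mail"})
print(registry.authorize(aid, "read:mail"))   # granted scope -> True
print(registry.authorize(aid, "send:mail"))   # ungranted scope -> False
registry.retire(aid)
print(registry.authorize(aid, "read:mail"))   # retired agent -> False
```

The point of the pattern is that security teams govern agents through one choke point: every access decision consults the same registry that issued the identity, so revocation and auditing need no changes to the agents themselves.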
These committees function as enterprise risk and trust units, ensuring AI systems remain secure, compliant, ethical and aligned to business objectives across their full lifecycle.” The committee should treat AI risk as part of broader cybersecurity, privacy and compliance governance, and follow recognised regulations and risk frameworks such as the EU AI Act. “The AI Governance Committee should be cross-functional by design and represent leaders across legal and compliance, security and privacy, data and architecture, product and engineering, business and strategy, and responsible AI,” says Oberoi. “The committee governs; it does not enable. It defines decision rights and escalation paths with clear accountability, especially for higher-risk or regulated use cases. The committee oversees AI end-to-end, including post-deployment monitoring.”

Microsoft has created executive-sponsored AI Governance Committees to guide internal AI projects. Microsoft Digital, the company’s IT organisation, formed the AI Center of Excellence (CoE), the Data Council and a team of responsible AI champions, all connected to the company’s overarching Responsible AI Council. “Any organisation adopting AI for the first time will have plenty of questions, and Microsoft was no different,” notes a Microsoft article titled ‘Guiding hands: Inside the councils steering AI projects at Microsoft’. “We’ve had to wrestle with some foundational ideas like ‘how do we enable employees through skilling and infuse AI into our culture?’ and ‘how can we organise our company’s data to support effective AI?’. Creating AI that’s safe, fair and accessible to all ensures the AI revolution is a truly human movement.”

The AI CoE started as Microsoft Digital’s first team dedicated to enabling AI, initially focused on ideation, education and foundational architecture. As AI maturity increased, the team shifted towards showcasing learnings internally and externally.
“Governance and security are related, but not interchangeable,” says Jakkal. “Governance defines ownership, accountability, policy and oversight. Security enforces controls, protects access and detects cyberthreats. Both are required. And neither can succeed in isolation.”
