COVER STORY

Mind the gap

BY ALICE CHAMBERS

As organisations accelerate deployment of AI agents, appropriate security and governance is required to avoid leaving critical systems and data exposed

IDC predicts there will be 1.3 billion AI agents in circulation by 2028. More than 80 per cent of Fortune 500 companies already use agents that access corporate data and act across business systems, according to Microsoft Copilot Studio data. Yet organisations lag in security. Fewer than half of the organisations surveyed for Microsoft’s 2026 Data Security Index have established controls for generative AI, and many leaders remain unclear on how regulators will oversee the technology.

These agents no longer serve as experimental tools. They operate with real autonomy and access, which means organisations must secure and govern them like human identities, according to Vasu Jakkal, corporate vice president of Microsoft Security.

“With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place,” says Jakkal in a recent blog post. “AI agents should be held to the same standards as employees or service accounts.”

AI introduces new security challenges, including agent sprawl, data oversharing and the proliferation of shadow AI, where agents inherit permissions and access sensitive information.

“AI systems can introduce new surface area risks, such as AI supply chain vulnerabilities, beyond the traditional layers like infrastructure, data, access and identity,” says Herain Oberoi, vice president of data and AI security at Microsoft. “Protecting the enterprise generative AI landscape must consider threats and risks