Majority of Fortune 500 firms now using AI agents, but security gaps remain

Leading industries using AI agents include software and technology, manufacturing, financial services and retail

By Alice Chambers

More than 80 per cent of Fortune 500 companies are now using AI agents built with low-code or no-code tools, according to Microsoft’s latest Cyber Pulse report. These agents – tools that automate tasks and support individuals, teams or entire organisations – are increasingly embedded into everyday business workflows across sales, finance, security, customer service and product development.

The growth of AI agents is accelerating worldwide. Adoption has risen by 42 per cent in Europe, the Middle East and Africa, followed by 29 per cent in the United States and 19 per cent in Asia.

The Cyber Pulse report shows that leading industries using AI agents include software and technology (16 per cent), manufacturing (13 per cent), financial services (11 per cent) and retail (nine per cent). Companies are using them to draft proposals, analyse financial data, triage security alerts, automate repetitive processes and surface insights at machine speed.

However, the rapid rise of AI agents is also creating new risks. The report found that 29 per cent of employees are already using unsanctioned AI agents for work tasks. This so-called “shadow AI” can operate outside the visibility of IT and security teams, increasing the likelihood of data leaks or misuse.

“AI agents are scaling faster than some companies can see them – and that visibility gap is a business risk,” said Vasu Jakkal, corporate vice president of security at Microsoft. “Like people, AI agents require protection through strong observability, governance and security using Zero Trust principles.”

Microsoft warns that shadow AI builds on the long-standing challenge of shadow IT, but introduces new dimensions of risk. AI agents can inherit user permissions, access sensitive information and generate content at scale. If not properly governed, they could be exploited by bad actors or unintentionally expose confidential data.

“Like human employees, an agent with too much access – or the wrong instructions – can become a vulnerability,” said Jakkal. “When leaders lack observability in their AI ecosystem, risk accumulates silently.”

Despite the growing use of AI, only 47 per cent of organisations report having security controls in place around generative AI, according to the Microsoft Data Security Index 2026. This gap suggests many companies are adopting AI tools faster than they are implementing safeguards.

To address these challenges, the Cyber Pulse report outlines five core capabilities organisations should establish to manage AI agents effectively: a central registry of agents, clear access controls, tools to visualise how agents operate, interoperability across systems, and built-in security.
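To make the first two of those capabilities concrete, the sketch below shows what a central agent registry with scoped access controls and an audit trail might look like in code. It is a minimal, hypothetical Python illustration only: the AgentRegistry class, the agent names and the permission scopes are invented for this example and do not correspond to Microsoft's report or to any specific product.

# Hypothetical sketch: a central AI-agent registry with least-privilege
# access controls and an audit log for observability. All names here
# are illustrative; this is not a Microsoft API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentRecord:
    name: str
    owner: str                # team accountable for the agent
    scopes: set[str]          # least-privilege permissions granted to it
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}
        self.audit_log: list[str] = []   # every decision is recorded

    def register(self, name: str, owner: str, scopes: set[str]) -> None:
        """Add an agent to the central inventory, so it is visible to IT."""
        self._agents[name] = AgentRecord(name, owner, scopes)
        self.audit_log.append(
            f"registered {name} (owner={owner}, scopes={sorted(scopes)})")

    def authorise(self, name: str, scope: str) -> bool:
        """Zero Trust style check: deny unknown agents and unknown scopes."""
        record = self._agents.get(name)
        allowed = record is not None and scope in record.scopes
        self.audit_log.append(
            f"{name} requested '{scope}': {'ALLOW' if allowed else 'DENY'}")
        return allowed


registry = AgentRegistry()
registry.register("sales-proposal-drafter", owner="sales-ops",
                  scopes={"crm:read"})

assert registry.authorise("sales-proposal-drafter", "crm:read")          # in scope
assert not registry.authorise("sales-proposal-drafter", "finance:read")  # out of scope
assert not registry.authorise("unregistered-agent", "crm:read")          # shadow agent

The design choice worth noting is default deny: an agent that was never registered, or that asks for a scope it was never granted, is refused and the refusal is logged, which is the observability-plus-governance posture the report describes.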

Microsoft concludes that organisations that act now to improve oversight and governance will be better positioned to reduce risk, protect customer trust and unlock faster innovation as AI becomes embedded across the enterprise.
