Microsoft’s expanded SDL aims to advance ‘trustworthy AI’ as threats evolve

Microsoft’s expanded SDL aims to advance ‘trustworthy AI’ as threats evolve

New guidance includes tailored threat modelling, protections for model memory, strengthened agent identity controls, and safeguards to secure how AI models are released and updated

Alice Chambers

By Alice Chambers |


Microsoft is expanding its Secure Development Lifecycle (SDL) to address the specific security risks associated with AI, alongside the traditional software security issues it has historically covered.

The SDL is the approach Microsoft uses to integrate security into DevOps processes. It provides guidance that organisations can use to build security into every stage of development, regardless of how they create software. As AI systems become more embedded in business tools and services, Microsoft says its framework must evolve to reflect new and more complex threats.

“In a fast-moving environment where both technology and cyberthreats constantly evolve, adopting a flexible, comprehensive SDL strategy is crucial to safeguarding our business, protecting users and advancing trustworthy AI,” said Yonatan Zunger, corporate vice president and deputy chief information security officer for AI at Microsoft, in a blog titled ‘Evolving security practices for an AI-powered world’.

The expanded SDL introduces specialised guidance and tools designed specifically for AI systems. These include threat modelling tailored to AI, improved observability to monitor how AI systems behave, protections for AI “memory” where data may be stored, stronger controls around agent identity and role-based access control, safeguards for publishing and updating AI models, and built-in shutdown mechanisms to safely disable systems if needed.

Together, these measures aim to help organisations manage risks that traditional software security practices were not designed to handle.

“AI systems create multiple entry points for unsafe inputs including prompts, plugins, retrieved data, model updates, memory stats and external application programming interfaces,” said Zunger. “These entry points can carry malicious content or trigger unexpected behaviours. Traditional threat models fail to account for AI-specific attack vectors such as prompt injection, data poisoning and malicious tool interactions.”

In practical terms, this means AI systems can be influenced or attacked through a variety of channels. A malicious prompt could manipulate a model’s output, tampered-with data could affect how it performs, or a connected tool could be exploited to trigger unintended actions. These types of threats expand the overall “attack surface” and make it harder to enforce strict limits on how data is used and accessed.

“Effective SDL for AI is about continuous improvement and shared responsibility,” said Zunger. “Security is not a destination. It’s a journey that requires vigilance, collaboration between teams and disciplines outside the security space, and a commitment to learning. By following Microsoft’s SDL for AI approach, enterprise leaders and security professionals can build resilient, trustworthy AI systems that drive innovation securely and responsibly.”

Contact author

x

Subscribe to the Technology Record newsletter


  • ©2025 Tudor Rose. All Rights Reserved. Technology Record is published by Tudor Rose with the support and guidance of Microsoft.