Technology Record - Issue 30: Autumn 2023

EXECUTIVE INTERVIEW

Is there a role for the Microsoft partner ecosystem in delivering reliable and trustworthy AI solutions?

As an increasing number of governments and entities seek to leverage generative AI technology to enhance their services and offerings, it will be critical for organisations across the private, public and academic sectors to have access to assistance in deploying reliable, trustworthy AI solutions. Several of our partners have already launched consulting practices to help customers evaluate, test, adopt and commercialise AI solutions, including creating their own responsible AI systems. We believe it's critical for these capabilities to be developed and deployed at scale. Therefore, we hope that more partners worldwide will develop responsible AI consulting practices to help businesses of all sizes implement programmes for the responsible use of AI.

How do you believe Microsoft's commitments and actions will empower customers to define robust strategies for AI adoption?

Our AI customer commitments will provide our customers with a starting point for their responsible AI implementation. We have created tools and resources, published on our Responsible AI website, which they can leverage immediately, starting at the earliest stages of their engineering planning and extending through to the launch of AI technology. We began our own AI journey in 2017 and have incorporated our learnings into improvements in our processes, so our customers can benefit from our progress immediately. Furthermore, our new AI Assurance Program will help customers ensure that the AI applications they create and deploy on our platforms meet the legal and regulatory requirements for responsible AI. The programme includes four specific elements relating to security, compliance and legal constraints, the first being regulator engagement support.
We have extensive experience in helping customers in the public sector and highly regulated industries manage the spectrum of regulatory issues that arise from information technology use, and we plan to expand our assistance offerings to help customers with regulations related to AI. For instance, in the global financial services industry, we worked closely for several years with both customers and regulators to ensure that we could pursue digital transformation in the cloud while complying with financial regulatory obligations. We want to apply our learnings from this work to regulatory engagement concerning AI. For example, the global financial services industry requires financial institutions to verify customer identities, establish risk profiles and monitor transactions to help detect suspicious activity – these are known as the 'know your customer' requirements. We believe that this approach can be applied to AI in what we are calling 'KY3C', an approach that imposes certain obligations to know one's cloud, customers and content. Microsoft will collaborate with customers to apply KY3C as part of our AI Assurance Program. In addition, we will attest to how we are implementing the AI Risk Management Framework, which was recently published by the National Institute of Standards and Technology (NIST), and will share our experience engaging with NIST's important ongoing work in this area. Then, we will convene customers in customer councils to gather their perspectives on how we can deliver the most relevant and compliant AI technology and tools, before engaging with governments to promote effective and interoperable AI regulation.

Looking ahead, how do you envision the landscape evolving over the coming years? How quickly do you anticipate the governance and regulation landscape will evolve, and what is Microsoft's strategy for adapting accordingly?
Brad Smith's May 2023 report, Governing AI: A Blueprint for the Future, presents our proposals to governments and other stakeholders for appropriate regulatory frameworks for AI. We see a number of governments considering the best approaches to regulating AI use. In the USA, the White House issued a voluntary code of conduct in July 2023.