Microsoft outlines key insights for responsible AI

“We make sure we’re building technology by humans, for humans,” says Microsoft’s chief responsible AI officer Natasha Crampton.

Alice Chambers

Microsoft has outlined some key insights for responsible artificial intelligence, following its rollout of several new AI tools, including the ‘Copilot’ AI assistant in Microsoft 365, the OpenAI ChatGPT service available on Microsoft Azure and new AI capabilities in Bing.

According to Natasha Crampton, chief responsible AI officer at Microsoft, responsibility is a key part of AI design, rather than an afterthought.

“In the summer of 2022, we received an exciting new model from OpenAI. Straightaway we assembled a group of testers and had people probe the raw model to understand what its capabilities and its limitations were,” says Crampton.

“The insights generated from this research helped Microsoft think about what the right mitigations would be when we combined this model with the power of web search. It also helped OpenAI, who are constantly developing their models, to bake more safety into them.

“We built new testing pipelines where we thought about the potential harms of the model in a web search context. We then developed systematic approaches to measurement so we could better understand some of the main challenges we could have with this type of technology — one example being what is known as ‘hallucination’, where the model may make up facts that are not actually true.

“By November we’d figured out how we can measure them and then better mitigate them over time. We designed this product with Responsible AI controls at its core, so they’re an inherent part of the product. I’m proud of the way in which the whole responsible AI ecosystem came together to work on it.”
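Crampton does not describe the measurement pipeline in detail, but the idea of systematically scoring model answers against source material can be illustrated with a minimal sketch. Everything here — the function names, the simple word-overlap heuristic and the threshold — is a hypothetical illustration, not Microsoft’s actual method:

```python
# Hypothetical sketch of measuring hallucination: score each model answer
# by how much of it is supported by the retrieved source text, then count
# the share of answers that fall below a support threshold.

def support_score(answer: str, source_text: str) -> float:
    """Fraction of the answer's content words that also appear in the source."""
    words = {w.lower().strip(".,") for w in answer.split() if len(w) > 3}
    if not words:
        return 1.0  # nothing substantive to check
    source_words = {w.lower().strip(".,") for w in source_text.split()}
    return len(words & source_words) / len(words)

def hallucination_rate(answers, sources, threshold=0.5):
    """Share of answers whose support score falls below the threshold."""
    flagged = sum(
        1 for a, s in zip(answers, sources) if support_score(a, s) < threshold
    )
    return flagged / len(answers)
```

A real pipeline would use far stronger support checks (entailment models, human review) than word overlap, but the shape is the same: measure first, so mitigations can be compared over time.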

In an article published by Microsoft, Crampton details seven things to know about Responsible AI, including the work being done to mitigate hallucinations.

“Hallucinations are a well-known issue with large language models generally,” says Crampton. “The main way Microsoft can address them in the Bing product is to ensure the output of the model is grounded in search results.

“This means that the response provided to a user’s query is centred on high-ranking content from the web, and we provide links to websites so that users can learn more.”
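The grounding approach Crampton describes — answering only from high-ranking web content and surfacing the source links — can be sketched in miniature. The prompt wording, the result format and the helper names below are assumptions for illustration, not Bing’s actual pipeline:

```python
# Minimal sketch of retrieval grounding: the model is instructed to answer
# only from numbered search snippets, and the snippet URLs are returned
# alongside the answer so users can learn more.

def grounded_prompt(query: str, results: list[dict]) -> str:
    """Build a prompt that restricts the model to the supplied snippets."""
    context = "\n".join(
        f"[{i + 1}] {r['snippet']}" for i, r in enumerate(results)
    )
    return (
        "Answer the question using ONLY the numbered snippets below. "
        "Cite snippet numbers. If the snippets do not contain the answer, "
        "say so.\n\n"
        f"Snippets:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def cited_links(results: list[dict]) -> list[str]:
    """Links surfaced with the answer so users can verify it."""
    return [r["url"] for r in results]
```

The key design point is that the model’s raw knowledge is demoted: the search layer decides what content is eligible, and the links make the grounding auditable by the user.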

Crampton goes on to outline Microsoft’s publication of the Responsible AI Standard, intended to be used by everyone, and how diverse teams and viewpoints are key to ensuring responsible AI.

“We make sure we’re building technology by humans, for humans,” she says. “We should really look at this technology as a tool to amplify human potential, not as a substitute.”

  • ©2024 Tudor Rose. All Rights Reserved. Technology Record is published by Tudor Rose with the support and guidance of Microsoft.