Microsoft has joined the AI Verify Foundation, officially introduced at the Asia Tech x Singapore conference as a new open-source community in Singapore that aims to promote and develop testing tools for the responsible use of artificial intelligence.
Microsoft is one of seven premier members of the foundation, which was launched by the Singapore Government's Infocomm Media Development Authority as a pilot in 2022 and is now available to the open-source community. The foundation currently has over 50 general members, including Adobe, Meta and Singapore Airlines.
“The immense potential of AI led us to the vision of creating AI for the public good, and its full potential will only be realised when we foster greater alignment on how it should be used to serve wider communities,” said Josephine Teo, Singapore's Minister for Communications and Information, speaking at Asia Tech x Singapore. “Singapore recognises that the government is no longer simply a regulator or experimenter. We are a full participant in the AI revolution.”
The AI Verify Foundation will help to develop AI testing frameworks, code bases, standards and best practices, while promoting open collaboration on the governance of AI.
“To foster trust in AI and ensure that its benefits are broadly distributed, we must commit to responsible practices around its development and deployment,” said Brad Smith, president and vice chair at Microsoft. “We at Microsoft applaud the Singapore Government's leadership in this area. By creating practical resources like the AI Governance Testing Framework and Toolkit, Singapore is helping organisations to build robust governance and testing processes. We are all better off when AI tools uphold respect for fairness, safety and other fundamental rights.”
Some of the work already carried out by the foundation includes IMDA and Aicadium's paper Generative AI: Implications for Trust and Governance, which explores foundation models for AI, new risks arising from generative AI, and how to build on existing AI governance frameworks. The paper identifies six key risks that have emerged from generative AI: mistakes and hallucinations, privacy and confidentiality, disinformation, copyright challenges, embedded bias, and values and alignment.