Microsoft is partnering with Anthropic, Google and OpenAI to form the Frontier Model Forum, a new industry body that will focus on the safe and responsible development of artificial intelligence models.
The forum’s objectives will be to advance AI safety research; identify best practices for the responsible development and deployment of frontier models; collaborate with policymakers, academics, civil society and other organisations to share knowledge about safety risks; and support efforts to develop applications that address societal challenges such as climate change, cancer detection and cyber threats.
The forum will be open to organisations that have developed ‘frontier models’, defined as large-scale machine-learning models that exceed the capabilities of the most advanced existing models. Membership is also open to organisations that demonstrate a commitment to frontier model safety and are willing to participate in joint initiatives to advance the forum’s efforts.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, vice chair and president of Microsoft, in a blog post. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Over the next few months, the forum will establish an advisory board, while the founding companies will put in place a charter, governance structure and funding to lead its efforts.