World’s first AI Safety Summit inspires global collaboration to mitigate risks of AI

Bloomberg/Chris J. Ratcliffe

Representatives from 28 countries agree to mitigate the global risks of AI at the AI Safety Summit, hosted in the UK

The Bletchley Declaration and new AI Safety Institute will explore the threats associated with AI and coordinate international action to combat them

Alice Chambers

Representatives from 28 countries have signed the UK Government’s Bletchley Declaration to mitigate the global risks of artificial intelligence.

The document was released during the world’s first AI Safety Summit, which was held in the UK on 1-2 November 2023 to discuss the threats associated with AI and to coordinate international action to combat them.

Attendees included world leaders and executives from businesses such as Microsoft, Amazon, Google and OpenAI, who acknowledged the potential for “serious, even catastrophic, harm” caused by AI models and agreed on the need for nations to work together to resolve these problems. To ensure progress, the Republic of Korea will co-host a virtual summit on AI in the next six months, followed by a second in-person summit in France in November 2024.

“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” said the UK’s prime minister Rishi Sunak. “Under the UK’s leadership, more than 25 countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research.”

The UK prime minister Rishi Sunak’s full speech at the AI Safety Summit

The UK also launched the world’s first AI Safety Institute during the AI Safety Summit. The Institute will test the safety of emerging types of AI and explore the full range of risks associated with its use, from social harms such as bias and misinformation to extreme risks such as humanity losing control of AI. World leaders from Japan and Canada, as well as major AI companies such as OpenAI and DeepMind, are supporting the Institute.

“The support of international governments and companies is an important validation of the work we’ll be carrying out to advance AI safety and ensure its responsible development,” said Ian Hogarth, chair of the Institute. “Through the AI Safety Institute, we will play an important role in rallying the global community to address the challenges of this fast-moving technology.”

Sunak said: “I truly believe there is nothing in our foreseeable future that will be more transformative for our economies, our societies and all our lives than the development of technologies like artificial intelligence. But as with every wave of new technology, it also brings new fears and new dangers. So, no matter how difficult it may be, it is the right and responsible long-term decision for leaders to address them.”

Read a blog post from Natasha Crampton, Chief Responsible AI Officer at Microsoft, on Microsoft’s AI safety policies and practices.

  • ©2024 Tudor Rose. All Rights Reserved. Technology Record is published by Tudor Rose with the support and guidance of Microsoft.