Over one hundred representatives from across the globe convened in the UK on 1-2 November 2023 at the Global AI Safety Summit. The focus of the Summit was to discuss how best to manage the risks posed by the most recent advances in AI. However, it was the “Bletchley Declaration”, announced at the start of the Summit, which truly emphasized the significance governments are attributing to these issues.
The “Bletchley Declaration” – described by the UK Government as a ‘world-first’ agreement – was endorsed by 28 countries (including the US, Saudi Arabia, China and the UK) and the EU. The Declaration signifies a collective commitment to proactively manage potential risks associated with so-called “frontier AI” (i.e., highly capable general-purpose AI models) to ensure such models are developed and deployed in a safe and responsible way. In particular, the signatories commit through the Declaration to identify AI safety risks (primarily through scientific and evidence-based research) and to build risk-based policies to ensure safety in light of such risks. The Declaration recognizes the potential for differing approaches in order to achieve these aims, but stresses the importance of international cooperation.
Importantly for businesses, the Declaration repeatedly acknowledges the “enormous global opportunities” presented by AI and the need to consider “a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI.”
Key Takeaways from the Global AI Safety Summit
The inaugural Summit brought together policymakers, academics and executives from leading AI companies across the globe to address the pressing concerns around the responsible development of AI, and has been hailed as a “diplomatic breakthrough.”
The Summit was guided by five key aims: (1) to reach a shared understanding of the risks posed by frontier AI; (2) to begin building a framework for international collaboration; (3) to identify appropriate safety measures for private organizations; (4) to find areas for research collaboration; and (5) to showcase positive use-cases for AI.
During the Summit, discussions and debates were held on various aspects of AI safety, including the potential risks and ethical concerns posed by these technologies. Whilst the Summit was closed-door, summaries of the various roundtable discussions have been published (Day 1 summaries and Day 2 summaries). At a high-level:
– Roundtable 1: Risks to Global Safety from Frontier AI Misuse. The discussion focused on the global safety risks posed by frontier AI, including the risks to biosecurity and cybersecurity. The Roundtable called for urgent cross-sector collaboration (i.e., between governments, academics and industry) to acknowledge and act on these risks.
– Roundtable 2: Risks from Unpredictable Advances in Frontier AI Capability. The second Roundtable focused on the unpredictability of frontier AI capability as models are rapidly scaled. The Roundtable acknowledged the huge benefits these capabilities are likely to bring to areas such as healthcare, but raised concerns around the parallel creation of significant risks. Amongst other things, the Roundtable called for rigorous safety-testing in secure conditions and close monitoring of emerging risks.
– Roundtable 3: Risks from Loss of Control over Frontier AI. This Roundtable sought to address the potential in the future for existential risks posed by “very advanced” AI – a possibility which the UK Prime Minister had, a few days prior, cautioned was not a present threat. The Roundtable again concluded with a need for rigorous safety-testing in secure conditions and the need for further work to understand how loss of control could come about.
– Roundtable 4: Risks from the Integration of Frontier AI into Society. The Roundtable acknowledged the risks posed by frontier AI to democracy, human rights, civil rights, fairness and equality (e.g., disruption to elections) and flagged the need for investment in basic research to avoid missing out on the opportunity to use AI to solve global problems. The Roundtable recommended the inclusion of a wide cross-section of the general public in such research.
– Roundtable 5: What should frontier AI developers do to scale capability responsibly? This Roundtable stressed the need for further evolution of AI safety policies – in months, not years. In particular, the Roundtable confirmed that company policies do not obviate the need for governments to set standards and regulate, e.g., through the UK and US AI Safety Institutes.
– Roundtable 6: What should national policymakers do in relation to the risk and opportunities of AI? Reiterating the multiplicity of risks discussed in the earlier Roundtables, this discussion highlighted the importance of balancing the risks and opportunities. In particular, the Roundtable recognized that to overcome the global challenges there is a need for governments to work together “including where national circumstances differ”. Governance should be “rapid, agile and innovative.”
– Roundtable 7: What should the International Community do in relation to the risks and opportunities of AI? The Roundtable concluded that in the next year the priorities for international collaboration with respect to frontier AI are: (i) to develop a shared understanding of frontier AI capabilities and risks; (ii) to develop a coordinated approach to safety research and model evaluations; and (iii) to develop international collaborations aimed at ensuring the benefits of AI are shared by all.
– Roundtable 8: What should the Scientific Community do in relation to the risks and opportunities of AI? The Roundtable discussed the need to understand existing risks of current frontier AI models as well as the need for scientists to work closely with governments and the general public when conducting this research. The Roundtable also warned against the concentration of power.
– Roundtable 9: Priorities for international attention on AI over the next 5 years to 2028. The Roundtable stressed that, in order to realize the vast opportunities AI has the potential to bring, it is necessary to invest in supporting skills development for the wider public alongside enhancing technical capability within governments.
– Roundtables 10 & 11: Creating actions and next steps for future collaboration. These Roundtables were focused on future international collaboration, i.e., following the Summit. Rapid collaboration is in particular needed in the context of AI-powered disinformation and deepfakes given the number of elections scheduled to take place next year. Collaboration is also needed to “ensure that all parts of the world realise the truly transformative potential of AI for Good.”
The Summit served as a platform to foster global awareness and engagement on safety issues relating to AI, promoting responsible development and deployment of AI, while prioritizing ethical considerations and ensuring the protection of human interests.
A virtual mini-Summit will be hosted by South Korea in 6 months’ time with the next full in-person Summit being hosted by France towards the end of 2024. In the meantime, it seems governments have a significant amount of work ahead of them should they aim to accomplish all the goals set during the Summit.
This post is as of the posting date stated above. Sidley Austin LLP assumes no duty to update this post or post about any subsequent developments having a bearing on this post.