How Summits in Seoul, France and Beyond Can Galvanize International Cooperation on Frontier AI Safety
Last year, the UK government hosted the first major global summit on frontier AI safety at Bletchley Park. It focused the world’s attention on rapid progress at the frontier of artificial intelligence development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration, new AI Safety Institutes, and the International Scientific Report on Advanced AI Safety.
Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week’s AI Seoul Summit. Below we share some thoughts on how the summit – and future ones – can drive progress toward a common, global approach to frontier AI safety.
Artificial intelligence capabilities have continued to advance at a rapid pace
Since Bletchley, there has been strong innovation and progress across the field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all of life’s molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We’re also working to improve how our models perceive, reason and interact, and recently shared our progress on building the future of AI assistants with Project Astra.
This progress in AI capabilities promises to improve many people’s lives, but it also raises novel questions that need to be tackled collaboratively across a number of key safety areas. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation and self-reasoning. We also released an in-depth exploration of aligning future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.
This work is driven by our conviction that we need to innovate on safety and governance as fast as we innovate on capabilities – and that both must be done in tandem, continuously informing and reinforcing each other.
Building International Consensus on Frontier AI Risks
Maximizing the benefits of advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for new risks beyond those posed by today’s models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically grounded view.
That’s why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit – and we look forward to contributing findings from our research later this year. Over time, this kind of effort could become a central input to the summit process and, if successful, we believe it should be given more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.
We believe these AI summits can provide a dedicated forum for building international consensus and a common, coordinated approach to governance. Keeping a unique focus on frontier safety will also ensure these convenings are complementary to, rather than duplicative of, other international governance efforts.
Establishing best practices in evaluations and a coherent governance framework
Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and the design of appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.
This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with the US and UK AI Safety Institutes and other stakeholders on best practices for evaluating frontier models. The AI summits could help scale this work internationally and avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It’s critical that we avoid fragmentation that could inadvertently harm safety or innovation.
The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We believe there is an opportunity over time to build on this towards a common, global approach. An initial priority from the Seoul Summit could be to agree a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI benchmarks and evaluation approaches.
It will also be important to develop common frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm, and putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and engage with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
Towards a global approach to frontier AI safety
Many of the potential risks that could emerge from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we’re excited by the opportunity to advance global collaboration on frontier AI safety. We hope these summits will provide a dedicated forum for progress towards a common, global approach. Getting this right is a critical step towards unlocking the enormous benefits of AI for society.