How summits in Seoul, France and beyond can galvanize international cooperation on frontier AI safety
Last year, the UK Government hosted the first major global summit on frontier AI safety at Bletchley Park. It focused the world's attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration, new AI Safety Institutes, and the International Scientific Report on Advanced AI Safety.
Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week's AI Seoul Summit. Below, we share some thoughts on how the summit – and future ones – can drive progress towards a common, global approach to frontier AI safety.
AI capabilities have continued to advance at a rapid pace
Since Bletchley, there has been strong innovation and progress across the entire field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all life's molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We have also been working to improve how our models perceive, reason and interact, and recently shared our progress in building the future of AI assistants with Project Astra.
This progress in AI capabilities promises to improve many people's lives, but it also raises novel questions that need to be tackled collaboratively across a number of key safety domains. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we have shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation and self-reasoning. We also released an in-depth exploration of how to align future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.
This work stems from our conviction that we must innovate on safety and governance as fast as we innovate on capabilities – and that both must be done in tandem, continuously informing each other.
Building international consensus on frontier AI risks
Maximizing the benefits of advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for new risks beyond those posed by present-day models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically grounded view.
That is why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit – and we look forward to submitting evidence from our research this year. Over time, this type of effort could become a central input into the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.
We believe these AI summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach to governance. Keeping a unique focus on frontier safety will also ensure these convenings complement, rather than duplicate, other international governance efforts.
Establishing best practices in evaluations and a coherent governance framework
Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and for designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.
This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with AI Safety Institutes in the US and UK, and other stakeholders, on best practices for evaluating frontier models. The AI summits could help scale this work internationally and avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It is critical to avoid fragmentation that could inadvertently harm safety or innovation.
The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We think there is an opportunity over time to build on this towards a common, global approach. An initial priority from the Seoul Summit could be to agree a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.
It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm, and putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
Towards a global approach to frontier AI safety
Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit and look ahead to future summits in France and beyond, we are excited about the opportunity to advance global cooperation on frontier AI safety. We hope these summits will provide a dedicated forum for progress towards a common, global approach. Getting this right is a critical step towards unlocking the tremendous benefits of AI for society.