Addressing the Biosecurity Risks of Artificial Intelligence: Understanding Convergence and Policy Implications
Congresswoman Anna G. Eshoo and UN Secretary-General António Guterres have raised concerns about the potential biosecurity risks posed by the use of artificial intelligence (AI) in both civilian and military applications. The concept of convergence, which describes how AI systems can amplify risks in other technological domains, has become a focal point for policymakers and researchers.
The interaction between AI and domains such as biotechnology, chemical weapons, nuclear weapons, cybersecurity, and conventional weapons systems has raised alarms about the potential for unintended consequences and security threats. For example, AI-assisted identification of virulence factors that could aid the design of novel pathogens, the AI-driven generation of novel chemical agents, and the integration of AI into nuclear command and control systems each present distinct challenges.
Policy recommendations to mitigate convergence risks include funding research to better characterize these risks, implementing approval processes for the deployment of advanced AI systems, and establishing legal liability frameworks for AI developers. Cooperation and coordination across companies and countries are also essential to set common safeguards for AI development and to reduce geopolitical tensions.
As advances in AI and related technologies continue to accelerate, addressing these convergent risks will be crucial for maintaining national and international security. Further research and policy measures to safeguard against potential harms remain paramount in the face of evolving threats in the digital age.