Today we are announcing extended cooperation with the company British AI Security Institute (AISI) through a new Memorandum of Understanding focusing on fundamental research on safety and security to help ensure the safe development of AI and benefits for all.
The research partnership with AISI is an important part of our wider work with the UK Government to accelerate safe and beneficial progress in artificial intelligence.
Building on a foundation of cooperation
Artificial intelligence has enormous potential to benefit humanity by helping to cure diseases, accelerate scientific discoveries, create economic prosperity and fight climate change. For these benefits to be achieved, we must put safety and responsibility at the heart of development. Evaluating our models against a broad spectrum of potential threats remains a key part of our security strategy, and external partnerships are an important element of this work.
That's why we've partnered with the UK's AISI since its inception in November 2023 to test our most efficient models. We are deeply committed to UK AISI goal equipping governments, industry and wider society with scientific knowledge of the potential threats posed by advanced AI, as well as potential solutions and mitigation measures.
We are actively working with AISI to develop more robust AI model assessments, and our teams have collaborated on security research to advance the field, including recent work on Chain-of-thought monitoring: A new and fragile opportunity for AI security. Building on this success, today we are expanding our testing partnership to include broader, more fundamental research in a variety of areas.
What is partnership?
As part of this new research partnership, we are expanding our collaboration to include:
- Providing access to our proprietary models, data and ideas to accelerate research progress
- Joint reports and publications enabling the exchange of conclusions with the scientific community
- More collaborative safety and security research, combining the expertise of our teams
- Technical discussions to address complex security challenges
Key research areas
Our joint research with AISI focuses on key areas where Google DeepMind's expertise, interdisciplinary teams and years of pioneering responsible research can help make AI systems more secure:
Monitoring AI reasoning processes
We will work on techniques to monitor the “thinking” of an AI system, commonly referred to as its chain of thought (CoT). This work is a continuation previous Google DeepMind study ours too recent collaboration on this topic with AISI, OpenAI, Anthropic and other partners. CoT monitoring helps us understand how the AI system generates responses, complementing interpretive research.
Understanding social and emotional impacts
We will work together to explore the ethical implications of socio-affective discrepancies; this means that AI models may behave in ways that are inconsistent with human well-being, even if they technically follow instructions correctly. This is what the research will be based on existing Google DeepMind work that helped define this critical area of AI security.
Assessment of economic systems
We will explore the potential impact of AI on economic systems by simulating real-world tasks in a variety of environments. Experts will assess and review these tasks, then categorize them along dimensions such as complexity or representativeness to help predict factors such as long-term impact on the labor market.
Collaborate to realize the benefits of artificial intelligence
Our partnership with AISI is part of our commitment to harness the benefits that artificial intelligence can bring to humanity while mitigating potential risks. Our broader strategy includes predictive research, extensive security training that goes hand in hand with capability development, rigorous testing of our models, and the development of better tools and frameworks to understand and mitigate risk.
Strong internal governance processes are also essential for the safe and responsible development of AI, as is collaboration with independent external experts who bring fresh perspectives and diverse expertise to our work. Google DeepMind's Responsibility and Safety Council works with multiple teams to monitor emerging risks, review ethical and security assessments, and implement appropriate technical solutions and policies. We also work with other external experts such as Apollo Research, Vaultis, Dreadnode and more to conduct extensive testing and evaluation of our models, including Gemini 3, our most intelligent and secure model ever.
Additionally, Google DeepMind is a proud founding member Limit models forumand also Artificial Intelligence Partnershipwhere we focus on ensuring the safe and responsible development of pioneering artificial intelligence models and strengthening cooperation on important security issues.
We hope that our expanded partnership with AISI will enable us to develop a more robust approach to AI security for the benefit not only of our own organizations, but also the broader industry and all those who interact with AI systems.

















