GCSCC Hosts AI Cybersecurity Conference ‘Securing the Cyber Future: Cyber Resilience in the Age of AI and Geopolitical Uncertainty’
Artificial Intelligence (AI) is no longer a distant promise: it is rapidly being integrated into business operations and everyday life. As AI capabilities accelerate, so does the complexity of the cyber threat landscape. Rapid technological and geopolitical changes are converging to challenge how we, as nations, businesses and citizens, protect our core values, rights and digital integrity. Cybersecurity has reached a turning point that demands new methods for building an intelligent and resilient digital future.
To help meet this challenge, the Global Cyber Security Capacity Centre (GCSCC) brought together cybersecurity experts from industry, government, academia and the international community for the 2025 AI Cybersecurity Conference in Oxford, UK, from 29 September to 1 October. Under the theme ‘Securing the Cyber Future: Cyber Resilience in the Age of AI and Geopolitical Uncertainty’, speakers discussed a range of topics related to AI and cybersecurity governance.
Side events before and after the main conference day showcased individual projects from the United Kingdom’s Laboratory for AI Security Research (LASR) initiative.
Managing AI Security Risks at a Global Level
The GCSCC AI Cybersecurity Conference 2025 created an opportunity for stakeholders within the AI and cybersecurity communities to connect and discuss collaborative strategies for managing AI cybersecurity risks and opportunities at a global level. Taken together, the sessions facilitated a rich dialogue on AI cybersecurity risks, novel approaches and areas for future collaboration.
Across four sessions, speakers explored global trends in AI cybersecurity and the challenges these trends pose for nations across the international environment. The sessions were:
- Diverging Approaches to AI and Cybersecurity in Global Governance
- Inequalities of the AI Supply Chain
- The Evolution of Cybercrime with AI
- Shaping the Future of Global AI Cybersecurity
Pathways to Advancing Global AI Security
Through the sessions, participants raised potential next steps for advancing global AI cybersecurity that may inform future research, collaboration and capacity building. At a high level, these include:
- Foster global dialogue and negotiations.
- Align international processes.
- Break with binary thinking.
- Advance a collective security mindset.
- Bridge silos.
- Develop interoperable solutions.
- Build with security foundations.
- Establish global best practice.
- Review rights frameworks.
- Strengthen law enforcement networks.
- Diffuse innovation.
- Lift global education.
Read the Conference Outcomes Report
For a comprehensive overview of the trends and next steps that emerged during the conference, download the full Conference Outcomes Report.
Pre-event: AI Cybersecurity Research Insights
Ahead of the main conference, researchers from LASR and partner institutions across Oxford and beyond gathered to share cutting-edge AI and cybersecurity research. The day opened with a panel on the complexities of the AI supply chain, emphasising the need for security-by-design, modernisation of legacy systems, and greater transparency through initiatives such as AI bills of materials. Afternoon sessions showcased research on emerging AI-enabled threats, including Subversive Alignment Injection and jailbreak prompts, and explored defences like red teaming and zero-trust architectures. The event closed with updates on the LASR Opportunity Call, supporting UK SMEs in developing new AI security capabilities through funding, mentorship, and technical resources.
Post-event: Measuring National AI Readiness
Following the main conference day, attendees were invited to a tabletop exercise on the GCSCC’s National AI Cybersecurity Readiness Metric (the Metric), a new tool that will enable countries to benchmark their existing readiness and provide an evidence base for future AI cybersecurity decision-making.
The event began with government representatives from Mongolia and Cyprus speaking about their experiences undertaking the first trials of the tool in 2025. In a subsequent tabletop exercise, participants were presented with two AI-driven cybersecurity scenarios and asked to use the Metric to inform their strategic decision-making. Through this exercise, participants were able to familiarise themselves with the tool and test its applicability, while feeding their insights back into the research and refinement of the Metric.
Laboratory for AI Security Research (LASR)
LASR is a collaboration between the UK Government, the Alan Turing Institute, Queen’s University Belfast, Plexal and the University of Oxford, designed to bring together world-leading expertise from academia, the national security community and industry.
Find out more about LASR, including information on partners, events and opportunities to engage, on the official website.
