AI Cybersecurity

The GCSCC is undertaking a new Artificial Intelligence (AI) cybersecurity research programme as part of the United Kingdom's Laboratory for AI Security Research (LASR).

Overview

AI and machine learning systems are being adopted rapidly around the world as organisations seek to benefit from this transformative technology. At the same time, as these systems become more pervasive, they are also becoming targets for malicious actors. To prevent such actors from undermining the advantages of AI, the cybersecurity of these systems must be prioritised by the developers and adopters of the technology. However, a lack of AI cybersecurity knowledge and capabilities is limiting our ability to understand and mitigate these risks.

The GCSCC is working to address this challenge through LASR, an AI cybersecurity research programme dedicated to mitigating AI security risks and advancing economic prosperity. Its goal is to generate knowledge and practice that can lead to new cybersecurity policy and capability options to protect AI systems, their users, and the wider ecosystems they operate in.

Our Research Streams

National AI Cybersecurity Capacity

The impact of AI on national and international cyber-resilience requires specific capabilities in, and across, all dimensions of cybersecurity capacity. For this purpose, the GCSCC has drafted a novel National AI Cybersecurity Readiness Metric. This tool enables nations to assess their current state of AI cybersecurity capabilities and to identify priorities for cybersecurity capacity enhancement and investment across all dimensions of national cybersecurity capacity. 

The GCSCC is working with international stakeholders to develop and test the National AI Cybersecurity Readiness Metric and collect evidence of the consequences of capacity building decisions. The research findings will be shared openly with the global community and used to evolve the metric, ensuring that it reflects current knowledge and best practices. 


Nature of AI Security Harms

This research stream focuses on investigating the consequences of AI security incidents beyond the immediate technical effects on accessed digital assets. It aims to understand the propagation of incident harms and illuminate potential risks to individual organisations, industry supply chains and the AI ecosystem more broadly. It will provide a unique and critical lens to enable the exploration of potential consequences, and the impacts of risk mitigation interventions by AI users and policy makers seeking to protect national infrastructure, supply chains and the overall economy. 

AI Model Vulnerability Research

The technology security community lacks a comprehensive understanding of AI system vulnerabilities. This knowledge gap prevents both the application of existing security approaches to AI systems and the development of new methods. Our research will explore specific vulnerabilities and their potential to be exploited, with the aim of guiding practice and policy on their appropriate use. It will examine a range of AI model types, seeking to understand how different AI model architectures vary in their susceptibility to different types of cyberattack, and investigate correlations between the risk and severity of AI model compromise and the training data sets used.

Laboratory for AI Security Research (LASR)

LASR is a collaboration between the UK Government, the Alan Turing Institute, Queen's University Belfast, Plexal and the University of Oxford, designed to bring together world-leading expertise from academia, the national security community and industry.

To have confidence in AI, we need a foundation of trust and an accurate assessment of its impact on security. LASR seeks to drive cutting-edge research into AI cybersecurity that counteracts emerging threats and builds this necessary knowledge.

https://www.youtube.com/embed/ALUcsa9KCKc?si=Fw2UhcEUzL1aybs2


Established as a public-private partnership, the Lab will work with the wider industry to address the threats and opportunities of AI. Through ecosystem building, market consultation, innovation programmes and events, LASR is convening a community of interest around AI security.

We are looking to engage with technology providers, industry adopters, researchers, and investors operating at the intersection of AI and cybersecurity. Find out more about LASR, including information on partners, events and opportunities to engage, on the official website.