AI Cybersecurity

The GCSCC is undertaking a new Artificial Intelligence (AI) cybersecurity research program in partnership with a network of leading international experts. 

Overview

AI and machine learning systems are being adopted rapidly around the world as organisations seek to leverage the benefits of this transformative technology. At the same time, as these systems become more pervasive, they are becoming targets for malicious actors. To prevent these actors from eroding the value of AI, the cybersecurity of these systems must be prioritised by the developers and adopters of the technology. However, a lack of AI cybersecurity knowledge and capabilities is undermining our ability to understand and mitigate these risks. 

The GCSCC aims to address this challenge through an AI cybersecurity research program focused on investigating research gaps and building global cybersecurity capacity. The program will review existing frameworks, create new capabilities, and offer insights for policy makers and cybersecurity practitioners. Its goal is to generate knowledge and practice that can lead to new cybersecurity policy and capability options that protect AI systems, their users, and the wider ecosystems in which they operate.

The findings from this program will be shared through the GCSCC’s global network to support research and capacity-building efforts. 

Get Involved

The GCSCC offers multiple opportunities for engagement with the AI Cybersecurity program. If you would like to learn more, please contact us. 

Research Streams

National AI Cybersecurity Capacity

Nations are looking to understand the capacities needed for secure AI adoption. Managing the impact of AI on national and international cyber-resilience requires specific capabilities in, and across, all dimensions of cybersecurity capacity. To this end, the GCSCC has drafted a novel National AI Cybersecurity Readiness Metric: a tool that enables nations to assess the current state of their AI cybersecurity capabilities and to identify priorities for capacity enhancement and investment across all dimensions of national cybersecurity capacity. 

The GCSCC is working with international stakeholders to develop and test the National AI Cybersecurity Readiness Metric and collect evidence of the consequences of capacity building decisions. The research findings will be shared openly with the global community and used to evolve the metric, ensuring that it reflects current knowledge and best practices. 
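While the metric itself is still under development, the core idea of aggregating maturity scores across capacity dimensions can be illustrated with a short sketch. The dimension names, five-stage scale, and aggregation rule below are illustrative assumptions for this page, not the GCSCC's published methodology:

# A minimal, illustrative sketch of how a national readiness metric might
# aggregate per-dimension maturity scores. All names and stages below are
# hypothetical, not the GCSCC's actual metric.
from dataclasses import dataclass
from statistics import mean

# Hypothetical maturity stages, loosely modelled on capacity maturity models.
STAGES = {1: "start-up", 2: "formative", 3: "established",
          4: "strategic", 5: "dynamic"}

@dataclass
class DimensionScore:
    name: str
    score: int  # 1 (start-up) to 5 (dynamic)

def readiness(scores: list[DimensionScore]) -> dict:
    """Aggregate per-dimension maturity into a simple national summary."""
    overall = mean(s.score for s in scores)
    weakest = min(scores, key=lambda s: s.score)
    return {
        "overall_stage": STAGES[round(overall)],
        "overall_score": round(overall, 2),
        "priority_dimension": weakest.name,  # candidate for investment
    }

# A hypothetical national assessment.
assessment = [
    DimensionScore("AI policy and strategy", 3),
    DimensionScore("AI incident response", 2),
    DimensionScore("Workforce and skills", 2),
    DimensionScore("Standards and assurance", 4),
]
print(readiness(assessment))

A real metric would weight dimensions and rest on documented evidence rather than single scores, but the shape of the computation, assessing each dimension and surfacing the weakest as an investment priority, would be similar.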


Nature of AI Security Harms

This research stream investigates the consequences of AI security incidents beyond their immediate technical effects on the digital assets compromised. It aims to understand how incident harms propagate and to illuminate potential risks to individual organisations, industry supply chains and the wider AI ecosystem. This provides a critical lens for exploring potential consequences, and the impact of risk-mitigation interventions, for AI users and policy makers seeking to protect national infrastructure, supply chains and the overall economy. 
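To make the idea of harm propagation concrete, the sketch below walks a hypothetical supply-chain dependency graph outward from a compromised AI supplier to the organisations that could inherit the incident's effects. The organisations and edges are invented for illustration only:

# A toy sketch of harm propagation: starting from a compromised AI supplier,
# walk a (hypothetical) dependency graph to list downstream organisations
# potentially exposed to the incident. Graph and names are illustrative.
from collections import deque

# edges: supplier -> organisations that depend on it
DEPENDS_ON = {
    "model_vendor": ["bank_a", "hospital_b"],
    "bank_a": ["payments_platform"],
    "hospital_b": [],
    "payments_platform": ["retailer_c"],
    "retailer_c": [],
}

def affected(origin: str) -> list[str]:
    """Breadth-first walk of downstream dependants from the incident origin."""
    seen, queue, order = {origin}, deque([origin]), []
    while queue:
        node = queue.popleft()
        for dependant in DEPENDS_ON.get(node, []):
            if dependant not in seen:
                seen.add(dependant)
                queue.append(dependant)
                order.append(dependant)
    return order

print(affected("model_vendor"))
# ['bank_a', 'hospital_b', 'payments_platform', 'retailer_c']

Real harm propagation is of course not binary; the research stream is concerned with how harms attenuate, compound or transform as they cross organisational boundaries, which a simple reachability walk like this cannot capture.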

AI Model Vulnerability Research

The technology security community lacks a comprehensive understanding of AI system vulnerabilities. This knowledge gap prevents both the application of existing security approaches to AI systems and the development of new methods. Our research will explore specific vulnerabilities and their potential to be exploited, with the aim of guiding practice and policy on the appropriate use of AI systems. It will examine various AI model types, seeking to understand how different model architectures vary in their susceptibility to different types of cyberattack, and investigate how the risk and severity of model compromise correlate with characteristics of the training data.
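As one concrete instance of the vulnerability classes this stream will study, the sketch below applies the fast gradient sign method (FGSM, Goodfellow et al. 2015) to a toy logistic-regression classifier, showing how a small, targeted perturbation can push a model toward the wrong output. The model, weights and input are synthetic stand-ins; real attacks target far larger architectures:

# A minimal sketch of one well-known AI model vulnerability: adversarial
# examples via FGSM. A toy logistic-regression "model" is perturbed within
# a small L-infinity budget to degrade its prediction. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1      # toy trained model parameters
x = rng.normal(size=8)              # a benign input

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # P(class = 1)

# FGSM: step each feature in the direction that increases the loss for the
# true label. For logistic loss, dLoss/dx = (p - y) * w, so we follow its sign.
y = 1.0 if predict(x) >= 0.5 else 0.0        # treat current output as truth
eps = 0.25                                   # attack budget (L-infinity)
grad_sign = np.sign((predict(x) - y) * w)
x_adv = x + eps * grad_sign

print(f"clean:       P={predict(x):.3f}")
print(f"adversarial: P={predict(x_adv):.3f}")  # pushed toward the wrong class

Attacks of this kind succeed with different budgets and success rates depending on the model architecture and training data, which is precisely the kind of variation in susceptibility this research stream seeks to characterise.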