AI Supply Chain

The GCSCC is conducting research into security risks arising from the AI supply chain. This research aims to generate insights into where risks are most significant, how they may propagate across the ecosystem, and which mitigations are most effective in reducing both organisational and systemic exposure.

Context:

  • The AI supply chain is multi-layered, encompassing hardware, compute infrastructure, training and inference data, models, deployment environments, and associated tooling.
  • A range of risks has already emerged across this stack, including the distribution of malicious models via open-source platforms (e.g., Hugging Face), attacks on software dependencies (e.g., PyTorch), and malware delivery through “skills” in agentic ecosystems.
  • An organisation’s exposure to these risks depends on how it adopts AI systems. For example, risks differ significantly between organisations training models on-premises, hosting pre-trained models, or relying on third-party AI APIs. Nonetheless, industry surveys indicate that supply-chain compromise was a common driver of security incidents in 2025.
  • The AI supply chain is also highly concentrated, with a small number of providers dominating key layers of the stack, increasing the potential for systemic risk.

Research Initiatives

1. AI Supply Chain Taxonomy 

In partnership with Plexal, we have been conducting workshop-based research to understand security risks and challenges faced by organisations adopting AI. This work has produced a report that sets out a taxonomy of AI supply chain components, alongside associated security risks and failure points across the stack.

Designed to foster a shared understanding of what AI systems are made of, who builds them and how they can be made more resilient, this taxonomy seeks to provide a transparent and common language for the AI supply chain across sectors. While no two AI deployments are the same, this shared language is essential in enabling a precise discussion of risks, dependencies and security challenges across ecosystems. 

Visit the official LASR website for more information on this report and more insights on AI cybersecurity. 

2. AI Bill of Materials

Supply-chain risks are emerging as a priority concern as organisations rush to adopt AI technology. Digital technology has long faced the concern that it might be targeted for compromise because of its position in an ecosystem: sophisticated threat actors may influence the configuration and implementation of software and hardware so that it is purchased with exploitable backdoors already in place for later attacks. This concern holds for AI systems too, perhaps more acutely, because the fast pace of development and the inherent novelty of the technology are not yet supported by a mature cybersecurity controls regime that could build confidence in deployment integrity. In this context, Bill of Materials (BoM) initiatives for software can support AI procurement and, in the face of an incident, help triage and quickly identify inherited vulnerabilities in the supply chain, informing recovery choices, future resilience planning, and potentially attack attribution and intent.
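The triage role described above can be sketched in a few lines. The example below is illustrative only: the component names, versions, and structure are hypothetical, loosely modelled on the CycloneDX 1.5 "machine-learning-model" component type, and the lookup helper is an assumption of ours, not part of any standard tooling.

```python
# Hypothetical minimal AI Bill of Materials, loosely modelled on the
# CycloneDX 1.5 "machine-learning-model" component type. All names,
# versions, and relationships below are illustrative, not real artefacts.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "machine-learning-model", "name": "sentiment-classifier",
         "version": "2.1.0",
         "properties": [{"name": "trainingData", "value": "reviews-corpus-v3"}]},
        {"type": "library", "name": "torch", "version": "2.1.0"},
        {"type": "data", "name": "reviews-corpus-v3", "version": "3.0"},
    ],
}

def affected_components(bom, advisory_name):
    """Triage helper (our own sketch): list BOM components matching a
    named advisory, e.g. after an upstream compromise is disclosed."""
    return [c["name"] for c in bom["components"] if c["name"] == advisory_name]

# If an advisory lands for the "torch" dependency, the BOM shows
# immediately whether this AI system inherits the exposure.
print(affected_components(aibom, "torch"))  # -> ['torch']
```

The point is not the format itself but the property it enables: a machine-readable inventory lets an adopter answer "are we exposed?" in minutes rather than through a manual audit.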

This paper explores emerging policies that seek to build trust in the AI supply chain by using BoMs to enhance the trustworthiness of its component parts. It aims to understand their risk-mitigation capabilities and what is achievable in terms of strengthening security postures across the AI supply chain and the security of adopted AI systems. The research is based on interviews with representatives of the main current initiatives, using thematic analysis to assess how effectively current policy designs help protect economies and critical infrastructure. Initial findings suggest that efforts are converging on defining governance positions under major international technical standards bodies to bring transparency and interoperability across the supply chain.

Preliminary Findings: 
  • Urgent topic: SBOMs are regulatory requirements in some countries, entailing commercial restrictions for exporting organisations.
  • Uncoordinated efforts with similar routes: Existing initiatives aim to define governance stances under major international technical standards bodies to promote transparency and interoperability.
  • Not reinventing the wheel: AI is a subset of software, and existing effective cybersecurity frameworks could be revised to address the new nature of risks posed by AI technologies.
  • Main challenges: The absence of a global repository of AI vulnerabilities, and the difficulty of attesting the integrity of all elements forming an AI system.
  • Necessary but not sufficient: Complementary policies are required to enhance security postures across the AI supply chain.

3. Supply Chain Ecosystem Modelling 

There is clear potential for systemic harms – affecting multiple organisations simultaneously – to arise from cyber-attacks on the AI supply chain, particularly given its high market concentration and interdependence. We are developing modelling approaches to explore the nature of these systemic risks, including how ecosystem dynamics (such as concentration and interconnectedness) influence harm propagation, and how different mitigation strategies may reduce risk.
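As a toy illustration of why concentration matters for systemic risk, consider organisations that each depend on one upstream AI provider, and an attacker who compromises the most-depended-upon provider. This is a minimal sketch under our own simplifying assumptions (single dependency per organisation, illustrative market shares), not the centre's actual model:

```python
import random

def worst_case_harm(num_orgs, provider_weights, trials=2000, seed=0):
    """Average fraction of organisations harmed when an attacker
    compromises the single most-depended-upon provider.
    Each organisation depends on one provider, chosen by market share."""
    rng = random.Random(seed)
    providers = list(range(len(provider_weights)))
    total = 0.0
    for _ in range(trials):
        deps = rng.choices(providers, weights=provider_weights, k=num_orgs)
        # Harm from compromising the provider with the most dependants.
        biggest = max(deps.count(p) for p in providers)
        total += biggest / num_orgs
    return total / trials

# Illustrative shares: one provider holds 90% of the market vs an even split.
concentrated = worst_case_harm(100, [0.90, 0.05, 0.05])
diversified = worst_case_harm(100, [1 / 3, 1 / 3, 1 / 3])
print(round(concentrated, 2), round(diversified, 2))
```

Even in this crude setting, a single compromise in the concentrated market harms roughly 90% of organisations, against roughly a third in the diversified one: the expected number of compromised organisations is unchanged, but concentration shifts the harm into rarer, far larger systemic events.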