Enhancing Resilience through Common Language
In response to this challenge, a new report from the Laboratory for AI Security Research (LASR), produced by the University of Oxford and Plexal, presents a working taxonomy of the AI supply chain. The taxonomy is designed to foster a shared understanding of what AI systems are made of, who builds them and how they can be made more resilient, providing a transparent, common language for the AI supply chain across sectors.
While no two AI deployments are the same, this shared language is essential for precise discussion of risks, dependencies and security challenges across ecosystems. Building on principles from similar recent studies, the taxonomy is deliberately intended to support cross-sector understanding and to address blind spots by shifting from component-level to systemic analysis.
As demonstrated in a series of case studies included in the report, this holistic approach is intended to help organisations assess risk continuously and consistently whilst implementing safeguards at the system level of their AI supply chain.
Next Steps for AI Supply Chain Security
A working taxonomy of the AI supply chain provides the foundation for a more strategic and systematic approach to AI supply chain security, but more work is needed. As AI becomes embedded across critical national infrastructure and commercial systems, future efforts must focus on adapting existing supply chain risk governance and cyber security frameworks to AI contexts.
Progress will depend on improving transparency, clarifying accountability across stakeholders, and creating incentives for secure practices throughout the supply chain. Investment in education and awareness is also essential to build shared understanding and capability as AI systems become more advanced and complex. Ultimately, advancing secure AI adoption will require coordinated, cross-sector collaboration and the development of practical, scalable and adaptable approaches to managing evolving risks.