Security BSides Las Vegas 2025

A Framework for Evaluating the Security of AI Model Infrastructures
2025-08-04, Siena

As AI continues to reshape global power dynamics, securing AI model weights has become a critical national security challenge. Frontier AI models are expensive to build but cheap to use if stolen, making them prime targets for adversaries like China, Russia, and cybercrime groups. To that end, this talk investigates the security risks of AI model infrastructure, particularly those related to AI model weights (the core learned parameters of AI systems). I introduce a tailored scoring framework to assess the likelihood of model theft across three categories: Cyber Exploitation, Insider Threats, and Supply Chain Attacks. Our work builds on MITRE's ATT&CK and ATLAS frameworks and on the 38 attack vectors and five security levels (SL1-SL5) introduced in RAND's Securing AI Model Weights report. Each category contains several individual attack types, and each attack type is evaluated on technical feasibility, the effectiveness of existing mitigation strategies, and regulatory gaps. Our results are supplemented with insights from expert interviews spanning the cybersecurity, AI, military, intelligence, policy, and legal fields, as well as with existing industry scoring systems like BitSight and RiskRecon. I also draw lessons from historical IP theft in national security domains such as nuclear technology and aerospace. Our research highlights security best practices worth emulating, the most pressing vulnerabilities, and key policy gaps. I also identify the most concerning attack scenarios and provide actionable recommendations to strengthen the security of AI facilities.
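
To make the framework concrete, below is a minimal sketch of how such a scoring scheme could be implemented. The attack names, the 1-5 scale, and the equal weighting of the three evaluation dimensions are illustrative assumptions; the talk's actual rubric, which builds on RAND's SL1-SL5 levels and 38 attack vectors, is not reproduced here.

    from dataclasses import dataclass
    from statistics import mean

    # The three categories named in the talk.
    CATEGORIES = ["Cyber Exploitation", "Insider Threats", "Supply Chain Attacks"]

    @dataclass
    class AttackType:
        """One attack vector, scored on the three evaluation dimensions (1 = low risk, 5 = high risk)."""
        name: str
        category: str
        technical_feasibility: int  # how practical the attack is today
        mitigation_gap: int         # how weak existing mitigations are
        regulatory_gap: int         # how little regulation covers it

        def score(self) -> float:
            # Equal weighting is an assumption; the talk's actual weights are not public.
            return mean([self.technical_feasibility, self.mitigation_gap, self.regulatory_gap])

    def category_scores(attacks: list[AttackType]) -> dict[str, float]:
        """Average attack-type scores per category to rank theft likelihood."""
        return {
            cat: mean(a.score() for a in attacks if a.category == cat)
            for cat in CATEGORIES
            if any(a.category == cat for a in attacks)
        }

    if __name__ == "__main__":
        # Hypothetical example attack types and scores, for illustration only.
        attacks = [
            AttackType("Credential phishing of ML engineers", "Cyber Exploitation", 5, 3, 4),
            AttackType("Malicious insider exfiltrates weights", "Insider Threats", 4, 3, 5),
            AttackType("Compromised training-pipeline dependency", "Supply Chain Attacks", 3, 4, 4),
        ]
        for cat, s in sorted(category_scores(attacks).items(), key=lambda kv: -kv[1]):
            print(f"{cat}: {s:.2f} / 5")

The per-category averages give a simple way to compare theft likelihood across the three categories; a real assessment would also weight dimensions differently per attack type and map the results onto the SL1-SL5 security levels.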

I believe this talk will resonate strongly with the BSides audience, as it addresses an increasingly urgent security challenge of interest to the security community, AI practitioners, and policymakers alike. Attendees will gain a comprehensive understanding of how threat actors are exploiting AI infrastructure today, which vulnerabilities are most urgent to address, and what concrete steps security vendors and policymakers should take to strengthen their security posture in this rapidly evolving threat landscape.

Andrew Kao is a PhD student in economics at Harvard University. His research focuses on the political economy of new technologies, such as AI and the internet. His website is https://andrew-kao.github.io/

Dr. Fred Heiding is a research fellow at the Harvard Kennedy School's Belfer Center. His work focuses on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI course at Harvard Business School and the National and International Security course at the Harvard Kennedy School. He has been invited to brief US House and Senate staff in Washington, DC, on the rising dangers of AI-powered cyberattacks, and he leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences, including Black Hat, DEF CON, and BSides, and published in academic journals such as IEEE Access and professional outlets such as Harvard Business Review and Politico Cyber. He has assisted in the discovery of more than 45 critical computer vulnerabilities (CVEs). In early 2022, Fred received media attention for hacking the King of Sweden and the Swedish European Commissioner.
