Security BSides Las Vegas 2025

Securing AI Infrastructure: Lessons from National Cybersecurity Strategies and Attacks Against Other Critical Sectors
2025-08-04, Siena

As artificial intelligence becomes a pillar of economic and strategic power, AI labs are emerging as the next high-value targets for espionage and cyberattacks. State and corporate actors have compromised other critical sectors, such as semiconductors, aerospace, and biotechnology, for decades to steal trade secrets and shift global advantage. Leading voices are now starting to question the security of AI-related infrastructure. In this talk, we discuss findings from over 200 previous cyber and espionage incidents across various industries, shedding light on how and where the risks apply to the supply chain of AI models. We examine the most feasible attack patterns against sensitive assets such as model weights, training pipelines, and proprietary data. Then, we distill actionable lessons for mitigating the most pressing threats. We also demonstrate how AI-related IP theft differs from theft in other sectors: the extraordinary potential for economic and strategic gains heightens attackers' incentives and increases the risk to AI organizations.

To complement the insights from historical attacks and evaluate present-day infrastructure security, we draw on recent research analyzing the national cybersecurity strategies of cyber powers such as the US, Australia, Singapore, and the United Kingdom. These strategies offer diverse policy approaches to defending critical infrastructure, assigning cybersecurity responsibilities, and engaging industry in proactive security efforts. While there is no universal blueprint, several recurring practices, such as workforce development, public-private collaboration, and clear cyber governance, can inform how governments and AI developers protect AI systems. We highlight which of these lessons translate effectively to the unique challenges of AI infrastructure and conclude with recommendations for closing current policy gaps and preparing for future threats.

Dr. Fred Heiding is a research fellow at the Harvard Kennedy School’s Belfer Center. His work focuses on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI course at Harvard Business School and the National and International Security course at the Harvard Kennedy School. He has been invited to brief US House and Senate staff in DC on the rising dangers of AI-powered cyberattacks, and he leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences, including Black Hat, DEF CON, and BSides, and published in academic journals such as IEEE Access as well as professional outlets like Harvard Business Review and Politico Cyber. He has assisted in the discovery of more than 45 critical computer vulnerabilities (CVEs). In early 2022, Fred drew media attention for hacking the King of Sweden and the Swedish European Commissioner.

Andrew Kao is a PhD student in economics at Harvard University. His research focuses on the political economy of new technologies, such as AI and the internet. His website is https://andrew-kao.github.io/