Security BSides Las Vegas 2025

Securing AI Infrastructure: Lessons from Cyberattacks, Espionage, and Insider Threats Against Other Critical Sectors
2025-08-04, Siena

As artificial intelligence becomes a pillar of economic and strategic power, AI labs are emerging as the next high-value targets for espionage. For decades, state and corporate actors have compromised other critical sectors, such as semiconductors, aerospace, and biotechnology, to steal trade secrets and shift global advantage. In this talk, we present findings from a new analysis of over 200 historical cyber and non-cyber espionage incidents across various industries. By mapping attack patterns, from insider threats to IP theft, onto the realities of AI infrastructure, we demonstrate how the same vulnerabilities now apply to AI labs, particularly around sensitive assets such as model weights, training pipelines, and proprietary data. Drawing on these cross-sector case studies, the talk distills actionable lessons for AI organizations and introduces a tailored threat evaluation framework. We also show how AI-related IP theft differs from theft in other sectors: the extraordinary economic and strategic gains at stake are likely to heighten attacker incentives and increase the risk to AI organizations.
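
The abstract mentions a tailored threat evaluation framework but does not describe its internals, so the sketch below is purely illustrative: a toy risk score computed over the AI assets the abstract names. The asset weights, threat vectors, likelihood and impact values, and the multiplicative scoring rule are all assumptions made for this example, not the framework presented in the talk.

# Illustrative sketch only: a toy threat-evaluation score for AI assets.
# Assets, threat vectors, weights, and the scoring rule are assumptions,
# NOT the framework introduced in the talk.
from dataclasses import dataclass

@dataclass
class Threat:
    vector: str        # e.g., "insider", "cyber intrusion" (assumed categories)
    likelihood: float  # assumed 0-1 estimate of occurrence
    impact: float      # assumed 0-1 estimate of damage if successful

# Hypothetical sensitivity weights for the assets named in the abstract.
ASSET_WEIGHTS = {
    "model weights": 1.0,
    "training pipelines": 0.8,
    "proprietary data": 0.9,
}

def risk_score(asset: str, threat: Threat) -> float:
    """Toy risk score: asset sensitivity x likelihood x impact."""
    return ASSET_WEIGHTS[asset] * threat.likelihood * threat.impact

threats = [
    Threat("insider", likelihood=0.3, impact=0.9),
    Threat("cyber intrusion", likelihood=0.5, impact=0.7),
]

for asset in ASSET_WEIGHTS:
    for t in threats:
        print(f"{asset:18s} | {t.vector:15s} | {risk_score(asset, t):.2f}")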


Dr. Fred Heiding is a research fellow at the Harvard Kennedy School’s Belfer Center. His work focuses on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI course at Harvard Business School and for the National and International Security course at the Harvard Kennedy School. He has been invited to brief US House and Senate staff in Washington, DC, on the rising dangers of AI-powered cyberattacks, and he leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences, including Black Hat, Defcon, and BSides, and published in academic journals such as IEEE Access and professional outlets such as Harvard Business Review and Politico Cyber. He has assisted in the discovery of more than 45 critical vulnerabilities (CVEs). In early 2022, Fred received media attention for hacking the King of Sweden and the Swedish European Commissioner.

Andrew Kao is a PhD student in economics at Harvard University. His research focuses on the political economy of new technologies, such as AI and the internet. His website is https://andrew-kao.github.io/