Fred Heiding
Dr. Fred Heiding is a research fellow at the Harvard Kennedy School's Belfer Center. His work focuses on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI course at Harvard Business School and for the National and International Security course at the Harvard Kennedy School. He has been invited to brief US House and Senate staff in Washington, DC, on the rising dangers of AI-powered cyberattacks, and he leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences, including Black Hat, DEF CON, and BSides, and published in academic journals such as IEEE Access and professional outlets such as Harvard Business Review and Politico Cyber. He has assisted in the discovery of more than 45 critical vulnerabilities (CVEs). In early 2022, Fred attracted media attention for hacking the King of Sweden and the Swedish European Commissioner.
Sessions
This project investigates how attackers can now use large language models (LLMs) and AI agents to autonomously create phishing infrastructure, including registering domains, configuring DNS, and hosting personalized spoofed websites. While earlier research has explored how LLMs can generate persuasive phishing emails, our study shifts the focus to the back-end automation of the phishing lifecycle. We evaluate how modern frontier and open-source models, including Chinese models like DeepSeek and Western counterparts such as Claude Sonnet and GPT-4o, perform when tasked with registering phishing domains, configuring DNS records, deploying landing pages, and harvesting credentials. Tests are conducted both with and without human intervention, and we measure success through metrics such as task completion rate, cost and time requirements, and the amount of human intervention required. By demonstrating how easy and inexpensive it has become to scale phishing infrastructure with AI, this work underscores the growing threat of AI-powered cybercrime and highlights the urgent need for regulatory, technical, and policy countermeasures.
As artificial intelligence becomes a pillar of economic and strategic power, AI labs are emerging as the next high-value targets for espionage. State and corporate actors have spent decades compromising other critical sectors, such as semiconductors, aerospace, and biotechnology, to steal trade secrets and shift global advantage. In this talk, we present findings from a new analysis of over 200 historical cyber and non-cyber espionage incidents across various industries. By mapping attack patterns, ranging from insider threats to IP theft, onto the realities of AI infrastructure, we demonstrate how the same vulnerabilities now apply to AI labs, particularly around sensitive assets such as model weights, training pipelines, and proprietary data. Drawing on these cross-sector case studies, the talk distills actionable lessons for AI organizations and introduces a tailored threat evaluation framework. We also show how AI-related IP theft differs from theft in other sectors: the extraordinary potential for economic and strategic gains is likely to heighten attackers' incentives and increase the risk to AI organizations.