Security BSides Las Vegas 2025

Fred Heiding

Dr. Fred Heiding is a research fellow at the Harvard Kennedy School's Belfer Center. His work focuses on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI course at Harvard Business School and for the National and International Security course at the Harvard Kennedy School. He has been invited to brief US House and Senate staff in Washington, DC, on the rising dangers of AI-powered cyberattacks, and he leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences, including Black Hat, DEF CON, and BSides, and published in academic journals such as IEEE Access and professional outlets such as Harvard Business Review and Politico Cyber. He has assisted in the discovery of more than 45 critical vulnerabilities (CVEs). In early 2022, Fred received media attention for hacking the King of Sweden and the Swedish European Commissioner.


Sessions

08-04
17:00
45min
Automating Phishing Infrastructure Development Using AI Agents
Fred Heiding, Simon Lermen

This project investigates how attackers can now use large language models (LLMs) and AI agents to autonomously create phishing infrastructure, such as domain registration, DNS configuration, and hosting of personalized spoofed websites. While earlier research has explored how LLMs can generate persuasive phishing emails, our study shifts the focus to back-end automation of the phishing lifecycle. We evaluate how modern frontier and open-source models, including Chinese models like DeepSeek and Western counterparts such as Claude Sonnet and GPT-4o, perform when tasked with registering phishing domains, configuring DNS records, deploying landing pages, and harvesting credentials. The tests are conducted both with and without human intervention, and we measure success through metrics like task completion rate, cost and time requirements, and the amount of human intervention required. By demonstrating how easy and low-cost it has become to scale phishing infrastructure with AI, this work underscores the growing threat of AI-powered cybercrime and highlights the urgent need for regulatory, technical, and policy countermeasures.
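The success metrics named in the abstract (task completion rate, cost and time requirements, and human intervention) could be tallied from per-task run records roughly as sketched below. This is a minimal illustration only; the task names, record fields, and numbers are assumptions for demonstration, not data or code from the study.

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    """Hypothetical record of one agent attempt at an infrastructure task."""
    task: str                # e.g. "register_domain", "configure_dns" (illustrative)
    completed: bool          # did the agent finish the task?
    cost_usd: float          # API + service spend for the attempt
    minutes: float           # wall-clock time for the attempt
    human_interventions: int # times a human had to step in

def summarize(runs):
    """Aggregate metrics of the kind the abstract describes:
    completion rate, total cost/time, and human interventions."""
    n = len(runs)
    return {
        "completion_rate": sum(r.completed for r in runs) / n,
        "total_cost_usd": round(sum(r.cost_usd for r in runs), 2),
        "total_minutes": sum(r.minutes for r in runs),
        "human_interventions": sum(r.human_interventions for r in runs),
    }

# Illustrative runs (fabricated values, for the sketch only).
runs = [
    AgentRun("register_domain", True, 1.20, 3.0, 0),
    AgentRun("configure_dns", True, 0.05, 2.0, 1),
    AgentRun("deploy_landing_page", False, 0.10, 6.0, 2),
]
summary = summarize(runs)
```

Comparing such summaries across models, and between the with- and without-intervention conditions, is what lets the study quantify how cheap and autonomous the pipeline has become.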

Ground Truth
Siena
08-04
18:00
45min
A Framework for Evaluating the Security of AI Model Infrastructures
Fred Heiding, Andrew Kao

As AI continues to reshape global power dynamics, securing AI model weights has become a critical national security challenge. Frontier AI models are expensive to build but cheap to use if stolen, making them prime targets for cyber theft. To that end, this talk investigates the security risks of AI model infrastructure, particularly those related to AI model weights (the core learned parameters of AI systems). We introduce a tailored scoring framework to assess the likelihood of model theft via three categories: Cyber Exploitation, Insider Threats, and Supply Chain Attacks. Our work builds on MITRE's ATT&CK and ATLAS frameworks and the 38 attack vectors and five security levels (SL1-SL5) introduced in RAND's Securing AI Model Weights report. Each category contains several individual attack types, and each attack type is evaluated based on technical feasibility, the effectiveness of existing mitigation strategies, and regulatory gaps. Our results are supplemented with insights from expert interviews spanning the cybersecurity, AI, military, intelligence, policy, and legal fields, as well as with existing industry scoring systems like BitSight and RiskRecon. Our research highlights security best practices worth emulating, the most pressing vulnerabilities, and key policy gaps.
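A scoring rollup of the kind the abstract describes, with attack types grouped under the three categories and rated along the three dimensions, might look like the sketch below. The three category names come from the abstract; the specific attack types, the 1-5 scale, and all scores here are illustrative assumptions, not the framework's actual contents.

```python
# Illustrative assessment: categories are from the talk; attack types and
# scores (1 = low risk contribution, 5 = high) are fabricated for the sketch.
ASSESSMENT = {
    "Cyber Exploitation": {
        "remote exploit of serving API": {"feasibility": 4, "mitigation_gap": 3, "regulatory_gap": 4},
    },
    "Insider Threats": {
        "weight exfiltration by employee": {"feasibility": 3, "mitigation_gap": 4, "regulatory_gap": 5},
    },
    "Supply Chain Attacks": {
        "compromised training dependency": {"feasibility": 3, "mitigation_gap": 3, "regulatory_gap": 4},
    },
}

DIMENSIONS = ["feasibility", "mitigation_gap", "regulatory_gap"]

def category_scores(assessment):
    """Average each dimension over the attack types in a category,
    then average the dimensions into a single overall score."""
    out = {}
    for cat, attacks in assessment.items():
        out[cat] = {
            d: sum(a[d] for a in attacks.values()) / len(attacks)
            for d in DIMENSIONS
        }
        out[cat]["overall"] = sum(out[cat][d] for d in DIMENSIONS) / len(DIMENSIONS)
    return out

scores = category_scores(ASSESSMENT)
```

Averaging is only one possible aggregation choice; a real framework might instead weight dimensions, take the maximum across attack types, or map scores onto discrete security levels like RAND's SL1-SL5.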

Ground Truth
Siena