Simon Lermen
Simon is an AI security researcher who has worked on AI-powered phishing and on removing safety guardrails from AI models. He is interested in researching how AI agents could pose global catastrophic risks through cyberattacks.
Session
This project investigates how attackers can now use large language models (LLMs) and AI agents to autonomously build phishing infrastructure, including domain registration, DNS configuration, and the hosting of personalized spoofed websites. While earlier research has explored how LLMs can generate persuasive phishing emails, our study shifts the focus to back-end automation of the phishing lifecycle. We evaluate how modern frontier and open-source models, including Chinese models like DeepSeek and Western counterparts such as Claude Sonnet and GPT-4o, perform when tasked with registering phishing domains, configuring DNS records, deploying landing pages, and harvesting credentials. We run these tests both with and without human intervention, and we measure success through metrics such as task completion rate, cost and time requirements, and the amount of human intervention required. By demonstrating how easy and inexpensive it has become to scale phishing infrastructure with AI, this work underscores the growing threat of AI-powered cybercrime and highlights the urgent need for regulatory, technical, and policy countermeasures.
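For concreteness, below is a minimal Python sketch of how the evaluation metrics named above (task completion rate, cost, time, human intervention) could be aggregated per model. The record fields (completed, cost_usd, minutes, interventions) and the sample values are hypothetical illustrations, not results or tooling from the study.

    # Hypothetical per-run evaluation records; field names and values are
    # illustrative assumptions, not data from the project.
    from statistics import mean

    runs = [
        {"model": "gpt-4o",   "completed": True,  "cost_usd": 0.42, "minutes": 11, "interventions": 0},
        {"model": "gpt-4o",   "completed": False, "cost_usd": 0.18, "minutes": 6,  "interventions": 2},
        {"model": "deepseek", "completed": True,  "cost_usd": 0.05, "minutes": 14, "interventions": 1},
    ]

    def summarize(runs, model):
        """Aggregate one model's runs into the metrics used in the study design:
        completion rate, average cost, average time, average interventions."""
        sample = [r for r in runs if r["model"] == model]
        return {
            "completion_rate":   mean(r["completed"] for r in sample),
            "avg_cost_usd":      mean(r["cost_usd"] for r in sample),
            "avg_minutes":       mean(r["minutes"] for r in sample),
            "avg_interventions": mean(r["interventions"] for r in sample),
        }

    for m in ("gpt-4o", "deepseek"):
        print(m, summarize(runs, m))

Tracking interventions as a per-run count, rather than a yes/no flag, lets the same records support both the with-intervention and without-intervention conditions described above.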