Devising and detecting spear phishing using data scraping, large language models, and personalized spam filters
Fred Heiding, Simon Lermen
We previously demonstrated how large language models (LLMs) excel at creating phishing emails (https://www.youtube.com/watch?v=yppjP4_4n40). We now continue this research by demonstrating how LLMs can be used to build a self-improving phishing bot that automates all five phases of a phishing campaign: collecting targets, gathering information about them, creating emails, sending emails, and validating the results. We evaluate the tool using a factorial design with 200 randomly selected participants recruited for the study. First, we compare the success rate (measured by whether a recipient clicks a link in the email) of our AI-phishing tool against phishing emails created by human experts. Then, we show how the same tool can be used to counter AI-enabled phishing bots by powering personalized spam filters and a digital footprint cleaner that helps users curate the information they share online. We hypothesize that emails created by our fully automated AI-phishing tool will achieve a click-through rate similar to those written by human experts, while reducing the cost by up to 99%. We further hypothesize that the digital footprint cleaner and personalized spam filters will yield tangible security improvements at minimal cost.
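As a rough illustration of the defensive side, the sketch below shows one way a personalized spam filter could work: the filter prompts an LLM with the recipient's known public footprint and asks whether an incoming email exploits that information as a spear-phishing lure. The `UserProfile` fields and the `query_llm` placeholder are assumptions made for illustration, not the interface used in the study.

```python
# Minimal sketch of a personalized spam filter (illustrative only).
# `query_llm` is a hypothetical stand-in for whichever LLM API is used;
# the profile fields are assumptions, not the authors' actual schema.

from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Public-footprint facts the filter may see referenced in lures."""
    name: str
    employer: str
    recent_topics: list[str] = field(default_factory=list)


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. a chat-completion request)."""
    raise NotImplementedError("wire this to your LLM provider")


def is_suspicious(email_body: str, profile: UserProfile) -> bool:
    """Ask the LLM whether the email looks like a spear-phishing lure
    tailored to this user's publicly available information."""
    prompt = (
        "You are a spam filter. The recipient's public footprint is:\n"
        f"- name: {profile.name}\n"
        f"- employer: {profile.employer}\n"
        f"- recent topics: {', '.join(profile.recent_topics)}\n\n"
        "Does the following email exploit this information to build a "
        "spear-phishing lure? Answer PHISHING or LEGITIMATE.\n\n"
        f"{email_body}"
    )
    return query_llm(prompt).strip().upper().startswith("PHISHING")
```

In this sketch, the same footprint data that an attacker could scrape is instead handed to the filter, so warnings can be personalized to the lures a given user is most likely to receive.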