Matthew Deluca
A skilled cybersecurity professional with years of experience working with and within the Department of Defense to protect critical information systems. He also brings a wide range of additional experience from years at Silicon Valley startups and, most recently, work with large Fortune 100 companies.
Session
The recent surge in the capabilities of large language models (LLMs) such as GPT-4 has brought new complexity to the cybersecurity sphere, shrinking "time to exploit" from months and weeks down to mere hours and minutes. In this presentation, we will examine how LLMs can generate viable exploits for a wide variety of Common Vulnerabilities and Exposures (CVEs). The speed at which these exploits can be created demands swift adaptation from cybersecurity professionals, along with a deeper understanding of both the capabilities of LLMs and the implications of their rapid exploit development.

The presentation will further show how the quality and amount of input information, ranging from CVE descriptions to vendor documentation, significantly influences the success rate of the exploit code these models generate. In essence, it demonstrates how simple CVE descriptions, published for defensive purposes, give an LLM enough information to create working exploits. We will walk through the creation of exploits for a specific CVE under multiple scenarios, leading to a detailed comparison of the resulting code.

This discussion highlights the urgent need for cybersecurity professionals to understand and address the challenges posed by LLM-powered exploit creation. We will explore the tangible implications of these findings for vulnerability management, patch prioritization, and threat detection, illustrating the gravity of the situation in light of the expedited "time to exploit" that LLMs make possible.