Security BSides Las Vegas 2024

Matthew Canham

Dr. Matthew Canham is a former Supervisory Special Agent with the Federal Bureau of Investigation (FBI) and has a combined twenty-one years of experience conducting human-technology and security research. He currently holds an affiliated faculty appointment with George Mason University, where his research focuses on threats posed by maliciously produced AI-generated content and synthetic media social engineering. Dr. Canham recently founded the Cognitive Security Institute, a non-profit organization dedicated to understanding the key components of cognitive attacks and discovering the best ways to defend against them.
Dr. Canham has provided synthetic media threat awareness training to NASA (Kennedy Space Center), DARPA, MIT, US Army DEVCOM, the NATO Cognitive Warfare Working Group, the BSides Las Vegas security conference, the Misinformation Village at DEF CON, and the Black Hat USA security conference. He has appeared on multiple podcasts, including BarCode Security, Weapons of Mass Disruption, 8th Layer Insights, The Cognitive Crucible Podcast, and the ITSP Podcast, and has served as a deepfake subject matter expert for several news outlets.


Session

08-07
11:30
45min
Hacking Things That Think
Matthew Canham

The rush to embed AI into everything is quickly opening up unanticipated attack surfaces. Manipulating natural language systems through prompt injection and related techniques feels eerily similar to socially engineering humans. Are these similarities only superficial, or is there something deeper at play? The Cognitive Attack Taxonomy (CAT) is a continuously expanding catalog of more than 350 cognitive vulnerabilities, exploits, and TTPs that have been applied to humans, AI, and non-human biological entities. Attacks cataloged in the CAT include linguistic techniques used in social engineering to prompt a response, disabling autonomous vehicles with video projection, using compromised websites to induce negative neurophysiological effects, manipulating large language models into exposing sensitive files or deploying natively generated malware, disrupting the power grid using coupons, and many others. The CAT makes it possible to create on-demand cognitive attack graphs and kill chains for nearly any target. The talk concludes with a brief demo integrating cognitive attack graphs into a purpose-built ensemble AI model capable of autonomously assessing a target's vulnerabilities, identifying an exploit, selecting TTPs, and launching a simulated attack on that target. The CAT will be made publicly available at the time of this presentation.
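
To make the assess-vulnerability, identify-exploit, select-TTPs, launch-attack pipeline described above more concrete, here is a minimal illustrative sketch in Python. It is not the actual CAT schema or the speaker's model; the entry fields, target types, and selection logic are all hypothetical simplifications.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified structures -- illustrative only, not the real
# schema of the Cognitive Attack Taxonomy (CAT).

@dataclass
class CatEntry:
    """One catalog entry: a cognitive vulnerability with known exploits and TTPs."""
    vulnerability: str
    target_types: set        # e.g. {"human", "llm", "autonomous_vehicle"}
    exploits: list
    ttps: list

@dataclass
class KillChain:
    """A generated attack path for a specific target."""
    target: str
    vulnerability: str
    exploit: str
    ttps: list = field(default_factory=list)

def build_kill_chain(target_profile: dict, catalog: list):
    """Assess the target, match a vulnerability, pick an exploit and TTPs."""
    for entry in catalog:
        if target_profile["type"] in entry.target_types:
            return KillChain(
                target=target_profile["name"],
                vulnerability=entry.vulnerability,
                exploit=entry.exploits[0],   # naive choice: first known exploit
                ttps=entry.ttps[:2],         # naive choice: top two TTPs
            )
    return None

# Example usage with made-up catalog entries.
catalog = [
    CatEntry("prompt_injection", {"llm"},
             ["indirect_injection_via_document"],
             ["role_override", "data_exfiltration_prompt"]),
    CatEntry("authority_bias", {"human"},
             ["impersonate_executive"],
             ["urgent_wire_request"]),
]
chain = build_kill_chain({"name": "helpdesk_chatbot", "type": "llm"}, catalog)
print(chain)
```

In practice a system like the one demoed in the talk would presumably rank candidate vulnerabilities and TTPs rather than taking the first match; this sketch only shows the shape of the data flow.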

Ground Truth
Siena