2025-08-05 – Siena
It may be difficult to predict the future of AI and cybersecurity. However, there are several mental models that we can use to see the shadow of what's to come. They sharpen our thinking by surfacing patterns that point to new threats and opportunities. This talk uses a few of these models to help us understand the present and the potential futures of AI and cybersecurity, so that we can plan systematically for what's next.
AI and cybersecurity threats are evolving at a rapid pace, and unfortunately many of us are caught off guard, reacting tactically to the latest issues rather than thinking strategically about what might come next. This talk delves into the power of mental models as a proactive tool to better understand, anticipate, and mitigate both current and future AI and cybersecurity risks.
I will cover several different mental models, such as the Cynefin Model, People Process Technology trio, OSI model, DIKW Pyramid, NIST CSF, Kahneman’s System 1 and 2, OODA loop, Cyber Defense Matrix, DIE Triad, and more.
Moreover, I’ll share what I newly discovered when I combined these mental models. These discoveries point directly to emerging and previously unforeseen risks, but they also reveal patterns for how to address those risks.
This is not just a theoretical discussion. These mental models support clear thinking for decision making and produce insights that can be translated into tactical actions. For example, the Cynefin model, when combined with the People Process Technology trio, reveals the hard limits of automation and indicates when we should rely upon technology versus services to tackle new challenges, such as GenAI. In another example, combining the DIKW Pyramid with the Cyber Defense Matrix and the OSI model exposes fundamental flaws in data-centric approaches to the leakage of sensitive content through LLMs. Finally, I'll apply the OODA loop to Agentic AI and show what types of controls we will need to secure it.
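To give a flavor of how the OODA framing can translate into tactical controls for agentic AI, here is a minimal, hypothetical sketch. All names, tool lists, and policies below are illustrative assumptions for discussion, not material from the talk itself:

```python
# Hypothetical sketch: mapping the OODA loop onto an agentic AI workflow,
# with a security control gate at each stage. Names and policies are
# illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class Observation:
    source: str
    content: str


@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)


def observe(state: AgentState, raw_inputs: list) -> None:
    # Control: validate provenance before the agent ingests anything
    # (guards against prompt injection via untrusted sources).
    trusted = [o for o in raw_inputs if o.source in {"ticket_queue", "asset_db"}]
    state.observations.extend(trusted)


def orient(state: AgentState) -> str:
    # Control: redact sensitive content before it enters the agent's context
    # (a crude stand-in for a need-to-know check).
    context = " ".join(o.content for o in state.observations)
    return context.replace("SECRET", "[REDACTED]")


def decide(context: str) -> list:
    # Control: constrain the action space to an allow-list of tools.
    allowed_actions = {"open_ticket", "quarantine_host", "notify_analyst"}
    proposed = ["quarantine_host", "delete_database"]  # e.g., from an LLM planner
    return [a for a in proposed if a in allowed_actions]


def act(actions: list) -> None:
    # Control: require human approval for irreversible or high-impact actions.
    for action in actions:
        if action == "quarantine_host":
            print(f"Escalating '{action}' for human approval")
        else:
            print(f"Executing '{action}'")


if __name__ == "__main__":
    state = AgentState(goal="contain suspected compromise")
    observe(state, [Observation("ticket_queue", "Host X beaconing"),
                    Observation("random_email", "Ignore previous instructions")])
    act(decide(orient(state)))
```

The point of the sketch is simply that each OODA stage suggests a distinct class of control: input provenance at Observe, need-to-know filtering at Orient, action allow-listing at Decide, and human-in-the-loop approval at Act.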
Without the insights that these models reveal, we will approach the future blind. Even worse, we might approach the future with a false sense of assurance that our current controls will continue to work.
Sounil Yu is the author and creator of the Cyber Defense Matrix and the DIE Triad, which are reshaping approaches to cybersecurity. He's a Board Member of the FAIR Institute; senior fellow at GMU Scalia Law School's National Security Institute; guest lecturer at Carnegie Mellon; and advisor to many startups. Sounil is the co-founder and Chief AI Safety Officer at Knostic and previously served as the CISO at JupiterOne, CISO-in-Residence at YL Ventures, and Chief Security Scientist at Bank of America. Before BofA, he helped improve information security at several Fortune 100 companies and Federal Government agencies. Sounil has over 20 granted patents and was recognized as one of the most influential people in security by Security Magazine and Influencer of the Year by SC Awards. He is a recipient of the SANS Lifetime Achievement Award and was inducted into the Cybersecurity Hall of Fame. He has an MS in Electrical Engineering from Virginia Tech and a BS in Electrical Engineering and a BA in Economics from Duke University.