2025-11-17 – Westin, Munich
AI applications surface new, visible risks—but underneath lie amplified traditional ones. The massive data aggregation, probabilistic outputs, and decision‑making power of AI systems make them inherently more critical. To defend these systems, we must extend our security programs and rediscover the strength of foundational security principles. In this talk, we will examine new threats to AI applications, distinguish them from familiar "old" threats, and explore both through a practical threat model of a real‑world AI deployment.
We use the iceberg metaphor to visualize the relationship between old and new risks. The image works on several levels. Above the surface you see prompt injection, hallucinations, and other novel issues; beneath the waves lie the familiar, traditional risks: amplified data exposure, access control failures, insecure components. The thesis is simple: new AI risks are real, but don't throw out your classic AppSec toolkit.
In another interpretation, the image also illustrates how AI applications often function: only a thin layer—usually the API interface—is exposed. Beneath the surface, however, lies a vast repository of data and capabilities (agents) that pose the real danger if compromised. It’s also crucial to consider how AI is integrated into the business case, as that integration directly influences the system’s criticality.
In the example of a real‑world AI application—a RAG‑based scenario—we’ll explore how to conduct risk assessment and threat modeling for AI systems, and examine the role of traditional security measures. We show how classic defenses remain vital for protecting AI applications as we walk through a hands‑on threat‑modeling case. We cover the threat‑modeling process, highlight the new dimensions introduced by AI (including how to seamlessly incorporate EU AI Act requirements), and demonstrate how to integrate AI‑specific risks into your existing threat‑modeling workflows.
1. Introduction
First, we will briefly examine real-world AI application scenarios and the marketing expectations surrounding them. Use cases such as "I want AI to manage my calendar and book my concert tickets" (Meredith Whittaker) illustrate the broad access AI systems demand and the risks that come with it. The goal is to highlight the inherent risks (related to data and agency) of AI systems.
2. Risks for AI and ML Systems
We survey the AI/ML risks identified in OWASP LLM Top 10 (2025) and the OWASP ML Top 10 (2023), and briefly touch on the EU AI Act’s system‑classification framework. This will remain a concise overview—assuming the audience already knows most of these risks—with just enough explanation to set the stage.
We then contrast the EU AI Act’s “limited‑risk” and “high‑risk” requirements so the audience can appreciate the non‑technical obligations (e.g., user transparency) and how they extend beyond purely technical controls. This underlines why the use case and the way an AI system is integrated must be considered to fully understand its threats and risks.
Finally, we distill these technical and regulatory requirements into a unified, adapted “threat list” (leveraging AI) that we will use in the subsequent threat‑modeling exercise.
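As a rough illustration of what such a unified threat list could look like in practice, here is a minimal sketch in Python; the class name, fields, and example mappings below are our own assumptions, not the talk’s actual artifact:

```python
from dataclasses import dataclass, field

# A unified threat-list entry combining technical and regulatory sources.
# All names, fields, and example values are illustrative assumptions.
@dataclass
class ThreatEntry:
    identifier: str                   # ID used during the modeling session
    title: str                        # short threat description
    sources: list[str] = field(default_factory=list)   # e.g. OWASP LLM Top 10, EU AI Act
    stride_category: str = ""         # classic STRIDE mapping, if one applies
    typical_mitigations: list[str] = field(default_factory=list)

# Example entries mixing a "new" AI risk with a purely regulatory obligation.
THREAT_LIST = [
    ThreatEntry(
        identifier="T-01",
        title="Prompt injection via retrieved documents",
        sources=["OWASP LLM01:2025 Prompt Injection"],
        stride_category="Tampering",
        typical_mitigations=["input/output filtering", "least-privilege tool access"],
    ),
    ThreatEntry(
        identifier="T-02",
        title="AI-generated output is not disclosed to the user",
        sources=["EU AI Act transparency obligations"],
        stride_category="",  # compliance requirement rather than a STRIDE threat
        typical_mitigations=["label AI output", "document the intended use"],
    ),
]
```

Capturing both kinds of entries in one list is what lets the regulatory obligations ride along in the same threat‑modeling exercise as the technical risks.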
3. Practical Example
We walk through a threat modeling process applied to a “limited‑risk” RAG‑based AI system that helps a production company find the right suppliers for specific product parts:
- System Analysis (Architecture and Data flows)
- System Classification (per AI Act)
- Identification of relevant compliance requirements (e.g. from the EU AI Act)
- Applying a traditional threat model in which we examine the system in detail (essentially following Shostack’s four-question framework, using the threat list referenced above).
We will see how this system navigates most AI‑related threats with traditional defenses (access control, isolation, data minimization…) but also requires a range of “default” protection mechanisms: encrypted transfer, integrity of the RAG knowledge base, reliance on proper input to the RAG pipeline, access control, and so on.
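To make “traditional defenses” concrete, the following is a minimal, hypothetical sketch (not the case-study code) of a RAG retrieval step that applies access control, an integrity check on the knowledge base, and data minimization before anything reaches the model:

```python
import hashlib
from dataclasses import dataclass

# Hypothetical sketch of a RAG retrieval step with traditional defenses.
# Document, User, and retrieve_context are illustrative names only.

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str]   # access-control metadata on the knowledge base
    checksum: str             # integrity reference recorded at ingestion time

@dataclass
class User:
    name: str
    roles: set[str]

def compute_checksum(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def retrieve_context(query: str, user: User, corpus: list[Document], limit: int = 3) -> list[str]:
    """Select snippets for the LLM prompt, applying classic controls first."""
    snippets: list[str] = []
    for doc in corpus:
        # Access control: only consider documents the user is allowed to see.
        if not (doc.allowed_roles & user.roles):
            continue
        # Integrity: skip documents that no longer match their recorded checksum.
        if compute_checksum(doc.text) != doc.checksum:
            continue
        # Naive relevance check; a real system would use vector similarity.
        if query.lower() in doc.text.lower():
            snippets.append(doc.text)
        # Data minimization: cap how much context reaches the model.
        if len(snippets) >= limit:
            break
    return snippets
```

None of these controls are AI‑specific; they are the same mechanisms a classic application would rely on, which is exactly the point of the case study.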
4. Conclusion
From our case study, we’ll draw these key takeaways:
* Traditional threat modeling remains effective and can be extended to cover AI‑specific risks.
* Early consideration of AI use cases is critical to correctly assess risk levels and protection requirements.
* The large data pools, agent‑like capabilities, and probabilistic outputs of AI systems raise their criticality.
* Fundamental security controls still form the foundation of a robust AI defense strategy.
The talk aims to provide a practical demonstration of the threat modeling process for AI systems using a real-world example.
AI, Defender, Threat Modeling
I am a senior security consultant, founder, and director at the Munich-based company secureIO GmbH. With a strong background in application security and in building and managing application security programs, I am passionate about all things AppSec and DevSecOps.
Benjamin began his career as a Cyber Security Consultant and has since developed into a specialist at the intersection of machine learning and security. His work focuses on the practical evaluation and implementation of the OWASP Top 10 for Machine Learning and Large Language Models (LLMs), particularly through hands-on experience with RAG-based LLM systems in real-world security contexts.
Benjamin also works on secure system design, applying threat modeling and Security by Design in alignment with ISMS principles. His current research includes supervised learning techniques to reduce false positives in vulnerability detection, as well as risk analysis in LLM systems – always aiming to bridge the gap between research and secure implementation.