BSidesLuxembourg 2026

What Does Threat Modeling Solve for AI Security?
2026-05-07, IFEN room 2, Workshops and AI Security Village (Building D)

AI rarely creates entirely new classes of risk. More often, it amplifies weaknesses that already exist in complex systems where architecture, data, and business decisions are tightly coupled. What changes is not the threat itself, but its reach, speed, and impact.

This session shows how threat modeling can be used as a leverage point along two parallel tracks, in a way that remains accessible to newcomers while still grounded in real-world practice. On the technical side, threat modeling is presented as a concrete decision tool: identifying realistic attack paths, clarifying what actually needs to be tested, and guiding focused actions such as pentest scoping and security control prioritization. The emphasis is not on exhaustive models, but on developing the right security reflexes early and on understanding where small inputs can create large business consequences.

In parallel, the same threat model is used as a framework validation layer. Instead of treating compliance as a documentation exercise, threat modeling helps explain how and why controls are applied where risk actually exists. Using approachable examples aligned with ISO 27001, the AI Act, and NIS2 expectations, the session demonstrates how threat modeling supports compliance efforts by making security decisions explicit, traceable, and defensible.

The session is designed for beginners and practitioners in application security, threat modeling, or software engineering, and assumes familiarity with AppSec and SDLC concepts. The focus is not on theory or abstract AI threats, but on real systems, plausible attackers, and practical threat models that help bridge technical security decisions and regulatory expectations from the start.


0–5 min : Context Setting: Where AI Really Fits in the SDLC

The session starts by clarifying a frequent source of confusion: securing AI versus using AI for security. Using concrete system examples, I explain how AI is introduced into existing architectures and why it increases coupling between data, identity, APIs, and business workflows. The goal is to ground the audience in a system-level view before discussing threats. This section is fully accessible to beginners and does not assume prior AI security knowledge.

5–10 min : Why AI Feels Destabilizing at System Level

This section explains why AI adoption often makes risk harder to reason about. AI does not introduce chaos by itself; it amplifies risk across an already uncontrolled attack surface. Using visual system comparisons, I show how adding AI components increases the blast radius of existing weaknesses (identity, APIs, data access, monitoring gaps). The key objective is to shift beginners away from “AI-specific threats” toward ecosystem-level risk thinking.
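
To make the blast-radius claim concrete, here is a minimal sketch in plain Python (the system graph and asset names are assumptions for illustration, not a real architecture). It compares which assets an attacker can reach from a compromised public API before and after an AI assistant is wired into the same system:

    from collections import deque

    def reachable(graph, start):
        """Breadth-first search: every asset reachable from one entry point."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Hypothetical baseline: edges mean "can access / can influence".
    baseline = {
        "public_api": ["order_service"],
        "order_service": ["order_db"],
    }

    # Same system after adding an AI assistant that reads orders and customer
    # data and can call internal tools on the user's behalf.
    with_ai = dict(baseline)
    with_ai["public_api"] = ["order_service", "ai_assistant"]
    with_ai["ai_assistant"] = ["order_db", "customer_pii", "internal_tools"]
    with_ai["internal_tools"] = ["refund_workflow"]

    print(reachable(baseline, "public_api"))  # 3 assets
    print(reachable(with_ai, "public_api"))   # 7 assets, now including PII and refunds

The weakness itself is unchanged; only its reach grows, which is exactly the shift from "AI-specific threats" to ecosystem-level risk thinking.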

10–20 min : Scenario 1 (Technical Track): Testing Without Knowing Why

The first main scenario focuses on a realistic AI-driven e-commerce system where an ML recommendation engine directly impacts revenue. I walk through a common security dilemma: a limited pentesting budget with no shared understanding of what actually matters.
Step by step, I introduce a lightweight threat modeling approach:

  • drawing a simple system diagram,
  • identifying threat actors,
  • reasoning in layers (Matryoshka-style): supply chain, network/APIs, identity, crown jewels,
  • mapping attack paths to business impact.

This leads to a concrete outcome: a risk-driven pentesting strategy that clearly differentiates deep testing, standard testing, and low-return testing areas. Beginners see how threat modeling directly informs technical decisions instead of producing abstract documentation.
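
As a hedged illustration of that outcome (the dataclass, scoring scale, and thresholds below are assumptions, not the session's canonical method), the threat model can be kept as plain data and the pentest tiers derived from it:

    from dataclasses import dataclass

    @dataclass
    class AttackPath:
        name: str
        reachable: bool       # can a realistic threat actor actually get here?
        business_impact: int  # assumed scale: 1 (negligible) .. 5 (revenue-critical)

    def pentest_tier(path: AttackPath) -> str:
        """Toy triage rule: deep-test reachable, high-impact paths first."""
        if path.reachable and path.business_impact >= 4:
            return "deep testing"
        if path.reachable:
            return "standard testing"
        return "low-return testing"

    paths = [
        AttackPath("poisoned product feed -> recommendation engine", True, 5),
        AttackPath("public API -> order service", True, 3),
        AttackPath("admin panel (VPN-only, MFA enforced)", False, 4),
    ]

    for p in paths:
        print(f"{p.name}: {pentest_tier(p)}")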

20–30 min : Scenario 2 (Framework Track): Threat Modeling as a Compliance Validator

The second scenario shifts focus to compliance and governance challenges. I present a situation where multiple teams claim compliance (secure coding, code reviews, pentests), yet cannot demonstrate why controls are effective.

Using an ISO 27001 control (secure coding), I show how threat modeling reframes the question from “do we have this control?” to “where would insecure code actually hurt us?”. A concrete threat scenario is built around an input processing service in front of an ML model, illustrating how business-impacting abuse can occur even when traditional controls exist.
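
A minimal sketch of that scenario (service and function names are hypothetical, and the "model" is a naive stand-in): every input passes the traditional secure-coding control, yet the business outcome is still abused:

    import html

    def sanitize(review: str) -> str:
        """The traditional control: escaping passes code review and pentest."""
        return html.escape(review.strip())

    def product_score(reviews: list[str]) -> float:
        """Naive stand-in for the ML ranking model behind the input service."""
        positive = sum(r.lower().count("excellent") for r in reviews)
        return positive / max(len(reviews), 1)

    # Each review is syntactically safe, so the secure-coding control is met...
    flood = [sanitize("Excellent. Excellent. Excellent.") for _ in range(500)]

    # ...yet coordinated, well-formed input still inflates the product's
    # ranking: business-impacting abuse with no injection, no malformed data.
    print(product_score(flood))  # 3.0 — abuse the control was never designed to catch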

This logic is then extended to broader regulatory expectations (AI Act, NIS2): threat modeling provides a structured way to justify controls, expose blind spots (e.g., missing abuse-case testing or decision integrity checks), and explain partial compliance in a defensible manner.
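
One lightweight way to make that justification explicit and traceable, sketched below with indicative framework references (the mapping itself is an assumption for illustration), is to record each threat scenario against the control meant to address it and the residual gap:

    # Threat-to-control traceability: why each control exists, what it actually
    # mitigates, and where compliance is only partial and must be defended.
    traceability = [
        {
            "threat": "coordinated well-formed reviews skew the ranking model",
            "control": "ISO 27001 A.8.28 (secure coding)",
            "mitigates": "injection and malformed input, not business abuse",
            "gap": "no abuse-case testing of model-facing inputs",
        },
        {
            "threat": "silent drift in automated, customer-facing decisions",
            "control": "AI Act logging and human-oversight expectations",
            "mitigates": "post-incident reconstruction",
            "gap": "no decision integrity checks in production",
        },
    ]

    for row in traceability:
        print(f"{row['control']} <- {row['threat']} | gap: {row['gap']}")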

30–35 min : Key Takeaways and Practical Guidance

I conclude by explicitly tying both tracks together. The same threat model supports both technical security decisions (what to test, where to invest effort) and compliance justification (why controls exist and what risks they mitigate).

The final takeaways focus on what beginners can apply immediately: modeling change rather than entire systems, prioritizing reachable attack paths, and using threat modeling as a living practice rather than a one-time deliverable.


Do you consent to this presentation being recorded and posted online?:

As a Senior AppSec Consultant at NVISO, I help teams across Europe embed security from design to delivery. I lead threat modeling workshops, secure design reviews, and lectures. I turn AppSec into real-world impact and help fast-paced teams make threat modeling stick for good, with no bullsh*t.