BSidesLuxembourg 2026

Nathan Pembe

As a Senior AppSec Consultant at NVISO, I help teams across Europe embed security from design to delivery. I lead threat modeling workshops, secure design reviews, and lectures. I turn AppSec into real-world impact and help fast-paced teams make threat modeling stick for good, with no bullsh*t.


Session

05-07
10:35
40min
What Does Threat Modeling Solve for AI Security?
Nathan Pembe

AI rarely creates entirely new classes of risk. More often, it amplifies weaknesses that already exist in complex systems where architecture, data, and business decisions are tightly coupled. What changes is not the threat itself, but its reach, speed, and impact.

This session shows how threat modeling can be used as a leverage point along two parallel dimensions, in a way that remains accessible to newcomers while staying grounded in real-world practice. On the technical side, threat modeling is presented as a concrete decision tool: identifying realistic attack paths, clarifying what actually needs to be tested, and guiding focused actions such as pentest scoping and security control prioritization. The emphasis is not on exhaustive models, but on developing the right security reflexes early: understanding where small inputs can create large business consequences.

In parallel, the same threat model is used as a framework validation layer. Instead of treating compliance as a documentation exercise, threat modeling helps explain how and why controls are applied where risk actually exists. Using approachable examples aligned with ISO 27001, the AI Act, and NIS2 expectations, the session demonstrates how threat modeling supports compliance efforts by making security decisions explicit, traceable, and defensible.

The session is designed for beginners and practitioners in application security, threat modeling, or software engineering, and assumes familiarity with AppSec and SDLC concepts. The focus is not on theory or abstract AI threats, but on real systems, plausible attackers, and practical threat models that bridge technical security decisions and regulatory expectations from the start.

AI Security Village
IFEN room 2, Workshops and AI Security Village (Building D)