BSidesLuxembourg 2026

Building Secure AI: Making Threat Modeling a Core Part of Development
2026-05-08, Workshops and Stage - Design Space (C1.05.12)

As AI systems evolve, integrating security from the design phase is crucial: the "shift left" approach prevents vulnerabilities before they reach production. This session offers an overview of threat modeling for AI systems, covering how to organize engaging sessions, choose appropriate tools, and apply methodologies such as STRIDE. Drawing on OWASP research, participants will learn to identify and mitigate threats specific to AI technologies and thereby address security concerns proactively. The session also provides tips for making threat modeling sessions interesting and interactive, ensuring active participation and effective outcomes. The goal is to make security a foundational element of AI system development rather than an afterthought.
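As a loose illustration of the STRIDE methodology the session covers, one common starting point is to cross each component of a system with the six STRIDE threat categories to seed discussion. The component names and example threats below are purely hypothetical and not taken from the session materials; this is a minimal sketch, not a prescribed process.

```python
# Hypothetical sketch: seeding a threat modeling session by crossing
# AI system components with the six STRIDE categories.
# All component names and example threats are illustrative assumptions.

STRIDE = {
    "Spoofing": "impersonating a user or a model endpoint",
    "Tampering": "poisoning training data or altering model weights",
    "Repudiation": "inference requests leaving no audit trail",
    "Information disclosure": "leaking training data via model outputs",
    "Denial of service": "resource exhaustion through oversized prompts",
    "Elevation of privilege": "prompt injection bypassing access controls",
}

# Example decomposition of an AI system into components (assumed).
AI_COMPONENTS = ["training pipeline", "model registry", "inference API"]


def enumerate_threats(components, categories):
    """Pair every component with every STRIDE category to produce
    a raw candidate list for the group to triage in a session."""
    return [
        (component, category, example)
        for component in components
        for category, example in categories.items()
    ]


threats = enumerate_threats(AI_COMPONENTS, STRIDE)
for component, category, example in threats[:3]:
    print(f"{component}: {category} - e.g. {example}")
```

In a real session each generated pair would be kept, discarded, or refined by the participants; the cross-product merely guarantees no category is forgotten for any component.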



Diana Waithanji believes data privacy is a human right. She works as a cybersecurity professional at SAP, specifically SAP Cloud Infrastructure, in Germany. She is a 2025 TechWomen USA fellow at Google and a 2022 AFRIKA KOMMT Germany alumna. Diana sits on two technical committees at the Kenya Bureau of Standards (KEBS) and serves as a board member at Nivishe Foundation. She is also the founder of Wahandisi La Femme, an initiative that mentors girls in rural Kenya to get into tech and engineering.
