2025-04-24 –, Palladium
Generative AI development introduces unique security challenges that traditional methods often overlook. This talk explores practical threat modeling techniques tailored for AI practitioners, focusing on real-world scenarios encountered in daily development. Through relatable examples and demonstrations, attendees will learn to identify and mitigate common vulnerabilities in AI systems. The session covers user-friendly security tools and best practices specifically designed for AI development. By the end, participants will have practical strategies to enhance the security of their AI applications, regardless of their prior security expertise.
- Introduction
- Motivation
- What can go wrong
- Generative AI vs Traditional Applications
- Key differences in security considerations
- Unique challenges posed by generative AI
- Threat Modeling Basics and AI-Specific Threats
- Threat modeling frameworks
- Focus on prompt injection and data poisoning
- Example: Simple prompt injection attempt
- Practical Threat Modeling Process
- Simplified system decomposition example
- Threat identification walkthrough
- Example: Input Validation
- Tools Showcase and Mitigation Strategies
- Conclusion and Resources
- Recap key takeaways
- List of recommended tools and further reading
Intermediate
Expected audience expertise: Python:None
I am Liza - Applied Scientist at AWS Generative AI Innovation Center and am based in Berlin. I am passionate about AI/ML, finance and software security topics. In my spare time, I enjoy spending time with my family, sports, learning new technologies, and table quizzes.