BSides Munich 2025

Security Engineering for Large Language Models: Architecture, Risks, and Regulatory Readiness
2025-11-15 , Hochschule München - R0.004

Large Language Models (LLMs) are reshaping modern software and system architectures - but with their increasing adoption come new and amplified security concerns. This workshop explores the technical and strategic dimensions of securing LLM-based applications and services.
Participants will learn how components such as RAG pipelines, prompt engineering, and fine-tuning introduce specific risks that go beyond traditional machine learning. Using the OWASP LLM Top 10 (2025) as a framework, we examine how known security principles reappear in new forms - and why many current threats are just the tip of the iceberg.

In addition to architectural deep dives, we introduce foundational approaches like Security by Design, guardrail strategies, and the integration of risk awareness into the LLM development process. We also provide a compact overview of relevant regulatory developments, including the Cyber Resilience Act and EU AI Act - and how they intersect with the practical realities of LLM deployment.
Whether you're building, integrating, or securing LLMs, this session offers a comprehensive view of today's threat landscape and tomorrow’s assurance requirements.


Provisional agenda

1. Introduction & Motivation

  • Why securing LLMs matters now
  • Shifts in the AI threat landscape
  • Regulatory relevance (AI Act, CRA, ...)

2. LLM Architecture Deep Dive

  • RAG systems: retrieval mechanics and security concerns
  • Prompt engineering: design patterns and misuse potential
  • Fine-tuning: security considerations across model lifecycle

3. Threat Landscape: OWASP LLM Top 10 (2025)

  • Overview of the updated OWASP framework
  • Technical walkthrough of selected risks
  • Mapping risks to real-world implementations

4. Security Engineering Foundations

  • Security by Design principles for LLM-based systems
  • Guardrails and architectural risk thinking
  • Why many LLM threats are just the tip of the iceberg

5. Secure LLM Lifecycle & AI Governance

  • Integration of security in LLM development workflows
  • Trust boundaries and safety evaluations
  • Overview of regulatory impact:
  • CRA obligations for software and AI components
  • EU AI Act risk classification and controls

6. Summary & Outlook

  • Key takeaways
  • Strategic recommendations
  • Optional: Q&A or short interactive demo/discussion

Which keywords describe your submission?:

llm, ai, CRA

Benjamin began his career as a Cyber Security Consultant and has since developed into a specialist at the intersection of machine learning and security. His work focuses on the practical evaluation and implementation of the OWASP Top 10 for Machine Learning and Large Language Models (LLMs), particularly through hands-on experience with RAG-based LLM systems in real-world security contexts.

Benjamin also works on secure system design, applying threat modeling and Security by Design in alignment with ISMS principles. His current research includes supervised learning techniques to reduce false positives in vulnerability detection, as well as risk analysis in LLM systems – always aiming to bridge the gap between research and secure implementation.