BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.com//bsidesluxembourg-2026//speaker//F8ENJB
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T040000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-bsidesluxembourg-2026-SRHCSS@pretalx.com
DTSTART;TZID=CET:20260508T144000
DTEND;TZID=CET:20260508T152000
DESCRIPTION:GenAI applications have moved from being single prompt wrappers
  to long chains of LLM calls\, tools\, and agentic workflows. In these sys
 tems\, guardrails cannot live on a single isolated prompt. They need to be
  designed based on how data flows through the application\, how permission
 s are enforced\, and which risks are relevant for the use case.\n\nThis ta
 lk shares practical experience from helping teams design and test guardrai
 ls for LLM applications. Prompt-based guardrails tend to fail under determ
 ined attackers\, so they must be combined with application-level controls 
 and feedback mechanisms that allow the system to detect and respond to pro
 mpt attacks.\n\nRather than evaluating models in isolation\, the focus is 
 on testing the application itself. This includes testing how inputs and ou
 tputs propagate through LLM chains\, how intermediate results are reused\,
  and how guardrails interact across different stages of a workflow. The ta
 lk shows how this can be tested in practice using spikee (https://spikee.a
 i)\, an open source tool built to test LLM applications for prompt-based a
 ttacks.
DTSTAMP:20260502T115827Z
LOCATION:IFEN room 2\, Workshops and AI Security Village (Building D)
SUMMARY:Every Guardrail Everywhere All at Once: Designing and Testing Guard
 rails for LLM Applications - Donato Capitella
URL:https://pretalx.com/bsidesluxembourg-2026/talk/SRHCSS/
END:VEVENT
END:VCALENDAR
