Don't be LLaMe - The basics of attacking LLMs in your Red Team exercises
Brent Harrell, Alex Bernier
Part of the Red Team job is staying on top of new, emerging, or growing technologies. Love it or hate it, Large Language Models (LLMs) and the applications and agents that use them are increasingly part of the tech stack in companies today. Ignoring them means ignoring a fruitful attack surface that may be both less secured and less monitored than traditional Red Team attack paths. This presentation covers the core of what we think Red Teamers should know about how LLMs work under the hood (without the math!) and then uses that knowledge to dive into attack strategies. This isn't just about attacking the LLMs themselves, though; we'll take prompt injection and jailbreaks into Red Team-land with examples from research and real-world operations. Get your hack on with ways you can attack the applications and agents that use LLMs to achieve your heart's desire on your next Red Team operation.
Ground Floor
Florentine E