Security BSides Las Vegas 2025

Don't be LLaMe - The basics of attacking LLMs in your Red Team exercises
2025-08-04 , Florentine E

Part of the Red Team job is staying on top of new, emerging, or growing technologies. Love it, hate it, or doubt it, Large Language Models (LLMs) are increasingly part of the tech stack in companies today. To ignore them would be to ignore useful attack surface. Participants will learn about the core of how LLMs work under the hood (without the math!) and strategies to break and use LLM-enabled applications in Red Team engagements.


While this discussion will cover the basics of LLMs themselves, the primary focus is on how they can be used in the course of other offensive security work - particularly Red Team engagements.

This presentation will begin with the core of how LLMs work at a theoretical level - no math or ML knowledge are required. Understanding how an LLM actually does what it does is critical to determining how to effectively manipulate or break it.

After establishing the basics, we will cover common prompt injection strategies informed by real-world exercises. The specific focus will be on achieving impactful objectives common to Red Team engagements, like lateral movement, privilege escalation, or impact - getting the LLM to say something dirty only to you isn't exactly useful or concerning to the Red Team and falls into the alignment category, which is quality assurance more than offensive security.

Principal Red Team Consultant, CrowdStrike
Passionate about AI application security!

Brent took the scenic route to Red Team, beginning in counterintelligence before moving to cyber threat intelligence, security engineering, and finally Red Team - his ultimate goal. He has primarily focused on Red Team, having led engagements for MITRE Engenuity's ATT&CK Evaluations, built a Red Team for a Fortune 40 company, and now is a Principal Consultant at CrowdStrike. He is one of the initial members of the company's AI Red Team, currently focused on LLM-based applications and full-scope Red Team engagements.