2025-08-04 –, Florentine E
Part of the Red Team job is staying on top of new, emerging, or growing technologies. Love it or hate it, Large Language Models (LLMs) and the applications and agents that use them are increasingly part of the tech stack in companies today. To ignore them would be to ignore fruitful attack surface that may be both less secured and less monitored than other traditional Red Team attack paths. This presentation will cover the core of what we think Red Teamers should know about how LLMs work under the hood (without the math!) and then use that knowledge to dive into attack strategies. This isn't just focused on attacking the LLMs, though; we'll be taking prompt injection and jailbreaks into Red Team-land with examples from research and real-world operations. Get your hack on with ways you can attack the applications and agents using LLMs to achieve your heart's desire on your next Red Team operation.
While this discussion will cover the basics of LLMs themselves, the primary focus is on how they can be used in the course of other offensive security work - particularly Red Team engagements.
This presentation will begin with the core of how LLMs work at a theoretical level - no math or ML knowledge are required. Understanding how an LLM actually does what it does is critical to determining how to effectively manipulate or break it.
After establishing the basics, we will cover common prompt injection strategies informed by real-world exercises. The specific focus will be on achieving impactful objectives common to Red Team engagements, like lateral movement, privilege escalation, or impact - getting the LLM to say something dirty only to you isn't exactly useful or concerning to the Red Team and falls into the alignment category, which is quality assurance more than offensive security.
Brent took the scenic route to offensive security, beginning in counterintelligence before moving to cyber threat intelligence, security engineering, and finally Red Team - his ultimate goal. He has primarily focused on traditional Red Team engagements against enterprise environments with past roles leading engagements for MITRE Engenuity's ATT&CK Evaluations program and building a Red Team for a Fortune 40 company. He is now is a Principal Consultant at CrowdStrike, and while he still pokes holes in Active Directory environments he is one of the initial members of CrowdStrikes's Professional Services AI Red Team. So now he pokes holes in both technologies wherever possible.
Principal Red Team Consultant, CrowdStrike
Passionate about AI application security!