2026-04-26, Track 1
Prompt injection remains the elephant in the AI security room: there is no deterministic defense, yet the urgency driving AI adoption leaves many teams feeling forced either to accept the risk or to hobble their agents with overly restrictive policies. But there's a third path: containment. In this talk, I'll walk through the architectural guardrails Stripe adopted to protect our agent platform, showing how you can give agents powerful tools while limiting the damage if prompt injection occurs. I'll cover strategies for preventing data exfiltration through controlled egress, share UI patterns for human confirmation flows that balance oversight with usability, and demonstrate how to enforce these guardrails at CI time using tool annotations.
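The abstract doesn't spell out the enforcement mechanism, but a CI-time tool annotation check might look something like the minimal sketch below. `ToolAnnotation`, its field names, and the two policy rules are illustrative assumptions for the sake of the example, not Stripe's actual implementation.

```python
# Hypothetical sketch: each agent tool declares security annotations,
# and a CI check fails the build when a tool's declared capabilities
# violate containment policy. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolAnnotation:
    name: str
    reads_sensitive_data: bool   # tool can access customer/internal data
    network_egress: bool         # tool can send data outside the platform
    requires_confirmation: bool  # a human must approve each invocation

def check_tool(tool: ToolAnnotation) -> list[str]:
    """Return containment-policy violations for one tool annotation."""
    violations = []
    # Containment rule: a tool that both reads sensitive data and has
    # open egress is an exfiltration channel if prompt injection lands.
    if tool.reads_sensitive_data and tool.network_egress:
        violations.append(
            f"{tool.name}: sensitive reads + network egress; split into "
            "separate tools or gate behind confirmation"
        )
    # Oversight rule: any egress-capable tool needs a confirmation flow.
    if tool.network_egress and not tool.requires_confirmation:
        violations.append(f"{tool.name}: egress without human confirmation")
    return violations

if __name__ == "__main__":
    tools = [
        ToolAnnotation("search_docs", reads_sensitive_data=False,
                       network_egress=False, requires_confirmation=False),
        ToolAnnotation("email_customer", reads_sensitive_data=True,
                       network_egress=True, requires_confirmation=False),
    ]
    problems = [v for t in tools for v in check_tool(t)]
    if problems:
        print("\n".join(problems))
        raise SystemExit(1)  # non-zero exit fails the CI job
```

Run as a CI step, a check like this turns the guardrails into a build-time invariant: an engineer who adds an egress-capable tool without a confirmation flow gets a failing pipeline rather than a production incident.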
Andrew Bullen leads the AI Security team at Stripe, where he designs infrastructural primitives that secure the company's internal and customer-facing AI platforms. A ten-year veteran of Stripe, Andrew previously led the Data Platform and Privacy Engineering teams. He approaches security as a leadership challenge as much as a technical one, and his work sits at the intersection of engineering leadership, security, AI/ML, and usability. In addition to his technical work, Andrew is an engineering leadership coach and can be found online at andrewbullen.co.