2025-11-15, Room 300
We were promised autocomplete on steroids. What we got was a new attack surface, one that developers invite into their terminals, editors, CI pipelines, and even production systems.
In this talk, I walk through how AI coding agents, the ones we rely on to ship faster by offloading mental load, are quietly introducing a new class of threats. And these aren’t theoretical. They’re already being exploited in the wild.
We’ll explore how natural gaps in agent understanding can become opportunities for adversaries, and how the tools built to boost productivity can be subverted into delivery mechanisms for exploitation.
From subtle context manipulation to unexpected supply chain consequences, we’ll trace how trust in your agent can become the thing that gets you pwned.
This isn’t about prompt injection. It’s about something much deeper: real-world exploitation, where the agent itself becomes the source of the next attack.
We’ll walk through concrete examples, highlight the (surprisingly limited) tooling available today, and make the case that agent context and model provenance need to be treated with the same rigor we already apply to our dependencies and infrastructure.
AI agents are immensely useful. But if we don’t rethink how we trust and monitor them, they won’t just make our jobs easier; they’ll make attackers’ jobs easier too.
Wes Widner is a Senior Principal Engineer with a deep background in security-focused distributed systems. He started as a data engineer on McAfee’s Global Threat Intelligence team, back before “data engineering” was a job title, and later became the founding manager of the multi-cloud team at CrowdStrike. He now leads strategic engineering initiatives at Cyberhaven, a data detection and response startup.
Wes specializes in uncovering hidden risks in complex systems, especially the quiet, high-trust assumptions we make when integrating AI agents into our workflows. He has been responsible for evaluating, securing, and operationalizing AI agents across production environments.
Lately he’s been inventing new (and slightly scarier) attack surfaces by vibe‑coding kernel modules that talk to physical hardware, testing the boundaries of what an over‑confident agent should ever be allowed to control. What could possibly go wrong?
