fwd:cloudsec 2026

What Building an AI Worm Taught Us About Stopping One
2026-06-01, Room 1

In virology, gain-of-function research means deliberately making a pathogen more dangerous so you can study how to stop it. We took the same approach with AI coding agents backed by fine-tuned, guardrail-less models - and built an AI worm.

Unlike prompt injection research that attacks AI systems, this worm uses the AI agent as the attack engine itself. We give it lean prompts and point it at a lab environment mirroring enterprise cloud infrastructure - AWS accounts, Azure subscriptions, CI/CD pipelines, IaC repos, data lakes - and it figures out the rest. It chains cross-account trust relationships we never told it about. It backdoors Terraform state we didn't know was there. It adapts its techniques depending on which cloud provider it lands in and exhibits worrisome emergent behaviors.
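To make "chaining cross-account trust relationships" concrete, here is a minimal sketch of the kind of enumeration involved - walking a lab account's IAM roles and flagging trust policies that let principals outside the account assume them. It assumes boto3 credentials against the lab environment; role and account details are illustrative, not from our setup:

    import json
    from urllib.parse import unquote

    import boto3

    iam = boto3.client("iam")
    account_id = boto3.client("sts").get_caller_identity()["Account"]

    def trusted_aws_principals(trust_policy):
        """Yield the AWS principals a role's trust policy allows to assume it."""
        for stmt in trust_policy.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            principal = stmt.get("Principal", {})
            if isinstance(principal, str):   # e.g. Principal: "*"
                yield principal
                continue
            aws = principal.get("AWS", [])
            yield from ([aws] if isinstance(aws, str) else aws)

    # Walk every role in the lab account and flag trust relationships that
    # point outside it - the hops a worm can chain from one account to the next.
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            doc = role["AssumeRolePolicyDocument"]
            if isinstance(doc, str):         # some SDK paths return URL-encoded JSON
                doc = json.loads(unquote(doc))
            for principal in trusted_aws_principals(doc):
                if account_id not in str(principal):
                    print(f"{role['RoleName']} assumable by external principal: {principal}")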

Commercial models occasionally refuse when they sense something adversarial - a partial defense. So we went further, using LoRA fine-tuning, abliteration, and other techniques to strip safety alignment out of open-weights coding models entirely, without degrading their effectiveness. We'll walk through these uncensoring techniques - what works, what degrades model quality, and what it costs - so defenders understand the threat model when refusals are off the table.
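For readers new to the term, abliteration refers to published directional-ablation work: estimate a "refusal direction" from contrasting activations, then project it out of the weights so the model can no longer express it. A toy numpy sketch of the projection step only - W and refusal_direction are stand-ins for a real layer's weights and a real estimated direction, not anything from our pipeline:

    import numpy as np

    def ablate_direction(W: np.ndarray, refusal_direction: np.ndarray) -> np.ndarray:
        """Project the (hypothetical) refusal direction out of W so the layer's
        output (W @ x) no longer has any component along that direction."""
        r = refusal_direction / np.linalg.norm(refusal_direction)
        return W - np.outer(r, r) @ W

    # Toy example: a 4x4 "weight matrix" and a made-up direction.
    W = np.random.randn(4, 4)
    r = np.random.randn(4)
    W_ablated = ablate_direction(W, r)
    print(np.allclose(r @ W_ablated, 0))  # True: nothing is written along the direction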

The good news: time-tested cloud security fundamentals - least privilege, egress filtering, segmentation, CI/CD hardening - are exactly the controls that matter most here. We'll map defenses to each domain the worm exploits and show the roadblocks that stop it.
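As one small illustration of the least-privilege side of that mapping (hypothetical bucket and policy names, boto3 assumed): the kind of scoped-down grant that replaces the s3:*-on-everything roles the worm thrives on, keeping an agent that lands in one workload from wandering into the data lake:

    import json

    import boto3

    iam = boto3.client("iam")

    # Hypothetical least-privilege policy: read-only, one bucket, one prefix,
    # instead of s3:* on Resource "*".
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::example-data-lake/reports/*",
            }
        ],
    }

    iam.create_policy(
        PolicyName="reports-read-only",  # illustrative name
        PolicyDocument=json.dumps(policy_document),
    )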

Kinnaird is Chief Security Architect at BeyondTrust. His research focuses on cloud and AI security, particularly threats to agentic systems viewed through the lens of identity and least privilege. Kinnaird has worked at startups, at large enterprises like Salesforce and Square, and as a consultant to Fortune 500s, while speaking at conferences, publishing open-source tools, and finding new ways to hack the cloud.