Securing AI Workloads in Kubernetes: Lessons from Scaling Startups
2026-04-26, Track 1

Startups ship fast, often faster than their security practices can keep up. As someone who's built and secured platforms at growth-stage companies, I've watched teams accumulate risk while chasing product-market fit. Then they add AI workloads, and the attack surface explodes.

This talk bridges two worlds: the pragmatic security challenges of scaling startups and the technical reality of securing AI workloads in Kubernetes.

We'll cover common failure modes (identity sprawl, over-permissioned service accounts, implicit trust between services) and how security practitioners can enable velocity instead of blocking it.

Then we'll dive into service mesh patterns for AI workloads:
- Identity-first security with mTLS and SPIFFE
- East-west traffic controls and fine-grained authorization
- Model access isolation and prompt protection
- Observability for detecting AI service abuse
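As a taste of the identity-first pattern above, here is a minimal sketch of enforcing mutual TLS for a namespace of inference workloads, assuming an Istio-based mesh; the namespace name is illustrative:

```yaml
# Illustrative example: require mTLS for every workload in the
# "inference" namespace via Istio's PeerAuthentication resource.
# Sidecars then present SPIFFE identities on every connection.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: inference
spec:
  mtls:
    mode: STRICT
```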

All examples come from production Kubernetes environments. Attendees will leave with patterns they can implement.


This talk is designed for security practitioners, platform engineers, and anyone responsible for protecting fast-moving technical organizations. It combines strategic thinking about startup security with hands-on Kubernetes and service mesh patterns.

The first half focuses on the human and organizational side of startup security:
- Why "we'll fix security later" always costs more than teams expect
- Common early-stage mistakes: overly broad IAM, flat networks, secrets in environment variables
- How security practitioners can position themselves as enablers rather than blockers
- Building security guardrails that scale with the business

The second half gets technical, focusing on AI workloads in Kubernetes:
- Why AI workloads are different: models are expensive assets, prompts contain sensitive data, inference endpoints are abuse magnets
- Service mesh fundamentals for workload security (Istio/Cilium patterns)
- Identity-based access control using SPIFFE/SPIRE for workload identity
- Network policies and authorization policies for east-west traffic
- Isolating model access and protecting prompt/embedding data flows
- Observability patterns that actually matter for AI services: latency anomalies, token abuse, unusual access patterns
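To make the identity-based access control point concrete, the following is a sketch of restricting a model-serving workload to a single authorized caller, identified by its mesh (SPIFFE-derived) principal. It assumes Istio; the service, namespace, and path names are hypothetical:

```yaml
# Illustrative example: only the "chat-api" service account may POST
# to the model server's completion endpoint; all other east-west
# traffic to this workload is denied by the ALLOW policy's default.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: model-server-allow
  namespace: inference
spec:
  selector:
    matchLabels:
      app: model-server
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/apps/sa/chat-api"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/v1/completions"]
```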

The talk concludes with practical anti-patterns, things that sound good in architecture diagrams but fail in real startups:
- "We'll add mTLS later" (you won't)
- NetworkPolicies without egress controls
- Overly broad RBAC on model-serving namespaces
- Treating AI services like stateless microservices
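The "NetworkPolicies without egress controls" anti-pattern has a straightforward fix: constrain where model-serving pods can send traffic, not just who can reach them. A sketch, with illustrative pod labels and ports:

```yaml
# Illustrative example: lock down both directions for model-serving
# pods. Ingress is limited to the chat-api frontend; egress is limited
# to DNS plus an internal feature store, so a compromised pod cannot
# exfiltrate data to arbitrary destinations.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: model-server-lockdown
  namespace: inference
spec:
  podSelector:
    matchLabels:
      app: model-server
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: chat-api
    ports:
    - port: 8080
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - port: 53
      protocol: UDP
  - to:
    - podSelector:
        matchLabels:
          app: feature-store
    ports:
    - port: 6379
```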

Attendees will leave with:
1. A mental model for prioritizing security work at resource-constrained startups
2. Concrete service mesh configurations they can adapt for their own clusters
3. An understanding of AI-specific attack surfaces and how to mitigate them
4. Patterns for introducing these controls without slowing engineering teams

Chris is Head of Security at Ybor Technologies, where he focuses on securing Kubernetes platforms, AI workloads, and cloud-native infrastructure. He has been working in security engineering since 2006, spanning roles from early-stage startups to enterprise platform teams.

Chris serves as a board member of BSidesPhilly and is a frequent speaker at security conferences, including multiple BSides events, Boardwalk Bytes, and corporate conferences. He's passionate about helping fast-moving teams build secure systems without sacrificing velocity.

Outside of security, Chris builds music applications and spends his free time visiting music venues around the world.