2026-04-25, Seminar Room 1
Cloud and container security feels like a scattered puzzle: development standards, CI/CD pipelines, guardrails, runtime security, logging, monitoring, and assurance. Together they form a resilient system, but most teams run them as independent silos, and that gap is exactly where attackers operate. This talk assembles those pieces by showing their critical connections, the misconceptions that leave them exposed, and the pitfalls that trip teams up at each stage.
Start with a question most developers get wrong: are containers isolated? They are not. Every container shares the same kernel, and that single misconception underpins a whole class of attacks that application-layer tooling cannot see. From there, the puzzle builds outward. CI/CD pipelines enforce automated checks, but signed does not mean secure. The 3CX attack produced validly signed malware that passed every test, and 83% of organisations still do not verify signatures. Guardrails maintain compliance, but 65% of clusters run flat networks, making lateral movement trivial once anything is compromised. Runtime security addresses the threats that static analysis is entirely blind to. Assurance binds it all together, not as a GRC exercise, but as a cryptographic chain from commit to runtime that gives defenders something they can actually prove.
With 82% of cloud breaches stemming from misconfiguration across a surface of 15.6 million cloud-native developers, the problem is not a shortage of tools. It is fragmented defences that do not reinforce each other. The talk closes by connecting the framework to blue team operations: mapping each control layer to realistic SIEM ingestion, showing how those signals connect to threat intelligence, and working through the operational questions around log preservation, forensic readiness, and account access that defenders need answered before an incident rather than during one. A cheat sheet maps every component to detection opportunities, and attendees leave with three actions they can take the following morning.
If you work in detection, response, or securing cloud infrastructure, this talk gives you the framework, the attack chains, and the operational questions to take back to your team.
The starting point is a misconception most teams carry: that containers are isolated. They are not. Every container shares the same kernel, and CVE-2025-31133 demonstrated exactly what that means in practice, with a race condition on bind-mount operations in runc enabling read-write access to host kernel parameters and lateral movement across containers. That shared kernel is the foundation the talk builds from, working up through each layer and the controls that belong at each one. 82% of cloud breaches stem from misconfiguration and human error across a surface of 15.6 million cloud-native developers, most of whom run containers in production. The problem is not a lack of tools. It is fragmented point solutions where each piece operates without the others.
Foundations: The Shared Kernel Illusion
Namespaces are sleight of hand. They trick processes into thinking they are isolated, but syscalls still reach the kernel directly. This section covers what that means practically: why root inside a container represents roughly 40 granular Linux privileges that need stripping, why CAP_SYS_ADMIN is near-root by another name, how Seccomp filters dangerous syscalls like setns, ptrace, and mount before they execute, and why user namespace remapping is the control that makes container-root map to an unprivileged host user.
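The blocklist idea behind that seccomp control can be sketched as a Docker/OCI-style profile. This is an illustrative fragment, not Docker's actual default profile (which is allowlist-based and far larger); the blocked-syscall list is taken from the talk's examples.

```python
import json

# Syscalls named in the talk as dangerous from inside a container.
BLOCKED_SYSCALLS = ["setns", "ptrace", "mount"]

def make_seccomp_profile(blocked=BLOCKED_SYSCALLS):
    """Build a minimal Docker/OCI-style seccomp profile as a dict.

    Blocklist style for illustration only: everything is allowed except
    the named syscalls, which fail with EPERM before they ever execute.
    """
    return {
        "defaultAction": "SCMP_ACT_ALLOW",       # allow everything else
        "syscalls": [
            {
                "names": list(blocked),
                "action": "SCMP_ACT_ERRNO",      # deny with an errno, do not kill the process
                "errnoRet": 1,                   # EPERM
            }
        ],
    }

profile = make_seccomp_profile()
print(json.dumps(profile, indent=2))
```

A real deployment would pass this JSON to the runtime (e.g. Docker's `--security-opt seccomp=profile.json`) and pair it with dropping capabilities such as CAP_SYS_ADMIN.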
CI/CD: The Signed Malware Trap
The 3CX attack is the central case study: a supply chain compromise that targeted the build environment directly, producing malware signed with a valid certificate that passed SAST, SBOMs, and standard scanning. Supply chain compromise now accounts for 15% of all breaches, and 83% of organisations still do not verify signatures. SLSA Level 3 is the provenance and attestation framework that addresses this, ephemeral hermetic build runners are the mechanism that prevents environment compromise, and Verification Summary Attestations are the missing piece that checks whether policy outcomes were correct rather than just whether tests ran. No VSA, no deployment.
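The "no VSA, no deployment" rule can be sketched as an admission gate. The VSA field names below are simplified assumptions for illustration; real Verification Summary Attestations follow the in-toto/SLSA attestation format and are cryptographically signed, which this sketch omits.

```python
def admit(image_digest, vsa):
    """Gate a deployment on a Verification Summary Attestation.

    Admits only when a VSA exists for this exact image digest, the policy
    *outcome* was PASSED (not merely "tests ran"), and the build meets
    SLSA Level 3. Field names are illustrative, not the in-toto schema.
    """
    if vsa is None:
        return False                                   # no VSA, no deployment
    return (
        vsa.get("subject") == image_digest             # attestation binds to this image
        and vsa.get("verificationResult") == "PASSED"  # policy outcome was correct
        and vsa.get("slsaLevel", 0) >= 3               # provenance from a hermetic build
    )

good_vsa = {"subject": "sha256:abc", "verificationResult": "PASSED", "slsaLevel": 3}
print(admit("sha256:abc", good_vsa))   # admitted
print(admit("sha256:abc", None))      # rejected: no VSA
```

This is the check that would have stopped 3CX-style artifacts: a valid code-signing certificate says nothing about whether the build environment or policy evaluation was trustworthy.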
Guardrails: The Compliance Barrier
65% of clusters run flat networks, violating NIST 800-53 SC-7 boundary protection and enabling lateral movement when anything is compromised. Default-deny NetworkPolicies, service mesh with mTLS and SPIFFE workload identity for Layer 7 enforcement, and admission controllers via Kyverno or OPA Gatekeeper for policy-as-code that validates before anything reaches the cluster are the three layers that address this. Secrets management sits alongside them: 88% of breaches involve credential theft, and secrets in environment variables are visible in API server audit logs, process listings, and ServiceAccount token enumeration. External Secrets Operator injecting into tmpfs at runtime is the control.
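The first of those three layers, a default-deny NetworkPolicy, is small enough to sketch in full. This builds the manifest as a plain dict (the same structure you would write in YAML); the namespace name is an illustrative placeholder.

```python
def default_deny(namespace):
    """Build a default-deny-all NetworkPolicy for a namespace.

    The empty podSelector matches every pod in the namespace, and listing
    both policyTypes denies all ingress and egress until explicit allow
    policies are added, closing off the flat-network lateral-movement path.
    """
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},                      # empty selector: all pods
            "policyTypes": ["Ingress", "Egress"],   # deny both directions by default
        },
    }

policy = default_deny("payments")  # "payments" is a hypothetical namespace
print(policy["metadata"]["name"])
```

Applied per namespace, this turns the flat network into the NIST 800-53 SC-7 boundary it was supposed to be; the mesh (mTLS, SPIFFE) and admission controllers then layer identity and policy on top.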
Runtime: The Build-Only Fallacy
Static analysis is blind to zero-days, living-off-the-land attacks, and behavioural anomalies that only manifest at execution. CloudSiphon infected 150 organisations via a trojaned NGINX that ran malicious code purely at runtime, bypassing all static analysis entirely. Cluster-wide eBPF deployment gives real-time syscall, file access, and network monitoring at the kernel level. Behavioural baselining per workload type produces high-fidelity alerts with full process lineage, and drift detection runs continuously against the image manifest. The specific syscalls indicating container breakout attempts are covered in detail: setns, unshare, writes to /proc/sysrq-trigger, and writes to /proc/sys/kernel/core_pattern. Advanced evasion techniques including reflective code loading, eBPF rootkits using bpf_probe_write_user, and delayed execution via cron are each covered alongside their counters.
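The breakout indicators listed above reduce to a small detection rule. This is a minimal sketch: the event shape is an assumption for illustration, where a real pipeline would consume structured eBPF or Falco output.

```python
# Indicators from the talk: syscalls and file-write targets associated
# with container escape attempts.
BREAKOUT_SYSCALLS = {"setns", "unshare"}
BREAKOUT_WRITE_PATHS = {"/proc/sysrq-trigger", "/proc/sys/kernel/core_pattern"}

def is_breakout_indicator(event):
    """Flag a syscall event that matches a container-breakout indicator.

    `event` is an assumed shape: {"syscall": str, "path": str (optional)}.
    """
    if event["syscall"] in BREAKOUT_SYSCALLS:
        return True
    if event["syscall"] == "write" and event.get("path") in BREAKOUT_WRITE_PATHS:
        return True
    return False

print(is_breakout_indicator({"syscall": "setns"}))                                  # flagged
print(is_breakout_indicator({"syscall": "write", "path": "/proc/sysrq-trigger"}))   # flagged
print(is_breakout_indicator({"syscall": "write", "path": "/tmp/app.log"}))          # benign
```

In practice this rule would sit behind per-workload behavioural baselines, so that a legitimate use of `unshare` in one workload type does not generate the same alert everywhere.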
Visibility, Assurance and Blue Team Operationalisation
eBPF telemetry feeds OpenTelemetry, Falco picks up Kubernetes API audit events, and application traces and service mesh telemetry correlate across the stack to link user identity through pod creation to kernel execution. DORA, NIS2, and SEC requirements are mapped into policy-as-code rather than treated as point-in-time GRC snapshots. The closing section works through how a SOC operationalises the full framework: chaos engineering against security controls to validate detection, containment, and recovery; automated certificate rotation and revocation with MTTR targets measured in minutes; and VSA-driven cryptographic audit trails from commit to runtime stored in an immutable transparency log. It closes on the operational readiness questions defenders need answered before things go wrong: log preservation, forensic access to running infrastructure without destroying evidence, account auditability, and whether containment actions have been rehearsed. Three actions for Monday morning follow: mandate trust via VSAs and attestation controllers, enforce containment via policy-as-code and CAP_DROP, and deploy kernel-level visibility with behavioural baselines and automated drift response.
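The cross-layer correlation described above, linking user identity through pod creation to kernel execution, can be sketched as a join on pod UID. The event shapes are assumptions for illustration; real inputs would be Kubernetes API audit log entries and eBPF exec events.

```python
def correlate(audit_events, exec_events):
    """Attribute kernel-level exec events to the user who created the pod.

    Assumed shapes:
      audit event: {"verb": str, "pod_uid": str, "user": str}
      exec event:  {"pod_uid": str, "process": str}
    """
    # Index pod-creation audit events by pod UID.
    created_by = {e["pod_uid"]: e["user"] for e in audit_events if e["verb"] == "create"}
    # Enrich each kernel exec event with the creating identity.
    return [
        {
            "user": created_by.get(e["pod_uid"], "unknown"),
            "process": e["process"],
            "pod_uid": e["pod_uid"],
        }
        for e in exec_events
    ]

audit = [{"verb": "create", "pod_uid": "uid-1", "user": "alice@example.com"}]  # hypothetical data
execs = [{"pod_uid": "uid-1", "process": "/bin/sh"}]
print(correlate(audit, execs))
```

This is the join a SIEM performs at scale; the point of the framework is that each layer emits the key (here, the pod UID) that the next layer needs to make the chain provable.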
About the Speaker
Ashley Barker is a security and digital leader who bridges the worlds of security and technology, with over 10 years in cybersecurity and deep experience in digital delivery, products, and user-focused solutions. A passionate advocate for NIST CSF, OWASP, and SANS, he simplifies complex security challenges, building robust cloud and DevSecOps systems for global organisations. Staying hands-on, Ashley crafts practical solutions that secure critical systems while driving innovation, making him a go-to for turning chaotic projects into clear, effective outcomes.