An introduction to fwd:cloudsec North America 2025
You're tasked with detecting an Entra ID, Azure, or Microsoft 365 attack technique. Where do you start? How do you identify what data sources are available to observe the technique? Of the data sources available, what constitutes quality data with which a coherent story can be told? What are the elements of the story that need to be told so that a responder can ask the right questions and respond with confidence? How do data sources need to be correlated, and can they even be directly correlated? What the heck is a SessionId versus a UniqueTokenIdentifier, how are they related, and why do they matter?
Anyone who has ever been tasked with developing detection guidance for cloud and identity threats in the Microsoft stack will know well just how fragmented and under-documented their security data sources are. This session will attempt to bring sanity to telling effective stories when investigating and detecting threats, based on a formal methodology for assessing the quality of any given data source. Join Cloudsec Bob Ross as he reveals the art and science behind threat storytelling and learn to distinguish malicious strokes from happy little accidents.
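For a concrete taste of the correlation problem: a minimal sketch, assuming sign-in logs exported to JSON with the field names used in Entra's sign-in log schema. Roughly, SessionId identifies the longer-lived sign-in session while UniqueTokenIdentifier identifies each individual token issued, so grouping by session shows every token minted under it.

```python
import pandas as pd

# Hypothetical export of Entra sign-in logs; field names assumed from the
# sign-in log schema (SessionId = the session, UniqueTokenIdentifier = the
# individual token issued during that session).
signins = pd.read_json("entra_signin_logs.json")

# Sessions that minted unusually many tokens are a natural starting point
# for correlating sign-in events with downstream activity.
tokens_per_session = signins.groupby("SessionId")["UniqueTokenIdentifier"].nunique()
print(tokens_per_session.sort_values(ascending=False).head())
```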
It’s not every day you stumble upon a technique that enables remote code execution (RCE) in thousands of AWS accounts at once—but that’s exactly what happened with the whoAMI attack. By researching a known misconfiguration through a new lens, we discovered how to gain access to thousands of AWS accounts that unknowingly use an insecure pattern when retrieving AMI IDs.
In this talk, I’ll walk you through how we uncovered the whoAMI attack, how we confirmed the attack works, and how we even identified vulnerable systems that were internal to AWS. We’ll explore the surprisingly diverse ways developers manage to shoot themselves in the foot by omitting the owners attribute, and share how difficult it was to build and refine detections for this anti-pattern that minimized false positives (and false negatives).
Finally, we'll focus on how you can spot and fix this misconfiguration in your own environment, covering a range of defense-in-depth strategies for both prevention and detection. This is a roller-coaster tale of cloud security research, full of ups and downs and twists and turns. And like every roller coaster I've ever been on, it lasted longer than I expected (or wanted) it to.
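If you want to check your own code before the talk, here is a minimal sketch of the anti-pattern and its fix using boto3. The name pattern and owner ID are illustrative; verify the publisher account for the images you actually use.

```python
import boto3

ec2 = boto3.client("ec2")
NAME_PATTERN = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"

# Anti-pattern: without an Owners filter, the "newest" match may be a
# look-alike AMI published from any AWS account, including an attacker's.
risky = ec2.describe_images(
    Filters=[{"Name": "name", "Values": [NAME_PATTERN]}]
)["Images"]

# Fix: pin Owners so only the expected publisher's images can match.
safe = ec2.describe_images(
    Owners=["099720109477"],  # Canonical's account; verify for your own image source
    Filters=[{"Name": "name", "Values": [NAME_PATTERN]}],
)["Images"]

# Typical consumer code then picks the most recent image.
newest = max(safe, key=lambda img: img["CreationDate"])
print(newest["ImageId"])
```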
Malicious actors often establish persistence to maintain long-term access to target systems, leveraging vulnerabilities and common misconfigurations. This is especially problematic in environments like appliances, where legitimate administrators may not have direct access to the file system, making detection and remediation even more difficult.
In this session, we will walk you through our approach, which leverages a significant advantage of cloud environments: the ability to collect metadata at scale from a diverse range of products, including appliances. We will examine two real-life case studies where we used this technique, along with extensive metadata analysis, to uncover previously undetected threats.
Join us in this session to learn how we've enhanced security through metadata analysis and improved detection, and to explore how we can collaborate to strengthen defenses across harder-to-monitor systems like appliances.
Vendors are looking for ways to differentiate themselves in a crowded market and organizations are looking for solutions that are cheaper, faster, and easier for their teams to deploy and manage. SaaS providers are now offering a “vendor-managed-deployment” option for their product, where the employees of the SaaS company install the cloud infrastructure and software into your environment and maintain this access for ongoing maintenance. This can be enticing on both sides - enabling the vendor to focus on core product development rather than secondary “features” (including deployment templates) and freeing infrastructure teams from re-architecting and managing another tool in your stack.
However, the risks introduced in this new paradigm are immediately clear - expanded cloud attack surface, granting elevated access to another entity, and redefining your posture on insider threat are just the beginning. Yet, for some organizations the tradeoff in control is well worth the operational and cost savings proposed by this model.
In this talk we'll cover how this new deployment option differs from existing, well-established integration patterns, and scenarios where it can benefit your organization. Additionally, we will provide key considerations to weigh when evaluating this deployment option, and strategies for mitigating risk and maintaining security in both the initial deployment stages and ongoing support.
In the evolving landscape of identity security, Microsoft Entra ID's Privileged Identity Management (PIM) stands as a cornerstone solution promising just-in-time (JIT) access and least privilege enforcement. However, beneath this security veneer lies a troubling reality that many organizations fail to recognize, or won't admit. This session will peel back the layers of PIM and JIT implementation to reveal how this widely-adopted control has often created a false sense of security rather than meaningful protection.
Drawing from experience analyzing diverse customer environments, I'll demonstrate how common PIM implementations can reduce security to a mere procedural formality - transforming "just-in-time" into "just-a-button" that sophisticated adversaries easily circumvent.
I'll reveal a couple of gaps in PIM and improvements that convert checkbox security into actual protection.
Nation-state adversaries, and occasionally eCrime actors, have repeatedly leveraged trusted-relationship or supply-chain compromises in endpoint environments to gain access to a large number of victims by compromising a single target and then moving laterally to downstream customers. While this initial access vector is well known in the traditional threat landscape, there is little open-source reporting beyond COZY BEAR abusing trusted-relationship compromises to obtain access to Entra ID environments.
In this talk we will look at two incident response cases in which threat actors compromised a Microsoft Cloud Solution Provider and a SaaS provider and used these providers' access to move laterally to downstream customers and obtain access to emails in O365. We will discuss how to hunt for the observed techniques, cover mitigations, and examine the shortcomings in defending against these kinds of attacks.
If your CI/CD pipelines are built on GitHub Actions, you might be using GitHub Actions secrets to securely store credentials for connecting to your cloud environments. The security model for GitHub Actions secrets is not very intuitive. Many organizations assume that repository and organization-level secrets offer sufficient protection, but in reality these secrets lack granular access controls, exposing organizations to hidden security risks.
In this talk, we’ll break down the different types of secrets in GitHub Actions (organization, repository, and environment), the protections they offer, and their limitations. We’ll explore how misconfigurations lead to a false sense of security and discuss a more robust approach using environments and environment protection rules. We’ll also examine OpenID Connect (OIDC) for cloud authentication - where there are no long-lived secrets - but where misconfigurations can still introduce risks, and how environment-based protections help.
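To make the environment-based protections concrete, here is a hedged sketch of an AWS role trust policy for GitHub's OIDC provider; the account ID, org/repo, and role name are placeholders. The key control is the sub condition: when a workflow job targets an environment, the token's subject claim takes the form repo:&lt;org&gt;/&lt;repo&gt;:environment:&lt;name&gt;, so only jobs that pass that environment's protection rules can assume the role.

```python
import json
import boto3

# Placeholder account, org/repo, and environment for illustration.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                # Only jobs running in this repo's "production" environment match.
                "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:environment:production",
            }
        },
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="gha-prod-deploy",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```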
You’ll leave with a clearer understanding of GitHub Actions secrets, their exposure risks, and practical strategies to better protect cloud permissions of your CI/CD pipelines. Whether you’re securing sensitive credentials or refining your OIDC configurations, this session will equip you with actionable defenses to keep your automation secure at scale.
With the rise in popularity of open-source standards and tools like SPIFFE and SPIRE, it’s never been easier to get off the ground with issuing all your workloads a flexible cryptographic identity.
But this is just the start of your workload identity journey! The real challenge begins when you put these identities to work in your infrastructure, replacing legacy authentication mechanisms such as long-lived shared secrets. It's difficult to know where to get started.
This talk will:
- Briefly outline SPIFFE and Workload Identity
- Explore the options for using SPIFFE for authentication and authorization, with a focus on techniques appropriate for existing infrastructure
- Dive into a handful of practical examples of introducing SPIFFE-based authentication between legacy services, and between legacy services and cloud APIs (see the sketch after this list)
- Describe higher-level strategies for rolling out workload identity in an organization, based on experience helping large organizations approach this work
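To ground the existing-infrastructure angle, here is a minimal sketch (not the talk's code) of SPIFFE-style authentication retrofitted onto a legacy service: terminate mTLS, then authorize on the SPIFFE ID carried in the client certificate's URI SAN. The trust domain and workload path are hypothetical.

```python
from cryptography import x509

# Hypothetical allow list for one legacy service.
ALLOWED_SPIFFE_IDS = {"spiffe://example.org/billing/api-client"}

def authorize_client(pem_cert: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_cert)
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    # An X.509 SVID carries its SPIFFE ID as a URI SAN.
    uris = san.value.get_values_for_type(x509.UniformResourceIdentifier)
    return any(uri in ALLOWED_SPIFFE_IDS for uri in uris)
```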
In December 2024, Microsoft’s Digital Crimes Unit (DCU) took legal action against LLMjacking threat actors, who developed tools designed to bypass the guardrails of generative AI services to create offensive and harmful content. Specifically, Microsoft’s legal complaint addresses the unlawful generation of harmful images using Microsoft’s Azure OpenAI Service.
AI-generated deepfakes are realistic, easy to make, and increasingly used for fraud, abuse, and manipulation. This poses a threat to political elections, consumers of online services at risk for fraud, and the online safety of women and children.
The threat actors involved built a sophisticated scheme to abuse the cloud AI services of compromised accounts and then sell access to end users for a wide range of illicit activities, including deepfakes.
In effect, LLMjacking made deepfakes a cloud infrastructure threat.
During the talk, we'll go through the technical aspects of the operation carried out by the cybercriminal group Storm-2139, a global network of creators, providers, and end users.
Attendees will be equipped with practical knowledge to better protect their organizations from this evolving threat in the cloud landscape.
Google Cloud’s Identity-Aware Proxy (IAP) is often seen as the final gatekeeper for internal GCP services - but what happens when that gate quietly swings open? This session uncovers how subtle misconfigurations in IAP can lead to serious data exposure, even in environments with no public IPs, strict VPC Service Controls, and hardened perimeters. We’ll introduce a new vulnerability in IAP that enables data exfiltration, allowing attackers to bypass traditional network controls entirely, without ever sending traffic to the public internet. In addition, we’ll walk through real-world examples of overly permissive IAM bindings, misplaced trust in user-supplied headers, and overlooked endpoints that quietly expand the attack surface. Attendees will gain a deeper understanding of IAP’s internal workings, practical detection strategies, and a critical perspective on trust boundaries in GCP.
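On the point about misplaced trust in user-supplied headers: the documented countermeasure is to verify the signed IAP assertion rather than any caller-settable identity header. Here is a sketch adapted from Google's published guidance; the expected audience string is deployment-specific.

```python
from google.auth.transport import requests
from google.oauth2 import id_token

def verify_iap_request(iap_jwt: str, expected_audience: str) -> dict:
    # Verifies signature, expiry, and audience against IAP's public keys;
    # raises on failure and returns the token claims on success.
    return id_token.verify_token(
        iap_jwt,
        requests.Request(),
        audience=expected_audience,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )

# Usage: read the "x-goog-iap-jwt-assertion" header set by IAP, never a
# plain identity header the client could forge, e.g.
# claims = verify_iap_request(headers["x-goog-iap-jwt-assertion"],
#                             "/projects/PROJECT_NUMBER/apps/PROJECT_ID")
```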
This talk will explore a lesser-known technique for safely deploying IAM Roles Anywhere on platforms without a key management service or secret storage.
An impediment to the adoption of IAMRA is the absence of an existing PKI solution, or the expense and expertise needed to run a Private CA. Therefore, we will look at integrating Route 53 with an ACME-enabled PKI, such as Let's Encrypt, for device enrollment with autonomous short-lived certificate issuance.
Come along for a deep dive into:
(1) Configuring IAMRA with targeted CA certificates.
(2) Certificate Attribute Mappings for client authentication.
(3) The corresponding Trust Policy on a Role.
(4) Extending the AWS SDK via the credential helper so temporary session credentials are transparently returned to the calling process.
We will also build detection from CloudTrail logs for abuse of private keys, should they leak.
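As a flavor of that detection, a hedged sketch: surface Roles Anywhere CreateSession events from CloudTrail and flag calls originating outside the fleet's expected address range (the prefix below is a placeholder assumption).

```python
import json
import boto3

EXPECTED_PREFIX = "203.0.113."  # placeholder fleet range

ct = boto3.client("cloudtrail")
pages = ct.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateSession"}]
)
for page in pages:
    for event in page["Events"]:
        # Other services also emit CreateSession-named events; keep only
        # IAM Roles Anywhere.
        if "rolesanywhere" not in event.get("EventSource", ""):
            continue
        detail = json.loads(event["CloudTrailEvent"])
        ip = detail.get("sourceIPAddress", "")
        if not ip.startswith(EXPECTED_PREFIX):
            print(event["EventTime"], ip, detail.get("userIdentity", {}).get("principalId"))
```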
For contrast, using a hardware-backed private key store, such as Yubikey, with an ACME-enabled PKI will also be demonstrated.
In the ever-evolving landscape of cybersecurity, tools that help security professionals enumerate and understand their environments are invaluable. ROADRecon, an open-source tool designed to enumerate Azure AD (now Entra) environments, has been a staple for many. However, with the impending deprecation of the Azure AD Graph API, ROADRecon faces a significant challenge.
The session will begin with an introduction to security assessments in Azure, highlighting the challenges and the role of automated tooling, specifically ROADRecon. A particular challenge that we will explore is ensuring continued operation of tools that previously made use of the Azure AD Graph API and enhancing them with support for different APIs that can provide security professionals with an accurate view of the tested environment.
The core of the presentation will focus on the implementation of the Microsoft Graph API in ROADRecon, including the hurdles encountered and the solutions developed. This will involve an in-depth discussion on Entra’s implementation of OAuth, first-party applications, and pre-consented permissions, which are crucial for understanding how attackers can bypass security protections.
As we explore legitimate usage of Microsoft Graph, we will demonstrate how lesser-known APIs (e.g. Ibiza API) can be used to enhance reconnaissance capabilities and provide an equivalent method for fetching tenant information that would not be logged. Lastly, we’ll finish with an explanation of possible preventative and detective controls available to organizations to try and mitigate the usage of these APIs for malicious activities.
What Attendees Will Take Away:
- Understanding of OAuth in Entra: Attendees will gain a foundational understanding of how OAuth is implemented in Entra and learn how first-party applications and the concept of pre-consented permissions can be used for offensive security purposes (a minimal sketch follows this list).
- Transition from Azure AD Graph to Microsoft Graph: We will explore the impact of Microsoft's deprecation of Azure AD Graph in favor of Microsoft Graph on both offensive and defensive security teams. There are crucial differences between the APIs that affect how threat actors must approach Azure estates now and how defenders can detect such attacks.
- Tool Enhancements: An introduction to the enhanced capabilities of the rebuilt ROADRecon tool, including its use of undocumented APIs like the Ibiza API.
- Detection Strategies: How defenders can detect modern security tooling such as ROADRecon, the challenges of doing this at scale, and the possibility of detecting undocumented APIs.
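To make the pre-consent concept tangible, a minimal sketch of the class of behavior those detection strategies target: acquiring a Microsoft Graph token through a well-known first-party application (here, the Azure CLI's public client ID) instead of a custom app registration that an admin would have to consent to.

```python
import msal

# The Azure CLI's well-known first-party client ID; no app registration or
# consent grant is created in the target tenant.
app = msal.PublicClientApplication(
    client_id="04b07795-8ddb-461a-bbee-02f9e1bf7b46",
    authority="https://login.microsoftonline.com/common",
)
result = app.acquire_token_interactive(scopes=["https://graph.microsoft.com/.default"])
print(result.get("access_token", result))
```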
Knowing who owns an identity is crucial for proper identity management and incident response. However, as IAM is increasingly managed in infrastructure-as-code frameworks, it is becoming harder to answer questions of identity ownership. Platform audit logs (e.g. CloudTrail, Entra ID audit logs) are no longer enough to identify which human users created or managed specific identities.
In this talk, we will share our experience in tackling the challenge of unraveling IaC-based ownership, utilizing data sources such as IaC codebases and CI/CD logs, using static code analysis, heuristics and LLMs.
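One heuristic from this problem space, sketched with hypothetical file and resource names: attribute an IaC-managed identity to the engineer who introduced it by asking git who last touched its declaration.

```python
import subprocess

def last_author(tf_file: str, declaration: str) -> str:
    # git's pickaxe (-S) finds the most recent commit that added or removed
    # the declaration string in the given file.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%an <%ae>", "-S", declaration, "--", tf_file],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Hypothetical Terraform file and IAM role declaration.
print(last_author("iam/roles.tf", 'resource "aws_iam_role" "payments_deploy"'))
```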
Microsoft is getting better at closing out security gaps in well-known APIs and components of its platform. However, as shown across the different cloud service providers, these interconnected systems almost always carry significant complexity and a broad range of APIs that communicate in various ways. Exploring these lesser-known APIs from an attacker's and defender's perspective allows us to better understand these complex attack surfaces and further defend cloud environments.
This talk aims to further expand the rapidly developing field of exploring hidden APIs in Entra/Azure, focusing on the SharePoint APIs used by the service's browser client. We'll explore enumeration methods available through the SharePoint APIs that avoid direct usage of Microsoft Graph, thereby allowing an attacker to evade all known methods of detection. The techniques shown allow an attacker with a foothold in SharePoint to pivot and move laterally throughout an Azure environment, circumventing modern security controls and potentially compromising additional services, aiding an adversary in moving toward their objectives. The talk will conclude with an exploration of file-sharing security controls in the environment and whether they can be bypassed, along with an overview of the actions available to defensive teams to prevent or detect attempts to use these APIs directly.
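For illustration (this is not the talk's tooling): enumerating site users through SharePoint's native REST API rather than Microsoft Graph. The tenant name is a placeholder, and acquiring a SharePoint-scoped token is assumed to have happened already.

```python
import requests

TENANT = "contoso"  # placeholder tenant
token = "<SharePoint-scoped access token>"  # acquisition out of scope here

resp = requests.get(
    f"https://{TENANT}.sharepoint.com/_api/web/siteusers",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json;odata=verbose",
    },
)
resp.raise_for_status()
for user in resp.json()["d"]["results"]:
    print(user["Title"], user.get("Email", ""))
```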
Attendees will gain an understanding of:
- Microsoft SharePoint Online internals and how they differ from the SharePoint-related Microsoft Graph APIs
- How an attacker with a foothold as a regular business user with access to SharePoint can bypass security controls within a tenant to access sensitive resources
- What a security team can do to prevent and detect usage of these APIs within an organization
In June 2023, Descope published research on nOAuth, a critical OpenID Connect implementation flaw that enables user account takeover in vulnerable applications. Following the disclosure, Microsoft and the Microsoft Security Response Center (MSRC) published articles on this issue, highlighting common anti-patterns and their follow-up actions with impacted application owners.
Fast forward to the fall of 2024, and nOAuth remains an active security threat. In this session, we will explore its persistence, unveiling new research that builds upon Descope’s original findings to identify additional implementation flaw patterns and methods for staging the abuse. We will also discuss how we uncovered vulnerable applications, the varying responses from developers, and what this means for securing modern SaaS applications.
Attendees will leave with a deeper understanding of how nOAuth attacks work, real-world examples of its exploitation, and actionable strategies to mitigate this critical risk.
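The flaw class is easy to see in miniature. In this hypothetical handler (not any specific vulnerable app), keying accounts on the mutable email claim lets an attacker who controls that claim in their own tenant take over a matching account; the fix is keying on the immutable tenant and object identifiers.

```python
# Hypothetical account-resolution handlers for an app that accepts
# multi-tenant Entra ID sign-ins.

def resolve_account_unsafe(claims: dict, users_by_email: dict):
    # Anti-pattern: "email" is attacker-settable in a tenant they control.
    return users_by_email.get(claims["email"])

def resolve_account_safe(claims: dict, users_by_subject: dict):
    # Fix: key on the immutable (tid, oid) pair (or the "sub" claim).
    return users_by_subject.get((claims["tid"], claims["oid"]))
```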
Hijacking Privileges in the Cloud: Breaking Role Boundaries in Amazon ECS
Modern cloud environments rely on fine-grained identity and access management (IAM) to enforce security boundaries. But what happens when those boundaries break? In our research, we uncovered a vulnerability in an undocumented Amazon ECS protocol that allows a low-privileged role running on an EC2 instance to hijack the IAM privileges of higher-privileged containers on the same machine.
This talk will explore the technical details of this attack and how it exploits shared infrastructure in containerized environments. In addition, we will provide best practices on avoiding role co-location risks, ensuring that high-privilege tasks are never deployed alongside low-privilege workloads in ways that could allow privilege hijacking.
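One co-location guardrail, sketched as an assumption rather than the talk's verbatim recommendation: pin high-privilege ECS tasks to a dedicated instance group via a placement constraint (the custom attribute name is hypothetical), so they never share hosts with low-privilege workloads.

```python
import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="admin-task",
    containerDefinitions=[
        {"name": "admin", "image": "example/admin:latest", "memory": 512}
    ],
    placementConstraints=[{
        "type": "memberOf",
        # "workload-tier" is a custom ECS instance attribute you would set
        # on a dedicated group of container instances.
        "expression": "attribute:workload-tier == privileged",
    }],
)
```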
AWS IAM is getting more and more complex—permissions policies, permission boundaries, session policies, resource-based policies, service control policies, and now the latest buzz: Resource Control Policies (RCPs). Defining security boundaries on paper? That’s the easy part. But rolling them out across hundreds of AWS accounts running critical financial applications—that’s where things get tricky.
At Vanguard, we found a way to keep security tight without slowing things down. Instead of being the impeding team, we focused on making cloud security an enabler, not a blocker. In this talk, we’ll share how we built and deployed SCPs and Resource Control Policies (RCPs) to set security boundaries at scale—without causing downtime for business applications.
While implementing data perimeter controls with a layered strategy, we ran into real-world challenges: dynamic VPC IDs and corporate CIDR ranges that made SCPs tough to keep up to date, Resource Control Policies not supporting certain global condition keys for S3, integrating defense-in-depth CI/CD pipeline controls with data perimeter controls, and protecting identities and resources from being tagged from the AWS console. Finally, verifying the effectiveness of these controls was non-trivial because of inconsistent access-denied patterns.
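For readers new to RCPs, a minimal data-perimeter sketch (a generic example, not Vanguard's policy): deny S3 access unless the calling principal belongs to the organization, with a carve-out for AWS service principals.

```python
import json
import boto3

rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EnforceOrgIdentitiesForS3",
        "Effect": "Deny",          # RCP statements are Deny-only
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},
            "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
        },
    }],
}

orgs = boto3.client("organizations")
orgs.create_policy(
    Name="s3-data-perimeter",
    Description="Deny S3 access from principals outside the organization",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp),
)
```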
We scanned all of the Google-owned container images you might be using on the Artifact Registry for vulnerabilities and secrets. You probably won't like what we found.
While AI can dramatically accelerate security review of IaC, unchecked hallucinations render many solutions worse than useless. This talk demonstrates practical techniques for building trustworthy AI systems that can reliably analyze Terraform and CDK code for misconfigurations and vulnerabilities. Through live demonstrations of hallucination detection, output validation, and claim verification pipelines, attendees will learn how to build safeguards to use AI as a dependable cloud security tool. We'll examine where the "hallucination hotspots" lie, how to leverage open-source libraries and prompting to prevent them, and how to generate actionable remediation plans that actually work in real-world cloud environments.
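A minimal claim-verification step in the spirit of the talk (the finding format is a hypothetical assumption): before accepting an AI-reported misconfiguration, confirm the resource it names actually appears in the scanned Terraform.

```python
import re

def finding_is_grounded(finding: dict, terraform_src: str) -> bool:
    # Reject findings that reference resources the code never declares,
    # the most common hallucination failure mode.
    pattern = (
        rf'resource\s+"{re.escape(finding["resource_type"])}"'
        rf'\s+"{re.escape(finding["resource_name"])}"'
    )
    return re.search(pattern, terraform_src) is not None

# Hypothetical finding shape:
# finding_is_grounded({"resource_type": "aws_s3_bucket", "resource_name": "logs"}, src)
```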
Once again, what's old is new. It's looking like MCP is going to be here for a while, so it's only a matter of time before an enabled developer asks for sign-off on something that works great on their local machine.
This lightning talk is geared to provide cloud security professionals with an up to date understanding of best (or least bad) practices. We’ll cover:
- Layers of uncertainty: With a spec in active development and not even a transport layer fully agreed upon, how do you approach deploying something before the recommended architectures are even decided on?
- Trends: What are other folks doing? Is OAuth actually feasible? What else has been done? How are people working around limitations and what are the risks? What is SSE, why is it deprecated but still implemented everywhere and what do I do when they tell me it "needs websockets"?
- Documented paths forward: What is the community doing? What standards have been released to help align? What tools and frameworks exist to make our jobs easier?
- What could go wrong? A dive into cloud specific threat vectors, covering the theoretical and maybe even real world incidents
Traditional cloud compliance often relies on manual, checklist-driven processes that struggle to keep pace with modern cloud infrastructure's complexity and agility. This session introduces GRC Engineering, a fresh, proactive approach that integrates Governance, Risk, and Compliance (GRC) principles directly into the AWS engineering lifecycle.
Attendees will explore how GRC Engineering leverages automation, infrastructure as code, and AWS-native tools to transform compliance from a reactive burden into a strategic asset. Real-world examples will demonstrate tactical methods for embedding compliance seamlessly into AWS environments, using services such as AWS Config, AWS Audit Manager, and automation frameworks.
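Compliance-as-code in miniature (an illustrative sketch, not the session's material): deploying an AWS Config managed rule so a control is evaluated continuously rather than at audit time.

```python
import boto3

config = boto3.client("config")
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "s3-bucket-public-read-prohibited",
    # An AWS-managed rule; Config re-evaluates it as buckets change.
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
    },
})
```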
Participants will walk away equipped with actionable insights and strategies for adopting GRC Engineering practices, streamlining compliance processes, reducing operational risk, and achieving continuous compliance in AWS environments.
Tenant Projects are the backbone of services in GCP, yet their architecture remains largely misunderstood, even by seasoned cloud security practitioners. This talk takes a deep dive into how GCP implements Tenant Projects, how permissions and interconnected services are structured, and where the cracks start to form.
As part of our research into Vertex AI, we uncovered vulnerabilities that not only compromised Vertex AI itself but also exposed fundamental weaknesses in the Tenant Project model. By understanding the permission model and service interactions, we were able to escalate our findings and take full control over an entire Tenant Project.
We'll walk through the architecture, highlight the risks, and show real-world exploitation scenarios, unveiling for the first time additional vulnerabilities beyond our initial discoveries. This talk isn't just about the bugs; it's about how attackers can abuse the Tenant Project model and what security teams need to do to defend against it.
Expect a mix of deep technical content, hands-on exploitation, and a broader discussion on the implications of GCP’s multi-tenant architecture.
Oracle Cloud Infrastructure (OCI) has its own approach to security and policy enforcement, which differs significantly from AWS, Azure, and GCP.
While organizations moving to OCI expect familiar security controls, OCI Governance Rules, IAM Policies, and Security Zones operate differently from AWS SCPs, RCPs, and Declarative Policies, Azure Policies, and GCP Org Policies.
This session will break down what security practitioners need to know about OCI’s security model, how its enforcement mechanisms compare to other cloud providers, and example use-cases for integrating OCI into a multi-cloud security strategy.
Although AWS has been around for over 15 years, cloud threat hunting remains a relatively nascent discipline. While opportunistic threats like cryptocurrency mining are well-known, large-scale, cascading attacks targeting cloud-native infrastructure are less frequently discussed.
Over the past 18 months, we’ve significantly expanded our cloud threat hunting operations using vendor-agnostic strategies to better understand these emerging threats. This talk will outline our unique approach, which combines hypothesis-driven investigations, TTP-based hunts, and anomaly detection to proactively uncover threats at scale. We’ll also highlight our experiments with broader, cross-functional hunt operations that extend beyond our core team.
Attendees will gain insights from our large-scale cloud attack surface analysis and walk away with a deeper understanding of the evolving cloud-native threat landscape.
As AI workloads migrate to the cloud, Cloud Service Providers are rapidly evolving their GPU offerings. These multi-tenant environments are often built on NVIDIA Container Toolkit, the industry-standard framework for running GPU-based containerized apps. In this talk, we will show you how a single vulnerability in this fundamental framework impacted the entire CSP ecosystem – and how each environment handled a brand-new 0-day vulnerability.
We'll walk through our discovery of a container escape vulnerability in this foundational layer of GPU infrastructure, and its real-life implications across three different cloud providers: Azure, DigitalOcean, and Replicate. Each case began with a standard customer workload running our exploit, but the outcomes varied widely. One led to minor impact; another enabled lateral movement that triggered blue teamers; and one resulted in complete service takeover.
Join us to gain a firsthand look at how major cloud providers build their environments and at the anatomy of a container escape vulnerability in the wild. Finally, learn how to build stronger guardrails in the cloud by examining the flaws and misconfigurations we were able to exploit.
Conference talks and engineering blogs are often quilted from small omissions and half-truths: subtle white lies about collaboration that minimize technical challenges, inflate outcomes, and omit critical details regarding risks, technical debt, and unresolved issues. This is part of the unspoken social contract of sharing sensitive internal information publicly.
The key is to read between the lines, spot the implicit, and still extract meaningful insights. This talk will provide you with a framework to navigate these nuances effectively.
We’ll explore what is often left unsaid, examine real-world examples, and equip you with the tools to make the most of fwd:cloudsec and similar events!
Backdooring Microsoft's applications is far from over. Adding service principal (SP) credentials to these apps to escalate privileges and obfuscate activities has been seen in nation-state attacks, and led to the development of new security controls. Despite these efforts, we uncovered a vulnerable, built-in SP that could have allowed escalation from Application Administrator to any hybrid tenant user (including Global Admin).
Join us for an overview of SPs, app registrations, and the history of backdoor credentials on these identities. This talk will illustrate how building on existing SP research led to a new vulnerability, and cover controls that can help mitigate similar risks. Finally, we'll identify leads for future SP investigations, and how you can use past research to seek your own vulnerabilities.
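The backdoor primitive the talk builds on, sketched for defenders (the object ID and token are placeholders, and the call requires suitable Graph permissions): adding a credential to an existing service principal via Microsoft Graph. The resulting "Add service principal credentials" audit events are one place mitigating controls can look.

```python
import requests

SP_OBJECT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
token = "<Graph access token>"  # acquisition out of scope here

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{SP_OBJECT_ID}/addPassword",
    headers={"Authorization": f"Bearer {token}"},
    json={"passwordCredential": {"displayName": "backdoor-demo"}},
)
resp.raise_for_status()
# The new client secret is returned once, to the caller only.
print(resp.json()["secretText"])
```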
Have you ever stared at a GCP audit log error—like “resource not found” or “permission denied”—and wondered what really went wrong or if there’s more to the story? In this session, we’ll unravel the often-overlooked world of GCP error codes and reveal how these cryptic messages can guide your incident response, sharpen detection rules, and even hint at possible reconnaissance attempts. Through practical examples, we’ll show how digging deeper into error objects can highlight missing identity details, reduce false positives, and strengthen your overall GCP security posture. If you’ve ever dismissed an “unhelpful” error, join us to learn why those logs might be more powerful than you think.
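As a taste of the approach, a hedged sketch: pull PERMISSION_DENIED (gRPC status code 7) audit entries and rank callers, a cheap first pass when hunting possible reconnaissance. Exact payload parsing varies by client-library version.

```python
from collections import Counter
from google.cloud import logging as gcl

client = gcl.Client()
FILTER = 'logName:"cloudaudit.googleapis.com" AND protoPayload.status.code=7'

callers = Counter()
for entry in client.list_entries(filter_=FILTER):
    payload = entry.to_api_repr().get("protoPayload", {})
    principal = payload.get("authenticationInfo", {}).get("principalEmail", "<unknown>")
    callers[principal] += 1

# Principals generating bursts of permission-denied errors merit a closer look.
for principal, count in callers.most_common(10):
    print(count, principal)
```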
Like other PaaS offerings that CSPs build from third-party components, the current wave of LLMs comes with its own set of logging and observability challenges. We'll explore some of them, and share lessons learned from tackling this for both observability and detection-and-response purposes.
What if I told you that the AI tools your defenders use on a regular basis potentially expose your cloud environment to risk?
With a single wrong click by a defender attempting to analyze a log using Google Cloud’s promising new AI summarization feature, or with one innocent prompt, the defender could quickly become the victim of a phishing attack—or worse, suffer sensitive resource data exfiltration.
This talk will showcase a vulnerability I discovered in Google’s new flagship AI assistance tool, Gemini Cloud Assist, where attackers can embed malicious prompts into a victim's logs. When the user reviews these logs with Gemini Cloud Assist, the attacker may exploit Gemini's integrations or deceive the user into accessing a legitimate-looking phishing link. This vulnerability discovery reveals a new attack class in the cloud that defenders should be aware of.
Additionally, I expanded my research to include a similar service in Azure, Azure Copilot, which does not yet seem to be mature enough to be susceptible to this attack class.
Diving into my research on both Google's Gemini Cloud Assist and Azure Copilot, I will enable defenders to better understand the risks arising from the use of these emerging services.
By the end of this talk, the audience will learn about new monitoring techniques for malicious prompts in the cloud they can apply in their environments, specifically focusing on the various log sources that we believe attackers will target in the future.
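One monitoring idea in this vein, offered as a crude heuristic sketch rather than the talk's method: scan log payload strings for phrases characteristic of prompt injection before those logs are fed to an AI assistant.

```python
# Marker phrases are illustrative; real coverage needs a broader corpus.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the above",
    "you are now",
)

def looks_injected(log_text: str) -> bool:
    lowered = log_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```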
AI agents are everywhere, transforming business operations and driving innovation across industries. To accelerate adoption, cloud providers are rapidly developing agent-building platforms that simplify deployment and integration. However, their widespread adoption introduces significant security risks.
In this session we will showcase the methodologies and techniques attackers use to compromise organizational AI agents, uncovering vulnerabilities that allow adversaries to bypass security controls and access organizations' sensitive data. We will dissect these emerging threats and their impact on enterprise security.
Finally, we offer actionable mitigation strategies and best practices to help organizations protect their AI-driven environments against these evolving threats.
In 2024 Fly.io made a big bet that developers would want access to cloud GPU compute resources. While that bet didn't quite pay off, we spent a lot of time (and money) in finding a way to provide shared customer access to NVIDIA GPU hardware in a secure manner. When the work was done we had a much greater understanding of the risks presented by GPUs, as well as possible mitigations, that may be useful to anyone looking to provide GPU resources to customers.
This lightning talk will include:
* Technical details of the challenges faced in implementing secure GPU access, including why existing NVIDIA GPU virtualization technologies were unsuitable
* An overview of the threats associated with offering shared or virtualized GPU access
* A review of the architecture of NVIDIA datacenter-grade GPUs, with a focus on security-relevant subsystems
* A dive into PCIe functionality, threats, and mitigations
* The conclusions and recommendations from our security evaluations of the hardware and OS environments
Configuring AWS Identity and Access Management is typically seen as the customer's responsibility for security. This is predicated on the "shared responsibility model" where security and compliance responsibility is shared between the cloud provider (AWS) and the customer.
We believe that the "shared responsibility model" comes with certain assumptions. We assume that the cloud provider provides clear instructions for how to use its tools and configure infrastructure. Part of that assumption is that IAM actions and permissions are clear and unique. What would be the point of blocking one IAM action only to find that there's another we missed (a game of whack-a-mole)?
In this talk, we'll go through increasingly problematic examples of duplicitous IAM permissions: permissions that effectively let us achieve the same goal. These examples include retrieving data, setting permissions (resource-based policies), and more. We'll cover the impact and how these lead to blind spots in security, including in our monitoring and alerting defenses, our preventative controls, and more.
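A toy illustration of the problem (the action groupings are illustrative examples of the pattern, not the talk's list): if a deny list covers only some of a set of actions that reach the same outcome, the remainder become blind spots.

```python
# Hypothetical groupings of actions with effectively equivalent outcomes.
EQUIVALENT_OUTCOMES = {
    "read S3 object data": ["s3:GetObject", "s3:GetObjectVersion"],
    "grant role permissions": ["iam:AttachRolePolicy", "iam:PutRolePolicy"],
    "change bucket access": ["s3:PutBucketPolicy", "s3:PutBucketAcl"],
}

def uncovered_siblings(denied: set) -> dict:
    # For every outcome we partially denied, report the sibling actions
    # that can still achieve it.
    return {
        goal: [a for a in actions if a not in denied]
        for goal, actions in EQUIVALENT_OUTCOMES.items()
        if any(a in denied for a in actions) and not all(a in denied for a in actions)
    }

print(uncovered_siblings({"s3:GetObject", "iam:AttachRolePolicy"}))
```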
WithSecure Consulting is going independent, and with that came the need to create an entirely new AWS estate from scratch. The catch? We're not an engineering house and this isn't our core focus area. It needed to be done quickly, with the resources we already had available, on the lowest budget possible. The end result? A bunch of penetration testers and security consultants finding themselves on the other side of the coin, engineering an environment to support and enable security consulting and research work, which invariably requires bending/breaking a lot of "security best practices".
Join Mohit and Nick as they run through the build-out process and associated engineering decisions and tradeoffs, highlighting where we chose to deviate from the usual "best practices" and why. We'll cover:
- Authentication & Authorisation strategies
- Organisation structure and hardening, workload segregation tradeoffs
- Code and infrastructure deployment approaches across an incredibly disparate set of teams
- Security monitoring on a budget
Attendees will walk away from this talk with battle-tested advice on how to design, build, and operate an AWS estate on a limited budget with limited personnel, and an understanding of the trade-offs that were made to support some distinctly non-standard requirements.
AI and ML are rapidly becoming foundational technologies for enterprises, offering powerful enhancements to products and services. However, onboarding and securing MLops and LLMOps tools while enabling access to real-world data is particularly challenging, especially in highly regulated industries like healthcare.
This session will provide a practical, security-focused deep dive into reference architectures for securing ML and LLM workloads across multi-cloud environments. We will explore native security controls in AWS and Azure, access patterns for sensitive data, and best practices for protecting AI workloads from unauthorized models and insecure data pipelines.
Attendees will gain actionable insights into designing secure AI/ML environments while balancing performance and compliance needs, including real-world lessons learned from platform architects navigating this evolving landscape.
In AWS, Identity and Access Management (IAM) policies are the foundation of access control throughout the cloud. The complexity and expressiveness of these policies present significant challenges to cloud security professionals when it comes to modeling access and answering basic questions such as "who can access this resource?" or "what are the effects of this policy change?"
This presentation will walk practitioners through a three-part journey:
* Introducing new OSS building blocks which can remove the guesswork of writing IAM policies
* Using these building blocks to uplevel several cloud security pillars
* Frameworks to simplify and distill the nuance of cloud access into insights for builders and leaders at their own companies
This talk will include the release of the above open-source tooling to support and facilitate the approaches it presents.
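One existing building block in this space, shown for context using AWS's own API (the talk's open-source tooling goes further): IAM Access Analyzer's ValidatePolicy flags errors and anti-patterns before a policy ships.

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
}

aa = boto3.client("accessanalyzer")
findings = aa.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)["findings"]
for f in findings:
    print(f["findingType"], f["issueCode"], f["findingDetails"])
```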
In this talk we will present the prompt formatting technique, which we used to reliably bypass the Sensitive Information Filter functionality within Bedrock Guardrails, a service used to secure AI systems in AWS. Sensitive Information Filters are used by Guardrails to prevent Bedrock AI systems from returning sensitive information to users, such as Names and Email Addresses. By instructing the AI model to return data using programmatic, SQL-like queries, the returned data was modified sufficiently to bypass this security control, similar to WAF evasion. We have also developed a system prompt to help AWS customers mitigate this bypass, which we will discuss during the talk.
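For defenders who want to reproduce this kind of testing, a hedged harness sketch using the ApplyGuardrail API (the guardrail ID and version are placeholders): send candidate output through the guardrail and observe whether the sensitive-information filter intervenes.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")
resp = bedrock.apply_guardrail(
    guardrailIdentifier="gr-example123",  # placeholder
    guardrailVersion="1",
    source="OUTPUT",
    content=[{"text": {"text": "Contact Jane Doe at jane.doe@example.com"}}],
)
print(resp["action"])  # "GUARDRAIL_INTERVENED" if a filter fired
for output in resp["outputs"]:
    print(output["text"])  # masked or blocked text, if any
```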
Network egress controls are a well recognised technique to defend against exfiltration of sensitive data and malware/attackers using command and control channels. AWS has managed services for this: Network Firewall, DNS Firewall, and VPC endpoint policies. I implemented egress controls at scale using these services and encountered many implementation challenges.
This presentation addresses these challenges including service limitations, techniques for evading the controls, and unexpected issues presented by several services. You’ll learn what security can or cannot be provided if done correctly and how to successfully approach a large scale implementation.
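A small slice of that implementation surface, sketched with illustrative IDs: a DNS Firewall rule that blocks resolution of domains not on an allow list is one of the egress layers involved.

```python
import boto3

r53r = boto3.client("route53resolver")
r53r.create_firewall_rule(
    creatorRequestId="egress-demo-1",
    firewallRuleGroupId="rslvr-frg-example",   # placeholder rule group
    firewallDomainListId="rslvr-fdl-example",  # placeholder "*" catch-all list
    priority=100,
    action="BLOCK",
    blockResponse="NXDOMAIN",
    name="block-unlisted-domains",
)
```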
Cloud audit logs generate massive volumes of data, making anomaly detection a complex and often error-prone challenge. Traditional systems frequently suffer from high false positive rates, overwhelming security teams and obscuring critical insights. In this talk I will explore an innovative approach: training an LLM on log data, turning it into a powerful, highly nuanced anomaly detection engine.
We will be releasing these components:
1. The code for parsing log data (e.g., CloudTrail); a minimal sketch of this step appears after this list
2. The code for training the LLM on the log data
3. A lite web app for visualizing and investigating anomalies
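As a guess at the flavor of component (1), sketched with an illustrative choice of fields: flattening CloudTrail events into stable token sequences a language model can be trained on.

```python
import json

def event_to_tokens(raw_event: str) -> str:
    # Keep a small, stable vocabulary of fields; drop high-cardinality noise.
    e = json.loads(raw_event)
    return " ".join([
        e.get("eventSource", "?"),
        e.get("eventName", "?"),
        e.get("userIdentity", {}).get("type", "?"),
        e.get("awsRegion", "?"),
        str(e.get("errorCode", "ok")),
    ])
```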
As cloud security practitioners, we spend our days wrangling IAM policies—but for all the JSON we manage, it’s still surprisingly hard to answer basic questions like: “Who can access this S3 bucket?” or “What can this role actually do?” Understanding AWS permissions in practice means piecing together policies across services, accounts, organizations, and trust layers. And because those policies are often managed by different teams or scattered across pipelines, it’s difficult to reason about what’s truly possible in a deployed environment.
This talk explores a pragmatic approach to verifying effective IAM permissions: simulating what AWS IAM actually allows across all policy layers, and exposing the results in a way that clearly shows who can do what, and why. Rather than replacing pre-deploy linters or policy review processes, this system complements them by analyzing deployed IAM configuration and evaluating real-world access across identities, resources, and trust relationships. Want to know which principals have s3:GetObject access to your prod bucket? Or which external accounts can assume a sensitive role? We’ll show how to answer those questions—quickly, clearly, and without hand-parsing several JSON files.
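The underlying primitive can be sketched with AWS's own simulator (the talk's tooling layers multi-policy, cross-account context on top; the ARNs below are placeholders).

```python
import boto3

iam = boto3.client("iam")
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/app-role",  # placeholder
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::prod-bucket/*"],                # placeholder
)
for r in result["EvaluationResults"]:
    print(r["EvalActionName"], r["EvalDecision"])  # allowed / explicitDeny / implicitDeny
```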
You’ll leave with a new set of tools for understanding how IAM really works in your environment. This session includes a demo and the release of an open-source project built to support these workflows.
AI agents are rapidly transforming industries through autonomous planning, decision-making, and interaction with external environments. As cloud providers accelerate the deployment of services that simplify building these AI-driven applications, the security implications of this emerging technology remain largely unexplored.
This talk reveals concerning security issues discovered within AWS Bedrock Agents—demonstrating how attackers can exploit prompt injection and misuse integrated tools to compromise these agents. Specifically, our research uncovers techniques that lead to information leakage, agent hijacking, unauthorized tool execution, and manipulation of persistent agent memory. The issues originate from AI models' inherent probabilistic nature combined with inadequately secured prompt instructions, which attackers exploit to subvert internal planning and decision-making processes.
Although our research primarily examines AWS Bedrock Agents, the issues and attack techniques discussed extend broadly across similar agent frameworks. We will share our methodology, key findings, mitigation strategies, and highlight important open research questions. Our goal is to foster proactive dialogue among cloud security researchers, practitioners, and AI developers to address these emerging security challenges collaboratively.
Acquiring another company can be hard. Acquiring another company with an existing cloud environment can be even harder.
The organization you are acquiring will almost certainly be doing some things differently than yours in the cloud. Their cloud environment could be less mature than yours (or more mature for that matter). Best practices can change over time. Other factors that are not specific to your cloud environment can still impact it. All of these things can introduce new security risks to your cloud environment, and some of them in ways you may not expect.
In this talk, we will discuss some of the possible complicating factors when migrating another organization's cloud environment to your own, and strategies for mitigating them.
Conference wrap-up