Security BSides Las Vegas 2025

Human Attack Surfaces in Agentic Web: How I Learned to Stop Worrying and Love the AI Apocalypse
2025-08-04 , Siena

AI agent usage is accelerating us into an era of the Agentic Web, a digital landscape where machines, not humans, dominate creation, interaction, and consumption. As we inch closer to this new reality, we must ask: What are the security risks of an internet not built or experienced by, humans? LLMs have already begun to radically reshape the way we consume online information and will completely redefine how we live our online lives. From buying goods and services to searching for jobs, homes, and even relationships, agents will increasingly perform these tasks on our behalf. But convenience comes at a cost. In the coming world of bot-vs-bot warfare, scammers will unleash agents to exploit the agents of unsuspecting humans. This isn’t some distant dystopia, it’s happening right now, and it’s already creating an endless array of new vulnerabilities. We will glimpse the near future of cognitive security, where an unrelenting cascade of attack surfaces will emerge. We’ll delve into the mechanics of AI agents and the economic pressures driving their rapid adoption, explore real-world examples of how agents are already being exploited, and conclude with a look ahead at near future scenarios.


The rise of AI agents is rapidly transforming the digital landscape into a terrifying new reality. We are entering the age of the Agentic Web, a vast and interconnected ecosystem where AI-driven agents autonomously handle tasks and interact with online services on behalf of human users. While these innovations promise efficiency and personalization, they also come with dark, potentially catastrophic risks that could reshape the way we interact with the web—and each other.
In this talk, we will dive deep into the Agentic Web, exploring how AI agents are transforming nearly every facet of our digital lives and the emerging security threats they bring with them. From their rapid adoption to the vulnerabilities that lie within their structure, we’ll take a closer look at how these agents will fundamentally alter the online environment and, with it, our sense of privacy, security, and trust.
1. Introducing the Agentic Web
We begin by setting the stage with a relevant news story, showcasing just how rapidly AI agents are infiltrating our daily lives. With tools like Large Language Models (LLMs) already transforming search engines and digital assistants, AI agents are poised to take over tasks that were once firmly in the human domain. From shopping for goods to finding a job or even navigating relationships, AI agents are rapidly becoming our intermediaries, acting on our behalf in ways we never imagined.
AI Agents vs. LLMs
It’s important to understand where AI agents overlap with LLMs and how they complement one another. While LLMs like GPT-4 revolutionized natural language processing, AI agents are designed to go beyond conversation—they autonomously make decisions and carry out tasks, learning from their interactions to improve over time.
At their core, AI agents rely on a cognitive agent architecture, allowing them to perceive their environment, react to stimuli, and pursue specific goals without constant human intervention. But what makes these agents so powerful also makes them vulnerable—acting independently and autonomously in a world filled with deception, they become prime targets for manipulation.
The Agentic Web
As we transition to the Agentic Web, we explore a world where AI agents not only perform tasks but also interact with each other across digital ecosystems. This interconnected web allows agents to negotiate with vendors, find the best prices, and manage everything from travel bookings to job applications. The ease with which users can delegate tasks will enhance user experience, but it also introduces significant risks—agents may act on behalf of their users without their knowledge, opening a vast array of new vulnerabilities.
Key Aspects of the Agentic Web
Autonomy: AI agents operate without requiring constant input, making decisions based on user preferences or environmental data.
Perception and Reactivity: These agents can sense their surroundings and respond in real-time.
Learning and Goal-Oriented Behavior: Agents can adapt and evolve, continuously improving their efficiency.
Collaboration: Agents can work together, sharing information to complete complex tasks, such as coordinating multiple agents to solve a problem.
The Agentic Web represents a shift from traditional internet interaction. No longer will users directly engage with websites and services; instead, AI agents will take over, autonomously managing interactions with the web and even each other.
Applications and Use Cases
This shift is already happening. AI agents are significantly impacting industries like customer service, healthcare, and cybersecurity. For example, AI agents in customer service can handle queries autonomously, while in cybersecurity, they are used to detect and respond to threats in real-time. The implications are far-reaching, from autonomous vehicles to virtual personal assistants handling every aspect of our digital lives.
Looking toward the future, we see AI agents revolutionizing e-commerce, job seeking, dating, and even academic placements, creating a digital landscape where tasks are no longer controlled by humans, but by a network of interconnected agents, each with its own goals and capabilities.
2. Agentic Web Risks
With the rise of AI agents comes an entirely new set of risks, particularly for the users who place their trust in them. As AI agents take on more responsibility, the potential for security vulnerabilities grows exponentially. AI agents’ ability to perform tasks autonomously makes them prime targets for manipulation and exploitation.
Risks to Human Users
Users are at the forefront of this shift, and their security is at risk. Research shows that people will overtrust AI agents, opening the door to manipulation. Whether through fake AI workers or dark patterns designed to deceive, the Agentic Web will be rife with new types of cyber threats.

Dark Patterns: AI agents, with their natural language interfaces, are highly susceptible to manipulation through social engineering attacks. This includes everything from subtle biases in decision-making to outright harmful behavior encouraged by malicious actors.
Risks to Agents
AI agents themselves are not immune to threats. Just as users are targeted, agents can fall victim to countermeasures and manipulation. Cybercriminals may craft attacks specifically designed to exploit the vulnerabilities in these autonomous systems, using deceptive tactics like synthetic media and deepfake social engineering to trick agents into carrying out malicious actions.
One example of this is the “maze of irrelevant facts” technique, where malicious actors overwhelm an AI agent with misleading information, causing it to make faulty decisions. This emerging threat shows how AI agents could be used as weapons in the digital arms race, a race that is only just beginning.
3. Mitigations to Agentic Web Risks
As AI agents become more prevalent, it’s crucial to establish frameworks and security models to protect both users and agents. Know Your Agent (KYA) and MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) are two key frameworks that can help identify vulnerabilities and create proactive security measures for this emerging landscape.
Additionally, threat modeling strategies like STRIDE—which focuses on threats like spoofing, tampering, and information disclosure—will be essential for understanding and mitigating the risks posed by the Agentic Web. Ensuring least privilege for agents, where they only have access to the resources they need, will also be critical in reducing potential damage from exploited agents.
4. What the Future Holds
As we look ahead, the adoption of AI agents will continue to accelerate. The economic incentives driving their adoption will force businesses and consumers to adapt quickly. In the retail space, we are already seeing how AI agents could reshape e-commerce, leading to an arms race between buyer bots and seller bots. This could create a situation where only those with access to AI agents will succeed in securing limited offers or low prices.
Likely Near-Horizon Scenarios
What should security professionals be thinking about right now? As AI agents become more ubiquitous, cybercriminals will shift their focus from targeting humans to targeting AI agents directly. This could lead to Neo Social Engineering attacks where attackers manipulate agents rather than individuals. Just as traders have become reliant on algorithms from the rise of high frequency trading, users may come to depend on agents, only to see their trust exploited by attackers who have already tricked the AI systems they rely on.
Further, we may see the rise of fraudulent e-commerce sites designed to deceive AI agents into recommending fake products or services. This could further erode user trust and privacy, especially as personal data becomes concentrated within the agents managing our digital lives. If these agents are compromised, the damage to individual privacy could be devastating.
Conclusion
The future of the Agentic Web is both exciting and terrifying. As AI agents become more embedded in our daily lives, the risks associated with their use will grow exponentially. The need for robust security measures and vigilance has never been greater. This is not a distant concern—it is the near-future reality of the digital world we are rapidly building. Security professionals must act now to understand these risks, develop mitigation strategies, and prepare for a new era where AI agents will become central players in our digital ecosystem.
What are the implications of a web where the agents of AI, rather than humans, hold the reins? The future of cybersecurity will depend on the answers.

WORK CITED
ANP (Agent Network Protocol)
https://agentnetworkprotocol.com/en/

Canham, M. & Sawyer, B.D. (2023). Me and My Evil Digital Twin: The Psychology of Human Exploitation by AI Assistants.
https://www.youtube.com/watch?v=qjhfWWEQCgQ

Canham, M. (2021). Deepfake Social Engineering: Creating a Framework for Synthetic Media Social Engineering. Black Hat USA 2021
https://www.youtube.com/watch?v=2yILTfBV974

Chaffer, T. J., (2025). Know Your Agent: Governing AI Identity on the Agentic Web.
https://ssrn.com/abstract=5162127
https://dx.doi.org/10.2139/ssrn.5162127

Edwards, B. (2025). Cloudflare turns AI against itself with endless maze of irrelevant factshttps://arstechnica.com/ai/2025/03/cloudflare-turns-ai-against-itself-with-endless-maze-of-irrelevant-facts/

Huang, K. (2025). Agentic AI Threat Modeling Framework: MAESTRO
https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro#
https://archive.is/TTP1D

Kran et al. (2025). DarkBench: Benchmarking Dark Patterns in Large Language Models
https://openreview.net/pdf?id=odjMSBSWRt
https://darkbench.ai/

MCP (Model Context Protocol)
https://modelcontextprotocol.io/introduction

Milne, S. (2024). AI tools show biases in ranking job applicants’ names according to perceived race and gender
https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/#:~:text=the%20process%20%E2%80%94%20are%20now,automation%20in%20their%20hiring%20process
https://archive.is/Yy1h3

Nichols, S. (2025). AI-enabled phishing and fake worker attacks on the rise
https://www.scworld.com/perspective/deepseek-breach-yet-again-sheds-light-on-ai-dangers
https://archive.is/BTW2C

Rance, G. (2025). DeepSeek breach yet again sheds light on AI dangers
https://www.scworld.com/news/ai-enabled-phishing-and-fake-worker-attacks-on-the-rise
https://archive.is/VhjnO

Shostack, A. (2014). Threat Modeling: Designing for Security. Wiley.
https://www.wiley.com/en-us/Threat+Modeling%3A+Designing+for+Security-p-9781118809990

Dr. Matthew Canham is the Executive Director of the Cognitive Security Institute and a former Supervisory Special Agent with the Federal Bureau of Investigation (FBI), he has a combined twenty-one years of experience in conducting research in cognitive security and human-technology integration. He currently holds an affiliated faculty appointment with George Mason University, where his research focuses on the cognitive factors in synthetic media social engineering and online influence campaigns. He was previously a research professor with the University of Central Florida, School of Modeling, Simulation, and Training’s Behavioral Cybersecurity program. His work has been funded by NIST (National Institute of Standards and Technology), DARPA (Defense Advanced Research Projects Agency), and the US Army Research Institute. He has provided cognitive security awareness training to the NASA Kennedy Space Center, DARPA, MIT, US Army DevCom, the NATO Cognitive Warfare Working Group, the Voting and Misinformation Villages at DefCon, and the Black Hat USA security conference. He holds a PhD in Cognition, Perception, and Cognitive Neuroscience from the University of California, Santa Barbara, and SANS certifications in mobile device analysis (GMOB), security auditing of wireless networks (GAWN), digital forensic examination (GCFE), and GIAC Security Essentials (GSEC).