2025-04-26, Track 1
"Are AI agents worth the hype?
In this talk, we’ll explore the tangible impact of AI agents in cybersecurity, focusing on how they can be used to automate proactive security workflows at scale.
AI agents can augment traditional human-driven processes to identify, assess, and remediate vulnerabilities. We'll highlight real-world case studies to show where AI agents excel, where they fall short, and lessons I've learned along the way.
We'll also discuss the technical challenges of implementing agentic security solutions, from managing hallucinations and building human-in-the-loop workflows to integrating agents with existing security datasets for improved performance. Finally, we'll examine the broader implications for security teams: how AI-driven automation is shifting the role of human analysts and changing the way organizations approach cyber resilience."
"The AI hype cycle is in full swing, making it increasingly difficult to separate reality from marketing buzz. Will AI revolutionize cybersecurity or is it another over-promised technology?
In this talk, I’ll share my personal journey using Large Language Models (LLMs) to automate my daily security workflows with a focus on proactive security security. We’ll explore using AI agents to assist human analysis related to identifying, assessing, and remediating vulnerabilities, CVEs, and misconfigurations. We’ll review real world examples where AI agents genuinely shine and where human expertise remains crucial, challenges LLMs face when performing critical cyber security operations that leave no room for error, and discuss what the future likely holds.
Topics We'll Cover:
1. Real-World Use Cases: AI as a Force Multiplier
AI isn't replacing cybersecurity professionals anytime soon, but it can make you more efficient. We'll break down specific security workflows that AI agents can help automate, including:
Triage: Given a finding from a vulnerability scanner or other tool, how can an AI agent help you determine whether it's a false positive or a true positive? (A minimal sketch of this workflow appears at the end of this section.)
Analysis: How bad is this vulnerability? Is there public proof-of-concept code for this CVE? We'll review how an LLM can be used as a super-powered Google to identify and analyze the data you need to make a security decision quickly.
Remediation: Tired of searching for vendor advisories? We'll discuss how you can build an AI agent that generates a customized remediation plan for a given vulnerability and organization.
Lastly, we'll explore how security teams can offload repetitive tasks to AI so analysts can prioritize the deliverables that push their organization forward.
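As a taste of the triage workflow above, here is a minimal sketch of an LLM-assisted triage step. It assumes the OpenAI Python SDK; the model name, prompt, and example finding are illustrative placeholders rather than the exact implementation covered in the talk.

```python
# Minimal sketch: ask an LLM for a triage verdict on a single scanner finding.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt are placeholders.
import json
from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = """You are a vulnerability triage assistant.
Given a scanner finding, decide whether it is likely a TRUE_POSITIVE or
FALSE_POSITIVE and explain your reasoning. Respond as JSON with keys
"verdict", "confidence" (0-1), and "rationale"."""

def triage_finding(finding: dict) -> dict:
    """Return the model's verdict, confidence, and rationale for one finding."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": json.dumps(finding)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = {
        "plugin": "ssl-weak-cipher",
        "host": "10.0.0.12",
        "port": 443,
        "evidence": "TLS_RSA_WITH_3DES_EDE_CBC_SHA accepted",
    }
    print(triage_finding(example))
```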
2. Challenges in AI-Driven Security Operations
While AI has already transformed industries like customer support and sales, cybersecurity presents unique challenges that make teams hesitant to fully trust AI-driven analysis. Unlike other fields, where minor AI mistakes may be tolerable, cybersecurity has limited room for error—a single misstep can lead to unnecessary remediation efforts, while a missed threat could result in a major security breach.
Strategically speaking, how do we ensure that AI-driven security decisions remain explainable, verifiable, and reliable? Technically speaking, how can we limit LLM hallucinations to build trust in scenarios where even a small error could have serious consequences?
We'll discuss strategies to minimize hallucinations, such as using retrieval-augmented generation (RAG) to ground LLM outputs in authoritative sources and implementing strict validation layers before acting on AI-generated recommendations.
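To make that concrete, here is a minimal sketch of the grounding-plus-validation pattern. The in-memory advisory store, keyword retrieval, and the `llm` callable are assumptions for illustration; a production system would use a real advisory feed, embedding-based retrieval, and your model client of choice.

```python
# Minimal sketch: ground an LLM answer in retrieved advisories and validate
# the output before acting on it. Corpus, retrieval, and validation rules
# are illustrative placeholders.
import re

ADVISORIES = {  # stand-in for an authoritative advisory store
    "CVE-2024-3094": "xz/liblzma backdoor; upgrade xz-utils to 5.6.2 or later.",
    "CVE-2021-44228": "Log4Shell; upgrade log4j-core to 2.17.1 or later.",
}

def retrieve(query: str) -> dict:
    """Naive keyword retrieval; a real system would use vector search."""
    tokens = re.findall(r"[\w.-]+", query.lower())
    return {cve: text for cve, text in ADVISORIES.items()
            if any(tok in text.lower() for tok in tokens)}

def validate(answer: str, context: dict) -> bool:
    """Reject answers that cite CVE IDs outside the retrieved context."""
    cited = set(re.findall(r"CVE-\d{4}-\d{4,7}", answer))
    return bool(cited) and cited.issubset(context.keys())

def grounded_answer(question: str, llm) -> str:
    """`llm` is any callable that takes a prompt string and returns text."""
    context = retrieve(question)
    prompt = (f"Answer using ONLY these advisories:\n{context}\n\n"
              f"Question: {question}\nCite the CVE IDs you relied on.")
    answer = llm(prompt)
    if not validate(answer, context):
        raise ValueError("Answer cites sources outside the retrieved context")
    return answer
```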
We'll also cover human-in-the-loop systems that let security analysts validate AI outputs before execution, confidence scoring and explainability techniques that clarify why the AI reached a certain conclusion, and audit trails that ensure AI-driven decisions can be reviewed and justified.
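Here is a minimal sketch of such a human-in-the-loop gate with a JSON-lines audit trail. The confidence threshold, impact labels, and field names are assumptions for illustration, not the talk's exact design.

```python
# Minimal sketch: auto-approve only confident, low-impact recommendations,
# route everything else to an analyst, and log every decision for review.
import json
import time

AUDIT_LOG = "agent_audit.jsonl"   # illustrative path
CONFIDENCE_THRESHOLD = 0.85       # illustrative threshold

def record(entry: dict) -> None:
    """Append-only audit trail so AI-driven decisions can be reviewed later."""
    entry["timestamp"] = time.time()
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def gate(recommendation: dict) -> str:
    """Decide whether a recommendation runs automatically or waits on a human."""
    confident = recommendation.get("confidence", 0.0) >= CONFIDENCE_THRESHOLD
    low_impact = recommendation.get("impact") == "low"
    if confident and low_impact:
        decision = "auto_approved"
    else:
        print(f"Review needed: {recommendation.get('summary', '')}")
        approved = input("Approve? [y/N] ").strip().lower() == "y"
        decision = "approved" if approved else "rejected"
    record({"recommendation": recommendation, "decision": decision})
    return decision
```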
3. The Future of AI in Cybersecurity: Where Are We Headed?
As AI continues to evolve, what might the next five years look like for security teams? We'll explore the possibility of AI-powered autonomous security agents that identify, prioritize, and patch vulnerabilities in real time. Will we become subservient to our AI overlords, or will human oversight remain a requirement?"
Peyton has spent 10 years in cybersecurity and focuses on Red Team, Incident Response, and Threat Intelligence. He was a member of CrowdStrike Services from 2018 to 2023, where he was a first responder to numerous e-crime and nation-state cyber intrusions and successfully hacked into 20+ Fortune 1000 organizations.