Kirill Efimov
As a seasoned security researcher, I've led teams at Snyk and now helm security research at Mobb. With a wealth of publications and speaking engagements, I've delved deep into the intricacies of cybersecurity, unraveling vulnerabilities and crafting solutions. From pioneering research to impactful talks, my journey is fueled by a passion for safeguarding digital landscapes. Join me as I share insights, strategies, and innovations in the ever-evolving realm of cybersecurity.
Session
Leveraging AI for AppSec presents both promise and danger; let’s face it, you cannot solve every security issue with AI. Our session will explore the complexities of AI in the context of auto remediation. We’ll begin by examining our research, in which we used OpenAI to address code vulnerabilities. Despite ambitious goals, the results were underwhelming and revealed the risk of trusting AI with complex tasks.
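For illustration, here is a minimal sketch of the kind of single-shot auto-remediation pipeline this part of the talk examines: hand a vulnerable snippet to a general-purpose model and accept whatever patch comes back. It assumes the OpenAI Python SDK; the model name, prompt, and vulnerable snippet are illustrative, not the exact setup from our research.

```python
# Minimal sketch of naive LLM-based auto remediation (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A deliberately vulnerable snippet: SQL built via string formatting (injection risk).
vulnerable_code = '''
def get_user(cursor, username):
    query = "SELECT * FROM users WHERE name = '%s'" % username
    cursor.execute(query)
    return cursor.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a security engineer. Fix vulnerabilities and return only code."},
        {"role": "user", "content": f"Fix the SQL injection in this function:\n{vulnerable_code}"},
    ],
)

# The naive pipeline trusts this output as-is -- exactly the risk the session highlights.
print(response.choices[0].message.content)
```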
Our session features real-world examples and a live demo that expose GenAI’s limitations in tackling code vulnerabilities, and it serves as a cautionary lesson against falling into the trap of using AI as a stand-alone solution to everything. We’ll also explore the broader implications, communicating the risks of placing blind trust in AI without a nuanced understanding of its strengths and weaknesses.
In the second part of our session, we’ll explore a more reliable approach to leveraging GenAI for security, one built on Retrieval-Augmented Generation (RAG). RAG is a methodology that enhances the capabilities of generative models by pairing them with a retrieval component, allowing the model to dynamically fetch and use external knowledge or data during the generation process.
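As a rough sketch of what this looks like in code, the example below uses a toy keyword-overlap retriever over a handful of secure-coding guidelines rather than a production vector store; the guideline texts, model name, and prompts are illustrative assumptions, not our actual pipeline.

```python
# Toy RAG sketch: retrieve relevant secure-coding guidance, then generate an answer
# grounded in it. Retriever, guidelines, and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# Stand-in knowledge base; in practice this would be an indexed corpus
# (fix templates, CWE guidance, framework docs) behind a real retriever.
GUIDELINES = [
    "SQL injection: use parameterized queries or prepared statements; never build SQL via string formatting.",
    "XSS: encode untrusted output for the HTML context; prefer framework auto-escaping.",
    "Path traversal: resolve and validate paths against an allow-listed base directory.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval standing in for a real retrieval component."""
    q_words = set(question.lower().split())
    scored = sorted(GUIDELINES, key=lambda g: -len(q_words & set(g.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    # Fetch external knowledge first, then let the model generate from it.
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using only the provided security guidance."},
            {"role": "user", "content": f"Guidance:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How should I fix SQL injection in a query built with string formatting?"))
```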