2025-04-13 – Track 1
The emergence of Large Language Models has brought a rapid acceleration in AI capabilities, raising many questions for security teams about how they should think about AI security. While care should be taken in the development of LLM prompts, it is critical not to lose sight of the fundamentals when establishing secure best practices.
AI, particularly large language models, is one of the most significant technological advances of recent years. As the technology proliferates, security teams are left with two significant questions: how do they secure in-house capabilities built on AI, and how do they evaluate the security implications of third-party tools their employees are using? While much attention is focused on prompt injection, model jailbreaking, and entirely new segments such as AI security posture management (AISPM), most of the risks we are seeing are still rooted in fundamental security best practices. In this talk, we'll look at how teams can think about AI security and set themselves up for success in a space that still has many unanswered questions.
As Senior Director of Engineering and Research Solutions Architect, Lucas Tamagna-Darr leads the automation and engineering functions of Tenable Research. Luke started out at Tenable developing plugins for Nessus and Nessus Network Monitor, went on to lead several different functions within Tenable Research, and now draws on that experience to help surface better content and capabilities for customers across Tenable's products.