Sweta Deivanayagam

Sweta Deivanayagam is a cybersecurity professional with 15 years of experience protecting software and computer systems from security threats. Currently a Lead Security Engineer at Salesforce, Sweta designs and builds security solutions that keep software safe throughout the development lifecycle, with a particular focus on cloud and AI technologies.
Before joining Salesforce, Sweta spent 10 years as a security consultant, first at Cigital and then at Synopsys. In these consulting roles, she examined client companies' software applications to find vulnerabilities and weaknesses that attackers could exploit: scanning code with specialized analysis tools, performing authorized penetration tests, and then helping development teams fix the problems found.
Throughout her career, Sweta has also helped companies integrate automated security checks into their software development pipelines, created training programs that teach developers to write more secure code, and provided strategic guidance on improving overall security practices. Her expertise spans a wide range of security tools and technologies, from code analysis software to penetration testing tools, helping organizations build stronger and more secure applications and systems.


Session

04-26
12:30
20min
Using AI in Threat Modeling
Sweta Deivanayagam

In this talk, we will discuss how AI is revolutionizing the critical activity of threat modeling.
Threat modeling helps organizations identify, prioritize, and mitigate risks before they are exploited. Traditionally, it has been a manual, expertise-driven process, which can be slow and prone to human blind spots. Artificial intelligence is now transforming threat modeling by automating data analysis, generating attack scenarios, and continuously updating risk assessments as environments evolve. We will walk through a sample threat modeling scenario and the AI tools, such as ChatGPT and Gemini, that can be used to create a threat model.
We will also cover some of the pitfalls of using AI for automated analysis. Attendees will learn about AI hallucinations, context windows, and non-determinism, and how each affects threat modeling and risk analysis output. Finally, we will go over techniques for improving the accuracy of AI threat modeling using grounded data, feedback loops, and targeted prompts.
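As a rough illustration of the "grounded data and targeted prompts" idea, the sketch below assembles a prompt that pins the model to a concrete system description and a fixed STRIDE checklist, rather than asking an open-ended question. The component names, data flows, and wording are hypothetical examples, not taken from the talk; the resulting string would be sent to a model such as ChatGPT or Gemini via its API.

```python
# Hypothetical sketch: building a targeted, grounded prompt for an
# AI threat-modeling assistant. Grounding the model in an explicit
# system inventory and constraining the output format reduces the
# room for hallucinated components and invented threats.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

def build_threat_prompt(components, data_flows):
    """Assemble a prompt that only references the supplied system facts.

    components: list of component names in the system under analysis.
    data_flows: list of (source, destination, description) tuples.
    """
    lines = ["You are assisting with a threat model. "
             "Use ONLY the system described below."]
    lines.append("Components:")
    lines += [f"- {c}" for c in components]
    lines.append("Data flows:")
    lines += [f"- {src} -> {dst}: {desc}" for src, dst, desc in data_flows]
    lines.append("For each data flow, list plausible threats, "
                 "one per STRIDE category:")
    lines += [f"- {s}" for s in STRIDE]
    lines.append("If a category does not apply, answer 'none' "
                 "rather than inventing a threat.")
    return "\n".join(lines)

# Example system description (illustrative only).
prompt = build_threat_prompt(
    components=["web frontend", "REST API", "PostgreSQL database"],
    data_flows=[("web frontend", "REST API", "user credentials over TLS")],
)
print(prompt)
```

Because the prompt enumerates the components and flows itself, hallucinated output (a threat against a component that was never listed) becomes easy to detect mechanically, which is one way to close the feedback loop the abstract mentions.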

Track 2