Prompt-ing the Injection - LLMs Under Attack!
2025-04-26 , Auditorium

This talk provides a brief overview about how Large Language Models (LLMs) work, with a detailed explanation & live demonstration about how you can gather sensitive information from LLMs. This simulates how an attacker could gain information from new and emerging technologies.


This talk begins by explaining the fundamental workings of LLMs, detailing how these models generate responses based upon the prompts they recieve. With this understanding, the session shifts focus towards specific vulns that arise when threat actors manipulate inputs to influence the models outputs.

Through live demonstrations, attendees will seek how attackers can exploit these vulnerabilities, simulating real world scenarios where prompt injection is used to cause unintended behaviour or access confidential data. The talk will emphasise the importance of recognising these threats as LLMs become more integrated into applications across industries. This talk will finish with a summary of the elements, and how organisations could defend against these.


URL:

https://www.linkedin.com/in/smitha-b-450817278/

Spiciness Level:

4 - Complex and quite technical, deeper dive into subjects