Noa Dekel
Noa Dekel is a Senior Threat Intelligence Researcher at Palo Alto Networks. Starting her career as a Threat Intelligence analyst in the defense sector, today Noa specializes in threat hunting, malware analysis, and detection engineering.
Session
Large Language Models (LLMs), are increasingly being integrated into enterprise environments for the purposes of automation, analytics, and decision-making. Although their fine-tuning capabilities enable the development of tailored models for specific tasks and industries, LLMs also introduce new attack surfaces that can be exploited for malicious purposes.
In this presentation, we unveil how we transformed an LLM into a stealthy C2 channel. We will demonstrate a PoC attack that leverages the fine-tuning capability of a popular generative AI model. In this attack, a victim unwittingly trains the model using a dataset crafted by an attacker.
This technique transforms the model into a covert communication bridge, enabling attackers to exfiltrate data from a compromised endpoint, deploy payloads, and execute commands.
We will discuss challenges we faced, such as AI hallucinations and consistency issues, and share our approach and the techniques we developed to mitigate the issues. Additionally, we will examine this attack from a defender’s perspective, highlighting why traditional security solutions struggle to detect this type of C2 channel, and what can be done to improve detection.
Join us as we break down this unconventional attack vector, and demonstrate how LLMs can be leveraged for offensive operations.