Jindrich Karasek
Jindřich is a Lead Security Researcher at Rapid7. His research focuses on cognitive warfare, cyber espionage, AI threats, and cyber threat intelligence. You might also recognise him as the security data scientist known as 4n6strider.
Session
This session dissects a real-world case study where an actor weaponized automation flaws in Meta’s LLM-based compliance system to hijack high-value accounts via orchestrated botnet abuse, prompt injection, and linguistic manipulation. The attacker exploited vulnerabilities in the very safeguards designed to protect users, triggering account suspension and negotiating “restoration” through AI-manipulated support flows.
This case is not an isolated incident—it is a signal of broader systemic risks that emerge when generative models and automation pipelines are integrated without robust adversarial testing. Beyond the technical compromise, the attack leveraged prompt engineering as social engineering, revealing the cognitive blind spots of model-aligned trust systems.
In response, I introduce foundational forensic linguistic techniques and NLP-based detection methods for identifying AI-generated text in compromised communications. By combining stylometry, perplexity analysis, and syntax anomaly detection in Python, we illuminate detection opportunities hidden in prompts and narrative structure. I also share a few tips from the cloud security field for protecting LLM deployments.
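As a taste of the perplexity analysis mentioned above, here is a minimal sketch that scores text under a reference language model; it assumes a Hugging Face GPT-2 model as the scorer, which may differ from the tooling used in the session. Unusually low perplexity under such a model is one weak signal, among several, that text was machine-generated.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small reference language model to score candidate text.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2. Low values suggest the text is
    highly predictable to the model, a weak hint (never proof) of
    machine generation, best combined with stylometric features."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean
        # cross-entropy over the sequence; exp() of that is perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return math.exp(out.loss.item())

# Illustrative samples only: polished "support desk" phrasing tends to
# score lower (more predictable) than idiosyncratic human writing.
for sample in [
    "We have detected unusual activity on your account.",
    "yo my acct got nuked again lol, whats the deal??",
]:
    print(f"{perplexity(sample):8.1f}  {sample}")
```

In practice, per-text perplexity is noisy; the session combines it with stylometry and syntax anomaly detection rather than relying on any single score.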
The talk closes with a reflection on the ethical tensions in detecting synthetic media.
This talk blends live demonstration, code walkthroughs, and operational insights from an investigation that uncovered not just an exploit, but a philosophy of misuse.