Security BSides Las Vegas 2025

Poison in the Wires: Interactive Network Visualization of Data Attacks
2025-08-05, Florentine A

What if we could not only visualize poisoned training data, but also interact with it?
As data poisoning becomes a growing threat to the integrity of machine learning systems, understanding its effects requires more than static visualizations. This talk introduces GraphLeak, an open-source, interactive web tool designed to visualize how poisoned training data alters network structure. We will explore how adversarial data manipulation impacts graph-based representations.
Building on network science concepts, this session will go deeper: not just showing how poisoning affects structure, but allowing users to interact directly with poisoned vs. clean datasets in real time. We’ll walk through how the app ingests CSV or JSON data, builds networks, and renders them as interactive graph layouts.
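As a rough illustration of that ingest-and-build step, a minimal sketch in Python with pandas and networkx might look like the following. The column names, file names, and comparison metrics here are illustrative assumptions, not GraphLeak's actual API:

```python
import pandas as pd
import networkx as nx

def build_graph(path: str) -> nx.Graph:
    """Load an edge list from CSV and return an undirected graph.
    Assumes 'source' and 'target' columns; real datasets may differ."""
    df = pd.read_csv(path)
    g = nx.Graph()
    g.add_edges_from(df[["source", "target"]].itertuples(index=False, name=None))
    return g

# Compare structural fingerprints of a clean vs. a poisoned dataset
# (file names are hypothetical placeholders).
for name in ("clean.csv", "poisoned.csv"):
    g = build_graph(name)
    print(name, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges,",
          f"density={nx.density(g):.4f},",
          nx.number_connected_components(g), "components")
```

Even coarse metrics like density and component count can differ between clean and poisoned graphs; an interactive layout makes those structural shifts visible rather than just numeric.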
This presentation emphasizes accessibility by making data poisoning tangible and transparent, allowing security practitioners and non-experts alike to understand how data poisoning attacks distort model behavior. By making threats visible, we make defenses against them more approachable, democratizing insight into machine learning vulnerabilities and supporting the development of more robust, transparent systems.


This talk branches off my original research, which I have been developing since August 2024, on data poisoning and on applying graph theory to cybersecurity. I developed it after speaking about visualizing poisoning networks theoretically; in this talk, I actually want to visualize poisoned training data with a custom GUI. After talking through some graph theory and data poisoning basics, I’ll show how poisoned training data messes with AI using an interactive network visualization tool I built. I want to emphasize how visualizing vulnerabilities makes them easier to understand and execute, particularly in the AI red teaming space. The audience will see how bad data creates weird structures in graphs, beyond just differences in the data itself. It’s like watching a model get hacked from the inside, but in a way you can actually see and explore. The tool is open source, works with local data, and helps make these attacks way more understandable (and fun to mess with). The talk is made for audiences who like machine learning, graphs, and red teaming, which, at its core, is just breaking things apart into smaller, more understandable pieces.
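For a concrete sense of the attack itself, here is a minimal, self-contained sketch of label-flip poisoning, one common data poisoning technique. This is illustrative only, not the demo code from the talk, and uses a synthetic dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a synthetic binary classification task.
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Poison the training set by flipping 20% of its labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

# Train on clean vs. poisoned labels and compare test accuracy.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
bad_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean accuracy: {clean_acc:.3f}  poisoned accuracy: {bad_acc:.3f}")
```

The accuracy drop is only the scalar symptom; the point of the talk is that the same corruption leaves a visible structural footprint when the training data is rendered as a network.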
I enjoy contributing a graphical perspective to hacking in general; representing an attack visually and accurately can make the vulnerability more interactive and easier to understand. I wanted to show that AI models are as breakable as anything else, and a great way to show that is through network visualization.
https://youtu.be/7z6YAgggw-o?si=n5bhWkHmRlL76eCn

Anya is a security engineer focused on web app and AI red teaming. In her free time she researches applications of graph theory and network science to cybersecurity. Her first talk focused on visualizing data poisoning and tampering using network science. In her actual free time she enjoys painting and participating in CTFs.