Suha Sabi Hussain
Suha Sabi Hussain is a security engineer on the machine learning assurance team at Trail of Bits. She has worked on projects such as the Hugging Face Safetensors security audit and Fickling. She received her BS in Computer Science from Georgia Tech, where she also conducted research at the Institute for Information Security and Privacy. She previously worked at the NYU Center for Cybersecurity and Vengo Labs. She is also a member of the Hack Manhattan makerspace, a practitioner of Brazilian Jiu-Jitsu, and an appreciator of NYC restaurants.
Session
Machine learning (ML) pipelines are vulnerable to model backdoors that compromise the integrity of the underlying system. Although many backdoor attacks treat the model itself as the entire attack surface, ML models are not standalone objects. They are artifacts built with a wide range of tools and embedded into pipelines with many interacting components.
In this talk, we introduce incubated ML exploits in which attackers inject model backdoors into ML pipelines using input-handling bugs in ML tools. Using a language-theoretic security (LangSec) framework, we systematically exploited ML model serialization bugs in popular tools to construct backdoors. In the process, we developed malicious artifacts such as polyglot and ambiguous files using ML model files. We also contributed to Fickling, a pickle security tool tailored for ML use cases. Finally, we formulated a set of guidelines for security researchers and ML practitioners. By chaining system security issues and model vulnerabilities, incubated ML exploits emerge as a new class of exploits that highlight the importance of a holistic approach to ML security.
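To make the serialization risk concrete, the sketch below is a minimal, hedged illustration (not the talk's actual exploits) of why pickle-based model files are a viable injection point: pickle's `__reduce__` protocol lets a serialized object specify an arbitrary callable that runs at load time. The `MaliciousModel` class and its payload are invented for this demo; a real attack would substitute something like `os.system`.

```python
import pickle

class MaliciousModel:
    # __reduce__ lets an object dictate a callable plus arguments that
    # pickle invokes during deserialization; ML model files that use
    # pickle (directly or inside other formats) inherit this behavior.
    def __reduce__(self):
        # Benign stand-in payload for demonstration only; an attacker
        # could return (os.system, ("…",)) or any other callable here.
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousModel())  # looks like an ordinary serialized object
result = pickle.loads(blob)            # loading the "model" runs the payload
assert result == 42                    # loads returns the payload's result
```

This is the class of behavior that tools like Fickling are designed to detect before a model file is ever loaded.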