Noah Grosh
Noah Grosh is a recent UNCC graduate and former Dropbox employee who builds AI/ML red-team tooling to increase testing velocity while keeping tests relevant to modern threats. In his spare time he enjoys torturing LLMs and drinking tea.
Session
Machine learning is becoming increasingly prevalent in malware detection, but how can these systems be fooled? Last summer, I began work on the "Torment Nexus" to answer this question. Using relatively simple techniques, I demonstrated that even minor modifications to well-known malware samples could drastically reduce their detectability by both AI-based and traditional detection methods, without changing their functionality.
In this talk, I will present my research, explain the processes I used to reduce detection scores, and demonstrate how these techniques can evade modern machine learning-based detection. I will also discuss the broader implications of deploying ML-based security tools without properly scrutinizing their reliability.