2026-04-26, Track 2
Traditional DFIR assumes that compromise produces artifacts, failures, or clearly malicious inputs. AI systems challenge that assumption. Models can be trained, deployed, and perform “as expected” while still producing harmful, biased, or manipulated outcomes. This talk explores how data poisoning and manipulation in AI systems often target results rather than content, making traditional IOC-based detection ineffective. Using a DFIR mindset, the session focuses on how investigators can identify behavioral, temporal, and statistical indicators that suggest something is wrong even when no individual data point appears malicious. Attendees will leave with a practical framework for thinking about AI investigations, emphasizing baselining, change correlation, and forensic readiness over perfect attribution.
I work in the digital forensics and incident response space and am currently pursuing a Master of Science in Cyber Operations (Red Team), building on a strong academic and practical foundation in DFIR (B.S. in Cyber Forensics). My work and studies focus on investigative thinking, evidence-based analysis, and understanding how complex systems fail under real-world conditions. I approach emerging security problems with a practitioner mindset, grounded in the realities of incident response rather than theoretical idealism.
This talk reflects the type of problems many defenders are already encountering but do not yet have shared language or frameworks to address. As AI systems become embedded in security tooling and decision-making workflows, questions around data integrity, poisoning, and auditability are no longer hypothetical. I bring a DFIR perspective that translates existing investigative skills to these new environments without overpromising detection or claiming advanced tooling that does not yet exist. BSides has always emphasized practical knowledge-sharing and honest conversations, and this session aligns with that mission by helping the community think critically about AI risk using familiar, defensible methods.