Josh Harguess
Dr. Josh Harguess is the Chief Technology Officer of Fire Mountain Labs, where he drives the company's technical vision and leads advancements in AI security and assurance. Prior to joining Fire Mountain Labs, Josh was the first Chief of AI Security at Cranium AI, a global leader in AI security products, where he led AI and AI security strategy as well as the R&D, Engineering, and AI Security departments. Before Cranium, Josh was a Senior Principal AI Scientist and department manager at MITRE, shaping national AI security strategies and developing cutting-edge adversarial machine learning defenses. His research has focused on ensuring the reliability, safety, and resilience of AI systems deployed in mission-critical environments. Josh has authored numerous publications on AI risk, trust, and adversarial robustness, contributing to industry frameworks such as MITRE ATLAS and the NIST AI RMF. Throughout his career, he has led high-impact AI security programs funded by the Department of Defense, the Department of Homeland Security, and major private-sector stakeholders. With a strong foundation in AI risk assessment and safe AI deployment, Josh ensures Fire Mountain Labs remains at the forefront of AI security innovation, delivering solutions that enable organizations to deploy AI with confidence.
Sessions
AI systems can fail dangerously without ever "breaking." This talk introduces a systems-theoretic method for identifying and mitigating hidden hazards in AI-enabled environments, especially those involving generative and predictive models. Learn how STPA-Sec reveals systemic risks arising from misaligned recommendations, inadequate feedback loops, and interface ambiguity, and how to control them before they cause harm.
As AI systems become integral to enterprise operations, effective governance is essential to mitigate associated risks. This hands-on workshop offers a comprehensive introduction to AI governance, focusing on AI system lifecycle oversight, alignment with frameworks like the NIST AI RMF, and compliance with regulations such as the EU AI Act. Participants will engage in a guided tabletop exercise simulating a real-world AI incident, fostering collaborative response strategies and practical risk mitigation planning. Attendees will leave equipped with actionable insights and tools to implement responsible AI governance within their organizations.