AI systems can fail dangerously without ever “breaking.” This talk introduces a systems-theoretic method for identifying and mitigating hidden hazards in AI-enabled environments—especially those involving generative and predictive models. Learn how STPA-Sec reveals systemic risks arising from misaligned recommendations, inadequate feedback loops, and interface ambiguity—plus how to control them before they cause harm.
As AI becomes increasingly embedded in operational workflows—across healthcare, transportation, finance, and beyond—traditional failure-mode analyses fall short. AI systems often function “correctly,” yet still produce unsafe outcomes due to flawed assumptions, incomplete control loops, or emergent behaviors. These non-failure-based hazards are especially critical when AI outputs shape human decisions or operate under loose oversight.
This session presents an applied case study using System-Theoretic Process Analysis for Security (STPA-Sec) to analyze a representative AI decision-support system integrating generative and predictive components. We model the system’s control structure—including users, data flows, models, and feedback mechanisms—to identify unsafe control actions such as the following (sketched in code after the list):
- AI-generated outputs that bypass validation
- Feedback delays in time-sensitive scenarios
- Interface design failures that erode operator trust
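To make the modeling step concrete, the sketch below encodes one control action and two of the unsafe-control-action findings above as simple records. This is a minimal illustration, not STPA-Sec tooling: class and field names such as `ControlAction` and `UCAType` are assumptions for this example, and real analyses are typically documented in tables or dedicated modeling tools.

```python
# Illustrative STPA-style control-structure records. All names here are
# hypothetical; STPA-Sec itself is tool-agnostic.
from dataclasses import dataclass, field
from enum import Enum, auto


class UCAType(Enum):
    """The four standard ways a control action can be unsafe in STPA."""
    PROVIDED_CAUSES_HAZARD = auto()        # given when it creates a hazard
    NOT_PROVIDED = auto()                  # required but never given
    WRONG_TIMING_OR_ORDER = auto()         # too early, too late, out of order
    STOPPED_TOO_SOON_OR_TOO_LONG = auto()  # wrong duration


@dataclass
class ControlAction:
    name: str
    controller: str                 # who issues it (e.g., a GenAI recommender)
    controlled_process: str         # what it acts on (e.g., operator decision)
    feedback: list[str] = field(default_factory=list)  # feedback paths


@dataclass
class UnsafeControlAction:
    action: ControlAction
    uca_type: UCAType
    context: str   # the condition that makes the action hazardous
    hazard: str    # the system-level hazard it links to


# Example entries mirroring the bullets above (illustrative content only).
recommend = ControlAction(
    name="issue_recommendation",
    controller="GenAI recommender",
    controlled_process="operator decision",
    feedback=["operator acceptance log"],
)

ucas = [
    UnsafeControlAction(
        action=recommend,
        uca_type=UCAType.PROVIDED_CAUSES_HAZARD,
        context="output bypasses downstream validation",
        hazard="H1: unvalidated AI output drives an operational decision",
    ),
    UnsafeControlAction(
        action=recommend,
        uca_type=UCAType.WRONG_TIMING_OR_ORDER,
        context="feedback arrives after the decision deadline",
        hazard="H2: stale guidance in a time-sensitive scenario",
    ),
]

for uca in ucas:
    print(f"{uca.action.name} | {uca.uca_type.name} | {uca.context} -> {uca.hazard}")
```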
Each hazard is traced to causal factors like model misalignment, lack of context awareness, and missing constraints on AI autonomy. We then demonstrate how to implement effective controls—such as human-on-the-loop (HOTL) oversight, system boundaries, and enriched operator feedback—to reduce residual risk.
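As one hedged illustration of what such a control might look like in code, the sketch below gates AI outputs behind validation and escalates high-risk items for human review before release. The `risk_threshold`, `validate`, and `escalate` hooks are hypothetical placeholders standing in for real checks and a reviewer queue, not a prescribed design.

```python
# A minimal sketch of a human-on-the-loop (HOTL) gate. All names are
# assumptions for illustration; none come from a real library.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AIOutput:
    content: str
    risk_score: float  # 0.0 (benign) .. 1.0 (high risk), from an upstream model


def hotl_gate(
    output: AIOutput,
    validate: Callable[[AIOutput], bool],
    escalate: Callable[[AIOutput], bool],
    risk_threshold: float = 0.7,
) -> Optional[AIOutput]:
    """Release an AI output only if it passes validation; hold high-risk
    outputs for human review instead of auto-releasing them.

    HOTL supervision lets low-risk actions proceed while a human monitors
    and can intervene; only outputs above the threshold require approval.
    """
    if not validate(output):        # constraint: no output bypasses validation
        return None                 # blocked outright
    if output.risk_score >= risk_threshold:
        # High-risk path: hold for human review; the reviewer decides.
        return output if escalate(output) else None
    return output                   # low-risk path: auto-release


# Toy usage with stub hooks standing in for real checks.
released = hotl_gate(
    AIOutput(content="reroute shipment 42", risk_score=0.85),
    validate=lambda o: len(o.content) > 0,
    escalate=lambda o: True,  # stand-in for "reviewer approved"
)
print("released" if released else "blocked")
```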
This talk is grounded in real-world analysis and provides attendees with a repeatable method for anticipating and mitigating systemic AI failures—especially valuable for those involved in AI risk, governance, or security.
Chris is the CEO of Fire Mountain Labs, leading the company’s mission to advance safe and assured AI. Under his direction, Fire Mountain Labs delivers pioneering AI assurance solutions to enterprise and government clients, ensuring AI systems are deployed with security, integrity, and accountability.
With over a decade of experience in AI and AI security, Chris has coauthored 23 publications in the field. A veteran of active-duty U.S. Navy service, he brings deep technical and operational expertise from Space and Naval Warfare (SPAWAR) Systems Center Pacific, the Naval Information Warfare Center (NIWC), the MITRE Corporation, and several successful AI startups. His background spans operational technology, national security, and cutting-edge AI innovation.
As a trusted voice in the AI ecosystem, Chris operates as an honest broker, bridging government, industry, academia, and small organizations. He advocates for AI adopters navigating a crowded and hype-driven landscape, championing pragmatic, secure, and trustworthy solutions.
Before founding Fire Mountain Labs, Chris held senior leadership roles in AI security research and red teaming, where he shaped industry standards in AI risk assessment, penetration testing, secure AI governance, and adversarial threat modeling.
Dr. Josh Harguess is the Chief Technology Officer of Fire Mountain Labs, where he drives the company’s technical vision and leads advancements in AI security and assurance. Prior to joining Fire Mountain Labs, Josh was the first Chief of AI Security at Cranium AI, a global leader in AI security products, where he led AI strategy and the R&D, Engineering, and AI Security departments. Before Cranium, Josh was a Senior Principal AI Scientist and department manager at MITRE, shaping national AI security strategies and developing cutting-edge adversarial machine learning defenses. His research has focused on ensuring the reliability, safety, and resilience of AI systems deployed in mission-critical environments. Josh has authored numerous publications on AI risk, trust, and adversarial robustness, contributing to industry frameworks such as MITRE ATLAS and the NIST AI RMF. Throughout his career, he has led high-impact AI security programs funded by the Department of Defense, the Department of Homeland Security, and major private sector stakeholders. With a strong foundation in AI risk assessment and safe AI deployment, Josh ensures Fire Mountain Labs remains at the forefront of AI security innovation, delivering solutions that enable organizations to deploy AI with confidence.