19.05.2026, Room A
As robotics and AI systems move rapidly into industrial environments, compliance, safety, and ethical governance have become strategic enablers—not blockers—of innovation.
In this hands-on workshop, you will learn how to apply the newly developed “11 Key Elements for Achieving AI & Robotics Compliance in 2025 and Beyond” to your own industry context.
Through interactive exercises, group work, and case-based simulations, you will explore how organizations can operationalize ethical AI, prepare for emerging regulations, and embed governance as a driver of trust, performance, and resilience.
This workshop is ideal for professionals across sectors who want practical tools for adapting AI governance frameworks to industrial, public-sector, and cross-organizational environments.
You will learn:
✅ How to adapt the framework to your own industry (manufacturing, logistics, healthcare, public sector, mobility, etc.)
✅ How governance acts as an innovation catalyst, not just a regulatory requirement
✅ How to map real-world use cases to risk classes and compliance expectations
✅ How to align stakeholders (technical + legal + business) using a shared model
✅ How to evaluate maturity and readiness within your organization
✅ Bonus: How to create a quick-start “AI Governance Canvas” you can take home
Kateryna Portmann is a senior expert in ethical AI governance, robotics compliance, and autonomous systems, with extensive experience bridging product development, regulation, and real-world deployment of AI-enabled robotics.
She currently works as a Senior Product Manager at ANYbotics, where she is directly involved in the development and deployment of autonomous robotic systems in safety-critical industrial environments. This role grounds her governance work in the realities of engineering constraints, operational risk, and market adoption.
Ethical & Responsible AI Governance
Kateryna designs and implements governance frameworks for safe, transparent, and trustworthy AI, with a focus on embedding governance directly into industrial and enterprise product lifecycles. She is a strong advocate for treating governance as an innovation accelerator, enabling scalable and compliant AI rather than slowing it down.
Robotics & Autonomous Systems Compliance
With hands-on experience in industrial robotics, Kateryna applies AI governance principles to autonomous and physical systems, addressing safety, accountability, and risk classification. She actively aligns robotics deployments with emerging global standards, including the EU AI Act, ISO robotics standards, and functional safety requirements.
Multi-Stakeholder AI Alignment
Kateryna specializes in aligning engineering, legal, cybersecurity, safety, and business stakeholders around shared governance models for high-impact AI systems. She has led and facilitated cross-organizational collaboration to ensure that compliance, risk management, and innovation goals reinforce—rather than conflict with—each other.
Applied Industry Experience
Her work is deeply rooted in real-world robotics and AI deployments, particularly in regulated and safety-critical contexts. She consistently bridges strategic policy and technical execution, translating regulatory expectations into actionable engineering and product decisions.
Framework Creation & Thought Leadership
Kateryna is the co-developer of the “11 Key Elements for Achieving AI & Robotics Compliance in 2025 and Beyond”, a framework informed by applied industry practice and multi-stakeholder collaboration. She contributed to discussions at AI House Davos 2025 and advocates for governance models that support innovation, transparency, and sovereign AI ecosystems.
Community & Academic Engagement
Beyond industry, Kateryna is an active contributor to the broader robotics ecosystem. She is involved with IFR Women in Robotics, supporting diversity, leadership, and knowledge-sharing in the field. She also serves as a Lecturer at HWZ (Hochschule für Wirtschaft Zürich), teaching robotics and AI with a strong emphasis on governance, safety, and real-world application.