PyData London 2026

Your ML Pipeline Meets the EU AI Act
2026-06-07, Grand Hall 1

The EU AI Act is often seen as a legal concern, but many of its requirements directly affect everyday ML workflows. This talk shows data scientists and ML engineers where the regulation impacts the machine learning lifecycle and presents practical, lightweight patterns to make ML systems more AI Act–ready.


The EU AI Act introduces new obligations that will directly affect how machine learning systems are designed, evaluated, and operated. While the regulation is often discussed from a legal perspective, many of its practical consequences fall squarely into the domain of data scientists and ML engineers.

This talk provides an engineering-focused walkthrough of where the EU AI Act intersects with the modern ML lifecycle. We map key regulatory expectations to familiar technical stages such as data collection, model training, evaluation, deployment, and monitoring. Rather than diving into legal detail, the session focuses on concrete implementation patterns and common failure modes observed in real-world ML workflows.
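To give a flavour of what such a mapping looks like, here is a purely illustrative sketch in Python. The stage names are the ones used above; the article references point at well-known obligations for high-risk systems (data governance, documentation, logging, transparency, human oversight, robustness), but they are indicative only and not legal advice.

```python
# Illustrative sketch: mapping ML lifecycle stages to example EU AI Act
# themes for high-risk systems. Article references are indicative only;
# always verify against the regulation text itself.
LIFECYCLE_TO_OBLIGATIONS = {
    "data_collection": ["data governance and quality (Art. 10)"],
    "training": ["technical documentation (Art. 11)"],
    "evaluation": ["accuracy and robustness testing (Art. 15)"],
    "deployment": ["transparency to users (Art. 13)", "human oversight (Art. 14)"],
    "monitoring": ["logging and record-keeping (Art. 12)", "post-market monitoring"],
}

def obligations_for(stage: str) -> list[str]:
    """Return the example obligations linked to a pipeline stage (empty if unknown)."""
    return LIFECYCLE_TO_OBLIGATIONS.get(stage, [])
```

Even a table this small makes gaps visible: if a stage in your pipeline has no documented counterpart in the mapping, that is usually where a compliance conversation starts.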

Attendees will learn how to perform lightweight risk classification, identify typical compliance gaps in existing pipelines, and apply pragmatic design patterns that improve traceability, documentation, and monitoring without significantly slowing down development. The talk concludes with a practical readiness checklist that teams can immediately apply to their own systems.
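As a taste of the "lightweight risk classification" pattern, the following hypothetical triage helper shows the shape of the idea. The fields and decision rules are illustrative assumptions for this sketch, not a legal classification under the Act; its only job is to flag when a proper review is needed.

```python
from dataclasses import dataclass

# Hypothetical, deliberately simplified risk triage helper. The fields and
# rules below are illustrative assumptions, not a legal classification.
@dataclass
class SystemProfile:
    affects_individuals: bool    # decisions about people (credit, hiring, ...)
    listed_high_risk_area: bool  # falls in an Annex III-style use case
    fully_automated: bool        # no human review before the decision takes effect

def triage(profile: SystemProfile) -> str:
    """Rough first-pass triage: decide whether a full legal review is warranted."""
    if profile.listed_high_risk_area:
        return "likely high-risk: trigger full compliance review"
    if profile.affects_individuals and profile.fully_automated:
        return "unclear: escalate to legal/compliance"
    return "likely limited risk: document the assessment and move on"
```

The point is not the rules themselves but the habit: record the answers alongside the model card so the classification is traceable when the system changes.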

Target audience: data scientists, ML engineers, and MLOps practitioners working with production ML systems.

Expected background: familiarity with the basic ML lifecycle and model deployment concepts. No prior knowledge of the EU AI Act is required.

Key takeaways:
- Understand where the EU AI Act impacts ML pipelines
- Learn practical patterns for AI Act readiness
- Avoid common compliance pitfalls in production ML
- Leave with a concrete checklist for next steps

Gabriel Lipnik is an AI engineer and applied mathematician working on production-grade machine learning and optimization systems. His work focuses on bridging the gap between advanced models and real-world deployment, with a particular interest in MLOps, trustworthy AI, and regulatory-ready ML systems.

He has contributed to large-scale optimization and AI projects in the mobility and infrastructure domain, where reliability, traceability, and operational robustness are critical.

Gabriel is particularly interested in practical approaches to making machine learning systems more transparent, monitorable, and production-ready.