DevConf.CZ

Efficient Edge Computing: Unleashing The Potential of AI/ML With Lightweight Kubernetes
06-14, 15:30–16:05 (Europe/Prague), D105 (capacity 300)

Join us as we delve into the world of deploying Artificial Intelligence (AI) and Machine Learning (ML) models across diverse edge deployment scenarios. This talk is designed for technology enthusiasts, developers, and industry professionals seeking insights into running AI/ML models efficiently at the edge. We will compare the components used in traditional cloud-based Kubernetes distributions with those in lightweight Kubernetes distributions optimized for edge devices, such as MicroShift. We'll examine crucial factors like power consumption, model size, and performance, shedding light on the considerations necessary for successful edge deployments. Additionally, we'll present a practical example of serving multiple models and discuss strategies to minimize inference process switching time in time-sensitive situations. Learn how open source components can empower you to navigate the challenges of running AI and ML models at the edge efficiently.
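One common way to keep model switching cheap on a constrained edge node is to preload the served models once and dispatch requests by name, so a switch is a lookup rather than a reload from disk. The sketch below is only an illustration of that general pattern under assumed conditions; it is not the approach presented in the talk, and the `load_model` stand-in and model names are hypothetical placeholders for real inference sessions.

```python
import time
from typing import Callable, Dict, List

# Hypothetical stand-in for loading a real model (e.g. an inference session).
def load_model(name: str) -> Callable[[List[int]], List[str]]:
    time.sleep(0.5)  # simulate the expensive part: disk I/O, weight init
    return lambda batch: [f"{name}:{x}" for x in batch]

class ModelRegistry:
    """Preloads every served model once; requests never pay the load cost."""

    def __init__(self, names: List[str]) -> None:
        # All models stay resident in memory for the life of the process.
        self._models: Dict[str, Callable] = {n: load_model(n) for n in names}

    def infer(self, name: str, batch: List[int]) -> List[str]:
        # Switching between models is a dictionary lookup, so latency stays
        # predictable even when requests alternate between models.
        return self._models[name](batch)

if __name__ == "__main__":
    registry = ModelRegistry(["detector", "classifier"])
    print(registry.infer("detector", [1, 2]))
    print(registry.infer("classifier", [3]))
```

The trade-off, of course, is memory: keeping several models resident only works within the RAM budget of the edge device, which is one of the constraints the talk weighs against switching latency.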

Ricardo is a Principal Software Engineer in the Emerging Technologies organization within Red Hat's Office of the CTO. He currently focuses on the edge computing space and has been part of the MicroShift project, a lightweight Kubernetes distribution optimized for the device edge, since its inception. He is a former member of the Akraino TSC, a former PTL of the Kubernetes-Native-Infrastructure blueprint family, and a contributor to OpenStack, OpenDaylight, and OPNFV.