Dynamic resource allocation for machine learning jobs at H&M Group
10-21, 11:00–11:25 (Europe/Stockholm), Data

Live broadcast: https://www.youtube.com/watch?v=oBPNk5qN0L4

At H&M Group, we are increasingly adopting machine learning algorithms and rapidly developing successful use cases. One of these applications is dynamic resource allocation (memory and CPU), which uses data-driven analysis and ML to decrease infrastructure cost.
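
As a flavour of the idea (a minimal sketch, not the speakers' actual model), resource requests can be derived from historical per-job usage instead of a fixed worst-case reservation. The quantile choice, headroom factor, and fallback value below are illustrative assumptions:

```python
# Sketch: size a job's memory request from a high quantile of its observed
# peak usage plus headroom, instead of a static worst-case allocation.
from statistics import quantiles

def recommend_memory_mib(peak_usage_mib: list[float], headroom: float = 1.2) -> int:
    """Return a memory request based on the 95th percentile of observed peaks."""
    if not peak_usage_mib:
        return 16384  # no history yet: fall back to the old static default
    p95 = quantiles(peak_usage_mib, n=100)[94]
    return int(p95 * headroom)

# Example: past runs peaked around 2-3 GiB, so the recommendation lands
# near 3 GiB rather than the fixed 16 GiB previously reserved.
history = [2100, 2300, 2250, 2400, 2600, 2150, 2350, 2500, 2450, 2300]
print(recommend_memory_mib(history))
```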

The objective of this talk is to show how one of H&M's use cases adopted an ML workflow built on Airflow, Kubernetes, and Docker, and how the provisioning problem can be solved with an ML approach.


At H&M we use Airflow and Kubernetes as the main components of our machine learning workflow. The growth of online shopping over the last two years has increased data volumes significantly. Many companies struggle with infrastructure cost when adopting Airflow, Kubernetes, and Docker; anyone interested can join for a high-level explanation of the solution H&M Group has adopted to address this.
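
To make the setup concrete, here is a hedged sketch (not the talk's actual code) of how a predicted resource profile could be fed into a Kubernetes-backed Airflow task. The DAG name, image, namespace, and predicted values are hypothetical, and import paths vary across versions of the cncf.kubernetes provider:

```python
# Sketch: an Airflow DAG whose training pod is sized from predicted usage
# rather than a static worst-case request.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
from kubernetes.client import models as k8s

# Hypothetical values; in practice these would come from a model trained on
# historical per-job resource usage (see the sizing sketch above).
predicted = {"memory_mib": 3072, "cpu_millicores": 1500}

with DAG(
    "train_model",
    start_date=datetime(2021, 10, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    train = KubernetesPodOperator(
        task_id="train",
        name="train",
        namespace="ml-jobs",                    # placeholder namespace
        image="example.registry/train:latest",  # placeholder image
        container_resources=k8s.V1ResourceRequirements(
            requests={
                "memory": f"{predicted['memory_mib']}Mi",
                "cpu": f"{predicted['cpu_millicores']}m",
            },
            limits={"memory": f"{predicted['memory_mib'] * 2}Mi"},
        ),
    )
```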

Machine learning engineer at H&M Group. Prior to this role, he did his master's thesis project at H&M Group on this exact topic: resource allocation for machine learning jobs.

Machine learning engineer at H&M Group, with a background in applied mathematics and data engineering.