PyCon DE & PyData 2025

Jesper Dramsch

Jesper Dramsch works at the intersection of machine learning and physical, real-world data. Currently, they work as a scientist for machine learning in numerical weather prediction at ECMWF, the intergovernmental European Centre for Medium-Range Weather Forecasts.

Jesper is a fellow of the Software Sustainability Institute, creating awareness and educational resources around the reproducibility of machine learning results in applied science. Before that, they worked on applied exploratory machine learning problems, e.g. satellite data and lidar imaging on trains, and defended a PhD in machine learning for geoscience. During the PhD, Jesper wrote multiple publications, presented often at workshops and conferences, and eventually gave keynote presentations on the future of machine learning in geoscience.

Moreover, they have worked as a consulting machine learning and Python educator for international companies and the UK government. They create educational notebooks on Kaggle applying ML to different domains, reaching rank 81 worldwide out of over 100,000 participants, and their video courses on Skillshare have accumulated over 128 days of watch time from more than 4,500 students. Recently, Jesper was invited into the YouTube Partner Programme, creating videos around programming, machine learning, and tech.


LinkedIn

linkedin.com/in/mlds

GitHub

github.com/jesperdramsch


Session

04-24
11:35
45min
Going Global: Taking code from research to operational open ecosystem for AI weather forecasting
Jesper Dramsch

When I was hired as a Scientist for Machine Learning, experts said ML would never work in weather forecasting. Nowadays, I get to contribute to Anemoi, a full-featured ML weather forecasting framework used by international weather agencies to research, build, and scale AI weather forecasting models.

The project started out as a curiosity among my colleagues and soon scaled up on the strength of its initial success. As machine learning stories go, this is a story of change, adaptation, and making things work.

In this talk, I'll share some practical lessons from how we evolved from a mono-package with four people working on it to multiple open-source packages with 40+ internal and external collaborators. Specifically, I'll cover how we managed the explosion of over 300 config options without losing all of our sanity, how we built a separation of packages that works for both researchers and operations teams, and how CI/CD and testing constrain how many bugs we can introduce in a given day. You'll learn concrete patterns for growing Python packaging for ML systems and for balancing research flexibility with production stability. As a bonus, I'll sprinkle in anecdotes where LLMs like ChatGPT and Copilot massively failed at facilitating this evolution.
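To give a flavour of the config story: one common way to tame hundreds of options is a layered setup, with a version-controlled base config and small per-experiment overrides merged on top. Here is a minimal sketch using OmegaConf with entirely hypothetical option names; it illustrates the general pattern, not Anemoi's actual schema.

from omegaconf import OmegaConf

# Hypothetical shared defaults (not Anemoi's real schema).
base = OmegaConf.create({
    "model": {"hidden_dim": 512, "num_layers": 16},
    "training": {"lr": 1e-4, "batch_size": 4},
})

# An experiment only declares what it changes; everything else
# stays in the version-controlled base above.
experiment = OmegaConf.create({"training": {"lr": 5e-5}})

cfg = OmegaConf.merge(base, experiment)  # later configs win on conflict
print(cfg.training.lr)  # 5e-05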

Join me for a deep dive into the real challenges of scaling ML systems, where the weather may be hard to predict, but our code doesn't have to be.

PyCon: MLOps & DevOps
Platinum3