Shrinking gigabyte-sized scikit-learn models for deployment
04-19, 10:50–11:20 (Europe/Berlin), B09

We present an open-source library to shrink pickled scikit-learn and LightGBM models. We will provide insights into how pickling ML models works and how to improve their on-disk representation. With this approach, we can reduce the deployment size of machine learning applications by up to 6x.
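
To illustrate the pickling idea in miniature (this is not slim-trees' actual code; CompactArray and _widen are made-up names): Python's __reduce__ hook lets an object substitute a smaller payload on disk together with a reconstruction function that runs on load. Here, a float64 array is stored as float32 bytes and widened again when unpickled.

    import pickle
    import numpy as np

    def _widen(raw, shape):
        # On unpickling, inflate the compact float32 bytes back to the
        # float64 array the model expects at prediction time.
        return np.frombuffer(raw, dtype=np.float32).reshape(shape).astype(np.float64)

    class CompactArray:
        # Hypothetical wrapper: pickle stores only the float32 payload.
        def __init__(self, arr):
            self.arr = arr

        def __reduce__(self):
            compact = self.arr.astype(np.float32)
            return (_widen, (compact.tobytes(), compact.shape))

    # Stand-in for a model's split thresholds.
    thresholds = np.random.default_rng(0).random(1_000_000)
    print(len(pickle.dumps(thresholds)))                # full float64 payload
    print(len(pickle.dumps(CompactArray(thresholds))))  # roughly half the bytes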


At QuantCo, we create value from data using machine learning. To that end, we frequently build gigabyte-sized machine learning models. However, deploying and sharing those models can be a challenge because of their size. We built and open-sourced a library to aggressively compress tree-based machine learning models: slim-trees.

In this talk, we share our journey and the ideas that went into the making of slim-trees. We delve into the internals of scikit-learn's tree-based models to understand their memory footprint. Afterwards, we explore different techniques that allow us to reduce model size without sacrificing predictive performance.
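
To give a flavour of that footprint (a minimal sketch; the exact node fields vary between scikit-learn versions): every fitted tree keeps its structure in a single structured numpy array whose fields are mostly 8-byte integers and floats, which is where the bulk of the bytes goes.

    from sklearn.datasets import make_regression
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=10_000, random_state=0)
    model = DecisionTreeRegressor(random_state=0).fit(X, y)

    # The fitted tree stores all nodes in one structured numpy array;
    # child pointers, feature ids, thresholds etc. are 8-byte fields.
    nodes = model.tree_.__getstate__()["nodes"]
    print(nodes.dtype.names)  # left_child, right_child, feature, threshold, ...
    print(nodes.dtype.itemsize, "bytes per node,", len(nodes), "nodes")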

Finally, we present how to include slim-trees in your project and give an outlook on what’s to come.
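
As a starting point, usage looks roughly like the sketch below. The function names (dump_sklearn_compressed, load_compressed) follow the project's README at the time of writing and may change between releases; please check the linked repository for the current API.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from slim_trees import dump_sklearn_compressed, load_compressed

    X, y = make_regression(n_samples=10_000, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Near drop-in replacement for pickle.dump: writes the compact
    # tree representation, here with lzma compression.
    dump_sklearn_compressed(model, "model.pkl.lzma")

    # The loaded model behaves like the original estimator.
    model = load_compressed("model.pkl.lzma", "lzma")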


Expected audience expertise: Domain

Novice

Expected audience expertise: Python

Intermediate

Abstract as a tweet

Shrinking gigabyte-sized scikit-learn models for deployment: this talk shows how to deploy machine learning models with up to a 6x disk-space improvement

Public link to supporting material

https://github.com/pavelzw/slim-trees

See also: slides (4.0 MB)

Pavel is a data engineer at QuantCo who is currently studying Mathematics and Computer Science at KIT.

Yasin works as a data engineer at QuantCo and studies Computer Science at the Karlsruhe Institute of Technology (KIT).