Hyperparameter optimization for the impatient
04-17, 11:40–12:25 (Europe/Berlin), B09

In recent years, Hyperparameter Optimization (HPO) has become a fundamental step in the training
of Machine Learning (ML) models and in the creation of automatic ML pipelines.
Unfortunately, while HPO improves the predictive performance of the final model, it comes at a significant cost in both computational resources and waiting time.
This leads many practitioners to resort to unreliable heuristics in an attempt to lower the cost of HPO.

In this talk we will provide simple and practical algorithms for users who want to train models
with near-optimal predictive performance while incurring significantly lower cost and waiting
time. The presented algorithms are agnostic to the application and the model being trained, so they are useful in a wide range of scenarios.

We provide results from extensive experiments on public benchmarks, including comparisons with well-known techniques such as Bayesian Optimization (BO), ASHA, and Successive Halving.
We will describe the scenarios in which the biggest gains are observed (up to 30x) and give examples of how to use these algorithms in a real-world environment.

All the code used for this talk is available on [GitHub](https://github.com/awslabs/syne-tune).
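
As a taste of what the talk covers, below is a minimal sketch of launching an asynchronous HPO run with Syne Tune's ASHA scheduler. The training script name, metric names, and search space are illustrative placeholders, and the exact API may vary between Syne Tune versions:

```python
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import loguniform, randint
from syne_tune.optimizer.baselines import ASHA

# Hyperparameter search space (illustrative values)
config_space = {
    "epochs": 100,                    # maximum training budget per trial
    "lr": loguniform(1e-4, 1e-1),
    "batch_size": randint(16, 128),
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point="train_model.py"),  # your training script
    scheduler=ASHA(
        config_space,
        metric="val_loss",            # metric reported by the training script
        resource_attr="epoch",
        max_resource_attr="epochs",
        mode="min",
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=600),  # stop after 10 minutes
    n_workers=4,                      # number of trials evaluated in parallel
)
tuner.run()
```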


In this talk we will present simple and practical solutions to perform HPO quickly, with results on par with well-known (and costly) techniques. Our claims are supported by empirical evidence obtained on public standardized benchmarks, and our work has been accepted at peer-reviewed workshops (and is currently under submission to a conference).

Specifically, [1] was accepted at the AutoML Conference Workshop Track and [2] at the AutoML workshop at ICML 2021.
All the code implementing the algorithms is available in the Syne Tune package under the Apache 2.0 license (https://github.com/awslabs/syne-tune).
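
For context, the training script passed to the backend only needs to report its metric back to the tuner after each unit of resource (e.g., each epoch). A hypothetical `train_model.py`, sketched under the same assumptions as the example above, might look like:

```python
import argparse

from syne_tune import Reporter

# Hyperparameters arrive as command-line arguments, one per entry in the search space
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, required=True)
parser.add_argument("--lr", type=float, required=True)
parser.add_argument("--batch_size", type=int, required=True)
args = parser.parse_args()

report = Reporter()
for epoch in range(1, args.epochs + 1):
    # train_one_epoch is a placeholder for your actual training code
    val_loss = train_one_epoch(args.lr, args.batch_size)
    # Report the metric each epoch so the scheduler can stop unpromising trials early
    report(epoch=epoch, val_loss=val_loss)
```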

References:
[1] https://arxiv.org/abs/2207.06940
[2] https://arxiv.org/abs/2103.16111


Public link to supporting material

https://github.com/awslabs/syne-tune

Expected audience expertise: Python

Novice

Expected audience expertise: Domain

Intermediate

Abstract as a tweet

HPO does not need to be expensive: see how to speed it up with a couple of simple algorithms

Martin Wistuba is a researcher at Amazon Web Services, where he works on the automation of hyperparameter optimization and Neural Architecture Search. Previously, he was at IBM Research, where he developed tools to automate deep learning.