2024-08-28, Room 6
Bayesian optimization is a powerful technique for optimizing black-box, costly-to-evaluate functions, widely applicable across diverse fields. However, the Gaussian process (GP) models commonly used in Bayesian optimization struggle with functions defined on categorical or mixed domains, which limits optimization in scenarios with numerous categorical inputs. In this talk, we present a solution that leverages ensemble models for probabilistic modeling, providing a robust approach to optimizing functions with categorical inputs. We showcase the effectiveness of our method through a Bayesian optimization setup implemented with the BoTorch library, utilizing probabilistic models from the XGBoostLSS framework. By integrating these tools, we achieve efficient optimization on domains with categorical variables, unlocking new possibilities for practical applications.
Bayesian optimization (BO) is a powerful method for optimizing black-box, costly-to-evaluate functions, with applications across various fields. These include hyperparameter tuning for complex machine learning models, designing better-tasting beverages, geological carbon sequestration, and developing new chemical products.
BO algorithms rely on two key components: a probabilistic model and an acquisition function. The probabilistic model predicts the distribution of the target variable at each point in the predictor space, while the acquisition function scores these distributions to guide the selection of the next evaluation points, aiming for efficient optimization. A minimal sketch of this loop follows.
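As an illustration, here is a minimal sketch of the BO loop using BoTorch's standard GP model and expected improvement on a toy one-dimensional objective. The objective function, sample sizes, and optimizer settings are illustrative assumptions, not the benchmark setup used in the talk:

    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import ExpectedImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def objective(x):
        # toy black-box function to maximize (illustrative assumption)
        return -(x - 0.3) ** 2

    train_x = torch.rand(5, 1, dtype=torch.double)
    train_y = objective(train_x)

    for _ in range(10):
        # probabilistic model: predicts the target's distribution at any x
        model = SingleTaskGP(train_x, train_y)
        fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
        # acquisition function: scores candidates by expected improvement
        acqf = ExpectedImprovement(model, best_f=train_y.max())
        candidate, _ = optimize_acqf(
            acqf,
            bounds=torch.tensor([[0.0], [1.0]], dtype=torch.double),
            q=1, num_restarts=5, raw_samples=32,
        )
        # evaluate the black box at the proposed point and refit
        train_x = torch.cat([train_x, candidate])
        train_y = torch.cat([train_y, objective(candidate)])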
The Gaussian process (GP) is BO's most popular probabilistic model. However, GPs have limitations. They struggle with functions defined on categorical or mixed domains, making them less effective when numerous categorical inputs are involved. Optimizing with GPs requires a careful choice of likelihood, kernel functions, and priors, which poses a risk of mismodeling. Additionally, GPs cannot natively describe functions with conditional spaces, their exact training time grows cubically with the number of training samples, and the popular implementation of BO with GPs (BoTorch) does not support inference on discontinuous polytopes.
An intelligent choice of probabilistic model can address these limitations. In this talk, we benchmark tree-based ensemble probabilistic models against GPs on several corner cases. We explore constrained optimization on mixed domains and functions with conditional spaces. We compare training and inference times, scaling with dimensionality, and optimization performance. We consider existing solutions, such as GPyOpt with sklearn's RandomForestRegressor, as well as our own implementation, which integrates probabilistic models from the XGBoostLSS framework into BO with the BoTorch library; a sketch of this kind of integration follows below.
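To make the integration idea concrete, the following is a simplified sketch of how a tree ensemble can be exposed to BoTorch: the spread of per-tree predictions supplies a mean and variance, which are packed into a Gaussian posterior that BoTorch's analytic acquisition functions can consume. This is our own illustrative assumption, not the talk's actual implementation; sklearn's RandomForestRegressor stands in for an XGBoostLSS model, and the class name ForestModel is hypothetical:

    import numpy as np
    import torch
    from botorch.models.model import Model
    from botorch.posteriors.gpytorch import GPyTorchPosterior
    from gpytorch.distributions import MultivariateNormal
    from sklearn.ensemble import RandomForestRegressor

    class ForestModel(Model):
        # hypothetical wrapper: a tree ensemble posing as a BoTorch model
        def __init__(self, X, y):
            super().__init__()
            self.forest = RandomForestRegressor(n_estimators=100).fit(X, y)

        @property
        def num_outputs(self):
            return 1

        def posterior(self, X, **kwargs):
            # X has shape batch x q x d; flatten it for sklearn
            batch_q = X.shape[:-1]
            X_np = X.reshape(-1, X.shape[-1]).detach().numpy()
            # per-tree spread approximates the predictive uncertainty
            per_tree = np.stack(
                [t.predict(X_np) for t in self.forest.estimators_]
            )
            mean = torch.as_tensor(per_tree.mean(axis=0)).reshape(batch_q)
            var = torch.as_tensor(per_tree.var(axis=0) + 1e-9).reshape(batch_q)
            # diagonal Gaussian posterior; analytic acquisition functions
            # such as ExpectedImprovement only need mean and variance
            return GPyTorchPosterior(
                MultivariateNormal(mean, torch.diag_embed(var))
            )

Because trees provide no gradients with respect to X, candidates would be scored on a finite set, e.g. with botorch.optim.optimize_acqf_discrete, which fits the categorical setting naturally.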
Through practical examples, we will demonstrate the effectiveness of tree-based probabilistic models for BO and showcase how our approach can unlock new possibilities for optimization in real-world applications.
Bayesian optimization can work on categorical domains. We show how to do this within the BoTorch framework.
Category: Data Science and Visualization – Statistics
Expected audience expertise: Domain – some
Expected audience expertise: Python – some
I worked as a particle physicist from 2012 to 2021. In 2016, I received my PhD in Physics from EPFL for the analysis of data from the LHCb experiment (CERN). After that, I changed experiments and joined the Belle II collaboration to analyse data collected at the KEK collider in Japan.
In 2021, I changed my career and joined trivago as a Data Scientist to work on ranking problems.
In 2022, I joined Henkel, my current employer. I am working on several projects, including (but not limited to) time series analysis, Bayesian optimization, and scheduling.