2023-04-19, B09
Tired of having to handle asynchronous processes for neuroevolution? Do you want to leverage massive vectorization and high-throughput accelerators for evolution strategies (ES)? evosax lets you use JAX, XLA compilation and auto-vectorization/parallelization to scale ES to your favorite accelerators. In this talk we will get to know the core API and learn how to solve distributed black-box optimization problems with evolution strategies.
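To give a flavour of the core API, here is a minimal sketch of the ask-evaluate-tell loop on a toy quadratic objective. It assumes evosax's CMA_ES strategy with the interface from the project README; exact signatures may differ between versions, and the population size, dimensionality and number of generations are purely illustrative.

    # Minimal ask-evaluate-tell loop (sketch; assumes evosax's CMA_ES
    # strategy and a toy quadratic objective; signatures may differ
    # slightly between evosax versions).
    import jax
    import jax.numpy as jnp
    from evosax import CMA_ES

    rng = jax.random.PRNGKey(0)
    strategy = CMA_ES(popsize=32, num_dims=2)
    es_params = strategy.default_params
    state = strategy.initialize(rng, es_params)

    def fitness_fn(x):
        # Toy objective: squared distance to the origin (minimized).
        return jnp.sum(x ** 2)

    for gen in range(50):
        rng, rng_ask = jax.random.split(rng)
        # ask: sample a population of candidate solutions
        x, state = strategy.ask(rng_ask, state, es_params)
        # evaluate: score each member of the population
        fitness = jax.vmap(fitness_fn)(x)
        # tell: update the search distribution with the results
        state = strategy.tell(x, fitness, state, es_params)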
The deep learning revolution has been greatly accelerated by the 'hardware lottery': recent advances in hardware accelerators and compilers paved the way for large-scale batch gradient optimization. Evolutionary optimization, on the other hand, has mainly relied on CPU parallelism, e.g. Dask scheduling on distributed multi-host infrastructure. Here we argue that modern evolutionary computation can also benefit significantly from the massive computational throughput provided by GPUs and TPUs.

To better harness these resources and to enable the next generation of black-box optimization algorithms, we release evosax: a JAX-based library of evolution strategies which allows researchers to leverage powerful function transformations such as just-in-time compilation, automatic vectorization and hardware parallelization. evosax implements 30 evolutionary optimization algorithms, including finite-difference-based and estimation-of-distribution evolution strategies as well as various genetic algorithms. Every algorithm can be executed directly on hardware accelerators and vectorized or parallelized across devices with a single line of code. The library is designed in a modular fashion and allows for flexible usage via a simple ask-evaluate-tell API. We thereby hope to facilitate a new wave of scalable evolutionary optimization algorithms.
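The "single line of code" vectorization can be sketched as follows: one generation step is jit-compiled, and jax.vmap batches several independent ES runs over different seeds (jax.pmap can be substituted analogously to spread the runs across devices). This is a sketch under the assumption of the same CMA_ES interface as above and that the strategy state is a pytree exposing a best_fitness field, as in recent evosax versions.

    # Sketch: jit-compile one generation step and vectorize entire
    # independent runs over a batch of seeds with jax.vmap (swap in
    # jax.pmap to parallelize across devices). Assumes the evosax
    # state is a pytree exposing a best_fitness field.
    import jax
    import jax.numpy as jnp
    from evosax import CMA_ES

    strategy = CMA_ES(popsize=32, num_dims=2)
    es_params = strategy.default_params

    def fitness_fn(x):
        # Toy objective: squared distance to the origin (minimized).
        return jnp.sum(x ** 2)

    @jax.jit
    def generation_step(rng, state):
        rng, rng_ask = jax.random.split(rng)
        x, state = strategy.ask(rng_ask, state, es_params)
        fitness = jax.vmap(fitness_fn)(x)
        state = strategy.tell(x, fitness, state, es_params)
        return rng, state

    def run_es(seed):
        # One full ES run for a given random seed.
        rng = jax.random.PRNGKey(seed)
        state = strategy.initialize(rng, es_params)
        for _ in range(50):
            rng, state = generation_step(rng, state)
        return state.best_fitness

    # One line to batch 8 independent ES runs on the same accelerator.
    best_fitness = jax.vmap(run_es)(jnp.arange(8))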
Intermediate
Abstract as a tweet: Tired of having to handle asynchronous processes for neuroevolution? Do you want to leverage high-throughput accelerators for evolution strategies (ES)? evosax allows you to leverage JAX, XLA compilation & auto-vectorization/parallelization to scale ES to accelerators.
Public link to supporting material:
Expected audience expertise: Domain: None
I am a 3rd-year PhD student working on Evolutionary Meta-Learning at the Technical University Berlin. My work is funded by the Science of Intelligence Excellence Cluster and supervised by Henning Sprekeler. Previously, I completed an MSc in Computing at Imperial College London, a Data Science MSc at Universitat Pompeu Fabra and an undergraduate degree in Economics at the University of Cologne. I also interned at DeepMind (Discovery team) and Accenture, and I maintain a set of open-source tools.