JuliaCon 2022 (Times are UTC)

Streamlining nonlinear programming on GPUs
2022-07-29, JuMP

We propose a prototype for a vectorized modeler written in pure Julia, targeting the solution of large-scale nonlinear optimization problems. The prototype has been designed to evaluate the problem's expression tree seamlessly on GPU accelerators. We discuss the implementation and the challenges we have encountered, as well as preliminary results comparing our prototype with JuMP's AD backend.


How fast can you evaluate the derivatives of a nonlinear optimization problem? Most real-world optimization instances come with thousands of variables and constraints; in such a large-scale setting, using sparse automatic differentiation (AD) is often a non-negotiable requirement. It is well known that, by choosing the partials appropriately (for instance with a coloring algorithm), the evaluations of the Jacobian and the Hessian in sparse format translate respectively into one vectorized forward pass and one vectorized forward-over-reverse pass. Thus, any good AD library should be able to traverse the problem's expression tree efficiently, both forward and backward.

In this talk, we propose a prototype for a vectorized modeler, where the expression tree manipulates vector expressions. By doing so, the forward and reverse evaluations can be rewritten in the universal language of sparse linear algebra. By chaining linear algebra calls together, we show that the evaluation of the expression tree can be offloaded to any sparse linear algebra backend. Notably, we show that both the forward and reverse passes can be streamlined efficiently, using either MKLSparse on Intel CPUs or cuSPARSE on CUDA GPUs.

We discuss the prototype's performance on the optimal power flow problem, using JuMP's AD backend as a baseline. We show that on the largest instances, we can speed up the evaluation of the sparse Hessian by a factor of 50. Although our code remains a prototype, we hope that the emergence of new AD libraries such as Enzyme will allow the idea to be extended further. We conclude the talk by discussing how the evaluation of derivatives inside JuMP's AD backend could be vectorized.
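As a rough illustration of what "chaining linear algebra calls" means in practice, here is a minimal Julia sketch (not the prototype's actual code): the same generic forward evaluation runs a toy vector expression on the CPU with SparseArrays and, once the data is moved to the device, on the GPU through CUDA.jl's CUSPARSE wrappers. The function name `forward`, the toy expression, and the problem size are illustrative assumptions only.

using SparseArrays
# using CUDA   # uncomment for the GPU variant (requires a CUDA-capable device)

# Forward evaluation of a toy vectorized expression tree: every intermediate
# is a full vector, so the only backend-specific operation is the sparse
# matrix-vector product A * x.
function forward(A, x)
    u = A * x            # sparse mat-vec (SparseMatrixCSC, CuSparseMatrixCSR, ...)
    v = sin.(u)          # elementwise kernel, broadcast on CPU or GPU
    return v .+ x .^ 2   # fused broadcast closing the expression
end

n = 10_000
A = sprandn(n, n, 1e-3)   # sparse random operator standing in for the model structure
x = randn(n)
y = forward(A, x)          # CPU evaluation

# GPU variant: the same generic code runs once the data lives on the device.
# A_d = CUDA.CUSPARSE.CuSparseMatrixCSR(A)
# x_d = CuArray(x)
# y_d = forward(A_d, x_d)

Because `forward` never refers to a concrete array type, switching backends only requires converting the inputs; this is the sense in which the evaluation can be offloaded to any sparse linear algebra backend.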

Postdoc at Argonne National Laboratory