Nonlinear programming on the GPU
07-29, 16:30–17:00 (UTC), JuMP Track

To date, most nonlinear optimization modelers and solvers have targeted CPU architectures. However, with the emergence of heterogeneous computing architectures, leveraging massively parallel accelerators has become crucial for performance in nonlinear optimization. As part of the Exascale Computing Project ExaSGD, we are studying how to run nonlinear optimization algorithms efficiently at exascale using GPU accelerators.

This talk walks through our recent development efforts. The parallel architecture of GPUs requires running as many operations as possible in batch mode, in a massively parallel fashion. We will detail how we have adapted automatic differentiation, linear algebra, and the optimization solvers to this batch setting, and present the different challenges we have addressed. Our efforts have led to several prototypes, each addressing a specific issue on the GPU: ExaPF for batch automatic differentiation, ExaTron as a batch optimization solver, and ProxAL for distributed parallelism. The future research opportunities for the nonlinear optimization community are manifold: how can we leverage the new automatic differentiation backends developed in the machine learning community for optimization purposes? How can we exploit the Julia language to develop a vectorized nonlinear optimization modeler targeting massively parallel accelerators?
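To make the batching idea concrete, here is a minimal sketch in Julia, assuming CUDA.jl and a CUDA-capable GPU; the toy `residual` function and the problem sizes are purely illustrative and not part of the ExaPF or ExaTron APIs. Broadcasting over GPU arrays fuses the elementwise nonlinear evaluation of all scenarios into a single kernel launch, which is the batch-mode execution pattern described above.

```julia
using CUDA

# Toy elementwise nonlinear residual; each column of `x` is one scenario.
residual(x, p) = x.^2 .- p .* sin.(x)

nvar, nbatch = 1_000, 512
x = CUDA.rand(Float64, nvar, nbatch)   # batched iterates on the GPU
p = CUDA.rand(Float64, nvar, nbatch)   # batched parameters on the GPU

# Broadcasting fuses the evaluation of all 512 scenarios into one GPU kernel.
r = residual(x, p)
```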

François Pacaud is a postdoctoral appointee at Argonne National Laboratory, supervised by Mihai Anitescu. His work focuses on the development of new nonlinear optimization algorithms on GPU architectures.