ExaTron.jl: a scalable GPU-MPI-based batch solver for small NLPs
2021-07-29, 13:30–14:00 (UTC), Blue

We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. We present ExaTron.jl's architecture, its kernel design principles, and implementation details, with experimental results comparing different design choices. We demonstrate linear scaling of ExaTron.jl's parallel performance on the Summit supercomputer at Oak Ridge National Laboratory.


We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. Its algorithm is a trust-region Newton method for bound-constrained nonlinear nonconvex problems. In contrast to existing work in the literature, ExaTron.jl runs entirely on GPUs and requires no data transfers between CPU and GPU during the solution procedure, which eliminates one of the main performance bottlenecks in memory-bound situations. We present ExaTron.jl's architecture, its kernel design principles, and implementation details, with experimental results comparing different design choices. We have implemented an ADMM algorithm for solving alternating current optimal power flow (ACOPF), in which tens of thousands of small nonlinear nonconvex subproblems are solved by ExaTron.jl. We demonstrate linear scaling of ExaTron.jl's parallel performance on the Summit supercomputer at Oak Ridge National Laboratory.
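To give a flavor of the method family the abstract refers to, here is a deliberately simplified, language-agnostic sketch (in Python, for illustration only) of a single trust-region Newton step for a 1-D bound-constrained problem. The function name, the scalar setting, and the crude trust-region handling are all hypothetical simplifications; this is not ExaTron.jl's actual algorithm or API, which operates on batches of such problems directly on the GPU.

```python
def tr_newton_step(grad, hess, x, lo, hi, delta):
    # One simplified trust-region Newton step for the 1-D problem
    # min f(x) subject to lo <= x <= hi.  Hypothetical sketch only.
    g = grad(x)
    h = hess(x)
    p = -g / h if h != 0 else -g      # Newton direction (fallback: steepest descent)
    p = max(-delta, min(delta, p))    # clip the step to the trust region |p| <= delta
    return max(lo, min(hi, x + p))    # project the trial point onto the bounds

# Minimize (x - 3)^2 on [0, 2]: the unconstrained minimizer 3 is projected to 2.
x_new = tr_newton_step(lambda x: 2 * (x - 3), lambda x: 2.0, 0.5, 0.0, 2.0, 10.0)
# x_new == 2.0
```

In the actual solver, each of the tens of thousands of ADMM subproblems is a small multivariate problem of this bound-constrained form, and the batch of trust-region Newton solves is carried out in parallel on the GPU.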

  • B.S. in Mathematics and Computer Science, Pohang University of Science and Technology, 2007
  • M.S. in Computer Science, Pohang University of Science and Technology, 2009
  • Ph.D. in Computer Science, University of Wisconsin-Madison, 2017
  • Postdoctoral Appointee, Argonne National Laboratory, 2018-present