2021-07-29, JuMP Track
We present MadNLP.jl, a nonlinear programming (NLP) solver written in native Julia. The solver implements the filter line-search interior-point method for constrained NLPs; to the best of our knowledge, MadNLP is currently the only native-Julia solver capable of handling general nonlinear equality- and inequality-constrained optimization problems. MadNLP is interfaced with the algebraic modeling language JuMP.jl, the graph-based modeling language Plasmo.jl, and the NLP data structure NLPModels.jl.
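As a brief illustration of the JuMP interface, the sketch below solves the Rosenbrock problem with MadNLP as the optimizer. It assumes the MadNLP.Optimizer MathOptInterface wrapper and follows the usage pattern shown in the MadNLP README; option names may differ across versions.

```julia
using JuMP, MadNLP

# Build a small unconstrained NLP (the Rosenbrock function) in JuMP
# and hand it to MadNLP through its MathOptInterface wrapper.
model = Model(() -> MadNLP.Optimizer(print_level=MadNLP.INFO, max_iter=100))
@variable(model, x, start=0.0)
@variable(model, y, start=0.0)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)

optimize!(model)
println("x = ", value(x), ", y = ", value(y))
```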
MadNLP leverages diverse sparse and dense linear algebra routines: UMFPACK, HSL routines, MUMPS, Pardiso, LAPACK, and cuSOLVER. The key feature of MadNLP is its adoption of scalable linear algebra methods: structure-exploiting parallel linear algebra (based on restricted additive Schwarz and Schur complement strategies) and GPU-based linear algebra (cuSOLVER). These methods significantly enhance the scalability of the solver to large-scale problem instances (e.g., long-horizon dynamic optimization, stochastic programs, and dense NLPs). Furthermore, MadNLP exploits Julia's extensibility so that new linear solvers can be added in a plug-and-play manner, as sketched below.

In the talk, we will show benchmark results against other open-source and commercial solvers, as well as results highlighting MadNLP's advanced features. Our results suggest that (i) MadNLP has speed and robustness comparable to Ipopt and KNITRO on the standard benchmark test set (CUTEst); (ii) MadNLP with structure-exploiting parallel linear algebra can achieve a speed-up of a factor of 3 when solving large-scale sparse nonlinear programs; and (iii) GPU acceleration achieves a speed-up of a factor of 10 when solving dense nonlinear optimization problems. The presentation will conclude with a future development roadmap, including the implementation of distributed-memory parallelism and a pure-GPU solver.
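To illustrate the plug-and-play linear solver interface, the sketch below swaps the linear solver through MadNLP's linear_solver option. The extension package names (MadNLPHSL, MadNLPGPU) and solver type names (Ma57Solver, LapackGPUSolver) are assumptions that depend on which MadNLP subpackages are installed and on the package version.

```julia
using JuMP, MadNLP
using MadNLPHSL   # assumed extension package providing HSL solvers (e.g., Ma57Solver)
using MadNLPGPU   # assumed extension package providing cuSOLVER-backed dense solvers

# Select the linear solver via the `linear_solver` option; the solver type
# names used here are version-dependent assumptions, not a fixed API.
model = Model(() -> MadNLP.Optimizer(linear_solver=Ma57Solver))
# For dense problems on a GPU, one might instead pass the cuSOLVER backend:
# model = Model(() -> MadNLP.Optimizer(linear_solver=LapackGPUSolver))

@variable(model, x[1:2], start=0.0)
@NLobjective(model, Min, (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2)
optimize!(model)
```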