2026-08-13, Room 3
This talk introduces a new package called CoolPDLP.jl, which implements state-of-the-art parallel algorithms for large-scale linear programming. Thanks to Julia's flexible GPU ecosystem, these algorithms run on a variety of accelerators and accept arbitrary matrix and number types.
A linear program (LP) is an optimization problem whose objective and constraints are affine functions of the decision variables. While the simplex method solves LPs very efficiently on CPUs, it is hard to parallelize, which limits its scalability. In recent years, first-order methods for LP have emerged whose dominant operation is matrix-vector multiplication. These primal-dual (PD) approaches let practitioners fully leverage modern hardware accelerators such as GPUs and TPUs when tackling large-scale LPs. Their potential was first demonstrated by the PDLP algorithm [1] and its subsequent GPU port cuPDLP.jl [2]. However, with the notable exception of MPAX [3], existing implementations are usually written in CUDA and thus limited to NVIDIA hardware. Furthermore, they seldom support batching and do not allow custom matrix or number types.
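To make the reliance on matrix-vector products concrete, here is a schematic sketch of one iteration of the primal-dual hybrid gradient (PDHG) method on a generic saddle-point formulation of the LP; PDLP itself adds restarts, adaptive step sizes and further refinements [1]:

    \min_{x \in X} \max_{y \in Y} \; c^\top x + y^\top (b - Ax)

    x^{k+1} = \operatorname{proj}_X\big(x^k - \tau \, (c - A^\top y^k)\big)
    y^{k+1} = \operatorname{proj}_Y\big(y^k + \sigma \, \big(b - A (2 x^{k+1} - x^k)\big)\big)

Apart from cheap vector operations and projections onto the feasible sets, each iteration only multiplies by A and its transpose, which is what makes these methods so amenable to accelerators.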
Thanks to the flexibility of the Julia ecosystem, we developed a new package called CoolPDLP.jl, which implements the PDLP algorithm in a backend-agnostic fashion. It supports arbitrary matrix and number types, enabling experiments with hand-rolled sparse matrix formats or reduced precision. It also provides default cross-platform sparse matrices built atop KernelAbstractions.jl, for backends where sparse linear algebra support is less mature. Special care is taken to minimize allocations and preserve type stability. A JuMP.jl interface is also included, as sketched below.
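As a hedged illustration of the JuMP.jl interface, assuming it follows the standard Optimizer convention of MathOptInterface (the CoolPDLP.Optimizer entry point is our notation, not a confirmed API), solving a tiny LP could look like this:

    using JuMP
    import CoolPDLP  # the package introduced in this talk; exact API assumed here

    # Tiny LP: minimize 2x + 3y subject to x + y >= 1 with x, y >= 0
    model = Model(CoolPDLP.Optimizer)  # Optimizer name follows the usual JuMP convention (assumption)
    @variable(model, x >= 0)
    @variable(model, y >= 0)
    @objective(model, Min, 2x + 3y)
    @constraint(model, x + y >= 1)

    optimize!(model)
    @show value(x) value(y) objective_value(model)

Because the solver accepts arbitrary matrix and number types, the same model could in principle be solved in reduced precision or with a custom sparse format; we omit the corresponding options since they are package-specific.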
The JuliaCon talk will introduce the main features of our package, present a few benchmarks on different hardware families, and conclude with future directions such as batched solving.
[1] D. Applegate et al., “PDLP: A Practical First-Order Method for Large-Scale Linear Programming,” arXiv:2501.07018, Jan. 2025, doi: 10.48550/arXiv.2501.07018.
[2] H. Lu and J. Yang, “cuPDLP.jl: A GPU Implementation of Restarted Primal-Dual Hybrid Gradient for Linear Programming in Julia,” Operations Research, vol. 73, no. 6, pp. 3440–3452, Nov. 2025, doi: 10.1287/opre.2024.1069.
[3] H. Lu, Z. Peng, and J. Yang, “MPAX: Mathematical Programming in JAX,” arXiv:2412.09734, Dec. 2024, doi: 10.48550/arXiv.2412.09734.
Researcher in the transportation department of École des Ponts (France). Interested in operations research, graph algorithms, automatic differentiation, and high-performance computing.
Website: https://gdalle.github.io/