Details

In recent years, the development of scalable continuous optimization solvers on GPUs has made significant progress, primarily driven by advances in GPU-based sparse linear algebra.
We first introduce MadNLP.jl, a GPU-native solver for nonlinear programming (NLP) based on interior-point methods and sparse direct solvers. It was the first solver in the suite and remains central to solving general nonlinear problems efficiently.

Building on this foundation, MadNCL.jl is a robust meta-solver that orchestrates multiple NLP solves to handle degeneracy and ill-conditioning and to achieve high-accuracy solutions with tolerances below 1e-8.
The latest solver, MadIPM.jl, targets large-scale linear and convex quadratic programs (LP / QP).

All three solvers are based on second-order algorithms, offering greater robustness and accuracy than purely first-order methods.
We present performance results on real-world benchmark instances, demonstrating substantial speedups of the GPU implementations over their CPU counterparts while maintaining high solution quality.
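For context, MadNLP.jl plugs into the standard JuMP modeling interface. Below is a minimal CPU sketch, assuming the JuMP and MadNLP packages are installed; the test problem and options are illustrative, not taken from the talk:

```julia
# Minimal sketch: solving a small NLP with MadNLP through JuMP (CPU).
# Assumes JuMP and MadNLP are installed; the options shown are illustrative.
using JuMP, MadNLP

# Rosenbrock test problem: the minimizer is (x, y) = (1, 1).
model = Model(() -> MadNLP.Optimizer(print_level = MadNLP.ERROR))
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
@objective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)

optimize!(model)
println(termination_status(model), ": x = ", value(x), ", y = ", value(y))
```

GPU execution is selected through MadNLP's solver options rather than changes to the model itself; the package documentation covers the available backends.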

Audience Takeaway:

  • Second-order methods scale on GPUs
  • Unified GPU solver suite (NLP / LP / QP)
  • Robust, high-accuracy solutions
  • Demonstrated real-world CPU–GPU speedups

Zoom: This is an online event. To attend, join us via the Zoom link below at 6pm:
https://numfocus-org.zoom.us/j/89399976851?pwd=UEgMUZXdYmKdK1x1dIPL6hwUYnp7NW.1

Sponsors: Adyen, PyData Chicago, and NumFocus

Tags: GPU, Python, Open Source, Algorithms