JuliaCon 2024

GPU Acceleration of Julia's SciML: ODEs, Optimization, and more
07-11, 14:00–14:30 (Europe/Amsterdam), For Loop (3.2)

Julia's SciML ecosystem is rapidly growing at the intersection of machine learning and scientific computing. A key emphasis is performance, accelerating scientific discovery. This is possible because at the core of SciML are numerical methods that automatically support hardware accelerators like GPUs, making large simulations tractable. This talk will survey the state of GPU acceleration in the SciML ecosystem and its applications, ranging from ODE solvers to optimization methods.


Julia's SciML is an ecosystem similar to SciPy or MATLAB's built-in numerical solver libraries in that it provides the standard numerical solvers for the Julia ecosystem. Everything from ODE solvers to nonlinear solvers and optimization routines is provided under one common interface. What makes the Julia ecosystem stand out is its direct compatibility with machine learning and its deep integration with GPUs. In this talk, we will focus on the latter, showcasing how the SciML stack employs various GPU toolsets to automate translating a complex CPU-based model into a GPU-based model. We will discuss how this differs from standard machine learning frameworks, why it achieves 20x-100x acceleration over PyTorch and JAX for ODE solvers, and some of the steps being taken to accelerate small optimization problems, which traditionally have not been able to exploit parallelism.
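As a flavor of the kind of automation discussed in the talk, the following is a minimal sketch based on DiffEqGPU.jl's documented ensemble interface: the ODE model is written once as ordinary Julia code, and `EnsembleGPUKernel` compiles the entire solver into a single GPU kernel to run thousands of trajectories in parallel. The specific parameter-randomization choice here is illustrative, and a CUDA-capable GPU is assumed.

```julia
using OrdinaryDiffEq, DiffEqGPU, CUDA, StaticArrays

# The Lorenz system, written as plain Julia code (no GPU-specific annotations).
function lorenz(u, p, t)
    σ, ρ, β = p
    du1 = σ * (u[2] - u[1])
    du2 = u[1] * (ρ - u[3]) - u[2]
    du3 = u[1] * u[2] - β * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0, 0.0f0, 0.0f0]
p  = @SVector [10.0f0, 28.0f0, 8.0f0 / 3.0f0]
prob = ODEProblem{false}(lorenz, u0, (0.0f0, 100.0f0), p)

# Illustrative choice: perturb the parameters randomly for each trajectory.
monteprob = EnsembleProblem(prob,
    prob_func = (prob, i, repeat) ->
        remake(prob, p = (@SVector rand(Float32, 3)) .* p))

# Solve 10_000 trajectories on the GPU; GPUTsit5 is a solver specialized
# for kernel-level GPU parallelism.
sol = solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend()),
            trajectories = 10_000, saveat = 1.0f0)
```

The same problem definition also runs unchanged on the CPU (e.g. with `EnsembleThreads()`), which is the CPU-to-GPU translation story the talk describes.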
