JuliaCon 2022 (Times are UTC)

LinearSolve.jl: because A\b is not good enough
07-28, 16:50–17:00 (UTC), Red

Need to solve Ax=b for x? Then use A\b! Or wait, no. Don't. If you use that method, how do you swap that out for a method that performs GPU offloading? How do you switch between UMFPACK and KLU for sparse matrices? Krylov subspace methods? What does all of this mean and why is A\b not good enough? Find out all of this and more at 11. P.S. LinearSolve.jl is the answer.

We tell people that to solve Ax=b, you use A\b. But in reality, that is insufficient for many problems. For dense matrices, LU-factorization, QR-factorization, and SVD-factorization approaches are all possible ways to solve this, each making an engineering trade-off between performance and accuracy. While with Julia's Base you can use lu(A)\b, qr(A)\b, and svd(A)\b, this idea does not scale to all of the cases that can arise. For example, Krylov subspace methods require you to set a tolerance tol... how do you expect to do that? krylov(A; tol=1e-7)\b? No, get outta here, the libraries don't support that. And even if they did, this still isn't as efficient as... you get the point.
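The Base approach mentioned above can be sketched directly, since each factorization object in Julia's standard LinearAlgebra library supports \ (a minimal illustration; the choice of a random well-conditioned A is just for demonstration):

```julia
using LinearAlgebra

A = rand(4, 4) + 4I   # shift the diagonal so A is well-conditioned
b = rand(4)

x1 = lu(A)  \ b   # LU: the default for square dense matrices, fastest
x2 = qr(A)  \ b   # QR: more robust for ill-conditioned problems
x3 = svd(A) \ b   # SVD: most robust, most expensive

# On a well-conditioned A, all three agree up to floating-point error,
# but there is no analogous one-liner for, say, a tolerance-controlled
# Krylov method -- which is exactly the gap described above.
```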

This becomes a major issue with packages. Say Optim.jl uses a linear solve within its implementation of BFGS (it does). Let's say the code is A\b. Now you know that in your case A is a sparse matrix with an irregular sparsity pattern, and thus KLU is 5x faster than the UMFPACK that Julia's \ defaults to. How do you tell Optim.jl to use KLU instead? Oops, you can't. But wouldn't it be nice if you could just pass linsolve = KLUFactorization() and have it do that?

Okay, we can keep belaboring the point, which is that the true interface for linear solvers needs to expose many features without sacrificing performance, and it needs to be a multiple-dispatch interface so that it can be used within other packages and have the algorithms swapped by passing just one type. What a great time for the SciML ecosystem to swoop in! This leads us to LinearSolve.jl, a common interface for linear solver libraries. What we will discuss is the following:

  • Why there are so many different linear solver methods. What are they used for? When is each recommended? Short list: LU, QR, SVD, RecursiveFactorization.jl (pure Julia, and the fastest?), GPU-offloaded LU, UMFPACK, KLU, CG, GMRES, Pardiso, ...
  • How do you swap between linear solvers in the LinearSolve.jl interface? It's easy: solve(prob, UMFPACKFactorization()) vs. solve(prob, KLUFactorization()).
  • How do you efficiently reuse factorizations? For example, the numerical factorization stage can be reused when swapping out b if doing many A\b operations. But did you know that if A is a sparse matrix, you only need to perform the symbolic factorization stage once per sparsity pattern? How do you do all of this efficiently? LinearSolve.jl has a caching interface that automates all of this!
  • What is a preconditioner? How do you use preconditioners?
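The bullets above can be sketched against the LinearSolve.jl interface itself. This is a minimal sketch following the names in the LinearSolve.jl documentation (LinearProblem, init, solve!, and the cache.b setter); treat the exact call signatures as assumptions subject to the current release:

```julia
using LinearAlgebra, SparseArrays, LinearSolve

A = sprand(100, 100, 0.05) + 100I   # diagonally dominant sparse matrix
b = rand(100)
prob = LinearProblem(A, b)

# Swapping algorithms is passing a different type -- nothing else changes:
sol1 = solve(prob, UMFPACKFactorization())
sol2 = solve(prob, KLUFactorization())

# Caching interface: factor once, reuse across right-hand sides.
cache = init(prob, KLUFactorization())
sol   = solve!(cache)   # symbolic + numeric factorization happen here
cache.b = rand(100)     # new b: the existing factorization is reused
sol   = solve!(cache)   # no refactorization needed
```

Because the algorithm is just an argument, a downstream package like Optim.jl only has to thread a `linsolve` option through to `solve` to give its users the full menu of methods.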

We will showcase examples where stiff differential equation solving is accelerated by over 20x just by swapping out to the correct linear solvers (https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/). This will showcase that it's not a small detail, and in fact, every library should adopt this swappable linear solver interface.
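As a concrete illustration of the preconditioner question above: in LinearSolve.jl a preconditioner is passed alongside an iterative solver, and the simplest useful choice is a Jacobi (diagonal) preconditioner. A hedged sketch, with the `Pl` keyword name taken from the LinearSolve.jl documentation (any object supporting `ldiv!` should work; exact spellings are assumptions):

```julia
using LinearAlgebra, SparseArrays, LinearSolve

A = sprand(1000, 1000, 0.01) + 50I
b = rand(1000)
prob = LinearProblem(A, b)

# GMRES without preconditioning:
sol = solve(prob, KrylovJL_GMRES())

# GMRES with a Jacobi (diagonal) left preconditioner: Pl approximates
# A's action cheaply so the Krylov iteration converges in fewer steps.
sol = solve(prob, KrylovJL_GMRES(), Pl = Diagonal(A))
```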

Research Affiliate and Co-PI of the Julia Lab at the Massachusetts Institute of Technology
Director of Modeling and Simulation at Julia Computing and Creator / Lead Developer of JuliaSim
Director of Scientific Research at Pumas-AI and Creator / Lead Developer of Pumas
Lead Developer of the SciML Open Source Software Organization

Chris Rackauckas

Chris' research and software is focused on Scientific Machine Learning (SciML): the integration of domain models with artificial intelligence techniques like machine learning. By utilizing structured scientific (differential equation) models together with the unstructured data-driven models of machine learning, our simulators can be accelerated and our science can better approximate the true systems, all while enjoying the robustness and explainability of mechanistic dynamical models.

Chris's recent work is focused on bringing personalized medicine to standard medical practice through the proliferation of software for scientific AI. Chris is at the cutting edge of mathematical methods for scientific simulation. He is the lead developer of the DifferentialEquations.jl solver suite along with over a hundred other Julia packages, earning him the inaugural Julia Community Prize, an outstanding paper award at the IEEE-HPEC conference for the computational derivation of efficient stochastic differential equation solvers, and front page features on many tech community sites. Chris' work on high performance differential equation solving is the engine accelerating many applications, from the MIT-CalTech CLiMA climate modeling initiative to the SIAM Dynamical Systems award winning DynamicalSystems.jl toolbox (of which DifferentialEquations.jl was the runner-up). His work is credited for the 15,000x acceleration of NASA Launch Services simulations and recently demonstrated a 60x-570x acceleration over Modelica tools. For these achievements Chris received the United States Department of the Air Force Artificial Intelligence Accelerator Scientific Excellence Award.

Chris brought these enhanced numerical approaches to the domain of pharmaceutical modeling and simulation as the creator and lead developer of Pumas. Pumas is scientific AI in clinical practice. Pumas makes it possible to predict the optimal medication dosage for individuals, reducing the costs and potential complications associated with treatments. Pumas is being used by many major pharmaceutical companies to predict personalized safe dosage regimens by incorporating realistic biological models (quantitative systems pharmacology) and deep learning into the traditional nonlinear mixed effects (NLME) modeling framework. These efforts on Pumas led to the International Society of Pharmacometrics' (ISoP) Mathematical and Computational Special Interest Group Award at the American Conference on Pharmacometrics (ACoP) 2019 for his work on improved clinical dosing via Koopman Expectations, along with the ACoP 2020 Quality Award for his work with Pfizer on GPU-accelerated quantitative systems pharmacology to accelerate preclinical analysis by 175x. Notably, Moderna adopted Pumas in 2020 to accelerate crucial clinical trials, noting "Pumas has emerged as our 'go-to' tool for most of our analyses in recent months". For these achievements, Chris received the Emerging Scientist award from ISoP, the highest early career award in pharmacometrics.

Chris started this work while completing his Masters and Ph.D. at the University of California, Irvine, where he was awarded the Mathematical and Computational Biology institutional fellowship, the Graduate Dean's Fellowship, the National Science Foundation's Graduate Research Fellowship, the Ford Predoctoral Fellowship, the NIH T32 Predoctoral Training Grant, the Center for Complex Biological Systems Opportunity Award, and the Data Science Initiative Summer Fellowship. His research with his advisor, Dr. Qing Nie, focused on methods for simulating stochastic biological models and detailing how the randomness inherent in biological organisms can be controlled using stochastic analysis. Chris bridged the gap between theory and practice by having a "wet lab bench" in Dr. Thomas Schilling's lab, where these methodologies were tested on zebrafish. Fluorescence Lifetime Imaging Microscopy (FLIM) measurements of retinoic acid in the zebrafish hindbrain showed that the predicted control proteins could attenuate inherent biological randomness. The result was a verified mathematical theory for controlling the randomness in biological signaling. Chris received the Kovalevsky Outstanding Ph.D. Thesis Award from the Department of Mathematics upon graduation and was showcased in an interview, "Interdisciplinary Case Study: How Mathematicians and Biologists Found Order in Cellular Noise", in Cell Press's iScience.

As an undergraduate at Oberlin College, Chris was awarded the NSF S-STEM scholarship and the Margaret C. Etter Student Lecturer Award by the American Crystallographic Association, an award usually given for PhD dissertations, for his work on 3+1 dimensional incommensurate crystal structure identification of H-acid. This award was given for Service Crystallography for its potential impact on industrial dye manufacturing.
