2020-07-31, Green Track
You might not know all of the latest methods in differential equations, all of the best knobs to tweak, how to properly handle sparsity, or how to parallelize your code. Or you might just write bad code. Don't you wish someone would just fix that for you automatically? It turns out that the latest feature of DifferentialEquations.jl, autooptimize, can do just that. This talk is both a demo of this cool new feature and a description of how it was built, so that other package authors can copy the approach.
A general compiler can only have so much knowledge, but when we know that someone is solving a differential equation, there are a million things we know. We know that different sizes of differential equations will do better or worse with different solver methods, we know that sparsity of the Jacobian will have a large impact on the speed of computation, we know that the user's f function describing the ODE can be considered independently from the rest of the program, and so on. In DifferentialEquations.jl, we have codified these ideas in order to build a toolchain that automatically optimizes a user's f function and spits out a more optimized DEProblem.
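For concreteness, here is a minimal sketch of the kind of user code in question, written against the standard DifferentialEquations.jl problem interface (the Lorenz system is just an illustrative choice):

```julia
using DifferentialEquations

# The user's f function: ordinary numerical Julia code for du/dt
function lorenz!(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p = (10.0, 28.0, 8 / 3)
prob = ODEProblem(lorenz!, u0, tspan, p)
sol = solve(prob)  # the default algorithm is chosen automatically
```

Everything the toolchain does starts from a problem of exactly this shape: lorenz! is plain Julia code that can be analyzed independently from the rest of the program.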
This works by first tracing the code into a symbolic sublanguage, ModelingToolkit.jl. By running the trace inside a task with a timeout, we can attempt an auto-trace which, if successful, gives us a complete symbolic mathematical description of the user's numerical code. We can then symbolically analyze the function to generate the analytical Jacobian, and even symbolically factorize that Jacobian if it is doable in the allotted time. From the symbolic world we can then auto-parallelize the generated Julia code, chunking the output into tasks for multithreading, or use a cost model to determine that the ODE is large enough to automatically distribute (with auto-GPU support coming soon).
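As a hedged sketch (using the public API roughly as documented around the time of the talk; exact signatures may have evolved), the tracing step applied to the prob defined above looks like this:

```julia
using ModelingToolkit, DifferentialEquations

# Trace the numerical ODEProblem into a symbolic ModelingToolkit system
sys = modelingtoolkitize(prob)

# Rebuild the problem with a symbolically derived analytical Jacobian
# (optionally sparse), compiled back into fast Julia code
prob_jac = ODEProblem(sys, [], prob.tspan, jac = true, sparse = true)
sol = solve(prob_jac, Rodas5())  # the stiff solver now uses the exact Jacobian
```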
If the system is not symbolically traceable (e.g., there is a while loop whose behavior depends on an input value, which is quite uncommon), then we resort to IR-based and adaptive analysis. We will demonstrate how SparsityDetection.jl can automatically identify the sparsity pattern of the Jacobian of a Julia code, and how SparseDiffTools.jl then accelerates the solution of stiff equations by performing a matrix coloring and optimizing the Jacobian construction for the problem; a sketch of this path is shown below. We will then discuss how DifferentialEquations.jl automatically picks the solver algorithm, defaulting to methods that can switch between stiff and non-stiff integrators, determining stiffness on the fly with heuristics.
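A sketch of this fallback path, using those packages' documented entry points from around the time of the talk (the banded f! below is a stand-in for arbitrary user code):

```julia
using SparsityDetection, SparseArrays, SparseDiffTools

# A stand-in in-place user function with banded coupling (illustrative only)
function f!(out, x)
    out[1] = -2x[1] + x[2]
    for i in 2:(length(x) - 1)
        out[i] = x[i - 1] - 2x[i] + x[i + 1]
    end
    out[end] = x[end - 1] - 2x[end]
    return nothing
end

input = rand(30)
output = similar(input)

# Abstract-interpretation pass that discovers the Jacobian's sparsity pattern
pattern = jacobian_sparsity(f!, output, input)
jac = Float64.(sparse(pattern))

# Matrix coloring: columns that never interact share a color, so the full
# Jacobian is recovered in ~3 differentiation passes here instead of 30
colors = matrix_colors(jac)
forwarddiff_color_jacobian!(jac, f!, input, colorvec = colors)
```

On the solver side, the automatic default picked by solve(prob) is a composite method; the switching behavior can also be requested explicitly with, e.g., AutoTsit5(Rosenbrock23()), which starts with a non-stiff integrator and hands off to a stiff one when the stiffness heuristic fires.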
Together, we have demonstrated that these auto-optimizations can improve the code of even experienced Julia programmers by more than 100x, by enabling sparsity and coloring optimizations they may not have known about, and by parallelizing code that is either difficult to parallelize by hand or is automatically generated and thus hard to intervene in.
Christopher Rackauckas is an Applied Mathematics Instructor at the Massachusetts Institute of Technology and a Senior Research Analyst at the University of Maryland, Baltimore, School of Pharmacy in the Center for Translational Medicine. Chris's research is focused on numerical differential equations and scientific machine learning, with applications from climate to biological modeling. He is the developer of many core numerical packages for the Julia programming language, including DifferentialEquations.jl, for which he won the inaugural Julia community prize, and Pumas.jl for pharmaceutical modeling and simulation. He is the lead developer of the SciML open source scientific machine learning organization, along with its packages such as DiffEqFlux.jl and NeuralPDE.jl.