2025-10-03, Jean-Baptiste Say Amphitheater
Language: English
Jacobians and Hessians play vital roles in scientific computing and machine learning, from optimization to probabilistic modeling. While these matrices are often considered too computationally expensive to calculate, their inherent sparsity can be leveraged to dramatically accelerate Automatic Differentiation (AD). By building on top of DifferentiationInterface.jl, we bring Automatic Sparse Differentiation to all major Julia AD backends, including ForwardDiff.jl and Enzyme.jl.
Despite AD's widespread adoption in the Julia ecosystem, Automatic Sparse Differentiation (ASD) remains an underutilized technique. This talk explores the key components of ASD, beginning with a concise overview of the theory behind conventional AD. We then dive into the two main components of ASD: sparsity pattern detection and matrix coloring.
Next, we introduce a novel Julia ASD pipeline built on three open-source packages:
1. SparseConnectivityTracer.jl - implements sparsity pattern detection via operator overloading
2. SparseMatrixColorings.jl - implements matrix coloring algorithms
3. DifferentiationInterface.jl - provides a unified interface for AD
Through DifferentiationInterface.jl, we bring ASD capabilities to all major Julia AD backends, including ForwardDiff.jl, ReverseDiff.jl, Zygote.jl, and Enzyme.jl. We hope this work can serve as a blueprint for future backend-agnostic AD improvements.
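As a rough sketch of how these three packages compose in practice, the snippet below wraps a dense ForwardDiff.jl backend in a sparse one (assuming the `AutoSparse` wrapper from ADTypes.jl as re-exported by DifferentiationInterface.jl; exact names may differ across package versions):

```julia
using DifferentiationInterface
using SparseConnectivityTracer: TracerSparsityDetector
using SparseMatrixColorings: GreedyColoringAlgorithm
import ForwardDiff

# Wrap a dense AD backend to enable ASD: sparsity pattern detection,
# matrix coloring, and compressed differentiation are handled internally.
sparse_backend = AutoSparse(
    AutoForwardDiff();
    sparsity_detector=TracerSparsityDetector(),
    coloring_algorithm=GreedyColoringAlgorithm(),
)

f(x) = diff(x)  # a function with a sparse (bidiagonal) Jacobian
x = rand(10)

# Same interface as dense AD, but the result is a sparse matrix
# computed with far fewer backend evaluations.
J = jacobian(f, sparse_backend, x)
```

The key design point illustrated here is backend agnosticism: swapping `AutoForwardDiff()` for another dense backend leaves the rest of the pipeline unchanged.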
The presentation concludes with practical demonstrations, featuring detailed performance benchmarks and clear guidelines for choosing between ASD and traditional AD approaches.
PhD student in the Machine Learning Group at TU Berlin.
Interested in automatic differentiation, explainability, and dynamical systems.
- Personal website: adrianhill.de
- GitHub: @adrhill
- Project spotlight