JuliaCon 2022 (Times are UTC)

Optimizing Floating Point Math in Julia
07-28, 19:30–20:00 (UTC), Purple

Why did exp10 get 2x faster in Julia 1.6? One reason is that, unlike most other languages, Julia doesn't rely on the operating system's math library (libm) for its implementations. This talk will give an overview of the improvements to Julia's math library since version 1.5 and of areas for future improvement. Topics covered will include computing optimal polynomials, table-based implementations, and bit-hacking for peak performance.


In this talk we will cover the fundamental numerical techniques for implementing accurate and fast floating-point functions. We will start with a brief review of how floating-point math works, and then use the changes made to exp and friends (exp2, exp10, and expm1) over the past two years to demonstrate the main techniques for computing these functions.
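
A quick illustration of the starting point (my own minimal sketch, not code from the talk): every Float64 is one sign bit, 11 exponent bits, and 52 mantissa bits, and pulling those fields apart with a reinterpret is the basis for the bit tricks discussed below.

```julia
# Minimal sketch: inspecting the IEEE 754 layout of a Float64.
x = 6.75
bits = reinterpret(UInt64, x)

sbit  = bits >> 63                          # sign bit (0 for positive numbers)
ebits = Int((bits >> 52) & 0x7ff) - 1023    # unbiased exponent (bias is 1023)
mbits = bits & 0x000f_ffff_ffff_ffff        # 52 stored fraction bits

# For normal numbers, x == (-1)^sbit * 2^ebits * (1 + mbits / 2^52)
@assert x == (-1)^sbit * 2.0^ebits * (1 + mbits / 2^52)
```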

Specifically, we will look at the following techniques, with a small illustrative sketch after the list:
* Range reduction
* Polynomial kernels using the Remez algorithm
* Fast polynomial evaluation
* Table-based methods
* Bit manipulation (to make everything fast)
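
To make these concrete, here is a deliberately simplified exp2 for Float64 (my own sketch under stated assumptions, not the implementation in Base): the range reduction and the bit trick for 2^k mirror the general approach, but the kernel below uses plain Taylor coefficients where a production version would use Remez-optimized (minimax) ones, and it ignores subnormals, overflow, and NaN.

```julia
# Toy sketch of exp2(x) = 2^x for Float64 (illustrative only, not Base's version).
function toy_exp2(x::Float64)
    # 1. Range reduction: write x = k + r with k an integer and r in [-0.5, 0.5],
    #    so that 2^x = 2^k * 2^r and the polynomial only has to cover 2^r.
    k = round(x)
    r = x - k

    # 2. Polynomial kernel: approximate 2^r = exp(r * log(2)) on the reduced range.
    #    These are plain Taylor coefficients; a real implementation would use
    #    Remez-optimized (minimax) coefficients of the same degree.
    coeffs = ntuple(i -> log(2)^(i - 1) / factorial(i - 1), 8)
    p = evalpoly(r, coeffs)     # fast Horner-style evaluation

    # 3. Bit manipulation: build 2^k by writing the biased exponent directly into
    #    the bit pattern of a Float64 (valid only for k in the normal range).
    twok = reinterpret(Float64, UInt64(1023 + Int(k)) << 52)

    return twok * p
end

toy_exp2(3.7), 2.0^3.7   # agree to roughly 1e-8 relative accuracy with this kernel
```

A table-based variant of the same idea stores precomputed values of 2^(j/N) for some table size N, so the reduced argument becomes even smaller and an accurate kernel needs only a handful of polynomial terms; that trade of memory for polynomial degree is one of the table-based methods the talk covers.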

We will also discuss how to test the accuracy of implementations using FunctionAccuracyTests.jl, as well as areas for future improvement in Base and beyond. Present and future areas of work on optimized routines include Bessel functions, cumulative distribution functions, and optimized elementary functions for DoubleFloats.jl, and PRs across the entire package ecosystem are always welcome.
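
The underlying idea of such accuracy testing (sketched below with my own helper, not the FunctionAccuracyTests.jl API) is to compare the Float64 path of a function against a high-precision BigFloat reference and report the error in ulps:

```julia
# Sketch: maximum ulp error of a function's Float64 path, measured against its
# BigFloat (MPFR) path as a high-precision reference. `max_ulp_error` is a
# hypothetical helper written for illustration, not part of any package.
setprecision(BigFloat, 256)

function max_ulp_error(f, xs)
    worst = 0.0
    for x in xs
        y    = f(x)                  # Float64 result under test
        yref = f(big(x))             # high-precision reference result
        ulp  = eps(Float64(yref))    # size of one ulp near the true value
        err  = abs(Float64((big(y) - yref) / ulp))
        worst = max(worst, err)
    end
    return worst
end

xs = 20 .* rand(10_000) .- 10        # random test points in [-10, 10)
max_ulp_error(exp2, xs)              # well-tuned implementations stay near 1 ulp or below
```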

I graduated from Carleton College, where I studied math and computer science, and now work for JuliaComputing on analog circuit simulation. I also make math go vroom.