Sebastian Berg
Sebastian has been a NumPy developer for about 10 years now. After a PhD in physics, he worked as a postdoc at the Berkeley Institute for Data Science on NumPy, funded by grants from the Alfred P. Sloan Foundation and the Gordon and Betty Moore Foundation. Since 2022 he has been a software engineer at NVIDIA, where he continues to contribute to NumPy.
NumPy, NVIDIA
Position / Job – Software Engineer
Photo – euroscipy-2025/question_uploads/picture_gejohFS.JPG
Session
In recent years, many specialised libraries have emerged that implement optimised subsets of algorithms from larger Scientific Python libraries: supporting GPUs for acceleration, parallel processing, or distributed computing, or written in a lower-level programming language like Rust or C. These implementations offer significant performance improvements, but integrating them smoothly into existing workflows can be challenging. This talk explores different dispatching approaches that enable seamless integration of these faster implementations without breaking APIs or requiring users to switch libraries. We'll focus on the following two approaches:
- Backend library-based dispatching: existing library function calls are routed to a faster implementation provided by a separate backend library, for example one written for GPUs or in a different language, as adopted by projects like NetworkX and scikit-image (see the sketch after this list).
- Array API standardization and adoption: more specific to dispatching in array libraries. Based on the type of array passed into a function, the call is dispatched to the appropriate array library, such as TensorFlow, PyTorch, Dask, JAX, CuPy, or Xarray. This allows array-consuming libraries like SciPy and scikit-learn to be used in workflows built on these other array libraries (a second sketch follows below).
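As a rough illustration of the first approach, the sketch below uses NetworkX's backend dispatching. It assumes an optional backend package is installed (the GPU backend name "cugraph" is used purely as an example); without one, only the default call runs.

```python
# Minimal sketch of backend-based dispatching in NetworkX (3.2+).
# Assumes an optional backend package such as nx-cugraph is installed;
# without it, the backend= call below raises an error.
import networkx as nx

G = nx.erdos_renyi_graph(1000, 0.01, seed=42)

# Default: the pure-Python NetworkX implementation.
result_cpu = nx.betweenness_centrality(G)

# Same API, but explicitly routed to a faster backend implementation.
# "cugraph" is only an example name; any installed backend can be used.
result_gpu = nx.betweenness_centrality(G, backend="cugraph")
```

Depending on the configuration, dispatching can also happen automatically (for example via an environment variable that sets the backend priority), so existing user code does not need to change at all.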
Finally, we will go over how these approaches differ from each other and when to use which, based on different use cases and requirements.
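As a concrete illustration of the second approach, here is a minimal sketch of array-library-agnostic code written against the Array API standard. It assumes the array-api-compat helper package is available; the softmax function shown is an illustrative example only, not part of any of the libraries mentioned above.

```python
# Minimal sketch of array-library-agnostic code via the Array API standard.
# Assumes the array-api-compat package is installed.
from array_api_compat import array_namespace

def softmax(x):
    # Obtain the namespace (NumPy, CuPy, PyTorch, ...) from the input array
    # and use only standardised functions, so the computation runs with
    # whichever array library, and on whichever device, the caller already uses.
    xp = array_namespace(x)
    shifted = x - xp.max(x, axis=-1, keepdims=True)
    e = xp.exp(shifted)
    return e / xp.sum(e, axis=-1, keepdims=True)

import numpy as np
print(softmax(np.array([1.0, 2.0, 3.0])))

# The same function works unchanged on other Array API arrays, e.g.:
#   import cupy as cp; softmax(cp.asarray([1.0, 2.0, 3.0]))
```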