JuliaCon 2026

GPU acceleration in the QuantumKitHub ecosystem
2026-08-13, Room 3

QuantumKitHub's various packages provide low- and high-level tooling for implementing (among other things) tensor network algorithms. These algorithms are highly amenable to GPU acceleration, but there are many stumbling blocks along the way. Over the past year we have been actively working to add GPU support to the whole stack of tensor-network-related packages, and in this talk we will discuss the performance benefits and challenges so far, our roadmap, and how this work can benefit the wider JuliaGPU developer and user community.


The tensor network algorithms we want to accelerate often involve large objects (100s of GB or more) on which we need to perform batched matrix multiplication, factorizations such as (randomized) SVD or QR, and permutations. Depending on the physical system, we may also be working with tensors that are extremely block-sparse, with many irregularly sized blocks. These factors mean we need robust and efficient multi-GPU primitives, and the various computing centers we work with generally support either NVIDIA or AMD hardware, but not both. For these reasons, we have been developing a cross-platform set of extensions to the existing packages. These leverage existing JuliaGPU implementations where possible, but, given the varied and sometimes unusual use cases we generate, also involve hand-written solutions.

I have been a Julia contributor since 2015. I work mostly on GPUs, quantum packages, and linear algebra.

Software Research Fellow at the Flatiron Institute, CCQ, studying tensor network methods and algorithms for classical and quantum physics simulations.
