2026-08-12 –, Room 4
Medical image reconstruction for modalities such as magnetic resonance imaging (MRI) and magnetic particle imaging (MPI) involves solving computationally intensive inverse problems. The MRIReco.jl and MPIReco.jl reconstruction packages feature a shared, modular optimization backend that provides efficient and reusable solvers for various imaging modalities. In this talk, I will present how we extended this backend with vendor-agnostic GPU acceleration, enabling efficient reconstruction across both different imaging modalities and different GPU backends.
Medical image reconstruction for tomographic modalities such as magnetic resonance imaging (MRI) and magnetic particle imaging (MPI) involves solving ill-posed inverse problems that are typically addressed through regularized least-squares optimization. As imaging techniques advance, computational demands increase significantly, often requiring GPU acceleration for practical use. Additionally, the operators involved often become too large to store in memory, requiring or benefiting from (composable) matrix-free operator implementations.
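For concreteness, the regularized least-squares problem mentioned above is conventionally written as (standard textbook notation, not notation taken from the talk):

```latex
\min_{x} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda \, \mathcal{R}(x)
```

where A is the (possibly matrix-free) encoding operator of the imaging modality, b the measured data, R a regularization term, and λ its weight. The iterative solvers discussed below only ever touch A through matrix-vector products, which is what makes matrix-free implementations viable.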
The MRIReco.jl and MPIReco.jl reconstruction packages feature a shared optimization backend that prioritizes code reuse. LinearOperatorCollection.jl provides matrix-free implementations of common image processing operations (FFT, NFFT, DCT, Wavelet) and enables their composition through custom building blocks and the underlying LinearOperators.jl package. The collection also provides structure-aware optimizations that exploit properties of composed operators for computational efficiency. Modality-specific packages like MRIOperators.jl and operators in MPIReco.jl implement encoding operators for their respective imaging physics. RegularizedLeastSquares.jl serves as the shared optimization backend, providing reusable iterative solvers (CGNR, FISTA, ADMM) that work with any operator implementing matrix-vector products and adjoints. This architecture allows the same solver implementations to work across different imaging modalities.
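The solver contract described above — "any operator implementing matrix-vector products and adjoints" — can be illustrated with a minimal, self-contained sketch. Note that `DiagonalOp` and `cgnr` below are illustrative stand-ins, not the actual API of LinearOperatorCollection.jl or RegularizedLeastSquares.jl; the point is only that a duck-typed operator supporting `*` and `adjoint` is all a CGNR-style solver needs.

```julia
using LinearAlgebra

# Hypothetical matrix-free operator: applies an elementwise scaling
# without ever materializing a matrix. Any type supporting `A * x`
# and `A' * y` satisfies the solver contract described in the text.
struct DiagonalOp{V<:AbstractVector}
    d::V
end
Base.:*(A::DiagonalOp, x::AbstractVector) = A.d .* x
Base.adjoint(A::DiagonalOp) = DiagonalOp(conj.(A.d))

# Minimal CGNR sketch: conjugate gradients on the normal equations
# A'A x = A'b, touching A only through `*` and `adjoint`.
function cgnr(A, b; iterations = 20)
    x = zero(A' * b)
    r = b - A * x
    z = A' * r          # residual of the normal equations
    p = copy(z)
    zz = dot(z, z)
    for _ in 1:iterations
        q = A * p
        α = zz / dot(q, q)
        x .+= α .* p
        r .-= α .* q
        z = A' * r
        zz_new = dot(z, z)
        p .= z .+ (zz_new / zz) .* p
        zz = zz_new
    end
    return x
end

A = DiagonalOp([1.0, 2.0, 4.0])
b = [1.0, 2.0, 4.0]
x = cgnr(A, b)   # recovers [1.0, 1.0, 1.0] up to solver tolerance
```

Because the solver never indexes into A, the same loop works unchanged whether A is a dense matrix, an FFT-based operator, or a composition of such building blocks.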
In this talk, I will present our recent technical developments in extending vendor-agnostic GPU acceleration throughout this entire stack. This is achieved through a combination of Julia's features, such as multiple dispatch, parametric types, and package extensions, as well as the Julia GPU ecosystem, particularly GPUArrays.jl, KernelAbstractions.jl, and Adapt.jl. Using parametric types and Adapt.jl means that our operators and solvers remain generic over array types and work with both CPU and GPU arrays, while GPUArrays.jl and KernelAbstractions.jl allow us to write GPU kernels that are compatible with different GPU backends. Lastly, package extensions allow GPU-specific code to be loaded conditionally, i.e., only when users load their preferred GPU backend, so the core packages remain lightweight with no GPU dependencies. Users can enable GPU acceleration with minimal code changes by loading a GPU package and providing an array type.
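The array-genericity pattern described above can be sketched as follows. The type and function names are illustrative, not the packages' API: the idea is simply that an operator parametric over its storage type, combined with broadcasting, makes the same code run on CPU `Array`s and, once a backend package such as CUDA.jl is loaded, on GPU arrays.

```julia
# Operator parametric over its storage type: V can be Array, CuArray,
# ROCArray, etc. — the struct definition does not mention any backend.
struct WeightingOp{T, V<:AbstractVector{T}}
    weights::V
end

# Broadcasting dispatches to the array's own backend, so this single
# line becomes a fused GPU kernel launch when both arguments live on
# the GPU, with no GPU-specific code in this package.
apply(op::WeightingOp, x::AbstractVector) = op.weights .* x

w = WeightingOp([1.0, 2.0, 3.0])
y = apply(w, [1.0, 1.0, 1.0])    # CPU: [1.0, 2.0, 3.0]

# With a GPU backend loaded, the same call sites work unchanged, e.g.:
#   using CUDA
#   w_gpu = WeightingOp(CuArray([1.0, 2.0, 3.0]))
#   apply(w_gpu, CUDA.ones(3))   # runs on the GPU
```

In the real packages, Adapt.jl additionally handles moving such nested structures between backends, and KernelAbstractions.jl covers the cases where a hand-written kernel is needed rather than a broadcast.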
Related organizations:
https://github.com/JuliaImageRecon
https://github.com/MagneticParticleImaging
https://github.com/MagneticResonanceImaging
Related repositories:
https://github.com/JuliaImageRecon/LinearOperatorCollection.jl
https://github.com/JuliaImageRecon/RegularizedLeastSquares.jl
https://github.com/MagneticResonanceImaging/MRIReco.jl
https://github.com/MagneticParticleImaging/MPIReco.jl
I'm a PhD student at the Institute for Biomedical Imaging at Hamburg University of Technology (TUHH), Germany. My research focuses on parallel computing for medical imaging, particularly magnetic particle imaging. Since 2025, I have also worked as a software engineer at the Fraunhofer Research Institution for Individualised Medical Technology and Engineering (IMTE), Germany.