JuliaCon 2026

KAPseudospectra.jl: GPU-Accelerated Pseudospectra via KernelAbstractions.jl
2026-08-14, Room 3

Pseudospectra generalize eigenvalue analysis by characterizing how the resolvent norm ||(zB - A)^{-1}|| varies over the complex plane, revealing transient behavior and stability properties that eigenvalues alone miss.
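As a minimal illustration of what eigenvalues miss (with B = I, the standard case; this is textbook material, not the package's API), a nonnormal matrix can have a huge resolvent norm at a point z far from any eigenvalue:

```julia
using LinearAlgebra

# Nonnormal 2x2 example: both eigenvalues are 0, yet the resolvent norm
# at z = 0.1 vastly exceeds the 1/|z - lambda| = 10 that a normal matrix
# with the same spectrum would give.
A = [0.0 100.0; 0.0 0.0]
z = 0.1 + 0.0im
resnorm = opnorm(inv(z*I(2) - A))   # equals 1 / sigma_min(z*I - A)
```

Here `resnorm` is on the order of 10^4, so z = 0.1 lies well inside the 10^{-4}-pseudospectrum even though the spectrum is {0}.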
Applications in non-Hermitian physics, power systems, and data-driven robust control increasingly require large-scale pseudospectral computations that existing CPU-based tools cannot handle efficiently for large dense matrices.

In this talk, we present KAPseudospectra.jl, the first GPU-accelerated pseudospectra package, built on KernelAbstractions.jl for vendor-neutral execution across supported backends.
The package implements a batched Inverse Hermitian Lanczos (IHL) iteration that approximates the smallest singular value of the shifted pencil at each grid point in only a few steps; after a single O(N^3) Schur decomposition performed once on the CPU, each grid point costs only O(N^2) operations.
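The O(N^2)-per-point cost rests on a standard trick that the abstract alludes to (sketched here with B = I and plain inverse iteration rather than the package's IHL; function name and details are illustrative): after A = Q*T*Q', sigma_min(z*I - A) = sigma_min(z*I - T), and z*I - T is triangular, so each iteration step needs only two O(N^2) triangular solves.

```julia
using LinearAlgebra

# Inverse iteration on (T1'*T1), where T1 = z*I - T is upper triangular.
# The dominant eigenvalue of (T1'*T1)^{-1} is 1/sigma_min^2, so the
# iterate's norm converges to 1/sigma_min^2.
function smin_after_schur(T::AbstractMatrix, z::Number; iters::Int = 30)
    N = size(T, 1)
    T1 = UpperTriangular(z*I(N) - Matrix(T))
    v = normalize!(ones(ComplexF64, N))  # deterministic start for clarity
    w = v
    for _ in 1:iters
        w = T1 \ (T1' \ v)   # (T1'*T1)^{-1} * v: two O(N^2) triangular solves
        v = w / norm(w)
    end
    return 1 / sqrt(norm(w))
end
```

For example, `smin_after_schur([2.0 1.0; 0.0 0.5], 0.0)` agrees with `minimum(svdvals(-[2.0 1.0; 0.0 0.5]))`.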
Central to the IHL iteration is KATRSM.jl, a submodule providing batched triangular solvers that keep the pencil (zB - A) factored on-device, largely eliminating host-device data movement.
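The batched-solve semantics can be sketched on the CPU as follows (the real KATRSM.jl kernels run on-device via KernelAbstractions.jl; the name and signature here are illustrative, not the package's API): one call solves T_b * x_b = b_b for every batch index b.

```julia
using LinearAlgebra

# CPU sketch of a batched upper-triangular solve over a stacked
# N x N x batch array of factors, overwriting X column-by-column.
function batched_trsm!(X::AbstractMatrix, Ts::AbstractArray{<:Number,3})
    @assert size(Ts, 1) == size(Ts, 2) == size(X, 1)
    for b in axes(Ts, 3)
        ldiv!(UpperTriangular(view(Ts, :, :, b)), view(X, :, b))
    end
    return X
end
```

On a GPU the loop over b becomes the parallel index, which is what makes keeping the factored pencil resident on-device pay off.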
Multi-device parallelism is achieved by partitioning the complex grid across available GPUs with automatic memory-aware batching.
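A minimal sketch of such a partition (names and round-robin policy assumed; the package's memory-aware heuristic would pick `batch` from available device memory rather than take it as an argument):

```julia
# Split the flattened complex grid into fixed-size batches, then assign
# batches round-robin to devices; returns one vector of batches per device.
function partition_grid(zs::AbstractVector, ndevices::Integer, batch::Integer)
    chunks = [zs[i:min(i + batch - 1, end)] for i in 1:batch:length(zs)]
    return [chunks[d:ndevices:end] for d in 1:ndevices]
end
```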

We demonstrate the package on matrices up to dimension 2^14, and discuss the design decisions that enable this codebase to target multiple compute backends through Julia's package extension system and KernelAbstractions.jl.
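The package-extension mechanism mentioned above keeps vendor GPU stacks out of the hard dependency list. A hypothetical Project.toml fragment (the extension name is assumed for illustration; the CUDA.jl UUID is the registered one):

```toml
[weakdeps]
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

[extensions]
KAPseudospectraCUDAExt = "CUDA"
```

With this layout, the CUDA-specific code loads only when a user also loads CUDA.jl, while the core package depends solely on KernelAbstractions.jl.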

See also: KAPseudospectra.jl

I am a PhD student at Virginia Tech working on robustness of data-driven system identification methods and their application to solving nonlinear eigenvalue problems using contour integral methods.