JuliaCon 2020 (times are in UTC)

AMDGPU Computing in Julia
07-30, 12:50–13:00 (UTC), Green Track

I will describe the current state of Julia's AMDGPU stack and how it compares
to Julia's CUDA stack, interesting advantages of AMD's ROCm platform that
we can leverage from Julia, as well as my own perspective on the future of
Julia's GPGPU ecosystem.


NVIDIA's CUDA (Compute Unified Device Architecture) has been the dominant
toolkit for general-purpose GPU (GPGPU) computing for many years, and has
excellent support in Julia. However, AMD's ROCm (Radeon Open Compute) platform
is rising in popularity, and many Julia users wish to use their AMD GPUs in
the same ways that CUDA users can today. I will provide an overview
of how to do exactly that, and what pitfalls users need to be aware of.
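To make the comparison concrete, here is a minimal sketch of what array-level
programming looks like with the AMDGPU stack, mirroring the familiar CUDA.jl
workflow with ROCArray in place of CuArray; exact package and function names
may vary between releases.

    using AMDGPU

    a = ROCArray(rand(Float32, 1024))   # copy host data to the GPU
    b = ROCArray(rand(Float32, 1024))

    c = a .+ 2f0 .* b                   # broadcasting compiles to a GPU kernel
    total = sum(c)                      # reductions also run on the device

    Array(c)                            # copy the result back to host memory

Custom kernels can also be written in Julia and launched with the @roc macro,
the analogue of CUDA.jl's @cuda.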

Beyond basic functionality, the open source nature of the AMDGPU kernel
module makes it straightforward to support advanced features like hostcall and
unified memory, and opens up opportunities to unify computations across CPUs
and GPUs without much difficulty. I will present some short demos of how these
features can be used in practice.
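As a rough sketch of the kind of CPU/GPU interplay this enables (the hostcall
and unified memory interfaces themselves are left to the demos, since their
exact APIs are still evolving), note that a generic Julia function can already
run unchanged on both host arrays and GPU arrays:

    using AMDGPU
    using Statistics

    # A generic function with no GPU-specific code: broadcasting and reductions
    # dispatch to GPU kernels when given a ROCArray.
    rescale(x) = (x .- mean(x)) ./ (maximum(x) - minimum(x))

    host_data  = rand(Float32, 10_000)
    cpu_result = rescale(host_data)      # runs on the CPU

    gpu_data   = ROCArray(host_data)     # upload the same data to the GPU
    gpu_result = rescale(gpu_data)       # the same code runs as GPU kernels

    # Bring the device result back to the host and check both paths agree.
    isapprox(cpu_result, Array(gpu_result); rtol = 1f-4)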

Going forward, a number of innovations in Julia's GPGPU ecosystem are
possible. Merging compiler codebases, providing better abstractions for
generic kernel programming, integration with distributed computing libraries,
and automated GPU-ification of scientific and machine learning codes are all
on the table for the future. I will briefly explore what exciting
possibilities users and developers have to look forward to, and what work
still needs to be done.

I am the founder of Julia's AMDGPU stack, and am interested in helping users make use of every bit of computing power they have available. I am particularly interested in using GPUs to accelerate real-time execution of spiking neural networks, as well as other machine learning algorithms.