JuliaCon 2025

Hands-on with Julia for HPC on GPUs and CPUs
2025-07-22, Main Room 3

Julia offers the best of both worlds: high-level expressiveness combined with low-level performance, allowing developers to leverage modern hardware accelerators without needing expertise in hardware-specific languages. This workshop demonstrates how Julia makes high-performance computing (HPC) accessible by covering key topics such as resource configuration, distributed computing, CPU and GPU code optimization, and scalable workflows.


Why wait hours for computations when they could take seconds? Why struggle with rewriting high-level prototypes in lower-level languages just to achieve performance? Traditionally, writing fast code for HPC systems requires mastering hardware-specific languages, leading to complex, expensive, and difficult-to-maintain software. Julia removes this barrier by providing a seamless, high-performance environment and package ecosystem where domain experts can easily integrate and reuse optimized code, making HPC more approachable and efficient. Participants will gain hands-on experience running Julia code on a GPU-powered supercomputer.

This workshop will introduce practical techniques for developing and optimizing Julia applications on modern HPC systems. We will cover:
1. Resource management and configuration
2. Single-node parallelization with multithreading
3. GPU programming using KernelAbstractions.jl, ParallelStencil.jl, and JACC.jl
4. Multi-node parallelization using MPI.jl, ImplicitGlobalGrid.jl, Distributed.jl, and Dagger.jl
5. Real-time visualization of multi-process simulations
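As a taste of topics 2 and 3 above, here is a minimal sketch (array names, sizes, and the workgroup size are illustrative, not taken from the workshop materials) contrasting a multithreaded CPU loop with a portable KernelAbstractions.jl kernel:

```julia
using Base.Threads
using KernelAbstractions

# Topic 2: single-node parallelization with multithreading.
# Each iteration writes an independent element, so @threads is safe here.
function axpy_threaded!(y, a, x)
    @threads for i in eachindex(x, y)
        @inbounds y[i] = a * x[i] + y[i]
    end
    return y
end

# Topic 3: the same operation as a backend-agnostic kernel.
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

x = ones(Float32, 1024)
y = zeros(Float32, 1024)
axpy_threaded!(y, 2.0f0, x)            # y now holds 2.0f0 everywhere

backend = CPU()                        # swap in e.g. CUDABackend() from CUDA.jl on an NVIDIA GPU
kernel! = axpy_kernel!(backend, 64)    # 64 = workgroup size
kernel!(y, 1.0f0, x; ndrange = length(x))
KernelAbstractions.synchronize(backend)
# y now holds 3.0f0 everywhere
```

Running this sketch requires KernelAbstractions.jl to be installed and, for the first function to actually run in parallel, Julia started with multiple threads (e.g. `julia -t auto`).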

Hands-On Learning Experience

The workshop is designed for both HPC users and newcomers curious about accelerating computations. It consists of two parts: a fundamentals session in the morning and an application session in the afternoon:
1. Fundamentals: Learn core Julia tools for parallel computing through simple, illustrative examples.
2. Application: Develop a parallelized version of a serial code and run it on two GPU-accelerated supercomputers: NERSC’s Perlmutter and PSC’s Bridges-2.

Who Should Attend?

This workshop is for researchers, engineers, and developers looking to accelerate scientific computing, machine learning, and other computational tasks. Whether you're already using HPC systems or just getting started, this session will equip you with the knowledge and tools to write high-performance and scalable Julia applications.

Prerequisites

Participants should have a basic understanding of Julia (functions, modules, control flow, and arrays) and familiarity with standard development tools like Git, SSH, and the Bash command line. No prior experience with multi-threading, distributed computing, or GPU programming is required.

We look forward to an engaging, inclusive, and knowledge-rich event—see you there!

This workshop requires an RSVP so that credits on the HPC system resources can be allocated.

Computational geoscientist with an Earth Science background; Julia GPU and HPC enthusiast.

Computational Scientist | Responsible for Julia computing, Swiss National Supercomputing Centre (CSCS), ETH Zurich

Johannes Blaschke is an HPC workflow performance expert leading the NERSC Science Acceleration Program (NESAP). His research interests include urgent and interactive HPC, as well as programming environments and models for cross-facility workflows.

Johannes supports Julia on NERSC's systems; his work includes one of the first examples of integrating MPI.jl and Distributed.jl with HPE's Slingshot network technology. Johannes is a zealous advocate for Julia as an HPC programming language, and a contributor to and organizer of Julia tutorials and BoFs at SC, at JuliaCon, and within the DoE.

I am a postdoctoral researcher at the MIT JuliaLab and an HPC enthusiast who loves solving complex problems by thinking in parallel. My research lies at the intersection of High-Performance Computing (HPC) and Artificial Intelligence (AI), exploring how advanced computational techniques can optimize AI algorithms for increased efficiency and effectiveness. I was honored as one of the Rising Stars in Computational and Data Sciences by the U.S. Department of Energy. My collaborations extend internationally, including with the Innovative Computing Lab at the University of Tennessee and MINES ParisTech. In Summer 2021, I was a visiting scholar at the Innovative Computing Lab, where I contributed to a milestone of the Software for Linear Algebra Targeting Exascale (SLATE) project, a joint initiative of the U.S. Department of Energy's Office of Science and the National Nuclear Security Administration (NNSA).