JuliaCon 2023

Julia for High-Performance Computing
2023-07-27, 32-G449 (Kiva)

The Julia for HPC minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.


As we embrace the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important for HPC. Scientists and developers interested in Julia for HPC need to know how to leverage the capabilities of the language and its ecosystem to address these issues, and which tools and best practices can help them achieve their performance goals.

What do we mean by HPC? While HPC is often associated mainly with running large-scale physical simulations, e.g., computational fluid dynamics, molecular dynamics, high-energy physics, or climate models, we use a more inclusive definition that goes beyond computational science and engineering. More recently, rapid prototyping in high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization, and in-situ analysis have expanded the scope of HPC. For us, the core of HPC is not running simple test problems faster; it encompasses everything that enables solving challenging problems in simulation or data science on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered by different vendors' CPUs and accelerators (e.g., GPUs).
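
To make this concrete, here is a minimal sketch (our illustration, not material from any of the talks) of the hardware-agnostic style of code Julia enables: a function written against the AbstractArray interface runs on CPU arrays and, unchanged, on GPU array types provided by packages such as CUDA.jl, AMDGPU.jl, or oneAPI.jl.

    # A minimal sketch of hardware-agnostic array code in Julia.
    # Written against the AbstractArray interface with broadcasting, the same
    # function works on CPU Arrays and, unchanged, on GPU array types such as
    # CUDA.CuArray, AMDGPU.ROCArray, or oneAPI.oneArray.
    function saxpy!(y::AbstractArray, a::Number, x::AbstractArray)
        y .= a .* x .+ y   # broadcasting compiles to a GPU kernel for GPU arrays
        return y
    end

    x = rand(Float32, 1_000)
    y = zeros(Float32, 1_000)
    saxpy!(y, 2.0f0, x)    # CPU run; with CUDA.jl, e.g.: saxpy!(CuArray(y), 2.0f0, CuArray(x))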

In this three-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of ~10-minute talks. These overview talks aim to introduce and motivate the audience by highlighting the aspects that make the Julia language beneficial for scientific HPC workflows, such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved time for participants to interact, discuss, and share the current landscape of their investments in Julia HPC, and to network with colleagues over topics of common interest.
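
For scalable deployments across many nodes, the common pattern in Julia HPC is distributed-memory parallelism via MPI.jl. Below is a minimal, self-contained sketch of a distributed reduction (again our illustration; the file name and numbers are made up), of the kind the application talks apply at much larger scale.

    # A minimal sketch of a distributed reduction with MPI.jl.
    # Launch with, e.g., `mpiexecjl -n 4 julia reduce.jl`
    # (mpiexecjl is the launcher wrapper installed via MPI.install_mpiexecjl()).
    using MPI

    MPI.Init()
    comm  = MPI.COMM_WORLD
    rank  = MPI.Comm_rank(comm)   # 0-based id of this process
    nproc = MPI.Comm_size(comm)   # total number of MPI ranks

    # Each rank computes a partial sum; Allreduce combines them across all ranks.
    local_sum  = sum((rank * 1_000 + 1):((rank + 1) * 1_000))
    global_sum = MPI.Allreduce(local_sum, +, comm)

    rank == 0 && println("sum over $nproc ranks: $global_sum")
    MPI.Finalize()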

Minisymposium Schedule

  • 10:30: Carsten Bauer (PC2) & Samuel Omlin (CSCS): Welcome and Overview

Part I (Scaling Applications)

  • 10:40: Ludovic Räss (ETHZ): Scalability and HPC readiness of Julia’s AMD GPU stack

  • 10:55: Dominik Kiese (Flatiron Institute): Large-scale vertex calculations in condensed matter physics with Julia

  • 11:10: Michael Schlottke-Lakemper (RWTH Aachen) & Hendrik Ranocha (U Hamburg): Scaling Trixi.jl to more than 10,000 cores using MPI

  • 11:25: Q&A

  • 11:30: Short break

Part II (Performance Evaluation & Tuning)

  • 11:40: William F Godoy (ORNL): Julia programming models evaluation on Oak Ridge Leadership Computing Facilities: Summit and Frontier

  • 11:55: Mosé Giordano (UCL): MPI, SVE, 16-bit: using Julia on the fastest supercomputer

  • 12:10: Carsten Bauer (PC2): HPC Tools for Julia: Inspecting, Monitoring, and Tuning Performance

  • 12:25: Q&A

  • 12:30: Time for lunch 😉


Lunch break (1.5 h)


Part III (Ecosystem Developments)

  • 14:00: Tim Besard (Julia Computing): Update on oneAPI.jl developments

  • 14:15: Julian Samaroo (MIT): Dagger in HPC: GPUs, MPI, and Profiling at great speed

  • 14:30: Johannes Blaschke (NERSC): Improvements to Distributed.jl for HPC

  • 14:45: Q&A

  • 15:00: Fin.

The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia grows and positions itself in the larger HPC community thanks to its appeal in scientific computing. The event also exemplifies the strength of the existing Julia HPC community, which collaboratively prepared it. We are an international, multi-institutional, and multidisciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We welcome newcomers from all backgrounds who share our interest and hope to bring them together in this minisymposium.

In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2023. During the main conference, a Birds of a Feather session will bring the community together for further discussion and allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant to HPC developers and users alike.

Carsten is a postdoctoral theoretical physicist from Cologne, Germany, and a senior HPC consultant within the German National High-Performance Computing Alliance (NHR) at the Paderborn Center for Parallel Computing (PC2).

Ludovic is a researcher at ETH Zurich, Switzerland, working on GPU computing and geo-HPC.

Samuel is a Computational Scientist at the Swiss National Supercomputing Centre (CSCS), ETH Zurich, where he is responsible for Julia computing.

Michael is an interim professor (Vertretungsprofessor) for Computational Mathematics and research software engineer at the Applied and Computational Mathematics Research Lab at RWTH Aachen University, Germany. His research focus is on numerical methods for adaptive multi-physics simulations, research software engineering for high-performance computing, and scientific machine learning.

Johannes leads the Data area of NERSC's application readiness program (NESAP) for Perlmutter and works with real-time and urgent computing users to improve the facility- and system-level performance of their workflows. He acts as the liaison to several NESAP teams exploring new programming models and developing new functionality in the areas of performance portability, integrated research infrastructure, and the Superfacility project.