2021-07-29, 13:00–13:10, Blue
This talk will present the scaling and performance of the Oceananigans.jl ocean model on CPU and GPU systems. Oceananigans.jl is an all-Julia code designed to study geophysical fluid problems ranging from idealized turbulence to planetary-scale circulation. It uses the KernelAbstractions.jl package to support single-address-space parallelism on both CPUs and GPUs, and MPI.jl to support multi-node and multi-GPU parallelism; MPI.jl is used both directly and through PencilArrays.jl.
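To illustrate the single-source-code idea, here is a minimal sketch of a KernelAbstractions.jl kernel (not taken from Oceananigans itself; the kernel name and workgroup size are illustrative). The same `@kernel` definition runs on a CPU backend here, and would run unchanged on a GPU backend such as CUDA's by swapping the backend object:

```julia
using KernelAbstractions

# A trivial kernel: double every element of an array.
# @index(Global) gives this work-item's position in the global index space.
@kernel function mul2_kernel!(A)
    I = @index(Global)
    A[I] = 2 * A[I]
end

A = ones(1024)

# Instantiate the kernel for a backend (CPU here; a GPU backend would
# be passed instead on a GPU system) with a workgroup size of 64.
backend = CPU()
kernel! = mul2_kernel!(backend, 64)

# Launch over the whole array and wait for completion.
kernel!(A, ndrange = size(A))
KernelAbstractions.synchronize(backend)
```

The backend object is the only CPU/GPU-specific piece, which is what lets one kernel source serve both architectures.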
Oceananigans.jl is designed to be a user-friendly ocean modeling code, natively implemented in Julia, that can scale from single-core laptop studies to large parallel CPU and GPU cluster systems. The code's finite-volume algorithm has large inherent parallelism through spatial domain decomposition.
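The essence of spatial domain decomposition is assigning each process a contiguous block of the grid. A minimal sketch (a hypothetical helper, not Oceananigans' actual partitioning code) for splitting one dimension of the grid across MPI ranks:

```julia
# Split Nx grid cells across nranks ranks (0-based, MPI style),
# spreading any remainder over the first `rem` ranks so block
# sizes differ by at most one cell.
function local_range(Nx, nranks, rank)
    base, rem = divrem(Nx, nranks)
    lo = rank * base + min(rank, rem) + 1
    hi = lo + base - 1 + (rank < rem ? 1 : 0)
    return lo:hi
end
```

Each rank then updates only its own block, communicating with neighbors only to exchange halo cells at block boundaries, which is why the parallelism scales with the number of subdomains.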
In this talk we will look at the strong and weak scaling performance of nonlinear shallow water model configurations of Oceananigans. The code's numerical kernels utilize KernelAbstractions.jl, allowing a single source code to be maintained that supports both CPU and GPU parallel scenarios. Multi-process on-node and multi-node parallelism is supported by MPI.jl and is largely abstracted away, using data structures and associated types that dispatch communication operations depending on the active parallelism model.
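The dispatch mechanism can be sketched with plain Julia multiple dispatch. The type and function names below (`Serial`, `Distributed`, `fill_halo_regions!`) are hypothetical stand-ins for illustration, not Oceananigans' actual internals:

```julia
# Architecture types tag the active parallelism model.
abstract type AbstractArchitecture end
struct Serial <: AbstractArchitecture end
struct Distributed <: AbstractArchitecture
    nranks::Int
end

# On a single process, filling halo regions is a purely local operation.
fill_halo_regions!(field, ::Serial) = "local boundary fill"

# On a distributed architecture, the same call would instead post MPI
# sends and receives to exchange halos with neighboring ranks.
fill_halo_regions!(field, arch::Distributed) =
    "exchange halos across $(arch.nranks) ranks"
```

Higher-level solver code calls `fill_halo_regions!(field, arch)` without knowing which method runs, so communication stays abstracted behind the architecture type.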
We will briefly describe the benchmark problems used, then look at scaling over multiple threads on CPUs within a single node, across multiple GPUs, and across multiple CPU and GPU nodes in a high-performance computing cluster. We will present speedup metrics and cost-per-solution metrics. The latter can be used to provide some measure of cost-effectiveness across quite different architectures.
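As a sketch of what such metrics compute (the formulas are standard; the specific cost model here is an illustrative assumption, not the talk's): strong-scaling speedup compares serial to parallel wall time, efficiency normalizes it by resource count, and a cost-per-solution figure weights wall time by the number of devices and their hourly price, letting very different architectures be compared on one axis.

```julia
# Standard scaling metrics: t1 = serial wall time, tp = wall time on p workers.
speedup(t1, tp) = t1 / tp
efficiency(t1, tp, p) = speedup(t1, tp) / p

# A simple (assumed) cost model: wall time in hours times device count
# times a per-device-hour price, e.g. cloud or cluster charge-back rates.
cost_per_solution(t_hours, ndevices, price_per_device_hour) =
    t_hours * ndevices * price_per_device_hour
```

For example, a run that takes 100 s serially and 25 s on 8 workers has speedup 4x at 50% efficiency; whether those 8 devices are cheap CPUs or expensive GPUs is what the cost-per-solution metric captures.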