2024-07-12, Function (4.1)
The growth of computational power driven by novel accelerator architectures has pushed physics solvers to transition from their traditional multi-CPU approach to GPU-ready codebases. Moreover, the integration of data-driven models, and in particular machine learning (ML), into physics solvers narrows the choice of programming languages to those that natively offer both performance and high-level ML libraries.
In this talk, we will review how WaterLily.jl, a computational fluid dynamics Julia solver, has been ported from its original serial-CPU implementation to a backend-agnostic solver that can be seamlessly executed using multi-threading on CPUs or on GPUs from different vendors. The transition has been accomplished using a meta-programming approach that generalizes the implementation of array iterators while also relying on KernelAbstractions.jl to specialize each kernel for the target architecture. In single-GPU tests, we show that WaterLily.jl is as fast as state-of-the-art CFD solvers written in C++ or Fortran. Finally, we also discuss the potential of integrating ML models and differentiability into the solver.
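To give a flavour of the backend-agnostic pattern the abstract refers to, here is a minimal sketch using KernelAbstractions.jl. This is not WaterLily.jl's actual implementation; the names `laplacian!` and `apply_laplacian!`, the stencil, and the array sizes are illustrative assumptions. The same kernel runs multi-threaded on the CPU or on a GPU, depending only on the type of the arrays passed in.

```julia
# Minimal sketch (not WaterLily.jl's code): a backend-agnostic stencil kernel
# written with KernelAbstractions.jl. The backend is inferred from the array,
# so the same source runs on CPU threads or on GPUs from different vendors.
using KernelAbstractions

# 5-point Laplacian over the interior of a 2D array, using Cartesian indexing
# in place of hand-written nested loops.
@kernel function laplacian!(out, @Const(u))
    I = @index(Global, Cartesian)
    i, j = Tuple(I)
    out[i+1, j+1] = u[i, j+1] + u[i+2, j+1] +
                    u[i+1, j] + u[i+1, j+2] - 4u[i+1, j+1]
end

function apply_laplacian!(out, u)
    backend = get_backend(u)                      # CPU(), CUDABackend(), ROCBackend(), ...
    laplacian!(backend, 64)(out, u; ndrange=size(u) .- 2)
    KernelAbstractions.synchronize(backend)
    return out
end

# CPU usage; replacing these arrays with CUDA.jl CuArrays (or AMDGPU.jl ROCArrays)
# would launch the identical kernel on the corresponding GPU backend.
u   = rand(Float32, 66, 66)
out = zeros(Float32, 66, 66)
apply_laplacian!(out, u)
```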
Assistant Professor in Data Science and Machine Learning for Ship Hydrodynamics at Delft University of Technology.