2023-08-17 – HS 120
Handling and analyzing massive data sets is highly important for the vast majority of research communities, but it is also challenging, especially for communities without a background in high-performance computing (HPC). The Helmholtz Analytics Toolkit (Heat) library offers a solution to this problem by providing memory-distributed and hardware-accelerated array manipulation, data analytics, and machine learning algorithms in Python, aimed at non-experts in HPC.
In this presentation, we will provide an overview of Heat's current features and capabilities and discuss its role in the ecosystem of distributed array computing and machine learning in Python.
co-authors: C. Comito (FZJ), M. Götz (KIT), J. P. Gutiérrez Hermosillo Muriedas (KIT), B. Hagemeier (FZJ), P. Knechtges (DLR), K. Krajsek (FZJ), A. Rüttgers (DLR), A. Streit (KIT), M. Tarnawa (FZJ)
When it comes to enhancing the exploitation of massive data, machine learning methods are at the forefront of researchers' awareness. Much less so is the need for, and the complexity of, applying these techniques efficiently across large-scale, memory-distributed data volumes. In fact, these aspects of handling massive data sets pose major challenges to the vast majority of research communities, in particular to those without a background in high-performance computing. Often, the standard approach involves breaking up the data and analyzing it in smaller chunks; this can be inefficient and prone to errors, and sometimes it is not appropriate at all, because the context of the overall data set can be lost.
The Helmholtz Analytics Toolkit (Heat) library offers a solution to this problem by providing memory-distributed and hardware-accelerated array manipulation, data analytics, and machine learning algorithms in Python. The main objective is to make memory-intensive data analysis possible across various fields of research, in particular for domain scientists who are not experts in traditional high-performance computing but nevertheless need to tackle data analytics problems beyond the capabilities of a single workstation. The development of this interdisciplinary, general-purpose, and open-source scientific Python library started in 2018 and is a collaboration of three institutions of the Helmholtz Association: the German Aerospace Center (DLR), Forschungszentrum Jülich (FZJ), and the Karlsruhe Institute of Technology (KIT). The pillars of its development are:
- to enable memory distribution of n-dimensional arrays,
- to adopt PyTorch as the process-local compute engine (hence supporting GPU acceleration),
- to provide memory-distributed (i.e., multi-node, multi-GPU) array operations and algorithms, with asynchronous MPI communication (based on mpi4py) optimized under the hood, and
- to wrap these functionalities in a NumPy- and scikit-learn-like API, so that existing applications can be ported with minimal changes and the library remains usable by non-experts in HPC (see the sketch after this list).
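To make the NumPy/scikit-learn resemblance concrete, here is a minimal sketch of what distributed array manipulation and clustering with Heat can look like. It follows Heat's documented 1.x API (the split argument, heat.cluster.KMeans), but details may differ between versions, so treat it as an illustration rather than a definitive reference.

```python
# Minimal Heat sketch; run on several processes with, e.g.:
#   mpirun -n 4 python heat_sketch.py
import heat as ht

# A 4x4 float32 array distributed along axis 0 across the MPI processes;
# each process holds only its local slice as a PyTorch tensor.
x = ht.arange(16, dtype=ht.float32, split=0).reshape((4, 4))

# Operations look like NumPy, but MPI communication happens under the hood.
y = x @ x.T                      # distributed matrix product
col_means = ht.mean(y, axis=0)   # distributed reduction

print(col_means)

# scikit-learn-like estimator on distributed data (API assumed as in Heat 1.x).
from heat.cluster import KMeans

data = ht.random.randn(10000, 3, split=0)  # samples distributed over processes
kmeans = KMeans(n_clusters=4, init="kmeans++")
kmeans.fit(data)
labels = kmeans.predict(data)
```

The split argument determines the axis along which the global array is partitioned across processes; split=None would instead replicate the array on every process.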
In this talk we will give an illustrative overview of the current features and capabilities of our library. Moreover, we will discuss its role in the existing ecosystem of distributed computing in Python, and we will address technical and operational challenges in its further development.
Do you need to handle and analyze data that go beyond the capabilities of your workstation, but you are not an expert in HPC? Come to our talk and learn about Heat.
Category: High Performance Computing – Parallel Computing
Expected audience expertise: Python – some
Expected audience expertise: Domain – none
Project Homepage / Git
I recently obtained a PhD in numerical mathematics from the University of Bonn. Currently, I am a postdoctoral researcher in the Scientific Machine Learning group at the Institute for Software Technology of the German Aerospace Center (DLR).