Distributed GPU Computing with Dask
2019-09-04, 11:30–12:00, Track 1 (Mitxelena)

Dask has evolved over the last year to leverage multi-GPU computing alongside its existing CPU support. We present how NumPy-like libraries make this possible and how to get started writing distributed GPU software.


The need for speed remains central to scientific computing. Historically, computers were limited to a few dozen processors, but with modern GPUs we can have thousands, or even millions, of cores running in parallel on distributed systems.

However, developing software for distributed GPU systems can be difficult, both because writing GPU code is challenging for non-experts and because distributed systems are inherently complex. We can address these challenges by combining GPU-enabled libraries that mimic parts of the SciPy ecosystem, such as CuPy, RAPIDS, and Numba, which abstract away GPU programming complexity, with Dask, which abstracts away distributed computing complexity.
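As an illustration of how these pieces fit together, here is a minimal sketch, assuming CuPy and Dask are installed and a CUDA-capable GPU is available, of a Dask array whose chunks live on the GPU as CuPy arrays. The array shape and chunk sizes are illustrative only.

    import cupy
    import dask.array as da

    # Back Dask's random number generation with CuPy so that each
    # chunk is created directly on the GPU as a cupy.ndarray.
    rs = da.random.RandomState(RandomState=cupy.random.RandomState)

    # A large 2-D array split into chunks; sizes here are illustrative.
    x = rs.normal(10, 1, size=(20000, 20000), chunks=(2000, 2000))

    # NumPy-style operations dispatch to CuPy on each chunk via
    # NumPy's protocols; .compute() runs the task graph and returns
    # a single cupy.ndarray.
    result = (x + x.T).mean(axis=0).compute()

The same Dask code runs unchanged on CPU-backed NumPy chunks; only the chunk type changes.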

We discuss how Dask has come to support distributed GPU-enabled systems by leveraging community standards and protocols, reusing open source libraries for GPU computing, and keeping it simple to build highly configurable, accelerated distributed software.
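On the distributed side, the dask-cuda package provides cluster helpers. As a minimal sketch, assuming dask-cuda is installed, one can start one Dask worker per local GPU and connect a client to it:

    from dask.distributed import Client
    from dask_cuda import LocalCUDACluster

    # Start one Dask worker per GPU visible on this machine.
    cluster = LocalCUDACluster()

    # Connect a client; subsequent Dask computations are scheduled
    # across the GPU workers.
    client = Client(cluster)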


Abstract as a tweet – Dask leverages multi-GPU computing alongside its existing CPU support. Join us to get started writing distributed GPU software with Dask.
Domains – Big Data, General-purpose Python, Image Processing, Machine Learning, Open Source, Parallel computing / HPC, Vector and array manipulation
Project Homepage / Git – https://dask.org/
Domain Expertise – some Python
Skill Level – basic