Deploying multi-GPU workloads on Kubernetes in Python
08-17, 11:45–12:05 (Europe/Zurich), Aula

The RAPIDS suite of open-source software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs with minimal code changes and no new tools to learn.
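
For example, much of the cuDF API mirrors pandas, so moving an existing workflow onto the GPU often needs little more than an import change. A minimal sketch, assuming cuDF is installed and a GPU is available (the data here is purely illustrative):

```python
import cudf  # GPU DataFrame library from RAPIDS

# Build a small DataFrame in GPU memory.
df = cudf.DataFrame(
    {"sensor": ["a", "a", "b", "b"], "value": [1.0, 2.0, 3.0, 4.0]}
)

# Familiar pandas-style groupby/aggregation, executed on the GPU.
print(df.groupby("sensor")["value"].mean())
```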

Dask is an open-source library that provides advanced parallelism for Python by breaking computations into a task graph, which a task scheduler then evaluates across many workers.
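
As a small sketch of that model, dask.delayed records each call as a node in a task graph instead of running it immediately; calling compute() hands the graph to a scheduler, which can execute the tasks on many workers (this example assumes only that the dask package is installed):

```python
from dask import delayed


@delayed
def inc(x):
    return x + 1


@delayed
def add(x, y):
    return x + y


a = inc(1)         # a task in the graph, not yet executed
b = inc(2)         # another independent task
total = add(a, b)  # depends on both tasks above

print(total.compute())  # the scheduler evaluates the graph -> 5
```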

By using Dask to scale out RAPIDS workloads on Kubernetes, you can accelerate them across many GPUs on many machines. In this talk, we will discuss how to install and configure Dask on your Kubernetes cluster and then use it to run accelerated GPU workloads.
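
One way this can look in practice is with the dask-kubernetes package, whose KubeCluster class launches Dask scheduler and worker pods on an existing Kubernetes cluster. The sketch below assumes the operator-based KubeCluster, a RAPIDS container image, and GPU-enabled nodes; the image tag, worker count, and resource limits are illustrative rather than recommendations:

```python
from dask.distributed import Client
from dask_kubernetes.operator import KubeCluster

# Launch GPU worker pods on the Kubernetes cluster (values are illustrative).
cluster = KubeCluster(
    name="rapids-dask",
    image="rapidsai/rapidsai:latest",             # RAPIDS container image
    worker_command="dask-cuda-worker",            # one GPU worker per pod
    n_workers=4,
    resources={"limits": {"nvidia.com/gpu": "1"}},
)

# Connect a client; subsequent Dask/RAPIDS work runs on the GPU workers.
client = Client(cluster)
```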


Abstract as a tweet

Use Dask on Kubernetes to run accelerated GPU workloads on your cluster

Category [High Performance Computing]

Parallel Computing

Expected audience expertise: Domain

some

Expected audience expertise: Python

expert

Jacob Tomlinson is a senior software engineer at NVIDIA. His work involves maintaining open source projects including RAPIDS and Dask. He also tinkers with Opsdroid in his spare time. He lives in Exeter, UK.