2019-07-22, 13:30–17:00, PH 211N
Parallel computing is hard. Julia can make it much easier. In this workshop, we discuss modern trends in high performance computing, how they’ve converged towards multiple types of parallelism, and how to most effectively use these different types in Julia.
This interactive workshop demonstrates how to write parallel Julia code in a variety of ways, including shared-memory computing with threads, distributed computing with multiple processes, and computing on the GPU. Julia makes all of these modes of parallelism possible, and the Julia community continues to actively research ways to make high-performance parallel computing easier. Many national labs, major corporations, and universities already use Julia for parallel computing.
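As a taste of the shared-memory mode mentioned above, here is a minimal sketch of Julia multithreading using the `Base.Threads` standard module; the function name is illustrative, and Julia must be started with multiple threads (e.g. `julia --threads 4`) for the loop to actually run in parallel.

```julia
using Base.Threads

# Sum the squares of a vector in parallel. An Atomic accumulator keeps
# the concurrent updates race-free; this is a sketch, not the workshop's
# actual example code.
function threaded_sum_of_squares(xs)
    total = Atomic{Float64}(0.0)
    @threads for i in eachindex(xs)
        atomic_add!(total, xs[i]^2)
    end
    return total[]
end

threaded_sum_of_squares(collect(1.0:100.0))  # ≈ 338350.0
```

The same loop written with a plain shared variable would be a data race, which is one of the "challenges in converting a program from serial to parallel" that the workshop discusses.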
The workshop will help you:

* identify the challenges in converting a program from serial to parallel
* discover the many forms of parallelism Julia offers and learn when to use each
* learn how to structure programs to take advantage of parallel computation
* write programs that use an appropriate form of parallelism
In this workshop, we will cover:

* A quick primer on serial performance
* Multithreading
* Designing parallel algorithms
* Tasks (also known as co-routines or green threads)
* Multi-process parallelism
* A very quick introduction to GPU programming
* Future developments
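To illustrate the tasks (co-routine) topic listed above, here is a minimal producer/consumer sketch using a `Channel`; the `producer` name is illustrative, not from the workshop materials. The producer runs as a lightweight task that is scheduled cooperatively, without operating-system threads.

```julia
# A producer task writes values into a channel; reading from the channel
# drives the task forward (cooperative scheduling, no extra threads needed).
function producer(ch::Channel)
    for i in 1:5
        put!(ch, i^2)
    end
end

ch = Channel(producer)  # binds a new task to the channel
collect(ch)             # drains the channel: [1, 4, 9, 16, 25]
```

Tasks like this are the building block Julia's multi-process and asynchronous I/O machinery is layered on, which is why they appear in the outline before multi-process parallelism.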
Participants should have a basic understanding of non-parallel programming techniques and of Julia itself.