JuliaCon 2020 (times are in UTC)

How not to lose your mind parallelizing a feedback loop?

Feedback loops are notoriously hard to reason about and debug once parallelism is introduced. Thus, in this poster, I will describe the abstractions introduced in the TaskMaster package for concurrent feedback loops, which one can replay for debugging. Lastly, I will demonstrate how it can be used together with Adaptive, a wrapper around the corresponding Python adaptive package, as an alternative to pmap.


Often, when we face an embarrassingly parallel problem that runs for a long time, we reach for a single pmap, and if we are lucky, we can run it on a cluster with plenty of cores. More often, however, we have to make do with the resources in front of us. In that case, it is worth asking whether an evenly spaced grid is really the optimal one.
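For scale, a minimal sketch of the pmap approach (the function f and the grid here are made up for illustration):

```julia
using Distributed
addprocs(4)                                  # grab whatever local cores we have

@everywhere f(x) = (sleep(0.1); sin(1 / x))  # a hypothetical slow function

grid   = range(0.02, 1.0, length = 100)      # evenly spaced grid, fixed up front
values = pmap(f, grid)                       # one embarrassingly parallel sweep
```

Every point costs the same here, whether the function is flat or wildly oscillating around it, which is exactly the waste an adaptive grid avoids.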

Choosing an optimal grid for a function in advance is often a harder problem than simply waiting a little longer. Instead, it would be great if the grid adjusted itself as knowledge of the function accumulates during evaluation, which forms a feedback loop.
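Serially, such a loop is easy to write. A minimal hypothetical sketch, where the next point always bisects the interval whose graph segment is longest:

```julia
f(x) = sin(1 / x)                 # hypothetical function, hard to grid evenly

xs = [0.02, 1.0]                  # start from the interval endpoints
ys = f.(xs)

for _ in 1:100
    # loss of each interval: Euclidean distance between neighbouring
    # points on the graph of f
    losses = [hypot(xs[i+1] - xs[i], ys[i+1] - ys[i]) for i in 1:length(xs)-1]
    i = argmax(losses)
    x = (xs[i] + xs[i+1]) / 2     # bisect the worst interval ...
    insert!(xs, i + 1, x)
    insert!(ys, i + 1, f(x))      # ... and feed the result back into the loop
end
```

The catch is that each new point depends on all previous results, so this loop does not parallelize with a plain pmap.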

Introducing parallelism into a feedback loop in a way that allows deterministic debugging without stalling resources is tricky. Additionally, the feedback loop should be completely independent of the hardware on which it is executed (CPU, GPU, cluster jobs, etc.) and preferably live in a separate package. The question I found fascinating was what the best abstractions for such a problem in Julia would be, and it gave rise to the TaskMaster package.
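To give a flavour of the problem, here is a hypothetical sketch of a concurrent ask/tell loop written with nothing but Base tasks and channels; it is not TaskMaster's actual API, and next_point is a stand-in strategy:

```julia
f(x) = (sleep(rand()); sin(1 / x))           # a slow function to learn

# stand-in strategy: bisect the widest gap between sampled points
function next_point(data)
    xs = sort!(collect(keys(data)))
    i = argmax(diff(xs))
    (xs[i] + xs[i+1]) / 2
end

asks  = Channel{Float64}(10)                 # points the loop wants evaluated
tells = Channel{Pair{Float64,Float64}}(10)   # results, arriving out of order

for _ in 1:4                                 # four concurrent "workers"
    @async for x in asks
        put!(tells, x => f(x))
    end
end

data = Dict{Float64,Float64}()
foreach(x -> put!(asks, x), range(0.02, 1.0, length = 4))  # seed the loop
for _ in 1:100
    x, y = take!(tells)                      # order depends on timing!
    data[x] = y
    put!(asks, next_point(data))
end
close(asks)
```

Because results return in a timing-dependent order, two runs can visit different points, which is what makes replayable, deterministic debugging a design goal rather than a freebie.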