kubernetes-native julia development
2021-07-30, Red

You have access to a k8s cluster, and you want to use it to scale out computations. But first, you need to develop and debug julia code that can take advantage of it!

I will present an ergonomic julia development setup to help make k8s feel like home, using freely available and easily installed tools.


In this setup, from a julia project directory, you can:
- drop into a julia REPL that is running on your k8s cluster
- edit source files locally, have Revise pick up the changes in the remote session, and get results written to disk synced back to your machine, via a 2-way sync between the local julia project directory and the corresponding directory in the k8s container
- sync REPL history across local and remote julia sessions
- easily spin up and use Distributed workers from within the julia session
- automatically build and use images containing julia, with chosen dependencies baked into a PackageCompiler sysimage, the julia project precompiled, and (optionally) CUDA included (see the sysimage sketch after this list)
- minimize time-to-first-command-completion with cached image builds; the first build in a project directory takes a while, but spinning up again afterwards is fast
- set RAM/CPU/disk resources for the main julia session and any Distributed workers
- set julia (and CUDA) versions independently for each session
- run your work as a non-interactive job once it is ready
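
Under the hood, the sysimage step mentioned above boils down to a PackageCompiler.jl call roughly like the one below. The package list, output path, and warm-up script here are placeholders for illustration, not the exact ones julia_pod generates:

```julia
# Sketch of the sysimage build; package names and paths are placeholders.
using PackageCompiler

create_sysimage(
    [:CUDA, :DataFrames];                         # heavy deps worth baking in
    sysimage_path = "JuliaSysimage.so",           # shipped inside the container image
    precompile_execution_file = "precompile.jl",  # optional warm-up script
)
```

Because this happens inside the image build, the result is cached along with the image layers, which is why only the first spin-up in a project directory is slow.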

This setup makes minimal assumptions about the k8s cluster; the only requirements are access to the cluster via kubectl and a container registry that you can push to and that the cluster can pull from.
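
A quick way to sanity-check both requirements from julia before going further (the registry address below is a placeholder for your own):

```julia
# Sanity-check cluster access and registry login; the registry URL is a placeholder.
success(`kubectl cluster-info`) || error("kubectl cannot reach the cluster")
success(`kubectl auth can-i create pods`) || error("no permission to create pods")
run(`docker login my-registry.example.com`)  # the cluster must be able to pull from here
```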

The tools that need to be installed locally are:
- kubectl
- docker buildkit
- devspace sync
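
These are ordinary binaries on your PATH; a minimal check from julia (assuming the usual binary names) looks like:

```julia
# Check the local tooling; binary names assumed to be the defaults.
for tool in ("kubectl", "docker", "devspace")
    Sys.which(tool) === nothing && @warn "$tool not found on PATH"
end
# docker needs BuildKit enabled, e.g. via the DOCKER_BUILDKIT=1 environment variable.
```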

The julia-specific tools developed to make this possible are K8sClusterManagers.jl and julia_pod.
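
K8sClusterManagers.jl is what backs the Distributed-workers item above. A minimal sketch, assuming you are already inside a julia session running on the cluster:

```julia
# Sketch: request three worker pods from the cluster for this session.
using Distributed, K8sClusterManagers

addprocs(K8sClusterManager(3))

pmap(i -> (i, gethostname()), 1:3)  # confirm the work really lands on separate pods
```

Worker resources are configurable on the manager as well; the exact keyword arguments depend on the package version, so check its docs rather than my guesses here.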

This workflow is developed and used day-to-day at Beacon Biosignals.

Senior Machine Learning Engineer at Beacon Biosignals, working with julia since 1.0.