Ivet Galabova; Plenary
This talk presents HiGHS, the world's leading open-source solver for large-scale, sparse linear programming and mixed-integer programming problems. We will discuss the types of problems handled by HiGHS, its solution methods, and some of the challenges we have faced in turning them into software. We will also talk about scaling the software, supporting multiple platforms and interfaces, and maintaining the reliability of a widely used open-source project.
Fons van der Plas; Tutorial
In this session, the Pluto developers will teach you how to use Pluto notebooks for computational storytelling to write compelling, interactive articles and dashboards. This tutorial will help you write a lecture for students that's actually more interesting than the latest TikTok trend!
Tim Besard; Plenary
Modern computing relies on parallelism, from GPUs accelerating AI workloads to multi-core CPUs in every laptop. But writing code that harnesses this power across different hardware remains challenging. In this talk, we'll explore how KernelAbstractions.jl brings GPU-style programming to Julia, allowing you to write parallel kernels once and run them anywhere.
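To give a flavor of the write-once style described above, here is a minimal sketch of a KernelAbstractions.jl kernel (the `saxpy!` name, array sizes, and data are illustrative; the `CPU()` backend can be swapped for a GPU backend such as `CUDABackend()` without changing the kernel):

```julia
using KernelAbstractions

# A kernel written once with the @kernel macro; the same code can
# target CPU threads or any supported GPU backend.
@kernel function saxpy!(y, a, @Const(x))
    i = @index(Global)           # this work-item's global index
    @inbounds y[i] = a * x[i] + y[i]
end

backend = CPU()                  # swap for CUDABackend(), ROCBackend(), etc.
x = ones(Float32, 1024)
y = zeros(Float32, 1024)

# Instantiate the kernel for the chosen backend, then launch it
# over the whole index range.
saxpy!(backend)(y, 2.0f0, x; ndrange = length(x))
KernelAbstractions.synchronize(backend)
```

The key design point is that the kernel body only expresses the per-index computation; the backend object decides where and how the iteration space is executed.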
Benoît Legat; Tutorial
In this workshop, we'll focus on mathematical optimization using JuMP. You'll learn to write programs that formulate and solve integer linear, nonlinear, conic, and constraint optimization problems. We'll also discuss recently added JuMP features, and detail important performance tips for solving large-scale models with JuMP.
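As a taste of the modeling style the workshop covers, here is a minimal mixed-integer linear program in JuMP (a sketch assuming the HiGHS solver package is installed; the variables and data are purely illustrative):

```julia
using JuMP, HiGHS

# Maximize x + 3y subject to 2x + y <= 10, with x continuous and y integer.
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0, Int)
@constraint(model, 2x + y <= 10)
@objective(model, Max, x + 3y)
optimize!(model)

value(x), value(y), objective_value(model)
```

The `@variable`, `@constraint`, and `@objective` macros let the model read almost like the mathematical statement of the problem, while the solver backend (here HiGHS) is a one-line choice.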
Laura Grigori; Plenary
Randomization is a powerful dimensionality-reduction technique that makes it possible to solve large-scale problems while leveraging optimized kernels and enabling the use of mixed precision. In this talk we will review recent progress in using randomization to solve linear systems of equations and eigenvalue problems. We first discuss sketching techniques that embed large-dimensional subspaces while preserving their geometric properties, together with their parallel implementations. We then present randomized versions of processes for orthogonalizing a set of vectors and their use in the Arnoldi iteration, and discuss the associated Krylov subspace methods for solving large-scale linear systems of equations and eigenvalue problems. The new methods retain the numerical stability of classic Krylov methods while reducing communication and being more efficient on modern massively parallel computers. We conclude by discussing their implementation in a Julia library.
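As a reminder of the property that underpins the sketching techniques mentioned above, a sketching matrix $S \in \mathbb{R}^{s \times n}$ with $s \ll n$ is an $\varepsilon$-subspace embedding for a subspace $\mathcal{V} \subseteq \mathbb{R}^n$ when it approximately preserves norms on that subspace:

```latex
(1 - \varepsilon)\,\lVert x \rVert_2 \;\le\; \lVert S x \rVert_2 \;\le\; (1 + \varepsilon)\,\lVert x \rVert_2
\qquad \text{for all } x \in \mathcal{V}.
```

Because $s \ll n$, inner products and norms can be computed on the sketched vectors $Sx$ at much lower cost, which is what makes randomized orthogonalization and randomized Krylov methods cheaper than their classical counterparts.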