0.29
JuliaCon 2021 (times are UTC)
juliacon2021
2021-07-20
2021-07-31
12
00:05
https://pretalx.com
https://pretalx.com/media/juliacon2021/img/juliacon-2021-badge-light_6YHeIoo.png
UTC
Green
GPU programming in Julia
Workshop
2021-07-20T14:00:00+00:00
14:00
03:00
In this workshop, we will demonstrate three major packages for programming GPUs in Julia (CUDA.jl, AMDGPU.jl, oneAPI.jl), and the different programming models, tools and APIs that these packages support.
juliacon2021-9709-gpu-programming-in-julia
Tim Besard, Julian P Samaroo, Valentin Churavy
en
Julia has several packages for programming GPUs, each of which support various programming models. In this workshop, we will demonstrate the use of three major GPU programming packages: CUDA.jl for NVIDIA GPUs, AMDGPU.jl for AMD GPUs, and oneAPI.jl for Intel GPUs. We will explain the various approaches for programming GPUs with these packages, ranging from generic array operations that focus on ease-of-use, to hardware-specific kernels for when performance matters.
Most of the workshop will be vendor-neutral, and the content will be available for all supported GPU back-ends. There will also be a part on vendor-specific tools and APIs.
Attendees will be able to follow along, but are recommended to have access to a suitable GPU for doing so. Material for this workshop can be found at https://github.com/maleadt/juliacon21-gpu_workshop
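For readers deciding whether to attend: the contrast between the two programming models mentioned above (generic array operations versus hand-written kernels) can be sketched with CUDA.jl. This is an illustrative sketch, not part of the workshop materials:

```julia
using CUDA

# High-level array programming: broadcasting fuses into a single GPU kernel.
a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = a .+ 2f0 .* b          # runs on the GPU, no explicit kernel needed

# Low-level kernel programming: explicit indexing for when performance matters.
function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```

AMDGPU.jl and oneAPI.jl follow the same two-level structure with their own array types and kernel launch macros.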
false
https://pretalx.com/juliacon2021/talk/VK87Q3/
https://pretalx.com/juliacon2021/talk/VK87Q3/feedback/
Red
DataFrames.jl 1.2 tutorial
Workshop
2021-07-20T14:00:00+00:00
14:00
03:00
In this workshop an introduction to DataFrames.jl 1.2 will be presented. You will learn how to load, transform and visualize your data using the DataFrames.jl package. The tutorial assumes that you have some experience in working with data frames in e.g. R or Python.
All the materials used are available for download at https://github.com/bkamins/JuliaCon2021-DataFrames-Tutorial.
juliacon2021-9235-dataframes-jl-1-2-tutorial
Bogumił Kamiński
en
In this workshop an introduction to DataFrames.jl 1.2 will be presented.
The tutorial is targeted at people wanting to start using DataFrames.jl. However, it assumes that you have some experience in working with data frames in e.g. R or Python. The tutorial presents an example of doing a small data science project.
The topics covered are:
* creating a `DataFrame` object and getting basic information about it
* reading and writing data frames using [CSV.jl](https://github.com/JuliaData/CSV.jl) and [Arrow.jl](https://github.com/JuliaData/Arrow.jl)
* indexing and filtering
* sorting
* joining
* reshaping
* transforming columns and aggregation
* plotting
* building predictive models
* bootstrapping
All the materials used are available for download at https://github.com/bkamins/JuliaCon2021-DataFrames-Tutorial.
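As a small, unofficial taste of the topics listed above (a sketch, not taken from the tutorial materials):

```julia
using DataFrames, Statistics

df = DataFrame(group=["a", "b", "a", "b"], x=[1.0, 2.0, 3.0, 4.0])

# indexing and filtering
first_rows = df[1:2, :]
big = filter(:x => >(1.5), df)

# sorting
sort!(df, :x, rev=true)

# transforming columns and aggregation
df.y = 2 .* df.x
by_group = combine(groupby(df, :group), :x => mean => :mean_x)
```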
false
https://pretalx.com/juliacon2021/talk/FXZXMB/
https://pretalx.com/juliacon2021/talk/FXZXMB/feedback/
Green
Quantum Computing with Julia
Workshop
2021-07-21T14:00:00+00:00
14:00
03:00
Quantum computing is an emerging area of the technology industry with applicability to many different fields. But it's not obvious how to get started with quantum hardware or quantum algorithms. In this workshop, we'll use Julia to introduce attendees to quantum computing, creating state-of-the-art quantum machine learning models and solving real-world optimization problems on a real quantum device or quantum circuit simulators using Amazon Braket.
juliacon2021-9773-quantum-computing-with-julia
Saravana Kumar, Katharine Hyatt
en
In this two part workshop we will use Amazon Braket with Julia to introduce attendees to the exciting world of quantum computing. Getting started in QC can be daunting if you’re not already an expert in physics or CS. We’ll spend the first part of the workshop getting acquainted with the different types of quantum hardware available today and some introductory algorithms, which we’ll run on real quantum computers and simulators. Then we’ll build upon this and begin exploring using quantum hardware to tackle machine learning and optimization problems.
In order to access the quantum hardware and simulators during the workshop, we’ll be using Amazon Braket, which is a fully managed quantum computing service that helps researchers and developers get started with the technology to accelerate research and discovery.
false
https://pretalx.com/juliacon2021/talk/V3N73B/
https://pretalx.com/juliacon2021/talk/V3N73B/feedback/
Red
Statistics with Julia from the ground up
Workshop
2021-07-21T14:00:00+00:00
14:00
03:00
This workshop provides an introduction to the Julia language for data-scientists and statisticians. No prior experience with Julia is assumed. The workshop starts with a few Julia basics and then progresses through basic probability and statistics examples, usage of dataframes, elementary statistical inference, regression, and more advanced methods. At the end of this workshop, attendees will have a solid entry point for using Julia as their preferred data analysis tool.
juliacon2021-9006-statistics-with-julia-from-the-ground-up
/media/juliacon2021/submissions/A9KZCY/3-32_mAYZchl.png
Yoni Nazarathy
en
This workshop accommodates data-scientists and statisticians that have experience with a language like R, but have not used Julia previously. In learning to use Julia, a contemporary "stats based" approach is taken focusing on short scripts that achieve concrete goals. The primary focus is on statistical applications and packages. The Julia language is covered as a by-product of the applications. Thus, this workshop is much more of a *how to use Julia for stats* course than a *how to program in Julia* course. This approach may be suitable for statisticians and data-scientists that tend to do their day-to-day scripting with a data and model based approach - as opposed to a software development approach.
The topics covered include:
* Basic probability and Monte Carlo.
* Basics from the in-built Statistics package and the [StatsBase](https://juliastats.org/StatsBase.jl/stable/) package.
* Basic plotting and statistical plotting with [StatsPlots](https://github.com/JuliaPlots/StatsPlots.jl).
* Using the [Distributions](https://juliastats.org/Distributions.jl/latest/) package.
* (Basic) usage of the [DataFrames](https://dataframes.juliadata.org/stable/) package.
* Using the [GLM](https://juliastats.org/GLM.jl/stable/) package.
* Other useful resources and packages.
(Note that Julia has hundreds of statistical packages and we cannot cover them all in 3 hours.)
Code snippets from [Statistics with Julia: Fundamentals for Data Science, Machine Learning and Artificial Intelligence](https://statisticswithjulia.org/) will be used in conjunction with smaller live constructed examples.
An extensive Jupyter notebook for the workshop together with data files is [here](https://github.com/yoninazarathy/JuliaCon2021-StatisticsWithJuliaFromTheGroundUp). You can install it to follow along.
If you don't have Julia with IJulia (Jupyter) installed, you can follow the instructions in [this video](https://www.youtube.com/watch?v=KJleqSITuRo).
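As an unofficial flavor of the packages listed above (a sketch, not taken from the workshop notebook):

```julia
using Statistics, StatsBase, Distributions, GLM, DataFrames

# Distributions: define a model, simulate, and fit by maximum likelihood
d = Normal(10.0, 2.0)
data = rand(d, 1_000)
fit_mle(Normal, data)

# Basic descriptive statistics
m, s = mean(data), std(data)
qs = quantile(data, [0.25, 0.5, 0.75])

# GLM: a linear regression on simulated data
df = DataFrame(x=randn(100))
df.y = 3 .+ 2 .* df.x .+ 0.5 .* randn(100)
model = lm(@formula(y ~ x), df)
coef(model)   # intercept and slope, roughly 3 and 2
```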
false
https://pretalx.com/juliacon2021/talk/A9KZCY/
https://pretalx.com/juliacon2021/talk/A9KZCY/feedback/
Green
A mathematical look at electronic structure theory
Workshop
2021-07-22T14:00:00+00:00
14:00
03:00
Electronic structure theory is a fascinating interdisciplinary field. Physics, chemistry, materials science, mathematics, high-performance computing ... they're all in it. Rooted in the quantum-mechanical description of electrons, it is the backbone for quite a few simulation methods in the chemical and physical sciences. Here we'll focus on the numerical tools required to solve standard problems in the field, like density-functional theory, which --- as we will see --- is challenging in itself.
juliacon2021-9568-a-mathematical-look-at-electronic-structure-theory
Michael F. Herbst
en
### Content
The material for the workshop is available at https://github.com/mfherbst/juliacon_dft_workshop.
I'll briefly introduce the setting of density-functional theory (DFT), in particular show the equation system and its mathematical structure. With that we are in good shape to tackle the main part of the workshop, which will be devoted to discussing the numerical techniques used for solving it.
Our main tool in this workshop will be the [density-functional toolkit (DFTK)](https://dftk.org),
a state-of-the-art DFT code written in Julia (of course ;)). This code will allow us to consider a number of reduced problems that are more tractable to interactively explore, visualise and understand. In particular we will use DFTK to inspect what's going on while the DFT problem is being solved. With that knowledge at hand we'll try to code up some simple DFT solvers on our own. Thanks to the scaffolding DFTK provides, this is a fairly manageable task, and (as a small bonus) the resulting algorithms could be directly applied to cutting-edge problems (if we're careful with performance issues).
Depending on our progress, I plan to cover the following topics:
- Problem setup: Mathematical structure of DFT
- Typical discretisation approaches: Gaussians versus plane waves
- Typical solution algorithms: Direct minimisation versus self-consistent field (SCF) iterations
- Numerical analysis of SCF problems
- Writing our own SCF, understanding why it badly fails and what we can do about it.
- Connections between the physical properties of matter and convergence properties of an SCF
### Assumed background
I'll do my best to make this workshop accessible to a broad range of people: those with a chemistry or physics background who always wanted to understand the maths behind DFT, as well as those with a background in PDEs / linear algebra who are interested in getting an idea about this challenging application domain.
I will assume you safely know your way around Julia.
false
https://pretalx.com/juliacon2021/talk/KK9KS7/
https://pretalx.com/juliacon2021/talk/KK9KS7/feedback/
Red
Game development in Julia with GameZero.jl
Workshop
2021-07-22T14:00:00+00:00
14:00
03:00
A game development workshop where participants will create a few simple games, inspired by classic games from the early days of computing. This workshop is suitable for beginner programmers, or for experienced coders hoping to teach programming to younger people. Or for anyone wanting to have some fun while programming.
Please add the GameZero and Colors packages to a Julia environment.
juliacon2021-9945-game-development-in-julia-with-gamezero-jl
Avik Sengupta, Ahan Sengupta
en
Developing simple games is one of the most effective ways to learn, and to teach, programming. GameZero.jl is a low-overhead game development framework that allows beginners and students to learn programming while having a lot of fun.
We will describe the simple API exposed by GameZero, and then build up a couple of games using these building blocks. By the end of the session, participants will have one fully functional game working, and will have the building blocks to create the second. Along the way, we will also describe the basic syntax and semantics of Julia and its standard library for users who are unfamiliar with it.
false
https://pretalx.com/juliacon2021/talk/RS9B7Q/
https://pretalx.com/juliacon2021/talk/RS9B7Q/feedback/
Green
Solving differential equations in parallel on GPUs
Workshop
2021-07-23T14:00:00+00:00
14:00
03:00
Why wait hours for computations to complete when they could take only a few seconds? Tired of prototyping code in an interactive, high-level language, only to rewrite it in a lower-level language for performance? Or simply curious about how parallel and GPU computing are game changers?
juliacon2021-9444-solving-differential-equations-in-parallel-on-gpus
Ludovic Räss, Mauro Werder, Samuel Omlin
en
The workshop materials can be found here: https://github.com/luraess/parallel-gpu-workshop-JuliaCon21
This workshop covers trendy areas in modern numerical computing with examples from geoscientific applications. The physical processes governing natural systems' evolution are often mathematically described as systems of differential equations. Fast and accurate solutions require numerical implementations to leverage modern parallel hardware.
The goal of this workshop is to offer an interactive hands-on to solve systems of differential equations in parallel on GPUs using the [`ParallelStencil.jl`](https://github.com/omlins/ParallelStencil.jl) and [`ImplicitGlobalGrid.jl`](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) Julia modules. [`ParallelStencil.jl`](https://github.com/omlins/ParallelStencil.jl) lets you write architecture-agnostic, high-performance parallel GPU and CPU code, and [`ImplicitGlobalGrid.jl`](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) renders stencil-based distributed parallelisation almost trivial. The resulting codes are fast, short and readable. We will use these two Julia modules to design and implement a (multi-) GPU application that predicts ice flow dynamics over mountainous topography.
The workshop consists of 2 parts:
1. You will learn about parallel and distributed computing and iterative solvers.
2. You will implement a PDE solver to predict ice flow dynamics on real topography.
By the end of this workshop, you will:
- Have a GPU PDE solver that predicts ice-flow;
- Have a concise Julia code that achieves performance comparable to legacy C/CUDA/MPI code;
- Be able to leverage the computing power of modern GPU accelerated servers and supercomputers.
We look forward to having you on board, and will make sure to foster an exchange of ideas and knowledge so the event is as inclusive as possible.
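As background only: the kind of explicit stencil update that such iterative PDE solvers build on can be sketched in plain serial Julia. This is an illustrative sketch (my own notation, not ParallelStencil.jl's API):

```julia
# One explicit diffusion step on a 2D grid: ∂T/∂t = D ∇²T
function diffusion_step!(T2, T, D, dt, dx)
    nx, ny = size(T)
    for j in 2:ny-1, i in 2:nx-1
        @inbounds T2[i, j] = T[i, j] + dt * D *
            ((T[i-1, j] - 2T[i, j] + T[i+1, j]) / dx^2 +
             (T[i, j-1] - 2T[i, j] + T[i, j+1]) / dx^2)
    end
    return T2
end

function run_diffusion(nsteps)
    nx = ny = 64; dx = 1.0 / nx; D = 1.0
    dt = dx^2 / (4.1 * D)              # explicit-scheme stability limit
    T  = zeros(nx, ny); T[nx ÷ 2, ny ÷ 2] = 1.0
    T2 = copy(T)
    for _ in 1:nsteps
        diffusion_step!(T2, T, D, dt, dx)
        T, T2 = T2, T                  # swap buffers instead of copying
    end
    return T
end

run_diffusion(100)
```

ParallelStencil.jl expresses the same inner update with stencil macros so that the identical code runs multithreaded on CPUs or on GPUs.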
false
https://pretalx.com/juliacon2021/talk/CPH7SG/
https://pretalx.com/juliacon2021/talk/CPH7SG/feedback/
Red
Package development in VSCode
Workshop
2021-07-23T14:00:00+00:00
14:00
03:00
The [Julia extension for VSCode](https://www.julia-vscode.org/) provides a multitude of tools and commands to make package development and interactive coding easy. We'll provide an overview on how to develop a Julia package from scratch and show how to use the debugger and profiler efficiently to find and fix faulty logic or performance issues.
juliacon2021-9728-package-development-in-vscode
Sebastian Pfitzner, David Anthoff
en
The [Julia extension for VSCode](https://www.julia-vscode.org/) has changed significantly over the last year, with multiple feature additions and UX improvements. At the same time, VSCode itself has many very useful features that are not particularly widely known.
This workshop aims to introduce new as well as experienced users to a package development workflow in VSCode, including use cases like debugging and profiling, as well as how to best use inline evaluation and the Revise integration. We'll also provide an overview of the various possibilities for interactive data exploration/analysis, the remote capabilities built into VSCode, and more.
false
https://pretalx.com/juliacon2021/talk/WCSKJ7/
https://pretalx.com/juliacon2021/talk/WCSKJ7/feedback/
Green
Simulating Big Models in Julia with ModelingToolkit
Workshop
2021-07-24T14:00:00+00:00
14:00
03:00
It can be hard to build and solve million-equation models. Making them high performance, stable, and parallel? Introducing ModelingToolkit.jl! The modeling auto-optimizer for all of your performance needs! We will show many use cases on differential equations and beyond (optimization, nonlinear solving, etc.).
juliacon2021-9218-simulating-big-models-in-julia-with-modelingtoolkit
Chris Rackauckas
en
It can be hard to build and solve million-equation models. Making them high performance, stable, and parallel? Introducing ModelingToolkit.jl! In this workshop we will showcase ModelingToolkit as a system for building large differential equation models in a hierarchical, component-wise way. This acausal modeling system is reminiscent of widely used tools like Simulink and Modelica, but we will showcase how ModelingToolkit's deep integration with interactive symbolic programming leads to a more intuitive pure-Julia modeling system. The audience will be walked through a live demonstration of using ModelingToolkit to compose models and add transformations, like index reduction of differential-algebraic equations (DAEs) and tearing of nonlinear systems, to improve the stability and performance of the generated code. We will demonstrate how to use the automated parallelism to easily solve millions of equations in the most performant way. We will show how ModelingToolkit extends far beyond differential equations, featuring how it can be used to similarly generate high-performance code for nonlinear optimization, solving nonlinear equations, doing nonlinear optimal control, generating models from chemical reaction descriptions, and more. Attendees will leave with a better understanding of the growing symbolic-numeric modeling ecosystem and the future of large-scale, accurate, and high-performance SciML modeling.
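To give a flavor of the symbolic workflow described above, here is a minimal sketch against the ModelingToolkit API (names like `@named` and `structural_simplify` reflect the API around this time; the workshop materials are authoritative):

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters t σ ρ β
@variables x(t) y(t) z(t)
D = Differential(t)

# The Lorenz system, written symbolically
eqs = [D(x) ~ σ * (y - x),
       D(y) ~ x * (ρ - z) - y,
       D(z) ~ x * y - β * z]

@named lorenz = ODESystem(eqs)
sys = structural_simplify(lorenz)   # simplify before code generation

u0 = [x => 1.0, y => 0.0, z => 0.0]
p  = [σ => 10.0, ρ => 28.0, β => 8 / 3]
prob = ODEProblem(sys, u0, (0.0, 100.0), p)
sol = solve(prob, Tsit5())
```

The same symbolic representation is what lets ModelingToolkit apply transformations like index reduction and tearing automatically.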
false
https://pretalx.com/juliacon2021/talk/NNVXZC/
https://pretalx.com/juliacon2021/talk/NNVXZC/feedback/
Red
Package development: improving engineering quality & latency
Workshop
2021-07-24T14:00:00+00:00
14:00
03:00
Julia holds immense promise for a composable package ecosystem. Potential obstacles to achieving this promise include missing methods for unanticipated types, unwitting type-piracy, poor performance due to inference failures, method ambiguities, and latency due to long compilation times and/or invalidation of previously-compiled code.
This workshop will tutor developers on the use of some recently-developed tools for detecting, diagnosing, and fixing such problems.
juliacon2021-9085-package-development-improving-engineering-quality-latency
Tim Holy, Shuhei Kadowaki
en
This workshop will tutor developers on the use of some of the tools available for improving package quality and reducing latency. We will begin by summarizing the factors that influence dispatch, inference, latency, and invalidation, and how monitoring inference provides a framework for detecting problems before or as they arise. We will then tutor attendees in the use of tools like MethodAnalysis, JET, Cthulhu, and SnoopCompile to discover, analyze, and fix detected problems in package implementation. We will also show how in addition to improving robustness, such steps can often streamline design and reduce latency.
This workshop is aimed at experienced Julia developers. Materials can be cloned from https://github.com/aviatesk/juliacon2021-workshop-pkgdev
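One of the problem classes named above, inference failure, can already be observed with tooling that ships with Julia itself; the dedicated packages above go much deeper. A minimal sketch:

```julia
# A type-unstable function: the type of `acc` changes mid-loop,
# so inference can only conclude Union{Int, Float64}.
function unstable(xs)
    acc = 0            # starts as Int ...
    for x in xs
        acc += x / 2   # ... becomes Float64 on the first iteration
    end
    return acc
end

# In the REPL, @code_warntype highlights the unstable variable in red.
@code_warntype unstable([1, 2, 3])

# The fix: start with the right type so inference succeeds.
function stable(xs)
    acc = 0.0
    for x in xs
        acc += x / 2
    end
    return acc
end
```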
false
https://pretalx.com/juliacon2021/talk/VY9UVX/
https://pretalx.com/juliacon2021/talk/VY9UVX/feedback/
Green
Parse and broker (log) messages with CombinedParsers(.EBNF)
Workshop
2021-07-25T14:00:00+00:00
14:00
03:00
Parsers are programs that break apart strings matching a grammar
in order to transform them into structured representations.
I demonstrate composing regular expressions, EBNF grammars and CombinedParsers.jl constructors to build a slick message broker system inspired by Apache Kafka:
Log lines and other messages are parsed to Julia types.
Parsed instances are brokered by Julia's multiple dispatch into different data sinks (git-managed CSV, SearchLight.jl).
juliacon2021-9948-parse-and-broker-log-messages-with-combinedparsers-ebnf-
Gregor Kappler
en
Step by step, I show available options for defining CombinedParsers to process different message formats (e.g. log lines) and to transform them into Julia result_types.
The examples demonstrate that Julia's dispatch leverages parsed result_types straightforwardly into a slick and powerful platform for complex string-processing workflows like message brokering, similar to Apache Kafka:
Julia's multiple dispatch is easier to write and executes faster than conditional programming patterns of the form "if this kind of thing then do x" in java-based Kafka.
The demonstration exemplifies dispatch into different data sinks like git managed CSV and text files, SearchLight.jl, and even Telegram.jl Bot alerts.
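The dispatch-based brokering pattern described above looks roughly like this in plain Julia (the message types and sinks here are hypothetical stand-ins, not CombinedParsers' API):

```julia
# Hypothetical parsed message types (stand-ins for parser result_types)
abstract type Message end
struct ErrorLine <: Message
    text::String
end
struct AccessLine <: Message
    path::String
end

# One broker method per message type: dispatch replaces if/else chains
broker(m::ErrorLine)  = println("alert sink: ", m.text)
broker(m::AccessLine) = println("csv sink: ", m.path)

for m in Message[ErrorLine("disk full"), AccessLine("/index.html")]
    broker(m)   # the runtime type of m selects the sink
end
```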
The workshop details the use of grammar languages supported by CombinedParsers.jl:
You can conveniently compose existing EBNF grammars with PCRE regular expressions and CombinedParsers' Julia constructors to create fast, compiled, pure-Julia (also recursive) parsers.
Regular expressions and EBNF CombinedParsers result in nested (named) tuples by default.
Users can inject transformation functions for any (sub-)parser after definition as EBNF/PCRE.
Alternatively, a CombinedParsers Julia syntax equivalent to a PCRE/EBNF grammar can be printed and amended with transformations.
For improved performance, lazy transformations allow access to parts of a parsed string without transforming the full parsing result (similar to LazyJSON.jl).
Benchmarks and standards compliance are reported based on extensive unit tests.
CombinedParsers' performance competes with the PCRE C library, which is among the fastest regex libraries available.
This is achieved by leveraging the excellent Julia compiler with generated functions, multiple dispatch and parametric types.
CombinedParsers supports lazily iterating all valid parsings when parsing is ambiguous, and implements the TextParse interface so that CombinedParsers can be used e.g. in CSV.jl.
Other parsing packages (Automa.jl, ParserCombinator.jl, Lerche.jl) and current limitations and considerations for further optimization will be discussed.
false
https://pretalx.com/juliacon2021/talk/DKEJ97/
https://pretalx.com/juliacon2021/talk/DKEJ97/feedback/
Red
Modeling Marine Ecosystems At Multiple Scales Using Julia
Workshop
2021-07-25T14:00:00+00:00
14:00
03:00
Life in the oceans is strongly connected to our climate. In this workshop, you will learn to use packages from the JuliaOcean and JuliaClimate organizations that provide a foundation for studying marine ecosystems across a wide range of scales. We will first run agent-based models to explore individual microbes and processes that drive species interactions. On the other end of the model hierarchy, we will simulate planetary-scale transports that control ocean biogeography and climate change.
juliacon2021-9873-modeling-marine-ecosystems-at-multiple-scales-using-julia
Gael Forget, Benoit Pasquier, Zhen Wu
en
Packages covered in this workshop will include:
- `AIBECS.jl` : global steady-state biogeochemistry and gridded transport models that run fast for long time scales (centuries or even millennia).
- `PlanktonIndividuals.jl` : local to global agent-based model, particularly suited to study microbial communities, plankton physiology, and nutrient cycles.
- `IndividualDisplacements.jl` : local to global particle tracking, for simulating dispersion, connectivity, transports in the ocean or atmosphere, etc.
- `MITgcmTools.jl` : interface to a full-featured, Fortran-based general circulation model and its output (transports, chemistry, ecology, ocean, sea ice, atmosphere, and more).
The workshop's first two hours will be organized around tutorials and self-contained Pluto notebooks for the different packages.
The third hour will provide the opportunity for attendees to further explore the models in breakout rooms and via exercises.
Workshop schedule in more detail:
- Introduction of the topics covered, presenters, installation, and workshop roadmap (15 minutes).
- AIBECS.jl : concept, implementation, tutorial walkthrough (20 minutes + 10' for questions)
- PlanktonIndividuals.jl : concept, implementation, tutorial walkthrough (20 minutes + 10' for questions)
- IndividualDisplacements.jl : concept, implementation, tutorial walkthrough (10 minutes + 10' for questions)
- MITgcmTools.jl : concept, implementation, tutorial walkthrough (10 minutes + 10' for questions)
- 5-minute break
- breakout rooms for deeper dive in tutorials, exercises, or trying out your own idea with guidance from the presenters (1 hour)
Workshop materials will be made available ahead of time @ https://github.com/JuliaOcean/MarineEcosystemsJuliaCon2021.jl
false
https://pretalx.com/juliacon2021/talk/FEZW9Q/
https://pretalx.com/juliacon2021/talk/FEZW9Q/feedback/
Green
It's all Set: A hands-on introduction to JuliaReach
Workshop
2021-07-26T14:00:00+00:00
14:00
03:00
JuliaReach is among the best-of-breed software addressing the fundamental problem of reachability analysis: computing the set of states that are reachable by a dynamical system from all initial states and for all admissible inputs and parameters. We explain the role of Julia's multiple dispatch to gain an unprecedented level of flexibility and expressiveness in this area. We explore diverse applications including differential equations, hybrid systems and neural network controlled systems.
juliacon2021-9859-it-s-all-set-a-hands-on-introduction-to-juliareach
/media/juliacon2021/submissions/9KGMHJ/logo_n0pYIwG.png
Marcelo Forets, Christian Schilling
en
We present [JuliaReach](https://github.com/JuliaReach), a Julia ecosystem to perform reachability analysis of dynamical systems. JuliaReach builds on sound scientific approaches and was, on two occasions (2018 and 2020), the winner of the annual friendly competition on Applied Verification for Continuous and Hybrid Systems ([ARCH-COMP](https://cps-vo.org/group/ARCH)).
The workshop consists of three parts (respectively packages) in [JuliaReach](https://github.com/JuliaReach): our core package for set representations, our main package for reachability analysis, and a new package applying reachability analysis, with potential uses in the domains of control, robotics, and autonomous systems.
In the first part we present [LazySets.jl](https://github.com/JuliaReach/LazySets.jl), which provides ways to symbolically represent sets of points as geometric shapes, with a special focus on convex sets and polyhedral approximations. [LazySets.jl](https://github.com/JuliaReach/LazySets.jl) provides methods to apply common set operations, convert between different set representations, and efficiently compute with sets in high dimensions.
In the second part we present [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl), which provides tools to approximate the set of reachable states of systems with both continuous and mixed discrete-continuous dynamics, also known as hybrid systems. It implements conservative discretization and set-propagation techniques at the state-of-the-art.
In the third part we present [NeuralNetworkAnalysis.jl](https://github.com/JuliaReach/NeuralNetworkAnalysis.jl), which is an application of [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl) to analyze dynamical systems that are controlled by neural networks. This package can be used to validate or invalidate specifications, for instance about the safety of such systems.
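As a small, unofficial taste of the set computations in the first part, using LazySets' documented constructors (a sketch, not the workshop's material):

```julia
using LazySets

# Two set representations
X = Hyperrectangle(low=[0.0, 0.0], high=[1.0, 1.0])
B = BallInf([0.0, 0.0], 0.1)

# Operations stay lazy (symbolic) until queried
S = X ⊕ B            # Minkowski sum, represented lazily
ρ([1.0, 0.0], S)     # support function in direction [1, 0] → 1.1
σ([1.0, 0.0], S)     # support vector in that direction
[0.5, 0.5] ∈ X       # membership test → true
```

Support functions distribute over the Minkowski sum, which is why the lazy representation can be queried efficiently without ever computing the concrete set.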
---
Meet the team of researchers and students that form the [JuliaReach](https://juliareach.com) network:
- [Luis Benet](https://github.com/lbenet). Universidad Nacional Autónoma de México. *Validated integration, Nonlinear Physics.* He is also one of the lead developers of [JuliaIntervals](https://github.com/JuliaIntervals).
- [Marcelo Forets](https://github.com/mforets). Universidad de la República, Uruguay. *Reachability Analysis, Hybrid Systems, Neural Network Robustness.*
- [Daniel Freire Caporale](https://github.com/dfcaporale). Universidad de la República, Uruguay. *Reachability, PDEs, Fluid Mechanics.*
- [Sebastian Guadalupe](https://github.com/sebastianguadalupe). Universidad de la República, Uruguay. *Julia Seasons of Contributions 2020 Alumni. Mathematical Modeling, Hybrid systems.*
- [Uziel Linares](https://github.com/uziellinares). Universidad Nacional Autónoma de México. *Google Summer of Code 2020 Alumni. Nonlinear reachability, Taylor models.*
- [Jorge Pérez Zerpa](https://github.com/jorgepz). Universidad de la República, Uruguay. *Finite Element Method, Structural Engineering, Material Identification.*
- [David P. Sanders](https://github.com/dpsanders). Universidad Nacional Autónoma de México and visiting professor at MIT. *Computational Science, Interval Arithmetic, and Numeric-symbolic Computing.* He is also one of the lead developers of [JuliaIntervals](https://github.com/JuliaIntervals).
- [Christian Schilling](https://github.com/schillic). University of Konstanz, Germany. *Formal Verification, Artificial Intelligence, Cyber-Physical Systems.*
false
https://pretalx.com/juliacon2021/talk/9KGMHJ/
https://pretalx.com/juliacon2021/talk/9KGMHJ/feedback/
Red
Introduction to Bayesian Data Analysis
Workshop
2021-07-26T14:00:00+00:00
14:00
03:00
This workshop will introduce the recommended workflow for applied Bayesian data analysis by working through an example analysis together. We will start with the simplest non-trivial model and use increasingly sophisticated models to explain the properties of our data set based on model diagnostics. We will also give an overview of the different probabilistic programming packages in Julia and show where we have advantages over other languages such as Stan and Python.
juliacon2021-9858-introduction-to-bayesian-data-analysis
Kusti Skytén
en
We will give participants an intuition and diagnostics for the workhorse of modern Bayesian statistics: the Hamiltonian MCMC algorithm. Additionally, we will cover the following topics:
- modeling count data with Poisson regression
- modeling overdispersion with the negative Binomial model
- hierarchical modeling
- modeling time varying effects with autoregressive models and Gaussian processes
We will conclude the workshop by showcasing future potential and features that are not currently available elsewhere such as Bayesian neural ODEs and symbolic optimization of Bayesian models.
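The abstract does not fix a package choice; as one hedged illustration, a Poisson-count model of the kind listed above might look as follows in Turing.jl (a sketch, not the workshop's material):

```julia
using Turing

# Counts assumed i.i.d. Poisson with an unknown rate λ
@model function poisson_counts(y)
    λ ~ Gamma(2.0, 2.0)            # weakly informative prior on the rate
    for i in eachindex(y)
        y[i] ~ Poisson(λ)
    end
end

counts = [3, 1, 4, 2, 5, 3]
chain = sample(poisson_counts(counts), NUTS(), 1_000)   # Hamiltonian MCMC
describe(chain)                    # posterior summary for λ
```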
false
https://pretalx.com/juliacon2021/talk/J7BFBM/
https://pretalx.com/juliacon2021/talk/J7BFBM/feedback/
Red
Introduction to metaprogramming in Julia
Workshop
2021-07-27T14:00:00+00:00
14:00
03:00
Metaprogramming is a key technique that intermediate to advanced Julia users *sometimes* need -- although not as often as they think!
This will be a tutorial introduction to metaprogramming, analyzing, from the bottom up, key topics such as the structure of Julia expressions, how and when to use (and not use) generated functions and macros, and touching on more recent techniques like use of Symbolics.jl and MLStyle.jl.
juliacon2021-9833-introduction-to-metaprogramming-in-julia
David P. Sanders
en
Metaprogramming is an important skill that intermediate to advanced Julia users *sometimes* need to use. This tutorial will be an introduction at the intermediate level, aiming to answer clearly questions such as:
- What is metaprogramming?
- When and why should I use it?
- When should I *not* use it? (See Steven Johnson's keynote from JuliaCon 2019.)
- What are macros for, and how do they work?
- What is macro hygiene and how should I use it?
- How can I write a function that recursively analyses a syntax tree?
- When should I use a generated function?
- How can I get access to the code for a function that is already defined?
- Are there packages that can make this simpler? (Brief sketch)
The goal is to provide a firm foundation of understanding that can then be built on later with more advanced applications (not covered in the workshop). The aim is to provide a relatively pedestrian, but easy to follow, path to understanding, rather than to apply powerful, but difficult to understand, functional techniques.
We will provide simple examples of metaprogramming applied to interesting questions in scientific computing, always aiming for simple examples and explanations.
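A few of the questions above in miniature (a generic sketch, not the workshop's material):

```julia
# Expressions are ordinary Julia data structures
ex = Meta.parse("a + b * c")
ex.head          # :call
ex.args          # Any[:+, :a, :(b * c)]

# Recursively analysing a syntax tree: count function-call nodes
count_calls(x) = 0
count_calls(e::Expr) =
    (e.head == :call ? 1 : 0) + sum(count_calls, e.args; init=0)
count_calls(ex)  # 2

# A macro; esc() deliberately exposes the caller's variables (hygiene)
macro show_value(e)
    :(println($(QuoteNode(e)), " = ", $(esc(e))))
end
a = 1; b = 3; c = 4
@show_value a + b * c   # prints "a + b * c = 13"
```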
false
https://pretalx.com/juliacon2021/talk/DWEMBV/
https://pretalx.com/juliacon2021/talk/DWEMBV/feedback/
Green
Geostatistical Learning
Talk
2021-07-28T12:30:00+00:00
12:30
00:30
**Geostatistical Learning** is a new branch of Geostatistics concerned with learning functions over geospatial domains (e.g. 2D maps, 3D subsurface models). The theory is being carefully implemented in **GeoStats.jl**, an extensible framework for high-performance geostatistics in Julia. In this talk, I will illustrate how the framework can be used to learn functions over general unstructured meshes, and how this unique technology can help advance geoscientific work.
juliacon2021-9276-geostatistical-learning
/media/juliacon2021/submissions/SLUMQM/juliacon_GoYOxp2.png
Júlio Hoffimann
en
The theory was introduced in our recent (open access) paper available online: https://www.frontiersin.org/articles/10.3389/fams.2021.689393/full
Its implementation requires knowledge of geostatistics, computational geometry, and high-performance computing. Thanks to the features of the Julia language, we were able to achieve an elegant design with excellent runtime performance.
**Packages:** [GeoStats.jl](https://github.com/JuliaEarth/GeoStats.jl), [Meshes.jl](https://github.com/JuliaGeometry/Meshes.jl)
false
https://pretalx.com/juliacon2021/talk/SLUMQM/
https://pretalx.com/juliacon2021/talk/SLUMQM/feedback/
Green
Hierarchical Multiple Instance Learning
Talk
2021-07-28T13:00:00+00:00
13:00
00:30
Learning from raw data input is one of the key components of many successful applications of machine learning methods. While machine learning problems are often formulated on data that naturally translate into a vector representation suitable for classifiers, there are data sources with a unifying hierarchical structure, such as JSON. This talk will describe Mill.jl and JsonGrinder.jl, which offer a theoretically justified approach to solving machine learning problems with these data sources.
juliacon2021-9307-hierarchical-multiple-instance-learning
Tomas Pevny
en
Learning from raw data input, thus limiting the need for manual feature engineering, is one of the key components of many successful applications of machine learning methods. While machine learning problems are often formulated on data that naturally translate into a vector representation suitable for classifiers, there are data sources, for example in cybersecurity, that are naturally represented in diverse files with a unifying hierarchical structure, such as XML, JSON, and Protocol Buffers.
Converting this data to vector (tensor) representation is generally done by manual feature engineering, which is laborious, lossy, and prone to human bias about the importance of particular features.
Mill.jl and JsonGrinder.jl form a tandem of libraries that fully automates this conversion. Starting from an arbitrary set of JSON samples, they create a differentiable machine learning model capable of inferring from further JSON samples in their raw form.
In the spirit of the Julia language, the framework is split into two packages: Mill.jl, which implements the hierarchical multiple instance learning paradigm and offers a theoretically justified approach to building machine learning models for this type of data, and JsonGrinder.jl, which summarizes the structure of a set of JSON samples and reflects it in a Mill.jl model.
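The core idea, reducing an arbitrarily nested JSON sample to a fixed-length vector by recursing on its structure, can be sketched language-agnostically. Below is a toy illustration in Python, not the Mill.jl API; the leaf featurizers and the mean-pooling are stand-ins for the learned, differentiable sub-models the libraries would build:

```python
def embed_json(sample):
    """Reduce an arbitrary JSON-like value to a flat feature vector
    by recursing on its structure (toy stand-in for a Mill.jl model)."""
    if isinstance(sample, (int, float)):
        return [float(sample)]                    # leaf: numeric feature
    if isinstance(sample, str):
        return [float(len(sample))]               # leaf: crude string feature
    if isinstance(sample, list):                  # bag: aggregate element embeddings
        embs = [embed_json(x) for x in sample] or [[0.0]]
        width = max(len(e) for e in embs)
        embs = [e + [0.0] * (width - len(e)) for e in embs]
        return [sum(col) / len(embs) for col in zip(*embs)]  # mean-pool the bag
    if isinstance(sample, dict):                  # product: concatenate children
        out = []
        for key in sorted(sample):                # fixed key order => fixed layout
            out.extend(embed_json(sample[key]))
        return out
    return [0.0]

vec = embed_json({"a": [1, 3], "b": "hi"})
```

In Mill.jl the leaf encoders and the bag aggregation are trainable neural-network components; this sketch only mirrors the recursion over bags (JSON arrays) and products (JSON objects).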
The talk will be split into four parts:
1) Motivation: why we think the problem is interesting
2) The underlying mathematical formulation and the theorems establishing its correctness
3) The design of the libraries
4) A practical demo
Link to libraries:
https://github.com/CTUAvastLab/Mill.jl
https://github.com/pevnak/JsonGrinder.jl
false
https://pretalx.com/juliacon2021/talk/XFZWWA/
https://pretalx.com/juliacon2021/talk/XFZWWA/feedback/
Green
ReactiveMP.jl: Reactive Message Passing-based Bayesian Inference
Lightning talk
2021-07-28T13:30:00+00:00
13:30
00:10
ReactiveMP.jl is a native Julia implementation of reactive message passing-based Bayesian inference in probabilistic graphical models. The package supports a large range of standard probabilistic models and can be extended to custom novel nodes and message update rules. In contrast to non-reactive (imperatively coded) Bayesian inference packages, ReactiveMP.jl scales easily to support inference on a standard laptop for large models with tens of thousands of variables and millions of nodes.
juliacon2021-9446-reactivemp-jl-reactive-message-passing-based-bayesian-inference
Dmitry Bagaev
en
GitHub: https://github.com/biaslab/ReactiveMP.jl
Demos: https://github.com/biaslab/ReactiveMP.jl/tree/master/demo
YouTube: https://www.youtube.com/watch?v=twhTsKsXa_8
Experiments from the talk: https://github.com/biaslab/ReactiveMP_JuliaCon2021
Bayesian inference is one of the key computational mechanisms that underlies probabilistic model-based machine learning applications such as time series prediction, image and speech recognition, and robotics. Unfortunately, for many models of practical interest, Bayesian inference requires evaluating high-dimensional integrals that have no analytical solution. As a result, Probabilistic Programming (PP) tools for Automated Approximate Bayesian Inference (AABI) have become popular, e.g., Turing.jl, Soss.jl, ForneyLab.jl, Pyro, and others. These tools help researchers to define custom probabilistic models in a high-level domain-specific language and run AABI algorithms with minimal additional overhead.
An important issue in the development of PP frameworks is scalability of AABI algorithms for large models and large data sets. One solution approach concerns message passing-based inference in factor graphs. In this framework, relationships between model variables are represented by a graph of sparsely connected nodes, and inference proceeds efficiently by a sequence of nodes sending probabilistic messages to neighboring nodes. While the optimal message passing schedule is data-dependent, all existing factor graph frameworks (e.g., Infer.Net, ForneyLab.jl) use preset message sequence schedules. The potential benefits of massively parallel and asynchronous reactive message passing in a factor graph include scaling to large inference tasks, much smaller processing latency and processing of data samples that arrive at irregular time intervals.
We have developed ReactiveMP.jl, which is a native Julia package for automated reactive message passing-based (both exact and approximate) Bayesian inference. ReactiveMP.jl is based on Rocket.jl, which is a native Julia package for reactive programming. In ReactiveMP.jl, there are no pre-scheduled messages. Instead, nodes subscribe to messages from connected nodes and react autonomously and asynchronously whenever a new message has been received. As a result, ReactiveMP.jl scales comfortably to inference tasks on factor graphs with tens of thousands of variables and millions of nodes.
The ReactiveMP.jl package comes with a collection of standard probabilistic models, including linear Gaussian state-space models, hidden Markov models, auto-regressive models and mixture models. Moreover, the ReactiveMP.jl API supports various processing modes such as offline learning, online filtering of infinite data streams and protocols for handling missing data.
ReactiveMP.jl is customizable and provides an easy way to add new models, node functions and analytical message update rules to the existing platform. As a result, a user can extend built-in functionality with custom nodes to run automated inference in novel probabilistic models. The resulting inference procedures are differentiable with the ForwardDiff.jl or ReverseDiff.jl packages. In addition, the inference engine supports different types of floating point numbers, e.g., the built-in BigFloat Julia type.
We achieved excellent performance by relying on Julia's great multiple dispatch capabilities and advanced compile-time optimization techniques. Message passing-based inference requires computation of many messages by node-specific update rules. Some of these updates can be evaluated and in-lined at compile time, which results in a fast and accurate automated Bayesian inference realization with almost zero overhead when compared to manually hard-coded inference procedures.
We compared ReactiveMP.jl to other message passing and sampling-based inference packages. In terms of computation time and memory usage, specifically for conjugate models, the ReactiveMP.jl engine outperforms Turing.jl, ForneyLab.jl and Infer.Net significantly by orders of magnitude. Comparative performance benchmarks are available at the GitHub repository: https://github.com/biaslab/ReactiveMP.jl.
Automating scalable Bayesian inference is a key factor in the quest to apply Bayesian machine learning to useful applications. We developed ReactiveMP.jl as a package that enables developers to build novel probabilistic models and automate scalable inference in those models by asynchronous, reactive message passing in a factor graph. We look forward to presenting the ReactiveMP.jl package and to discussing the advantages and drawbacks of the reactive message passing approach.
false
https://pretalx.com/juliacon2021/talk/J7Z9PL/
https://pretalx.com/juliacon2021/talk/J7Z9PL/feedback/
Green
Exploiting Structure in Kernel Matrices
Lightning talk
2021-07-28T13:40:00+00:00
13:40
00:10
Kernel methods are widely used in statistics, machine learning, and physical simulations. These methods give rise to dense matrices that are naïvely expensive to multiply or invert. Herein, we present CovarianceFunctions.jl, a package that automatically detects and exploits low-rank structure, hierarchical structure, and approximate sparsity. We highlight applications of this technology in Bayesian optimization and physical simulations.
juliacon2021-9917-exploiting-structure-in-kernel-matrices
Sebastian AmentJohn Paul Ryan
en
CovarianceFunctions.jl implements many commonly used kernel functions including stationary ones like the exponentiated quadratic, rational quadratic, and Matérn kernels, but also non-stationary ones like the polynomial and neural network kernels. A crucial component of the package is the "Gramian" matrix type, which lazily represents kernel matrices with virtually no memory footprint. The package's most significant functionality derives from algorithms designed for particular combinations of kernel and data types, since these can drastically reduce the computational complexity of multiplication and inversion. However, even in the general case, the lazy implementation eliminates the typical O(n^2) memory allocation for simply storing a kernel matrix of n data points.
For stationary kernels in low dimensions, the package implements a new hierarchical factorization based on multipole expansions of the kernels via automatic differentiation. The user need only input the kernel function and data points, and the package automatically computes the relevant analytic expansions, which are then leveraged within a treecode analogous to the Barnes-Hut algorithm. Fast multiplies are then performed in O(n log n) time, and solves are performed using an iterative method whose preconditioner is also based on the aforementioned treecode.
For exponentially decaying stationary kernels (e.g., exponential, Matérn, RBF) in high dimensions, it is highly likely that the associated kernel matrix can be approximated well by a sparse matrix. However, naïvely detecting this approximate sparsity pattern would require evaluating the entire matrix in O(n^2) time. Instead, the package takes advantage of vantage-point trees to quickly find the most prominent neighbors of each data point in O(nk log n), where k is the maximum number of relevant neighbors of a data point.
In the context of Bayesian optimization with gradient information, the associated gradient kernel matrices are of size (nd x nd), naïvely requiring O(n^2d^2) operations for multiplication, which becomes prohibitive quickly as the number of parameters d increases. Based on recent work that uncovered a particular structure in a large class of these gradient kernel matrices, the package contains an exact multiplication algorithm that requires O(n^2d) operations. As a result, we are able to demonstrate first-order Bayesian optimization on problems of higher dimensionality than were previously possible.
In the absence of any of the above particular structure, the package attempts to construct a low-rank approximation of the matrix via a generic pivoted Cholesky algorithm that lazily computes the kernel matrix's entries, allowing the algorithm to terminate in O(nr^2) steps, where r is the numerical rank, before even fully forming the entire matrix. In the worst case, however, this falls back to O(n^3) complexity in the absence of any structure.
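The generic fallback described above, a pivoted Cholesky factorization that queries kernel entries only as needed, can be sketched as follows. This is an illustrative Python version, not the package's implementation; `k` stands for any positive-semidefinite kernel evaluated on integer indices:

```python
import math

def pivoted_cholesky(k, n, tol=1e-10, max_rank=None):
    """Low-rank factor L (list of columns) with K ≈ sum of outer products
    of the columns, querying k(i, j) lazily: only the diagonal plus one
    row per accepted pivot is ever evaluated."""
    max_rank = max_rank or n
    d = [k(i, i) for i in range(n)]     # residual diagonal, computed up front
    L = []                              # accepted columns of the factor
    for _ in range(max_rank):
        i = max(range(n), key=lambda j: d[j])   # pivot: largest residual
        if d[i] <= tol:                 # numerical rank reached: stop early
            break
        piv = math.sqrt(d[i])
        col = [0.0] * n
        for j in range(n):              # one lazy row of K per pivot
            s = sum(Lc[j] * Lc[i] for Lc in L)
            col[j] = (k(i, j) - s) / piv
        L.append(col)
        for j in range(n):              # downdate the residual diagonal
            d[j] -= col[j] ** 2
    return L

# Rank-1 kernel k(i, j) = (i+1)(j+1): one pivot suffices.
L = pivoted_cholesky(lambda i, j: (i + 1.0) * (j + 1.0), 4)
```

The early exit is what gives the O(nr^2) cost: for a matrix of numerical rank r, only r columns are ever formed.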
In addition to implementing the above algorithms, a main feature of the package is the automatic detection of the most scalable algorithm depending on the kernel and data type. We believe that this type of automation is particularly useful for practitioners who rely on kernel methods and need to scale them to large datasets. We further invite specialists to contribute their methods for efficient computations with kernel matrices.
false
https://pretalx.com/juliacon2021/talk/LXATFU/
https://pretalx.com/juliacon2021/talk/LXATFU/feedback/
Green
Effects.jl: Effectively Understand Effects in Regression Models
Lightning talk
2021-07-28T13:50:00+00:00
13:50
00:10
Regression models are useful but they can be tricky to interpret.
Variable centering and contrast coding can obscure the meaning of main effects.
Interaction terms, especially higher order ones, only increase the difficulty of interpretation.
Here, we introduce Effects.jl which translates the fitted model, including estimated uncertainty, back into data space.
Using Effects.jl, it is possible to generate effects plots that enable rapid visualization and interpretation of regression models.
juliacon2021-9809-effects-jl-effectively-understand-effects-in-regression-models
Phillip Alday
en
Regression is a foundational technique of statistical analysis, and many common statistical tests are based on regression models (e.g., ANOVA, t-test, correlation tests, etc.).
Despite the expressive power of regression models, users often prefer the simpler procedures because regression models themselves can be difficult to interpret.
Most notably, the interpretation of individual regression coefficients (including their magnitude, sign, and even significance) changes depending on the presence or even centering/contrast coding of other terms or interactions.
For instance, a common source of confusion in regression analysis is the meaning of the intercept coefficient.
On its own, this coefficient corresponds to the grand mean of the dependent variable, but in the presence of a contrast-coded categorical variable, it can correspond to the mean of the baseline level of that variable, the grand mean, or something else altogether, depending on the contrast coding scheme that is used.
Effects.jl provides a general-purpose tool for interpreting fitted regression models by projecting the effects of one or more terms in the model back into "data space", along with the associated uncertainty, while holding the other terms fixed at typical or user-specified values.
This makes it straightforward to interrogate the estimated effects of any predictor at any combination of other predictors' values.
Because these effects are computed in data space, they can be plotted alongside raw or aggregated data, enabling intuitive model interpretation and sanity checks.
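The coding-dependence of the intercept, and the way data-space predictions resolve it, can be shown in miniature. This is a plain-Python toy illustration of the underlying idea, not the Effects.jl API: a two-level factor coded two different ways yields different intercepts, yet projecting back into data space gives identical per-level predictions:

```python
from statistics import mean

def ols2(x, y):
    """Ordinary least squares for the simple model y = b0 + b1 * x."""
    xb, yb = mean(x), mean(y)
    b1 = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) \
        / sum((xi - xb) ** 2 for xi in x)
    return yb - b1 * xb, b1

# One two-level factor, coded two ways: dummy (0/1) vs centered (-0.5/+0.5).
y        = [1.0, 1.2, 0.8, 2.0, 2.1, 1.9]
dummy    = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
centered = [-0.5, -0.5, -0.5, 0.5, 0.5, 0.5]

b0_d, b1_d = ols2(dummy, y)     # intercept = mean of the baseline group
b0_c, b1_c = ols2(centered, y)  # intercept = grand mean

# Back in data space, both codings give the same per-level predictions:
pred_d = [b0_d, b0_d + b1_d]
pred_c = [b0_c - 0.5 * b1_c, b0_c + 0.5 * b1_c]
```

Here `b0_d` equals the baseline group mean while `b0_c` equals the grand mean, yet `pred_d` and `pred_c` coincide, which is exactly why data-space effects are easier to interpret than raw coefficients.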
false
https://pretalx.com/juliacon2021/talk/BMMEGV/
https://pretalx.com/juliacon2021/talk/BMMEGV/feedback/
Green
Opening remarks
Keynote
2021-07-28T14:30:00+00:00
14:30
00:05
Opening remarks
juliacon2021-11706-opening-remarks
en
false
https://pretalx.com/juliacon2021/talk/3JYPC9/
https://pretalx.com/juliacon2021/talk/3JYPC9/feedback/
Green
Keynote (Jan Vitek)
Keynote
2021-07-28T14:35:00+00:00
14:35
00:45
Julia - Is it a great language, or is it the greatest language!
juliacon2021-11700-keynote-jan-vitek-
en
false
https://pretalx.com/juliacon2021/talk/7WYDH3/
https://pretalx.com/juliacon2021/talk/7WYDH3/feedback/
Green
Keynote: William Kahan - Debugging Tools for Floating-Point Code
Keynote
2021-07-28T15:20:00+00:00
15:20
00:40
Debugging tools widely used for almost all other programs are inadequate for floating-point programs because these are so different. Suitable tools appeared in 1980 with IEEE Standard 754 for Floating-Point Hardware, but such tools have gone largely undemanded by the customers who pay for the designers and implementors of languages and operating systems, which almost never support such tools. Their value has gone unappreciated.
MAYBE A FEW EXAMPLES WILL CHANGE SOME MINDS.
juliacon2021-11837-keynote-william-kahan-debugging-tools-for-floating-point-code
en
William Kahan was instrumental in creating the IEEE 754-1985 standard for floating-point computation in the late 1970s and early 1980s. He developed a program called "paranoia" in the 1980s to test for potential floating-point bugs, and developed the Kahan summation algorithm, which helps minimize errors introduced when adding a sequence of finite-precision floating-point numbers. Kahan won the ACM A.M. Turing Award in 1989.
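Kahan's compensated summation, mentioned above, fits in a few lines (sketched here in Python):

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: carry the low-order bits
    lost by each floating-point addition into the next one."""
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for x in xs:
        y = x - c              # apply the correction to the incoming term
        t = total + y          # low-order bits of y may be lost here...
        c = (t - total) - y    # ...recover them algebraically
        total = t
    return total

result = kahan_sum([0.1] * 1000)
```

Each step recovers the bits lost in `total + y` and feeds them back in, so the accumulated error stays bounded by a small constant rather than growing with the length of the sequence; a naive left-to-right loop over the same thousand terms visibly drifts from 100.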
false
https://pretalx.com/juliacon2021/talk/UKVUHW/
https://pretalx.com/juliacon2021/talk/UKVUHW/feedback/
Green
JuliaCon Trivia
Talk
2021-07-28T16:00:00+00:00
16:00
00:30
Trivia questions related to Julia -- this is a fun optional activity for our first break.
juliacon2021-11836-juliacon-trivia
en
false
https://pretalx.com/juliacon2021/talk/C9VRY3/
https://pretalx.com/juliacon2021/talk/C9VRY3/feedback/
Green
Everything you need to know about ChainRules 1.0
Talk
2021-07-28T16:30:00+00:00
16:30
00:30
ChainRules is an automatic differentiation (AD)-independent ecosystem for forward-, reverse-, and mixed-mode primitives. It comprises ChainRules.jl, a collection of primitives for Julia Base, ChainRulesCore.jl, the utilities for defining custom primitives, and ChainRulesTestUtils.jl, the utilities to test primitives using finite differences. This talk provides brief updates on the ecosystem since last year and focuses on when and how to write and test custom primitives.
juliacon2021-9495-everything-you-need-to-know-about-chainrules-1-0
/media/juliacon2021/submissions/LWVB39/logo_NobvM4b.png
Miha Zgubic
en
Automatic differentiation (AD), the ability to efficiently evaluate derivatives of arbitrary functions without computing derivatives by hand, enables efficient learning of many mathematical models. There are two components to every AD system: a collection of primitives (also called sensitivities or adjoints), and a way to keep track of and combine primitives using the chain rule of calculus in order to compute derivatives of arbitrary functions. While AD systems differ greatly in the latter, the set of primitives can be shared among them.
The ChainRules ecosystem provides the AD-independent collection of primitives for Julia Base (ChainRules.jl), utilities for defining custom primitives (ChainRulesCore.jl), and utilities for testing custom primitives using finite differences (ChainRulesTestUtils.jl). While not needed in principle, the ability to define custom primitives provides a way to speed up the computation by applying domain knowledge or mathematical insight, or get around limitations and performance issues of individual AD systems. In addition, ChainRulesCore.jl provides a suite of expressive differential types which allow comparing derivatives across multiple AD systems.
Since last year the ecosystem has matured considerably, improving the user experience in a number of ways. Improvements include:
- It is now possible to write rules for higher order functions (e.g. map) by calling back into the AD system
- @non_differentiable makes it easy to define rules for non-differentiable functions
- Testing custom primitives became easier since a random tangent (usually) does not have to be provided
- It is now possible to test `frule`/`rrule`-like functions, meaning AD systems themselves can be tested
- ChainRules is now used by Zygote, Nabla, ForwardDiff2, and ReversePropagation
This talk will start by briefly introducing the ChainRules ecosystem, and highlighting the most important new features since last year. Those unfamiliar with the general idea of ChainRules are encouraged to watch last year’s talk on ChainRules first, since the core of the talk is a comprehensive guide to using, writing, and testing custom primitives. In particular, the talk will explain when it is advantageous to write custom primitives compared to using an AD system on its own, the interface for writing custom primitives and the associated supporting functionality, as well as why and how to test primitives by finite differencing methods.
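The notion of a reverse-mode primitive that the talk centres on can be illustrated language-agnostically. The following is a Python sketch of the idea, not the ChainRules.jl API: a rule returns the primal value together with a pullback closure, and an AD system differentiates compositions by chaining pullbacks:

```python
import math

def rrule_sin(x):
    """Reverse-mode primitive for sin: primal value plus a pullback
    mapping the output cotangent to the input cotangent."""
    y = math.sin(x)
    def pullback(ybar):
        return ybar * math.cos(x)
    return y, pullback

def rrule_square(x):
    """Reverse-mode primitive for x^2."""
    y = x * x
    def pullback(ybar):
        return ybar * 2.0 * x
    return y, pullback

def grad_sin_squared(x):
    """Differentiate sin(x)^2 by composing the two pullbacks (chain rule)."""
    s, pb_sin = rrule_sin(x)
    _, pb_sq = rrule_square(s)
    return pb_sin(pb_sq(1.0))   # seed the final output cotangent with 1
```

The primitives are independent of how the surrounding AD system traces code, which is exactly why one collection of rules can be shared by Zygote, Nabla, and the other systems listed above.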
false
https://pretalx.com/juliacon2021/talk/LWVB39/
https://pretalx.com/juliacon2021/talk/LWVB39/feedback/
Green
Enzyme.jl -- Reverse mode differentiation on LLVM IR for Julia
Talk
2021-07-28T17:00:00+00:00
17:00
00:30
Enzyme (https://enzyme.mit.edu) is a reverse-mode auto-differentiation tool that performs automatic differentiation over LLVM intermediate representation and synthesizes high-performance reverse-mode functions. We will discuss how Enzyme.jl integrates with the Julia compiler and the special considerations required for differentiating a dynamic programming language such as Julia.
juliacon2021-9725-enzyme-jl-reverse-mode-differentiation-on-llvm-ir-for-julia
Valentin ChuravyWilliam Moses
en
Automatic differentiation (AD) is key to training neural networks, Bayesian inference, and scientific computing. This talk presents Enzyme.jl, a Julia frontend for the Enzyme high performance LLVM automatic differentiation (AD) toolkit. By operating at a low level, Enzyme is able to run optimizations prior to differentiation and is therefore highly efficient on scalar code and can support mutation out of the box. We explain how Enzyme.jl integrates with the Julia compiler, supports synthesis for Julia GPU kernels, and propagates Julia knowledge of types to the lower-level tool. We will discuss ongoing work to extend Enzyme.jl to be able to differentiate through Julia language features like dynamic calls and garbage collection. We will conclude by describing the potential of combining high level and low level systems to get the benefit of both algebraic and instruction level optimizations, and using Enzyme.jl in other AD systems such as Zygote.jl or Diffractor.jl to perform differentiation of foreign function calls, enabling cross-language AD.
false
https://pretalx.com/juliacon2021/talk/UDJ7SJ/
https://pretalx.com/juliacon2021/talk/UDJ7SJ/feedback/
Green
A Tour of the differentiable programming landscape with Flux.jl
Lightning talk
2021-07-28T17:30:00+00:00
17:30
00:10
Deep learning has grown steadily, and there has been rising interest from various groups in incorporating ML techniques into their modelling via differentiable programming. Software 2.0, as it's known, is going to need a large resource pool of tools to actualise its goal. In this talk, we will discuss how the Flux.jl stack, along with Zygote and next-gen AD tooling, is already enabling differentiable programming in a variety of domains, and take a tour of the packages and projects taking part in it.
juliacon2021-9938-a-tour-of-the-differentiable-programming-landscape-with-flux-jl
Dhairya Gandhi
en
Machine Learning has come a long way in the past decade. With differentiable programming we have seen a renewed interest from numerous communities to apply ML techniques to diverse fields through scientific machine learning. Traditional deep learning has seen many strides with larger, more compute-intensive models which need increasingly complex training routines that push the boundaries of the current state-of-the-art.
In this talk, we will go through the depth of the machine learning and differentiable programming ecosystem in Julia through the [FluxML](https://github.com/FluxML) stack. We shall discuss the various tools and features available to users through the advances in the ecosystem, and the next-gen tooling required to make even more expressive modelling possible in Julia.
We will also take note of the new packages and techniques being developed in domains such as differentiable physics, [chemistry](https://github.com/aced-differentiate/AtomicGraphNets.jl), graph networks, [molecular simulation](https://juliamolsim.github.io/Molly.jl/stable/differentiable/) and [multi-GPU training](https://julialang.org/jsoc/gsoc/hpc/#distributed_training) etc.
We will also talk about the development effort in the [Flux](https://github.com/FluxML/Flux.jl) stack including performance enhancements, better coverage of CUDA, NNlib optimisations for the CPU and the new composable and functional optimisers via [Optimisers.jl](https://github.com/FluxML/Optimisers.jl) etc.
false
https://pretalx.com/juliacon2021/talk/ZEV3MR/
https://pretalx.com/juliacon2021/talk/ZEV3MR/feedback/
Green
Learning to align with differentiable dynamic programming
Lightning talk
2021-07-28T17:40:00+00:00
17:40
00:10
The alignment of two or more biological sequences is one of the main workhorses in bioinformatics because it can quantify similarity and reveal conserved patterns. We provide a differentiable version of the two most popular algorithms for sequence alignment: the Needleman–Wunsch and Smith–Waterman algorithms. Using ChainRulesCore.jl, the gradients can be used directly in combination with bioinformatics and machine learning libraries.
juliacon2021-9716-learning-to-align-with-differentiable-dynamic-programming
Michiel Stock
en
The alignment of two or more biological sequences is one of the main workhorses in bioinformatics because it can quantify similarity and reveal conserved patterns. Dynamic programming allows for rapidly computing the optimal alignment between two sequences by recursively splitting the problem into smaller tractable choices, i.e., deciding whether it is best to extend a current alignment or introduce a gap in one of the sequences. This process leads to the optimal alignment score, and backtracking yields the optimal alignment. Starting from a collection of pairwise alignments, one can heuristically compute a multiple sequence alignment of many sequences. If one is interested in the effect of a small change in the alignment parameters or the sequences, one has to compute the gradient of the alignment score with respect to these inputs. Regrettably, computing this gradient is not possible because the individual maximisation (minimisation) steps in the dynamic programming are non-differentiable.
However, Mensch and Blondel recently showed that by smoothing the maximum operator, for example, by regularising with an entropic term, one can design fully differentiable dynamic programming algorithms. The individual smoothed maximum operators have various desirable properties, such as being efficient to compute, sparsity, or a probabilistic interpretation. Building on this work, we created differentiable versions of the Needleman–Wunsch and Smith–Waterman algorithms. Using ChainRulesCore.jl, we made this gradient compatible with Julia's autodiff ecosystem.
The resulting gradient has an immediate diagnostic and statistical interpretation, such as computing the Fisher information to create uncertainty estimates. Furthermore, it enables us to use sequence alignment in differentiable computing, allowing one to learn an optimal substitution matrix and gap cost from a set of homologous sequences. The flexibility allows these parameters to vary at different regions in the sequences, for example, depending on the secondary structure. One can also turn this around: fix the alignment parameters and optimise the sequences for alignment. This scheme allows for finding consensus sequences, which can be useful in creating a multiple sequence alignment. More broadly, our algorithm can be incorporated in arbitrary artificial neural network architectures (using, e.g., Flux.jl), making it an attractive alternative to the popular convolutional neural networks, LSTMs or transformer networks currently used to learn from biological sequences.
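The construction described above can be sketched as follows. This is illustrative Python, not the talk's Julia implementation: the entropy-regularised maximum becomes a temperature-scaled log-sum-exp, and substituting it into the Needleman–Wunsch recursion makes the score a smooth function of the parameters (the parameter names here are ours):

```python
import math

def smoothmax(vals, gamma=1.0):
    """Entropy-regularised max: gamma * logsumexp(v / gamma).
    Recovers the hard max as gamma -> 0; smooth for gamma > 0."""
    m = max(vals)
    return m + gamma * math.log(sum(math.exp((v - m) / gamma) for v in vals))

def nw_score(a, b, match=1.0, mismatch=-1.0, gap=-1.0, gamma=1.0):
    """Needleman-Wunsch score with the max replaced by smoothmax,
    making the score differentiable in match/mismatch/gap."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap                 # leading gaps in b
    for j in range(1, m + 1):
        D[0][j] = j * gap                 # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            D[i][j] = smoothmax(
                [D[i - 1][j - 1] + s,     # extend the alignment
                 D[i - 1][j] + gap,       # gap in b
                 D[i][j - 1] + gap],      # gap in a
                gamma)
    return D[n][m]
```

As gamma approaches zero the smooth score approaches the classical Needleman–Wunsch score, while for gamma > 0 every operation is differentiable, so gradients with respect to the scoring parameters can flow through the whole recursion.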
false
https://pretalx.com/juliacon2021/talk/QB8EC8/
https://pretalx.com/juliacon2021/talk/QB8EC8/feedback/
Green
Partitions and chains: enabling batch processing for your data
Lightning talk
2021-07-28T17:50:00+00:00
17:50
00:10
While big data isn't new anymore, building efficient pipelines to parse, analyze, transform, aggregate, and save all this data is still a tricky business. Come learn about new tools across the JuliaData family of packages for batch processing data, allowing automatic use of multithreading for data processing tasks.
juliacon2021-9930-partitions-and-chains-enabling-batch-processing-for-your-data
Jacob Quinn
en
I want to give an overview of the next "phase" of functionality we've been building across the data ecosystem and some walk-throughs of how the functionality is already being leveraged, including:
* The ChainedVector array type, which allows treating "batches" of arrays as one long array, while allowing efficient multithreading and other concurrent operations on the data automatically
* Tables.partitions: The Tables.jl package now supports "batches" of data for sinks to process, with a focus on enabling multithreaded sink processing of source partitions
* The TableOperations.jl package provides the `makepartitions` and `joinpartitions` utility functions for facilitating working with partitions and your data
* Examples of how packages are already taking advantage: Arrow.jl, CSV.jl, JuliaDB.jl, Parquet.jl, and Avro.jl
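The idea behind ChainedVector, several batches presented as one long array without copying, can be sketched in a few lines (a Python analogue of the concept, not the Julia API):

```python
from bisect import bisect_right
from itertools import accumulate

class Chained:
    """Present several batches as one long array, indexed without
    copying: a toy analogue of the ChainedVector idea."""
    def __init__(self, batches):
        self.batches = batches
        # cumulative end-offset of each batch, e.g. [2, 3, 6]
        self.offsets = list(accumulate(len(b) for b in batches))

    def __len__(self):
        return self.offsets[-1] if self.offsets else 0

    def __getitem__(self, i):
        k = bisect_right(self.offsets, i)          # which batch holds index i?
        prev = self.offsets[k - 1] if k else 0     # global start of that batch
        return self.batches[k][i - prev]

v = Chained([[1, 2], [3], [4, 5, 6]])
```

Because each batch stays an independent block, separate threads can process different batches concurrently while consumers still see one logical array; indexing is just a binary search over the cumulative offsets.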
false
https://pretalx.com/juliacon2021/talk/Z7ZLTP/
https://pretalx.com/juliacon2021/talk/Z7ZLTP/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-28T18:00:00+00:00
18:00
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite.
juliacon2021-11882-gathertown-social-break
en
true
https://pretalx.com/juliacon2021/talk/SLGTWB/
https://pretalx.com/juliacon2021/talk/SLGTWB/feedback/
Green
Building on AlphaZero with Julia
Talk
2021-07-28T19:00:00+00:00
19:00
00:30
In this talk, we give an introduction to the AlphaZero algorithm and discuss some research challenges of using it beyond board games. In an effort to make this algorithm widely accessible to students and researchers, we introduce [AlphaZero.jl](https://github.com/jonathan-laurent/AlphaZero.jl). We show how this package leverages Julia's strengths to provide an implementation that is simple and flexible, while being up to two orders of magnitude faster than comparable Python implementations.
juliacon2021-9489-building-on-alphazero-with-julia
/media/juliacon2021/submissions/NLG9FQ/logo-text_Es5VNyy.png
Jonathan Laurent
en
DeepMind's AlphaZero algorithm illustrates a general methodology of combining learning and search to solve complex combinatorial problems. Yet, despite its much-publicized success at the game of Go and a wide range of potential applications, few researchers have managed to build on it.
In an effort to make AlphaZero widely accessible to students and researchers, we introduce [AlphaZero.jl](https://github.com/jonathan-laurent/AlphaZero.jl). Leveraging Julia's unique strengths, this package provides an implementation of DeepMind's algorithm that is simple and flexible, while being up to two orders of magnitude faster than comparable Python implementations.
In this talk, we give a short lecture on the AlphaZero algorithm and discuss some research challenges of using it to solve problems beyond board games. Then, we introduce our [AlphaZero.jl](https://github.com/jonathan-laurent/AlphaZero.jl) package. We show how Julia enables a unique combination of simplicity, flexibility and speed, while also identifying areas in which improvements to the Julia ecosystem could lead to further performance gains. We conclude the talk with more general thoughts on how we believe Julia can have a transformative impact on reinforcement-learning research.
false
https://pretalx.com/juliacon2021/talk/NLG9FQ/
https://pretalx.com/juliacon2021/talk/NLG9FQ/feedback/
Green
Bayesian Neural Ordinary Differential Equations
Talk
2021-07-28T19:30:00+00:00
19:30
00:30
We answer the question: “Can Bayesian learning frameworks be integrated with Neural ODEs to robustly quantify the uncertainty in the weights of a Neural ODE?” for the following categories of inference methods: (a) the NUTS sampler and stochastic frameworks like (b) SGLD and (c) SGHMC. We test these methods on physical systems and ML datasets like MNIST. Finally, we demonstrate probabilistic, symbolic recovery of missing terms from dynamical systems using universal ODEs.
juliacon2021-9764-bayesian-neural-ordinary-differential-equations
Raj Dandekar
en
Recently, Neural Ordinary Differential Equations have emerged as a powerful framework for modeling physical simulations without explicitly defining the ODEs governing the system, instead learning them via machine learning. However, the question: “Can Bayesian learning frameworks be integrated with Neural ODEs to robustly quantify the uncertainty in the weights of a Neural ODE?” remains unanswered. In an effort to address this question, we primarily evaluate the following categories of inference methods: (a) The No-U-Turn MCMC sampler (NUTS), (b) Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) and (c) Stochastic Gradient Langevin Dynamics (SGLD). We demonstrate the successful integration of Neural ODEs with the above Bayesian inference frameworks on classical physical systems, as well as on standard machine learning datasets like MNIST, using GPU acceleration. On the MNIST dataset, we achieve a posterior sample accuracy of 98.5% on the test ensemble of 10,000 images. This performance is competitive with current state-of-the-art image classification methods, which meanwhile lack our method's ability to quantify the confidence in its predictions.
Subsequently, for the first time, we demonstrate the successful integration of variational inference with normalizing flows and Neural ODEs, leading to a powerful Bayesian Neural ODE object.
Finally, considering a predator-prey model and an epidemiological system, we demonstrate the probabilistic identification of model specification in partially-described dynamical systems using universal ordinary differential equations. Together, this gives a scientific machine learning tool for probabilistic estimation of epistemic uncertainties.
In this study, we used the Julia differentiable programming stack to compose the Julia differential equation solvers with the Turing probabilistic programming language. The study was performed without modifications to the underlying libraries due to the composability afforded by the differentiable programming stack.
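To give a flavour of the simplest of the stochastic samplers evaluated, here is a minimal SGLD sketch on a toy one-dimensional standard-normal target. This is purely illustrative: the study's actual models compose Turing.jl with the Julia differential equation solvers, and every name below is made up.

```julia
using Random, Statistics

# One SGLD step: θ ← θ - (ε/2)·∇U(θ) + √ε·η with η ~ N(0, I),
# where U is the negative log-posterior.
sgld_step(θ, gradU, ε, rng) =
    θ .- (ε / 2) .* gradU(θ) .+ sqrt(ε) .* randn(rng, length(θ))

# Toy target: U(θ) = θ²/2, i.e. a standard normal posterior, so ∇U(θ) = θ.
function run_sgld(; steps=50_000, ε=0.05, rng=MersenneTwister(1))
    θ = [0.0]
    samples = Float64[]
    for _ in 1:steps
        θ = sgld_step(θ, identity, ε, rng)
        push!(samples, θ[1])
    end
    return samples
end

samples = run_sgld()
m, v = mean(samples), var(samples)   # should approximate 0 and 1
```

In the real setting, `gradU` would be the gradient of the negative log-posterior of the Neural ODE weights, obtained through the differentiable programming stack.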
false
https://pretalx.com/juliacon2021/talk/FJLE7U/
https://pretalx.com/juliacon2021/talk/FJLE7U/feedback/
Green
POMDPs.jl and Interactive Assignments in Julia
Lightning talk
2021-07-28T20:10:00+00:00
20:10
00:10
POMDPs.jl is a leading research tool for partially observable Markov decision processes that also enables new teaching opportunities. This talk will describe POMDPs.jl and the Decision Making under Uncertainty class at CU Boulder. Each assignment in this class includes an open-ended challenge problem where students implement algorithms in Julia that are auto-graded. The system enables challenging assignments such as programming MCTS with a 100ms time limit and DQN for reinforcement learning.
juliacon2021-9929-pomdps-jl-and-interactive-assignments-in-julia
/media/juliacon2021/submissions/HLFY9G/16347008_aDjo62a.png
Zachary Sunberg
en
The course materials website, including notes and homework assignments, is located here: https://github.com/zsunberg/CU-DMU-Materials, and the Julia package for the course is located here: https://github.com/zsunberg/DMUStudent.jl. The algorithms that the students implement in Julia include Value Iteration, Monte Carlo Tree Search, DQN, and QMDP. The algorithms are graded on the students' machine to ease debugging. This talk will give a very brief overview of POMDPs.jl, and discuss the course, what went well, and what aspects turned out to be challenging.
(this talk could be expanded into a 30 minute talk if there is enough interest).
false
https://pretalx.com/juliacon2021/talk/HLFY9G/
https://pretalx.com/juliacon2021/talk/HLFY9G/feedback/
Green
Probabilistic Model Checking using POMDPModelChecking.jl
Lightning talk
2021-07-28T20:20:00+00:00
20:20
00:10
Autonomous systems are often required to operate in partially observable environments. They must reliably execute a specified objective even with incomplete information about the state of the environment. Model checking allows us to synthesize a decision policy that satisfies a linear temporal logic (LTL) formula in a POMDP. By reformulating the model checking problem into an AI planning problem, we can use state-of-the-art POMDP planning algorithms to solve model checking problems.
juliacon2021-9952-probabilistic-model-checking-using-pomdpmodelchecking-jl
Maxime Bouton
en
In this talk we will show how we built a model checking library in a few lines of Julia by integrating an LTL manipulation library into the JuliaPOMDP ecosystem. With this library we can compute decision policies with probabilistic guarantees for a wide range of partially observable problems: drone surveillance, robot exploration, and pedestrian avoidance for autonomous driving.
This lightning talk will be organized as follows:
- Introduction to the problem of POMDP/MDP model checking: quick overview of JuliaPOMDP [1]
- Introduction to linear temporal logic manipulation using Spot.jl [2]: Spot.jl is a wrapper of spot [3], a C++ library for LTL manipulation. The Julia wrapper is built using CxxWrap.jl. We will demonstrate some visual examples of how spot is used to convert a temporal logic formula into a finite state machine and how we can visualize it (material will be inspired by the Spot.jl tutorial but remodeled to fit the talk format).
- Introduction to POMDPModelChecking.jl [4]: we will show how we can reuse the whole JuliaPOMDP ecosystem to solve model checking problems. Our library exposes two solvers (ModelCheckingSolver and ReachabilitySolver), which take as input any Julia POMDP model and an LTL formula and output a policy. Internally, the solver creates a new POMDP model which is a composition of the original model and a finite state machine created by Spot.jl. This new model can then be solved by any JuliaPOMDP planning algorithm. The theoretical justification of reformulating the model checking problem into a planning problem has been detailed in previous work [5].
- Gallery: We will show visual examples of decision policies computed using POMDPModelChecking.jl on the rock sample POMDP problem [6].
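The product construction at the heart of the solver can be conveyed with a toy, self-contained sketch (made-up names, not the POMDPModelChecking.jl API): an automaton for the formula “eventually goal” is paired with a trivial environment, and the product state tracks both at once.

```julia
# Deterministic automaton for the LTL formula "eventually goal":
# state 1 = obligation pending, state 2 = accepting (absorbing).
aut_step(q, label) = (q == 2 || label == :goal) ? 2 : 1

# A product-model transition pairs an environment transition with an automaton move.
function product_step(s, q, env_step, labels)
    s2 = env_step(s)
    return s2, aut_step(q, labels(s2))
end

# Toy environment: walk right on a line; cell 3 carries the :goal label.
env_step(s) = s + 1
labels(s) = s == 3 ? :goal : :none

function run_product(; steps=3)
    s, q = 1, 1
    trajectory = [(s, q)]
    for _ in 1:steps
        s, q = product_step(s, q, env_step, labels)
        push!(trajectory, (s, q))
    end
    return trajectory
end

traj = run_product()   # the automaton component flips to 2 once :goal is seen
```

In the real library the environment is a (PO)MDP and the automaton comes from Spot.jl, but the pairing of states is the same idea.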
References:
[1] https://github.com/JuliaPOMDP
[2] https://github.com/sisl/Spot.jl
[3] https://spot.lrde.epita.fr/index.html
[4] https://github.com/sisl/POMDPModelChecking.jl
[5] M. Bouton, J. Tumova, and M. J. Kochenderfer, "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes," in AAAI Conference on Artificial Intelligence (AAAI), 2020.
[6] https://github.com/JuliaPOMDP/RockSample.jl
[7] M. Bouton, "Safe and Scalable Planning Under Uncertainty for Autonomous Driving", PhD thesis, Stanford University, 2020.
false
https://pretalx.com/juliacon2021/talk/7GYDRZ/
https://pretalx.com/juliacon2021/talk/7GYDRZ/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-28T20:30:00+00:00
20:30
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite.
juliacon2021-11889-gathertown-social-break
en
false
https://pretalx.com/juliacon2021/talk/YXJ8YQ/
https://pretalx.com/juliacon2021/talk/YXJ8YQ/feedback/
Red
Put some constraints into your life with JuliaCon(straints)
Talk
2021-07-28T12:30:00+00:00
12:30
00:30
The freshly born JuliaConstraints GitHub organization provides a combination of packages around the theme of Constraint Programming and Combinatorial Optimization.
This talk introduces the whole ecosystem of JuliaConstraints packages and its main dependencies. It focuses on the LocalSearchSolvers.jl framework (and CBLS.jl, its interface with JuMP) for Constraint-Based Local Search. We also cover the utility packages that we hope to share with the Julia and Constraint Programming communities.
juliacon2021-9868-put-some-constraints-into-your-life-with-juliacon-straints-
/media/juliacon2021/submissions/8LL9QH/jc-logo_1024x1024_0Z2zqxf.png
Jean-François BAFFIER (azzaare@github)
en
Problem-solving often consists of two actions: model and solve. The holy grail of Constraint Programming is to have the human (user) model the problem and have the machine (solver) solve it. All the smartness should be in the solver.
**JuliaConstraints**, a freshly hatched GitHub organization, is a first attempt to provide common grounds to the growing Constraint Programming community in Julia while tackling that holy grail.
We will approach the different blocks of the ecosystem through the lens of shared interfaces, shared instances and models, and shared internals. We will illustrate the use, pros and cons of problem-solving through Constraint Programming with different solvers and frameworks such as *ConstraintSolver.jl* and *LocalSearchSolvers.jl*.
A possible common interface, building on the popular *JuMP.jl*, is already available for some solvers. An attempt to write shared models in JuMP syntax has just started as *ConstraintModels.jl*. Various problems have been modeled such as:
- sudoku
- n-queens
- magic square
- chemical equilibrium
- quadratic assignment
- golomb ruler
- minimum and maximum cuts in networks
- traveling salesman problem
- scheduling
A store of instances, generators, and global information about combinatorial optimization problems is also available as *COPInstances.jl* (tentative name, WIP). This package aims for a larger audience than just CP solvers, and we would be glad to see it grow to serve other optimization packages.
Finally, JuliaConstraints also hosts some internal packages, mainly used within the *LocalSearchSolvers.jl* framework, but with the hope that some parts can be shared with other solvers:
- *Constraints.jl*: a store of usual constraints in CP
- *ConstraintDomains.jl*: structures and methods for the domain of variables
- *CompositionalNetworks.jl*: glass-box neural networks for scalable compositions of functions
- A very nice logo with chains and Julia (in)famous colored dots
There is an extensive list of incredible Julia packages and internal methods that provide all the computational power and the expressive syntax of the Constraint Programming ecosystem in Julia. We will also highlight the key external dependencies such as JuMP, Evolutionary, Dictionaries, Base.Threads, and more!
Incidentally, we will try to have some fun with an interactive model session (if interactivity is allowed in the COVID-19 context) for LocalSearchSolvers.jl. Did we mention that the solving speed scales superlinearly with the number of threads/processes?
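To give a self-contained taste of Constraint-Based Local Search on one of the problems listed above, here is a minimal pure-Julia n-queens sketch. It only illustrates the idea of minimizing a conflict count via local moves; it is not the LocalSearchSolvers.jl API, and all names are made up.

```julia
using Random

# queens[i] = column of the queen in row i; keeping it a permutation rules out
# row and column clashes, so only diagonal conflicts remain to be minimized.
conflicts(q) = count(abs(q[i] - q[j]) == j - i
                     for i in 1:length(q)-1 for j in i+1:length(q))

function solve_queens(n; rng=MersenneTwister(0), iters=100_000)
    q = shuffle(rng, collect(1:n))
    c = conflicts(q)
    for _ in 1:iters
        c == 0 && return q
        i, j = rand(rng, 1:n), rand(rng, 1:n)
        q[i], q[j] = q[j], q[i]          # try a random swap move
        c2 = conflicts(q)
        if c2 <= c
            c = c2                       # accept improving and sideways moves
        else
            q[i], q[j] = q[j], q[i]      # undo worsening moves
        end
    end
    return nothing                       # no solution found within the budget
end
```

A real CBLS framework would maintain the conflict count incrementally instead of recomputing it, which is where most of the engineering lives.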
false
https://pretalx.com/juliacon2021/talk/8LL9QH/
https://pretalx.com/juliacon2021/talk/8LL9QH/feedback/
Red
Julog.jl: Prolog-like Logic Programming in Julia
Lightning talk
2021-07-28T13:00:00+00:00
13:00
00:10
Julog.jl is a library and domain-specific language for Prolog-like logic programming in Julia. This lightning talk will introduce logic programming at a high level, how Julog can be used to solve first-order logic problems, how its functionality can be integrated with custom Julia functions, downstream use cases, and some next steps for making logic and constraint programming fast and accessible for Julia users.
juliacon2021-9864-julog-jl-prolog-like-logic-programming-in-julia
Xuan (Tan Zhi Xuan)
en
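To convey the flavour of the Horn-clause reasoning involved, here is a plain-Julia analogue of the classic ancestry query. Julog.jl itself would express this declaratively with `@julog` clauses and query it with `resolve`; the sketch below is only an analogy in ordinary Julia.

```julia
# Facts: parent(X, Y) pairs.
const parents = [("abraham", "isaac"), ("isaac", "jacob")]

# Rule, mirroring the two Horn clauses
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
ancestor(x, y) =
    any(p == x && c == y for (p, c) in parents) ||
    any(p == x && ancestor(c, y) for (p, c) in parents)
```

The point of a logic programming DSL is that the declarative clauses above are the whole program: the search and unification machinery is supplied by the engine.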
false
https://pretalx.com/juliacon2021/talk/3PGHMY/
https://pretalx.com/juliacon2021/talk/3PGHMY/feedback/
Red
Solving discrete problems via Boolean satisfiability with Julia
Lightning talk
2021-07-28T13:10:00+00:00
13:10
00:10
Many discrete problems in mathematics and computer science can be encoded into Boolean satisfiability (SAT) problems, and then solved by one of the many SAT "solvers" written in C or C++, which are now capable of solving problems with millions of variables.
In order to understand the algorithms and trade-offs involved, we developed a simple SAT solver in pure Julia that is performant for small systems. We also have developed simple tools to encode discrete problems like sudoku into SAT.
juliacon2021-9822-solving-discrete-problems-via-boolean-satisfiability-with-julia
David P. Sanders
en
Many discrete problems in computer science can be encoded into Boolean satisfiability (SAT) problems. In such problems, all variables are Boolean (true or false), but are restricted by *constraints* between the Boolean variables.
Over the last 50 years there have been remarkable developments in understanding how to solve these constraint satisfaction problems, and many open-source solvers have been developed, some of which have been wrapped in Julia; these solvers are capable of handling problems with millions of variables. However, their code is often difficult to understand and modify.
In order to increase the awareness and accessibility of SAT solvers in the community, and to encourage experimentation, we developed a simple solver in pure Julia that is performant for small systems.
We have also developed a tool that allows us to write down discrete problems, such as sudoku, symbolically in Julia and encode them into SAT.
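The core of such a solver fits in a few dozen lines of plain Julia. The following minimal DPLL-style sketch is illustrative only (it is not the talk's actual solver, which is more sophisticated):

```julia
# A formula is a vector of clauses; a clause is a vector of nonzero Ints,
# where n stands for variable n and -n for its negation (DIMACS-style).
function dpll(clauses, assignment=Dict{Int,Bool}())
    simplified = Vector{Vector{Int}}()
    for clause in clauses
        newclause = Int[]
        satisfied = false
        for lit in clause
            v = abs(lit)
            if haskey(assignment, v)
                if assignment[v] == (lit > 0)
                    satisfied = true
                    break
                end
                # literal is false under the assignment: drop it
            else
                push!(newclause, lit)
            end
        end
        satisfied && continue
        isempty(newclause) && return nothing      # empty clause: conflict
        push!(simplified, newclause)
    end
    isempty(simplified) && return assignment      # every clause satisfied
    lit = simplified[1][1]                        # branch on an unassigned literal
    for val in (lit > 0, lit <= 0)                # try satisfying it first
        result = dpll(simplified, merge(assignment, Dict(abs(lit) => val)))
        result !== nothing && return result
    end
    return nothing                                # both branches failed: UNSAT
end

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x3)
model = dpll([[1, 2], [-1, 3], [-3]])
```

Modern solvers add unit propagation, clause learning, and smart branching heuristics on top of this skeleton.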
false
https://pretalx.com/juliacon2021/talk/GNB93V/
https://pretalx.com/juliacon2021/talk/GNB93V/feedback/
Red
Running Programs Forwards, Backwards, and Everything In Between
Lightning talk
2021-07-28T13:20:00+00:00
13:20
00:10
Every method defines a relation, which contains all the information we need to query possible values of any of the inputs or outputs given information on the others. This talk introduces parametric relational programming, which, given a method M, information on any of M's variables, and a query set Q of variables of interest, compiles a new method M̂ that computes possible values of the variables in Q. This unifies forward and inverse execution (and everything in between) as forms of inference.
juliacon2021-9953-running-programs-forwards-backwards-and-everything-in-between
Zenna Tavares
en
This talk should be of interest to people interested in any of:
- Compiler transformations
- Probabilistic programming
- Inference and machine learning
false
https://pretalx.com/juliacon2021/talk/LRHPUH/
https://pretalx.com/juliacon2021/talk/LRHPUH/feedback/
Red
FunSQL: a library for compositional construction of SQL queries
Talk
2021-07-28T13:30:00+00:00
13:30
00:30
Julia programmers sometimes need to interrogate data with the Structured Query Language (SQL). But SQL is notoriously hard to write in a modular fashion. There is no way to reuse SQL query fragments among different queries.
FunSQL exposes the full expressive power of SQL with compositional semantics. FunSQL allows you to build queries incrementally from small independent fragments. This approach is particularly useful for building applications that programmatically construct SQL queries.
juliacon2021-9875-funsql-a-library-for-compositional-construction-of-sql-queries
Kyrylo SimonovClark C. Evans
en
To introduce FunSQL, we will construct a practical query from healthcare informatics and then discuss how it works. We use a fragment of the [OMOP Common Data Model](https://github.com/OHDSI/CommonDataModel), a cross-platform database model for observational healthcare data.
As is typical in healthcare, this schema is patient-centric. The table `person` contains de-identified information about patients including the unique identifier, approximate birthdate, and demographic information. To make this table available for FunSQL, we define it as follows.
<pre>
const person =
SQLTable(:person, columns = [:person_id, :year_of_birth, :location_id])
</pre>
The `person` table has a foreign key to `location`, which specifies geographic location, typically down to a zip code.
<pre>
const location =
SQLTable(:location, columns = [:location_id, :city, :state, :zip])
</pre>
Each person is associated with clinical events: encounters with care providers, recorded observations, diagnosed conditions, performed procedures, etc. We will represent one of them.
<pre>
const visit_occurrence =
SQLTable(:visit_occurrence, columns = [:visit_occurrence_id, :person_id, :visit_start_date])
</pre>
With this background in place, let us suppose a physician scientist asks:
*When was the last time each person, born in 2000 or earlier and living in Illinois, was seen by a care provider?*
This research question could be answered using FunSQL.
<pre>
From(person) |>
Where(Get.year_of_birth .<= 2000) |>
Join(:location => From(location),
on = (Get.location_id .== Get.location.location_id)) |>
Where(Get.location.state .== "IL") |>
Join(:visit_group => From(visit_occurrence) |>
Group(Get.person_id),
on = (Get.person_id .== Get.visit_group.person_id),
left = true) |>
Select(Get.person_id,
:max_visit_start_date =>
Get.visit_group |> Agg.Max(Get.visit_start_date))
</pre>
FunSQL provides operations with familiar SQL names such as `From`, `Where`, `Join`, `Group`, and `Select`, which can be chained together using the `|>` operator. The notation `:location => From(location)`, and its counterpart `Get.location.state`, lets us arrange table attributes hierarchically. Most importantly, the query can be constructed and tested incrementally, one operation at a time.
Contrast this with a hand-crafted SQL query.
<pre>
SELECT p.person_id, MAX(vo.visit_start_date)
FROM person p
JOIN location l ON (p.location_id = l.location_id)
LEFT JOIN visit_occurrence vo ON (p.person_id = vo.person_id)
WHERE (p.year_of_birth <= 2000) AND (l.state = 'IL')
GROUP BY p.person_id
</pre>
Although the SQL query is compact, it cannot be incrementally constructed. Indeed, if we follow the progression of the research question, we arrive at:
<pre>
FROM person p
WHERE (p.year_of_birth <= 2000)
JOIN location l ON (p.location_id = l.location_id)
...
</pre>
But this is not valid SQL. SQL enforces a rigid order of clauses: `FROM`, `JOIN`, `WHERE`, `GROUP BY`. As we refine a SQL query, attempting to incrementally correlate it with the research question, we are always forced to backtrack and rebuild it. This is what makes SQL tedious and error-prone.
FunSQL solves the problem of compositional query construction by representing individual operations as subqueries with a deferred `SELECT` list.
<pre>
q1 AS (SELECT ... FROM person)
q2 AS (SELECT ... FROM q1 WHERE q1.year_of_birth <= 2000)
q3 AS (SELECT ... FROM location)
q4 AS (SELECT ... FROM q2 JOIN q3 ON (q2.location_id = q3.location_id))
q5 AS (SELECT ... FROM q4 WHERE q4.state = 'IL')
q6 AS (SELECT ... FROM visit_occurrence)
q7 AS (SELECT ... FROM q6 GROUP BY q6.person_id)
q8 AS (SELECT ... FROM q5 LEFT JOIN q7 ON (q5.person_id = q7.person_id))
</pre>
The final subquery fixes the output columns.
<pre>
SELECT q8.person_id, q8.max_visit_start_date FROM q8
</pre>
Once the output columns are known, each deferred `SELECT` list can be resolved automatically. For instance, references `q1.year_of_birth`, `q2.location_id`, `q5.person_id` force `q1` to take the following form.
<pre>
q1 AS (SELECT person_id, year_of_birth, location_id FROM person)
</pre>
This `SELECT` resolution also propagates aggregate expressions. Thus, `q7` becomes:
<pre>
q7 AS (SELECT q6.person_id,
MAX(q6.visit_start_date) AS max_visit_start_date
FROM q6
GROUP BY q6.person_id)
</pre>
This approach provides a uniform compositional interface to the variety of SQL operations, preserving the expressive power of SQL while eliminating its stifling inflexibility.
For a Julia programmer, FunSQL realizes query operations as first-class objects. Treated as values, they can be generated independently, assembled into composite operations, and remixed as needed. FunSQL lets us construct queries systematically, converging upon the research questions we wish to ask our databases.
false
https://pretalx.com/juliacon2021/talk/FEG39B/
https://pretalx.com/juliacon2021/talk/FEG39B/feedback/
Red
TopOpt.jl: topology optimization software done right!
Talk
2021-07-28T16:30:00+00:00
16:30
00:30
Topology optimization is a field lacking in good software tools. Most available software in this field either can’t be installed easily on all operating systems, supports only one or a few simple types of problems, implements only one or a few types of algorithms, lacks modularity and a decent API, lacks performance, or all of the above! TopOpt.jl is a Julian attempt to provide a modular, flexible and high performance tool for topology optimization researchers.
juliacon2021-9590-topopt-jl-topology-optimization-software-done-right-
Mohamed Tarek
en
Topology optimization is a field that combines physics simulation and mathematical optimization to optimize the shapes and designs of physical systems. It is an extremely rich and fast-growing field with its roots in structural and solid mechanics design, but it is quickly expanding into other areas of physics and engineering. Being a fast-growing research field, there is still no consensus on what functionality must be available in a decent topology optimization package. The ability to easily experiment with existing algorithms and to easily define new problems to apply algorithms to is something that TopOpt.jl takes to a whole new level. Manually deriving gradients of long chained functions still embarrassingly makes up half of almost every important topology optimization paper in the field to this day! TopOpt.jl hopes to eliminate the need for this using automatic differentiation (Zygote.jl). Some custom adjoint rules are necessary for efficiency, but automatic differentiation makes the software design and the API for defining custom adjoints much more pleasant than the status quo of re-inventing automatic differentiation for every new objective, constraint, sub-function, physical system, etc. The modular design of TopOpt.jl also allows a near-complete segregation of the objective and constraint definitions from the mathematical optimization algorithm implementations, which enables both to grow independently, appealing to different audiences with different sets of expertise.
false
https://pretalx.com/juliacon2021/talk/XV3AH8/
https://pretalx.com/juliacon2021/talk/XV3AH8/feedback/
Red
FrankWolfe.jl: scalable constrained optimization
Talk
2021-07-28T17:00:00+00:00
17:00
00:30
We present FrankWolfe.jl, a new Julia package implementing several Frank-Wolfe algorithms to optimize differentiable functions with convex constraints.
The Julia optimization ecosystem includes toolboxes for unconstrained optimization on one hand and domain-specific modelling languages for constrained optimization on the other hand.
This package offers the possibility to optimize functions defined as Julia code with DSL-based closed-form or arbitrary convex constraints in an efficient manner.
juliacon2021-9804-frankwolfe-jl-scalable-constrained-optimization
Mathieu Besançon
en
For large-scale and data-intensive optimization, first-order methods are often a favoured choice, motivated by faster iterations and lower memory requirements.
Frank-Wolfe algorithms allow the optimization of a differentiable function over a convex set, solving a linear optimization problem at each iteration to determine a progress direction.
Each of these linear subproblems is much cheaper than the quadratic subproblems solved by projected gradient algorithms.
The talk will present the package and how it fills an unaddressed spot in the Julia optimization landscape, comparing it with DSL approaches such as JuMP, Convex.jl, and StructuredOptimization.jl, and with other smooth optimization frameworks such as Optim.jl and JuliaSmoothOptimizers.
After a quick overview of the algorithm, we will cover some interesting properties of specific optimization problems, in particular the solution sparsity preserved throughout the whole optimization process.
Sparsity means in particular that the iterates are a convex combination of a small number of extreme points of the feasible set, which can result in low-rank matrices, sparse arrays, or other specific structures depending on the feasible set.
In the last part of the talk, we will cover some insights gained from the development of the package on building generic algorithms, in particular how to handle vertices that are only assumed to belong to a vector space, not necessarily one of finite dimension.
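To make the iteration concrete, here is a textbook Frank-Wolfe loop over the probability simplex, where the linear subproblem has a closed-form vertex solution. This is an illustrative sketch under simplified assumptions, not the FrankWolfe.jl API.

```julia
# minimize f(x) = ‖x − b‖² over the simplex {x ≥ 0, Σx = 1}.
function frank_wolfe_simplex(b; iters=500)
    n = length(b)
    x = fill(1.0 / n, n)                  # start at the barycenter
    for t in 0:iters-1
        g = 2 .* (x .- b)                 # ∇f(x)
        v = zeros(n)
        v[argmin(g)] = 1.0                # linear subproblem: best simplex vertex
        γ = 2.0 / (t + 2)                 # classic step-size rule
        x = (1 - γ) .* x .+ γ .* v        # iterate stays a convex combination of vertices
    end
    return x
end

b = [0.7, 0.2, 0.1]
x = frank_wolfe_simplex(b)   # converges toward b, which lies in the simplex
```

Note how each iterate is built from at most one new vertex per step: this is exactly the sparsity property discussed above.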
false
https://pretalx.com/juliacon2021/talk/99GSDN/
https://pretalx.com/juliacon2021/talk/99GSDN/feedback/
Red
Modelling cryptographic side-channels with Julia types
Lightning talk
2021-07-28T17:30:00+00:00
17:30
00:10
In cryptographic embedded systems, power-line or RF emissions can leak secrets. We use Julia to model both attacks and defenses. Some of our custom integer and array types record information observable by attackers, such as Hamming weights of values. Others implement counter-measures, such as masking values across randomized shares. Julia’s parametric type system conveniently allows us to stack these types without syntactic overhead when exploring or teaching side-channel security.
juliacon2021-9648-modelling-cryptographic-side-channels-with-julia-types
Simon SchwarzMarkus Kuhn
en
In hardware security, side-channel attackers can monitor analog signals, like the per-instruction power consumed. They can record this data during the execution of a cryptographic algorithm to gain additional information. Such leakage data can depend on intermediate values of the cipher, which themselves depend on the secret key. Hence, with such side-channel data, reconstructing the key of the cipher may become feasible.
In this talk, we focus on using Julia’s type system to create a framework for generating, analyzing and protecting such side-channel data. For this purpose, we create custom types that behave like integers or arrays. When passing values of these types to a Julia implementation of a cryptographic algorithm, multiple dispatch automatically produces an instrumented or transformed version of that algorithm. Usually, this process does not require modifications to the algorithm’s original implementation.
We look in particular at two different functionalities that we can integrate via such custom types:
- To simulate potential side-channel attacks, it is useful to generate data traces that depend on intermediate values. We will show how to construct types that log a trace of information about the values processed. This reduces the need for access to analog recording hardware, which is particularly useful when teaching side-channel security concepts in student practicals.
- To explore protection against side-channel attacks, values that depend on the secret key should never appear in memory without protection. We explore how integer and array-like types can be created to implement a range of techniques for splitting register values into multiple shares, to reduce the dependence of leakage data on the actual values processed.
Julia’s parametric type system allows us to arbitrarily stack those types on top of each other. For instance, protection types can be stacked on top of logging types. This construction allows us to conveniently collect traces of protected data which can be, for example, used to verify the effectiveness of the protection.
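A minimal version of the logging idea can be sketched with made-up names (the actual package linked below has a different, richer API):

```julia
# A toy "leaky" integer: arithmetic on it appends the Hamming weight of each
# result to a shared trace, modelling power-consumption leakage.
struct LogInt
    val::UInt8
    trace::Vector{Int}
end

hamming(x::UInt8) = count_ones(x)

function Base.xor(a::LogInt, b::LogInt)
    r = xor(a.val, b.val)
    push!(a.trace, hamming(r))      # the attacker observes HW(result)
    return LogInt(r, a.trace)
end

trace = Int[]
k = LogInt(0x2f, trace)             # secret key byte
p = LogInt(0x55, trace)             # known plaintext byte
c = xor(k, p)                       # "encryption" step leaks HW(k ⊻ p)
```

Because the cipher code only calls generic operations like `xor`, passing in `LogInt` values instead of plain integers instruments it via multiple dispatch, with no changes to the cipher itself.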
Package: https://github.com/parablack/CryptoSideChannel.jl
<br>Documentation: https://parablack.github.io/CryptoSideChannel.jl/dev/
<br>Dissertation: https://github.com/parablack/CryptoSideChannel.jl/raw/master/diss.pdf
false
https://pretalx.com/juliacon2021/talk/TEKDX9/
https://pretalx.com/juliacon2021/talk/TEKDX9/feedback/
Red
Lattice Reduction using LLLplus.jl
Lightning talk
2021-07-28T17:40:00+00:00
17:40
00:10
Lattice reduction is used in post-quantum cryptography, digital communication, and number theory. Lattice tools will be introduced with a focus on the Lenstra–Lenstra–Lovász (LLL) technique. The [LLLplus.jl](https://github.com/christianpeel/LLLplus.jl) package will be demoed and shown to work with user-defined data types such as [BitIntegers.jl](https://github.com/rfourquet/BitIntegers.jl).
juliacon2021-9261-lattice-reduction-using-lllplus-jl
Chris Peel
en
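To convey the flavour of lattice reduction, here is its two-dimensional special case, Lagrange (Gauss) reduction, in plain Julia. LLLplus.jl implements the full LLL algorithm; this sketch and its names are only illustrative.

```julia
using LinearAlgebra

# Repeatedly subtract the best integer multiple of the shorter basis vector
# from the longer one, like a vector-valued Euclidean algorithm, until no
# size-reduction step helps. The result is a shortest possible 2-D basis.
function lagrange_reduce(b1::Vector{Int}, b2::Vector{Int})
    while true
        if sum(abs2, b1) > sum(abs2, b2)
            b1, b2 = b2, b1                      # keep b1 the shorter vector
        end
        μ = round(Int, dot(b2, b1) / sum(abs2, b1))
        μ == 0 && return b1, b2                  # basis is reduced
        b2 = b2 .- μ .* b1
    end
end

b1, b2 = lagrange_reduce([1, 0], [100, 1])       # a very skewed basis of Z²
```

LLL generalizes this size-reduction-plus-swap loop to n dimensions, which is what makes it applicable to cryptanalysis and communications problems.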
false
https://pretalx.com/juliacon2021/talk/7XFSZB/
https://pretalx.com/juliacon2021/talk/7XFSZB/feedback/
Red
SpeedMapping.jl: Implementing Alternating cyclic extrapolations
Lightning talk
2021-07-28T17:50:00+00:00
17:50
00:10
SpeedMapping.jl implements Alternating cyclic extrapolations: a new and fast algorithm for accelerating optimization algorithms. It may be used for a large class of problems requiring a solution to the fixed-point mapping *F(x) = x*. It also performs multivariate optimization, often faster than L-BFGS or the nonlinear conjugate gradient method, especially with box constraints. It will be useful in statistics, computer science, physics, biology, economics, and many other fields.
juliacon2021-9901-speedmapping-jl-implementing-alternating-cyclic-extrapolations
/media/juliacon2021/submissions/EWFRHW/Rosenbrock_example_EDNDXbr.png
Nicolas Lepage-Saucier
en
The talk will briefly explain the ideas behind the method and demonstrate its use with two examples: *i)* computing a dominant eigenvalue by accelerating the power iteration; *ii)* minimizing a multivariate Rosenbrock function, with or without constraints, by providing only the objective or only the gradient. Benchmarks will show significant speed gains over L-BFGS and the nonlinear conjugate gradient method.
A notebook for the talk may be downloaded at https://github.com/NicolasL-S/SpeedMapping.jl/blob/main/Resources/SpeedMapping_JuliaCon2021.ipynb
SpeedMapping may be installed directly from the REPL, or downloaded here: https://github.com/NicolasL-S/SpeedMapping.jl
The Alternating cyclic extrapolation method is detailed in:
N. Lepage-Saucier, _Alternating cyclic extrapolation methods for optimization algorithms_, arXiv:2104.04974 (2021). https://arxiv.org/abs/2104.04974
The paper also shows other applications, such as a logistic regression, a large set of CUTEst unconstrained problems, accelerating the expectation-maximization (EM) algorithm for Poisson mixtures and for a proportional hazards regression with interval censoring, for canonical tensor decomposition, and for the method of alternating projections (MAP) applied to regressions with high-dimensional fixed effects.
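The first example treats one power-iteration step as the mapping F whose fixed point is sought. The plain, unaccelerated iteration looks like this (a self-contained sketch of the map being accelerated, not SpeedMapping.jl's API):

```julia
using LinearAlgebra

# Power iteration as a fixed-point problem: F(x) = normalize(A*x), F(x*) = x*.
function power_iteration(A; iters=200)
    x = normalize(ones(size(A, 1)))
    for _ in 1:iters
        x = normalize(A * x)          # the mapping F, applied repeatedly
    end
    return x, dot(x, A * x)           # eigenvector estimate and Rayleigh quotient
end

x, λ = power_iteration([2.0 1.0; 1.0 3.0])   # dominant eigenvalue is (5 + √5)/2
```

The plain iteration converges linearly at the ratio of the two largest eigenvalues; extrapolation methods such as the one in the talk accelerate exactly this kind of slowly converging fixed-point sequence.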
false
https://pretalx.com/juliacon2021/talk/EWFRHW/
https://pretalx.com/juliacon2021/talk/EWFRHW/feedback/
Red
🎈 Pluto.jl — one year later
Talk
2021-07-28T19:00:00+00:00
19:00
00:30
[Pluto.jl](https://github.com/fonsp/Pluto.jl) is a notebook IDE for Julia, with a focus on interactivity and education. In this talk, you'll learn about our work during the past year, and our future plans.
juliacon2021-9460--pluto-jl-one-year-later
/media/juliacon2021/submissions/BGLQ3U/cutebanner_s71wQmy.png
Fons van der Plas
en
Hi! We're the developers of Pluto.jl, and we have been busy!
[Pluto.jl](https://github.com/fonsp/Pluto.jl) is a notebook IDE for Julia, with a focus on interactivity and education. In this talk, you'll learn about our work during the past year, which includes:
- Built-in package manager
- Macro support
- Static site export
- Interactive site export!
- Integration with many packages
- Disabling reactivity?
- Automatically run notebooks as REST APIs (also in separate talk)
- Tools for university education (also in separate talk)
We will also talk a bit about experimental features and future plans!
false
https://pretalx.com/juliacon2021/talk/BGLQ3U/
https://pretalx.com/juliacon2021/talk/BGLQ3U/feedback/
Red
Julia in VS Code - What's New
Talk
2021-07-28T19:30:00+00:00
19:30
00:30
We will highlight new features in the Julia VS Code extension that shipped in the last year and give a preview of some new work. The new features from last year that we will highlight are: 1) progress UI, 2) documentation browser, 3) package tagging functionality, 4) Jupyter notebook support, and 5) a new cloud hosted symbol indexing architecture.
juliacon2021-9735-julia-in-vs-code-what-s-new
David AnthoffZac Nugent
en
false
https://pretalx.com/juliacon2021/talk/AUYF3X/
https://pretalx.com/juliacon2021/talk/AUYF3X/feedback/
Red
Web application for atmospheric dispersion modeling
Lightning talk
2021-07-28T20:00:00+00:00
20:00
00:10
Atmospheric dispersion models will be coupled with event-based response models to assess the impact of CBRN (Chemical, Biological, Radiological and Nuclear) releases. A user-friendly web-based tool is being developed using Genie.jl and will run on the cloud infrastructure of ECMWF. The event-based model will be implemented using the SimJulia.jl framework. Ensemble weather forecasts will then be used to give probabilistic quantification of the impacted area and of the appropriate response plan.
juliacon2021-9636-web-application-for-atmospheric-dispersion-modeling-
Tristan Carion
en
For both military and civilian purposes, assessing the impact of a CBRN agent release is crucial. To assess the area of contamination of an agent, atmospheric dispersion models can be used. The accuracy of such models depends particularly on high-quality weather data. A joint project of the Royal Military Academy of Belgium, ECMWF and the Royal Meteorological Institute of Belgium aims to develop a web application that implements simple dispersion models with real-time weather forecast data from ECMWF. The idea is to provide quick assessments of the impact area of a CBRN release as well as response models to plan appropriate actions. The application will run on the ECMWF Weather Cloud so the input weather data for the models can be accessed quickly.
A prototype of the application has already been developed. For the time being, it implements the very simple ATP-45 dispersion model from NATO, which essentially draws various hazard-area shapes on the map according to the wind speed at the release location. Some screenshots of the app are provided as attachments.
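A deliberately simplified sketch of the kind of ATP-45-style rule described here: the hazard-area shape depends on the wind speed at the release location. All thresholds, shapes and names below are illustrative assumptions, not the NATO specification or the project's code.

```julia
# Hedged, illustrative sketch: calm conditions give a hazard area all around
# the release point; stronger wind elongates the area downwind.
function hazard_area(wind_speed_kmh::Real)
    if wind_speed_kmh <= 10
        (shape = :circle, radius_km = 10.0)           # calm: hazard in all directions
    else
        (shape = :downwind_sector, length_km = 30.0)  # wind carries the plume downwind
    end
end

hazard_area(5.0)   # (shape = :circle, radius_km = 10.0)
hazard_area(25.0)  # (shape = :downwind_sector, length_km = 30.0)
```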
The more complex FLEXPART atmospheric model is currently being added to the application and other state-of-the-art models are foreseen to be implemented as well. The response model will also be added using event driven simulation to account for other external data (population density, topography etc.). Ultimately, it will be possible to use ensemble forecast data to produce ensemble dispersion modelling and introduce probabilistic quantification in the response model.
The choice of Julia for the implementation was made because we want to use the SimJulia.jl package, maintained by Ben Lauwens, who is one of the supervisors of the project. We are currently using the Genie.jl web framework as the backend (Angular is used as the frontend) and some other packages for handling meteorological data (GRIB.jl, packages from JuliaGeo...).
The presentation will cover:
- A general description of the project
- A live demo of the application
- An explanation of the role of Julia in the application
- The future of the project
false
https://pretalx.com/juliacon2021/talk/KD7MR7/
https://pretalx.com/juliacon2021/talk/KD7MR7/feedback/
Red
HypertextLiteral : performant string interpolation for HTML/SVG
Lightning talk
2021-07-28T20:10:00+00:00
20:10
00:10
HypertextLiteral is a Julia package for generating HTML, SVG, and other SGML tagged content. It works similarly to Julia string interpolation, appropriately escaping interpolated values and providing handy data conversions depending on context. The implementation compiles templates to functions, with a custom IO proxy for escaping.
For those building dynamic hypertext, HTL is fast: 40x faster than object-based serializations; 8x faster than naive list comprehensions with string interpolation.
juliacon2021-9911-hypertextliteral-performant-string-interpolation-for-html-svg
Clark C. Evans
en
Generating HTML + SVG output is a common requirement for applications, especially when building scientific dashboards. The faster the better. Being able to use proven hypertext fragments as templates is especially important. The ability to encapsulate and re-use these templates as functions is critical.
`HypertextLiteral` (HTL) is a Julia package that satisfies these criteria, permitting complex hypertext output to be constructed server-side. This package is inspired by its JavaScript namesake written by Mike Bostock, the creator of D3. It uses string literals along with list comprehension syntax. The `@htl` macro translates an HTML template into a function closure. Here is an example.
```
books = [
    (name="Who Gets What & Why", year=2012, authors=["Alvin Roth"]),
    (name="Switch", year=2010, authors=["Chip Heath", "Dan Heath"]),
    (name="Governing The Commons", year=1990, authors=["Elinor Ostrom"])]

render_row(book) = @htl("""
<tr><td>$(book.name) ($(book.year))<td>$(join(book.authors, " & "))
""")

render_table(books) = @htl("""
<table><caption><h3>Selected Books</h3></caption>
<thead><tr><th>Book<th>Authors<tbody>
$((render_row(b) for b in books))</tbody></table>""")
display("text/html", render_table(books))
#=>
<table><caption><h3>Selected Books</h3></caption>
<thead><tr><th>Book<th>Authors<tbody>
<tr><td>Who Gets What & Why (2012)<td>Alvin Roth
<tr><td>Switch (2010)<td>Chip Heath & Dan Heath
<tr><td>Governing The Commons (1990)<td>Elinor Ostrom
</tbody></table>
=#
```
*HTL is contextual.* At macro expansion time, the string template is passed through a light-weight HTML/SGML lexer. This is used to track the context of each interpolated Julia expression: is it part of element content, an attribute value, or is it inside an element tag where several attributes might be expanded? There is also a rawtext context used when content is inside a `script` tag.
*HTL is extensible.* With multiple dispatch, custom data types can provide their own contextual serialization. This permits us to omit boolean attributes that are false. It also lets us expand vectors differently dependent upon context: within element content, they are simply appended; while within attribute values, they are space separated.
*HTL is fast.* A template rendering that takes 500μs with HTL takes 4.5ms with naive string interpolation and list comprehension. Object-based alternatives, such as Hyperscript, take even longer (21ms). Memory usage of HTL is likewise low: it uses a third less memory than the naive string approach, and a sixth of the memory of an object-based approach.
This efficiency was achieved by emulating Julia's documentation system. Each component of the template is converted into an object which prints its content to a given `IO`. During macro processing, we build a Julia program that relies upon three primitive structures:
- *Bypass* is used for content that should be emitted as-is.
- *Render* is used for content that should be properly escaped.
- *Reprint* is a function closure used for composing content.
As the template is converted, leaf nodes are converted into either *Render* or *Bypass*, depending if they are part of the template, or part of variables that are to be escaped. *Reprint* is used to concatenate adjacent components that appear in the template or are generated by a list comprehension.
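The three primitives can be sketched in plain Julia. This is a minimal illustration of the idea described above, with names borrowed from the description; it is not the package's actual internals (in particular, the real *Reprint* is a function closure rather than a struct).

```julia
# Bypass: emitted as-is; Render: escaped; Reprint: composes components.
struct Bypass; content::String; end
struct Render; content::String; end
struct Reprint; parts::Vector{Any}; end

# order matters: escape '&' first so we don't double-escape
escape_html(s) =
    replace(replace(replace(s, "&" => "&amp;"), "<" => "&lt;"), ">" => "&gt;")

emit(io::IO, b::Bypass)  = print(io, b.content)
emit(io::IO, r::Render)  = print(io, escape_html(r.content))
emit(io::IO, c::Reprint) = foreach(p -> emit(io, p), c.parts)

tmpl = Reprint(Any[Bypass("<em>"), Render("R & D < production"), Bypass("</em>")])
buf = IOBuffer()
emit(buf, tmpl)
String(take!(buf))  # "<em>R &amp; D &lt; production</em>"
```

Because escaping is dispatched on the component type, the template author never escapes by hand: whatever ends up in a *Render* leaf is escaped, whatever is in a *Bypass* leaf is trusted template text.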
*HTL is safe.* Escaping code is layered using an `IO` proxy. Each of the 3 primitives has their own dispatch with regard to this proxy. This way, so long as the template translation properly distinguishes between `Bypass` and `Render` chunks, escaping is always performed. Handled as an exception, `<script>` content is checked to ensure it does not contain the `"</script>"` literal but is otherwise unescaped.
HTL can serialize attribute sets from pairs, dictionaries or named tuples. Unlike its Javascript namesake, we don't get clever with `camelCase` attribute names, which must be left as-is for SVG. Instead, we only convert `snake_case` names to their `kebab-case` equivalent. Moreover, if attribute sets are constants, we can pre-compute their serialization at macro expansion time.
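The snake_case to kebab-case conversion mentioned above amounts to a single character substitution; a one-line plain-Julia sketch (the package's own implementation may differ):

```julia
# convert a snake_case attribute name to its kebab-case HTML equivalent
kebab(name::String) = replace(name, '_' => '-')

kebab("data_value")  # "data-value"
```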
It is notable how nicely the Julia implementation flowed together. Julia's excellent macro facility lets us easily convert embedded functions and list comprehensions into relevant template logic. Julia's handling of tiny function closures was outstanding: not only does it let us write code that is easy to read, the approach turned out to be surprisingly fast. Julia's `IO` interface lets us easily insert a proxy that was trivial to write, and, yet again, surprisingly efficient. Finally, multiple dispatch enables user-defined types to have their own serialization. Kudos Julia.
This approach could be used to make similar template libraries for other structured notations, such as JSON.
false
https://pretalx.com/juliacon2021/talk/9XJTRW/
https://pretalx.com/juliacon2021/talk/9XJTRW/feedback/
Red
Pluto.jl Notebooks are Web APIs!
Lightning talk
2021-07-28T20:20:00+00:00
20:20
00:10
What if Pluto notebooks could become web APIs instantly? With the power of reactivity, Pluto’s new “What you see is what you REST” features do just that: every global variable becomes an HTTP endpoint, and you can provide other global variables as URL parameters. These features not only provide a new paradigm for writing web APIs with Julia, but also open the door to a promising new form of inter-notebook communication all within Pluto.
juliacon2021-9912-pluto-jl-notebooks-are-web-apis-
Connor Burns
en
Pluto is fundamentally built upon **reactivity**: the notebook knows how its cells interact, so cells can update in response to other cells changing, which happens every time you run a cell. But what if these intelligent updates could also happen on-demand, programmatically?
Introducing the new, experimental “What you see is what you REST” feature! (*WYSIWYR* for short.) Every global variable becomes an HTTP endpoint, and you can provide other global variables as parameters. Instead of experimenting with a model inside Pluto and then moving your code to an API script, your notebook _is_ an API, using reactivity to automatically create an execution model for each endpoint.
With this feature, interacting with Pluto notebooks, from both outside and inside other Pluto notebooks, is remarkably simple. Everything from sharing models to writing custom web APIs with Julia is now possible, entirely from within Pluto, without having to transition from notebook code to “production code”.
This talk will demonstrate how to get started with WYSIWYR and use it in your own projects. By also explaining how the feature works, we hope to interest experienced users as well. Along the way, we will show how its extension of existing notebook interactivity features opens the door to more seamless inter-notebook communication, and even to building web applications and APIs entirely from inside Pluto notebooks.
false
https://pretalx.com/juliacon2021/talk/39ZFBF/
https://pretalx.com/juliacon2021/talk/39ZFBF/feedback/
Blue
BifurcationKit.jl: bifurcation analysis of large scale systems
Talk
2021-07-28T12:30:00+00:00
12:30
00:30
`BifurcationKit.jl` is a package for the numerical bifurcation analysis of large-scale problems. It incorporates routines for automatic bifurcation diagrams (of equilibria) and efficient tools to study periodic orbits. Most of these tools run on GPU, which makes it possible to study challenging problems. Its design allows easy interfacing with many packages such as `ApproxFun.jl`, `DifferentialEquations.jl`, `FourierFlows.jl`,...
juliacon2021-9576-bifurcationkit-jl-bifurcation-analysis-of-large-scale-systems
/media/juliacon2021/submissions/RERJWC/mittlemannBD-1_YjDhEO3.png
Romain VELTZ
en
In this talk, I will give a panorama of `BifurcationKit.jl`, a Julia package to perform numerical bifurcation analysis of large dimensional equations (PDE, nonlocal equations, etc) using Matrix-Free / Sparse Matrix formulations of the problem. Notably, numerical bifurcation analysis can be done **entirely** on GPU.
`BifurcationKit` incorporates continuation algorithms (PALC, deflated continuation, ...) which can be used to perform **fully automatic bifurcation diagram** computation of stationary states. I will showcase this with the 2d Bratu problem. I will also show an example of a neural network that runs entirely on GPU.
Additionally, by leveraging the above methods, the package can also search for periodic orbits of Cauchy problems by casting them as equations of high dimension. It is by now one of the few packages that provide parallel (Standard / Poincaré) shooting methods and finite-difference-based methods to compute periodic orbits in high dimensions. I will present an application highlighting the ability to fine-tune `BifurcationKit` for performance.
false
https://pretalx.com/juliacon2021/talk/RERJWC/
https://pretalx.com/juliacon2021/talk/RERJWC/feedback/
Blue
Agents.jl and the next chapter in agent based modelling
Talk
2021-07-28T13:00:00+00:00
13:00
00:30
Complex dynamical systems are composed of many interacting sub-systems that couple together through multiple, varying (and often non-linear) processes, creating emergent system properties as a consequence. Agents.jl provides a framework to work with such dynamics through a bottom-up approach known as Agent Based Modelling. This talk provides an overview of the package and discusses how the greater Julia ecosystem may provide the next paradigm shift in this well-established research area.
juliacon2021-9720-agents-jl-and-the-next-chapter-in-agent-based-modelling
/media/juliacon2021/submissions/E8SVYT/Agents_5poOwRo.png
Tim DuBois
en
Agent based modelling (ABM) is a simulation method in which autonomous agents react to their environment, given a predefined set of rules. It is a bottom-up approach for modelling and simulating complex systems, such as behavior, decision making, crowd dynamics and other socio-economic problems; as well as, but not limited to, complex natural sciences such as chemical reactions or biological processes.
Since ABMs are not described by simple, concise mathematical equations, the code that generates them is typically complicated, large, and slow. In addition, since many of these problems are very domain-specific, many ABMs are hand-written from scratch.
Agents.jl provides a solution to this complication. Acknowledging that ABM frameworks have existed for decades, we show that Agents.jl is not only the most performant, but also the least complicated software (in terms of lines of code written to implement well-known ABM test cases), providing the same (and sometimes more) features as competitors.
This enables rapid prototyping of your domain specific ABM, with tried and tested (but generic) tooling.
The talk will provide an introduction to many of these helpful features, as well as showcase how well the package integrates with the entire Julia ecosystem: interactive applications with Makie.jl, differential equations from DifferentialEquations.jl, parameter optimization from BlackBoxOptim.jl, and more.
To conclude, we'll outline some of the big next-steps on the roadmap that other ABM frameworks will struggle to match in the absence of the Julia ecosystem.
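The bottom-up idea, autonomous agents reacting to their environment under a predefined rule, can be sketched without any framework. The names below are illustrative (this is not the Agents.jl API); the rule is a classic wealth-exchange toy model.

```julia
# Each agent holds some wealth and follows one local rule per step.
mutable struct WealthAgent
    id::Int
    wealth::Int
end

# rule: an agent holding wealth gives one unit to a randomly chosen agent
function agent_step!(a::WealthAgent, agents::Vector{WealthAgent})
    a.wealth == 0 && return
    other = rand(agents)
    a.wealth -= 1
    other.wealth += 1
end

# step every agent, nsteps times; emergent wealth inequality appears
function run!(agents::Vector{WealthAgent}, nsteps::Int)
    for _ in 1:nsteps, a in agents
        agent_step!(a, agents)
    end
    agents
end

agents = [WealthAgent(i, 1) for i in 1:100]
run!(agents, 50)
sum(a.wealth for a in agents)  # total wealth is conserved: 100
```

An ABM framework removes exactly this boilerplate: the scheduler loop, the space, data collection and plotting, leaving only the agent type and the rule to be written.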
false
https://pretalx.com/juliacon2021/talk/E8SVYT/
https://pretalx.com/juliacon2021/talk/E8SVYT/feedback/
Blue
An individual-based model to simulate Coffee Leaf Rust epidemics
Lightning talk
2021-07-28T13:30:00+00:00
13:30
00:10
Coffee Leaf Rust (CLR) is an aggressive plant disease of high economic importance that has caused major production collapses worldwide. To explore how the management and long-term planning of a coffee farm can influence CLR epidemic outcomes over several years, we took advantage of Julia’s multiple dispatch and distributed computing to develop and test an individual-based model of a coffee farm.
juliacon2021-9924-an-individual-based-model-to-simulate-coffee-leaf-rust-epidemics
Manuela Vanegas Ferro
en
CLR is an active research topic in plant pathology and epidemiology. However, the overall effect of the use of shade trees on the development of the CLR disease has not yet been established. The introduction of shade trees in a farm produces local changes that can have positive or negative effects on the development of CLR epidemics, depending on the life cycle stage of present infections.
In an effort to integrate relevant pathology and ecology knowledge, we developed a spatially explicit individual-based model that allows us to simulate CLR epidemics at a farm scale and its effect on coffee productivity over several years. Using high-throughput computing, we explore different agricultural management strategies, including various patterns of shade-providing tree placement within the farm, and test their efficacy at controlling a potential CLR outbreak. This talk will show how Agents.jl and Distributed.jl facilitated our research.
false
https://pretalx.com/juliacon2021/talk/LHQEPZ/
https://pretalx.com/juliacon2021/talk/LHQEPZ/feedback/
Blue
hPF-MD.jl: Hybrid Particle-Field Molecular-Dynamics Simulation
Lightning talk
2021-07-28T13:40:00+00:00
13:40
00:10
We introduce an efficient framework for hybrid particle-field molecular-dynamics (hPF-MD) simulations, using a density-functional-based formalism to compute the non-bonded interactions between particles. hPF.jl is motivated by the desire to leverage the advantages of Julia, an interpreted language designed to achieve the performance of statically compiled programming languages, and of its extensive computing community.
juliacon2021-9748-hpf-md-jl-hybrid-particle-field-molecular-dynamics-simulation
Zhenghao Wu
en
In this talk, we will give (1) a brief overview of the hPF-MD method, (2) example systems compared with results from existing hPF-MD packages and standard MD method, and (3) advantages and extensibility of their Julia implementations.
References:
1. Milano, G.; Kawakatsu, T. Hybrid Particle-Field Molecular Dynamics Simulations for Dense Polymer Systems. The Journal of Chemical Physics 2009, 130 (21), 214106. https://doi.org/10.1063/1.3142103.
2. Wu, Z.; Milano, G.; Müller-Plathe, F. Combination of Hybrid Particle-Field Molecular Dynamics and Slip-Springs for the Efficient Simulation of Coarse-Grained Polymer Models: Static and Dynamic Properties of Polystyrene Melts. J. Chem. Theory Comput. 2020. https://doi.org/10.1021/acs.jctc.0c00954.
3. Caputo, S.; Hristov, V.; Nicola, A. D.; Herbst, H.; Pizzirusso, A.; Donati, G.; Munaò, G.; Albunia, A. R.; Milano, G. Efficient Hybrid Particle-Field Coarse-Grained Model of Polymer Filler Interactions: Multiscale Hierarchical Structure of Carbon Black Particles in Contact with Polyethylene. J. Chem. Theory Comput. 2021, https://doi.org/10.1021/acs.jctc.0c01095.
false
https://pretalx.com/juliacon2021/talk/ECKGDE/
https://pretalx.com/juliacon2021/talk/ECKGDE/feedback/
Blue
Enhanced Sampling in Molecular Dynamics Simulations with Julia
Lightning talk
2021-07-28T13:50:00+00:00
13:50
00:10
When performing molecular dynamics simulations of materials in chemistry, physics and biology, there exists a large gap between the time scales that can be probed computationally and those observed in experiments. Two strategies to tackle this problem are to develop algorithms that explore the simulation space more efficiently, and to employ hardware accelerators. I would like to share my experience and perspectives using Julia to make faster progress on both fronts.
juliacon2021-9806-enhanced-sampling-in-molecular-dynamics-simulations-with-julia
Pablo Zubieta
en
When performing molecular dynamics (MD) simulations of materials in chemistry, physics and biology, there exists a large gap between the time scales that can be probed computationally and those observed in experiments. One strategy to approach this issue has been to develop algorithms that enhance sampling over the simulated system's configuration space, overcoming the otherwise hard-to-surmount energetic barriers that limit the observation of certain possible states. These algorithms alone are not enough to really push toward larger timescales; one also needs to implement them on hardware accelerators such as GPUs. In fact, a good number of the most recently developed algorithms tend to become a bottleneck for molecular simulations accelerated on GPUs, as they are commonly implemented on CPUs, even when some of them heavily rely on machine-learning strategies.
Within our research group, we are trying to provide a library that can be hooked into different molecular-dynamics simulation packages, allowing the user to perform enhanced sampling simulations through a uniform interface without sacrificing the efficiency of the underlying MD code. The library is currently located at https://github.com/SSAGESLabs/PySAGES, and although it is a Python library, it has continuously been prototyped in Julia. For example, https://github.com/pabloferz/ReactionCoordinates.jl and https://github.com/pabloferz/DLPack.jl are some of the pieces that we have built for this purpose. The prototypes, being written in Julia, are of course faster than the current Python implementation.
I would like to share my experience and perspective using Julia to build these tools.
false
https://pretalx.com/juliacon2021/talk/EFYMME/
https://pretalx.com/juliacon2021/talk/EFYMME/feedback/
Blue
Vectorized Query Evaluation in Julia
Talk
2021-07-28T17:00:00+00:00
17:00
00:30
Modern databases can choose between two approaches to evaluating queries with high performance: Query Compilation compiles each query to optimized machine code, while Vectorization interprets queries using BLAS-style primitives.
Query compilation offers more optimization potential for LLVM, while vectorization doesn’t require runtime compilation.
We explain how these techniques work and how we combine them, showcasing how Julia lets us have the best of both.
juliacon2021-9863-vectorized-query-evaluation-in-julia
Richard GankemaAlex Hall
en
In modern (SQL) database query engines, there are two major approaches to evaluating user-provided queries in a highly performant manner (see e.g. [1]):
Query Compilation: Each pipeline of a query plan gets compiled into a single function that effectively fuses operators into a single (nested) for-loop. This function is then compiled to highly-optimized machine code. Operators process data tuple-at-a-time.
Vectorization: The query plan is interpreted, and each operator in the plan is mapped to a pre-compiled function. To offset the resulting interpretation cost, each operator evaluates batches ("vectors") of, say, 1000 values in bulk at each step.
Query compilation offers more optimization potential for LLVM and is often effective at keeping values in registers, while vectorization enables shorter compilation times — for better support of interactive queries. As part of the production-grade RelationalAI Knowledge Graph Management System, we implemented both approaches in Julia.
In this presentation, we explain in greater detail how both of these fundamentally different techniques work, why we are implementing them, and how we aim to combine them. We showcase where Julia enabled us to implement highly performant code with ease, but also reveal where we had to spend non-trivial amounts of engineering effort to arrive at the desired performance.
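The vectorized model can be sketched in a few lines of plain Julia. This is an illustrative toy, not the RelationalAI engine: a single pre-compiled "selection" primitive is applied batch-at-a-time by a trivial interpreter, so the per-operator interpretation cost is paid once per batch rather than once per tuple.

```julia
const BATCH = 4   # tiny batch for illustration; real engines use ~1000 values

# a pre-compiled primitive: selection (filter) over one batch
select_gt(batch::Vector{Int}, threshold::Int) = findall(>(threshold), batch)

# interpret a one-operator plan, processing the column batch-at-a-time
function run_filter(col::Vector{Int}, threshold::Int)
    out = Int[]
    for start in 1:BATCH:length(col)
        batch = col[start:min(start + BATCH - 1, length(col))]
        sel = select_gt(batch, threshold)   # selection vector for this batch
        append!(out, batch[sel])
    end
    out
end

run_filter([3, 8, 1, 9, 5, 7, 2], 4)  # [8, 9, 5, 7]
```

A query compiler would instead fuse the whole plan into one loop over tuples, which is what makes the two designs such different engineering trade-offs.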
[1] https://www.vldb.org/pvldb/vol11/p2209-kersten.pdf
false
https://pretalx.com/juliacon2021/talk/CAMR3P/
https://pretalx.com/juliacon2021/talk/CAMR3P/feedback/
Blue
ReTest.jl - more productive testing
Lightning talk
2021-07-28T17:30:00+00:00
17:30
00:10
[ReTest.jl](https://github.com/JuliaTesting/ReTest.jl) is a testing framework which is backward-compatible with the `Test` standard library and offers a few usability improvements, such as nicer printing of results, filtering testsets according to their descriptions, or running them in parallel. This talk is a tutorial motivating and demonstrating the main features of the package.
juliacon2021-9885-retest-jl-more-productive-testing
Rafael Fourquet
en
The main idea behind [ReTest.jl](https://github.com/JuliaTesting/ReTest.jl) is that its `@testset` macro does not run tests immediately, but instead stores them for later execution via a call to the `retest` function. This is what enables many of the provided features, two of which were the initial drive for the creation of the package:
- filtering which testsets are run by matching their descriptions against a given regular expression;
- the ability to write tests "inline" in source files, right next to the code implementing the tested behaviors.
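The store-then-filter idea can be sketched without the package. This is a framework-free toy with illustrative names (not the ReTest.jl API): "testsets" are stored as closures and only run when a description matches a regular expression.

```julia
# a stored "testset" is just a description paired with a thunk
const TESTSETS = Pair{String,Function}[]

testset!(desc::String, f::Function) = push!(TESTSETS, desc => f)

# run only the testsets whose description matches `pattern`
function retest_like(pattern::Regex)
    ran = String[]
    for (desc, f) in TESTSETS
        occursin(pattern, desc) || continue
        f()                     # the stored tests execute only now
        push!(ran, desc)
    end
    ran
end

testset!("arithmetic: addition", () -> (@assert 1 + 1 == 2))
testset!("strings: case",        () -> (@assert uppercase("ok") == "OK"))

retest_like(r"arith")  # runs (and returns) only ["arithmetic: addition"]
```

ReTest's `@testset` does this transparently, which is why an existing `Test`-based suite can usually be switched over without rewriting it.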
The fact that `Test` and `ReTest` have the same macro name, `@testset`, makes it usually trivial to switch an existing test suite over to `ReTest`. So much so that there is an option to actually use `ReTest` on a test suite without changing a single line of code!
And what if you could use `Revise` on your test files...?
false
https://pretalx.com/juliacon2021/talk/KUVB9C/
https://pretalx.com/juliacon2021/talk/KUVB9C/feedback/
Blue
Sponsor Talk (Invenia)
Keynote
2021-07-28T17:40:00+00:00
17:40
00:05
Sponsor talk.
juliacon2021-11794-sponsor-talk-invenia-
en
false
https://pretalx.com/juliacon2021/talk/7Q8P9D/
https://pretalx.com/juliacon2021/talk/7Q8P9D/feedback/
Blue
Sponsor talk (KAUST)
Keynote
2021-07-28T17:45:00+00:00
17:45
00:05
Sponsor talk.
juliacon2021-11793-sponsor-talk-kaust-
en
false
https://pretalx.com/juliacon2021/talk/EJP7SS/
https://pretalx.com/juliacon2021/talk/EJP7SS/feedback/
Blue
Sponsor talk (Pumas AI)
Keynote
2021-07-28T17:50:00+00:00
17:50
00:05
Sponsor talk.
juliacon2021-11849-sponsor-talk-pumas-ai-
en
false
https://pretalx.com/juliacon2021/talk/NKSCDS/
https://pretalx.com/juliacon2021/talk/NKSCDS/feedback/
Blue
Sponsor talk (Quera)
Keynote
2021-07-28T17:55:00+00:00
17:55
00:05
Sponsor talk.
juliacon2021-11850-sponsor-talk-quera-
en
false
https://pretalx.com/juliacon2021/talk/RSPXSS/
https://pretalx.com/juliacon2021/talk/RSPXSS/feedback/
Blue
Changing Physics education with Julia
Talk
2021-07-28T19:00:00+00:00
19:00
00:30
In many disciplines of physics, code is not explicitly discussed as part of the learning subject. Here I will focus on nonlinear dynamics, a discipline that suffers greatly from the disconnect between the mathematics and the coding. I will present our new approach in teaching this subject, based on JuliaDynamics and a new Springer textbook for nonlinear dynamics whose pages are interlaced with Julia code. I wish to demonstrate how Julia can fundamentally change the way physics is being taught.
juliacon2021-9375-changing-physics-education-with-julia
George Datseris
en
In many disciplines of physics, code is not explicitly discussed as part of the learning subject. Here I will focus on nonlinear dynamics, a discipline that suffers greatly from the disconnect between the mathematics and the coding. In fact, this disconnect is largely what started the JuliaDynamics software organization, as a means to eliminate this disconnect.
In this talk I will present the numerous approaches we have employed to fundamentally change physics education for the better. This change necessarily requires including coding as part of the learning subject. I will demonstrate how we created easy-to-read code using Julia, how to incorporate it into exercises, how to make interactive applications that enhance learning, and how to include scientific analysis using code as part of the learning subject. Our new approach to teaching nonlinear dynamics, which I will present here, is also published as a new textbook on the topic by Springer. The book explicitly includes real, runnable Julia code. A GitHub repository related to the book can be found here: https://github.com/JuliaDynamics/NonlinearDynamicsTextbook
I believe that this new approach should be used by more and more branches of physics. When this is done, then finally coding will be viewed as an integral part of science, instead of some "background business behind the curtains", which is its current perception. Ultimately, this will lead not only to better science, but to actually reproducible science.
false
https://pretalx.com/juliacon2021/talk/KNWNHJ/
https://pretalx.com/juliacon2021/talk/KNWNHJ/feedback/
Blue
Open and interactive Computational Thinking with Julia and Pluto
Talk
2021-07-28T19:40:00+00:00
19:40
00:30
We will discuss goals, ideas, technical tools and outcomes for the open, online, interactive course on "Computational Thinking with Julia" that we have been teaching for the last two semesters. The Pluto notebook has allowed us to develop a new approach to write both an online interactive textbook and interactive problem sets with built-in solution checks.
juliacon2021-9819-open-and-interactive-computational-thinking-with-julia-and-pluto
David P. SandersAlan EdelmanFons van der Plas
en
During the Fall 2020 and Spring 2021 semesters we have been teaching an online, open, interactive course on Computational Thinking using Julia and the Pluto notebook.
Previously we had used the Jupyter notebook, but we decided to take the plunge with the then-brand-new Pluto notebook in the summer of 2020, when it was still in its early days. It has turned out to be an excellent -- although at times frustrating! -- decision.
Pluto has allowed us to develop a completely new approach to writing both an interactive online textbook, as well as interactive problem sets with beautiful built-in solution checks that make working on problems both more fun and more rewarding.
Indeed, the capabilities of the Pluto notebook itself have developed together with the course, as we have collaborated on the required tools and ideas. Many recent features in Pluto have been added with this style of teaching in mind, and we hope to inspire more teachers to write interactive material.
Half-way through the second semester, each video lecture consistently receives over 2,000 views, and the course website receives 6,000 hits per month. The interactive and self-checking nature of Pluto homeworks is especially useful in an open course, where students have to work on the material independently.
We will discuss ideas and goals for teaching Julia, computational thinking, concepts from computer and applied mathematics mixed together, and how the technical features of Pluto enable and enhance one another.
Course homepage with online interactive textbook: https://computationalthinking.mit.edu/Spring21/
false
https://pretalx.com/juliacon2021/talk/RXF8UK/
https://pretalx.com/juliacon2021/talk/RXF8UK/feedback/
Purple
Clearing the Pipeline Jungle with FeatureTransforms.jl
Lightning talk
2021-07-28T12:30:00+00:00
12:30
00:10
The prevalence of glue code in feature engineering pipelines poses many problems in conducting high-quality, scalable research. In worst-case scenarios, the technical debt racked up by overgrown “pipeline jungles” can preclude further development and grind promising projects to a halt [1]. This talk will show how the FeatureTransforms.jl package can help make feature engineering a more sustainable practice for users without sacrificing the flexibility they desire.
juliacon2021-9497-clearing-the-pipeline-jungle-with-featuretransforms-jl
Glenn Moynihan
en
Feature engineering is an essential component in all machine learning and data science workflows. It is often an exploratory activity in which the pipeline for a particular set of features tends to be developed iteratively as new data or insights are incorporated.
As the feature complexity grows over time it is very common for code to devolve into unwieldy “pipeline jungles” [1], which pose multiple problems to developers. They are often brittle, with highly-coupled operations that make it increasingly difficult to make isolated changes. The over-entanglement of such pipelines also means they are difficult to unit test and debug effectively, making them particularly error-prone. Since adding to this complexity is often easier than investing in refactoring it, pipeline jungles tend to be more susceptible to incurring technical debt over time, which can impact the project’s long-term success.
In this talk, we will showcase some of the key features of the [FeatureTransforms.jl](https://github.com/invenia/FeatureTransforms.jl) package, such as the composability, reusability, and performance of common transform operations, which were designed to help mitigate the problems in our own pipeline jungles.
[FeatureTransforms.jl](https://github.com/invenia/FeatureTransforms.jl) is conceptually different from other widely-known packages that provide similar utilities for manipulating data, such as [DataFramesMeta.jl](https://github.com/JuliaData/DataFramesMeta.jl), [DataKnots.jl](https://github.com/rbt-lang/DataKnots.jl), and [Query.jl](https://github.com/queryverse/Query.jl). These packages provide methods for composing relational operations to filter, join, or combine structured data. However, a query-based syntax, or an API that supports only one data type, is not well suited to composing the kinds of mathematical transformations, such as one-hot encoding, that underpin most non-trivial feature engineering pipelines; providing these transformations is what this package aims to do.
The composability of transforms reflects the practice of piping the output of one operation to the input of another, as well as combining the pipelines of multiple features. Reusability is achieved by having native support for the Tables and AbstractArray interfaces, which includes tables such as [DataFrames](https://github.com/JuliaData/DataFrames.jl/), [TypedTables](https://github.com/JuliaData/TypedTables.jl), [LibPQ.Result](https://github.com/invenia/LibPQ.jl), etc, and arrays such as [AxisArrays](https://github.com/JuliaArrays/AxisArrays.jl), [KeyedArrays](https://github.com/mcabbott/AxisKeys.jl), and [NamedDimsArrays](https://github.com/invenia/NamedDims.jl). This flexible design allows for performant code that should satisfy the needs of most users while not being restricted to (or by) any one data type.
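As a brief, hedged illustration of this style of reusable, composable transform (assuming FeatureTransforms.jl is installed; `Power` and `apply` follow its documented API, though signatures may differ between versions):

```julia
using FeatureTransforms  # assumed installed

data = [1.0 2.0; 3.0 4.0]

# A transform is a plain, reusable value rather than a query expression;
# the same transform can be applied to arrays or to Tables.jl tables.
p = Power(2)
squared = FeatureTransforms.apply(data, p)  # element-wise x^2
```

The output of one `apply` can be piped into the next, which is the composition pattern the talk describes.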
[1] [Sculley, David, et al. "Hidden technical debt in machine learning systems." Advances in neural information processing systems 28 (2015): 2503-2511](https://proceedings.neurips.cc/paper/2015/hash/86df7dcfd896fcaf2674f757a2463eba-Abstract.html).
false
https://pretalx.com/juliacon2021/talk/GFHKV7/
https://pretalx.com/juliacon2021/talk/GFHKV7/feedback/
Purple
TiledViews.jl
Lightning talk
2021-07-28T12:40:00+00:00
12:40
00:10
This package implements a 2N-dimensional tiled (copy-free) view of an `AbstractArray` of N dimensions. The tiling is specified by a `tile_size` and a `tile_overlap`, leading to N inner coordinates (within each tile) and N outer coordinates (the tile index). The view is easily combined with windows and has `getindex`/`setindex` access. Applications range from deconvolution of large datasets to propagation of optical field amplitudes. Similarities and differences to `TiledIteration.jl` will be presented.
juliacon2021-9792-tiledviews-jl
Rainer Heintzmann
en
false
https://pretalx.com/juliacon2021/talk/8VL9R7/
https://pretalx.com/juliacon2021/talk/8VL9R7/feedback/
Purple
Structural lambdas for generic code and delayed evaluation
Lightning talk
2021-07-28T12:50:00+00:00
12:50
00:10
We describe an [experimental package](https://github.com/goretkin/FixArgs.jl) that reifies lambda functions as a `Lambda(args, body)` and function calls as `Call(function, args)`, giving a new way to "quote" expressions. It generalizes types like `Base.Fix2`, `Base.Generators`, `Iterators.Filter` and possibly many others. It might be well-suited for the recurring pattern of deferred computation in Julia code.
juliacon2021-9928-structural-lambdas-for-generic-code-and-delayed-evaluation
Gustavo Nunes Goretkin
en
Pervasive and performant multiple dispatch in Julia has led to the development of functions that convey generic meaning. Deciding on a name and settling on the meaning of these generic functions is challenging work, but it is essential for generic programming.
Functions can be used to direct dispatch. e.g.
```julia
reduce(vcat, [[1,2,3], [4,5], [6,7,8,9]])
```
There is a fallback method of `reduce` that works for any binary operation, not just `vcat`. However, there is also a method specific to `vcat` that preallocates the result. Both the fallback and the specialization produce the same result, but the specialization is likely to perform better since it is inexpensive to calculate the size of the result.
Consider e.g. `Base.filter` and `Base.Iterators.filter`. The second simply constructs a `Base.Iterators.Filter`, which is in essence nothing more than a lazy representation of `Base.filter(f, itr)`. We developed an experimental package [`FixArgs.jl`](https://github.com/goretkin/FixArgs.jl) to represent this:
```julia
julia> @xquote filter(iseven, $(1:5))
Call(Some(filter), FrankenTuple((Some(iseven), Some(1:5)), NamedTuple()))
```
Suppose we want to define `eltype`, the same way it is defined for `Base.Iterators.Filter`. We can define an `eltype` method for `Call` for the specific parameter `filter`. `FixArgs.jl` defines a macro to help (but the ergonomics should still be improved):
```julia
julia> Base.eltype(filt::(@xquoteT filter(::F, ::I))) where {F, I} = eltype(something(filt.args[2]))
julia> eltype(@xquote filter(iseven, $(1:5)))
Int64
```
`FixArgs.jl` began as a generalization of `Base.Fix1` and `Base.Fix2`, but identifies a common pattern that could systematically replace many existing types and methods. e.g. Broadcasting relies on types to represent lazy function calls, and `materialize` to perform the computation (with e.g. dot fusion). `Base.Generator` and `collect` are analogous.
Instead of generating a new name for a type, and attaching meaning to it, one can meaningfully compose a name from existing meaningful names. One straightforward example is to define `Rational{T}` as
```julia
julia> (@xquoteT ::T / ::T) where T
FixArgs.Call{Some{typeof(/)}, FrankenTuples.FrankenTuple{Tuple{Some{T}, Some{T}}, (), Tuple{}}} where T
```
Note that `Rational{Int}` and `(@xquoteT ::Int / ::Int)` have identical memory layouts!
(Also, occasionally there is a need for a `Rational`-like type that does not constrain the numerator and denominator to have the same type; the type above would fit the bill.)
In this case, `Rational` is in Base, but more generally packages have to depend on a common package (usually called `*Base.jl`) that defines types. If it is possible to define new types in terms of existing types, then in a sense the types are structural and not nominal. This may reduce the need for these common packages and enable better package interoperability.
As another example, the type `Base.Generator(f, itr)` could be identical to `@xquote map(f, itr)` (though not exactly, since the meanings of `map` and `collect` are currently conflated; see https://github.com/JuliaLang/julia/issues/39628).
There are many other examples in the ecosystem, such as in `LazySets.jl`, `LazyArrays.jl`, `MappedArrays.jl`, `StructArrays.jl`, ... where types are defined to essentially represent lazy function calls ad-hoc. They each have a version of "materialize". Note that in most cases, these cannot be replaced directly since `Call` cannot e.g. subtype `AbstractArray` and `AbstractSet`.
`Base.Fix1` and friends are useful, even though one can already define a lambda function with the same behavior, because it is possible to dispatch on the structure of such a lambda function as opposed to an opaque name:
```julia
julia> x -> x > 2
#3 (generic function with 1 method)
julia> >(2)
(::Base.Fix2{typeof(>),Int64}) (generic function with 1 method)
```
`FixArgs.jl` (which really should be called e.g. `StructuralLambdas.jl`) allows one to easily define these structural lambdas:
```julia
julia> @xquote x -> x > 2
Fix2(>,2)
julia> typeof(@xquote x -> x > 2)
Fix2{typeof(>), Int64} (alias for FixArgs.Lambda{FixArgs.Arity{1, Nothing}, FixArgs.Call{Some{typeof(>)}, FrankenTuples.FrankenTuple{Tuple{FixArgs.ArgPos{1}, Some{Int64}}, (), Tuple{}}}})
```
(Better aesthetics would be necessary for usability.)
See https://goretkin.github.io/FixArgs.jl/dev/ for more motivation and details, including examples for replacing `Complex` and generalizing `FixedPointNumbers.jl`.
Please note that this talk is about an idea, not `FixArgs.jl` itself. It may turn out that the idea is not practical; e.g. it might pose immense challenges for compilation, or it might be too confusing to marry the meaning of functions and `Call`, or package interoperability will fail due to subtle differences. I hope the idea holds promise. A great way to find out is at JuliaCon 2021.
false
https://pretalx.com/juliacon2021/talk/CBDFPN/
https://pretalx.com/juliacon2021/talk/CBDFPN/feedback/
Purple
Dictionaries.jl - for improved productivity and performance
Talk
2021-07-28T13:30:00+00:00
13:30
00:30
[*Dictionaries.jl*](https://github.com/andyferris/Dictionaries.jl) presents an alternative interface for dictionaries in Julia, for improved productivity and performance. During this talk we'll learn how to use Julia's data manipulation tools (such as indexing, broadcasting, `map`, `filter`, `reduce`, etc) with dictionaries and explore some implementation decisions made in this package. We will end with applications, including recent work on tabular data with primary and/or grouping keys.
juliacon2021-9584-dictionaries-jl-for-improved-productivity-and-performance
Andy Ferris
en
This talk will be divided into roughly three sections.
### Motivation
Julia is an awesome language for manipulating data, with excellent built-in functionality that is easy to extend by packages and users. Arrays, sets and dictionaries form the basis of core data structures necessary for a wide range of workloads. Of the three, Julia's `AbstractArray` interface is most extensive — supporting a wide range of data structures *and* a rich set of operations, designed around the same core set of functionality (primarily, indexing and iteration).
On the other hand, `AbstractSet` and `AbstractDict` do not support such a wide range of operations (like broadcasting, `map`, or `reduce`), and the interface for a user to create a new, fully-functional `AbstractDict` is not as clear-cut or simple as it is for an array. *Dictionaries.jl* is an attempt to remedy this situation, by applying the learnings of the `AbstractArray` interface to create a new `AbstractDictionary` interface. By using *Dictionaries.jl*, users can experience improved programmer productivity as well as significantly faster execution for many operations (especially with analytics-style workloads).
### Implementation
A *Dictionaries.jl* `AbstractDictionary` differs from Julia's `AbstractDict` in three main ways:
1. Dictionaries iterate values, like an array, instead of key-value pairs.
2. The indices (or `keys`) of a dictionary are a special kind of dictionary, much like the indices (or `keys`) of an array are a special kind of array.
3. Dictionaries (and their indices) by default iterate in a well-defined order based on insertion, rather than quasi-randomly based on how the hashmap was constructed.
Since the indices of a dictionary are distinct, they naturally form a set (in the mathematical sense). In *Dictionaries.jl* this type is represented by `AbstractIndices <: AbstractDictionary` (unlike for `Dict`, there was no obvious alternative spelling of `Set` available). Every `AbstractIndices` has the special property that its values are the same as its keys, so if `i ∈ indices` then `indices[i]` is just `i`.
From these alone, natural definitions of `broadcast`, `map`, `filter` and `reduce` follow, for both indices (i.e. sets) and dictionaries. For example, the `map` operation preserves the indices and maps the values to new values. You can even `map` a set of indices/keys into a new dictionary.
This property leads to *Dictionaries.jl*'s primary efficiency gain. The indices of dictionaries (i.e. the expensive part) can be shared between different dictionaries. For example, the provided hashmap `Dictionary` can share its hash `Indices` with other dictionaries. The values are stored in a (mostly) dense array, so operating on all the values with operations like `map`, `filter` and `reduce` is as fast as it is for a similarly sized array. One can use `map` or `broadcast` with multiple similar dictionaries (i.e. those that share compatible "tokens"), co-iterating the values together at speed and with zero hash lookups.
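A hedged sketch of the interface described above (assuming Dictionaries.jl is installed):

```julia
using Dictionaries  # assumed installed

d = Dictionary(["a", "b", "c"], [1, 2, 3])

total = sum(d)             # dictionaries iterate values, so reductions just work

doubled = map(x -> 2x, d)  # indices preserved, values mapped to 2, 4, 6

evens = filter(iseven, d)  # keeps only the entry at index "b"

summed = d .+ doubled      # broadcasting co-iterates dictionaries with shared indices
```

Because `d` and `doubled` share the same `Indices`, the final broadcast needs no hash lookups.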
### Applications
We will turn our attention to some example applications of this interface, first highlighting the convenience and speed of `Dictionary` for some common analytics tasks.
We will then see how *Dictionaries.jl* is used in conjunction with other packages, for example how *SplitApplyCombine.jl*'s `group` operation now supports a simple and easy split-apply-combine workflow.
Finally, we will explore recent work in *TypedTables.jl* which uses *Dictionaries.jl* to provide a table with a primary key (enabling easy lookup of rows based on data rather than an array index). The idea is similarly extended to grouped/partitioned data tables and their grouping keys.
false
https://pretalx.com/juliacon2021/talk/WRNAEN/
https://pretalx.com/juliacon2021/talk/WRNAEN/feedback/
Purple
Tomographic Image Reconstruction with Julia
Talk
2021-07-28T16:30:00+00:00
16:30
00:30
In this talk we show how Julia can be used to develop tomographic image reconstruction algorithms. These involve the solution of large-scale ill-posed inverse problems where the imaging operator usually does not fit into main memory, and in turn matrix-free methods need to be applied. The talk captures how Julia has been used to form a package ecosystem for two different tomographic imaging methods and outlines the advantages compared to mature C/C++ libraries in the field.
juliacon2021-9675-tomographic-image-reconstruction-with-julia
/media/juliacon2021/submissions/N9JPV7/JuliaImagingPackages_v3_WJ8MTEK.png
Tobias Knopp
en
Tomographic imaging plays a major role in clinical routine and has revolutionized the diagnosis and treatment of serious diseases such as stroke, heart attack and cancer. Tomographic techniques such as magnetic resonance imaging (MRI), computed tomography or the new imaging modality magnetic particle imaging (MPI) make it possible to look inside the human body without surgical intervention, simply by measuring indirect signals which allow reconstruction of an image of the inside of the body. Medical imaging is an interdisciplinary field involving physicians, physicists, engineers, mathematicians and computer scientists to develop a tomographic imaging system. While the technical development of modalities such as MRI are approaching limits with respect to the optimization of signal quality, the potential on the side of image reconstruction algorithms has not yet been fully exploited. As a consequence, numerous innovations from the fields of mathematics, signal processing and computer science have found their way into tomography research within the last decade.
Traditionally, algorithm development within the imaging community has been divided into two parts. Researchers who primarily work on mathematical methods often implement these using a high-level language such as Matlab and occasionally Python. As a result, the application of these algorithms is often limited to selected datasets, which validate the feasibility and the accuracy of the method. On the other hand, application-oriented researchers often use highly optimized program libraries, implemented in a low-level language such as C/C++, to apply algorithmic innovations to larger datasets that are acquired in clinical trials. Some of the larger C/C++ software packages such as the MRI reconstruction framework BART and Gadgetron use such a low-level approach and additionally provide Python bindings to make the framework accessible also to researchers who prefer using a high-level programming language. In practice, this hybrid approach, where low-level and high-level code is mixed leads to the well-known two-language problem since the presence of bindings still does not allow for an easy transition of new algorithmic ideas into the core of these packages.
This presentation of the current state of software tools in the imaging community shows that there is great potential for a modern programming language like Julia to close the gap between theoretically oriented and applied researchers. The speaker of this talk will outline the Julia package infrastructure for two different imaging modalities that have been developed since 2015. The packages cover a wide range of functionality, namely:
- File handling for raw data files acquired with tomographic imaging systems.
- Preprocessing of raw data to make it suitable for image reconstruction.
- Routines for setting up dense and matrix-free imaging operators.
- Iterative solvers for the reconstruction problem, including a flexible system for applying regularization to incorporate prior knowledge about the solution.
- Visualization methods for slicing, coloring, and merging tomographic images.
Instead of putting all of this functionality into a single software package, the opposite approach is taken with the philosophy of reusing as much functionality from existing Julia packages as possible (see attached figure). This has the advantage of keeping imaging-specific functionality small and allows common methods to be shared across different imaging modalities. Julia's powerful package manager allows for small packages, making this form of fine-grained modularization feasible. An interesting opportunity that arises by solving the two-language problem is that the software becomes much more accessible since a user can not only use the provided interface but also access internals easily. In the imaging packages MRIReco.jl and MPIReco.jl we have exploited this advantage by providing the user direct access to different abstraction layers of the reconstruction pipeline. In this way a user can either perform standard reconstruction using ready-to-use high-level building blocks or develop a custom reconstruction pipeline based on the available building blocks. While this flexibility can also be achieved in two-language solutions, it arises very naturally in Julia, without much additional effort on the developer side.
Since tomographic image reconstruction is a computationally intensive task, one needs efficient algorithms to determine the image in a short enough time. The talk outlines how the package MRIReco.jl has been designed to match the efficiency of highly tuned C/C++ libraries even in a multi-threading scenario, based on the parallel task runtime available since Julia 1.3.
Core packages being presented:
- https://github.com/MagneticResonanceImaging/MRIReco.jl
- https://github.com/MagneticParticleImaging/MPIReco.jl
false
https://pretalx.com/juliacon2021/talk/N9JPV7/
https://pretalx.com/juliacon2021/talk/N9JPV7/feedback/
Purple
DeconvOptim.jl: Microscopy Image Deconvolution
Lightning talk
2021-07-28T17:10:00+00:00
17:10
00:10
A microscope capturing incoherent light emitted by a specimen always introduces some blur to the image which can be described as a convolution of the object with the point spread function (PSF) of the optical system.
Deconvolution is an algorithm which tries to reverse this blurring process providing a sharper image.
We offer a flexible deconvolution toolbox called DeconvOptim.jl to solve deconvolution for multidimensional signals.
juliacon2021-9445-deconvoptim-jl-microscopy-image-deconvolution
Felix Wechsler
en
In our package DeconvOptim.jl we address deconvolution through an optimization problem.
However, deconvolution is, due to the band limit of the PSF, an ill-conditioned inverse problem which cannot be solved directly.
The forward model is convolution of the PSF with our estimation. The PSF is the mathematical description of the optical system which introduces the blur.
Starting from an initial estimate, we minimize an exchangeable loss function (for microscopy, Poisson loss is widely used) with respect to the reconstruction, yielding a consistent solution to the inverse problem.
To enforce certain constraints, regularizers such as Total Variation (TV) can be used. These regularizers are assembled before the optimization via metaprogramming and Tullio.jl.
The gradient of the full inverse problem pipeline is calculated by Zygote.jl, and the optimization is performed by Optim.jl (currently L-BFGS). Despite having microscopy images in mind, the toolbox can be used for any type of signal deconvolution due to its flexibility. We also offer GPU/CUDA.jl support to a certain extent.
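To make the idea concrete, here is a self-contained, pure-Julia sketch of deconvolution posed as an optimization problem. This is illustrative only, not DeconvOptim.jl's API: it uses a least-squares loss (rather than Poisson), a hand-rolled gradient (rather than Zygote.jl), and simple projected gradient descent with a nonnegativity constraint; the kernel and signal are hypothetical.

```julia
# Naive "same"-size 1D convolution with a centered kernel (the forward model).
function conv1d(x, k)
    n, m = length(x), length(k)
    h = m ÷ 2
    y = zeros(n)
    for i in 1:n, j in 1:m
        idx = i + j - 1 - h
        1 <= idx <= n && (y[i] += x[idx] * k[j])
    end
    return y
end

psf = [0.25, 0.5, 0.25]                  # hypothetical blur kernel (PSF)
truth = zeros(32); truth[10] = 1.0; truth[20] = 2.0
measured = conv1d(truth, psf)            # blurred "measurement"

est = fill(0.1, 32)                      # initial estimate
η = 0.5                                  # step size
for _ in 1:1000
    r = conv1d(est, psf) .- measured     # residual of the forward model
    g = conv1d(r, reverse(psf))          # gradient: the adjoint is the flipped PSF
    est .= max.(est .- η .* g, 0.0)      # gradient step + nonnegativity projection
end
```

The recovered `est` sharpens the two blurred peaks back toward spikes; the package replaces each of these pieces (loss, regularizer, gradient, optimizer) with an exchangeable, multidimensional, GPU-capable counterpart.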
The full source code is available at [GitHub](https://github.com/roflmaostc/DeconvOptim.jl).
false
https://pretalx.com/juliacon2021/talk/8UR3U3/
https://pretalx.com/juliacon2021/talk/8UR3U3/feedback/
Purple
Matlab to Julia: Hours to Minutes for MRI Image Analysis
Lightning talk
2021-07-28T17:20:00+00:00
17:20
00:10
Magnetic resonance imaging (MRI) research has quickly entered the big data regime: hardware and software advances have given rise to (3+1)-dimensional MRI images which consist of 32-64 volumes with dimensions 250x250x250 or more, making non-trivial image processing computationally expensive. In this talk, we describe our experience translating an MRI image post-processing technique from Matlab to Julia (https://github.com/jondeuce/DECAES.jl), reducing computation times from 2 hours to 2 mins.
juliacon2021-9902-matlab-to-julia-hours-to-minutes-for-mri-image-analysis
Jonathan Doucette
en
Like most fields of science, in magnetic resonance imaging (MRI) our appetite for data is never satiated – spatiotemporal resolution and signal-to-noise ratio can never be too high, and MRI scan times can never be too low. But, with big data comes big compute. MRI image reconstruction, for example, often involves parallel acquisition of multiple (3+1)D MRI images as measured by each of the 32+ scanner readout coils which are then combined through a complex series of iterative optimization problems. These types of algorithms are typically prototyped in high-level programming languages like Matlab or Python, and subsequently translated to C/C++ for deployment. The Julia community is familiar with this old story, though – in fact the fantastic package MRIReco.jl (https://github.com/MagneticResonanceImaging/MRIReco.jl) by Knopp et al. provides Julia implementations of MRI reconstruction algorithms which are competitive with C/C++. Post-processing these reconstructed MRI images faces similar computational challenges, and in this talk, we will describe our experience of implementing a parameter inference algorithm from the MRI subfield of myelin water imaging (MWI) in Julia.
In MWI, one analyses (3+1)D time series of image volumes acquired on Cartesian spatiotemporal grids with dimensions 250x250x250x64 or more. The MRI time signals acquired in each voxel, which exhibit multi-exponential decay, are decomposed into a spectrum of decay rates. These decay rates are used to compute, among other useful metrics, the myelin water fraction (MWF) which is known to correlate with local myelin content in the brain. This inverse Laplace transform-like computation involves fitting an MRI signal model – the extended phase graph (EPG) model – to each time signal by solving an L2 regularized nonnegative least squares (NNLS) optimization problem. Note that, with images typically consisting of 10^7 voxels or more, this computation requires solving upwards of 10^7 optimization problems.
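The per-voxel fit can be sketched in pure Julia as follows. This is a simplified illustration of the L2-regularized NNLS idea, not DECAES.jl's implementation: it uses a plain exponential basis instead of the EPG model, projected gradient descent instead of a Lawson–Hanson NNLS solver, and hypothetical echo times and spectrum.

```julia
using LinearAlgebra  # for opnorm

t  = range(0.01, 0.32; length=32)            # hypothetical echo times (s)
T2 = exp10.(range(-2, 0; length=40))         # T2 grid from 10 ms to 1 s
A  = [exp(-ti / T2j) for ti in t, T2j in T2] # multi-exponential basis matrix

x_true = zeros(40); x_true[10] = 0.2; x_true[30] = 0.8  # two-pool spectrum
b = A * x_true                                          # noiseless voxel signal

μ = 1e-3                          # L2 (Tikhonov) regularization weight
η = 1.0 / (opnorm(A)^2 + μ)       # step size bounded by the Lipschitz constant
x = zeros(40)
for _ in 1:20_000
    g = A' * (A * x - b) .+ μ .* x    # gradient of ½‖Ax − b‖² + ½μ‖x‖²
    x .= max.(x .- η .* g, 0.0)       # gradient step, then project onto x ≥ 0
end
```

In the real workload this problem is solved independently for each of the 10^7+ voxels, which is where thread-local buffers and task scheduling matter.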
Prior to this work, the NNLS procedure used for MWI was implemented in Matlab – a closed-source high-level programming language – as is common in MRI research. This computation is particularly poorly suited for Matlab, however. First, similar to the Python library numpy, Matlab encourages computations on vectors or matrices, as opposed to explicit for-loops, in order to call out to BLAS or LAPACK libraries and ameliorate the overhead of the Matlab interpreter. For this reason, the EPG algorithm – which is most efficiently expressed in terms of nested for-loops – was previously implemented in terms of sparse matrix-vector products in order to avoid Matlab’s slow loops. Second, while the solving of independent optimization problems from each voxel is embarrassingly parallel, Matlab provides little control over multiprocessing optimizations such as reusing thread-local memory buffers and task scheduling. Lastly, Matlab does not provide a statically sized array type, which would be beneficial for micro-optimizing 3x3 matrix-vector products present in the EPG algorithm.
Julia excels in these types of computations. In the DEcomposition and Component Analysis of Exponential Signals (DECAES.jl) package (https://github.com/jondeuce/DECAES.jl, https://doi.org/10.1016/j.zemedi.2020.04.001), we provide optimized procedures for computing MWI which address the aforementioned limitations of Matlab, and additionally include command line and Matlab interfaces for ease of interoperability. In all, DECAES.jl reduced computation times approximately 60X from 1.5-2.5 hours down to 1.5-2.5 mins. This large speedup demonstrates that it is possible to perform this analysis directly on the MRI scanner, removing the need for researchers to (manually) process the acquired data.
Among the many additional benefits from the Julia translation is the synergy with other Julia packages: we experimented with explicit SIMD vectorization in the EPG algorithm using the SIMD.jl package; we make liberal use of statically sized vectors and matrices using the StaticArrays.jl package; we use a pure-Julia implementation of NNLS using the NNLS.jl package. Furthermore, the EPG algorithm is independently useful outside of MWI, and can e.g. be efficiently differentiated trivially using the automatic differentiation packages ForwardDiff.jl or Zygote.jl.
In conclusion, we have found that the combination of high-performance and high-expressibility present in Julia is well suited to MRI research, particularly in comparison to existing Matlab-based workflows, and we believe that our experience will resonate with the scientific computing community more broadly. We look forward to the opportunity to share our experience.
false
https://pretalx.com/juliacon2021/talk/WT8PHT/
https://pretalx.com/juliacon2021/talk/WT8PHT/feedback/
Purple
Genify.jl: Transforming Julia into Gen for Bayesian inference
Lightning talk
2021-07-28T17:40:00+00:00
17:40
00:10
Many Julia libraries implement stochastic simulators of natural and social phenomena, but they are not generally amenable to Bayesian inference. In this talk, we present Genify.jl, which transforms these simulators into the Gen probabilistic programming system via compiler injection, allowing us to compute likelihoods, constrain random variables to specific values, and update these values for Monte Carlo inference, thereby enabling Bayesian inference over a wide range of existing Julia code.
juliacon2021-9695-genify-jl-transforming-julia-into-gen-for-bayesian-inference
Xuan (Tan Zhi Xuan)
en
A wide variety of libraries written in Julia implement stochastic simulators of natural and social phenomena for the purposes of computational science. However, these simulators are not generally amenable to Bayesian inference, as they do not provide likelihoods for execution traces, support constraining of observed random variables, or allow random choices and subroutines to be selectively updated in Monte Carlo algorithms.
To address these limitations, we present Genify.jl, an approach to transforming plain Julia code into generative functions in Gen, a universal probabilistic programming system with programmable inference. We accomplish this via lightweight transformation of lowered Julia code into Gen’s dynamic modeling language, combined with a user-friendly random variable addressing scheme that enables straightforward implementation of custom inference programs.
We demonstrate the utility of this approach by transforming an existing agent-based simulator from plain Julia into Gen, and designing custom inference programs that increase accuracy and efficiency relative to generic SMC and MCMC methods. This performance improvement is achieved by proposing, constraining, or re-simulating random variables that are internal to the simulator, which is made possible by transformation into Gen.
Genify.jl is available at: https://github.com/probcomp/Genify.jl
false
https://pretalx.com/juliacon2021/talk/PPG3CY/
https://pretalx.com/juliacon2021/talk/PPG3CY/feedback/
Purple
Code, docs, and tests: what's in the General registry?
Talk
2021-07-28T19:00:00+00:00
19:00
00:30
The General registry is the collection of open source packages that makes up the Julia package ecosystem. Here, we take a survey: what fraction of packages have tests? CI? Docs? An open source license? How big are most packages? What's the biggest one? Are there many tiny packages? We will explore these questions and more with charts, plots, and discussion. We'll also show how to use PackageAnalyzer.jl to collect the data for yourself or take a look at a particular package (perhaps your own!).
juliacon2021-9703-code-docs-and-tests-what-s-in-the-general-registry-
/media/juliacon2021/submissions/HVSAW9/general_oW2Lvi8.png
Mosè GiordanoEric P. Hanson
en
We know that Julia is a modern language that makes adopting best programming practices, like documentation and testing, very simple, lowering the entry barriers for newcomers... but is that true? We developed a package called [`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) to try to answer this question and get more information about packages in the Julia ecosystem.
[`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) lets you statically inspect the content of a package and collect information about the use of documentation, testing suite, continuous integration, as well as the licenses used, the number of lines of code and the number of contributors.
In this talk we will show how to use [`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) with your own package, and then iterate the analysis over any collection of packages, including all those in the General registry. We will present plots and statistics about the open source packages in the Julia ecosystem. We will be able to see what is the adoption of practices like documentation and testing, what are the most popular licenses and continuous integration services, what are the largest packages and in what languages they are written. Additionally, we will have a look into the Julia community: how many users contributed to the Julia ecosystem and how many people work on a single package, on average?
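A hedged sketch of what such a session might look like (this assumes the package is installed and has network access to clone the repository; the field names reflect PackageAnalyzer's released API but may differ between versions):

```julia
using PackageAnalyzer  # assumed installed

# Analyze a single registered package by name; the package source is
# cloned to a local scratch space and inspected statically.
pkg = analyze("DataFrames")

# The result records, among other things, whether the package has docs,
# a test suite, license files, and its lines of code.
pkg.docs, pkg.runtests, pkg.license_files
```

Iterating `analyze` over every entry in the General registry is what produces the ecosystem-wide statistics presented in the talk.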
false
https://pretalx.com/juliacon2021/talk/HVSAW9/
https://pretalx.com/juliacon2021/talk/HVSAW9/feedback/
Purple
Using optimization to make good guesses for test cases
Lightning talk
2021-07-28T19:30:00+00:00
19:30
00:10
Some applications seem untestable because they are slow to run, with too many options. One approach chooses tests carefully using an optimization algorithm to find the smallest set of tests that are likely to exercise all the parts of the code. In this talk, we introduce the UnitTestDesign.jl package for combinatorial testing and show how it integrates with Julia's test framework using Julia's system of artifacts and scratch spaces.
juliacon2021-9830-using-optimization-to-make-good-guesses-for-test-cases
Andrew Dolgert
en
I'm developing the largest inference application I have ever seen. For any population, it estimates morbidity and mortality from disease, cast against a background of mortality, but this is measured across years for multiple ages. There are seven web pages of settings, and it can take a day to run. In some way, it's easy to test because it's an inverse problem, so I can create a correct answer, generate data, and see if the application finds the correct answer. What I want is a defensible claim that I've tested the seven pages of settings.
My first approach is to write tests for some important cases. These paradigmatic tests have to pass, and they tell stakeholders that the basics work well. I add to these some tests I know challenge the system. Beyond these two classes of tests lies another set of less common techniques that help look for bugs where I don't expect them. These include random testing, concolic testing, and property-based testing. For this problem, let's focus on a simpler technique, combinatorial testing.
Combinatorial testing is a careful selection of test arguments, designed to make good code coverage likely. If we picture a page of code, then any one call to a function will walk through that code, skipping parts of it when it fails an if-condition. A thorough set of tests should, at least, execute different parts of if-conditions. There must be some choice of inputs to the application that lead to every branch of the code. Some branches depend on two input arguments multiplied pairwise. Others may depend on a particular combination of three or four input arguments. It would be helpful to test each value of each option and, somehow, walk through all possible pairs of arguments or all possible triples of arguments, in order to cover all branches.
If we have twenty different options, each of which can take one of four values, we don't have to run twenty-times-four tests to try every value; we can pack them into only a few tests. What if we also wanted to try every pair of values? For any two options, that's four-times-four, or sixteen, combinations of values, and there are twenty-choose-two, or 190, pairs of options, but we can pack these combinations together, too, so that each test case exercises many pairs at once.
The algorithms in UnitTestDesign.jl use greedy optimization to construct short test suites that pack all-pairs coverage into as few test cases as possible. For twenty arguments with four values each, it can pack every possible pair of argument values into thirty-seven test cases. There is some research support that all-pairs testing does a good job of finding faults in code, but the same package can also generate tests with higher coverage, where higher means all triples or quadruples of input values are included in test cases.
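As a sketch of what this looks like in practice (assuming UnitTestDesign.jl's `all_pairs` generator; the function under test here is hypothetical), a handful of generated cases can cover every pair of argument values:

```julia
# Hedged sketch: assumes UnitTestDesign.jl exports `all_pairs`, which takes
# the possible values of each argument and returns a compact list of test
# cases covering every pair of values across arguments.
using UnitTestDesign, Test

test_cases = all_pairs(
    [:fast, :exact],      # algorithm choice
    [1, 4, 16],           # grid resolution
    [true, false],        # caching flag
)

@testset "pairwise coverage" begin
    for (alg, res, cache) in test_cases
        result = run_model(alg, res, cache)  # hypothetical application entry point
        @test result.converged
    end
end
```

The test suite is far shorter than the full 2 × 3 × 2 Cartesian product, yet every pair of values for every pair of arguments appears in at least one case.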
Most implementations of all-pairs algorithms aren't easy to run in a unit-testing framework because they are web-based or proprietary. There are a few reasons for this. These algorithms need to deal with different argument types. They need to give the tester a way to say that, if a flag is false, then another argument can't take certain values, so they need a little domain-specific language. Julia handles those problems naturally and, further, is efficient at the greedy optimization that determines test cases, which can take time to generate.
For applications with many options, or functions with many arguments, generating a good set of test cases can be computationally intensive, so we rely on the testing framework to help us generate values when needed, save them, and load them later. In Julia, the packages for scratch space and artifacts give us a workflow for testing where we generate combinatorial values, save them to scratch, and upload them as artifacts for others to use.
The resulting approach is to create a set of tests, save them for reuse, and run them many times. Given the challenging problem of testing a large, slow application, we've begun to describe a paradigm from the field of test automation. The general approach is to create a bunch of tests, measure their coverage, select a set to run, and respond to failing tests by refining those tests until we've narrowed down the fault at their source. Parts of this general approach can be seen in random testing, concolic testing, and property-based testing. Compared with these, combinatorial testing is the art of starting with a really good guess.
true
https://pretalx.com/juliacon2021/talk/3LMU3W/
https://pretalx.com/juliacon2021/talk/3LMU3W/feedback/
Purple
Building Interactive REPL-based Visualizations in GridWorlds.jl
Lightning talk
2021-07-28T19:40:00+00:00
19:40
00:10
Visualization often plays an important role in several disciplines. For example, in reinforcement learning, visualization tools are indispensable for testing environment logic and analyzing agent behavior. Using the GridWorlds.jl package as an example, I will explain some fundamental concepts and techniques to enable anyone to easily build their own terminal-based visualizations from scratch, and demonstrate how they can be leveraged to create productive workflows inside the Julia REPL.
juliacon2021-11102-building-interactive-repl-based-visualizations-in-gridworlds-jl
Siddharth Bhatia
en
Resources:
Repository for this talk: https://github.com/Sid-Bhatia-0/JuliaCon2021Talk
GridWorlds.jl: https://github.com/JuliaReinforcementLearning/GridWorlds.jl
A good visualization can sometimes drastically speed up the understanding of a program. Plots are an obvious example of this. Additionally, in reinforcement learning, for example, other forms of visualizations are often used to test out an environment, and also to analyze the behavior of an agent in the environment at various points during its training process.
While developing complex programs, it often pays off well to write visualization tools from an early stage, especially when the correctness of a program cannot be verified via writing test cases alone. Reinforcement learning environments, or any kinds of games for that matter, are a good example of this. In many cases, people may overestimate the cost of creating such tools relative to the value they provide, and might perceive such a task to be more challenging than it actually is. I am here to show you that in some cases, it is much easier than you might think.
The Julia REPL offers several valuable features, often making it an indispensable part of a Julia user’s workflow in some form or another. It is possible to take this one step further and create interactive terminal-based visualizations that unlock even more productive workflows while using the REPL.
I will showcase some relevant features from the GridWorlds.jl package as a concrete example of increased developer productivity using interactive terminal-based visualizations in the REPL. In this package, plain keyboard inputs allow one to rapidly test and debug tile-based reinforcement learning environments by directly visualizing and playing them inside the terminal. One can instantly switch back and forth between testing an environment and debugging it within the same REPL session without losing local state. Additionally, one can record these interactions and replay them inside the REPL by stepping through the individual frames. This feature also proves extremely handy when analyzing the behavior of an agent at various points during training.
I will deconstruct the essential pieces necessary to create and run such a visualization inside the terminal and explain how it can be built from scratch, only utilizing things that already ship with Julia.
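As a minimal illustration of the kind of machinery involved (a sketch only, not GridWorlds.jl's actual implementation), a grid can be redrawn in place using nothing more than ANSI escape sequences and Base Julia:

```julia
# Minimal sketch of in-place terminal rendering with plain ANSI escape codes.
# Each call to `render` overwrites the previous frame, producing the effect
# of an animation directly inside the terminal.
function render(grid::Matrix{Char})
    print("\e[2J\e[H")        # clear the screen and move the cursor home
    for row in eachrow(grid)
        println(join(row))    # draw one row of tiles
    end
end

grid = fill('.', 5, 8)        # an empty 5x8 world
grid[3, 4] = '@'              # place the agent
render(grid)
```

Reading single keypresses (e.g. by putting the terminal into raw mode) and re-rendering after each one is enough to turn this into an interactive loop.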
The techniques and tricks explained in this talk are much more generally applicable. I encourage everyone to think about how you can creatively augment your current workflow to make it even more productive and engaging for your domain.
false
https://pretalx.com/juliacon2021/talk/DX7DCQ/
https://pretalx.com/juliacon2021/talk/DX7DCQ/feedback/
Purple
Catwalk.jl: A profile guided dispatch optimizer
Lightning talk
2021-07-28T19:50:00+00:00
19:50
00:10
Catwalk.jl can speed up long-running Julia processes by minimizing the overhead of dynamic dispatch. It is a JIT compiler that continuously re-optimizes dispatch code based on data collected at runtime.
It features a low overhead statistical profiler and a tunable cost model to drive recompilation decisions.
I will talk about its target use case, performance characteristics, some implementation details and its connections to the Julia ecosystem.
juliacon2021-9729-catwalk-jl-a-profile-guided-dispatch-optimizer
Krisztián Schäffer
en
false
https://pretalx.com/juliacon2021/talk/FZ99RD/
https://pretalx.com/juliacon2021/talk/FZ99RD/feedback/
BoF/Mini Track
Building a Chemistry and Materials Science Ecosystem in Julia
Birds of Feather
2021-07-28T12:30:00+00:00
12:30
01:30
Julia has a growing presence in the computational chemistry and materials science communities, already exhibiting best-in-class performance in several domains. However, a common set of tools, datatypes, and norms are largely lacking at present. In this session, we will have discussions to build consensus around a vision for such tools, with an emphasis on reusable structures/workflows, such as I/O for common file types, bindings for widely-used codes from other languages, and mathematical tools.
juliacon2021-9628-building-a-chemistry-and-materials-science-ecosystem-in-julia
Rachel KurchinMichael F. Herbst
en
Julia is a natural choice for computational chemists and materials scientists, primarily due to its excellent computational performance combined with ease of code sharing and extensibility within and between packages. Unsurprisingly, interest in and use of Julia within this community is growing – in particular, at JuliaCon 2020, several packages were introduced (such as JuliaChem and DFTK) that generated substantial “buzz” in the community, and despite being quite young (O(1) developer-year of invested effort), these packages are already matching or even exceeding best-in-class performance for some use cases!
This BoF session aims to continue this momentum, as well as to set some longer-term goals and norms for the community as a whole. To our knowledge, there has not yet been a broad discussion of this kind, and an informal proposal on Julia Discourse (see discourse.julialang.org/t/interest-in-chemistry-focused-bof) indicated enthusiasm from a variety of developers and users.
In particular, at present, standards such as how to represent certain ubiquitous types of data and perform common tasks are lacking, which can lead to inadvertent duplication of effort. As interest in and use of Julia in these communities grows, the impact of establishing such norms multiplies. In this session, we plan to host a community discussion aimed at building consensus around these topics. Some specific examples include, but are not necessarily limited to:
1. I/O for common structure file types (e.g. .cif, .xyz) and Julia data types for representing these structures (examples of Python versions of such systems include those provided by the Atomic Simulation Environment and pymatgen)
2. Frequently-invoked mathematical procedures such as integration on common types of grids or using common sets of basis functions utilized within quantum chemical simulation approaches such as density functional theory and (post-)Hartree-Fock
3. Julia bindings for widely-used codes in other languages (primarily C, C++, and Python) that are not worth duplicating in Julia in the short term but which provide functionality such as parsing outputs of simulation codes as well as some of the mathematical operations described above.
We are optimistic that the discussions in this session will both help to strengthen ties within this small but growing community as well as help to amplify its productivity and impact!
false
https://pretalx.com/juliacon2021/talk/ZQJAW3/
https://pretalx.com/juliacon2021/talk/ZQJAW3/feedback/
BoF/Mini Track
Set Propagation Methods in Julia: Techniques and Applications
Minisymposium
2021-07-28T16:30:00+00:00
16:30
02:00
This minisymposium presents modern approaches to analyze a variety of mathematical systems in Julia, via set propagation techniques: dynamical systems, cyber-physical systems, probabilistic systems, and neural networks. To deploy those systems in the real world there is an increasing demand for safe and reliable models. The speakers represent a broad cross-section of work from different fields that build on set-based techniques and global optimization to address such challenges.
juliacon2021-9831-set-propagation-methods-in-julia-techniques-and-applications
Marcelo ForetsChristian SchillingAnder GrayDavid P. SandersMatthew WilhelmGoran FrehseJorge Pérez ZerpaDeleted UserJulien CalbertTomer Arnon
en
- Organisers: Marcelo Forets (@mforets) and Christian Schilling (@schillic)
- Moderator: David P. Sanders (@dpsanders)
A new generation of algorithms is addressing the fundamental challenge of how to exhaustively explore all possible scenarios for simulation of dynamical systems under model uncertainties. Moreover, deep neural networks play an increasing role in control and safety-critical applications, although it is often not known how to guarantee that they will behave correctly and safely under all circumstances. This minisymposium will host applications of set-based techniques and global optimization in Julia that address these questions.
We have made sure to reach out to several different groups who are working on set propagation techniques and their application in a wide range of areas, to present a broad overview of the area.
Plan: 1 introductory talk (non-Julia-specific), 5 regular talks, 1 Q&A panel.
- Introduction by Goran Frehse (Hybrid Systems Semantics group, Computer Science and System Engineering Laboratory (U2IS), ENSTA Paris). [Homepage](https://sites.google.com/site/frehseg/home).
- **Using Set Propagation and the Finite Element Method For Time Integration in Transient Solid Mechanics Problems.** By Jorge Pérez Zerpa (speaker), Marcelo Forets and Daniel Freire Caporale. The Finite Element Method (FEM) is the gold standard for numerical simulation in transient solid mechanics problems. Several time-integration algorithms have been developed in recent decades; however, it is still a challenging problem to completely describe the family of dynamically-feasible behaviors from given sets of initial states. In this talk we take a set-based approach and conclude that it has a lot of potential to efficiently solve such problems.
- **Dionysos.jl: Optimal Control of Cyber-Physical Systems.** By Benoit Legat, Guillaume Berger, Julien Calbert (speaker) and Raphaël Jungers. [Dionysos.jl](https://github.com/dionysos-dev/Dionysos.jl) is software produced by the ERC project Learning to Control (L2C). In view of the Cyber-Physical Systems (CPS) revolution, the only sensible way of controlling these complex systems is by discretizing the different variables, thus transforming the model into a simple combinatorial problem on a finite-state automaton, called an abstraction of this system. Our goal is to transform this approach into an effective, scalable, cutting-edge technology that will address the challenges of CPS and unlock their potential.
- **Solving Optimization Problems with Embedded Dynamical Systems.** By Matthew Wilhelm (speaker) and Matthew Stuber. We will discuss our recent work at [PSORLab](https://github.com/PSORLab): EAGODynamicOptimizer.jl and DynamicBounds.jl packages. These extend our EAGO.jl nonconvex optimizer to address formulations containing embedded dynamical systems. We highlight a series of approaches for constructing the requisite convex and concave relaxations of differential equations in the original decision space and discuss the use of such techniques in a global optimization context. These methods may readily be composed with existing McCormick relaxation approaches, which allows for the solution of general nonlinear formulations to certified global optimality. Use cases relevant to hybrid data-driven process modeling, parameter estimation, and worst-case robust design are discussed.
- **Computing with sets of probabilities in Julia.** By Ander Gray. There are many ways to mathematically define a set of probability distributions, including: intervals, possibility distributions, random sets and probability boxes (p-boxes). These structures were discovered independently from one another, but are often synonymous and can be translated. Imprecise Probability theory links all these theories into one. In this presentation, we present [ProbabilityBoundsAnalysis.jl](https://github.com/AnderGray/ProbabilityBoundsAnalysis.jl) (PBA) a numerical implementation of p-box arithmetic in Julia, which gives an arithmetic of random variables where both marginal distributions and dependencies may be partially defined. We show how PBA may be used to rigorously propagate distributions and p-boxes in reachability problems using [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl).
- **Methods to Soundly Verify Deep Neural Networks.** By Tomer Arnon. Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. [NeuralVerification.jl](https://github.com/sisl/NeuralVerification.jl) implements several methods that have emerged recently for soundly verifying such properties. We discuss fundamental differences between existing algorithms and compare them on a set of benchmark problems.
false
https://pretalx.com/juliacon2021/talk/DRMPLU/
https://pretalx.com/juliacon2021/talk/DRMPLU/feedback/
BoF/Mini Track
Fancy Arrays BoF 2
Birds of Feather
2021-07-28T19:00:00+00:00
19:00
01:30
This is the second of two BoFs planned several years ago, to replace AxisArrays.jl.
Per the original plan, we would go away and make many packages to try many ideas, come back and touch base in 2020, and then draw conclusions in 2021.
The goal this year is to determine a final plan to either get down to a small number of packages, or establish a common interface.
juliacon2021-9687-fancy-arrays-bof-2
Frames Catherine WhiteRory Finnegan
en
Notes from [last year's discussions can be found here](https://docs.google.com/document/d/1imBX3k0EEejauWVyXONZDRj8LTr0PeLOJNGEgo6ow1g/edit#heading=h.qrm4q6q56yxm).
Since then, three packages have emerged that can essentially replace AxisArrays.jl with a more modern and idiomatic interface. \
In approximate order of power and also complexity (both for users and for maintainers) they are: [AxisKeys.jl](https://github.com/mcabbott/AxisKeys.jl), [AxisIndices.jl](https://github.com/Tokazama/AxisIndices.jl/), and [DimensionalData.jl](https://github.com/rafaqz/DimensionalData.jl) (the former two building upon [NamedDims.jl](https://github.com/invenia/NamedDims.jl/) for naming axes). \
Since last year, [IndexedDims.jl](https://github.com/invenia/IndexedDims.jl/) has been deprecated in favour of [AxisKeys.jl](https://github.com/mcabbott/AxisKeys.jl).
An ideal outcome of this BoF session would be an agreement to deprecate an additional package in favour of one of the others, or even to deprecate two, leaving one. \
A less ideal, but still very good, outcome would be to agree on a common API (like [Tables.jl](https://github.com/JuliaData/Tables.jl)) which each package can extend, and to appoint someone to lead the establishment of this API and ensure that it gets rolled out.
We'll be using a [google doc](https://docs.google.com/document/d/1RPQw3zMGRVm8cayUrQhFGzlKV5hp-1DJMUE32H_-bgo/edit?usp=sharing) to organize speaking turns during the call.
false
https://pretalx.com/juliacon2021/talk/A93QFU/
https://pretalx.com/juliacon2021/talk/A93QFU/feedback/
JuMP Track
The state of JuMP
Talk
2021-07-28T12:30:00+00:00
12:30
00:30
JuMP is a modeling language and collection of supporting packages for mathematical optimization in Julia. JuMP makes it easy to formulate and solve linear programming, semidefinite programming, integer programming, convex optimization, constrained nonlinear optimization, and related classes of optimization problems.
In this talk, we discuss the state of JuMP, preview some recently added features, and discuss our plans for the future.
juliacon2021-9899-the-state-of-jump
Oscar Dowson
en
false
https://pretalx.com/juliacon2021/talk/X7QCPU/
https://pretalx.com/juliacon2021/talk/X7QCPU/feedback/
JuMP Track
What's new in COSMO?
Talk
2021-07-28T13:00:00+00:00
13:00
00:30
In this talk we describe two recent improvements to the COSMO solver. The first improvement is an automatic clique merging strategy which allows COSMO to solve large sparse SDPs more effectively. The second improvement is a safeguarded acceleration method that wraps around the solver's ADMM algorithm. We show that this leads to a significant improvement in both convergence and solve time to higher accuracy solutions. We tested the method on more than 500 QPs and SDPs from various applications.
juliacon2021-10862-what-s-new-in-cosmo-
Michael Garstka
en
false
https://pretalx.com/juliacon2021/talk/UDWSEC/
https://pretalx.com/juliacon2021/talk/UDWSEC/feedback/
JuMP Track
Conic optimization example problems in Hypatia's examples folder
Talk
2021-07-28T13:30:00+00:00
13:30
00:30
Hypatia is a conic interior point solver written in Julia, with a generic cone interface. In Hypatia's examples folder, we have implemented around three dozen applied examples from a wide variety of domains (see https://chriscoey.github.io/Hypatia.jl/dev/examples/). In this talk, we summarize Hypatia's examples, scripts, and the results of our computational comparisons on thousands of conic instances generated from our examples.
juliacon2021-10867-conic-optimization-example-problems-in-hypatia-s-examples-folder
Chris Coey
en
Most of these examples have multiple formulation options, and together these formulations cover all of Hypatia's several dozen predefined cone types (see https://chriscoey.github.io/Hypatia.jl/dev/api/cones/#Predefined-cone-types). Using scripts in Hypatia's scripts folder, we use these examples to (1) compare the performance of Hypatia's algorithmic options/enhancements, and (2) to assess the value of low-dimensional natural formulations versus standard conic formulations that only use cones currently recognized by other conic solvers.
false
https://pretalx.com/juliacon2021/talk/7KECGM/
https://pretalx.com/juliacon2021/talk/7KECGM/feedback/
JuMP Track
Symmetry reduction for Sum-of-Squares programming
Talk
2021-07-28T16:30:00+00:00
16:30
00:30
In this talk we discuss a symmetry reduction approach relying on the invariance of the polynomial under a group of actions. From the algebraic properties of the group, the SymbolicWedderburn package determines a change of basis that enables the decomposition of the constraints into smaller bases, some of them being equal which further reduces the problem. We show how to specify the group symmetry to allow SumOfSquares to perform this reformulation automatically.
juliacon2021-10921-symmetry-reduction-for-sum-of-squares-programming
Benoît LegatMarek Kaluba
en
Sum-of-Squares or semidefinite programming makes it possible to provide guaranteed bounds for remarkably many problems. Although several efficient algorithms exist to solve these programs, their space and time complexity, and even their numerical robustness, do not scale well with the size of the polynomial basis or semidefinite matrix. To alleviate this problem, different methods have been developed to reduce constraints with a large basis or matrix into smaller ones; these exploit sign symmetry or sparsity structure using chordal decomposition.
false
https://pretalx.com/juliacon2021/talk/L8DTE3/
https://pretalx.com/juliacon2021/talk/L8DTE3/feedback/
JuMP Track
Sparse Matrix Decomposition and Completion with Chordal.jl
Lightning talk
2021-07-28T17:10:00+00:00
17:10
00:10
We will introduce Chordal.jl, which includes several extensible algorithms for sparse matrices with a chordal sparsity pattern. We will overview the algorithms in this package and showcase their application in sparse semidefinite programming.
juliacon2021-9927-sparse-matrix-decomposition-and-completion-with-chordal-jl
Theo Diamandis
en
In this talk, we will introduce chordal graphs and some of their core properties. These properties enable many otherwise difficult problems, such as minimum vertex coloring, to be solved efficiently. Furthermore, they lead to several decomposition results for sparse matrices.
We will introduce Chordal.jl, a package for working with sparse matrices that have a chordal sparsity pattern. We will overview the algorithms implemented in this package and their applications, including Euclidean distance matrix completion and optimization with sparse data.
We will conclude by using Chordal.jl to dramatically reduce the solve time of a sparse semidefinite program (SDP). Solving large, sparse SDPs remains computationally prohibitive for many existing solvers, and this application largely motivated the development of the package.
false
https://pretalx.com/juliacon2021/talk/8E9BAK/
https://pretalx.com/juliacon2021/talk/8E9BAK/feedback/
JuMP Track
Automatic dualization with Dualization.jl
Lightning talk
2021-07-28T17:20:00+00:00
17:20
00:10
In this talk, we present Dualization.jl, an extension that allows users to dualize optimization problems defined in JuMP. The dual formulation can be used to better suit the description of the optimization problem to the format expected by the conic solver. Moreover, automatic dualization can be used to model bilevel problems by automatically building some of the KKT conditions.
juliacon2021-10861-automatic-dualization-with-dualization-jl
Guilherme Bodin
en
false
https://pretalx.com/juliacon2021/talk/8YGNYU/
https://pretalx.com/juliacon2021/talk/8YGNYU/feedback/
JuMP Track
Modeling Bilevel optimization problems with BilevelJuMP.jl
Talk
2021-07-28T17:30:00+00:00
17:30
00:30
In this talk, we present BilevelJuMP.jl an extension that makes it straightforward for users to write bilevel problems just like JuMP made it easy to write optimization problems. BilevelJuMP.jl uses Dualization.jl to generate the dual constraints of KKT conditions and has multiple formulations for complementarity constraints such as SOS1, Fortuny-Amat, quadratic programming, and actual complementarity constraints.
juliacon2021-10882-modeling-bilevel-optimization-problems-with-bileveljump-jl
Joaquim Dias Garcia
en
false
https://pretalx.com/juliacon2021/talk/WULB78/
https://pretalx.com/juliacon2021/talk/WULB78/feedback/
JuMP Track
Infinite-Dimensional Optimization with InfiniteOpt.jl
Talk
2021-07-28T19:00:00+00:00
19:00
00:30
We present InfiniteOpt.jl which facilitates a coherent unifying abstraction for characterizing infinite-dimensional optimization problems rigorously through a common lens. This decouples models from discretized forms and promotes the use of novel transformations. This new perspective encourages new theoretical crossover and novel problem formulations (creating new disciplines like random field optimization).
juliacon2021-10880-infinite-dimensional-optimization-with-infiniteopt-jl
Joshua Pulsipher
en
Infinite-dimensional optimization problems are a challenging problem class that cover a wide breadth of optimization areas and embed complex modeling elements such as infinite-dimensional variables, measures, and derivatives. Typical modeling approaches (e.g., those behind Gekko and Pyomo.dae) often only consider discretized formulations and do not provide a unified paradigm across the various disciplines.
false
https://pretalx.com/juliacon2021/talk/YVCM8B/
https://pretalx.com/juliacon2021/talk/YVCM8B/feedback/
JuMP Track
Hybrid Strategies using Piecewise-Linear Decision Rules
Talk
2021-07-28T19:30:00+00:00
19:30
00:30
In this talk, we discuss planned extensions to the features provided by JuMPeR via the following three attributes: (1) the introduction of a new policy type for adaptive decisions, (2) the introduction of the stochastic programming objective function paradigm, and (3) the introduction of moving/folding horizon simulator features to assess the robust/stochastic affine policies. The third attribute is closely related to what is known as pareto optimality of robust adaptive solutions.
juliacon2021-10872-hybrid-strategies-using-piecewise-linear-decision-rules
Said Rahal
en
Decision rules offer a rich and tractable framework for solving certain classes of multistage adaptive optimization problems. Recent literature has shown the promise of using linear and nonlinear decision rules in which wait-and-see decisions are represented as functions, whose parameters are decision variables to be optimized, of the underlying uncertain parameters. Despite this growing success, solving real-world stochastic optimization problems can become computationally prohibitive when using nonlinear decision rules, and in some cases, linear ones. Consequently, decision rules that offer a competitive trade-off between solution quality and computational time become more attractive. Whereas the extant research has always used homogeneous (i.e., either linear or piecewise-linear) decision rules, the major contribution of this paper is a computational exploration of hybrid decision rules combining the benefits of the two classes of decision rules. We also demonstrate a case where, unexpectedly, a linear decision rule is superior to a more complex piecewise-linear decision rule within a simulator. This observation bolsters the need to assess the quality of decision rules obtained from a look-ahead model within a simulator rather than just using the optimal look-ahead objective function value.
false
https://pretalx.com/juliacon2021/talk/CEFANG/
https://pretalx.com/juliacon2021/talk/CEFANG/feedback/
JuMP Track
Flexible set projections with MathOptInterface
Lightning talk
2021-07-28T20:00:00+00:00
20:00
00:10
MathOptInterface has become a pillar of constrained optimization in Julia, defining a common language unifying multiple branches of mathematical optimization. We will present MathOptSetDistances.jl, a package to compute distances to and projections onto sets, and the differentiation of these operations. We will cover the motivation behind it, how it started and highlight learned lessons on the way.
juliacon2021-9316-flexible-set-projections-with-mathoptinterface
Mathieu Besançon
en
This talk introduces the main abstractions of MathOptInterface.jl, the central interface for expressing constrained optimization problems in Julia, and explains how an extension for distances and projections was built on top of it.
MathOptSetDistances.jl defines an API for projecting points onto sets and computing distances from a point to a given set defined in MathOptInterface.jl. It has become a toolbox used by other packages built on top of MathOptInterface.jl and makes new features accessible to Convex.jl, JuMP.jl, and their extensions. Computing distances and projections is central to many optimization algorithms, whether to compute the violation of a constraint or to project back onto a feasible set.
One challenge that arose from distance computation is designing an interface with a consistency guarantee on the definition of distances while allowing alternative distance implementations for some sets.
The projection and distance operators are also differentiable; the package implements both full Jacobian computation and the ChainRules API, which we will illustrate on some sets.
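For a flavor of the API described above (function and type names as I understand them from the package; treat the exact signatures as an assumption), projecting a point onto the nonnegative orthant might look like:

```julia
# Hedged sketch: assumes MathOptSetDistances.jl exposes `DefaultDistance`,
# `distance_to_set`, and `projection_on_set` for MathOptInterface sets.
using MathOptInterface, MathOptSetDistances
const MOI = MathOptInterface
const MOD = MathOptSetDistances

v = [-1.0, 2.0, -0.5]
set = MOI.Nonnegatives(3)          # the set {x : x .>= 0} in R^3
d = MOD.DefaultDistance()

MOD.distance_to_set(d, v, set)     # how far v is from the set
MOD.projection_on_set(d, v, set)   # nearest point in the set: elementwise max(v, 0)
```

The same call pattern extends to the other MathOptInterface sets the package supports, which is what makes it reusable across Convex.jl, JuMP.jl, and their extensions.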
false
https://pretalx.com/juliacon2021/talk/X9BNQV/
https://pretalx.com/juliacon2021/talk/X9BNQV/feedback/
JuMP Track
Solving optimization problems at Fonterra
Lightning talk
2021-07-28T20:10:00+00:00
20:10
00:10
In this talk we discuss how the Data Science team at Fonterra, a New Zealand dairy co-operative responsible for 30% of the world trade in dairy exports, use JuMP to solve planning problems relating to organic milk production.
juliacon2021-10978-solving-optimization-problems-at-fonterra
Oleg Barbin
en
Solving optimization problems in a business setting can be a significant challenge. There is a constant tension between delivering quick prototypes to prove value and building robust tools.
At Fonterra, a New Zealand dairy co-operative, one of our planning problems concerns organic milk production. Due to low volumes of organic-certified milk, organic production planning takes place outside the usual planning process. The constraints around organic problems are complex, and there is considerable value to be derived from a quality plan. These factors make organic planning a perfect candidate for a stand-alone optimization project within the business.
During this project, JuMP has been an invaluable tool in several ways. Using JuMP, it has been trivial to develop quick prototypes and experimental features, without sacrificing the robustness of the end-product. JuMP enables our team to be creative during the process and try new things on the fly. We can quickly respond to feedback from end users, which helps build a close relationship and ensure the continued success of the project. JuMP is also a reliable tool for building larger optimization applications, enabling the Data Science team at Fonterra to easily incorporate different multi-objective optimization approaches, optional cuts and complex conditional constraints into the model.
Thanks to JuMP, we have been able to mitigate the problem outlined at the start of the abstract, and secure key user engagement through continuous proof of value while delivering robust software.
false
https://pretalx.com/juliacon2021/talk/3F88PP/
https://pretalx.com/juliacon2021/talk/3F88PP/feedback/
JuMP Track
TSSOS.jl: exploiting sparsity in polynomial optimization
Lightning talk
2021-07-28T20:20:00+00:00
20:20
00:10
TSSOS.jl helps polynomial optimizers solve large-scale problems with sparse input data. The underlying algorithmic framework is based on exploiting correlative and term sparsity to obtain a new moment-SOS hierarchy involving potentially much smaller positive semidefinite matrices. TSSOS can be applied to numerous problems ranging from power networks to eigenvalue and trace optimization of noncommutative polynomials, involving up to tens of thousands of variables and constraints.
juliacon2021-10877-tssos-jl-exploiting-sparsity-in-polynomial-optimization
Jie Wang
en
false
https://pretalx.com/juliacon2021/talk/XFC73Y/
https://pretalx.com/juliacon2021/talk/XFC73Y/feedback/
Green
SmartTensors: Unsupervised Machine Learning
Talk
2021-07-29T12:30:00+00:00
12:30
00:30
Demonstrate SmartTensors (http://tensors.lanl.gov; https://github.com/SmartTensors): a toolbox for unsupervised machine learning based on matrix/tensor factorization constrained by penalties enforcing robustness and interpretability (e.g., nonnegativity; physics and mathematical constraints; etc.). SmartTensors has been applied to analyze diverse datasets related to a wide range of problems: from COVID-19 to wildfires and climate.
juliacon2021-9765-smarttensors-unsupervised-machine-learning
/media/juliacon2021/submissions/UKASBZ/SmartTensorsNew_F3yKYby.png
Velimir Vesselinov
en
The world’s most valuable resource is no longer oil. It is data. SmartTensors (http://tensors.lanl.gov; https://github.com/SmartTensors) is a toolbox for unsupervised machine learning based on matrix/tensor factorization constrained by penalties enforcing robustness and interpretability (e.g., nonnegativity; physics and mathematical constraints; etc.). SmartTensors has been applied to analyze diverse datasets related to a wide range of problems: from COVID-19 to wildfires and climate. The workshop will demonstrate how SmartTensors can be easily applied to these and other application areas. The workshop will include hands-on real-time demonstrations of already existing case studies. It is designed to be suitable and useful for anyone regardless of their machine learning experience, by providing materials at introductory, intermediate and expert levels.
false
https://pretalx.com/juliacon2021/talk/UKASBZ/
https://pretalx.com/juliacon2021/talk/UKASBZ/feedback/
Green
Finding an Effective Strategy for AutoML Pipeline Optimization
Talk
2021-07-29T13:00:00+00:00
13:00
00:30
One of the main problems in AutoML implementation is finding the best strategy to search for the optimal pipeline in prediction or classification tasks. This problem is commonly known as CASH (Combined Algorithm Selection and Hyperparameter Optimization). This talk will show competitive results with significantly shorter computation time by focusing the search on model selection and the structure of the pipeline, without the need for hyperparameter optimization.
juliacon2021-9536-finding-an-effective-strategy-for-automl-pipeline-optimization
Paulito Palmes
en
The CASH problem can be decomposed into three major components:
- searching the optimal __m__ model with n(m) search space
- searching the optimal order of __p__ preprocessing elements with n(p) search space
- searching the optimal __h__ hyperparameters with n(h) search space
The most popular approaches involve simultaneous search of these three components with time complexity of n(p) x n(m) x n(h). An alternative method is to perform the search sequentially, starting with __m__ using surrogates for __p__ and __h__, followed by searching for __p__ using the optimal __m__ and a surrogate __h__, and finally searching for __h__ using the optimal __p__ and __m__ found. This alternative technique only involves an n(p) + n(m) + n(h) search space, which is significantly smaller than simultaneously searching __p__, __m__, and __h__. We find in our experiments using the [AutoMLPipeline](https://github.com/IBM/AutoMLPipeline.jl) package that, in many cases, it is sufficient to just search for __m__ and __p__ to achieve competitive performance with other optimal algorithms that search all three components simultaneously.
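The cost difference between the two strategies can be illustrated with a toy sequential search (all names, sizes, and the scoring function below are illustrative assumptions, not the AutoMLPipeline.jl API):

```python
import itertools

# Hypothetical discrete search spaces (toy sizes).
preprocessors = ["pca", "scaler", "none"]          # n(p) = 3
models        = ["rf", "svm", "gb", "knn"]         # n(m) = 4
hyperparams   = [0.01, 0.1, 1.0]                   # n(h) = 3

def score(p, m, h):
    # Stand-in for cross-validated accuracy; a deterministic toy metric.
    return len(p) * 0.1 + len(m) * 0.2 + h

# Joint search: n(p) * n(m) * n(h) evaluations.
joint = max(itertools.product(preprocessors, models, hyperparams),
            key=lambda c: score(*c))
joint_evals = len(preprocessors) * len(models) * len(hyperparams)

# Sequential search with surrogates: n(m) + n(p) + n(h) evaluations.
surrogate_p, surrogate_h = preprocessors[0], hyperparams[0]
best_m = max(models, key=lambda m: score(surrogate_p, m, surrogate_h))
best_p = max(preprocessors, key=lambda p: score(p, best_m, surrogate_h))
best_h = max(hyperparams, key=lambda h: score(best_p, best_m, surrogate_h + h - surrogate_h))
seq_evals = len(models) + len(preprocessors) + len(hyperparams)

print(joint_evals, seq_evals)  # 36 vs 10 evaluations
```

On this toy metric the sequential search reaches the same optimum as the joint search at a fraction of the evaluations, mirroring the n(p) + n(m) + n(h) vs n(p) x n(m) x n(h) argument above.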
Relevant paper: https://arxiv.org/abs/2107.01253
Relevant Julia Packages used in the talk:
- [AutoMLPipeline.jl](https://github.com/IBM/AutoMLPipeline.jl)
- [AMLPipelineBase.jl](https://github.com/IBM/AMLPipelineBase.jl)
- [Lale.jl](https://github.com/IBM/Lale.jl)
- [Hyperopt.jl](https://github.com/baggepinnen/Hyperopt.jl)
false
https://pretalx.com/juliacon2021/talk/FHGWBQ/
https://pretalx.com/juliacon2021/talk/FHGWBQ/feedback/
Green
Physics-Informed ML Simulator for Wildfire Propagation
Lightning talk
2021-07-29T13:30:00+00:00
13:30
00:10
The aim of this work is to evaluate the feasibility of re-implementing some key parts of the widely used Weather Research and Forecasting WRF-SFIRE simulator, replacing its core numerical solvers for differential equations with state-of-the-art physics-informed machine learning techniques for ODEs and PDEs implemented in Julia, in order to transform it into a real-time simulator for wildfire spread prediction.
juliacon2021-9577-physics-informed-ml-simulator-for-wildfire-propagation
/media/juliacon2021/submissions/X9RATL/MLJC_JuliaCon_7OI5NDz.png
Francesco CalistoSimone AzeglioValerio PagliarinoLuca Bottero
en
The study we carried out aims to investigate the applicability of the recently developed field of Scientific Machine Learning to climate models, wildfire models in particular. The results we have outlined tell us that many improvements are needed in order to transform this into a validated product, but they also show the big potential of our approach. We need to add further refinements to the implementation in order to carry out a precise time comparison between our approach and standard numerical solvers, but the results obtained thus far show promising evidence.
The encouraging outcome inspires us to continue our work by improving the architectures and possibly employ them in different fields of research.
We hope that this line of research will be a small step towards a more effective cohesiveness between Machine Learning and Physical Models in Climate Sciences, and that it will be further explored by other researchers.
false
https://pretalx.com/juliacon2021/talk/X9RATL/
https://pretalx.com/juliacon2021/talk/X9RATL/feedback/
Green
Bias Audit and Mitigation in Julia
Lightning talk
2021-07-29T13:40:00+00:00
13:40
00:10
This talk introduces Fairness.jl, a toolkit to audit and mitigate bias in ML decision support tools. We shall introduce the problem of fairness in ML systems, its sources, significance and challenges. Then we will demonstrate Fairness.jl structure and workflow.
juliacon2021-9790-bias-audit-and-mitigation-in-julia
Ashrya Agrawal
en
Machine Learning is involved in a lot of crucial decision support tools. Uses of these tools range from granting parole and shortlisting job applications to accepting credit applications. There have been numerous political and policy developments over the past year that have pointed out the transparency issues and bias in these ML-based decision support tools. Thus it has become crucial for the ML community to think about fairness and bias. Eliminating bias isn't easy due to the existence of various trade-offs: performance vs. fairness, and fairness vs. fairness (various definitions of fairness might not be compatible with each other).
In this talk we shall discuss
- Challenges in mitigating bias
- Metrics and fairness algorithms offered by Fairness.jl and the workflow with the package
- How Julia's ecosystem of packages (MLJ, Distributed) helped us in performing a large systematic benchmarking of debiasing algorithms, which helped us understand their [generalization properties](https://arxiv.org/abs/2011.02407).
Repository: [Fairness.jl](https://github.com/ashryaagr/Fairness.jl)
Documentation is available [here](https://ashryaagr.github.io/Fairness.jl/dev/), and introductory blogpost is available [here](https://nextjournal.com/ashryaagr/fairness/)
false
https://pretalx.com/juliacon2021/talk/KNDFHC/
https://pretalx.com/juliacon2021/talk/KNDFHC/feedback/
Green
Data driven insight into fish behaviour for aquaculture
Lightning talk
2021-07-29T13:50:00+00:00
13:50
00:10
Aquaculture, or the farmed production of fish and shellfish, has grown rapidly, from supplying just 7% of fish for human consumption in 1974 to more than half in 2016. Sustaining this rapid expansion requires data-driven management of the production process and environmental impacts. This talk presents a machine-learning-based exploration of environmental and fish behaviour datasets collected at three salmon farms in Norway, Scotland, and Canada using AutoML tools in Julia.
juliacon2021-9645-data-driven-insight-into-fish-behaviour-for-aquaculture
Fearghal O'DonnchaPaulito Palmes
en
Data generated on modern aquaculture farms extend across a wide variety of forms. In situ sensors sample large numbers of environmental variables such as temperature, current velocity, dissolved oxygen (DO), chlorophyll and salinity. Remotely-sensed environmental data can sample much larger spatial domains and can be at the bay scale – from land-based sensors such as CODAR-type HF radar – or at the global scale from satellite-based monitoring systems. Informing on farm operations also requires sampling of animal variables such as size, clustering behaviour, and movement, and this is typically done using underwater technologies such as hydroacoustic sensing, video monitoring, and aerial drone imagery. Further, there are large datasets of pertinent variables that are generated by numerical models such as weather or ocean circulation products. These datasets constitute huge data volumes with distinct characteristics. Integrating and extracting information from these disparate data sources (in a scalable manner) is key to encapsulating the full dynamics of the farm environment and enabling effective management.
This paper presents an analysis of environmental and fish behaviour datasets collected at three salmon farms in Norway, Scotland, and Canada. Information on fish behaviour was collected using hydroacoustic sensors that sampled the vertical distribution of fish in a cage at high spatial and temporal resolution, while a network of environmental sensors characterised local site conditions. We present an analysis of the environmental and hydroacoustic datasets using the Julia open-source packages we developed: data were preprocessed and curated into time-aligned matrix form using TSML (https://github.com/IBM/TSML.jl), and machine learning pipelines were identified and implemented using Lale (https://github.com/IBM/Lale.jl).
Analysis enabled a quantitative investigation of the effects of environmental conditions on fish response together with information on drivers of anomalous fish response. Results demonstrated pronounced temporal variations in fish distribution as dictated by factors such as diurnal patterns, dynamics (currents and winds), and oxygen and temperature variations. Diurnal patterns driven by natural changes in light intensity were broadly similar across sites, although this trend was ameliorated at the Norwegian site, which is located inside the Arctic Circle and experiences 24 hours of daylight during summer months. Generally, fish occupied a deeper position in the cage during the day and were more tightly clustered; while at night, fish utilised more of the cage volume and were at a higher average position.
Analysis indicated that temperature was the primary environmental driver at two of the three sites. Temperature in the warmer summer months exhibited pronounced stratification before returning to a well-mixed temperature profile in September and October. During these stratified periods there was a tendency for fish to cluster to the warmer, upper portion of the cage and avoid colder temperatures. On the other hand, in reasonably homogeneous environments where temperature varies little with depth (such as at the Canada site during autumn), temperature did not influence the vertical distribution of salmon.
Variations in oxygen levels were most pronounced at the Canada site, which showed consistently lower values than the other sites. Feature importance analysis indicated that dissolved oxygen values were the most important contributor to fish behaviour, and in particular during periods of lower oxygen levels a pronounced response was noted. Analysis indicated that fish moved towards the surface when values dropped below 7 mg/L, which is in line with literature reporting reduced appetite and feeding in Atlantic salmon below this threshold.
Results presented in this paper indicate pronounced differences between sites and the need to consider these variations for farm management. One could readily use this analysis to quantify the differences between sites, and further to identify the fundamental drivers of these variations. This could be particularly valuable when comparing different farm systems, such as inshore and offshore, and the associated operational implications.
false
https://pretalx.com/juliacon2021/talk/Y9XWLM/
https://pretalx.com/juliacon2021/talk/Y9XWLM/feedback/
Green
State of Julia
Keynote
2021-07-29T14:30:00+00:00
14:30
00:45
Placeholder for State of Julia talk.
juliacon2021-9877-state-of-julia
Stefan Karpinski
en
Placeholder for State of Julia talk.
false
https://pretalx.com/juliacon2021/talk/UJUE8P/
https://pretalx.com/juliacon2021/talk/UJUE8P/feedback/
Green
Keynote (Xiaoye (Sherry) Li)
Keynote
2021-07-29T15:15:00+00:00
15:15
00:45
Interplay of linear algebra, machine learning, and HPC
juliacon2021-11701-keynote-xiaoye-sherry-li-
en
In recent years, we have seen a large body of research using hierarchical matrix algebra to construct low-complexity linear solvers and preconditioners. Not only can these fast solvers significantly accelerate large-scale PDE-based simulations, but they can also speed up many AI and machine learning algorithms, which are often matrix-computation-bound. On the other hand, statistical and machine learning methods can be used to help select the best solvers, or solver configurations, for specific problems and computer platforms. In both of these fields, high performance computing becomes an indispensable cross-cutting tool for achieving real-time solution of big data problems. In this talk, we will show our recent developments at the intersection of these areas.
BIO
Sherry Li is a Senior Scientist in the Computational Research Division, Lawrence Berkeley National Laboratory. She has worked on diverse problems in high performance scientific computation, including parallel computing, sparse matrix computations, high precision arithmetic, and combinatorial scientific computing. She is the lead developer of SuperLU, a widely used sparse direct solver, and has contributed to the development of several other mathematical libraries, including ARPREC, LAPACK, PDSLin, STRUMPACK, and XBLAS. She earned her Ph.D. in Computer Science from UC Berkeley and her B.S. in Computer Science from Tsinghua University in China. She has served on the editorial boards of the SIAM J. Scientific Comput. and ACM Trans. Math. Software, as well as on many program committees of scientific conferences. She is a Fellow of SIAM and a Senior Member of ACM.
false
https://pretalx.com/juliacon2021/talk/YS9RUZ/
https://pretalx.com/juliacon2021/talk/YS9RUZ/feedback/
Green
InvertibleNetworks.jl - Memory efficient deep learning in Julia
Talk
2021-07-29T16:30:00+00:00
16:30
00:30
We present InvertibleNetworks.jl, an open-source package for invertible neural networks and normalizing flows using memory-efficient backpropagation. InvertibleNetworks.jl uses manually implemented gradients to take advantage of the invertibility of building blocks, which allows for scaling to large problem sizes. We present the architecture and features of the library and demonstrate its application to a variety of problems ranging from loop unrolling to uncertainty quantification.
juliacon2021-9627-invertiblenetworks-jl-memory-efficient-deep-learning-in-julia
Philipp A. WitteMathias LouboutinAli SiahkoohiFelix J. HerrmannGabrio RizzutiBas Peters
en
Invertible neural networks (INNs) are designed around bijective building blocks that allow the evaluation of (deep) INNs in both directions, which means that inputs into the network (and all internal states) can be uniquely re-computed from the output. INNs were popularized in the context of normalizing flows as an alternative approach to generative adversarial networks (GANs) and variational auto-encoders (VAEs), but their property of invertibility is also appealing for discriminative models, as INNs allow memory-efficient backpropagation during training. As hidden states can be recomputed for INNs from the output, it is in principle not required to save the state during forward evaluation, leading to a significantly lower memory footprint than conventional neural networks. However, existing backpropagation libraries used in TensorFlow or PyTorch do not support the concept of invertibility and therefore require workarounds to benefit from it. For this reason, current frameworks for INNs such as FrEIA or MemCNN use layer-wise AD, in which backpropagation is performed by first re-computing the hidden state of the current layer and then using PyTorch's AD tool (Autograd) to compute the gradients for the respective layer. This approach is not computationally efficient, as it performs an additional forward pass during backpropagation.
With InvertibleNetworks.jl, we present an open-source Julia framework (MIT license) with manually implemented gradients, in which we take advantage of the invertibility of building blocks. For each invertible layer, we provide a backpropagation layer that (re-)computes the hidden state and weight updates all at once, thus not requiring an extra (layer-wise) forward evaluation. In addition to gradients, InvertibleNetworks.jl also provides Jacobians for each layer (i.e. forward differentiation), or more precisely, matrix-free implementations of Jacobian-vector products, as well as log-determinants for normalizing flows. While backpropagation and Jacobians are implemented manually, InvertibleNetworks.jl integrates seamlessly with ChainRules.jl, so users do not need to manually define backward passes for implemented networks. Additionally, InvertibleNetworks.jl is compatible with Flux.jl, so that users can create networks that consist of a mix of invertible and non-invertible Flux layers. In this talk, we present the architecture and features of InvertibleNetworks.jl, which includes implementations of common invertible layers from the literature, and show its application to a range of scenarios including loop-unrolled imaging, uncertainty quantification with normalizing flows and large-scale image segmentation.
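Why invertibility saves memory can be illustrated with a toy additive coupling layer, a generic normalizing-flow building block (a minimal numpy sketch, not InvertibleNetworks.jl's API): the layer reconstructs its input exactly from its output, so hidden states need not be stored for backpropagation.

```python
import numpy as np

# An additive coupling layer y1 = x1, y2 = x2 + t(x1) is bijective even
# though the sub-network t itself is not invertible, so x can be
# recomputed from y during backpropagation instead of being stored.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))

def t(x1):                      # arbitrary (non-invertible) sub-network
    return np.tanh(x1 @ W)

def forward(x1, x2):
    return x1, x2 + t(x1)

def inverse(y1, y2):            # exact reconstruction of the input
    return y1, y2 - t(y1)

x1, x2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(r1, x1) and np.allclose(r2, x2)
```

Because `inverse` recovers the hidden state exactly, a backward pass can rebuild each layer's input on the fly rather than caching it, which is the memory saving the abstract describes.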
false
https://pretalx.com/juliacon2021/talk/EVR3HZ/
https://pretalx.com/juliacon2021/talk/EVR3HZ/feedback/
Green
Composable Bayesian Modeling with Soss.jl
Lightning talk
2021-07-29T17:00:00+00:00
17:00
00:10
Soss is a probabilistic programming language (PPL) with first-class composable models. Through dynamic code generation, Soss can achieve speedup of several orders of magnitude in some models, for example using symbolic simplification of the log-density.
In this talk, we'll discuss the goals and design choices in Soss that distinguish it from other PPLs, followed by an overview of upcoming work.
juliacon2021-9727-composable-bayesian-modeling-with-soss-jl
Chad Scherrer
en
# First-Class, Composable Models
Soss models can be used and composed similarly to working with functions. This allows models to be built up from smaller, reusable components. In some cases, these can be developed and tested independently.
# Dynamic Code Generation
Soss uses runtime code generation for efficient inference primitives. These are specialized for model and input types. New primitives can easily use arbitrary data structures; the system is very flexible. Models are fully generative and determine joint distributions. In particular, models have `rand` and `logdensity` methods like any other measure.
# Model Transformations
Internally, models are represented as a directed graph with an AST (a Julia `Expr`) at each node. This makes it easy to transform one model into another based on its dependencies or AST structures. We can compute Markov blankets or reparameterizations, or change a model to output the latent conditional distributions used along the way.
# MeasureTheory.jl
Soss uses MeasureTheory.jl and allows falling back to Distributions.jl when needed, so it inherits the benefits of MeasureTheory. For example, fewer type constraints on constructors means Soss can evaluate a log-density symbolically. Coupled with codegen, this enables generation of highly optimized code.
false
https://pretalx.com/juliacon2021/talk/SLHLHX/
https://pretalx.com/juliacon2021/talk/SLHLHX/feedback/
Green
Chaotic time series predictions with ReservoirComputing.jl
Lightning talk
2021-07-29T17:10:00+00:00
17:10
00:10
Are you interested in how machine learning can be used to predict the behavior of "unpredictable" chaotic systems? This talk will be a deep dive into ReservoirComputing.jl (https://github.com/SciML/ReservoirComputing.jl), a package in the SciML ecosystem focused on a class of stabilized machine learning methods specialized for learning these difficult dynamical systems.
juliacon2021-9801-chaotic-time-series-predictions-with-reservoircomputing-jl
Francesco Martinuzzi
en
Chaoticity is by definition hard to predict or to reproduce using forecasting models. With the advent of Deep Learning (DL) a lot of effort has been dedicated to this problem, with the default approaches being Recurrent Neural Networks (RNNs) and Long Short Term Memory networks (LSTMs). More recently a new family of models has proved more effective in tackling chaotic systems, namely Reservoir Computing (RC) models. Given the relative infancy of the RC paradigm, it is not simple to find an implementation of such models, let alone a full library. With ReservoirComputing.jl we propose a Julia package that allows the user to quickly get started with a fast-growing range of RC models, from the standard Echo State Networks (ESNs) to the more exotic Reservoir Computing with Cellular Automata (RECA). In this talk a brief introduction to the concept of RC will be given, after which the capabilities of ReservoirComputing.jl will be illustrated using interactive examples.
Reservoir Computing models work by expanding the input data into a higher-dimensional space, called the reservoir. After this expansion the resulting states are collected and the model is trained against the desired output as a linear regression problem. This approach allows for fast training times and avoids several problems of neural network training, such as the vanishing gradient. Not only are the models in the RC family faster and safer to train but, as mentioned before, it has been shown that they are also better at the prediction and reproduction of chaotic systems. RC models are mainly composed of three sections: an input-to-reservoir coupler, the reservoir, and a reservoir-to-output coupler. The last section is the result of the training process, and depends on the training method one chooses to utilize. By using different constructions for these elements it is possible to obtain different results in the task at hand. To properly explore RC models, a quick way to access these layers is needed in their implementation.
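The pipeline just described, with a fixed random input coupling and reservoir, a nonlinear state update, and only a linear readout being trained, can be sketched with a toy echo state network (a minimal numpy sketch, not ReservoirComputing.jl's API; the sizes, spectral radius, and ridge parameter are illustrative assumptions):

```python
import numpy as np

# Toy ESN: only W_out is trained; W_in and W_res stay fixed and random.
rng = np.random.default_rng(1)
n_in, n_res, T = 1, 100, 500

W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W_res = rng.standard_normal((n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, T + 1))  # input series; target = next value
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t:t + 1] + W_res @ x)  # reservoir state update
    states[t] = x

# Linear readout fitted by ridge regression (the only trained part).
beta = 1e-6
W_out = np.linalg.solve(states.T @ states + beta * np.eye(n_res),
                        states.T @ u[1:T + 1])
pred = states @ W_out
rmse = np.sqrt(np.mean((pred - u[1:T + 1]) ** 2))
print(rmse)
```

Swapping the reservoir construction or the readout regression here corresponds to the layer-level customization the abstract describes.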
At a high level, the implementation of ReservoirComputing.jl gives the user the appropriate tools for a quick setup of the desired model, allowing an exploration of this family of models for the prediction of a given time series. If one chooses to delve more deeply into the customization of the model, the implementation of ReservoirComputing.jl follows a modular design that leaves users the freedom to fully customize the system they intend to train and use for predictions. This not only helps with possible recombinations of layers already implemented in the library, but also allows for expansions with the aid of external libraries or custom code. Leveraging the great package ecosystem of Julia, the user could decide to train the RC system with regression approaches not yet implemented, using an external package. At the same time it is possible to use a reservoir matrix construction not present in the library, either by custom construction or again by using other packages, like LightGraphs.jl.
After a brief introduction to the RC paradigm, the talk will illustrate the concepts defined above using concrete examples. We will show both the ease of use of the package and some of the included variations that can be explored. Finally, we will demonstrate possible customizations, both with custom-defined layers and by leveraging other libraries in the Julia ecosystem.
false
https://pretalx.com/juliacon2021/talk/NDQTSP/
https://pretalx.com/juliacon2021/talk/NDQTSP/feedback/
Green
Airborne Magnetic Navigation Enhanced with Neural Networks
Lightning talk
2021-07-29T17:20:00+00:00
17:20
00:10
Using the earth’s magnetic field for navigation of aircraft has shown promise as a viable alternative to GPS. An airborne magnetic navigation system collects magnetic field data and uses predetermined magnetic maps to estimate location. A challenge arises when the measured data contains magnetic signals from both the (desired) earth field and (undesired) aircraft field. This work explores several approaches for obtaining a clean magnetic signal that is usable for navigation.
juliacon2021-9861-airborne-magnetic-navigation-enhanced-with-neural-networks
Deleted User
en
false
https://pretalx.com/juliacon2021/talk/NYNJMJ/
https://pretalx.com/juliacon2021/talk/NYNJMJ/feedback/
Green
Generative Models with Latent Differential Equations in Julia
Lightning talk
2021-07-29T17:40:00+00:00
17:40
00:10
Scientific Machine Learning (SciML) is the branch of scientific computing that combines domain-aware and interpretable models with powerful machine learning techniques. The Julia language has been a key enabler of this burgeoning field, thanks to its unique SciML ecosystem. In this talk, we will present a contribution in this direction: an easy and flexible implementation of generative latent differential equations models.
juliacon2021-9914-generative-models-with-latent-differential-equations-in-julia
Germán Abrevaya
en
Scientific Machine Learning (SciML) is a very promising and exciting field that has been emerging in the past few years, with particular strength within the Julia community given the thriving SciML ecosystem. It consists of a growing set of diverse tools focused on combining traditional scientific modeling with novel machine learning (ML) techniques. The former is usually based on the long-established field of differential equations (DE) models, while the latter, though more recent, provides powerful general-purpose tools, and has demonstrated remarkable achievements in many applications.
Both approaches, of course, have their advantages and drawbacks: traditional modeling is far from trivial, since building an adequate model for a given problem usually requires educated guesses and approximations based on a deep understanding of the system being studied. Often in practice, it is only possible to build partial models and have access only to an incomplete set of the considered variables, sometimes even in a different unknown coordinate system. On the other hand, using orthodox ML models on poor-quality and scarce scientific data can be disadvantageous because of the lack of interpretability of these models, and the dependence on large amounts of training data to achieve good generalization.
SciML is a bridge between these two worlds, taking the best from each. A perfect example of such hybrid solutions is the case of Universal Differential Equations [1], where prior scientific insight is used to build some parts of a DE model, filling the unknown terms with neural networks (NN). They jointly optimize the DE parameters and NN weights using automatic differentiation and sensitivity analysis algorithms. This powerful approach was developed by members of the Julia community and is readily available to use in the DiffEqFlux.jl package. However, this method only works when one has direct measurements of the state variables of the DEs models, which is not always the case.
There exists a class of approaches that tackles this issue by constructing latent DE models, where other NN layers learn transformations from the input space to a latent DE space, usually with lower dimensionality. Some examples of this approach are LatentODEs [2,3] and GOKU-nets [4]. In a broad view, these models consist of a Variational Autoencoder structure with DEs inside. Their decoders contain the DEs, whose initial conditions (and in some cases, parameters) are sampled from distributions learned by the encoders. In the case of LatentODEs, NNs are used to approximate the latent ODE, while in the case of GOKU-nets, one can use prior knowledge to provide some ODE model for the latent dynamics.
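The structure just described, an encoder learning a distribution over initial conditions, a differential equation integrated in latent space, and a decoder back to observation space, can be sketched in a forward-only toy (random stand-in networks, a hand-picked rotational latent dynamic, and Euler integration; this is an assumption-laden illustration, not the GOKU-net or LatentODE implementation):

```python
import numpy as np

# Forward pass of a latent-ODE generative model: encode -> integrate -> decode.
rng = np.random.default_rng(2)
d_obs, d_lat, T = 8, 2, 50

enc_W = rng.standard_normal((2 * d_lat, d_obs))  # encoder -> mean, log-var of z0
dec_W = rng.standard_normal((d_obs, d_lat))      # decoder: latent state -> observation

def encode(x0):
    h = enc_W @ x0
    mu, logvar = h[:d_lat], h[d_lat:]
    # Reparameterization trick: sample z0 from the learned distribution.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(d_lat)

def latent_ode(z, dt=0.1, steps=T):
    A = np.array([[0.0, -1.0], [1.0, 0.0]])  # known latent dynamics: a rotation
    out = []
    for _ in range(steps):
        z = z + dt * (A @ z)                 # Euler step of dz/dt = A z
        out.append(z)
    return np.stack(out)

x0 = rng.standard_normal(d_obs)
z0 = encode(x0)          # sample an initial condition in latent space
traj = latent_ode(z0)    # integrate the latent differential equation
recon = traj @ dec_W.T   # decode the trajectory to observation space
```

In a trained model the encoder and decoder weights would be optimized jointly with (or around) the DE parameters; here they are random only to show the data flow.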
Currently, Flux.jl and the SciML ecosystem have all the functionalities to build these latent DE models, but this process can be time-consuming and possibly has a steep learning curve for people without a background in machine learning. Our goal is to provide a package that makes latent differential equation models readily accessible with high flexibility in architecture and problem definition.
In this presentation, we will introduce the basic background and concepts behind latent differential equation models, in particular, presenting the GOKU-net architecture. We will then show our implementation structure via a simple example: given videos of pendulums of different lengths, learn to reconstruct them by passing through their latent DE representation. We anticipate that our presentation shall be a user-friendly introduction to latent differential equations models for the Julia community.
Work done in collaboration with:
Jean-Christophe Gagnon-Audet¹*
Mahta Ramezanian¹
Vikram Voleti¹
Irina Rish¹
Pranav Mahajan²
Guillermo Cecchi³
Silvina Ponce Dawson⁴
Guillaume Dumas¹
*creator of the beautiful diagrams that you will see in the presentation
¹ Mila & Université de Montréal
² University of Pilani
³ IBM Research
⁴ CONICET & University of Buenos Aires
[1] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., ... & Edelman, A. (2020). Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.
[2] Chen, R. T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural ordinary differential equations. arXiv preprint arXiv:1806.07366.
[3] Rubanova, Y., Chen, R. T., & Duvenaud, D. (2019). Latent odes for irregularly-sampled time series. arXiv preprint arXiv:1907.03907.
[4] Linial, O., Eytan, D., & Shalit, U. (2020). Generative ODE Modeling with Known Unknowns. arXiv preprint arXiv:2003.10775.
false
https://pretalx.com/juliacon2021/talk/QEANKW/
https://pretalx.com/juliacon2021/talk/QEANKW/feedback/
Green
CompositionalNetworks.jl: a scaling glass-box neural network
Lightning talk
2021-07-29T17:50:00+00:00
17:50
00:10
Interpretable Compositional Networks (ICNs) are a variant of neural networks that give the user interpretable results, unlike regular artificial neural networks. An ICN is a glass box producing function compositions that scale with the size of the input, allowing the learning phase to run on relatively small spaces.
This presentation covers the different Julia packages and paradigms involved, a set of use cases, current limitations, future developments, and hopefully possible collaborations.
juliacon2021-9915-compositionalnetworks-jl-a-scaling-glass-box-neural-network
Khalil CHRIT
en
The *JuliaConstraints* GitHub organization was born last fall and aims to improve collaborative packages around the theme of Constraint Programming (CP) in Julia.
As in many fields of optimization, there is often a trade-off between efficiency and the simplicity of the model. **CompositionalNetworks.jl** was designed to smooth that trade-off. One could draw a parallel with not having to choose between the speed of C and the simplicity of Python (among others).
An Interpretable Compositional Network (ICN) takes any vector (of arbitrary size) as input and outputs a (non-negative) value that corresponds to a user-given metric. For instance, consider an error function network in Constraint Programming: one can choose a Hamming distance metric to evaluate the distance between a configuration of the variables’ values and the closest satisfying assignment. It provides the minimum number of variables to change to reach a solution.
A usual constraint showing the modeling power of Constraint Programming is the `AllDifferent` constraint which ensures that all the variables take different values. One can model a Sudoku problem with only such constraints.
An ICN, in its most basic form, is composed of four layers: transformation, arithmetic, aggregation, and composition. Weights between the layers are binary, meaning that neurons (operations) are either connected to or disconnected from each neuron in adjacent layers. These simple Boolean weights allow a straightforward composition of the operations that make up an ICN, and provide a result that is interpretable by a human. The user can then either verify and use the composition directly, or use it as inspiration for a handmade composition.
An ICN learning on a small space of 4 variables with domain [1, 2, 3, 4] can extract the following function:
```julia
icn_all_different(x::AbstractVector) = x |> count_eq_left |> count_positive |> identity
```
where `count_eq_left` counts, for each `i`, the number of elements of `x` equal to `xi` among the elements to its left, and `count_positive` counts the number of elements `xi > 0`. This output is equivalent to the best known handmade error function for the `AllDifferent` constraint. Furthermore, it is fully scalable to any vector length.
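As an illustration, this composition can be written out directly. The definitions below are inferred from the function names and descriptions; they are an assumption, not CompositionalNetworks.jl code:

```julia
# Assumed semantics: count_eq_left(x)[i] = number of elements equal to x[i]
# among x[1:i-1]; count_positive = number of strictly positive entries.
count_eq_left(x::AbstractVector) = [count(==(x[i]), view(x, 1:i-1)) for i in eachindex(x)]
count_positive(x::AbstractVector) = count(>(0), x)

icn_all_different(x::AbstractVector) = x |> count_eq_left |> count_positive |> identity

icn_all_different([1, 2, 3, 4])  # 0: already a valid AllDifferent assignment
icn_all_different([1, 1, 2, 2])  # 2: at least two variables must change
```

The returned value is exactly the minimum number of variables that must change to satisfy the constraint, matching the Hamming-style metric described above.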
In CompositionalNetworks.jl, we generate the code of the composed function directly. We can even compile it on the fly thanks to Julia's metaprogramming capabilities. Moreover, we can export the compositions to human-readable language or other programming languages.
Users can check and modify the function composed by an ICN to adapt or improve the output to their needs and requirements. Of course, the function can also be used directly.
During this talk, we will cover an out-of-the-box use of CompositionalNetworks.jl along with the different Julian and non-Julian key aspects of the development of this package. Among others, these include the use of other Julia packages as dependencies, such as the genetic algorithm in Evolutionary.jl used to fix the Boolean weights of an ICN, and the generation of compositions as either programming code or mathematical notation through Julia's efficient metaprogramming.
The versatility of the Julia version of ICNs, combined with metaprogramming, allows much broader practical use cases for any user of ICNs compared to the original C++ version, where modifying the code is a much harder task and metaprogramming is not possible (and usually not recommended for (pre)compiled languages).
While we provide a basic ICN use case as error function networks in Constraint Programming, it is straightforward for the user to provide additional operations, or even layers. The type of functions learned and composed is more versatile than our use case. We hope this package can be of use to, but not limited to, the Constraint Programming and Julia communities.
Although our current applications are mainly within some packages of *JuliaConstraints*, we hope to exchange with the community for other methods to compose functions, apply them to other problems, and improve our understanding of Julia for Interpretable Compositional Networks.
false
https://pretalx.com/juliacon2021/talk/BSTFEQ/
https://pretalx.com/juliacon2021/talk/BSTFEQ/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-29T18:00:00+00:00
18:00
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite
juliacon2021-11884-gathertown-social-break
en
false
https://pretalx.com/juliacon2021/talk/VGHALK/
https://pretalx.com/juliacon2021/talk/VGHALK/feedback/
Green
Modeling the Economy During the Pandemic
Talk
2021-07-29T19:00:00+00:00
19:00
00:30
Macroeconomic modeling during the COVID-19 pandemic, and the switch to a new monetary policy framework, has required rapid adjustments to the DSGE.jl package, made possible by Julia’s flexible typing and efficient matrix computations. We review the new features in DSGE.jl that allow users to model periods of large economic shifts and uncertainty. As an illustration, we also explain how the Federal Reserve Bank of New York solved and estimated a model with these features during the recession.
juliacon2021-9780-modeling-the-economy-during-the-pandemic
Shlok GoyalAlissa Johnson
en
In this talk, we will discuss how the Federal Reserve Bank of New York (FRBNY) uses Julia for forecasting. We will focus on how the FRBNY adjusted its dynamic stochastic general equilibrium (DSGE) model for the rapid changes in economic conditions brought about by the COVID-19 pandemic. These changes, which are available publicly through DSGE.jl, include the ability to solve and estimate an economic model with multiple regimes (where regimes differ in the equations that describe the economy). Regime-switching allows the FRBNY DSGE to better capture the economic effects of COVID-19 as well as the switch to the new interest rate policy of average inflation targeting (AIT) announced by the Federal Reserve (Fed) in August 2020. In modeling the impact of this policy change it is assumed that the introduction of the new reaction function is only partially incorporated by the agents in forming expectations. Specifically, these are formed using a convex combination of forecasts obtained under the old and the new policy reaction functions. We write the code generically, so other forms of exogenous regime-switching and imperfect credibility about policy rules are accommodated.
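The convex combination of forecasts can be sketched in a few lines (the function and variable names here are hypothetical, not the DSGE.jl API):

```julia
# Expectations under imperfect credibility: a convex combination of the
# forecasts implied by the old and the new policy reaction functions.
blend(old_fc, new_fc, ω) = ω .* new_fc .+ (1 .- ω) .* old_fc

# With 40% weight on the new AIT rule being fully credible:
blend([2.0, 2.1, 2.2], [1.8, 1.9, 2.0], 0.4)  # ≈ [1.92, 2.02, 2.12]
```

As credibility ω rises toward 1 over time, expectations converge to those implied by the new reaction function alone.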
In addition, we will demonstrate how this new model is estimated. New features in DSGE.jl, SMC.jl, and ModelConstructors.jl provide a user-friendly API for estimating parameters that change over time. We then show how to estimate this new model in an “online” manner that uses estimation results from an older model trained on data until before the pandemic. This method speeds up estimation times and can be applied even when the model has new COVID-specific parameters.
Throughout the talk, we will discuss how Julia’s functionalities and runtime performance enabled us to implement and use these changes quickly, which was crucial in forecasting during the rapidly-changing economic conditions over the last year.
These advances in DSGE.jl will be useful to any Julia users who are interested in flexibly modeling the economy, particularly in crisis situations as during the recession in 2020. It will also be useful to anyone who regularly conducts Bayesian estimation and is interested in re-using the results from an old estimation to efficiently estimate a new model or with new data.
Disclaimer: This talk reflects the experience of the authors and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.
false
https://pretalx.com/juliacon2021/talk/BEEHC8/
https://pretalx.com/juliacon2021/talk/BEEHC8/feedback/
Green
HighFrequencyCovariance: Estimating Covariance Matrices in Julia
Lightning talk
2021-07-29T19:30:00+00:00
19:30
00:10
High frequency data typically exhibit asynchronous trading and microstructure noise, which can bias the covariances estimated by standard estimators. While a number of specialised estimators have been developed, they have had limited availability in open source software. HighFrequencyCovariance is the first Julia package which implements specialised estimators for volatility, correlation and covariance using high frequency financial data.
juliacon2021-9464-highfrequencycovariance-estimating-covariance-matrices-in-julia
Stuart Baumann
en
This talk will briefly cover the challenges of using high frequency data for covariance matrix estimation. A number of algorithms will then be discussed, and we will demonstrate the use of the HighFrequencyCovariance package to estimate covariance matrices.
General content is in this paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3786912
And this package: https://github.com/s-baumann/HighFrequencyCovariance.jl
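To illustrate why specialised estimators are needed, a naive realized-covariance estimator can be sketched as follows (an illustration, not the package's API):

```julia
# Naive realized covariance from a matrix P of synchronized prices
# (rows = observation times, columns = assets). Real high frequency data is
# asynchronous and contains microstructure noise, which biases this estimator;
# hence the specialised methods in the package.
realized_cov(P::AbstractMatrix) = (r = diff(log.(P); dims = 1); r' * r)

P = [100.0 50.0; 101.0 50.5; 100.5 50.2]  # toy price paths
realized_cov(P)
```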
false
https://pretalx.com/juliacon2021/talk/NXJYHT/
https://pretalx.com/juliacon2021/talk/NXJYHT/feedback/
Green
Using Julia to study economic inequality and taxation
Lightning talk
2021-07-29T19:40:00+00:00
19:40
00:10
In this talk, I illustrate a Julia workflow to study economic inequality and taxation in the United States. My workflow centers around Taxsim.jl, which makes it possible to answer a large number of research questions related to the US tax system. First, I import a widely used survey dataset to show how high and low before-tax incomes have evolved since 1960. Next, using Taxsim.jl, I impute taxes paid to compare the evolution of after-tax incomes and to measure the redistributive effect of the tax system.
juliacon2021-9619-using-julia-to-study-economic-inequality-and-taxation
Johannes Fleck
en
Many consider economic inequality the biggest social challenge of the 21st century. Indeed, the distribution of disposable incomes, i.e. earned income (wages and salaries) minus income taxes, has become more unequal in recent years and a larger share is captured by the top 1%. Yet, it is a challenge to measure if this development is driven by changes in the distribution of earned income itself or if governmental efforts to redistribute from the rich to the poor have weakened. Accordingly, there are conflicting views among scientists on how to address increasing economic inequality.
In this talk, I show how to use Julia to study the evolution of earned income and disposable income in the United States (US). While most researchers in the social sciences use software such as R and STATA for this purpose, my talk demonstrates that Julia is a superb alternative. To illustrate a concrete application, I use a new Julia package, Taxsim.jl, to investigate whether the US tax system has become more or less redistributive during the last decades; income taxes paid are not reported in survey datasets, and Taxsim.jl makes it possible to impute them efficiently by uploading data from the Julia workspace to the tax calculator of the National Bureau of Economic Research (NBER). The calculator then computes a number of tax variables (income taxes, tax credits, etc.) and Taxsim.jl downloads them back into Julia.
My talk has three elements. First, I give a brief introduction to the NBER tax calculator and describe its input and return variables. Second, I use CSV.jl and DataFrames.jl to import and inspect information on individual incomes contained in publicly available and easily accessible survey datasets (ACS, CPS, Census). Finally, I apply Taxsim.jl to impute income taxes paid via a simple function call and I compare the evolution of before- and after-tax household incomes in the United States since 1960 to measure the redistributive effects of the US tax system. Thus, my talk uses Julia to answer a question which is at the center of public debates on inequality.
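The import-and-inspect step can be sketched with DataFrames.jl (the data frame below is a toy stand-in for a survey extract loaded via CSV.jl; the Taxsim.jl imputation call is omitted):

```julia
using DataFrames

# Toy stand-in for a survey extract, e.g. CSV.read("acs_extract.csv", DataFrame).
df = DataFrame(year   = [1960, 1960, 1960, 2019, 2019, 2019],
               income = [5_000, 20_000, 75_000, 15_000, 60_000, 425_000])

# Share of total income captured by the top earner in each year's toy sample:
top_share(inc) = maximum(inc) / sum(inc)
shares = combine(groupby(df, :year), :income => top_share => :top_share)
```

The same groupby/combine pattern extends to real survey columns and to the tax variables returned by the imputation step.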
The Julia workflow I present can be adjusted to suit a large range of applications. Moreover, Taxsim.jl makes it possible to investigate many aspects of the US tax system, such as the role of tax credits, deductions, state tax policies, etc. Hence, beyond the general Julia user community, the particular target group of this talk is researchers in the quantitative social sciences (economics, finance, sociology, public policy, etc.).
false
https://pretalx.com/juliacon2021/talk/THTPGL/
https://pretalx.com/juliacon2021/talk/THTPGL/feedback/
Green
Diversity and Inclusion in the Julia community
Lightning talk
2021-07-29T19:50:00+00:00
19:50
00:10
It takes the entire community to promote diversity and inclusion. This talk will focus on the current plans underway to promote diversity and inclusion in the Julia Community as well as give an updated look at the state of diversity and inclusion in our community.
juliacon2021-9777-diversity-and-inclusion-in-the-julia-community
Logan Kilpatrick
en
This talk is designed as a primer for the upcoming Diversity and Inclusion BoF (Birds of a Feather, a session where community members come together to talk about a specific topic) and will present all of the diversity data we have access to, in order to paint a full picture of the current state of the community with respect to D&I.
false
https://pretalx.com/juliacon2021/talk/VM7PSF/
https://pretalx.com/juliacon2021/talk/VM7PSF/feedback/
Green
Improving Gender Diversity in the Julia Community
Lightning talk
2021-07-29T20:00:00+00:00
20:00
00:10
The Julia community aims to be welcoming, diverse, and inclusive towards people from all backgrounds. However, the 2020 Julia User & Developer Survey found that only 3% of respondents were women, and reported no respondents who were non-binary or another gender. We, Julia Gender Inclusive, believe this needs to change. In this talk, we will share our ideas and initiatives for improving gender diversity among Julia users and developers, including outreach, community building, and mutual support.
juliacon2021-9766-improving-gender-diversity-in-the-julia-community
Kim Louisa AuthXuan (Tan Zhi Xuan)
en
More information about Julia Gender Inclusive can be found in our announcement post here: https://discourse.julialang.org/t/announcing-julia-gender-inclusive/63702
Interested community members can sign up here to be added to our Slack workspace, and to join our regular coffee meet-ups: https://forms.gle/tGhCckZqhzvAHoQFA
false
https://pretalx.com/juliacon2021/talk/CLRKFC/
https://pretalx.com/juliacon2021/talk/CLRKFC/feedback/
Green
Publish your research code: The Journal of Open Source Software
Lightning talk
2021-07-29T20:20:00+00:00
20:20
00:10
JOSS, the Journal of Open Source Software (https://joss.theoj.org/) is a venue for publishing research software packages. This provides a mechanism for the large time investment required to develop open-source research software to be included within traditional systems for academic credit.
A JOSS paper is meant to be a short description of the contribution provided by the research software, with the main content being in the (archived) software repository itself.
juliacon2021-11723-publish-your-research-code-the-journal-of-open-source-software
David P. Sanders
en
The peer review process, run by volunteers via GitHub issues, and automated using a bot as much as possible, is designed mainly to review and improve the software itself, including documentation and tests.
false
https://pretalx.com/juliacon2021/talk/BRG8Z3/
https://pretalx.com/juliacon2021/talk/BRG8Z3/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-29T20:30:00+00:00
20:30
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite
juliacon2021-11885-gathertown-social-break
en
false
https://pretalx.com/juliacon2021/talk/Q78RW3/
https://pretalx.com/juliacon2021/talk/Q78RW3/feedback/
Red
Scalable Power System Modeling and Analysis
Talk
2021-07-29T12:30:00+00:00
12:30
00:30
The [Scalable Integrated Infrastructure Planning (SIIP) initiative at NREL](https://www.nrel.gov/analysis/siip.html) has developed a set of high-performance power system simulation capabilities with PowerSystems.jl and PowerSimulations.jl. This talk will demonstrate these capabilities with interactive examples using large realistic datasets, and provide theoretical background for software design choices.
juliacon2021-9758-scalable-power-system-modeling-and-analaysis
/media/juliacon2021/submissions/88EDGD/SIIP_energy_J9TJT4l.png
Clayton BarrowsDheepak Krishnamurthy
en
This talk will provide practical modeling examples and theoretical justification for design choices made in the [Scalable Integrated Infrastructure Planning (SIIP) Initiative](https://www.nrel.gov/analysis/siip.html) at the National Renewable Energy Lab (NREL). We will demonstrate the suite of power systems focused packages – [SIIP::Power](https://github.com/NREL-SIIP) to perform large-scale power systems modeling and analysis activities. In particular, this talk will highlight:
- [InfrastructureSystems.jl](https://github.com/nrel-siip/infrastructuresystems.jl): for enabling large-scale infrastructure system data set management and access
- [PowerSystems.jl](https://github.com/nrel-siip/powersystems.jl): for specifying quasi-static and dynamic power systems data
- [PowerSimulations.jl](https://github.com/nrel-siip/powersimulations.jl): for enabling optimization based power systems modeling, including production cost modeling and optimal power flow using [PowerModels.jl](https://github.com/lanl-ansi/powermodels.jl)
- [PowerGraphics.jl](https://github.com/nrel-siip/powergraphics.jl): for visualizations of results generated by PowerSystems.jl and PowerSimulations.jl
Examples will focus on standard modeling practice and highlight opportunities to customize and extend capabilities to meet individual needs.
false
https://pretalx.com/juliacon2021/talk/88EDGD/
https://pretalx.com/juliacon2021/talk/88EDGD/feedback/
Red
Unbalanced Power Flow Optimization with PowerModelsDistribution
Lightning talk
2021-07-29T13:00:00+00:00
13:00
00:10
With the recent advancements in power distribution, e.g., higher penetration of distributed energy resources (DERs), there is a significant demand for optimization tools to solve a variety of complex operational and planning problems, such as optimal dispatch, load shedding, and on-load tap changing. We have developed an optimization-focused approach to phase unbalanced power distribution modeling called PowerModelsDistribution, the design and usage of which we will introduce in this talk.
juliacon2021-9646-unbalanced-power-flow-optimization-with-powermodelsdistribution
/media/juliacon2021/submissions/F8BBVZ/Power_Models_Distribution_Jjb2tkT.png
David M Fobes
en
PowerModelsDistribution (PMD) is an optimization-focused toolkit for power distribution network modeling, designed using JuMP. It decouples problems, power flow formulations, and optimization solvers, allowing easy exploration and application of a variety of power flow problem types and mathematical formulations related to multi-phase quasi-steady-state optimization. PMD includes several nonlinear AC formulations, linear approximations, and relaxations, all based on state-of-the-art peer-reviewed research. It has native support for both single-period and multi-period (time series) problems, the latter being especially relevant due to the growing number of energy storage components in power distribution networks. PMD also includes a native Julia OpenDSS data format parser, allowing us to validate AC power flow results against OpenDSS on a number of IEEE distribution test feeders, and providing a simple avenue to support existing data models for a broad collection of distribution system components such as photovoltaic systems and energy storage.
false
https://pretalx.com/juliacon2021/talk/F8BBVZ/
https://pretalx.com/juliacon2021/talk/F8BBVZ/feedback/
Red
PowerModelsDistributionStateEstimation.jl
Lightning talk
2021-07-29T13:10:00+00:00
13:10
00:10
This talk is about a registered Julia package, PowerModelsDistributionStateEstimation.jl, that allows easy benchmarking and design of state estimation models for power distribution systems. The goal is to accelerate the use of this technique in research and real-life settings. State estimation is formulated as a mathematical optimization problem using JuMP.jl and can be solved with off-the-shelf solvers. Different modeling options are featured, and the package is designed to be easily extensible.
juliacon2021-9558-powermodelsdistributionstateestimation-jl
Marta
en
Distribution networks are the final stage of the delivery of power from generation to consumers, and they have traditionally been managed with a fit-and-forget approach. This has been appropriate until recent years, given the predictable behavior and underutilization of these networks. However, several developments are changing the state of affairs, e.g., electric vehicles, PV panels, etc. These devices increase utilization, unpredictability, and the risk of voltage and congestion problems, but also provide a potential source of flexibility and control.
To understand the impact of these technologies and, potentially, to perform control actions, it is necessary to monitor distribution systems. State estimation (SE) is a monitoring tool that determines the most-likely state of the system given a set of measurements.
In this talk/poster, a (registered) Julia package is presented, PowerModelsDistributionStateEstimation.jl, which has been developed as a SE design facilitator. The main goal is to provide a flexible tool that allows researchers or other interested users to easily and rapidly design and benchmark SE techniques. This, in turn, has the potential to accelerate the real-life deployment of monitoring and control routines, which can play an important role in the management and operation of future power grids.
The package is an extension of PowerModelsDistribution.jl (https://github.com/lanl-ansi/PowerModelsDistribution.jl), and it allows users to formulate SE as a constrained optimization problem. Usually, SE is not addressed in a strict mathematical optimization sense, but the latter is a more general way to describe the problem, which encapsulates the different methods available in the literature, making benchmarking easier.
The biggest challenge in the comparison of SE methods is that the solving algorithm is an inherent part of the SE model. This means that changes in the model-defining equations often require changes in the subsequent solving steps, making it very labor-intensive and time-consuming to test even a limited number of modeling options. With this package we break this paradigm by splitting the modeling and solver layers, which is made possible by JuMP.jl. This allows users to focus on the design of a suitable SE model, letting an off-the-shelf solver, e.g., Ipopt, take care of the solving part.
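As a toy illustration of this split (not the package's API), a weighted least squares estimation can be stated as a JuMP model and handed to Ipopt unchanged:

```julia
using JuMP, Ipopt

# Hypothetical linear measurement model z ≈ H*x with per-channel noise σ.
z = [1.02, 0.98]
σ = [0.01, 0.02]
H = [1.0 0.0; 0.0 1.0]

model = Model(Ipopt.Optimizer)  # the solver is a pluggable choice
set_silent(model)
@variable(model, x[1:2])
@objective(model, Min,
    sum(((z[i] - sum(H[i, j] * x[j] for j in 1:2)) / σ[i])^2 for i in 1:2))
optimize!(model)
value.(x)  # most-likely state given the measurements
```

Changing the measurement equations only alters the `@objective` (or constraints); the solving step stays the same.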
A potential drawback is that solve times are longer than with a customized algorithm. However, numerical experiments with available solvers show solve times that seem acceptable for experimental and real-life use. If a better performance is required, the package can still be used to quickly find the optimal SE design, which the user can augment with a customized solver at a later stage.
Several SE modeling options (e.g., measurement types, power flow equations), are available in the package, which is easy to extend to include more.
The talk will give a short overview on the concept of SE, to then introduce the package in detail and provide some numerical examples to demonstrate its functionalities.
false
https://pretalx.com/juliacon2021/talk/UDTRW3/
https://pretalx.com/juliacon2021/talk/UDTRW3/feedback/
Red
LatticeQCD.jl: Simulation of quantum gauge fields
Lightning talk
2021-07-29T13:20:00+00:00
13:20
00:10
We present our code (LatticeQCD.jl) for quantum chromodynamics (QCD), which describes the microscopic world inside nucleons.
QCD calculations have traditionally been implemented in Fortran and C++ on supercomputers or GPU clusters because they require huge numerical resources, e.g., Monte Carlo sampling with inversions of 10^16 × 10^16 matrices, and they have succeeded in producing crucial numbers used in experiments. We implemented a code for QCD in Julia which achieves speed comparable to a Fortran code.
juliacon2021-9581-latticeqcd-jl-simulation-of-quantum-gauge-fields
/media/juliacon2021/submissions/DTJYAC/logo_q21EgHE.png
Akio TomiyaYuki Nagai
en
false
https://pretalx.com/juliacon2021/talk/DTJYAC/
https://pretalx.com/juliacon2021/talk/DTJYAC/feedback/
Red
JuliaSPICE: A Composable ML Accelerated Analog Circuit Simulator
Talk
2021-07-29T13:30:00+00:00
13:30
00:30
Analog circuit simulation is widely used to design and verify analog circuits before they are manufactured. We present a novel, composable SPICE simulator written entirely in Julia, called JuliaSPICE.
juliacon2021-9795-juliaspice-a-composable-ml-accelerated-analog-circuit-simulator
Glen HertzPepijn de Vos
en
Modern analog design and verification requires semi-custom and complex flows that are difficult to construct with commercial tools, since these are built around rigid command-line batch flows. In comparison, JuliaSPICE is built from the ground up for flexibility, with a full Julia API so users can automate complex tasks without slow disk IO and parsers. User-defined measurements or checks, written in Julia, can be executed inline with the simulator, allowing the designer to dynamically alter the simulation and make on-the-fly measurements. JuliaSPICE is also advancing the latest ML techniques with surrogate models and is funded by a DARPA award with the goal of delivering a 1000x speed-up over traditional approaches. The composability of a Julia solution will be demonstrated from within a Pluto notebook, showing interactive analyses not available in other simulators. The user will leave with a much better understanding of how Julia can be leveraged to accelerate their workflows, whether performing analog simulations or other tasks.
false
https://pretalx.com/juliacon2021/talk/QUCAK3/
https://pretalx.com/juliacon2021/talk/QUCAK3/feedback/
Red
Designing Spacecraft Trajectories with Julia
Lightning talk
2021-07-29T16:30:00+00:00
16:30
00:10
This talk briefly presents OrbitalTrajectories.jl, a library providing tools for the analysis of orbital trajectories for space mission design. Making use of the Julia scientific modeling ecosystem to easily define and extend high-fidelity simulations of spacecraft motion, we demonstrate how key techniques including meta-programming, symbolic computation, non-linear optimisation, and automatic differentiation work towards generating, analysing, and stabilising orbital trajectories.
juliacon2021-9816-designing-spacecraft-trajectories-with-julia
/media/juliacon2021/submissions/SB7HWT/OrbitalTrajectories_ROmQOV5.png
Dan Padilha
en
false
https://pretalx.com/juliacon2021/talk/SB7HWT/
https://pretalx.com/juliacon2021/talk/SB7HWT/feedback/
Red
AtomicSets.jl
Lightning talk
2021-07-29T16:40:00+00:00
16:40
00:10
We present `AtomicSets.jl`, a Julian framework for structured convex optimization. Algorithms for structured optimization build up a solution from a set of prescribed _atoms_ that represent simple structures. The atoms that participate in the final solution often represent key explanatory components of a model. We use Julia's dispatch system to implement a calculus of convex sets and their functional representations that compiles to efficient machine code.
juliacon2021-9931-atomicsets-jl
Mia Kramer
en
_AtomicSets.jl was developed by Michael Friedlander, Zhenan Fan and Mia Kramer at the University of British Columbia._
We say a set is convex if, for every pair of points in the set, the line segment between those points is also contained in the set. This is a generalization of the notion of convexity most of us learned in grade school for polygons: the set doesn't have any "caves" or "dents". We similarly call a function convex if its _epigraph_—the set of points "above" the function if it were plotted—is convex. Some common examples of useful convex sets are the _one ball_—the set of all points with 1-norm less than or equal to one—and the _nuclear ball_—the convex hull of all matrices that can be written as an outer product of unit-norm vectors. We care about convexity because it guarantees some useful properties for optimization. For example: any local minimum of a convex function is also a global minimum. Combinations of these properties can give us efficient algorithms with strong convergence guarantees.
Suppose we are trying to solve a problem where our answer is a vector, and we expect it to be sparse. In general, computing with exact sparsity is difficult, but let's take a step back. To say that a vector is sparse is to say it should be constructed from a small number of coordinate vectors, each scaled by a nonnegative amount. Let's take the coordinate vectors (and their opposite sign counterparts) to be our _atoms_, and take their convex hull to be our domain. We now have a domain we know to be convex, which also induces the structure we want to see in our solution. The process is similar for low-rank matrices: we assume that they will be constructed from a small number of outer products, so we take the set of unit rank outer products to be our atoms.
To generalize over these _atomic sets_, we need more than their common properties; we need a set of common operations. The most basic of these is probably the `gauge` function. Given a set _A_ and a point _x_, the gauge function answers the question "what is the smallest scale _λ_ such that _x_ is in _λ A_?" In other words, how much do we have to expand or contract _A_ so that _x_ is only just contained in it? If we pick our set to be the _two ball_, the set of points with Euclidean norm ≤ 1, our gauge function is just the Euclidean norm of the point.
Other common operations we have on atomic sets are the `expose` and `support` functions. The `expose` function gives us the atom which is most aligned with a given vector, where aligned means maximizing the dot product. Another way to imagine the operation is to take your set and your vector, and then sweep a hyperplane from the origin in the direction of the vector. The last point that the hyperplane touches on its path is the exposed point. The `support` function gives the value of the inner product which produced the exposed point.
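For the one-norm ball, whose atoms are the signed coordinate vectors ±eᵢ, all three operations have closed forms. The sketch below is illustrative and not the AtomicSets.jl API:

```julia
using LinearAlgebra

struct OneBall end  # the set of points with 1-norm ≤ 1, atoms ±eᵢ

gauge(::OneBall, x) = norm(x, 1)      # smallest λ with x ∈ λ·A
support(::OneBall, z) = norm(z, Inf)  # max over atoms a of ⟨a, z⟩

function expose(::OneBall, z)         # the atom most aligned with z
    i = argmax(abs.(z))
    a = zeros(length(z))
    a[i] = sign(z[i])
    return a
end
```

For example, `expose(OneBall(), [3.0, -4.0])` returns `[0.0, -1.0]`, the signed coordinate vector maximizing the inner product, and `support` returns the value of that inner product, 4.0.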
Additionally, we define a calculus of atomic sets. We can construct atomic sets that are, for example, the Minkowski sum of other sets, a scaling of a set, or a linear map applied on a set. The `expose` operation gives us in some sense an element of the subderivative of the set, and so the usual chain rule applies. By defining this operation recursively for these compound sets, they too can be generic.
Using Julia's type system, we build representations for the sets, their atoms, and _faces_ (collections of atoms). By writing functions using these common operations on atomic sets, Julia's dispatch system allows compiling said function for any choice of atomic set (and hence notion of sparsity). Using this construction, we present a dual method for solving min_x ½ ‖ Mx - b ‖² s.t. gauge_A(x) ≤ τ, which is generic over the choice of set _A_.
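As one concrete algorithm in this generic style, a primal Frank-Wolfe iteration for the same problem uses `expose` as its only set-dependent step; the sketch below specialises it to the one-norm ball and is an illustration, not the dual method the package implements:

```julia
using LinearAlgebra

# min_x ½‖Mx − b‖² s.t. ‖x‖₁ ≤ τ. The linear oracle picks the vertex of the
# scaled one-norm ball (atoms ±eᵢ) most aligned with the negative gradient.
function fw_lasso(M, b, τ; iters = 500)
    x = zeros(size(M, 2))
    for k in 1:iters
        g = M' * (M * x - b)    # gradient of the objective
        i = argmax(abs.(g))
        s = zeros(length(x))
        s[i] = -τ * sign(g[i])  # τ · expose(−g) for the one-norm ball
        γ = 2 / (k + 2)         # standard Frank-Wolfe step size
        x = (1 - γ) * x + γ * s
    end
    return x
end

x = fw_lasso([1.0 0.0; 0.0 1.0], [3.0, 0.1], 1.0)
sum(abs, x) <= 1.0 + 1e-9  # feasible: gauge(x) ≤ τ at every iterate
```

Because each iterate is a convex combination of atoms scaled by τ, feasibility holds by construction, and the iterates are sparse combinations of the exposed atoms.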
false
https://pretalx.com/juliacon2021/talk/P7KQMT/
https://pretalx.com/juliacon2021/talk/P7KQMT/feedback/
Red
Julia Admittance: A Toolbox for Admittance Extraction
Lightning talk
2021-07-29T16:50:00+00:00
16:50
00:10
For power grid operators, wind farms and solar farms are black boxes, since details of inverter technologies, the main component of renewables, are proprietary information. How, then, can a grid operator assess a power grid's stability? We aim to design a toolbox to address this challenge. Julia Admittance will process experimental data and produce admittance models for wind, solar, etc. With admittance models available, eigenvalues of the system can be computed to assess system stability.
juliacon2021-9416-julia-admittance-a-toolbox-for-admittance-extraction
Lingling Fan
en
false
https://pretalx.com/juliacon2021/talk/9MGDGG/
https://pretalx.com/juliacon2021/talk/9MGDGG/feedback/
Red
JuliaFolds: Structured parallelism for Julia
Talk
2021-07-29T17:00:00+00:00
17:00
00:30
The JuliaFolds ecosystem supports structured parallelism for Julia with packages such as Transducers.jl, Folds.jl, and FLoops.jl. It aims to provide parallelism that is easy to use, composable, and extensible. Furthermore, it provides a unified interface to different execution mechanisms such as multi-threading, GPU, and distributed parallelism. In this talk, I discuss the composable design principles of the JuliaFolds packages.
juliacon2021-9820-juliafolds-structured-parallelism-for-julia
Takafumi Arakaki
en
false
https://pretalx.com/juliacon2021/talk/3JQKRW/
https://pretalx.com/juliacon2021/talk/3JQKRW/feedback/
Red
Teaching parallelism to the Julia compiler
Talk
2021-07-29T17:30:00+00:00
17:30
00:30
The way the task-parallel API is currently implemented in Julia introduces a couple of obstacles to supporting high-performance parallel programs. In particular, the compiler cannot analyze and optimize child tasks in the context of the surrounding code. In this talk, I discuss our work on using Tapir (Schardl et al., 2019) to add parallelism to Julia that can be optimized by the compiler.
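Not from the talk itself, but as a minimal illustration of what the task-parallel API looks like in user code, here is a hedged sketch using `Threads.@spawn`, the construct whose child tasks the compiler currently treats as opaque closures:

```julia
using Base.Threads: @spawn

# The task-parallel style the talk targets: a child task spawned inside
# a function body. Today the compiler sees the spawned closure as
# opaque; the Tapir work aims to let it optimize across this boundary.
function psum(xs)
    mid = length(xs) ÷ 2
    left = @spawn sum(@view xs[1:mid])   # child task
    right = sum(@view xs[mid+1:end])     # runs concurrently with `left`
    return fetch(left) + right
end

psum(1:1_000)                            # → 500500
```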
juliacon2021-9825-teaching-parallelism-to-the-julia-compiler
Takafumi Arakaki
en
This is joint work with Valentin Churavy and TB Schardl.
false
https://pretalx.com/juliacon2021/talk/MENJSR/
https://pretalx.com/juliacon2021/talk/MENJSR/feedback/
Red
Javis.jl - Julia Animations and Visualizations
Talk
2021-07-29T19:00:00+00:00
19:00
00:30
Javis.jl is a general purpose animation library which builds on top of the Luxor.jl graphics library.
It fills a gap in the Julia ecosystem by providing functionality to create object-based animations that communicate complex ideas through simple means. Furthermore, Javis provides the flexibility for users to extend its visualizations to a variety of applications. Users are already expressing complicated ideas through winsome domain-specific visuals such as planetary motion or brain mapping.
juliacon2021-9734-javis-jl-julia-animations-and-visualizations
Jacob Zelko, Ole Kröger
en
Javis.jl is a general-purpose library for creating Julia-based animations and visualizations across domains. At its core, Javis builds on the high-level graphics library Luxor.jl and on FFMPEG.jl for animation creation. Individuals who have difficulty effectively communicating ideas or findings statically can easily use and extend Javis to construct informative, performant, and winsome animated graphics.
In this talk, the audience will learn the key concepts of Javis by taking a look at the abstraction system we use. Additionally, they will see basic examples of how objects can interact to create powerful animations. A main point will be the interoperability with existing Julia packages like Luxor.jl and Animations.jl, which we use to make the experience as easy as possible for users who already know how to use Luxor for static art. Finally, the audience will come away with how Javis is already being used, what the future of Javis is, and how to get involved with the project.
Javis is inspired by the Python-based animation engine, manim, created by Grant Sanderson (aka 3blue1brown) to visualize math concepts. Although inspired by manim, Javis has the greater goal of providing a general purpose animated graphics library. Historically, the Julia ecosystem has lacked a similar dedicated toolchain for the easy creation of complex animations. Javis is now filling that gap in the ecosystem - and beyond only mathematics.
After reviewing the Julia ecosystem, the packages most similar to Javis are Reel.jl, Makie.jl, and Animations.jl. Javis differentiates itself from these packages by enabling its users to create visuals that may not be generally, or easily, supported by standard plotting packages. Although Javis uses a "Frame" concept similar to Reel.jl and Makie.jl, it is not limited to plots and can create much more complicated visualizations than Animations.jl. Moreover, Javis has extensive documentation and tutorials to illustrate how to get started easily, something that is at times lacking in these similar packages. Finally, Javis has an active 40+ developer community where beginners can ask questions and participate in the open development of Javis. Given that Javis users are not limited by plotting conventions and have guidance in the form of high-quality tutorials and extensive documentation, the novelty and accessibility of Javis in the Julia space are high.
Already, Javis has seen steady adoption by users. For example, Javis has been used in secondary school settings to teach on topics such as physics and earth sciences. Increasingly, Javis is also being used for advanced visualizations. Further applications of Javis are in domain specific applications such as visualizing fourier series for signal processing use cases and mapping activity across the brain to view how the brain behaves under stress.
Due to its extensible nature, Javis is poised to leverage the existing Julia ecosystem for further animation capabilities that users can take advantage of. This integrative tooling comes as a result of a very open definition of what Javis considers an animation. It enables a user to hook into packages such as Animations.jl or Pluto.jl to provide additional capabilities: fine-grained control of animations in the case of Animations.jl, or reproducible development environments in the case of Pluto.jl.
false
https://pretalx.com/juliacon2021/talk/DMTYDS/
https://pretalx.com/juliacon2021/talk/DMTYDS/feedback/
Red
Julia and deploying complex graphical applications for laypeople
Lightning talk
2021-07-29T19:30:00+00:00
19:30
00:10
Applications written in Julia targeted at a nonprofessional audience are still uncommon, even though libraries for designing such applications have existed for years. For software to be easy to use by laypeople, a simple installation process and an intuitive GUI are essential. We have been developing and deploying such an application for over three years. This talk will focus on our experiences during that time, how the situation has improved since Julia 0.6, and what it looks like today.
juliacon2021-9582-julia-and-deploying-complex-graphical-applications-for-laypeople
Vexatos, Cruor
en
Julia shows promise as a general-purpose language, yet software targeted at non-professional users still appears to be scarce. We are the developers of one such tool: [Ahorn](https://github.com/CelestialCartographers/Ahorn) is a graphical level editor for the video game Celeste that allows users to create their own levels for the game. Ahorn is written entirely in Julia. As the tool itself is likely to be of little interest to the Julia community, this talk will not focus on the tool, but on our experiences developing and deploying it.
Owing to the nature of the tool, its audience consists in large part of people who want to dip their toes into game and level design for the first time. Many of these people are young, some as young as 13 years old. This talk will be about what it is like to develop a graphical Julia application that has to be installable by a child on a 10-year-old laptop. What did the Julia ecosystem offer for GUI design in early 2018, when the project started? How well did Julia's package installation system handle the large variety of hardware and operating systems we have encountered? How has the situation improved since then? What unique features does Julia offer that made us choose it, and how did using the language pay off years later? In our talk, we would like to answer these questions by sharing our own experiences, and provide some ideas for what can be improved if the Julia community wants the language to become more widely adopted for the development of non-scientific user-facing applications.
false
https://pretalx.com/juliacon2021/talk/NVLQT7/
https://pretalx.com/juliacon2021/talk/NVLQT7/feedback/
Red
PGFPlotsX.jl - Plotting with LaTeX, directly from Julia
Lightning talk
2021-07-29T19:40:00+00:00
19:40
00:10
PGFPlots is a plotting package for LaTeX that produces plots with vector graphics and interfaces with the math typesetting of LaTeX. `PGFPlotsX.jl` is a Julia plotting package that provides an interface to PGFPlots by transpiling Julia objects to LaTeX code. Furthermore, the figures generated by `PGFPlotsX.jl` are directly rendered in IPython notebooks, Pluto, and VS Code, which allows for rapid plot prototyping. It also serves as one of the backends to the popular `Plots.jl` package.
juliacon2021-9730-pgfplotsx-jl-plotting-with-latex-directly-from-julia
/media/juliacon2021/submissions/CPDWCV/c1886afe-3907-11e7-8027-213d36bc011a_BNJDdXb.png
Kristoffer Carlsson
en
Some people like to almost endlessly tinker with their plots, and the LaTeX package PGFPlots is one of the plotting packages that allows for such tinkering. It comes with a 600-page manual describing an almost endless number of dials and levers that can be turned and pulled to finally get the perfect plot. One of its drawbacks is that coding in LaTeX can be argued to be quite unpleasant: the error messages are often unhelpful and there is very little linting support. `PGFPlotsX.jl` is a Julia package that brings all the good things about PGFPlots into Julia while remedying the bad parts by allowing one to do the coding in Julia.
One of the big design goals of the package was to facilitate "translatability" of LaTeX PGFPlots code into Julia over, for example, terseness. This was based on the observation that many plots are created by "stitching together" parts of different examples found scattered over the internet. Allowing LaTeX PGFPlots code to be easily brought into Julia opens up a large amount of example code for reuse. It also means that the official PGFPlots manual largely acts as a manual for the package.
Even though the API is made to resemble the one in LaTeX, it is of a much higher level than the LaTeX counterpart. Many Julia objects can be directly used as inputs to the plot and will "convert" in a predictable way. Some examples of plottable Julia objects include data frames (from `DataFrames.jl`), contours (from `Contours.jl`), colors (from `Colors.jl`), and error bars (from `Measurements.jl`).
For people who desire a terser coding style while still having easy access to the PGFPlots renderer, it is possible to use `PGFPlotsX.jl` as a backend to `Plots.jl`.
In this talk, I will discuss how the design goals outlined above were achieved and give some illustrative examples and use cases. Attendees should get an overview of the package and be able to determine if using the package for their daily plotting is suitable.
false
https://pretalx.com/juliacon2021/talk/CPDWCV/
https://pretalx.com/juliacon2021/talk/CPDWCV/feedback/
Red
Towards an increased code-creativity harmony in Javis
Lightning talk
2021-07-29T19:50:00+00:00
19:50
00:10
Javis.jl is a graphical animation/visualization package for the Julia language, inspired by the Python-based animation engine Manim, created by Grant Sanderson (the math educator and YouTuber known as 3blue1brown). This talk is about the work I have been doing this summer to make Javis more friendly and feature-rich for creators.
juliacon2021-11678-towards-an-increased-code-creativity-harmony-in-javis
Arsh Sharma
en
This summer I have been working towards:
Bringing a more organized experience for creators and developers via layers.
Currently a WIP, this feature aims to add a layer-based approach to the Javis animation canvas, where different layers are stacked on top of each other. This is beneficial in cases where a user wants to modify a particular layer without affecting objects present in other layers. It helps maintain virtual boundaries between different objects on the canvas, both conceptually and syntactically.
Powerful abstractions for improved reasoning about the Javis API.
Creating shorthand methods/constants that define general functions, saving creators from writing anonymous functions for each object, by extending Luxor's shapes such as Line, Circle, Rectangle, and Polygon.
Improvements to object transformations
To be able to create stunning animations, being able to visualize the transformation of one element into another on the fly is both useful and aesthetically pleasing. The current state of morphing allows only single-step transformations, where the final object is not a distinctly different object. Being able to modify the new object after morphing will open new possibilities for the user to transform the object further.
Livestream animations
Sharing is a part of creation, and being able to share animations is really important. Livestreaming can be done in two ways: over a local network, or directly to platforms like twitch.tv. While Twitch support is a WIP, the former is available in the latest Javis.jl release.
false
https://pretalx.com/juliacon2021/talk/XDX3SQ/
https://pretalx.com/juliacon2021/talk/XDX3SQ/feedback/
Red
A deep dive into MakieLayout
Talk
2021-07-29T20:00:00+00:00
20:00
00:30
Makie.jl is a plotting package for high-performance interactive and publication-quality static data visualizations. MakieLayout, a former extension delivering flexible layouts and interactive widgets, has recently been integrated into the base package, and is now part of the default workflow.
This talk will take a detailed look at the new syntax and the architecture behind the layout system, as well as highlight features that make creating complex multi-plot figures a breeze.
juliacon2021-9891-a-deep-dive-into-makielayout
Julius Krumbiegel
en
false
https://pretalx.com/juliacon2021/talk/3S8DGW/
https://pretalx.com/juliacon2021/talk/3S8DGW/feedback/
Blue
CUDA.jl 3.0
Talk
2021-07-29T12:30:00+00:00
12:30
00:30
An overview and demonstration of the new features in CUDA.jl 3.0, most notably support for concurrent GPU programming.
juliacon2021-9708-cuda-jl-3-0
Tim Besard
en
CUDA.jl 3.0 was a major release of the NVIDIA GPU programming support package for Julia, with a major addition to the programming model: support for concurrent GPU programming with Julia tasks. In this talk, I will explain what concurrent GPU programming means, how it works, and how you can use it to improve your GPU programs.
I will also talk about other features and changes that are part of CUDA.jl 3.0 and more recent releases, such as the new device-side random number generator, support for building computational graphs, the new memory allocator, etc.
false
https://pretalx.com/juliacon2021/talk/UGX8YR/
https://pretalx.com/juliacon2021/talk/UGX8YR/feedback/
Blue
Scaling of Oceananigans.jl on multi GPU and CPU systems
Lightning talk
2021-07-29T13:00:00+00:00
13:00
00:10
This talk will present the scaling and performance of the Oceananigans.jl ocean model on CPU and GPU systems. Oceananigans.jl is an all-Julia code designed to study geophysical fluids problems ranging from idealized turbulence to planetary-scale circulation. It uses the KernelAbstractions.jl package to support CPU and GPU single-address-space parallelism. It uses MPI.jl to support multi-node and multi-GPU parallelism. MPI.jl is used both directly and through PencilArrays.jl.
juliacon2021-9824-scaling-of-oceananigans-jl-on-multi-gpu-and-cpu-systems
/media/juliacon2021/submissions/DZC7HN/bickley_1sB1HcZ.jpg
Chris Hill, Valentin Churavy, Ali Ramadhan, Francis Poulin, Gregory Wagner
en
Oceananigans.jl is designed to be a user-friendly ocean modeling code, natively implemented in Julia, that can scale from single-core laptop studies to large parallel CPU and GPU cluster systems. The code's finite-volume algorithm has large inherent parallelism through spatial domain decomposition.
In this talk we will look at the strong and weak scaling performance of non-linear shallow water model configurations of Oceananigans. The code's numerical kernels utilize KernelAbstractions.jl, allowing one source code to be maintained that supports both CPU and GPU parallel scenarios. Multi-process on-node and multi-node parallelism is supported by MPI.jl and largely abstracted, using data structures and associated types that dispatch communication operations depending on the active parallelism model.
We will describe briefly the benchmark problems used and then look at scaling over multiple threads on CPUs within a single node, across multiple GPUs and across multiple CPU and GPU nodes in a high-performance computing cluster. We will present speedup metrics and cost per solution metrics. The latter can be used to provide some measure of cost-effectiveness across quite different architectures.
false
https://pretalx.com/juliacon2021/talk/DZC7HN/
https://pretalx.com/juliacon2021/talk/DZC7HN/feedback/
Blue
Calculating a million stationary points in a second on the GPU
Lightning talk
2021-07-29T13:10:00+00:00
13:10
00:10
We will show how Julia allows us to implement spatial branch-and-bound-type methods using interval arithmetic in parallel on GPUs, in a relatively painless way. As a test case, we calculate and verify existence and uniqueness of over one million stationary points of the transcendental Griewank function of two variables in one second on a recent GPU. We are not aware of any other system that is able to do this.
juliacon2021-9821-calculating-a-million-stationary-points-in-a-second-on-the-gpu
David P. Sanders, Valentin Churavy
en
We will show how Julia allows us to implement spatial branch-and-bound-type methods using interval arithmetic in parallel on GPUs, in a relatively painless way.
These methods use repeated bisection in a divide-and-conquer style to perform exhaustive search over a box in d dimensions (for small d), in order to find all roots of a function f, find all global optima of f, or to bound feasible sets of constraints such as {x: f(x) ≤ 0}.
Using a vectorised implementation, we will show firstly how to define a vector of interval objects (or similar user-defined types) on the GPU, which most other systems cannot do. Then we need a way to run interval arithmetic methods, as defined in the IntervalArithmetic.jl package, on the GPU; `CUDA.jl`'s broadcasting abstraction makes this possible.
We will illustrate with the Griewank function, a standard test case for nonlinear optimization. We have developed a generic implementation of a vectorised branch-and-prune algorithm, which can run on both the CPU and GPU with no code changes whatsoever. A key difficulty that we faced, but were able to solve, was how to eliminate the uninteresting boxes in parallel.
We obtain a 2-orders-of-magnitude speed-up over a single CPU core, and we expect that performance will be improved even more by reducing array allocations.
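The branch-and-prune idea described above can be sketched in plain Julia. This is a toy one-dimensional version with a hand-rolled, non-rigorous interval type, not the actual IntervalArithmetic.jl/CUDA.jl implementation; it finds enclosures of the roots of f(x) = x² − 2 by repeated bisection, discarding boxes whose range enclosure cannot contain zero.

```julia
# A minimal 1-D branch-and-prune sketch in plain Julia (no packages):
# keep only subintervals whose naive range enclosure of f can contain
# zero, bisecting until the boxes are tiny. IntervalArithmetic.jl
# provides rigorous versions of the arithmetic faked here, and the
# same structure vectorises over many boxes at once on the GPU.
struct Iv
    lo::Float64
    hi::Float64
end

# naive enclosure of f(x) = x^2 - 2 over an interval
function f_range(X::Iv)
    lo, hi = X.lo, X.hi
    sq = if lo >= 0
        Iv(lo^2, hi^2)
    elseif hi <= 0
        Iv(hi^2, lo^2)
    else
        Iv(0.0, max(lo^2, hi^2))
    end
    return Iv(sq.lo - 2, sq.hi - 2)
end

contains_zero(Y::Iv) = Y.lo <= 0 <= Y.hi
width(X::Iv) = X.hi - X.lo
bisect(X::Iv) = (m = (X.lo + X.hi) / 2; (Iv(X.lo, m), Iv(m, X.hi)))

function roots(X0::Iv; tol = 1e-8)
    work, found = [X0], Iv[]
    while !isempty(work)
        X = pop!(work)
        contains_zero(f_range(X)) || continue   # prune
        if width(X) < tol
            push!(found, X)                     # tiny enclosure of a root
        else
            append!(work, bisect(X))            # branch
        end
    end
    return found
end

roots(Iv(-3.0, 3.0))   # enclosures of ±√2
```

The GPU version in the talk applies the prune step to an entire vector of boxes at once via broadcasting, which is where the compaction of surviving boxes becomes the key difficulty mentioned above.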
false
https://pretalx.com/juliacon2021/talk/ZYPPNH/
https://pretalx.com/juliacon2021/talk/ZYPPNH/feedback/
Blue
ZXCalculus.jl: A Julia package for the ZX-calculus
Lightning talk
2021-07-29T13:20:00+00:00
13:20
00:10
The ZX-calculus is a graphical language for representing and reasoning about quantum information. ZXCalculus.jl is a high-performance package for creating, manipulating, and visualizing ZX-diagrams in Julia. Compared with PyZX, a previous Python implementation, ZXCalculus.jl achieves 6-50x speed-ups on various ZX-diagram simplification tasks. Moreover, the package is integrated with YaoCompiler.jl and works as a circuit-simplification pass in the quantum compiler.
juliacon2021-9698-zxcalculus-jl-a-julia-package-for-the-zx-calculus
Chen Zhao
en
The repository of ZXCalculus.jl is available on GitHub: [ZXCalculus.jl](https://github.com/QuantumBFS/ZXCalculus.jl)
For a brief introduction to this package, please refer to this [blog post](https://chenzhao44.github.io/2020/08/27/ZXCalculus.jl/).
For more details about the ZX-calculus, please check this [website](http://zxcalculus.com/).
false
https://pretalx.com/juliacon2021/talk/NWFRP9/
https://pretalx.com/juliacon2021/talk/NWFRP9/feedback/
Blue
ExaTron.jl: a scalable GPU-MPI-based batch solver for small NLPs
Talk
2021-07-29T13:30:00+00:00
13:30
00:30
We introduce ExaTron.jl which is a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. We present ExaTron.jl's architecture, its kernel design principles, and implementation details with experimental results comparing different design choices. We demonstrate a linear scaling of parallel computational performance of ExaTron.jl on Summit at Oak Ridge National Laboratory.
juliacon2021-9910-exatron-jl-a-scalable-gpu-mpi-based-batch-solver-for-small-nlps
Youngdae Kim
en
We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. Its algorithm is based on a trust-region Newton method for solving bound-constrained nonlinear nonconvex problems. In contrast to existing work in the literature, it runs entirely on GPUs, requiring no data transfers between CPU and GPU during its procedure. This enables us to eliminate one of the main performance bottlenecks in memory-bound situations. We present ExaTron.jl's architecture, its kernel design principles, and implementation details, with experimental results comparing different design choices. We have implemented an ADMM algorithm for solving alternating current optimal power flow, in which tens of thousands of small nonlinear nonconvex problems are solved by ExaTron.jl. We demonstrate linear scaling of the parallel computational performance of ExaTron.jl on Summit at Oak Ridge National Laboratory.
false
https://pretalx.com/juliacon2021/talk/LMLJS8/
https://pretalx.com/juliacon2021/talk/LMLJS8/feedback/
Blue
Release management - lessons learned in JuliaData ecosystem
Talk
2021-07-29T16:30:00+00:00
16:30
00:30
Registering a new release of your package is always a great moment. However, there are several challenges related to release management. In this talk, drawing on experience from the JuliaData ecosystem, I will discuss the major things to consider if you want to keep your users happy.
juliacon2021-8897-release-management-lessons-learned-in-juliadata-ecosystem
Bogumił Kamiński
en
In this talk I will discuss:
1. How we manage development and patch branches in DataFrames.jl.
2. Why users might not be able to install the latest version of your package and why installing it might downgrade other packages.
3. How to coordinate releases of closely coupled packages.
4. Why having interface packages like DataAPI.jl and Tables.jl is useful.
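As a hedged illustration of points 2-4 above, here is what a hypothetical package's Project.toml might look like when it depends on a small interface package rather than on DataFrames.jl itself (the package name and both UUIDs are placeholders, not real registry entries):

```toml
# Hypothetical Project.toml for a package in a tightly coupled ecosystem
# (names and UUIDs are placeholders, not real registry entries).
name = "MyTablesTool"
uuid = "00000000-0000-0000-0000-000000000001"
version = "0.3.2"

[deps]
DataAPI = "00000000-0000-0000-0000-000000000002"

[compat]
# Depending on the small interface package keeps constraints loose,
# making resolver-forced downgrades of other packages less likely.
DataAPI = "1"     # any 1.x release, per semver
julia = "1.6"
```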
false
https://pretalx.com/juliacon2021/talk/RJE93F/
https://pretalx.com/juliacon2021/talk/RJE93F/feedback/
Blue
Shaped Data with Acsets
Talk
2021-07-29T17:00:00+00:00
17:00
00:30
Acsets are a novel infrastructure for handling data of different shapes, based on category theory and implemented in Catlab.jl. Acsets generalize both graphs and dataframes, and allow a much more general approach to data manipulation than was previously available. We will discuss both the mathematics of acsets and some of the metaprogramming techniques we used to implement them in Julia. Finally, we will give examples of how acsets have been key in developing many projects in AlgebraicJulia.
juliacon2021-9535-shaped-data-with-acsets
Owen Lynch
en
Any practicing data scientist can tell you that all the munging that goes on between data acquisition and mathematical algorithm is a huge time sink. This is especially evident when the data does not fall into the traditional model of the dataframe. If one is lucky, the data is shaped like a graph, and one can use a graph data structure and graph algorithms to analyze it. More generally, however, there are many more "shapes" of data, which must either be put into ad hoc data structures or shoehorned into general-purpose ones.
In Catlab, we have built a general infrastructure for differently-shaped data based on a category-theoretic framework for databases as functors that we call "Attributed C-Sets" (acsets for short).
The acset infrastructure is made possible by a novel use of the Julia macro and type system that would range from difficult to untenable in most other languages. First, "schemas" for acsets are generated by macros. Then, more macros are used to transform these schemas into custom structs. Finally, we use `@generated` functions to specialize generic operations for these custom structs.
This approach gives us performance comparable to popular data solutions like DataFrames.jl and LightGraphs.jl while remaining fully generic. The acset infrastructure is used pervasively throughout the AlgebraicJulia ecosystem because of its flexibility, expressivity, and performance.
In our talk, we will give an overview of the mathematical and computational innovations necessary to implement the acset infrastructure, as well as examples of practical applications of acsets, and a reflection on how acsets have become an essential part of AlgebraicJulia.
false
https://pretalx.com/juliacon2021/talk/NWRPGY/
https://pretalx.com/juliacon2021/talk/NWRPGY/feedback/
Blue
Types from JSON
Lightning talk
2021-07-29T17:30:00+00:00
17:30
00:10
Tired of writing artisanally crafted types to match the JSON file or API you're consuming? Learn about type providers and how to have types created from your JSON.
juliacon2021-9465-types-from-json
Mary McGrath
en
Type providers infer and instantiate types from real world data. [Types from data: Making structured data first-class citizens in F#](http://tomasp.net/academic/papers/fsharp-data/fsharp-data.pdf) formalized a type inference algorithm for real world data. This talk will provide an overview of the theory of type providers, how this applies to Julia, and how you can get types from data today in [JSON3.jl](https://github.com/quinnj/JSON3.jl).
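The core idea can be sketched in a few lines of Julia: infer a struct definition from one sample record. This toy is not JSON3.jl's actual mechanism (`struct_from_sample` is a hypothetical helper, and a Dict stands in for parsed JSON); it only illustrates turning data into a type.

```julia
# A toy "type provider": infer a struct definition from one sample
# record (a Dict standing in for parsed JSON). `struct_from_sample`
# is a hypothetical helper, not part of JSON3.jl; it only illustrates
# the idea of turning data into a type.
function struct_from_sample(name::Symbol, sample::AbstractDict)
    fields = [:($(Symbol(k))::$(typeof(v))) for (k, v) in sample]
    return :(struct $name; $(fields...); end)
end

sample = Dict("name" => "Ada", "year" => 1815, "alive" => false)
ex = struct_from_sample(:Person, sample)
eval(ex)                # defines `struct Person` with typed fields

Person isa DataType     # → true
```

A real type provider must additionally handle nested objects, arrays, missing fields, and conflicting samples, which is where the inference algorithm from the paper comes in.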
false
https://pretalx.com/juliacon2021/talk/3PCHLJ/
https://pretalx.com/juliacon2021/talk/3PCHLJ/feedback/
Blue
PrettyPrinting: optimal layout for code and data
Lightning talk
2021-07-29T17:40:00+00:00
17:40
00:10
*PrettyPrinting* is a library for formatting composite data structures. PrettyPrinting optimizes the layout of the data to make it fit the screen width.
Out of the box, PrettyPrinting can format Julia code and standard Julia containers. It can be easily extended to format custom data types.
juliacon2021-9872-prettyprinting-optimal-layout-for-code-and-data
Kyrylo Simonov
en
If you use Julia REPL to work with JSON or other nested data structures, you may find the way the data is displayed unsatisfactory. If this is the case, consider using PrettyPrinting. Compare:
<pre>
julia> data = JSON.parsefile("patient-example.json")
Dict{String, Any} with 14 entries:
"active" => true
"managingOrganization" => Dict{String, Any}("reference"=>"Organization/1")
"address" => Any[Dict{String, Any}("line"=>Any["534 Erewhon St"], "dis…
"name" => Any[Dict{String, Any}("family"=>"Chalmers", "given"=>Any[…
"id" => "example"
"birthDate" => "1974-12-25"
⋮
</pre>
<pre>
julia> using PrettyPrinting
julia> pprint(data)
Dict(
"active" => true,
"managingOrganization" => Dict("reference" => "Organization/1"),
"address" => [Dict("line" => ["534 Erewhon St"],
"district" => "Rainbow",
"use" => "home",
"postalCode" => "3999",
"city" => "PleasantVille",
"period" => Dict("start" => "1974-12-25"),
"text" => "534 Erewhon St PeasantVille, Rainbow, Vic 3999",
"type" => "both",
"state" => "Vic")],
"name" => [Dict("family" => "Chalmers",
"given" => ["Peter", "James"],
"use" => "official"),
Dict("given" => ["Jim"], "use" => "usual"),
Dict("family" => "Windsor",
"given" => ["Peter", "James"],
"use" => "maiden",
"period" => Dict("end" => "2002"))],
"id" => "example",
"birthDate" => "1974-12-25",
⋮
</pre>
PrettyPrinting optimizes the layout of the data to make it fit the screen width. It knows how to format tuples, named tuples, vectors, sets, and dictionaries.
PrettyPrinting can also serialize `Expr` nodes as Julia code. It supports a fair subset of Julia syntax including top-level declarations, statements, and expressions.
The ability of PrettyPrinting to format `Expr` nodes makes it easy to extend `pprint()` to user-defined data types. Indeed, it is customary to display a Julia object as a valid Julia expression that constructs the object. This can be done by converting the object to `Expr` and having `pprint()` format it.
For example, let us define a type `MyNode` modeled after the standard `Expr` type.
<pre>
julia> struct MyNode
head
args
MyNode(head, args...) = new(head, args)
end
</pre>
The default implementation of `show()` is not aware of the custom constructor. Moreover, it dumps the whole object in a single line, making it difficult to read.
<pre>
julia> tree = MyNode("1",
MyNode("1.1", MyNode("1.1.1"), MyNode("1.1.2"), MyNode("1.1.3")),
MyNode("1.2", MyNode("1.2.1"), MyNode("1.2.2"), MyNode("1.2.3")))
MyNode("1", (MyNode("1.1", (MyNode("1.1.1", ()), MyNode("1.1.2", ()), MyNode("1.1.3", …
</pre>
We implement function `quoteof(::MyNode)` to convert `MyNode` to `Expr`. We can also override the default implementation of `show()` to make it use `pprint()`.
<pre>
julia> PrettyPrinting.quoteof(n::MyNode) =
:(MyNode($(quoteof(n.head)), $((quoteof(arg) for arg in n.args)...)))
julia> Base.show(io::IO, ::MIME"text/plain", n::MyNode) =
pprint(io, n)
</pre>
Now the output is correct Julia code that fits the screen width.
<pre>
julia> tree
MyNode("1",
MyNode("1.1", MyNode("1.1.1"), MyNode("1.1.2"), MyNode("1.1.3")),
MyNode("1.2", MyNode("1.2.1"), MyNode("1.2.2"), MyNode("1.2.3")))
</pre>
Internally, PrettyPrinting represents all potential layouts of a data structure in the form of a *layout expression* assembled from atomic layouts, vertical and horizontal composition, and the choice operator. The layout cost function estimates how well the layout fits the screen dimensions. The algorithm for finding the optimal layout is a clever application of dynamic programming, which is described in [Phillip Yelland, A New Approach to Optimal Code Formatting, 2016](https://ai.google/research/pubs/pub44667).
false
https://pretalx.com/juliacon2021/talk/HWSUQN/
https://pretalx.com/juliacon2021/talk/HWSUQN/feedback/
Blue
Sponsor talk (Datachef)
Keynote
2021-07-29T17:50:00+00:00
17:50
00:05
Sponsor talk.
juliacon2021-11871-sponsor-talk-datachef-
en
false
https://pretalx.com/juliacon2021/talk/AKRLTA/
https://pretalx.com/juliacon2021/talk/AKRLTA/feedback/
Blue
Package latency and what developers can do to reduce it
Talk
2021-07-29T19:00:00+00:00
19:00
00:30
Package latency remains one of the chief complaints among Julia users. While recent improvements in Julia have reduced the problem, the opportunity for additional progress is large. In this talk I'll analyze latency from a package developer's standpoint, describing some of the factors that affect latency and how improvements in package design can reduce it. I will briefly exhibit tools that can help identify opportunities for improvement.
juliacon2021-9086-package-latency-and-what-developers-can-do-to-reduce-it
Tim Holy
en
false
https://pretalx.com/juliacon2021/talk/LE38LV/
https://pretalx.com/juliacon2021/talk/LE38LV/feedback/
Blue
Creating a Shared Library Bundle with Package Compiler
Lightning talk
2021-07-29T19:30:00+00:00
19:30
00:10
[`PackageCompiler.jl`](https://julialang.github.io/PackageCompiler.jl/dev/) has become the de facto method for creating standalone Julia applications. In this talk, we will demonstrate the use of `PackageCompiler.jl` to produce shared library bundles. This functionality was added recently and allows the easy creation of location-independent dynamic libraries which can be linked to and called from C, C++, Rust, or other languages which can link to and use C libraries.
juliacon2021-9882-creating-a-shared-library-bundle-with-package-compiler
Kevin Squire, Simon Byrne, Kristoffer Carlsson
en
Julia has been touted as a great solution to the two-language problem (and it is). But for many, interacting with code in other languages is a necessity.
Numerous packages exist which aid interoperability with other languages, including C ([`Clang.jl`](https://juliainterop.github.io/Clang.jl/stable/)), C++ ([`CxxWrap.jl`](https://github.com/JuliaInterop/CxxWrap.jl)), Java ([`JavaCall.jl`](https://juliainterop.github.io/JavaCall.jl/)), MATLAB ([`MATLAB.jl`](https://github.com/JuliaInterop/MATLAB.jl) / [`Mex.jl`](https://github.com/byuflowlab/Mex.jl)), Python ([`PyCall.jl`](https://github.com/JuliaPy/PyCall.jl) / [`pyjulia`](https://pyjulia.readthedocs.io/en/stable/)), R ([`RCall.jl`](https://juliainterop.github.io/RCall.jl/stable/) / [`JuliaCall`](https://cran.r-project.org/web/packages/JuliaCall/readme/README.html)), Mathematica ([`MathLink.jl`](https://github.com/JuliaInterop/MathLink.jl)), and Rust ([`jlrs`](https://docs.rs/jlrs/0.9.0/jlrs/)).
Many of these packages focus on calling out to code in other languages from Julia, but there is also support for calling Julia code from other languages, especially for those that have the ability to call C functions, and that is what we will focus on here.
The Julia manual has a [full section on Embedding Julia](https://docs.julialang.org/en/v1/manual/embedding/). Until now, this has been the standard way to embed and call Julia from other languages. Using the ideas here, along with custom Julia sysimage generation with [`PackageCompiler.jl`](https://julialang.github.io/PackageCompiler.jl/dev/), one of us created a proof-of-concept repository for creating a shared library from Julia code for C or other languages (https://github.com/simonbyrne/libcg).
One downside of this work is that the library was not easy to relocate--it contained hard-coded paths to the Julia runtime. We wanted the ability to create a relocatable shared library.
`PackageCompiler.jl` already allowed the creation of “apps”--bundles of files, including an executable--which could be relocated and moved to other machines (with some minor caveats). We extended this functionality to create relocatable shared libraries with a `create_library` function.
The actual act of creating a shared library with `PackageCompiler.jl` is very much like creating an “app”, and has a very similar output--a bundle of directories which include the shared library and enough of the Julia runtime to run. This bundle can be zipped or tarred up, sent to other computers, and installed in any location where a linker can find it. The user has the option of setting the library version (on Mac and Linux), and can include C header files for the Julia functions she has exported in the shared library.
For this talk, we will give a brief overview of the `create_library` functionality, discuss situations in which it might be used, show how to use it, and discuss its limitations.
false
https://pretalx.com/juliacon2021/talk/KHK7PA/
https://pretalx.com/juliacon2021/talk/KHK7PA/feedback/
Blue
Semantically Releasing Julia Packages
Lightning talk
2021-07-29T19:40:00+00:00
19:40
00:10
The Julia community has embraced semantic versioning from a very early stage (if not from the get-go). The Julia package release flow has also been put through its paces and is easy to get started with. This is a great foundation and may be sufficient for most. However, some may require a more structured approach to release preparation, incorporating it into their day-to-day operations. This talk will show how to use the 'semantic release' framework for Julia packages to accomplish this.
juliacon2021-9757-semantically-releasing-julia-packages
Joris Kraak
en
The 'semantic release' framework (https://semantic-release.gitbook.io) builds upon the 'semantic versioning' (https://semver.org) and 'conventional commits' (https://www.conventionalcommits.org) specifications to bring release preparation closer to day-to-day operations.
When the time rolls around for a new release of a Julia package, multiple decisions have to be made regarding which version number to assign, documenting what has changed, etc. This is not always a straightforward task. Gathering this information may require coordination within a group of people, or stretch over long periods of time, requiring effort to regain an overview of the current state of a package relative to the previous one in order to document it. By adopting a specific commit message format, standard tooling can be used to extract this information automatically whenever a release is desired. For instance, based on the content of commit messages, the new semantic version can be determined, a changelog for the public API can be automatically generated, etc.
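For instance, under the conventional commits specification the commit type determines the semantic version bump. A hedged illustration (the messages themselves are made up):

```
fix: guard against empty input vectors      -> patch release (x.y.Z)
feat: add support for weighted samples      -> minor release (x.Y.0)
feat!: drop support for Julia 1.0           -> major release (X.0.0)
    BREAKING CHANGE: callers must upgrade
```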
An argument can be made that, as all of this information is available in the version control system, there is no need for tooling such as this. However, the information contained in these systems is typically either too high-level due to sloppy commit messages, or too detailed, requiring deep knowledge of the software to understand the implications of changes. It is typically not convenient for consumers of a Julia package to find out how a new release of a dependency affects their software.
Adopting a 'semantic release' process benefits both developers and consumers of Julia software. For the former, it enables thinking about the impact of changes 'in the moment', instead of 'after the fact'. This is typically beneficial for the quality of documentation of these changes (e.g. reasons why, etc.). For the latter, it becomes easier to judge whether a new release of a dependency actually has an impact on their software.
Slides are available at https://bauglir.gitlab.io/talks/juliacon-2021-semantically-releasing-julia-packages/.
false
https://pretalx.com/juliacon2021/talk/CY88QP/
https://pretalx.com/juliacon2021/talk/CY88QP/feedback/
Blue
Runtime-switchable BLAS/LAPACK backends via libblastrampoline
Lightning talk
2021-07-29T19:50:00+00:00
19:50
00:10
Julia has historically been built against a single backing BLAS/LAPACK library, and switching to a different library has required a recompilation of Julia. This was compounded by issues with loading 3rd party binaries that linked against incompatible BLAS backends. This talk will showcase a new low-overhead compatibility layer in Julia v1.7 named libblastrampoline that allows for runtime switching of BLAS/LAPACK libraries, as well as allowing loading of multiple BLAS/LAPACK ABIs at once.
juliacon2021-9799-runtime-switchable-blas-lapack-backends-via-libblastrampoline
Mosè Giordano, Elliot Saba
en
false
https://pretalx.com/juliacon2021/talk/ZSPVMT/
https://pretalx.com/juliacon2021/talk/ZSPVMT/feedback/
Blue
Deep Dive: Creating Shared Libraries with PackageCompiler.jl
Talk
2021-07-29T20:00:00+00:00
20:00
00:30
The ability to create shared library bundles was recently added to `PackageCompiler.jl`. In this talk, we will discuss the technical details of the implementation and give in-depth examples of using the resulting shared library bundles from C and Rust.
juliacon2021-9900-deep-dive-creating-shared-libraries-with-packagecompiler-jl
Kevin Squire, Nikhil Mitra, Kristoffer Carlsson, Simon Byrne
en
We recently added to `PackageCompiler.jl` functionality for creating shared library bundles, consisting of a "main" dynamic library (`.so`, `.dylib`, or `.dll`) created from Julia code, as well as any required Julia runtime libraries. The purpose of the library bundle is to allow developers to write Julia code that can be distributed to developers using other languages without the need for Julia to be installed.
This work extends the existing `PackageCompiler.jl` functionality to create self-contained, distributable and relocatable "apps". In this talk, we will go into the details of the implementation, as well as give in-depth examples of using the resulting shared library from C and Rust.
false
https://pretalx.com/juliacon2021/talk/U9SZZU/
https://pretalx.com/juliacon2021/talk/U9SZZU/feedback/
Purple
DataSets.jl: A bridge between code and data
Lightning talk
2021-07-29T12:30:00+00:00
12:30
00:10
In technical computing, getting data into and out of your code can be a pain. Data comes in all shapes, sizes and formats, with many different locations and storage access mechanisms.
DataSets.jl is a new package for describing data declaratively and mapping it neatly into your programs. We aim to make your code portable between data environments and remove the cruft of local paths and data access wrappers which litter technical analysis code.
juliacon2021-9950-datasets-jl-a-bridge-between-code-and-data
Claire Foster
en
DataSets.jl is an open source package for describing data format and location declaratively so that one can better separate data deserialization and access from the domain-specific analysis code which consumes that data.
To quote from the package documentation available at https://juliacomputing.github.io/DataSets.jl/dev :
DataSets.jl exists to help manage data and reduce the amount of data wrangling
code you need to write. It's annoying to constantly rewrite
* Command line wrappers which deal with paths to data storage
* Code to load and save from various *data storage systems* (eg, local
filesystem data; local git data, downloaders for remote data over various
protocols, cloud storage access)
* Code to load the same data model from various serializations
* Code to deal with data lifecycle; versions, provenance, etc
DataSets.jl provides scaffolding to make this kind of code more reusable. We want
to make it easy to *relocate* an algorithm between different data environments
without code changes. For example from your laptop to the cloud, to another
user's machine, or to an HPC system.
false
https://pretalx.com/juliacon2021/talk/73XKCM/
https://pretalx.com/juliacon2021/talk/73XKCM/feedback/
Purple
Systems Biology in ModelingToolkit
Lightning talk
2021-07-29T12:40:00+00:00
12:40
00:10
Systems Biology Markup Language (SBML) and CellML are extensible markup languages (XML) widely used throughout the biological modeling community. In this talk we showcase new packages (SBML.jl and CellMLToolkit.jl) for importing models from these languages to the ModelingToolkit.jl format for the full suite of SciML tools to simulate and analyze!
juliacon2021-9688-systems-biology-in-modelingtoolkit
Anand Jain, Shahriar Iravanian, Paul Lang
en
Back in my day, systems biologists used MATLAB and Python for RK4. But in 2021 we can now run downhill both ways and make our biological models zoom with CellMLToolkit.jl and SBMLToolkit.jl in Julia! We will demonstrate importing CellML and SBML models into ModelingToolkit and how we get these model analysis and simulation tools "for free" in an acausal symbolic component model. We will show a few examples of how (biological) researchers may benefit from the broader SciML ecosystem, including parameter estimation and global sensitivity analysis. Short comparisons with de facto SBML and CellML modeling programs will be drawn to demonstrate how a biologists’ workflow may differ with SciML. The audience will leave with a firm understanding of how the Julia simulation environments will lead the next generation of biological modeling and simulation.
false
https://pretalx.com/juliacon2021/talk/EZHEQL/
https://pretalx.com/juliacon2021/talk/EZHEQL/feedback/
Purple
Single-cell resolved cell-cell communication modeling in Julia
Lightning talk
2021-07-29T12:50:00+00:00
12:50
00:10
We develop multiscale models that couple cell-cell communication with cell-internal gene regulatory network dynamics to study cell fate decision-making from a dynamical systems perspective. In JuliaLang, we model cell-cell communication as a Poisson process, and cell-internal dynamics with nonlinear ordinary differential equations, taking advantage of the power of DifferentialEquations.jl. We show that subtle changes in cell-cell communication lead to dramatic changes in cell fate distributions.
juliacon2021-10027-single-cell-resolved-cell-cell-communication-modeling-in-julia
Megan Franke
en
The role of cell-cell communication in cell fate decision-making has not been well-characterized through a dynamical systems perspective. To do so, here we develop multiscale models that couple cell-cell communication with cell-internal gene regulatory network dynamics. This allows us to study the influence of external signaling on cell fate decision-making at the resolution of single cells. We study the granulocyte-monocyte vs. megakaryocyte-erythrocyte fate decision, dictated by the GATA1-PU.1 network, as an exemplary bistable cell fate system. Using JuliaLang, we model the cell-internal dynamics with nonlinear ordinary differential equations and the cell-cell communication via a Poisson process.
In this work, through analysis of a wide range of cell-cell communication topologies, we discovered that general principles emerged describing how cell-cell communication regulates cell fate decision-making. We studied a wide range of cell communication topologies through simulation using tools from DifferentialEquations.jl. We also used our high-performance computing cluster to run thousands of simulations in order to understand the limiting behaviors of our model. We show that, for a wide range of cell communication topologies, subtle changes in signaling can lead to dramatic changes in cell fate. We find that cell-cell coupling can explain how populations of heterogeneous cell types can arise. Analysis of intrinsic and extrinsic cell-cell communication noise demonstrates that noise alone can alter the cell fate decision-making boundaries. These results illustrate how external signals alter transcriptional dynamics, provide insight into cell fate decision-making, and provide a framework for modeling cell-cell communication that we expect will be of wide interest to the systems biology community.
false
https://pretalx.com/juliacon2021/talk/YKHNVR/
https://pretalx.com/juliacon2021/talk/YKHNVR/feedback/
Purple
FlowAtlas.jl: interactive exploration of phenotypes in cytometry
Lightning talk
2021-07-29T13:00:00+00:00
13:00
00:10
I will present an interactive web app for exploring phenotypes in flow cytometry data. In particular, a multi-tissue, high-dimensional immune cell dataset. This tool bridges computational methods in GigaSOM.jl and the popular FlowJo, used to annotate cells with gating strategies. By leveraging the geospatial mapping library OpenLayers to render, annotate and analyze cells, immunologists can now efficiently navigate the phenotype space of Human Cell Atlas datasets.
juliacon2021-9509-flowatlas-jl-interactive-exploration-of-phenotypes-in-cytometry
Grisha Szep
en
This project demonstrates how combining OpenLayers, D3 and GigaSOM.jl using JSServe.jl allowed us to create interactive clustering and visualisation tools for really large cytometry data. We want to continue lowering the entry barrier for experimental biologists to use computational tools.
This talk should be interesting to people from bioinformatics, immunology, machine learning and web development. Special thanks go to the lovely people involved in GigaSOM.jl for the helpful discussions.
false
https://pretalx.com/juliacon2021/talk/QTLENJ/
https://pretalx.com/juliacon2021/talk/QTLENJ/feedback/
Purple
Designing ecologically optimized vaccines
Lightning talk
2021-07-29T13:10:00+00:00
13:10
00:10
Designing vaccines is an expensive and time consuming process. This talk demonstrates how we can exploit automatic differentiation of ODEs, parallelization, stochastic search and Bayesian optimization to minimize post-vaccination invasive pneumococcal disease and antibiotic resistant strains in a bacteria population using a novel computational model of the bacterial population dynamics that integrates epidemiological and genomic data.
juliacon2021-9884-designing-ecologically-optimized-vaccines
Kusti Skytén
en
Streptococcus pneumoniae (the pneumococcus) is a common nasopharyngeal bacterium that can cause invasive pneumococcal disease (IPD). Each component of current vaccines generally induces immunity to one of the approximately 100 pneumococcal types. Overall carriage rates remain similar to pre-vaccination levels, as the serotypes not affected by the vaccine replace the affected ones. Selecting which serotypes to target to minimize the post-vaccine IPD burden is a challenging combinatorial problem involving a large ODE system describing the population dynamics of the bacteria in response to each proposed vaccine. This talk describes how I have approached this problem using automatic differentiation, parallelized evaluation of the ODEs, stochastic search and Bayesian optimization. Here is a link to the paper this work is based on: https://www.nature.com/articles/s41564-019-0651-y.
false
https://pretalx.com/juliacon2021/talk/EWWNFZ/
https://pretalx.com/juliacon2021/talk/EWWNFZ/feedback/
Purple
PRS.jl: Fast Polygenic Risk Scores
Lightning talk
2021-07-29T13:20:00+00:00
13:20
00:10
Determining one’s risk of developing various diseases throughout one’s lifetime is important for pursuing good health. An emerging method for performing this calculation is the Polygenic Risk Score, or PRS. A PRS method allows one to construct a model of risk of acquiring a certain disease given one’s own genome and provides a simple numerical result representing that risk. We will describe how we ported a widely used PRS program to Julia and the performance and usability that we gained.
juliacon2021-9796-prs-jl-fast-polygenic-risk-scores
Annika Faucon
en
The PRS-CS Python library calculates the relationship between genetic features and traits, eventually producing a single numerical result representing a person’s genetic susceptibility to a given disease. It does this using a novel Markov Chain Monte Carlo approach, allowing it to capture information from more genetic features than previous approaches.
As collection and storage of genetic data increases globally, more diseases are studied at once. However, when calculating these scores for many diseases while maintaining high accuracy, the computational burden becomes increasingly heavy. Because PRS-CS cannot deliver top-notch accuracy quickly, we developed PRS.jl.
PRS.jl started as a direct port of PRS-CS, and without any special treatment produces results with the same accuracy but in a fraction of the time (or, depending on the configuration, better accuracy for the same amount of time). Today, PRS.jl boasts additional features and improved usability over PRS-CS, while reducing the average compute time per trait (among 9 tested) from 80 hours for PRS-CS to just 15 for PRS.jl.
In this talk, I will introduce the concept of polygenic risk scores and describe how they are used in biology and medicine. Next, I'll demonstrate how the program works and what aspects we improved upon. Finally, I will show areas where users can contribute improvements to the package.
false
https://pretalx.com/juliacon2021/talk/PDMYDR/
https://pretalx.com/juliacon2021/talk/PDMYDR/feedback/
Purple
PhyloNetworks: a Julia package for phylogenetic networks
Lightning talk
2021-07-29T13:30:00+00:00
13:30
00:10
Evolutionary relationships among organisms are typically depicted as a binary tree. However, not all species follow the paradigm of vertical inheritance of genes and thus, estimation of phylogenetic networks becomes necessary. PhyloNetworks is the first Julia package for the inference, manipulation, visualization, and use of phylogenetic networks.
The package documentation has a full tutorial including upstream analyses, network estimation, bootstrap analysis, and downstream analyses for trait evolution.
juliacon2021-9565-phylonetworks-a-julia-package-for-phylogenetic-networks
/media/juliacon2021/submissions/DQJNVA/phylonetworks-logo_1mJDpzf.png
Claudia Solis-Lemus
en
false
https://pretalx.com/juliacon2021/talk/DQJNVA/
https://pretalx.com/juliacon2021/talk/DQJNVA/feedback/
Purple
Solving Pokemon Go Battles using Julia
Lightning talk
2021-07-29T13:40:00+00:00
13:40
00:10
RandomBattles.jl is a Julia package for the efficient simulation of individual and team Player-vs-Player Battles in Pokemon Go, the AR mobile game by Niantic. This package can compute Monte Carlo simulations, as well as game theoretic solutions to perfect information games. Using the game’s structure and Nash equilibria, the algorithm computes optimal play strategies for an arbitrary number of moves. These simulations derived strategies that are highly similar to those employed by human players.
juliacon2021-9438-solving-pokemon-go-battles-using-julia
Ian Slagle
en
false
https://pretalx.com/juliacon2021/talk/NUFWBU/
https://pretalx.com/juliacon2021/talk/NUFWBU/feedback/
Purple
Julia for data analysis in High Energy Physics
Lightning talk
2021-07-29T13:50:00+00:00
13:50
00:10
The talk presents the first data analysis in the LHCb experiment performed in Julia (arXiv:2107.03419). The analysis includes data selection, building and combining complex PDFs (`AlgebraPDF.jl`), likelihood fitting, angular analysis (`FourVectors.jl`, `ThreeBodyDecay.jl`, `PartialWaveFunctions.jl`), hypotheses testing, and running pseudo experiments. New packages enrich Julia ecosystem fostering the adoption of the language in the High Energy Physics community.
juliacon2021-9810-julia-for-data-analysis-in-high-energy-physics
Mikhail Mikhasenko
en
The field of High Energy Physics (HEP) is a natural place to take great benefit from the Julia language. The adoption of Julia in HEP, however, has been slow, and the HEP ecosystem remains a promising place for future development. In the talk, I will present [an example of the data analysis in LHCb](https://inspirehep.net/literature/1879440), a large collaboration of 1000 scientists, that pioneers the application of Julia to typical HEP problems. The central part of the analysis is the study of a multi-particle spectrum by building a custom Mixture Model PDF based on the particle-scattering amplitude using [AlgebraPDF.jl](https://github.com/mmikhasenko/AlgebraPDF.jl), extended-likelihood fitting, and spin-hypothesis testing using sets of pseudo experiments.
false
https://pretalx.com/juliacon2021/talk/TRMZFB/
https://pretalx.com/juliacon2021/talk/TRMZFB/feedback/
Purple
Experiences session
Experience
2021-07-29T16:30:00+00:00
16:30
01:30
This session will include all the experiences talks. List of all experiences talks: https://juliacon.org/2021/experiences/
juliacon2021-11726-experiences-session
en
false
https://pretalx.com/juliacon2021/talk/MAUPF9/
https://pretalx.com/juliacon2021/talk/MAUPF9/feedback/
Purple
Monads 2.0, aka Algebraic Effects: ExtensibleEffects.jl
Talk
2021-07-29T19:00:00+00:00
19:00
00:30
While Monads make it easy to hide one context nicely in your code, with Extensible Effects you can combine multiple contexts and let them seamlessly interact with each other. TLDR: If you want to abstract and hide away some computational context, prefer Extensible Effects to Monads.
juliacon2021-9890-monads-2-0-aka-algebraic-effects-extensibleeffects-jl
Stephan Sahm
en
You heard that monads are supposed to be cool, but what if there is something better already? Indeed ;-)
Extensible effects, sometimes also called algebraic effects, have been around for some time now and have made monads composable.
Remember, a monad is essentially a composable hidden context; however, composing different such monads has been a struggle for many years.
This talk will present the concept and implementation of Extensible Effects. The implementation was adapted from the Scala library Eff, but massively simplified, and comes with many examples of varying complexity. Hence it will serve very well for educational purposes. You can find the source code at https://github.com/JuliaFunctional/ExtensibleEffects.jl
Extensible Effects are a bit like magic. The implementation looks so small but what it can do surpasses imagination, even if you programmed it yourself. It is a truly remarkable concept. Grab the chance and get to know it in this session!
false
https://pretalx.com/juliacon2021/talk/BB97DT/
https://pretalx.com/juliacon2021/talk/BB97DT/feedback/
Purple
Roadmap to Julia BLAS and LinearAlgebra
Talk
2021-07-29T19:30:00+00:00
19:30
00:30
BLAS & LAPACK are an integral component of many numerical algorithms. Due to their importance, a lot of effort has gone into optimizing ASM/C/Fortran implementations.
Nonetheless, early work demonstrated Julia implementations were often faster than competitors, while laying groundwork for new routines specialized for new problems.
We discuss a roadmap toward providing Julia BLAS and LAPACK libraries, from optimizations in LoopVectorization to libraries like Octavian and RecursiveFactorization.
juliacon2021-9881-roadmap-to-julia-blas-and-linearalgebra
Chris Elrod
en
The primary motivations for implementing BLAS/LAPACK in Julia are:
1. Because we can!
2. The existence of highly optimized alternatives such as MKL provides a solid benchmark by which to assess how we're doing before applying the same optimization approaches to novel problems.
3. We can adapt the routines easily to related operation types such as evaluating dense layers or miscellaneous tensor operations.
4. Generic with respect to number types, whether that means mixing precision or something as exotic as Tropical Numbers (showcased in TropicalGEMM.jl).
5. Ability to take advantage of compile time information and specialize, e.g. for statically sized arrays.
6. Relatively painless support for new hardware, as we do not need to write assembly kernels. Feature detection and support for generating optimized code will also be tied with LLVM rather than libraries like OpenBLAS, which tend to lag far behind the compilers.
Some of the challenges faced in the ecosystem include:
1. Efficient composable threading with low enough overhead to beat MKL for small array sizes.
2. Compilation time or sysimage building to avoid "time to first matmul" problems.
3. The implementations of BLAS and LAPACK routines themselves.
Traditionally, BLAS and LAPACK libraries define many compute kernels, typically written in assembly for each supported architecture. Supporting code then builds the supported BLAS and LAPACK routines through applying these kernels.
Libraries such as Octavian and RecursiveFactorization followed this approach while using LoopVectorization to produce most of the kernels.
An alternative approach is to use these problems to motivate and guide extending LoopVectorization to perform analysis and optimizations.
At the time of submitting this proposal, the planned features that will extend the amount of work LoopVectorization can handle automatically, thereby reducing the effort needed to implement new functions, include:
1. Allowing the bounds of inner loops to depend on the induction variables of outer loops. For example, loops of the form `for m in 1:M, n in 1:m; ...; end`.
2. Allowing multiple loops to occur at the same level in the nest. For example, loops of the form `for m in 1:M; for n in 1:N; end; for k in 1:K; end; end`.
3. Modeling dependencies across loop iterations, and avoiding violating them. For example, loops of the form `for m in 1:M; a[m] += a[m-1]; end`.
Together, these would allow LoopVectorization to support loop nests performing cholesky factorizations or triangular solves. These should be tunable to perform with (nearly) optimal performance at small to moderate sizes for use in blockwise routines.
An orthogonal set of optimizations would be to develop a model for automatically generating blocking (working on pieces of arrays at a time that fit nicely into upper cache levels) and packing code (copying pieces of arrays into temporary buffers to avoid pessimized address calculations due to memory accesses being spread across too many pages; this also benefits hardware prefetchers, which require relatively small strides between subsequent memory accesses to trigger).
We outline a path toward building up an ecosystem through a combination of these approaches, including applications we can target -- such as stiff ODE solves benefiting from LU and triangular solves -- so that we see immediate benefits along the way.
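As a taste of the kernel-from-loops approach described above, here is a hedged sketch of a naive matrix multiply written with LoopVectorization's `@turbo` macro (the function name `mygemm!` is made up for illustration; this is not the tuned kernel code used by Octavian):

```julia
using LoopVectorization

# A plain triple loop; @turbo unrolls and vectorizes it, generating a
# SIMD kernel without any hand-written assembly.
function mygemm!(C, A, B)
    @turbo for m in axes(A, 1), n in axes(B, 2)
        Cmn = zero(eltype(C))
        for k in axes(A, 2)
            Cmn += A[m, k] * B[k, n]
        end
        C[m, n] = Cmn
    end
    return C
end

A, B = rand(8, 8), rand(8, 8)
C = similar(A)
mygemm!(C, A, B) ≈ A * B  # true
```

Because the loop nest is generic over element types, the same source supports the exotic number types mentioned above.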
false
https://pretalx.com/juliacon2021/talk/QDBNXV/
https://pretalx.com/juliacon2021/talk/QDBNXV/feedback/
Purple
SuiteSparseGraphBLAS.jl
Lightning talk
2021-07-29T20:00:00+00:00
20:00
00:10
Graphs are a ubiquitous and versatile data structure, which allow representation of problems and systems across a vast array of domains, from infrastructure networks and molecules to language and social interactions. [SuiteSparseGraphBLAS.jl](https://github.com/JuliaSparse/SuiteSparseGraphBLAS.jl) casts graph computations as generalized linear algebra on sparse matrices. Support for ChainRules AD frameworks, and the wider ecosystem is a core feature of v1.0, releasing around JuliaCon.
juliacon2021-9866-suitesparsegraphblas-jl
William Kimmerer
en
This talk will give an overview of progress on a JSOC 2021 project. Most work will be complete by this point, and the talk will give a brief overview of GraphBLAS, an example algorithm using GraphBLAS in Julia, and a graph neural network layer written using the project.
One of the goals of the project is interoperability with the Julia ecosystem, integrating with interfaces from SparseArrays, LightGraphs, and GeometricFlux. These integrations will be highlighted as well.
false
https://pretalx.com/juliacon2021/talk/YFPXCU/
https://pretalx.com/juliacon2021/talk/YFPXCU/feedback/
Purple
MutableArithmetics: An API for mutable operations
Lightning talk
2021-07-29T20:10:00+00:00
20:10
00:10
The arithmetic operations defined in Julia assume that the arguments are not modified.
However, in many situations, a variable represents an accumulator that can be modified to contain the result, e.g., when summing the elements of an array.
Moreover, many types can be mutated, and mutating an element may bring significant performance benefits.
This talk presents an interface that allows algorithms to exploit possible mutability while still being completely generic.
juliacon2021-9865-mutablearithmetics-an-api-for-mutable-operations
Benoît Legat
en
Julia allows writing generic algorithms that work with arbitrary number types as long as they implement the needed operations such as `+`, `*`, ...
The arithmetic operations defined in Julia assume that the arguments are not modified.
However, in many situations, a variable represents an accumulator that can be modified to contain the result, e.g., when summing the elements of an array.
Moreover, many types can be mutated, e.g., multiple-precision numbers, JuMP expressions, MOI functions, polynomials, arrays, ...
and mutating an element may bring significant performance benefits.
This talk presents an interface called MutableArithmetics.
It allows mutable types to implement arithmetic that exploits their mutability, and algorithms to exploit that mutability while still being completely generic.
Moreover, it provides the following additional features:
1. it re-implements part of the Julia standard library on top of the API to allow mutable types to use a more efficient version than the default one.
2. it defines a `@rewrite` macro that rewrites an expression using the standard operations (e.g. `+`, `*`, ...) into code that exploits the mutability of the intermediate values created when evaluating the expression.
JuMP used to have its own API for mutable operations on JuMP expressions, along with its own JuMP-specific implementations of features 1 and 2.
This was refactored into the MutableArithmetics package, generalizing it to arbitrary mutable types.
Starting from JuMP v0.21, JuMP expressions and MOI functions implement the MutableArithmetics API, and the JuMP-specific implementations of features 1 and 2 were removed in favor of the generic versions built in MutableArithmetics on top of that API.
While MutableArithmetics is already used in the released versions of numerous packages (such as JuMP, MathOptInterface, SumOfSquares, Polyhedra, SDDP and MultivariatePolynomials),
and appears to work well and to cover the use cases of many different types and algorithms on these types,
we may still need to modify the API to cover all possible use cases.
During this presentation, we hope to explain our design decisions in a clear and detailed manner so that the Julia community can help us figure out whether there are situations that the API does not cover and how it could be further improved.
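As a minimal sketch of the two features described above, assuming a recent MutableArithmetics release (function names such as `add!!` have changed across versions):

```julia
using MutableArithmetics
const MA = MutableArithmetics

# BigInt is a mutable type: add!! may reuse the storage of its first argument
# instead of allocating a new BigInt for the result.
a = big(2)
a = MA.add!!(a, big(3))              # a == 5, possibly computed in place

# @rewrite turns a standard expression into code that mutates the
# intermediate values created while evaluating it.
x = [big(1), big(2), big(3)]
s = MA.@rewrite x[1] + x[2] + x[3]   # == 6, with fewer allocations
```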
false
https://pretalx.com/juliacon2021/talk/PRFW3N/
https://pretalx.com/juliacon2021/talk/PRFW3N/feedback/
Purple
ExprTools: Metaprogramming from reflection
Lightning talk
2021-07-29T20:20:00+00:00
20:20
00:10
Have you ever had a list of `Method`s, e.g. from the output of `methodswith`, and thought _”I just want to implement all of these, it would be great to use metaprogramming for that”_?
ExprTools.jl has the parts to let you extract the info out of the method table, manipulate it, and then generate the AST you want for the new method you want to define.
Does this access undocumented Julia internals? Absolutely!
Is this well tested? Comprehensively!
Is this a good idea? Who knows!
juliacon2021-9686-exprtools-metaprogramming-from-reflection
Frames Catherine White
en
Sometimes you want to generate definitions for many methods. Consider, for example, implementing the delegation pattern: you have a field of a different type, and you want to overload all methods that accept that field’s type to also accept this new object, and have them simply delegate to calling the method on the field. Ideally this wouldn’t come up, and you would just need to implement a small, well-documented set of methods for an interface. But sometimes things can’t be ideal. Generating overloads from the method table is one way to take a jack-hammer and blast through the problem. But even outside that case it can be useful, as this talk will discuss.
[ExprTools.jl](https://github.com/invenia/ExprTools.jl) was created to hold a more robust version of `splitdef` and `combinedef` from [MacroTools.jl](https://github.com/FluxML/MacroTools.jl).
`splitdef` takes the AST for a method definition and outputs a dictionary of all the parts: name, args, whereparams, body etc. `combinedef` does the reverse: taking such a dictionary, and outputting an AST that declares the method.
`splitdef` is very useful since it both handles different equivalent syntax forms, and makes the key parts accessible in a consistent way.
This makes it easier to write function decorator macros, and also macros that let the user write something that looks like a function but is actually transformed into something else.
This dictionary is also useful, and it would be great if we could define it not from an AST but from a method that has already been defined. We could access all the information we need via reflection. This is exactly what the `signature` function provides.
The `signature` function takes in a `Method` object, which can be obtained from `methods` or `methodswith`, and returns a dictionary like `splitdef` would, except it excludes the body.
Excluding the body is generally not a loss for this kind of generated code anyway, since the user will generally want to fill the body with their own code that calls the method being generated from. One example is generating overloaded operators for overloading-based reverse-mode AD from [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl/)’s `rrule`.
The main alternative for this kind of approach is something along the lines of [Cassette.jl](https://github.com/JuliaLabs/Cassette.jl), which in effect allows the overloading of what it means to call a function. There are three key differences of an ExprTools-based generation from reflection approach over a Cassette-based overdubbing approach.
Overdubbing occurs in a specific dynamically scoped context, whereas method generation applies globally.
A downside of method generation is that it will not detect new methods added after the generation is performed, whereas overdubbing does.
An upside of method generation is that it is just plain Julia code, so it doesn’t break the compiler’s ability to do type inference. The compiler is completely prepared to deal with Julia code. This is (sadly, but demonstrably) not true for Cassette right now.
This talk will spend ~3 minutes covering the basics of ExprTools, with `splitdef` and `combinedef`. It will spend ~4 minutes demonstrating `signature` and the generation of methods from the method tables. It will spend ~1 minute peeking under the covers as to how it works.
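As a minimal sketch of the `splitdef`/`combinedef` workflow described above (assuming ExprTools.jl is installed; the dictionary keys follow its documented API):

```julia
using ExprTools

# Split the AST of a method definition into its named parts.
def = :(f(x, y::Int) = x + y)
d = splitdef(def)       # Dict with keys :head, :name, :args, :body, ...
# d[:name] == :f and d[:args] holds :x and :(y::Int)

# Manipulate the parts, then recombine and evaluate to define a new method.
d[:name] = :g
eval(combinedef(d))
g(1, 2)                 # calls the freshly generated method
```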
false
https://pretalx.com/juliacon2021/talk/FEEV9A/
https://pretalx.com/juliacon2021/talk/FEEV9A/feedback/
BoF/Mini Track
Live Coding: Outreach and Beyond
Birds of Feather
2021-07-29T12:30:00+00:00
12:30
01:30
The goal of this session is to gather people interested in streaming their programming sessions. Streaming code is a particular kind of exercise, and we want to foster the exchange of best practices, tips and thoughts. In particular, we would like to see to what extent Julia streamers have managed to reach an audience beyond the Julia community, how streams have been used as a teaching medium, and how we can improve the formats to make them more accessible to newcomers. The BoF is not limited to streamers; people who watch streams are welcome too.
juliacon2021-9812-live-coding-outreach-and-beyond
/media/juliacon2021/submissions/N7REEK/juliastreamer_9aF75GY.jpg
Jacob Zelko
en
Live streaming is a recent phenomenon that has seen huge growth thanks to services such as Twitch, YouTube Live, and Facebook Live. Although this burgeoning community’s focus is generally on video games, vlogs, and talk shows, a growing niche within it is educational streams. Examples include students inviting audiences to “study with me” sessions or educators hosting ask-me-anything sessions. In particular, one area in this niche that is especially relevant for the Julia community is live coding.
Live coding is where software developers or programmers stream their programming work to a live audience. It can take many shapes: a developer working on an open source project, a coder learning a new language, or an interactive back-and-forth to create a novel application. Live coding works as a give-and-take relationship where streamers get the opportunity to make new connections and the audience gets exposed to new programming styles or learns new skills. Often this works in the other direction as well.
For the Julia community, with the advent of the COVID-19 pandemic, many individuals were suddenly left in a common, but highly unusual, circumstance. As many have experienced and are experiencing, the days of being in an office setting and having passing conversations with colleagues have become somewhat distant memories. Instead, many find themselves at home behind their computers with their only company being family, pets, or the whir of their computer’s fan. In conjunction with this, interest in live-streaming programming within the Julia community has been growing.
In this Birds of a Feather, we want to gather those people who have been live streaming within the Julia community and those interested in live streaming. In this gathering, streamers can share their ideas around best practices, tips and experiences in the streamer community. This could be a strong opportunity for the Julia community to also discuss how to reach and engage with people outside of the Julia community. Furthermore, this BoF could also lead to productive discussions on how to help within the Julia community whether that be in the form of increasing visibility to amazing Julia packages or leading teaching sessions.
Finally, this BoF would also provide an open avenue for individuals who are interested in live streaming to freely ask questions. This could range from questions such as “what is needed to get started as a streamer?” to “how do you build a great community around your stream?” As a result, this can not only increase outreach from the Julia community but also foster new and meaningful connections one could make - especially in the pandemic era.
false
https://pretalx.com/juliacon2021/talk/N7REEK/
https://pretalx.com/juliacon2021/talk/N7REEK/feedback/
BoF/Mini Track
Julia in High-Performance Computing
BoF (45 mins)
2021-07-29T16:30:00+00:00
16:30
00:45
The JuliaHPC community as a group maintains the infrastructure for using Julia in high-performance computing. In this BoF we invite newcomers, application developers, and maintainers to join us for an informal discussion around the state of Julia in HPC.
juliacon2021-9826-julia-in-high-performance-computing
Valentin ChuravyMichael Schlottke-LakemperSimon ByrneCarsten Bauer
en
# Agenda
## Short presentations about ongoing projects
- Ludovic Räss & Sam Omlin: GPU4GEO and Julia HPC development at ETH Zurich
- Simon Byrne: ClimateMachine.jl
- Valentin Churavy: CESMIX-MIT
- Johannes Blaschke: Julia@NERSC
## Roundtable discussion
- Julia in the DOE
- Teaching HPC
- MPI.jl
- Challenges of running Julia at scale
- Deploying Julia (Sysimages/Pkg/Depots/Artifacts/...)
- **Your suggestion**
false
https://pretalx.com/juliacon2021/talk/C3EBJM/
https://pretalx.com/juliacon2021/talk/C3EBJM/feedback/
BoF/Mini Track
GPU programming in Julia BoF
BoF (45 mins)
2021-07-29T17:15:00+00:00
17:15
00:45
This is a BoF to talk about the various GPU programming packages in Julia:
- CUDA.jl
- AMDGPU.jl
- oneAPI.jl
- KernelAbstractions.jl
- GPUArrays.jl
- GPUCompiler.jl
- ...
If you have any thoughts or questions about these packages, or other approaches to GPU programming in Julia, please join this BoF to chat about it!
juliacon2021-9710-gpu-programming-in-julia-bof
Tim BesardJulian P SamarooValentin Churavy
en
false
https://pretalx.com/juliacon2021/talk/RXBMHE/
https://pretalx.com/juliacon2021/talk/RXBMHE/feedback/
BoF/Mini Track
Julia in Private Organizations
Birds of Feather
2021-07-29T19:00:00+00:00
19:00
01:30
Using Julia code within private organizations can encounter challenges not faced in the open-source community. In this BoF, we'll be discussing the unique aspects of using Julia in private organizations and cover topics such as: production deployments, tooling/techniques for teams coding in Julia, and questions regarding transitioning to and adopting Julia within an organization.
juliacon2021-9791-julia-in-private-organizations
Curtis Vogt
en
Every private organization works slightly differently in how it operates and the internal tooling it uses. As Julia users who work in private organizations, we'll use this BoF as an opportunity to discuss the unique challenges we've faced while using Julia within our organizations and how we've solved them. This BoF is suitable both for members of private organizations that are already established Julia users and for advocates pushing for Julia to be adopted.
Discussion points will include:
- Are you using repository hosting besides GitHub? Have you faced any challenges with integrating open-source tools? (e.g. CI tooling, GitHub specific tools)
- Did you face any challenges when setting up a private Julia package registry?
- What tooling do you use to assist with new package registry entries? (e.g. Bots, RegistryCI.jl)
- How do you keep private code up to date with public dependencies? (e.g. major version changes, deprecations, etc.)
- How does Julia fit into your production environment? A service, batch job, etc.
- What cloud infrastructure do you use for running distributed Julia?
- Solutions for containerizing Julia: shared base images, optimizing startup time, etc.
- Procedures for moving closed-source to open-source?
- Advice for adopting Julia within an organization
Hopefully, this BoF will allow different organizations using similar tooling/techniques to connect and work together. The result of this could be an improved workflow experience for these organizations and ideally a much smoother transition for those organizations just starting to adopt Julia.
false
https://pretalx.com/juliacon2021/talk/GWRZPV/
https://pretalx.com/juliacon2021/talk/GWRZPV/feedback/
JuMP Track
The Design of the MiniZinc Modelling Language
Talk
2021-07-29T12:30:00+00:00
12:30
00:30
In this talk, we discuss the design of MiniZinc, a leading Constraint modelling language.
juliacon2021-10874-the-design-of-the-minizinc-modelling-language
Gleb Belov
en
MiniZinc was designed with the aim of becoming a 'standard' Constraint Programming modelling language. As such, it is oriented towards the logical and combinatorial constraints standardized in the Global Constraints Catalogue, but it also supports continuous variables. The most important design criteria were expressiveness, simplicity of practical implementation, and mechanisms for easily plugging in new solvers. The solver interface builds on a low-level language, FlatZinc (playing a role analogous to MPS), and a redefinition scheme for global constraints. The latter enables native handling of globals supported by a given solver, while applying default or solver-specific redefinitions for unsupported ones. Other solver technologies, such as SAT, local search, and MIP, have been interfaced, and several experimental interfaces exist, such as to quantum computing. The modelling system has enabled the annual solver competition MiniZinc Challenge since 2008.
false
https://pretalx.com/juliacon2021/talk/3BBA7L/
https://pretalx.com/juliacon2021/talk/3BBA7L/feedback/
JuMP Track
ConstraintSolver.jl - First constraint solver written in Julia
Talk
2021-07-29T13:00:00+00:00
13:00
00:30
In this talk we discuss ConstraintSolver.jl, a new Julia package to tackle the problem of solving constraint programming problems purely in Julia.
juliacon2021-10881-constraintsolver-jl-first-constraint-solver-written-in-julia
Ole Kröger
en
Constraint programming is used in a variety of fields, ranging from simple puzzle solving to big instances in industry. Currently, Julia does not have a package for constraint programming, and JuMP itself is only beginning to implement the constraints and variable sets needed to support constraint solvers in the future. ConstraintSolver.jl is a new Julia package that tackles the problem of solving constraint programming problems purely in Julia. This has advantages for prototyping new ideas, which is harder in low-level languages like C or C++. Additionally, the solver will be able to solve problems with types other than integers and floating-point numbers; e.g., an integration with Unitful.jl will be possible. Another advantage of a solver written purely in Julia is the ease of using automatic differentiation.
false
https://pretalx.com/juliacon2021/talk/9KTFNJ/
https://pretalx.com/juliacon2021/talk/9KTFNJ/feedback/
JuMP Track
ConstraintProgrammingExtensions.jl
Talk
2021-07-29T13:30:00+00:00
13:30
00:30
ConstraintProgrammingExtensions.jl is a project bringing constraint programming to JuMP. Its main part is a large series of constraints that aim at providing a common interface for constraint-programming solvers. It also consists of a series of bridges that define relationships between those sets (including between high-level constraints such as knapsacks and mathematical-programming formulations) and of a FlatZinc reader-writer to import and export models in that common format.
juliacon2021-10869-constraintprogrammingextensions-jl
Thibaut Cuvelier
en
Constraint programming is a modelling paradigm that has proved to be extremely useful in many real-world scenarios, like computing optimum schedules or vehicle routings. It is often viewed as either a complementary or a competing technology to mathematical programming, trading modelling ease with computational efficiency. Both approaches have seen many developments in terms of modelling language and solvers alike, including in Julia. Even though several constraint-programming solvers are available (or entirely written) in Julia, [JuMP and MathOptInterface](https://jump.dev/) (its solver abstraction layer) do not give access to them in the same, unified way as mathematical programming, though the latest versions of JuMP have been designed to provide great flexibility.
[ConstraintProgrammingExtensions](https://github.com/dourouc05/ConstraintProgrammingExtensions.jl) is currently a one-man project bringing constraint programming to JuMP. Its main part is a large series of sets that aim at providing a common interface for constraint-programming solvers. It also consists of a series of bridges that define relationships between those sets (including between high-level constraints such as knapsacks and mathematical-programming formulations) and of a [FlatZinc](https://www.minizinc.org/) reader-writer to import and export models in that common format, already supported by tens of solvers. As a side effect, ConstraintProgrammingExtensions is also becoming a way to ease modelling for mathematical programming, as high-level constraints can be used with traditional mathematical-programming solvers.
This presentation details the current state of ConstraintProgrammingExtensions, some of its design decisions, and future developments when JuMP and MathOptInterface do not provide sufficient versatility: for instance, several constraint-programming solvers allow graphs as first-class decision variables; also, constraint programming is not restricted by the linearity or the convexity of mathematical expressions, unlike many mathematical-programming solvers.
false
https://pretalx.com/juliacon2021/talk/EHUKWK/
https://pretalx.com/juliacon2021/talk/EHUKWK/feedback/
JuMP Track
Nonlinear programming on the GPU
Talk
2021-07-29T16:30:00+00:00
16:30
00:30
So far, most nonlinear optimization modelers and solvers have primarily targeted CPU architectures. However, with the emergence of heterogeneous computing architectures, leveraging massively parallel accelerators in nonlinear optimization has become crucial for performance. As part of the Exascale Computing Project ExaSGD, we are studying how to efficiently run nonlinear optimization algorithms at exascale using GPU accelerators.
juliacon2021-10863-nonlinear-programming-on-the-gpu
François Pacaud
en
This talk walks through our recent experiences and development efforts. The parallel layout of GPUs requires running as many operations as possible in batch mode, in a massively parallel fashion. We will detail how we have adapted the automatic differentiation, the linear algebra and the optimization solvers to a batch setting and present the different challenges we have addressed. Our efforts have led to the development of different prototypes, each addressing a specific issue on the GPU: ExaPF for batch automatic differentiation, ExaTron as a batch optimization solver, and ProxAL for distributed parallelism. The future research opportunities are manifold for the nonlinear optimization community: how can we leverage new automatic differentiation backends developed in the machine learning community for optimization purposes? How can we exploit the Julia language to develop a vectorized nonlinear optimization modeler targeting massively parallel accelerators?
false
https://pretalx.com/juliacon2021/talk/P8KJSW/
https://pretalx.com/juliacon2021/talk/P8KJSW/feedback/
JuMP Track
MadNLP.jl: A Mad Nonlinear Programming Solver.
Lightning talk
2021-07-29T17:00:00+00:00
17:00
00:10
We present a native-Julia nonlinear programming (NLP) solver MadNLP.jl. This solver implements the filter line-search interior-point method for constrained NLPs; to the best of our knowledge, MadNLP is currently the only native-Julia solver that is capable of handling general nonlinear equality/inequality-constrained optimization problems. MadNLP is interfaced with the algebraic modeling language JuMP.jl, the graph-based modeling language Plasmo.jl, and the NLP data structure NLPModels.jl.
juliacon2021-10866-madnlp-jl-a-mad-nonlinear-programming-solver-
Sungho Shin
en
MadNLP leverages diverse sparse and dense linear algebra routines: UMFPACK, HSL routines, MUMPS, Pardiso, LAPACK, and cuSOLVER. The key feature of MadNLP is its adoption of scalable linear algebra methods: structure-exploiting parallel linear algebra (based on restricted additive Schwarz and Schur complement strategies) and GPU-based linear algebra (cuSOLVER). These methods significantly enhance the scalability of the solver to large-scale problem instances (e.g., long-horizon dynamic optimization, stochastic programs, and dense NLPs). Furthermore, MadNLP exploits Julia's extensibility so that new linear solvers can be added in a plug-and-play manner. In the presentation, we will present benchmark results against other open-source and commercial solvers, as well as results highlighting MadNLP's advanced features. Our results suggest that (i) MadNLP has speed and robustness comparable to Ipopt/KNITRO when tested against the standard benchmark test set (CUTEst); (ii) MadNLP with structure-exploiting parallel linear algebra can achieve a speed-up of a factor of 3 when solving large-scale sparse nonlinear programs; and (iii) GPU acceleration achieves a speed-up of a factor of 10 when solving dense nonlinear optimization problems. The presentation will conclude with a future development roadmap, including the implementation of distributed-memory parallelism and a pure-GPU solver.
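A minimal sketch of how MadNLP plugs into JuMP through its MathOptInterface wrapper, as mentioned in the abstract (the model below is an illustrative constrained Rosenbrock problem, not one of the benchmark instances; it assumes JuMP and MadNLP are installed):

```julia
using JuMP, MadNLP

# Build a small nonlinear program and hand it to MadNLP's
# filter line-search interior-point method via MOI.
model = Model(MadNLP.Optimizer)
@variable(model, x >= 0, start = 0.5)
@variable(model, y >= 0, start = 0.5)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)  # Rosenbrock
@constraint(model, x + y <= 10)                           # inactive at optimum
optimize!(model)
value(x), value(y)   # the unconstrained minimizer (1, 1) is feasible here
```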
false
https://pretalx.com/juliacon2021/talk/A3Z33C/
https://pretalx.com/juliacon2021/talk/A3Z33C/feedback/
JuMP Track
Nonconvex.jl
Lightning talk
2021-07-29T17:10:00+00:00
17:10
00:10
Nonconvex.jl is a package that aims to interface all the major nonlinear and mixed integer nonlinear programming packages in Julia using a function-based API. Zygote.jl is used for automatic differentiation (AD) and ChainRules.jl can be used to define analytic gradients or custom adjoint rules for functions. Ipopt.jl, NLopt.jl, Percival.jl and Juniper.jl are some of the packages wrapped in Nonconvex.jl as of the writing of this abstract.
juliacon2021-10865-nonconvex-jl
Mohamed Tarek
en
The method of moving asymptotes is also natively implemented in the package. The first-order augmented Lagrangian algorithm implemented in Percival.jl is particularly suitable for the AD-based approach, because efficient adjoint rules for block constraints can be used when calculating the gradient of the augmented Lagrangian, instead of computing the entire Jacobian of the constraint functions. The nice thing about having a function-based API is that registering functions with JuMP.jl and splatting inputs are no longer needed, thus simplifying the nonlinear and mixed-integer nonlinear optimization interface. Future work includes using ModelingToolkit.jl to reverse-engineer the objective and constraint functions, generating mathematical expressions where possible and thus allowing the use of expression-based nonlinear and mixed-integer nonlinear solvers such as Alpine.jl.
false
https://pretalx.com/juliacon2021/talk/MVCXHB/
https://pretalx.com/juliacon2021/talk/MVCXHB/feedback/
JuMP Track
NOMAD.jl
Lightning talk
2021-07-29T17:20:00+00:00
17:20
00:10
The NOMAD software is a derivative-free solver that implements the mesh adaptive direct search algorithm. Its purpose is to solve constrained problems where the objective and the functions defining the constraints are outputs of a program treated as a black box. This talk presents the NOMAD.jl interface to Julia, linked to the JuMP modeling language. Some applications will also be presented.
juliacon2021-10879-nomad-jl
Ludovic Salomon
en
false
https://pretalx.com/juliacon2021/talk/UZJWTT/
https://pretalx.com/juliacon2021/talk/UZJWTT/feedback/
JuMP Track
Linearly Constrained Separable Optimization
Talk
2021-07-29T17:30:00+00:00
17:30
00:30
Many optimization problems involve minimizing a sum of univariate functions, each with a different variable, subject to coupling constraints. We present [PiecewiseQuadratics.jl](https://github.com/JuliaFirstOrder/PiecewiseQuadratics.jl) and [SeparableOptimization.jl](https://github.com/JuliaFirstOrder/SeparableOptimization.jl), two Julia packages for solving such problems when these univariate functions in the objective are piecewise-quadratic.
juliacon2021-10878-linearly-constrained-separable-optimization
Ellis Brown
en
***Note:*** *SeparableOptimization.jl was named "LCSO.jl" at the time of the presentation recording.*
[PiecewiseQuadratics.jl](https://github.com/JuliaFirstOrder/PiecewiseQuadratics.jl) allows for the representation and manipulation of such functions, including the computation of the proximal operator or the convex envelope. [SeparableOptimization.jl](https://github.com/JuliaFirstOrder/SeparableOptimization.jl) solves the problem of minimizing a sum of piecewise-quadratic functions subject to affine equality constraints by applying the Alternating Direction Method of Multipliers (ADMM). This allows us to quickly solve problems even when the univariate functions are very complicated. We demonstrate this with a portfolio construction example, in which the univariate functions represent the US tax laws for realized capital gains.
false
https://pretalx.com/juliacon2021/talk/FGUEAM/
https://pretalx.com/juliacon2021/talk/FGUEAM/feedback/
JuMP Track
NExOS.jl for Nonconvex Exterior-point Operator Splitting
Talk
2021-07-29T19:00:00+00:00
19:00
00:30
NExOS.jl is a Julia package that implements the Nonconvex Exterior-point Operator Splitting (NExOS) algorithm (https://arxiv.org/pdf/2011.04552.pdf). The package is tailored for minimizing a convex cost function over a nonconvex constraint set, where projection onto the constraint set is single-valued around local minima.
juliacon2021-10864-nexos-jl-for-nonconvex-exterior-point-operator-splitting
Shuvomoy Das Gupta
en
We consider the problem of minimizing a convex cost function over a nonconvex constraint set, where projection onto the constraint set is single-valued around points of interest. A wide range of nonconvex learning problems have this structure including (but not limited to) sparse and low-rank optimization problems.
By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying an outer iteration operator splitting algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero.
NExOS.jl has been extensively tested on many instances from a wide variety of learning problems. In spite of being general-purpose, NExOS is able to compute high-quality solutions very quickly and is competitive with specialized algorithms.
false
https://pretalx.com/juliacon2021/talk/SWBYRL/
https://pretalx.com/juliacon2021/talk/SWBYRL/feedback/
JuMP Track
Global constrained nonlinear optimisation with interval methods
Talk
2021-07-29T19:30:00+00:00
19:30
00:30
We will present recent work in progress on guaranteed methods for inequality-constrained *global* nonlinear optimization in Julia. Using methods based on interval arithmetic allows us to guarantee (prove) that we return the true global minimum and minimizers for inequality-constrained optimization problems in low dimensions.
juliacon2021-10873-global-constrained-nonlinear-optimisation-with-interval-methods
David P. Sanders
en
Interval arithmetic provides a computationally cheap way to compute an over-estimate of the range of a function over an input set. These estimates are guaranteed to be correct (mathematically rigorous), even though the computations are done using floating-point arithmetic, thanks to directed rounding.
This kind of range bounding can be used to design a conceptually-simple algorithm for guaranteed unconstrained global optimization, as in the talk presented at JuMP-dev Chile in 2019.
In this talk we show how to extend this to constrained optimization.
First we show how both the objective function and constraints can be modelled using symbolic expressions from the Symbolics.jl library. Based on these symbolic expressions we have a new implementation of interval constraint propagation, as implemented in the ReversePropagation.jl library, including common subexpression elimination.
One main difficulty in interval-based inequality-constrained optimization is deciding when a given box is feasible, i.e. satisfies all of the constraints. We have implemented what we believe to be a novel method to do so.
This is an extension to inequality-constrained optimization of th
false
https://pretalx.com/juliacon2021/talk/BR7WG8/
https://pretalx.com/juliacon2021/talk/BR7WG8/feedback/
Green
Calibration analysis of probabilistic models in Julia
Talk
2021-07-30T12:30:00+00:00
12:30
00:30
Calibrated probabilistic models ensure that predictions are consistent with empirically observed outcomes, and hence such models provide reliable uncertainty estimates for decision-making. This is particularly important in safety-critical applications. We present Julia packages for analyzing calibration of general probabilistic predictive models, beyond commonly studied classification models. Additionally, our framework makes it possible to perform statistical hypothesis tests of calibration.
juliacon2021-9814-calibration-analysis-of-probabilistic-models-in-julia
David Widmann
en
**The Pluto notebook of this talk is available at https://talks.widmann.dev/2021/07/calibration/**
The talk focuses on:
- introducing/explaining calibration of probabilistic models
- discussing/showing how users can apply the offered evaluation measures and hypothesis tests
- highlighting the relation to the Julia ecosystem, in particular to packages such as KernelFunctions and HypothesisTests and interfaces via pyjulia (Python) and JuliaCall (R)
Probabilistic predictive models, including Bayesian and non-Bayesian models, output probability distributions of targets that try to capture uncertainty inherent in prediction tasks and modeling. In particular in safety-critical applications, it is important for decision-making that the model predictions actually represent these uncertainties in a reliable, meaningful, and interpretable way.
A calibrated model provides such guarantees. Loosely speaking, if the same prediction were obtained repeatedly, calibration ensures that in the long run the empirical frequencies of observed outcomes equal this prediction. Note, though, that calibration alone is usually not sufficient: a constant model that always outputs the marginal distribution of targets, independently of the inputs, is calibrated but probably not very useful.
Commonly, calibration is analyzed for classification models, often also in a reduced binary setting that focuses on the most-confident predictions only. Recently, we published a framework for calibration analysis of general probabilistic predictive models, including but not limited to classification and regression models. We implemented the proposed methods for calibration analysis in different Julia packages such that users can incorporate them easily in their evaluation pipeline.
[CalibrationErrors.jl](https://github.com/devmotion/CalibrationErrors.jl) contains estimators of different calibration measures such as the expected calibration error (ECE) and the squared kernel calibration error (SKCE). The estimators of the SKCE are consistent, and both biased and unbiased estimators exist. The package uses kernels from KernelFunctions.jl, and hence many standard kernels are supported automatically.
[CalibrationTests.jl](https://github.com/devmotion/CalibrationTests.jl) implements statistical hypothesis tests of calibration, so-called calibration tests. Most of these tests are based on the SKCE and can be applied to any probabilistic predictive model.
Finally, the package [CalibrationErrorsDistributions.jl](https://github.com/devmotion/CalibrationErrorsDistributions.jl) extends calibration analysis to models that output probability distributions from Distributions.jl. Currently, Gaussian distributions, Laplace distributions, and mixture models are supported.
To increase the adoption of these calibration evaluation techniques by the statistics and machine learning communities, we also published interfaces to the Julia packages in [Python](https://github.com/devmotion/pycalibration) and [R](https://github.com/devmotion/rcalibration).
### References
Widmann, D., Lindsten, F., & Zachariah, D. (2019). Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (pp. 12257–12267).
Widmann, D., Lindsten, F., & Zachariah, D. (2021). Calibration tests beyond classification. International Conference on Learning Representations (ICLR) 2021.
false
https://pretalx.com/juliacon2021/talk/8BWJXP/
https://pretalx.com/juliacon2021/talk/8BWJXP/feedback/
Green
Julia Developer Survey Results
Lightning talk
2021-07-30T13:00:00+00:00
13:00
00:10
Results from the annual Julia Developer survey will be shared.
juliacon2021-11164-julia-developer-survey-results
Viral B. Shah
en
false
https://pretalx.com/juliacon2021/talk/WDFZWG/
https://pretalx.com/juliacon2021/talk/WDFZWG/feedback/
Green
SciML for Structures: Predicting Bridge Behavior
Lightning talk
2021-07-30T13:10:00+00:00
13:10
00:10
We study the utility of a scientific machine learning (SciML) model for predicting structural responses such as bridge deflections and stresses. The SciML model is compared with a data-driven neural network model for a synthetic and a real-world case. In both cases, we rely on the Julia algorithmic differentiation ecosystem to efficiently fit the models. Our preliminary results indicate the superiority of the SciML model over the data-driven one in both interpolation and extrapolation.
juliacon2021-9630-sciml-for-structures-predicting-bridge-behavior
/media/juliacon2021/submissions/WWP7DR/SciModel-01_ol5RxM1.png
Axel Larsson
en
Structures in civil engineering are traditionally modelled using the finite element (FE) method. Although it is an extremely successful method, it has some shortcomings: (i) it can require substantial human effort to build complex models; and (ii) it can be difficult to combine with measurement data in order to increase model prediction accuracy. A way to overcome these shortcomings is to use data-driven machine learning approaches; however, these may require a prohibitive amount of measurement data and still perform poorly in extrapolation. Combining the machine learning model with scientific knowledge, i.e. scientific machine learning (SciML), may offer a practically tenable solution to the above challenges.
This talk aims to explore to what extent a scientific machine learning model can predict the structural response of a twin girder bridge in comparison with a data-driven machine learning model. The two approaches are compared with regards to prediction accuracy as well as the amount of data needed to achieve a particular accuracy. The comparison is made by using a synthetic case and a real-world case with field measurements.
The scientific machine learning model requires a formulation of the physics, which can be done in different manners. In engineering practice the structural behavior of bridges is typically described/predicted by FE models. For the SciML physics formulation we selected a simplified 2D beam model made up of 4-degrees-of-freedom linear elastic beam elements. This simple 2D-model is chosen in order to explore a very fast modelling workflow, potentially expandable to a digital tool for quick structural assessment. The 2D model is combined with a neural network in order to approximate the 3D bridge behavior. The neural network achieves this by representing a transverse load distribution function that describes what percentage of a concentrated load at a certain location is carried by the modelled 2D girder. As the bridge is composed of two identical girders, the rest of the load is assumed to be carried by the second girder. We do not take shear lag effects into account in our physics formulation.
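The building block described above can be made concrete with the textbook stiffness matrix of a 4-degree-of-freedom Euler-Bernoulli beam element (transverse deflection and rotation at each node). This is a standard formulation shown for illustration, not the authors' CALFEM.jl code:

```julia
# Stiffness matrix of a 4-DOF linear elastic (Euler-Bernoulli) beam element.
# DOFs are [v1, θ1, v2, θ2]: deflection and rotation at each end node.
# Textbook formulation, shown to illustrate the element type described above.
function beam_stiffness(E, I, L)
    return E * I / L^3 * [ 12.0    6L   -12.0    6L
                            6L   4L^2    -6L   2L^2
                          -12.0   -6L    12.0   -6L
                            6L   2L^2    -6L   4L^2 ]
end

# Example: steel beam, E = 210 GPa, I = 1e-6 m^4, L = 2 m
K = beam_stiffness(210e9, 1e-6, 2.0)
```

Sanity checks: the matrix is symmetric, and rigid-body motions (pure translation `[1, 0, 1, 0]` and pure rotation `[0, 1, L, 1]`) produce zero internal forces.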
The loss for the SciML model is calculated in three steps: first, the load on the 2D beam is determined by the neural network; second, the structural system is solved for this predicted load and sensor position; finally, the loss is calculated from the difference between the predicted structural responses and the measured ones.
For the data-driven model, a feedforward neural network tries to directly predict the structural response of the 2D girder for a certain sensor and load location. The difference between this prediction and the measured data is used to calculate the loss for the training of the neural network.
We chose to implement the SciML model in Julia because of its many attractive features, such as multiple dispatch and packages for automatic differentiation (e.g. Zygote.jl) and machine learning (e.g. Flux.jl). For the FE package, we wanted a lightweight, hackable package that would be easy to get started with, in order to provide a fast workflow. It was also desirable to have an FE package written entirely in Julia, in order to fully utilize Zygote for backpropagating through the FE solutions. We chose CALFEM.jl, a Julia port of the CALFEM package, originally developed in the late 1970s at Lund University in Sweden and subsequently improved over the decades, today typically used for teaching simple FE programming. CALFEM.jl lacks support for automatic differentiation, but because of the many favorable features of Julia, it was quite a simple task to implement AD support for the components that we needed. The source code of the analysis will be made open to the public.
The results show that a SciML approach can accurately predict structural behavior of the bridge using far less data points than a purely data driven approach. Moreover, the SciML approach is much better in extrapolation than the purely data-driven one. Our results show that at the moment purely data-driven approaches are impractical to predict structural responses and SciML seems to be a very promising addition to the toolbox of structural modelling approaches.
false
https://pretalx.com/juliacon2021/talk/WWP7DR/
https://pretalx.com/juliacon2021/talk/WWP7DR/feedback/
Green
Simulating a public transportation system with OpenStreetMapX.jl
Lightning talk
2021-07-30T13:20:00+00:00
13:20
00:10
We will show how to perform modeling and simulation of an urban network using the OpenStreetMapX.jl package. With actual Toronto data we will show how the library can be used for commuter routing, including sidewalks and public transportation. We represent the city’s urban space as a LightGraphs.jl strongly connected, directed graph whose vertices are located at geographic coordinates. Additionally, we will also demonstrate a simulation model explaining the role of public transportation in the spread of a virus.
juliacon2021-9704-simulating-a-public-transportation-system-with-openstreetmapx-jl
/media/juliacon2021/submissions/NVSXHU/sim_state4_mfZYtLf.png
Przemysław Szufel
en
*Co-authors: Nykyta Polituchyi, Kinga Siuta, Paweł Prałat*
The [OpenStreetMapX.jl](https://github.com/pszufe/OpenStreetMapX.jl) package is capable of parsing [*.osm](https://wiki.openstreetmap.org/wiki/OSM_file_formats)-formatted data from the [OpenStreetMap.org](https://www.openstreetmap.org/) project. This data can be subsequently utilized to extract information about a city’s POIs (points of interest), measure actual distances, perform routing, and build numerical simulation models that make it possible to understand the dynamics of a city. These capabilities will be illustrated with a map of Toronto, showing how to extend the OSM data with other sources in order to route beyond cars and sidewalks and model an actual public transportation network.
In this presentation two interconnected applications of the OpenStreetMapX.jl package will be presented. Firstly, mixed routing combining different means of transportation will be presented and discussed, showing how different Julia libraries can work together towards a common goal (including [OpenStreetMapXPlot](https://github.com/pszufe/OpenStreetMapXPlot.jl), [LightGraphs](https://github.com/JuliaGraphs/LightGraphs.jl), PyCall, Plots, DataFrames and others). Secondly, an agent-based simulation of a public transportation system will be discussed. We will show how to model and measure the impact of availability and frequency of public transportation on decisions made by commuters and, subsequently, its contribution towards spreading the pandemic.
The presentation is accompanied by a Jupyter notebook that has been available on the [OpenStreetMapX.jl GitHub project website](https://github.com/pszufe/OpenStreetMapX.jl) since the first day of JuliaCon 2021.
In summary, in this talk the following areas will be discussed:
- processing of OpenStreetMap data in Julia to obtain graph structures for processing with LightGraphs.jl
- visualizing graphs, maps and spatial data with OpenStreetMapXPlot.jl (GR, PyPlot backends) as well as integration with Leaflet via folium and PyCall
- building animations of a city using OpenStreetMapXPlot.jl combined with the `Plots.@animate` macro
- using Julia to augment OSM map data with external sources in order to build a routing mechanism that can include public transportation (metro, streetcars)
- combining all of the above into an agent-based simulation that can be used to model how the frequency and availability of a public urban transportation system contribute to the development of a pandemic
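The routing step in the list above can be illustrated with a minimal shortest-path computation on a toy weighted directed graph in plain Julia. This is a didactic sketch; OpenStreetMapX.jl instead builds LightGraphs.jl graphs from `*.osm` data and uses their routing algorithms:

```julia
# Minimal Dijkstra shortest-path sketch on a weighted directed graph,
# represented as an adjacency list of (neighbor, distance) pairs.
# Illustrative only; real city routing uses LightGraphs.jl on OSM graphs.
function dijkstra(adj::Vector{Vector{Tuple{Int,Float64}}}, src::Int)
    dist = fill(Inf, length(adj))
    dist[src] = 0.0
    visited = falses(length(adj))
    for _ in 1:length(adj)
        # pick the unvisited vertex with the smallest tentative distance
        u = 0; best = Inf
        for v in 1:length(adj)
            if !visited[v] && dist[v] < best
                best = dist[v]; u = v
            end
        end
        u == 0 && break
        visited[u] = true
        for (v, w) in adj[u]   # relax outgoing edges
            dist[v] = min(dist[v], dist[u] + w)
        end
    end
    return dist
end

# Toy network: 1 → 2 → 3 plus a longer direct edge 1 → 3
adj = [[(2, 1.0), (3, 5.0)], [(3, 1.0)], Tuple{Int,Float64}[]]
dijkstra(adj, 1)  # [0.0, 1.0, 2.0]
```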
*The research is financed by a NSERC, Canada, “Alliance COVID-19” grant titled: "COVID-19: Agent-based framework for modelling pandemics in urban environment”.*
false
https://pretalx.com/juliacon2021/talk/NVSXHU/
https://pretalx.com/juliacon2021/talk/NVSXHU/feedback/
Green
JuliaSim: Machine Learning Accelerated Modeling and Simulation
Talk
2021-07-30T13:30:00+00:00
13:30
00:30
Julia is known for its speed, but how can you keep making things faster after all of the standard code optimization tricks run out? The answer is machine learning reduced or approximate models. JuliaSim is an extension to the Julia SciML ecosystem for automatically generating machine learning surrogates which accurately reproduce model behavior.
juliacon2021-9219-juliasim-machine-learning-accelerated-modeling-and-simulation
Chris Rackauckas
en
Julia is known for its speed, but how can you keep making things faster after all of the standard code optimization tricks run out? The answer is machine learning reduced or approximate models. JuliaSim is an extension to the Julia SciML ecosystem for automatically generating machine learning surrogates which accurately reproduce model behavior. In this talk we will showcase how you can take your existing ModelingToolkit.jl models and automate the model order reduction of its components. By hooking into the hierarchical modeling ecosystem, this allows for using the same surrogate across many models without requiring retraining. We will show the benefits of this process on energy efficient building design, which has been accelerated by orders of magnitude over the Dymola Modelica implementation, by using neural surrogatized HVAC models. We will demo simultaneous translation and acceleration of components designed outside of Julia through JuliaSim's ability to take in Functional Mock-up Units (FMUs) from Modelica and Simulink, along with domain-specific modeling definitions like SPICE netlists of electrical circuits and Pumas pharmacometric models. Similarly, this system allows for generating digital twins of real objects from their measurements, allowing one to quickly incorporate components with less physical understanding directly through their data. We will show a JuliaHub-based parallelized training platform that allows offloading the training process to the cloud. This will allow for engineers to pull pre-accelerated models from the ever growing JuliaSim Model Store directly into their Julia-based designs for fast exploration. Together this will leave the audience ready to integrate ML-accelerated modeling and simulation tools into their workflows.
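The surrogate idea itself can be illustrated independently of JuliaSim: sample an "expensive" model, fit a cheap approximation, and evaluate the approximation instead. The sketch below uses a least-squares cubic purely as a stand-in; JuliaSim trains neural and other ML surrogates, not polynomials:

```julia
# Toy illustration of surrogate modeling: replace a costly model with a
# cheap approximation fit to its samples. A least-squares cubic stands in
# for the ML surrogates (this is NOT the JuliaSim API or method).
expensive(x) = sin(3x) + 0.5x            # stand-in for a costly simulation

xs = range(0, 1; length=20)              # sample the expensive model
V  = [x^p for x in xs, p in 0:3]         # Vandermonde design matrix
c  = V \ expensive.(xs)                  # least-squares polynomial fit

surrogate(x) = sum(c[p + 1] * x^p for p in 0:3)  # cheap replacement model
```

After fitting, `surrogate` can be evaluated many times at a fraction of the cost of `expensive`, which is the essential trade the talk describes at much larger scale.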
false
https://pretalx.com/juliacon2021/talk/ETY3B7/
https://pretalx.com/juliacon2021/talk/ETY3B7/feedback/
Green
Keynote (Soumith Chintala)
Keynote
2021-07-30T14:30:00+00:00
14:30
00:45
Keynote
juliacon2021-11702-keynote-soumith-chintala-
en
false
https://pretalx.com/juliacon2021/talk/S3RH8Z/
https://pretalx.com/juliacon2021/talk/S3RH8Z/feedback/
Green
Sponsor talk - RelationalAI
Keynote
2021-07-30T15:15:00+00:00
15:15
00:10
Sponsor talk - RelationalAI
juliacon2021-11705-sponsor-talk-relationalai
en
false
https://pretalx.com/juliacon2021/talk/8CMRGC/
https://pretalx.com/juliacon2021/talk/8CMRGC/feedback/
Green
The state of DataFrames.jl
Keynote
2021-07-30T15:25:00+00:00
15:25
00:15
In this talk I discuss what has recently changed in DataFrames.jl, what is the current state of the package, and what are our plans for the future.
juliacon2021-11703-the-state-of-dataframes-jl
/media/juliacon2021/submissions/VSMCQG/juliadata_V7UW6ot.png
Bogumił Kamiński
en
false
https://pretalx.com/juliacon2021/talk/VSMCQG/
https://pretalx.com/juliacon2021/talk/VSMCQG/feedback/
Green
Sponsor talk - JuliaComputing
Keynote
2021-07-30T15:40:00+00:00
15:40
00:15
Sponsor talk - JuliaComputing
juliacon2021-11704-sponsor-talk-juliacomputing
en
false
https://pretalx.com/juliacon2021/talk/AX3VYR/
https://pretalx.com/juliacon2021/talk/AX3VYR/feedback/
Green
Closing remarks
Keynote
2021-07-30T15:55:00+00:00
15:55
00:05
Closing remarks
juliacon2021-11707-closing-remarks
en
false
https://pretalx.com/juliacon2021/talk/VJEVMQ/
https://pretalx.com/juliacon2021/talk/VJEVMQ/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-30T18:00:00+00:00
18:00
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite
juliacon2021-11886-gathertown-social-break
en
false
https://pretalx.com/juliacon2021/talk/9TYSM9/
https://pretalx.com/juliacon2021/talk/9TYSM9/feedback/
Green
Introducing Chemellia: Machine Learning, with Atoms!
Talk
2021-07-30T19:00:00+00:00
19:00
00:30
In this talk, I introduce Chemellia: a machine learning ecosystem (built on Flux.jl) designed for chemistry and materials science problems involving molecules, crystals, surfaces, etc. I will focus on two packages I have developed: first, ChemistryFeaturization, which allows customizable and invertible featurization of atomic systems. The second, AtomicGraphNets, implements graph neural network models tailored to atomic graphs, and substantially outperforms comparable Python packages.
juliacon2021-9811-introducing-chemellia-machine-learning-with-atoms-
/media/juliacon2021/submissions/T7UFDU/biglogo_Z7ynDIR.png
Rachel Kurchin
en
Machine learning is a promising approach in science and engineering for “filling the gaps” in modeling, particularly in cases where substantial volumes of training data are available. These techniques are becoming increasingly popular in the chemistry and materials science communities, as evidenced by the popularity of Python packages such as DeepChem and matminer. Clearly, there are many potential benefits to building, training, and running such models in Julia, including improved performance, better code readability, and perhaps most importantly, a multitude of prospects for composability with packages from the broader SciML ecosystem, allowing integration with packages for differential equation solving, sensitivity analysis, and more.
In this talk, I introduce [Chemellia](https://github.com/chemellia): an ecosystem for machine learning on atomic systems based on Flux.jl. In particular, I will focus on two packages I have been developing that will be core to Chemellia. [ChemistryFeaturization](https://github.com/Chemellia/ChemistryFeaturization.jl) represents a novel paradigm in data representation of molecules, crystals, and more. It defines flexible types for features associated with individual atoms, pairs of atoms, etc., as well as for representing featurized structures in the form of, for example, a crystal graph (the AtomGraph type, which dispatches the appropriate set of functions so that all of the LightGraphs analysis capabilities “just work”). It also implements an easily extensible set of modular featurization schemes to create inputs for a variety of models, graph-based and otherwise. A core design principle of the package is that all featurized data types carry the requisite metadata to “decode” their features back to human-readable values.
[AtomicGraphNets](https://github.com/Chemellia/AtomicGraphNets.jl) provides a Julia implementation of the increasingly popular crystal graph convolutional neural net model architecture that trains and runs nearly an order of magnitude faster than the Python implementation, and requires fewer trainable parameters to achieve the same accuracy on benchmark tasks due to a more efficient and expressive convolutional operation. The layers provided by this package can be easily combined into other architectures using Flux’s utility functions such as Chain and Parallel.
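The core operation of a graph convolution as described above can be sketched in a few lines: each atom's feature vector is updated from its neighbors through a normalized adjacency matrix, `X' = relu.(Â * X * W)`. This is a generic illustration of the technique, not the AtomicGraphNets layer API:

```julia
using LinearAlgebra

# Sketch of a single graph-convolution step on atom features:
# add self-loops, row-normalize the adjacency matrix, mix neighbor
# features, and apply a learned weight matrix plus nonlinearity.
# Generic illustration only, not the AtomicGraphNets.jl layer.
relu(x) = max(x, 0.0)

function graphconv(A::Matrix{Float64}, X::Matrix{Float64}, W::Matrix{Float64})
    Â = A + I                  # add self-loops so each atom sees itself
    d = vec(sum(Â; dims=2))    # degree of each node
    Â = Â ./ d                 # row-normalize
    return relu.(Â * X * W)    # aggregate neighbors, transform, activate
end

# Two connected atoms with one-hot features and an identity weight matrix
A = [0.0 1.0; 1.0 0.0]
X = [1.0 0.0; 0.0 1.0]
W = [1.0 0.0; 0.0 1.0]
graphconv(A, X, W)  # each atom's features become the mean over {self, neighbor}
```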
We have some great summer student developers working on these packages now and would welcome further community feedback and contributions!
false
https://pretalx.com/juliacon2021/talk/T7UFDU/
https://pretalx.com/juliacon2021/talk/T7UFDU/feedback/
Green
Simulating Chemical Kinetics with ReactionMechanismSimulator.jl
Talk
2021-07-30T19:30:00+00:00
19:30
00:30
Understanding many complex chemical processes requires the study of large chemical mechanisms that can involve thousands of species. We present ReactionMechanismSimulator.jl, a highly extensible package that can be used to simulate, calculate sensitivities for, analyze, and visualize a wide variety of kinetic systems and reactors, from gas-phase ignition to liquid oxidation to electrocatalysis. We present benchmarks against alternative software and our extensive mechanism analysis toolkit.
juliacon2021-9658-simulating-chemical-kinetics-with-reactionmechanismsimulator-jl
/media/juliacon2021/submissions/ME7JE9/rms-logo-medium_VcsaXfc.png
Matthew S Johnson
en
Large chemical kinetic systems are important in many fields, including atmospheric chemistry, combustion, pyrolysis, polymers, oxidation, catalysis and electrocatalysis. Traditional C++ and Fortran tools for simulating these systems tend to be difficult to extend, have difficulty integrating modern numerical techniques such as automatic differentiation and adjoint sensitivities, and have outdated or lacking mechanism analysis tools. We present [ReactionMechanismSimulator.jl](https://github.com/ReactionMechanismGenerator/ReactionMechanismSimulator.jl), a Julia package for simulating and analyzing kinetic systems.
ReactionMechanismSimulator.jl was designed with extension in mind. Its parser can automatically parse and use newly added kinetic, thermodynamic, phase and domain models as soon as the associated structure is defined, with no other code modifications. In addition to analytic Jacobians for common systems, it provides automatic and symbolic Jacobians through ForwardDiff.jl and ModelingToolkit.jl. Forward and adjoint sensitivity analyses are implemented using Julia’s SciML toolkit. ReactionMechanismSimulator.jl includes a suite of molecular-structure-aware plotting and flux diagram generation tools that facilitate efficient analysis of kinetic mechanisms.
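The kind of system described above can be caricatured with a two-step mechanism. The sketch below integrates A → B → C with plain explicit Euler purely for illustration; RMS instead assembles the full ODE system and hands it to the SciML solvers:

```julia
# Toy two-step first-order kinetics A -> B -> C, integrated with explicit
# Euler. Illustrative only: RMS builds the real (often stiff, thousands of
# species) ODE system and uses DifferentialEquations.jl solvers instead.
function integrate(k1, k2; dt=1e-3, T=10.0)
    A, B, C = 1.0, 0.0, 0.0          # initial concentrations
    for _ in 1:round(Int, T / dt)
        rA = k1 * A                   # rate of A -> B
        rB = k2 * B                   # rate of B -> C
        A -= rA * dt
        B += (rA - rB) * dt
        C += rB * dt
    end
    return A, B, C
end

A, B, C = integrate(1.0, 0.5)         # k1 = 1.0, k2 = 0.5 (1/s)
```

Mass is conserved at every step by construction, and for these rate constants nearly all of A ends up as C by t = 10.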
false
https://pretalx.com/juliacon2021/talk/ME7JE9/
https://pretalx.com/juliacon2021/talk/ME7JE9/feedback/
Green
Clapeyron.jl: An Extensible Implementation of Equations of State
Lightning talk
2021-07-30T20:00:00+00:00
20:00
00:10
The implementation of thermodynamic equations of state for physical property prediction (density, heat capacity, enthalpy, phase fractions, etc) is traditionally an esoteric process. With Julia’s clean syntax and efficient multiple-dispatch system, it is possible to produce extremely lucid code without trade-offs in efficiency, while also being infinitely extensible. We hope to show that Julia is a powerful tool that can transform the art of thermodynamic modelling in both academia and industry.
juliacon2021-9502-clapeyron-jl-an-extensible-implementation-of-equations-of-state
/media/juliacon2021/submissions/7PVG8Z/OpenSAFT_51tCFCs.png
Paul YewPierre WalkerAndrés Riedemann
en
Thermodynamic models represent a key tool for a variety of applications; this includes the study of complex systems (electrolytes, polymers, pharmaceuticals, etc.), process modelling and molecular design. However, it is not uncommon for thermodynamic models to involve hundreds of different components, especially with the more modern equations of state like those built from Statistical Associating Fluid Theory (SAFT), whose ability to model complex phenomena (such as hydrogen bonding and London dispersion interactions) comes at the cost of complicated mathematical formulation. Implementations are often abstruse, if they are open to the public at all, which is likely the main reason for the high barrier to entry into the field. Beyond those mathematical functions, it is also an exercise in working out the physical properties by exploiting some thermodynamic relations, which may involve the use of highly non-linear solvers for problems with near-singular Jacobians, and solving for the global minima of a non-convex, non-linear problem. The actual execution tends to be application-specific and difficult to extend, even if one had a full understanding of the procedures that are traditionally written in FORTRAN.
Enter Julia, a language that seems to provide the most natural realisation of every step of this process. OpenSAFT is a framework that makes it easy to build SAFT-type (or any free-energy-based) models such that researchers and enthusiasts alike will be able to focus on the actual thermodynamics and algorithms without worrying about the implementation. With the Julia culture that completely embraces Unicode identifiers and terse syntax for mathematical operations, we are able to create nearly one-to-one translations of the mathematical expressions in the literature to code, removing the layer of obfuscation that usually appears when writing high-performance code.
Differential programming is a concept that is extensively used in modern statistical-learning tools, but is still relatively unknown to a lot of the scientific community. We are now able to trivially obtain any-order derivatives of the Helmholtz free energy function, instead of having to work out the corresponding expressions for each model. Suddenly, it all becomes a plug-and-play solution where the user could just write out the model equations, and have OpenSAFT seamlessly obtain all the relevant properties. By careful selection of parameter types, nearly every part of the code can be easily modified or extended so that users will be able to take direct control of the solvers if necessary. This allows people to easily pry into the inner workings of thermodynamic equations of state and study how they can be set up and used.
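The idea of obtaining thermodynamic properties as derivatives of a single Helmholtz energy function can be demonstrated with a minimal forward-mode dual number. This is a self-contained sketch of the technique only: the actual package uses ForwardDiff.jl, and the ideal-gas Helmholtz energy below is a stand-in model, not a SAFT equation of state:

```julia
# Minimal forward-mode dual numbers, illustrating how property derivatives
# fall out of a single Helmholtz energy function. A sketch only: the real
# implementation relies on ForwardDiff.jl and full equations of state.
struct Dual <: Number
    val::Float64
    der::Float64
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.:-(a::Dual) = Dual(-a.val, -a.der)
Base.log(a::Dual) = Dual(log(a.val), a.der / a.val)
Base.convert(::Type{Dual}, x::Real) = Dual(float(x), 0.0)
Base.promote_rule(::Type{Dual}, ::Type{<:Real}) = Dual

derivative(f, x) = f(Dual(x, 1.0)).der

# Volume-dependent part of the ideal-gas Helmholtz energy, a(V) = -nRT log V
# (n = 1 mol, T = 298.15 K), used as a stand-in model.
const Rgas = 8.314462618  # J/(mol K)
a_ideal(V) = -(1.0 * Rgas * 298.15) * log(V)

# Pressure p = -∂a/∂V is recovered without deriving the expression by hand
p = -derivative(a_ideal, 0.024)   # ≈ nRT/V, the ideal gas law
```

The same mechanism extends to higher-order and mixed derivatives (heat capacities, compressibilities, and so on) without any model-specific hand derivation, which is the point made above.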
With Julia, OpenSAFT has the potential to revolutionise thermodynamic research and education, and we think that this effort will also greatly help to bridge the gap between cutting-edge development in academia and actual practical use in industry. Perhaps it could also inspire scientists from other domains to invest in bringing over their work to Julia where everything “just works”.
false
https://pretalx.com/juliacon2021/talk/7PVG8Z/
https://pretalx.com/juliacon2021/talk/7PVG8Z/feedback/
Green
Modia – Modeling Multidomain Engineering Systems with Julia
Lightning talk
2021-07-30T20:10:00+00:00
20:10
00:10
Modia (www.ModiaSim.org) is a set of Julia packages for modeling and simulation of multidomain engineering systems (electrical, 3D mechanical, fluid, etc.). The status and plans of a largely redesigned version of Modia are presented, consisting of a new syntax in pure Julia mixing equation-based with function-based modeling (e.g. drive trains + 3D mechanics) and new transformation techniques so that Modia models can be simulated with the ODE integrators of DifferentialEquations.jl.
juliacon2021-9895-modia-modeling-multidomain-engineering-systems-with-julia
/media/juliacon2021/submissions/YGWUVE/Modia_Robot_zVsBIHs.png
Hilding ElmqvistMartin OtterAndrea Neumayr
en
Modia (www.ModiaSim.org) is a set of Julia packages for modeling and simulation of coupled multidomain engineering systems (electrical, 3D mechanical, fluid, etc.). It shares many powerful features of the Modelica (www.Modelica.org) language. In this talk, the status and plans for Modia are presented.
A new simple, yet powerful syntax has been introduced in Modia based on named tuples of Julia and recursive merge. An electrical Resistor can, for example, be defined as:
`Resistor = OnePort | Model( R = 1.0u"Ω", equation = :[ R*i = v ] )`
The `|` denotes a recursive merge between the named tuple OnePort and a new Model (named tuple) adding a parameter R and Ohm's equation, i.e., corresponding to extending the model OnePort, which has variables and equations. Such a resistor can then be instantiated:
`R = Resistor | Map(R=0.5u"Ω")`
with an updated value of the resistance R. The Model constructor constructs a named tuple which only adds attributes during merge and Map only updates attributes. This use of named tuples unifies and generalizes inheritance, hierarchical modifiers and replaceable models of Modelica. Component instances such as R have ports (defined in OnePort) which are connected to form complete hierarchical system models.
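The merge semantics sketched above can be illustrated in a few lines of plain Julia. This is an illustration of the idea only, not Modia's actual implementation (the names `recmerge`, `OnePort`, and `Resistor` below are made up for the example):

```julia
# Minimal sketch of recursive named-tuple merge, the idea behind Modia's
# `|` operator. Illustrative only; NOT Modia's implementation.
recmerge(a, b) = b          # at the leaves, the right-hand side wins
function recmerge(a::NamedTuple, b::NamedTuple)
    ks = union(keys(a), keys(b))
    vals = [haskey(a, k) && haskey(b, k) ? recmerge(a[k], b[k]) :
            haskey(b, k) ? b[k] : a[k] for k in ks]
    return NamedTuple{Tuple(ks)}(Tuple(vals))
end

# "Extending" a base model adds attributes; "instantiating" updates them.
OnePort  = (v = 0.0, i = 0.0)
Resistor = recmerge(OnePort, (R = 1.0, eq = "R*i = v"))
R1       = recmerge(Resistor, (R = 0.5,))
```

Because the merge recurses into nested named tuples, hierarchical modifiers come for free: a nested parameter can be overridden without restating its siblings.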
For certain kinds of models, such as multibody systems, the order of evaluating the component equations is independent of the model topology. This means that algorithmic functions can be used for each model component and called according to the connection topology. This avoids repeated structural and symbolic analysis of the multibody equations, considerably reduces the code size, and makes pre-compilation possible. Modia allows such models to be expressed together with equation-based models.
New symbolic algorithms transform the Modia equations to ODEs (Ordinary Differential Equations in state space form) and generate a Julia function that can be used to simulate the transformed model with ODE integrators of DifferentialEquations.jl.
When instantiating a Modia model, the floating point type of the Modia variables can be defined. This allows for example to easily model uncertainty propagation with Measurements.jl or perform Monte Carlo Simulation with MonteCarloMeasurements.jl. The hierarchical NamedTuple description of a model can be easily mapped to a JSON file. As a result, the complete parameterization of a Modia Model, or the complete Modia model itself, can be exchanged in a straightforward way with a Web App for model composition by drag-and-drop and for 3D animation.
Hilding Elmqvist: Mogram AB
Martin Otter, Andrea Neumayr, Gerhard Hippmann: DLR Institute of System Dynamics and Control
false
https://pretalx.com/juliacon2021/talk/YGWUVE/
https://pretalx.com/juliacon2021/talk/YGWUVE/feedback/
Green
Optical simulation with the OpticSim.jl package
Lightning talk
2021-07-30T20:20:00+00:00
20:20
00:10
OpticSim.jl
juliacon2021-9588-optical-simulation-with-the-opticsim-jl-package
/media/juliacon2021/submissions/X3SAWW/Screenshot_2021-03-21_084101_7qcpTuP.jpg
Brian GuenterCharlie Hewitt
en
OpticSim.jl is an open source (https://github.com/microsoft/OpticSim.jl) Julia package for simulation and optimization of complex optical systems developed by the Microsoft Research Interactive Media Group and the Microsoft HART group.
It is designed to allow optical engineers to create optical systems procedurally and then to simulate and optimize them.
A large variety of surface types are supported, and these can be composed into complex 3D objects through the use of constructive solid geometry (CSG). A substantial catalog of optical materials is provided through the GlassCat submodule.
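The CSG idea mentioned above can be sketched with solids modeled as point-membership predicates composed by boolean operations. This is a generic illustration of the technique; OpticSim.jl has its own surface and CSG types, and the names below are invented for the example:

```julia
# Sketch of constructive solid geometry (CSG): solids as point-membership
# predicates, combined with boolean operations. Illustrative only; the
# names here are invented and are NOT the OpticSim.jl API.
sphere(c, r)      = p -> sum(abs2, p .- c) <= r^2
union_(a, b)      = p -> a(p) || b(p)
intersect_(a, b)  = p -> a(p) && b(p)
difference(a, b)  = p -> a(p) && !b(p)

# A biconvex-lens-like volume: the intersection of two offset spheres
lens = intersect_(sphere((0.0, 0.0, -0.8), 1.0),
                  sphere((0.0, 0.0,  0.8), 1.0))

lens((0.0, 0.0, 0.0))   # true: the origin lies inside both spheres
```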
The software provides extensive control over the modelling, simulation, and visualization of optical systems. It is especially suited for designs that have a procedural architecture.
The talk will explain how to use OpticSim.jl to simulate various types of optical systems.
false
https://pretalx.com/juliacon2021/talk/X3SAWW/
https://pretalx.com/juliacon2021/talk/X3SAWW/feedback/
Green
GatherTown -- Social break
Social hour
2021-07-30T20:30:00+00:00
20:30
01:00
Join us on Gather.town for a social hour.
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite
juliacon2021-11887-gathertown-social-break
en
false
https://pretalx.com/juliacon2021/talk/8LK7KU/
https://pretalx.com/juliacon2021/talk/8LK7KU/feedback/
Green
JuliaCon Hackathon
Social hour
2021-07-30T21:30:00+00:00
21:30
1:00:00
Join us on Gather.town for a 24 hour hackathon!
It is a virtual location where we will facilitate the poster sessions, social gatherings, and hackathon. You can join the space using the URL: https://gather.town/invite?token=3QYkt8gX.
You should have received the password through Eventbrite
juliacon2021-11897-juliacon-hackathon
en
The event will take place for 24 hours! Join us to build something you are excited about in Julia. We will also have mentors available to help if you run into issues.
false
https://pretalx.com/juliacon2021/talk/W7RT9F/
https://pretalx.com/juliacon2021/talk/W7RT9F/feedback/
Red
Symbolics.jl - fast and flexible symbolic programming
Talk
2021-07-30T12:30:00+00:00
12:30
00:30
Symbolics.jl is a fast, yet flexible symbolic manipulation package. It can generate serial or multi-threaded Julia code; or even C, Stan or MATLAB code from symbolic expressions. This talk is an overview of the features and the organization of the Symbolics.jl package, and the design decisions that make it fast and extendable.
juliacon2021-9737-symbolics-jl-fast-and-flexible-symbolic-programming
/media/juliacon2021/submissions/WZ7YM9/symbolics_mvV3IG0.png
Shashi Gowda
en
Symbolic systems excel in either flexibility or performance. For example, SymPy is highly flexible and has a good set of term-rewriting functionality, but is slow. On the other hand, projects like OSCAR are specialized tools for computational algebra -- problems are hard to set up, but computations are highly efficient. Further, neither of these types of tools actually helps you turn symbolic expressions into executable code.
In this talk, we introduce the Symbolics.jl and the underlying SymbolicUtils.jl packages. We also talk about the term-rewriting system and ways to write passes that transform symbolic expressions with user-defined custom rules.
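The rule-based rewriting mentioned above can be miniaturized over plain `Expr` trees: the sketch below hard-codes a single rule (`x * 1 => x`) and applies it bottom-up. It illustrates the concept only and is not the SymbolicUtils.jl rule syntax:

```julia
# Tiny bottom-up term rewriting over Julia `Expr` trees, illustrating the
# idea behind SymbolicUtils.jl rules like `@rule x * 1 => x`.
# Conceptual sketch only; NOT the SymbolicUtils.jl API.
rewrite(x) = x                       # symbols and literals are left alone
function rewrite(e::Expr)
    args = map(rewrite, e.args)      # rewrite children first (bottom-up)
    e = Expr(e.head, args...)
    # rule: x * 1 => x  (and 1 * x => x)
    if e.head == :call && e.args[1] == :* && length(e.args) == 3
        e.args[2] == 1 && return e.args[3]
        e.args[3] == 1 && return e.args[2]
    end
    return e
end

rewrite(:(y + x * 1))   # :(y + x)
```

Applying rules until no more fire, and choosing which of several applicable rules to apply, is where real rewriting systems (and the equality-saturation approach of Metatheory.jl) go far beyond this sketch.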
Outline:
- Why is Symbolics.jl useful
- Example of symbolic basic manipulation
- Benchmark vs SymPy
- Code generation example
- Differentiation syntax (comparison with other systems and possibilities, and AD)
- Fast sparsity detection
- Under the hood
- Wrapper to make symbolic expression: `Num <: Number`
- Syms and Terms
- Fast canonical form
- Term interface
- Expression rewriting
- Rule syntax
- Chaining and pipelining rules
- Simplification
- Polynomial form from AbstractAlgebra
- ModelingToolkit
- How ModelingToolkit builds a simulation system on top of Symbolics
- Use of build_function in ODE solver
- Structural simplification example with a bit of all the clever ideas (Attend Chris’s talk and workshop)
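As a taste of the code-generation item in the outline above, here is a Base-only miniature of the idea: splice an expression tree into a function body and compile it. This is an illustration only, not the Symbolics.jl `build_function` API, which does this for symbolic expressions and arrays and can also emit multithreaded or C code.

```julia
# Code generation in miniature: turn an expression tree into a compiled
# Julia function (Base only; not the Symbolics.jl API).
ex = :(x^2 + 2x + 1)      # an expression tree standing in for a symbolic term
f = eval(:(x -> $ex))     # splice it into a function body and compile it
result = f(3.0)           # 16.0
```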
false
https://pretalx.com/juliacon2021/talk/WZ7YM9/
https://pretalx.com/juliacon2021/talk/WZ7YM9/feedback/
Red
Unleashing Algebraic Metaprogramming in Julia with Metatheory.jl
Talk
2021-07-30T13:00:00+00:00
13:00
00:30
A novel data structure and technique from theorem provers, a pattern matching system and classical term rewriting. Mix it with dynamism and the homoiconic metaprogramming system of Julia. Add algebraic composability. Shake well before using. What could go wrong? Composable compiler transforms, numerical code optimizers, interpreters and compilers, computer algebra systems, categorical theorem provers and much more to come. Come experiment with us and the Metatheory.jl package!
juliacon2021-9439-unleashing-algebraic-metaprogramming-in-julia-with-metatheory-jl
/media/juliacon2021/submissions/F9PLVY/dragon_YF1D9lf.jpg
Alessandro CheliPhilip Zucker
en
We introduce Metatheory.jl: a lightweight and performant general-purpose symbolics and metaprogramming framework, meant to simplify the act of writing complex Julia metaprograms and to significantly enhance Julia with a native term-rewriting system. It is based on state-of-the-art equality saturation techniques and on a dynamic, first-class AST pattern-matching system that is composable in an algebraic fashion, taking full advantage of the language's powerful reflection capabilities. Our contribution allows performing general-purpose symbolic mathematics, manipulation, optimization, synthesis or analysis of syntactically valid Julia expressions with a clean and concise programming interface, both during compilation and during execution of programs. We are currently experimenting with optimizing mathematical code and with equational theorem-proving strategies. This talk will discuss algebraic equational reasoning with examples from logic, program analysis, abstract algebra and category theory.
false
https://pretalx.com/juliacon2021/talk/F9PLVY/
https://pretalx.com/juliacon2021/talk/F9PLVY/feedback/
Red
Towards a symbolic integrator with Rubin.jl
Lightning talk
2021-07-30T13:30:00+00:00
13:30
00:10
Rubin.jl will be a 100% Julia implementation of an integration term-rewriting system. The rule catalogue is taken from RUBI, a Mathematica-based integration engine that uses binary searches in a tree of mutually exclusive rewriting rules, which nets RUBI an order-of-magnitude speed improvement over Mathematica across an immense test suite. Rubin.jl hosts 99.5+% of RUBI's rules and 99.9% of its test suite in a JSON format, spanning more than 72,000 single-variable integration unit tests.
juliacon2021-9656-towards-a-symbolic-integrator-with-rubin-jl
Miguel Raz Guzmán Macedo
en
Rubin.jl will be based on Symbolics.jl, a novel foundation for a Julian CAS. The goals of Rubin.jl are to:
[X] Convert all the RUBI rules into a huge JSON
[X] Convert all the RUBI unit tests into a huge JSON
[ ] Parse the JSON files into Rubin Rules and Rubin tests
[ ] Benchmark the test suite and assess discrepancies
Symbolics.jl is a Julia-based term-rewriting system that allows the user to specify that a "left-hand side" symbolic expression should be transformed into the expression on the right-hand side. Symbolic integration is useful for pure and applied mathematics -- this will help bring even more users to Julia.
false
https://pretalx.com/juliacon2021/talk/G8LARY/
https://pretalx.com/juliacon2021/talk/G8LARY/feedback/
Red
AlgebraicDynamics: Compositional dynamical systems
Lightning talk
2021-07-30T13:40:00+00:00
13:40
00:10
[AlgebraicDynamics](https://github.com/AlgebraicJulia/AlgebraicDynamics.jl) is a new library in the [AlgebraicJulia](https://www.algebraicjulia.org/) ecosystem for specifying and solving dynamical systems with compositional and hierarchical structure. This modular approach to constructing and analyzing dynamical systems is grounded in the mathematics of applied category theory.
juliacon2021-9513-algebraicdynamics-compositional-dynamical-systems
Sophie LibkindJames Fairbanks
en
false
https://pretalx.com/juliacon2021/talk/ARURL8/
https://pretalx.com/juliacon2021/talk/ARURL8/feedback/
Red
The OSCAR Computer Algebra System
Lightning talk
2021-07-30T13:50:00+00:00
13:50
00:10
We present OSCAR, an **O**pen **S**ource **C**omputer **A**lgebra **R**esearch system for abstract algebra, algebraic geometry, group theory, number theory, and more. It joins existing world class systems under a common Julia interface in the Oscar.jl package. Applications exist well beyond pure mathematics (e.g. in coding theory, cryptography, robotics, ...).
We give an overview of existing and planned capabilities. We also discuss what sets us apart from Symbolics.jl.
juliacon2021-9746-the-oscar-computer-algebra-system
/media/juliacon2021/submissions/WQ8MJK/OSCAR_logo_TfUhtvt.png
Max HornClaus Fieker
en
In this talk we present OSCAR, an **O**pen **S**ource **C**omputer **A**lgebra **R**esearch system for computations to support research in abstract algebra, algebraic geometry, group theory, number theory, and more. It builds on decades of experience by extending and integrating our four existing cornerstone systems:
- [GAP](https://www.gap-system.org/) - group and representation theory (via [GAP.jl](https://github.com/oscar-system/GAP.jl)),
- [Singular](https://www.singular.uni-kl.de/) - commutative and non-commutative algebra, algebraic geometry (via [Singular.jl](https://github.com/oscar-system/Singular.jl)),
- [Polymake](https://polymake.org/doku.php) - polyhedral geometry (via [Polymake.jl](https://github.com/oscar-system/Polymake.jl)),
- Antic ([Hecke](https://github.com/thofma/Hecke.jl/), [Nemo](http://nemocas.org/)) - number theory.
These are joined together under a common Julia interface in the [Oscar.jl](https://github.com/oscar-system/Oscar.jl) package.
Applications of our computational capabilities exist well beyond pure mathematics (e.g. in coding theory, cryptography, crystallography, robotics, ...).
While OSCAR is still under heavy development, many useful features are already available, and more are in the works. We will give an overview of existing capabilities and give a preview of what will come in the future. We will also outline what sets us apart from Symbolics.jl (which has a very different scope).
The development of OSCAR is supported by the Deutsche Forschungsgemeinschaft DFG within the [Collaborative Research Center TRR 195](https://www.computeralgebra.de/sfb/).
Outside contributions to OSCAR are highly welcome. Please talk to us:
- on our own Slack (use this [invite link](https://join.slack.com/t/oscar-system/shared_invite/zt-thtcv97k-2678bKQ~RpR~5gZszDcISw), or [email me](mailto:horn@mathematik.uni-kl.de) if it does not work)
- on the Julia Slack in `#oscar` or `#algebra`
- via our mailing list, join at <https://mail.mathematik.uni-kl.de/mailman/listinfo/oscar-dev>
- via issues and PRs on our various GitHub repositories.
Additional information can be found on our homepage, <https://oscar.computeralgebra.de>.
false
https://pretalx.com/juliacon2021/talk/WQ8MJK/
https://pretalx.com/juliacon2021/talk/WQ8MJK/feedback/
Red
Enabling Rapid Microservice Development with a Julia SDK
Talk
2021-07-30T19:00:00+00:00
19:00
00:30
The Optimal Reality platform, by Deloitte Digital Australia, is a modelling and simulation environment built and deployed as Julia microservices. This talk describes our Software Development Kit which enables rapid building and deployment of these microservices. We discuss our use of templates, utility packages, standardised multi-threading and logging functions, and a custom GraphQL interface; and share how they can be used to bolster team creativity and efficiency.
juliacon2021-9744-enabling-rapid-microservice-development-with-a-julia-sdk
Malcolm Miller
en
The benefits of microservices architectures are well understood. They have the potential to be more agile, enable each service to pick the best technology for its purpose and be scaled or autoscaled appropriately, and can be easily deployed and managed through common open-source technologies.
Developing a solution with a microservices architecture, however, can have some disadvantages. A developer creating a new microservice must understand the external interfaces to that service, how it is tested and deployed, and how to configure the service to run and scale as required. This increases the skill requirement of a developer and can take time away from what the developer is actually trying to do – create a new bit of functionality. It can also result in inconsistencies in code behaviour and style between services. This problem is compounded when the developers are using a new language and are unfamiliar with what is possible and with best practices, and further compounded when that language is itself rapidly developing.
We faced this challenge when developing a modelling and simulation platform built and deployed as Julia microservices. Julia was a new language for the majority of our team, and we needed to quickly design and deploy many services. Furthermore, we wanted to continue to use recent open-source developments without requiring that all of our developers must stay up to date with package and language advancements. To enable our developers to focus on what they’re best at (i.e., the functionality of the service they’re developing) and mitigate the issues mentioned, we made use of Julia’s excellent ecosystem to create a Software Development Kit (SDK).
In this talk, we describe the components of the SDK, how they enable both efficient development and use of new open-source advances, and how similar approaches can be used by your team as you build microservices. We will discuss how Julia’s package system makes it ideal for SDK use.
Located in a private registry, the SDK includes a microservice template, a utility package template, various utility packages and a custom GraphQL interface. The microservice template is the starting point for a developer writing a new microservice and includes the following functionality:
- Default communication routes (service execution, liveness etc)
- Logging behaviour
- Automatic documentation
- Asynchronous and multithreading tools
- CI/CD scripts (testing, building and cloud deployment)
We will discuss the above, including detailing the various open-source packages used for each function.
We will also describe how providing utility packages in an SDK enables teams to make use of the best open-source developments, which may be developing and changing frequently, through a stable API. For example, when handling large volumes of data, it is often desirable to encode and decode numeric arrays to minimise data transfer. One package in our SDK provides a simple encode and decode interface, where the specifics of what compression packages are being used can be updated as required without the majority of users needing to stay abreast of open-source developments. We will detail this approach and give other examples of where we have found it useful. Finally, we will describe our custom GraphQL interface, which wraps a generic interface in a similar method to the utility packages, enabling developers to quickly interact with and make use of our platform.
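The encode/decode wrapper described above can be sketched as follows. The function names are hypothetical, and the Base64 standard library stands in for whichever compression codec the SDK actually wraps; the point is that the backing codec can change without touching callers.

```julia
using Base64

# Hypothetical stable API: callers only ever see encode_array/decode_array,
# while the codec behind them (here simply Base64) can be swapped later.
encode_array(a::Vector{Float64}) = base64encode(collect(reinterpret(UInt8, a)))
decode_array(s::AbstractString)  = collect(reinterpret(Float64, base64decode(s)))

payload = encode_array([1.0, 2.5, -3.0])
decoded = decode_array(payload)     # [1.0, 2.5, -3.0]
```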
To conclude, Julia is a powerful language with an exceptional ecosystem. In this talk, we will demonstrate how it can be used to create an efficient environment for microservice development which lets developers focus on what they’re developing, ensures all services use the best of the open-source community and generally makes things much more straightforward.
false
https://pretalx.com/juliacon2021/talk/SHHKEM/
https://pretalx.com/juliacon2021/talk/SHHKEM/feedback/
Red
kubernetes-native julia development
Lightning talk
2021-07-30T19:30:00+00:00
19:30
00:10
You have access to a k8s cluster, and you want to use it to scale out computations. But first, you need to develop and debug julia code that can take advantage of it!
I will present an ergonomic julia development setup to help make k8s feel like home, using freely available and easily installed tools.
juliacon2021-9894-kubernetes-native-julia-development
Kolia Sadeghi
en
In this setup, from a julia project directory, you can:
- drop into a julia REPL that is running on your k8s cluster
- edit source files locally, use Revise and get back results saved to disk, via a 2-way sync between the local julia project directory and the corresponding directory in the k8s container
- sync REPL history across local and remote julia sessions
- easily spin up and use Distributed workers from within the julia session
- automatically build and use images containing julia, with chosen dependencies baked in a PkgCompiler sysimage, precompiled julia project, and (optionally) CUDA
- minimize time-to-first-command-completion with cached image builds; the first use in a project directory takes a long time to build, but subsequent spin-ups are fast
- set RAM/cpu/disk resources for the main julia session and any Distributed workers
- set julia (and CUDA) versions independently for each session
- run your work as a non-interactive job once it is ready
This tries to make minimal assumptions about the k8s setup; requirements are access to the cluster via `kubectl` and to a container registry that the k8s cluster can pull from.
Tools needing to be installed locally are:
- kubectl
- docker buildkit
- devspace sync
The julia-specific tools developed to make this possible are [K8sClusterManagers.jl](https://github.com/beacon-biosignals/K8sClusterManagers.jl) and `julia_pod`.
This workflow is developed and used day-to-day at [Beacon Biosignals](https://beacon.bio/).
false
https://pretalx.com/juliacon2021/talk/9LCKEQ/
https://pretalx.com/juliacon2021/talk/9LCKEQ/feedback/
Red
Rewriting Pieces of a Python Codebase in Julia
Lightning talk
2021-07-30T19:40:00+00:00
19:40
00:10
Many people looking at Julia are coming from Python, and already have a sizable codebase.
Our fund started rewriting performance-critical parts of our Python codebase in Julia, getting 10x-30x speedups. I'll go over how to start migrating Python code to Julia using PyCall and PyJulia, some gotchas to avoid, and where you're likely to see the biggest benefits.
juliacon2021-9879-rewriting-pieces-of-a-python-codebase-in-julia
Satvik Souza Beri
en
false
https://pretalx.com/juliacon2021/talk/GXLNHG/
https://pretalx.com/juliacon2021/talk/GXLNHG/feedback/
Red
Julia in the Windows Store
Lightning talk
2021-07-30T19:50:00+00:00
19:50
00:10
I will describe an effort to distribute Julia via the Windows Store. This effort includes a full Julia version manager that provides the ability to install multiple Julia versions at the same time, switch between them etc. The talk showcases an experimental working version of the installer from a user perspective, and also gives a brief deep dive around the technologies used.
juliacon2021-9736-julia-in-the-windows-store
David Anthoff
en
false
https://pretalx.com/juliacon2021/talk/MSTYCZ/
https://pretalx.com/juliacon2021/talk/MSTYCZ/feedback/
Red
Redwood: A framework for clusterless supercomputing in the cloud
Talk
2021-07-30T20:00:00+00:00
20:00
00:30
We present Redwood, a Julia framework for clusterless supercomputing in the cloud. Redwood provides a set of distributed programming macros that enable users to remotely execute Julia functions in parallel through cloud services for batch and serverless computing. We present the architecture and design of Redwood, as well as its application to existing Julia packages for machine learning and inverse problems.
juliacon2021-9572-redwood-a-framework-for-clusterless-supercomputing-in-the-cloud
Philipp A. Witte
en
Through the rise in popularity of deep learning and large-scale numerical simulations, high-performance computing (HPC) has entered the mainstream of scientific computing. Today, HPC techniques are increasingly required by a wider audience, in fields including machine and deep learning, weather forecasting, medical and seismic imaging, computational genomics, fluid dynamics and others. HPC workloads have traditionally been deployed to on-premise high-performance computing clusters and were therefore only available to a very limited number of researchers or corporations. With the rise of cloud computing, HPC resources have in principle become available to a much wider audience, but managing HPC infrastructure in the cloud is challenging. As the cloud provides a fundamentally different computing infrastructure from on-premise supercomputers, users need to build environments and applications that are resilient and cost efficient, and that are able to leverage cloud-related opportunities such as elastic (hyper-scale) compute and heterogeneous infrastructure.
Naturally, the current approach to porting HPC applications to the cloud is to replicate the infrastructure of on-premise supercomputing centers with cloud resources. Cloud services such as AWS ParallelCluster or Azure CycleCloud enable users to create virtual HPC clusters that consist of login nodes, job schedulers, a set of compute instances, networking and distributed storage systems. Even cloud-native approaches such as Kubernetes follow this cluster-based architecture, albeit using containerization and novel schedulers. However, from the user's side both approaches amount to a two-step process in which users first create a (virtual) HPC cluster in the cloud and then submit their parallel program to the cluster. This makes running HPC applications in the cloud challenging, as users have to act as cluster administrators who manage the HPC infrastructure before being able to run their application.
In this work, we argue for clusterless supercomputing in the cloud, in which the user application essentially takes over the role of the job scheduler and cluster orchestrator. Instead of a two-step process in which users first create a cluster and then submit their job to it, the application is executed anywhere and dynamically manages the required compute infrastructure at runtime. To enable this type of clusterless HPC, which is heavily inspired by serverless orchestration frameworks, we introduce Redwood, an open-source software package for clusterless supercomputing on the Azure cloud. Redwood provides a set of distributed programming macros that are designed in accordance with Julia's existing macros for distributed computing, around the principles of remote function calls and futures. Unlike Julia's standard distributed computing framework, Redwood does not require a parallel Julia session running on a set of interconnected nodes (i.e., a cluster). Instead, Redwood executes functions that are tagged for remote (parallel) execution via cloud services such as Azure Batch or Azure Functions by creating a closure around the executed code and running it remotely through the respective cloud service. Results, namely function outputs, are written to cloud object stores and remote references are returned to the user.
In this talk, we discuss the architecture and implementation of Redwood and present HPC scenarios that are enabled by it. This includes large-scale MapReduce workloads, computations that are distributed across multiple data centers or even regions, as well as combinations of data and model parallel applications in which users can execute multiple distributed-memory MPI workloads in parallel. Additionally, we present how existing Julia packages such as Flux or JUDI (a framework for PDE-constrained optimization) can be cloud-natively deployed through Redwood, without requiring users to set up HPC clusters.
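The closure-shipping mechanic described above can be illustrated with the Serialization standard library alone: capture a closure, serialize it to bytes that could travel to a batch service, and reconstruct and run it on the "remote" side. Redwood adds the cloud transport, scheduling and future bookkeeping on top; this sketch only shows the core mechanic.

```julia
using Serialization

# A closure capturing its inputs -- the unit of work to ship remotely.
work = let n = 1000
    () -> sum(1:n)
end

buf = IOBuffer()
serialize(buf, work)           # bytes that could go to a batch/serverless service
seekstart(buf)
remote_fn = deserialize(buf)   # the "remote" side reconstructs the closure
result = remote_fn()           # 500500
```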
false
https://pretalx.com/juliacon2021/talk/TXAWKU/
https://pretalx.com/juliacon2021/talk/TXAWKU/feedback/
Blue
JET.jl: The next generation of code checker for Julia
Talk
2021-07-30T12:30:00+00:00
12:30
00:30
Julia's extreme expressiveness and composability come from its dynamism; the cost is that static type checking of Julia code has remained a longstanding problem.
JET.jl is a fresh approach to static analysis of such a dynamic language; it can detect type-level errors in a pure Julia script at practical speed.
In this talk we will first give an overview of its features and basic usage, and then move on to a discussion of its internals, current limitations and future work.
juliacon2021-9838-jet-jl-the-next-generation-of-code-checker-for-julia
Shuhei Kadowaki
en
This talk will introduce [JET.jl](https://github.com/aviatesk/JET.jl), an experimental type checker for Julia.
JET is powered by both the abstract interpretation routine implemented within the Julia compiler and a concrete interpretation based on [JuliaInterpreter.jl](https://github.com/JuliaDebug/JuliaInterpreter.jl). The abstract interpreter enables static analysis of a pure Julia script without any need for additional type annotations, while the concrete interpreter allows effective analysis no matter how heavily the code depends on runtime reflection or external configuration, which are common obstacles to static code analysis.
The talk will begin by explaining the motivation for type-level analysis of Julia code as well as how we can find various kinds of errors ahead of time using JET. Then we will illustrate how JET works and the limitations involved in its design choices, and finally discuss planned future enhancements such as IDE integrations.
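The class of bug JET targets can be shown in a few lines of plain Julia: the error below is visible in the types alone, yet an ordinary run only hits it when the bad branch executes. (A generic illustration of a latent type-level error, not JET's own API.)

```julia
# Int + String has no method, but the bug hides until the negative branch runs.
describe(x) = x > 0 ? "positive" : x + "negative"

ok = describe(1)          # "positive" -- the bug goes unnoticed
caught = try
    describe(-1)          # MethodError at runtime; a static checker can flag it
catch err
    err
end
caught isa MethodError    # true
```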
false
https://pretalx.com/juliacon2021/talk/BNB888/
https://pretalx.com/juliacon2021/talk/BNB888/feedback/
Blue
Easy, Featureful Parallelism with Dagger.jl
Talk
2021-07-30T13:00:00+00:00
13:00
00:30
Parallelizing codes with Distributed.jl is simple and can provide an appreciable speed-up; but for complicated problems or when scaling to large problem sizes, the APIs are somewhat lacking. Dagger.jl takes parallelism to the next level, with support for GPU execution, fault tolerance, and more. Dagger's scheduler exploits every bit of parallelism it can find, and uses all the resources you can give it. In this talk, I'll build an application with Dagger to highlight what Dagger can do for you!
juliacon2021-9733-easy-featureful-parallelism-with-dagger-jl
Julian P Samaroo
en
The Distributed standard library exposes RPC primitives (remotecall) and remote channels for coordinating and executing code on a cluster of Julia processes. When a problem is simple enough, such as a trivial map operation, the provided APIs are enough to get great performance and "pretty good" scaling. However, things change when one wants to use Distributed for something complicated, like a large data pipeline with many inputs and outputs, or a full desktop application. While one *could* build these programs with Distributed, one would quickly realize that a lot of functionality will need to be built from scratch: application-scale fault tolerance and checkpointing, heterogeneous resource utilization control, and even simple load-balancing. This isn't a fault of Distributed: it just wasn't designed as the be-all-end-all distributed computing library for Julia. If Distributed won't make it easy to build complicated parallel applications, what will?
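The primitives mentioned above look like this in a minimal session, using only the Distributed standard library (no Dagger involved yet):

```julia
using Distributed
addprocs(2)                                  # start two local worker processes

# remotecall returns a Future immediately; fetch blocks for the result.
fut = remotecall(sum, workers()[1], 1:100)
total = fetch(fut)                           # 5050

# A trivial map parallelizes well with the built-in pmap.
squares = pmap(x -> x^2, 1:4)                # [1, 4, 9, 16]
```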
Dagger.jl takes a different approach: it is a batteries-included distributed computing library, with a variety of useful tools built-in that makes it easy to build complicated applications that can scale to whatever kind and size of resources you have at your disposal. Dagger ships with a built-in heterogeneous scheduler, which can dispatch units of work to CPUs, GPUs, and future accelerators. Dagger has a framework for checkpointing (and restoring) intermediate results, and together with fault tolerance, allows computations to safely fail partway through, and be automatically or manually resumed later. Dagger also has primitives to build dynamic execution graphs across the cluster, so users can easily implement layers on top of Dagger that provide abstractions better matching the problem at hand.
This talk will start with a brief introduction to Dagger: what it is, how it relates to Distributed.jl, and a brief overview of the features available. Then I will take the listeners through the building of a realistic, mildly complicated application with Dagger, showcasing how Dagger makes it easy to make the application scalable, performant, and robust. As each feature of Dagger is used, I will also point out any important caveats or alternative approaches that the listeners should consider when building their own applications with Dagger. I will wrap up the talk by showing the application running at scale, and talk briefly about the future of Dagger and how listeners can help to improve it.
false
https://pretalx.com/juliacon2021/talk/3TLU8P/
https://pretalx.com/juliacon2021/talk/3TLU8P/feedback/
Blue
Actors.jl: Concurrent Computing with the Actor Model
Lightning talk
2021-07-30T13:30:00+00:00
13:30
00:10
`Actors` implements the Actor Model of concurrent computation. Actors
- interact via messages,
- represent computations and
- can create other actors.
Programmers can use actors to
- model computational concepts: e.g. atomic blocks, event handlers, state machines,
- implement concurrent objects such as servers, supervisors, firewalls and to
- compose them into an application.
Actors lets you write fault-tolerant Julia applications and makes concurrency easier to understand.
juliacon2021-9742-actors-jl-concurrent-computing-with-the-actor-model
Paul Bayer
en
**Give an overview** of `Actors`' philosophy and functionality and how it integrates into Julia's multi-threading and distributed computing.
**Demonstrate** how to
- `spawn` actors with arbitrary Julia functions as behaviors,
- `send` them messages and get back results,
- make them interact,
- `supervise` them and to
- integrate them with tasks and distributed processes.
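The spawn/send pattern listed above can be sketched with Base primitives alone. This is a concept illustration only, not the `Actors` API: an "actor" is a task that owns private state and reacts to messages from its inbox.

```julia
# A minimal actor: a task holding state that only messages can reach.
function spawn_counter()
    inbox = Channel{Tuple{Symbol,Any}}(32)
    @async begin
        n = 0                            # private state of this actor
        for (msg, reply) in inbox
            msg === :incr && (n += 1)
            msg === :get  && put!(reply, n)
        end
    end
    return inbox
end

counter = spawn_counter()
put!(counter, (:incr, nothing))
put!(counter, (:incr, nothing))
reply = Channel{Int}(1)
put!(counter, (:get, reply))             # ask the actor for its state
value = take!(reply)                     # 2
```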
false
https://pretalx.com/juliacon2021/talk/PFWXF9/
https://pretalx.com/juliacon2021/talk/PFWXF9/feedback/
Blue
BPFnative.jl: eBPF programming in Julia
Lightning talk
2021-07-30T13:40:00+00:00
13:40
00:10
eBPF is a virtual machine that can run user-defined code in the Linux kernel. The ability to generate eBPF bytecode from Julia would allow our Linux users to introspect, manipulate, and explore the core of their operating system from the comfort of a high-level language. In this talk, I will explain the basics of eBPF, how it's integrated and used in the Linux kernel, and how we can use "eBPF superpowers" from Julia.
juliacon2021-9681-bpfnative-jl-ebpf-programming-in-julia
Julian P Samaroo
en
eBPF (extended Berkeley Packet Filter) is a virtual machine specification and machine code ISA originally designed for packet filtering in operating system kernels. eBPF is designed to be simple and compact enough to be trivially converted to native machine code, making it very portable across machine architectures. eBPF is developed in tandem with the Linux kernel, intended to be an internal runtime for safely executing user-defined code within the Linux kernel, where it allows users to introspect (and even modify) the functioning of their kernel's various subsystems. Given the key role that the OS kernel plays in allowing modern computers to function, it is thus no surprise that the ability to write and install eBPF kernels is considered a Linux "superpower".
As we can see from the example set by CUDA.jl, Julia is an excellent language for writing portable code which can execute on a variety of architectures with minimal changes. Recognizing this, I created BPFnative.jl as an interface from Julia to eBPF and the Linux kernel. BPFnative.jl allows users to write eBPF kernels in pure Julia, compile them into eBPF bytecode, and install them at various locations in the Linux kernel. This allows users with a minimal understanding of eBPF to explore their OS kernel at runtime, and thanks to the security measures and verifier built into the Linux eBPF VM, makes this a very safe thing to do.
For this talk, I will introduce the basics of eBPF and why Linux users should care about it, and then provide examples (including code snippets) of how to create eBPF kernels for introspecting various parts of the Linux kernel with BPFnative.jl. I will strive to make the examples relevant to everyday Linux users who want to find out more about what their OS is doing behind the scenes, without having to fully understand how the Linux kernel works. I will also encourage interested users to explore other parts of their OS with eBPF, and submit examples to BPFnative.jl to benefit the community.
false
https://pretalx.com/juliacon2021/talk/DAQSUR/
https://pretalx.com/juliacon2021/talk/DAQSUR/feedback/
Blue
Atomic fields: the new primitives on the block
Lightning talk
2021-07-30T13:50:00+00:00
13:50
00:10
Support for atomic accessors has recently been expanded to provide more efficient building blocks for working with threads. Dealing effectively with multi-core programs requires a vocabulary for communicating intent, both to humans and machines. Here I'll talk about what atomics are, why we needed them, and how to use them!
juliacon2021-9723-atomic-fields-the-new-primitives-on-the-block
Jameson Nash
en
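As a taste of the new per-field atomics the talk covers (this sketch requires Julia 1.7 or later, where `@atomic` field declarations landed):

```julia
# Declare a field atomic, then use @atomic for every access to it.
mutable struct Counter
    @atomic n::Int
end

c = Counter(0)
Threads.@threads for i in 1:1000
    @atomic c.n += 1      # atomic read-modify-write: no lost updates
end
final = @atomic c.n        # 1000
```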
false
https://pretalx.com/juliacon2021/talk/YFCXJD/
https://pretalx.com/juliacon2021/talk/YFCXJD/feedback/
Blue
A Short History of AstroTime.jl
Talk
2021-07-30T19:00:00+00:00
19:00
00:30
Time is...complicated. It seems simple enough when you are close to the surface of the Earth and you have a device in your pocket that is constantly connected to atomic clocks. But go to outer space and things get uncomfortable pretty quickly. This talk explores how [AstroTime.jl](https://github.com/JuliaAstro/AstroTime.jl) has evolved and how it can help you deal with the intricacies of time such as leap seconds and different time scales. Even if you are neither an astronomer nor an astronaut!
juliacon2021-9771-a-short-history-of-astrotime-jl
Helge Eichhorn
en
The [AstroTime.jl](https://github.com/JuliaAstro/AstroTime.jl) library has been in development since 2013 (originally as part of [Astrodynamics.jl](https://github.com/JuliaSpace/Astrodynamics.jl)). It provides the `Epoch` type as a replacement and complement to Julia's `DateTime`. `Epoch` can handle sub-nanosecond accuracy over a time span several times the age of the universe with support for all commonly used astronomical time scales.
Since its inception, AstroTime.jl has gone through several major design iterations as our understanding of the scope and complexity of the problem domain has grown. The public API on the other hand has remained remarkably stable which is a great testament to Julia's expressive and versatile type system. While AstroTime.jl is built on the solid foundations of the `Dates` standard library, it also fixes some of the shortcomings of the latter and might also highlight further areas of possible improvement.
AstroTime.jl was meant to be only a small stepping stone on the way to making Julia a multiplanetary programming language, but it has become a great project in its own right. We want to share the journey so far and maybe get you excited about something as mundane as time. Or spacetime, rather, relatively speaking...
false
https://pretalx.com/juliacon2021/talk/TJ3FNS/
https://pretalx.com/juliacon2021/talk/TJ3FNS/feedback/
Blue
Going to Jupiter with Julia
Lightning talk
2021-07-30T19:30:00+00:00
19:30
00:10
Modern astrodynamics demands a lot from scientific computing. Calculations are often expensive, and correct unit handling is essential. Sometimes, complex algorithms are needed to parse through vast amounts of data. Julia's efficient syntax and rich, growing ecosystem have met these challenges with minimal developer effort throughout the development of GeneralAstrodynamics.jl. Feature development and research applications will be presented alongside a simple Earth-Jupiter transfer design.
juliacon2021-9793-going-to-jupiter-with-julia
Joe Carpinelli
en
false
https://pretalx.com/juliacon2021/talk/BPJ3N7/
https://pretalx.com/juliacon2021/talk/BPJ3N7/feedback/
Blue
ClimaCore.jl: Tools for building spatial discretizations
Lightning talk
2021-07-30T19:40:00+00:00
19:40
00:10
This talk will cover ClimaCore.jl: a suite of tools for building spatial discretizations, primarily aimed at weather and climate modeling applications. It provides a high-level interface for composing multiple operators and functions, and is compatible with the SciML suite of differential equation solvers.
juliacon2021-9907-climacore-jl-tools-for-building-spatial-discretizations
Simon Byrne
en
The Climate Modelling Alliance (CliMA) is building ClimateMachine.jl, a modern earth system model that can learn from data. On a technical side, we are developing the model entirely in Julia, using distributed parallelism with both GPU and CPU architectures.
ClimaCore.jl is a suite of tools we are building for our next iteration of Climate Machine.
false
https://pretalx.com/juliacon2021/talk/MXSRY8/
https://pretalx.com/juliacon2021/talk/MXSRY8/feedback/
Blue
ClimateModels.jl -- A Simple Interface To Climate Models
Lightning talk
2021-07-30T19:50:00+00:00
19:50
00:10
Here we provide a uniform interface to climate models of varying complexity and completeness. Models that range from low-dimensional to whole Earth System models are
run and analyzed via this simple interface. Three examples illustrate this framework as applied to:
- a stochastic path (zero-dimensional, Julia function)
- a shallow water model (two-dimensional, Julia package)
- a general circulation model (high-dim., feature-rich, Fortran, MPI)
juliacon2021-9860-climatemodels-jl-a-simple-interface-to-climate-models
/media/juliacon2021/submissions/GBJ3HG/simulated_atm_flow04_Ru0WzFe.png
Gael Forget
en
Key objectives of this project include:
- make it as easy to run complex models as it is to run simple ones and, hopefully, so easy that they can all be used interactively in classrooms
- enable the Julia community to access widely-used, full-featured models right now and comfortably using notebooks, IDEs, terminal, and batch _(1)_.
- enable the climate science community to leverage the booming Julia ecosystem for analyzing model output and experimenting with models _(2)_.
- provide basic pipelining (e.g. Channel), book-keeping (e.g. Git), and documenting features (e.g. Pkg) to make complex workflows easier to reproduce, modify, and share with others.
_(1) The MITgcm, used as example, has configurations for Ocean, Atmosphere, Cryosphere, Biosphere in forward as well as an adjoint mode (via AD)._
_(2) Both on-premises and via cloud-based environments._
false
https://pretalx.com/juliacon2021/talk/GBJ3HG/
https://pretalx.com/juliacon2021/talk/GBJ3HG/feedback/
Blue
Space Engineering in Julia
Lightning talk
2021-07-30T20:00:00+00:00
20:00
00:10
Amazonia 1 was the first remote sensing satellite fully designed, integrated, tested, and operated by Brazil. It was developed by the National Institute for Space Research (INPE) and was launched on February 28, 2021. In this project, the Julia language was frequently used in the mission analysis and in the development of the attitude and orbit control subsystem (AOCS).
juliacon2021-10340-space-engineering-in-julia
Ronan Arraes Jardim Chagas
en
This talk presents how we used the packages ReferenceFrameRotations.jl, SatelliteToolbox.jl, and DifferentialEquations.jl to create a high-fidelity simulator of the Amazonia-1’s AOCS and perform numerous analyses related to this mission.
false
https://pretalx.com/juliacon2021/talk/E3MWHZ/
https://pretalx.com/juliacon2021/talk/E3MWHZ/feedback/
Blue
In-Situ Data Analysis with Julia for E3SM at Large Scale
Lightning talk
2021-07-30T20:10:00+00:00
20:10
00:10
In this talk, we will present our work of coupling the Julia runtime with E3SM, an advanced earth system simulation application for supercomputers, and running E3SM with swappable in-situ Julia modules at large scale. The talk includes (1) the Julia runtime coupling with legacy High-Performance Computing (HPC) applications (i.e., E3SM), (2) the design of two in-situ data analysis modules in Julia, and (3) the communication design for E3SM and the in-situ Julia modules.
juliacon2021-9817-in-situ-data-analysis-with-julia-for-e3sm-at-large-scale
LI TANGEarl Lawrence
en
The Energy Exascale Earth System Model (E3SM) is the Department of Energy's state-of-the-art earth system simulation model. It aims to address the most critical and challenging climate problems by efficiently utilizing DOE’s advanced HPC systems. One of the challenges of E3SM (and other exascale simulations) is the imbalance between the great size of the generated simulation data and the limited storage capacity. This means that post hoc data analysis needs to be replaced with in-situ analysis, which analyzes simulation data as the simulation is running. Our work aims to use Julia to provide data scientists with a high-level and performant interface for developing in-situ data analysis algorithms without directly interacting with complex HPC codes. This talk discusses (1) high-level Julia runtime coupling with E3SM, (2) two in-situ data analysis modules in Julia, and (3) low-level communication between E3SM and the in-situ Julia modules.
In this project, we focus on the Community Atmosphere Model (CAM), which models the atmosphere and is one of E3SM’s coupled modules. Our goal is to study extreme weather events that happen in the atmosphere, such as sudden stratospheric warmings (SSW) that can destabilize the polar vortex and cause extreme cold temperatures at the Earth's surface. The primary design consideration of coupling Julia with E3SM is the identification of an appropriate entry point in E3SM’s CAM for calling in-situ Julia modules. CAM is implemented in Fortran and advances the simulation in timesteps. The control module of CAM has access to the simulation data and is selected to be interfaced with the Julia runtime. To couple E3SM with Julia, (1) we have implemented a Fortran-based in-situ data adapter in the control module of CAM, which takes the CAM simulation data as input and internally passes the data to the Julia runtime. (2) We have implemented a C-based interface between the in-situ data adapter and the in-situ Julia modules. The C interface includes three major functions: initialization, cleanup, and worker, which respectively create an in-situ Julia instance (by loading and initializing the in-situ Julia module from a specified path), destroy the Julia instance, and pass the data from the in-situ adapter to the in-situ Julia instance. Our Fortran in-situ adapter interface calls the worker function at every timestep and the initialization/cleanup functions at the first/last timestep. (3) As E3SM mixes GNU Make and CMake for combining and compiling different E3SM components, we have added the Julia compilation flags for the C and Fortran interfaces into the CAM CMake file (i.e., header files) and the top-level GNU Make file (i.e., Julia libraries). The in-situ Julia module is only compiled when it is called during runtime, which avoids recompiling the whole of E3SM when the in-situ Julia module needs to be changed.
We have implemented two in-situ data analysis modules: linear regression and SSW detection. The linear regression approach models simulation variables as a function of simulation time. It can be used to track trends in variables of interest and to identify important checkpoints in the simulation. The SSW module characterizes midwinter stratospheric sudden warmings that often cause splitting of the stratospheric polar vortex. By definition, an SSW occurs when the zonal mean of the zonal wind becomes reversed (easterly) at 60°N and 10 hPa and lasts for at least 10 consecutive days. This event can lead to extreme surface temperatures in North America.
The worker function in the C interface aims to support efficient low-level data communication between E3SM and the in-situ Julia modules. To run at large scale, E3SM adopts the Message Passing Interface (MPI), and so the data is distributed among all the MPI ranks. Each MPI rank of CAM has access to only a local data block of CAM variables (e.g., velocity and temperature) and passes its local data block as a 1D array to its own in-situ Julia instance through the C interface. When the in-situ Julia instance needs remote data (e.g., for the computation of SSW) from other in-situ Julia instances, MPI.jl is used to implement the data communication between the different in-situ Julia instances. However, one key design challenge is to make sure that E3SM and Julia use the same MPI communicator for correct data communication; this is challenging because current Julia C embeddings are not able to directly pass the MPI communicator. To address this challenge, we have developed two converters (in both C and Fortran) of MPI communicators to support different MPI libraries. Last, we have also evaluated the performance (i.e., overall overhead) of the worker function to provide valuable guidelines for running HPC applications with Julia at large scale.
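A hedged sketch of what such an in-situ analysis module could look like on the Julia side; the function names (`initialize!`, `worker!`, `cleanup!`) are illustrative, not the actual E3SM interface:

```julia
module InSituLinReg

const ts = Float64[]   # simulation times seen so far
const ys = Float64[]   # rank-local mean of the tracked variable

initialize!() = (empty!(ts); empty!(ys); nothing)

# Called at every timestep with the rank-local data block as a 1D array.
function worker!(t::Real, block::Vector{Float64})
    push!(ts, t)
    push!(ys, sum(block) / length(block))
    length(ts) < 2 && return 0.0
    # least-squares slope: the trend of the tracked variable over time
    t̄, ȳ = sum(ts) / length(ts), sum(ys) / length(ys)
    return sum((ts .- t̄) .* (ys .- ȳ)) / sum((ts .- t̄) .^ 2)
end

cleanup!() = nothing

end # module
```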
false
https://pretalx.com/juliacon2021/talk/GJRKY3/
https://pretalx.com/juliacon2021/talk/GJRKY3/feedback/
Purple
Adaptive and extendable numerical simulations with Trixi.jl
Talk
2021-07-30T12:30:00+00:00
12:30
00:30
Trixi.jl is a numerical simulation framework for adaptive, high-order discretizations of conservation laws. It has a modular architecture that allows users to easily extend its functionality and was designed to be useful to experienced researchers and new users alike. In this talk, we will give an overview of Trixi’s current features, present a typical workflow for creating and running a simulation, and show how to add new capabilities for your own research projects.
juliacon2021-9566-adaptive-and-extendable-numerical-simulations-with-trixi-jl
/media/juliacon2021/submissions/VAGFD7/blobvis4_cropped_with_title_bsKhlfu.png
Michael Schlottke-LakemperHendrik Ranocha
en
When doing research on numerical discretization methods, scientists are often faced with a dilemma when choosing the appropriate simulation tool: at the beginning of a project, you often want a nimble, low-overhead code that allows rapid prototyping and assists you in experimenting with different approaches. Later on, however, you want to evaluate your newly developed methods and algorithms in a production setting and require a high-performance implementation, support for parallelization, and a full toolchain for postprocessing and visualizing your results.
With [Trixi.jl](https://github.com/trixi-framework/Trixi.jl), we try to bridge this gap by using a simple but modular architecture, which allows us to easily extend Trixi beyond the existing functionality. The main components, such as the mesh, the solvers, or the equations, can each be selected and combined individually in a library-like manner. At the same time, Trixi is a comprehensive numerical simulation framework for hyperbolic PDEs and comes with all necessary ingredients to set up a simulation, run it in parallel, and visualize the results.
At its core, various systems of equations are solved on hierarchical quadtree/octree grids that provide adaptive mesh refinement via solution-based indicators. The equations, e.g., compressible Euler, ideal MHD, or hyperbolic diffusion, are discretized with high-order discontinuous Galerkin spectral element methods, with support for entropy-stable shock capturing. Trixi puts an emphasis on having a fast implementation with shared memory parallelization, and integrates well with other packages of the Julia ecosystem, such as [OrdinaryDiffEq.jl](https://github.com/SciML/OrdinaryDiffEq.jl) for time integration, [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) for automatic differentiation, or [Plots.jl](https://github.com/JuliaPlots/Plots.jl) for visualization. One of the key goals of Trixi is to be useful to experienced researchers while remaining accessible for new users or students. Thus, we continuously strive to keep the implementation as simple as reasonably possible.
Due to Julia’s unique capabilities and ecosystem including [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl), serial performance of Trixi can be on par with large-scale C++ and Fortran projects in performance benchmarks using a subset of optimized methods. At the same time, the general framework is simple and extendable enough to allow porting new solver infrastructures within a few hours.
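As a minimal sketch of the workflow described above (assuming Trixi's documented `trixi_include`/`default_example` entry points), a complete simulation can be launched from a bundled example file:

```julia
using Trixi, OrdinaryDiffEq

# Run a bundled example "elixir" (a plain Julia configuration file);
# `default_example()` returns the path to a linear advection setup.
trixi_include(default_example())
```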
In this talk, we will give an overview of the currently implemented features and discuss the overall architecture of Trixi. We will show a typical workflow for creating and running a simulation, and present scientific results that were obtained with Trixi. Finally, we will demonstrate how to add new capabilities to Trixi for your own research projects.
The Jupyter notebook used for the live demonstration of Trixi.jl during the talk, as well as the presentation slides, can be found at https://github.com/trixi-framework/talk-2021-juliacon.
false
https://pretalx.com/juliacon2021/talk/VAGFD7/
https://pretalx.com/juliacon2021/talk/VAGFD7/feedback/
Purple
3.6x speedup on A64FX by squeezing ShallowWaters.jl into Float16
Lightning talk
2021-07-30T13:00:00+00:00
13:00
00:10
[ShallowWaters.jl](https://github.com/milankl/ShallowWaters.jl), a fluid circulation model that was written with a focus on 16-bit arithmetic, runs 3.6x faster on A64FX in Float16 compared to Float64 without significant model degradation. Calculations were systematically rescaled to fit into the very limited range of Float16, guided by Sherlogs.jl. ShallowWaters.jl shows that 16-bit calculations on A64FX are indeed a competitive way to accelerate Earth-system simulations on available hardware.
juliacon2021-9776-3-6x-speedup-on-a64fx-by-squeezing-shallowwaters-jl-into-float16
/media/juliacon2021/submissions/E7HKVW/image_2021-05-27_13-32-47_OYI3e3w.png
Milan Klöwer
en
Most Earth-system simulations run on conventional CPUs in 64-bit double-precision floating-point numbers (Float64), although the need for high-precision calculations in the presence of large uncertainties has been questioned. The world’s fastest supercomputer, Fugaku, is based on [A64FX microprocessors](https://www.fujitsu.com/global/products/computing/servers/supercomputer/a64fx/), which also support the 16-bit low-precision format Float16. We investigate the Float16 performance on A64FX with [ShallowWaters.jl](https://github.com/milankl/ShallowWaters.jl), a fluid circulation model that was written with a focus on 16-bit arithmetic. It implements techniques that address precision and dynamic-range issues in 16 bits. The precision-critical time integration is augmented with Kahan’s compensated summation to reduce rounding errors. Such a compensated time integration is as precise as, but faster than, mixing 16 and 32 bits of precision. The dynamic range available in Float16 is very limited, 6e-5 to 65504, as subnormals are inefficiently supported on A64FX. The bitpattern histogram analysis at runtime with [Sherlogs.jl](https://github.com/milankl/Sherlogs.jl), as well as its functionality to record stacktraces conditioned on the occurrence of subnormals, was invaluable for limiting the arithmetic range. Consequently, we benchmark speed-ups of 3.8x on A64FX with Float16, and 3.6x with compensated time integration to minimize model degradation. Although ShallowWaters.jl is simplified compared to large Earth-system models, it shares essential algorithms and therefore shows that 16-bit calculations on A64FX are indeed a competitive way to accelerate Earth-system simulations on available hardware.
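A minimal sketch of compensated (Kahan) summation in Float16, the technique used above to keep the time integration accurate despite low-precision rounding:

```julia
function kahan_sum(x::AbstractVector{Float16})
    s = zero(Float16)   # running sum
    c = zero(Float16)   # running compensation for lost low-order bits
    for xi in x
        y = xi - c      # remove the previously lost error
        t = s + y
        c = (t - s) - y # recover what was lost in this addition
        s = t
    end
    return s
end

# Close to 1, with less accumulated rounding error than a naive Float16 loop:
kahan_sum(fill(Float16(1e-3), 1000))
```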
This work used the [Isambard UK National Tier-2 HPC Service](http://gw4.ac.uk/isambard/) operated by GW4 and the UK Met Office, and funded by EPSRC.
Co-authors
- [Sam Hatfield](https://www.ecmwf.int/en/about/media-centre/news/2020/accelerating-weather-forecasting-models-using-reduced-precision), European Centre for Medium-Range Weather Forecasts, Reading, UK
- [Matteo Croci](https://www.maths.ox.ac.uk/people/matteo.croci), Mathematical Institute, University of Oxford, UK
- [Peter Düben](https://www.ecmwf.int/en/about/who-we-are/staff-profiles/peter-dueben), European Centre for Medium-Range Weather Forecasts, Reading, UK
- [Tim Palmer](https://www2.physics.ox.ac.uk/contacts/people/palmer), University of Oxford, UK
false
https://pretalx.com/juliacon2021/talk/E7HKVW/
https://pretalx.com/juliacon2021/talk/E7HKVW/feedback/
Purple
WaterLily.jl: Real-time fluid simulation in pure Julia
Lightning talk
2021-07-30T13:10:00+00:00
13:10
00:10
WaterLily.jl is a new fluid dynamics simulation package written in pure Julia to take advantage of its speed and its active modelling, linear algebra, and machine learning communities. This talk will give an overview of the simulation approach, detail some of the Julia-specific aspects of the code (CartesianIndices for multidimensional algorithms and JuliaDiff for distance-function-based geometries), present a few examples, and discuss the future goals for the package.
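The CartesianIndices style mentioned above can be sketched as follows (illustrative plain Julia, not WaterLily's actual kernels): a centered finite-difference Laplacian whose code is identical for any dimension N.

```julia
function laplacian!(out::AbstractArray{T,N}, a::AbstractArray{T,N}) where {T,N}
    R = CartesianIndices(a)
    I1, IN = first(R), last(R)
    for I in (I1 + oneunit(I1)):(IN - oneunit(IN))   # interior points only
        acc = zero(T)
        for d in 1:N
            # unit step along dimension d
            δ = CartesianIndex(ntuple(i -> i == d ? 1 : 0, N))
            acc += a[I + δ] - 2a[I] + a[I - δ]
        end
        out[I] = acc
    end
    return out
end

a = rand(16, 16); out = zeros(16, 16)
laplacian!(out, a)   # the same code runs unchanged on 1D, 2D, or 3D arrays
```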
juliacon2021-9432-waterlily-jl-real-time-fluid-simulation-in-pure-julia
/media/juliacon2021/submissions/RPBHWE/Screen_Shot_2021-03-04_at_18.52.45_K3lYLLM.png
Gabriel Weymouth
en
false
https://pretalx.com/juliacon2021/talk/RPBHWE/
https://pretalx.com/juliacon2021/talk/RPBHWE/feedback/
Purple
New tools to solve PDEs in Julia with Gridap.jl
Lightning talk
2021-07-30T13:20:00+00:00
13:20
00:10
In this talk, we explore the novel capabilities of Gridap to solve Partial Differential Equations (PDEs) in Julia. This includes new features like a high-level API to write the PDE weak form with a syntax almost identical to the math notation, support for automatic differentiation, and simulation of PDEs on manifolds and domains of mixed geometrical dimensions. We will showcase these techniques with representative applications and performance comparisons against codes implemented in C/C++.
juliacon2021-9713-new-tools-to-solve-pdes-in-julia-with-gridap-jl
Francesc VerdugoEric NeivaOriol ColomesSantiago Badia
en
Gridap is a new, open-source, finite element (FE) library implemented in the Julia programming language. The main goal of Gridap is to adopt a more modern programming style than existing FE applications written in C/C++ or Fortran in order to simplify the simulation of challenging problems in science and engineering and improve productivity in the research of new discretization methods. The library is a feature-rich general-purpose FE code able to solve a wide range of partial differential equations (PDEs), including linear, nonlinear, and multi-physics problems. Gridap is extensible and modular. One can implement new FE spaces, new reference elements, and use external mesh generators, linear solvers, and visualization tools. In addition, it blends perfectly well with other packages of the Julia package ecosystem, since Gridap is implemented 100% in Julia.
One of the distinctive features of the library is a high-level API allowing one to simulate complex PDEs with very few lines of code. This API makes it possible to write the PDE weak form in a syntax almost identical to the mathematical notation. In some sense, the high-level API of Gridap resembles that of FE codes based on symbolic domain-specific languages like UFL in FEniCS, but, in contrast, Gridap does not rely on a compiler of variational forms or C/C++ code generation facilities. Instead, the library takes advantage of the Julia JIT compiler to generate efficient machine code for the particular problem the user wants to solve, which makes Gridap much easier to maintain and extend.
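A hedged sketch of this weak-form syntax, closely following the documented Poisson tutorial (details may vary between Gridap versions):

```julia
using Gridap

# Poisson problem -Δu = f on the unit square with homogeneous Dirichlet BCs.
model = CartesianDiscreteModel((0, 1, 0, 1), (10, 10))
V  = TestFESpace(model, ReferenceFE(lagrangian, Float64, 1);
                 dirichlet_tags="boundary")
U  = TrialFESpace(V, 0.0)
Ω  = Triangulation(model)
dΩ = Measure(Ω, 2)                    # quadrature of degree 2

f(x)    = 1.0
a(u, v) = ∫( ∇(v) ⋅ ∇(u) )*dΩ        # bilinear form, close to math notation
l(v)    = ∫( v * f )*dΩ              # linear form

op = AffineFEOperator(a, l, U, V)
uh = solve(op)                        # the discrete FE solution
```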
The Gridap project was initially presented at last year's JuliaCon. Since then, a number of important new features have been added, including an enhanced syntax for writing the PDE weak form, the support of more PDE types, and the support of more numerical techniques. At JuliaCon 2021, we would like to showcase these updates via a set of representative use cases and challenging applications such as fluid-structure interaction problems.
false
https://pretalx.com/juliacon2021/talk/HFHLDS/
https://pretalx.com/juliacon2021/talk/HFHLDS/feedback/
Purple
What's new in ITensors.jl
Talk
2021-07-30T13:30:00+00:00
13:30
00:30
Tensor networks encapsulate a large class of low rank decompositions of very high order -- potentially infinite order -- tensors. They have found a wide variety of uses in physics, chemistry, and data science. ITensors.jl is a high performance Julia library with a unique memory independent interface that makes it easy to use and develop tensor network algorithms. In this talk, I will give an overview of the library and new features that have been added since its release last year.
juliacon2021-9922-what-s-new-in-itensors-jl
/media/juliacon2021/submissions/39HB9T/ITensor_logo_zsz5Ea7.png
Matthew Fishman
en
At JuliaCon 2019, we gave an early preview of ITensors.jl, a ground-up pure-Julia rewrite of ITensor, a high-performance C++ library for using and developing tensor network algorithms. ITensors.jl v0.1 was officially released in May 2020. Since then, there has been a lot of development of the library as well as a variety of spinoff libraries, such as ITensorsGPU.jl, which adds a GPU backend for tensor operations; ITensorsVisualization.jl for visualizing tensor networks; PastaQ.jl for using tensor networks to simulate and analyze quantum computers; ITensorGaussianMPS.jl for creating tensor networks of noninteracting quantum systems; as well as more experimental libraries like ITensorsGrad.jl, which adds automatic differentiation support, and ITensorInfiniteMPS.jl for working with infinite tensor networks. In addition, many advanced features have been added to ITensors.jl and its underlying sparse tensor library NDTensors.jl, such as multithreaded block-sparse tensor contractions, alternative dense contraction backends like TBLIS, contraction sequence optimization, and more. In this talk, I plan to give an overview of the current libraries and capabilities as well as lay out a roadmap for where the Julia ITensor ecosystem is heading.
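As a minimal sketch of the memory-independent interface mentioned above (names follow the documented API at the time of the talk and may have changed since), contraction is driven by shared indices rather than axis positions:

```julia
using ITensors

i, j, k = Index(2, "i"), Index(3, "j"), Index(4, "k")
A = randomITensor(i, j)
B = randomITensor(j, k)
C = A * B   # contracts over the shared index j; C carries indices i and k
```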
false
https://pretalx.com/juliacon2021/talk/39HB9T/
https://pretalx.com/juliacon2021/talk/39HB9T/feedback/
Purple
Applied Measure Theory for Probabilistic Modeling
Talk
2021-07-30T19:00:00+00:00
19:00
00:30
We'll give an overview of MeasureTheory.jl, describing some of the advantages relative to Distributions.jl and some applications in probabilistic modeling.
juliacon2021-9579-applied-measure-theory-for-probabilistic-modeling
Chad Scherrer
en
We have several goals for MeasureTheory.jl:
- Better performance than Distributions.jl, because normalizing constants can be deferred
- Minimal type constraints, for example allowing symbolic manipulations
- Autodiff-friendly code
- Multiple parameterizations for a given measure
- A consistent interface, especially important for probabilistic programming
- Composability, to make it easy to build new measures from existing ones
- Fall-back to Distributions.jl when needed
While the library is still in its early stages, we're making good progress on all fronts. We hope this can become the library of choice as a basis for probabilistic modeling in Julia, and we're excited to help the Julia community get involved in development.
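The first goal, deferring normalizing constants, can be illustrated in plain Julia with hypothetical helper names (this is not the package's API):

```julia
# MCMC-style accept/reject steps only need log-density *differences*, so a
# normalizing constant like -log(σ√(2π)) can be deferred or dropped entirely.
unnormalized_logdensity(μ, σ, x) = -(x - μ)^2 / (2σ^2)

# The constant cancels in a Metropolis ratio between two proposals:
logratio = unnormalized_logdensity(0.0, 1.0, 0.3) -
           unnormalized_logdensity(0.0, 1.0, 0.7)
```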
false
https://pretalx.com/juliacon2021/talk/U7AM33/
https://pretalx.com/juliacon2021/talk/U7AM33/feedback/
Purple
FourierTools.jl | Working with the Frequency Space
Lightning talk
2021-07-30T19:30:00+00:00
19:30
00:10
FourierTools.jl aims to simplify working in Fourier/frequency space without losing efficiency.
We provide several convenient wrappers to speed up the common `fft(fftshift(x))` pattern.
The package also brings functionality to up- and downsample signals through sinc interpolation.
Furthermore, based on FFTs, it provides shearing, rotation, convolution and (sub-)pixel shift functions which can be applied to N-dimensional data efficiently.
juliacon2021-9696-fouriertools-jl-working-with-the-frequency-space
Rainer HeintzmannFelix Wechsler
en
Fourier space is commonly used for convolution operations, as the Fast Fourier Transform (FFT) is, as its name suggests, fast: it runs in O(N log N). The FFT algorithm typically produces data in a mangled order that makes it difficult to apply functions to directly. `fftshift` is a way to deal with this but involves data copies.
Based on the packages ShiftedArrays.jl and PaddedViews.jl, the FourierTools.jl package implements views of the results of the FFTW routines `fft` and `rfft` and their inverses `ifft` and `irfft`, including the respective `fftshift` operations, but implemented as views rather than copies of the data. In notable difference to FFTViews.jl, indexing is kept the same as for ordinary arrays. This helps with seamless integration across packages.
To implement an FFT-based `resample` operation for real-valued data, a new view derived from `AbstractArray` is introduced, handling the potential copy and addition operations for even-sized arrays needed to enforce the real-valuedness of the corresponding real-space data (`select_region_ft`). It has been [discussed](https://discourse.julialang.org/t/sinc-interpolation-based-on-fft/52512) in the community whether such an operation is necessary. Referring to this discussion, we argue that the Fourier-space operations cannot be replaced by casting to `real`, since the latter violates Parseval's theorem.
In addition to the `resample` operation, `FourierTools.jl` also provides a tool for sub-pixel shifting based on FFTs. Further algorithms like shearing, sub-pixel shifting and rotation can also be implemented via the Fourier shift theorem, and due to the generality of the FFT these can be applied to N-dimensional datasets efficiently.
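The shifted-FFT pattern described above can be sketched with plain FFTW.jl (the wrapper names here are illustrative, not necessarily FourierTools' exports):

```julia
using FFTW

# Centered transforms: the zero-frequency component sits in the middle,
# without the user juggling `fftshift` manually.
ffts(x)  = fftshift(fft(ifftshift(x)))    # centered forward transform
iffts(x) = fftshift(ifft(ifftshift(x)))   # centered inverse transform

x = randn(64)
x ≈ real.(iffts(ffts(x)))   # round-trips up to floating-point error
```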
false
https://pretalx.com/juliacon2021/talk/J8CRXY/
https://pretalx.com/juliacon2021/talk/J8CRXY/feedback/
Purple
IntervalLinearAlgebra.jl: Linear algebra done rigorously
Lightning talk
2021-07-30T19:40:00+00:00
19:40
00:10
Solving linear systems is central in most computational domains, from mathematics to engineering applications. This talk will introduce IntervalLinearAlgebra.jl: a package written in Julia to solve linear systems, with interval or real coefficients, rigorously. That is, producing a set guaranteed to contain the true solution of the original problem. This can be applied to solve problems involving uncertainty propagation or perform self-validated computations.
juliacon2021-11679-intervallinearalgebra-jl-linear-algebra-done-rigorously
Luca Ferranti
en
Linear systems arise in practically all domains involving numerical computations. While several efficient floating-point algorithms are available, the final output has no information about how close to the true solution the computed result is. To overcome this, interval arithmetic offers a framework to perform rigorous computations, where real numbers are replaced by intervals guaranteed to contain the true value.
The talk will introduce my Google Summer of Code (GSoC) project: the development of IntervalLinearAlgebra.jl, a package to solve both interval and real linear systems rigorously. During the talk, I will highlight the main features of the package. First, I will give an overview of interval linear systems and demonstrate how to use the package to determine the exact solution set, showing that even in lower dimensions it can have complex non-convex shapes.
Motivated by this, I will then show how to determine a tight enclosure of the solution of an interval linear system, presenting the several solution strategies implemented in the package. The presented algorithms will be compared in terms of accuracy and computation time, highlighting the pros and cons of each. I will also discuss the lessons learnt during the development process as well as the roadmap beyond GSoC.
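A hedged sketch of solving an interval linear system (API details may differ between package versions; `solve` and the `..` interval constructor are assumed here):

```julia
using IntervalArithmetic, IntervalLinearAlgebra

# [A]x = [b] with interval coefficients: the result is an interval vector
# guaranteed to enclose the solution of every real system whose
# coefficients lie inside the given intervals.
A = [(2..4) (-1..1); (-1..1) (2..4)]
b = [-2..2, -2..2]
x = solve(A, b)   # rigorous enclosure of the (generally non-convex) solution set
```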
false
https://pretalx.com/juliacon2021/talk/WA7BP8/
https://pretalx.com/juliacon2021/talk/WA7BP8/feedback/
Purple
Global Sensitivity Analysis for SciML models in Julia
Lightning talk
2021-07-30T19:50:00+00:00
19:50
00:10
The majority of scientific modelling workflows involve global sensitivity analysis as an intermediate step. It is used primarily in two stages: either before parameter estimation, to simplify the fitting problem by fixing unimportant parameters, or afterwards, for analysis of the input parameters' influence on the output. GlobalSensitivity.jl is a package in the SciML ecosystem that provides a full suite of GSA methods that can be of great utility to many practitioners.
juliacon2021-9450-global-sensitivity-analysis-for-sciml-models-in-julia
Vaibhav Dixit
en
Global Sensitivity Analysis (GSA) quantifies the influence of input parameters on the model output. Since it answers some of the core questions we wish to address with models, such as identifying the most influential parameters, GSA is an essential part of the modelling workflow. GlobalSensitivity.jl [1] is a generalized GSA package with built-in support for parallelism, integrated with the pharmaceutical modeling and simulation platform Pumas [2]. Our implementation of GSA for differential-equation-based mechanistic pharmacometrics, PBPK and QSP models gives order-of-magnitude speedups over the GSA capabilities of other languages. Currently GlobalSensitivity.jl supports the Sobol, Morris, eFAST, regression-based, DGSM, Delta Moment, EASI, Fractional Factorial and RBD-FAST GSA methods.
The talk covers running a GSA workflow on a Lotka-Volterra differential equation written in the DifferentialEquations.jl interface.
[1] url: https://gsa.sciml.ai/stable/.
[2] url: https://github.com/PumasAI/PumasTutorials.jl/blob/master/tutorials/pkpd/hcvgsa.jmd
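A hedged sketch of the `gsa` entry point on a toy function (the talk uses a Lotka-Volterra ODE instead; argument details may differ between versions):

```julia
using GlobalSensitivity

# Model output as a function of a parameter vector p.
f(p) = p[1] + 2p[2] + p[1] * p[2]

# Per-parameter lower/upper bounds.
bounds = [[0.0, 1.0], [0.0, 1.0]]

# Elementary-effects (Morris) screening of parameter influence.
res = gsa(f, Morris(), bounds)
```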
false
https://pretalx.com/juliacon2021/talk/CLKYFN/
https://pretalx.com/juliacon2021/talk/CLKYFN/feedback/
Purple
ZigZagBoomerang.jl - parallel inference and variable selection
Talk
2021-07-30T20:00:00+00:00
20:00
00:30
[ZigZagBoomerang.jl](https://t.co/5MIOOlhGjZ) provides piecewise deterministic Monte Carlo methods. They have the same goal as classical Markov chain Monte Carlo methods: to sample, for example, from the posterior distribution in a Bayesian model. The difference is that the distribution is explored through the continuous movement of a particle rather than one point at a time. This provides new angles of attack: I showcase a multithreaded sampler and a high-dimensional variable selection sampler.
juliacon2021-9936-zigzagboomerang-jl-parallel-inference-and-variable-selection
/media/juliacon2021/submissions/LUVWJZ/Screenshot_2021-04-01_at_10.48.34_TUdRr5f.png
Moritz Schauer
en
## [ZigZagBoomerang.jl](https://github.com/mschauer/ZigZagBoomerang.jl) - parallel inference and variable selection
ZigZagBoomerang.jl provides piecewise deterministic Monte Carlo (PDMC) methods. They have the same goal as classical Markov chain Monte Carlo methods: to sample from a probability distribution, for example the posterior distribution in a Bayesian model. The difference is that the distribution is explored through the continuous movement of a particle rather than one point at a time. The particle changes direction at random times and otherwise moves on deterministic trajectories. For example, it may move with constant velocity along a line (see the picture). The random direction changes are calibrated such that the trajectory of the particle samples the target distribution; in general, the particle is turned back (reflected) when moving too far into the tails of the distribution. From the trajectory, the quantities of interest, such as the posterior mean and standard deviation, can be estimated.
The decision of whether to change direction in one coordinate only requires the evaluation of a partial derivative which depends on few coordinates – the neighbourhood of the coordinate in the Markov blanket. That allows exploiting multiple processor cores using Julia's multithreaded parallelism (or other forms of parallel computing). The difference between threaded Gibbs sampling and threaded PDMP is that in Gibbs sampling part of the state is fixed, while the other part is changed. Here, the particle never ceases to move, and it is the decisions about direction changes which happen in parallel on subsets of coordinates. Metaphorically speaking this is the difference between walking, where one foot is on the ground all the time, and running, where both feet are in the air between steps.
Because the particle moves on a deterministic trajectory between the times of random events, one can determine exactly when the process would leave an area of interest. This makes it possible to sample distributions with bounded support, or to spend additional time in a lower-dimensional subset of the space, which is the basis of variable selection with sticky piecewise deterministic Markov process (PDMP) samplers in high-dimensional sparse inference problems.
In the presentation I showcase a multithreaded sampler and high-dimensional variable selection with sticky PDMPs.
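The mechanics are easy to sketch in plain Julia. Below is an illustrative one-dimensional zig-zag sampler for a standard normal target, written from scratch; it is not the ZigZagBoomerang.jl API, whose samplers handle general targets, sparsity and parallelism. For U(x) = x²/2 the reflection rate λ(s) = max(0, v(x + vs)) can be integrated and inverted in closed form, so event times are sampled exactly.

```julia
# Illustrative 1D zig-zag sampler targeting a standard normal N(0, 1).
# This is a from-scratch sketch of the idea, not the ZigZagBoomerang.jl API.
# For U(x) = x^2/2 the event rate λ(s) = max(0, v*(x + v*s)) inverts exactly:
# τ = -v*x + sqrt(max(v*x, 0)^2 + 2E) with E ~ Exp(1).
function zigzag_normal(T::Float64; x0 = 0.0, v0 = 1.0)
    x, v, t = x0, v0, 0.0
    m1, m2 = 0.0, 0.0               # accumulated ∫x dt and ∫x² dt
    while t < T
        E = -log(rand())            # Exp(1) draw
        τ = -v * x + sqrt(max(v * x, 0.0)^2 + 2E)
        τ = min(τ, T - t)           # don't overshoot the time horizon
        # integrate x(s) = x + v*s and x(s)^2 over the segment [0, τ]
        m1 += x * τ + v * τ^2 / 2
        m2 += x^2 * τ + x * v * τ^2 + τ^3 / 3
        x += v * τ
        v = -v                      # reflection: flip the velocity
        t += τ
    end
    return m1 / T, m2 / T           # time-average estimates of E[X], E[X²]
end
```

Running `zigzag_normal(1e5)` returns time averages close to the true moments E[X] = 0 and E[X²] = 1, illustrating that the continuous trajectory itself carries the Monte Carlo estimates.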
### Links
* [ZigZagBoomerang.jl](https://github.com/mschauer/ZigZagBoomerang.jl)
* Discourse Announcement: [[ANN] `ZigZagBoomerang.jl`](https://discourse.julialang.org/t/ann-zigzagboomerang-jl/57287)
* Joris Bierkens's [Overview of Piecewise Deterministic Monte Carlo](https://diamweb.ewi.tudelft.nl/~joris/pdmps.html)
### Literature
1. Joris Bierkens, Paul Fearnhead, Gareth Roberts: The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data. *The Annals of Statistics* 47(3), 2019, pp. 1288-1320. [arXiv:1607.03188](https://arxiv.org/abs/1607.03188)
2. Joris Bierkens, Sebastiano Grazzi, Kengo Kamatani, Gareth Roberts: The Boomerang Sampler. *ICML 2020*. [arXiv:2006.13777](https://arxiv.org/abs/2006.13777)
3. Joris Bierkens, Sebastiano Grazzi, Frank van der Meulen, Moritz Schauer: A piecewise deterministic Monte Carlo method for diffusion bridges. *Statistics and Computing*, 2021 (to appear). [arXiv:2001.05889](https://arxiv.org/abs/2001.05889)
4. Joris Bierkens, Sebastiano Grazzi, Frank van der Meulen, Moritz Schauer: Sticky PDMP samplers for sparse and local inference problems. 2020. [arXiv:2103.08478](https://arxiv.org/abs/2103.08478)
false
https://pretalx.com/juliacon2021/talk/LUVWJZ/
https://pretalx.com/juliacon2021/talk/LUVWJZ/feedback/
BoF/Mini Track
Julia for Biologists
Birds of Feather
2021-07-30T12:30:00+00:00
12:30
01:30
This session is tailored to anyone with a general interest in Julia for Biologists. Join us to meet like-minded people, exchange thoughts and develop ideas on Julia and its application in the biological sciences.
The session has 3 parts:
1. Who is in the room? ~ 15 min
2. Presentation “A perspective: Julia for Biologists” by E. Roesch ~ 25 min
3. Discussion ~ 50 min
A recorded version of part 2 will be made available afterwards, but the session itself will NOT be recorded.
juliacon2021-9918-julia-for-biologists
/media/juliacon2021/submissions/FBTSYM/cartoon_caBWQXM.png
Elisabeth Roesch
en
“Birds of a Feather flock together” — Whether you see yourself as a biologist, software developer, mathematician or anything in between, the objective of this session is to provide a welcoming and discussion-stimulating environment to strengthen the Julia community in the biological sciences. Independent of your Julia skill level, we are curious to hear what brings you to this area, what you love about it, and where you feel there is room for improvement.
true
https://pretalx.com/juliacon2021/talk/FBTSYM/
https://pretalx.com/juliacon2021/talk/FBTSYM/feedback/
BoF/Mini Track
Virtual posters session
Virtual Poster
2021-07-30T16:30:00+00:00
16:30
01:30
The virtual poster session will include 100 posters hosted on Gather.Town. Link to all the posters: https://juliacon.org/2021/posters/
juliacon2021-11725-virtual-posters-session
en
false
https://pretalx.com/juliacon2021/talk/NBER8M/
https://pretalx.com/juliacon2021/talk/NBER8M/feedback/
BoF/Mini Track
Discussing Gender Diversity in the Julia Community
Birds of Feather
2021-07-30T19:00:00+00:00
19:00
01:30
Julia Gender Inclusive is an initiative that came to life from a focus group that has been working on diversity in the Julia community for the last year. We are a group of people whose gender is underrepresented in the community, and we aim to provide a supportive space for all gender minorities in the Julia community. In this BoF session, we wish to discuss what we are doing and what we hope to do in the future, together with other people whose gender is underrepresented and allies willing to support us.
juliacon2021-9853-discussing-gender-diversity-in-the-julia-community
Laura Ventosa, Kim Louisa Auth, Xuan (Tan Zhi Xuan)
en
The objective of this talk is to find more people who feel their gender is underrepresented within the Julia community, or who want to support people who feel so. We aim to create a safe and fruitful discussion about gender diversity and new actions we can take from Julia Gender Inclusive.
false
https://pretalx.com/juliacon2021/talk/RAFSMK/
https://pretalx.com/juliacon2021/talk/RAFSMK/feedback/
JuMP Track
Modelling Australia's National Electricity Market with JuMP
Lightning talk
2021-07-30T12:30:00+00:00
12:30
00:10
I will discuss challenges, techniques and design choices in developing a flexible JuMP-based modelling workflow as a decision tool for a major Australian transmission network operator's development planning across diverse scenarios.
juliacon2021-10870-modelling-australia-s-national-electricity-market-with-jump
James D Foster
en
false
https://pretalx.com/juliacon2021/talk/TMXKDM/
https://pretalx.com/juliacon2021/talk/TMXKDM/feedback/
JuMP Track
AnyMOD.jl: A Julia package for creating energy system models
Lightning talk
2021-07-30T12:40:00+00:00
12:40
00:10
AnyMOD.jl is a Julia framework for creating large-scale energy system models. It applies a novel graph-based approach that was developed to address the challenges in modeling high levels of intermittent generation and sectoral integration. To enable modelers to work more efficiently, the framework provides features that help to visualize results, streamline the read-in of input data, and rescale optimization problems to increase solver performance.
juliacon2021-9711-anymod-jl-a-julia-package-for-creating-energy-system-models
/media/juliacon2021/submissions/EUVCJY/schriftzug_plus_logo_XiOFZKx.png
Leonard Göke
en
false
https://pretalx.com/juliacon2021/talk/EUVCJY/
https://pretalx.com/juliacon2021/talk/EUVCJY/feedback/
JuMP Track
Power Market Tool (POMATO)
Lightning talk
2021-07-30T12:50:00+00:00
12:50
00:10
In this talk, we present the open-source Power Market Tool (POMATO), which has been designed to study capacity allocation and congestion management policies of zonal electricity markets, especially flow-based market coupling.
juliacon2021-10876-power-market-tool-pomato-
Richard Weinhold
en
Europe's increase in electricity production from renewable energy resources (RES) in combination with a significant decline of conventional generation capacity has spawned political and academic interest in the transmission system's ability to accommodate this transition. Central to this discussion is the efficiency of capacity allocation and congestion management (CACM) policies between and within electricity market areas that are interconnected by shared and synchronized transmission infrastructure. To facilitate unrestricted cross-border electricity trading in the presence of finite physical transmission capacity, European system and electricity market operators inaugurated flow-based market coupling (FBMC).
FBMC is a coordinated multi-stage process that requires detailed forecasts and network models, which are typically not or only partially disclosed by the system operators. Academic publications that synthesize FBMC in model frameworks agree on a three-step process – D-2 (base case), D-1 (day-ahead) and D-0 (redispatch) – but differ greatly in some core assumptions. Further, FBMC effectiveness for a future renewable-dominant generation mix is typically overlooked in the current literature.
The open-source Power Market Tool (POMATO) has been designed to study CACM policies of zonal electricity markets, especially flow-based market coupling (FBMC). For this purpose, POMATO implements methods for the analysis of simultaneous zonal market clearing, nodal (N-k secure) power flow computation for capacity allocation, and multi-stage market clearing with adaptive grid representation and redispatch. Additionally, POMATO includes risk-aware optimal power flow via chance constraints to internalize forecast uncertainty during the market clearing process. All optimization features rely on Julia/JuMP, leveraging its accessibility, computational performance, and solver interfaces. The Julia code is embedded in a Python front-end, providing flexible and easily maintainable data processing and user interaction features.
false
https://pretalx.com/juliacon2021/talk/9SVMZ3/
https://pretalx.com/juliacon2021/talk/9SVMZ3/feedback/
JuMP Track
A Brief Introduction to InfrastructureModels
Talk
2021-07-30T13:00:00+00:00
13:00
00:30
The design, operation and resilience of critical infrastructure networks play a foundational role in modern society. One open question is how artificial intelligence can provide decision support to maintain and adapt critical infrastructures to a changing world. This talk provides an overview of InfrastructureModels, a software foundation developed at Los Alamos National Laboratory for critical infrastructure analysis and optimization, to help explore this question.
juliacon2021-9217-a-brief-introduction-to-infrastructuremodels
/media/juliacon2021/submissions/XGCJBA/InfrastructureModels_ymZnRCd.png
Carleton Coffrin
en
This talk will begin by motivating the need for optimization of the design and operations of critical infrastructure networks and discuss some of the challenges facing future infrastructure systems. It will then highlight why Julia and JuMP provide an ideal foundation for building critical infrastructure analysis capabilities. The talk will finish with an overview of the design and use of Los Alamos National Laboratory's InfrastructureModels packages, using optimization of electric power transmission networks as a specific example.
false
https://pretalx.com/juliacon2021/talk/XGCJBA/
https://pretalx.com/juliacon2021/talk/XGCJBA/feedback/
JuMP Track
UnitCommitment.jl: Security-Constrained Unit Commitment in JuMP
Talk
2021-07-30T13:30:00+00:00
13:30
00:30
In this talk, we introduce UnitCommitment.jl, an open-source Julia/JuMP optimization package which aims to eliminate some of the roadblocks researchers typically face when developing and evaluating new solution methods for the Security-Constrained Unit Commitment (SCUC) problem.
juliacon2021-10871-unitcommitment-jl-security-constrained-unit-commitment-in-jump
Alinson Santos Xavier
en
The Security-Constrained Unit Commitment (SCUC) problem is one of the most fundamental and challenging problems in power systems optimization, being solved daily by Independent System Operators (ISOs) to clear the day-ahead electricity markets. The package provides: (i) an extensible and fully-documented JSON-based data specification format for SCUC, developed in collaboration with ISOs, which can help researchers to share data sets across institutions; (ii) a diverse collection of large-scale benchmark instances, collected from the literature, converted into a common data format, and extended using data-driven methods to make them more challenging and realistic; (iii) a Julia/JuMP implementation of state-of-the-art Mixed-Integer Linear Programming formulations and solution methods for the problem; and (iv) a suite of automated benchmark scripts to accurately evaluate the performance impact of newly proposed methods. The package is being developed as part of the "IEEE Task Force on Solving Large Scale Optimization Problems in Electricity Market and Power System Application".
false
https://pretalx.com/juliacon2021/talk/XNVLKH/
https://pretalx.com/juliacon2021/talk/XNVLKH/feedback/
JuMP Track
Linear programming by first-order methods
Talk
2021-07-30T19:00:00+00:00
19:00
00:30
We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplication.
juliacon2021-10875-linear-programming-by-first-order-methods
Miles Lubin
en
PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP compares favorably with SCS on medium-sized instances when solving both to moderate and high accuracy. Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP outperforms a commercial LP solver. The prototype of PDLP is written in Julia and available at https://github.com/google-research/FirstOrderLp.jl.
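The core PDHG iteration underlying PDLP is compact enough to sketch in plain Julia. The snippet below is a bare-bones illustration on a toy equality-form LP, with fixed step sizes and ergodic averaging only; it omits every PDLP enhancement listed above (preconditioning, presolve, adaptive steps, restarts), and the function name is mine, not from FirstOrderLp.jl.

```julia
using LinearAlgebra

# Bare-bones PDHG for the LP  min c'x  s.t.  A x = b, x ≥ 0,
# via the saddle point  min_{x≥0} max_y  c'x + y'(b - A x).
# Sketch of the core iteration only; PDLP itself adds presolve,
# diagonal preconditioning, adaptive step sizes and restarts.
function pdhg_lp(c, A, b; iters = 50_000)
    m, n = size(A)
    x, y = zeros(n), zeros(m)
    η = σ = 0.9 / opnorm(A)          # step sizes with η*σ*‖A‖² < 1
    x̄ = zeros(n)                     # ergodic (averaged) primal iterate
    for k in 1:iters
        x⁺ = max.(0.0, x .- η .* (c .- A' * y))   # projected primal step
        y .+= σ .* (b .- A * (2 .* x⁺ .- x))      # dual step with extrapolation
        x = x⁺
        x̄ .+= (x .- x̄) ./ k          # running average of the primal iterates
    end
    return x̄
end

# Tiny example:  min x₁ + 2x₂  s.t.  x₁ + x₂ = 1, x ≥ 0  (optimum x = (1, 0))
x = pdhg_lp([1.0, 2.0], [1.0 1.0], [1.0])
```

The only work per iteration is the two matrix-vector products with `A` and `A'`, which is exactly what lets this family of methods scale to problems far beyond the reach of factorization-based simplex or interior point solvers.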
false
https://pretalx.com/juliacon2021/talk/ANYQTY/
https://pretalx.com/juliacon2021/talk/ANYQTY/feedback/
JuMP Track
Cerberus: A solver for mixed-integer programs with disjunctions
Lightning talk
2021-07-30T19:30:00+00:00
19:30
00:10
Disjunctive programming (DP) is a powerful framework for modeling complex logic in optimization problems. In this talk, we present Cerberus, a prototype MIP solver that treats disjunctive constraints as first-class objects.
juliacon2021-10868-cerberus-a-solver-for-mixed-integer-programs-with-disjunctions
Joey Huchette
en
Typically, DP problems are reformulated as mixed-integer programming (MIP) problems and then passed to a MIP solver. Crucially, the MIP solver only receives this "flattened" MIP reformulation, and not the original, rich DP structure. We discuss how this structural information can be used within an LP-based branch-and-cut algorithm for dynamic reformulation and domain propagation without breaking incremental LP solves, a crucial ingredient for the success of modern solvers. We focus in particular on how the JuMP ecosystem facilitates the rapid development of such a solver, which depends heavily on advanced functionality from both the underlying solvers and the modeling interface.
false
https://pretalx.com/juliacon2021/talk/REKLVV/
https://pretalx.com/juliacon2021/talk/REKLVV/feedback/
JuMP Track
HiGHS
Lightning talk
2021-07-30T19:40:00+00:00
19:40
00:10
In this talk we present HiGHS, a suite of high-performance open-source optimization solvers written in C++. HiGHS has simplex and interior point solvers for LP, as well as MIP and QP solvers. HiGHS can be called from Julia via the HiGHS.jl package.
juliacon2021-10980-highs
Ivet Galabova
en
false
https://pretalx.com/juliacon2021/talk/FHWUR9/
https://pretalx.com/juliacon2021/talk/FHWUR9/feedback/
JuMP Track
vOptSolver: an ecosystem for multi-objective linear optimization
Lightning talk
2021-07-30T19:50:00+00:00
19:50
00:10
vOptSolver is an open-source ecosystem written in the Julia language for modeling and solving multi-objective linear optimization problems (mixed-integer problems, continuous problems, integer problems, and combinatorial problems). Currently vOptSolver is composed of two independent packages, vOptGeneric.jl and vOptSpecific.jl, registered as Julia packages since 2017. The source code, examples, documentation and tutorial are available at https://github.com/vOptSolver.
juliacon2021-9803-voptsolver-an-ecosystem-for-multi-objective-linear-optimization
Xavier Gandibleux
en
vOptSolver aims to be software for scientists and practitioners. It has been conceived to be intuitive for various profiles of users (mathematicians, computer scientists, and engineers), corresponding to needs encountered in research and development (open-source code available for the design of new algorithms), decision-making (ready-to-use methods and algorithms for solving optimization problems), and education (an environment for teaching and practicing the theory and algorithms).
The optimization problem to solve is built by formulating a model with the algebraic modeling language JuMP, extended to support multi-objective models, for non-structured optimization problems, or by calling the corresponding API for structured optimization problems. The problem data and the optimization results are stored in and handled by the data structures and functionality of Julia.
vOptSolver integrates several generic and specific algorithms from the literature for computing the exact set of non-dominated points. It also returns the efficient solutions corresponding to this set. The generic algorithms make use of a MIP solver, while the specific algorithms call problem-dedicated algorithms.
References:
I. Dunning, J. Huchette, M. Lubin, JuMP: A Modeling Language for Mathematical Optimization, SIAM Review 59 (2) (2017) 295–320.
B. Legat, O. Dowson, J. D. Garcia, M. Lubin, MathOptInterface: a data structure for mathematical optimization problems (2020). arXiv:2002.03447
X. Gandibleux, G. Soleilhac, A. Przybylski, S. Ruzika, vOptSolver: an open source software environment for multiobjective mathematical optimization, IFORS2017: 21st Conference of the International Federation of Operational Research Societies. July 17-21, 2017. Quebec City (Canada). (2017).
false
https://pretalx.com/juliacon2021/talk/TP88SL/
https://pretalx.com/juliacon2021/talk/TP88SL/feedback/
JuMP Track
A Derivative-Free Local Optimizer for Multi-Objective Problems
Talk
2021-07-30T20:00:00+00:00
20:00
00:30
In real-world applications, optimization problems might arise where there is more than one objective.
Additionally, some objectives could be computationally expensive to evaluate, with no gradient information available.
I present a derivative-free local optimizer (written in Julia) aimed at such problems. It employs a trust-region strategy and local surrogate models (e.g., polynomials or radial basis function models) to save function evaluations.
juliacon2021-9778-a-derivative-free-local-optimizer-for-multi-objective-problems
Manuel Berkemeier
en
I will revisit the basic concepts of multi-objective optimization and introduce the notion of Pareto optimality and Pareto criticality. Based on this idea, the steepest descent direction for multi-objective problems (MOPs) is derived. When used in conjunction with a trust region strategy, the steepest descent direction can be used to generate iterates converging to first-order critical points.
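For the case of two objectives, the multi-objective steepest descent direction is the negative of the minimum-norm element of the convex hull of the two gradients, and the minimizing convex combination has a closed form. The sketch below illustrates this; the function names and the toy objectives are my own, not part of the talk's solver.

```julia
using LinearAlgebra

# Multi-objective steepest descent direction for two objectives:
# d(x) = -argmin_{g ∈ conv{∇f₁(x), ∇f₂(x)}} ‖g‖, with the minimizing
# convex weight λ available in closed form for m = 2.
# ‖d(x)‖ = 0 exactly at Pareto critical points.
function steepest_descent_direction(g1::Vector, g2::Vector)
    denom = sum(abs2, g1 .- g2)
    λ = denom == 0 ? 1.0 : clamp(dot(g2, g2 .- g1) / denom, 0.0, 1.0)
    return -(λ .* g1 .+ (1 - λ) .* g2)
end

# Toy MOP: f₁ = ‖x - (1,0)‖²/2, f₂ = ‖x + (1,0)‖²/2; Pareto set = [-1,1]×{0}
∇f1(x) = [x[1] - 1, x[2]]
∇f2(x) = [x[1] + 1, x[2]]
d_origin = steepest_descent_direction(∇f1([0.0, 0.0]), ∇f2([0.0, 0.0]))
d_above  = steepest_descent_direction(∇f1([0.0, 1.0]), ∇f2([0.0, 1.0]))
```

At the origin the two gradients cancel inside the convex hull, so the direction is zero and the point is Pareto critical; at (0, 1) the direction points straight down toward the Pareto set, exactly the kind of descent step a trust-region iteration would take.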
Besides talking about the mathematical background, I want to describe how local surrogate models are constructed and how we use other available packages (JuMP, NLopt, DynamicPolynomials etc.) in our implementation.
Moreover, I will show results from a few numerical experiments demonstrating the efficiency of the approach, and talk a bit about how the local solver could be embedded in a global(ish) framework.
false
https://pretalx.com/juliacon2021/talk/Z8AJ9J/
https://pretalx.com/juliacon2021/talk/Z8AJ9J/feedback/