2.0
-//Pentabarf//Schedule//EN
PUBLISH
VK87Q3@@pretalx.com
-VK87Q3
GPU programming in Julia
en
en
20210720T140000
20210720T170000
3.00000
GPU programming in Julia
Julia has several packages for programming GPUs, each of which supports various programming models. In this workshop, we will demonstrate the use of three major GPU programming packages: CUDA.jl for NVIDIA GPUs, AMDGPU.jl for AMD GPUs, and oneAPI.jl for Intel GPUs. We will explain the various approaches for programming GPUs with these packages, ranging from generic array operations that focus on ease-of-use, to hardware-specific kernels for when performance matters.
Most of the workshop will be vendor-neutral, and the content will be available for all supported GPU back-ends. There will also be a part on vendor-specific tools and APIs.
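As an illustrative sketch of the vendor-neutral array style (not taken from the workshop materials), generic code written against `AbstractArray` runs unchanged on every back-end; only the array type changes:

```julia
# Illustrative sketch: on a GPU you would replace `Array` with CuArray
# (CUDA.jl), ROCArray (AMDGPU.jl) or oneArray (oneAPI.jl); the rest of the
# code stays identical.
a = collect(Float32, 1:10)
b = similar(a)
b .= 2 .* a .+ 1      # fused broadcast; on a GPU this compiles to one kernel
total = sum(b)        # reductions are likewise vendor-neutral
```

Hand-written kernels, covered later in the workshop, come in when this ease-of-use tier is not fast enough.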
Attendees will be able to follow along, but are recommended to have access to a suitable GPU for doing so. Material for this workshop can be found at https://github.com/maleadt/juliacon21-gpu_workshop
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/VK87Q3/
Green
Tim Besard
Julian P Samaroo
Valentin Churavy
PUBLISH
FXZXMB@@pretalx.com
-FXZXMB
DataFrames.jl 1.2 tutorial
en
en
20210720T140000
20210720T170000
3.00000
DataFrames.jl 1.2 tutorial
In this workshop an introduction to DataFrames.jl 1.2 will be presented.
The tutorial is targeted at people wanting to start using DataFrames.jl. However, it assumes that you have some experience in working with data frames in e.g. R or Python. The tutorial presents an example of doing a small data science project.
The topics covered are:
* creating a `DataFrame` object and getting basic information about it
* reading and writing data frames using [CSV.jl](https://github.com/JuliaData/CSV.jl) and [Arrow.jl](https://github.com/JuliaData/Arrow.jl)
* indexing and filtering
* sorting
* joining
* reshaping
* transforming columns and aggregation
* plotting
* building predictive models
* bootstrapping
All the materials used are available for download at https://github.com/bkamins/JuliaCon2021-DataFrames-Tutorial.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/FXZXMB/
Red
Bogumił Kamiński
PUBLISH
V3N73B@@pretalx.com
-V3N73B
Quantum Computing with Julia
en
en
20210721T140000
20210721T170000
3.00000
Quantum Computing with Julia
In this two part workshop we will use Amazon Braket with Julia to introduce attendees to the exciting world of quantum computing. Getting started in QC can be daunting if you’re not already an expert in physics or CS. We’ll spend the first part of the workshop getting acquainted with the different types of quantum hardware available today and some introductory algorithms, which we’ll run on real quantum computers and simulators. Then we’ll build upon this and begin exploring using quantum hardware to tackle machine learning and optimization problems.
In order to access the quantum hardware and simulators during the workshop, we’ll be using Amazon Braket, which is a fully managed quantum computing service that helps researchers and developers get started with the technology to accelerate research and discovery.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/V3N73B/
Green
Saravana Kumar
Katharine Hyatt
PUBLISH
A9KZCY@@pretalx.com
-A9KZCY
Statistics with Julia from the ground up
en
en
20210721T140000
20210721T170000
3.00000
Statistics with Julia from the ground up
This workshop accommodates data-scientists and statisticians who have experience with a language like R, but have not used Julia previously. In learning to use Julia, a contemporary "stats based" approach is taken, focusing on short scripts that achieve concrete goals. The primary focus is on statistical applications and packages. The Julia language is covered as a by-product of the applications. Thus, this workshop is much more of a *how to use Julia for stats* course than a *how to program in Julia* course. This approach may be suitable for statisticians and data-scientists who tend to do their day-to-day scripting with a data- and model-based approach - as opposed to a software development approach.
The topics covered include:
* Basic probability and Monte Carlo.
* Basics from the in-built Statistics package and the [StatsBase](https://juliastats.org/StatsBase.jl/stable/) package.
* Basic plotting and statistical plotting with [StatsPlots](https://github.com/JuliaPlots/StatsPlots.jl).
* Using the [Distributions](https://juliastats.org/Distributions.jl/latest/) package.
* (Basic) usage of the [Dataframes](https://dataframes.juliadata.org/stable/) package.
* Using the [GLM](https://juliastats.org/GLM.jl/stable/) package.
* Other useful resources and packages.
(Note that Julia has hundreds of statistical packages and we cannot cover them all in 3 hours).
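To give a flavor of the "short scripts that achieve concrete goals" style, here is a minimal Monte Carlo sketch using only Base and the built-in Statistics package (an illustrative example, not taken from the workshop materials):

```julia
using Statistics, Random

Random.seed!(0)                        # reproducibility
N = 10^6
x = rand(N) .+ rand(N)                 # sum of two independent uniforms
p̂ = count(>(1), x) / N                 # Monte Carlo estimate of P(X + Y > 1); exact value is 0.5
m, s = mean(x), std(x)                 # sample mean (≈ 1) and standard deviation (≈ √(1/6))
```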
Code snippets from [Statistics with Julia: Fundamentals for Data Science, Machine Learning and Artificial Intelligence](https://statisticswithjulia.org/) will be used in conjunction with smaller live constructed examples.
An extensive Jupyter notebook for the workshop together with data files is [here](https://github.com/yoninazarathy/JuliaCon2021-StatisticsWithJuliaFromTheGroundUp). You can install it to follow along.
If you don't have Julia with IJulia (Jupyter) installed, you can follow the instructions in [this video](https://www.youtube.com/watch?v=KJleqSITuRo).
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/A9KZCY/
Red
Yoni Nazarathy
PUBLISH
KK9KS7@@pretalx.com
-KK9KS7
A mathematical look at electronic structure theory
en
en
20210722T140000
20210722T170000
3.00000
A mathematical look at electronic structure theory
### Content
The material for the workshop is available at https://github.com/mfherbst/juliacon_dft_workshop.
I'll briefly introduce the setting of density-functional theory (DFT), and in particular show the equation system and its mathematical structure. With that we are in good shape to tackle the main part of the workshop, which will be devoted to discussing the numerical techniques used for solving it.
Our main tool in this workshop will be the [density-functional toolkit (DFTK)](https://dftk.org),
a state-of-the-art DFT code written in Julia (of course ;)). This code will allow us to consider a number of reduced problems that are more tractable to interactively explore, visualise and understand. In particular we will use DFTK to inspect what's going on while the DFT problem is being solved. With that knowledge at hand we'll try to code up some simple DFT solvers on our own. Due to the scaffolding DFTK provides, this is a fairly manageable task and (as a small bonus) the resulting algorithms could be directly applied to cutting-edge problems (if we're careful with performance issues).
Depending on our progress, I plan to cover the following topics:
- Problem setup: Mathematical structure of DFT
- Typical discretisation approaches: Gaussians versus plane waves
- Typical solution algorithms: Direct minimisation versus self-consistent field (SCF) iterations
- Numerical analysis of SCF problems
- Writing our own SCF, understanding why it badly fails and what we can do about it.
- Connections between the physical properties of matter and convergence properties of an SCF
### Assumed background
I'll do my best to make this workshop accessible to a broad range of people: those with a chemistry or physics background who always wanted to understand the maths behind DFT, as well as those with an understanding of PDEs / linear algebra who are interested in getting an idea of this challenging application domain.
I will assume you safely know your way around Julia.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/KK9KS7/
Green
Michael F. Herbst
PUBLISH
RS9B7Q@@pretalx.com
-RS9B7Q
Game development in Julia with GameZero.jl
en
en
20210722T140000
20210722T170000
3.00000
Game development in Julia with GameZero.jl
Developing simple games is one of the most effective ways of learning, and teaching, programming. GameZero.jl is a low-overhead game development framework that allows beginners and students to learn programming while having a lot of fun.
We will describe the simple API exposed by GameZero, and then build up a couple of games using these building blocks. By the end of the session, participants will have one fully functional game working, and will have the building blocks to create the second. Along the way, we will also describe the basic syntax and semantics of Julia and its standard library for users who are unfamiliar with them.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/RS9B7Q/
Red
Avik Sengupta
Ahan Sengupta
PUBLISH
CPH7SG@@pretalx.com
-CPH7SG
Solving differential equations in parallel on GPUs
en
en
20210723T140000
20210723T170000
3.00000
Solving differential equations in parallel on GPUs
The workshop materials can be found here: https://github.com/luraess/parallel-gpu-workshop-JuliaCon21
This workshop covers trendy areas in modern numerical computing with examples from geoscientific applications. The physical processes governing natural systems' evolution are often mathematically described as systems of differential equations. Fast and accurate solutions require numerical implementations to leverage modern parallel hardware.
The goal of this workshop is to offer an interactive, hands-on session on solving systems of differential equations in parallel on GPUs using the [`ParallelStencil.jl`](https://github.com/omlins/ParallelStencil.jl) and [`ImplicitGlobalGrid.jl`](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) Julia modules. [`ParallelStencil.jl`](https://github.com/omlins/ParallelStencil.jl) makes it possible to write architecture-agnostic, high-performance parallel GPU and CPU code, and [`ImplicitGlobalGrid.jl`](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) renders stencil-based distributed parallelisation almost trivial. The resulting codes are fast, short and readable. We will use these two Julia modules to design and implement a (multi-) GPU application that predicts ice flow dynamics over mountainous topography.
The workshop consists of 2 parts:
1. You will learn about parallel and distributed computing and iterative solvers.
2. You will implement a PDE solver to predict ice flow dynamics on real topography.
By the end of this workshop, you will:
- Have a GPU PDE solver that predicts ice-flow;
- Have a concise Julia code that achieves performance similar to legacy C, CUDA and MPI codes;
- Be able to leverage the computing power of modern GPU accelerated servers and supercomputers.
We look forward to having you on board and will make sure to foster the exchange of ideas and knowledge, providing an event that is as inclusive as possible.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/CPH7SG/
Green
Ludovic Räss
Mauro Werder
Samuel Omlin
PUBLISH
WCSKJ7@@pretalx.com
-WCSKJ7
Package development in VSCode
en
en
20210723T140000
20210723T170000
3.00000
Package development in VSCode
The [Julia extension for VSCode](https://www.julia-vscode.org/) has changed significantly over the last year, with multiple feature additions and UX improvements. At the same time, VSCode has many not particularly widely known yet very useful features.
This workshop aims to introduce new as well as experienced users to a package development workflow in VSCode, including use cases like debugging and profiling, as well as how to best use inline evaluation or the Revise integration. We'll also provide an overview of the various possibilities for interactive data exploration/analysis, the remote capabilities built into VSCode, and more.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/WCSKJ7/
Red
Sebastian Pfitzner
David Anthoff
PUBLISH
NNVXZC@@pretalx.com
-NNVXZC
Simulating Big Models in Julia with ModelingToolkit
en
en
20210724T140000
20210724T170000
3.00000
Simulating Big Models in Julia with ModelingToolkit
It can be hard to build and solve million-equation models. Making them high performance, stable, and parallel? Introducing ModelingToolkit.jl! In this workshop we will showcase ModelingToolkit as a system for building large differential equation models in a hierarchical component-wise way. This acausal modeling system is reminiscent of widely used tools like Simulink and Modelica, but we will showcase how ModelingToolkit's deep integration with interactive symbolic programming leads to a more intuitive pure-Julia modeling system. The audience will be walked through a live demonstration of using ModelingToolkit to compose models and add transformations, like index reduction of differential-algebraic equations (DAEs) and tearing of nonlinear systems, to improve the stability and performance of the generated code. We will demonstrate how to use the automated parallelism to easily solve millions of equations in the most performant way. We will show how ModelingToolkit extends far beyond differential equations, featuring how it can similarly be used to generate high-performance code for nonlinear optimization, solving nonlinear equations, doing nonlinear optimal control, generating models from chemical reaction descriptions, and more. The user will leave with a better understanding of the growing symbolic-numeric modeling ecosystem and the future of large-scale, accurate and high-performance SciML modeling.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/NNVXZC/
Green
Chris Rackauckas
PUBLISH
VY9UVX@@pretalx.com
-VY9UVX
Package development: improving engineering quality & latency
en
en
20210724T140000
20210724T170000
3.00000
Package development: improving engineering quality & latency
This workshop will tutor developers on the use of some of the tools available for improving package quality and reducing latency. We will begin by summarizing the factors that influence dispatch, inference, latency, and invalidation, and how monitoring inference provides a framework for detecting problems before or as they arise. We will then tutor attendees in the use of tools like MethodAnalysis, JET, Cthulhu, and SnoopCompile to discover, analyze, and fix detected problems in package implementation. We will also show how in addition to improving robustness, such steps can often streamline design and reduce latency.
This workshop is aimed at experienced Julia developers. Materials can be cloned from https://github.com/aviatesk/juliacon2021-workshop-pkgdev
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/VY9UVX/
Red
Tim Holy
Shuhei Kadowaki
PUBLISH
DKEJ97@@pretalx.com
-DKEJ97
Parse and broker (log) messages with CombinedParsers(.EBNF)
en
en
20210725T140000
20210725T170000
3.00000
Parse and broker (log) messages with CombinedParsers(.EBNF)
Step by step, I show the available options for defining CombinedParsers to process different message formats (e.g. log lines) and transform them into julia result_types.
The examples demonstrate that julia's dispatch leverages parsed result_types straightforwardly to a slick and powerful platform for complex string-processing workflows like message brokering similar to Apache Kafka:
Julia's multiple dispatch is easier to write and executes faster than conditional programming patterns of the form "if this kind of thing then do x" in java-based Kafka.
The demonstration exemplifies dispatch into different data sinks like git managed CSV and text files, SearchLight.jl, and even Telegram.jl Bot alerts.
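That dispatch-over-conditionals pattern can be sketched with a tiny hypothetical example (the types and `route` function below are illustrative, not part of CombinedParsers.jl): once a parser produces a typed result, routing to a data sink is just one method per type:

```julia
# Hypothetical parsed-message types; in CombinedParsers, each parser's
# result_type plays this role.
struct ErrorLine;  msg::String  end
struct AccessLine; path::String end

# One method per message kind replaces "if this kind of thing then do x".
route(m::ErrorLine)  = "alert: " * m.msg    # e.g. send a bot alert
route(m::AccessLine) = "log: "   * m.path   # e.g. append to a CSV sink

route(ErrorLine("disk full"))     # → "alert: disk full"
route(AccessLine("/index.html"))  # → "log: /index.html"
```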
The workshop details the use of grammar languages supported by CombinedParsers.jl:
You can conveniently compose existing EBNF Grammars with PCRE regular expressions and CombinedParser's julia constructors to create fast pure julia compiled (also recursive) parsers.
Regular expressions and EBNF CombinedParsers result in nested (named) tuples by default.
Users can inject transformation functions for any (sub-)parser after definition as EBNF/PCRE.
Alternatively a CombinedParsers julia syntax equivalent to a PCRE/EBNF grammar can be printed and amended with transformations.
For improved performance, lazy transformations allow access to parts of a parsed string without transforming the full parsing result (similar to LazyJSON.jl).
Benchmarks and standards compliance are reported based on extensive unit tests.
Julia CombinedParsers performance competes with the PCRE C library, which is among the fastest regex libraries on the market.
This is achieved by leveraging the excellent julia compiler with generated functions, multiple dispatch and parametric types.
CombinedParsers supports lazily iterating all valid parsings when they are not unique, and implements the TextParse interface so that CombinedParsers can be used e.g. in CSV.jl.
Other parsing packages (Automa.jl, ParserCombinator.jl, Lerche.jl) and current limitations and considerations for further optimization will be discussed.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/DKEJ97/
Green
Gregor Kappler
PUBLISH
FEZW9Q@@pretalx.com
-FEZW9Q
Modeling Marine Ecosystems At Multiple Scales Using Julia
en
en
20210725T140000
20210725T170000
3.00000
Modeling Marine Ecosystems At Multiple Scales Using Julia
Packages covered in this workshop will include:
- `AIBECS.jl` : global steady-state biogeochemistry and gridded transport models that run fast for long time scales (centuries or even millennia).
- `PlanktonIndividuals.jl` : local to global agent-based model, particularly suited to study microbial communities, plankton physiology, and nutrient cycles.
- `IndividualDisplacements.jl` : local to global particle tracking, for simulating dispersion, connectivity, transports in the ocean or atmosphere, etc.
- `MITgcmTools.jl` : interface to full-featured, fortran-based, general circulation model and its output (transports, chemistry, ecology, ocean, seaice, atmosphere, and more).
The workshop's first two hours will be organized around tutorials and self-contained Pluto notebooks for the different packages.
The third hour will provide the opportunity for attendees to further explore the models in breakout rooms and via exercises.
Workshop schedule in more detail:
- Introduction of the topics covered, presenters, installation, and workshop roadmap (15 minutes).
- AIBECS.jl : concept, implementation, tutorial walkthrough (20 minutes + 10' for questions)
- PlanktonIndividuals.jl : concept, implementation, tutorial walkthrough (20 minutes + 10' for questions)
- IndividualDisplacements.jl : concept, implementation, tutorial walkthrough (10 minutes + 10' for questions)
- MITgcmTools.jl : concept, implementation, tutorial walkthrough (10 minutes + 10' for questions)
- 5-minute break
- breakout rooms for a deeper dive into tutorials, exercises, or trying out your own idea with guidance from the presenters (1 hour)
Workshop materials will be made available ahead of time @ https://github.com/JuliaOcean/MarineEcosystemsJuliaCon2021.jl
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/FEZW9Q/
Red
Gael Forget
Benoit Pasquier
Zhen Wu
PUBLISH
9KGMHJ@@pretalx.com
-9KGMHJ
It's all Set: A hands-on introduction to JuliaReach
en
en
20210726T140000
20210726T170000
3.00000
It's all Set: A hands-on introduction to JuliaReach
We present [JuliaReach](https://github.com/JuliaReach), a Julia ecosystem to perform reachability analysis of dynamical systems. JuliaReach builds on sound scientific approaches and was, on two occasions (2018 and 2020), the winner of the annual friendly competition on Applied Verification for Continuous and Hybrid Systems ([ARCH-COMP](https://cps-vo.org/group/ARCH)).
The workshop consists of three parts (one per package) in [JuliaReach](https://github.com/JuliaReach): our core package for set representations, our main package for reachability analysis, and a new package applying reachability analysis with potential uses in the domains of control, robotics and autonomous systems.
In the first part we present [LazySets.jl](https://github.com/JuliaReach/LazySets.jl), which provides ways to symbolically represent sets of points as geometric shapes, with a special focus on convex sets and polyhedral approximations. [LazySets.jl](https://github.com/JuliaReach/LazySets.jl) provides methods to apply common set operations, convert between different set representations, and efficiently compute with sets in high dimensions.
In the second part we present [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl), which provides tools to approximate the set of reachable states of systems with both continuous and mixed discrete-continuous dynamics, also known as hybrid systems. It implements conservative discretization and set-propagation techniques at the state-of-the-art.
In the third part we present [NeuralNetworkAnalysis.jl](https://github.com/JuliaReach/NeuralNetworkAnalysis.jl), which is an application of [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl) to analyze dynamical systems that are controlled by neural networks. This package can be used to validate or invalidate specifications, for instance about the safety of such systems.
---
Meet the team of researchers and students that form the [JuliaReach](https://juliareach.com) network:
- [Luis Benet](https://github.com/lbenet). Universidad Nacional Autónoma de México. *Validated integration, Nonlinear Physics.* He is also one of the lead developers of [JuliaIntervals](https://github.com/JuliaIntervals).
- [Marcelo Forets](https://github.com/mforets). Universidad de la República, Uruguay. *Reachability Analysis, Hybrid Systems, Neural Network Robustness.*
- [Daniel Freire Caporale](https://github.com/dfcaporale). Universidad de la República, Uruguay. *Reachability, PDEs, Fluid Mechanics.*
- [Sebastian Guadalupe](https://github.com/sebastianguadalupe). Universidad de la República, Uruguay. *Julia Seasons of Contributions 2020 Alumni. Mathematical Modeling, Hybrid systems.*
- [Uziel Linares](https://github.com/uziellinares). Universidad Nacional Autónoma de México. *Google Summer of Code 2020 Alumni. Nonlinear reachability, Taylor models.*
- [Jorge Pérez Zerpa](https://github.com/jorgepz). Universidad de la República, Uruguay. *Finite Element Method, Structural Engineering, Material Identification.*
- [David P. Sanders](https://github.com/dpsanders). Universidad Nacional Autónoma de México and visiting professor at MIT. *Computational Science, Interval Arithmetic, and Numeric-symbolic Computing.* He is also one of the lead developers of [JuliaIntervals](https://github.com/JuliaIntervals).
- [Christian Schilling](https://github.com/schillic). University of Konstanz, Germany. *Formal Verification, Artificial Intelligence, Cyber-Physical Systems.*
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/9KGMHJ/
Green
Marcelo Forets
Christian Schilling
PUBLISH
J7BFBM@@pretalx.com
-J7BFBM
Introduction to Bayesian Data Analysis
en
en
20210726T140000
20210726T170000
3.00000
Introduction to Bayesian Data Analysis
We will give participants an intuition and diagnostics for the workhorse of modern Bayesian statistics: the Hamiltonian MCMC algorithm. Additionally, we will cover the following topics:
- modeling count data with Poisson regression
- modeling overdispersion with the negative Binomial model
- hierarchical modeling
- modeling time varying effects with autoregressive models and Gaussian processes
We will conclude the workshop by showcasing future potential and features that are not currently available elsewhere such as Bayesian neural ODEs and symbolic optimization of Bayesian models.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/J7BFBM/
Red
Kusti Skytén
PUBLISH
DWEMBV@@pretalx.com
-DWEMBV
Introduction to metaprogramming in Julia
en
en
20210727T140000
20210727T170000
3.00000
Introduction to metaprogramming in Julia
Metaprogramming is an important skill that intermediate to advanced Julia users *sometimes* need to use. This tutorial will be an introduction at the intermediate level, aiming to clearly answer questions such as:
- What is metaprogramming?
- When and why should I use it?
- When should I *not* use it? (See Steven Johnson's keynote from JuliaCon 2019.)
- What are macros for, and how do they work?
- What is macro hygiene and how should I use it?
- How can I write a function that recursively analyses a syntax tree?
- When should I use a generated function?
- How can I get access to the code for a function that is already defined?
- Are there packages that can make this simpler? (Brief sketch)
The goal is to provide a firm foundation of understanding that can then be built on later with more advanced applications (not covered in the workshop). The aim is to provide a relatively pedestrian, but easy to follow, path to understanding, rather than to apply powerful, but difficult to understand, functional techniques.
We will provide simple examples of metaprogramming applied to interesting questions in scientific computing, always aiming for simple examples and explanations.
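As a small taste of the macro material, here is a minimal illustrative example of a macro, and of opting out of hygiene with `esc` (not taken from the workshop itself):

```julia
# A macro receives its argument as an unevaluated Expr and returns new code.
macro twice(ex)
    # `esc` opts out of macro hygiene, so `ex` refers to the caller's variables.
    quote
        $(esc(ex))
        $(esc(ex))
    end
end

counter = 0
@twice counter += 1   # the macro splices the increment in twice
# counter is now 2
```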
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon2021/talk/DWEMBV/
Red
David P. Sanders
PUBLISH
SLUMQM@@pretalx.com
-SLUMQM
Geostatistical Learning
en
en
20210728T123000
20210728T130000
0.03000
Geostatistical Learning
The theory was introduced in our recent (open access) paper available online: https://www.frontiersin.org/articles/10.3389/fams.2021.689393/full
Its implementation requires knowledge of geostatistics, computational geometry, and high-performance computing. Thanks to the features of the Julia language, we were able to achieve an elegant design with great runtime performance.
**Packages:** [GeoStats.jl](https://github.com/JuliaEarth/GeoStats.jl), [Meshes.jl](https://github.com/JuliaGeometry/Meshes.jl)
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/SLUMQM/
Green
Júlio Hoffimann
PUBLISH
XFZWWA@@pretalx.com
-XFZWWA
Hierarchical Multiple Instance Learning
en
en
20210728T130000
20210728T133000
0.03000
Hierarchical Multiple Instance Learning
Learning from raw data input, thus limiting the need for manual feature engineering, is one of the key components of many successful applications of machine learning methods. While machine learning problems are often formulated on data that naturally translate into a vector representation suitable for classifiers, there are data sources, for example in cybersecurity, that are naturally represented in diverse files with a unifying hierarchical structure, such as XML, JSON, and Protocol Buffers.
Converting this data to vector (tensor) representation is generally done by manual feature engineering, which is laborious, lossy, and prone to human bias about the importance of particular features.
Mill.jl and JsonGrinder.jl are a tandem of libraries which fully automates this conversion. Starting with an arbitrary set of JSON samples, they create a differentiable machine learning model capable of inferring from further JSON samples in their raw form.
In the spirit of the Julia language, the framework is split into two packages --- Mill.jl implementing the hierarchical multiple instance learning paradigm, offering a theoretically justified approach for building machine learning models for this type of data, and JsonGrinder.jl summarizing the structure in a set of JSON samples and reflecting it in a Mill.jl model.
The talk will be split into four parts.
1) Motivation why we think the problem is interesting
2) Description of mathematical function and theorems about mathematical correctness
3) Description of a design of libraries
4) Practical demo
Link to libraries:
https://github.com/CTUAvastLab/Mill.jl
https://github.com/pevnak/JsonGrinder.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/XFZWWA/
Green
Tomas Pevny
PUBLISH
J7Z9PL@@pretalx.com
-J7Z9PL
ReactiveMP.jl: Reactive Message Passing-based Bayesian Inference
en
en
20210728T133000
20210728T134000
0.01000
ReactiveMP.jl: Reactive Message Passing-based Bayesian Inference
GitHub: https://github.com/biaslab/ReactiveMP.jl
Demos: https://github.com/biaslab/ReactiveMP.jl/tree/master/demo
YouTube: https://www.youtube.com/watch?v=twhTsKsXa_8
Experiments from the talk: https://github.com/biaslab/ReactiveMP_JuliaCon2021
Bayesian inference is one of the key computational mechanisms that underlies probabilistic model-based machine learning applications such as time series prediction, image and speech recognition, and robotics. Unfortunately, for many models of practical interest, Bayesian inference requires evaluating high-dimensional integrals that have no analytical solution. As a result, Probabilistic Programming (PP) tools for Automated Approximate Bayesian Inference (AABI) have become popular, e.g., Turing.jl, Soss.jl, ForneyLab.jl, Pyro, and others. These tools help researchers to define custom probabilistic models in a high-level domain-specific language and run AABI algorithms with minimal additional overhead.
An important issue in the development of PP frameworks is scalability of AABI algorithms for large models and large data sets. One solution approach concerns message passing-based inference in factor graphs. In this framework, relationships between model variables are represented by a graph of sparsely connected nodes, and inference proceeds efficiently by a sequence of nodes sending probabilistic messages to neighboring nodes. While the optimal message passing schedule is data-dependent, all existing factor graph frameworks (e.g., Infer.Net, ForneyLab.jl) use preset message sequence schedules. The potential benefits of massively parallel and asynchronous reactive message passing in a factor graph include scaling to large inference tasks, much smaller processing latency and processing of data samples that arrive at irregular time intervals.
We have developed ReactiveMP.jl, which is a native Julia package for automated reactive message passing-based (both exact and approximate) Bayesian inference. ReactiveMP.jl is based on Rocket.jl, which is a native Julia package for reactive programming. In ReactiveMP.jl, there are no pre-scheduled messages. Instead, nodes subscribe to messages from connected nodes and react autonomously and asynchronously whenever a new message has been received. As a result, ReactiveMP.jl scales comfortably to inference tasks on factor graphs with tens of thousands of variables and millions of nodes.
The ReactiveMP.jl package comes with a collection of standard probabilistic models, including linear Gaussian state-space models, hidden Markov models, auto-regressive models and mixture models. Moreover, ReactiveMP.jl API supports various processing modes such as offline learning, online filtering of infinite data streams and protocols for handling missing data.
ReactiveMP.jl is customizable and provides an easy way to add new models, node functions and analytical message update rules to the existing platform. As a result, a user can extend built-in functionality with custom nodes to run automated inference in novel probabilistic models. The resulting inference procedures are differentiable with the ForwardDiff.jl or ReverseDiff.jl packages. In addition, the inference engine supports different types of floating point numbers, e.g., the built-in BigFloat Julia type.
We achieved excellent performance by relying on Julia's great multiple dispatch capabilities and advanced compile-time optimization techniques. Message passing-based inference requires computation of many messages by node-specific update rules. Some of these updates can be evaluated and in-lined at compile time, which results in a fast and accurate automated Bayesian inference realization with almost zero overhead when compared to manually hard-coded inference procedures.
We compared ReactiveMP.jl to other message passing and sampling-based inference packages. In terms of computation time and memory usage, specifically for conjugate models, the ReactiveMP.jl engine outperforms Turing.jl, ForneyLab.jl and Infer.Net significantly by orders of magnitude. Comparative performance benchmarks are available at the GitHub repository: https://github.com/biaslab/ReactiveMP.jl.
Automating scalable Bayesian inference is a key factor in the quest to apply Bayesian machine learning to useful applications. We developed ReactiveMP.jl as a package that enables developers to build novel probabilistic models and automate scalable inference in those models by asynchronous, reactive message passing in a factor graph. We are looking forward to presenting the ReactiveMP.jl package and discussing the advantages and drawbacks of the reactive message passing approach.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/J7Z9PL/
Green
Dmitry Bagaev
PUBLISH
LXATFU@@pretalx.com
-LXATFU
Exploiting Structure in Kernel Matrices
en
en
20210728T134000
20210728T135000
0.01000
Exploiting Structure in Kernel Matrices
CovarianceFunctions.jl implements many commonly used kernel functions, including stationary ones like the exponentiated quadratic, rational quadratic, and Matérn kernels, but also non-stationary ones like the polynomial and neural network kernels. A crucial component of the package is the "Gramian" matrix type, which lazily represents kernel matrices with virtually no memory footprint. The package's most significant functionality derives from algorithms designed for particular combinations of kernel and data types, since they are able to drastically reduce the computational complexity of multiplication and inversion. However, even in the general case the lazy implementation eliminates the typical O(n^2) memory allocation for simply storing a kernel matrix of n data points.
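The lazy kernel-matrix idea can be sketched in a few lines of plain Julia (an illustrative toy type, not CovarianceFunctions.jl's actual implementation):

```julia
# Minimal illustration of a lazy kernel ("Gramian") matrix: entries are
# computed on demand from the kernel and the data, so no O(n^2) storage
# is allocated up front. (Hypothetical type for illustration only.)
struct LazyGramian{K,T} <: AbstractMatrix{Float64}
    k::K             # kernel function (x, y) -> Float64
    x::Vector{T}     # data points
end

Base.size(G::LazyGramian) = (length(G.x), length(G.x))
Base.getindex(G::LazyGramian, i::Int, j::Int) = G.k(G.x[i], G.x[j])

# Exponentiated quadratic (RBF) kernel on scalars
rbf(x, y) = exp(-abs2(x - y) / 2)

G = LazyGramian(rbf, [0.0, 1.0, 2.0])
G[1, 1]  # == 1.0, computed lazily
```

Because `LazyGramian` is an `AbstractMatrix`, generic routines such as matrix-vector products work on it without ever materializing all n^2 entries.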
For stationary kernels in low dimensions, the package implements a new hierarchical factorization based on multipole expansions of the kernels via automatic differentiation. The user need only input the kernel function and data points, and the package automatically computes the relevant analytic expansions, which are then leveraged within a treecode analogous to the Barnes-Hut algorithm. Fast multiplies are then performed in O(n log(n)) time, and solves are performed using an iterative method whose preconditioner is also based on the aforementioned treecode.
For exponentially decaying stationary kernels (e.g., exponential, Matérn, RBF) in high dimensions, it is highly likely that the associated kernel matrix can be approximated well by a sparse matrix. However, naïvely detecting this approximate sparsity pattern would require evaluating the entire matrix in O(n^2) time. Instead, the package takes advantage of vantage-point trees to quickly find the most prominent neighbors of each data point in O(nk log(n)) time, where k is the maximum number of relevant neighbors of a data point.
In the context of Bayesian optimization with gradient information, the associated gradient kernel matrices are of size (nd x nd), naïvely requiring O(n^2d^2) operations for multiplication, which becomes prohibitive quickly as the number of parameters d increases. Based on recent work that uncovered a particular structure in a large class of these gradient kernel matrices, the package contains an exact multiplication algorithm that requires O(n^2d) operations. As a result, we are able to demonstrate first-order Bayesian optimization on problems of higher dimensionality than were previously possible.
In the absence of any of the above particular structure, the package attempts to construct a low-rank approximation of the matrix via a generic pivoted Cholesky algorithm that lazily computes the kernel matrix's entries, allowing the algorithm to terminate in O(nr^2) steps, where r is the numerical rank, before even fully forming the entire matrix. In the worst case, however, this falls back to O(n^3) complexity.
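A minimal sketch of such a lazily evaluated pivoted Cholesky, assuming only that individual entries of a positive semi-definite matrix can be queried (illustrative, not the package's code):

```julia
# Greedy pivoted Cholesky: builds a rank-r approximation A ≈ L*L' while
# only ever reading individual entries of A through the closure `entry`,
# so the full matrix is never formed.
function pivoted_cholesky(entry, n; tol=1e-10, maxrank=n)
    d = [entry(i, i) for i in 1:n]   # residual diagonal
    L = zeros(n, 0)
    while size(L, 2) < maxrank
        p = argmax(d)                # largest residual pivot
        d[p] <= tol && break         # numerically converged
        col = [entry(i, p) for i in 1:n]
        if size(L, 2) > 0
            col .-= L * L[p, :]      # subtract contribution of prior columns
        end
        col ./= sqrt(d[p])
        L = hcat(L, col)
        d .-= col .^ 2               # update residual diagonal
    end
    return L
end

# A rank-1 kernel-like matrix is recovered after a single pivot step.
x = [1.0, 2.0, 3.0]
L = pivoted_cholesky((i, j) -> x[i] * x[j], 3)
```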
In addition to implementing the above algorithms, a main feature of the package is the automatic detection of the most scalable algorithm depending on the kernel and data type. We believe that this type of automation is particularly useful for practitioners that rely on kernel methods and need to scale them to large datasets. We further invite specialists to contribute their methods for efficient computations with kernel matrices.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/LXATFU/
Green
Sebastian Ament
John Paul Ryan
PUBLISH
BMMEGV@@pretalx.com
-BMMEGV
Effects.jl: Effectively Understand Effects in Regression Models
en
en
20210728T135000
20210728T140000
0.01000
Effects.jl: Effectively Understand Effects in Regression Models
Regression is a foundational technique of statistical analysis, and many common statistical tests are based on regression models (e.g., ANOVA, t-test, correlation tests, etc.).
Despite the expressive power of regression models, users often prefer the simpler procedures because regression models themselves can be difficult to interpret.
Most notably, the interpretation of individual regression coefficients (including their magnitude, sign, and even significance) changes depending on the presence of other terms or interactions, and even on their centering or contrast coding.
For instance, a common source of confusion in regression analysis is the meaning of the intercept coefficient.
On its own, this coefficient corresponds to the grand mean of the dependent variable, but in the presence of a contrast-coded categorical variable, it can correspond to the mean of the baseline level of that variable, the grand mean, or something else altogether, depending on the contrast coding scheme used.
Effects.jl provides a general-purpose tool for interpreting fitted regression models by projecting the effects of one or more terms in the model back into "data space", along with the associated uncertainty, while fixing the values of the other terms at typical or user-specified values.
This makes it straightforward to interrogate the estimated effects of any predictor at any combination of other predictors' values.
Because these effects are computed in data space, they can be plotted in parallel format to raw or aggregated data, enabling intuitive model interpretation and sanity checks.
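The underlying idea can be illustrated with plain least squares on hypothetical toy data (a sketch of the concept, not the Effects.jl API):

```julia
# Fit y ~ 1 + x + g by ordinary least squares, then compute "effects" of x
# in data space: predictions over a grid of x with the other predictor g
# held at a typical (mean) value. (Toy data; illustration only.)
using Statistics

x = [1.0, 2.0, 3.0, 4.0]
g = [0.0, 1.0, 0.0, 1.0]
y = 2 .+ 3 .* x .+ 0.5 .* g     # noiseless response for clarity

X = hcat(ones(4), x, g)
β = X \ y                        # OLS coefficients ≈ [2, 3, 0.5]

xgrid = [1.0, 2.5, 4.0]
gtyp = mean(g)                   # fix g at its typical value
effect = hcat(ones(3), xgrid, fill(gtyp, 3)) * β
```

The `effect` vector lives on the same scale as `y`, so it can be plotted directly alongside the raw data.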
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/BMMEGV/
Green
Phillip Alday
PUBLISH
3JYPC9@@pretalx.com
-3JYPC9
Opening remarks
en
en
20210728T143000
20210728T143500
0.00500
Opening remarks
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/3JYPC9/
Green
PUBLISH
7WYDH3@@pretalx.com
-7WYDH3
Keynote (Jan Vitek)
en
en
20210728T143500
20210728T152000
0.04500
Keynote (Jan Vitek)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/7WYDH3/
Green
PUBLISH
UKVUHW@@pretalx.com
-UKVUHW
Keynote: William Kahan - Debugging Tools for Floating-Point Code
en
en
20210728T152000
20210728T160000
0.04000
Keynote: William Kahan - Debugging Tools for Floating-Point Code
William Kahan was instrumental in creating the IEEE 754-1985 standard for floating-point computation in the late 1970s and early 1980s. He developed a program called “Paranoia” in the 1980s to test for potential floating-point bugs, and developed the Kahan summation algorithm, which helps minimize errors introduced when adding a sequence of finite-precision floating-point numbers. Kahan won the ACM A.M. Turing Award in 1989.
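For illustration, the Kahan summation algorithm mentioned above fits in a few lines of Julia:

```julia
# Kahan (compensated) summation: a running compensation term recovers the
# low-order bits that naive accumulation would otherwise drop.
function kahan_sum(xs)
    s = 0.0               # running sum
    c = 0.0               # compensation for lost low-order bits
    for x in xs
        y = x - c
        t = s + y         # low-order bits of y may be lost here...
        c = (t - s) - y   # ...but are recovered into c
        s = t
    end
    return s
end

vals = [1.0; fill(1e-16, 10)]
foldl(+, vals)     # naive left-to-right sum: 1.0 — every 1e-16 term is lost
kahan_sum(vals)    # retains them (close to 1 + 1e-15)
```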
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/UKVUHW/
Green
PUBLISH
C9VRY3@@pretalx.com
-C9VRY3
JuliaCon Trivia
en
en
20210728T160000
20210728T163000
0.03000
JuliaCon Trivia
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/C9VRY3/
Green
PUBLISH
LWVB39@@pretalx.com
-LWVB39
Everything you need to know about ChainRules 1.0
en
en
20210728T163000
20210728T170000
0.03000
Everything you need to know about ChainRules 1.0
Automatic differentiation (AD), the ability to efficiently evaluate derivatives of arbitrary functions without computing derivatives by hand, enables efficient learning of many mathematical models. There are two components to every AD system: a collection of primitives (also called sensitivities or adjoints), and a way to keep track of and combine primitives using the chain rule of calculus in order to compute derivatives of arbitrary functions. While AD systems differ greatly in the latter, the set of primitives can be shared among them.
The ChainRules ecosystem provides the AD-independent collection of primitives for Julia Base (ChainRules.jl), utilities for defining custom primitives (ChainRulesCore.jl), and utilities for testing custom primitives using finite differences (ChainRulesTestUtils.jl). While not needed in principle, the ability to define custom primitives provides a way to speed up the computation by applying domain knowledge or mathematical insight, or get around limitations and performance issues of individual AD systems. In addition, ChainRulesCore.jl provides a suite of expressive differential types which allow comparing derivatives across multiple AD systems.
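The primitive-plus-pullback pattern can be sketched without any packages (illustrative only; the real interface is ChainRulesCore.jl's `rrule`):

```julia
# A hand-written reverse-mode primitive for sin: return the primal value
# together with a pullback mapping output cotangents to input cotangents.
function rrule_sin(x)
    y = sin(x)
    pullback(ȳ) = ȳ * cos(x)   # d(sin x)/dx = cos x
    return y, pullback
end

# Primitives compose via the chain rule: differentiate sin(x^2) at x = 3
# by calling the pullbacks in reverse order.
rrule_square(x) = (x^2, ȳ -> ȳ * 2x)

x = 3.0
u, pb1 = rrule_square(x)
y, pb2 = rrule_sin(u)
pb1(pb2(1.0))   # == 2x * cos(x^2) = 6cos(9)
```

An AD system's job is precisely this bookkeeping: record which primitives were called and compose their pullbacks in reverse.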
Since last year the ecosystem has matured considerably, improving the user experience in a number of ways. Improvements include:
- It is now possible to write rules for higher order functions (e.g. map) by calling back into the AD system
- @non_differentiable makes it easy to define rules for non-differentiable functions
- Testing custom primitives became easier since a random tangent (usually) does not have to be provided
- It is now possible to test `frule`/`rrule`-like functions, meaning AD systems themselves can be tested
- ChainRules is now used by Zygote, Nabla, ForwardDiff2, and ReversePropagation
This talk will start by briefly introducing the ChainRules ecosystem, and highlighting the most important new features since last year. Those unfamiliar with the general idea of ChainRules are encouraged to watch last year’s talk on ChainRules first, since the core of the talk is a comprehensive guide to using, writing, and testing custom primitives. In particular, the talk will explain when it is advantageous to write custom primitives compared to using an AD system on its own, the interface for writing custom primitives and the associated supporting functionality, as well as why and how to test primitives by finite differencing methods.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/LWVB39/
Green
Miha Zgubic
PUBLISH
UDJ7SJ@@pretalx.com
-UDJ7SJ
Enzyme.jl -- Reverse mode differentiation on LLVM IR for Julia
en
en
20210728T170000
20210728T173000
0.03000
Enzyme.jl -- Reverse mode differentiation on LLVM IR for Julia
Automatic differentiation (AD) is key to training neural networks, Bayesian inference, and scientific computing. This talk presents Enzyme.jl, a Julia frontend for the Enzyme high-performance LLVM automatic differentiation (AD) toolkit. By operating at a low level, Enzyme is able to run optimizations prior to differentiation; it is therefore highly efficient on scalar code and can support mutation out of the box. We explain how Enzyme.jl integrates with the Julia compiler, supports synthesis for Julia GPU kernels, and propagates Julia's knowledge of types to the lower-level tool. We will discuss ongoing work to extend Enzyme.jl to differentiate through Julia language features like dynamic calls and garbage collection. We will conclude by describing the potential of combining high-level and low-level systems to get the benefit of both algebraic and instruction-level optimizations, and of using Enzyme.jl in other AD systems such as Zygote.jl or Diffractor.jl to perform differentiation of foreign function calls, enabling cross-language AD.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/UDJ7SJ/
Green
Valentin Churavy
William Moses
PUBLISH
ZEV3MR@@pretalx.com
-ZEV3MR
A Tour of the differentiable programming landscape with Flux.jl
en
en
20210728T173000
20210728T174000
0.01000
A Tour of the differentiable programming landscape with Flux.jl
Machine Learning has come a long way in the past decade. With differentiable programming we have seen a renewed interest from numerous communities to apply ML techniques to diverse fields through scientific machine learning. Traditional deep learning has seen many strides with larger, more compute-intensive models which need increasingly complex training routines that push the boundaries of the current state-of-the-art.
In this talk, we will tour the depth of the machine learning and differentiable programming ecosystem in Julia through the [FluxML](https://github.com/FluxML) stack. We shall discuss the various tools and features available to users through advances in the ecosystem, and the next-gen tooling required to make even more expressive modelling possible in Julia.
We will also take note of the new packages and techniques being developed in domains such as differentiable physics, [chemistry](https://github.com/aced-differentiate/AtomicGraphNets.jl), graph networks, [molecular simulation](https://juliamolsim.github.io/Molly.jl/stable/differentiable/), and [multi-GPU training](https://julialang.org/jsoc/gsoc/hpc/#distributed_training).
We will also talk about the development effort in the [Flux](https://github.com/FluxML/Flux.jl) stack, including performance enhancements, better coverage of CUDA, NNlib optimisations for the CPU, and the new composable and functional optimisers via [Optimisers.jl](https://github.com/FluxML/Optimisers.jl).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/ZEV3MR/
Green
Dhairya Gandhi
PUBLISH
QB8EC8@@pretalx.com
-QB8EC8
Learning to align with differentiable dynamic programming
en
en
20210728T174000
20210728T175000
0.01000
Learning to align with differentiable dynamic programming
The alignment of two or more biological sequences is one of the main workhorses in bioinformatics because it can quantify similarity and reveal conserved patterns. Dynamic programming allows for rapidly computing the optimal alignment between two sequences by recursively splitting the problem into smaller tractable choices, i.e., deciding whether it is best to extend a current alignment or introduce a gap in one of the sequences. This process yields the optimal alignment score, and backtracking recovers the optimal alignment itself. Starting from a collection of pairwise alignments, one can heuristically compute a multiple sequence alignment of many sequences. If one is interested in the effect of a small change in the alignment parameters or the sequences, one has to compute the gradient of the alignment score with respect to these inputs. Regrettably, this gradient does not exist, because the individual maximisation (minimisation) steps in the dynamic program are non-differentiable.
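For concreteness, such a dynamic program fits in a few lines of Julia; here is a minimal (non-differentiable, hard-max) Needleman–Wunsch score with an assumed toy scoring scheme:

```julia
# Needleman–Wunsch global alignment score: D[i, j] is the best score for
# aligning the first i characters of a with the first j of b, taking the
# max over a (mis)match or a gap in either sequence.
function nw_score(a, b; match=1, mismatch=-1, gap=-1)
    ca, cb = collect(a), collect(b)
    m, n = length(ca), length(cb)
    D = zeros(Int, m + 1, n + 1)
    D[:, 1] .= gap .* (0:m)          # leading gaps in b
    D[1, :] .= gap .* (0:n)          # leading gaps in a
    for i in 1:m, j in 1:n
        s = ca[i] == cb[j] ? match : mismatch
        D[i+1, j+1] = max(D[i, j] + s,        # (mis)match
                          D[i, j+1] + gap,    # gap in b
                          D[i+1, j] + gap)    # gap in a
    end
    return D[m+1, n+1]
end

nw_score("GATTACA", "GCATGCU")   # classic textbook pair
```

The `max` calls are exactly the non-differentiable steps the abstract refers to.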
However, Mensch and Blondel recently showed that by smoothing the maximum operator, for example by regularising it with an entropic term, one can design fully differentiable dynamic programming algorithms. The individual smoothed maximum operators have various desirable properties, such as being efficient to compute, sparsity, or a probabilistic interpretation. Building on this work, we created differentiable versions of the Needleman–Wunsch and Smith–Waterman algorithms. Using ChainRulesCore.jl, we made this gradient compatible with Julia's autodiff ecosystem.
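The entropic smoothing can be sketched directly (a toy log-sum-exp smoothed max, not the authors' implementation; no overflow guard, which a real implementation would add by subtracting the max first):

```julia
# Entropic smoothed max: maxγ(x) = γ * log(Σᵢ exp(xᵢ/γ)).  Its gradient is
# the softmax, which is dense and differentiable, unlike the one-hot
# (sub)gradient of the hard max.
smoothmax(x; γ=1.0) = γ * log(sum(exp.(x ./ γ)))
softmax(x; γ=1.0) = (e = exp.(x ./ γ); e ./ sum(e))

x = [1.0, 2.0, 3.0]
smoothmax(x; γ=0.1)   # ≈ 3.0: approaches the hard max as γ → 0
softmax(x; γ=0.1)     # ≈ [0, 0, 1]: the gradient concentrates on the argmax
```

Replacing each `max` in the dynamic program with `smoothmax` is what makes the whole alignment score differentiable.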
The resulting gradient has an immediate diagnostic and statistical interpretation, such as computing the Fisher information to create uncertainty estimates. Furthermore, it enables us to use sequence alignment in differentiable computing, allowing one to learn an optimal substitution matrix and gap cost from a set of homologous sequences. This flexibility allows the parameters to vary across different regions of the sequences, for example, depending on the secondary structure. One can also turn this around: fix the alignment parameters and optimise the sequences for alignment. This scheme allows for finding consensus sequences, which can be useful in creating a multiple sequence alignment. More broadly, our algorithm can be incorporated into arbitrary artificial neural network architectures (using e.g. Flux.jl), making it an attractive alternative to the popular convolutional neural networks, LSTMs or transformer networks currently used to learn from biological sequences.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/QB8EC8/
Green
Michiel Stock
PUBLISH
Z7ZLTP@@pretalx.com
-Z7ZLTP
Partitions and chains: enabling batch processing for your data
en
en
20210728T175000
20210728T180000
0.01000
Partitions and chains: enabling batch processing for your data
I want to give an overview of the next "phase" of functionality we've been building across the data ecosystem and some walk-throughs of how the functionality is already being leveraged, including:
* The ChainedVector array type, which allows treating "batches" of arrays as one long array, while allowing efficient multithreading and other concurrent operations on the data automatically
* Tables.partitions: The Tables.jl package now supports "batches" of data for sinks to process, with a focus on enabling multithreaded sink processing of source partitions
* The TableOperations.jl package provides the `makepartitions` and `joinpartitions` utility functions for facilitating working with partitions and your data
* Examples of how packages are already taking advantage: Arrow.jl, CSV.jl, JuliaDB.jl, Parquet.jl, and Avro.jl
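The chained-array idea can be illustrated with a toy type (illustrative only — not the real `ChainedVector` implementation, which adds efficient index caching and threading support):

```julia
# Toy version of the chained-array idea: several batches viewed as one long
# vector, with getindex translating a global index to (batch, local index).
struct ToyChained{T} <: AbstractVector{T}
    batches::Vector{Vector{T}}
end

Base.size(c::ToyChained) = (sum(length, c.batches),)

function Base.getindex(c::ToyChained, i::Int)
    for b in c.batches
        i <= length(b) && return b[i]
        i -= length(b)   # skip past this batch
    end
    throw(BoundsError(c, i))
end

v = ToyChained([[1, 2], [3], [4, 5]])
v[4]          # == 4, found in the third batch
collect(v)    # [1, 2, 3, 4, 5]
```

Because each batch stays a separate array underneath, batches can be processed concurrently while users still see one long vector.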
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/Z7ZLTP/
Green
Jacob Quinn
PUBLISH
SLGTWB@@pretalx.com
-SLGTWB
GatherTown -- Social break
en
en
20210728T180000
20210728T190000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/SLGTWB/
Green
PUBLISH
NLG9FQ@@pretalx.com
-NLG9FQ
Building on AlphaZero with Julia
en
en
20210728T190000
20210728T193000
0.03000
Building on AlphaZero with Julia
DeepMind's AlphaZero algorithm illustrates a general methodology of combining learning and search to solve complex combinatorial problems. Yet, despite its much-publicized success at the game of Go and a wide range of potential applications, few researchers have managed to build on it.
In an effort to make AlphaZero widely accessible to students and researchers, we introduce [AlphaZero.jl](https://github.com/jonathan-laurent/AlphaZero.jl). Leveraging Julia's unique strengths, this package provides an implementation of DeepMind's algorithm that is simple and flexible, while being up to two orders of magnitude faster than comparable Python implementations.
In this talk, we give a short lecture on the AlphaZero algorithm and discuss some research challenges of using it to solve problems beyond board games. Then, we introduce our [AlphaZero.jl](https://github.com/jonathan-laurent/AlphaZero.jl) package. We show how Julia enables a unique combination of simplicity, flexibility and speed, while also identifying areas in which improvements to the Julia ecosystem could lead to further performance gains. We conclude the talk with more general thoughts on how we believe Julia can have a transformative impact on reinforcement-learning research.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/NLG9FQ/
Green
Jonathan Laurent
PUBLISH
FJLE7U@@pretalx.com
-FJLE7U
Bayesian Neural Ordinary Differential Equations
en
en
20210728T193000
20210728T200000
0.03000
Bayesian Neural Ordinary Differential Equations
Recently, Neural Ordinary Differential Equations (Neural ODEs) have emerged as a powerful framework for modeling physical simulations without explicitly defining the ODEs governing the system, instead learning them via machine learning. However, the question “Can Bayesian learning frameworks be integrated with Neural ODEs to robustly quantify the uncertainty in the weights of a Neural ODE?” remains unanswered. In an effort to address this question, we primarily evaluate the following categories of inference methods: (a) the No-U-Turn MCMC sampler (NUTS), (b) Stochastic Gradient Hamiltonian Monte Carlo (SGHMC), and (c) Stochastic Gradient Langevin Dynamics (SGLD). We demonstrate the successful integration of Neural ODEs with the above Bayesian inference frameworks on classical physical systems, as well as on standard machine learning datasets like MNIST, using GPU acceleration. On the MNIST dataset, we achieve a posterior sample accuracy of 98.5% on the test ensemble of 10,000 images. This is a performance competitive with current state-of-the-art image classification methods, which meanwhile lack our method's ability to quantify the confidence in its predictions.
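To give a flavor of the simplest of these samplers, an SGLD update for a toy one-parameter model (illustrative plain Julia, not the Turing.jl code used in the study):

```julia
# SGLD on a standard-normal log-posterior: each step follows the gradient
# of log p(θ) plus injected Gaussian noise, so the iterates are
# (approximate) posterior samples rather than a point estimate.
using Random, Statistics

gradlogp(θ) = -θ   # ∇ log N(0, 1)

function sgld(gradlogp, θ0; ϵ=0.01, steps=200_000, rng=MersenneTwister(1))
    θ, samples = θ0, Float64[]
    for _ in 1:steps
        θ += (ϵ / 2) * gradlogp(θ) + sqrt(ϵ) * randn(rng)
        push!(samples, θ)
    end
    return samples
end

s = sgld(gradlogp, 0.0)
mean(s), var(s)   # ≈ (0, 1), matching the N(0, 1) posterior
```

In the stochastic-gradient setting, `gradlogp` would be estimated on minibatches, which is what makes the method scale to datasets like MNIST.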
Subsequently, for the first time, we demonstrate the successful integration of variational inference with normalizing flows and Neural ODEs, leading to a powerful Bayesian Neural ODE object.
Finally, considering a predator-prey model and an epidemiological system, we demonstrate the probabilistic identification of model specification in partially-described dynamical systems using universal ordinary differential equations. Together, this gives a scientific machine learning tool for probabilistic estimation of epistemic uncertainties.
In this study, we used the Julia differentiable programming stack to compose the Julia differential equation solvers with the Turing probabilistic programming language. The study was performed without modifications to the underlying libraries due to the composability afforded by the differentiable programming stack.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/FJLE7U/
Green
Raj Dandekar
PUBLISH
HLFY9G@@pretalx.com
-HLFY9G
POMDPs.jl and Interactive Assignments in Julia
en
en
20210728T201000
20210728T202000
0.01000
POMDPs.jl and Interactive Assignments in Julia
The course materials website, including notes and homework assignments, is located here: https://github.com/zsunberg/CU-DMU-Materials, and the Julia package for the course is located here: https://github.com/zsunberg/DMUStudent.jl. The algorithms that the students implement in Julia include Value Iteration, Monte Carlo Tree Search, DQN, and QMDP. The algorithms are graded on the students' machine to ease debugging. This talk will give a very-brief overview of POMDPs.jl, and discuss the course, what went well, and what aspects turned out to be challenging.
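For concreteness, the first of those algorithms fits in a few lines of plain Julia (a toy MDP of our own devising, not course material):

```julia
# Value iteration: repeatedly apply the Bellman backup
#   V(s) ← max_a [ R(s, a) + γ Σ_s′ T(s, a, s′) V(s′) ]
# until the value function stops changing.
function value_iteration(T, R, γ; iters=1000, tol=1e-10)
    nS, nA = size(R)
    V = zeros(nS)
    for _ in 1:iters
        Q = [R[s, a] + γ * sum(T[s, a, s′] * V[s′] for s′ in 1:nS)
             for s in 1:nS, a in 1:nA]
        Vnew = vec(maximum(Q, dims=2))
        maximum(abs.(Vnew .- V)) < tol && return Vnew
        V = Vnew
    end
    return V
end

# 2-state toy MDP: state 2 is absorbing and rewards every action taken there.
T = zeros(2, 2, 2)
T[1, 1, :] = [1.0, 0.0]   # action 1 in state 1: stay
T[1, 2, :] = [0.0, 1.0]   # action 2 in state 1: move to state 2
T[2, 1, :] = [0.0, 1.0]
T[2, 2, :] = [0.0, 1.0]
R = [0.0 0.0; 1.0 1.0]
V = value_iteration(T, R, 0.9)   # ≈ [9.0, 10.0]
```

The POMDPs.jl interface generalizes exactly this pattern to partially observable problems.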
(This talk could be expanded into a 30-minute talk if there is enough interest.)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/HLFY9G/
Green
Zachary Sunberg
PUBLISH
7GYDRZ@@pretalx.com
-7GYDRZ
Probabilistic Model Checking using POMDPModelChecking.jl
en
en
20210728T202000
20210728T203000
0.01000
Probabilistic Model Checking using POMDPModelChecking.jl
In this talk we will show how we built a model checking library in a few lines of Julia by integrating an LTL manipulation library with the JuliaPOMDP ecosystem. With this library, we can compute decision policies with probabilistic guarantees for a wide range of partially observable problems: drone surveillance, robot exploration, pedestrian avoidance for autonomous driving.
This lightning talk will be organized as follows:
- Introduction to the problem of POMDP/MDP model checking: quick overview of JuliaPOMDP [1]
- Introduction to linear temporal logic manipulation using Spot.jl [2]: Spot.jl is a wrapper of Spot [3], a C++ library for LTL manipulation. The Julia wrapper is built using CxxWrap.jl. We will demonstrate some visual examples of how Spot is used to convert a temporal logic formula into a finite state machine and how we can visualize it (material will be inspired by the Spot.jl tutorial but remodeled to fit the talk format).
- Introduction to POMDPModelChecking.jl [4]: we will show how we can reuse the whole JuliaPOMDP ecosystem to solve model checking problems. Our library exposes two solvers (ModelCheckingSolver and ReachabilitySolver), which take as input any Julia POMDP model and an LTL formula, and output a policy. Internally, the solver creates a new POMDP model which is a composition of the original model and a finite state machine created by Spot.jl. This new model can then be solved by any JuliaPOMDP planning algorithm. The theoretical justification for reformulating the model checking problem as a planning problem has been detailed in previous work [5].
- Gallery: We will show visual examples of decision policies computed using POMDPModelChecking.jl on the rock sample POMDP problem [6].
References:
[1] https://github.com/JuliaPOMDP
[2] https://github.com/sisl/Spot.jl
[3] https://spot.lrde.epita.fr/index.html
[4] https://github.com/sisl/POMDPModelChecking.jl
[5] M. Bouton, J. Tumova, and M. J. Kochenderfer, "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes," in AAAI Conference on Artificial Intelligence (AAAI), 2020.
[6] https://github.com/JuliaPOMDP/RockSample.jl
[7] M. Bouton, "Safe and Scalable Planning Under Uncertainty for Autonomous Driving", PhD thesis, Stanford University, 2020.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/7GYDRZ/
Green
Maxime Bouton
PUBLISH
YXJ8YQ@@pretalx.com
-YXJ8YQ
GatherTown -- Social break
en
en
20210728T203000
20210728T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/YXJ8YQ/
Green
PUBLISH
8LL9QH@@pretalx.com
-8LL9QH
Put some constraints into your life with JuliaCon(straints)
en
en
20210728T123000
20210728T130000
0.03000
Put some constraints into your life with JuliaCon(straints)
Problem-solving often consists of two actions: modeling and solving. The holy grail of Constraint Programming is to have the human (user) model the problem and the machine (solver) solve it. All the smartness should be in the solver.
**JuliaConstraints**, a freshly hatched GitHub organization, is a first attempt to provide common grounds to the growing Constraint Programming community in Julia while tackling that holy grail.
We will approach the different blocks of the ecosystem through the lens of shared interfaces, shared instances and models, and shared internals. We will illustrate the use, pros and cons of problem-solving through Constraint Programming with different solvers and frameworks such as *ConstraintSolver.jl* and *LocalSearchSolvers.jl*.
A possible common interface, building on the popular *JuMP.jl*, is already available for some solvers. An attempt to write shared models in JuMP syntax has just started as *ConstraintModels.jl*. Various problems have been modeled, such as:
- sudoku
- n-queens
- magic square
- chemical equilibrium
- quadratic assignment
- Golomb ruler
- minimum and maximum cuts in networks
- traveling salesman problem
- scheduling
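As a taste of one of the listed models, here is a plain-Julia backtracking search for n-queens (an illustration of the problem itself, not JuliaConstraints code — a CP solver would express the same constraints declaratively):

```julia
# Backtracking n-queens: queens[i] is the column of the queen in row i.
# Constraints: all columns distinct and no two queens on a shared diagonal.
function nqueens(n, queens=Int[])
    length(queens) == n && return queens
    for col in 1:n
        ok = all(1:length(queens)) do row
            q = queens[row]
            # distinct column, and |Δcolumn| != |Δrow| (no diagonal attack)
            q != col && abs(q - col) != length(queens) + 1 - row
        end
        if ok
            sol = nqueens(n, [queens; col])
            sol !== nothing && return sol
        end
    end
    return nothing   # dead end: backtrack
end

nqueens(8)   # one valid placement of 8 non-attacking queens
```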
A store of instances, generators, and global information about combinatorial optimization problems is also available as *COPInstances.jl* (tentative name, WIP). This package aims for a larger audience than CP solvers alone, and we would be glad to see it grow to serve other optimization packages.
Finally, JuliaConstraints also hosts some internal packages, mainly used within the *LocalSearchSolvers.jl* framework, but with the hope that some parts can be shared with other solvers:
- *Constraints.jl*: a store of usual constraints in CP
- *ConstraintDomains.jl*: structures and methods for the domain of variables
- *CompositionalNetworks.jl*: glass-box neural networks for scalable compositions of functions
- A very nice logo with chains and Julia (in)famous colored dots
There is an extensive list of incredible Julia packages and internal methods that provide all the computational power and the expressive syntax of the Constraint Programming ecosystem in Julia. We will also highlight the key external dependencies such as JuMP, Evolutionary, Dictionaries, Base.Threads, and more!
Incidentally, we will try to have some fun with an interactive modeling session (if interactivity is allowed in the COVID-19 context) for LocalSearchSolvers.jl. Did we mention that the solving speed scales superlinearly with the number of threads/processes?
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/8LL9QH/
Red
Jean-François BAFFIER (azzaare@github)
PUBLISH
3PGHMY@@pretalx.com
-3PGHMY
Julog.jl: Prolog-like Logic Programming in Julia
en
en
20210728T130000
20210728T131000
0.01000
Julog.jl: Prolog-like Logic Programming in Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/3PGHMY/
Red
Xuan (Tan Zhi Xuan)
PUBLISH
GNB93V@@pretalx.com
-GNB93V
Solving discrete problems via Boolean satisfiability with Julia
en
en
20210728T131000
20210728T132000
0.01000
Solving discrete problems via Boolean satisfiability with Julia
Many discrete problems in computer science can be encoded into Boolean satisfiability (SAT) problems. In such problems, all variables are Boolean (true or false), but are restricted by *constraints* between the Boolean variables.
Over the last 50 years there have been remarkable developments in understanding how to solve these constraint satisfaction problems, and many open-source solvers capable of handling problems with millions of variables have been developed, some of which have been wrapped in Julia. However, their code is often difficult to understand and modify.
In order to increase the awareness and accessibility of SAT solvers in the community, and to encourage experimentation, we developed a simple solver in pure Julia that is performant for small systems.
We have also developed a tool that allows us to write down discrete problems, such as sudoku, symbolically in Julia, and encode them into SAT.
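To make the encoding concrete, here is a toy brute-force CNF checker in pure Julia (illustrative only — not the solver described in the talk, which would use far smarter search):

```julia
# A CNF formula is a vector of clauses; each clause is a vector of literals,
# where literal k means "variable |k| is true if k > 0, false if k < 0".
satisfies(assign, clause) = any(l -> (l > 0) == assign[abs(l)], clause)

# Try all 2^nvars assignments; return the first satisfying one, or nothing.
function brute_sat(nvars, clauses)
    for bits in 0:(2^nvars - 1)
        assign = [(bits >> (v - 1)) & 1 == 1 for v in 1:nvars]
        all(c -> satisfies(assign, c), clauses) && return assign
    end
    return nothing
end

# (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3)
brute_sat(3, [[1, 2], [-1, 2], [-2, 3]])
```

Real SAT solvers replace the exhaustive loop with unit propagation and conflict-driven clause learning, but the clause encoding is the same.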
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/GNB93V/
Red
David P. Sanders
PUBLISH
LRHPUH@@pretalx.com
-LRHPUH
Running Programs Forwards, Backwards, and Everything In Between
en
en
20210728T132000
20210728T133000
0.01000
Running Programs Forwards, Backwards, and Everything In Between
This talk should be of interest to people interested in any of:
- Compiler transformations
- Probabilistic programming
- Inference and machine learning
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/LRHPUH/
Red
Zenna Tavares
PUBLISH
FEG39B@@pretalx.com
-FEG39B
FunSQL: a library for compositional construction of SQL queries
en
en
20210728T133000
20210728T140000
0.03000
FunSQL: a library for compositional construction of SQL queries
To introduce FunSQL, we will construct a practical query from healthcare informatics and then discuss how it works. We use a fragment of the [OMOP Common Data Model](https://github.com/OHDSI/CommonDataModel), a cross-platform database model for observational healthcare data.
As typical in healthcare, this schema is patient-centric. The table `person` contains de-identified information about patients including the unique identifier, approximate birthdate, and demographic information. To make this table available for FunSQL, we define it as follows.
<pre>
const person =
SQLTable(:person, columns = [:person_id, :year_of_birth, :location_id])
</pre>
The `person` table has a foreign key to `location`, which specifies a geographic location, typically down to a zip code.
<pre>
const location =
SQLTable(:location, columns = [:location_id, :city, :state, :zip])
</pre>
Each person is associated with clinical events: encounters with care providers, recorded observations, diagnosed conditions, performed procedures, etc. We will represent one of them.
<pre>
const visit_occurrence =
SQLTable(:visit_occurrence, columns = [:visit_occurrence_id, :person_id, :visit_start_date])
</pre>
With this background in place, let us suppose a physician scientist asks:
*When was the last time each person, born in 2000 or earlier and living in Illinois, was seen by a care provider?*
This research question could be answered using FunSQL.
<pre>
From(person) |>
Where(Get.year_of_birth .<= 2000) |>
Join(:location => From(location),
on = (Get.location_id .== Get.location.location_id)) |>
Where(Get.location.state .== "IL") |>
Join(:visit_group => From(visit_occurrence) |>
Group(Get.person_id),
on = (Get.person_id .== Get.visit_group.person_id),
left = true) |>
Select(Get.person_id,
:max_visit_start_date =>
Get.visit_group |> Agg.Max(Get.visit_start_date))
</pre>
FunSQL provides operations with familiar SQL names such as `From`, `Where`, `Join`, `Group`, and `Select`, which can be chained together using the `|>` operator. The notation `:location => From(location)`, and its counterpart `Get.location.state`, lets us arrange table attributes hierarchically. Most importantly, the query can be constructed and tested incrementally, one operation at a time.
Contrast this with a hand-crafted SQL query.
<pre>
SELECT p.person_id, MAX(vo.visit_start_date)
FROM person p
JOIN location l ON (p.location_id = l.location_id)
LEFT JOIN visit_occurrence vo ON (p.person_id = vo.person_id)
WHERE (p.year_of_birth <= 2000) AND (l.state = 'IL')
GROUP BY p.person_id
</pre>
Although the SQL query is compact, it cannot be incrementally constructed. Indeed, if we follow the progression of the research question, we arrive at:
<pre>
FROM person p
WHERE (p.year_of_birth <= 2000)
JOIN location l ON (p.location_id = l.location_id)
...
</pre>
But this is not valid SQL. SQL enforces a rigid order of clauses: `FROM`, `JOIN`, `WHERE`, `GROUP BY`. As we refine a SQL query, attempting to incrementally correlate it with the research question, we are always forced to backtrack and rebuild it. This is what makes SQL tedious and error-prone.
FunSQL solves the problem of compositional query construction by representing individual operations as subqueries with a deferred `SELECT` list.
<pre>
q1 AS (SELECT ... FROM person)
q2 AS (SELECT ... FROM q1 WHERE q1.year_of_birth <= 2000)
q3 AS (SELECT ... FROM location)
q4 AS (SELECT ... FROM q2 JOIN q3 ON (q2.location_id = q3.location_id))
q5 AS (SELECT ... FROM q4 WHERE q4.state = 'IL')
q6 AS (SELECT ... FROM visit_occurrence)
q7 AS (SELECT ... FROM q6 GROUP BY q6.person_id)
q8 AS (SELECT ... FROM q5 LEFT JOIN q7 ON (q5.person_id = q7.person_id))
</pre>
The final subquery fixes the output columns.
<pre>
SELECT q8.person_id, q8.max_visit_start_date FROM q8
</pre>
Once the output columns are known, each deferred `SELECT` list can be resolved automatically. For instance, the references `q1.year_of_birth`, `q2.location_id`, and `q5.person_id` force `q1` to take the following form.
<pre>
q1 AS (SELECT person_id, year_of_birth, location_id FROM person)
</pre>
This `SELECT` resolution also propagates aggregate expressions. Thus, `q7` becomes:
<pre>
q7 AS (SELECT q6.person_id,
MAX(q6.visit_start_date) AS max_visit_start_date
FROM q6
GROUP BY q6.person_id)
</pre>
This approach provides a uniform compositional interface to the variety of SQL operations, preserving the expressive power of SQL while eliminating its stifling inflexibility.
For a Julia programmer, FunSQL realizes query operations as first-class objects. Treated as values, they can be generated independently, assembled into composite operations, and remixed as needed. FunSQL lets us construct queries systematically, converging upon the research questions we wish to ask our databases.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/FEG39B/
Red
Kyrylo Simonov
Clark C. Evans
PUBLISH
XV3AH8@@pretalx.com
-XV3AH8
TopOpt.jl: topology optimization software done right!
en
en
20210728T163000
20210728T170000
0.03000
TopOpt.jl: topology optimization software done right!
Topology optimization is a field that combines physics simulation and mathematical optimization to optimize the shapes and designs of physical systems. It is an extremely rich and fast-growing field with its roots in structural and solid mechanics design, but it is quickly spreading into other areas of physics and engineering. As the field is still growing fast, there is no consensus yet on what functionality a decent topology optimization software package must provide. TopOpt.jl takes the ability to easily experiment with existing algorithms, and to easily define new problems to apply them to, to a whole new level. Manually deriving gradients of long chained functions still makes up an embarrassingly large share of almost every important topology optimization paper in the field to this day! TopOpt.jl hopes to eliminate the need for this using automatic differentiation (Zygote.jl). Some custom adjoint rules need to be defined for efficiency, but automatic differentiation makes the software design and the API for defining custom adjoints much more pleasant than the status quo of re-inventing automatic differentiation for every new objective, constraint, sub-function, physical system, etc. The modular design of TopOpt.jl also allows a near-complete separation of the objective and constraint definitions from the mathematical optimization algorithm implementations, which enables both to grow independently, appealing to different audiences with different sets of expertise.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/XV3AH8/
Red
Mohamed Tarek
PUBLISH
99GSDN@@pretalx.com
-99GSDN
FrankWolfe.jl: scalable constrained optimization
en
en
20210728T170000
20210728T173000
0.03000
FrankWolfe.jl: scalable constrained optimization
For large-scale and data-intensive optimization, first-order methods are often a favoured choice, motivated by faster iterations and lower memory requirements.
Frank-Wolfe algorithms allow the optimization of a differentiable function over a convex set, solving a linear optimization problem at each iteration to determine a progress direction.
Each of these linear subproblems is much cheaper than the quadratic subproblems solved by projected gradient algorithms.
The talk will present the package and how it fills an unaddressed spot in the Julia optimization landscape, comparing it with DSL approaches such as JuMP, Convex.jl, and StructuredOptimization.jl, and with other smooth optimization frameworks such as Optim.jl and JuliaSmoothOptimizers.
After a quick overview of the algorithm, we will cover some interesting properties it exhibits on specific optimization problems, in particular the solution sparsity preserved throughout the whole optimization process.
Sparsity here means that the iterates are a convex combination of a small number of extreme points of the feasible set, which can result in low-rank matrices, sparse arrays, or other specific structures depending on the feasible set.
In the last part of the talk, we will share some insights gained from developing the package on building generic algorithms, in particular handling vertices that are assumed to live in a vector space, but not necessarily one of finite dimension.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/99GSDN/
Red
Mathieu Besançon
PUBLISH
TEKDX9@@pretalx.com
-TEKDX9
Modelling cryptographic side-channels with Julia types
en
en
20210728T173000
20210728T174000
0.01000
Modelling cryptographic side-channels with Julia types
In hardware security, side-channel attackers can monitor analog signals, like the per-instruction power consumed. They can record this data during the execution of a cryptographic algorithm to gain additional information. Such leakage data can depend on intermediate values of the cipher, which themselves depend on the secret key. Hence, with such side-channel data, reconstructing the key of the cipher may become feasible.
In this talk, we focus on using Julia’s type system to create a framework for generating, analyzing and protecting such side-channel data. For this purpose, we create custom types that behave like integers or arrays. When passing values of these types to a Julia implementation of a cryptographic algorithm, multiple dispatch automatically produces an instrumented or transformed version of that algorithm. Usually, this process does not require modifications to the algorithm’s original implementation.
We look in particular at two different functionalities that we can integrate via such custom types:
- To simulate potential side-channel attacks, it is useful to generate data traces that depend on intermediate values. We will show how to construct types that log a trace of information about the values processed. This reduces the need for access to analog recording hardware, which is particularly useful when teaching side-channel security concepts in student practicals.
- To explore protection against side-channel attacks, values that depend on the secret key should never appear in memory without protection. We explore how integer and array-like types can be created to implement a range of techniques for splitting register values into multiple shares, to reduce the dependence of leakage data on the actual values processed.
Julia’s parametric type system allows us to arbitrarily stack those types on top of each other. For instance, protection types can be stacked on top of logging types. This construction allows us to conveniently collect traces of protected data which can be, for example, used to verify the effectiveness of the protection.
Package: https://github.com/parablack/CryptoSideChannel.jl
<br>Documentation: https://parablack.github.io/CryptoSideChannel.jl/dev/
<br>Dissertation: https://github.com/parablack/CryptoSideChannel.jl/raw/master/diss.pdf
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/TEKDX9/
Red
Simon Schwarz
Markus Kuhn
PUBLISH
7XFSZB@@pretalx.com
-7XFSZB
Lattice Reduction using LLLplus.jl
en
en
20210728T174000
20210728T175000
0.01000
Lattice Reduction using LLLplus.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/7XFSZB/
Red
Chris Peel
PUBLISH
EWFRHW@@pretalx.com
-EWFRHW
SpeedMapping.jl: Implementing Alternating cyclic extrapolations
en
en
20210728T175000
20210728T180000
0.01000
SpeedMapping.jl: Implementing Alternating cyclic extrapolations
The talk will briefly explain the ideas behind the method and demonstrate its use with two examples: *i)* computing a dominant eigenvalue by accelerating the power iteration, and *ii)* minimizing a multivariate Rosenbrock function, with or without constraints, by providing only the objective or only the gradient. Benchmarks will show significant speed gains over L-BFGS and the nonlinear conjugate gradient method.
A notebook for the talk may be downloaded at https://github.com/NicolasL-S/SpeedMapping.jl/blob/main/Resources/SpeedMapping_JuliaCon2021.ipynb
SpeedMapping may be installed directly from the REPL, or downloaded here: https://github.com/NicolasL-S/SpeedMapping.jl
The Alternating cyclic extrapolation method is detailed in:
N. Lepage-Saucier, _Alternating cyclic extrapolation methods for optimization algorithms_, arXiv:2104.04974 (2021). https://arxiv.org/abs/2104.04974
The paper also shows other applications, such as a logistic regression, a large set of CUTEst unconstrained problems, accelerating the expectation-maximization (EM) algorithm for Poisson mixtures and for a proportional hazards regression with interval censoring, for canonical tensor decomposition, and for the method of alternating projections (MAP) applied to regressions with high-dimensional fixed effects.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/EWFRHW/
Red
Nicolas Lepage-Saucier
PUBLISH
BGLQ3U@@pretalx.com
-BGLQ3U
🎈 Pluto.jl — one year later
en
en
20210728T190000
20210728T193000
0.03000
🎈 Pluto.jl — one year later
Hi! We're the developers of Pluto.jl, and we have been busy!
[Pluto.jl](https://github.com/fonsp/Pluto.jl) is a notebook IDE for Julia, with a focus on interactivity and education. In this talk, you'll learn about our work during the past year, which includes:
- Built-in package manager
- Macro support
- Static site export
- Interactive site export!
- Integration with many packages
- Disabling reactivity?
- Automatically run notebooks as REST APIs (also in separate talk)
- Tools for university education (also in separate talk)
We will also talk a bit about experimental features and future plans!
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/BGLQ3U/
Red
Fons van der Plas
PUBLISH
AUYF3X@@pretalx.com
-AUYF3X
Julia in VS Code - What's New
en
en
20210728T193000
20210728T200000
0.03000
Julia in VS Code - What's New
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/AUYF3X/
Red
David Anthoff
Zac Nugent
PUBLISH
KD7MR7@@pretalx.com
-KD7MR7
Web application for atmospheric dispersion modeling.
en
en
20210728T200000
20210728T201000
0.01000
Web application for atmospheric dispersion modeling.
For both military and civilian purposes, assessing the impact of a CBRN agent release is crucial. Atmospheric dispersion models can be used to assess the area contaminated by an agent, and their accuracy particularly depends on high-quality weather data. A joint project of the Royal Military Academy of Belgium, ECMWF, and the Royal Meteorological Institute of Belgium aims to develop a web application that implements simple dispersion models with real-time weather forecast data from ECMWF. The idea is to provide quick assessments of the impact area of a CBRN release, as well as response models to plan appropriate actions. The application will run on the ECMWF Weather Cloud so that the input weather data for the models can be accessed quickly.
A prototype of the application has already been developed. For the time being, it implements NATO's very simple ATP-45 dispersion model, which basically draws various hazard area shapes on the map according to the wind speed at the release location. Some screenshots of the app are provided in the attachment.
The more complex FLEXPART atmospheric model is currently being added to the application, and other state-of-the-art models are foreseen as well. The response model will also be added, using event-driven simulation to account for other external data (population density, topography, etc.). Ultimately, it will be possible to use ensemble forecast data to produce ensemble dispersion modelling and introduce probabilistic quantification into the response model.
The choice of Julia for the implementation was made because we want to use the SimJulia.jl package, maintained by Ben Lauwens, who is one of the supervisors of the project. We are currently using the Genie.jl web framework as the backend (Angular is used for the frontend) and some other packages for handling meteorological data (GRIB.jl, packages from JuliaGeo, ...).
The presentation will cover:
- A general description of the project
- A live demo of the application
- An explanation of the role of Julia in the application
- The future of the project
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/KD7MR7/
Red
Tristan Carion
PUBLISH
9XJTRW@@pretalx.com
-9XJTRW
HypertextLiteral: performant string interpolation for HTML/SVG
en
en
20210728T201000
20210728T202000
0.01000
HypertextLiteral: performant string interpolation for HTML/SVG
Generating HTML + SVG output is a common requirement for applications, especially when building scientific dashboards. The faster the better. Being able to use proven hypertext fragments as templates is especially important. The ability to encapsulate and re-use these templates as functions is critical.
`HypertextLiteral` (HTL) is a Julia package that satisfies these criteria, permitting complex hypertext output to be constructed server-side. This package is inspired by its JavaScript namesake written by Mike Bostock, the creator of D3. It uses string literals along with list comprehension syntax. The `@htl` macro translates an HTML template into a function closure. Here is an example.
```
books = [
(name="Who Gets What & Why", year=2012, authors=["Alvin Roth"]),
(name="Switch", year=2010, authors=["Chip Heath", "Dan Heath"]),
(name="Governing The Commons", year=1990, authors=["Elinor Ostrom"])]
render_row(book) = @htl("""
<tr><td>$(book.name) ($(book.year))<td>$(join(book.authors, " & "))
""")
render_table(books) = @htl("""
<table><caption><h3>Selected Books</h3></caption>
<thead><tr><th>Book<th>Authors<tbody>
$((render_row(b) for b in books))</tbody></table>""")
display("text/html", render_table(books))
#=>
<table><caption><h3>Selected Books</h3></caption>
<thead><tr><th>Book<th>Authors<tbody>
<tr><td>Who Gets What & Why (2012)<td>Alvin Roth
<tr><td>Switch (2010)<td>Chip Heath & Dan Heath
<tr><td>Governing The Commons (1990)<td>Elinor Ostrom
</tbody></table>
=#
```
*HTL is contextual.* At macro expansion time, the string template is passed through a lightweight HTML/SGML lexer. This is used to track the context of each interpolated Julia expression: is it part of element content, an attribute value, or is it inside an element tag where several attributes might be expanded? There is also a rawtext context used when content is inside a `script` tag.
*HTL is extensible.* With multiple dispatch, custom data types can provide their own contextual serialization. This permits us to omit boolean attributes that are false. It also lets us expand vectors differently dependent upon context: within element content, they are simply appended; while within attribute values, they are space separated.
*HTL is fast.* A template rendering that takes 500μs with HTL takes 4.5ms with naive string interpolation and list comprehension. Object-based alternatives, such as Hyperscript, take even longer (21ms). Memory usage of HTL is likewise low: it uses a third less memory than the naive string approach, and one sixth the memory of an object-based approach.
This efficiency was achieved by emulating Julia's documentation system. Each component of the template is converted into an object which prints its content to a given `IO`. During macro processing, we build a Julia program that relies upon three primitive structures:
- *Bypass* is used for content that should be emitted as-is.
- *Render* is used for content that should be properly escaped.
- *Reprint* is a function closure used for composing content.
As the template is converted, leaf nodes become either *Bypass* or *Render*, depending on whether they are literal parts of the template or interpolated variables that must be escaped. *Reprint* is used to concatenate adjacent components that appear in the template or are generated by a list comprehension.
*HTL is safe.* Escaping code is layered using an `IO` proxy. Each of the 3 primitives has their own dispatch with regard to this proxy. This way, so long as the template translation properly distinguishes between `Bypass` and `Render` chunks, escaping is always performed. Handled as an exception, `<script>` content is checked to ensure it does not contain the `"</script>"` literal but is otherwise unescaped.
HTL can serialize attribute sets from pairs, dictionaries or named tuples. Unlike its Javascript namesake, we don't get clever with `camelCase` attribute names, which must be left as-is for SVG. Instead, we only convert `snake_case` names to their `kebab-case` equivalent. Moreover, if attribute sets are constants, we can pre-compute their serialization at macro expansion time.
It is notable how nicely the Julia implementation flowed together. Julia's excellent macro facility lets us easily convert embedded functions and list comprehensions into relevant template logic. Julia's handling of tiny function closures was outstanding: not only does it let us write code that is easy to read, the approach turned out to be surprisingly fast. Julia's `IO` interface lets us easily insert a proxy that was trivial to write, and, yet again, surprisingly efficient. Finally, multiple dispatch enables user-defined types to have their own serialization. Kudos Julia.
This approach could be used to make similar template libraries for other structured notations, such as JSON.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/9XJTRW/
Red
Clark C. Evans
PUBLISH
39ZFBF@@pretalx.com
-39ZFBF
Pluto.jl Notebooks are Web APIs!
en
en
20210728T202000
20210728T203000
0.01000
Pluto.jl Notebooks are Web APIs!
Pluto is fundamentally built upon **reactivity**: the notebook knows how its cells depend on one another. Cells can therefore update in response to other cells changing, which happens every time you run a cell in a Pluto notebook. But what if these intelligent updates could also happen on demand, programmatically?
Introducing the new, experimental “What you see is what you REST” feature! (*WYSIWYR* for short.) Every global variable becomes an HTTP endpoint, and you can provide other global variables as parameters. Instead of experimenting with a model inside Pluto and then moving your code to an API script, your notebook _is_ an API, using reactivity to automatically create an execution model for each endpoint.
With this feature, interacting with Pluto notebooks, both from outside and from inside other Pluto notebooks, becomes remarkably simple. Everything from sharing models to writing custom web APIs with Julia is now possible entirely from within Pluto, without having to transition from notebook code to “production code”.
This talk will demonstrate how to get started with WYSIWYR and use it in your own projects. By also explaining how the feature works, we hope to get experienced users interested in the feature. Along the way, we will discover how its expansion to existing notebook interactivity features opens the door to more seamless inter-notebook communication, and even to building web applications and APIs all from inside Pluto notebooks.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/39ZFBF/
Red
Connor Burns
PUBLISH
RERJWC@@pretalx.com
-RERJWC
BifurcationKit.jl: bifurcation analysis of large scale systems
en
en
20210728T123000
20210728T130000
0.03000
BifurcationKit.jl: bifurcation analysis of large scale systems
In this talk, I will give a panorama of `BifurcationKit.jl`, a Julia package to perform numerical bifurcation analysis of large dimensional equations (PDE, nonlocal equations, etc) using Matrix-Free / Sparse Matrix formulations of the problem. Notably, numerical bifurcation analysis can be done **entirely** on GPU.
`BifurcationKit` incorporates continuation algorithms (PALC, deflated continuation, ...) which can be used to perform **fully automatic bifurcation diagram** computation of stationary states. I will showcase this with the 2d Bratu problem. I will also show an example of a neural network that runs entirely on the GPU.
Additionally, by leveraging the above methods, the package can also seek periodic orbits of Cauchy problems by casting them into an equation of high dimension. It is by now one of the only software packages that provides parallel (Standard / Poincaré) shooting methods and finite-differences-based methods to compute periodic orbits in high dimensions. I will present an application highlighting the ability to fine-tune `BifurcationKit` to get performance.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/RERJWC/
Blue
Romain VELTZ
PUBLISH
E8SVYT@@pretalx.com
-E8SVYT
Agents.jl and the next chapter in agent based modelling
en
en
20210728T130000
20210728T133000
0.03000
Agents.jl and the next chapter in agent based modelling
Agent based modelling (ABM) is a simulation method in which autonomous agents react to their environment according to a predefined set of rules. It is a bottom-up approach for modelling and simulating complex systems, such as behavior, decision making, crowd dynamics, and other socio-economic problems, as well as complex natural processes such as chemical reactions or biological systems.
Since ABMs are not described by simple, concise mathematical equations, the code that generates them is typically complicated, large, and slow. In addition, since many of these problems are highly domain specific, a lot of ABMs are hand-written from scratch.
Agents.jl provides a solution to this complication. Acknowledging that ABM frameworks have existed for decades, we show that Agents.jl is not only the most performant, but also the least complicated software (in terms of lines of code written to implement well-known ABM test cases), providing the same (and sometimes more) features as competitors.
This enables rapid prototyping of your domain specific ABM, with tried and tested (but generic) tooling.
The talk will provide an introduction to many of these helpful features, and showcase how well Agents.jl integrates with the entire Julia ecosystem: interactive applications with Makie.jl, differential equations from DifferentialEquations.jl, parameter optimization from BlackBoxOptim.jl, and more.
To conclude, we'll outline some of the big next-steps on the roadmap that other ABM frameworks will struggle to match in the absence of the Julia ecosystem.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/E8SVYT/
Blue
Tim DuBois
PUBLISH
LHQEPZ@@pretalx.com
-LHQEPZ
An individual-based model to simulate Coffee Leaf Rust epidemics
en
en
20210728T133000
20210728T134000
0.01000
An individual-based model to simulate Coffee Leaf Rust epidemics
Coffee Leaf Rust (CLR) is an active research topic in plant pathology and epidemiology. However, the overall effect of the use of shade trees on the development of the CLR disease has not yet been established. The introduction of shade trees in a farm produces local changes that can have positive or negative effects on the development of CLR epidemics, depending on the life cycle stage of present infections.
In an effort to integrate relevant pathology and ecology knowledge, we developed a spatially explicit individual-based model that allows us to simulate CLR epidemics at a farm scale and its effect on coffee productivity over several years. Using high-throughput computing, we explore different agricultural management strategies, including various patterns of shade-providing tree placement within the farm, and test their efficacy at controlling a potential CLR outbreak. This talk will show how Agents.jl and Distributed.jl facilitated our research.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/LHQEPZ/
Blue
Manuela Vanegas Ferro
PUBLISH
ECKGDE@@pretalx.com
-ECKGDE
hPF-MD.jl: Hybrid Particle-Field Molecular-Dynamics Simulation
en
en
20210728T134000
20210728T135000
0.01000
hPF-MD.jl: Hybrid Particle-Field Molecular-Dynamics Simulation
In this talk, we will give (1) a brief overview of the hPF-MD method, (2) example systems compared with results from existing hPF-MD packages and the standard MD method, and (3) the advantages and extensibility of the Julia implementation.
References:
1. Milano, G.; Kawakatsu, T. Hybrid Particle-Field Molecular Dynamics Simulations for Dense Polymer Systems. The Journal of Chemical Physics 2009, 130 (21), 214106. https://doi.org/10.1063/1.3142103.
2. Wu, Z.; Milano, G.; Müller-Plathe, F. Combination of Hybrid Particle-Field Molecular Dynamics and Slip-Springs for the Efficient Simulation of Coarse-Grained Polymer Models: Static and Dynamic Properties of Polystyrene Melts. J. Chem. Theory Comput. 2020. https://doi.org/10.1021/acs.jctc.0c00954.
3. Caputo, S.; Hristov, V.; Nicola, A. D.; Herbst, H.; Pizzirusso, A.; Donati, G.; Munaò, G.; Albunia, A. R.; Milano, G. Efficient Hybrid Particle-Field Coarse-Grained Model of Polymer Filler Interactions: Multiscale Hierarchical Structure of Carbon Black Particles in Contact with Polyethylene. J. Chem. Theory Comput. 2021, https://doi.org/10.1021/acs.jctc.0c01095.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/ECKGDE/
Blue
Zhenghao Wu
PUBLISH
EFYMME@@pretalx.com
-EFYMME
Enhanced Sampling in Molecular Dynamics Simulations with Julia
en
en
20210728T135000
20210728T140000
0.01000
Enhanced Sampling in Molecular Dynamics Simulations with Julia
When performing molecular dynamics (MD) simulations of materials in chemistry, physics, and biology, there exists a large gap between the time scales that can be probed computationally and the ones observed in experiments. One strategy to approach this issue has been to develop algorithms that enhance sampling over the simulated system's configuration space, overcoming the otherwise hard-to-surmount energetic barriers that limit the observation of certain possible states. These algorithms alone are not enough to really push toward larger timescales; one also needs to implement them on hardware accelerators such as GPUs. In fact, a good number of the most recently developed algorithms tend to become a bottleneck for molecular simulations accelerated on GPUs, as they are commonly implemented on CPUs, even when some of them rely heavily on machine learning strategies.
Within our research group, we are trying to provide a library that can be hooked into different molecular dynamics simulation packages, allowing the user to perform enhanced sampling simulations through a uniform interface without sacrificing the efficiency of the underlying MD code. The library is currently located at https://github.com/SSAGESLabs/PySAGES, and although it is a Python library, it has continuously been prototyped in Julia. For example, https://github.com/pabloferz/ReactionCoordinates.jl and https://github.com/pabloferz/DLPack.jl are some of the pieces that we have built for this purpose. The prototypes, being written in Julia, are of course faster than the current Python implementation.
I would like to share my experience and perspective using Julia to build these tools.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/EFYMME/
Blue
Pablo Zubieta
PUBLISH
CAMR3P@@pretalx.com
-CAMR3P
Vectorized Query Evaluation in Julia
en
en
20210728T170000
20210728T173000
0.03000
Vectorized Query Evaluation in Julia
In modern (SQL) database query engines, there are two major approaches to evaluating user-provided queries in a highly performant manner (see e.g. [1]):
Query Compilation: Each pipeline of a query plan gets compiled into a single function that effectively fuses operators into a single (nested) for-loop. This function is then compiled to highly-optimized machine code. Operators process data tuple-at-a-time.
Vectorization: The query plan is interpreted, and each operator in the plan is mapped to a pre-compiled function. To offset the arising interpretation cost, each operator evaluates batches ("vectors") of, say, 1000 values in bulk on each step.
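As a toy illustration of the vectorized style (a hypothetical sketch, not the actual RelationalAI engine code), a batch-at-a-time filter operator can amortize per-value interpretation overhead like this:

```julia
# A "vectorized" filter operator: instead of invoking an interpreted
# predicate once per tuple, it processes batches of values in bulk,
# spreading the per-call interpretation cost across the whole batch.
function filter_batches(pred, input::Vector; batchsize::Int = 1000)
    out = similar(input, 0)
    for lo in 1:batchsize:length(input)
        hi = min(lo + batchsize - 1, length(input))
        batch = view(input, lo:hi)
        append!(out, batch[pred.(batch)])  # evaluate the predicate in bulk
    end
    return out
end

filter_batches(iseven, collect(1:10))  # → [2, 4, 6, 8, 10]
```

Each iteration crosses the interpreter boundary once per batch rather than once per value, which is the essence of the vectorized model.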
Query compilation offers more optimization potential for LLVM and is often effective at keeping values in registers, while vectorization enables shorter compilation times — for better support of interactive queries. As part of the production-grade RelationalAI Knowledge Graph Management System, we implemented both approaches in Julia.
In this presentation, we explain in greater detail how both of these fundamentally different techniques work, why we are implementing them, and how we aim to combine them. We showcase where Julia enabled us to implement highly performant code with ease, but also reveal where we had to spend non-trivial amounts of engineering effort to arrive at the desired performance.
[1] https://www.vldb.org/pvldb/vol11/p2209-kersten.pdf
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/CAMR3P/
Blue
Richard Gankema
Alex Hall
PUBLISH
KUVB9C@@pretalx.com
-KUVB9C
ReTest.jl - more productive testing
en
en
20210728T173000
20210728T174000
0.01000
ReTest.jl - more productive testing
The main idea behind [ReTest.jl](https://github.com/JuliaTesting/ReTest.jl) is that its `@testset` macro does not run tests immediately, but instead stores them for later execution, via a call to the `retest` function. This is what enables a lot of the provided features, two of which were the initial drive for the creation of the package:
- filtering which testsets are run by matching their descriptions against a given regular expression;
- the ability to write tests "inline" in source files, right next to the code implementing the tested behaviors.
The fact that `Test` and `ReTest` have the same macro name, `@testset`, makes it usually trivial to switch an existing test suite over to `ReTest`. So much so that there is an option to actually use `ReTest` on a test suite without changing a single line of code!
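A minimal sketch of that workflow (assuming ReTest's documented `retest` pattern matching):

```julia
using ReTest

@testset "arithmetic" begin
    @test 1 + 1 == 2          # stored, not yet executed
end

@testset "strings" begin
    @test uppercase("ok") == "OK"
end

# Nothing has run so far. Now run only the testsets whose
# description matches the given pattern:
retest("arithmetic")
```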
And what if you could use `Revise` on your test files...?
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/KUVB9C/
Blue
Rafael Fourquet
PUBLISH
7Q8P9D@@pretalx.com
-7Q8P9D
Sponsor Talk (Invenia)
en
en
20210728T174000
20210728T174500
0.00500
Sponsor Talk (Invenia)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/7Q8P9D/
Blue
PUBLISH
EJP7SS@@pretalx.com
-EJP7SS
Sponsor talk (KAUST)
en
en
20210728T174500
20210728T175000
0.00500
Sponsor talk (KAUST)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/EJP7SS/
Blue
PUBLISH
NKSCDS@@pretalx.com
-NKSCDS
Sponsor talk (Pumas AI)
en
en
20210728T175000
20210728T175500
0.00500
Sponsor talk (Pumas AI)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/NKSCDS/
Blue
PUBLISH
RSPXSS@@pretalx.com
-RSPXSS
Sponsor talk (Quera)
en
en
20210728T175500
20210728T180000
0.00500
Sponsor talk (Quera)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/RSPXSS/
Blue
PUBLISH
KNWNHJ@@pretalx.com
-KNWNHJ
Changing Physics education with Julia
en
en
20210728T190000
20210728T193000
0.03000
Changing Physics education with Julia
In many disciplines of physics, code is not explicitly discussed as part of the learning subject. Here I will focus on nonlinear dynamics, a discipline that suffers greatly from the disconnect between the mathematics and the coding. In fact, eliminating this disconnect is largely what motivated the creation of the JuliaDynamics software organization.
In this talk I will present the numerous ways in which we have worked to fundamentally change physics education for the better. This change necessarily requires including coding as part of the learning subject. I will demonstrate how we created easy-to-read code using Julia, how to incorporate it into exercises, how to make interactive applications that enhance learning, and how to include scientific analysis using code as part of the learning subject. Our new approach to teaching nonlinear dynamics, which I will present here, is also published as a new textbook on the topic by Springer. The book explicitly includes real, runnable Julia code. A GitHub repository related to the book can be found here: https://github.com/JuliaDynamics/NonlinearDynamicsTextbook
I believe that this new approach should be used by more and more branches of physics. When this is done, then finally coding will be viewed as an integral part of science, instead of some "background business behind the curtains", which is its current perception. Ultimately, this will lead not only to better science, but to actually reproducible science.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/KNWNHJ/
Blue
George Datseris
PUBLISH
RXF8UK@@pretalx.com
-RXF8UK
Open and interactive Computational Thinking with Julia and Pluto
en
en
20210728T194000
20210728T201000
0.03000
Open and interactive Computational Thinking with Julia and Pluto
During the Fall 2020 and Spring 2021 semesters we have been teaching an online, open, interactive course on Computational Thinking using Julia and the Pluto notebook.
Previously we had used the Jupyter notebook, but in the summer of 2020 we decided to take the plunge with the then-brand-new Pluto notebook. It has turned out to be an excellent -- although at times frustrating! -- decision.
Pluto has allowed us to develop a completely new approach to writing both an interactive online textbook, as well as interactive problem sets with beautiful built-in solution checks that make working on problems both more fun and more rewarding.
Indeed, the capabilities of the Pluto notebook itself have developed together with the course, as we have collaborated on the required tools and ideas. Many recent features in Pluto have been added with this style of teaching in mind, and we hope to inspire more teachers to write interactive material.
Half-way through the second semester, each video lecture consistently receives over 2,000 views, and the course website receives 6,000 hits per month. The interactive and self-checking nature of Pluto homeworks is especially useful in an open course, where students have to work on the material independently.
We will discuss our ideas and goals for teaching Julia and computational thinking, mixing concepts from computer science and applied mathematics, and how Pluto's technical features and this style of teaching enable and enhance one another.
Course homepage with online interactive textbook: https://computationalthinking.mit.edu/Spring21/
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/RXF8UK/
Blue
David P. Sanders
Alan Edelman
Fons van der Plas
PUBLISH
GFHKV7@@pretalx.com
-GFHKV7
Clearing the Pipeline Jungle with FeatureTransforms.jl
en
en
20210728T123000
20210728T124000
0.01000
Clearing the Pipeline Jungle with FeatureTransforms.jl
Feature engineering is an essential component in all machine learning and data science workflows. It is often an exploratory activity in which the pipeline for a particular set of features tends to be developed iteratively as new data or insights are incorporated.
As the feature complexity grows over time it is very common for code to devolve into unwieldy “pipeline jungles” [1], which pose multiple problems to developers. They are often brittle, with highly-coupled operations that make it increasingly difficult to make isolated changes. The over-entanglement of such pipelines also means they are difficult to unit test and debug effectively, making them particularly error-prone. Since adding to this complexity is often easier than investing in refactoring it, pipeline jungles tend to be more susceptible to incurring technical debt over time, which can impact the project’s long-term success.
In this talk, we will showcase some of the key features of the [FeatureTransforms.jl](https://github.com/invenia/FeatureTransforms.jl) package, such as the composability, reusability, and performance of common transform operations, which were designed to help mitigate the problems in our own pipeline jungles.
[FeatureTransforms.jl](https://github.com/invenia/FeatureTransforms.jl) is conceptually different from other widely-known packages that provide similar utilities for manipulating data, such as [DataFramesMeta.jl](https://github.com/JuliaData/DataFramesMeta.jl), [DataKnots.jl](https://github.com/rbt-lang/DataKnots.jl), and [Query.jl](https://github.com/queryverse/Query.jl). These packages provide methods for composing relational operations to filter, join, or combine structured data. However, a query-based syntax, or an API that supports only one data type, is not well suited to composing the kinds of mathematical transformations, such as one-hot encoding, that underpin most (non-trivial) feature engineering pipelines; providing such transformations is what this package aims to do.
The composability of transforms reflects the practice of piping the output of one operation to the input of another, as well as combining the pipelines of multiple features. Reusability is achieved by having native support for the Tables and AbstractArray interfaces, which includes tables such as [DataFrames](https://github.com/JuliaData/DataFrames.jl/), [TypedTables](https://github.com/JuliaData/TypedTables.jl), [LibPQ.Result](https://github.com/invenia/LibPQ.jl), etc, and arrays such as [AxisArrays](https://github.com/JuliaArrays/AxisArrays.jl), [KeyedArrays](https://github.com/mcabbott/AxisKeys.jl), and [NamedDimsArrays](https://github.com/invenia/NamedDims.jl). This flexible design allows for performant code that should satisfy the needs of most users while not being restricted to (or by) any one data type.
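The composability idea can be illustrated with a toy sketch. The `Transform`, `Power`, and `Scale` types and the `apply` function below are hypothetical names for illustration only, not FeatureTransforms.jl's actual API:

```julia
# Toy composable transforms (hypothetical types; not the FeatureTransforms.jl API).
abstract type Transform end

struct Power <: Transform
    exponent::Float64
end

struct Scale <: Transform
    factor::Float64
end

apply(t::Power, x::AbstractVector) = x .^ t.exponent
apply(t::Scale, x::AbstractVector) = x .* t.factor

# Composition: a pipeline is just a vector of transforms applied in order,
# so feature pipelines can be built, reused, and tested piecewise.
apply(pipeline::AbstractVector{<:Transform}, x) =
    foldl((acc, t) -> apply(t, acc), pipeline; init = x)

apply(Transform[Power(2.0), Scale(0.5)], [1.0, 2.0, 3.0])  # [0.5, 2.0, 4.5]
```

Because each transform is a small value type, a pipeline stage can be swapped or unit tested in isolation, which is precisely the property that tangled pipeline jungles lack.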
[1] [Sculley, David, et al. "Hidden technical debt in machine learning systems." Advances in neural information processing systems 28 (2015): 2503-2511](https://proceedings.neurips.cc/paper/2015/hash/86df7dcfd896fcaf2674f757a2463eba-Abstract.html).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/GFHKV7/
Purple
Glenn Moynihan
PUBLISH
8VL9R7@@pretalx.com
-8VL9R7
TiledViews.jl
en
en
20210728T124000
20210728T125000
0.01000
TiledViews.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/8VL9R7/
Purple
Rainer Heintzmann
PUBLISH
CBDFPN@@pretalx.com
-CBDFPN
Structural lambdas for generic code and delayed evaluation
en
en
20210728T125000
20210728T130000
0.01000
Structural lambdas for generic code and delayed evaluation
Pervasive and performant multiple dispatch in Julia has led to the development of functions that convey generic meaning. Deciding on a name and settling on the meaning of these generic functions is challenging work, but it is essential for generic programming.
Functions can be used to direct dispatch, e.g.:
```julia
reduce(vcat, [[1,2,3], [4,5], [6,7,8,9]])
```
There is a fallback method of `reduce` that works for any binary operation, not just `vcat`. However, there is also a method specific to `vcat` that preallocates the result. Both the fallback and the specialization produce the same result, but the specialization is likely to perform better since it is inexpensive to calculate the size of the result.
Consider e.g. `Base.filter` and `Base.Iterators.filter`. The second simply constructs a `Base.Iterators.Filter`, which is in essence nothing more than a lazy representation of `Base.filter(f, itr)`. We developed an experimental package [`FixArgs.jl`](https://github.com/goretkin/FixArgs.jl) to represent this:
```julia
julia> @xquote filter(iseven, $(1:5))
Call(Some(filter), FrankenTuple((Some(iseven), Some(1:5)), NamedTuple()))
```
Suppose we want to define `eltype`, the same way it is defined for `Base.Iterators.Filter`. We can define an `eltype` method for `Call` for the specific parameter `filter`. `FixArgs.jl` defines a macro to help (but the ergonomics should still be improved):
```julia
julia> Base.eltype(filt::(@xquoteT filter(::F, ::I))) where {F, I} = eltype(something(filt.args[2]))
julia> eltype(@xquote filter(iseven, $(1:5)))
Int64
```
`FixArgs.jl` began as a generalization of `Base.Fix1` and `Base.Fix2`, but identifies a common pattern that could systematically replace many existing types and methods. e.g. Broadcasting relies on types to represent lazy function calls, and `materialize` to perform the computation (with e.g. dot fusion). `Base.Generator` and `collect` are analogous.
Instead of generating a new name for a type, and attaching meaning to it, one can meaningfully compose a name from existing meaningful names. One straightforward example is to define `Rational{T}` as
```julia
julia> (@xquoteT ::T / ::T) where T
FixArgs.Call{Some{typeof(/)}, FrankenTuples.FrankenTuple{Tuple{Some{T}, Some{T}}, (), Tuple{}}} where T
```
Note that `Rational{Int}` and `(@xquoteT ::Int / ::Int)` have identical memory layouts!
(Also, occasionally, there is a need for a `Rational`-like type that does not constrain the numerator and denominator to have the same type. The type above would fit the bill).
In this case, `Rational` is in Base, but more generally packages have to depend on a common package (usually called `*Base.jl`) that defines shared types. If it is possible to define new types in terms of existing types, then in a sense the types are structural rather than nominal. This may reduce the need for these common packages and enable better package interoperability.
As another example, the type `Base.Generator(f, itr)` could be identical to `@xquote map(f, itr)` (though not exactly, since the meanings of `map` and `collect` are currently conflated; see https://github.com/JuliaLang/julia/issues/39628).
There are many other examples in the ecosystem, such as in `LazySets.jl`, `LazyArrays.jl`, `MappedArrays.jl`, `StructArrays.jl`, ... where types are defined to essentially represent lazy function calls ad-hoc. They each have a version of "materialize". Note that in most cases, these cannot be replaced directly since `Call` cannot e.g. subtype `AbstractArray` and `AbstractSet`.
`Base.Fix1` and its relatives are useful, even though one can already define a lambda function with the same behavior, because it is possible to dispatch on the structure of such a lambda, as opposed to an opaque name:
```julia
julia> x -> x > 2
#3 (generic function with 1 method)
julia> >(2)
(::Base.Fix2{typeof(>),Int64}) (generic function with 1 method)
```
`FixArgs.jl` (which really should be called e.g. `StructuralLambdas.jl`) allows one to easily define these structural lambdas:
```julia
julia> @xquote x -> x > 2
Fix2(>,2)
julia> typeof(@xquote x -> x > 2)
Fix2{typeof(>), Int64} (alias for FixArgs.Lambda{FixArgs.Arity{1, Nothing}, FixArgs.Call{Some{typeof(>)}, FrankenTuples.FrankenTuple{Tuple{FixArgs.ArgPos{1}, Some{Int64}}, (), Tuple{}}}})
```
(Better aesthetics would be necessary for usability.)
See https://goretkin.github.io/FixArgs.jl/dev/ for more motivation and details, including examples for replacing `Complex` and generalizing `FixedPointNumbers.jl`.
Please note that this talk is about an idea, not `FixArgs.jl` itself. It may turn out that the idea is not practical; e.g. it might pose immense challenges for compilation, or it might be too confusing to marry the meaning of functions and `Call`, or package interoperability will fail due to subtle differences. I hope the idea holds promise. A great way to find out is at JuliaCon 2021.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/CBDFPN/
Purple
Gustavo Nunes Goretkin
PUBLISH
WRNAEN@@pretalx.com
-WRNAEN
Dictionaries.jl - for improved productivity and performance
en
en
20210728T133000
20210728T140000
0.03000
Dictionaries.jl - for improved productivity and performance
This talk will be divided into roughly three sections.
### Motivation
Julia is an awesome language for manipulating data, with excellent built-in functionality that is easy to extend by packages and users. Arrays, sets and dictionaries form the basis of core data structures necessary for a wide range of workloads. Of the three, Julia's `AbstractArray` interface is most extensive — supporting a wide range of data structures *and* a rich set of operations, designed around the same core set of functionality (primarily, indexing and iteration).
On the other hand, `AbstractSet` and `AbstractDict` do not support such a wide range of operations (like broadcasting, `map`, or `reduce`) and the interface for a user to create a new, fully-functional `AbstractDict` is not as clear-cut or simple as it is for an array. *Dictionaries.jl* is an attempt to remedy this situation, by applying the learnings of the `AbstractArray` interface to create a new `AbstractDictionary` interface. By using *Dictionaries.jl*, users can experience improved programmer productivity as well as significantly faster execution for many operations (especially with analytics-style workloads).
### Implementation
A *Dictionaries.jl* `AbstractDictionary` differs from Julia's `AbstractDict` in three main ways:
1. Dictionaries iterate values, like an array, instead of key-value pairs.
2. The indices (or `keys`) of a dictionary are a special kind of dictionary, much like the indices (or `keys`) of an array are a special kind of array.
3. Dictionaries (and their indices) by default iterate in a well-defined order based on insertion, rather than quasi-randomly based on how the hashmap was constructed.
Since the indices of a dictionary are distinct, they naturally form a set (in the mathematical sense). In *Dictionaries.jl* this type is represented by `AbstractIndices <: AbstractDictionary` (unlike for `Dict`, there was no obvious alternative spelling of `Set` available). Every `AbstractIndices` has the special property that its values are the same as its keys, so if `i ∈ indices` then `indices[i]` is just `i`.
From these alone, natural definitions of `broadcast`, `map`, `filter` and `reduce` follow, for both indices (i.e. "sets") and dictionaries. For example, the `map` operation preserves the indices and maps the values to new values. You can even `map` a set of indices/keys into a new dictionary.
This property leads to *Dictionaries.jl*'s primary efficiency gain. The indices of dictionaries (i.e. the expensive part) can be shared between different dictionaries. For example, the provided hashmap `Dictionary` can share its hash `Indices` with other dictionaries. The values are stored in a (mostly) dense array, so operating on all the values with an operation like `map`, `filter` and `reduce` is as fast as it is for a similarly sized array. One can use `map` or `broadcast` with multiple similar dictionaries (i.e. those that share compatible "tokens"), co-iterating values together at speed and with zero hash lookups.
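The contrast with Base can be seen in a few lines. The snippet below uses only `Base.Dict`; Dictionaries.jl's own types are `Dictionary` and `Indices`, and `mapvalues` is a hypothetical helper written here for illustration:

```julia
# With Base.Dict, iteration yields key => value pairs: collect gives a
# Vector of Pairs, and `map` over a Dict is simply an error in Base.
d = Dict("a" => 1, "b" => 2)
pairs_vec = collect(d)            # a Vector of Pairs, in unspecified order

# A key-preserving map over values has to be spelled out by hand:
mapvalues(f, d::Dict) = Dict(k => f(v) for (k, v) in d)
d2 = mapvalues(x -> x^2, d)       # Dict("a" => 1, "b" => 4)

# In Dictionaries.jl, dictionaries iterate *values*, so plain `map(f, dict)`
# returns a new dictionary sharing the same indices, with no helper needed.
```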
### Applications
We will turn our attention to some example applications of this interface, first highlighting the convenience and speed of `Dictionary` for some common analytics tasks.
We will then see how *Dictionaries.jl* is used in conjunction with other packages, for example how *SplitApplyCombine.jl*'s `group` operation now supports a simple and easy split-apply-combine workflow.
Finally, we will explore recent work in *TypedTables.jl* which uses *Dictionaries.jl* to provide a table with a primary key (enabling easy lookup of rows based on data rather than an array index). The idea is similarly extended to grouped/partitioned data tables and their grouping keys.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/WRNAEN/
Purple
Andy Ferris
PUBLISH
N9JPV7@@pretalx.com
-N9JPV7
Tomographic Image Reconstruction with Julia
en
en
20210728T163000
20210728T170000
0.03000
Tomographic Image Reconstruction with Julia
Tomographic imaging plays a major role in clinical routine and has revolutionized the diagnosis and treatment of serious diseases such as stroke, heart attack and cancer. Tomographic techniques such as magnetic resonance imaging (MRI), computed tomography or the new imaging modality magnetic particle imaging (MPI) make it possible to look inside the human body without surgical intervention, simply by measuring indirect signals which allow reconstruction of an image of the inside of the body. Medical imaging is an interdisciplinary field involving physicians, physicists, engineers, mathematicians and computer scientists to develop a tomographic imaging system. While the technical development of modalities such as MRI are approaching limits with respect to the optimization of signal quality, the potential on the side of image reconstruction algorithms has not yet been fully exploited. As a consequence, numerous innovations from the fields of mathematics, signal processing and computer science have found their way into tomography research within the last decade.
Traditionally, algorithm development within the imaging community has been divided into two parts. Researchers who primarily work on mathematical methods often implement these using a high-level language such as Matlab and occasionally Python. As a result, the application of these algorithms is often limited to selected datasets, which validate the feasibility and the accuracy of the method. On the other hand, application-oriented researchers often use highly optimized program libraries, implemented in a low-level language such as C/C++, to apply algorithmic innovations to larger datasets that are acquired in clinical trials. Some of the larger C/C++ software packages, such as the MRI reconstruction frameworks BART and Gadgetron, use such a low-level approach and additionally provide Python bindings to make the framework accessible to researchers who prefer a high-level programming language. In practice, this hybrid approach, where low-level and high-level code are mixed, leads to the well-known two-language problem, since the presence of bindings still does not allow for an easy transition of new algorithmic ideas into the core of these packages.
This presentation of the current state of software tools in the imaging community shows that there is great potential for a modern programming language like Julia to close the gap between theoretically oriented and applied researchers. The speaker of this talk will outline the Julia package infrastructure for two different imaging modalities that has been developed since 2015. The packages cover a wide range of functionality, namely:
- File handling for raw data files acquired with tomographic imaging systems.
- Preprocessing of raw data to make it suitable for image reconstruction.
- Routines for setting up dense and matrix-free image operators.
- Iterative solvers for the reconstruction problem, including a flexible system for applying regularization to incorporate prior knowledge about the solution.
- Visualization methods for slicing, coloring, and merging tomographic images.
Instead of putting all of this functionality into a single software package, the opposite approach is taken with the philosophy of reusing as much functionality from existing Julia packages as possible (see attached figure). This has the advantage of keeping imaging-specific functionality small and makes it possible to share common methods across different imaging modalities. Julia's powerful package manager allows for small packages, making this form of fine-granular modularization feasible. An interesting opportunity that arises from solving the two-language problem is that the software becomes much more accessible, since a user can not only use the provided interface but also access internals easily. In the imaging packages MRIReco.jl and MPIReco.jl we have exploited this advantage by providing users direct access to different abstraction layers of the reconstruction pipeline. In this way a user can either perform standard reconstruction using ready-to-use high-level building blocks or develop a custom reconstruction pipeline based on the available building blocks. While this flexibility can also be achieved in two-language solutions, it arises very naturally in Julia, without much additional effort on the developer side.
Since tomographic image reconstruction is a computationally intensive task, one needs efficient algorithms to determine the image in a short enough time. The talk outlines how the package MRIReco.jl has been designed to match the efficiency of highly tuned C/C++ libraries even in a multi-threading scenario, based on the parallel task runtime available since Julia 1.3.
Core packages being presented:
- https://github.com/MagneticResonanceImaging/MRIReco.jl
- https://github.com/MagneticParticleImaging/MPIReco.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/N9JPV7/
Purple
Tobias Knopp
PUBLISH
8UR3U3@@pretalx.com
-8UR3U3
DeconvOptim.jl: Microscopy Image Deconvolution
en
en
20210728T171000
20210728T172000
0.01000
DeconvOptim.jl: Microscopy Image Deconvolution
In our package DeconvOptim.jl we address deconvolution as an optimization problem.
However, due to the band limit of the PSF, deconvolution is an ill-conditioned inverse problem which cannot be solved directly.
The forward model is the convolution of our estimate with the PSF, the mathematical description of the optical system which introduces the blur.
Starting from an initial estimate, we minimize an exchangeable loss function (for microscopy, Poisson loss is widely used) to obtain a reconstruction that is a consistent solution to the inverse problem.
To enforce certain constraints, we allow regularizers such as Total Variation (TV) to be used. These regularizers are assembled before the optimization via metaprogramming and Tullio.jl.
The gradient of the full inverse-problem pipeline is calculated by Zygote.jl, and the optimization is done by Optim.jl (currently L-BFGS). Despite having microscopy images in mind, the toolbox can be used for any type of signal deconvolution thanks to its flexibility. We also offer GPU support via CUDA.jl to a certain extent.
The full source code is available at [GitHub](https://github.com/roflmaostc/DeconvOptim.jl).
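The overall pipeline (forward convolution model, loss function, gradient-based minimization) can be sketched in plain Julia. This is a toy 1-D least-squares version with a hand-written gradient and made-up numbers; the package itself uses a Poisson loss, Zygote.jl gradients, and Optim.jl's L-BFGS:

```julia
using LinearAlgebra

# Forward model: circular convolution with a simple blur PSF, as a matrix.
psf = [0.25, 0.5, 0.25]
n = 16
C = zeros(n, n)
for i in 1:n, (k, w) in enumerate(psf)
    C[i, mod1(i + k - 2, n)] += w   # weights at columns i-1, i, i+1
end

x_true = zeros(n); x_true[5] = 1.0; x_true[11] = 0.5   # two point sources
y = C * x_true                                          # blurred measurement

# Deconvolution as optimization: minimize 0.5 * norm(C*x - y)^2 by
# gradient descent, where the gradient is C' * (C*x - y).
x = zeros(n)
for _ in 1:2000
    x .-= 0.5 .* (C' * (C * x .- y))
end
```

Swapping in a Poisson likelihood or adding a TV regularizer changes only the loss and its gradient; the optimization loop itself is untouched, which is the flexibility the package exploits.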
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/8UR3U3/
Purple
Felix Wechsler
PUBLISH
WT8PHT@@pretalx.com
-WT8PHT
Matlab to Julia: Hours to Minutes for MRI Image Analysis
en
en
20210728T172000
20210728T173000
0.01000
Matlab to Julia: Hours to Minutes for MRI Image Analysis
Like most fields of science, in magnetic resonance imaging (MRI) our appetite for data is never satiated – spatiotemporal resolution and signal-to-noise ratio can never be too high, and MRI scan times can never be too low. But, with big data comes big compute. MRI image reconstruction, for example, often involves parallel acquisition of multiple (3+1)D MRI images as measured by each of the 32+ scanner readout coils which are then combined through a complex series of iterative optimization problems. These types of algorithms are typically prototyped in high-level programming languages like Matlab or Python, and subsequently translated to C/C++ for deployment. The Julia community is familiar with this old story, though – in fact the fantastic package MRIReco.jl (https://github.com/MagneticResonanceImaging/MRIReco.jl) by Knopp et al. provides Julia implementations of MRI reconstruction algorithms which are competitive with C/C++. Post-processing these reconstructed MRI images faces similar computational challenges, and in this talk, we will describe our experience of implementing a parameter inference algorithm from the MRI subfield of myelin water imaging (MWI) in Julia.
In MWI, one analyses (3+1)D time series of image volumes acquired on cartesian spatiotemporal grids with dimensions 250x250x250x64 or more. The MRI time signals acquired in each voxel, which exhibit multi-exponential decay, are decomposed into a spectrum of decay rates. These decay rates are used to compute, among other useful metrics, the myelin water fraction (MWF) which is known to correlate with local myelin content in the brain. This inverse Laplace transform-like computation involves fitting an MRI signal model – the extended phase graph (EPG) model – to each time signal by solving an L2 regularized nonnegative least squares (NNLS) optimization problem. Note that, with images typically consisting of 10^7 voxels or more, this computation requires solving upwards of 10^7 optimization problems.
Prior to this work, the NNLS procedure used for MWI was implemented in Matlab – a closed-source high-level programming language – as is common in MRI research. This computation is particularly poorly suited for Matlab, however. First, similar to the Python library numpy, Matlab encourages computations on vectors or matrices, as opposed to explicit for-loops, in order to call out to BLAS or LAPACK libraries and ameliorate the overhead of the Matlab interpreter. For this reason, the EPG algorithm – which is most efficiently expressed in terms of nested for-loops – was previously implemented in terms of sparse matrix-vector products in order to avoid Matlab’s slow loops. Second, while the solving of independent optimization problems from each voxel is embarrassingly parallel, Matlab provides little control over multiprocessing optimizations such as reusing thread-local memory buffers and task scheduling. Lastly, Matlab does not provide a statically sized array type, which would be beneficial for micro-optimizing 3x3 matrix-vector products present in the EPG algorithm.
Julia excels in these types of computations. In the DEcomposition and Component Analysis of Exponential Signals (DECAES.jl) package (https://github.com/jondeuce/DECAES.jl, https://doi.org/10.1016/j.zemedi.2020.04.001), we provide optimized procedures for computing MWI which address the aforementioned limitations of Matlab, and additionally include command line and Matlab interfaces for ease of interoperability. In all, DECAES.jl reduced computation times approximately 60X from 1.5-2.5 hours down to 1.5-2.5 mins. This large speedup demonstrates that it is possible to perform this analysis directly on the MRI scanner, removing the need for researchers to (manually) process the acquired data.
Among the many additional benefits of the Julia translation is the synergy with other Julia packages: we experimented with explicit SIMD vectorization in the EPG algorithm using the SIMD.jl package; we make liberal use of statically sized vectors and matrices using the StaticArrays.jl package; we use a pure-Julia implementation of NNLS from the NNLS.jl package. Furthermore, the EPG algorithm is independently useful outside of MWI and can, for example, be differentiated efficiently using the automatic differentiation packages ForwardDiff.jl or Zygote.jl.
In conclusion, we have found that the combination of high performance and high expressiveness in Julia is well suited to MRI research, particularly in comparison to existing Matlab-based workflows, and we believe that our experience will resonate with the scientific computing community more broadly. We look forward to the opportunity to share our experience.
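The voxel-wise fit described above can be caricatured in a few lines of Julia. This toy sketch uses made-up grid values and plain L2-regularized (Tikhonov) least squares in place of the regularized NNLS solve that DECAES.jl actually performs:

```julia
using LinearAlgebra

t  = range(0.01, 0.32; length = 32)                 # echo times (s), assumed
T2 = exp.(range(log(0.01), log(2.0); length = 40))  # decay-time grid (s), assumed

# Multi-exponential basis: column j is the decay curve for decay time T2[j].
A = [exp(-ti / T2j) for ti in t, T2j in T2]

# Synthetic voxel signal: a mixture of two decay components.
x_true = zeros(length(T2)); x_true[10] = 0.3; x_true[30] = 0.7
y = A * x_true

# Regularized fit: Tikhonov here, whereas DECAES solves a
# nonnegativity-constrained (NNLS) version of this problem.
λ = 1e-3
x = (A' * A + λ^2 * I) \ (A' * y)
```

In the real workflow this solve is repeated for each of the 10^7+ voxels, which is where Julia's threading and static arrays pay off.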
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/WT8PHT/
Purple
Jonathan Doucette
PUBLISH
PPG3CY@@pretalx.com
-PPG3CY
Genify.jl: Transforming Julia into Gen for Bayesian inference
en
en
20210728T174000
20210728T175000
0.01000
Genify.jl: Transforming Julia into Gen for Bayesian inference
A wide variety of libraries written in Julia implement stochastic simulators of natural and social phenomena for the purposes of computational science. However, these simulators are not generally amenable to Bayesian inference, as they do not provide likelihoods for execution traces, support constraining of observed random variables, or allow random choices and subroutines to be selectively updated in Monte Carlo algorithms.
To address these limitations, we present Genify.jl, an approach to transforming plain Julia code into generative functions in Gen, a universal probabilistic programming system with programmable inference. We accomplish this via lightweight transformation of lowered Julia code into Gen’s dynamic modeling language, combined with a user-friendly random variable addressing scheme that enables straightforward implementation of custom inference programs.
We demonstrate the utility of this approach by transforming an existing agent-based simulator from plain Julia into Gen, and designing custom inference programs that increase accuracy and efficiency relative to generic SMC and MCMC methods. This performance improvement is achieved by proposing, constraining, or re-simulating random variables that are internal to the simulator, which is made possible by transformation into Gen.
Genify.jl is available at: https://github.com/probcomp/Genify.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/PPG3CY/
Purple
Xuan (Tan Zhi Xuan)
PUBLISH
HVSAW9@@pretalx.com
-HVSAW9
Code, docs, and tests: what's in the General registry?
en
en
20210728T190000
20210728T193000
0.03000
Code, docs, and tests: what's in the General registry?
We know that Julia is a modern language that makes adopting best programming practices, like documentation and testing, very simple, lowering the entry barriers for newcomers... but is that true? We developed a package called [`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) to try to answer this question and get more information about packages in the Julia ecosystem.
[`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) lets you statically inspect the content of a package and collect information about its documentation, test suite, and continuous integration setup, as well as the licenses used, the number of lines of code, and the number of contributors.
In this talk we will show how to use [`PackageAnalyzer.jl`](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) with your own package, and then iterate the analysis over any collection of packages, including all those in the General registry. We will present plots and statistics about the open source packages in the Julia ecosystem. We will see how widely practices like documentation and testing are adopted, which licenses and continuous integration services are most popular, and which packages are largest and in what languages they are written. Additionally, we will take a look at the Julia community: how many users have contributed to the Julia ecosystem, and how many people work on a single package, on average?
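In spirit, such static checks are simple filesystem inspections. A toy version might look like the following; `quick_report` is a hypothetical helper written for illustration, not PackageAnalyzer.jl's API:

```julia
# Toy static inspection of a package directory (illustrative only).
function quick_report(pkgdir::AbstractString)
    has(parts...) = ispath(joinpath(pkgdir, parts...))
    nlines = 0
    srcdir = joinpath(pkgdir, "src")
    if isdir(srcdir)
        # Count lines across all Julia source files under src/.
        for (root, _, files) in walkdir(srcdir), f in files
            endswith(f, ".jl") && (nlines += countlines(joinpath(root, f)))
        end
    end
    (lines_of_code = nlines,
     has_docs      = has("docs"),
     has_tests     = has("test", "runtests.jl"),
     has_ci        = has(".github", "workflows"),
     has_license   = has("LICENSE") || has("LICENSE.md"))
end
```

The real package goes much further (license identification, contributor counts via the registry, per-language line counts), but each check reduces to inspecting the package tree like this.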
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/HVSAW9/
Purple
Mosè Giordano
Eric P. Hanson
PUBLISH
3LMU3W@@pretalx.com
-3LMU3W
Using optimization to make good guesses for test cases
en
en
20210728T193000
20210728T194000
0.01000
Using optimization to make good guesses for test cases
I'm developing the largest inference application I have ever seen. For any population, it estimates morbidity and mortality from disease, cast against a background of mortality, but this is measured across years for multiple ages. There are seven web pages of settings, and it can take a day to run. In some way, it's easy to test because it's an inverse problem, so I can create a correct answer, generate data, and see if the application finds the correct answer. What I want is a defensible claim that I've tested the seven pages of settings.
My first approach is to write tests for some important cases. These paradigmatic tests have to pass, and they tell stakeholders that the basics work well. I add to these some tests I know challenge the system. Beyond these two classes of tests are another set of less common techniques that help look for bugs where I don't expect them. These include random testing, concolic testing, and property-based testing. For this problem, let's focus on a simpler technique, combinatorial testing.
Combinatorial testing is a careful selection of test arguments, designed to achieve good code coverage. If we picture a page of code, then any one call to a function will walk through that code, skipping parts of it when an if-condition fails. A thorough set of tests should, at the least, execute the different branches of if-conditions. There must be some choice of inputs to the application that leads to every branch of the code. Some branches depend on a pair of input arguments together. Others may depend on a particular combination of three or four input arguments. It would be helpful to test each value of each option and, somehow, walk through all possible pairs of arguments, or all possible triples, in order to cover all branches.
If we have twenty different options, each of which can take one of four values, we don't have to run twenty-times-four tests to try every value. We can pack them into only a few tests. What if we wanted to try all pairs of values for the first two options? That's four-times-four, or sixteen, combinations of values, making sixteen tests for that pair of options, and there are twenty-choose-two, or 190, pairs of options, but we can pack these together, too, so that each test case explores a lot of the code.
The algorithms in UnitTestDesign.jl use greedy optimization to construct short test suites, packing all-pairs testing into as few test cases as possible. For twenty arguments with four values each, it can pack every possible pair of argument values into thirty-seven test cases. There is some research support that all-pairs testing does a good job of finding faults in code, but the same package can also generate tests with higher coverage, where higher means all triples or quadruples of input values are included in the test cases.
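The greedy packing idea can be sketched in a few lines of Julia. This is a toy illustration of the all-pairs principle, not UnitTestDesign.jl's actual algorithm; all names below are illustrative:

```julia
# Greedy construction of an all-pairs test suite. Parameter i takes values
# 1:k[i]; a "pair" is a choice of values (vi, vj) for two distinct
# parameters (i, j), encoded as the tuple (i, vi, j, vj).

function all_value_pairs(k)
    pairs = Set{NTuple{4,Int}}()
    for i in 1:length(k)-1, j in i+1:length(k), vi in 1:k[i], vj in 1:k[j]
        push!(pairs, (i, vi, j, vj))
    end
    return pairs
end

# Does a test case (a vector of chosen values) cover a given pair?
covers(case, p) = case[p[1]] == p[2] && case[p[3]] == p[4]

# Build a random case that is guaranteed to cover the given uncovered pair,
# so every greedy step makes progress.
function seeded_case(p, k)
    case = [rand(1:k[i]) for i in eachindex(k)]
    case[p[1]] = p[2]
    case[p[3]] = p[4]
    return case
end

function greedy_all_pairs(k; trials = 50)
    remaining = all_value_pairs(k)
    suite = Vector{Vector{Int}}()
    while !isempty(remaining)
        # Try several candidates, each seeded with a still-uncovered pair,
        # and keep the one that knocks out the most remaining pairs.
        candidates = [seeded_case(rand(remaining), k) for _ in 1:trials]
        best = argmax(c -> count(p -> covers(c, p), remaining), candidates)
        push!(suite, best)
        filter!(p -> !covers(best, p), remaining)
    end
    return suite
end

suite = greedy_all_pairs(fill(4, 6))
length(suite)   # tens of cases, versus 4^6 = 4096 exhaustive combinations
```

The real package replaces the random-candidate heuristic with stronger optimization and supports excluded combinations, but the coverage bookkeeping is the same idea.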
Most implementations of all-pairs algorithms aren't easy to run in a unit-testing framework because they are web-based or proprietary. There are a few reasons for this. These algorithms need to deal with different argument types. They need to give the tester a way to say that, if a flag is false, then another argument can't take certain values, so they need a little domain-specific language. Julia handles those problems naturally and, further, is efficient at the greedy optimization to determine test cases. These can take time to generate.
For applications with many options, or functions with many arguments, generating a good set of test cases can be computationally intensive, so we rely on the testing framework to help us generate values when needed, save them, and load them later. In Julia, the packages for scratch space and artifacts give us a workflow for testing where we generate combinatorial values, save them to scratch, and upload them as artifacts for others to use.
The resulting approach is to create a set of tests, save them for reuse, and run them many times. Given the challenging problem of testing a large, slow application, we've begun to describe a paradigm from the field of test automation. The general approach is to create a bunch of tests, measure their coverage, select a set to run, and respond to failing tests by refining those tests until we've narrowed each fault down to its source. Parts of this general approach can be seen in random testing, concolic testing, and property-based testing. Compared with these, combinatorial testing is the art of starting with a really good guess.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/3LMU3W/
Purple
Andrew Dolgert
PUBLISH
DX7DCQ@@pretalx.com
-DX7DCQ
Building Interactive REPL-based Visualizations in GridWorlds.jl
en
en
20210728T194000
20210728T195000
0.01000
Building Interactive REPL-based Visualizations in GridWorlds.jl
Resources:
Repository for this talk: https://github.com/Sid-Bhatia-0/JuliaCon2021Talk
GridWorlds.jl: https://github.com/JuliaReinforcementLearning/GridWorlds.jl
A good visualization can sometimes drastically speed up the understanding of a program. Plots are an obvious example of this. Additionally, in reinforcement learning, for example, other forms of visualizations are often used to test out an environment, and also to analyze the behavior of an agent in the environment at various points during its training process.
While developing complex programs, it often pays off well to write visualization tools from an early stage, especially when the correctness of a program cannot be verified via writing test cases alone. Reinforcement learning environments, or any kinds of games for that matter, are a good example of this. In many cases, people may overestimate the cost of creating such tools relative to the value they provide, and might perceive such a task to be more challenging than it actually is. I am here to show you that in some cases, it is much easier than you might think.
The Julia REPL offers several valuable features, often making it an indispensable part of a Julia user’s workflow in some form or another. It is possible to take this one step further and create interactive terminal-based visualizations that unlock even more productive workflows while using the REPL.
I will showcase some relevant features from the GridWorlds.jl package as a concrete example of increased developer productivity using interactive terminal-based visualizations in the REPL. In this package, plain keyboard inputs allow one to rapidly test and debug tile-based reinforcement learning environments by directly visualizing and playing them inside the terminal. One can instantly switch back and forth between testing an environment and debugging it in the REPL in the same REPL session without losing the local state. Additionally, one can also record these interactions and replay them inside the REPL by stepping through the individual frames. This feature also proves extremely handy when analyzing the behavior of an agent at various points during training.
I will deconstruct the essential pieces necessary to create and run such a visualization inside the terminal and explain how it can be built from scratch, only utilizing things that already ship with Julia.
The techniques and tricks explained in this talk are much more generally applicable. I encourage everyone to think about how you can creatively augment your current workflow to make it even more productive and engaging for your domain.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/DX7DCQ/
Purple
Siddharth Bhatia
PUBLISH
FZ99RD@@pretalx.com
-FZ99RD
Catwalk.jl: A profile guided dispatch optimizer
en
en
20210728T195000
20210728T200000
0.01000
Catwalk.jl: A profile guided dispatch optimizer
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/FZ99RD/
Purple
Krisztián Schäffer
PUBLISH
ZQJAW3@@pretalx.com
-ZQJAW3
Building a Chemistry and Materials Science Ecosystem in Julia
en
en
20210728T123000
20210728T140000
1.03000
Building a Chemistry and Materials Science Ecosystem in Julia
Julia is a natural choice for computational chemists and materials scientists, primarily due to its excellent computational performance combined with ease of code sharing and extensibility within and between packages. Unsurprisingly, interest in and use of Julia within this community is growing – in particular, at JuliaCon 2020, several packages were introduced (such as JuliaChem and DFTK) that generated substantial “buzz” in the community, and despite being quite young (O(1) developer-year of invested effort), these packages are already matching or even exceeding best-in-class performance for some use cases!
This BoF session aims to continue this momentum, as well as to set some longer-term goals and norms for the community as a whole. To our knowledge, there has not yet been a broad discussion of this kind, and an informal proposal on Julia Discourse (see discourse.julialang.org/t/interest-in-chemistry-focused-bof ) indicated enthusiasm from a variety of developers and users.
In particular, at present, standards for how to represent certain ubiquitous types of data and how to perform common tasks are lacking, which can lead to inadvertent duplication of effort. As interest in and use of Julia in these communities grows, the impact of establishing such norms multiplies. In this session, we plan to host a community discussion aimed at building consensus around these topics. Some specific examples include, but are not necessarily limited to:
1. I/O for common structure file types (e.g. .cif, .xyz) and Julia data types for representing these structures (examples of Python versions of such systems include those provided by the Atomic Simulation Environment and pymatgen)
2. Frequently-invoked mathematical procedures such as integration on common types of grids or using common sets of basis functions utilized within quantum chemical simulation approaches such as density functional theory and (post-)Hartree-Fock
3. Julia bindings for widely-used codes in other languages (primarily C, C++, and Python) that are not worth duplicating in Julia in the short term but which provide functionality such as parsing outputs of simulation codes as well as some of the mathematical operations described above.
We are optimistic that the discussions in this session will both help to strengthen ties within this small but growing community as well as help to amplify its productivity and impact!
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/ZQJAW3/
BoF/Mini Track
Rachel Kurchin
Michael F. Herbst
PUBLISH
DRMPLU@@pretalx.com
-DRMPLU
Set Propagation Methods in Julia: Techniques and Applications
en
en
20210728T163000
20210728T183000
2.00000
Set Propagation Methods in Julia: Techniques and Applications
- Organisers: Marcelo Forets (@mforets) and Christian Schilling (@schillic)
- Moderator: David P. Sanders (@dpsanders)
A new generation of algorithms is addressing the fundamental challenge of how to exhaustively explore all possible scenarios for simulation of dynamical systems under model uncertainties. Moreover, deep neural networks play an increasing role in control and safety-critical applications, although it is often not known how to guarantee that they will behave correctly and safely under all circumstances. This minisymposium will host applications of set-based techniques and global optimization in Julia that address these questions.
We have made sure to reach out to several different groups who are working on set propagation techniques and their application in a wide range of areas, to present a broad overview of the area.
Plan: 1 introductory talk (non-Julia-specific), 5 regular talks, 1 Q&A panel.
- Introduction by Goran Frehse (Hybrid Systems Semantics group, Computer Science and System Engineering Laboratory (U2IS), ENSTA Paris). [Homepage](https://sites.google.com/site/frehseg/home).
- **Using Set Propagation and the Finite Element Method For Time Integration in Transient Solid Mechanics Problems.** By Jorge Pérez Zerpa (speaker), Marcelo Forets and Daniel Freire Caporale. The Finite Element Method (FEM) is the gold standard for numerical simulation in transient solid mechanics problems. Several time-integration algorithms have been developed in recent decades; however, it is still a challenging problem to completely describe the family of dynamically-feasible behaviors from given sets of initial states. In this talk we take a set-based approach and conclude that it has a lot of potential to efficiently solve such problems.
- **Dionysos.jl: Optimal Control of Cyber-Physical Systems.** By Benoit Legat, Guillaume Berger, Julien Calbert (speaker) and Raphaël Jungers. [Dionysos.jl](https://github.com/dionysos-dev/Dionysos.jl) is software produced by the ERC project Learning to Control (L2C). In view of the Cyber-Physical Systems (CPS) revolution, the only sensible way of controlling these complex systems is by discretizing the different variables, thus transforming the model into a simple combinatorial problem on a finite-state automaton, called an abstraction of this system. Our goal is to transform this approach into an effective, scalable, cutting-edge technology that will address the challenges of CPS and unlock their potential.
- **Solving Optimization Problems with Embedded Dynamical Systems.** By Matthew Wilhelm (speaker) and Matthew Stuber. We will discuss our recent work at [PSORLab](https://github.com/PSORLab): EAGODynamicOptimizer.jl and DynamicBounds.jl packages. These extend our EAGO.jl nonconvex optimizer to address formulations containing embedded dynamical systems. We highlight a series of approaches for constructing the requisite convex and concave relaxations of differential equations in the original decision space and discuss the use of such techniques in a global optimization context. These methods may readily be composed with existing McCormick relaxation approaches, which allows for the solution of general nonlinear formulations to certified global optimality. Use cases relevant to hybrid data-driven process modeling, parameter estimation, and worst-case robust design are discussed.
- **Computing with sets of probabilities in Julia.** By Ander Gray. There are many ways to mathematically define a set of probability distributions, including: intervals, possibility distributions, random sets and probability boxes (p-boxes). These structures were discovered independently from one another, but are often synonymous and can be translated. Imprecise Probability theory links all these theories into one. In this presentation, we present [ProbabilityBoundsAnalysis.jl](https://github.com/AnderGray/ProbabilityBoundsAnalysis.jl) (PBA) a numerical implementation of p-box arithmetic in Julia, which gives an arithmetic of random variables where both marginal distributions and dependencies may be partially defined. We show how PBA may be used to rigorously propagate distributions and p-boxes in reachability problems using [ReachabilityAnalysis.jl](https://github.com/JuliaReach/ReachabilityAnalysis.jl).
- **Methods to Soundly Verify Deep Neural Networks.** By Tomer Arnon. Deep neural networks are widely used for nonlinear function approximation, with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. [NeuralVerification.jl](https://github.com/sisl/NeuralVerification.jl) implements several methods that have emerged recently for soundly verifying such properties. We discuss fundamental differences between existing algorithms and compare them on a set of benchmark problems.
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon2021/talk/DRMPLU/
BoF/Mini Track
Marcelo Forets
Christian Schilling
Ander Gray
David P. Sanders
Matthew Wilhelm
Goran Frehse
Jorge Pérez Zerpa
Deleted User
Julien Calbert
Tomer Arnon
PUBLISH
A93QFU@@pretalx.com
-A93QFU
Fancy Arrays BoF 2
en
en
20210728T190000
20210728T203000
1.03000
Fancy Arrays BoF 2
Notes from [last years discussions can be found here](https://docs.google.com/document/d/1imBX3k0EEejauWVyXONZDRj8LTr0PeLOJNGEgo6ow1g/edit#heading=h.qrm4q6q56yxm).
Since then, it has become clear that three packages can essentially replace AxisArrays.jl with a more modern and idiomatic interface. \
In approximate order of power and also complexity (both for users and for maintainers) they are: [AxisKeys.jl](https://github.com/mcabbott/AxisKeys.jl), [AxisIndices.jl](https://github.com/Tokazama/AxisIndices.jl/), and [DimensionalData.jl](https://github.com/rafaqz/DimensionalData.jl) (the former two building upon [NamedDims.jl](https://github.com/invenia/NamedDims.jl/) for naming axes). \
Since last year, [IndexedDims.jl](https://github.com/invenia/IndexedDims.jl/) has been deprecated in favour of [AxisKeys.jl](https://github.com/mcabbott/AxisKeys.jl).
An ideal outcome of this BoF session would be an agreement to deprecate an additional package for one of the others, or even deprecating two leaving one. \
A less ideal, but still very good, outcome of the BoF would be to agree on a common API (like [Tables.jl](https://github.com/JuliaData/Tables.jl)) which each package can extend, and to designate someone to lead the establishment of this API and ensure that it gets rolled out.
We'll be using a [google doc](https://docs.google.com/document/d/1RPQw3zMGRVm8cayUrQhFGzlKV5hp-1DJMUE32H_-bgo/edit?usp=sharing) to organize speaking turns during the call.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/A93QFU/
BoF/Mini Track
Frames Catherine White
Rory Finnegan
PUBLISH
X7QCPU@@pretalx.com
-X7QCPU
The state of JuMP
en
en
20210728T123000
20210728T130000
0.03000
The state of JuMP
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/X7QCPU/
JuMP Track
Oscar Dowson
PUBLISH
UDWSEC@@pretalx.com
-UDWSEC
What's new in COSMO?
en
en
20210728T130000
20210728T133000
0.03000
What's new in COSMO?
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/UDWSEC/
JuMP Track
Michael Garstka
PUBLISH
7KECGM@@pretalx.com
-7KECGM
Conic optimization example problems in Hypatia's examples folder
en
en
20210728T133000
20210728T140000
0.03000
Conic optimization example problems in Hypatia's examples folder
Most of these examples have multiple formulation options, and together these formulations cover all of Hypatia's several dozen predefined cone types (see https://chriscoey.github.io/Hypatia.jl/dev/api/cones/#Predefined-cone-types). Using scripts in Hypatia's scripts folder, we use these examples to (1) compare the performance of Hypatia's algorithmic options/enhancements, and (2) assess the value of low-dimensional natural formulations versus standard conic formulations that only use cones currently recognized by other conic solvers.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/7KECGM/
JuMP Track
Chris Coey
PUBLISH
L8DTE3@@pretalx.com
-L8DTE3
Symmetry reduction for Sum-of-Squares programming
en
en
20210728T163000
20210728T170000
0.03000
Symmetry reduction for Sum-of-Squares programming
Sum-of-Squares and semidefinite programming make it possible to compute guaranteed bounds for remarkably many problems. Although several efficient algorithms exist to solve these programs, their space and time complexity, and even their numerical robustness, do not scale well with the size of the polynomial basis or semidefinite matrix. To alleviate this problem, different methods have been developed to reduce constraints with a large basis or matrix into smaller ones. These exploit sign symmetry or sparsity structure using chordal decomposition.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/L8DTE3/
JuMP Track
Benoît Legat
Marek Kaluba
PUBLISH
8E9BAK@@pretalx.com
-8E9BAK
Sparse Matrix Decomposition and Completion with Chordal.jl
en
en
20210728T171000
20210728T172000
0.01000
Sparse Matrix Decomposition and Completion with Chordal.jl
In this talk, we will introduce chordal graphs and some of their core properties. These properties enable many otherwise difficult problems, such as minimum vertex coloring, to be solved efficiently. Furthermore, they lead to several decomposition results for sparse matrices.
We will introduce Chordal.jl, a package for working with sparse matrices that have a chordal sparsity pattern. We will overview the algorithms implemented in this package and their applications, including Euclidean distance matrix completion and optimization with sparse data.
We will conclude by using Chordal.jl to dramatically reduce the solve time of a sparse semidefinite program (SDP). Solving large, sparse SDPs remains computationally prohibitive for many existing solvers, and this application largely motivated the development of the package.
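To make the premise concrete, chordality itself can be tested in a few lines of Julia using maximum cardinality search; this is a generic sketch, not Chordal.jl's API:

```julia
# Chordality test via maximum cardinality search (MCS): a graph is chordal
# iff the reverse of an MCS visit order is a perfect elimination ordering
# (Tarjan & Yannakakis). The graph is a vector of neighbor Sets over 1:n.

function mcs_order(adj)
    n = length(adj)
    weight = zeros(Int, n)
    visited = falses(n)
    visit = Int[]
    for _ in 1:n
        # Pick the unvisited vertex with the most already-visited neighbors.
        v = argmax(i -> visited[i] ? -1 : weight[i], 1:n)
        visited[v] = true
        push!(visit, v)
        for u in adj[v]
            visited[u] || (weight[u] += 1)
        end
    end
    return reverse(visit)   # candidate perfect elimination ordering
end

function is_chordal(adj)
    order = mcs_order(adj)
    pos = invperm(order)    # pos[v] = position of v in the elimination order
    for v in order
        later = [u for u in adj[v] if pos[u] > pos[v]]
        # In a perfect elimination ordering, the neighbors of each vertex
        # that come later in the order must form a clique.
        for a in eachindex(later), b in a+1:length(later)
            later[b] in adj[later[a]] || return false
        end
    end
    return true
end

# A triangle with a pendant vertex is chordal; the 4-cycle is not.
triangle_plus = [Set([2, 3]), Set([1, 3]), Set([1, 2, 4]), Set([3])]
cycle4 = [Set([2, 4]), Set([1, 3]), Set([2, 4]), Set([1, 3])]
is_chordal(triangle_plus)  # true
is_chordal(cycle4)         # false
```

The elimination orderings this produces are also the starting point for the clique-tree machinery behind matrix decomposition and completion.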
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/8E9BAK/
JuMP Track
Theo Diamandis
PUBLISH
8YGNYU@@pretalx.com
-8YGNYU
Automatic dualization with Dualization.jl
en
en
20210728T172000
20210728T173000
0.01000
Automatic dualization with Dualization.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/8YGNYU/
JuMP Track
Guilherme Bodin
PUBLISH
WULB78@@pretalx.com
-WULB78
Modeling Bilevel optimization problems with BilevelJuMP.jl
en
en
20210728T173000
20210728T180000
0.03000
Modeling Bilevel optimization problems with BilevelJuMP.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/WULB78/
JuMP Track
Joaquim Dias Garcia
PUBLISH
YVCM8B@@pretalx.com
-YVCM8B
Infinite-Dimensional Optimization with InfiniteOpt.jl
en
en
20210728T190000
20210728T193000
0.03000
Infinite-Dimensional Optimization with InfiniteOpt.jl
Infinite-dimensional optimization problems are a challenging problem class that cover a wide breadth of optimization areas and embed complex modeling elements such as infinite-dimensional variables, measures, and derivatives. Typical modeling approaches (e.g., those behind Gekko and Pyomo.dae) often only consider discretized formulations and do not provide a unified paradigm across the various disciplines.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/YVCM8B/
JuMP Track
Joshua Pulsipher
PUBLISH
CEFANG@@pretalx.com
-CEFANG
Hybrid Strategies using Piecewise-Linear Decision Rules
en
en
20210728T193000
20210728T200000
0.03000
Hybrid Strategies using Piecewise-Linear Decision Rules
Decision rules offer a rich and tractable framework for solving certain classes of multistage adaptive optimization problems. Recent literature has shown the promise of using linear and nonlinear decision rules in which wait-and-see decisions are represented as functions, whose parameters are decision variables to be optimized, of the underlying uncertain parameters. Despite this growing success, solving real-world stochastic optimization problems can become computationally prohibitive when using nonlinear decision rules, and in some cases, linear ones. Consequently, decision rules that offer a competitive trade-off between solution quality and computational time become more attractive. Whereas the extant research has always used homogeneous (i.e., either linear or piecewise-linear) decision rules, the major contribution of this paper is a computational exploration of hybrid decision rules combining the benefits of the two classes of decision rules. We also demonstrate a case where, unexpectedly, a linear decision rule is superior to a more complex piecewise-linear decision rule within a simulator. This observation bolsters the need to assess the quality of decision rules obtained from a look-ahead model within a simulator rather than just using the optimal look-ahead objective function value.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/CEFANG/
JuMP Track
Said Rahal
PUBLISH
X9BNQV@@pretalx.com
-X9BNQV
Flexible set projections with MathOptInterface
en
en
20210728T200000
20210728T201000
0.01000
Flexible set projections with MathOptInterface
This talk introduces the main abstractions of MathOptInterface.jl, the central interface for expressing constrained optimization problems in Julia, and explains how it can be extended with distances and projections.
MathOptSetDistances.jl defines an API for projecting points onto sets and computing distances from a point to a given set defined in MathOptInterface.jl. It has become a toolbox used by other packages built on top of MathOptInterface.jl and makes new features accessible to Convex.jl, JuMP.jl, and their extensions. Computing distances and projections is central to many optimization algorithms, for example to compute the violation of a constraint or to project back onto a feasible set.
One challenge that arose from distance computation is designing an interface with a consistency guarantee on the definition of distances while allowing alternative distance implementations for some sets.
The projection and distance operators are also differentiable; the package implements both full Jacobian computation and the ChainRules API, which we will illustrate on some sets.
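As an illustration of the kind of operator triple (projection, distance, Jacobian) described above, here is a self-contained sketch for the nonnegative orthant; the function names are illustrative, not MathOptSetDistances.jl's actual API:

```julia
using LinearAlgebra

# Projection onto the nonnegative orthant R^n_+: clamp negatives to zero.
project_orthant(x) = max.(x, zero(eltype(x)))

# Euclidean distance from x to the set = norm of x minus its projection.
distance_orthant(x) = norm(x .- project_orthant(x))

# The projection is piecewise linear; its Jacobian is diagonal, with
# entry 1 where x_i > 0 and 0 where x_i < 0.
jacobian_orthant(x) = Diagonal(float.(x .> 0))

x = [1.5, -2.0, 0.5]
project_orthant(x)   # [1.5, 0.0, 0.5]
distance_orthant(x)  # 2.0
```

For more structured sets (cones, epigraphs), the same three ingredients exist but require case analysis, which is one reason a shared, consistency-checked implementation is valuable.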
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/X9BNQV/
JuMP Track
Mathieu Besançon
PUBLISH
3F88PP@@pretalx.com
-3F88PP
Solving optimization problems at Fonterra
en
en
20210728T201000
20210728T202000
0.01000
Solving optimization problems at Fonterra
Solving optimization problems in a business setting can be a significant challenge. There is a constant tension between delivering quick prototypes to prove value and building robust tools.
At Fonterra, a New Zealand dairy co-operative, one of our planning problems concerns organic milk production. Due to low volumes of organic-certified milk, organic production planning takes place outside the usual planning process. The constraints around organic production are complex, and there is considerable value to be derived from a quality plan. These factors make organic planning a perfect candidate for a stand-alone optimization project within the business.
During this project, JuMP has been an invaluable tool in several ways. Using JuMP, it has been trivial to develop quick prototypes and experimental features, without sacrificing the robustness of the end-product. JuMP enables our team to be creative during the process and try new things on the fly. We can quickly respond to feedback from end users, which helps build a close relationship and ensure the continued success of the project. JuMP is also a reliable tool for building larger optimization applications, enabling the Data Science team at Fonterra to easily incorporate different multi-objective optimization approaches, optional cuts and complex conditional constraints into the model.
Thanks to JuMP, we have been able to mitigate the problem outlined at the start of the abstract, and secure key user engagement through continuous proof of value while delivering robust software.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/3F88PP/
JuMP Track
Oleg Barbin
PUBLISH
XFC73Y@@pretalx.com
-XFC73Y
TSSOS.jl: exploiting sparsity in polynomial optimization
en
en
20210728T202000
20210728T203000
0.01000
TSSOS.jl: exploiting sparsity in polynomial optimization
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/XFC73Y/
JuMP Track
Jie Wang
PUBLISH
UKASBZ@@pretalx.com
-UKASBZ
SmartTensors: Unsupervised Machine Learning
en
en
20210729T123000
20210729T130000
0.03000
SmartTensors: Unsupervised Machine Learning
The world’s most valuable resource is no longer oil. It is data. SmartTensors (http://tensors.lanl.gov; https://github.com/SmartTensors) is a toolbox for unsupervised machine learning based on matrix/tensor factorization constrained by penalties enforcing robustness and interpretability (e.g., nonnegativity; physics and mathematical constraints; etc.). SmartTensors has been applied to analyze diverse datasets related to a wide range of problems: from COVID-19 to wildfires and climate. The workshop will demonstrate how SmartTensors can be easily applied to these and other application areas. The workshop will include hands-on real-time demonstrations of already existing case studies. The workshop will be designed to be suitable and useful for anyone regardless of their machine learning experience by providing materials at introduction, intermediate and expert levels.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/UKASBZ/
Green
Velimir Vesselinov
PUBLISH
FHGWBQ@@pretalx.com
-FHGWBQ
Finding an Effective Strategy for AutoML Pipeline Optimization
en
en
20210729T130000
20210729T133000
0.03000
Finding an Effective Strategy for AutoML Pipeline Optimization
The CASH (Combined Algorithm Selection and Hyperparameter optimization) problem can be decomposed into three major components:
- searching the optimal __m__ model with n(m) search space
- searching the optimal order of __p__ preprocessing elements with n(p) search space
- searching the optimal __h__ hyperparameters with n(h) search space
The most popular approaches search these three components simultaneously, with a time complexity of n(p) x n(m) x n(h). An alternative is to perform the search sequentially, starting with __m__ using surrogates for __p__ and __h__, then searching for __p__ using the optimal __m__ and a surrogate __h__, and finally searching for __h__ using the optimal __p__ and __m__ found. This alternative technique involves only an n(p) + n(m) + n(h) search space, which is significantly smaller than simultaneously searching __p__, __m__, and __h__. We find in our experiments with the [AutoMLPipeline](https://github.com/IBM/AutoMLPipeline.jl) package that, in many cases, it is sufficient to search for just __m__ and __p__ to achieve performance competitive with algorithms that search all three components simultaneously.
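The complexity argument can be illustrated with a toy objective. This sketch is not the AutoMLPipeline API; the objective is synthetic and separable, so the staged search recovers the joint optimum, which real pipeline objectives need not do:

```julia
# Joint vs. staged search over model m, preprocessor p, and hyperparameter h.
models = 1:5          # n(m) = 5 candidate models
preps  = 1:4          # n(p) = 4 preprocessor orderings
hypers = 1:6          # n(h) = 6 hyperparameter settings

# Toy (separable) objective with a unique maximum at (3, 2, 4).
score(m, p, h) = -(m - 3)^2 - (p - 2)^2 - (h - 4)^2

evals = Ref(0)                      # count objective evaluations
function evaluate(m, p, h)
    evals[] += 1
    return score(m, p, h)
end

# Joint search: n(m) * n(p) * n(h) evaluations.
evals[] = 0
joint = argmax(t -> evaluate(t...),
               [(m, p, h) for m in models, p in preps, h in hypers])
joint_evals = evals[]

# Staged search, using fixed defaults as surrogates for unsearched stages:
# n(m) + n(p) + n(h) evaluations.
evals[] = 0
m_best = argmax(m -> evaluate(m, first(preps), first(hypers)), models)
p_best = argmax(p -> evaluate(m_best, p, first(hypers)), preps)
h_best = argmax(h -> evaluate(m_best, p_best, h), hypers)
staged_evals = evals[]

@show joint_evals staged_evals             # 120 vs 15
@show joint == (m_best, p_best, h_best)    # true for this separable objective
```

When each evaluation is a full model-training run, the gap between 120 and 15 evaluations is the difference between an overnight job and a coffee break.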
Relevant paper: https://arxiv.org/abs/2107.01253
Relevant Julia Packages used in the talk:
- [AutoMLPipeline.jl](https://github.com/IBM/AutoMLPipeline.jl)
- [AMLPipelineBase.jl](https://github.com/IBM/AMLPipelineBase.jl)
- [Lale.jl](https://github.com/IBM/Lale.jl)
- [Hyperopt.jl](https://github.com/baggepinnen/Hyperopt.jl)
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/FHGWBQ/
Green
Paulito Palmes
PUBLISH
X9RATL@@pretalx.com
-X9RATL
Physics-Informed ML Simulator for Wildfire Propagation
en
en
20210729T133000
20210729T134000
0.01000
Physics-Informed ML Simulator for Wildfire Propagation
Our study investigates the applicability of the recently developed field of Scientific Machine Learning to climate models, wildfire models in particular. Our results show that many improvements are needed to turn this into a validated product, but they also show the big potential of our approach. Further refinements to the implementation are needed before we can carry out a precise timing comparison between our approach and standard numerical solvers, but the results obtained thus far are promising.
The encouraging outcome inspires us to continue our work by improving the architectures and possibly employ them in different fields of research.
We hope that this line of research will be a small step towards more effective cohesion between Machine Learning and Physical Models in the Climate Sciences, and that it will be explored further by other researchers.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/X9RATL/
Green
Francesco Calisto
Simone Azeglio
Valerio Pagliarino
Luca Bottero
PUBLISH
KNDFHC@@pretalx.com
-KNDFHC
Bias Audit and Mitigation in Julia
en
en
20210729T134000
20210729T135000
0.01000
Bias Audit and Mitigation in Julia
Machine Learning is involved in a lot of crucial decision-support tools. Uses of these tools range from granting parole and shortlisting job applications to accepting credit applications. Numerous political and policy developments over the past year have pointed out the transparency issues and bias in these ML-based decision-support tools, so it has become crucial for the ML community to think about fairness and bias. Eliminating bias isn't easy due to various trade-offs: performance versus fairness, and fairness versus fairness (different definitions of fairness might not be compatible).
In this talk we shall discuss:
- Challenges in mitigating bias
- Metrics and fairness algorithms offered by Fairness.jl and the workflow with the package
- How Julia's ecosystem of packages (MLJ, Distributed) helped us in performing a large systematic benchmarking of debiasing algorithms, which helped us understand their [generalization properties](https://arxiv.org/abs/2011.02407).
Repository: [Fairness.jl](https://github.com/ashryaagr/Fairness.jl)
Documentation is available [here](https://ashryaagr.github.io/Fairness.jl/dev/), and an introductory blog post is available [here](https://nextjournal.com/ashryaagr/fairness/)
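As a taste of what such fairness metrics compute, here is a generic sketch of the disparate-impact ratio in plain Julia. This is illustrative only; the function and variable names are hypothetical and this is not Fairness.jl's API:

```
# Disparate-impact ratio: ratio of positive prediction rates between the
# unprivileged and privileged groups. A value of 1.0 means parity.
function disparate_impact(ŷ::AbstractVector{Bool}, g::AbstractVector{Bool})
    p_priv   = count(ŷ .& g) / count(g)        # positive rate, privileged group
    p_unpriv = count(ŷ .& .!g) / count(.!g)    # positive rate, unprivileged group
    return p_unpriv / p_priv
end

ŷ = [true, true, false, true, false, false]    # model predictions
g = [true, true, true, false, false, false]    # true = privileged group
disparate_impact(ŷ, g)
```

Mitigation algorithms then adjust the data, the model, or the predictions to push such a metric towards parity, subject to the trade-offs mentioned above.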
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/KNDFHC/
Green
Ashrya Agrawal
PUBLISH
Y9XWLM@@pretalx.com
-Y9XWLM
Data driven insight into fish behaviour for aquaculture
en
en
20210729T135000
20210729T140000
0.01000
Data driven insight into fish behaviour for aquaculture
Data generated on modern aquaculture farms take a wide variety of forms. In situ sensors sample large numbers of environmental variables such as temperature, current velocity, dissolved oxygen (DO), chlorophyll, and salinity. Remotely-sensed environmental data can sample much larger spatial domains, at the bay scale – from land-based sensors such as CODAR-type HF radar – or at the global scale from satellite-based monitoring systems. Informing farm operations also requires sampling of animal variables such as size, clustering behaviour, and movement, typically done using underwater technologies such as hydroacoustics, video monitoring, and aerial drone imagery. Further, large datasets of pertinent variables are generated by numerical models such as weather or ocean circulation products. These datasets constitute huge data volumes with distinct characteristics. Integrating and extracting information from these disparate data sources in a scalable manner is key to encapsulating the full dynamics of the farm environment and enabling effective management.
This paper presents an analysis of environmental and fish behaviour datasets collected at three salmon farms in Norway, Scotland, and Canada. Information on fish behaviour was collected using hydroacoustic sensors that sampled the vertical distribution of fish in a cage at high spatial and temporal resolution, while a network of environmental sensors characterised local site conditions. We present an analysis of the environmental and hydroacoustic datasets using open-source Julia packages we developed: data were preprocessed and curated into time-aligned matrix form using TSML (https://github.com/IBM/TSML.jl), and machine learning pipelines were identified and implemented using Lale (https://github.com/IBM/Lale.jl).
Analysis enabled a quantitative investigation of the effects of environmental conditions on fish response, together with information on drivers of anomalous fish response. Results demonstrated pronounced temporal variations in fish distribution dictated by factors such as diurnal patterns, dynamics (currents and winds), and oxygen and temperature variations. Diurnal patterns driven by natural changes in light intensity were broadly similar across sites, although this trend was ameliorated at the Norwegian site, which was located inside the Arctic Circle and experienced 24 hours of daylight during the summer months. Generally, fish occupied a deeper position in the cage during the day and were more tightly clustered, while at night fish utilised more of the cage volume and were at a higher average position.
Analysis indicated that temperature was the primary environmental driver at two of the three sites. Temperature in the warmer summer months exhibited pronounced stratification before returning to a well-mixed temperature profile in September and October. During these stratified periods there was a tendency for fish to cluster to the warmer, upper portion of the cage and avoid colder temperatures. On the other hand, in reasonably homogeneous environments where temperature varies little with depth (such as at the Canada site during autumn), temperature did not influence the vertical distribution of salmon.
Variations in oxygen levels were most pronounced at the Canada site, which showed consistently lower values than the other sites. Feature importance analysis indicated that dissolved oxygen was the most important contributor to fish behaviour; in particular, a pronounced response was noted during periods of lower oxygen levels. Analysis indicated that fish moved towards the surface when values dropped below 7 mg L⁻¹, in line with the literature, which reports reduced appetite and feeding in Atlantic salmon below this threshold.
Results presented in this paper indicate pronounced differences between sites and the need to consider these variations in farm management. One could readily use this analysis to quantify the differences between sites and, further, to identify the fundamental drivers of these variations. This could be particularly valuable when comparing different farm systems, such as inshore and offshore, and the associated operational implications.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/Y9XWLM/
Green
Fearghal O'Donncha
Paulito Palmes
PUBLISH
UJUE8P@@pretalx.com
-UJUE8P
State of Julia
en
en
20210729T143000
20210729T151500
0.04500
State of Julia
Placeholder for State of Julia talk.
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/UJUE8P/
Green
Stefan Karpinski
PUBLISH
YS9RUZ@@pretalx.com
-YS9RUZ
Keynote (Xiaoye (Sherry) Li)
en
en
20210729T151500
20210729T160000
0.04500
Keynote (Xiaoye (Sherry) Li)
In recent years, we have seen a large body of research using hierarchical
matrix algebra to construct low complexity linear solvers and preconditioners.
Not only can these fast solvers significantly accelerate the speed of
large scale PDE based simulations, but also they can speed up many AI and
machine learning algorithms which are often matrix-computation-bound.
On the other hand, statistical and machine learning methods can be used
to help select best solvers or solvers' configurations for specific problems
and computer platforms. In both of these fields, high performance computing
becomes an indispensable cross-cutting tool for achieving real-time solution
for big data problems. In this talk, we will show our recent developments
in the intersection of these areas.
BIO
Sherry Li is a Senior Scientist in the Computational Research Division,
Lawrence Berkeley National Laboratory. She has worked on diverse problems
in high performance scientific computations, including parallel computing,
sparse matrix computations, high precision arithmetic, and combinatorial
scientific computing. She is the lead developer of SuperLU, a widely-used
sparse direct solver, and has contributed to the development of several other
mathematical libraries, including ARPREC, LAPACK, PDSLin, STRUMPACK, and XBLAS. She earned a Ph.D. in Computer Science from UC Berkeley and a B.S. in Computer Science from Tsinghua University in China. She has served on the editorial boards of SIAM J. Scientific Comput. and ACM Trans. Math. Software, as well as on many program committees of scientific conferences. She is a Fellow of SIAM and a Senior Member of ACM.
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/YS9RUZ/
Green
PUBLISH
EVR3HZ@@pretalx.com
-EVR3HZ
InvertibleNetworks.jl - Memory efficient deep learning in Julia
en
en
20210729T163000
20210729T170000
0.03000
InvertibleNetworks.jl - Memory efficient deep learning in Julia
Invertible neural networks (INNs) are designed around bijective building blocks that allow the evaluation of (deep) INNs in both directions, which means that inputs into the network (and all internal states) can be uniquely re-computed from the output. INNs were popularized in the context of normalizing flows as an alternative to generative adversarial networks (GANs) and variational auto-encoders (VAEs), but their invertibility is also appealing for discriminative models, as INNs allow memory-efficient backpropagation during training. As hidden states can be recomputed from the output, it is in principle not required to save them during the forward evaluation, leading to a significantly lower memory footprint than conventional neural networks. However, the backpropagation libraries used by TensorFlow or PyTorch do not support the concept of invertibility and therefore require workarounds to benefit from it. For this reason, current frameworks for INNs such as FrEIA or MemCNN use layer-wise AD, in which backpropagation is performed by first re-computing the hidden state of the current layer and then using PyTorch's AD tool (Autograd) to compute the gradients for that layer. This approach is not computationally efficient, as it performs an additional forward pass during backpropagation.
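The recompute-instead-of-store idea can be illustrated with an additive coupling layer, a common invertible building block from the normalizing-flow literature. This is a minimal plain-Julia sketch for illustration only; the helper `tnet` and all names are hypothetical, not InvertibleNetworks.jl's API:

```
# Additive coupling layer: the input is split into two halves (x1, x2).
# Any sub-network tnet may be used inside; the layer stays invertible regardless.
tnet(x) = 2 .* x .+ 1                # stand-in for an arbitrary sub-network

function coupling_forward(x1, x2)
    y1 = x1                          # first half passes through unchanged
    y2 = x2 .+ tnet(x1)
    return y1, y2
end

function coupling_inverse(y1, y2)
    x1 = y1
    x2 = y2 .- tnet(y1)              # input recomputed from the output
    return x1, x2
end

x1, x2 = rand(3), rand(3)
y1, y2 = coupling_forward(x1, x2)
x1r, x2r = coupling_inverse(y1, y2)  # recovers (x1, x2)
```

Because the inverse is exact, backpropagation can recompute every hidden state from the layer's output instead of keeping it in memory.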
With InvertibleNetworks.jl, we present an open-source Julia framework (MIT license) with manually implemented gradients, in which we take advantage of the invertibility of building blocks. For each invertible layer, we provide a backpropagation layer that (re-)computes the hidden state and weight updates all at once, thus not requiring an extra (layer-wise) forward evaluation. In addition to gradients, InvertibleNetworks.jl also provides Jacobians for each layer (i.e. forward differentiation), or more precisely, matrix-free implementations of Jacobian-vector products, as well as log-determinants for normalizing flows. While backpropagation and Jacobians are implemented manually, InvertibleNetworks.jl integrates seamlessly with ChainRules.jl, so users do not need to manually define backward passes for implemented networks. Additionally, InvertibleNetworks.jl is compatible with Flux.jl, so that users can create networks that consist of a mix of invertible and non-invertible Flux layers. In this talk, we present the architecture and features of InvertibleNetworks.jl, which includes implementations of common invertible layers from the literature, and show its application to a range of scenarios including loop-unrolled imaging, uncertainty quantification with normalizing flows and large-scale image segmentation.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/EVR3HZ/
Green
Philipp A. Witte
Mathias Louboutin
Ali Siahkoohi
Felix J. Herrmann
Gabrio Rizzuti
Bas Peters
PUBLISH
SLHLHX@@pretalx.com
-SLHLHX
Composable Bayesian Modeling with Soss.jl
en
en
20210729T170000
20210729T171000
0.01000
Composable Bayesian Modeling with Soss.jl
# First-Class, Composable Models
Soss models can be used and composed similarly to working with functions. This allows models to be built up from smaller, reusable components. In some cases, these can be developed and tested independently.
# Dynamic Code Generation
Soss uses runtime code generation for efficient inference primitives, specialized to the model and input types. New primitives can easily use arbitrary data structures, so the system is very flexible. Models are fully generative and determine joint distributions; in particular, models have `rand` and `logdensity` methods like any other measure.
# Model Transformations
Internally, models are represented as a directed graph with an AST (a Julia `Expr`) at each node. This makes it easy to transform one model into another based on its dependencies or AST structures. We can compute Markov blankets or reparameterizations, or change a model to output the latent conditional distributions used along the way.
# MeasureTheory.jl
Soss uses MeasureTheory.jl and allows falling back to Distributions.jl when needed, so it inherits the benefits of MeasureTheory. For example, fewer type constraints on constructors means Soss can evaluate a log-density symbolically. Coupled with codegen, this enables generation of highly optimized code.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/SLHLHX/
Green
Chad Scherrer
PUBLISH
NDQTSP@@pretalx.com
-NDQTSP
Chaotic time series predictions with ReservoirComputing.jl
en
en
20210729T171000
20210729T172000
0.01000
Chaotic time series predictions with ReservoirComputing.jl
Chaotic behaviour is by definition hard to predict or reproduce using forecasting models. With the advent of Deep Learning (DL), a lot of effort has been dedicated to this problem, with the default approaches being Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). More recently, a new family of models has proved more effective in tackling chaotic systems, namely Reservoir Computing (RC) models. Given the relative infancy of the RC paradigm, it is not simple to find an implementation of such models, let alone a full library. With ReservoirComputing.jl we propose a Julia package that allows the user to quickly get started with a fast-growing range of RC models, from the standard Echo State Networks (ESNs) to the more exotic Reservoir Computing with Cellular Automata (RECA). In this talk, a brief introduction to the concept of RC will be given, and the capabilities of ReservoirComputing.jl will then be illustrated using interactive examples.
Reservoir Computing models work by expanding the input data into a higher-dimensional space, called the reservoir. After this expansion, the resulting states are collected and the model is trained against the desired output as a linear regression problem. This approach allows for fast training times and avoids several problems of neural network training, such as the vanishing gradient. Not only are models in the RC family faster and safer to train but, as mentioned before, they have also been shown to be better at predicting and reproducing chaotic systems. RC models are mainly composed of three sections: an input-to-reservoir coupler, the reservoir, and a reservoir-to-output coupler. The last section is the result of the training process and depends on the training method one chooses to utilize. Using different constructions for these elements, it is possible to obtain different results in the task at hand. To properly explore RC models, a quick way to access these layers is needed in their implementation.
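The mechanics described above (expand the input into reservoir states, then fit a linear readout) can be sketched in a few lines of plain Julia. This is a minimal, self-contained echo state network for illustration, not ReservoirComputing.jl's API; all names and parameter values are illustrative:

```
using LinearAlgebra, Random

# Drive a random fixed reservoir with the input series and collect the states.
function esn_states(u::AbstractMatrix; res_size=100, ρ=0.9, seed=42)
    rng = MersenneTwister(seed)
    in_size, T = size(u)
    Win = 0.1 .* (rand(rng, res_size, in_size) .- 0.5)
    W = rand(rng, res_size, res_size) .- 0.5
    W .*= ρ / maximum(abs, eigvals(W))     # rescale spectral radius to ρ
    X = zeros(res_size, T)
    r = zeros(res_size)
    for t in 1:T
        r = tanh.(W * r .+ Win * u[:, t])  # reservoir state update
        X[:, t] = r
    end
    return X
end

# Readout: ridge regression from collected states to targets.
ridge_readout(X, Y; β=1e-6) = Y * X' / (X * X' + β * I)

u = reshape(sin.(0.1 .* (1:200)), 1, :)            # toy input series
X = esn_states(u)
Wout = ridge_readout(X[:, 1:end-1], u[:, 2:end])   # one-step-ahead targets
pred = Wout * X[:, end]                            # predict the next value
```

Only `Wout` is trained, which is what makes training fast and free of vanishing gradients; the library provides many alternatives for each of the three sections sketched here.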
At a high level, ReservoirComputing.jl gives the user the tools needed for a quick setup of the desired model, allowing an exploration of this family of models for the prediction of a given time series. For users who choose to delve more deeply into customization, the implementation follows a modular design that leaves them free to fully customize the system they intend to train and use for predictions. This not only helps with recombinations of layers already implemented in the library, but also allows for extensions with the aid of external libraries or custom code. Leveraging Julia's great package ecosystem, the user could, for example, train the RC system using regression approaches not yet implemented in the library by means of an external package. Likewise, it is possible to use a reservoir matrix construction not present in the library, either by custom construction or, again, by using other packages, such as LightGraphs.jl.
After this brief introduction to the RC paradigm, the talk will illustrate the concepts defined above using concrete examples. We will show both the ease of use of the package and some of the possible variations it includes. Finally, we will demonstrate possible customizations, both through custom-defined layers and by leveraging other libraries in the Julia ecosystem.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NDQTSP/
Green
Francesco Martinuzzi
PUBLISH
NYNJMJ@@pretalx.com
-NYNJMJ
Airborne Magnetic Navigation Enhanced with Neural Networks
en
en
20210729T172000
20210729T173000
0.01000
Airborne Magnetic Navigation Enhanced with Neural Networks
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NYNJMJ/
Green
Deleted User
PUBLISH
QEANKW@@pretalx.com
-QEANKW
Generative Models with Latent Differential Equations in Julia
en
en
20210729T174000
20210729T175000
0.01000
Generative Models with Latent Differential Equations in Julia
Scientific Machine Learning (SciML) is a very promising and exciting field that has been emerging in the past few years, with particular strength within the Julia community given the thriving SciML ecosystem. It consists of a growing set of diverse tools focused on combining traditional scientific modeling with novel machine learning (ML) techniques. The former is usually based on the long-established field of differential equations (DE) models, while the latter, though more recent, provides powerful general-purpose tools, and has demonstrated remarkable achievements in many applications.
Both approaches, of course, have their advantages and drawbacks: traditional modeling is far from trivial, since building an adequate model for a given problem usually requires educated guesses and approximations based on a deep understanding of the system being studied. Often in practice, it is only possible to build partial models and have access only to an incomplete set of the considered variables, sometimes even in a different unknown coordinate system. On the other hand, using orthodox ML models on poor-quality and scarce scientific data can be disadvantageous because of the lack of interpretability of these models, and the dependence on large amounts of training data to achieve good generalization.
SciML is a bridge between these two worlds, taking the best from each. A perfect example of such hybrid solutions is the case of Universal Differential Equations [1], where prior scientific insight is used to build some parts of a DE model, filling the unknown terms with neural networks (NN). The DE parameters and NN weights are jointly optimized using automatic differentiation and sensitivity analysis algorithms. This powerful approach was developed by members of the Julia community and is readily available in the DiffEqFlux.jl package. However, this method only works when one has direct measurements of the state variables of the DE models, which is not always the case.
There exists a class of approaches that tackles this issue by constructing latent DE models, where other NN layers learn transformations from the input space to a latent DE space, usually with lower dimensionality. Some examples of this approach are LatentODEs [2,3] and GOKU-nets [4]. In a broad view, these models consist of a Variational Autoencoder structure with DEs inside. Their decoders contain the DEs, whose initial conditions (and in some cases, parameters) are sampled from distributions learned by the encoders. In the case of LatentODEs, NNs are used to approximate the latent ODE, while in the case of GOKU-nets, one can use prior knowledge to provide some ODE model for the latent dynamics.
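The decoder side of such a latent DE model can be sketched with the SciML ecosystem, assuming a pendulum ODE as the known latent dynamics. All names and numerical values here are illustrative, not the actual model:

```
using DifferentialEquations

# Known latent dynamics: a pendulum with length ℓ.
# u[1] is the angle, u[2] the angular velocity.
pendulum(u, p, t) = [u[2], -(9.81 / p[1]) * sin(u[1])]

z0 = [0.3, 0.0]   # latent initial condition (sampled from the encoder)
ℓ  = 1.5          # pendulum length (sampled from the encoder, GOKU-net style)

prob = ODEProblem(pendulum, z0, (0.0, 10.0), [ℓ])
z = solve(prob, Tsit5(), saveat = 0.1)   # latent trajectory z(t)
```

A decoder network would then map each latent state `z(t)` back to the observation space (e.g. a video frame), and the whole pipeline would be trained end-to-end as a variational autoencoder.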
Currently, Flux.jl and the SciML ecosystem have all the functionalities to build these latent DE models, but this process can be time-consuming and possibly has a steep learning curve for people without a background in machine learning. Our goal is to provide a package that makes latent differential equation models readily accessible with high flexibility in architecture and problem definition.
In this presentation, we will introduce the basic background and concepts behind latent differential equation models, in particular, presenting the GOKU-net architecture. We will then show our implementation structure via a simple example: given videos of pendulums of different lengths, learn to reconstruct them by passing through their latent DE representation. We anticipate that our presentation shall be a user-friendly introduction to latent differential equations models for the Julia community.
Work done in collaboration with:
Jean-Christophe Gagnon-Audet¹*
Mahta Ramezanian¹
Vikram Voleti¹
Irina Rish¹
Pranav Mahajan²
Guillermo Cecchi³
Silvina Ponce Dawson⁴
Guillaume Dumas¹
*creator of the beautiful diagrams that you will see in the presentation
¹ Mila & Université de Montréal
² University of Pilani
³ IBM Research
⁴ CONICET & University of Buenos Aires
[1] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., ... & Edelman, A. (2020). Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.
[2] Chen, R. T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. (2018). Neural ordinary differential equations. arXiv preprint arXiv:1806.07366.
[3] Rubanova, Y., Chen, R. T., & Duvenaud, D. (2019). Latent odes for irregularly-sampled time series. arXiv preprint arXiv:1907.03907.
[4] Linial, O., Eytan, D., & Shalit, U. (2020). Generative ODE Modeling with Known Unknowns. arXiv preprint arXiv:2003.10775.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/QEANKW/
Green
Germán Abrevaya
PUBLISH
BSTFEQ@@pretalx.com
-BSTFEQ
CompositionalNetworks.jl: a scaling glass-box neural network
en
en
20210729T175000
20210729T180000
0.01000
CompositionalNetworks.jl: a scaling glass-box neural network
The *JuliaConstraints* GitHub organization was born last fall and aims to improve collaborative packages around the theme of Constraint Programming (CP) in Julia.
As for many fields of optimization, there is often a trade-off between efficiency and the simplicity of the model. **CompositionalNetworks.jl** was designed to smooth that trade-off. One could make a parallel with not having to choose between the speed of C and the simplicity of Python (among others).
An Interpretable Compositional Network (ICN) takes any vector (of arbitrary size) as input and outputs a (non-negative) value that corresponds to a user-given metric. For instance, consider an error function network in Constraint Programming: one can choose a Hamming distance metric to evaluate the distance between a configuration of the variables' values and the closest satisfying values. It provides the minimum number of variables to change to reach a solution.
A usual constraint showing the modeling power of Constraint Programming is the `AllDifferent` constraint which ensures that all the variables take different values. One can model a Sudoku problem with only such constraints.
An ICN, in its most basic form, is composed of four layers: transformation, arithmetic, aggregation, and composition. Weights between the layers are binary, meaning that neurons (operations) are either connected to or disconnected from each neuron in the adjacent layers. These simple Boolean weights allow a straightforward composition of the operations that make up an ICN, and provide a result that is interpretable by a human. The user can then either verify and use the composition directly, or use it as inspiration for a handmade composition.
An ICN learning on a small space of 4 variables with domain [1, 2, 3, 4] can extract the following function:
```
icn_all_different(x::AbstractVector) = x |> count_eq_left |> count_positive |> identity
```
where `count_eq_left` counts, for each `i`, the number of elements of `x` to the left of `xi` that are equal to it, and `count_positive` counts the number of elements `xi > 0`. This output is equivalent to the best known handmade error function for the `AllDifferent` constraint. Furthermore, it is fully scalable to any vector length.
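The extracted composition can be reproduced in a few lines of plain Julia. This is an illustrative re-implementation, not the package's internals, reading `count_eq_left` as counting, for each position, the earlier elements equal to it (consistent with the function's name and the claimed equivalence to the handmade `AllDifferent` error):

```
# For each i, count the elements to the left of x[i] that equal x[i].
count_eq_left(x::AbstractVector) = [count(==(x[i]), view(x, 1:i-1)) for i in eachindex(x)]

# Count the strictly positive entries of the transformed vector.
count_positive(x::AbstractVector) = count(>(0), x)

icn_all_different(x::AbstractVector) = x |> count_eq_left |> count_positive |> identity

icn_all_different([1, 2, 3, 4])   # 0: all values already different
icn_all_different([1, 1, 2, 2])   # 2: two values must change to reach a solution
```

The error is exactly the number of variables that repeat an earlier value, i.e. the minimum number of changes needed to satisfy `AllDifferent`.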
In CompositionalNetworks.jl, we generate the code of the composed function directly, and can even compile it on the fly thanks to Julia's metaprogramming capabilities. Moreover, we can export the compositions to human-readable mathematical language or to other programming languages.
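The on-the-fly compilation mentioned above can be sketched with base-Julia metaprogramming. This toy example (with stand-in operations, not the package's internals) builds the composed function as an `Expr` and compiles it with `@eval`:

```
# Stand-ins for the operations selected by the learned Boolean weights.
ops = [:sort, :first]

# Fold the operation names into a nested call expression around :x.
body = foldl((acc, f) -> :($f($acc)), ops; init = :x)

# Compile the composition on the fly: composed(x) = first(sort(x)).
@eval composed(x::AbstractVector) = $body

composed([3, 1, 2])   # → 1
```

Because `body` is an ordinary `Expr`, the same structure can just as easily be printed as mathematical notation or translated into another programming language.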
Users can check and modify the function composed by an ICN to adapt or improve the output to their needs and requirements. Of course, the function can also be used directly.
During this talk, we will cover an out-of-the-box use of CompositionalNetworks.jl, along with the key Julian and non-Julian aspects of the development of this package: among others, the use of other Julia packages as dependencies, such as the genetic algorithm in Evolutionary.jl to learn the Boolean weights of an ICN, and the generation of compositions in either programming code or mathematical language through Julia's efficient metaprogramming.
The versatility of the Julia version of ICNs, combined with metaprogramming, allows much broader practical use cases than the original C++ version, where modifying the code is a much harder task and metaprogramming is not possible (and usually not recommended for (pre)compiled languages).
While we provide a basic ICN use case as error function networks in Constraint Programming, it is straightforward for users to provide additional operations, or even layers. The type of functions learned and composed is more versatile than our use case. We hope this package can be of use to, but not limited to, people in the Constraint Programming and Julia communities.
Although our current applications are mainly within some packages of *JuliaConstraints*, we hope to exchange with the community for other methods to compose functions, apply them to other problems, and improve our understanding of Julia for Interpretable Compositional Networks.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/BSTFEQ/
Green
Khalil CHRIT
PUBLISH
VGHALK@@pretalx.com
-VGHALK
GatherTown -- Social break
en
en
20210729T180000
20210729T190000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/VGHALK/
Green
PUBLISH
BEEHC8@@pretalx.com
-BEEHC8
Modeling the Economy During the Pandemic
en
en
20210729T190000
20210729T193000
0.03000
Modeling the Economy During the Pandemic
In this talk, we will discuss how the Federal Reserve Bank of New York (FRBNY) uses Julia for forecasting. We will focus on how the FRBNY adjusted its dynamic stochastic general equilibrium (DSGE) model for the rapid changes in economic conditions brought about by the COVID-19 pandemic. These changes, which are available publicly through DSGE.jl, include the ability to solve and estimate an economic model with multiple regimes (where regimes differ in the equations that describe the economy). Regime-switching allows the FRBNY DSGE model to better capture the economic effects of COVID-19 as well as the switch to the new interest rate policy of average inflation targeting (AIT) announced by the Federal Reserve (Fed) in August 2020. In modeling the impact of this policy change, it is assumed that the introduction of the new reaction function is only partially incorporated by agents in forming expectations. Specifically, expectations are formed using a convex combination of forecasts obtained under the old and the new policy reaction functions. We write the code generically, so other forms of exogenous regime-switching and imperfect credibility about policy rules are accommodated.
In addition, we will demonstrate how this new model is estimated. New features in DSGE.jl, SMC.jl, and ModelConstructors.jl provide a user-friendly API for estimating parameters that change over time. We then show how to estimate this new model in an “online” manner that uses estimation results from an older model trained on data until before the pandemic. This method speeds up estimation times and can be applied even when the model has new COVID-specific parameters.
Throughout the talk, we will discuss how Julia’s functionalities and runtime performance enabled us to implement and use these changes quickly, which was crucial in forecasting during the rapidly-changing economic conditions over the last year.
These advances in DSGE.jl will be useful to any Julia users interested in flexibly modeling the economy, particularly in crisis situations such as the 2020 recession. The talk will also be useful to anyone who regularly conducts Bayesian estimation and is interested in re-using the results from an old estimation to efficiently estimate a new model, or the same model with new data.
Disclaimer: This talk reflects the experience of the authors and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/BEEHC8/
Green
Shlok Goyal
Alissa Johnson
PUBLISH
NXJYHT@@pretalx.com
-NXJYHT
HighFrequencyCovariance: Estimating Covariance Matrices in Julia
en
en
20210729T193000
20210729T194000
0.01000
HighFrequencyCovariance: Estimating Covariance Matrices in Julia
This talk will briefly cover the challenges of using high-frequency data for covariance matrix estimation, discuss a number of algorithms, and then demonstrate the use of the HighFrequencyCovariance package to estimate covariance matrices.
General content is in this paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3786912
And this package: https://github.com/s-baumann/HighFrequencyCovariance.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NXJYHT/
Green
Stuart Baumann
PUBLISH
THTPGL@@pretalx.com
-THTPGL
Using Julia to study economic inequality and taxation
en
en
20210729T194000
20210729T195000
0.01000
Using Julia to study economic inequality and taxation
Many consider economic inequality the biggest social challenge of the 21st century. Indeed, the distribution of disposable incomes, i.e. earned income (wages and salaries) minus income taxes, has become more unequal in recent years, and a larger share is captured by the top 1%. Yet it is a challenge to measure whether this development is driven by changes in the distribution of earned income itself or whether governmental efforts to redistribute from the rich to the poor have weakened. Accordingly, there are conflicting views among scientists on how to address increasing economic inequality.
In this talk, I show how to use Julia to study the evolution of earned income and disposable income in the United States (US). While most researchers in the social sciences use software such as R and STATA for this purpose, my talk demonstrates that Julia is a superb alternative. To illustrate a concrete application, I use a new Julia package, Taxsim.jl, to investigate whether the US tax system has become more or less redistributive during the last decades; income taxes paid are not reported in survey datasets, and Taxsim.jl allows them to be imputed efficiently by uploading data from the Julia workspace to the tax calculator of the National Bureau of Economic Research (NBER). The calculator then computes a number of tax variables (income taxes, tax credits, etc.) and Taxsim.jl downloads them back into Julia.
My talk has three elements. First, I give a brief introduction to the NBER tax calculator and describe its input and return variables. Second, I use CSV.jl and DataFrames.jl to import and inspect information on individual incomes contained in publicly available and easily accessible survey datasets (ACS, CPS, Census). Finally, I apply Taxsim.jl to impute income taxes paid via a simple function call and I compare the evolution of before- and after-tax household incomes in the United States since 1960 to measure the redistributive effects of the US tax system. Thus, my talk uses Julia to answer a question which is at the center of public debates on inequality.
The Julia workflow I present can be adjusted to suit a large range of applications. Moreover, Taxsim.jl makes it possible to investigate many aspects of the US tax system, such as the role of tax credits, deductions, state tax policies, etc. Hence, beyond the general Julia user community, the particular target group of this talk is researchers in the quantitative social sciences (economics, finance, sociology, public policy, etc.).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/THTPGL/
Green
Johannes Fleck
PUBLISH
VM7PSF@@pretalx.com
-VM7PSF
Diversity and Inclusion in the Julia community
en
en
20210729T195000
20210729T200000
0.01000
Diversity and Inclusion in the Julia community
This talk is designed as a primer for the upcoming Diversity and Inclusion BoF (Birds of a Feather, a session where community members come together to talk about a specific topic) and will provide all of the diversity data we have access to, in order to paint a full picture of the current state of the community with respect to D&I.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/VM7PSF/
Green
Logan Kilpatrick
PUBLISH
CLRKFC@@pretalx.com
-CLRKFC
Improving Gender Diversity in the Julia Community
en
en
20210729T200000
20210729T201000
0.01000
Improving Gender Diversity in the Julia Community
More information about Julia Gender Inclusive can be found in our announcement post here: https://discourse.julialang.org/t/announcing-julia-gender-inclusive/63702
Interested community members can sign up here to be added to our Slack workspace, and to join our regular coffee meet-ups: https://forms.gle/tGhCckZqhzvAHoQFA
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/CLRKFC/
Green
Kim Louisa Auth
Xuan (Tan Zhi Xuan)
PUBLISH
BRG8Z3@@pretalx.com
-BRG8Z3
Publish your research code: The Journal of Open Source Software
en
en
20210729T202000
20210729T203000
0.01000
Publish your research code: The Journal of Open Source Software
The peer review process, run by volunteers via GitHub issues, and automated using a bot as much as possible, is designed mainly to review and improve the software itself, including documentation and tests.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/BRG8Z3/
Green
David P. Sanders
PUBLISH
Q78RW3@@pretalx.com
-Q78RW3
GatherTown -- Social break
en
en
20210729T203000
20210729T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/Q78RW3/
Green
PUBLISH
88EDGD@@pretalx.com
-88EDGD
Scalable Power System Modeling and Analysis
en
en
20210729T123000
20210729T130000
0.03000
Scalable Power System Modeling and Analysis
This talk will provide practical modeling examples and theoretical justification for design choices made in the [Scalable Integrated Infrastructure Planning (SIIP) Initiative](https://www.nrel.gov/analysis/siip.html) at the National Renewable Energy Lab (NREL). We will demonstrate the suite of power systems focused packages – [SIIP::Power](https://github.com/NREL-SIIP) to perform large-scale power systems modeling and analysis activities. In particular, this talk will highlight:
- [InfrastructureSystems.jl](https://github.com/nrel-siip/infrastructuresystems.jl): for enabling large-scale infrastructure system data set management and access
- [PowerSystems.jl](https://github.com/nrel-siip/powersystems.jl): for specifying quasi-static and dynamic power systems data
- [PowerSimulations.jl](https://github.com/nrel-siip/powersimulations.jl): for enabling optimization based power systems modeling, including production cost modeling and optimal power flow using [PowerModels.jl](https://github.com/lanl-ansi/powermodels.jl)
- [PowerGraphics.jl](https://github.com/nrel-siip/powergraphics.jl): for visualizations of results generated by PowerSystems.jl and PowerSimulations.jl
Examples will focus on standard modeling practice and highlight opportunities to customize and extend capabilities to meet individual needs.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/88EDGD/
Red
Clayton Barrows
Dheepak Krishnamurthy
PUBLISH
F8BBVZ@@pretalx.com
-F8BBVZ
Unbalanced Power Flow Optimization with PowerModelsDistribution
en
en
20210729T130000
20210729T131000
0.01000
Unbalanced Power Flow Optimization with PowerModelsDistribution
PowerModelsDistribution (PMD) is an optimization-focused toolkit for modeling power distribution networks, designed using JuMP. It decouples problem specifications, power flow formulations, and optimization solvers, enabling easy exploration and application of a variety of power flow problem types and mathematical formulations for multi-phase quasi-steady-state optimization. PMD includes several nonlinear AC formulations, linear approximations, and relaxations, all based on state-of-the-art peer-reviewed research, and has native support for both single-period and multi-period (time series) problems; the latter is especially relevant due to the growing number of energy storage components appearing in power distribution networks. PMD also includes a native Julia OpenDSS data format parser, allowing us to validate the results of AC power flow against OpenDSS using a number of IEEE distribution test feeders, and providing a simple avenue to support existing data models for a broad collection of distribution system components such as photovoltaic systems and energy storage.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/F8BBVZ/
Red
David M Fobes
PUBLISH
UDTRW3@@pretalx.com
-UDTRW3
PowerModelsDistributionStateEstimation.jl
en
en
20210729T131000
20210729T132000
0.01000
PowerModelsDistributionStateEstimation.jl
Distribution networks are the final stage of the delivery of power from generation to consumers, and they have traditionally been managed with a fit-and-forget approach. This has been appropriate until recent years, given the predictable behavior and underutilization of these networks. However, several developments are changing the state of affairs, e.g., electric vehicles, PV panels, etc. These devices increase utilization, unpredictability, and the risk of voltage and congestion problems, but also provide a potential source of flexibility and control.
To understand the impact of these technologies and, potentially, to perform control actions, it is necessary to monitor distribution systems. State estimation (SE) is a monitoring tool that determines the most-likely state of the system given a set of measurements.
In this talk/poster, a (registered) Julia package is presented, PowerModelsDistributionStateEstimation.jl, which has been developed as a SE design facilitator. The main goal is to provide a flexible tool that allows researchers or other interested users to easily and rapidly design and benchmark SE techniques. This, in turn, has the potential to accelerate the real-life deployment of monitoring and control routines, which can play an important role in the management and operation of future power grids.
The package is an extension of PowerModelsDistribution.jl (https://github.com/lanl-ansi/PowerModelsDistribution.jl), and it allows SE to be formulated as a constrained optimization problem. Usually, SE is not addressed in a strict mathematical optimization sense, but the latter is a more general way to describe the problem, which encapsulates the different methods available in the literature, making benchmarking easier.
The biggest challenge in the comparison of SE methods is that the solving algorithm is an inherent part of the SE model. This means that changes in the model-defining equations often require changes in the subsequent solving steps, making it very labor-intensive and time-consuming to test even a limited number of modeling options. With this package we break this paradigm, by splitting the modeling and solver layer, which is possible by using JuMP.jl. This allows users to focus on the design of a suitable SE model, letting an off-the-shelf solver, e.g., Ipopt, take care of the solving part.
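To make the optimization view of SE concrete, here is a minimal pure-Julia sketch of the underlying weighted-least-squares problem for a linear measurement model. The function and variable names are illustrative only, not part of PowerModelsDistributionStateEstimation.jl's API:

```julia
using LinearAlgebra

# Weighted least-squares state estimation for a linear measurement model
# z = H*x + noise: minimize (z - H*x)' * W * (z - H*x) over the state x.
# All names here are illustrative, not part of any package API.
function wls_estimate(H::AbstractMatrix, z::AbstractVector, w::AbstractVector)
    W = Diagonal(w)                     # weights, e.g. 1 / measurement variance
    return (H' * W * H) \ (H' * W * z)  # solve the normal equations
end

# Two redundant, noisy measurements of a single scalar state near 5.0
H = reshape([1.0, 1.0], 2, 1)  # 2x1 measurement matrix
z = [5.1, 4.9]                 # measurements
w = [1.0, 1.0]                 # equal confidence in both meters
x̂ = wls_estimate(H, z, w)      # ≈ [5.0]
```

In the package the same problem is written as a JuMP model instead, so the nonlinear power flow equations and different measurement types can be attached as constraints and handed to an off-the-shelf solver.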
A potential drawback is that solve times are longer than with a customized algorithm. However, numerical experiments with available solvers show solve times that seem acceptable for experimental and real-life use. If a better performance is required, the package can still be used to quickly find the optimal SE design, which the user can augment with a customized solver at a later stage.
Several SE modeling options (e.g., measurement types, power flow equations), are available in the package, which is easy to extend to include more.
The talk will give a short overview on the concept of SE, to then introduce the package in detail and provide some numerical examples to demonstrate its functionalities.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/UDTRW3/
Red
Marta
PUBLISH
DTJYAC@@pretalx.com
-DTJYAC
LatticeQCD.jl: Simulation of quantum gauge fields
en
en
20210729T132000
20210729T133000
0.01000
LatticeQCD.jl: Simulation of quantum gauge fields
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/DTJYAC/
Red
Akio Tomiya
Yuki Nagai
PUBLISH
QUCAK3@@pretalx.com
-QUCAK3
JuliaSPICE: A Composable ML Accelerated Analog Circuit Simulator
en
en
20210729T133000
20210729T140000
0.03000
JuliaSPICE: A Composable ML Accelerated Analog Circuit Simulator
Modern analog design and verification require semi-custom and complex flows that are difficult to construct with commercial tools, since those are built around rigid command-line batch flows. In comparison, JuliaSPICE is built from the ground up for flexibility with a full Julia API, so users can automate complex tasks without relying on slow disk IO and parsers. User-defined measurements or checks, written in Julia, can be executed inline with the simulator, allowing the designer to dynamically alter the simulation and make on-the-fly measurements. JuliaSPICE is also advancing the latest ML techniques with surrogate models and is funded by a DARPA award with the goal of delivering a 1000x speed-up over traditional approaches. The composability of a Julia solution will be demonstrated from within a Pluto notebook, showing interactive analyses not available in other simulators. Attendees will leave with a much better understanding of how Julia can be leveraged to accelerate their workflows, whether performing analog simulations or other tasks.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/QUCAK3/
Red
Glen Hertz
Pepijn de Vos
PUBLISH
SB7HWT@@pretalx.com
-SB7HWT
Designing Spacecraft Trajectories with Julia
en
en
20210729T163000
20210729T164000
0.01000
Designing Spacecraft Trajectories with Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/SB7HWT/
Red
Dan Padilha
PUBLISH
P7KQMT@@pretalx.com
-P7KQMT
AtomicSets.jl
en
en
20210729T164000
20210729T165000
0.01000
AtomicSets.jl
_AtomicSets.jl was developed by Michael Friedlander, Zhenan Fan and Mia Kramer at the University of British Columbia._
We say a set is convex if, for every pair of points in the set, the line segment between those points is also contained in the set. This is a generalization of the notion of convexity that most of us learned in grade school for polygons: the set doesn't have any "caves" or "dents". We similarly call a function convex if its _epigraph_ (the set of points "above" the function if it were plotted) is convex. Some common examples of useful convex sets are the _one ball_, the set of all points with 1-norm less than or equal to one, and the _nuclear ball_, the set of all matrices that can be written as an outer product of unit norm vectors. We care about convexity because it guarantees some useful properties for optimization. For example: any local minimum of a convex function is also a global minimum. Combinations of these properties can give us efficient algorithms with strong convergence guarantees.
Suppose we are trying to solve a problem where our answer is a vector, and we expect it to be sparse. In general, computing with exact sparsity is difficult, but let's take a step back. To say that a vector is sparse is to say it should be constructed from a small number of coordinate vectors, each scaled by a nonnegative amount. Let's take the coordinate vectors (and their opposite sign counterparts) to be our _atoms_, and take their convex hull to be our domain. We now have a domain we know to be convex, which also induces the structure we want to see in our solution. The process is similar for low-rank matrices: we assume that they will be constructed from a small number of outer products, so we take the set of unit rank outer products to be our atoms.
To generalize over these _atomic sets_, we need more than their common properties; we need a set of common operations. The most basic of these is probably the `gauge` function. Given a set _A_ and a point _x_, the gauge function answers the question "what is the smallest scale _λ_ such that _x_ is in _λ A_?" In other words, how much do we have to expand or contract _A_ so that _x_ is only just contained in it? If we pick our set to be the _two ball_, the set of points with Euclidean norm ≤ 1, our gauge function is just the Euclidean norm of the point.
Other common operations we have on atomic sets are the `expose` and `support` functions. The `expose` function gives us the atom which is most aligned with a given vector, where aligned means maximizing the dot product. Another way to imagine the operation is to take your set and your vector, and then sweep a hyperplane from the origin in the direction of the vector. The last point that the hyperplane touches on its path is the exposed point. The `support` function gives the value of the inner product which produced the exposed point.
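These three operations can be sketched in plain Julia for the two ball. The names below are illustrative only, not the actual AtomicSets.jl API:

```julia
using LinearAlgebra

# Sketch of the atomic-set operations described above, for the Euclidean
# unit ball (the "two ball"). Illustrative only, not the AtomicSets.jl API.
struct TwoBall end

# gauge: smallest λ ≥ 0 such that x ∈ λA; for the two ball this is ‖x‖₂.
gauge(::TwoBall, x) = norm(x)

# expose: the atom of A most aligned with z (maximizing the dot product);
# for the two ball this is simply z normalized.
expose(::TwoBall, z) = z / norm(z)

# support: the inner product achieved by the exposed atom.
support(A::TwoBall, z) = dot(expose(A, z), z)

z = [3.0, 4.0]
gauge(TwoBall(), z)    # 5.0, the Euclidean norm
expose(TwoBall(), z)   # [0.6, 0.8]
support(TwoBall(), z)  # 5.0 as well, since the exposed atom is z / ‖z‖₂
```

A different atomic set (say, the one ball with coordinate-vector atoms) would plug into the same three-function interface, which is what makes algorithms written against it generic.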
Additionally, we define a calculus of atomic sets. We can construct atomic sets that are, for example, the Minkowski sum of other sets, a scaling of a set, or a linear map applied on a set. The `expose` operation gives us in some sense an element of the subderivative of the set, and so the usual chain rule applies. By defining this operation recursively for these compound sets, they too can be generic.
Using Julia's type system, we build representations for the sets, their atoms, and _faces_ (collections of atoms). By writing functions using these common operations on atomic sets, Julia's dispatch system allows compiling said function for any choice of atomic set (and hence notion of sparsity). Using this construction, we present a dual method for solving min_x ½ ‖ Mx - b ‖² s.t. gauge_A(x) ≤ τ, which is generic over the choice of set _A_.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/P7KQMT/
Red
Mia Kramer
PUBLISH
9MGDGG@@pretalx.com
-9MGDGG
Julia Admittance: A Toolbox for Admittance Extraction
en
en
20210729T165000
20210729T170000
0.01000
Julia Admittance: A Toolbox for Admittance Extraction
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/9MGDGG/
Red
Lingling Fan
PUBLISH
3JQKRW@@pretalx.com
-3JQKRW
JuliaFolds: Structured parallelism for Julia
en
en
20210729T170000
20210729T173000
0.03000
JuliaFolds: Structured parallelism for Julia
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/3JQKRW/
Red
Takafumi Arakaki
PUBLISH
MENJSR@@pretalx.com
-MENJSR
Teaching parallelism to the Julia compiler
en
en
20210729T173000
20210729T180000
0.03000
Teaching parallelism to the Julia compiler
This is joint work with Valentin Churavy and TB Schardl.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/MENJSR/
Red
Takafumi Arakaki
PUBLISH
DMTYDS@@pretalx.com
-DMTYDS
Javis.jl - Julia Animations and Visualizations
en
en
20210729T190000
20210729T193000
0.03000
Javis.jl - Julia Animations and Visualizations
Javis.jl is a general purpose library for creating Julia-based animations and visualizations across domains. At its core, Javis builds on the high-level graphics library Luxor.jl and on FFMPEG.jl for animation creation. Individuals who have difficulty effectively communicating ideas or findings statically can easily use and extend Javis to construct informative, performant, and winsome animated graphics.
In this talk the audience will learn the key concepts of Javis by looking into the abstraction system we use. Additionally, they will see basic examples of how objects can interact to create powerful animations. A main point will be the interoperability with existing Julia packages like Luxor.jl and Animations.jl, which we use to provide the smoothest possible experience for users who already know how to use Luxor for static art. Finally, the audience will come away with how Javis is already being used, what the future of Javis is, and how to get involved with the project.
Javis is inspired by the Python-based animation engine, manim, created by Grant Sanderson (aka 3blue1brown) to visualize math concepts. Although inspired by manim, Javis has the greater goal of providing a general purpose animated graphics library. Historically, the Julia ecosystem has lacked a similar dedicated toolchain for the easy creation of complex animations. Javis is now filling that gap in the ecosystem - and beyond only mathematics.
After reviewing the Julia ecosystem, the most similar packages to Javis are Reel.jl, Makie.jl, and Animations.jl. Javis differentiates itself from these packages by enabling its users to create visuals that may not be generally - or easily - supported by standard plotting packages. Although Javis uses a "Frame" concept similar to Reel.jl and Makie.jl, it is not only limited to plots and can create much more complicated visualizations than Animations.jl. Moreover, Javis has extensive documentation and tutorials to illustrate how to easily get started which is at times lacking within these similar packages. Finally, Javis has an active 40+ developer community where beginners can ask questions and participate in the open development path of Javis. Given that Javis users will not be limited by plotting conventions and having guidance in the form of high-quality tutorials and extensive documentation, the novelty and accessibility of Javis in the Julia space is high.
Already, Javis has seen steady adoption by users. For example, Javis has been used in secondary school settings to teach topics such as physics and earth sciences. Increasingly, Javis is also being used for advanced visualizations in specific domains, such as visualizing Fourier series for signal processing use cases and mapping activity across the brain to view how the brain behaves under stress.
Due to its extensible nature, Javis is poised to leverage the existing Julia ecosystem for further animation capabilities that users can take advantage of. This integrative tooling comes as a result of a very open definition of what Javis considers an animation. It enables a user to hook into packages such as Animations.jl or Pluto.jl to provide additional capabilities: fine-grained control of animations in the case of Animations.jl, or reproducible development environments in the case of Pluto.jl.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/DMTYDS/
Red
Jacob Zelko
Ole Kröger
PUBLISH
NVLQT7@@pretalx.com
-NVLQT7
Julia and deploying complex graphical applications for laypeople
en
en
20210729T193000
20210729T194000
0.01000
Julia and deploying complex graphical applications for laypeople
Julia shows to be promising as a general-purpose language, yet uses for software targeted at non-professional users still appear to be scarce. We are the developers of one such tool: [Ahorn](https://github.com/CelestialCartographers/Ahorn) is a graphical level editor for the video game Celeste that allows a user to create their own levels for the game. Ahorn is written entirely in Julia. As the tool itself is likely to be of little interest to the Julia community, this talk will not focus on the tool, but on our experiences developing and deploying it.
Owing to the nature of the tool, its audience consists in large part of people who want to dip their toes into game and level design for the first time. Many of these people are young, some as young as 13 years old. This talk will be about what it is like to develop a graphical Julia application that has to be installable by a child on a 10-year-old laptop. What did the Julia ecosystem offer for GUI design in early 2018 when the project started? How well did Julia’s package installation system handle the large variety in hardware and operating systems we have encountered? How has the situation improved since then? What unique features does Julia offer that made us choose it, and how did using the language pay off years later? In our talk, we would like to answer these questions by sharing our own experiences, and provide some ideas for what can be improved if the Julia community wants the language to become more widely adopted for the development of non-scientific user-facing applications.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NVLQT7/
Red
Vexatos
Cruor
PUBLISH
CPDWCV@@pretalx.com
-CPDWCV
PGFPlotsX.jl - Plotting with LaTeX, directly from Julia
en
en
20210729T194000
20210729T195000
0.01000
PGFPlotsX.jl - Plotting with LaTeX, directly from Julia
Some people like to almost endlessly tinker with their plots, and the LaTeX plot package PGFPlots is one of the plotting packages that allow for such tinkering. It comes with a 600-page manual describing an almost endless number of dials and levers that can be turned and pulled to finally get the perfect plot. One of its drawbacks is that coding in LaTeX can be quite unpleasant: the error messages are often not good and there is very little linting support. `PGFPlotsX.jl` is a Julia package that brings all the good things about PGFPlots into Julia while remedying the bad part by allowing one to use Julia for the coding part.
One of the big design goals of the package was to facilitate "translatability" of LaTeX PGFPlots code into Julia over, for example, terseness. This was based on the observation that many plots are created by "stitching together" parts of different examples that can be found scattered over the internet. Allowing LaTeX PGFPlots code to easily be brought into Julia would open up a big amount of example code to be used. It also means that the official PGFPlots manual largely acts as a manual for the package.
Even though the API is made to resemble the one in LaTeX, it is of a much higher level than the LaTeX counterpart. Many Julia objects can be directly used as inputs to the plot and will "convert" in a predictable way. Some examples of plottable Julia objects include data frames (from `DataFrames.jl`), contours (from `Contour.jl`), colors (from `Colors.jl`), and error bars (from `Measurements.jl`).
For people that desire a terser coding style while still having easy access to the PGFPlots renderer, it is possible to use `PGFPlotsX.jl` as a backend to `Plots.jl`.
In this talk, I will discuss how the design goals outlined above were achieved and give some illustrative examples and use cases. Attendees should get an overview of the package and be able to determine if using the package for their daily plotting is suitable.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/CPDWCV/
Red
Kristoffer Carlsson
PUBLISH
XDX3SQ@@pretalx.com
-XDX3SQ
Towards an increased code-creativity harmony in Javis
en
en
20210729T195000
20210729T200000
0.01000
Towards an increased code-creativity harmony in Javis
This summer I have been working towards:
Bringing a more organized experience for creators and developers via layers.
Currently a WIP, this feature aims to add a layer based approach towards the Javis animation canvas, where different layers are stacked on top of each other. This is beneficial in cases where a user wants to modify a particular layer, without affecting objects present in other layers. It helps maintain virtual boundaries between different objects on the canvas both conceptually and syntactically.
Powerful abstractions for improved reasoning about the Javis API.
Creating shorthand methods/constants to define general functions that save creators from writing anonymous functions for each object, by extending Luxor’s shapes such as Line, Circle, Rectangle, Polygon, etc.
Improvements to object transformations
To be able to create stunning animations, being able to visualize the transformation of one element into another on the fly is both useful and aesthetically pleasing. The current state of morphing allows only single-step transformation, where the final object is not a distinctly different object. Being able to modify the new object after morphing will open new possibilities for the user to transform the object further.
Livestream animations
Sharing is a part of creation, and being able to share animations is really important. Livestreaming can be done in two ways, over a local network, or directly to platforms like twitch.tv. While twitch support is a WIP, the former is available in the latest Javis.jl release.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/XDX3SQ/
Red
Arsh Sharma
PUBLISH
3S8DGW@@pretalx.com
-3S8DGW
A deep dive into MakieLayout
en
en
20210729T200000
20210729T203000
0.03000
A deep dive into MakieLayout
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/3S8DGW/
Red
Julius Krumbiegel
PUBLISH
UGX8YR@@pretalx.com
-UGX8YR
CUDA.jl 3.0
en
en
20210729T123000
20210729T130000
0.03000
CUDA.jl 3.0
CUDA.jl 3.0 was a major release of the NVIDIA GPU programming support package for Julia, with a major addition to the programming model: support for concurrent GPU programming with Julia tasks. In this talk, I will explain what concurrent GPU programming means, how it works, and how you can use it to improve your GPU programs.
I will also talk about other features and changes that are part of CUDA.jl 3.0 and more recent releases, such as the new device-side random number generator, support for building computational graphs, the new memory allocator, etc.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/UGX8YR/
Blue
Tim Besard
PUBLISH
DZC7HN@@pretalx.com
-DZC7HN
Scaling of Oceananigans.jl on multi GPU and CPU systems
en
en
20210729T130000
20210729T131000
0.01000
Scaling of Oceananigans.jl on multi GPU and CPU systems
Oceananigans.jl is designed to be a user-friendly ocean modeling code natively implemented in Julia that can scale from single-core laptop studies to large-scale parallel CPU and GPU cluster systems. The code's finite-volume algorithm has large inherent parallelism through spatial domain decomposition.
In this talk we will look at the strong and weak scaling performance of non-linear shallow water model configurations of Oceananigans. The code's numerical kernels utilize KernelAbstractions.jl, allowing one source code to be maintained that supports both CPU and GPU parallel scenarios. Multi-process on-node and multi-node parallelism is supported by MPI.jl and largely abstracted, using data structures and associated types that dispatch communication operations depending on the active parallelism model.
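The pattern of dispatching communication operations on an architecture type can be sketched in a few lines. All type and function names below are hypothetical, not Oceananigans.jl's actual API:

```julia
# Sketch of the dispatch-on-architecture pattern: communication operations
# dispatch on an architecture type, so the same time-stepping code serves
# both serial and distributed runs. All names here are hypothetical, not
# Oceananigans.jl's actual types or functions.
abstract type AbstractArchitecture end
struct SingleProcess <: AbstractArchitecture end
struct MultiProcess <: AbstractArchitecture
    rank::Int
    nranks::Int
end

# Halo exchange is a no-op on a single process; the distributed method
# would wrap point-to-point communication via MPI.jl.
fill_halos!(::SingleProcess, field) = field
function fill_halos!(arch::MultiProcess, field)
    # placeholder for MPI communication with neighboring ranks
    @info "rank $(arch.rank) of $(arch.nranks): exchanging halos"
    return field
end

# One generic step: compute, then exchange halos for whatever architecture.
step!(arch::AbstractArchitecture, field) = fill_halos!(arch, field .+ 1)

step!(SingleProcess(), zeros(4))  # [1.0, 1.0, 1.0, 1.0]
```

Because the communication detail lives behind dispatch, the numerical kernels themselves stay identical across the serial, multi-process, and multi-node cases.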
We will describe briefly the benchmark problems used and then look at scaling over multiple threads on CPUs within a single node, across multiple GPUs and across multiple CPU and GPU nodes in a high-performance computing cluster. We will present speedup metrics and cost per solution metrics. The latter can be used to provide some measure of cost-effectiveness across quite different architectures.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/DZC7HN/
Blue
Chris Hill
Valentin Churavy
Ali Ramadhan
Francis Poulin
Gregory Wagner
PUBLISH
ZYPPNH@@pretalx.com
-ZYPPNH
Calculating a million stationary points in a second on the GPU
en
en
20210729T131000
20210729T132000
0.01000
Calculating a million stationary points in a second on the GPU
We will show how Julia allows us to implement spatial branch-and-bound-type methods using interval arithmetic in parallel on GPUs, in a relatively painless way.
These methods use repeated bisection in a divide-and-conquer style to perform exhaustive search over a box in d dimensions (for small d), in order to find all roots of a function f, find all global optima of f, or to bound feasible sets of constraints such as {x: f(x) ≤ 0}.
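As a rough illustration of the bisect-and-prune loop, here is a scalar CPU toy with a hand-rolled interval type; the actual work uses IntervalArithmetic.jl and runs the same logic vectorised over boxes on the GPU:

```julia
# Scalar CPU toy of branch-and-prune root finding with a hand-rolled
# interval type (the real implementation uses IntervalArithmetic.jl and
# evaluates many boxes in parallel on the GPU).
struct Interval
    lo::Float64
    hi::Float64
end

Base.:-(X::Interval, c::Real) = Interval(X.lo - c, X.hi - c)

# Interval square: careful when the interval straddles zero.
function sqr(X::Interval)
    a, b = X.lo^2, X.hi^2
    X.lo ≤ 0 ≤ X.hi ? Interval(0.0, max(a, b)) : Interval(min(a, b), max(a, b))
end

contains_zero(X::Interval) = X.lo ≤ 0 ≤ X.hi
width(X::Interval) = X.hi - X.lo
bisect(X::Interval) = (m = (X.lo + X.hi) / 2; (Interval(X.lo, m), Interval(m, X.hi)))

# Keep bisecting boxes where f may vanish; discard ("prune") the rest.
function branch_and_prune(f, X0; tol = 1e-8)
    work, roots = [X0], Interval[]
    while !isempty(work)
        X = pop!(work)
        contains_zero(f(X)) || continue  # prune: no root possible here
        if width(X) < tol
            push!(roots, X)
        else
            A, B = bisect(X)
            push!(work, A, B)
        end
    end
    return roots
end

f(X) = sqr(X) - 2                                 # interval extension of x² - 2
roots = branch_and_prune(f, Interval(-3.0, 3.0))  # tiny boxes around ±√2
```

The GPU version keeps all candidate boxes in a device array and applies `f` to every box at once by broadcasting, which is where the elimination-in-parallel difficulty mentioned below arises.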
Using a vectorised implementation, we will show firstly how to define a vector of interval objects (or similar user-defined types) on the GPU, which most other systems cannot do. Then we need a way to run interval arithmetic methods, as defined in the IntervalArithmetic.jl package, on the GPU; `CUDA.jl`'s broadcasting abstraction makes this possible without writing custom kernels.
We will illustrate with the Griewank function, a standard test case for nonlinear optimization. We have developed a generic implementation of a vectorised branch-and-prune algorithm, which can run on both the CPU and GPU with no code changes whatsoever. A key difficulty that we faced, but were able to solve, was how to eliminate the uninteresting boxes in parallel.
We obtain a 2-orders-of-magnitude speed-up over a single CPU core, and we expect that performance will be improved even more by reducing array allocations.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/ZYPPNH/
Blue
David P. Sanders
Valentin Churavy
PUBLISH
NWFRP9@@pretalx.com
-NWFRP9
ZXCalculus.jl: A Julia package for the ZX-calculus
en
en
20210729T132000
20210729T133000
0.01000
ZXCalculus.jl: A Julia package for the ZX-calculus
The repository of ZXCalculus.jl is available on GitHub: [ZXCalculus.jl](https://github.com/QuantumBFS/ZXCalculus.jl)
For a brief introduction to this package, please refer to this [blog post](https://chenzhao44.github.io/2020/08/27/ZXCalculus.jl/).
For more details about the ZX-calculus, please check this [website](http://zxcalculus.com/).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NWFRP9/
Blue
Chen Zhao
PUBLISH
LMLJS8@@pretalx.com
-LMLJS8
ExaTron.jl: a scalable GPU-MPI-based batch solver for small NLPs
en
en
20210729T133000
20210729T140000
0.03000
ExaTron.jl: a scalable GPU-MPI-based batch solver for small NLPs
We introduce ExaTron.jl, a scalable GPU-MPI-based batch solver for many small nonlinear programming problems. Its algorithm is based on a trust-region Newton method for solving bound-constrained nonlinear nonconvex problems. In contrast to existing work in the literature, it works entirely on GPUs, without requiring data transfers between CPU and GPU during its procedure. This enables us to eliminate one of the main performance bottlenecks in memory-bound situations. We present ExaTron.jl's architecture, its kernel design principles, and implementation details, with experimental results comparing different design choices. We have implemented an ADMM algorithm for solving alternating current optimal power flow, where tens of thousands of small nonlinear nonconvex problems are solved by ExaTron.jl. We demonstrate linear scaling of ExaTron.jl's parallel performance on Summit at Oak Ridge National Laboratory.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/LMLJS8/
Blue
Youngdae Kim
PUBLISH
RJE93F@@pretalx.com
-RJE93F
Release management - lessons learned in JuliaData ecosystem
en
en
20210729T163000
20210729T170000
0.03000
Release management - lessons learned in JuliaData ecosystem
In this talk I will discuss:
1. How we manage development and patch branches in DataFrames.jl.
2. Why users might not be able to install the latest version of your package and why installing it might downgrade other packages.
3. How to coordinate releases of closely coupled packages.
4. Why having interface packages like DataAPI.jl and Tables.jl is useful.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/RJE93F/
Blue
Bogumił Kamiński
PUBLISH
NWRPGY@@pretalx.com
-NWRPGY
Shaped Data with Acsets
en
en
20210729T170000
20210729T173000
0.03000
Shaped Data with Acsets
Any practicing data scientist can tell you that all the munging going on between data acquisition and mathematical algorithms is a huge time sink. This is especially evident when the data does not fall into the traditional model of the dataframe. If one is lucky, it is shaped like a graph, and one can use a graph data structure and graph algorithms to analyze it. More generally, however, there are many more "shapes" of data, which must either be put into ad hoc data structures or shoehorned into general-purpose ones.
In Catlab, we have built a general infrastructure for differently-shaped data based on a category-theoretic framework for databases as functors that we call "Attributed C-Sets" (acsets for short).
The acset infrastructure is made possible by a novel use of the Julia macro and type systems, which would be difficult, if not untenable, in most other languages. First, "schemas" for acsets are generated by macros. Then, more macros are used to transform these schemas into custom structs. Finally, we use `@generated` functions to specialize generic operations to these custom structs.
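The macro-plus-`@generated` pipeline can be sketched with a toy example that specializes column lookup on a schema carried in the type. This is illustrative only; Catlab's real acset machinery is far more general:

```julia
# Toy sketch of the @generated pipeline: the "schema" (column names) lives
# in a type parameter, and a @generated function compiles column lookup
# down to a constant tuple index. Illustrative only, not Catlab's API.
struct ColumnTable{names, T <: Tuple}
    columns::T
end

function columntable(; kw...)
    nt = values(kw)  # the keyword arguments as a NamedTuple
    ColumnTable{keys(nt), typeof(values(nt))}(values(nt))
end

# At compile time, locate the requested column for this schema and emit a
# direct tuple access -- no runtime search over names.
@generated function column(t::ColumnTable{names}, ::Val{name}) where {names, name}
    i = findfirst(isequal(name), collect(names))
    i === nothing && return :(throw(ArgumentError($("no column " * String(name)))))
    return :(t.columns[$i])
end

t = columntable(src = [1, 2, 3], tgt = [2, 3, 1])
column(t, Val(:src))  # [1, 2, 3], resolved without a runtime name lookup
```

The generated body is computed once per schema/column-name pair and then cached by the compiler, which is what lets fully generic code run at the speed of hand-written struct access.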
This approach gives us performance comparable to popular data solutions like DataFrames.jl and LightGraphs.jl, while remaining fully generic. The acset infrastructure is used pervasively throughout the AlgebraicJulia ecosystem because of its flexibility, expressivity, and performance.
In our talk, we will give an overview of the mathematical and computational innovations necessary to implement the acset infrastructure, as well as examples of practical applications of acsets, and a reflection on how acsets have become an essential part of AlgebraicJulia.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/NWRPGY/
Blue
Owen Lynch
PUBLISH
3PCHLJ@@pretalx.com
-3PCHLJ
Types from JSON
en
en
20210729T173000
20210729T174000
0.01000
Types from JSON
Type providers infer and instantiate types from real world data. [Types from data: Making structured data first-class citizens in F#](http://tomasp.net/academic/papers/fsharp-data/fsharp-data.pdf) formalized a type inference algorithm for real world data. This talk will provide an overview of the theory of type providers, how this applies to Julia, and how you can get types from data today in [JSON3.jl](https://github.com/quinnj/JSON3.jl).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/3PCHLJ/
Blue
Mary McGrath
PUBLISH
HWSUQN@@pretalx.com
-HWSUQN
PrettyPrinting: optimal layout for code and data
en
en
20210729T174000
20210729T175000
0.01000
PrettyPrinting: optimal layout for code and data
If you use Julia REPL to work with JSON or other nested data structures, you may find the way the data is displayed unsatisfactory. If this is the case, consider using PrettyPrinting. Compare:
<pre>
julia> data = JSON.parsefile("patient-example.json")
Dict{String, Any} with 14 entries:
"active" => true
"managingOrganization" => Dict{String, Any}("reference"=>"Organization/1")
"address" => Any[Dict{String, Any}("line"=>Any["534 Erewhon St"], "dis…
"name" => Any[Dict{String, Any}("family"=>"Chalmers", "given"=>Any[…
"id" => "example"
"birthDate" => "1974-12-25"
⋮
</pre>
<pre>
julia> using PrettyPrinting
julia> pprint(data)
Dict(
"active" => true,
"managingOrganization" => Dict("reference" => "Organization/1"),
"address" => [Dict("line" => ["534 Erewhon St"],
"district" => "Rainbow",
"use" => "home",
"postalCode" => "3999",
"city" => "PleasantVille",
"period" => Dict("start" => "1974-12-25"),
"text" => "534 Erewhon St PeasantVille, Rainbow, Vic 3999",
"type" => "both",
"state" => "Vic")],
"name" => [Dict("family" => "Chalmers",
"given" => ["Peter", "James"],
"use" => "official"),
Dict("given" => ["Jim"], "use" => "usual"),
Dict("family" => "Windsor",
"given" => ["Peter", "James"],
"use" => "maiden",
"period" => Dict("end" => "2002"))],
"id" => "example",
"birthDate" => "1974-12-25",
⋮
</pre>
PrettyPrinting optimizes the layout of the data to make it fit the screen width. It knows how to format tuples, named tuples, vectors, sets, and dictionaries.
PrettyPrinting can also serialize `Expr` nodes as Julia code. It supports a fair subset of Julia syntax including top-level declarations, statements, and expressions.
The ability of PrettyPrinting to format `Expr` nodes makes it easy to extend `pprint()` to user-defined data types. Indeed, it is customary to display a Julia object as a valid Julia expression that constructs the object. This can be done by converting the object to `Expr` and having `pprint()` format it.
For example, let us define a type `MyNode` modeled after the standard `Expr` type.
<pre>
julia> struct MyNode
head
args
MyNode(head, args...) = new(head, args)
end
</pre>
The default implementation of `show()` is not aware of the custom constructor. Moreover, it dumps the whole object in a single line, making it difficult to read.
<pre>
julia> tree = MyNode("1",
MyNode("1.1", MyNode("1.1.1"), MyNode("1.1.2"), MyNode("1.1.3")),
MyNode("1.2", MyNode("1.2.1"), MyNode("1.2.2"), MyNode("1.2.3")))
MyNode("1", (MyNode("1.1", (MyNode("1.1.1", ()), MyNode("1.1.2", ()), MyNode("1.1.3", …
</pre>
We implement function `quoteof(::MyNode)` to convert `MyNode` to `Expr`. We can also override the default implementation of `show()` to make it use `pprint()`.
<pre>
julia> PrettyPrinting.quoteof(n::MyNode) =
:(MyNode($(quoteof(n.head)), $((quoteof(arg) for arg in n.args)...)))
julia> Base.show(io::IO, ::MIME"text/plain", n::MyNode) =
pprint(io, n)
</pre>
Now the output is correct Julia code that fits the screen width.
<pre>
julia> tree
MyNode("1",
MyNode("1.1", MyNode("1.1.1"), MyNode("1.1.2"), MyNode("1.1.3")),
MyNode("1.2", MyNode("1.2.1"), MyNode("1.2.2"), MyNode("1.2.3")))
</pre>
Internally, PrettyPrinting represents all potential layouts of a data structure in the form of a *layout expression* assembled from atomic layouts, vertical and horizontal composition, and the choice operator. The layout cost function estimates how well the layout fits the screen dimensions. The algorithm for finding the optimal layout is a clever application of dynamic programming, which is described in [Phillip Yelland, A New Approach to Optimal Code Formatting, 2016](https://ai.google/research/pubs/pub44667).
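The layout-expression idea can be illustrated with a toy model (invented names, not PrettyPrinting's actual internals): atomic text, vertical and horizontal composition, and a choice operator resolved against the screen width.

```julia
abstract type Layout end
struct Atom   <: Layout; s::String end
struct Vcat   <: Layout; parts::Vector{Layout} end
struct Hcat   <: Layout; parts::Vector{Layout} end
struct Choice <: Layout; a::Layout; b::Layout end

# Render a layout to a vector of lines. For brevity, Hcat here only
# joins single-line sublayouts.
lines(l::Atom) = [l.s]
lines(l::Vcat) = reduce(vcat, lines.(l.parts))
lines(l::Hcat) = [join(only.(lines.(l.parts)))]

# Resolve a choice: prefer branch `a` whenever all its lines fit.
function fit(l::Choice, width)
    la = lines(l.a)
    all(length(s) <= width for s in la) ? la : lines(l.b)
end

# A one-line Dict layout versus a wrapped two-line alternative:
doc = Choice(Hcat([Atom("Dict("), Atom("\"a\" => 1"), Atom(")")]),
             Vcat([Atom("Dict("), Atom("  \"a\" => 1)")]))
fit(doc, 80)   # wide screen: one line
fit(doc, 10)   # narrow screen: wrapped form
```

The real algorithm considers many interleaved choices at once and uses dynamic programming over subexpressions to avoid the combinatorial blow-up this naive version would hit.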
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/HWSUQN/
Blue
Kyrylo Simonov
PUBLISH
AKRLTA@@pretalx.com
-AKRLTA
Sponsor talk (Datachef)
en
en
20210729T175000
20210729T175500
0.00500
Sponsor talk (Datachef)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/AKRLTA/
Blue
PUBLISH
LE38LV@@pretalx.com
-LE38LV
Package latency and what developers can do to reduce it
en
en
20210729T190000
20210729T193000
0.03000
Package latency and what developers can do to reduce it
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/LE38LV/
Blue
Tim Holy
PUBLISH
KHK7PA@@pretalx.com
-KHK7PA
Creating a Shared Library Bundle with Package Compiler
en
en
20210729T193000
20210729T194000
0.01000
Creating a Shared Library Bundle with Package Compiler
Julia has been touted as a great solution to the two-language problem (and it is). But for many, interacting with code in other languages is a necessity.
Numerous packages exist which aid interoperability with other languages, including C ([`Clang.jl`](https://juliainterop.github.io/Clang.jl/stable/)), C++ ([`CxxWrap.jl`](https://github.com/JuliaInterop/CxxWrap.jl)), Java ([`JavaCall.jl`](https://juliainterop.github.io/JavaCall.jl/)), MATLAB ([`MATLAB.jl`](https://github.com/JuliaInterop/MATLAB.jl) / [`Mex.jl`](https://github.com/byuflowlab/Mex.jl)), Python ([`PyCall.jl`](https://github.com/JuliaPy/PyCall.jl) / [`pyjulia`](https://pyjulia.readthedocs.io/en/stable/)), R ([`RCall.jl`](https://juliainterop.github.io/RCall.jl/stable/) / [`JuliaCall`](https://cran.r-project.org/web/packages/JuliaCall/readme/README.html)), Mathematica ([`MathLink.jl`](https://github.com/JuliaInterop/MathLink.jl)), and Rust ([`jlrs`](https://docs.rs/jlrs/0.9.0/jlrs/)).
Many of these packages focus on calling out to code in other languages from Julia, but there is also support for calling Julia code from other languages, especially for those that have the ability to call C functions, and that is what we will focus on here.
The Julia manual has a [full section on Embedding Julia](https://docs.julialang.org/en/v1/manual/embedding/). Until now, this has been the standard way to embed and call Julia from other languages. Using the ideas here, along with custom Julia sysimage generation with [`PackageCompiler.jl`](https://julialang.github.io/PackageCompiler.jl/dev/), one of us created a proof-of-concept repository for creating a shared library from Julia code for C or other languages (https://github.com/simonbyrne/libcg).
One downside of this work is that the library was not easy to relocate--it contained hard-coded paths to the Julia runtime. We wanted the ability to create a relocatable shared library.
`PackageCompiler.jl` already allowed the creation of “apps”--bundles of files, including an executable--which could be relocated and moved to other machines (with some minor caveats). We extended this functionality to create relocatable shared libraries with a `create_library` function.
The actual act of creating a shared library with `PackageCompiler.jl` is very much like creating an “app”, and has a very similar output--a bundle of directories which include the shared library and enough of the Julia runtime to run. This bundle can be zipped or tarred up, sent to other computers, and installed in any location that a linker can find it. The user has the option of setting the library version (on Mac and Linux), and can include C header files for the Julia functions she has exported in the shared library.
For this talk, we will give a brief overview of the `create_library` functionality, discuss situations in which it might be used, show how to use it, and discuss its limitations.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/KHK7PA/
Blue
Kevin Squire
Simon Byrne
Kristoffer Carlsson
PUBLISH
CY88QP@@pretalx.com
-CY88QP
Semantically Releasing Julia Packages
en
en
20210729T194000
20210729T195000
0.01000
Semantically Releasing Julia Packages
The 'semantic release' framework (https://semantic-release.gitbook.io) builds upon the 'semantic versioning' (https://semver.org) and 'conventional commits' (https://www.conventionalcommits.org) specifications to bring release preparation closer to day-to-day operations.
When the time rolls around for a new release of a Julia package, multiple decisions have to be made regarding which version number to assign, documenting what has changed, etc. This is not always a straightforward task. Gathering this information may require coordination within a group of people, or stretch over long periods of time, requiring effort to regain an overview of the current state of a package relative to the previous release in order to document it. By adopting a specific commit message format, standard tooling can be used to extract this information automatically whenever a release is desired. For instance, based on the content of commit messages, the new semantic version can be determined, a changelog for the public API can be automatically generated, etc.
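Concretely, under the conventional commits specification the commit type maps onto the semantic version bump (the commit subjects below are invented examples):

```text
fix: handle empty input files        -> patch release (1.2.3 to 1.2.4)
feat: add Arrow.jl sink              -> minor release (1.2.3 to 1.3.0)
feat!: rename readtable to read      -> major release (1.2.3 to 2.0.0)
```

Tooling scans the commit history since the last tag, takes the largest required bump, and assembles the changelog from the commit subjects and bodies.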
An argument can be made that, as all of this information is available in the version control system, there is no need for tooling such as this. However, the information contained in these systems is typically either too high-level due to sloppy commit messages, or it is too detailed requiring deep knowledge of the software to understand the implications of changes. It is typically not convenient for consumers of a Julia package to find out how a new release of a dependency affects their software.
Adopting a 'semantic release' process benefits both developers and consumers of Julia software. For the former, it enables thinking about the impact of changes 'in the moment', instead of 'after the fact'. This is typically beneficial for the quality of documentation of these changes (e.g. reasons why, etc.). For the latter, it becomes easier to judge whether a new release of a dependency actually has an impact on their software.
Slides are available at https://bauglir.gitlab.io/talks/juliacon-2021-semantically-releasing-julia-packages/.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/CY88QP/
Blue
Joris Kraak
PUBLISH
ZSPVMT@@pretalx.com
-ZSPVMT
Runtime-switchable BLAS/LAPACK backends via libblastrampoline
en
en
20210729T195000
20210729T200000
0.01000
Runtime-switchable BLAS/LAPACK backends via libblastrampoline
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/ZSPVMT/
Blue
Mosè Giordano
Elliot Saba
PUBLISH
U9SZZU@@pretalx.com
-U9SZZU
Deep Dive: Creating Shared Libraries with PackageCompiler.jl
en
en
20210729T200000
20210729T203000
0.03000
Deep Dive: Creating Shared Libraries with PackageCompiler.jl
We recently added to `PackageCompiler.jl` functionality for creating shared library bundles, consisting of a "main" dynamic library (`.so`, `.dylib`, or `.dll`) created from Julia code, as well as any required Julia runtime libraries. The purpose of the library bundle is to allow developers to write Julia code that can be distributed to developers using other languages without the need for Julia to be installed.
This work extends the existing `PackageCompiler.jl` functionality to create self-contained, distributable and relocatable "apps". In this talk, we will go into the details of the implementation, as well as give in-depth examples of using the resulting shared library from C and Rust.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/U9SZZU/
Blue
Kevin Squire
Nikhil Mitra
Kristoffer Carlsson
Simon Byrne
PUBLISH
73XKCM@@pretalx.com
-73XKCM
DataSets.jl: A bridge between code and data
en
en
20210729T123000
20210729T124000
0.01000
DataSets.jl: A bridge between code and data
DataSets.jl is an open source package for describing data format and location declaratively so that one can better separate data deserialization and access from the domain-specific analysis code which consumes that data.
To quote from the package documentation available at https://juliacomputing.github.io/DataSets.jl/dev :
DataSets.jl exists to help manage data and reduce the amount of data wrangling
code you need to write. It's annoying to constantly rewrite
* Command line wrappers which deal with paths to data storage
* Code to load and save from various *data storage systems* (eg, local
filesystem data; local git data, downloaders for remote data over various
protocols, cloud storage access)
* Code to load the same data model from various serializations
* Code to deal with data lifecycle; versions, provenance, etc
DataSets.jl provides scaffolding to make this kind of code more reusable. We want
to make it easy to *relocate* an algorithm between different data environments
without code changes. For example from your laptop to the cloud, to another
user's machine, or to an HPC system.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/73XKCM/
Purple
Claire Foster
PUBLISH
EZHEQL@@pretalx.com
-EZHEQL
Systems Biology in ModelingToolkit
en
en
20210729T124000
20210729T125000
0.01000
Systems Biology in ModelingToolkit
Back in my day, systems biologists used MATLAB and Python for RK4. But in 2021 we can now run downhill both ways and make our biological models zoom with CellMLToolkit.jl and SBMLToolkit.jl in Julia! We will demonstrate importing CellML and SBML models into ModelingToolkit and how we get these model analysis and simulation tools "for free" in an acausal symbolic component model. We will show a few examples of how (biological) researchers may benefit from the broader SciML ecosystem, including parameter estimation and global sensitivity analysis. Short comparisons with de facto SBML and CellML modeling programs will be drawn to demonstrate how a biologists’ workflow may differ with SciML. The audience will leave with a firm understanding of how the Julia simulation environments will lead the next generation of biological modeling and simulation.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/EZHEQL/
Purple
Anand Jain
Shahriar Iravanian
Paul Lang
PUBLISH
YKHNVR@@pretalx.com
-YKHNVR
Single-cell resolved cell-cell communication modeling in Julia
en
en
20210729T125000
20210729T130000
0.01000
Single-cell resolved cell-cell communication modeling in Julia
The role of cell-cell communication in cell fate decision-making has not been well-characterized through a dynamical systems perspective. To do so, here we develop multiscale models that couple cell-cell communication with cell-internal gene regulatory network dynamics. This allows us to study the influence of external signaling on cell fate decision-making at the resolution of single cells. We study the granulocyte-monocyte vs. megakaryocyte-erythrocyte fate decision, dictated by the GATA1-PU.1 network, as an exemplary bistable cell fate system. Using JuliaLang, we model the cell-internal dynamics with nonlinear ordinary differential equations and the cell-cell communication via a Poisson process.
In this work, through analysis of a wide range of cell-cell communication topologies, we discovered that general principles emerged describing how cell-cell communication regulates cell fate decision-making. We studied a wide range of cell communication topologies through simulation using tools from DifferentialEquations.jl. We also used our high-performance computing cluster to run thousands of simulations in order to understand the limiting behaviors of our model. We show that, for a wide range of cell communication topologies, subtle changes in signaling can lead to dramatic changes in cell fate. We find that cell-cell coupling can explain how populations of heterogeneous cell types can arise. Analysis of intrinsic and extrinsic cell-cell communication noise demonstrates that noise alone can alter the cell fate decision-making boundaries. These results illustrate how external signals alter transcriptional dynamics, provide insight into cell fate decision-making, and provide a framework for modeling cell-cell communication that we expect will be of wide interest to the systems biology community.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/YKHNVR/
Purple
Megan Franke
PUBLISH
QTLENJ@@pretalx.com
-QTLENJ
FlowAtlas.jl: interactive exploration of phenotypes in cytometry
en
en
20210729T130000
20210729T131000
0.01000
FlowAtlas.jl: interactive exploration of phenotypes in cytometry
This project demonstrates how combining OpenLayers, D3 and GigaSOM.jl using JSServe.jl allowed us to create interactive clustering and visualisation tools for really large cytometry data. We want to continue lowering the entry barrier for experimental biologists to use computational tools.
This talk should be interesting to people from bioinformatics, immunology, machine learning, and web development. Special thanks go to the lovely people involved in GigaSOM.jl for the helpful discussions.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/QTLENJ/
Purple
Grisha Szep
PUBLISH
EWWNFZ@@pretalx.com
-EWWNFZ
Designing ecologically optimized vaccines
en
en
20210729T131000
20210729T132000
0.01000
Designing ecologically optimized vaccines
Streptococcus pneumoniae (the pneumococcus) is a common nasopharyngeal bacterium that can cause invasive pneumococcal disease (IPD). Each component of current vaccines generally induces immunity to one of the approximately 100 pneumococcal types. Overall carriage rates remain similar to pre-vaccination levels, as the serotypes not targeted by the vaccine replace the targeted ones. Selecting which serotypes to target to minimize the post-vaccine IPD burden is a challenging combinatorial problem involving a large ODE system describing the population dynamics of the bacteria in response to each proposed vaccine. This talk describes how I have approached this problem using automatic differentiation, parallelized evaluation of the ODEs, stochastic search, and Bayesian optimization. Here is a link to the paper this work is based on: https://www.nature.com/articles/s41564-019-0651-y.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/EWWNFZ/
Purple
Kusti Skytén
PUBLISH
PDMYDR@@pretalx.com
-PDMYDR
PRS.jl: Fast Polygenic Risk Scores
en
en
20210729T132000
20210729T133000
0.01000
PRS.jl: Fast Polygenic Risk Scores
The PRS-CS Python library calculates the relationship between genetic features and traits, eventually producing a single numerical result representing a person’s genetic susceptibility to a given disease. It does this using a novel Markov Chain Monte Carlo approach, allowing it to capture information from more genetic features than previous approaches.
As collection and storage of genetic data increase globally, more diseases are studied at once. However, calculating these scores for many diseases while maintaining high accuracy becomes computationally expensive. Because of the limitations of PRS-CS in delivering top accuracy quickly, we developed PRS.jl.
PRS.jl started as a direct port of PRS-CS and, without any special treatment, produces results with the same accuracy in a fraction of the time (or, depending on the configuration, better accuracy for the same amount of time). Today, PRS.jl boasts additional features and improved usability over PRS-CS, while reducing average compute time per trait (among nine traits tested) from 80 hours for PRS-CS to just 15 for PRS.jl.
In this talk, I will introduce the concept of polygenic risk scores and describe how they are used in biology and medicine. Next, I'll demonstrate how the program works and which aspects we improved. Finally, I will show areas where users can contribute improvements to the package.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/PDMYDR/
Purple
Annika Faucon
PUBLISH
DQJNVA@@pretalx.com
-DQJNVA
PhyloNetworks: a Julia package for phylogenetic networks
en
en
20210729T133000
20210729T134000
0.01000
PhyloNetworks: a Julia package for phylogenetic networks
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/DQJNVA/
Purple
Claudia Solis-Lemus
PUBLISH
NUFWBU@@pretalx.com
-NUFWBU
Solving Pokemon Go Battles using Julia
en
en
20210729T134000
20210729T135000
0.01000
Solving Pokemon Go Battles using Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NUFWBU/
Purple
Ian Slagle
PUBLISH
TRMZFB@@pretalx.com
-TRMZFB
Julia for data analysis in High Energy Physics
en
en
20210729T135000
20210729T140000
0.01000
Julia for data analysis in High Energy Physics
The field of High Energy Physics (HEP) is a natural place to benefit greatly from the Julia language. The adoption of Julia in HEP, however, has been slow, and the HEP ecosystem remains a promising place for future development. In this talk, I will present [an example of data analysis in LHCb](https://inspirehep.net/literature/1879440), a large collaboration of 1000 scientists, that pioneers the application of Julia to typical HEP problems. The central part of the analysis is the study of a multi-particle spectrum by building a custom mixture-model PDF based on the particle-scattering amplitude using [AlgebraPDF.jl](https://github.com/mmikhasenko/AlgebraPDF.jl), extended-likelihood fitting, and spin-hypothesis testing using sets of pseudo-experiments.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/TRMZFB/
Purple
Mikhail Mikhasenko
PUBLISH
MAUPF9@@pretalx.com
-MAUPF9
Experiences session
en
en
20210729T163000
20210729T180000
1.03000
Experiences session
PUBLIC
CONFIRMED
Experience
https://pretalx.com/juliacon2021/talk/MAUPF9/
Purple
PUBLISH
BB97DT@@pretalx.com
-BB97DT
Monads 2.0, aka Algebraic Effects: ExtensibleEffects.jl
en
en
20210729T190000
20210729T193000
0.03000
Monads 2.0, aka Algebraic Effects: ExtensibleEffects.jl
You have heard that monads are supposed to be cool, but guess what: maybe there is already something better? Indeed ;-)
Extensible effects, sometimes also called algebraic effects, have been around for some time and have made monads composable.
Remember, a monad is essentially a composable hidden context; however, composing different such monads has been a struggle for many years.
This talk will present the concept and implementation of Extensible Effects. The implementation was adapted from the Scala library Eff, but massively simplified, and comes with many examples of varying complexity. Hence it serves very well for educational purposes. You can find the source code at https://github.com/JuliaFunctional/ExtensibleEffects.jl
Extensible Effects are a bit like magic: the implementation looks small, but what it can do surpasses imagination, even if you programmed it yourself. It is a truly remarkable concept. Grab the chance and get to know it in this session!
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/BB97DT/
Purple
Stephan Sahm
PUBLISH
QDBNXV@@pretalx.com
-QDBNXV
Roadmap to Julia BLAS and LinearAlgebra
en
en
20210729T193000
20210729T200000
0.03000
Roadmap to Julia BLAS and LinearAlgebra
The primary motivations for implementing BLAS/LAPACK in Julia are:
1. Because we can!
2. Highly optimized alternatives such as MKL provide a solid benchmark by which to assess how we're doing before applying the same optimization approaches to novel problems.
3. We can easily adapt the routines to related operations such as evaluating dense layers or miscellaneous tensor operations.
4. The routines can be generic with respect to number types, whether that means mixed precision or something as exotic as tropical numbers (showcased in TropicalGEMM.jl).
5. They can take advantage of compile-time information and specialize, e.g., for statically sized arrays.
6. Support for new hardware is relatively painless, as we do not need to write assembly kernels. Feature detection and optimized code generation are tied to LLVM rather than to libraries like OpenBLAS, which tend to lag far behind the compilers.
Some of the challenges faced in the ecosystem include:
1. Efficient composable threading with low enough overhead to beat MKL for small array sizes.
2. Compilation time or sysimage building to avoid "time to first matmul" problems.
3. The implementations of BLAS and LAPACK routines themselves.
Traditionally, BLAS and LAPACK libraries define many compute kernels, typically written in assembly for each supported architecture. Supporting code then builds the BLAS and LAPACK routines by applying these kernels.
Libraries such as Octavian and RecursiveFactorization followed this approach while using LoopVectorization to produce most of the kernels.
An alternative approach is to use these problems to motivate and guide extending LoopVectorization to perform analysis and optimizations.
At the time of submitting this proposal, the planned features that will extend how much LoopVectorization can handle automatically, thereby reducing the effort needed to implement new functions, are:
1. Allowing the bounds of inner loops to depend on the induction variables of outer loops. For example, loops of the form `for m in 1:M, n in 1:m; ...; end`.
2. Allowing multiple loops to occur at the same level of the nest. For example, loops of the form `for m in 1:M; for n in 1:N; end; for k in 1:K; end; end`.
3. Modeling dependencies across loop iterations, so that they are not violated. For example, loops of the form `for m in 1:M; a[m] += a[m-1]; end`.
Together, these would allow LoopVectorization to support loop nests performing Cholesky factorizations or triangular solves. These should be tunable to achieve (nearly) optimal performance at small to moderate sizes for use in blockwise routines.
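Loop form 1 above (a triangular inner bound) can be sketched in plain Julia — no LoopVectorization involved, and the function name is invented for illustration — as a lower-triangular matrix-vector product:

```julia
# y = L*x for a dense lower-triangular L: the inner loop bound `1:m`
# depends on the outer induction variable m, exactly the loop shape
# the planned feature would let LoopVectorization optimize.
function trimatvec!(y, L, x)
    M = length(y)
    @inbounds for m in 1:M
        acc = zero(eltype(y))
        for n in 1:m                # triangular: bound depends on m
            acc += L[m, n] * x[n]
        end
        y[m] = acc
    end
    return y
end

L = [1.0 0.0; 2.0 3.0]
trimatvec!(zeros(2), L, [1.0, 1.0])   # == [1.0, 5.0]
```

Triangular solves and Cholesky factorizations are built from nests of exactly this shape, which is why supporting it unlocks those routines.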
An orthogonal set of optimizations would be to develop a model for automatically generating blocking (working on pieces of arrays at a time that fit nicely into upper cache levels) and packing code (copying pieces of arrays into temporary buffers to avoid pessimized address calculations due to memory accesses being spread across too many pages; this also benefits hardware prefetchers, which require relatively small strides between subsequent memory accesses to trigger).
We outline a path toward building up an ecosystem through a combination of these approaches, including applications we can target -- such as stiff ODE solves benefiting from LU and triangular solves -- and see immediate benefits along the way.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/QDBNXV/
Purple
Chris Elrod
PUBLISH
YFPXCU@@pretalx.com
-YFPXCU
SuiteSparseGraphBLAS.jl
en
en
20210729T200000
20210729T201000
0.01000
SuiteSparseGraphBLAS.jl
This talk will give an overview of progress on a JSOC 2021 project. Most work will be complete by this point, and the talk will give a brief overview of GraphBLAS, an example algorithm using GraphBLAS in Julia, and a graph neural network layer written using the project.
One of the goals of the project is interoperability with the Julia ecosystem, integrating with interfaces from SparseArrays, LightGraphs, and GeometricFlux. These integrations will be highlighted as well.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/YFPXCU/
Purple
William Kimmerer
PUBLISH
PRFW3N@@pretalx.com
-PRFW3N
MutableArithmetics: An API for mutable operations
en
en
20210729T201000
20210729T202000
0.01000
MutableArithmetics: An API for mutable operations
Julia allows writing generic algorithms that work with arbitrary number types as long as they implement the needed operations such as `+`, `*`, ...
The arithmetic operations defined in Julia assume that their arguments are not modified.
However, in many situations, a variable represents an accumulator that can be modified to contain the result, e.g., when summing the elements of an array.
Moreover, many types can be mutated, e.g., multiple-precision numbers, JuMP expressions, MOI functions, polynomials, arrays, ...
and mutating them may bring significant performance benefits.
This talk presents an interface called MutableArithmetics.
It allows mutable types to implement an arithmetic that exploits their mutability, and algorithms to exploit this mutability while still being completely generic.
Moreover, it provides the following additional features:
1. it re-implements part of the Julia standard library on top of the API, allowing mutable types to use a more efficient version than the default one;
2. it defines a `@rewrite` macro that rewrites an expression using the standard operations (e.g., `+`, `*`, ...) into code that exploits the mutability of the intermediate values created when evaluating the expression.
JuMP used to have its own API for mutable operations on JuMP expressions and its own JuMP-specific implementation of features 1 and 2.
This was refactored into the MutableArithmetics package, generalizing it to arbitrary mutable types.
Starting from JuMP v0.21, JuMP expressions and MOI functions implement the MutableArithmetics API, and the JuMP-specific implementation of features 1 and 2 was removed in favor of the generic versions implemented in MutableArithmetics on top of its API.
While MutableArithmetics is already used in the released versions of numerous packages (such as JuMP, MathOptInterface, SumOfSquares, Polyhedra, SDDP, and MultivariatePolynomials),
and seems to work well and cover the use cases of many different types and algorithms on those types,
we may still need to modify the API to cover all possible use cases.
During this presentation, we hope to lay out our design decisions in a clear and detailed manner so that the Julia community can help us figure out whether there are situations the API does not cover and how it could be further improved.
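The core idea can be sketched in a few lines. The names below are illustrative, not MutableArithmetics' actual API: a hypothetical `add!!` mutates its first argument when the type supports it and falls back to plain `+` otherwise, so a generic sum exploits mutability without losing genericity. (`Base.GMP.MPZ.add!` is a Julia-internal in-place BigInt add, used here only for illustration.)

```julia
add!!(x, y) = x + y                                    # generic fallback: allocates
add!!(x::BigInt, y::BigInt) = Base.GMP.MPZ.add!(x, y)  # mutates x in place

function mysum(v)
    # Build a fresh accumulator that is safe to mutate (never aliased
    # with an element of v or a shared constant).
    acc = zero(eltype(v)) + zero(eltype(v))
    for x in v
        acc = add!!(acc, x)
    end
    return acc
end

mysum(1:3)              # Int path: immutable, uses the `+` fallback
mysum(BigInt[1, 2, 3])  # BigInt path: one accumulator mutated in place
```

The same `mysum` works unchanged for both number types; only the accumulation strategy differs, which is precisely the separation the MutableArithmetics API provides.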
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/PRFW3N/
Purple
Benoît Legat
PUBLISH
FEEV9A@@pretalx.com
-FEEV9A
ExprTools: Metaprogramming from reflection
en
en
20210729T202000
20210729T203000
0.01000
ExprTools: Metaprogramming from reflection
Sometimes you want to generate definitions for many methods. Consider, for example, implementing the delegation pattern: you have a field of a different type, and you want to overload all methods that accept that field's type to also accept this new object, and have them simply delegate to calling the method on the field. Ideally this wouldn't come up, and you would just need to implement a small, well-documented set of interface methods. But sometimes things can't be ideal. Generating overloads from the method table is one way to take a jack-hammer and blast through the problem. But even beyond that it can be useful, as this talk will discuss.
[ExprTools.jl](https://github.com/invenia/ExprTools.jl) was created to hold a more robust version of `splitdef` and `combinedef` from [MacroTools.jl](https://github.com/FluxML/MacroTools.jl).
`splitdef` takes the AST for a method definition and outputs a dictionary of all the parts: name, args, whereparams, body etc. `combinedef` does the reverse: taking such a dictionary, and outputting an AST that declares the method.
`splitdef` is very useful since it both handles different equivalent syntax forms, and makes the key parts accessible in a consistent way.
This makes it easier to write function decorator macros, and also macros that let the user write something that looks like a function but is actually transformed into something else.
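A minimal sketch of this round trip (assuming ExprTools.jl is installed; its dictionary keys include `:name`, `:args`, and `:body` per the package README):

```julia
using ExprTools

ex = :(f(x, y::Int) = x + y)   # AST of a method definition
d = splitdef(ex)               # Dict of the parts: :name, :args, :body, ...
# d[:name] == :f; d[:args] contains :x and :(y::Int)
d[:name] = :g                  # rename the function...
ex2 = combinedef(d)            # ...and emit an equivalent definition AST
eval(ex2)                      # defines g(x, y::Int) = x + y
```

Because `splitdef` normalizes the different equivalent syntax forms, a macro manipulating the dictionary does not need to care whether the user wrote `f(x) = ...` or `function f(x) ... end`.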
This dictionary is also useful on its own, and it would be great if we could build it not from an AST but from a method that has already been defined. We can access all the information we need via reflection. This is exactly what the `signature` function provides.
The `signature` function takes in a `Method` object, which can be obtained from `methods` or `methodswith`, and returns a dictionary like `splitdef` would, except it excludes the body.
Excluding the body is generally no loss for this kind of generated code, since the user will generally want to fill the body with their own code that calls the method being generated from. One example is generating overloaded operators for overloading-based reverse-mode AD from [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl/)’s `rrule`.
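A sketch of generating a method from reflection this way (assuming ExprTools.jl provides `signature`; the exact set of output keys may vary between versions):

```julia
using ExprTools

h(x::Real) = 2x
m = only(methods(h))            # reflection: grab the Method object
d = ExprTools.signature(m)      # a Dict like splitdef's, minus :body
d[:body] = :(3x)                # attach our own body...
eval(combinedef(d))             # ...and (re)define the method from it
# h(2) now returns 6
```

In the delegation use case, the attached body would instead forward the call to the wrapped field rather than replace the original method.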
The main alternative to this kind of approach is something along the lines of [Cassette.jl](https://github.com/JuliaLabs/Cassette.jl), which in effect allows overloading what it means to call a function. There are three key differences between an ExprTools-based generation-from-reflection approach and a Cassette-based overdubbing approach:
- Overdubbing occurs in a specific dynamically scoped context, while method generation applies globally.
- A downside of method generation is that it will not detect new methods added after the generation is performed; overdubbing does.
- An upside of method generation is that it is just plain Julia code, so it doesn’t break the compiler’s ability to do type inference. The compiler is completely prepared to deal with Julia code. This is (sadly, but demonstrably) not true for Cassette right now.
This talk will spend ~3 minutes covering the basics of ExprTools with `splitdef` and `combinedef`. It will spend ~4 minutes demonstrating `signature` and the generation of methods from the method table. It will spend ~1 minute peeking under the covers at how it all works.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/FEEV9A/
Purple
Frames Catherine White
PUBLISH
N7REEK@@pretalx.com
-N7REEK
Live Coding: Outreach and Beyond
en
en
20210729T123000
20210729T140000
1.03000
Live Coding: Outreach and Beyond
Live streaming is a recent phenomenon that has seen huge growth due to services such as Twitch, YouTube Live, and Facebook Live. Although this burgeoning community’s focus is generally on video games, vlogs, and talk shows, educational streams are a growing niche within it. Examples of such streams are students inviting audiences to “study with me” or educators hosting ask-me-anything sessions. One area in this niche that is particularly relevant for the Julia community is live coding.
Live coding is where software developers or programmers stream their programming work to a live audience. It can take many shapes: a developer working on an open source project, a coder learning a new language, or an interactive back-and-forth to create a novel application. Live coding works as a give-and-take relationship in which streamers get the opportunity to make new connections and the audience gets exposed to new programming styles or learns new skills. Often, this works in reverse as well.
For the Julia community, with the advent of the COVID-19 pandemic, many individuals were suddenly left in a common, but highly unusual, circumstance. As many have experienced and are experiencing, the days of being in a work office setting and having passing conversations with colleagues have become somewhat distant memories. Instead, many find themselves at home behind their computers with their only company being family, pets, or the whir of their computer’s fan. In conjunction with this, interest in live-streaming programming within the Julia community has been growing.
In this Birds of a Feather, we want to gather those people who have been live streaming within the Julia community and those interested in live streaming. In this gathering, streamers can share their ideas around best practices, tips and experiences in the streamer community. This could be a strong opportunity for the Julia community to also discuss how to reach and engage with people outside of the Julia community. Furthermore, this BoF could also lead to productive discussions on how to help within the Julia community whether that be in the form of increasing visibility to amazing Julia packages or leading teaching sessions.
Finally, this BoF would also provide an open avenue for individuals who are interested in live streaming to freely ask questions. This could range from questions such as “what is needed to get started as a streamer?” to “how do you build a great community around your stream?” As a result, this can not only increase outreach from the Julia community but also foster new and meaningful connections one could make - especially in the pandemic era.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/N7REEK/
BoF/Mini Track
Jacob Zelko
PUBLISH
C3EBJM@@pretalx.com
-C3EBJM
Julia in High-Performance Computing
en
en
20210729T163000
20210729T171500
0.04500
Julia in High-Performance Computing
# Agenda
## Short presentations about ongoing projects
- Ludovic Räss & Sam Omlin: GPU4GEO and Julia HPC development at ETH Zurich
- Simon Byrne: ClimateMachine.jl
- Valentin Churavy: CESMIX-MIT
- Johannes Blaschke: Julia@NERSC
## Roundtable discussion
- Julia in the DOE
- Teaching HPC
- MPI.jl
- Challenges of running Julia at scale
- Deploying Julia (Sysimages/Pkg/Depots/Artifacts/...)
- **Your suggestion**
PUBLIC
CONFIRMED
BoF (45 mins)
https://pretalx.com/juliacon2021/talk/C3EBJM/
BoF/Mini Track
Valentin Churavy
Michael Schlottke-Lakemper
Simon Byrne
Carsten Bauer
PUBLISH
RXBMHE@@pretalx.com
-RXBMHE
GPU programming in Julia BoF
en
en
20210729T171500
20210729T180000
0.04500
GPU programming in Julia BoF
PUBLIC
CONFIRMED
BoF (45 mins)
https://pretalx.com/juliacon2021/talk/RXBMHE/
BoF/Mini Track
Tim Besard
Julian P Samaroo
Valentin Churavy
PUBLISH
GWRZPV@@pretalx.com
-GWRZPV
Julia in Private Organizations
en
en
20210729T190000
20210729T203000
1.03000
Julia in Private Organizations
Every private organization works slightly differently in how it operates and the internal tooling it uses. As Julia users who work in private organizations, we'll use this BoF as an opportunity to discuss the unique challenges we've faced while using Julia within our organizations and how we've solved them. This BoF is suitable both for members of private organizations where Julia use is already established and for advocates pushing for Julia to be adopted.
Discussion points will include:
- Are you using repository hosting besides GitHub? Have you faced any challenges with integrating open-source tools? (e.g. CI tooling, GitHub specific tools)
- Did you face any challenges when setting up a private Julia package registry?
- What tooling do you use to assist with new package registry entries? (e.g. bots, RegistryCI.jl)
- How do you keep private code up to date with public dependencies? (e.g. major version changes, deprecations, etc.)
- How does Julia fit into your production environment? A service, batch job, etc.
- What cloud infrastructure do you use for running distributed Julia?
- Solutions for containerizing Julia: shared base images, optimizing startup time, etc.
- Procedures for moving closed-source to open-source?
- Advice for adopting Julia within an organization
Hopefully, this BoF will allow different organizations using similar tooling/techniques to connect and work together. The result of this could be an improved workflow experience for these organizations and ideally a much smoother transition for those organizations just starting to adopt Julia.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/GWRZPV/
BoF/Mini Track
Curtis Vogt
PUBLISH
3BBA7L@@pretalx.com
-3BBA7L
The Design of the MiniZinc Modelling Language
en
en
20210729T123000
20210729T130000
0.03000
The Design of the MiniZinc Modelling Language
MiniZinc was designed with the aim of becoming a 'standard' constraint-programming modelling language. As such, it is oriented towards the logical and combinatorial constraints standardized in the Global Constraints Catalogue, but it also supports continuous variables. The most important design criteria were expressiveness, simplicity of practical implementation, and mechanisms for easily plugging in new solvers. The solver interface incorporates a low-level language, FlatZinc (playing a role similar to MPS in mathematical programming), and a redefinition scheme for global constraints. The latter enables native handling of globals supported by a given solver, while applying default or solver-specific redefinitions for unsupported ones. Other solver technologies, such as SAT, local search, and MIP, have been interfaced, and several experimental interfaces exist, such as to quantum computing. The modelling system has enabled the annual MiniZinc Challenge solver competition since 2008.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/3BBA7L/
JuMP Track
Gleb Belov
PUBLISH
9KTFNJ@@pretalx.com
-9KTFNJ
ConstraintSolver.jl - First constraint solver written in Julia
en
en
20210729T130000
20210729T133000
0.03000
ConstraintSolver.jl - First constraint solver written in Julia
Constraint programming is used in a variety of fields, ranging from simple puzzle solving to big industrial instances. Currently Julia does not have a package for constraint programming, and JuMP itself is at the beginning of implementing constraints and variable sets to support constraint solvers in the future. ConstraintSolver.jl is a new Julia package that tackles the problem of solving constraint programming problems purely in Julia. This has advantages for prototyping new ideas, which is harder to do in low-level languages like C or C++. Additionally, the solver will be able to solve problems with types other than just integers and floating-point numbers, i.e. an integration with Unitful.jl will be possible. Another advantage of a solver written purely in Julia is the easy use of automatic differentiation.
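As a flavour of what this looks like through JuMP, here is a sketch based on our reading of the ConstraintSolver.jl README (the `CS.AllDifferent` set name is an assumption and may differ between versions):

```julia
using JuMP, ConstraintSolver
const CS = ConstraintSolver

model = Model(CS.Optimizer)
@variable(model, 1 <= x[1:4] <= 4, Int)
# all-different constraint via a JuMP set (assumed name; see the CS docs)
@constraint(model, x in CS.AllDifferent())
optimize!(model)
value.(x)   # a permutation of 1..4
```

Because the solver speaks MathOptInterface, the same `@variable`/`@constraint` vocabulary used for LP/MIP models carries over to constraint programming.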
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/9KTFNJ/
JuMP Track
Ole Kröger
PUBLISH
EHUKWK@@pretalx.com
-EHUKWK
ConstraintProgrammingExtensions.jl
en
en
20210729T133000
20210729T140000
0.03000
ConstraintProgrammingExtensions.jl
Constraint programming is a modelling paradigm that has proved to be extremely useful in many real-world scenarios, like computing optimum schedules or vehicle routings. It is often viewed as either a complementary or a competing technology to mathematical programming, trading modelling ease for computational efficiency. Both approaches have seen many developments in terms of modelling language and solvers alike, including in Julia. Even though several constraint-programming solvers are available (or entirely written) in Julia, [JuMP and MathOptInterface](https://jump.dev/) (its solver abstraction layer) do not give access to them in the same, unified way as mathematical programming, though the latest versions of JuMP have been designed to provide great flexibility.
[ConstraintProgrammingExtensions](https://github.com/dourouc05/ConstraintProgrammingExtensions.jl) is currently a one-man project bringing constraint programming to JuMP. Its main part is a large series of sets that aim at providing a common interface for constraint-programming solvers. It also consists of a series of bridges that define relationships between those sets (including between high-level constraints such as knapsacks and mathematical-programming formulations) and of a [FlatZinc](https://www.minizinc.org/) reader-writer to import and export models in that common format, already supported by tens of solvers. As a side effect, ConstraintProgrammingExtensions is also becoming a way to ease modelling for mathematical programming, as high-level constraints can be used with traditional mathematical-programming solvers.
This presentation details the current state of ConstraintProgrammingExtensions, some of its design decisions, and future developments when JuMP and MathOptInterface do not provide sufficient versatility: for instance, several constraint-programming solvers allow graphs as first-class decision variables; also, constraint programming is not restricted by the linearity or the convexity of mathematical expressions, unlike many mathematical-programming solvers.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/EHUKWK/
JuMP Track
Thibaut Cuvelier
PUBLISH
P8KJSW@@pretalx.com
-P8KJSW
Nonlinear programming on the GPU
en
en
20210729T163000
20210729T170000
0.03000
Nonlinear programming on the GPU
This talk walks through our recent experiences in our development efforts. The parallel layout of GPUs requires running as many operations as possible in batch mode, in a massively parallel fashion. We will detail how we have adapted the automatic differentiation, the linear algebra, and the optimization solvers to a batch setting and present the different challenges we have addressed. Our efforts have led to the development of different prototypes, each addressing a specific issue on the GPU: ExaPF for batch automatic differentiation, ExaTron as a batch optimization solver, and ProxAL for distributed parallelism. The research opportunities for the nonlinear optimization community are manifold: how can we leverage new automatic differentiation backends developed in the machine learning community for optimization purposes? How can we exploit the Julia language to develop a vectorized nonlinear optimization modeler targeting massively parallel accelerators?
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/P8KJSW/
JuMP Track
François Pacaud
PUBLISH
A3Z33C@@pretalx.com
-A3Z33C
MadNLP.jl: A Mad Nonlinear Programming Solver.
en
en
20210729T170000
20210729T171000
0.01000
MadNLP.jl: A Mad Nonlinear Programming Solver.
MadNLP leverages diverse sparse and dense linear algebra routines: UMFPACK, HSL routines, MUMPS, Pardiso, LAPACK, and cuSOLVER. The key feature of MadNLP is the adoption of scalable linear algebra methods: structure-exploiting parallel linear algebra (based on restricted additive Schwarz and Schur complement strategies) and GPU-based linear algebra (cuSOLVER). These methods significantly enhance the scalability of the solver to large-scale problem instances (e.g., long-horizon dynamic optimization, stochastic programs, and dense NLPs). Furthermore, MadNLP exploits Julia's extensibility so that new linear solvers can be added in a plug-and-play manner. In the presentation, we will present benchmark results against other open-source and commercial solvers as well as results highlighting MadNLP's advanced features. Our results suggest that (i) MadNLP has comparable speed and robustness to Ipopt/KNITRO when tested against the standard benchmark test set (CUTEst); (ii) MadNLP with structure-exploiting parallel linear algebra can achieve a speed-up of a factor of 3 when solving large-scale sparse nonlinear programs; and (iii) GPU acceleration achieves a speed-up of a factor of 10 when solving dense nonlinear optimization problems. The presentation will conclude with a future development roadmap, including the implementation of distributed-memory parallelism and a pure-GPU solver.
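Since MadNLP plugs into JuMP like any other MathOptInterface solver, trying it can be as simple as the following sketch (solving the Rosenbrock problem, whose minimum is at (1, 1)):

```julia
using JuMP, MadNLP

model = Model(MadNLP.Optimizer)
@variable(model, x, start = 0.0)
@variable(model, y, start = 0.0)
# Rosenbrock function: a standard nonconvex NLP test, minimized at x = y = 1
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
optimize!(model)
value(x), value(y)
```

Selecting one of the linear solver backends listed above is done through solver options; the attribute names are documented in the MadNLP repository.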
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/A3Z33C/
JuMP Track
Sungho Shin
PUBLISH
MVCXHB@@pretalx.com
-MVCXHB
Nonconvex.jl
en
en
20210729T171000
20210729T172000
0.01000
Nonconvex.jl
The method of moving asymptotes is also natively implemented in the package. The first-order augmented Lagrangian algorithm implemented in Percival.jl is particularly suitable for the AD-based approach because efficient adjoint rules of block constraints can be used when calculating the gradient of the augmented Lagrangian, instead of computing the entire Jacobian of the constraint functions. The nice thing about having a function-based API is that registering functions with JuMP.jl and splatting inputs are no longer needed, thus simplifying the nonlinear and mixed-integer nonlinear optimization interface. Future work includes using ModelingToolkit.jl to reverse-engineer the objective and constraint functions, generating mathematical expressions where possible and thus allowing the use of expression-based nonlinear and mixed-integer nonlinear solvers such as Alpine.jl.
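The function-based style mentioned above looks roughly like this. This is a sketch adapted from our reading of the Nonconvex.jl documentation (function names, algorithm constructors, and result fields are assumptions that may differ between versions); the problem is a classic two-variable MMA test case:

```julia
using Nonconvex

f(x) = sqrt(x[2])                      # objective: a plain Julia function
g(x, a, b) = (a * x[1] + b)^3 - x[2]   # constraint, no registration needed

model = Model(f)
addvar!(model, [0.0, 0.0], [10.0, 10.0])       # lower and upper bounds
add_ineq_constraint!(model, x -> g(x, 2, 0))
add_ineq_constraint!(model, x -> g(x, -1, 1))

alg = MMA87()                          # method of moving asymptotes
result = optimize(model, alg, [1.234, 2.345])  # initial point
result.minimum, result.minimizer
```

No `@NL...` macros, no function registration, no input splatting: the objective and constraints are ordinary closures that AD can differentiate directly.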
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/MVCXHB/
JuMP Track
Mohamed Tarek
PUBLISH
UZJWTT@@pretalx.com
-UZJWTT
NOMAD.jl
en
en
20210729T172000
20210729T173000
0.01000
NOMAD.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/UZJWTT/
JuMP Track
Ludovic Salomon
PUBLISH
FGUEAM@@pretalx.com
-FGUEAM
Linearly Constrained Separable Optimization
en
en
20210729T173000
20210729T180000
0.03000
Linearly Constrained Separable Optimization
***Note:*** *SeparableOptimization.jl was named "LCSO.jl" at the time of the presentation recording.*
[PiecewiseQuadratics.jl](https://github.com/JuliaFirstOrder/PiecewiseQuadratics.jl) allows for the representation and manipulation of such functions, including the computation of the proximal operator or the convex envelope. [SeparableOptimization.jl](https://github.com/JuliaFirstOrder/SeparableOptimization.jl) solves the problem of minimizing a sum of piecewise-quadratic functions subject to affine equality constraints by applying the Alternating Direction Method of Multipliers (ADMM). This allows us to quickly solve problems even when the univariate functions are very complicated. We demonstrate this with a portfolio construction example, in which the univariate functions represent the US tax laws for realized capital gains.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/FGUEAM/
JuMP Track
Ellis Brown
PUBLISH
SWBYRL@@pretalx.com
-SWBYRL
NExOS.jl for Nonconvex Exterior-point Operator Splitting
en
en
20210729T190000
20210729T193000
0.03000
NExOS.jl for Nonconvex Exterior-point Operator Splitting
We consider the problem of minimizing a convex cost function over a nonconvex constraint set, where projection onto the constraint set is single-valued around points of interest. A wide range of nonconvex learning problems have this structure including (but not limited to) sparse and low-rank optimization problems.
By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying an outer iteration operator splitting algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero.
NExOS.jl has been extensively tested on many instances from a wide variety of learning problems. In spite of being general-purpose, NExOS is able to compute high-quality solutions very quickly and is competitive with specialized algorithms.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/SWBYRL/
JuMP Track
Shuvomoy Das Gupta
PUBLISH
BR7WG8@@pretalx.com
-BR7WG8
Global constrained nonlinear optimisation with interval methods
en
en
20210729T193000
20210729T200000
0.03000
Global constrained nonlinear optimisation with interval methods
Interval arithmetic provides a computationally cheap way to compute an over-estimate of the range of a function over an input set. By using directed rounding, these estimates are guaranteed to be correct (mathematically rigorous), even though the computations are done using floating-point arithmetic.
This kind of range bounding can be used to design a conceptually-simple algorithm for guaranteed unconstrained global optimization, as in the talk presented at JuMP-dev Chile in 2019.
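A two-line example of such range bounding with IntervalArithmetic.jl (the enclosure is a guaranteed over-estimate; the slack comes from the so-called dependency problem, since `x` appears twice in the expression):

```julia
using IntervalArithmetic

f(x) = x^2 - 2x
X = 0..2    # the interval [0, 2]
# f(X) is an interval guaranteed to contain the true range [-1, 0];
# naive evaluation gives the wider enclosure [-4, 4].
f(X)
```

Branch-and-bound global optimization repeatedly bisects such boxes, discarding any box whose range bound proves it cannot contain the optimum.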
In this talk we show how to extend this to constrained optimization.
First we show how both the objective function and constraints can be modelled using symbolic expressions from the Symbolics.jl library. Based on these symbolic expressions we have a new implementation of interval constraint propagation, as implemented in the ReversePropagation.jl library, including common subexpression elimination.
One main difficulty in interval-based inequality-constrained optimization is deciding when a given box is feasible, i.e. satisfies all of the constraints. We have implemented what we believe to be a novel method to do so.
This is an extension to inequality-constrained optimization of the unconstrained algorithm mentioned above.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/BR7WG8/
JuMP Track
David P. Sanders
PUBLISH
8BWJXP@@pretalx.com
-8BWJXP
Calibration analysis of probabilistic models in Julia
en
en
20210730T123000
20210730T130000
0.03000
Calibration analysis of probabilistic models in Julia
**The Pluto notebook of this talk is available at https://talks.widmann.dev/2021/07/calibration/**
The talk focuses on:
- introducing/explaining calibration of probabilistic models
- discussing/showing how users can apply the offered evaluation measures and hypothesis tests
- highlighting the relation to the Julia ecosystem, in particular to packages such as KernelFunctions and HypothesisTests and interfaces via pyjulia (Python) and JuliaCall (R)
Probabilistic predictive models, including Bayesian and non-Bayesian models, output probability distributions of targets that try to capture uncertainty inherent in prediction tasks and modeling. In particular in safety-critical applications, it is important for decision-making that the model predictions actually represent these uncertainties in a reliable, meaningful, and interpretable way.
A calibrated model provides such guarantees. Loosely speaking, if the same prediction were obtained repeatedly, calibration ensures that in the long run the empirical frequencies of the observed outcomes are equal to this prediction. Note, though, that calibration alone is usually not sufficient: a constant model that always outputs the marginal distribution of targets, independently of the inputs, is calibrated but probably not very useful.
Commonly, calibration is analyzed for classification models, often also in a reduced binary setting that focuses on the most-confident predictions only. Recently, we published a framework for calibration analysis of general probabilistic predictive models, including but not limited to classification and regression models. We implemented the proposed methods for calibration analysis in different Julia packages such that users can incorporate them easily in their evaluation pipeline.
[CalibrationErrors.jl](https://github.com/devmotion/CalibrationErrors.jl) contains estimators of different calibration measures such as the expected calibration error (ECE) and the squared kernel calibration error (SKCE). The estimators of the SKCE are consistent, and both biased and unbiased estimators exist. The package uses kernels from KernelFunctions.jl, and hence many standard kernels are supported automatically.
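For example, an ECE estimate for a small classification model might be computed as follows (a sketch; the constructor arguments and call syntax may differ between CalibrationErrors.jl versions, so treat the details as assumptions to check against the package documentation):

```julia
using CalibrationErrors

# predicted class probabilities and observed classes for three samples
predictions = [[0.9, 0.1], [0.4, 0.6], [0.5, 0.5]]
targets = [1, 2, 1]

# ECE with 10 uniform bins over the probability simplex (default distance)
ece = ECE(UniformBinning(10))
ece(predictions, targets)   # estimated expected calibration error (>= 0)
```

Swapping the estimator for an SKCE estimator follows the same pattern, with a kernel from KernelFunctions.jl in place of the binning scheme.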
[CalibrationTests.jl](https://github.com/devmotion/CalibrationTests.jl) implements statistical hypothesis tests of calibration, so-called calibration tests. Most of these tests are based on the SKCE and can be applied to any probabilistic predictive model.
Finally, the package [CalibrationErrorsDistributions.jl](https://github.com/devmotion/CalibrationErrorsDistributions.jl) extends calibration analysis to models that output probability distributions from Distributions.jl. Currently, Gaussian distributions, Laplace distributions, and mixture models are supported.
To increase the adoption of these calibration evaluation techniques by the statistics and machine learning communities, we also published interfaces to the Julia packages in [Python](https://github.com/devmotion/pycalibration) and [R](https://github.com/devmotion/rcalibration).
### References
Widmann, D., Lindsten, F., & Zachariah, D. (2019). Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (pp. 12257–12267).
Widmann, D., Lindsten, F., & Zachariah, D. (2021). Calibration tests beyond classification. International Conference on Learning Representations (ICLR) 2021.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/8BWJXP/
Green
David Widmann
PUBLISH
WDFZWG@@pretalx.com
-WDFZWG
Julia Developer Survey Results
en
en
20210730T130000
20210730T131000
0.01000
Julia Developer Survey Results
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/WDFZWG/
Green
Viral B. Shah
PUBLISH
WWP7DR@@pretalx.com
-WWP7DR
SciML for Structures: Predicting Bridge Behavior
en
en
20210730T131000
20210730T132000
0.01000
SciML for Structures: Predicting Bridge Behavior
Structures in civil engineering are traditionally modelled using the finite element (FE) method. Although it is an extremely successful method, it has some shortcomings: (i) it can require substantial human effort to build complex models; and (ii) it can be difficult to combine with measurement data in order to increase model prediction accuracy. A way to overcome these shortcomings is to use data-driven machine learning approaches; however, these may require a prohibitive amount of measurement data and still perform poorly in extrapolation. Combining the machine learning model with scientific knowledge, i.e. scientific machine learning (SciML), may offer a practically tenable solution to the above challenges.
This talk aims to explore to what extent a scientific machine learning model can predict the structural response of a twin girder bridge in comparison with a data-driven machine learning model. The two approaches are compared with regards to prediction accuracy as well as the amount of data needed to achieve a particular accuracy. The comparison is made by using a synthetic case and a real-world case with field measurements.
The scientific machine learning model requires a formulation of the physics, which can be done in different manners. In engineering practice the structural behavior of bridges is typically described/predicted by FE models. For the SciML physics formulation we selected a simplified 2D beam model made up of 4-degrees-of-freedom linear elastic beam elements. This simple 2D-model is chosen in order to explore a very fast modelling workflow, potentially expandable to a digital tool for quick structural assessment. The 2D model is combined with a neural network in order to approximate the 3D bridge behavior. The neural network achieves this by representing a transverse load distribution function that describes what percentage of a concentrated load at a certain location is carried by the modelled 2D girder. As the bridge is composed of two identical girders, the rest of the load is assumed to be carried by the second girder. We do not take shear lag effects into account in our physics formulation.
The loss for the SciML model is calculated in three steps: first, the load on the 2D beam is determined by the neural network; second, the structural system is solved for this predicted load and sensor position; finally, the loss is calculated from the difference between the predicted structural responses and the measured ones.
For the data-driven model, a feedforward neural network tries to directly predict the structural response of the 2D girder for a certain sensor and load location. The difference between this prediction and the measured data is used to calculate the loss for the training of the neural network.
We chose to implement the SciML model in Julia because of its many attractive features, such as multiple dispatch and packages for automatic differentiation (e.g. Zygote.jl) and machine learning (e.g. Flux.jl). For the FE package, we wanted a lightweight, hackable package that would be easy to get started with, in order to provide a fast workflow. It was also desirable to have an FE package written 100% in Julia, in order to fully utilize Zygote for backpropagating through the FE solutions. We chose CALFEM.jl, a Julia port of the CALFEM package, originally developed in the late 1970s at Lund University in Sweden and subsequently improved over the decades, today typically used for teaching simple FE programming. CALFEM.jl lacks support for automatic differentiation, but because of the many favorable features of Julia, it was quite a simple task to implement AD support for the components that we needed. The source code of the analysis will be made open to the public.
The results show that a SciML approach can accurately predict structural behavior of the bridge using far less data points than a purely data driven approach. Moreover, the SciML approach is much better in extrapolation than the purely data-driven one. Our results show that at the moment purely data-driven approaches are impractical to predict structural responses and SciML seems to be a very promising addition to the toolbox of structural modelling approaches.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/WWP7DR/
Green
Axel Larsson
PUBLISH
NVSXHU@@pretalx.com
-NVSXHU
Simulating a public transportation system with OpenStreetMapX.jl
en
en
20210730T132000
20210730T133000
0.01000
Simulating a public transportation system with OpenStreetMapX.jl
*Co-authors: Nykyta Polituchyi, Kinga Siuta, Paweł Prałat*
The [OpenStreetMapX.jl](https://github.com/pszufe/OpenStreetMapX.jl) package is capable of parsing [*.osm](https://wiki.openstreetmap.org/wiki/OSM_file_formats) formatted data from the [OpenStreetMap.org](https://www.openstreetmap.org/) project. This data can subsequently be used to extract information about a city’s POIs (points of interest), measure actual distances, perform routing, and build numerical simulation models that make it possible to understand the dynamics of a city. We will illustrate those capabilities with a map of Toronto and show how to augment the OSM data with other sources in order to extend city routing beyond cars and sidewalks and model an actual public transportation network.
In this presentation two interconnected applications of the OpenStreetMapX.jl package will be presented. Firstly, mixed routing combining different means of transportation will be presented and discussed, showing how different Julia libraries can work together towards a common goal (including [OpenStreetMapXPlot](https://github.com/pszufe/OpenStreetMapXPlot.jl), [LightGraphs](https://github.com/JuliaGraphs/LightGraphs.jl), PyCall, Plots, DataFrames, and others). Secondly, an agent-based simulation of a public transportation system will be discussed. We will show how to model and measure the impact of the availability and frequency of public transportation on decisions made by commuters and, subsequently, its contribution towards spreading the pandemic.
The presentation is accompanied by a Jupyter notebook that has been available on the [OpenStreetMapX.jl GitHub project website](https://github.com/pszufe/OpenStreetMapX.jl) since the first day of JuliaCon 2021.
In summary, in this talk the following areas will be discussed:
- processing of OpenStreetMap data in Julia to obtain graph structures for processing with LightGraphs.jl
- visualizing graphs, maps and spatial data with OpenStreetMapXPlot.jl (GR, PyPlot backends) as well as integration with Leaflet via folium and PyCall
- building animations of a city using OpenStreetMapXPlot.jl combined with the `Plots.@animate` macro
- using Julia to augment OSM map data with external sources in order to build a routing mechanism that can include public transportation (metro, streetcars)
- combine this all into an agent simulation that can be used to model how the frequency and availability of a public urban transportation system contributes to the development of pandemic
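Once the OSM data has been turned into a graph, routing reduces to weighted shortest paths. A minimal sketch with LightGraphs.jl alone, using a hypothetical 4-intersection road graph with hard-coded travel times rather than the actual Toronto data (in the workshop the graph and weights come from the parsed OSM map):

```julia
using LightGraphs, SparseArrays

# Hypothetical road graph: 4 intersections, directed road segments.
g = SimpleDiGraph(4)
for (u, v) in [(1, 2), (2, 3), (1, 3), (3, 4)]
    add_edge!(g, u, v)
end

# Travel times (illustrative): the direct road 1 → 3 is slower than 1 → 2 → 3.
w = sparse([1, 2, 1, 3], [2, 3, 3, 4], [5.0, 5.0, 20.0, 2.0], 4, 4)

# Shortest travel time from intersection 1 to every other intersection.
state = dijkstra_shortest_paths(g, 1, w)
state.dists[4]               # route 1 → 2 → 3 → 4 costs 5 + 5 + 2 = 12
enumerate_paths(state, 4)    # recover the route itself
```

Public-transport routing in the talk works the same way, with extra edges and weights added for the transit network.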
*The research is financed by an NSERC, Canada, “Alliance COVID-19” grant titled “COVID-19: Agent-based framework for modelling pandemics in urban environment”.*
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/NVSXHU/
Green
Przemysław Szufel
PUBLISH
ETY3B7@@pretalx.com
-ETY3B7
JuliaSim: Machine Learning Accelerated Modeling and Simulation
en
en
20210730T133000
20210730T140000
0.03000
JuliaSim: Machine Learning Accelerated Modeling and Simulation
Julia is known for its speed, but how can you keep making things faster after all of the standard code optimization tricks run out? The answer is machine learning reduced or approximate models. JuliaSim is an extension to the Julia SciML ecosystem for automatically generating machine learning surrogates which accurately reproduce model behavior. In this talk we will showcase how you can take your existing ModelingToolkit.jl models and automate the model order reduction of their components. By hooking into the hierarchical modeling ecosystem, the same surrogate can be used across many models without requiring retraining.
We will show the benefits of this process on energy-efficient building design, which has been accelerated by orders of magnitude over the Dymola Modelica implementation by using neural surrogatized HVAC models. We will demo simultaneous translation and acceleration of components designed outside of Julia through JuliaSim's ability to take in Functional Mock-up Units (FMUs) from Modelica and Simulink, along with domain-specific modeling definitions like SPICE netlists of electrical circuits and Pumas pharmacometric models. Similarly, this system allows for generating digital twins of real objects from their measurements, allowing one to quickly incorporate components with less physical understanding directly through their data.
We will show a JuliaHub-based parallelized training platform that allows offloading the training process to the cloud. This will allow engineers to pull pre-accelerated models from the ever-growing JuliaSim Model Store directly into their Julia-based designs for fast exploration. Together this will leave the audience ready to integrate ML-accelerated modeling and simulation tools into their workflows.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/ETY3B7/
Green
Chris Rackauckas
PUBLISH
S3RH8Z@@pretalx.com
-S3RH8Z
Keynote (Soumith Chintala)
en
en
20210730T143000
20210730T151500
0.04500
Keynote (Soumith Chintala)
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/S3RH8Z/
Green
PUBLISH
8CMRGC@@pretalx.com
-8CMRGC
Sponsor talk - RelationalAI
en
en
20210730T151500
20210730T152500
0.01000
Sponsor talk - RelationalAI
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/8CMRGC/
Green
PUBLISH
VSMCQG@@pretalx.com
-VSMCQG
The state of DataFrames.jl
en
en
20210730T152500
20210730T154000
0.01500
The state of DataFrames.jl
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/VSMCQG/
Green
Bogumił Kamiński
PUBLISH
AX3VYR@@pretalx.com
-AX3VYR
Sponsor talk - JuliaComputing
en
en
20210730T154000
20210730T155500
0.01500
Sponsor talk - JuliaComputing
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/AX3VYR/
Green
PUBLISH
VJEVMQ@@pretalx.com
-VJEVMQ
Closing remarks
en
en
20210730T155500
20210730T160000
0.00500
Closing remarks
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2021/talk/VJEVMQ/
Green
PUBLISH
9TYSM9@@pretalx.com
-9TYSM9
GatherTown -- Social break
en
en
20210730T180000
20210730T190000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/9TYSM9/
Green
PUBLISH
T7UFDU@@pretalx.com
-T7UFDU
Introducing Chemellia: Machine Learning, with Atoms!
en
en
20210730T190000
20210730T193000
0.03000
Introducing Chemellia: Machine Learning, with Atoms!
Machine learning is a promising approach in science and engineering for “filling the gaps” in modeling, particularly in cases where substantial volumes of training data are available. These techniques are becoming increasingly popular in the chemistry and materials science communities, as evidenced by the popularity of Python packages such as DeepChem and matminer. Clearly, there are many potential benefits to building, training, and running such models in Julia, including improved performance, better code readability, and perhaps most importantly, a multitude of prospects for composability with packages from the broader SciML ecosystem, allowing integration with packages for differential equation solving, sensitivity analysis, and more.
In this talk, I introduce [Chemellia](https://github.com/chemellia): an ecosystem for machine learning on atomic systems based on Flux.jl. In particular, I will focus on two packages I have been developing that will be core to Chemellia. [ChemistryFeaturization](https://github.com/Chemellia/ChemistryFeaturization.jl) represents a novel paradigm in data representation of molecules, crystals, and more. It defines flexible types for features associated with individual atoms, pairs of atoms, etc., as well as for representing featurized structures in the form of, for example, a crystal graph (the AtomGraph type, which dispatches the relevant set of functions so that all of the LightGraphs analysis capabilities “just work”). It also implements an easily extensible set of modular featurization schemes to create inputs for a variety of models, graph-based and otherwise. A core design principle of the package is that all featurized data types carry the requisite metadata to “decode” their features back to human-readable values.
[AtomicGraphNets](https://github.com/Chemellia/AtomicGraphNets.jl) provides a Julia implementation of the increasingly popular crystal graph convolutional neural net model architecture that trains and runs nearly an order of magnitude faster than the Python implementation, and requires fewer trainable parameters to achieve the same accuracy on benchmark tasks due to a more efficient and expressive convolutional operation. The layers provided by this package can be easily combined into other architectures using Flux’s utility functions such as Chain and Parallel.
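At its simplest, the graph-convolutional operation mentioned above is a neighborhood aggregation followed by a learned linear transform and a nonlinearity. A plain-Julia sketch on a toy 3-atom graph (illustrative only; this is not AtomicGraphNets' actual layer, and the numbers are made up):

```julia
# Toy crystal graph: 3 atoms, adjacency matrix with self-loops.
A = [1.0 1.0 0.0;
     1.0 1.0 1.0;
     0.0 1.0 1.0]

# Per-atom feature matrix (3 atoms × 2 features) and trainable weights (2 × 2).
X = [0.5 1.0;
     0.2 0.3;
     0.9 0.1]
W = [0.1 0.4;
     0.7 0.2]

relu(x) = max(x, zero(x))

# One graph-convolution step: aggregate neighbor features, transform, activate.
H = relu.(A * X * W)
size(H)   # (3, 2): one updated feature vector per atom
```

In a real model, layers like this are stacked (e.g. via Flux's `Chain`) and `W` is learned by gradient descent.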
We have some great summer student developers working on these packages now and would welcome further community feedback and contributions!
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/T7UFDU/
Green
Rachel Kurchin
PUBLISH
ME7JE9@@pretalx.com
-ME7JE9
Simulating Chemical Kinetics with ReactionMechanismSimulator.jl
en
en
20210730T193000
20210730T200000
0.03000
Simulating Chemical Kinetics with ReactionMechanismSimulator.jl
Large chemical kinetic systems are important in many fields including atmospheric chemistry, combustion, pyrolysis, polymers, oxidation, catalysis and electrocatalysis. Traditional C++ and Fortran tools for simulating these systems tend to be difficult to extend, have difficulty integrating modern numerical techniques such as automatic differentiation and adjoint sensitivities, and have outdated or lacking mechanism analysis tools. We present [ReactionMechanismSimulator.jl](https://github.com/ReactionMechanismGenerator/ReactionMechanismSimulator.jl), a Julia package for simulating and analyzing kinetic systems.
ReactionMechanismSimulator.jl was designed with extension in mind. Its parser can automatically parse and use newly added kinetic, thermodynamic, phase and domain models as soon as the associated structure is defined, with no other code modifications. In addition to analytic Jacobians for common systems, it provides automatic and symbolic Jacobians through ForwardDiff.jl and ModelingToolkit.jl. Forward and adjoint sensitivity analyses are implemented using Julia’s SciML toolkit. ReactionMechanismSimulator.jl includes a suite of molecular-structure-aware plotting and flux diagram generation software that facilitates efficient analysis of kinetic mechanisms.
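At its core, simulating such a mechanism means integrating mass-action rate equations. A deliberately minimal plain-Julia sketch of a hypothetical reversible isomerization A ⇌ B with a fixed-step Euler integrator (this illustrates the underlying math only, not RMS's actual API, which builds on the SciML solver stack):

```julia
# A ⇌ B with forward rate kf and reverse rate kr (hypothetical values).
function simulate(kf, kr, cA0, cB0; dt = 1e-3, nsteps = 10_000)
    cA, cB = cA0, cB0
    for _ in 1:nsteps
        r = kf * cA - kr * cB   # net mass-action rate of A → B
        cA -= r * dt
        cB += r * dt
    end
    return cA, cB
end

# Mass is conserved and the system approaches equilibrium cB/cA = kf/kr = 2.
cA, cB = simulate(2.0, 1.0, 1.0, 0.0)
```

Real mechanisms couple thousands of such equations, which is where stiff implicit solvers and the automatic Jacobians described above become essential.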
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/ME7JE9/
Green
Matthew S Johnson
PUBLISH
7PVG8Z@@pretalx.com
-7PVG8Z
Clapeyron.jl: An Extensible Implementation of Equations of State
en
en
20210730T200000
20210730T201000
0.01000
Clapeyron.jl: An Extensible Implementation of Equations of State
Thermodynamic models represent a key tool for a variety of applications; this includes the study of complex systems (electrolytes, polymers, pharmaceuticals, etc), process modelling and molecular design. However, it is not uncommon for thermodynamic models to involve hundreds of different components, especially with the more modern equations of state like those built from Statistical Associating Fluid Theory (SAFT), whose ability to model complex phenomena (such as hydrogen bonding and London dispersion interactions) comes at the cost of complicated mathematical formulation. Existing implementations are often abstruse, if they are open to the public at all, which is likely the main reason for the high barrier to entry into the field. Beyond those mathematical functions, it is also an exercise in working out the physical properties by exploiting thermodynamic relations, which may involve the use of highly non-linear solvers for problems with near-singular Jacobians, and solving for the global minimum of a non-convex, non-linear problem. The actual execution tends to be application-specific and difficult to extend, even if one had a full understanding of the procedures that are traditionally written in FORTRAN.
Enter Julia, a language that seems to provide the most natural realisation of every step of this process. OpenSAFT is a framework that makes it easy to build SAFT-type (or any free-energy-based) models such that researchers and enthusiasts alike will be able to focus on the actual thermodynamics and algorithms without worrying about the implementation. With the Julia culture that completely embraces Unicode identifiers and terse syntax for mathematical operations, we are able to create nearly one-to-one translations of the mathematical expressions in the literature to code, removing the layer of obfuscation that usually appears when writing high-performance code.
Differentiable programming is a concept that is extensively used in modern statistical-learning tools, but is still relatively unknown to much of the scientific community. We are now able to trivially obtain any-order derivatives of the Helmholtz free energy function, instead of having to work out the corresponding expressions for each model. Suddenly, it all becomes a plug-and-play solution where the user can just write out the model equations and have OpenSAFT seamlessly obtain all the relevant properties. By careful selection of parameter types, nearly every part of the code can be easily modified or extended so that users will be able to take direct control of the solvers if necessary. This allows people to easily pry into the inner workings of thermodynamic equations of state and study how they can be set up and used.
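The automatic-differentiation idea can be sketched with a hand-rolled forward-mode dual number: pressure falls out as the negative volume derivative of a Helmholtz energy function, shown here for the ideal-gas term only (plain Julia for illustration, not OpenSAFT's implementation):

```julia
# Minimal forward-mode dual number: value and derivative carried together.
struct Dual
    val::Float64
    der::Float64
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
Base.:*(a::Real, b::Dual) = Dual(a * b.val, a * b.der)
Base.log(a::Dual)         = Dual(log(a.val), a.der / a.val)

derivative(f, x) = f(Dual(x, 1.0)).der

# Ideal-gas Helmholtz energy a(V) at fixed T and n; p = -∂a/∂V = nRT/V.
R, T, n = 8.314, 300.0, 1.0
a(V) = -(n * R * T) * log(V)

V = 0.024
p = -derivative(a, V)   # recovers the ideal-gas pressure without a hand-derived formula
```

A SAFT model simply supplies a much more elaborate `a`; the derivative machinery is unchanged, which is exactly the plug-and-play property described above.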
With Julia, OpenSAFT has the potential to revolutionise thermodynamic research and education, and we think that this effort will also greatly help to bridge the gap between cutting-edge development in academia and actual practical use in industry. Perhaps it could also inspire scientists from other domains to invest in bringing over their work to Julia where everything “just works”.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/7PVG8Z/
Green
Paul Yew
Pierre Walker
Andrés Riedemann
PUBLISH
YGWUVE@@pretalx.com
-YGWUVE
Modia – Modeling Multidomain Engineering Systems with Julia
en
en
20210730T201000
20210730T202000
0.01000
Modia – Modeling Multidomain Engineering Systems with Julia
Modia (www.ModiaSim.org) is a set of Julia packages for modeling and simulation of coupled multidomain engineering systems (electrical, 3D mechanical, fluid, etc.). It shares many powerful features of the Modelica (www.Modelica.org) language. In the talk status and plans for Modia are presented.
A new, simple yet powerful syntax has been introduced in Modia, based on Julia named tuples and recursive merging. An electrical resistor can, for example, be defined as:
```julia
Resistor = OnePort | Model( R = 1.0u"Ω", equation = :[ R*i = v ] )
```
The `|` operator denotes a recursive merge between the named tuple `OnePort` and a new `Model` (named tuple) adding a parameter `R` and Ohm's law as an equation; that is, it corresponds to extending the model `OnePort`, which provides the variables and equations. Such a resistor can then be instantiated:
```julia
R = Resistor | Map(R=0.5u"Ω")
```
with an updated value of the resistance `R`. The `Model` constructor builds a named tuple which only adds attributes during merge, while `Map` only updates attributes. This use of named tuples unifies and generalizes the inheritance, hierarchical modifiers and replaceable models of Modelica. Component instances such as `R` have ports (defined in `OnePort`) which are connected to form complete hierarchical system models.
For certain kinds of models, such as multibody systems, the order of evaluating the component equations is independent of the model topology. This means that algorithmic functions can be used for each model component and called according to the connection topology. This avoids repeated structural and symbolic analysis of the multibody equations, considerably reduces the code size, and makes pre-compilation possible. Modia makes it possible to express such kinds of models together with equation-based models.
New symbolic algorithms transform the Modia equations to ODEs (Ordinary Differential Equations in state space form) and generate a Julia function that can be used to simulate the transformed model with ODE integrators of DifferentialEquations.jl.
When instantiating a Modia model, the floating-point type of the Modia variables can be defined. This makes it easy, for example, to model uncertainty propagation with Measurements.jl or to perform Monte Carlo simulation with MonteCarloMeasurements.jl. The hierarchical named-tuple description of a model can be easily mapped to a JSON file. As a result, the complete parameterization of a Modia model, or the complete Modia model itself, can be exchanged in a straightforward way with a Web App for model composition by drag-and-drop and for 3D animation.
Hilding Elmqvist: Mogram AB
Martin Otter, Andrea Neumayr, Gerhard Hippmann: DLR Institute of System Dynamics and Control
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/YGWUVE/
Green
Hilding Elmqvist
Martin Otter
Andrea Neumayr
PUBLISH
X3SAWW@@pretalx.com
-X3SAWW
Optical simulation with the OpticSim.jl package
en
en
20210730T202000
20210730T203000
0.01000
Optical simulation with the OpticSim.jl package
OpticSim.jl is an open source (https://github.com/microsoft/OpticSim.jl) Julia package for simulation and optimization of complex optical systems developed by the Microsoft Research Interactive Media Group and the Microsoft HART group.
It is designed to allow optical engineers to create optical systems procedurally and then to simulate and optimize them.
A large variety of surface types are supported, and these can be composed into complex 3D objects through the use of constructive solid geometry (CSG). A substantial catalog of optical materials is provided through the GlassCat submodule.
The software provides extensive control over the modelling, simulation, and visualization of optical systems. It is especially suited for designs that have a procedural architecture.
The talk will explain how to use OpticSim.jl to simulate various types of optical systems.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/X3SAWW/
Green
Brian Guenter
Charlie Hewitt
PUBLISH
8LK7KU@@pretalx.com
-8LK7KU
GatherTown -- Social break
en
en
20210730T203000
20210730T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/8LK7KU/
Green
PUBLISH
W7RT9F@@pretalx.com
-W7RT9F
JuliaCon Hackathon
en
en
20210730T213000
20210731T213000
0.00000
JuliaCon Hackathon
The event will take place over 24 hours! Join us to build something you are excited about in Julia. We will also have mentors available to help if you run into issues.
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon2021/talk/W7RT9F/
Green
PUBLISH
WZ7YM9@@pretalx.com
-WZ7YM9
Symbolics.jl - fast and flexible symbolic programming
en
en
20210730T123000
20210730T130000
0.03000
Symbolics.jl - fast and flexible symbolic programming
Symbolic systems tend to excel either in flexibility or in performance. For example, SymPy is highly flexible and has a good set of term-rewriting functionality, but is slow. On the other hand, projects like OSCAR are specialized tools for computational algebra -- problems are hard to set up but computations are highly efficient. Further, neither of these types of tools actually helps you turn symbolic expressions into executable code.
In this talk, we introduce the Symbolics.jl and the underlying SymbolicUtils.jl packages. We also talk about the term-rewriting system and ways to write passes that transform symbolic expressions with user-defined custom rules.
Outline:
- Why is Symbolics.jl useful
- Example of symbolic basic manipulation
- Benchmark vs SymPy
- Code generation example
- Differentiation syntax (comparison with other systems and possibilities, and AD)
- Fast sparsity detection
- Under the hood
- Wrapper to make symbolic expression: `Num <: Number`
- Syms and Terms
- Fast canonical form
- Term interface
- Expression rewriting
- Rule syntax
- Chaining and pipelining rules
- Simplification
- Polynomial form from AbstractAlgebra
- ModelingToolkit
- How ModelingToolkit builds a simulation system on top of Symbolics
- Use of build_function in ODE solver
- Structural simplification example with a bit of all the clever ideas (Attend Chris’s talk and workshop)
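As a taste of the topics in the outline, here is a small end-to-end example combining symbolic differentiation and code generation (assuming the documented Symbolics.jl API):

```julia
using Symbolics

@variables x y
expr = x^2 + 2x*y + y^2

# Symbolic differentiation with respect to x.
D = Differential(x)
dexpr = expand_derivatives(D(expr))          # 2x + 2y

# Code generation: compile the symbolic result into a callable Julia function.
f = build_function(dexpr, x, y; expression = Val(false))
f(1.0, 2.0)                                  # evaluates 2·1 + 2·2 = 6
```

The same `build_function` mechanism is what ModelingToolkit uses to emit right-hand-side functions for the ODE solvers.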
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/WZ7YM9/
Red
Shashi Gowda
PUBLISH
F9PLVY@@pretalx.com
-F9PLVY
Unleashing Algebraic Metaprogramming in Julia with Metatheory.jl
en
en
20210730T130000
20210730T133000
0.03000
Unleashing Algebraic Metaprogramming in Julia with Metatheory.jl
We introduce Metatheory.jl: a lightweight and performant general-purpose symbolics and metaprogramming framework meant to simplify the act of writing complex Julia metaprograms and to significantly enhance Julia with a native term-rewriting system. It is based on state-of-the-art equality saturation techniques and a dynamic, first-class AST pattern matching system that is composable in an algebraic fashion, taking full advantage of the language’s powerful reflection capabilities. Our contribution allows performing general-purpose symbolic mathematics, manipulation, optimization, synthesis or analysis of syntactically valid Julia expressions with a clean and concise programming interface, both during compilation and during execution of programs. We are currently experimenting with optimizing mathematical code and with equational theorem-proving strategies. This talk will discuss algebraic equational reasoning with examples from logic, program analysis, abstract algebra and category theory.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/F9PLVY/
Red
Alessandro Cheli
Philip Zucker
PUBLISH
G8LARY@@pretalx.com
-G8LARY
Towards a symbolic integrator with Rubin.jl
en
en
20210730T133000
20210730T134000
0.01000
Towards a symbolic integrator with Rubin.jl
Rubin.jl will be based on Symbolics.jl, a novel foundation for a Julian CAS. The goals of Rubin.jl are to:
[X] Convert all the RUBI rules into a huge JSON
[X] Convert all the RUBI unit tests into a huge JSON
[ ] Parse the JSON files into Rubin Rules and Rubin tests
[ ] Benchmark the test suite and assess discrepancies
Symbolics.jl is a Julia-based term-rewriting system that allows the user to specify that a "left hand side" symbolic expression should be transformed into the expression on the right-hand side. Symbolic integration is useful for pure and applied mathematics, and this will help bring even more users to Julia.
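For illustration, a single RUBI-style rule, the power rule of integration, can be expressed as a SymbolicUtils.jl rewrite rule (a sketch of the idea only, not Rubin.jl's actual rule format, which additionally carries applicability conditions such as n ≠ -1):

```julia
using SymbolicUtils

@syms x

# RUBI-style rule: ∫ y^n dy = y^(n+1)/(n+1), written as a rewrite
# from the integrand pattern to its antiderivative.
int_power = @rule (~y)^(~n) => (~y)^(~n + 1) / (~n + 1)

int_power(x^3)   # the antiderivative of x^3
```

RUBI organizes thousands of such rules into a decision tree; parsing them into this form is exactly the third checklist item above.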
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/G8LARY/
Red
Miguel Raz Guzmán Macedo
PUBLISH
ARURL8@@pretalx.com
-ARURL8
AlgebraicDynamics: Compositional dynamical systems
en
en
20210730T134000
20210730T135000
0.01000
AlgebraicDynamics: Compositional dynamical systems
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/ARURL8/
Red
Sophie Libkind
James Fairbanks
PUBLISH
WQ8MJK@@pretalx.com
-WQ8MJK
The OSCAR Computer Algebra System
en
en
20210730T135000
20210730T140000
0.01000
The OSCAR Computer Algebra System
In this talk we present OSCAR, an **O**pen **S**ource **C**omputer **A**lgebra **R**esearch system for computations to support research in abstract algebra, algebraic geometry, group theory, number theory, and more. It builds on decades of experience by extending and integrating our four existing cornerstone systems:
- [GAP](https://www.gap-system.org/) - group and representation theory (via [GAP.jl](https://github.com/oscar-system/GAP.jl)),
- [Singular](https://www.singular.uni-kl.de/) - commutative and non-commutative algebra, algebraic geometry (via [Singular.jl](https://github.com/oscar-system/Singular.jl)),
- [Polymake](https://polymake.org/doku.php) - polyhedral geometry (via [Polymake.jl](https://github.com/oscar-system/Polymake.jl)),
- Antic ([Hecke](https://github.com/thofma/Hecke.jl/), [Nemo](http://nemocas.org/)) - number theory.
These are joined together under a common Julia interface in the [Oscar.jl](https://github.com/oscar-system/Oscar.jl) package.
Applications of our computational capabilities exist well beyond pure mathematics (e.g. in coding theory, cryptography, crystallography, robotics, ...).
While OSCAR is still under heavy development, many useful features are already available, and more are in the works. We will give an overview of existing capabilities and give a preview of what will come in the future. We will also outline what sets us apart from Symbolics.jl (which has a very different scope).
The development of OSCAR is supported by the Deutsche Forschungsgemeinschaft DFG within the [Collaborative Research Center TRR 195](https://www.computeralgebra.de/sfb/).
Outside contributions to OSCAR are highly welcome. Please talk to us:
- on our own Slack (use this [invite link](https://join.slack.com/t/oscar-system/shared_invite/zt-thtcv97k-2678bKQ~RpR~5gZszDcISw), or [email me](mailto:horn@mathematik.uni-kl.de) if it does not work)
- on the Julia Slack in `#oscar` or `#algebra`
- via our mailing list, join at <https://mail.mathematik.uni-kl.de/mailman/listinfo/oscar-dev>
- via issues and PRs on our various GitHub repositories.
Additional information can be found on our homepage, <https://oscar.computeralgebra.de>.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/WQ8MJK/
Red
Max Horn
Claus Fieker
PUBLISH
SHHKEM@@pretalx.com
-SHHKEM
Enabling Rapid Microservice Development with a Julia SDK
en
en
20210730T190000
20210730T193000
0.03000
Enabling Rapid Microservice Development with a Julia SDK
The benefits of microservices architectures are well understood. They have the potential to be more agile, enable each service to pick the best technology for its purpose and be scaled or autoscaled appropriately, and can be easily deployed and managed through common open-source technologies.
Developing a solution with a microservices architecture, however, can have some disadvantages. A developer creating a new microservice must understand the external interfaces to that service, how it is tested and deployed, and how to configure the service to run and scale as required. This increases the skill requirement of a developer and can take time away from what the developer is actually trying to do – create a new bit of functionality. It can also result in inconsistencies in code behaviour and style between services. This problem is compounded when the developers are using a new language and are unfamiliar with what is possible and with best practices, and further compounded when that language is itself rapidly developing.
We faced this challenge when developing a modelling and simulation platform built and deployed as Julia microservices. Julia was a new language for the majority of our team, and we needed to quickly design and deploy many services. Furthermore, we wanted to continue to use recent open-source developments without requiring all of our developers to stay up to date with package and language advancements. To enable our developers to focus on what they’re best at (i.e., the functionality of the service they’re developing) and mitigate the issues mentioned, we made use of Julia’s excellent ecosystem to create a Software Development Kit (SDK).
In this talk, we describe the components of the SDK, how they enable both efficient development and use of new open-source advances, and how similar approaches can be used by your team as you build microservices. We will discuss how Julia’s package system makes it ideal for SDK use.
Located in a private registry, the SDK includes a microservice template, a utility package template, various utility packages and a custom GraphQL interface. The microservice template is the starting point for a developer writing a new microservice and includes the following functionality:
- Default communication routes (service execution, liveness etc)
- Logging behaviour
- Automatic documentation
- Asynchronous and multithreading tools
- CI/CD scripts (testing, building and cloud deployment)
We will discuss the above, including detailing the various open-source packages used for each function.
We will also describe how providing utility packages in an SDK enables teams to make use of the best open-source developments, which may be developing and changing frequently, through a stable API. For example, when handling large volumes of data, it is often desirable to encode and decode numeric arrays to minimise data transfer. One package in our SDK provides a simple encode and decode interface, where the specifics of what compression packages are being used can be updated as required without the majority of users needing to stay abreast of open-source developments. We will detail this approach and give other examples of where we have found it useful. Finally, we will describe our custom GraphQL interface, which wraps a generic interface in a similar method to the utility packages, enabling developers to quickly interact with and make use of our platform.
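A sketch of what such a stable encode/decode surface might look like (a hypothetical interface; `Base64` from the standard library stands in here for whatever compression codec the SDK actually wraps):

```julia
using Base64   # stdlib stand-in; the real SDK would wrap a compression codec

# Hypothetical stable SDK surface: callers only ever see encode/decode,
# so the underlying codec can be swapped without breaking any service.
encode(a::Vector{Float64}) = base64encode(reinterpret(UInt8, a))
decode(s::AbstractString)  = collect(reinterpret(Float64, base64decode(s)))

v = [1.0, 2.5, -3.25]
s = encode(v)
decode(s) == v   # the round trip is lossless
```

Because services depend only on `encode`/`decode`, upgrading the codec is a change to one utility package rather than to every microservice.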
To conclude, Julia is a powerful language with an exceptional ecosystem. In this talk, we will demonstrate how it can be used to create an efficient environment for microservice development which lets developers focus on what they’re developing, ensures all services use the best of the open-source community and generally makes things much more straightforward.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/SHHKEM/
Red
Malcolm Miller
PUBLISH
9LCKEQ@@pretalx.com
-9LCKEQ
kubernetes-native julia development
en
en
20210730T193000
20210730T194000
0.01000
kubernetes-native julia development
In this setup, from a julia project directory, you can:
- drop into a julia REPL that is running on your k8s cluster
- edit source files locally, use Revise and get back results saved to disk, via a 2-way sync between the local julia project directory and the corresponding directory in the k8s container
- sync REPL history across local and remote julia sessions
- easily spin up and use Distributed workers from within the julia session
- automatically build and use images containing julia, with chosen dependencies baked in a PkgCompiler sysimage, precompiled julia project, and (optionally) CUDA
- minimize time-to-first-command-completion with cached image builds; first use in a project directory takes a long time to build, but subsequently spinning up is fast
- set RAM/cpu/disk resources for the main julia session and any Distributed workers
- set julia (and CUDA) versions independently for each session
- run your work as a non-interactive job once it is ready
This tries to make minimal assumptions about the k8s setup; requirements are access to the cluster via `kubectl` and to a container registry that the k8s cluster can pull from.
Tools needing to be installed locally are:
- kubectl
- docker buildkit
- devspace sync
The julia-specific tools developed to make this possible are [K8sClusterManagers.jl](https://github.com/beacon-biosignals/K8sClusterManagers.jl) and `julia_pod`.
This workflow is developed and used day-to-day at [Beacon Biosignals](https://beacon.bio/).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/9LCKEQ/
Red
Kolia Sadeghi
PUBLISH
GXLNHG@@pretalx.com
-GXLNHG
Rewriting Pieces of a Python Codebase in Julia
en
en
20210730T194000
20210730T195000
0.01000
Rewriting Pieces of a Python Codebase in Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/GXLNHG/
Red
Satvik Souza Beri
PUBLISH
MSTYCZ@@pretalx.com
-MSTYCZ
Julia in the Windows Store
en
en
20210730T195000
20210730T200000
0.01000
Julia in the Windows Store
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/MSTYCZ/
Red
David Anthoff
PUBLISH
TXAWKU@@pretalx.com
-TXAWKU
Redwood: A framework for clusterless supercomputing in the cloud
en
en
20210730T200000
20210730T203000
0.03000
Redwood: A framework for clusterless supercomputing in the cloud
Through the rise in popularity of deep learning and large-scale numerical simulations, high-performance computing (HPC) has entered the mainstream of scientific computing. Today, HPC techniques are required by a wider and wider audience, in fields including machine and deep learning, weather forecasting, medical and seismic imaging, computational genomics, fluid dynamics and others. HPC workloads have traditionally been deployed to on-premise high-performance computing clusters and were therefore only available to a very limited number of researchers or corporations. With the rise of cloud computing, HPC resources have in principle become available to a much wider audience, but managing HPC infrastructure in the cloud is challenging. As the cloud provides a fundamentally different computing infrastructure from an on-premise supercomputer, users need to build environments and applications that are resilient, that are cost efficient, and that are able to leverage cloud-related opportunities such as elastic (hyper-scale) compute and heterogeneous infrastructure.
Naturally, the current approach to porting HPC applications to the cloud is to replicate the infrastructure of on-premise supercomputing centers with cloud resources. Cloud services such as AWS ParallelCluster or Azure CycleCloud enable users to create virtual HPC clusters that consist of login nodes, job schedulers, a set of compute instances, networking and distributed storage systems. Even cloud-native approaches such as Kubernetes follow this cluster-based architecture, albeit using containerization and novel schedulers. However, from the user's perspective both follow a two-step approach in which users first create a (virtual) HPC cluster in the cloud and then submit their parallel program to the cluster. This makes running HPC applications in the cloud challenging, as users have to act as cluster administrators who manage the HPC infrastructure before being able to run their application.
In this work, we argue for the case of clusterless supercomputing in the cloud, in which the user application essentially takes over the role of the job scheduler and cluster orchestrator. Instead of a two-step process in which users first create a cluster and then submit their job to it, the application is executed anywhere and dynamically manages the required compute infrastructure at runtime. To enable this type of clusterless HPC, which is heavily inspired by serverless orchestration frameworks, we introduce Redwood, an open-source software package for clusterless supercomputing on the Azure cloud. Redwood provides a set of distributed programming macros that are designed in accordance with Julia's existing macros for distributed computing, around the principles of remote function calls and futures. Unlike Julia's standard distributed computing framework, Redwood does not require a parallel Julia session that is running on a set of interconnected nodes (i.e., a cluster). Instead, Redwood executes functions that are tagged for remote (parallel) execution via cloud services such as Azure Batch or Azure Functions by creating a closure around the executed code and running it remotely through the respective cloud service. Results, namely function outputs, are written to cloud object stores and remote references are returned to the user.
In this talk, we discuss the architecture and implementation of Redwood and present HPC scenarios that are enabled by it. This includes large-scale MapReduce workloads, computations that are distributed across multiple data centers or even regions, as well as combinations of data- and model-parallel applications in which users can execute multiple distributed-memory MPI workloads in parallel. Additionally, we present how existing Julia packages such as Flux or JUDI (a framework for PDE-constrained optimization) can be deployed cloud-natively through Redwood, without requiring users to set up HPC clusters.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/TXAWKU/
Red
Philipp A. Witte
PUBLISH
BNB888@@pretalx.com
-BNB888
JET.jl: The next generation of code checker for Julia
en
en
20210730T123000
20210730T130000
0.03000
JET.jl: The next generation of code checker for Julia
This talk will introduce [JET.jl](https://github.com/aviatesk/JET.jl), an experimental type checker for Julia.
JET is powered both by the abstract interpretation routine implemented within the Julia compiler and by a concrete interpretation based on [JuliaInterpreter.jl](https://github.com/JuliaDebug/JuliaInterpreter.jl). The abstract interpreter enables static analysis of a pure Julia script without any need for additional type annotations, while the concrete interpreter allows effective analysis no matter how heavily the code depends on runtime reflection or external configuration, which are common obstacles to static code analysis.
The talk will begin by explaining the motivation for type-level analysis of Julia code as well as how we can find various kinds of errors ahead of time using JET. Then we will illustrate how JET works and also the limitations involved with its design choices, and finally discuss the planned future enhancements like IDE integrations and such.
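As a small taste of the interface, JET's own documentation uses the following example (assuming JET.jl is installed):

```julia
# A minimal sketch of JET's entry point (assumes JET.jl is installed).
using JET

# `sum` has no method for adding Chars, so this call would throw at runtime:
#     MethodError: no method matching +(::Char, ::Char)
# JET reports this possible error statically, without running the code:
@report_call sum("julia")
```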
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/BNB888/
Blue
Shuhei Kadowaki
PUBLISH
3TLU8P@@pretalx.com
-3TLU8P
Easy, Featureful Parallelism with Dagger.jl
en
en
20210730T130000
20210730T133000
0.03000
Easy, Featureful Parallelism with Dagger.jl
The Distributed standard library exposes RPC primitives (remotecall) and remote channels for coordinating and executing code on a cluster of Julia processes. When a problem is simple enough, such as a trivial map operation, the provided APIs are enough to get great performance and "pretty good" scaling. However, things change when one wants to use Distributed for something complicated, like a large data pipeline with many inputs and outputs, or a full desktop application. While one *could* build these programs with Distributed, one would quickly realize that a lot of functionality will need to be built from scratch: application-scale fault tolerance and checkpointing, heterogeneous resource utilization control, and even simple load-balancing. This isn't a fault of Distributed: it just wasn't designed as the be-all-end-all distributed computing library for Julia. If Distributed won't make it easy to build complicated parallel applications, what will?
Dagger.jl takes a different approach: it is a batteries-included distributed computing library, with a variety of useful tools built in that make it easy to build complicated applications that can scale to whatever kind and size of resources you have at your disposal. Dagger ships with a built-in heterogeneous scheduler, which can dispatch units of work to CPUs, GPUs, and future accelerators. Dagger has a framework for checkpointing (and restoring) intermediate results, and together with fault tolerance, allows computations to safely fail partway through and be automatically or manually resumed later. Dagger also has primitives to build dynamic execution graphs across the cluster, so users can easily implement layers on top of Dagger that provide abstractions better matching the problem at hand.
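A minimal sketch of Dagger's eager task API, assuming Dagger.jl is installed:

```julia
# Each Dagger.@spawn returns immediately with a handle to the eventual result;
# passing one handle into another call expresses a dependency, and the
# scheduler runs independent work in parallel across threads and workers.
using Dagger

a = Dagger.@spawn 1 + 2
b = Dagger.@spawn a * 3   # runs after `a` completes
@assert fetch(b) == 9
```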
This talk will start with a brief introduction to Dagger: what it is, how it relates to Distributed.jl, and a brief overview of the features available. Then I will take the listeners through the building of a realistic, mildly complicated application with Dagger, showcasing how Dagger makes it easy to make the application scalable, performant, and robust. As each feature of Dagger is used, I will also point out any important caveats or alternative approaches that the listeners should consider when building their own applications with Dagger. I will wrap up the talk by showing the application running at scale, and talk briefly about the future of Dagger and how listeners can help to improve it.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/3TLU8P/
Blue
Julian P Samaroo
PUBLISH
PFWXF9@@pretalx.com
-PFWXF9
Actors.jl: Concurrent Computing with the Actor Model
en
en
20210730T133000
20210730T134000
0.01000
Actors.jl: Concurrent Computing with the Actor Model
**Give an overview** of `Actors`' philosophy and functionality and how it integrates into Julia's multi-threading and distributed computing.
**Demonstrate** how to
- `spawn` actors with arbitrary Julia functions as behaviors,
- `send` them messages and get back results,
- make them interact,
- `supervise` them and to
- integrate them with tasks and distributed processes.
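The spawn/request pattern from the list above looks roughly like this (a sketch following the package's documented interface, assuming Actors.jl is installed):

```julia
# Spawn an actor whose behavior is an ordinary Julia function.
using Actors
import Actors: spawn

act = spawn(Bhv(+, 4))          # behavior `+` with acquaintance 4
@assert request(act, 4) == 8    # send the message 4, wait for the reply
```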
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/PFWXF9/
Blue
Paul Bayer
PUBLISH
DAQSUR@@pretalx.com
-DAQSUR
BPFnative.jl: eBPF programming in Julia
en
en
20210730T134000
20210730T135000
0.01000
BPFnative.jl: eBPF programming in Julia
eBPF (extended Berkeley Packet Filter) is a virtual machine specification and machine code ISA originally designed for packet filtering in operating system kernels. eBPF is designed to be simple and compact enough to be trivially converted to native machine code, making it very portable across machine architectures. eBPF is developed in tandem with the Linux kernel, intended to be an internal runtime for safely executing user-defined code within the Linux kernel, where it allows users to introspect (and even modify) the functioning of their kernel's various subsystems. Given the key role that the OS kernel plays in allowing modern computers to function, it is thus no surprise that the ability to write and install eBPF kernels is considered a Linux "superpower".
As we can see from the example set by CUDA.jl, Julia is an excellent language for writing portable code which can execute on a variety of architectures with minimal changes. Recognizing this, I created BPFnative.jl as an interface from Julia to eBPF and the Linux kernel. BPFnative.jl allows users to write eBPF kernels in pure Julia, compile them into eBPF bytecode, and install them at various locations in the Linux kernel. This allows users with a minimal understanding of eBPF to explore their OS kernel at runtime, and thanks to the security measures and verifier built into the Linux eBPF VM, makes this a very safe thing to do.
For this talk, I will introduce the basics of eBPF and why Linux users should care about it, and then provide examples (including code snippets) of how to create eBPF kernels for introspecting various parts of the Linux kernel with BPFnative.jl. I will strive to make the examples relevant to everyday Linux users who want to find out more about what their OS is doing behind the scenes, without having to fully understand how the Linux kernel works. I will also encourage interested users to explore other parts of their OS with eBPF, and submit examples to BPFnative.jl to benefit the community.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/DAQSUR/
Blue
Julian P Samaroo
PUBLISH
YFCXJD@@pretalx.com
-YFCXJD
Atomic fields: the new primitives on the block
en
en
20210730T135000
20210730T140000
0.01000
Atomic fields: the new primitives on the block
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/YFCXJD/
Blue
Jameson Nash
PUBLISH
TJ3FNS@@pretalx.com
-TJ3FNS
A Short History of AstroTime.jl
en
en
20210730T190000
20210730T193000
0.03000
A Short History of AstroTime.jl
The [AstroTime.jl](https://github.com/JuliaAstro/AstroTime.jl) library has been in development since 2013 (originally as part of [Astrodynamics.jl](https://github.com/JuliaSpace/Astrodynamics.jl)). It provides the `Epoch` type as a replacement and complement to Julia's `DateTime`. `Epoch` can handle sub-nanosecond accuracy over a time span several times the age of the universe with support for all commonly used astronomical time scales.
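A small taste of the `Epoch` API, assuming AstroTime.jl is installed (constructor and conversion names follow the package documentation):

```julia
# Construct an epoch in the UTC time scale and convert it to other
# astronomical time scales by calling the target scale's constructor.
using AstroTime

ep  = UTCEpoch(2021, 7, 30, 19, 0, 0.0)
tai = TAIEpoch(ep)   # International Atomic Time
tt  = TTEpoch(ep)    # Terrestrial Time
println(tai)
println(tt)
```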
Since its inception, AstroTime.jl has gone through several major design iterations as our understanding of the scope and complexity of the problem domain has grown. The public API, on the other hand, has remained remarkably stable, which is a great testament to Julia's expressive and versatile type system. While AstroTime.jl is built on the solid foundations of the `Dates` standard library, it also fixes some of the latter's shortcomings and might highlight further areas of possible improvement.
AstroTime.jl was meant to be only a small stepping stone on the way to making Julia a multiplanetary programming language but it has become a great project in its own right. We want to share the journey so far and maybe get you excited about something as mundane as time. Or spacetime, rather, relatively speaking...
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/TJ3FNS/
Blue
Helge Eichhorn
PUBLISH
BPJ3N7@@pretalx.com
-BPJ3N7
Going to Jupiter with Julia
en
en
20210730T193000
20210730T194000
0.01000
Going to Jupiter with Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/BPJ3N7/
Blue
Joe Carpinelli
PUBLISH
MXSRY8@@pretalx.com
-MXSRY8
ClimaCore.jl: Tools for building spatial discretizations
en
en
20210730T194000
20210730T195000
0.01000
ClimaCore.jl: Tools for building spatial discretizations
The Climate Modelling Alliance (CliMA) is building ClimateMachine.jl, a modern earth system model that can learn from data. On the technical side, we are developing the model entirely in Julia, using distributed parallelism with both GPU and CPU architectures.
ClimaCore.jl is a suite of tools we are building for the next iteration of ClimateMachine.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/MXSRY8/
Blue
Simon Byrne
PUBLISH
GBJ3HG@@pretalx.com
-GBJ3HG
ClimateModels.jl -- A Simple Interface To Climate Models
en
en
20210730T195000
20210730T200000
0.01000
ClimateModels.jl -- A Simple Interface To Climate Models
Key objectives of this project include:
- make it as easy to run complex models as it is to run simple ones and, hopefully, so easy that they can all be used interactively in classrooms
- enable the Julia community to access widely-used, full-featured models right now and comfortably using notebooks, IDEs, terminal, and batch _(1)_.
- enable the climate science community to leverage the booming Julia ecosystem for analyzing model output and experimenting with models _(2)_.
- provide basic pipelining (e.g. Channel), book-keeping (e.g. Git), and documenting features (e.g. Pkg) to make complex workflows easier to reproduce, modify, and share with others.
_(1) The MITgcm, used as example, has configurations for Ocean, Atmosphere, Cryosphere, Biosphere in forward as well as an adjoint mode (via AD)._
_(2) Both on-premise or via cloud based environments._
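The interface pattern can be sketched as follows (assuming ClimateModels.jl is installed; `RandomWalker` is the package's built-in demo model, per its documentation, and stands in for a full model such as MITgcm):

```julia
# Configure, set up, and launch a model through the common interface.
using ClimateModels

mc = ModelConfig(model=ClimateModels.RandomWalker)
setup(mc)    # create the run folder and initialize the Git-based log
launch(mc)   # run the model; for compiled models, `build(mc)` comes first
```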
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/GBJ3HG/
Blue
Gael Forget
PUBLISH
E3MWHZ@@pretalx.com
-E3MWHZ
Space Engineering in Julia
en
en
20210730T200000
20210730T201000
0.01000
Space Engineering in Julia
This talk presents how we used the packages ReferenceFrameRotations.jl, SatelliteToolbox.jl, and DifferentialEquations.jl to create a high-fidelity simulator of Amazonia-1's attitude and orbit control subsystem (AOCS) and to perform numerous analyses related to this mission.
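As an illustration of the attitude-representation building blocks involved, a small sketch using ReferenceFrameRotations.jl (assuming the package is installed; the angles are arbitrary example values):

```julia
# Build a direction cosine matrix (DCM) from an Euler angle sequence.
using ReferenceFrameRotations, LinearAlgebra

# 90° about Z, then 45° about Y, in a Z-Y-X rotation sequence.
D = angle_to_dcm(deg2rad(90), deg2rad(45), 0, :ZYX)

# Any valid DCM is orthogonal with unit determinant.
@assert D * D' ≈ Matrix(1.0I, 3, 3)
@assert det(D) ≈ 1
```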
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/E3MWHZ/
Blue
Ronan Arraes Jardim Chagas
PUBLISH
GJRKY3@@pretalx.com
-GJRKY3
In-Situ Data Analysis with Julia for E3SM at Large Scale
en
en
20210730T201000
20210730T202000
0.01000
In-Situ Data Analysis with Julia for E3SM at Large Scale
The Energy Exascale Earth System Model (E3SM) is the Department of Energy's state-of-the-art earth system simulation model. It aims to address the most critical and challenging climate problems by efficiently utilizing DOE’s advanced HPC systems. One of the challenges of E3SM (and other exascale simulations) is the imbalance between the great size of the generated simulation data and the limited storage capacity. This means that post hoc data analysis needs to be replaced with in-situ analysis, which analyzes simulation data as the simulation is running. Our work aims to use Julia to provide data scientists with a high-level and performant interface for developing in-situ data analysis algorithms without directly interacting with complex HPC codes. This talk discusses (1) high-level Julia runtime coupling with E3SM, (2) two in-situ data analysis modules in Julia, and (3) low-level communication between E3SM and the in-situ Julia modules.
In this project, we focus on the Community Atmosphere Model (CAM), which models the atmosphere and is one of E3SM's coupled modules. Our goal is to study extreme weather events that happen in the atmosphere, such as sudden stratospheric warmings (SSW) that can destabilize the polar vortex and cause extreme cold temperatures at the earth's surface. The primary design consideration in coupling Julia with E3SM is the identification of an appropriate entry point in E3SM's CAM for calling in-situ Julia modules. CAM is implemented in Fortran and runs simulations in a timestep style. The control module of CAM has access to the simulation data and was selected to be interfaced with the Julia runtime. To couple E3SM with Julia, (1) we have implemented a Fortran-based in-situ data adapter in the control module of CAM, which takes the CAM simulation data as input and internally passes the data to the Julia runtime. (2) We have implemented a C-based interface between the in-situ data adapter and the in-situ Julia modules. The C interface includes three major functions: initialization, cleanup, and worker, which respectively create an in-situ Julia instance (by loading and initializing the in-situ Julia module from a specified path), destroy the Julia instance, and pass the data from the in-situ adapter to the in-situ Julia instance. Our Fortran in-situ adapter interface calls the worker function at every time step and the initialization/cleanup functions at the first/last time step. (3) As E3SM mixes the usage of GNU Make and CMake for combining and compiling different E3SM components, we have added the Julia compilation flags for the C and Fortran interfaces to the CAM CMake file (i.e., header files) and the top-level GNU Make file (i.e., Julia libraries). The in-situ Julia module is only compiled when it is called at runtime, which avoids recompiling the whole of E3SM if the in-situ Julia module needs to be changed.
We have implemented two in-situ data analysis modules: linear regression and SSW detection. The linear regression approach models simulation variables as a function of simulation time. It can be used to track trends in variables of interest and to identify important checkpoints in the simulation. SSW detection characterizes midwinter stratospheric sudden warmings that often cause splitting of the stratospheric polar vortex. By definition, an SSW occurs when the zonal mean of the zonal wind becomes reversed (easterly) at 60°N and 10 hPa and stays reversed for at least 10 consecutive days. This event can lead to extreme surface temperatures in North America.
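The SSW criterion described above is simple enough to sketch in a few lines of plain Julia (an illustration of the detection rule, not the project's actual module):

```julia
# Flag an SSW event: given a daily series of the zonal-mean zonal wind at
# 60°N and 10 hPa (in m/s), return true if the wind is reversed (easterly,
# i.e. negative) for at least `min_days` consecutive days.
function is_ssw(u::AbstractVector{<:Real}; min_days::Int=10)
    run = 0
    for ui in u
        run = ui < 0 ? run + 1 : 0
        run ≥ min_days && return true
    end
    return false
end

@assert !is_ssw(fill(20.0, 30))                   # steady westerlies: no event
@assert is_ssw([fill(20.0, 5); fill(-5.0, 12)])   # 12-day reversal: SSW
@assert !is_ssw([-1.0, -1.0, 1.0, -1.0])          # reversals too short
```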
The worker function in the C interface aims to support efficient low-level data communication between E3SM and the in-situ Julia modules. To run at large scale, E3SM adopts the Message Passing Interface (MPI), so the data is distributed among all the MPI ranks. Each MPI rank of CAM has access to only a local data block of CAM variables (e.g., velocity and temperature) and passes its local data block as a 1D array to its own in-situ Julia instance through the C interface. When an in-situ Julia instance needs remote data (e.g., for computation of SSW) from other in-situ Julia instances, MPI.jl is used to implement the data communication between them. However, one key design challenge is to make sure that E3SM and Julia use the same MPI communicator for correct data communication, which is difficult because current Julia C embeddings cannot directly pass the MPI communicator. To address this challenge, we have developed two converters (in both C and Fortran formats) of MPI communicators to support different MPI libraries. Last, we have also evaluated the performance (i.e., overall overhead) of the worker function to provide valuable guidelines for running HPC applications with Julia at large scale.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/GJRKY3/
Blue
LI TANG
Earl Lawrence
PUBLISH
VAGFD7@@pretalx.com
-VAGFD7
Adaptive and extendable numerical simulations with Trixi.jl
en
en
20210730T123000
20210730T130000
0.03000
Adaptive and extendable numerical simulations with Trixi.jl
When doing research on numerical discretization methods, scientists are often faced with a dilemma when choosing the appropriate simulation tool: At the beginning of a project, you often want nimble, low-overhead code that allows rapid prototyping and assists you in experimenting with different approaches. Later on, however, you want to evaluate your newly developed methods and algorithms in a production setting and require a high-performance implementation, support for parallelization, and a full toolchain for postprocessing and visualizing your results.
With [Trixi.jl](https://github.com/trixi-framework/Trixi.jl), we try to bridge this gap by using a simple but modular architecture, which allows us to easily extend Trixi beyond the existing functionality. The main components, such as the mesh, the solvers, or the equations, can each be selected and combined individually in a library-like manner. At the same time, Trixi is a comprehensive numerical simulation framework for hyperbolic PDEs and comes with all necessary ingredients to set up a simulation, run it in parallel, and visualize the results.
At its core, various systems of equations are solved on hierarchical quadtree/octree grids that provide adaptive mesh refinement via solution-based indicators. The equations, e.g., compressible Euler, ideal MHD, or hyperbolic diffusion, are discretized with high-order discontinuous Galerkin spectral element methods, with support for entropy-stable shock capturing. Trixi puts an emphasis on having a fast implementation with shared memory parallelization, and integrates well with other packages of the Julia ecosystem, such as [OrdinaryDiffEq.jl](https://github.com/SciML/OrdinaryDiffEq.jl) for time integration, [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) for automatic differentiation, or [Plots.jl](https://github.com/JuliaPlots/Plots.jl) for visualization. One of the key goals of Trixi is to be useful to experienced researchers while remaining accessible for new users or students. Thus, we continuously strive to keep the implementation as simple as reasonably possible.
Due to Julia’s unique capabilities and ecosystem including [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl), serial performance of Trixi can be on par with large-scale C++ and Fortran projects in performance benchmarks using a subset of optimized methods. At the same time, the general framework is simple and extendable enough to allow porting new solver infrastructures within a few hours.
In this talk, we will give an overview of the currently implemented features and discuss the overall architecture of Trixi. We will show a typical workflow for creating and running a simulation, and present scientific results that were obtained with Trixi. Finally, we will demonstrate how to add new capabilities to Trixi for your own research projects.
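For a flavor of the workflow, a minimal session looks like this (assuming Trixi.jl and OrdinaryDiffEq.jl are installed; `default_example()` and `trixi_include` follow the package's README):

```julia
# Run one of the example "elixirs" shipped with the package. An elixir is a
# plain Julia script that sets up the mesh, equations, solver, and time
# integration, and then runs the simulation.
using Trixi, OrdinaryDiffEq

trixi_include(default_example())
```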
The Jupyter notebook used for the live demonstration of Trixi.jl during the talk, as well as the presentation slides, can be found at https://github.com/trixi-framework/talk-2021-juliacon.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/VAGFD7/
Purple
Michael Schlottke-Lakemper
Hendrik Ranocha
PUBLISH
E7HKVW@@pretalx.com
-E7HKVW
3.6x speedup on A64FX by squeezing ShallowWaters.jl into Float16
en
en
20210730T130000
20210730T131000
0.01000
3.6x speedup on A64FX by squeezing ShallowWaters.jl into Float16
Most Earth-system simulations run on conventional CPUs with 64-bit double-precision floating-point numbers (Float64), although the need for high-precision calculations in the presence of large uncertainties has been questioned. The world’s fastest supercomputer, Fugaku, is based on [A64FX microprocessors](https://www.fujitsu.com/global/products/computing/servers/supercomputer/a64fx/), which also support the 16-bit low-precision format Float16. We investigate the Float16 performance on A64FX with [ShallowWaters.jl](https://github.com/milankl/ShallowWaters.jl), a fluid circulation model written with a focus on 16-bit arithmetic. It implements techniques that address precision and dynamic-range issues in 16 bits. The precision-critical time integration is augmented with Kahan’s compensated summation to reduce rounding errors. Such a compensated time integration is as precise as, but faster than, mixing 16 and 32 bits of precision. The dynamic range available in Float16 is very limited, 6e-5 to 65504, as subnormals are inefficiently supported on A64FX. The bitpattern histogram analysis at runtime with [Sherlogs.jl](https://github.com/milankl/Sherlogs.jl), as well as its functionality to record stack traces conditioned on the occurrence of subnormals, was invaluable for limiting the arithmetic range. Consequently, we benchmark speed-ups of 3.8x on A64FX with Float16, and 3.6x with the compensated time integration that minimizes model degradation. Although ShallowWaters.jl is simplified compared to large Earth-system models, it shares essential algorithms and therefore shows that 16-bit calculations on A64FX are indeed a competitive way to accelerate Earth-system simulations on available hardware.
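The compensated summation mentioned above fits in a few lines of pure Julia; the Float16 test values below are illustrative, not taken from ShallowWaters.jl:

```julia
# Kahan compensated summation: a second variable `c` carries the rounding
# error of each addition so it can be re-injected into later additions.
function kahan_sum(xs)
    T = eltype(xs)
    s, c = zero(T), zero(T)
    for x in xs
        y = x - c          # re-inject the error compensated so far
        t = s + y
        c = (t - s) - y    # the part of `y` that was lost in `s + y`
        s = t
    end
    return s
end

# In Float16, adding 1e-4 to 1.0 rounds back to 1.0 (eps(Float16(1)) ≈ 1e-3),
# so a naive sum never moves, while the compensated sum tracks the true value.
xs = Float16[1.0; fill(Float16(1e-4), 2000)]
naive = foldl(+, xs)
@assert naive == Float16(1.0)
@assert abs(Float64(kahan_sum(xs)) - 1.2) < 0.01
```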
This work used the [Isambard UK National Tier-2 HPC Service](http://gw4.ac.uk/isambard/) operated by GW4 and the UK Met Office, and funded by EPSRC.
Co-authors
- [Sam Hatfield](https://www.ecmwf.int/en/about/media-centre/news/2020/accelerating-weather-forecasting-models-using-reduced-precision), European Centre for Medium-Range Weather Forecasts, Reading, UK
- [Matteo Croci](https://www.maths.ox.ac.uk/people/matteo.croci), Mathematical Institute, University of Oxford, UK
- [Peter Düben](https://www.ecmwf.int/en/about/who-we-are/staff-profiles/peter-dueben), European Centre for Medium-Range Weather Forecasts, Reading, UK
- [Tim Palmer](https://www2.physics.ox.ac.uk/contacts/people/palmer), University of Oxford, UK
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/E7HKVW/
Purple
Milan Klöwer
PUBLISH
RPBHWE@@pretalx.com
-RPBHWE
WaterLily.jl: Real-time fluid simulation in pure Julia
en
en
20210730T131000
20210730T132000
0.01000
WaterLily.jl: Real-time fluid simulation in pure Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/RPBHWE/
Purple
Gabriel Weymouth
PUBLISH
HFHLDS@@pretalx.com
-HFHLDS
New tools to solve PDEs in Julia with Gridap.jl
en
en
20210730T132000
20210730T133000
0.01000
New tools to solve PDEs in Julia with Gridap.jl
Gridap is a new, open-source, finite element (FE) library implemented in the Julia programming language. The main goal of Gridap is to adopt a more modern programming style than existing FE applications written in C/C++ or Fortran in order to simplify the simulation of challenging problems in science and engineering and improve productivity in the research of new discretization methods. The library is a feature-rich general-purpose FE code able to solve a wide range of partial differential equations (PDEs), including linear, nonlinear, and multi-physics problems. Gridap is extensible and modular. One can implement new FE spaces, new reference elements, and use external mesh generators, linear solvers, and visualization tools. In addition, it blends perfectly well with other packages of the Julia package ecosystem, since Gridap is implemented 100% in Julia.
One of the distinctive features of the library is a high-level API that allows one to simulate complex PDEs with very few lines of code. This API makes it possible to write the PDE weak form in a syntax almost identical to the mathematical notation. In some sense, the high-level API of Gridap resembles that of FE codes based on symbolic domain-specific languages, like UFL in FEniCS, but, in contrast, Gridap does not rely on a compiler of variational forms or on C/C++ code generation facilities. Instead, the library takes advantage of the Julia JIT compiler to generate efficient machine code for the particular problem the user wants to solve, which makes Gridap much easier to maintain and extend.
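As a sketch of how close the API stays to the mathematical notation, here is a Poisson problem -Δu = f on the unit square, following the style of the Gridap tutorials (assumes Gridap.jl is installed; mesh size and quadrature degree are arbitrary example values):

```julia
using Gridap

model = CartesianDiscreteModel((0, 1, 0, 1), (8, 8))
reffe = ReferenceFE(lagrangian, Float64, 1)
V = TestFESpace(model, reffe, dirichlet_tags="boundary")
U = TrialFESpace(V, 0.0)          # homogeneous Dirichlet boundary conditions

Ω  = Triangulation(model)
dΩ = Measure(Ω, 2)

f(x) = 1.0
a(u, v) = ∫( ∇(v) ⋅ ∇(u) )*dΩ    # weak form, close to the mathematical notation
l(v)    = ∫( v * f )*dΩ

op = AffineFEOperator(a, l, U, V)
uh = solve(op)
```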
The Gridap project was initially presented at last year's JuliaCon. Since then, a number of important new features have been added, including an enhanced syntax for writing the PDE weak form, support for more PDE types, and support for more numerical techniques. At JuliaCon 2021, we would like to showcase these updates via a set of representative use cases and challenging applications, such as fluid-structure interaction problems.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/HFHLDS/
Purple
Francesc Verdugo
Eric Neiva
Oriol Colomes
Santiago Badia
PUBLISH
39HB9T@@pretalx.com
-39HB9T
What's new in ITensors.jl
en
en
20210730T133000
20210730T140000
0.03000
What's new in ITensors.jl
In JuliaCon 2019, we gave an early preview of ITensors.jl, a ground-up pure Julia rewrite of ITensor, a high performance C++ library for using and developing tensor network algorithms. ITensors.jl v0.1 was officially released in May of 2020. Since then, there has been a lot of development of the library as well as a variety of spinoff libraries, such as ITensorsGPU.jl that adds a GPU backend for tensor operations, ITensorsVisualization.jl for visualizing tensor networks, PastaQ.jl for using tensor networks to simulate and analyze quantum computers, ITensorGaussianMPS.jl for creating tensor networks of noninteracting quantum systems, as well as more experimental libraries like ITensorsGrad.jl for adding automatic differentiation support and ITensorInfiniteMPS.jl for working with infinite tensor networks. In addition, many advanced features have been added to ITensors.jl and its underlying sparse tensor library NDTensors.jl, such as multithreaded block sparse tensor contractions, alternative dense contraction backends like TBLIS, contraction sequence optimization, and more. In this talk, I plan to give an overview of the current libraries and capabilities as well as lay out a roadmap for where the Julia ITensor ecosystem is heading.
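A minimal sketch of the ITensor interface (assumes ITensors.jl is installed): tensors carry named indices, and `*` contracts over all shared indices.

```julia
using ITensors

i = Index(2, "i")
j = Index(3, "j")
k = Index(4, "k")

A = randomITensor(i, j)
B = randomITensor(j, k)
C = A * B                 # contracts over the shared index j

@assert hasinds(C, i, k)  # the result carries the uncontracted indices
```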
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/39HB9T/
Purple
Matthew Fishman
PUBLISH
U7AM33@@pretalx.com
-U7AM33
Applied Measure Theory for Probabilistic Modeling
en
en
20210730T190000
20210730T193000
0.03000
Applied Measure Theory for Probabilistic Modeling
We have several goals for MeasureTheory.jl:
- Better performance than Distributions.jl, because normalizing constants can be deferred
- Minimal type constraints, for example allowing symbolic manipulations
- Autodiff-friendly code
- Multiple parameterizations for a given measure
- A consistent interface, especially important for probabilistic programming
- Composability, to make it easy to build new measures from existing ones
- Fall-back to Distributions.jl when needed
While the library is still in its early stages, we're making good progress on all fronts. We hope this can become the library of choice as a basis for probabilistic modeling in Julia, and we're excited to help the Julia community get involved in development.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/U7AM33/
Purple
Chad Scherrer
PUBLISH
J8CRXY@@pretalx.com
-J8CRXY
FourierTools.jl | Working with the Frequency Space
en
en
20210730T193000
20210730T194000
0.01000
FourierTools.jl | Working with the Frequency Space
Fourier space is commonly used for convolution operations, as the Fast Fourier Transform (FFT) is, as its name suggests, O(N log N) fast. However, the FFT algorithm produces data in an order (with the zero-frequency component first) that makes it difficult to apply functions to directly. `fftshift` is a way to deal with this but involves copying the data.
Based on the packages ShiftedArrays.jl and PaddedViews.jl, the FourierTools.jl package implements views of the results of the FFTW routines `fft` and `rfft` and their inverses `ifft` and `irfft`, including the respective `fftshift` operations, realized as views rather than data copies. In a notable difference to FFTViews.jl, the indexing is kept identical to that of ordinary arrays, which helps with seamless integration across packages.
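The idea of a shifted view rather than a shifted copy can be illustrated in a few lines of plain Julia (an illustration of the concept, not FourierTools.jl's actual implementation):

```julia
# `fftshift` is equivalent to a cyclic shift by half the length; instead of
# copying, one can build a view over the reordered index map.
shiftview(x::AbstractVector) =
    view(x, [fld(length(x), 2)+1:length(x); 1:fld(length(x), 2)])

x  = collect(0:7)
sv = shiftview(x)
@assert sv == circshift(x, fld(length(x), 2))  # same order as a copying shift

# Being a view, it reflects later mutations of the parent array:
x[5] = 99
@assert sv[1] == 99
```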
To implement an FFT-based `resample` operation for real-valued data, a new view derived from `AbstractArray` is introduced, handling the potential copy and addition operations for even-sized arrays needed to enforce the real-valuedness of the corresponding real-space data (`select_region_ft`). It has been [discussed](https://discourse.julialang.org/t/sinc-interpolation-based-on-fft/52512) in the community whether such an operation is necessary. Referring to this discussion, we argue that the Fourier-space operations cannot be replaced by casting to `real`, since the latter violates Parseval's theorem.
In addition to the `resample` operation, `FourierTools.jl` also provides a tool for sub-pixel shifting based on FFTs. Further algorithms like shearing, sub-pixel shifting, and rotation can also be implemented via the Fourier shift theorem, and due to the generality of the FFT these can be applied efficiently to N-dimensional datasets.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/J8CRXY/
Purple
Rainer Heintzmann
Felix Wechsler
PUBLISH
WA7BP8@@pretalx.com
-WA7BP8
IntervalLinearAlgebra.jl: Linear algebra done rigorously
en
en
20210730T194000
20210730T195000
0.01000
IntervalLinearAlgebra.jl: Linear algebra done rigorously
Linear systems arise in practically all domains involving numerical computations. While several efficient floating-point algorithms are available, the final output has no information about how close to the true solution the computed result is. To overcome this, interval arithmetic offers a framework to perform rigorous computations, where real numbers are replaced by intervals guaranteed to contain the true value.
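The guarantee that interval arithmetic provides can be illustrated with a toy example in plain Julia (real packages such as IntervalArithmetic.jl use directed rounding; the crude one-ulp outward widening below is only for illustration):

```julia
# A toy interval type with outward-rounded addition: the true real sum of any
# a ∈ [a.lo, a.hi] and b ∈ [b.lo, b.hi] is guaranteed to lie in the result.
struct Ival
    lo::Float64
    hi::Float64
end

iadd(a::Ival, b::Ival) = Ival(prevfloat(a.lo + b.lo), nextfloat(a.hi + b.hi))

c = iadd(Ival(0.1, 0.1), Ival(0.2, 0.2))
@assert c.lo ≤ 0.3 ≤ c.hi   # encloses the true value despite rounding error
```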
The talk will introduce my Google Summer of Code (GSoC) project: the development of IntervalLinearAlgebra.jl, a package to solve both interval and real linear systems rigorously. During the talk, I will highlight the main features of the package. First, I will give an overview of interval linear systems and demonstrate how to use the package to determine the exact solution, highlighting that even in lower dimensions the solution set can have complex non-convex shapes.
Motivated by this, I will show how to determine a tight enclosure of the solution of an interval linear system, presenting the several solution strategies implemented in the package. The presented algorithms will be compared in terms of accuracy and computation time, highlighting the pros and cons of each. I will also discuss the lessons learnt during the development process as well as the roadmap beyond GSoC.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/WA7BP8/
Purple
Luca Ferranti
PUBLISH
CLKYFN@@pretalx.com
-CLKYFN
Global Sensitivity Analysis for SciML models in Julia
en
en
20210730T195000
20210730T200000
0.01000
Global Sensitivity Analysis for SciML models in Julia
Global Sensitivity Analysis (GSA) quantifies the influence of input parameters on the model output. Hence some of the core questions we wish to answer with models, such as identifying the most influential parameters, make GSA an essential part of the modelling workflow. GlobalSensitivity.jl [1] is a generalized GSA package with built-in support for parallelism, integrated with the pharmaceutical modeling and simulation platform Pumas [2]. Our implementation of GSA for differential-equation-based mechanistic pharmacometrics, PBPK and QSP models gives order-of-magnitude speedups over the GSA capabilities of other languages. Currently GlobalSensitivity.jl supports the Sobol, Morris, eFAST, regression-based, DGSM, Delta Moment, EASI, Fractional Factorial and RBD-FAST GSA methods.
The talk covers running a GSA workflow on a Lotka-Volterra differential equation model written with the DifferentialEquations.jl interface.
[1] url: https://gsa.sciml.ai/stable/.
[2] url: https://github.com/PumasAI/PumasTutorials.jl/blob/master/tutorials/pkpd/hcvgsa.jmd
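The Lotka-Volterra workflow mentioned above might be sketched as follows (a hedged example patterned on the GlobalSensitivity.jl documentation; the `Morris` keyword arguments and the shape of `res.means` are assumptions that may differ across package versions):

```julia
using OrdinaryDiffEq, GlobalSensitivity, Statistics

# Lotka-Volterra predator-prey dynamics with four parameters
function lv!(du, u, p, t)
    du[1] =  p[1]*u[1] - p[2]*u[1]*u[2]
    du[2] = -p[3]*u[2] + p[4]*u[1]*u[2]
end

prob = ODEProblem(lv!, [1.0, 1.0], (0.0, 10.0), [1.5, 1.0, 3.0, 1.0])

# Scalar quantity of interest: mean prey population for a given parameter set
f = function (p)
    sol = solve(remake(prob, p=p), Tsit5(); saveat=0.1)
    mean(sol[1, :])
end

# Morris screening over a box of parameter ranges
res = gsa(f, Morris(total_num_trajectory=1000, num_trajectory=150),
          [[1.0, 5.0], [1.0, 5.0], [1.0, 5.0], [1.0, 5.0]])
res.means  # elementary-effect means, one per parameter
```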
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/CLKYFN/
Purple
Vaibhav Dixit
PUBLISH
LUVWJZ@@pretalx.com
-LUVWJZ
ZigZagBoomerang.jl - parallel inference and variable selection
en
en
20210730T200000
20210730T203000
0.03000
ZigZagBoomerang.jl - parallel inference and variable selection
## [ZigZagBoomerang.jl](https://github.com/mschauer/ZigZagBoomerang.jl) - parallel inference and variable selection
ZigZagBoomerang.jl provides piecewise deterministic Monte Carlo (PDMC) methods. They have the same goal as classical Markov chain Monte Carlo methods: to sample from a probability distribution, for example the posterior distribution in a Bayesian model. The difference is that the distribution is explored through the continuous movement of a particle rather than one point at a time. The particle changes direction at random times and otherwise moves along deterministic trajectories. For example, it may move with constant velocity along a line (see the picture). The random direction changes are calibrated such that the trajectory of the particle samples the target distribution; in general the particle is turned back (reflected) when it moves too far into the tails of the distribution. From the trajectory, quantities of interest, such as the posterior mean and standard deviation, can be estimated.
The decision of whether to change direction in one coordinate only requires the evaluation of a partial derivative which depends on few coordinates – the neighbourhood of the coordinate in the Markov blanket. That allows exploiting multiple processor cores using Julia's multithreaded parallelism (or other forms of parallel computing). The difference between threaded Gibbs sampling and threaded PDMP is that in Gibbs sampling part of the state is fixed, while the other part is changed. Here, the particle never ceases to move, and it is the decisions about direction changes which happen in parallel on subsets of coordinates. Metaphorically speaking this is the difference between walking, where one foot is on the ground all the time, and running, where both feet are in the air between steps.
Because the particle moves on a deterministic trajectory between the times of random events, one can determine exactly the time when the process would leave an area of interest. That allows sampling distributions of bounded support, or spending additional time in a lower-dimensional subset of the space, which is the basis of variable selection with the sticky PDMPs in high-dimensional sparse inference problems.
In the presentation I showcase a multithreaded sampler and high-dimensional variable selection with sticky PDMPs.
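The mechanics described above (constant-velocity motion with calibrated reflections, and estimation by integrating along the trajectory) can be illustrated with a self-contained toy sketch: a one-dimensional Zig-Zag sampler targeting the standard normal. This is not the ZigZagBoomerang.jl API, just the bare algorithm; for N(0,1) the reflection rate is λ(t) = max(0, θ·x(t)), so the event times have a closed form.

```julia
using Random

# 1D Zig-Zag sampler for N(0, 1): velocity θ ∈ {-1, +1}, reflection
# rate λ(s) = (θ·x + s)⁺ along a segment, inverted in closed form.
function zigzag_normal(rng, T)
    x, θ, t = 0.0, 1.0, 0.0
    m1, m2 = 0.0, 0.0                      # running ∫x dt and ∫x² dt
    while t < T
        a = θ * x
        τ = sqrt(max(a, 0.0)^2 + 2 * randexp(rng)) - a  # next reflection
        τ = min(τ, T - t)                  # clip last segment at horizon
        m1 += x * τ + θ * τ^2 / 2          # exact integral of x over segment
        m2 += ((x + θ * τ)^3 - x^3) / (3θ) # exact integral of x² over segment
        x += θ * τ                         # move, then reflect
        θ = -θ
        t += τ
    end
    return m1 / T, m2 / T                  # trajectory mean and second moment
end

rng = MersenneTwister(42)
μ, m2 = zigzag_normal(rng, 50_000.0)       # μ ≈ 0, m2 ≈ 1 for N(0, 1)
```

Note that the estimators are integrals along the continuous path, not averages over discrete samples; this is exactly the "quantities of interest estimated from the trajectory" point above.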
### Links
* [ZigZagBoomerang.jl](https://github.com/mschauer/ZigZagBoomerang.jl)
* Discourse Announcement: [[ANN] `ZigZagBoomerang.jl`](https://discourse.julialang.org/t/ann-zigzagboomerang-jl/57287)
* Joris Bierkens' [Overview of Piecewise Deterministic Monte Carlo](https://diamweb.ewi.tudelft.nl/~joris/pdmps.html)
### Literature
1. Joris Bierkens, Paul Fearnhead, Gareth Roberts: The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data. *The Annals of Statistics*, 2019, Vol. 47, No. 3, pp. 1288-1320. [https://arxiv.org/abs/1607.03188].
2. Joris Bierkens, Sebastiano Grazzi, Kengo Kamatani and Gareth Roberts: The Boomerang Sampler. *ICML 2020*. [https://arxiv.org/abs/2006.13777].
3. Joris Bierkens, Sebastiano Grazzi, Frank van der Meulen, Moritz Schauer: A piecewise deterministic Monte Carlo method for diffusion bridges. *Statistics and Computing*, 2021 (to appear). [https://arxiv.org/abs/2001.05889].
4. Joris Bierkens, Sebastiano Grazzi, Frank van der Meulen, Moritz Schauer: Sticky PDMP samplers for sparse and local inference problems. 2020. [https://arxiv.org/abs/2103.08478].
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/LUVWJZ/
Purple
Moritz Schauer
PUBLISH
FBTSYM@@pretalx.com
-FBTSYM
Julia for Biologists
en
en
20210730T123000
20210730T140000
1.03000
Julia for Biologists
“Birds of a Feather flock together” — Whether you see yourself as a biologist, software developer, mathematician or anything in between, the objective of this session is to provide a welcoming and discussion-stimulating environment to strengthen the Julia community in the biological sciences. Independent of your Julia skill level, we are curious to hear what brings you to this area, what you love about it and where you feel there is room for improvement.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/FBTSYM/
BoF/Mini Track
Elisabeth Roesch
PUBLISH
NBER8M@@pretalx.com
-NBER8M
Virtual posters session
en
en
20210730T163000
20210730T180000
1.03000
Virtual posters session
PUBLIC
CONFIRMED
Virtual Poster
https://pretalx.com/juliacon2021/talk/NBER8M/
BoF/Mini Track
PUBLISH
RAFSMK@@pretalx.com
-RAFSMK
Discussing Gender Diversity in the Julia Community
en
en
20210730T190000
20210730T203000
1.03000
Discussing Gender Diversity in the Julia Community
The objective of this talk is to find more people who feel their gender is underrepresented within the Julia community or want to support people who feel so. We aim at creating a safe and fruitful discussion about gender diversity and new actions we can take from Julia Gender Inclusive.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2021/talk/RAFSMK/
BoF/Mini Track
Laura Ventosa
Kim Louisa Auth
Xuan (Tan Zhi Xuan)
PUBLISH
TMXKDM@@pretalx.com
-TMXKDM
Modelling Australia's National Electricity Market with JuMP
en
en
20210730T123000
20210730T124000
0.01000
Modelling Australia's National Electricity Market with JuMP
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/TMXKDM/
JuMP Track
James D Foster
PUBLISH
EUVCJY@@pretalx.com
-EUVCJY
AnyMOD.jl: A Julia package for creating energy system models
en
en
20210730T124000
20210730T125000
0.01000
AnyMOD.jl: A Julia package for creating energy system models
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/EUVCJY/
JuMP Track
Leonard Göke
PUBLISH
9SVMZ3@@pretalx.com
-9SVMZ3
Power Market Tool (POMATO)
en
en
20210730T125000
20210730T130000
0.01000
Power Market Tool (POMATO)
Europe's increase in electricity production from renewable energy resources (RES) in combination with a significant decline of conventional generation capacity has spawned political and academic interest in the transmission system's ability to accommodate this transition. Central to this discussion is the efficiency of capacity allocation and congestion management (CACM) policies between and within electricity market areas that are interconnected by shared and synchronized transmission infrastructure. To facilitate unrestricted cross-border electricity trading in the presence of finite physical transmission capacity, European system and electricity market operators inaugurated flow-based market coupling (FBMC).
FBMC is a coordinated multi-stage process that requires detailed forecasts and network models, which are typically not or only partially disclosed by the system operators. Academic publications that synthesize FBMC in model frameworks agree on a three-step process – D-2 (base case), D-1 (day-ahead) and D-0 (redispatch) – but differ greatly in some core assumptions. Further, FBMC's effectiveness for a future renewable-dominant generation mix is typically overlooked in the current literature.
The open-source Power Market Tool (POMATO) has been designed to study CACM policies of zonal electricity markets, especially flow-based market coupling (FBMC). For this purpose, POMATO implements methods for the analysis of simultaneous zonal market clearing, nodal (N-k secure) power flow computation for capacity allocation, and multi-stage market clearing with adaptive grid representation and redispatch. Additionally, POMATO includes risk-aware optimal power flow via chance constraints to internalize forecast uncertainty during the market clearing process. All optimization features rely on Julia/JuMP, leveraging its accessibility, computational performance, and solver interfaces. The Julia Code is embedded in a Python front-end, providing flexible and easily maintainable data processing and user interaction features.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/9SVMZ3/
JuMP Track
Richard Weinhold
PUBLISH
XGCJBA@@pretalx.com
-XGCJBA
A Brief Introduction to InfrastructureModels
en
en
20210730T130000
20210730T133000
0.03000
A Brief Introduction to InfrastructureModels
This talk will begin by motivating the need for optimization of the design and operations of critical infrastructure networks and discuss some of the challenges facing future infrastructure systems. It will then highlight why Julia and JuMP provide an ideal foundation for building critical infrastructure analysis capabilities. The talk will finish with an overview of the design and use of Los Alamos National Laboratory's InfrastructureModels packages, using optimization of electric power transmission networks as a specific example.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/XGCJBA/
JuMP Track
Carleton Coffrin
PUBLISH
XNVLKH@@pretalx.com
-XNVLKH
UnitCommitment.jl: Security-Constrained Unit Commitment in JuMP
en
en
20210730T133000
20210730T140000
0.03000
UnitCommitment.jl: Security-Constrained Unit Commitment in JuMP
The Security-Constrained Unit Commitment (SCUC) problem is one of the most fundamental and challenging problems in power systems optimization, being solved daily by Independent System Operators (ISOs) to clear the day-ahead electricity markets. The package provides: (i) an extensible and fully-documented JSON-based data specification format for SCUC, developed in collaboration with ISOs, which can help researchers to share data sets across institutions; (ii) a diverse collection of large-scale benchmark instances, collected from the literature, converted into a common data format, and extended using data-driven methods to make them more challenging and realistic; (iii) a Julia/JuMP implementation of state-of-the-art Mixed-Integer Linear Programming formulations and solution methods for the problem; and (iv) a suite of automated benchmark scripts to accurately evaluate the performance impact of newly proposed methods. The package is being developed as part of the "IEEE Task Force on Solving Large Scale Optimization Problems in Electricity Market and Power System Application".
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/XNVLKH/
JuMP Track
Alinson Santos Xavier
PUBLISH
ANYQTY@@pretalx.com
-ANYQTY
Linear programming by first-order methods
en
en
20210730T190000
20210730T193000
0.03000
Linear programming by first-order methods
PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP compares favorably with SCS on medium-sized instances when solving both to moderate and high accuracy. Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP outperforms a commercial LP solver. The prototype of PDLP is written in Julia and available at https://github.com/google-research/FirstOrderLp.jl.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/ANYQTY/
JuMP Track
Miles Lubin
PUBLISH
REKLVV@@pretalx.com
-REKLVV
Cerberus: A solver for mixed-integer programs with disjunctions
en
en
20210730T193000
20210730T194000
0.01000
Cerberus: A solver for mixed-integer programs with disjunctions
Typically, DP problems are reformulated as mixed-integer programming (MIP) problems, and then passed to a MIP solver. Crucially, the MIP solver only receives this "flattened" MIP reformulation, and not the original, rich DP structure. We discuss how this structural information can be used within an LP-based branch-and-cut algorithm for dynamic reformulation and domain propagation without breaking incremental LP solves, a crucial ingredient for the success of modern solvers. We focus in particular on how the JuMP ecosystem facilitates the rapid development of such a solver, which is heavily dependent on advanced functionality from both the underlying solvers and the modeling interface.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/REKLVV/
JuMP Track
Joey Huchette
PUBLISH
FHWUR9@@pretalx.com
-FHWUR9
HiGHS
en
en
20210730T194000
20210730T195000
0.01000
HiGHS
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/FHWUR9/
JuMP Track
Ivet Galabova
PUBLISH
TP88SL@@pretalx.com
-TP88SL
vOptSolver: an ecosystem for multi-objective linear optimization
en
en
20210730T195000
20210730T200000
0.01000
vOptSolver: an ecosystem for multi-objective linear optimization
vOptSolver aims to be software for scientists and practitioners. It has been conceived to be intuitive for various profiles of users (mathematicians, computer scientists, and engineers), corresponding to needs encountered in research and development (open-source code available for the design of new algorithms), decision-making (ready-to-use methods and algorithms for solving optimization problems), and education (an environment for teaching and practicing the theories and algorithms).
The optimization problem to solve is built by formulating a model with the algebraic modeling language JuMP, extended to support multi-objective models, for non-structured optimization problems, or by calling the corresponding API for structured optimization problems. The problem data and the optimization results are stored in and handled by Julia's data structures and functionality.
vOptSolver integrates several generic and problem-specific algorithms from the literature for computing the exact set of non-dominated points. It also returns the efficient solutions corresponding to this set. The generic algorithms make use of a MIP solver, while the specific algorithms call problem-dedicated algorithms.
References:
I. Dunning, J. Huchette, M. Lubin, JuMP: A Modeling Language for Mathematical Optimization, SIAM Review 59 (2) (2017) 295–320.
B. Legat, O. Dowson, J. D. Garcia, M. Lubin, MathOptInterface: a data structure for mathematical optimization problems (2020). arXiv:2002.03447
X. Gandibleux, G. Soleilhac, A. Przybylski, S. Ruzika, vOptSolver: an open source software environment for multiobjective mathematical optimization, IFORS2017: 21st Conference of the International Federation of Operational Research Societies. July 17-21, 2017. Quebec City (Canada). (2017).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon2021/talk/TP88SL/
JuMP Track
Xavier Gandibleux
PUBLISH
Z8AJ9J@@pretalx.com
-Z8AJ9J
A Derivative-Free Local Optimizer for Multi-Objective Problems
en
en
20210730T200000
20210730T203000
0.03000
A Derivative-Free Local Optimizer for Multi-Objective Problems
I will revisit the basic concepts of multi-objective optimization and introduce the notion of Pareto optimality and Pareto criticality. Based on this idea, the steepest descent direction for multi-objective problems (MOPs) is derived. When used in conjunction with a trust region strategy, the steepest descent direction can be used to generate iterates converging to first-order critical points.
Besides talking about the mathematical background, I want to describe how local surrogate models are constructed and how we use other available packages (JuMP, NLopt, DynamicPolynomials etc.) in our implementation.
Moreover, I will show the results of a few numerical experiments proving the efficiency of the approach and talk a bit about how the local solver could be embedded in a global(ish) framework.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2021/talk/Z8AJ9J/
JuMP Track
Manuel Berkemeier