0.15
JuliaCon 2022 (Times are UTC)
juliacon-2022
2022-07-19
2022-07-30
12
00:05
https://pretalx.com
UTC
Green
Introduction to Julia
Workshop
2022-07-19T14:00:00+00:00
14:00
03:00
This workshop is geared towards anyone who wants to start using Julia.
It will be an extremely accessible overview of Julia.
juliacon-2022-21293-introduction-to-julia
JuliaCon
Jose Storopoli
en
We'll cover how to download and install Julia in Windows, Mac and Linux.
Next, we will show how to use Julia in the terminal (REPL), in VSCode,
and also in an interactive notebook with Pluto.
We will contrast Julia with other languages popular among beginners, such as Python, showcasing the major differences.
Additionally, we will teach how to install and uninstall packages.
The bulk of the workshop will cover how to run Julia commands, what statements are,
and an overview of Julia syntax.
We encourage everyone who wants to know more about Julia, regardless of
skill level, to join us.
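To give a flavor of the syntax portion, here is a minimal, Base-only sketch of the kind of commands covered (the `Example` package named in the comments is just a stand-in):

```julia
# A one-line function definition, applied element-wise with a broadcast dot.
square(x) = x^2

xs = [1, 2, 3, 4]
ys = square.(xs)          # [1, 4, 9, 16]

# Conditionals and string interpolation.
total = sum(ys)
println(total > 20 ? "big: $total" : "small: $total")

# Installing and removing a package from code; the REPL equivalents are
# `]add Example` and `]rm Example`.
# using Pkg
# Pkg.add("Example")
# Pkg.rm("Example")
```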
false
https://pretalx.com/juliacon-2022/talk/WLNWBV/
https://pretalx.com/juliacon-2022/talk/WLNWBV/feedback/
Green
Introduction to Graph Computing
Workshop
2022-07-20T14:00:00+00:00
14:00
03:00
Graph computing is an innovative technology that allows developers to build applications and systems as directed acyclic graphs (DAGs). Graph computing offers generic solutions to some of the most fundamental challenges in enterprise computing such as scalability, transparency and lineage. In this workshop, we survey the available graph computing tools in Julia, then walk through a few hands-on examples of building real world applications and systems using graph computing.
juliacon-2022-18060-introduction-to-graph-computing
JuliaCon
Yadong Li
en
Graph computing is an innovative and hyper-efficient technology for building large and distributed systems or applications, where common challenges include scalability, transparency, explainability, lineage, adaptability and reproducibility. We coined the acronym STELAR for these challenges.
In almost every organization, significant engineering resources and efforts are devoted to addressing the STELAR needs for their core enterprise systems. These efforts are not portable because they are specific to the particular organization and architecture. For example, the solution to improve the scalability of the trading system at JP Morgan is not applicable to Goldman Sachs, as their technology stacks are fundamentally different. There is an enormous waste of time, money, energy and human talents for re-creating bespoke solutions to the same STELAR problems across the industry. The world would be a much better place if we could solve these problems once and for all in enterprise systems. This is the promise of graph computing.
Instead of functions in the traditional programming paradigm, directed acyclic graphs (DAG) are the fundamental building blocks in graph computing. A DAG is a special type of graph, which consists of a collection of nodes and directional connections between them. Acyclic means that these connections do not form any loops. A DAG can also be used as a generic representation of any kind of computing or workflow. Conceptually, any computation, from the simplest formula in a spreadsheet to the most complex enterprise systems, reduces to a DAG. In graph computing, complex DAGs representing entire applications or systems are built by composing smaller and modular DAGs, analogous to function compositions in the traditional programming paradigm.
Compared to the function-centric representation in traditional programming, the DAG is a much better and more convenient representation for building generic solutions to STELAR. Once built, these solutions apply to any enterprise system, since all such systems reduce to DAGs.
The outline of this workshop is as follows:
* Introduction to the key ideas and benefits of graph computing, and how DAGs can help solve the STELAR problems generically.
* Survey of existing graph computing solutions, approaches and tools. We will cover both Python and Julia tools, as well as commercial and open-source solutions.
* Discussion of the key challenges in graph computing and the approaches to them, including graph creation and distribution.
* Hands-on sessions building real-world applications/systems using graph computing. In these sessions, we will use available graph computing tools such as Dask, Dagger.jl and Julius.
* Build a simple task graph and execute it
* Query the graph data after execution
* Build an ML data processing pipeline
* Building generic and reusable patterns using graph composition
* Graph distribution, for building distributed systems
* End to end AAD (adjoint algorithmic differentiation) with graphs
* Build streaming pipelines in graph
* Plenty of time for questions.
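As a taste of the hands-on sessions, a minimal sketch of building and executing a small task graph with Dagger.jl (one of the tools listed above; the arithmetic is purely illustrative):

```julia
using Dagger

# Each @spawn call creates a node in the task DAG; passing one task's
# result into another creates a directed edge between them.
a = Dagger.@spawn 1 + 2      # leaf node
b = Dagger.@spawn 2 * 2      # independent leaf node
c = Dagger.@spawn a + b      # depends on both a and b

fetch(c)                     # walks the DAG and executes it
```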
false
https://pretalx.com/juliacon-2022/talk/WL9FZZ/
https://pretalx.com/juliacon-2022/talk/WL9FZZ/feedback/
Green
Getting started with Julia and Machine Learning
Workshop
2022-07-20T18:00:00+00:00
18:00
03:00
A three-hour introductory workshop for newcomers to Julia and machine
learning. Participants will have training in some technical domain,
for example, in science, economics or engineering. While no prior
experience with Julia or machine learning is needed, it is assumed
participants have Julia 1.7 installed on their computer.
juliacon-2022-18061-getting-started-with-julia-and-machine-learning
JuliaCon
Anthony BlaomSamuel
en
## Overview
In their simplest manifestation, machine learning algorithms extract,
or "learn", from historical data some essential properties enabling
them to respond intelligently to new data (typically,
automatically). For example, spam filters predict whether to designate
a new email as "junk" based on how a user designated a
large number of previous messages. A property valuation site suggests
the sale price for a new home, given its location and other
attributes, based on a database of previous sales.
Julia is uniquely positioned to accelerate developments in machine
learning and there has been an explosion of Julia machine learning
libraries. [MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/)
(Machine Learning in Julia) is a popular toolbox providing a common
interface for interacting with over 180 machine learning models
written in Julia and other languages. This workshop will introduce
basic machine learning concepts, and walk participants through enough
Julia to get started using MLJ.
## Prerequisites
- **Essential.** A computer with [Julia 1.7.3](https://github.com/ablaom/HelloJulia.jl/blob/dev/FIRST_STEPS.md) installed.
- **Strongly recommended.** Workshop resources pre-installed. See [here](https://github.com/ablaom/HelloJulia.jl/wiki/JuliaCon-2022-workshop:-Getting-started-with-Julia-and-MLJ).
- **Recommended.** Basic linear algebra and statistics, such
as covered in first year university courses.
- **Recommended but not essential.** Prior experience with a scripting
language, such as Python, MATLAB or R.
## Objectives
- Be able to carry out basic mathematical operations using Julia,
perform random sampling, define and apply functions, carry out
iterative tasks
- Be able to load data sets and do basic plotting
- Understand what supervised learning models are, and how to evaluate
them using a holdout test set or using cross-validation
- Be able to train and evaluate a supervised learning model using
the MLJ package
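A minimal sketch of the last two objectives using MLJ (assumes the DecisionTree.jl model package is installed; the dataset and model choice are illustrative):

```julia
using MLJ

# Load a small built-in dataset and a model type.
X, y = @load_iris
Tree = @load DecisionTreeClassifier pkg=DecisionTree

# Bind the model to the data, then estimate performance with
# 6-fold cross-validation on a classification measure.
mach = machine(Tree(), X, y)
evaluate!(mach; resampling = CV(nfolds = 6, shuffle = true),
          measure = accuracy)
```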
## Resources
[HelloJulia.jl](https://github.com/ablaom/HelloJulia.jl)
## Format
This workshop will be a combination of formal presentation and live
coding.
false
https://pretalx.com/juliacon-2022/talk/Q7WJ9F/
https://pretalx.com/juliacon-2022/talk/Q7WJ9F/feedback/
Green
GPU accelerated medical image segmentation framework
Workshop
2022-07-21T14:00:00+00:00
14:00
03:00
Medical image segmentation with Julia
Participants can download the data before task 9 from https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
Additionally, you can install the required libraries into your environment:
]add ProgressMeter StaticArrays BSON Distributed Flux Hyperopt Plots MedEye3d Distributions Clustering IrrationalConstants ParallelStencil CUDA HDF5 MedEval3D Colors
juliacon-2022-17247-gpu-accelerated-medical-image-segmentation-framework
JuliaCon
/media/juliacon-2022/submissions/9PKGGH/pretty_VPVkItu.png
Jakub Mitura
en
As preparation for the workshop, I ask participants to download the dataset we will work on ahead of time; it can be found at link [2]. Additionally, you can install the required packages into the environment you will be working in [3]. To fully participate, you need an Nvidia GPU.
Medical image segmentation is a rapidly developing field of computer vision. This area of research requires knowledge of radiologic imaging, mathematics and computer science. To assist researchers, multiple software packages have been developed. However, given the rapidly changing scientific environment, those tools are no longer effective for some users.
Such a situation exists for Julia users, who need support for an interactive programming style that is not common among traditional software tools. Another characteristic of modern programming for three-dimensional medical imaging data is GPU acceleration, which can yield outstanding improvements in algorithm performance when working with 3D medical imaging. Hence, in this work the author presents a set of new Julia software tools designed to fulfil these emerging needs. These tools include: a GPU-accelerated medical image viewer with annotation capabilities, characterised by a very convenient programming interface; a CUDA-accelerated medical segmentation metrics tool that supplies state-of-the-art implementations of the algorithms required to quantify the similarity between an algorithm's output and the gold standard; and, lastly, a set of utility tools connecting these two packages with the HDF5 file system and with preprocessing using MONAI and PythonCall.
The main unique feature of the presented framework is its ease of interoperability with other Julia packages, which, in the rapidly developing ecosystem of scientific computing, may in the author's opinion spark the application of algorithms from fields not widely used in medical image segmentation, such as differentiable programming and topology.
I plan to conduct the workshop assuming only basic knowledge of Julia programming and no medical knowledge at all. Most of the time will be devoted to walking through an end-to-end medical image segmentation example, as in the tutorial available at link [1] below, with code executed live during the workshop. To run some parts of the workshop, users will need a CUDA environment. Because of the complex nature of the problem, some theoretical introductions will also be needed.
Plan for the workshop:
1. Introduction to the medical imaging data format
2. Presentation of loading data and simple preprocessing using MONAI and PythonCall
3. Tutorial presenting how to use the MedEye3d viewer and annotator
4. Implementing the first phase of the example algorithm on the CPU, showing some Julia features supporting work on multidimensional arrays
5. Presenting a further part of the example algorithm using GPU acceleration with CUDA.jl and ParallelStencil, with a short introduction to GPU programming
6. Presenting how to save and retrieve data using HDF5.jl
7. Showing how to apply medical segmentation metrics from MedEval3D, with some introduction to choosing the proper metric for the problem
8. Discussing how one can improve the performance of the algorithm and some planned future directions
[1] https://github.com/jakubMitura14/MedPipe3DTutorial
[2] Participants can download data before task 9 from https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
[3] ]add Flux Hyperopt Plots UNet MedEye3d Distributions Clustering IrrationalConstants ParallelStencil CUDA HDF5 MedEval3D MedPipe3D Colors
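As a flavor of plan step 6, a minimal HDF5.jl sketch for saving and retrieving an array (the file path and dataset names are illustrative):

```julia
using HDF5

# Write a 3D array (e.g. a CT volume) to an HDF5 file, then read it back.
volume = rand(Float32, 64, 64, 32)
path = tempname() * ".h5"

h5open(path, "w") do file
    write(file, "ct/volume", volume)    # stored under group "ct"
end

restored = h5open(path, "r") do file
    read(file, "ct/volume")
end

@assert restored == volume
```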
false
https://pretalx.com/juliacon-2022/talk/9PKGGH/
https://pretalx.com/juliacon-2022/talk/9PKGGH/feedback/
Green
Statistics symposium
Minisymposium
2022-07-22T08:00:00+00:00
08:00
03:00
Statistics is a domain where early-stage package development, and some early applications, have come about in Julia. We think of this mini-symposium as a combination of (a) a report on many interesting recent developments in this field and (b) a bird's-eye view for the people interested in it, helping them assess the state of maturity so as to decide whether Julia is appropriate for their statistics work.
juliacon-2022-17967-statistics-symposium
JuliaCon
Ayush Patnaik
en
1. "Doing applied statistics research in Julia", Ajay Shah (20 minutes)
We show the journey of two applied statistics research papers, done fully in Julia, by researchers who were previously working in R. What was convenient, what were the chokepoints, what were the gains in expressivity and in performance. Based on this, we evaluate the state of maturity of Julia for doing applied statistics. We propose practical pathways for statisticians, and speak to the Julia community about what is required next. We report on recent developments in the field of Julia and statistics.
2. "CRRao: A unified framework for statistical models", Sourish Das (20 minutes)
Many statistical models are available in Julia, and many more will come. CRRao is a consistent framework through which callers interact with a large suite of models. For the end-user, it reduces the cost and complexity of estimating statistical models. It offers convenient guidelines through which development of additional statistical models can take place in the future.
3. "TSx: A time series class for Julia", Chirag Anand (20 minutes)
DataFrames.jl is a powerful system, but expressing the standard tasks of manipulating time series -- e.g. as seen in finance or macroeconomics -- is often cumbersome. We draw on the work of the R community, which has built zoo and xts, to build a time series class, TSx, which delivers a simple set of operators and functions for people working with time series. It constitutes syntactic sugar on top of DataFrames.jl and thus harnesses the capabilities and efficiency of that package. We conduct comparisons of capabilities and performance against zoo and xts in R.
4. "Comparing glm in Julia, R and SAS", Mousum Datta (10 minutes)
glm is an unusually important class of statistical models. We compare the capabilities, correctness and performance of the present glm systems in Julia, R and SAS. We report on recent improvements that have been injected into GLM.jl.
5. "Working with survey data", Ayush Patnaik (10 minutes)
The Julia package Survey.jl builds some of the functionality required for statistical estimators with stratified random sampling. For a limited subset of the capabilities of Thomas Lumley's R package `survey', we show the correctness and the performance gains of the Julia package.
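As context for the glm comparison in talk 4, a minimal GLM.jl sketch fitting a logistic regression (the data is synthetic and purely illustrative):

```julia
using DataFrames, GLM

# Synthetic data: a binary outcome whose probability increases with x.
df = DataFrame(x = collect(1.0:100.0))
df.y = [rand() < 1 / (1 + exp(-(0.05 * x - 2.5))) for x in df.x]

# Fit a logistic regression, the canonical glm example.
model = glm(@formula(y ~ x), df, Binomial(), LogitLink())
coef(model)        # [intercept, slope] estimates
```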
false
https://pretalx.com/juliacon-2022/talk/7JNQCM/
https://pretalx.com/juliacon-2022/talk/7JNQCM/feedback/
Green
JuliaMolSim: Computation with Atoms
Minisymposium
2022-07-22T14:00:00+00:00
14:00
03:00
The JuliaMolSim community is hosting a minisymposium! Come hear about AtomsBase, our project to create a unified interface for representing atomic geometries, as well as packages for simulation (both quantum mechanical and classical particles-based) and machine learning on atomistic systems. Do you have an idea for a package you think the community needs? Participate in our “quick pitch” session and find co-developers to help build it!
juliacon-2022-18135-juliamolsim-computation-with-atoms
JuliaCon
/media/juliacon-2022/submissions/JYDQEB/collisions_hNkPW20.png
Rachel Kurchin
en
The JuliaMolSim community is open to anyone who uses/develops Julia code that is used for simulating/analyzing systems that are resolved at the level of atomic/molecular coordinates. You can learn more about the packages we maintain and join conversations on our Slack workspace by going to our website at [https://juliamolsim.github.io](https://juliamolsim.github.io/) .
Our BoF session from JuliaCon 2021, “Building a Chemistry and Materials Science Ecosystem in Julia,” helped jumpstart the Slack community. A major subsequent output of those ongoing conversations was the development of the AtomsBase interface, defining a common set of functions for specifying atomic geometries. We're really excited about the prospect of this effort enabling interoperability between different types of simulation and analysis, as well as shared code for tasks like visualization and I/O. In fact, it has already begun to have this impact in a number of academic projects with international collaborators, funded by major agencies such as the US Department of Energy. A major part of the strength and impact of these efforts has been substantial investment of effort, from the beginning, by mathematicians, computer scientists, and domain scientists working together, a hallmark of the Julia community writ large and a major part of the reason we're building this community in Julia.
This year, we’re hosting a minisymposium to keep the community going strong, make new connections, show off cool projects, and collect new ideas! Our planned agenda (so far!) is as follows:
1. Introduction to JuliaMolSim in general and AtomsBase in particular with brief showcase of packages adopting the interface so far
2. Some “deeper-dive” talks on packages now using AtomsBase, focusing on updates since last JuliaCon and also elucidating other emerging themes such as support for automatic differentiation (AD) and GPU utilization
1. Chemellia machine learning ecosystem (Rachel Kurchin)
2. Molly.jl particle simulation package (Joe Greener)
3. DFTK.jl density functional theory package (Michael Herbst)
4. CESMIX project (Emmanuel Lujan)
3. Other contributed talks from the JuliaMolSim community, including:
1. Fermi.jl (Gustavo Aroeira)
2. ACE.jl (Christoph Ortner)
3. NQCDynamics.jl (James Gardner)
4. “Quick pitch” session: what’s the next community project a la AtomsBase? Pitch your idea and find collaborators! (If you are interested in pitching, contact the minisymposium organizers and we will be in touch with more details). Some example topics could include:
1. Plotting recipes (e.g. in Makie) for AtomsBase systems
2. An ab initio MD engine based in Molly, utilizing DFTK for energy/force calculations via AtomsBase
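For a concrete sense of the AtomsBase interface discussed above, a minimal sketch (function names follow the AtomsBase API as best understood here and should be checked against the package docs; the molecule is illustrative):

```julia
using AtomsBase, Unitful

# A hydrogen molecule as an isolated (non-periodic) system.
system = isolated_system([
    :H => [0.0, 0.0, 0.0]u"Å",
    :H => [0.0, 0.0, 0.74]u"Å",
])

# Interface functions shared by every AtomsBase-compatible backend:
atomic_symbol(system)     # symbols of all atoms
position(system, 1)       # position of the first atom
```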
false
https://pretalx.com/juliacon-2022/talk/JYDQEB/
https://pretalx.com/juliacon-2022/talk/JYDQEB/feedback/
Red
Hands-on ocean modeling and ML with Oceananigans.jl
Workshop
2022-07-22T15:00:00+00:00
15:00
03:00
Come get your feet wet with Oceananigans.jl, a native Julia, fast, friendly, flexible and fun ocean model. In the first half of this workshop participants will be helped to run and analyze one of several simple ocean problems. The problems relate to state-of-the-art challenges in climate science and computational science. In the second half we will examine Julia language features, packages and design choices that enable Oceananigans.jl.
juliacon-2022-18122-hands-on-ocean-modeling-and-ml-with-oceananigans-jl
Chris Hill, Francis Poulin, Gregory Wagner, Valentin Churavy, Simone Silvestri, Tomas Chor, Suyash Bire, Rodrigo Duran, Jean-Michel Campin
en
Oceananigans.jl is a state-of-the-art ocean modeling tool written from scratch in Julia. Oceananigans uses an underlying finite volume, locally orthogonal staggered-grid fluid modeling paradigm. This allows Oceananigans to support everything from highly idealized large-eddy-simulation studies of geophysical turbulence to large-scale planetary circulation projects. The code is configured for different problems using native Julia scripting. Julia metaprogramming enables wide flexibility in numerical methods and supports large ensemble experiments. This latter style of experiment facilitates semi-automated Bayesian searches that can be used to produce reduced-order models that accurately emulate more detailed physical process models. Julia typing and dispatch are used to support discrete numerics involving staggered numerical grid locations and to support (through the Julia KernelAbstractions.jl package) GPU and CPU execution from a single code base. Advanced graphics with Makie.jl and data management using NCDatasets.jl and JLD2.jl are fully integrated. Integration of Oceananigans within SciML workflows, for developing neural differential equation improvements to physics-based schemes, is also possible.
In this workshop we cover both hands-on execution of a variety of different model configurations and exploring how key features of the Julia language and packages from the Julia ecosystem are used to enable a range of use cases.
The workshop will include two parts. A first part will consist of breakout room sessions. Each room will have a lead who will walk participants through configuring and running an Oceananigans instance on either a cloud resource or on participants local systems. A second part will involve multiple Oceananigans.jl team members walking through the key Julia language aspects that make Oceananigans a flexible and fun tool to use for all manner of scientific ocean modeling problems on Earth and beyond.
Workshop participants will get hands-on experience with real-world, high-end scientific modeling for ocean and fluid problems in Julia. They will also learn how many elements of the Julia language and ecosystem can be used together to create a performant and expressive modeling tool that is also easy to engage with.
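As a preview of the hands-on part, a minimal Oceananigans.jl configuration (the grid size, time step and noise amplitude are illustrative, not a recommended setup):

```julia
using Oceananigans

# A small triply-resolved box of fluid: a "hello world" configuration.
grid = RectilinearGrid(size = (32, 32, 32), extent = (1, 1, 1))
model = NonhydrostaticModel(; grid)

# Seed small random velocity noise and step the simulation forward.
set!(model, u = (x, y, z) -> 0.01 * randn())
simulation = Simulation(model, Δt = 0.01, stop_iteration = 100)
run!(simulation)
```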
false
https://pretalx.com/juliacon-2022/talk/VAHYFE/
https://pretalx.com/juliacon-2022/talk/VAHYFE/feedback/
Green
Introduction to Julia with a focus on statistics (in Hebrew)
Workshop
2022-07-23T10:00:00+00:00
10:00
03:00
This session has moved to Zoom. Please join with Zoom ID: 6376486897 at 2PM Israel time.
This (Hebrew language) workshop provides an introduction to the Julia language for machine learning engineers, data-scientists, and statisticians. Attendees will gain a solid entry point for using Julia as their preferred data analysis tool.
juliacon-2022-16905-introduction-to-julia-with-a-focus-on-statistics-in-hebrew-
/media/juliacon-2022/submissions/F7WDXE/statsJulia_NmURM8n.png
Yoni Nazarathy
en
This Juliacon 2022 workshop in Hebrew (עברית) is aimed at data-scientists, machine learning engineers, and statisticians that have experience with a language like Python or R, but have not used Julia previously. In learning to use Julia, a contemporary "stats based" approach is taken focusing on short scripts that achieve concrete goals. This is similar to the approach of the [Statistics with Julia book](https://statisticswithjulia.org/).
The primary focus is on statistical applications and packages; the Julia language is covered as a by-product of the applications. Thus, this workshop is much more a "how to use Julia for stats" course than a "how to program in Julia" course. This approach may suit statisticians and data scientists who tend to do their day-to-day scripting with a data- and model-based approach, as opposed to a software development approach.
An extensive Jupyter notebook for the workshop together with data files is [here](https://github.com/yoninazarathy/StatisticsWithJuliaFromTheGroundUp-2022). You can install it to follow along. The Jupyter notebook is not in Hebrew.
If you don't already have Julia with IJulia (Jupyter) installed, you can follow the instructions in [this video](https://www.youtube.com/watch?v=KJleqSITuRo). It is recommended that you have Julia 1.7.3 or higher installed.
false
https://pretalx.com/juliacon-2022/talk/F7WDXE/
https://pretalx.com/juliacon-2022/talk/F7WDXE/feedback/
Green
Interactive data visualizations with Makie.jl
Workshop
2022-07-23T14:00:00+00:00
14:00
03:00
Makie.jl is a Julia-native interactive data visualization library. In this workshop, participants will learn how to create complex interactive and static plots, using the full range of tools Makie has to offer. Topics could include writing custom recipes, understanding the scene graph, mastering the layout system, handling complex observable structures and tweaking visual styles. The workshop will also be an opportunity to learn about the architecture and underlying ideas of Makie.
juliacon-2022-18051-interactive-data-visualizations-with-makie-jl
JuliaCon
Julius Krumbiegel, Simon Danisch
en
The participants will follow along while different small interactive visualization projects are coded live, showing how to go from idea to implementation.
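For instance, one small interactive pattern of the kind that will be live-coded is driving a plot from an `Observable` (the sine-wave example is illustrative):

```julia
using GLMakie   # or CairoMakie for static output

# An Observable is a container whose changes propagate to listeners.
phase = Observable(0.0)
xs = range(0, 2π; length = 200)
ys = lift(p -> sin.(xs .+ p), phase)   # recomputed whenever `phase` changes

fig, ax, l = lines(xs, ys)
phase[] = π / 2                        # updating the observable updates the line
fig
```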
false
https://pretalx.com/juliacon-2022/talk/FBLWD3/
https://pretalx.com/juliacon-2022/talk/FBLWD3/feedback/
Green
Julia REPL Mastery Workshop
Workshop
2022-07-24T14:00:00+00:00
14:00
03:00
A fundamental Julia experience is feeling the power of computing at your fingertips - but how many of us squeeze the absolute most out of the Julia REPL? In this workshop, you'll go from 0 to hero in about 3 hours with a whopping collection of all the tips, tricks and goodies you could ever hope for in your Julia REPL experience.
juliacon-2022-16549-julia-repl-mastery-workshop
JuliaCon
Miguel Raz Guzmán Macedo
en
This workshop will be a jam-packed, hands-on tour of the Julia REPL so that beginners and experts alike can learn a few tips and tricks. Every Julia user spends a significant amount of coding time interacting with the REPL - my claim for this workshop is that all Julia users can save themselves more than 3 hours of productive coding time over their careers should they attend this workshop, so why not invest in yourself now?
Plan (pending review) for the material that will be covered:
* Navigation - moving around, basic commands, variables, shortcuts and keyboard combinations, cross language comparison of REPL features, Vim Mode homework
* Internals and configuration - Basic APIs, display control codes, terminals and font support, startup file options, prompt changing, flag configurations
* REPL Modes - Shell mode, Pkg mode, help mode, workflow demos for contributing code fixes, BuildYourOwnMode demo, Term.jl
* Tools and packages - OhMyREPL.jl, PkgTemplates.jl, Eyeball.jl, TerminalPager.jl, AbstractTrees.jl, Debugger.jl, UnicodePlots.jl, ProgressMeters.jl, PlutoREPL.jl assignment
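As a taste of the startup-file topic above, a sketch of a `~/.julia/config/startup.jl` entry that loads a REPL nicety safely (assumes OhMyREPL.jl is installed; everything here is optional):

```julia
# ~/.julia/config/startup.jl -- executed at every Julia startup.

# Load OhMyREPL (syntax highlighting, bracket matching) once the REPL
# is initialized, so that a missing package never breaks startup.
atreplinit() do repl
    try
        @eval using OhMyREPL
    catch err
        @warn "OhMyREPL not installed; run `]add OhMyREPL`" exception = err
    end
end
```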
false
https://pretalx.com/juliacon-2022/talk/PFUHDL/
https://pretalx.com/juliacon-2022/talk/PFUHDL/feedback/
Green
Differentiable Earth system models in Julia
Minisymposium
2022-07-25T14:00:00+00:00
14:00
03:00
This minisymposium will feature the use of the differentiable programming paradigm applied to Earth System Models (ESMs). The goal is to exploit derivative information and seamlessly combine PDE-constrained optimization and scientific machine learning (SciML). Speakers will address (1) Why differentiable programming for ESMs; (2) What ESM applications are we targeting?; and (3) How are we realizing differentiable ESMs? Target ESMs include ice sheet, ocean, and solid Earth models.
juliacon-2022-18023-differentiable-earth-system-models-in-julia
JuliaCon
Patrick Heimbach, Nora Loose, Mathieu Morlighem, Boris Kaus, Chris Hill, Sri Hari Krishna Narayanan, Sarah Williamson
en
The differentiable programming paradigm offers large potential to improve Earth system models (ESMs) in at least two ways: (i) in the context of parameter calibration, state estimation, initialization for prediction, and uncertainty quantification, derivative information (tangent linear, adjoint and Hessian) is a key ingredient; (ii) combining PDE-constrained optimization with SciML approaches may be performed naturally, in a composable way, within the same programming framework. This minisymposium is organized in three parts (all speakers listed are tentative):
1/ Why differentiable programming for ESMs? Speakers will discuss the use of derivative information for PDE-constrained optimization in ice sheet (M. Morlighem, N. Petra), ocean (P. Heimbach) and solid Earth (B. Kaus) modeling; the use of SciML in the context of ESMs (J. Le Sommer, A. Ramadhan); The use of adjoints for sensitivity analysis and uncertainty quantification (N. Loose).
2/ What ESM applications are we targeting? The minisymposium will feature ESM applications in global ocean modeling (C. Hill) and ice sheet modeling (J. Bolibar, L. Raess).
3/ How are we realizing differentiable ESMs? A key algorithmic framework is general-purpose automatic differentiation (AD). The Julia community is developing a number of AD packages, and ESM applications will likely push the envelope of what existing AD tools can do. The minisymposium will present how these tools are being used in the context of ESMs (S. Williamson, M. Morlighem). Furthermore, specific algorithmic challenges in ongoing AD tool development will be highlighted (S. Narayanan/M. Schanen/...).
The minisymposium seeks to engage both the ESM and the AD tool communities to advance their respective capability. There will be time for discussion. Ideally we are targeting a 3-hour mini symposium.
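As a minimal illustration of the general-purpose AD capability discussed above, computing a gradient with Zygote.jl (other Julia AD tools offer similar interfaces; the misfit function and data are purely illustrative):

```julia
using Zygote

# A toy "model-data misfit": squared differences between a prediction
# scaled by a single parameter p and some observations.
xdata = [1.0, 2.0, 3.0]
obs   = [1.0, 2.0, 3.0]
misfit(p) = sum((p .* xdata .- obs) .^ 2)

# Reverse-mode AD gives the exact derivative of the misfit w.r.t. p.
g = gradient(misfit, 0.5)[1]    # -14.0 for this data
```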
false
https://pretalx.com/juliacon-2022/talk/UNVUDM/
https://pretalx.com/juliacon-2022/talk/UNVUDM/feedback/
Green
Modeling of Chemical Reaction Networks using Catalyst.jl
Workshop
2022-07-25T18:00:00+00:00
18:00
03:00
Catalyst.jl is a modeling package for analysis and high performance simulation of chemical reaction networks (CRNs). It defines symbolic representations for CRNs, which can be created programmatically or specified via a domain specific language. Catalyst provides tooling to analyze models, and to translate CRNs to ModelingToolkit-based ODE, SDE, and jump process models. In this workshop we will overview how to generate, analyze, and efficiently solve such models across a variety of applications.
juliacon-2022-16904-modeling-of-chemical-reaction-networks-using-catalyst-jl
JuliaCon
Torkel Loman, Samuel Isaacson
en
Workshop Pluto notebooks will be available at https://github.com/TorkelE/JuliaCon2022_Catalyst_Workshop
At the highest level, Catalyst models can be specified via a domain-specific language (DSL), where they can be concisely written as a list of chemical reactions. Such models are converted into a Symbolics.jl-based intermediate representation (IR), represented as a ModelingToolkit.jl AbstractSystem. This IR acts as a common target for many tools within SciML, enabling them to be applied to Catalyst-based models. Symbolic models can also be directly constructed using the symbolic IR, allowing programmatic construction of CRNs or extensions of DSL-defined CRNs.
In this workshop, we will demonstrate how to generate CRN models through the Catalyst DSL and programmatically via the IR. Catalyst features such as custom rate laws, component-based modeling, and parametric stoichiometry will be explored to demonstrate the breadth of models supported by Catalyst. We will then illustrate how such models can be translated to other symbolic ModelingToolkit-based mathematical representations, and simulated with SciML tooling. Such representations include deterministic ODE models (based on reaction rate equations), stochastic SDE models (based on chemical Langevin equations), and stochastic jump process models (based on the chemical master equation and Gillespie's method). For each of these representations, the DifferentialEquations.jl package provides a variety of solvers that can accurately and efficiently simulate the model's dynamics. We will also demonstrate further tools for analysis of CRN-based models, including methods for parameter fitting, network analysis, calculation of steady states, and bifurcation analysis (through the BifurcationKit.jl package).
To help users with real-world applicability, we will demonstrate how to appropriately use the Catalyst and SciML tooling to scale simulations to tens of thousands of reactions in ways that exploit sparsity, giving easy access to methodologies which outperform competitor packages by orders of magnitude in performance. Aspects such as parallelization of simulations, automatic differentiation usage (in model calibration), and more will be discussed throughout the various topics to give users a complete view of how Catalyst.jl can impact their modeling workflows.
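As a small taste of the DSL-to-ODE workflow described above, a birth-death sketch (syntax follows recent Catalyst versions, which infer species and parameters from the reactions; rates and initial conditions are illustrative):

```julia
using Catalyst, DifferentialEquations

# A birth-death process written in the Catalyst DSL.
rn = @reaction_network begin
    b, ∅ --> X      # birth at constant rate b
    d, X --> ∅      # death at rate d*X
end

# Convert the CRN to an ODE model and simulate it.
u0 = [:X => 10.0]
ps = [:b => 2.0, :d => 0.1]
prob = ODEProblem(rn, u0, (0.0, 50.0), ps)
sol = solve(prob)
```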
false
https://pretalx.com/juliacon-2022/talk/98UQX3/
https://pretalx.com/juliacon-2022/talk/98UQX3/feedback/
Red
Julia in Astronomy & Astrophysics Research
Minisymposium
2022-07-25T18:00:00+00:00
18:00
03:00
This minisymposium aims to provide astronomers and astrophysicists an opportunity to share how Julia has enhanced their science and the challenges they encountered. We aim to identify shared needs (e.g., opportunities for new/upgraded packages) that could significantly accelerate the adoption of Julia among the astronomical research community. A secondary goal is to help strengthen the community of Julia developers active in astronomical research.
juliacon-2022-16365-julia-in-astronomy-astrophysics-research
JuliaCon
Eric B. Ford
en
This mini-symposium aims to help accelerate the adoption of Julia among astronomers and astrophysicists. Astrophysicists have long been among the leaders in High-Performance Computing. Large astronomical surveys continue to create new opportunities for researchers with the skills and tools to harness Big Data efficiently. Early adopters of Julia have developed packages providing functionality commonly needed by the astronomical community (e.g., AstroLib.jl, AstroTime.jl, Cosmology.jl, FITSIO.jl, UnitfulAstro.jl) and/or gained experience applying Julia to their research problems. According to NASA’s Astrophysical Data System, ~30 astronomy papers include “Julia” and “Bezanson et al. (2017)”, with over half of those published since 2021. This mini-symposium invites researchers with experience applying Julia to astronomical research to share their experiences through a series of short talks followed by a panel discussion.
Talks should not emphasize the astronomical methods or conclusions, but rather how using Julia impacted their project. How did Julia enhance their science or their productivity? What challenges related to Julia did they encounter? What work-arounds did they find? What additions or upgrades to the Julia package ecosystem would be helpful for their future projects? …or for accelerating adoption of Julia among the astronomical community? What resources did they use for integrating their research groups and/or collaborators into the Julia community? Where could filling a gap in documentation and/or developing improved training materials be particularly impactful for helping astronomers to transition to Julia?
false
https://pretalx.com/juliacon-2022/talk/DFA3RD/
https://pretalx.com/juliacon-2022/talk/DFA3RD/feedback/
Green
Julia for High-Performance Computing
Minisymposium
2022-07-26T14:00:00+00:00
14:00
03:00
The "Julia for HPC" minisymposium aims to gather current and prospective Julia practitioners in the field of high-performance computing (HPC) from multidisciplinary applications. We invite participation from industry, academia, and government institutions interested in Julia’s capabilities for supercomputing. The goal is to provide a venue for Julia enthusiasts to share best practices, discuss current limitations, and identify future developments in the scientific HPC community.
juliacon-2022-17892-julia-for-high-performance-computing
JuliaCon
/media/juliacon-2022/submissions/LUWYRJ/Julia_for_HPC-minisymposium-juliacon22_AKrxsio.png
Michael Schlottke-Lakemper, Carsten Bauer, Hendrik Ranocha, Johannes Blaschke, Jeffrey Vetter
en
**YouTube Link:** https://www.youtube.com/watch?v=fog1x9rs71Q
As we approach the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important aspects for high-performance computing (HPC). Scientists and developers with interest in Julia for HPC need to know how to leverage the capabilities of the language and ecosystem to address these issues and which tools and best practices can help them to achieve their performance goals.
What do we mean by HPC? While HPC can be mainly associated with running large-scale physical simulations like computational fluid dynamics, molecular dynamics, high-energy physics, climate models, etc., we use a more inclusive definition beyond the scope of computational science and engineering. More recently, rapid prototyping with high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization and in-situ analysis have expanded the scope for defining HPC. For us, the core of HPC is not to run simple test problems faster but involves everything that enables solving challenging problems in simulation or data science, on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered with different vendors' CPUs and accelerators (e.g. GPUs).
In this two-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of eight 10-minute talks. The focus of these overview talks is to introduce and motivate the audience by highlighting aspects making the Julia language beneficial for scientific HPC workflows such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved some time for participants to interact, discuss and share the current landscape of their investments in Julia HPC, while encouraging networking with their colleagues over topics of common interest.
The minisymposium schedule, with confirmed speakers and topics, is as follows:
* 0:00: *William F Godoy (ORNL) & Michael Schlottke-Lakemper (U Stuttgart/HLRS):* **Julia for High-Performance Computing**
* 0:05: *Samuel Omlin (CSCS):* **Scalability of the Julia/GPU stack**
* 0:15: *Simon Byrne (Caltech/CliMA):* **MPI.jl**
* 0:25: Q&A
* 0:30: *Tim Besard (Julia Computing):* **CUDA.jl: Update on new features and developments**
* 0:40: *Julian Samaroo (MIT):* **AMDGPU.jl: State of development and roadmap to the future**
* 0:50: Q&A
* 1:00: *Albert Reuther (MIT):* **Supporting Julia Users at MIT LL Supercomputing Center**
* 1:10: *Johannes Blaschke (NERSC):* **Supporting Julia users on NERSC’s “Cori” and “Perlmutter” systems**
* 1:20: Q&A
* 1:25: *Michael Schlottke-Lakemper (U Stuttgart/HLRS):* **Running Julia code in parallel with MPI: Lessons learned**
* 1:35: *Ludovic Räss (ETH Zurich):* **Julia and GPU-HPC for geoscience applications**
* 1:45: Q&A, Discussion & Wrap up
The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia experiences growth and positions itself in the larger HPC community due to its appeal in scientific computing. It also exemplifies the strength of the existing Julia HPC community that collaboratively prepared this event. We are an international, multi-institutional, and multidisciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We would like to welcome new people from multiple backgrounds who share our interest and bring them together in this minisymposium.
In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2022. During the main conference, **a Birds of a Feather session** will provide an opportunity to bring together the community for more discussions and to allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant for HPC developers and users alike.
false
https://pretalx.com/juliacon-2022/talk/LUWYRJ/
https://pretalx.com/juliacon-2022/talk/LUWYRJ/feedback/
Red
A Complete Guide to Efficient Transformations of data frames
Workshop
2022-07-26T14:00:00+00:00
14:00
03:00
DataFrames.jl provides a comprehensive set of functions that allow performing transformations of tabular data using an operation specification language. This language lets users pass columns from a source data frame, a function to apply to them, and column names to store the result in the target data frame. In this workshop, I will explain the functionalities that it provides. [Here](https://github.com/bkamins/JuliaCon2022-DataFrames-Tutorial) you can find workshop materials.
juliacon-2022-16291-a-complete-guide-to-efficient-transformations-of-data-frames
JuliaCon
Bogumił Kamiński
en
The operation specification language that is part of DataFrames.jl can be used to perform transformations of data frames and split-apply-combine operations on grouped data frames. It is supported by the following functions: `combine`, `select`, `select!`, `transform`, `transform!`, `subset`, and `subset!`.
Following users' requests, the DataFrames.jl operation specification language has evolved over the years to efficiently support virtually any operation typically needed when working with tabular data. However, this means that it has become relatively complex. New users often feel overwhelmed by the number of options it provides.
This workshop aims to give a comprehensive guide to DataFrames.jl operation specification language. The presented material will help users learn this language and will be a reference resource.
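The core `source column => function => target column` pattern the workshop covers can be illustrated with a toy example (a sketch assuming a recent DataFrames.jl release; the data and column names are made up):

```julia
# Minimal illustration of the operation specification language.
using DataFrames, Statistics

df = DataFrame(g = [1, 1, 2], x = [1.0, 3.0, 10.0])

# Split-apply-combine: per-group mean of :x, stored in a new column :x_mean.
res = combine(groupby(df, :g), :x => mean => :x_mean)

# Whole-table transformation keeping the existing columns; the anonymous
# function receives the whole column vector.
df2 = transform(df, :x => (v -> v .- mean(v)) => :x_centered)
```

The same `source => fun => target` pairs work across `select`, `transform`, `subset`, and their in-place `!` variants, which is what makes the language uniform.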
Workshop materials are available for download [here](https://github.com/bkamins/JuliaCon2022-DataFrames-Tutorial).
false
https://pretalx.com/juliacon-2022/talk/83E8CW/
https://pretalx.com/juliacon-2022/talk/83E8CW/feedback/
Green
`do block` considered harmless
Lightning talk
2022-07-27T12:30:00+00:00
12:30
00:10
Is life possible without for-loops? This talk reviews some syntactic constructs available in Julia, especially the do-block, and relates them to programming language theory concepts.
juliacon-2022-18033--do-block-considered-harmless
JuliaCon
Nicolau Leal Werneck
en
First we go through the similarities between for-loops, comprehensions and functions such as `map` and `reduce`. Programming as building useful abstractions. How "constraints liberate", and abstract ideas can impact concrete computational performance. Finally we discuss a couple of new features in Julia 1.9, especially `Unfold`.
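To make the connection between these constructs concrete, here is the same computation written three ways (a generic sketch, not the speaker's actual examples):

```julia
# A for-loop, a comprehension, and `map` with a do-block all express the same
# idea; the do-block is sugar for passing an anonymous function as the first
# argument of `map`.
squares = Int[]
for x in 1:5
    push!(squares, x^2)
end

squares2 = [x^2 for x in 1:5]

squares3 = map(1:5) do x
    x^2
end

@assert squares == squares2 == squares3
```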
false
https://pretalx.com/juliacon-2022/talk/WKNY78/
https://pretalx.com/juliacon-2022/talk/WKNY78/feedback/
Green
Teaching with Julia (my experience)
Lightning talk
2022-07-27T12:40:00+00:00
12:40
00:10
I have a tenured position at the University and I teach several Computer Science courses. In this 10-minute talk I will show how I use Julia as a useful resource for teaching. I will introduce how I have used Pluto/PlutoSliderServer to explain concepts and to allow students to check some implementations. I will also show an online tool for easily creating online quizzes for Moodle. Finally, I have used Julia to prototype algorithms that the students later have to implement.
juliacon-2022-18114-teaching-with-julia-my-experience-
JuliaCon
Daniel Molina
en
I have a tenured position at the University and I teach several Computer Science courses. Some time ago I started to use Julia in my research, and more recently I have started to use it as a teaching resource. In this talk I do not refer to Julia as the programming language for the students (these are final-year courses in which students can use whatever programming language they want), but as a resource for creating tools that help me in teaching.
In this regard, I have used Julia in three different approaches, which I will quickly cover:
- For explaining concepts, I have used Pluto notebooks to visualize them. Also, recently I have used PlutoSliderServer (like Pluto but without the editing option) to allow students to check some calculations they have to do during their exercises. An example is https://mh.danimolina.net/ (in Spanish).
- Due to the pandemic period, the use of online tests in Moodle has increased a lot. There are several formats to create them, but they are not intuitive enough for non-technical people. To solve that, I have created an online tool for easily creating Moodle quizzes (https://pradofacil.danimolina.net/, in Spanish), with a simple syntax that people from different backgrounds can easily use.
- Finally, in some courses students must implement several algorithms. In order to identify the best parameter values and the best implementation approaches, I first solve the problems in Julia. This has also helped me predict how much computational time the implementations will require. In my experience, although a specific C++ implementation can be faster, the average student implementation takes a similar time to the Julia version; many are slower than my Julia implementation due to design decisions in their development.
I consider this talk could be interesting for audience due to the following reasons:
- It gives a very general view of the possibilities of Julia as a teaching resource.
- It can be useful for other teachers giving ideas of integrating Julia in their portfolio.
- It shows my personal experience, and the feedback obtained.
I will be able to prepare the talk both in English and in Spanish. If it is accepted I will create an English version of the shown resources.
false
https://pretalx.com/juliacon-2022/talk/KGLNUH/
https://pretalx.com/juliacon-2022/talk/KGLNUH/feedback/
Green
Simulation of atmospheric turbulence with MicroHH.jl
Lightning talk
2022-07-27T12:50:00+00:00
12:50
00:10
Turbulence in the atmosphere is often studied using 3D simulations as a virtual laboratory. Most experiments require code modification by users, but this is hard, because most codes are written in low-level languages such as Fortran and C++. MicroHH.jl, a Julia port of the dynamical core of MicroHH, has been designed to solve this problem. It is built on MPI.jl, LoopVectorization.jl, CUDA.jl, and HDF5.jl for the IO. Interaction with running simulations is made possible via user scripts.
juliacon-2022-17997-simulation-of-atmospheric-turbulence-with-microhh-jl
JuliaCon
/media/juliacon-2022/submissions/X37FHS/cbl_2d_juliacon_lxvYT0T.png
Chiel van Heerwaarden
en
false
https://pretalx.com/juliacon-2022/talk/X37FHS/
https://pretalx.com/juliacon-2022/talk/X37FHS/feedback/
Green
ProtoSyn.jl: a package for molecular manipulation and simulation
Lightning talk
2022-07-27T13:00:00+00:00
13:00
00:10
ProtoSyn.jl is an open-source alternative to molecular manipulation and simulation software, built with a modular architecture and offering a clean canvas where new protocols and models can be tested and benchmarked. By delivering good documentation, ProtoSyn.jl aims to lower the entry barrier for inexperienced scientists and allow a “plug-and-play” experience when implementing modifications. Learn more on the project’s GitHub page:
https://github.com/sergio-santos-group/ProtoSyn.jl
juliacon-2022-17242-protosyn-jl-a-package-for-molecular-manipulation-and-simulation
/media/juliacon-2022/submissions/XRQRK3/ProtoSyn_ocFsqVN.png
José Pereira
en
The ever-increasing expansion in computer power felt in the last few decades has fuelled a revolution in the way we make science. Modern labs are empowered by large databases, efficient collaboration tools and fast molecular simulation software packages that save both time and money. Preliminary screening of new drug targets or protein designs are just some examples of recent applications of such tools. However, in this field, users have been experiencing a wider gap between expectations and the available technology: existing solutions are quickly becoming outdated, with legacy code and poor documentation.
In the field of protein design, for example, the Rosetta software (and its Python wrapper, PyRosetta) has become ubiquitous in any modern lab, despite suffering from the two-language problem and being virtually opaque to any attempt to modify or improve the source code. This impediment has caused a severe lag in implementing new and modern solutions, such as GPU usage, cloud-based distributed computing or even molecular energy/force calculations using machine learning models. Implementations are eventually added as single in-house scripts or patch code that lacks cohesion and proper documentation, steepening the learning curve for inexperienced users. Despite Rosetta’s massive and warranted success, there’s room for improvement.
ProtoSyn.jl, taking advantage of the growing Julia programming language and community, intends to provide an open-source, robust and simple-to-use package for molecular manipulation and simulation. Some of its functionalities include a complete set of molecular manipulation tools (add, remove and mutate residues, apply dihedral rotations and/or rotamers, apply secondary structures, copy and paste fragments, loops and other structures, include non-canonical amino acids, post-translational modifications and even ramified polymer structures, such as glycoproteins or polysaccharides, among others), common simulation algorithms (such as Monte-Carlo or Steepest Descent), custom energy functions, etc. Much like setting up a puzzle, ProtoSyn.jl offers blocks of functions that can be mixed and matched to produce arbitrarily complex simulation algorithms. Capitalizing on recent advances, ProtoSyn.jl delivers a “plug-and-play” experience: users are encouraged to include novel applications, such as machine learning models for energy/force calculations, by following clean documentation guides, complete with examples and tutorials. Enjoying the advantages of Julia, ProtoSyn.jl can perform calculations on the GPU (using CUDA.jl), employ SIMD technology (using SIMD.jl), carry out distributed computing tasks (using Distributed.jl) and even directly call Python code (using PyCall.jl).
In a nutshell, ProtoSyn.jl intends to be an open-source alternative to molecular manipulation and simulation software, focusing on modularity and proper documentation, and offering a clean canvas where new protocols, algorithms and models can be tested, benchmarked and shared. Learn more on the project’s GitHub page:
https://github.com/sergio-santos-group/ProtoSyn.jl
false
https://pretalx.com/juliacon-2022/talk/XRQRK3/
https://pretalx.com/juliacon-2022/talk/XRQRK3/feedback/
Green
PDDL.jl: A fast and flexible interface for automated planning
Lightning talk
2022-07-27T13:10:00+00:00
13:10
00:10
The Planning Domain Definition Language (PDDL) is a formal specification language for symbolic planning problems and domains that is widely used by the AI planning community. This talk presents PDDL.jl, a fast and flexible interface for planning over PDDL domains. It aims to be what PyTorch is to deep learning, or what PPLs are to Bayesian inference: A general high-performance platform for contemporary AI applications and research programs that leverage automated symbolic planning.
juliacon-2022-17255-pddl-jl-a-fast-and-flexible-interface-for-automated-planning
JuliaCon
/media/juliacon-2022/submissions/A9SRVU/JuliaPlanners_ignaBTJ.png
Xuan (Tan Zhi Xuan)
en
The [Planning Domain Definition Language (PDDL)](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) is a formal specification language for symbolic planning problems and domains that is widely used by the AI planning community. However, most implementations of PDDL are closely tied to particular planning systems and algorithms, and are not designed for interoperability or modular use within larger AI systems. This limitation makes it difficult to support extensions to PDDL without implementing a dedicated planner for that extension, inhibiting the generality, reach, and adoption of automated planning.
To address these limitations, we present [**PDDL.jl**](https://github.com/JuliaPlanners/PDDL.jl), an extensible parser, interpreter, and compiler interface for fast and flexible AI planning. PDDL.jl exposes the semantics of planning domains through a common interface for executing actions, querying state variables, and other basic operations used within AI planning applications. PDDL.jl also supports the extension of PDDL semantics (e.g. to stochastic and continuous domains), domain abstraction for generalized heuristic search (via abstract interpretation), and domain compilation for efficient planning, enabling speed and flexibility for PDDL and its many descendants.
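A brief sketch of the kind of interface described above (function names follow the PDDL.jl README and may differ across versions; the domain/problem files and the action are hypothetical placeholders):

```julia
# Hedged sketch of the PDDL.jl interface; the .pddl file paths below are
# hypothetical and must exist for this to run.
using PDDL

domain  = load_domain("blocksworld-domain.pddl")
problem = load_problem("blocksworld-problem.pddl")

# Build the initial state, execute a ground action, and query a fluent:
state  = initstate(domain, problem)
state2 = execute(domain, state, pddl"(pick-up a)")
holding_a = satisfy(domain, state2, pddl"(holding a)")
```

The point of the common interface is that planners, heuristics, and larger AI systems can be written against these operations without caring whether the backend is the interpreter or a compiled domain.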
Collectively, these features allow PDDL.jl to serve as a general high-performance platform for AI applications and research programs that leverage the integration of symbolic planning with other AI technologies, such as neuro-symbolic reinforcement learning, probabilistic programming, and Bayesian inverse planning for value learning and goal inference.
false
https://pretalx.com/juliacon-2022/talk/A9SRVU/
https://pretalx.com/juliacon-2022/talk/A9SRVU/feedback/
Green
Real-Time, I/O, and Multitasking: Julia for Medical Imaging
Lightning talk
2022-07-27T13:20:00+00:00
13:20
00:10
In this talk, we show how to use Julia to build the system software for a medical imaging device. Such a device is a distributed system that has to coordinate the handling of real-time signals and asynchronous tasks. The talk will highlight the key parts and design patterns of our software stack. We show how we used a variety of Julia features to implement the control logic of the entire imaging device and the coordination and communication with the large number of sub-devices it controls.
juliacon-2022-17942-real-time-i-o-and-multitasking-julia-for-medical-imaging
JuliaCon
Niklas Hackelberg
en
Medical imaging devices are complex distributed systems that can feature a large variety of different parts from power amplifiers and circuitry for safety and control to robots, motors and pumps to signal generation and acquisition units. During measurements all these heterogeneous devices need to be coordinated to produce the data from which a tomographic image can be reconstructed. A central part of a measurement is the synchronous multi-channel acquisition and generation of signals. Contrary to other parts of the measurement process, hard real-time requirements typically apply here.
In this talk, we showcase a Julia software stack for the new tomographic imaging modality Magnetic Particle Imaging (MPI). Our software stack is composed of the MPIMeasurements.jl package and a Julia client library from the RedPitayaDAQServer project. The resulting system is a framework that allows us to load different device combinations from configuration files and perform different measurements with them. In particular, the talk will outline two of the main challenges we faced during development and how they have been resolved using several of Julia's features.
The first challenge is the coordination of a varying number of heterogeneous devices in a maintainable and extendable manner. This is especially important for MPI, as a very common approach to image reconstruction requires a very time-intensive calibration process, where a quick and intertwined coordination of devices could save hours of invaluable scan time. Julia tasks and multi-threading with threads dedicated to specific tasks allowed us to implement a very flexible architecture for managing all the devices.
The second challenge is the configuration of a cluster of data acquisition boards and the transmission of real-time signals from this cluster. The cluster is realized using the low-cost RedPitaya STEMlab hardware and open-source software components provided in our RedPitayaDAQServer project, which include a Julia client library. To achieve real-time signal transmission, the Julia client needs to communicate with the servers running on each data acquisition board of the homogeneous cluster and maintain consistently high network performance to ensure that no data loss occurs. Our solutions here involve Julia tasks, channels, metaprogramming and multiple dispatch to implement an interface to our data acquisition boards that allows for batch execution of commands and high-performance continuous data transmission.
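The task-and-channel coordination pattern mentioned above can be sketched generically. This is illustrative stdlib code, not the MPIMeasurements.jl implementation; the device names and command symbols are made up:

```julia
# Generic sketch: one worker task per device, coordinated through Channels.
function device_worker(name::Symbol, commands::Channel, results::Channel)
    for cmd in commands            # blocks until a command arrives
        # ... device-specific work would happen here ...
        put!(results, (name, cmd, :done))
    end
end

commands = Channel{Symbol}(16)
results  = Channel{Tuple}(16)

# A dedicated task per (simulated) device:
workers = [Threads.@spawn device_worker(n, commands, results) for n in (:robot, :daq)]

put!(commands, :prepare)
put!(commands, :measure)
close(commands)                    # lets the workers' for-loops terminate

foreach(wait, workers)
close(results)
res = collect(results)             # coordination results from all devices
```

Each command is consumed by exactly one worker, and closing the command channel shuts the whole pipeline down cleanly, which is the kind of lifecycle management a measurement framework needs.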
Core packages being presented:
• https://github.com/MagneticParticleImaging/MPIMeasurements.jl
• https://github.com/tknopp/RedPitayaDAQServer
false
https://pretalx.com/juliacon-2022/talk/ZPVDSR/
https://pretalx.com/juliacon-2022/talk/ZPVDSR/feedback/
Green
Build an extensible gallery of examples
Lightning talk
2022-07-27T13:30:00+00:00
13:30
00:10
Examples are an essential part of the package documentation. In this talk, I'll introduce DemoCards.jl as a plugin package for Documenter to manage your demo files. I'll explain its design and show how it is used to build the demos in JuliaImages and JuliaPlots. Package authors and document writers are potential users of this package.
juliacon-2022-17065-build-an-extensible-gallery-of-examples
Johnny Chen
en
false
https://pretalx.com/juliacon-2022/talk/ZFUAHG/
https://pretalx.com/juliacon-2022/talk/ZFUAHG/feedback/
Green
Comrade: High-Performance Black Hole Imaging
Lightning talk
2022-07-27T14:30:00+00:00
14:30
00:10
In 2019 the Event Horizon Telescope (EHT) Collaboration produced the first image of a black hole. This talk details how Julia has been an essential tool for EHT black hole imaging and the advancement of black hole science. I will demonstrate how Julia’s features such as multiple dispatch, differentiable programming, and composability have enabled orders of magnitude performance improvement, moving black hole imaging from clusters to a single laptop.
juliacon-2022-18053-comrade-high-performance-black-hole-imaging
JuliaCon
Paul Tiede
en
In 2019 the global Event Horizon Telescope (EHT) made history by producing the first-ever image of a black hole on horizon scales. However, imaging a black hole is a complicated task. The EHT is a radio interferometer and does not directly produce the on-sky image. Instead, it measures the Fourier transform of the image. Furthermore, the telescope only samples the image at a handful of points in the Fourier domain. As a result of the incomplete Fourier sampling, infinitely many images are consistent with the EHT observations. Quantifying this uncertainty is imperative for any EHT analyses and black hole science as a whole.
Bayesian inference provides a natural avenue to quantify image uncertainty. However, this approach is computationally demanding. Due to computational complexity, low-level languages (e.g., C++) are required to make the calculation feasible. On the other hand, interactivity is critical when modeling, as the usual workflow involves choosing an image structure, applying it to the data, and graphically assessing the results. Incorporating interactivity into the modeling pipeline requires a second package written in Python. Historically, this separation has increased the learning curve and limited the adoption of Bayesian methods.
In the first part of the talk, I will introduce Comrade. Comrade is a Julia package for Bayesian black hole imaging geared towards EHT and next-generation EHT (ngEHT) analyses. This package aims to be highly flexible, including many image models such as geometric, imaging, and physical accretion models. Additionally, Comrade is fast. In fact, it is over 100x faster than other EHT modeling packages while using far fewer resources. This drastic speed increase was due to Julia’s excellent introspection, package management, and automatic differentiation libraries.
In the second part of my talk, I will detail how this performance increase has enabled novel black hole research and will be vital for future black hole science. Within the next decade, the ngEHT will increase its number of observations and its data volume per observation by an order of magnitude to produce higher-quality images. As a result of this significant increase in data, the ngEHT will require new tools. I will explain how Julia can play a vital role in next-generation black hole science and what additional language features are needed.
false
https://pretalx.com/juliacon-2022/talk/3LHDTD/
https://pretalx.com/juliacon-2022/talk/3LHDTD/feedback/
Green
Reaction rates and phase diagrams in ElectrochemicalKinetics.jl
Lightning talk
2022-07-27T14:40:00+00:00
14:40
00:10
I will introduce ElectrochemicalKinetics.jl, a package that implements a variety of models for electrochemical reaction rates (such as Butler-Volmer or Marcus-Hush-Chidsey). It can also fit model parameters and construct nonequilibrium phase diagrams. While the package has already been of great use in electrochemical research applications, I will focus more on the design choices as well as the challenges that have come up in implementing automatic differentiation support.
juliacon-2022-18140-reaction-rates-and-phase-diagrams-in-electrochemicalkinetics-jl
JuliaCon
Rachel Kurchin
en
In electrochemical reaction modeling, there are a variety of mathematical models (such as Butler-Volmer, Marcus, or Marcus-Hush-Chidsey kinetics) used to describe the relationship between the overpotential and the reaction rate (or electric current). Another important entity is the inverse of this function, i.e. given a current, what overpotential would be needed to drive it? Most of the models used do not have analytical inverses, so inverting them requires an optimization problem to be solved.
In ElectrochemicalKinetics.jl, I created a generic interface for computing these reaction rates and overpotentials, as well as for using these quantities in other analyses such as fitting model parameters or constructing nonequilibrium phase diagrams, which are important for predicting, for example, the behavior of a battery under fast charge or discharge conditions. Given a `KineticModel` object `m`, we can always compute the rate constant at a given overpotential with the same syntax, whether `m isa ButlerVolmer` or `m isa Marcus` or any other implemented model type. This allows for easy comparison between these models, including when analyzing real data.
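The dispatch-based design described above can be illustrated with a hypothetical sketch. This is not the package's actual API; the type names, field names, and `rate` function here are invented for illustration:

```julia
# Hypothetical sketch of a generic, multiple-dispatch kinetic-model interface.
abstract type KineticModel end

struct ButlerVolmer <: KineticModel
    j0::Float64   # exchange current density
    α::Float64    # transfer coefficient
end

# Current as a function of overpotential η (Butler-Volmer form),
# with kT in eV at room temperature as a default.
rate(m::ButlerVolmer, η; kT = 0.0257) =
    m.j0 * (exp(m.α * η / kT) - exp(-(1 - m.α) * η / kT))

m = ButlerVolmer(1.0, 0.5)
j = rate(m, 0.1)   # same call syntax regardless of the concrete model type
```

Adding a `Marcus` type with its own `rate` method would leave every caller unchanged, which is the design benefit the abstract claims: downstream fitting and phase-diagram code can be written once against `KineticModel`.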
We can also construct nonequilibrium phase diagrams, to, for example, understand and predict lithium intercalation behavior in a battery at various charge or discharge rates. Building these phase diagrams requires calling the inverse function mentioned above and using it within another optimization (to satisfy the thermodynamic common-tangent condition), making automatic differentiation challenging. I will also discuss some of these challenges and the solutions I have found for them so far.
false
https://pretalx.com/juliacon-2022/talk/LJYRGJ/
https://pretalx.com/juliacon-2022/talk/LJYRGJ/feedback/
Green
RVSpectML: Precision Velocities from Spectroscopic Time Series
Lightning talk
2022-07-27T14:50:00+00:00
14:50
00:10
Astronomers have detected nearly a thousand exoplanets by precisely charting the radial velocity (RV) wobble of their host stars. The [RVSpectML](https://rvspectml.github.io/RvSpectML-Overview/) family of packages is a new, open-source, modular and performant pipeline for measuring radial velocities and stellar variability indicators from spectroscopic time-series. This talk aims to give potential users and/or developers an overview of the component packages and their status.
juliacon-2022-17683-rvspectml-precision-velocities-from-spectroscopic-time-series
JuliaCon
/media/juliacon-2022/submissions/BLBKZM/71478307_RzFeKVm.png
Eric B. Ford
en
*Purpose:* The RVSpectML family of packages provides performant implementations of both traditional methods for measuring precise radial velocities (e.g., computing RVs from CCFs or template matching) and a variety of physics-informed machine learning-based approaches to mitigating stellar variability (e.g., Doppler-constrained PCA, Scalpels, custom line lists, Gaussian process latent variable models). It aims to make it practical for researchers to experiment with new approaches. Additionally, it aims to help astronomers improve the robustness of exoplanet discoveries by exploring the sensitivity of their results to choice of data analysis algorithm.
*Context:* Recently, NASA and NSF chartered the [Extreme Precision Radial Velocity Working Group](https://exoplanets.nasa.gov/exep/NNExplore/EPRV/) to recommend a plan for detecting potentially Earth-like planets around other stars. Their recommendations included developing a modular, customizable, and open-source pipeline for analyzing spectroscopic timeseries data from multiple instruments. The [RVSpectML](https://rvspectml.github.io/RvSpectML-Overview/) family of packages directly addresses this need.
false
https://pretalx.com/juliacon-2022/talk/BLBKZM/
https://pretalx.com/juliacon-2022/talk/BLBKZM/feedback/
Green
State of JuliaGeo
Lightning talk
2022-07-27T15:00:00+00:00
15:00
00:10
[JuliaGeo](https://juliageo.org) is a community that contains several related Julia packages for manipulating, querying, and processing geospatial geometry data. We aim to provide a common interface between geospatial packages. In 2022 there has been a big push to have parity with the Python geospatial packages, such as rasterio and geopandas. In this 10 minute talk, we'd like to show these improvements---both in code and documentation---during a tour of the geospatial ecosystem.
juliacon-2022-18144-state-of-juliageo
JuliaCon
Maarten PronkJosh DayRafael Schouten
en
We'll showcase the new traits-based release of GeoInterface.jl and work on GeoDataFrames.jl, GeoArrays.jl, and Rasters.jl, as well as new packages such as GeoFormatTypes.jl, Extents.jl, and GeoAcceleratedArrays.jl. We will conclude with future plans, such as enabling geospatial operations in DTables.jl using Dagger.jl.
Links to the [slides](https://app.box.com/s/7dysp78eqlo2b201nx795f0efaci2il6) and [demo](https://app.box.com/s/r5ulbktmqinl732ixjw5h9xyxy6h0mus).
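The traits idea behind the new GeoInterface.jl release can be sketched generically in plain Julia (the function names here are illustrative stand-ins, not the exact GeoInterface.jl API): geometry types opt in by declaring a trait, and consumers dispatch on the trait rather than on the concrete type.

```julia
# Generic sketch of the traits pattern (names illustrative, not the real API).
abstract type AbstractGeometryTrait end
struct PointTrait <: AbstractGeometryTrait end
struct LineStringTrait <: AbstractGeometryTrait end

# A package-local geometry type...
struct MyPoint; x::Float64; y::Float64; end

# ...opts into the interface by declaring its trait and coordinate accessors.
geomtrait(::MyPoint) = PointTrait()
ncoords(::PointTrait, p::MyPoint) = 2
getcoord(::PointTrait, p::MyPoint, i) = i == 1 ? p.x : p.y

# Consumers dispatch on the trait, never on MyPoint itself:
coords(geom) = [getcoord(geomtrait(geom), geom, i) for i in 1:ncoords(geomtrait(geom), geom)]

coords(MyPoint(1.0, 2.0))   # [1.0, 2.0]
```

Because consumers only see the trait, any package can plug its own geometry types into shared algorithms without depending on other geometry packages.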
false
https://pretalx.com/juliacon-2022/talk/WCKJQB/
https://pretalx.com/juliacon-2022/talk/WCKJQB/feedback/
Green
Towards Using Julia for Real-Time applications in ASML
Talk
2022-07-27T15:10:00+00:00
15:10
00:30
ASML is a 30,000+ employee company and the world leader in photolithography systems, which are crucial for semiconductor manufacturing. Over the last two years a community of Julia enthusiasts has been running pilot projects to assess the opportunities Julia offers for rapid development of early proofs of concept and, subsequently, rapid deployment in prototypes and, where possible, actual products. Like other robotic systems, ASML lithography systems have hard real-time....
juliacon-2022-16944-towards-using-julia-for-real-time-applications-in-asml
JuliaCon
Francesco Fucci
en
... requirements, which means that software execution must be highly performant and deterministic (i.e., predictable and reproducible). As such, design engineers must consider various aspects when developing software, such as fine-grained control of memory, optimal design of data types and modeling algorithms, and efficient CPU cache utilization.
User-controlled garbage collection (or memory management) and system image binary contents (e.g., absence of JIT, removal of metadata, etc.) proved to be essential aspects for making Julia accepted as a language of choice in such a complex domain, compared to more established (lower-level) languages like C and C++. The goal of this talk is to discuss the strategies and techniques we explored to enable us to use Julia in the on-line execution of lithography models.
We believe that this work will provide new insights into Julia's future by opening up opportunities for further adoption of Julia in complex industrial software systems.
false
https://pretalx.com/juliacon-2022/talk/GUQBSE/
https://pretalx.com/juliacon-2022/talk/GUQBSE/feedback/
Green
Opening remarks
Lightning talk
2022-07-27T16:30:00+00:00
16:30
00:10
Opening remarks
juliacon-2022-21373-opening-remarks
en
false
https://pretalx.com/juliacon-2022/talk/YN8QPM/
https://pretalx.com/juliacon-2022/talk/YN8QPM/feedback/
Green
Keynote- Erin LeDell
Keynote
2022-07-27T16:40:00+00:00
16:40
00:45
Keynote- Erin LeDell
juliacon-2022-21232-keynote-erin-ledell
JuliaCon
Erin LeDell
en
Keynote- Erin LeDell
false
https://pretalx.com/juliacon-2022/talk/NPPKUW/
https://pretalx.com/juliacon-2022/talk/NPPKUW/feedback/
Green
Julia Computing Sponsored Talk
Platinum sponsor talk
2022-07-27T17:25:00+00:00
17:25
00:15
Julia Computing's mission is to develop products that bring Julia's superpowers to its customers. Julia Computing's flagship product is JuliaHub, a secure, software-as-a-service platform for developing Julia programs, deploying them, and scaling to thousands of nodes.
juliacon-2022-21235-julia-computing-sponsored-talk
JuliaCon
en
false
https://pretalx.com/juliacon-2022/talk/AL8VGC/
https://pretalx.com/juliacon-2022/talk/AL8VGC/feedback/
Green
AWS Sponsor Talk
Gold sponsor talk
2022-07-27T17:40:00+00:00
17:40
00:10
Amazon Web Services sponsor Talk
juliacon-2022-21507-aws-sponsor-talk
JuliaCon
en
false
https://pretalx.com/juliacon-2022/talk/R7AYWY/
https://pretalx.com/juliacon-2022/talk/R7AYWY/feedback/
Green
Optimization of bike manufacturing and distribution (use-case)
Lightning talk
2022-07-27T19:10:00+00:00
19:10
00:10
This is a use case of applying Julia to the planning and optimization of production in one of the largest bicycle manufacturing plants in Europe. The optimization model was implemented using JuMP and custom-made heuristics. The Julia solution increased the profitability of the manufacturing plant by over 10% (compared to the previous approach), and the optimal part allocation made it possible to increase bike production volume by 25%.
juliacon-2022-18007-optimization-of-bike-manufacturing-and-distribution-use-case-
JuliaCon
Przemysław Szufel
en
Kross S.A. (https://kross.eu/) is one of the largest bicycle manufacturers in Europe, with a production capacity of up to 1 million bikes a year. The company also exports its products to over 50 countries around the globe. A problem currently facing the entire bicycle manufacturing industry is the shortage of various key bike components due to COVID-19 supply chain disruptions. The goal of the company is to maximize customer (retailer) satisfaction while simultaneously meeting all business constraints with regard to production (part availability, assembly line capacity) and the observed demand for bikes (taking into consideration possible bike substitution, pricing, and discount policies). In order to optimize bicycle production and the distribution plan, we built a mathematical model of the manufacturing plant. The basic model formulation is an NP-hard Mixed Integer Linear Programming optimization problem with 4,000,000 decision variables and over 100,000,000 business constraints. The model was implemented in the Julia programming language using the JuMP package, along with Julia's linear algebra features and several heuristics and algebraic transformations. It was subsequently solved using custom-designed heuristics as well as solver packages. This data science project had a huge overall effect on the customer's business: the computational model made it possible to manufacture 25% more bikes and yields a 10% higher total profitability for the bike factory compared to the best recommendations of the leading ERP solution previously used by the company for production planning.
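As a toy illustration of the heuristic side of such a model (entirely hypothetical numbers and logic; the production system is a JuMP MILP with custom heuristics at vastly larger scale), a greedy allocation of one scarce component across bike models might look like:

```julia
# Hypothetical toy heuristic: allocate a scarce component across bike models
# to maximize profit, most profitable models first. Purely illustrative.
function allocate_parts(stock::Int, demand::Vector{Int}, profit::Vector{Float64})
    order = sortperm(profit; rev = true)    # most profitable models first
    plan = zeros(Int, length(demand))
    for m in order
        q = min(demand[m], stock)           # build as many as parts allow
        plan[m] = q
        stock -= q
        stock == 0 && break
    end
    return plan
end

# 100 units of a scarce part, three bike models (demand, unit profit made up):
plan = allocate_parts(100, [60, 80, 50], [120.0, 200.0, 90.0])
```

Here the heuristic fully serves the most profitable model (80 units), spends the remaining 20 parts on the next model, and leaves the least profitable one unbuilt.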
false
https://pretalx.com/juliacon-2022/talk/DBS3SS/
https://pretalx.com/juliacon-2022/talk/DBS3SS/feedback/
Green
TintiNet.jl: a language model for protein 1D property estimation
Lightning talk
2022-07-27T19:20:00+00:00
19:20
00:10
AI has inaugurated a new era in bioinformatics, to the point where contemporary language models can extract structural information from single protein sequences. Contributing to this field, we built TintiNet.jl, a 100% open-source, open-data, Julia-based portable language model to predict 1D protein structural properties. Our model achieves top performance, both computational and predictive, when compared to other modern algorithms, with only a fraction of the parameter count.
juliacon-2022-18147-tintinet-jl-a-language-model-for-protein-1d-property-estimation
JuliaCon
Guilherme Fahur Bottino
en
The objective of TintiNet.jl is to improve the current state of single-sequence-based prediction of 1D protein structural properties by drastically reducing the size of the models employed while preserving or improving their raw predictive power.
Our main design principles were to avoid inherently serial processing layers (such as recurrent neural networks) and to employ encoding layers that could grow deeper without a steep increase in computational complexity. Our solution was to develop a hybrid convolutional-transformer architecture using the Julia language, the Flux.jl framework, and the Transformers.jl layers contributed to Flux.jl, as well as several BioJulia packages (BioSequences.jl, BioStructures.jl, BioAlignments.jl, and FASTX.jl). The project is 100% open-source and open-data, and scripts and procedures implementing the methodology are available at https://github.com/hugemiler/TintiNet.jl.
By training and evaluating our model on an extensive collection of over 30,000 protein sequences, we demonstrate that this architecture achieves a similar degree of merit (classification accuracy and regression error) compared to the three most modern, state-of-the-art models. Since it has far fewer parameters than its alternatives, it occupies much less memory and generates predictions up to 10 times faster.
false
https://pretalx.com/juliacon-2022/talk/PMAYRF/
https://pretalx.com/juliacon-2022/talk/PMAYRF/feedback/
Green
PtyLab.jl - Ptychography Reconstruction
Lightning talk
2022-07-27T19:30:00+00:00
19:30
00:10
Conventional ptychography is a lensless imaging technique which captures a sequence of light diffraction patterns to solve the optical phase problem. The resulting datasets are large and typically cannot be solved directly. Instead, iterative reconstruction algorithms with a low runtime memory footprint are employed.
Here we present PtyLab.jl, software for ptychographic data analysis, and demonstrate how a functional programming style in Julia allows for performant iterative algorithms.
juliacon-2022-16929-ptylab-jl-ptychography-reconstruction
JuliaCon
Felix WechslerLars Loetgering
en
Conventional ptychography is a powerful technique since it can retrieve the phase and amplitude of an object, which are usually not accessible by most common imaging techniques.
The drawback of this method is that it requires a stack of images taken at different displacements of the object with respect to a probe beam (such as a Gaussian laser beam).
The recorded images are the intensity of the diffraction pattern of the object illuminated with the probe.
Via iterative reconstruction algorithms one can retrieve amplitude and phase of both the probe and the object. To achieve reasonable runtimes, the algorithms require low memory consumption.
In PtyLab.jl we achieve this with a functional style of programming, where buffers are allocated once at the beginning of the reconstruction and captured inside the different functions.
Furthermore, we demonstrate that this style, combined with Julia, achieves reasonable speed-ups in comparison to MATLAB and Python.
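A minimal sketch of this buffer-capturing style (the names and the update rule are illustrative toys, not PtyLab.jl's API):

```julia
# Sketch: a reconstruction step allocates its scratch buffer once, inside a
# closure, so the hot loop stays allocation-free. Purely illustrative.
function make_update_step(n::Int)
    buffer = zeros(ComplexF64, n)       # scratch buffer, allocated exactly once
    # The returned closure reuses `buffer` on every call.
    function update!(object, probe, measured)
        @. buffer = object * probe                      # forward model: exit wave
        @. object += conj(probe) * (measured - buffer)  # loosely ePIE-flavored toy correction
        return object
    end
    return update!
end

step! = make_update_step(8)
object = zeros(ComplexF64, 8)
probe = ones(ComplexF64, 8)
measured = fill(2.0 + 0.0im, 8)
step!(object, probe, measured)   # with a unit probe, one toy step recovers `measured`
```

Repeated calls to `step!` reuse the same captured buffer, which is the essence of keeping the iterative loop's memory footprint low.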
false
https://pretalx.com/juliacon-2022/talk/YUYXMM/
https://pretalx.com/juliacon-2022/talk/YUYXMM/feedback/
Green
Cropbox.jl: A Declarative Crop Modeling Framework
Lightning talk
2022-07-27T19:40:00+00:00
19:40
00:10
[Cropbox.jl](https://github.com/cropbox/Cropbox.jl) provides a domain specific language for developing crop models.
juliacon-2022-18176-cropbox-jl-a-declarative-crop-modeling-framework
JuliaCon
Kyungdahm Yun
en
Crop models describe how agricultural crops grow under dynamic environmental conditions and management practices. The models have many applications in agricultural science including, but not limited to, predicting crop yields under climate change scenarios and finding an optimal strategy for maximizing yield. Crop modeling can encompass multiple aspects of research activity, but practically speaking, it is a task of formulating quantitative knowledge about the crops and translating it into a computer program.
Many crop models were traditionally developed in imperative programming languages, where unrestricted control flow and state mutation could easily lead to error-prone code and inevitable technical debt. Model developers and model users were also often left in two disconnected workflows due to the lack of an interactive programming environment.
Cropbox is a new modeling framework that brings a declarative approach to crop modeling and consolidates model development and use into a streamlined workflow implemented on the Julia ecosystem. Based on the insight that a crop model is essentially an integrated network of generalized state variables, modelers write down a high-level specification of the model *system*, represented by a collection of *variables* with specific associated behaviors. The framework then analyzes the specification and automatically generates lower-level Julia code that works with regular functions implementing common features such as running simulations, managing configurations, evaluating with common metrics, calibrating parameters, visualizing results, and manipulating interactive plots.
In this talk, I will briefly introduce the design and implementation of Cropbox and demonstrate some modeling applications such as a coupled leaf gas-exchange model ([LeafGasExchange.jl](https://github.com/cropbox/LeafGasExchange.jl)), a whole-plant garlic growth model ([Garlic.jl](https://github.com/cropbox/Garlic.jl)), and a 3D root structure growth model ([CropRootBox.jl](https://github.com/cropbox/CropRootBox.jl)).
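The "model as a network of state variables" idea can be sketched in plain Julia (a hypothetical toy with made-up variables; Cropbox's real `@system` macro generates far more than this):

```julia
# Toy declarative spec: each variable names its dependencies and an update rule.
# Illustrative only; not the Cropbox.jl API.
spec = Dict(
    :temperature => (deps = Symbol[],       f = _ -> 25.0),
    :growth_rate => (deps = [:temperature], f = v -> 0.1 * (v[:temperature] - 10.0)),
    :biomass     => (deps = [:growth_rate], f = v -> 2.0 + v[:growth_rate]),
)

# The "framework": resolve a variable by recursively evaluating its dependencies,
# caching each result so every variable is computed exactly once.
function evaluate(spec, name, cache = Dict{Symbol,Float64}())
    haskey(cache, name) && return cache[name]
    deps = Dict(d => evaluate(spec, d, cache) for d in spec[name].deps)
    cache[name] = spec[name].f(deps)
end

evaluate(spec, :biomass)   # 2.0 + 0.1 * (25.0 - 10.0) = 3.5
```

The modeler only declares variables and relationships; the evaluation order falls out of the dependency network, which is the core of the declarative approach described above.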
false
https://pretalx.com/juliacon-2022/talk/ET78DS/
https://pretalx.com/juliacon-2022/talk/ET78DS/feedback/
Green
GapTrain: a faster and automated way to generate GA potentials
Lightning talk
2022-07-27T19:50:00+00:00
19:50
00:10
Chemical predictions have gained ground in the last decade as a way to automate the streamlining of chemical reactivity studies across multiple substrates. This procedure requires modeling interatomic potentials, which can be done by fitting these potentials to data obtained at the quantum-mechanical level. The aim of this work is therefore to propose GapTrain.jl, a fast, automatic, and broadly applicable framework to develop Gaussian approximation potentials from hundreds or thousands of data points.
juliacon-2022-18055-gaptrain-a-faster-and-automated-way-to-generate-ga-potentials
JuliaCon
/media/juliacon-2022/submissions/CJ3XLV/png_20220411_082238_0000_i24wbob.png
Letícia Madureira
en
## Introduction
Molecular simulations are a key tool in computational chemistry for reproducing experimental reality. The accuracy of these models depends on a number of elements, such as the inclusion of the solvation medium. Thus, interatomic potentials combined with molecular dynamics and Monte Carlo (MC) methods have been widely applied to explore potential energy surfaces. Moreover, most of these potentials are parameterised for isolated entities with fixed connectivity and are thus unable to describe bond breaking/forming processes.
Machine learning approaches have revolutionized force-field-based simulations and can be implemented for the entire periodic table. Within small chemical subspaces, models can be built using neural networks (NNs), kernel-based methods such as the Gaussian Approximation Potential (GAP) framework or gradient-domain machine learning (GDML), and linear fitting with properly chosen basis functions, each with different data requirements and transferability. GAPs have been used to study a range of elemental, multicomponent inorganic, gas-phase organic molecular, and, more recently, condensed-phase systems, such as methane and phosphorus. These potentials, while accurate, have required considerable computational effort and human oversight. Indeed, condensed-phase NN and GAP fitting approaches typically require several thousand reference ("ground truth") evaluations.
In the present work, with a view to developing potentials to simulate solution-phase reactions, we consider bulk water as a test case and develop a strategy which requires just hundreds of total ground-truth evaluations and no a priori knowledge of the system apart from the molecular composition. We show how this methodology is directly transferable to different chemical systems in the gas phase as well as in implicit and explicit solvent, focusing on its applicability to a range of scenarios that are relevant in computational chemistry.
false
https://pretalx.com/juliacon-2022/talk/CJ3XLV/
https://pretalx.com/juliacon-2022/talk/CJ3XLV/feedback/
Green
Using Julia for Observational Health Research
Lightning talk
2022-07-27T20:00:00+00:00
20:00
00:10
Observational health research is a domain of health informatics centered around the use of what is known as "Real World Data". This data comes in several different modalities, standards, and levels of quality. Through efforts done in JuliaHealth, JuliaInterop, and associated communities, the ability to work with this data is now fully realized. Through this talk, viewers will see how an observational health study can be conducted with Julia and how similar tools can be adapted to their research.
juliacon-2022-18161-using-julia-for-observational-health-research
JuliaCon
Jacob Zelko
en
A single patient encounter with a health care provider can produce an enormous amount of Real World Data (RWD). Per the United States Food and Drug Administration, RWD "relates to patient health status and/or the delivery of health care routinely collected from a variety of sources." Some examples of RWD are electronic health records, medical claims, and mobile device data. Julia is primed to handle the computation required to derive clinical significance from RWD in the domain of observational health research.
Historically, however, Julia's ecosystem has not been mature enough to participate directly in observational health research involving large amounts of RWD. To utilize this data effectively, the open science community OHDSI (Observational Health Data Sciences and Informatics) was formed. The core standard OHDSI has developed for handling RWD, now being rapidly adopted worldwide, is the Observational Medical Outcomes Partnership Common Data Model, commonly referred to as the OMOP CDM. Traditionally, the tools OHDSI has built to extract and analyze patient information from the OMOP CDM have been written in the R programming language. As a result, other research communities have been precluded from participating directly in this space.
I am pleased to announce in this talk that the Julia ecosystem has now reached a level of maturity sufficient to bridge to observational health research communities such as OHDSI, enabling future observational health researchers to leverage the benefits of Julia. In this talk, I will provide a gentle introduction to observational health research and popular Common Data Models such as OMOP. This will lead into a discussion of lessons learned from an observational health study I performed, "Assessing Health Equity in Mental Healthcare Delivery Using a Federated Network Research Model", which used Julia as its main driving engine. Finally, tools available in the Julia ecosystem from JuliaHealth, JuliaInterop, and others that enable bridging between these two communities will be highlighted.
By the end of this talk, it should be clear to potential researchers from the Julia community that the Julia ecosystem is mature enough to participate in observational health research endeavors. Furthermore, potential researchers can take inspiration from the methods I share in this talk for their own work. My end goal is to show how these communities can be bridged, so that novel collaborations can form and the benefits of using Julia can be easily accessed in observational health research.
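As a purely illustrative toy of what querying OMOP-CDM-shaped data can look like (NamedTuples standing in for database rows; not the JuliaHealth/OHDSI API, and the concept IDs are for illustration):

```julia
# Toy OMOP-CDM-shaped tables as vectors of NamedTuples (illustrative only).
person = [
    (person_id = 1, gender_concept_id = 8532, year_of_birth = 1980),
    (person_id = 2, gender_concept_id = 8507, year_of_birth = 1955),
    (person_id = 3, gender_concept_id = 8532, year_of_birth = 1948),
]
condition_occurrence = [
    (person_id = 1, condition_concept_id = 201826),   # e.g., a diabetes concept
    (person_id = 3, condition_concept_id = 201826),
]

# Cohort definition: persons with a given condition, born before a cutoff year.
function cohort(person, conditions, concept_id, cutoff)
    with_cond = Set(r.person_id for r in conditions if r.condition_concept_id == concept_id)
    return [p.person_id for p in person if p.person_id in with_cond && p.year_of_birth < cutoff]
end

cohort(person, condition_occurrence, 201826, 1970)   # → [3]
```

Real studies run this kind of cohort logic against full OMOP CDM databases via SQL-generating tooling; the toy only conveys the shape of the data model.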
false
https://pretalx.com/juliacon-2022/talk/CPS73H/
https://pretalx.com/juliacon-2022/talk/CPS73H/feedback/
Green
Finding Fast Radio Bursts, Faster
Lightning talk
2022-07-27T20:10:00+00:00
20:10
00:10
In radio astronomy, "Fast Radio Bursts" are short, high-energy signals of unknown origin. So far, relatively few have been discovered, as many telescopes weren't designed to observe radio transients. Additionally, searching real-time spectral data is an expensive task, with only a few aging packages available to automate it. In this talk, we'll look at using CUDA.jl and the Julia ecosystem to accelerate the hunt for these mysterious sources and the integration into an FRB detection pipeline.
juliacon-2022-18240-finding-fast-radio-bursts-faster
JuliaCon
/media/juliacon-2022/submissions/ML8N7S/dm_Bwyfr56.png
Kiran Shila
en
One of the main bottlenecks in a fast radio burst detection pipeline is first-pass pulse detection. FRBs, pulsars, airplane radars, and microwave ovens opened prematurely can all produce pulse-like profiles. Processing the dynamic spectral data in real time to limit the search space and produce a list of candidates is an important first step.
When wideband radio pulses travel through space, the ionized interstellar medium disperses the pulse in time. This results in a received spectrum whose peak descends in frequency as a function of time, instead of all frequencies arriving at once. For a given time/frequency data point, the pulse signature may be buried under noise. As we don't know the dispersion of an arbitrary pulse a priori, we have to integrate along all possible dispersion curves for every start time to find a possible correlation.
Using both hand-written CUDA.jl kernels and Julia's GPU array abstractions, we can implement a performant divide-and-conquer approach to search for these pulses. Then, leveraging the Julia ecosystem, we can embed this transformation into a modern, integrated FRB detection pipeline.
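A CPU toy of the brute-force dispersion search described above (the talk's version runs as CUDA.jl kernels; the function, delay model, and synthetic data here are illustrative):

```julia
# Toy incoherent dedispersion search (illustrative only, not the talk's code).
# spectrum[f, t]: power at frequency channel f and time sample t.
# For one trial set of per-channel delays, sum along the dispersion curve at
# every candidate start time; a pulse shows up as a peak in the scores.
function dedisperse(spectrum::AbstractMatrix, delays::AbstractVector{Int})
    nchan, ntime = size(spectrum)
    nstart = ntime - maximum(delays)
    scores = zeros(eltype(spectrum), nstart)
    for t0 in 1:nstart
        s = zero(eltype(spectrum))
        for f in 1:nchan
            s += spectrum[f, t0 + delays[f]]   # follow the dispersion curve
        end
        scores[t0] = s
    end
    return scores
end

# Synthetic example: a pulse arriving later in each successive channel.
nchan, ntime = 4, 32
spec = zeros(nchan, ntime)
for f in 1:nchan
    spec[f, 5 + 2 * (f - 1)] = 1.0             # delay grows linearly with channel
end
delays = [2 * (f - 1) for f in 1:nchan]        # matching trial dispersion curve
scores = dedisperse(spec, delays)
argmax(scores)                                  # start time aligning all channels
```

In practice one repeats this sum for many trial dispersion measures, which is exactly the embarrassingly parallel workload that maps well onto GPU kernels.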
false
https://pretalx.com/juliacon-2022/talk/ML8N7S/
https://pretalx.com/juliacon-2022/talk/ML8N7S/feedback/
Green
Using contexts and capabilities to provide privacy protection
Lightning talk
2022-07-27T20:20:00+00:00
20:20
00:10
Privacy is an important aspect of the internet today. Providing privacy protection, however, is a difficult problem, especially when you work with many data processes and systems. To solve this problem holistically, privacy needs to be a built-in feature, not an afterthought. I will talk about how to solve this problem with the idea of contexts and capabilities.
juliacon-2022-18042-using-contexts-and-capabilities-to-provide-privacy-protection
JuliaCon
Tom Kwong
en
Privacy is an important aspect of the internet today. When you need to use a particular service, you often need to hand over some personal information. The service provider typically provides some protection about the use of your personal information based upon its privacy policy.
From the service provider's perspective, this is not a simple task. Suppose that you have collected your users' email addresses and made the promise that you do not share them with any third-party vendor. In a large company, there could be many systems and processes that make use of email addresses. How do you ensure that none of your code leaks information to any third-party vendors?
The problem can be solved with contexts and capabilities. Contexts are environmental information that tracks the purpose of your code. Capabilities represent a set of purposes that your code can be used for. As an example, suppose bar is a function that writes sensitive information, such as email addresses, to a user database and has the capability "user-management". Then, when a function foo() calls bar(), the call is allowed as long as foo's stated capabilities also include "user-management".
This talk will cover more about the why’s and the general mechanics of context and capabilities. I will also present a prototype that provides some basic functionalities of tracking contexts, defining capabilities and validating capabilities at runtime.
Contexts are also known as coeffects. You can find more information about the theory of context-aware programming languages at http://tomasp.net/coeffects/.
More information about context and capabilities can be found at this Hack language’s documentation: https://docs.hhvm.com/hack/contexts-and-capabilities/introduction.
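A minimal sketch of runtime capability validation in the spirit of the foo/bar example above (the names and mechanism are hypothetical, not the talk's prototype):

```julia
# Toy capability registry and runtime check (illustrative only).
const CAPABILITIES = Dict{Symbol,Set{String}}()

# Declare the capabilities a function is allowed to exercise.
declare_capabilities!(f::Symbol, caps...) = (CAPABILITIES[f] = Set(String.(caps)))

# A callee verifies that the caller's capabilities cover its own.
function require_capabilities(caller::Symbol, callee::Symbol)
    missing_caps = setdiff(CAPABILITIES[callee], CAPABILITIES[caller])
    isempty(missing_caps) || error("$(caller) lacks capabilities: $(join(missing_caps, ", "))")
    return true
end

declare_capabilities!(:bar, "user-management")   # bar writes email addresses
declare_capabilities!(:foo, "user-management")   # foo is allowed to call bar
declare_capabilities!(:baz, "analytics")         # baz must NOT reach bar

require_capabilities(:foo, :bar)     # ok
# require_capabilities(:baz, :bar)   # would throw: baz lacks "user-management"
```

A real implementation would thread the context through calls (or the compiler) rather than use a global table, but the validation rule is the same: a call is permitted only when the caller's purposes include the callee's.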
false
https://pretalx.com/juliacon-2022/talk/RTBE9E/
https://pretalx.com/juliacon-2022/talk/RTBE9E/feedback/
Green
GatherTown -- Social break
Social hour
2022-07-27T20:30:00+00:00
20:30
01:00
Join us on [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP) for a social hour.
juliacon-2022-21379-gathertown-social-break
en
false
https://pretalx.com/juliacon-2022/talk/J3VNLN/
https://pretalx.com/juliacon-2022/talk/J3VNLN/feedback/
Red
From Mesh Generation to Adaptive Simulation: A Journey in Julia
Talk
2022-07-27T12:30:00+00:00
12:30
00:30
We present a Julia toolchain for the adaptive simulation of hyperbolic PDEs such as flow equations on complex domains. It begins with HOHQMesh.jl to create a curved, unstructured mesh. This mesh is then used in Trixi.jl, a numerical simulation framework for conservation laws. We visualize the results using Julia’s plotting packages. We highlight select features in Trixi.jl, like adaptive mesh refinement (AMR) or shock capturing, useful for practical applications with complex transient behavior.
juliacon-2022-17227-from-mesh-generation-to-adaptive-simulation-a-journey-in-julia
/media/juliacon-2022/submissions/YSLKZJ/witch_simu_kaquJtz.png
Andrew Winters
en
Applications of interest in computational fluid mechanics typically occur on domains with curved boundaries. Further, the solution of a non-linear physical model can develop complex phenomena such as discontinuities, singularities, and turbulence.
Attacking such complex flow problems may seem daunting. In this talk, however, we present a toolchain with components entirely available in the Julia ecosystem to do just that. In broad strokes the workflow is:
1. Use HOHQMesh.jl to interactively prototype and visualize a domain with curved boundaries.
2. HOHQMesh.jl then generates an all quadrilateral mesh amenable for high-order numerical methods.
3. The mesh file is passed to Trixi.jl, a numerical simulation framework for conservation laws.
4. Solution-adaptive refinement of the mesh within Trixi.jl is handled by P4est.jl.
5. After the simulation, a first visualization is made using either Plots.jl or Makie.jl.
6. Solution data can also be exported with Trixi2Vtk.jl for visualization in external software like ParaView.
The strength and simplicity of this workflow come from the combination of several packages either originally written in Julia, like Trixi.jl, or wrappers, like P4est.jl and HOHQMesh.jl, that provide Julia users access to powerful, well-developed numerical libraries and tools written in other programming languages.
false
https://pretalx.com/juliacon-2022/talk/YSLKZJ/
https://pretalx.com/juliacon-2022/talk/YSLKZJ/feedback/
Red
CUPofTEA, versioned analysis and visualization of land science
Talk
2022-07-27T13:00:00+00:00
13:00
00:30
We present a GitHub organization, CUPofTEAproject, hosting a Franklin.jl website, cupoftea.earth, and a suite of Julia packages. The organization goal is to host versioned analysis and web interactive visualization (using WGLMakie.jl) of science studies about exchanges between terrestrial ecosystems and the atmosphere.
juliacon-2022-17971-cupoftea-versioned-analysis-and-visualization-of-land-science
/media/juliacon-2022/submissions/9J3PGX/Screenshot_2022-04-08_112117_v59cwuh.png
Alexandre A. Renchon
en
Terrestrial ecosystems (i.e., not the ocean) have been absorbing about a third of human CO2 emissions, mitigating climate change as atmospheric carbon goes into biomass or soil organic matter. However, it is unclear whether this carbon sink will continue in the future. The scientific community uses measurements from field sites and satellite remote sensing to understand the mechanisms regulating this behavior and creates models to make predictions. However, efforts are segmented into specific disciplines (e.g., field experimentalists, modelers, plant ecologists, hydrologists) that rarely collaborate. This is due to the use of different programming languages (experimentalists using scripting languages such as Python or R, and modelers using fast languages such as Fortran), or to the nature of scientific publications encouraging small teams. In recent decades, global standardized databases have been created, as well as community open-source research tools and Julia, a scripting language as fast as Fortran. This opens the door for collaboration in land-atmosphere exchange science. We use Julia, GitHub, and packages such as Franklin.jl and WGLMakie.jl to create CUPofTEA, a community platform hosting versioned analyses and visualizations of land-atmosphere exchange science across fields. We demonstrate the workflow with DAMMmodel.jl, a package to analyze and visualize the response of ecosystem CO2 emissions to soil moisture and temperature, and with global databases of ecosystem (FLUXNET) and soil (COSORE) fluxes.
false
https://pretalx.com/juliacon-2022/talk/9J3PGX/
https://pretalx.com/juliacon-2022/talk/9J3PGX/feedback/
Red
ModalDecisionTrees: Decision Trees, meet Modal Logics
Lightning talk
2022-07-27T14:30:00+00:00
14:30
00:10
ModalDecisionTrees.jl offers a set of symbolic machine learning algorithms that extend classical decision tree learning algorithms, and are able to natively handle time series and image data. *Modal Decision Trees* leverage *modal logics* to perform a primitive-but-powerful form of entity-relation reasoning; this allows them to capture temporal and spatial patterns, and makes them suitable to natively deal (= no need for feature extraction) with data such as multivariate time-series and images.
juliacon-2022-17876-modaldecisiontrees-decision-trees-meet-modal-logics
JuliaCon
Giovanni Pagliarini
en
Symbolic learning provides *transparent* (or *interpretable*) models, and is becoming increasingly popular as AI permeates more and more aspects of our lives, while simultaneously raising ethical concerns. Mainly based on decision trees and rule-based models, symbolic modeling has been largely studied with either propositional or first-order logic as the underlying logical formalism. These logics are two extremes in terms of *expressive power* and *computational tractability*: on one hand, propositional logic can express only a simple form of reasoning, which makes classical decision trees easy to learn but also unable to deal with non-tabular data; on the other hand, first-order logics can express complex sentences in terms of entities and relations, but at the cost of higher computational complexity. A middle point between the two has been overlooked: modal logic.
ModalDecisionTrees.jl offers a set of symbolic machine learning methods based on extensions of classical decision tree learning algorithms (CART and C4.5) that leverage modal logics to perform a rather simple (but powerful) form of entity-relation reasoning; this allows *"Modal Decision Trees"* (MDTs) to capture temporal, spatial, and spatio-temporal patterns, and makes them suitable for dealing natively (i.e., with no feature extraction step) with data such as multivariate time series and images.
To fix ideas, consider the case of time-series classification. While classical trees can only make decisions based on scalar values, and thus can only deal with time series after they have been *flattened* into a set of scalar descriptors (a feature extraction step), a *modal* classification rule can speak in terms of temporal patterns such as *there exists an interval in the time series where variable i has a certain property, _containing_ another interval where variable j has another property*.
Modal logic can express the existence of entities (for example, a time interval, or an image region) with given properties, and properties can be *local*, such as the value of a variable being always lower than a certain threshold within the time interval, or *relational*, such as one entity being *contained* in, or *overlapping* with another one.
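As a toy illustration of such a rule (plain Julia with hypothetical names, not the ModalDecisionTrees.jl API), one can check whether a multivariate time series contains an interval where one variable stays above a threshold, which in turn contains a sub-interval where another variable stays below a threshold:

```julia
# Toy modal temporal rule: does there exist an interval [a, b] where variable
# i is always ≥ θ1, containing an interval [c, d] where variable j is ≤ θ2?
function modal_rule(x::Matrix{Float64}, i, θ1, j, θ2)
    T = size(x, 2)                       # columns are time points
    for a in 1:T, b in a:T
        all(x[i, a:b] .>= θ1) || continue
        for c in a:b, d in c:b           # sub-interval contained in [a, b]
            all(x[j, c:d] .<= θ2) && return true
        end
    end
    return false
end

x = [0.0 2.0 2.0 0.0;                    # variable 1
     1.0 1.0 0.1 1.0]                    # variable 2
modal_rule(x, 1, 1.5, 2, 0.5)            # true: interval [2,3] contains [3,3]
```

In an MDT, decision nodes test conditions of roughly this shape, learned from the data.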
This process involves an intermediate step where data samples are represented as graphs (Kripke structures, in logical jargon) representing entities, their local properties, and their relations.
Note how rules and patterns can, of course, be as complex as the reality they are trying to capture; however, they can always be straightforwardly translated into natural language, which is the essence of the *transparency* of these models, as well as the main reason one may want to use this package.
MDTs have been shown to achieve higher performance than classical decision trees, often comparable to that of functional gradient-based methods (e.g., neural networks), in tasks such as multivariate time-series classification (e.g., COVID-19 diagnosis from audio recordings of coughs and breaths) and image classification (e.g., land cover classification).
Despite being in its infancy, ModalDecisionTrees.jl can be used with the MLJ (Machine Learning in Julia) framework, and provides:
- support for *bagging* (i.e., forests, ensembles of trees);
- support for *multimodal* learning;
- tools for inspecting models and analyzing single rules.
Package available at: https://github.com/giopaglia/ModalDecisionTrees.jl
false
https://pretalx.com/juliacon-2022/talk/RQP9TG/
https://pretalx.com/juliacon-2022/talk/RQP9TG/feedback/
Red
Multivariate polynomials in Julia
Lightning talk
2022-07-27T14:40:00+00:00
14:40
00:10
Depending on the application, the requirements for a multivariate polynomial library may include efficient computation of products, division, substitution, evaluation, gcd, or even Gröbner bases. It is well understood that the concrete representation to use for these polynomials depends on whether they are sparse or not. In this talk, we show that in Julia, the choice of representation also depends on whether to specialize compilation on the variables.
juliacon-2022-18106-multivariate-polynomials-in-julia
JuliaCon
Benoît Legat, Chris Elrod
en
Multivariate polynomials appear in various applications such as computer algebra systems, homotopy continuation, or Sum-of-Squares programming. Different applications have varied requirements for a multivariate polynomial library, making it challenging to choose a representation that would be the most efficient for all use cases. It is well understood, for instance, that the concrete representation to use for these polynomials depends on whether they are sparse or not. An abstract interface both allows applications to be independent of the actual representation used and allows lower-level operations, such as polynomial division or gcd, to be implemented generically.
In fact, this abstraction is even more important in Julia than in other languages, because Julia allows yet another aspect to enter into the design of multivariate polynomials. We show in this talk that for basic operations, the Julia compiler can either compile a generic method working for any set of variables or a method specialized to a specific one. This can be achieved in Julia by moving part of the polynomial description from field values to type parameters, which also reduces the memory footprint of polynomials. These two aspects, sparsity and specialization, give rise to four different representations, each with specific use cases where it is most appropriate. Packages relying on multivariate polynomial computations for which more than one use case can occur should therefore be implemented on top of an abstract multivariate polynomial interface, letting the user choose the implementation via the type of the polynomials given as input.
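The value-versus-type-parameter trade-off can be sketched with two toy monomial types (illustrative only; the internals of the actual packages differ):

```julia
# Variables stored as run-time data: one generic compiled method serves every
# variable set, but each monomial carries the variable names with it.
struct DynMonomial
    vars::Vector{Symbol}
    exps::Vector{Int}
end

# Variables lifted into a type parameter: the compiler specializes code per
# variable set, and the names occupy no per-object memory.
struct TypedMonomial{Vars,N}
    exps::NTuple{N,Int}
end

m_dyn   = DynMonomial([:x, :y], [2, 1])       # x²y, generic representation
m_typed = TypedMonomial{(:x, :y),2}((2, 1))   # x²y, specialized representation
sizeof(m_typed)                               # only the exponents: 2 * sizeof(Int)
```

The specialized form is a plain bits type, so arrays of such monomials are stored inline; the price is a fresh compilation for every distinct variable set.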
We illustrate this with actual Julia packages implementing these representations: DynamicPolynomials.jl (sparse, non-specialized), TypedPolynomials.jl (sparse, specialized), SIMDPolynomials.jl (sparse, specialized), and TaylorSeries.jl (dense, specialized). We analyze the impact of the choice of representation in a benchmark of gcd computation. The gcd implementation is written generically thanks to the abstract MultivariatePolynomials.jl interface.
false
https://pretalx.com/juliacon-2022/talk/TRFSJY/
https://pretalx.com/juliacon-2022/talk/TRFSJY/feedback/
Red
PHCpack.jl: Solving polynomial systems via homotopy continuation
Lightning talk
2022-07-27T14:50:00+00:00
14:50
00:10
PHCpack is a software package for solving polynomial systems via homotopy continuation methods. Our interface exports the functionality of PHCpack either via the executable or the shared object file, via its C interface. The software is free and open source and we have a cloud server that hosts the application at phcpack.org.
Our talk will also explore a specific application area in the design of mechanisms.
juliacon-2022-18167-phcpack-jl-solving-polynomial-systems-via-homotopy-continuation
JuliaCon
Kylash Viswanathan, Jan Verschelde
en
Systems of many polynomial equations in several variables occur in various areas of science and engineering, such as mechanism design, Nash equilibria, computer vision, etc. Use cases of PHCpack can be found in more than one hundred scientific papers. In addition to the need in applications, theorems from algebraic geometry have led to efficient algorithms to compute all isolated solutions and to compute the degrees and dimensions of all positive dimensional solution sets. PHCpack contains many of the first implementations of algorithms in numerical algebraic geometry.
PHCpack allows users to provide polynomial systems in a variety of formats to the solver, including symbolically. The Julia interface to PHCpack takes the symbolic input from the user and, using native Julia DataFrames and data structures, processes it and returns the results numerically via PHCpack.
As one approach, using only the phc executable file, one can call the relevant features of PHCpack from a Julia program. Alternatively, we have compiled a C interface into a shared object, which can be imported into a Julia session.
As a use case, we consider the design of a 4-bar mechanism. A 4-bar mechanism traces a curve; given sufficiently many points on the curve that one wants the mechanism to trace, one can compute all necessary parameters of the mechanism. This computation requires solving a system of many equations in many variables.
All the code is available in public GitHub repositories:
https://github.com/kviswa5
https://github.com/janverschelde/PHCpack/tree/master/src
false
https://pretalx.com/juliacon-2022/talk/UL3T8K/
https://pretalx.com/juliacon-2022/talk/UL3T8K/feedback/
Red
A Tax-Benefit model for Scotland in Julia
Lightning talk
2022-07-27T15:00:00+00:00
15:00
00:10
This talk discusses ScotBen, a microsimulation tax-benefit model for Scotland written in Julia. ScotBen lets you analyse how changes to the tax system affect revenues, inequality, and poverty.
juliacon-2022-18153-a-tax-benefit-model-for-scotland-in-julia
JuliaCon
Graham Stark
en
A Tax-Benefit model in Julia
A tax-benefit model is a computer program that calculates the effects of possible changes to the fiscal system on a sample of households. We take each of the households in a household survey dataset, calculate how much tax the household members are liable for under some proposed tax and benefit regime and how much in benefits they are entitled to, and add up the results. If the sample is representative of the population, and the modelling sufficiently accurate, the model can then tell you, for example, the net costs of the proposals, the numbers who are made better or worse off, the effective tax rates faced by individuals, the numbers taken in and out of poverty by some change, and much else.
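A minimal sketch of this kind of microsimulation (toy numbers and a made-up two-band tax schedule, not ScotBen's actual code):

```julia
# Toy microsimulation: apply a candidate tax schedule to each surveyed
# household and gross up by survey weights to project national revenue.
struct Household
    income::Float64
    weight::Float64   # how many real households this sample row represents
end

# Hypothetical two-band schedule: 20% up to 30_000, 40% above.
tax(income) = 0.20 * min(income, 30_000) + 0.40 * max(income - 30_000, 0.0)

revenue(sample) = sum(h.weight * tax(h.income) for h in sample)

sample = [Household(20_000, 1_000), Household(50_000, 500)]
revenue(sample)   # projected revenue under the proposed schedule
```

Running the same sample through two schedules and differencing the results gives the net cost of a reform; per-household comparisons give gainers and losers.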
I want to discuss a new tax-benefit model for Scotland written in Julia (https://github.com/grahamstark/ScottishTaxBenefitModel.jl).
There are currently three web interfaces you can play with:
* https://ubi.virtual-worlds.scot/ (models a Universal Basic Income)
* https://stb.virtual-worlds.scot/scotbud (constructs a national budget)
* https://stb.virtual-worlds.scot/bcd/ (explores the incentive effects of the fiscal system)
false
https://pretalx.com/juliacon-2022/talk/KPRZAM/
https://pretalx.com/juliacon-2022/talk/KPRZAM/feedback/
Red
Bayesian Estimation of Macroeconomic Models in Julia
Talk
2022-07-27T15:10:00+00:00
15:10
00:30
Computational efficiency is vital when estimating macroeconomic models for use in policy analysis. We introduce the models contained within DSGE.jl and overview how to estimate them. We provide details on two estimation methods, adaptive Metropolis-Hastings and sequential Monte Carlo, and discuss how they can provide more efficiency during the estimation process.
juliacon-2022-18163-bayesian-estimation-of-macroeconomic-models-in-julia
Aidan Gleich
en
In this talk, I will discuss how the Federal Reserve Bank of New York (FRBNY) uses Julia for forecasting. I will first present the FRBNY model and the basics of our estimation methods, noting recent adjustments made necessary by the rapid changes in economic conditions over the last two years. During this discussion I will introduce our packages DSGE.jl, SMC.jl, and ModelConstructors.jl, which provide a user-friendly API for creating and estimating a variety of models, including our workhorse DSGE model.
I will then discuss how Julia allows us to prototype and test new estimation methods, providing examples through our research into adaptive Metropolis-Hastings and sequential Monte Carlo algorithms. Because DSGE models take significant time to estimate, being able to stay on the cutting edge of Bayesian estimation algorithms allows us to provide results efficiently. Metropolis-Hastings algorithms, a class of random-walk Markov Chain Monte Carlo estimators, use a fixed proposal distribution throughout the estimation process. Adaptive Metropolis-Hastings algorithms update the proposal distribution throughout the estimation process in an attempt to gain efficiency. SMC methods combine MH and importance sampling to create an easily parallelizable sampling algorithm. I will show how these two families of algorithms can speed up the estimation process while illustrating potential pitfalls.
This presentation will be useful to anyone who regularly conducts Bayesian estimation, especially in the context of time series and forecasting.
Disclaimer: This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the author and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the author.
false
https://pretalx.com/juliacon-2022/talk/Z98GWK/
https://pretalx.com/juliacon-2022/talk/Z98GWK/feedback/
Red
A Data Integration Framework for Microbiome Research
Lightning talk
2022-07-27T15:40:00+00:00
15:40
00:10
Standardized data objects can greatly support the collaborative development of new data science methods. In particular, commonly agreed data standards will provide improved efficiency and reliability in complex data integration tasks. We demonstrate the application of this framework in the context of microbiome research.
juliacon-2022-17715-a-data-integration-framework-for-microbiome-research
JuliaCon
Giulio Benedetti
en
Microorganisms shape every aspect of our life: from the soil of our farmland to the human gut, from the ocean to the municipal wastewater of our cities, microorganisms seem to inhabit and even dominate most ecosystems of this planet. As we expand our knowledge on the role that microbes play within and beyond our bodies, the need arises to store and analyze such information in a systematic and reproducible manner.
Standardized data objects can greatly support the collaborative development of new data science methods. In particular, commonly agreed data standards will provide improved efficiency and reliability in complex data integration tasks. We implement this approach to microbiome research in a new Julia package, MicrobiomeAnalysis.jl (MIA), which introduces microbiome data integration and analysis based on state-of-the-art data containers designed for robust data integration tasks: SummarizedExperiments.jl (SE) and MultiAssayExperiment.jl (MAE).
Our approach provides a general framework to study complex microbiome profiling data sets. Not only do the data containers make it intuitive to work with abundance assays, but they also integrate those assays with the corresponding metadata into a comprehensive data object. We demonstrate the approach on common analysis tasks in microbial ecology, including alpha and beta diversity analysis and visualization of microbial community dynamics. The proposed approach is inspired by closely related and active efforts in R/Bioconductor. Developing a similar framework in the Julia language is a promising endeavour that can provide drastic performance improvements in certain computational tasks, such as dimension reduction and time series analysis, while taking advantage of a shared conceptual framework.
Overall, our environment offers the starting point for developing effective standardized methods for microbiome research. The methodology is general, thus it can be easily applied to other multi-source study designs and data integration tasks.
false
https://pretalx.com/juliacon-2022/talk/QG8VUX/
https://pretalx.com/juliacon-2022/talk/QG8VUX/feedback/
Red
Building workflows for materials modeling on HPC Systems
Talk
2022-07-27T19:30:00+00:00
19:30
00:30
Materials computations, especially of the *ab initio* kind, are intrinsically complex. These difficulties have inspired us to develop an extensible, lightweight, high-level workflow framework, `Express.jl`, to automate long and extensive sequences of the *ab initio* calculations. In this talk, we would like to share some experiences that we gained in building a software framework and multifunctional scientific tools with Julia's versatility.
juliacon-2022-18105-building-workflows-for-materials-modeling-on-hpc-systems
JuliaCon
Qi (Ryan) Zhang
en
`Express.jl`, together with its "plugins" (such as `QuantumESPRESSOExpress.jl`), is shipped with well-tested workflow templates, including structure optimization, equation of state fitting, lattice dynamics calculation, and thermodynamic property calculation. It is designed to be highly modularized so that its components can be reused in various settings, and customized workflows can be built on top of it. It helps users in the preparation of inputs, execution of simulations, and analysis of data. Users can also track the status of workflows in real time and rerun failed jobs thanks to the data lineage feature `Express.jl` provides.
To achieve the goals mentioned above, we built several independent packages during the development of `Express.jl` that address common problems in the physics, geoscience, and materials science communities, e.g., `EquationsOfStateOfSolids.jl`, `Geotherm.jl`, `Spglib.jl`, and `Crystallography.jl`. As a project aimed at automating the mundane operations of *ab initio* calculations, we also wrote a package (`SimpleWorkflows.jl`) to construct workflows from basic jobs and track their execution status. Because the most time-consuming part of the workflows is running external software (such as Quantum ESPRESSO), we also built packages to interact with it, e.g., `QuantumESPRESSO.jl` and `Pseudopotentials.jl`. Besides, we discovered many valuable Julia packages and integrated them into our code, such as `Configurations.jl`, `Comonicon.jl`, and `Setfield.jl`. In this talk, we would like to explain how Julia made our complicated codebase possible and share some experience about when and how to utilize these wonderful projects.
false
https://pretalx.com/juliacon-2022/talk/E99KP7/
https://pretalx.com/juliacon-2022/talk/E99KP7/feedback/
Red
Modeling a Crash Simulation System with ModelingToolkit.jl
Talk
2022-07-27T20:00:00+00:00
20:00
00:30
Previously, traditional modeling tools were used to provide the acausal modeling framework, which could be statically compiled and integrated with our distributed software. But with this came the dual-language problem and friction in model research and development. With ModelingToolkit.jl, the tools needed to transition from traditional modeling frameworks are now available. This talk will cover our approach and success in rewriting our hydraulic crash simulation system model in pure Julia.
juliacon-2022-17960-modeling-a-crash-simulation-system-with-modelingtoolkit-jl
JuliaCon
Bradley Carman
en
Instron's “Catapult” Crash Simulation System releases 2.75MN of energy with micron level control over a fraction of a second to reproduce a recorded crash force signal. This machine requires a model for many reasons: command signal generation, operational prediction and optimization, and engineering research and development. Therefore the model should be compilable for software but also flexible for engineering exploration (i.e. scriptable REPL mode). Julia opens the door to making this more efficient for its solution to the dual language problem but also provides a full featured programming language and modular package system with integrated unit testing that additionally help greatly with model development. ModelingToolkit offers the tools needed to easily rewrite and move the model from traditional modeling frameworks and provides not just the benefit of modeling in Julia, but a more flexible and open modeling tool.
To transition successfully, a few missing features needed to be developed: 1. how to integrate Julia code in software; 2. how to enhance ModelingToolkit with parameter management, global parameters, algebraic ODE tearing, and static model code generation. We now have a faster model that is better organized and easier to develop, with a testing and benchmarking suite to easily track and publish versioned changes. There is still work to do, but we now have what we need to develop our future products with Julia.
false
https://pretalx.com/juliacon-2022/talk/DRLYT8/
https://pretalx.com/juliacon-2022/talk/DRLYT8/feedback/
Purple
Automatic Differentiation for Quantum Electron Structure
Talk
2022-07-27T12:30:00+00:00
12:30
00:30
DFTK.jl is a framework for the quantum-chemical simulation of materials using Density Functional Theory. Many relevant physical properties of materials, such as interatomic forces, stresses or polarizability, depend on the derivatives of quantities of interest with respect to input data. To perform such computations efficiently Automatic Differentiation has been implemented in DFTK using both forward and backward modes of AD.
juliacon-2022-18012-automatic-differentiation-for-quantum-electron-structure
Markus Towara, Niklas Schmitz, Gaspard Kemlin
en
The quantum-chemical simulation of electronic structures is an established approach in materials research. The desire to tackle ever bigger systems and more involved materials, however, keeps posing challenges with respect to physical models, reliability, and performance of methods such as Density Functional Theory (DFT). For instance, many relevant physical properties of materials, such as interatomic forces, stresses, or polarizability, depend on the derivatives of quantities of interest with respect to some input data. To perform such computations efficiently, Automatic Differentiation has recently been implemented in DFTK (https://dftk.org), a Julia package for DFT that aims to be fast enough for practical calculations.
Automatic Differentiation (AD, also known as Algorithmic Differentiation) allows the efficient and accurate calculation of derivatives of first and higher order of mathematical expressions, implicitly defined by source code.
The two most common modes of AD are tangent (forward) and adjoint (reverse) mode.
Of special interest is the reverse mode, as it allows propagating derivative information from the outputs of a computation back to its inputs. This yields a computational complexity that scales with the number of outputs, as opposed to scaling with the number of inputs like traditional finite differences or tangent AD.
In many applications in computational math, engineering, ML and finance the number of outputs is small (e.g. 1 for a least squares cost function), while the number of inputs is bigger by orders of magnitude.
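This scaling argument can be illustrated with a minimal hand-rolled forward mode (toy dual numbers, not one of the Julia AD packages): one forward pass is needed per input direction, whereas reverse mode recovers the whole gradient of a scalar output in a single backward pass.

```julia
# Minimal forward-mode AD via dual numbers: v carries the value, d the
# derivative along one chosen input direction.
struct Dual
    v::Float64
    d::Float64
end
Base.:+(a::Dual, b::Dual) = Dual(a.v + b.v, a.d + b.d)
Base.:*(a::Dual, b::Dual) = Dual(a.v * b.v, a.d * b.v + a.v * b.d)

f(x, y) = x * y + x * x

# One forward pass per input: seed d = 1 on the variable being differentiated.
∂f∂x = f(Dual(3.0, 1.0), Dual(2.0, 0.0)).d   # y + 2x = 8
∂f∂y = f(Dual(3.0, 0.0), Dual(2.0, 1.0)).d   # x     = 3
```

For a function with a million inputs and one output, this pattern needs a million passes; a reverse-mode tool obtains the same gradient at a cost proportional to a single evaluation.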
Julia is based on the LLVM stack and allows inspection and modification of its own AST, as well as other already optimized code structures at run time.
This promises to combine the strengths of both operator-overloading style AD tools (flexibility, no running out of sync with the primal, coverage of all language features) and source code transformation style AD tools (less memory overhead, generated derivative code can be optimized by compiler).
This has spawned a variety of AD tools in the Julia ecosystem (see e.g. https://juliadiff.org for an enumeration of tools), each with its own design goals but also limitations.
The need to make these tools work together under a common interface has been identified by the Julia community and led to the development of the ChainRules.jl package.
We use Zygote to generate adjoint code automatically wherever possible.
There are two major reasons Zygote might not be used:
- The code to be differentiated uses features not supported by Zygote (e.g. mutation) and cannot sensibly be refactored into a version conforming to Zygote's generation rules (e.g. due to performance requirements of the primal)
- Mathematical insight allows us to more efficiently implement the adjoint pullback by hand (e.g. terms with cancelling derivatives, symbolic differentiation of linear solvers, FFTs, etc.)
For both of these use cases we use the ChainRules interface to specify custom rrules.
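As an example of the second category, the adjoint of a linear solve can be written by hand instead of differentiating through the solver; the following plain-Julia closure sketches the value-plus-pullback structure that a ChainRules `rrule` packages up (names are illustrative):

```julia
using LinearAlgebra

# Hand-written adjoint for x = A \ b: the pullback reuses a solve with A'
# instead of differentiating through the solver's internals.
function solve_with_pullback(A, b)
    x = A \ b
    function pullback(x̄)        # given ∂L/∂x, return (∂L/∂A, ∂L/∂b)
        b̄ = A' \ x̄
        Ā = -b̄ * x'
        return Ā, b̄
    end
    return x, pullback
end

x, pb = solve_with_pullback([2.0 0.0; 0.0 3.0], [4.0, 9.0])
Ā, b̄ = pb([1.0, 1.0])           # x = [2, 3], b̄ = [1/2, 1/3]
```

The pullback costs one extra triangular/linear solve rather than a full re-differentiation, which is exactly the kind of saving the custom rules in DFTK exploit.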
For the performance critical parts of the primal we plan to investigate tools that support mutation (e.g. Enzyme), though we expect this to come with its own challenges.
Some of the custom rrules we implemented in ChainRules required mathematical investigation to achieve numerical stability of response properties. In particular, the variation of the ground state density with respect to a perturbative external potential solves a linear system which is ill-conditioned when working with metals. We propose a unified mathematical framework from the literature to enhance stability, via appropriate gauge choices and a Schur complement.
We will present our approach to introduce AD into an existing codebase, lessons learned, and what design patterns are suitable to both good performance and good compatibility with existing AD tools.
false
https://pretalx.com/juliacon-2022/talk/LGWRV8/
https://pretalx.com/juliacon-2022/talk/LGWRV8/feedback/
Purple
Fast Forward and Reverse-Mode Differentiation via Enzyme.jl
Talk
2022-07-27T13:00:00+00:00
13:00
00:30
Enzyme is a new LLVM-based differentiation framework capable of creating fast derivatives in a variety of languages. In this talk we will showcase improvements in Enzyme.jl, the Julia-language bindings for Enzyme that enable us to differentiate through parallelism (Julia tasks, MPI.jl, etc), mutable memory, JIT-constructs, all while maintaining performance. Moreover we will also showcase Enzyme's new forward mode capabilities in addition to its existing reverse-mode features.
juliacon-2022-17917-fast-forward-and-reverse-mode-differentiation-via-enzyme-jl
JuliaCon
William Moses, Ludger Paehler, Tim Gymnich, Valentin Churavy
en
false
https://pretalx.com/juliacon-2022/talk/X3UUFD/
https://pretalx.com/juliacon-2022/talk/X3UUFD/feedback/
Purple
JunctionTrees: Bayesian inference in discrete graphical models
Lightning talk
2022-07-27T14:30:00+00:00
14:30
00:10
JunctionTrees.jl implements the junction tree algorithm: an efficient method to perform Bayesian inference in discrete probabilistic graphical models. It exploits Julia's metaprogramming capabilities to separate the algorithm into a compilation phase and a runtime phase. This opens a wide range of optimization possibilities in the compilation stage. The non-optimized runtime performance of JunctionTrees.jl is similar to that of analogous C++ libraries such as libDAI and Merlin.
juliacon-2022-17957-junctiontrees-bayesian-inference-in-discrete-graphical-models
/media/juliacon-2022/submissions/AGW8BR/junction-trees-logo_6CMV6nQ.png
Martin Roa-Villescas
en
GitHub repo: https://github.com/mroavi/JunctionTrees.jl
Docs: https://mroavi.github.io/JunctionTrees.jl
JunctionTrees.jl encapsulates the results of the research we have been conducting on improving the efficiency of Bayesian inference in probabilistic graphical models.
The junction tree algorithm is a core component of discrete inference in probabilistic graphical models. It lies at the heart of many courses that are taught at different universities around the world including MIT, Berkeley, and Stanford. Moreover, it serves as the backbone of successful commercial software, such as Hugin Expert, that aims to discover insight and provide predictive capabilities to effectively combat fraud and risk.
JunctionTrees.jl is mainly tailored towards students and researchers. This library offers a great starting point for understanding the implementation details of the algorithm, thanks to the intrinsic readability of the Julia language and the thoroughly commented codebase. Moreover, this package constitutes an optimization framework that other researchers can use to experiment with different ideas for improving the performance of runtime Bayesian inference.
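The primitive operations the junction tree algorithm composes, factor products and marginalization, can be sketched in a few lines of plain Julia (a toy of exact inference, not the JunctionTrees.jl API):

```julia
# Toy inference over two binary variables: factors are plain arrays, and the
# two primitive operations are an elementwise product and a summed-out axis.
pA         = [0.6, 0.4]                 # P(A)
pB_given_A = [0.9 0.2;                  # P(B|A): rows index B, columns index A
              0.1 0.8]

joint = pB_given_A .* pA'               # factor product → P(A, B)
pB    = vec(sum(joint; dims = 2))       # marginalize A out → P(B)
# pB == [0.62, 0.38]
```

A junction tree schedules many such products and marginalizations over cliques of the graph; separating their compilation from their execution is what enables the package's compile-time optimizations.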
false
https://pretalx.com/juliacon-2022/talk/AGW8BR/
https://pretalx.com/juliacon-2022/talk/AGW8BR/feedback/
Purple
Automated Finite Elements: a comparison between Julia and C++
Lightning talk
2022-07-27T14:40:00+00:00
14:40
00:10
With Gridap, Julia has a Finite Element package that allows writing expressions that closely mimic the mathematical notation of the weak form of an equation and automate the assembly of the linear system from there. Rather than using macros, the equations are interpreted as regular Julia expressions. This approach is similar to what has been done in C++, e.g. in the Coolfluid 3 CFD code. In this talk, both methods will be compared, showing how Julia really is "C++ done right" for this use case.
juliacon-2022-18056-automated-finite-elements-a-comparison-between-julia-and-c-
Bart Janssens
en
This talk compares implementations of the stabilized Navier-Stokes equations for incompressible flow, using the [Gridap](https://github.com/gridap/Gridap.jl) package in Julia and a [Boost.Proto](https://www.boost.org/doc/libs/1_78_0/doc/html/proto.html) based C++ code. The concrete C++ implementation can be found [here](https://github.com/barche/coolfluid3/blob/688173daa1a7cf32929b43fc1a0d9c0655e20660/plugins/UFEM/src/UFEM/NavierStokesAssembly.hpp#L57-L65), while the equivalent Gridap code is [here](https://github.com/barche/Channel_flow/blob/94aeb2982e01b08ff41848091a3b5d0b7b2a3983/Channel_2d_3d.jl#L104-L110). Aside from the obvious differences due to the use of Unicode and the fact that Gridap operates at a higher level of abstraction, there are also some striking similarities between the approaches. The main point is that both packages operate on expressions that are valid code in the programming language that is used (i.e. Julia for Gridap, C++ for [Coolfluid 3](https://github.com/barche/coolfluid3)). This is possible because both languages offer a lot of flexibility in terms of operator overloading and strong typing. In the case of C++, the Boost.Proto library helps with building a structured framework for the interpretation of the expressions, based on the idea of expression templates, thus avoiding the runtime overhead of inheritance in C++. In Julia, this step is taken care of using the built-in type system and generated functions.
The whole objective of this type of machinery is to offer a simple interface to the user, but end up with a finite element assembly loop that is as fast as possible. To this end, information such as the size of element matrices and vector dimensions must be known to the compiler. We will show that both systems indeed achieve this, and result in good performance for the assembly loop.
Due to the similarity in approach, the experience visible to the end user is also similar: both systems exhibit long compilation times and long error messages in case of user errors such as mixing up incompatible matrix dimensions. This will be illustrated using examples.
Finally, more advanced numerical techniques, such as the stabilized methods used in fluid simulations, require the user to be able to define custom functions that are used during assembly, e.g. to calculate the value of stabilization coefficients. This is where Julia really shines, as it is possible to simply define a normal function, while in C++ extensive boilerplate code is required, as will be shown.
The conclusion is that Gridap has reached a level of maturity that makes it very attractive to use Julia for this kind of work. Even if some performance optimization may still be needed, development of and experimenting with new numerical methods is much easier than in a complicated C++ code.
false
https://pretalx.com/juliacon-2022/talk/FPZVML/
https://pretalx.com/juliacon-2022/talk/FPZVML/feedback/
Purple
Distributed AutoML Pipeline Search in PC/RasPi K8s Cluster
Lightning talk
2022-07-27T14:50:00+00:00
14:50
00:10
In this lightning talk, I will present an example workflow that leverages a Kubernetes cluster of Raspberry Pis to perform a parallel search for the best AutoML pipeline for a given classification task. While many applications of RasPis target IoT usage, a K8s cluster of RasPis running Julia can be aimed at more complex problems, and I will provide examples of the cluster's performance running AutoMLPipeline applications.
juliacon-2022-16947-distributed-automl-pipeline-search-in-pc-raspi-k8s-cluster
JuliaCon
Paulito Palmes
en
There is a growing need for low-power computing devices, due to their minimal thermal and energy footprint, in many HPC applications such as weather forecasting, ocean engineering, smart-home computing, biocomputing, and AI modeling. ARM-based processors such as RasPis provide an attractive solution because they are cheap, versatile, and have great Linux hardware support as well as stable Julia releases. Due to their full Linux compatibility, making a K8s cluster from a bunch of RasPis becomes a trivial exercise, as does running Julia's cluster manager on top of K8s. This talk will provide an overview and an example walk-through of how to leverage the Julia+RasPis+K8s combination to solve certain ML pipeline optimization tasks.
false
https://pretalx.com/juliacon-2022/talk/K7VNZJ/
https://pretalx.com/juliacon-2022/talk/K7VNZJ/feedback/
Purple
Comonicon, a full stack solution for building CLI applications
Lightning talk
2022-07-27T15:00:00+00:00
15:00
00:10
In this talk, I will introduce Comonicon, a CLI generator designed for Julia. Unlike other CLI generators such as Fire, ArgParse, and so on, Comonicon does not only parse command-line arguments but also provides a full solution for building CLI applications (via PackageCompiler), packing tarballs, generating shell auto-completion, installing CLI applications, and mitigating CLI latencies. I'll also talk about ideas arising from its development concerning future official Julia applications.
juliacon-2022-16301-comonicon-a-full-stack-solution-for-building-cli-applications
JuliaCon
Xiu-zhe (Roger) Luo
en
[Comonicon](https://github.com/comonicon/Comonicon.jl) is a CLI generator that aims to provide a full solution for CLI applications. This includes:
### Clean and Julian syntax
The interface has only `@main` and `@cast`, which collect all the information, from docstrings to function signatures, to create the CLI. The usage is extremely simple: just put `@main` or `@cast` in front of the functions or modules you would like to convert to a CLI node. It has proven to offer a very intuitive and user-friendly experience over the past two years.
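As a minimal sketch of this usage pattern, following Comonicon's documented docstring conventions (the `greet` function and its argument names are hypothetical):

```julia
using Comonicon

"""
Greet someone from the command line.

# Args

- `name`: name of the person to greet.

# Options

- `--times <int>`: how many times to print the greeting.
"""
@main function greet(name::String; times::Int = 1)
    # positional arguments become CLI arguments; keyword arguments become options
    for _ in 1:times
        println("Hello, $name!")
    end
end
```

Running the resulting script with `--help` then prints help text generated from the docstring.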
### Powerful and extensible code generators
Comonicon is built around an intermediate representation (IR) for CLIs. This means the Comonicon frontend is decoupled from its backends; as a more advanced feature, one can also construct the IR directly to generate a CLI. Different backends then generate the corresponding code, like a standard compiler's codegen. This currently includes:
- a zero dependency command line arguments parsing function in Julia
- shell autocompletion (only ZSH is supported at the moment)
This can easily be extended to generate code for other interfaces; one very experimental work is [generating GUI directly from the Comonicon IR](https://github.com/comonicon/ComoniconGUI.jl).
### Mitigating startup latencies
Most Julia CLI generators suffer from startup latencies caused by JIT compilation, and we have put a relatively large effort into mitigating the latency introduced by the CLI generator itself. Because Comonicon is able to generate a zero-dependency function `command_main` that parses the CLI arguments, in extreme cases one can get rid of `Comonicon` entirely and use the generated code directly, reaching the lowest latency achievable in current Julia.
### Build System
Comonicon provides a full build system for shipping your CLIs to other people. This means Comonicon can handle the installation of a Julia CLI application while guaranteeing its reproducibility by handling the corresponding project environment correctly. Alternatively, it can build the CLI application into a binary via PackageCompiler and package the application as a tarball. A glance at its build CLI:
```
Comonicon - Builder CLI.
Builder CLI for Comonicon Applications. If not specified, run the command install by default.
USAGE
julia --project deps/build.jl [command]
COMMAND
install install the CLI locally.
app [tarball] build the application, optionally make a tarball.
sysimg [tarball] build the system image, optionally make a tarball.
tarball build application and system image then make tarballs
for them.
EXAMPLE
julia --project deps/build.jl install
install the CLI to ~/.julia/bin.
julia --project deps/build.jl sysimg
build the system image in the path defined by Comonicon.toml or in deps by default.
julia --project deps/build.jl sysimg tarball
build the system image then make a tarball on this system image.
julia --project deps/build.jl app tarball
build the application based on Comonicon.toml and make a tarball from it.
```
### Configurable
The generated CLI application and the build options are all configurable via a `Comonicon.toml` file; one can easily change various default options directly from the configuration file to create your favorite CLI:
- enable/disable colorful help message
- set Julia compile options
- bundle custom assets
- installation options
- ...
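A hedged sketch of what such a configuration file might look like; the exact key names should be checked against the Comonicon documentation:

```toml
# Hypothetical Comonicon.toml sketch (illustrative values)
name = "greet"              # name of the installed CLI command

[install]
completion = true           # generate shell autocompletion
quiet = false               # colorful/verbose install output
optimize = 2                # Julia compile option (-O2)
```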
### Summary
Comonicon is currently the only CLI generator designed for Julia that handles the entire workflow of creating a serious CLI application and shipping it to users. It still has a few directions to improve in, and we hope that, with the progress of static compilation in future Julia versions, we will eventually be able to build small binaries and ship them to all platforms in a simple workflow via Comonicon, so that one day Julia can also do what Go/Rust/C++/... can do in CLI application development.
false
https://pretalx.com/juliacon-2022/talk/VME3D8/
https://pretalx.com/juliacon-2022/talk/VME3D8/feedback/
Purple
Cycles and Julia Sets: Novel algorithms for Numerical Analysis
Lightning talk
2022-07-27T15:10:00+00:00
15:10
00:10
We present a new collection of algorithms dedicated to computing the basins of attraction of any complex rational map. This is a relevant matter in Holomorphic Dynamics, and also a way to visualize and study amazing fractal objects like Julia sets. These algorithms solve many computational problems that often arise in Numerical Analysis, like overflows or mathematical indeterminations, and provide more information about the dynamics of the system than traditional algorithms generally do.
juliacon-2022-17953-cycles-and-julia-sets-novel-algorithms-for-numerical-analysis
JuliaCon
/media/juliacon-2022/submissions/ZSARJD/image_OpGztQg.PNG
Víctor Álvarez Aparicio
en
In this talk we will present a new collection of algorithms dedicated to computing the basins of attraction of any complex rational map, and to studying the dynamical behaviour of its fixed points and attracting n-cycles. By doing this, one can visualize and study amazingly beautiful and complex fractal objects like Julia sets. The study of the basins of attraction of a dynamical system is a very relevant matter not only in Numerical Analysis and Holomorphic Dynamics, but also in other fields like Physics or Mechanical Engineering.
We are going to describe the methods implemented in the Lyapunov Cycle Detector module, available in the following GitHub repository: https://github.com/valvarezapa/LCD. There you will find the code itself, several explanations of how the methods work and what they do, a lot of practical examples, and even a User Guide.
Our work is based on very relevant theorems of Complex Dynamics, like the Ergodic Theorem, and is motivated by Sullivan's work on Dynamical Systems, recently awarded the Abel Prize. However, no previous knowledge of any of these topics is required, since we will focus on the algorithms and the implementation of the code. The graphics we will be able to generate are both rich and beautiful, and the concepts behind them are easy to grasp. Everyone interested in the mathematical framework or in any specific technicalities behind these algorithms can consult our paper "Algorithms for computing attraction basins of a self-map of the Hopf fibration based on Lyapunov functions" (currently a preprint).
From a scientific computing point of view, this new collection of algorithms solves most of the computational problems that often arise in Numerical Analysis, like overflows or mathematical indeterminations. We achieve this by considering the Hopf endomorphism induced by the given rational map and iterating it over the complex projective line (P^1(C)). This approach also allows us to easily work with the point at infinity. Since this kind of calculation often has a high computational cost, we benefit from Julia’s efficiency and some built-in multi-threading macros in order to visualize the results in a reasonable amount of time.
From a mathematical perspective, we will be considering a discrete-time dynamical system given by a complex rational map. The techniques we use to compute the basins of attraction are based on Lyapunov functions and Lyapunov coefficients (which are closely related to Lyapunov exponents, a very powerful and commonly used concept in dynamical systems). The Lyapunov function we define is constant on each basin of attraction and depends on the notion of the spherical derivative of the given rational map. This way, we can divide the Riemann sphere (the plane of complex numbers plus the point at infinity) into the different basins of attraction (each one with an associated constant) and the Julia set. The most famous Julia sets can be computed and visualized this way. Also, our algorithms provide more information about the dynamics of the system than most traditional algorithms on this topic generally do. Our methods focus on detecting the attracting n-cycles of the given rational map and their basins. Note that fixed points are just the particular case of 1-cycles. In addition, by using Lyapunov coefficients we are able to measure how attracting every n-cycle is.
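For orientation, this is the classical "iterate until convergence" approach to basins of attraction that the talk's Lyapunov-based method improves upon, sketched for Newton's map of p(z) = z^3 - 1 (this is NOT the Lyapunov-function algorithm on P^1(C) described above, just the baseline it is compared against):

```julia
# Classify the starting point z by the cube root of unity its Newton orbit reaches.
function newton_basin(z::Complex{Float64}; maxiter = 100, tol = 1e-8)
    roots = [exp(2im * pi * k / 3) for k in 0:2]   # the three attracting fixed points
    for _ in 1:maxiter
        z -= (z^3 - 1) / (3z^2)                    # Newton step; note it can overflow
                                                   # or produce NaN near z = 0, one of the
                                                   # problems the projective approach avoids
        for (k, r) in enumerate(roots)
            abs(z - r) < tol && return k           # index of the basin reached
        end
    end
    return 0                                       # no convergence: (near the) Julia set
end

# Color each pixel of a grid by the basin its point converges to.
grid = [newton_basin(complex(x, y)) for y in -1.5:0.01:1.5, x in -1.5:0.01:1.5]
```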
false
https://pretalx.com/juliacon-2022/talk/ZSARJD/
https://pretalx.com/juliacon-2022/talk/ZSARJD/feedback/
Purple
High-performance xPU Stencil Computations in Julia
Lightning talk
2022-07-27T15:20:00+00:00
15:20
00:10
We present an efficient approach for writing architecture-agnostic parallel high-performance stencil computations in Julia. Powerful metaprogramming, costless abstractions and multiple dispatch enable writing a single code that is usable for both productive prototyping on a single CPU and for production runs on GPU or CPU workstations or supercomputers. Performance similar to CUDA C is achievable, which is typically a large improvement over reachable performance with `CUDA.jl` Array programming.
juliacon-2022-17974-high-performance-xpu-stencil-computations-in-julia
JuliaCon
/media/juliacon-2022/submissions/AKVUKM/logo_ParallelStencil_jc_iAdWnnz.png
Samuel Omlin, Ludovic Räss
en
Our approach for the expression of architecture-agnostic high-performance stencil computations relies on Julia's powerful metaprogramming capabilities, costless high-level abstractions and multiple dispatch. We have instantiated the approach in the Julia package `ParallelStencil.jl`. Using `ParallelStencil`, a simple call to the macro `@parallel` is enough to parallelize and launch a kernel that contains stencil computations, which can be expressed explicitly or with math-close notation. The package used underneath for parallelization is defined in an initialization call beforehand. Currently supported are `CUDA.jl` for running on GPU and `Base.Threads` for CPU. Leveraging metaprogramming, `ParallelStencil` automatically generates high-performance code suitable for the target hardware, and automatically derives kernel launch parameters from the kernel arguments by analyzing the extents of the contained arrays. A set of architecture-agnostic low-level kernel language constructs allows for explicit low-level kernel programming when useful, e.g., for the explicit control of shared memory on the GPU (these low-level constructs are GPU-computing-biased).
Arrays are automatically allocated on the hardware chosen for the computations (GPU or CPU) when using the allocation macros provided by `ParallelStencil`, avoiding any need for code duplication. Moreover, the allocation macros are fully declarative in order to let `ParallelStencil` choose the best data layout in memory. Notably, logical arrays of structs (or of small arrays) can be laid out in memory either as arrays of structs or as structs of arrays, accounting for the fact that each of these allocation approaches has its use cases where it performs best.
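A minimal sketch of this usage pattern, modeled on the package's documented 2-D heat-diffusion example (array names and parameter values are illustrative):

```julia
using ParallelStencil
using ParallelStencil.FiniteDifferences2D
@init_parallel_stencil(Threads, Float64, 2)   # choose the backend here (e.g. CUDA for GPU)

# Kernel in math-close notation; @inn, @d2_xi and @d2_yi are finite-difference macros.
@parallel function diffusion_step!(T2, T, lam, dt, dx, dy)
    @inn(T2) = @inn(T) + dt * lam * (@d2_xi(T) / dx^2 + @d2_yi(T) / dy^2)
    return
end

nx, ny = 256, 256
T  = @rand(nx, ny)        # allocation macros place arrays on the chosen device
T2 = copy(T)
@parallel diffusion_step!(T2, T, 1.0, 1e-4, 0.1, 0.1)  # launch parameters derived automatically
```

Switching the first argument of `@init_parallel_stencil` is all that is needed to retarget the same code from CPU threads to a GPU backend.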
`ParallelStencil` is seamlessly interoperable with packages for distributed parallelization, e.g. `ImplicitGlobalGrid.jl` or `MPI.jl`, in order to enable high-performance stencil computations on GPU or CPU supercomputers. Communication can be hidden behind computation with a simple macro call. The usage of this feature solely requires that communication can be triggered explicitly, as is possible with, e.g., `ImplicitGlobalGrid` and `MPI.jl`.
We demonstrate the wide applicability of our approach by reporting on several multi-GPU solvers for geosciences, e.g., 3-D solvers for poro-visco-elastic two-phase flow and for reactive porosity waves. As a reference, the latter solvers were ported from MPI+CUDA C to Julia using `ParallelStencil` and `ImplicitGlobalGrid`, achieve 90% and 98% of the performance of the original solvers, respectively, and show nearly ideal parallel efficiency on thousands of NVIDIA Tesla P100 GPUs at the Swiss National Supercomputing Centre. Moreover, we have shown in recent contributions that the approach is in no way limited to geosciences: we have showcased a computational cognitive neuroscience application modelling visual target selection using `ParallelStencil` and `MPI.jl`, and a quantum fluid dynamics solver for the nonlinear Gross-Pitaevskii equation implemented with `ParallelStencil` and `ImplicitGlobalGrid`.
Co-authors: Ludovic Räss¹ ²
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
false
https://pretalx.com/juliacon-2022/talk/AKVUKM/
https://pretalx.com/juliacon-2022/talk/AKVUKM/feedback/
Purple
Distributed Parallelization of xPU Stencil Computations in Julia
Lightning talk
2022-07-27T15:30:00+00:00
15:30
00:10
We present a straightforward approach for the distributed parallelization of stencil-based Julia applications on a regular staggered grid using GPUs and CPUs. The approach allows leveraging remote direct memory access and was shown to enable close-to-ideal weak scaling of real-world applications on thousands of GPUs. The communication performed can be easily hidden behind computation.
juliacon-2022-17975-distributed-parallelization-of-xpu-stencil-computations-in-julia
JuliaCon
/media/juliacon-2022/submissions/RJYBLA/logo_ImplicitGlobalGrid_jc_WBpxQfb.png
Samuel Omlin, Ludovic Räss
en
The approach presented here renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial. We have instantiated the approach in the Julia package `ImplicitGlobalGrid.jl`. A highlight in the design of `ImplicitGlobalGrid` is the automatic implicit creation of the global computational grid based on the number of processes the application is run with (and on the process topology, which can be explicitly chosen by the user or automatically defined). As a consequence, users only need to write code that solves their problem on one GPU/CPU (the local grid); then, as little as three functions can be enough to transform a single-GPU/CPU application into a massively scaling multi-GPU/CPU application: a first function creates the implicit global staggered grid, a second function performs a halo update on it, and a third function finalizes the global grid.
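Based on the package's documented API, the three functions mentioned above form a skeleton roughly like this (grid sizes and the time loop are illustrative):

```julia
using ImplicitGlobalGrid

nx, ny, nz = 64, 64, 64                    # size of the LOCAL grid on each process
me, dims = init_global_grid(nx, ny, nz)    # 1) create the implicit global grid
A = zeros(nx, ny, nz)                      # local array (one per process)
for step in 1:100
    # ... update the interior of A with stencil computations ...
    update_halo!(A)                        # 2) exchange halo values with the neighbors
end
finalize_global_grid()                     # 3) tear down the global grid
```

The global grid never exists as a data structure; it is implied by the local arrays and the process topology, which is what makes the port from single-device code so small.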
`ImplicitGlobalGrid` relies on `MPI.jl` to perform halo updates close to hardware limits. For GPU applications, `ImplicitGlobalGrid` leverages remote direct memory access when CUDA- or ROCm-aware MPI is available, and uses highly optimized asynchronous data transfer routines to move the data through the hosts when CUDA- or ROCm-aware MPI is not present. In addition, pipelining is applied at all stages of the data transfers, improving the effective throughput from GPU to GPU. Low-level management of memory, CUDA streams and ROCm queues permits efficient reuse of send and receive buffers and streams throughout an application without putting the burden of their management on the user. Moreover, all data transfers are performed on non-blocking high-priority streams, allowing communication to overlap optimally with computation. `ParallelStencil.jl`, e.g., can do so with a simple macro call.
`ImplicitGlobalGrid` is fully interoperable with `MPI.jl`. By default, it creates a Cartesian MPI communicator, which can be easily retrieved together with other MPI variables. Alternatively, an MPI communicator can be passed to `ImplicitGlobalGrid` for usage. As a result, `ImplicitGlobalGrid`'s functionality can be seamlessly extended using `MPI.jl`.
The modular design of `ImplicitGlobalGrid`, which heavily relies on multiple dispatch, enables adding support for other hardware with little development effort. Support for AMD GPUs using the recently matured `AMDGPU.jl` package has already been implemented as a result. `ImplicitGlobalGrid` supports at present distributed parallelization for CUDA- and ROCm-capable GPUs as well as for CPUs.
We show that our approach is broadly applicable by reporting scaling results of a 3-D multi-GPU solver for poro-visco-elastic two-phase flow and of various mini-apps that represent common building blocks of geoscience applications. For all these applications, nearly ideal parallel efficiency on thousands of NVIDIA Tesla P100 GPUs at the Swiss National Supercomputing Centre is demonstrated. Moreover, we have shown in a recent contribution that the approach is in no way limited to geosciences: we have showcased a quantum fluid dynamics solver for the nonlinear Gross-Pitaevskii equation implemented with `ParallelStencil` and `ImplicitGlobalGrid`.
Co-authors: Ludovic Räss¹ ², Ivan Utkin¹
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
false
https://pretalx.com/juliacon-2022/talk/RJYBLA/
https://pretalx.com/juliacon-2022/talk/RJYBLA/feedback/
Purple
Building Julia proxy mini apps for HPC system evaluation
Lightning talk
2022-07-27T15:40:00+00:00
15:40
00:10
We will showcase our efforts building Julia proxy applications, or mini apps, targeting the Summit and Frontier supercomputers. We developed XSBench.jl to simulate on-node CPU and GPU scalability of a Monte Carlo computational kernel, and RIOPA.jl for parallel input/output (I/O) strategies. We will share the lessons learned from Julia’s fresh approach to performance and productivity as a viable language, similar to Fortran, C and C++, for high-performance computing (HPC) systems.
juliacon-2022-17918-building-julia-proxy-mini-apps-for-hpc-system-evaluation
JuliaCon
William F Godoy, Jeffrey Vetter, Philip Fackler
en
We are developing Julia proxy applications, also known as mini apps, to understand the effects of parallel computation, memory, network and input/output (I/O) on the latest U.S. Department of Energy (DOE) extremely heterogeneous high-performance computing (HPC) systems. Our initial targets are the systems hosted at the Oak Ridge Leadership Computing Facility (OLCF): the Summit supercomputer, powered by IBM CPUs and NVIDIA GPUs; and the upcoming Frontier exascale system, powered by AMD CPUs and GPUs. Proxy applications, or mini apps, are simple yet powerful programs that isolate the important computational aspects that drive fully featured science applications. In this lightning talk, we present our efforts in developing two open-source proxy applications: i) XSBench.jl, a port of the original C-based XSBench proxy app used to simulate on-node scalability of the OpenMC Monte Carlo computational kernel on CPUs and on AMD and NVIDIA GPUs; and ii) RIOPA.jl, a Julia proxy application designed to mimic parallel I/O application characteristics and payloads. In particular, we are interested in the feasibility of using Julia as an HPC language, similar to Fortran, C and C++, by evaluating the current state and integration with HPC heterogeneous programming models and backends: MPI.jl; Julia’s Base.Threads; GPU programming with CUDA.jl, AMDGPU.jl and KernelAbstractions.jl; parallel I/O with HDF5.jl and ADIOS2.jl; and the portability of the resulting Julia proxy applications across heterogeneous systems. We will share with the Julia community the current challenges and gaps, and highlight potential opportunities to balance the trade-offs between programmer productivity and performance in an HPC environment as we prepare for the exascale era in supercomputing. Our goal is to showcase the value added by the Julia language in our early work constructing proxy apps for rapid prototyping as part of our efforts in the U.S. DOE Exascale Computing Project (ECP).
false
https://pretalx.com/juliacon-2022/talk/CJZ3MV/
https://pretalx.com/juliacon-2022/talk/CJZ3MV/feedback/
Purple
ASML Sponsored Talk
Silver sponsor talk
2022-07-27T15:50:00+00:00
15:50
00:05
We make machines that make chips; the hearts of the devices that keep us informed, entertained and safe.
juliacon-2022-21255-asml-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/FEWD7V/
https://pretalx.com/juliacon-2022/talk/FEWD7V/feedback/
Purple
MetaLenz Sponsored Talk
Silver sponsor talk
2022-07-27T15:55:00+00:00
15:55
00:05
Metalenz is commercializing metasurface technology and transforming optical sensing in consumer electronics and automotive markets.
juliacon-2022-21251-metalenz-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/VCSHJ3/
https://pretalx.com/juliacon-2022/talk/VCSHJ3/feedback/
Purple
Quantum computing with ITensor and PastaQ
Talk
2022-07-27T19:00:00+00:00
19:00
00:30
We introduce PastaQ.jl, a computational toolbox for simulating, designing, and benchmarking quantum hardware. PastaQ relies on a tensor network description of quantum processes, built on top of ITensors.jl, a leading library for efficient tensor network algorithms. Leveraging recent developments in tensor network differentiation in ITensor, PastaQ provides access to a broad range of computational tools to tackle tasks routinely encountered when building quantum computers.
juliacon-2022-18169-quantum-computing-with-itensor-and-pastaq
Giacomo Torlai, Matthew Fishman
en
Quantum computers provide a new computational paradigm with far-reaching implications for a variety of scientific disciplines. Small quantum computers exist in today’s laboratories, but due to imperfections and noise, these machines can only handle problems of limited complexity. The successful development of larger quantum devices requires improved qubit manufacturing and control, active error correction, as well as theoretical advances.
In practice, when building a quantum computer, tasks throughout the quantum computing stack rely on efficient classical algorithms running on conventional computers. These tasks include simulations for designing quantum gates and circuits, qubit calibration, and device characterization/benchmarking. Tensor networks are a powerful framework for describing and simulating quantum systems. They play an important role in simulating the quantum dynamics underlying the experimental hardware, reconstructing quantum processes from measurements, and correcting errors in quantum devices.
PastaQ.jl is a new Julia package for quantum computing built on top of ITensors.jl. ITensors.jl is an established tensor network software library with a unique memory-independent array/tensor interface. ITensor provides a high level and flexible interface for easily calling state of the art tensor network algorithms and developing new tensor network algorithms. Recently in ITensor we have been adding support for automatic differentiation by adding differentiation rules with ChainRules.jl. These range from basic rules for tensor contraction to higher level rules for differentiating quantum state evolution. PastaQ builds on top of this new differentiation support in ITensor to enable a variety of quantum computing applications like the design/optimization of quantum gates via optimal control theory, simulation of quantum circuits, classical optimization of variational circuits, quantum tomography, etc. In this talk, we will discuss some basics of tensor network differentiation in ITensor, and discuss how these new differentiation tools are leveraged in PastaQ for advanced applications in designing and analyzing quantum computers.
false
https://pretalx.com/juliacon-2022/talk/CSARPH/
https://pretalx.com/juliacon-2022/talk/CSARPH/feedback/
Purple
QuantumCircuitOpt for Provably Optimal Quantum Circuit Design
Talk
2022-07-27T19:30:00+00:00
19:30
00:30
A key aspect of operating near-term intermediate-scale quantum (NISQ) computers is developing compact circuits to implement quantum algorithms given the hardware's architectural constraints. Efficient formulations and algorithms to solve such hard optimization problems with optimality guarantees are key to designing such NISQ devices. This talk provides an overview of QuantumCircuitOpt.jl, a software package developed at LANL for the provably optimal synthesis of architectures for quantum circuits.
juliacon-2022-18142-quantumcircuitopt-for-provably-optimal-quantum-circuit-design
JuliaCon
/media/juliacon-2022/submissions/KJTGC3/logo_2_fHtFhy4.png
Harsha Nagarajan
en
In recent years, the quantum computing community has seen an explosion of novel methods to implement non-trivial quantum computations on near-term intermediate-scale quantum (NISQ) hardware. An important direction of research has been to decompose an arbitrary entangled operation, represented as a unitary, into a quantum circuit, that is, a sequence of gates supported by a quantum processor. It is well known that circuits with longer decompositions and more entangling multi-qubit gates are error-prone on current noisy, intermediate-scale quantum devices. To this end, we present the framework of the "QuantumCircuitOpt" package, which is aimed at provably optimal quantum circuit design.
"QuantumCircuitOpt.jl" (QCOpt in short), is an open-source, Julia-based, package for provably optimal quantum circuit design. QCOpt implements mathematical optimization formulations and algorithms for decomposing arbitrary unitary gates into a sequence of hardware-native gates with global optimality guarantees on the quality of the designed circuit. To this end, QCOpt takes the following inputs: the total number of qubits, the set of hardware-native elementary gates, the target gate to be decomposed, and the maximum allowable size (depth) of the circuit. Given these inputs, QCOpt invokes appropriate gates from a menagerie of gates implemented within the package, reformulates the optimization problem into a mixed-integer program (MIP), applies feasibility-based bound propagation, derives various hardware-relevant valid constraints to reduce the search space, and finally provides an optimal circuit with error guarantees. On a variety of benchmark quantum gates, we show that QCOpt can find up to 57% reduction in the number of necessary gates on circuits with up to four qubits, and in run times less than a few minutes on commodity computing hardware. We also validate the efficacy of QCOpt as a tool for quantum circuit design in comparison with a naive brute-force enumeration algorithm. We also show how the QCOpt package can be adapted to various built-in types of native gate sets, based on different hardware platforms like those produced by IBM, Rigetti and Google.
Package link: https://github.com/harshangrjn/QuantumCircuitOpt.jl
false
https://pretalx.com/juliacon-2022/talk/KJTGC3/
https://pretalx.com/juliacon-2022/talk/KJTGC3/feedback/
Purple
Simulating and Visualizing Quantum Annealing in Julia
Talk
2022-07-27T20:00:00+00:00
20:00
00:30
QuantumAnnealing.jl provides a toolkit for performing simulations of Adiabatic Quantum Computers on classical hardware. The package includes functionality for rapid simulation of the Schrodinger evolution of the system, processing annealing schedules used by real world annealing hardware, implementing custom annealing schedules, and more.
juliacon-2022-17260-simulating-and-visualizing-quantum-annealing-in-julia
JuliaCon
/media/juliacon-2022/submissions/XNRBWC/SpinTrajExample_OkhvcoD.png
Zachary Morrell
en
The field of Quantum Computation has been rapidly growing in recent years. One driving factor behind this growth is the computational intractability of simulating quantum systems. The classical overhead for simulating quantum systems grows exponentially as the system size increases, making quantum computers an appealing option for performing these simulations. Algorithms have also been developed to perform actions such as search and optimization on quantum computers. Quantum Annealing is an optimization method which makes use of an adiabatic quantum computer to try to find a global minimum. It relies on the Adiabatic Theorem, which states that if a quantum system is prepared in its ground state and evolves slowly enough, it will stay in its ground state. A few companies have created quantum annealing hardware, most notably D-Wave Systems, so it is useful to be able to simulate small anneals to look for signatures that imply that the quantum annealing hardware is behaving as expected. To accomplish this, we can solve the Schrodinger equation with the time-varying Hamiltonian for the quantum annealer we wish to simulate.
That is where this package comes into play. QuantumAnnealing.jl allows for the simulation of a quantum annealer with an arbitrary annealing schedule (two functions which dictate how the system evolves from the initial "easy" state to the final "problem" state) and an arbitrary target Hamiltonian (the encoding of the problem which is supposed to be solved by the quantum annealer). This package also provides functionality for implementing controls on the annealing schedules, such as holding the schedules constant for a set amount of time (often called a pause) or increasing the speed of the anneal (often called a quench), as well as the ability to directly process D-Wave hardware schedules from CSV files into the annealing schedule functions used by the simulator. The simulation can be performed either by using a wrapper around DifferentialEquations.jl, or by using a specialized solver we have written to quickly and accurately simulate the closed-system evolution of the quantum annealing Hamiltonian. This solver makes use of the Magnus expansion and includes hard-coded implementations up to the fourth order, as well as a general implementation if a higher-order solver is needed. This hard-coded solver has empirically produced a 20-30x speed improvement over the DifferentialEquations.jl wrapper.
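As background on the solver's underlying mathematics (standard textbook material, not code from the package): for a Schrödinger equation i dU/dt = H(t) U(t), the Magnus expansion writes the propagator as U(t) = exp(Ω(t)) with Ω = Σ_k Ω_k, whose first two terms are

```latex
\Omega_1(t) = -\,i \int_0^t H(t_1)\,\mathrm{d}t_1, \qquad
\Omega_2(t) = -\,\frac{1}{2} \int_0^t \mathrm{d}t_1 \int_0^{t_1} \mathrm{d}t_2\,
              \bigl[ H(t_1),\, H(t_2) \bigr].
```

Truncating the series after a fixed order and exponentiating keeps the approximate propagator exactly unitary, which is one reason a Magnus-based solver can outperform a generic ODE integrator for closed-system evolution.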
Alongside QuantumAnnealing.jl, we have released a plotting package, QuantumAnnealingAnalytics.jl which provides useful plotting functionality for common use-cases of the QuantumAnnealing.jl package. QuantumAnnealingAnalytics.jl includes functions to plot the instantaneous ground state of the Hamiltonian (useful for determining how quickly it is expected that the system can evolve without leaving the ground state), plotting the probabilities of various energy levels for the final system (useful for comparing output statistics from hardware), and plotting output statistics from data files in the bqpjson format. This allows for much easier understanding of the system the user is working with and can be used to quickly reproduce figures found in seminal works in the field of Quantum Annealing.
false
https://pretalx.com/juliacon-2022/talk/XNRBWC/
https://pretalx.com/juliacon-2022/talk/XNRBWC/feedback/
Blue
Julia to the NEC SX-Aurora Tsubasa Vector Engine
Talk
2022-07-27T12:30:00+00:00
12:30
00:30
The package VectorEngine.jl enables the use of the SX-Aurora Tsubasa Vector Engine (VE) as an accelerator in hybrid Julia programs. It builds on GPUCompiler.jl, leveraging the VEDA API as well as the LLVM-VE compiler and the Region Vectorizer. Even though the VE is very different from GPUs, using only a few cores and arithmetic units with a very long vector length, the package enables programmers to use it in a very similar way, simplifying the porting of Julia applications.
juliacon-2022-18175-julia-to-the-nec-sx-aurora-tsubasa-vector-engine
JuliaCon
Erich FochtValentin Churavy
en
The talk introduces the VectorEngine.jl [1] package, the first port of the Julia programming language to the NEC SX-Aurora Tsubasa Vector Engine (VE) [2]. It describes the design choices made for enabling the Julia programming language, architecture-specific details, similarities and differences between VEs and GPUs, and the currently supported features.
The current instances of the VE are equipped with 6 HBM2 modules that deliver 1.55 TB/s memory bandwidth to 8 or 10 cores. Each core consists of a full-fledged scalar processing unit (SPU) and a vector processing unit (VPU) running with very long vector lengths of up to 256 x 64-bit or 512 x 32-bit words. With C, C++ and Fortran the VE can run programs natively, completely on the VE, parallelized with OpenMP and MPI, with Linux system calls being processed on the host machine. Native VE programs can offload function calls to the host CPU (reverse offloading). Alternatively, the VE can be used as an accelerator, with the main program running on the host CPU and performance-critical kernels being offloaded to the VE with the help of libraries like AVEO or VEDA [3]. Prominent users of the SX-Aurora Vector Engines include weather and climate research (e.g. Deutscher Wetterdienst), earth sciences research (JAMSTEC, Earth Simulator), and fusion research (National Institute for Fusion Science, Japan).
For enabling the VE for Julia use we chose the normal offloading programming paradigm that treats the VE as an accelerator running particular code kernels. The GPUCompiler.jl module was slightly expanded and used in VectorEngine.jl to support VEDA on the VE, similar to the GPU-specific implementations CUDA.jl, AMDGPU.jl and oneAPI.jl. Although VEs are very different from GPUs, choosing a usage pattern similar to GPUs is the most promising approach for reducing porting efforts and making multi-accelerator Julia code maintainable. With VectorEngine.jl we can declare device-side arrays and structures, copy data between host and device, declare kernel functions, create cross-compiled objects that can be executed on the accelerator, or use a simple macro like @veda to run a function on the device side, hiding steps like compilation and argument transfer from the user.
For cross-compiling VE device code we use the LLVM-VE compiler. It is a slightly extended version of the upstream LLVM compiler that supports VE as an official architecture since late 2021. For vectorization inside the Julia device code we use the Region Vectorizer [4], an advanced outer loop vectorizer capable of handling divergent control flow. The Region Vectorizer does not do data-dependency analysis, therefore loops that need to be vectorized must be annotated by the programmer.
At the time of the submission of the talk proposal VE device side Julia supports a very limited runtime, quite similar to that of other GPUs. It includes device arrays, transfer of structures, vectorization using the Region Vectorizer and device-side ccalls to libc functions as well as other VE libraries. We discuss the target of implementing most of the Julia runtime on the device side, a step that would enable a much wider range of codes on the accelerator.
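A hypothetical usage sketch of the offloading pattern described above (type and macro names are assumed from the abstract; exact signatures in VectorEngine.jl may differ):

```julia
# Assumed API sketch, not verified against VectorEngine.jl itself.
using VectorEngine

# A kernel written as ordinary Julia code, to be cross-compiled for the VE.
function saxpy!(y, a, x)
    for i in eachindex(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

a = 2.0f0
x = VEArray(rand(Float32, 1024))   # device-side array (assumed name)
y = VEArray(zeros(Float32, 1024))

# The @veda macro (mentioned above) hides compilation and argument
# transfer, launching the kernel on the vector engine.
@veda saxpy!(y, a, x)
```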
[1] VectorEngine.jl github repository: https://github.com/sx-aurora-dev/VectorEngine.jl
[2] K. Komatsu et al, Performance evaluation of a vector supercomputer sx-aurora TSUBASA, https://dl.acm.org/doi/10.5555/3291656.3291728
[3] VEDA github repository: https://github.com/sx-aurora/veda
[4] Simon Moll, Vectorization system for unstructured codes with a Data-parallel Compiler IR, 2021, dissertation https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32453
false
https://pretalx.com/juliacon-2022/talk/QMZUZH/
https://pretalx.com/juliacon-2022/talk/QMZUZH/feedback/
Blue
Teaching GPU computing, experiences from our Master-level course
Talk
2022-07-27T13:00:00+00:00
13:00
00:30
In the Fall Semester 2021 at ETH Zurich, we designed and taught a new course: **Solving PDEs in parallel on GPUs with Julia**. We present technical and teaching experiences we gained: we look at our tech-stack `CUDA.jl`, `ParallelStencil.jl` and `ImplicitGlobalGrid.jl` for GPU computing; and `Franklin.jl`, `Literate.jl`, `IJulia.jl`/Jupyter for web, slides, and exercises. We also look into the crash-course in Julia, teaching software engineering (git, CI) and project-based student evaluations.
juliacon-2022-18014-teaching-gpu-computing-experiences-from-our-master-level-course
/media/juliacon-2022/submissions/YPGNCS/julia_gpu_course_w4RORIk.png
Ludovic Räss, Mauro Werder, Samuel Omlin
en
In the Fall Semester 2021 at ETH Zurich, we designed and taught a new Master-level course: [**Solving PDEs in parallel on GPUs with Julia**](https://eth-vaw-glaciology.github.io/course-101-0250-00/).
Although we had prior experience teaching workshops and individual lectures based on Julia, this was our first end-to-end Julia-based lecture course. It filled a niche at ETH Zurich, Switzerland: namely, numerical GPU computing for domain scientists.
Whilst we had great prior experience with the GPU tech-stack used (we're developing part of it), we had to learn a lot about the presentational tech-stack to create a website, slides and assignments. The presentation will focus on both the GPU-stack (`CUDA.jl`, `ParallelStencil.jl` and `ImplicitGlobalGrid.jl`) and the presentation-stack (`Literate.jl`, `Franklin.jl`, `IJulia.jl`/Jupyter).
Co-authors: Mauro Werder¹ ² , Samuel Omlin³
¹ Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) | ² ETH Zurich | ³ Swiss National Supercomputing Centre (CSCS)
false
https://pretalx.com/juliacon-2022/talk/YPGNCS/
https://pretalx.com/juliacon-2022/talk/YPGNCS/feedback/
Blue
GPU4GEO - Frontier GPU multi-physics solvers in Julia
Lightning talk
2022-07-27T13:40:00+00:00
13:40
00:10
The accelerating outflow of ice in Antarctica or Greenland due to a warming climate or the geodynamic processes shaping the Earth share common computational challenges: extreme-scale high-performance computing (HPC) which requires the next-generation of numerical models, parallel solvers and supercomputers. We here present a fresh approach to modern HPC and share our experience running Julia on thousands of graphical processing units (GPUs).
juliacon-2022-18017-gpu4geo-frontier-gpu-multi-physics-solvers-in-julia
/media/juliacon-2022/submissions/7FVVF3/gpu4geo_jc_KtKmFdN.png
Ludovic Räss, Albert de Montserrat, Boris Kaus, Samuel Omlin
en
Computational Earth sciences leverage numerical modelling to understand and predict the evolution of complex multi-physical systems. Ice sheet dynamics and solid Earth geodynamics are, despite their apparent differences, two domains that build upon analogous physical descriptions and share similar computational challenges. Resolving the interactions among various physical processes in three dimensions at high spatio-temporal resolution is crucial to capture rapid changes in the system leading to the formation of, e.g., ice streams or mountain ranges.
Within the [**GPU4GEO**](https://ptsolvers.github.io/GPU4GEO/) project, we propose software tools which provide a way forward in ice dynamics, geodynamics and computational Earth sciences by exploiting two powerful emerging paradigms in HPC: supercomputing with Julia on graphical processing units (GPUs) and massively parallel iterative solvers. We use Julia as the main language because it features high-level and high-performance capabilities and performance portability amongst multiple backends (e.g., multi-core CPUs, and AMD and NVIDIA GPUs).
We will discuss our experience using `ParallelStencil.jl` and `ImplicitGlobalGrid.jl` as software building blocks in combination with `CUDA.jl`, `AMDGPU.jl` and `MPI.jl` for designing massively parallel and scalable solvers based on the pseudo-transient relaxation method, namely `FastIce.jl` and `JustRelax.jl`. Our work shows great promise for solving a wide range of mechanical multi-physics problems in geoscience, at scale and on GPU-accelerated supercomputers.
Co-authors: Ivan Utkin¹ ², Albert De Montserrat¹, Boris Kaus³, Samuel Omlin⁴
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) | ³ Johannes Gutenberg University Mainz | ⁴ Swiss National Supercomputing Centre (CSCS)
false
https://pretalx.com/juliacon-2022/talk/7FVVF3/
https://pretalx.com/juliacon-2022/talk/7FVVF3/feedback/
Blue
Using Hawkes Processes in Julia: Finance and More!
Lightning talk
2022-07-27T14:30:00+00:00
14:30
00:10
Using HawkesProcesses.jl I'll introduce the theory behind Hawkes process and show how it can be used across many different applications. Hawkes processes in Julia benefit from the speed of the language and composability of the different libraries, as you can easily extend the Hawkes process using other packages without too much difficulty. Plus, by using Pluto notebooks I can build simple interactions that demonstrate the underlying mechanics of the Hawkes process.
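The underlying mechanics are easy to show in plain Julia (a self-contained sketch, not the HawkesProcesses.jl API): the conditional intensity of a univariate Hawkes process with an exponential kernel, λ(t) = μ + Σ_{tᵢ<t} α·exp(-β(t - tᵢ)), where each past event excites the arrival rate of future events.

```julia
# μ: baseline intensity, α: jump per event, β: decay rate.
# All parameter values here are illustrative defaults.
function hawkes_intensity(t, events; μ = 0.5, α = 0.8, β = 1.2)
    λ = μ
    for tᵢ in events
        tᵢ < t || continue
        λ += α * exp(-β * (t - tᵢ))   # self-excitation, decaying in time
    end
    return λ
end

events = [1.0, 2.5, 2.9]
hawkes_intensity(3.0, events)   # intensity just after a burst of events
```

The clustering behaviour seen in financial order flow comes from exactly this feedback: a burst of events raises λ, which makes further events more likely until the exponential terms decay.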
juliacon-2022-17053-using-hawkes-processes-in-julia-finance-and-more-
Dean Markwick
en
false
https://pretalx.com/juliacon-2022/talk/PRYQ8N/
https://pretalx.com/juliacon-2022/talk/PRYQ8N/feedback/
Blue
Dithering in Julia with DitherPunk.jl
Lightning talk
2022-07-27T14:40:00+00:00
14:40
00:10
Dithering algorithms are a group of color quantization techniques that create the illusion of continuous color in images with small color palettes by adding high-frequency noise or patterns. Traditionally used in printing, they are now mostly used for stylistic purposes.
DitherPunk.jl implements a wide variety of fast and extensible dithering algorithms. Using its example, I will demonstrate how packages for creative coding can be built on top of the JuliaImages ecosystem.
juliacon-2022-18054-dithering-in-julia-with-ditherpunk-jl
JuliaCon
Adrian Hill
en
In this talk I will present [DitherPunk.jl](https://github.com/JuliaImages/DitherPunk.jl), a Julia package implementing over 30 dithering algorithms: from ordered dithering with Bayer matrices to digital halftoning and error diffusion methods such as Floyd-Steinberg.
DitherPunk.jl can be used for binary dithering, channel-wise dithering and for dithering with custom color palettes.
Typically, color dithering algorithms are implemented using Euclidean distances in RGB color space. By building on top of packages from the JuliaImages ecosystem such as Colors.jl and ColorVectorSpace.jl, algorithms can be applied in any color space using any color distance metric, allowing for a lot of creative experimentation.
Due to its modular design, DitherPunk.jl is highly extensible. This will be demonstrated by creating an ordered dithering algorithm from a signed distance function.
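A hedged usage sketch of the workflow described above (function and algorithm names are believed to match DitherPunk.jl, but check the package docs for current syntax):

```julia
using DitherPunk, Images, TestImages

img = testimage("fabio_gray_256")

# Error diffusion (Floyd-Steinberg) vs. ordered dithering (Bayer matrix).
d1 = dither(img, FloydSteinberg())
d2 = dither(img, Bayer())

# Dithering a color image with a custom three-color palette.
palette = [RGB(0, 0, 0), RGB(1, 0, 0), RGB(1, 1, 1)]
d3 = dither(testimage("fabio_color_256"), FloydSteinberg(), palette)
```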
false
https://pretalx.com/juliacon-2022/talk/FXMQPQ/
https://pretalx.com/juliacon-2022/talk/FXMQPQ/feedback/
Blue
FdeSolver.jl: Solving fractional differential equations
Lightning talk
2022-07-27T14:50:00+00:00
14:50
00:10
The FdeSolver Julia package solves fractional differential equations in the sense of Caputo, and is suitable for nonlinear and stiff ordinary differential systems. It has been used for describing memory effects in microbial community dynamics and complex systems. With some practical examples, I will present why (and how) developing such a computational package in open-source programming is important (and useful).
juliacon-2022-18026-fdesolver-jl-solving-fractional-differential-equations
Moein Khalighi
en
Differential equations with fractional operators describe many real-world phenomena more accurately than integer-order calculus. Fractional calculus has been recognized as a powerful method for understanding the memory and nonlocal correlation of dynamic processes, phenomena, or structures. However, the Julia programming language has lacked a package for solving differential equations of fractional order in an accurate, reliable, and efficient way. Hence, we devised the FdeSolver Julia package specifically for solving two general classes of fractional-order problems: fractional differential equations (FDEs) and multi-order systems of FDEs. We implement explicit and implicit predictor-corrector algorithms with sufficient convergence and accuracy for nonlinear systems, including a fast Fourier transform technique that gives high computation speed and efficient treatment of the persistent memory term. The following document provides some overall information for using the package: https://juliaturkudatascience.github.io/FdeSolver.jl/stable/readme
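The persistent memory term mentioned above is what makes FDEs expensive: every step depends on the whole history. A self-contained illustration in plain Julia (not the FdeSolver.jl API) is the explicit rectangular product-integration rule for the Caputo problem D^β y(t) = f(t, y), y(0) = y₀, 0 < β ≤ 1:

```julia
# Small numerical Γ for the scalar arguments needed here (x in (1, 2]);
# in practice one would use SpecialFunctions.gamma.
Γ(x) = sum(t^(x - 1) * exp(-t) * 1e-3 for t in 1e-3:1e-3:50.0)

function fde_euler(f, y0, β, tspan, h)
    ts = tspan[1]:h:tspan[2]
    y  = zeros(length(ts)); y[1] = y0
    for n in 1:length(ts)-1
        acc = 0.0
        for j in 1:n   # persistent memory: every past state contributes
            b = (n - j + 1)^β - (n - j)^β
            acc += b * f(ts[j], y[j])
        end
        y[n+1] = y0 + h^β / Γ(β + 1) * acc
    end
    return collect(ts), y
end

# For β = 1 the weights collapse to 1 and this is the classical
# explicit Euler method; here on y' = -y, y(0) = 1.
f(t, y) = -y
ts, y = fde_euler(f, 1.0, 1.0, (0.0, 1.0), 0.01)
```

The O(n) inner loop at step n gives O(N²) total cost, which is precisely what the FFT-based treatment of the memory term in the package is designed to reduce.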
false
https://pretalx.com/juliacon-2022/talk/SV9TS9/
https://pretalx.com/juliacon-2022/talk/SV9TS9/feedback/
Blue
Automated PDE Solving in Julia with MethodOfLines.jl
Talk
2022-07-27T15:10:00+00:00
15:10
00:30
If you want to simulate something, sooner or later you’re going to come across partial differential equations. But solving PDEs is hard, right? It doesn’t have to be! In this talk we'll cut to the chase: how do I copy paste the textbook description of my PDE into Julia symbolic syntax and get a solution? MethodOfLines.jl is the answer, and in this talk we'll show you how to do it!
juliacon-2022-18002-automated-pde-solving-in-julia-with-methodoflines-jl
Alex Jones
en
MethodOfLines.jl is a system for the automated discretization of symbolically defined partial differential equations (PDEs) by the method of lines. By recognizing different linear and nonlinear terms in the specified system, we build a performant semidiscretization by symbolically applying effective finite difference schemes, which we then use to generate optimized Julia code. Consequently, one can solve the system with an appropriate ordinary differential equation (ODE) solver.
In this 30 minute talk, the audience will learn how to use MethodOfLines.jl to discretize and solve an example PDE that represents a physical simulation which arises in research, gaining the knowledge and skills to apply these tools to their own problems. They will also learn about some of the internals of MethodOfLines.jl. This will arm them with the knowledge required to implement improved finite difference schemes, benefiting their own research and others in the community. Finally, we will outline the proposed direction of development for the package moving forwards.
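A hedged sketch of the "copy the textbook PDE into symbolic syntax" workflow, for the 1D heat equation uₜ = uₓₓ (check the MethodOfLines.jl documentation for current syntax):

```julia
using ModelingToolkit, MethodOfLines, OrdinaryDiffEq, DomainSets

@parameters t x
@variables u(..)
Dt  = Differential(t)
Dxx = Differential(x)^2

eq  = Dt(u(t, x)) ~ Dxx(u(t, x))
bcs = [u(0, x) ~ sin(pi * x),   # initial condition
       u(t, 0) ~ 0.0,           # boundary conditions
       u(t, 1) ~ 0.0]
domains = [t ∈ Interval(0.0, 1.0), x ∈ Interval(0.0, 1.0)]

@named pde = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])

# Discretize x by finite differences, leaving t continuous (the method
# of lines), then solve the resulting ODE system with any ODE solver.
disc = MOLFiniteDifference([x => 0.05], t)
prob = discretize(pde, disc)
sol  = solve(prob, Tsit5())
```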
false
https://pretalx.com/juliacon-2022/talk/8AYKB7/
https://pretalx.com/juliacon-2022/talk/8AYKB7/feedback/
Blue
Automatic generation of C++ -- Julia bindings
Talk
2022-07-27T19:00:00+00:00
19:00
00:30
Interoperability of Julia with C++ is essential for the use of the Julia programming language in fields with a large legacy of code written in that language. We will show how the generation of a Julia interface to a C++ library can be automated using CxxWrap.jl and a tool that generates the required C++ glue code. The concept is demonstrated with a prototype called WrapIt!, https://www.github.com/grasph/wrapit, which is based on clang and already well advanced.
juliacon-2022-18049-automatic-generation-of-c-julia-bindings
JuliaCon
Philippe Gras
en
Interfacing Julia with C++ libraries can be done with the help of the CxxWrap.jl package. With this package, bindings to C++ classes, their methods, and global functions can easily be implemented with a clean Julia interface. To provide the bindings, a wrapper that defines the C++-Julia mapping must be written.
We will show in this talk that this wrapper code can be automatically generated from the C++ library source code. A code generator called WrapIt! (https://github.com/grasph/wrapit), developed as a proof of concept, will be presented.
The clang libraries (https://clang.llvm.org/), and in particular their C API, libclang, are used to interpret the C++ code. The tool is already well advanced. We will show the challenges of deducing the library interface directly from the C++ header files, in particular for large libraries with hundreds or thousands of C++ classes, the technical choices made in WrapIt!, the status of the tool, and what would be needed to make it a full-fledged wrapper generator.
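On the Julia side, consuming such a binding typically looks like the following hedged sketch (the library name is hypothetical; the `@wrapmodule` argument form varies between CxxWrap.jl versions):

```julia
# Assumed consumption pattern for a CxxWrap.jl-based binding, whether
# the C++ glue library was hand-written or generated by WrapIt!.
module MyLibWrapper

using CxxWrap
@wrapmodule(() -> "libmylib_julia")   # load the compiled glue library

function __init__()
    @initcxx   # register the wrapped C++ types with the Julia runtime
end

end # module
```

Once loaded, the wrapped C++ classes and functions behave like ordinary Julia types and methods, participating in multiple dispatch.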
false
https://pretalx.com/juliacon-2022/talk/EA7NVT/
https://pretalx.com/juliacon-2022/talk/EA7NVT/feedback/
Blue
Extending PyJL to Translate Python Libraries to Julia
Talk
2022-07-27T19:30:00+00:00
19:30
00:30
Many new high-level programming languages have emerged in recent years. Julia is one of these languages, offering the speed of C, the macro capabilities of Lisp, and the user-friendliness of Python. However, its library ecosystem is still small compared to that of languages such as Python. We propose extending PyJL, an open-source transpilation tool, to speed up the conversion of libraries to Julia.
juliacon-2022-18034-extending-pyjl-to-translate-python-libraries-to-julia
JuliaCon
Miguel Marcelino
en
PyJL is part of the Py2Many transpiler, which is a rule-based transpilation tool. PyJL builds upon Py2Many to translate Python source code to Julia. Parsing is performed through Python's _ast_ module, which generates an Abstract Syntax Tree. Then, several intermediate transformations convert the input Python source code into Julia source code.
In terms of our results, we managed to translate two commonly used benchmarks:
1. The N-Body problem, achieving a speedup of 19.5x when compared to Python, after adding only one type hint
2. An implementation of the Binary Trees benchmark to test Garbage Collection, which resulted in 8.6x faster execution time without requiring any user intervention
The current major limitations of PyJL are type inference and mapping Python's OO paradigm to Julia. Regarding type inference, PyJL requires type hints in function arguments and return types, and we are currently integrating pytype, a type inference mechanism, to verify the soundness of type hints. Regarding the OO paradigm, PyJL currently maps Python's classes, including class constructors and single inheritance, to Julia. However, Python's special methods, such as \_\_repr\_\_ or \_\_str\_\_, still require proper translation.
Although the development of our transpilation tool is still at an early stage, our preliminary results show that the transpiler generates human-readable code that can achieve high performance with few changes to the generated source code.
false
https://pretalx.com/juliacon-2022/talk/3VSJHV/
https://pretalx.com/juliacon-2022/talk/3VSJHV/feedback/
Blue
Julia in VS Code - What's New
Talk
2022-07-27T20:00:00+00:00
20:00
00:30
We will highlight new features in the Julia VS Code extension that shipped in the last year and give a preview of some new work. The new features from last year that we will highlight are: 1) a new profiler UI, 2) a new table viewer UI, 3) a revamped plot gallery, 4) cloud indexing infrastructure, and 5) integration of JuliaFormatter. We will also present some brand-new features, in particular an entirely new test explorer UI integration.
juliacon-2022-18128-julia-in-vs-code-what-s-new
JuliaCon
David Anthoff, Sebastian Pfitzner
en
false
https://pretalx.com/juliacon-2022/talk/JPYJS8/
https://pretalx.com/juliacon-2022/talk/JPYJS8/feedback/
BoF
Simulating neural physiology & networks in Julia
Birds of Feather
2022-07-27T12:30:00+00:00
12:30
01:30
Could Julia be uniquely well-suited for rapidly developing new approaches to simulate the brain? What if neuroscientists could use a composable set of tools to craft models of ion channels, compartmentalized neuronal morphology, networks of LIF or conductance-based neurons, reinforcement learning, and everything in-between?
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel in discord.
juliacon-2022-18170-simulating-neural-physiology-networks-in-julia
JuliaCon
Alessio Quaresima, Wiktor Phillips, Tushar Chauhan
en
Julia’s software ecosystem certainly lessens the technical burden for computational neuroscientists—it boasts federated development of high-quality packages for solving differential equations, machine learning, automatic differentiation, and symbolic algebra. Deep language support for multithreaded, distributed, and GPU parallelism also makes the case for models that can span multiple scales, both in biological detail and overall network size.
Come join us for a community discussion about what a fresh Julian take on modeling the brain might look like. Together we will lay out an initial set of goals for building up a domain-specific ecosystem of packages for computational neuroscience.
The discussion will be moderated by Wiktor Phillips, Alessio Quaresima, and Tushar Chauhan.
false
https://pretalx.com/juliacon-2022/talk/V7CCYF/
https://pretalx.com/juliacon-2022/talk/V7CCYF/feedback/
BoF
Discussing Gender Diversity in the Julia Community
Birds of Feather
2022-07-27T14:30:00+00:00
14:30
01:30
Julia Gender Inclusive is an initiative that supports gender diversity in the Julia community. Over the last year, we have worked toward doing so through meetups and workshops for community building and education. In this Birds-of-Feather session, we hope to discuss current and future initiatives with other people with underrepresented genders, as well as supportive allies.
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel
juliacon-2022-17698-discussing-gender-diversity-in-the-julia-community
Julia Gender Inclusive
en
The objective of this BoF is to find more people who feel their gender is underrepresented within the Julia community or want to support people who feel so. We aim to create a safe and fruitful discussion about gender diversity, increase awareness of our current initiatives, and receive input on new actions we can take as Julia Gender Inclusive.
true
https://pretalx.com/juliacon-2022/talk/YLSWBC/
https://pretalx.com/juliacon-2022/talk/YLSWBC/feedback/
BoF
Poster session
Virtual poster session
2022-07-27T18:00:00+00:00
18:00
01:30
The virtual poster session will take place in the designated area in [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP). See the full poster list on the [JuliaCon website](https://juliacon.org/2022/posters/).
juliacon-2022-21381-poster-session
en
false
https://pretalx.com/juliacon-2022/talk/M7JDGG/
https://pretalx.com/juliacon-2022/talk/M7JDGG/feedback/
JuMP
UnitJuMP: Automatic unit handling in JuMP
Lightning talk
2022-07-27T12:30:00+00:00
12:30
00:10
This talk will present the package UnitJuMP that allows the user to include units when modelling optimization problems in JuMP. Both variables and constraints can have specified units, as well as parameters involved in objective and constraints. If different units are combined, functionality in Unitful is used to check for compatibility and perform automatic conversions.
juliacon-2022-16830-unitjump-automatic-unit-handling-in-jump
JuMP
Truls Flatberg
en
When setting up complex optimization models for real-world problems, this will often involve
parameters and variables that represent physical quantities with specified units. To ensure
correctness of the optimization model, considerable care has to be taken to avoid errors due
to inconsistent use of units. An example is investment models in the energy sector, where
multiyear investments measured in GW can be combined with operational decisions on an hourly or minute basis, with parameters being provided in a wide range of units (kWh, MJ, kcal, MMBTU).
The package UnitJuMP is an extension to the JuMP package that handles modeling of units within JuMP models.
The implementation is based on use of the Unitful package for generic handling of physical units.
The package is still in an early stage of development, and currently only supports the use of units in combination with linear and mixed integer linear optimization problems.
The package is available for download and testing at https://github.com/trulsf/UnitJuMP.jl.
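The checking and conversion machinery that UnitJuMP delegates to can be shown with Unitful.jl alone (a sketch of the underlying idea; see the UnitJuMP.jl repository for the actual modelling syntax). Note that Unitful writes kilowatt-hours as `kW*hr`:

```julia
using Unitful

# Mixing units from the energy example above: Unitful converts them to
# a common basis instead of silently mis-scaling coefficients.
uconvert(u"kW*hr", 1u"MJ")   # exactly 5//18 kW·hr
uconvert(u"kcal", 1u"MJ")    # thermochemical kcal

# Incompatible dimensions raise an error rather than produce nonsense:
# uconvert(u"kW*hr", 1u"GW") would throw a DimensionError.
```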
false
https://pretalx.com/juliacon-2022/talk/RDLDYD/
https://pretalx.com/juliacon-2022/talk/RDLDYD/feedback/
JuMP
SparseVariables - Efficient sparse modelling with JuMP
Lightning talk
2022-07-27T12:40:00+00:00
12:40
00:10
Industry scale optimization problems are often large and sparse, and problem construction time can rival solution time. The default containers and macros in JuMP present some challenges for this class of problems, in particular performance gotchas.
We present SparseVariables.jl for simple sparse modelling, and demonstrate performance and conciseness with a supply chain optimization example, benchmarking both problem construction time and LOC for multiple modelling approaches.
juliacon-2022-17198-sparsevariables-efficient-sparse-modelling-with-jump
JuMP
Lars Hellemo
en
## Motivation
Industry scale optimization problems, e.g. in supply chain management or energy systems modeling, often involve constructing and solving very large, very sparse linear programs. For such problems, construction time can rival solution time even when the problem is solved by very efficient commercial linear programming solvers.
Julia and JuMP provide an elegant and fun modelling environment which integrates nicely with data management, versioning, reproducibility and portability.
The default containers and macros in JuMP do present some challenges for this class of problems, related to performance gotchas and incremental variable construction.
## What it is
Thanks to the unique hackability of Julia and JuMP, it has been straight-forward to create custom containers and macros to investigate alternative approaches to modelling large sparse systems with JuMP.
We present SparseVariables which provides a nice and compact syntax for these problems with good performance by default.
## Performance
To demonstrate SparseVariables, we present a demo supply chain optimization problem, which may be modelled in multiple ways, and benchmark the problem construction time and the number of lines of code for each approach.
## Elegance
With SparseVariables we avoid the boilerplate necessary with the suggested work-arounds for performance issues in JuMP, and also allow incremental construction of variables, which is useful for modular modelling code. Our container features slicing and other variable selection methods that allow for short and concise code, leveraging JuMP, MathOptInterface and the Julia package ecosystem.
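For context, the standard Dict-based work-around alluded to above can be sketched in plain JuMP (the index sets here are invented for illustration; this is the pattern SparseVariables.jl replaces, not its API):

```julia
using JuMP

model = Model()

# Only a few (product, location) pairs are valid — the sparse structure.
valid = [(:apples, :oslo), (:apples, :bergen), (:pears, :oslo)]

# Create variables only for valid pairs, stored in a Dict instead of a
# dense products × locations container.
x = Dict(k => @variable(model, lower_bound = 0, base_name = "x[$k]")
         for k in valid)

# Slicing over one index requires manual filtering — the boilerplate a
# dedicated sparse container removes.
@constraint(model, sum(x[k] for k in valid if k[2] == :oslo) <= 100)
```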
false
https://pretalx.com/juliacon-2022/talk/STM8PM/
https://pretalx.com/juliacon-2022/talk/STM8PM/feedback/
JuMP
JuMP ToQUBO Automatic Reformulation
Lightning talk
2022-07-27T12:50:00+00:00
12:50
00:10
We present ToQUBO.jl, a Julia package to reformulate general optimization problems into QUBO instances. This tool aims to convert JuMP problems for straightforward application in physics and physics-inspired solution methods whose normal form is equivalent to QUBO. It automatically maps between source and target models, providing a smooth JuMP modeling experience.
We also present a simple interface to connect various annealers and samplers as QUBO solvers bundled in another package, Anneal.jl.
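The kind of reformulation being automated can be illustrated with a tiny self-contained example (plain Julia, not the ToQUBO.jl API): the constrained binary problem max x₁ + 2x₂ subject to x₁ + x₂ ≤ 1 becomes an unconstrained QUBO by adding a penalty term that is positive exactly when the constraint is violated.

```julia
# Penalty weight ρ must exceed any objective gain from violating the
# constraint; 10 is ample here.
ρ = 10.0

# QUBO convention: minimize, so the objective is negated. The product
# x₁x₂ is nonzero only for the infeasible assignment (1, 1).
qubo_energy(x) = -(x[1] + 2x[2]) + ρ * x[1] * x[2]

# Brute-force the 4 binary assignments to find the ground state; an
# annealer or sampler searches this same energy landscape physically.
best = argmin(qubo_energy, [[a, b] for a in 0:1, b in 0:1])
```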
juliacon-2022-17059-jump-toqubo-automatic-reformulation
JuMP
/media/juliacon-2022/submissions/LJC7R8/logo_y512kFg.svg
Pedro Xavier
en
false
https://pretalx.com/juliacon-2022/talk/LJC7R8/
https://pretalx.com/juliacon-2022/talk/LJC7R8/feedback/
JuMP
A multi-precision algorithm for convex quadratic optimization
Talk
2022-07-27T13:00:00+00:00
13:00
00:30
In this talk, we describe a Julia implementation of RipQP, a regularized interior-point method for convex quadratic optimization. RipQP is able to solve problems in several floating-point formats, and can also start in a lower precision as a form of warm-start. The algorithm uses sparse factorizations or Krylov methods from the Julia package Krylov.jl. We present an easy way to use RipQP to solve problems modeled with QuadraticModels.jl and LLSModels.jl.
juliacon-2022-18068-a-multi-precision-algorithm-for-convex-quadratic-optimization
JuMP
Geoffroy Leconte
en
false
https://pretalx.com/juliacon-2022/talk/VGSB89/
https://pretalx.com/juliacon-2022/talk/VGSB89/feedback/
JuMP
Interior-point conic optimization with Clarabel.jl
Talk
2022-07-27T13:30:00+00:00
13:30
00:30
The talk will introduce Clarabel.jl, a conic convex optimization solver in pure Julia. Clarabel.jl uses an interior point technique with a novel homogeneous embedding and can solve LPs, QPs, SOCPs, SDPs or exponential cone programs. The talk will highlight the solver's performance advantages relative to competing solvers, discuss algorithmic and software design ideas drawn from existing solvers, and highlight extensibility features leveraging Julia's multiple dispatch system.
juliacon-2022-18115-interior-point-conic-optimization-with-clarabel-jl
JuMP
Paul Goulart
en
The talk will introduce Clarabel.jl, a new package for conic convex optimization implemented in pure Julia. The package is based on an interior point optimization method and can solve optimization problems in the form of linear and quadratic programs (LPs and QPs), second-order cone programs (SOCPs), semidefinite programs (SDPs) and constraints on the exponential cone.
The package implements a novel homogeneous embedding technique that offers substantially faster solve times relative to existing open-source and commercial solvers for some problem types. This improvement is due to both a reduction in the number of required interior point iterations as well as an improvement in both the size and sparsity of the linear system that must be solved at each iteration. The talk will describe details of this embedding and show performance results with respect to solvers based on the standard homogeneous self-dual embedding, including ECOS, Hypatia and MOSEK.
Our implementation of Clarabel.jl adopts design ideas from several existing solver packages. Based on our group’s prior experience implementing first-order optimization techniques in the ADMM-based solver COSMO.jl, Clarabel.jl adopts a modular implementation for convex cones that is easily extensible to new types. The solver organises its core internal data types following the design of the C++ QP solver OOQP, allowing for future extensions of the solver to exploit optimization problems with special internal structure, e.g. optimal control or support vector machine problems. Finally, the package works with generic types throughout, allowing for simple extension to abstract matrix or vector representations or use with arbitrary precision floating point types. The talk will describe these features and their implementation through Julia’s multiple dispatch system.
Clarabel.jl provides a simple native interface for solving cone programs in a standard format. The package also fully supports Julia's MathOptInterface package, and can therefore be used via both JuMP and Convex.jl.
The package will be available as an open-source package via Github under the Apache 2.0 license. An initial public release is planned for June 2022, but full documentation and examples are already available at:
https://oxfordcontrol.github.io/Clarabel.jl/
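Given the MathOptInterface support described above, usage through JuMP should follow the usual solver pattern; a hedged sketch of a small convex QP with a second-order cone constraint:

```julia
using JuMP, Clarabel

model = Model(Clarabel.Optimizer)
@variable(model, x[1:2])
@constraint(model, sum(x) == 1)
@constraint(model, [1.0; x] in SecondOrderCone())   # ‖x‖₂ ≤ 1
@objective(model, Min, x[1]^2 + x[2]^2 - x[1])      # convex quadratic
optimize!(model)
value.(x)
```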
false
https://pretalx.com/juliacon-2022/talk/UNRVRP/
https://pretalx.com/juliacon-2022/talk/UNRVRP/feedback/
JuMP
JuMP 1.0: What you need to know
Talk
2022-07-27T14:30:00+00:00
14:30
00:30
JuMP 1.0 was released in March 2022. I'll present the state of JuMP today, how we got here, what the JuMP community should know about the 1.0 release, and what's next on the roadmap.
juliacon-2022-17940-jump-1-0-what-you-need-to-know
JuMP
Miles Lubin
en
false
https://pretalx.com/juliacon-2022/talk/KCL3JM/
https://pretalx.com/juliacon-2022/talk/KCL3JM/feedback/
JuMP
A user’s perspective on using JuMP in an academic project
Talk
2022-07-27T15:00:00+00:00
15:00
00:30
The Risk-Aware Market Clearing (RAMC) project investigates the quantification and management of risk in power systems, thereby bridging machine learning, optimization and risk analysis. This talk will discuss the team's experience --from a user perspective-- on using Julia and JuMP within an academic project and a multi-disciplinary team. This will include the motivation for using these tools, as well as hurdles encountered along the way, and practical experience on industrial-size systems.
juliacon-2022-18010-a-user-s-perspective-on-using-jump-in-an-academic-project
JuMP
Mathieu Tanneau
en
false
https://pretalx.com/juliacon-2022/talk/CFGAUV/
https://pretalx.com/juliacon-2022/talk/CFGAUV/feedback/
JuMP
COPT and its Julia interface
Lightning talk
2022-07-27T15:30:00+00:00
15:30
00:10
COPT (Cardinal Optimizer) is a mathematical optimization solver for large-scale optimization problems. It includes high-performance solvers for LP, MIP, SOCP, convex QP, convex QCP and other mathematical programming problems. In this talk we will give an overview of COPT and introduce its Julia interface.
juliacon-2022-18677-copt-and-its-julia-interface
JuMP
Qi Huangfu
en
false
https://pretalx.com/juliacon-2022/talk/XRDNVT/
https://pretalx.com/juliacon-2022/talk/XRDNVT/feedback/
JuMP
JuMP and HiGHS: the best open-source linear optimization solvers
Lightning talk
2022-07-27T15:40:00+00:00
15:40
00:10
This talk will describe how the JuMP and HiGHS teams have worked together to deliver the best open-source linear optimization solvers to the Julia community, and present some high-profile use cases.
juliacon-2022-18645-jump-and-highs-the-best-open-source-linear-optimization-solvers
JuMP
/media/juliacon-2022/submissions/ZPUZPU/HiGHS_Icon_square_KbMxUw9.jpg
Julian Hall
en
Almost from the moment that the development of HiGHS was proposed in 2018, the prospect of it offering top-class open-source linear optimization solvers with a well designed and fully supported API was attractive to JuMP. Since then, as HiGHS has developed from outstanding "gradware" to the world's best open-source linear optimization software, there have been invaluable contributions from the JuMP team. This great example of community cooperation now means that HiGHS is the default MILP solver in JuMP's documentation, and Julia users have a slick interface to HiGHS. One particular area of activity that is exploiting this is the rapidly-growing world of open-source energy systems planning, where the high license fees for commercial optimization solvers mean that open-source alternatives are critically important for small-scale commercial enterprises, NGOs, and organisations in developing countries. Some high-profile use cases of the JuMP-HiGHS interface in this field will be presented.
false
https://pretalx.com/juliacon-2022/talk/ZPUZPU/
https://pretalx.com/juliacon-2022/talk/ZPUZPU/feedback/
JuMP
Pajarito's MathOptInterface Makeover
Talk
2022-07-27T19:00:00+00:00
19:00
00:30
Pajarito is an outer approximation solver for mixed integer conic problems. We have redesigned and rewritten Pajarito in version 0.8 to support MathOptInterface (finally!). Pajarito now has a generic cone interface that allows adding support for new convex cones through a small list of oracles. PajaritoExtras.jl extends Pajarito by defining several cones supported by the continuous conic solver Hypatia. We illustrate with applied examples, including mixed integer polynomial problems.
juliacon-2022-17013-pajarito-s-mathoptinterface-makeover
JuMP
Chris Coey
en
false
https://pretalx.com/juliacon-2022/talk/Y8BCSL/
https://pretalx.com/juliacon-2022/talk/Y8BCSL/feedback/
JuMP
A matrix-free fix-propagate-and-project heuristic for MILPs
Talk
2022-07-27T19:30:00+00:00
19:30
00:30
We present Scylla, a primal heuristic for mixed-integer optimization. It uses matrix-free approximate LP solving with specialized termination criteria and parallelized fix-and-propagate procedures blended with feasibility pump-like objective updates. Besides the presentation of the method and results, we will go over lessons learned on experimentation and implementation tricks including overhead reduction, asynchronous programming, and interfacing with natively-compiled libraries.
juliacon-2022-17935-a-matrix-free-fix-propagate-and-project-heuristic-for-milps
JuMP
Mathieu Besançon
en
The talk will present our work on Scylla in two aspects. The first will be the presentation of the method and different components and ideas they link to, from feasibility pump to primal-dual hybrid gradients for linear optimization and fix-and-propagate procedures. The second aspect of the talk will include lessons learned on asynchronous programming using the Task-Channel model, working with and interfacing native libraries or time management for time-constrained experimental runs.
false
https://pretalx.com/juliacon-2022/talk/JLYCL8/
https://pretalx.com/juliacon-2022/talk/JLYCL8/feedback/
JuMP
Verifying Inverse Model Neural Networks Using JuMP
Lightning talk
2022-07-27T20:00:00+00:00
20:00
00:10
Deep learning using neural networks is increasingly popular, but neural networks come with few built-in guarantees of correctness. This talk will discuss how I use JuMP to compute verified upper bounds on the error of a neural network trained as an inverse model. Using JuMP together with Distributed.jl allows me to run a large number of verification queries in parallel with minimal time spent on non-research development.
juliacon-2022-18769-verifying-inverse-model-neural-networks-using-jump
JuMP
/media/juliacon-2022/submissions/TSN8ZR/juliacon_talk_image_uVpYXvH.png
Chelsea Sidrane
en
The talk is based on https://arxiv.org/abs/2202.02429
false
https://pretalx.com/juliacon-2022/talk/TSN8ZR/
https://pretalx.com/juliacon-2022/talk/TSN8ZR/feedback/
JuMP
Complex number support in JuMP
Lightning talk
2022-07-27T20:10:00+00:00
20:10
00:10
Complex numbers appear in a variety of optimization problems such as AC optimal power flow problems (AC-OPF) and quantum information optimization. This talk presents the integration of complex numbers in JuMP. We first describe how to create complex variables and constraints with complex coefficients in JuMP. Then, we show how this addition makes use of the extensible design of MathOptInterface and JuMP. We illustrate this with examples from PowerModels.jl and SumOfSquares.jl.
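The interface described above can be exercised in a few lines of JuMP (a minimal sketch, assuming a JuMP version with `ComplexPlane` support; no solver is attached):

```julia
using JuMP

model = Model()
# One complex decision variable; JuMP represents it internally
# with two real variables (its real and imaginary parts).
@variable(model, z in ComplexPlane())
# A constraint with complex coefficients, which JuMP splits
# into separate real and imaginary constraints.
@constraint(model, (2 + 3im) * z == 1 + 1im)
```

After these lines, `num_variables(model)` should report 2, reflecting the real/imaginary decomposition.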
juliacon-2022-18244-complex-number-support-in-jump
JuMP
Benoît Legat
en
false
https://pretalx.com/juliacon-2022/talk/LSUKYX/
https://pretalx.com/juliacon-2022/talk/LSUKYX/feedback/
Sponsored forums
Julia Computing Sponsored Forum
Sponsor forum
2022-07-27T19:30:00+00:00
19:30
00:45
Join the sponsored forum [here](https://discord.com/channels/995757799076282478/1000106542605029416).
juliacon-2022-21246-julia-computing-sponsored-forum
en
false
https://pretalx.com/juliacon-2022/talk/YGFHB7/
https://pretalx.com/juliacon-2022/talk/YGFHB7/feedback/
Green
Keynote - Jeremy Howard
Keynote
2022-07-28T09:00:00+00:00
09:00
00:45
Keynote - Jeremy Howard
juliacon-2022-20652-keynote-jeremy-howard
JuliaCon
Jeremy Howard
en
Keynote - Jeremy Howard
false
https://pretalx.com/juliacon-2022/talk/78XMRJ/
https://pretalx.com/juliacon-2022/talk/78XMRJ/feedback/
Green
Quiqbox.jl: Basis set generator for electronic structure problem
Lightning talk
2022-07-28T10:30:00+00:00
10:30
00:10
Quiqbox.jl is a Julia package that allows highly customizable Gaussian-type basis set design for electronic structure problems in quantum chemistry and quantum physics. The package provides a variety of useful functions around basis set generation such as RHF and UHF methods, standalone 1-electron and 2-electron integrals, and most importantly, variational optimization for basis set parameters. It supports Linux, macOS, and Windows.
juliacon-2022-17950-quiqbox-jl-basis-set-generator-for-electronic-structure-problem
JuliaCon
Weishi Wang
en
Quantum and classical computers are being applied to solve ab initio problems in physics and chemistry. In the NISQ era, solving the "electronic structure problem" has become one of the major benchmarks for identifying the boundary between classical and quantum computational power. Electronic structure in condensed matter physics is often defined on a lattice grid while electronic structure methods in quantum chemistry rely on atom-centered single-particle basis functions. Grid-based methods require a large number of single-particle basis functions to obtain sufficient resolution when expanding the N-body wave function. Typically, fewer atomic orbitals are needed than grid points but the convergence to the continuum limit is less systematic. To investigate the consequences and compromises of the single-particle basis set selection on electronic structure methods, we need more flexibility than is offered in standard solid-state and molecular electronic structure packages. Thus, we have developed an open-source software tool called "Quiqbox" in the Julia programming language that allows for easy construction of highly customized floating basis sets. This package allows for versatile configurations of single-particle basis functions as well as variational optimization based on automatic differentiation of basis set parameters.
false
https://pretalx.com/juliacon-2022/talk/SZSESM/
https://pretalx.com/juliacon-2022/talk/SZSESM/feedback/
Green
MathLink(Extras): The powers of Mathematica and Julia combined
Lightning talk
2022-07-28T10:40:00+00:00
10:40
00:10
Mathematica is a powerful tool for many purposes, but it can be cumbersome to work with, especially for more automated tasks.
In this short talk, I will introduce MathLink and MathLinkExtras, which enable interoperability between Julia and Mathematica.
I will introduce the basic syntax of MathLink and discuss an application of automated computation of nested integrals.
juliacon-2022-16672-mathlink-extras-the-powers-of-mathematica-and-julia-combined
JuliaCon
/media/juliacon-2022/submissions/KPXD3Y/ReinML_DWUFhzm.png
Mikael Fremling
en
Mathematica is arguably the go-to tool for your everyday mathematical needs. It can efficiently perform integrals, solve equations, find roots, simplify expressions, plot functions, and much more.
However, there are tasks where Mathematica performs poorly or is just plain inconvenient to work with.
One such area is if/else statements and control flow.
Try, for instance, to construct programs where the algebraic manipulations depend on the functional form of the expression,
or to make non-trivial variable changes inside an expression.
These limitations and many more are solved by Julia's MathLink (https://github.com/JuliaInterop/MathLink.jl) and MathLinkExtras (https://github.com/fremling/MathLinkExtras.jl) packages.
The first package provides access to Mathematica/the Wolfram Engine via the Wolfram Symbolic Transfer Protocol (WSTP).
The second is "sugar on top" and provides the basic algebraic operations (+,-,*,/) for the MathLink variable types.
As a practical example, I will show how MathLink and MathLinkExtras were used in a research project[1] to compute nested Gaussian integrals.
[1] M. Fremling, "Exact gap-ratio results for mixed Wigner surmises of up to 4 eigenvalues", arXiv preprint arXiv:2202.01090 (2022). (https://arxiv.org/abs/2202.01090)
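For a flavour of the interface, a minimal sketch (requires a local Mathematica or Wolfram Engine installation; the `W"..."` constructors follow MathLink.jl's documented style):

```julia
using MathLink  # needs a local Mathematica / Wolfram Engine kernel

# Build the Wolfram-language expression Integrate[Sin[x], {x, 0, Pi}]
# and evaluate it in the kernel; the exact result is 2.
expr = W"Integrate"(W"Sin"(W"x"), W"List"(W"x", 0, W"Pi"))
result = weval(expr)
```

MathLinkExtras then lets expressions like these be combined with ordinary `+`, `-`, `*`, `/` on the Julia side.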
false
https://pretalx.com/juliacon-2022/talk/KPXD3Y/
https://pretalx.com/juliacon-2022/talk/KPXD3Y/feedback/
Green
Dates with Nanoseconds
Lightning talk
2022-07-28T10:50:00+00:00
10:50
00:10
Julia's DateTime type is limited to milliseconds, while the Time type supports nanoseconds. This talk introduces NanoDates.jl and its NanoDate type, which works like DateTime but with higher precision. CompoundPeriods behave more smoothly and are available as an operational design element for developers.
juliacon-2022-16388-dates-with-nanoseconds
Jeffrey Sarnoff
en
false
https://pretalx.com/juliacon-2022/talk/Y8G9VJ/
https://pretalx.com/juliacon-2022/talk/Y8G9VJ/feedback/
Green
Exploring audio circuits with ModelingToolkit.jl
Talk
2022-07-28T11:00:00+00:00
11:00
00:30
The study of audio circuits is interdisciplinary. It combines DSP, analog circuits, differential equations, and semiconductor theory. Mathematical tools like Fourier Transforms and standard circuit analysis cannot explain the behavior of stateful nonlinearities. A complete description of a circuit can only be obtained through time-domain (or ‘transient’, in SPICE terms) simulation. ModelingToolkit.jl enables rapid design iteration and combines features that traditionally require multiple tools.
juliacon-2022-18089-exploring-audio-circuits-with-modelingtoolkit-jl
JuliaCon
George Gkountouras
en
This talk is targeted at people who want to start using ModelingToolkit.jl for simulations in their domain. It shows a workflow that is possible today. Additionally, it hints that composability is the key to unlock future breakthroughs. As a bonus, we can implement recent audio processing papers in a few lines of code!
We begin with a survey of the numerical and symbolic software commonly employed in the field and explain our decision to use ModelingToolkit.jl. Typically, engineers working with audio systems face a 3-language problem: use SPICE-style software to analyze a circuit, then move on to Matlab/Scilab/Python to deliver a high-level prototype, and finally re-write that into a high-performance implementation in C/C++. Usually, the simplification of a circuit turns into a laborious, multi-week manual process. ModelingToolkit.jl covers these use cases and more.
Afterwards, we will explore increasingly complex audio circuits via simulation. Topics include:
- implementing Kirchhoff laws
- defining simple models (capacitor, diode)
- defining a circuit
- simulating the circuit and plotting the result
- defining hierarchical models (VCCS, vacuum tube)
- animated plotting
- defining controls (potentiometers)
- exploring variations on a venerable guitar pedal
Lastly, audio demos of the simulated circuits will be featured.
Assumed background:
Attendees are expected to have some programming experience in e.g. Python. It is helpful, although not required, to have experience working with analog circuits and/or SPICE simulation software.
false
https://pretalx.com/juliacon-2022/talk/9EM3P7/
https://pretalx.com/juliacon-2022/talk/9EM3P7/feedback/
Green
Universal Differential Equation models with wrong assumptions
Lightning talk
2022-07-28T11:30:00+00:00
11:30
00:10
The SciML ecosystem introduces an effective way to model natural phenomena with Universal Differential Equations (UDEs). UDEs enrich differential equations by combining an explicitly known term with a term learned from data via a Neural Network. Here, we explore what happens when our assumptions about the known term are wrong, making use of the rich interoperability of Julia. The insights we offer will be useful to the Julia community in better understanding the strengths and possible shortcomings of UDEs.
juliacon-2022-18084-universal-differential-equation-models-with-wrong-assumptions
Luca Reale
en
### Introduction
Julia’s SciML ecosystem introduced an effective way to model natural phenomena as dynamical systems with Universal Differential Equations (UDEs). The UDE framework enriches both classic and Neural Network differential equation modelling, combining an explicitly “known” term (one whose functional expression is known) with an “unknown” term (one whose functional expression is not known). Within a UDE, the unknown term, and therefore the overall functional form of the dynamical system, is learned from observational data by fitting a Neural Network. The task of the Neural Network is facilitated by the domain knowledge embodied in the known term; moreover, the interpolation and, importantly, extrapolation performance of the fitted model is greatly improved by that knowledge (and by a simplification step, such as SINDy). All of this relies on the tacit assumption that what we think about the natural phenomenon is correctly expressed in the known term. Most of the research has focused on the robust identification of the unknown term and the properties of the Neural Network. We focus instead on the impact of possible pathologies in the design of a UDE system, and in particular on errors introduced in the expression of the known term. That is, we ask what happens if our domain knowledge is not correctly expressed. In the context of the famous quote “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so”, attributed to Mark Twain, we explore the magnitude of the trouble you get into.
### Details
In more detail, for a set of variables X, we consider a dynamical system of the form
`dX(t)=F(X,t)=K(t)+U(X,t)`
where `K(t)` is the part of the dynamical equation assumed as “known”, and `U(X,t)` is the part assumed as “unknown”.
In this scenario, the observational data are samples from `X(t)` at various points in time.
Let `K*(t)` be a perturbed version of `K(t)` (say, for a certain `ω`, `K*(t)=sin(t+ω)` when `K(t)=sin(t)`).
Our aim is to recover `F(X,t)` from the observed data by training a UDE of the form
`dX(t)=K*(t)+NN(X,t)`.
Under the perturbed scenario, we ask some simple questions whose answers are far from trivial:
- Can we recover the functional form of `F(X,t)`?
- Can we at least approximate it accurately?
- How does the perturbation we imposed on `K(t)` impact our model accuracy?
In order to explore the discrepancy between expected and obtained results, we needed: synthetic data from the original dynamical system, that is, a family of functions for `K(t)`; a family of perturbed versions of `K(t)`; and a way to assess how far off we are from recovering the true `F(X,t)`. All three tasks were facilitated by the interoperability of Julia, and in the presentation we will show how that plays out.
1. We considered trigonometric, exponential, and polynomial functions, as well as linear combinations of these functions, to create the original dynamical system and generate synthetic observational data. This was made efficient by the symbolic computation capabilities of Julia, e.g., Symbolics.jl.
2. We fitted a family of UDEs to the data we generated under three scenarios: (a) a correctly specified known term, i.e., `K*(t)=K(t)`; (b) the lack of a known term, i.e., `K*(t)=0`; and (c) a perturbation of the known term. The UDE was subsequently simplified to recover a sparse representation of the dynamical system in terms of simple functions. This step was done within Julia’s SciML framework.
3. Finally, we evaluated the goodness of fit of the recovered dynamical system (both simplified and not) against the original dynamical system. For this we developed a package, FunctionalDistances.jl, to automate as much as possible the estimation of the distance between two functions. (The package will shortly be available in a GitHub repository.)
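The setup above can be sketched in a few lines of Julia (an illustrative toy model, assuming the 2022-era DiffEqFlux.jl `FastChain` API; the training step, e.g. via `sciml_train`, is omitted):

```julia
using DiffEqFlux, OrdinaryDiffEq

# Small neural network standing in for the unknown term U(X, t).
NN = FastChain(FastDense(2, 16, tanh), FastDense(16, 1))
θ0 = initial_params(NN)

ω = 0.3  # perturbation: the "known" term is K*(t) = sin(t + ω) instead of sin(t)

function ude!(du, u, θ, t)
    # dX(t) = K*(t) + NN(X, t)
    du[1] = sin(t + ω) + NN([u[1], t], θ)[1]
end

prob = ODEProblem(ude!, [1.0], (0.0, 10.0), θ0)
sol = solve(prob, Tsit5(); saveat = 0.1)
```

Fitting `θ0` to the synthetic observations, and then simplifying the trained network, follows the standard SciML UDE workflow.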
### Future Development
The preliminary results we obtained suggest that no UDE with a strongly perturbed known term provided a better model than its counterpart with a correctly specified term. Yet, a few perturbed models give better fits than unspecified ones, raising the question of whether errors in the UDE specification are indeed detectable.
Our talk will interest both people who study UDEs, thanks to our cautionary and surprising results, and the wider audience interested in the use of Julia for mathematical modelling, thanks to the encouraging examples of interoperability we present.
The talk will present how Julia helped us in this experimental mathematical exercise, and offer many opportunities for further investigations.
The presentation will be as light as possible on the mathematical side, will present ample examples of how the interoperability of Julia helped our analysis, and will assume little or no prior knowledge of UDEs. Graphs and examples will also be used to aid understanding of the topic.
false
https://pretalx.com/juliacon-2022/talk/AYAUPK/
https://pretalx.com/juliacon-2022/talk/AYAUPK/feedback/
Green
Using SciML to predict the time evolution of a complex network.
Lightning talk
2022-07-28T11:40:00+00:00
11:40
00:10
Modeling the temporal evolution of complex networks is still an open challenge across many fields. Using the SciML ecosystem in Julia, we train and simplify a Neural ODE on the low-dimensional embeddings of a temporal sequence of networks. In this way, we discover a dynamical system representation of the network that allows us to predict its temporal evolution. In the talk we’ll show how the tight integration of SciML, Network, and Matrix Algebra packages in Julia opens new modeling directions.
juliacon-2022-17633-using-sciml-to-predict-the-time-evolution-of-a-complex-network-
Andre Macleod
en
**Introduction**
Complex networks can change over time as vertices and edges between vertices get added or removed. Modeling the temporal evolution of networks and predicting their structure is an open challenge across a wide variety of disciplines: from the study of ecological networks such as food webs, to predictions about the structure of economic networks; from the analysis of social networks, to the modeling of how our brain develops and adapts during our lives.
In their usual representation, networks are binary (an edge is either observed or not), sparse (each vertex is linked to a very small subset of the network), and large (up to billions of nodes), and changes are discrete rewiring events. These properties make them hard to handle with classic machine learning techniques and have barred the use of more traditional mathematical modeling such as differential equations. In this talk, we show how we used Julia, and in particular the Scientific Machine Learning (SciML) framework, to model the temporal evolution of complex networks as continuous, multivariate, dynamical systems from observational data. We took an approach cutting across different mathematical disciplines (machine learning, differential equations, and graph theory): this was possible largely thanks to the integration of packages like Graphs.jl (and the companion MetaGraphs.jl package), LinearAlgebra.jl (and other matrix decomposition packages), and the SciML ecosystem, e.g., DiffEqFlux.jl.
**Methodology**
1. To translate the discrete, high-dimensional, dynamical system into a continuous, low-dimensional one, we rely on a network embedding technique. A network embedding maps the vertices of a network to points in a (low-dimensional) metric space. We adopt the well-studied Random Dot-Product Graphs statistical model: the mapping is provided by a truncated Singular Value Decomposition of the network’s adjacency matrix; to reconstruct the network we use the fact that the probability of interaction between two vertices is given by the dot product of the points they map to. The decomposition of the adjacency matrices and their alignment is a computationally intensive step, and we tackle it thanks to the fast matrix algorithms available for Julia and their integration with packages that allow for network data wrangling (like Graphs.jl).
2. In the embedding framework, a discrete change in the network is modeled as the effect of a continuous displacement of the points in the metric space. Our goal, then, is to discover from the data (the network observed at various points in time) an adequate dynamical system capturing the laws governing the temporal evolution of the complex network. This is possible thanks to a pipeline that combines Neural ODEs and the identification of nonlinear dynamical systems (such as SINDy).
In general, each node may influence the temporal evolution of every other node in the network, and, if we are working in a space of dimension d with N nodes, this translates to a dynamical system with N*N*d variables. As networks may often have thousands or millions of nodes, that number can be huge. In our talk we are going to discuss various strategies we adopted to tame the complexity of the dynamical system.
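The embedding step described above can be sketched in a few lines (an illustrative example; the random snapshot graph and embedding dimension are made up):

```julia
# RDPG-style embedding of one network snapshot: truncate the SVD of the
# adjacency matrix so every vertex becomes a point in a d-dimensional space.
using Graphs, LinearAlgebra

g = erdos_renyi(50, 0.1)                      # stand-in for one observed snapshot
A = Matrix{Float64}(adjacency_matrix(g))
d = 4                                          # embedding dimension
F = svd(A)
X = F.U[:, 1:d] * Diagonal(sqrt.(F.S[1:d]))    # vertex embeddings: A ≈ X * X'
Phat = X * X'                                  # entries approximate edge probabilities
```

The dynamical system is then learned on the trajectories of the rows of `X` across aligned snapshots.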
**Future Development**
As a proof of concept, we tested our modeling approach on a network of wild birds interacting over the span of 6 days, collected by the team behind the Animal Social Network Repository (ASNR). The network has 202 vertices (birds), and 11899 edges (contact between birds). In the talk we will showcase this application to show strengths and current limitations of our novel approach.
We are now working on three fronts:
- we need to scale up the framework so as to model very large networks (for example social networks, where being able to predict which people might link to others in the future could be a tool to fight the growing problem of misinformation and disinformation);
- we are considering stepping from Neural Differential Equations to Universal Differential Equations, both to capture any preexisting knowledge of the network dynamical system, and to help with the training complexity;
- we are exploring data augmentation techniques to interpolate between the estimated embeddings, other embeddings, and other neural network architectures.
These three development directions constitute interesting challenges for the Julia practitioner interested in extending Julia's modeling abilities. We will discuss them in the talk and suggest how everyone can contribute.
All the code will be made available in a dedicated git repository and we are preparing a detailed publication to illustrate our approach.
false
https://pretalx.com/juliacon-2022/talk/BEY33E/
https://pretalx.com/juliacon-2022/talk/BEY33E/feedback/
Green
Fast, Faster, Julia: High Performance Implementation of the NFFT
Talk
2022-07-28T12:30:00+00:00
12:30
00:30
In this talk, we present the architecture of the NFFT.jl package, which implements the non-equidistant fast Fourier transform (NFFT). The NFFT is commonly implemented in C/C++ and requires sophisticated performance optimizations to exploit the full potential of the underlying algorithm. We demonstrate how Julia enables a high-performance, generic, and dimension-agnostic implementation with only a fraction of the code required for established C/C++ NFFT implementations.
juliacon-2022-17057-fast-faster-julia-high-performance-implementation-of-the-nfft
JuliaCon
Tobias Knopp
en
The non-equidistant fast Fourier transform (NFFT) is an extension of the well-known fast Fourier transform (FFT) in which the sample points in one domain can be non-equidistant. The NFFT is an approximate algorithm and allows the approximation error to be controlled to achieve machine precision while keeping the algorithmic complexity in the same order of magnitude as a regular FFT. The NFFT plays an important role in many signal processing applications and has been intensively studied from both theoretical and computational perspectives. The fastest NFFT libraries are implemented in the low-level programming languages C and C++ and require a trade-off between generic code, code readability, and code efficiency.
In this talk, we show that Julia provides new ways to optimize these three conflicting goals. We outline the architecture and implementation of the NFFT.jl package, which has recently been refactored to match the performance of the modern C++ implementation FINUFFT. NFFT.jl is fully generic, dimension-independent, and has a flexible architecture that allows parts of the algorithm to be exchanged through different code paths. This is crucial for the realization of different precomputation strategies tailored to optimize either the computation time or the required main memory. NFFT.jl makes intensive use of the Cartesian macros in Julia Base, allowing for zero-overhead and dimension-agnostic implementation. In contrast, the two modern C (NFFT3) and C++ (FINUFFT) libraries use dedicated 1D, 2D and 3D code paths to achieve maximum performance. The generic Julia implementation thus avoids code duplication and requires 3-4 times less code than its C/C++ counterparts. NFFT.jl is multi-threaded and uses a cache-aware blocking technique to achieve decent speedups.
Package being presented:
- https://github.com/JuliaMath/NFFT.jl
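Typical usage of the refactored package follows the plan-based pattern familiar from FFT libraries (a minimal 1D sketch; node and size values are made up):

```julia
using NFFT

J, N = 32, 64
k = rand(J) .- 0.5                 # non-equidistant sampling nodes in [-1/2, 1/2)
p = plan_nfft(k, N)                # precompute an NFFT plan for N coefficients
fHat = randn(ComplexF64, N)        # Fourier coefficients
f = p * fHat                       # forward NFFT: evaluate at the J nodes
g = adjoint(p) * f                 # adjoint NFFT back to N coefficients
```

The same code works unchanged in higher dimensions by passing a `d × J` node matrix and a size tuple, which is where the dimension-agnostic design pays off.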
false
https://pretalx.com/juliacon-2022/talk/BASTLY/
https://pretalx.com/juliacon-2022/talk/BASTLY/feedback/
Green
Julia's latest in high performance sorting
Talk
2022-07-28T13:00:00+00:00
13:00
00:30
This talk compares the runtime of Julia's builtin sorting with that of other languages and explains some of the techniques Julia uses to outperform other languages. This is a small part of the larger ongoing effort to equip Julia with state-of-the-art and faster-than-state-of-the-art performance for all sorting tasks.
juliacon-2022-17702-julia-s-latest-in-high-performance-sorting
/media/juliacon-2022/submissions/VC9YHN/sorting_wsCietR.svg
Lilith Hafner
en
Julia's radix sort implementation: https://github.com/JuliaLang/julia/blob/fc1093ff1560b47611293bf71f8074030116edcc/base/sort.jl#L681
Benchmark implementations: https://github.com/LilithHafner/InterLanguageSortingComparisons
Extended discussion surrounding the introduction of radix sort to Julia: https://github.com/JuliaLang/julia/pull/44230
Ongoing work: https://github.com/JuliaLang/julia/pull/45222
Potential future work: https://github.com/JuliaLang/julia/discussions/44876#discussioncomment-2890020
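For a quick taste of the comparison, an illustrative micro-benchmark (not the talk's benchmark suite):

```julia
v = rand(UInt32, 1_000_000)

# Base lets you choose the algorithm explicitly; on recent Julia versions
# the default dispatches to a fast radix-style method for many bits types.
@time sort(v)                   # default algorithm
@time sort(v; alg = QuickSort)  # explicitly request quicksort
```

`MergeSort` and `InsertionSort` can be requested the same way via the `alg` keyword.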
false
https://pretalx.com/juliacon-2022/talk/VC9YHN/
https://pretalx.com/juliacon-2022/talk/VC9YHN/feedback/
Green
Julia Gaussian Processes
Talk
2022-07-28T13:30:00+00:00
13:30
00:30
Julia Gaussian Processes (Julia GPs) is home to an ecosystem of packages whose aim is to enable research and modelling using GPs in Julia. It specifies a variety of interfaces, code which implements these interfaces in standard settings, and code built on top of these interfaces (e.g. plotting). The composability and modularity of these interfaces distinguishes it from other GP software. This talk will explore the things that you can currently do with the ecosystem, and where it’s heading.
juliacon-2022-18100-julia-gaussian-processes
JuliaCon
/media/juliacon-2022/submissions/N7DSLT/jgp_sgsHA1Q.png
Will Tebbutt
en
Gaussian processes provide a way to place prior distributions over unknown functions, and are used throughout probabilistic machine learning, statistics and numerous domain areas (climate science, epidemiology, geostatistics, model-based RL to name a few). Their popularity stems from their flexibility, interpretability, and the ease with which exact and approximate Bayesian inference can be performed, as well as their ability to be utilised as a single module in a larger probabilistic model.
The goal of the [JuliaGPs organisation](https://github.com/JuliaGaussianProcesses/) is to provide a range of software which is suitable for both methodological research and deployment of GPs. We achieve this through a variety of clearly-defined abstractions, interfaces, and libraries of code. These are designed to interoperate with each other, and the rest of the Julia ecosystem (Distributions.jl, probabilistic programming languages, AD, plotting, etc), instead of providing a single monolithic package which attempts to do everything. This modular approach allows a GP researcher to straightforwardly build on top of lower-level components of the ecosystem which are useful in their work, without compromising on convenience when applying a GP in a more applications-focused fashion.
This talk will (briefly) introduce GPs, and discuss the JuliaGPs ecosystem: what its design principles are and how these relate to existing GP software, how it is structured, what is available, what it lets you do, what it doesn’t try to do, and where there are gaps that we are trying to fill (and could use some assistance!). It will provide some examples of standard use (e.g. regression and classification tasks), making use of the core packages ([AbstractGPs](https://github.com/JuliaGaussianProcesses/AbstractGPs.jl), [ApproximateGPs](https://github.com/JuliaGaussianProcesses/ApproximateGPs.jl/), [KernelFunctions](https://github.com/JuliaGaussianProcesses/KernelFunctions.jl)), and how to move forward from there. It will also show how the abstractions have been utilised in the existing contributors’ research, for example with [Stheno](https://github.com/JuliaGaussianProcesses/Stheno.jl), [TemporalGPs](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl), [AugmentedGPLikelihoods](https://github.com/JuliaGaussianProcesses/AugmentedGPLikelihoods.jl), [GPDiffEq](https://github.com/Crown421/GPDiffEq.jl), [BayesianLinearRegressors](https://github.com/JuliaGaussianProcesses/BayesianLinearRegressors.jl), [LinearMixingModels](https://github.com/invenia/LinearMixingModels.jl), with the aim of providing inspiration for how you might do the same in your own work.
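Standard use of the core packages looks like this (a minimal regression sketch with made-up data):

```julia
using AbstractGPs, KernelFunctions, Statistics

x = rand(20)
y = sin.(2π .* x) .+ 0.1 .* randn(20)

f = GP(Matern52Kernel())     # GP prior with a Matérn-5/2 kernel
fx = f(x, 0.1)               # prior at inputs x with observation noise variance 0.1
post = posterior(fx, y)      # exact GP posterior conditioned on (x, y)
m = mean(post, x)            # posterior mean at the training inputs
```

Because `post` is itself an `AbstractGP`, it composes with the rest of the ecosystem (plotting recipes, approximate inference, Stheno-style model building) without any special casing.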
false
https://pretalx.com/juliacon-2022/talk/N7DSLT/
https://pretalx.com/juliacon-2022/talk/N7DSLT/feedback/
Green
Restreaming of Jeremy Howard Keynote
Keynote
2022-07-28T14:30:00+00:00
14:30
00:45
Restreaming of the earlier Keynote by Jeremy Howard
juliacon-2022-21234-restreaming-of-jeremy-howard-keynote
JuliaCon
en
false
https://pretalx.com/juliacon-2022/talk/UZBZRQ/
https://pretalx.com/juliacon-2022/talk/UZBZRQ/feedback/
Green
oneAPI.jl: Programming Intel GPUs (and more) in Julia
Lightning talk
2022-07-28T15:20:00+00:00
15:20
00:10
oneAPI.jl is a Julia package that makes it possible to use the oneAPI framework to program accelerators like Intel GPUs. In this talk, I will explain the oneAPI framework, which accelerators it supports, and demonstrate how oneAPI.jl makes it possible to work with these accelerators from the Julia programming language.
juliacon-2022-18038-oneapi-jl-programming-intel-gpus-and-more-in-julia
JuliaCon
Tim Besard
en
oneAPI is a framework, developed by Intel but intended to be cross-platform, that can be used to program various hardware accelerators. This includes Intel GPUs, which exist as integrated solutions in many processors, and dedicated hardware that will be part of the Aurora supercomputer at Argonne National Laboratory.
To program these GPUs from Julia, we have created the oneAPI.jl package based on existing GPU infrastructure like GPUCompiler.jl and GPUArrays.jl. It builds on the low-level Level Zero APIs that are part of oneAPI, and relies on Khronos tools to compile Julia code to SPIR-V. With it, Intel GPUs can be programmed using the familiar programming styles supported by other GPU back-ends: high-level array abstractions that automatically exploit the implicit parallelism, and low-level kernels where the programmer is responsible for doing so.
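As a sketch of the two programming styles (assuming a machine with a supported Intel GPU and the oneAPI.jl package installed):

```julia
using oneAPI

# Array style: broadcasting over oneArray compiles to GPU code automatically,
# exploiting the implicit parallelism of the expression.
a = oneArray(rand(Float32, 1024))
b = oneArray(rand(Float32, 1024))
c = a .+ 2f0 .* b

# Kernel style: the programmer manages parallelism explicitly.
function vadd!(c, a, b)
    i = get_global_id()
    @inbounds c[i] = a[i] + b[i]
    return
end
@oneapi items=length(c) vadd!(c, a, b)
```

The kernel form mirrors the OpenCL/SYCL work-item model, with `get_global_id()` identifying the lane each invocation operates on.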
false
https://pretalx.com/juliacon-2022/talk/XKGBAM/
https://pretalx.com/juliacon-2022/talk/XKGBAM/feedback/
Green
Julius Tech Sponsored Talk
Platinum sponsor talk
2022-07-28T15:30:00+00:00
15:30
00:15
Julius offers an auto-scaling, low code graph computing solution that allows firms to quickly build transparent and adaptable data analytics pipelines.
juliacon-2022-21244-julius-tech-sponsored-talk
en
Julius offers an auto-scaling, low code graph computing solution that allows firms to quickly build transparent and adaptable data analytics pipelines. Graph computing is an innovative technology that enables developers to organize pipelines as directed acyclic graphs (DAGs). With Julius, DAGs representing complex workflows are created by composing smaller modular DAGs, and can be applied to many enterprise use cases, including explainable ML, big data analytics, data visualization and transformation, AAD, and more.
For Julia users, we provide a dynamic platform to help developers make Julia more manageable and adoptable for enterprise computing. Engineers can produce enterprise scale solutions in a fraction of the time and cost.
false
https://pretalx.com/juliacon-2022/talk/R8VHSS/
https://pretalx.com/juliacon-2022/talk/R8VHSS/feedback/
Green
Annual Julia Developer Survey
Lightning talk
2022-07-28T15:45:00+00:00
15:45
00:10
Results of the Julia Developer Survey 2022
juliacon-2022-21626-annual-julia-developer-survey
en
false
https://pretalx.com/juliacon-2022/talk/UKMZJV/
https://pretalx.com/juliacon-2022/talk/UKMZJV/feedback/
Green
BlockDates: A Context-Aware Fuzzy Date Matching Solution
Talk
2022-07-28T16:30:00+00:00
16:30
00:30
We developed the open-source software package BlockDates using the Julia programming language to allow the extraction of fuzzy-matched dates from a block of text. The tool leverages contextual information and draws on external date data to find the best date matches. For each identified date, multiple interpretations are proposed and scored to find the best fit. The output includes several record-level variables that help explain the result and prioritize error detection.
juliacon-2022-17294-blockdates-a-context-aware-fuzzy-date-matching-solution
JuliaCon
Francis Smart
en
The date is often a critical piece of information for safety data analysis. It provides context and is necessary for measurement of event frequency and time-based trends. In some data sources, such as narrative information about an event or subject, the date is provided in various non-standardized formats. The Bureau of Transportation Statistics uses data provided in narrative, free-text format to validate and supplement reported safety event data.
We developed the open-source software package BlockDates using the Julia programming language to allow the extraction of fuzzy-matched dates from a block of text. The tool leverages contextual information and draws on external date data to find the best date matches. For each identified date, multiple interpretations are proposed and scored to find the best fit. The output includes several record-level variables that help explain the result and prioritize error detection.
In a sample of 59,314 narrative records that include dates, the tool returned positive scores for 96.5% of records, indicating high confidence that the selected date is valid. Of the records with no matching date, 77.9% were correctly recognized as having no viable match.
false
https://pretalx.com/juliacon-2022/talk/AEXDKT/
https://pretalx.com/juliacon-2022/talk/AEXDKT/feedback/
Green
An introduction to BOMBs.jl.
Lightning talk
2022-07-28T17:00:00+00:00
17:00
00:10
Mathematical models are crucial for building and predicting the behaviour of new biological systems. However, selecting between plausible model candidates or estimating parameters is an arduous job, especially considering the differing informative content of experiments. BOMBs.jl is a package to automate model simulations, pseudo-data generation, maximum likelihood estimation and Bayesian inference of parameters (Stan and Turing.jl), and to design optimal experiments for model selection and inference.
juliacon-2022-17277-an-introduction-to-bombs-jl-
JuliaCon
/media/juliacon-2022/submissions/XRDKTW/BOMBsLogo_iLDaFH0.png
David Gomez-Cabeza
en
We designed BOMBs.jl to contribute to the widespread use of mathematical models in the biological sciences. Users only need basic Julia knowledge to use the package; the only requirement is knowing how dictionaries work. Users define the set of ordinary differential equations (and some other model information) in the contents of a dictionary, and BOMBs.jl will automatically generate all the necessary scripts to simulate the model (including models with external time-varying inputs) and estimate parameters (MLE). The package also generates all the scripts required to perform Bayesian inference using Stan or Turing.jl, leaving as much freedom as possible in prior definitions. BOMBs will also generate the scripts needed to perform optimal experimental design, both for model selection (driving pairs of competing model simulations as far apart as possible) and for model inference (using the uncertainty of model predictions), aiming to reduce the time and resources allocated to in vivo experiments. The package is documented, with functions that generate the dictionary structures for the user, complementary functions explaining what the contents and structure of the dictionaries should be, a document briefly describing each function in the package, and a set of Jupyter notebooks showing how to use all the package functionalities.
The Jupyter Notebook of this talk is included in the GitHub repository of the package at https://github.com/csynbiosysIBioEUoE/BOMBs.jl/blob/main/Examples/JuliaCon2022Notebook.ipynb
false
https://pretalx.com/juliacon-2022/talk/XRDKTW/
https://pretalx.com/juliacon-2022/talk/XRDKTW/feedback/
Green
Build, Test, Sleep, Repeat: Modernizing Julia's CI pipeline
Lightning talk
2022-07-28T17:10:00+00:00
17:10
00:10
Julia's Continuous Integration pipeline has struggled for many years now as the needs of the community have significantly outgrown the old Buildbot system. In this talk we will detail the efforts of the CI dev team to provide reliability, reproducibility, security, and greater introspective ability in our CI builds. These CI improvements aren't just helping the Julia project itself, but also other related open-source projects, as we continue to generate self-contained, useful building blocks.
juliacon-2022-18145-build-test-sleep-repeat-modernizing-julia-s-ci-pipeline
JuliaCon
Elliot Saba
Dilum Aluthge
en
false
https://pretalx.com/juliacon-2022/talk/YED3MP/
https://pretalx.com/juliacon-2022/talk/YED3MP/feedback/
Green
Extreme Value Analysis in Julia with Extremes.jl
Talk
2022-07-28T17:20:00+00:00
17:20
00:30
In this talk, we present [Extremes.jl](https://github.com/jojal5/Extremes.jl), a package that provides exhaustive high-performance functions for the statistical analysis of extreme values with Julia. Parameter estimation, diagnostic tools for assessing model accuracy and high quantile estimation are implemented for stationary and non-stationary extreme value models. The functionalities will be illustrated in this talk by reproducing many results from the popular book of Coles (2001).
juliacon-2022-17948-extreme-value-analysis-in-julia-with-extremes-jl
JuliaCon
Gabriel Gobeil
en
Risk assessment and impact analysis of extreme values is an important aspect of climate sciences. Recently, the Intergovernmental Panel on Climate Change (IPCC) reported that extreme meteorological events are expected to increase in frequency and intensity with climate change, leading to important impacts on many sectors of activity (IPCC 2013). The only statistical discipline that develops a rigorous framework for the study of extreme events is extreme value theory. However, unlike in other programming languages commonly used by statisticians, tools for the analysis of extreme values are lacking in Julia, despite the growing popularity of the language in the scientific community.
In this talk, we present [Extremes.jl](https://github.com/jojal5/Extremes.jl), a package that provides exhaustive high-performance functions for the statistical analysis of extreme values. In particular, methods for the usual block maxima and peaks-over-threshold models are implemented. Model parameters can be estimated using probability-weighted moments, maximum likelihood, or the Bayesian paradigm. Non-stationary models are also implemented, as well as diagnostic plots for assessing model accuracy and high-quantile estimation.
The proposed package is designed to be used by the statistical community as well as by engineers who need estimations of extremes. We illustrate the package functionalities by reproducing many results obtained by Coles (2001).
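As a hedged sketch of what a block-maxima analysis might look like (the function names `gevfit` and `returnlevel` are taken from the package's documentation and should be checked against the installed version):

```julia
using Extremes, Distributions

# Synthetic annual maxima as a stand-in for real block-maxima data.
y = rand(GeneralizedExtremeValue(100.0, 5.0, 0.1), 200)

fm = gevfit(y)            # GEV fit by maximum likelihood
r = returnlevel(fm, 100)  # estimated 100-year return level
```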
false
https://pretalx.com/juliacon-2022/talk/XMXJTD/
https://pretalx.com/juliacon-2022/talk/XMXJTD/feedback/
Green
Manopt.jl – Optimisation on Riemannian manifolds
Lightning talk
2022-07-28T17:50:00+00:00
17:50
00:10
`Manopt.jl` provides a set of optimization algorithms for problems given on a Riemannian manifold. Built upon a generic optimization framework, together with the interface `ManifoldsBase.jl` for Riemannian manifolds, classical and recently developed methods are provided in an efficient implementation. This talk will also present some algorithms implemented in the package.
juliacon-2022-16876-manopt-jl-optimisation-on-riemannian-manifolds
JuliaCon
Ronny Bergmann
en
In many applications and optimization tasks, non-linear data appears naturally.
For example, data may be measured on the sphere; diffusion data can be captured as a signal, or even as multivariate data, of symmetric positive definite matrices; and orientations appear, for instance, in electron backscatter diffraction (EBSD) data. Another example is fixed-rank matrices, which appear in matrix completion.
Working with such data, for example for interpolation and approximation, denoising, inpainting, or matrix completion, can usually be phrased as an optimization problem.
Manopt.jl (manoptjl.org) provides a set of optimization algorithms for optimization problems given on a Riemannian manifold. Built upon a generic optimization framework, together with the interface ManifoldsBase.jl for Riemannian manifolds, classical and recently developed methods are provided in an efficient implementation. Algorithms include the derivative-free Particle Swarm and Nelder–Mead algorithms, as well as classical gradient, conjugate gradient and stochastic gradient descent. Furthermore, quasi-Newton methods like a Riemannian L-BFGS and nonsmooth optimization algorithms like a Cyclic Proximal Point Algorithm, a (parallel) Douglas–Rachford algorithm and a Chambolle–Pock algorithm are provided, together with several basic cost functions, gradients and proximal maps, as well as debug and record capabilities.
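A minimal sketch of what a Manopt.jl call can look like, assuming the current convention that cost and gradient take the manifold as their first argument:

```julia
using Manopt, Manifolds, LinearAlgebra

# Minimise a Rayleigh-quotient-style cost f(p) = p' * A * p on the 2-sphere;
# the minimiser is the unit eigenvector of the smallest eigenvalue of A.
M = Sphere(2)
A = Diagonal([2.0, 1.0, 0.5])
f(M, p) = p' * A * p
# Riemannian gradient: project the Euclidean gradient 2A*p onto the tangent space at p.
grad_f(M, p) = project(M, p, 2A * p)

p0 = normalize([1.0, 1.0, 1.0])
p = gradient_descent(M, f, grad_f, p0)
```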
false
https://pretalx.com/juliacon-2022/talk/ZT7AZZ/
https://pretalx.com/juliacon-2022/talk/ZT7AZZ/feedback/
Green
GatherTown -- Social break
Social hour
2022-07-28T18:00:00+00:00
18:00
01:00
Join us on [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP) for a social hour.
juliacon-2022-21378-gathertown-social-break
en
false
https://pretalx.com/juliacon-2022/talk/HBYSDD/
https://pretalx.com/juliacon-2022/talk/HBYSDD/feedback/
Green
PyCallChainRules.jl: Reusing differentiable Python code in Julia
Lightning talk
2022-07-28T19:00:00+00:00
19:00
00:10
While Julia is great, there is still a lot of useful, differentiable Python code in PyTorch, Jax, etc. Given that PyCall.jl is already so great and seamless, one might wonder what it takes to differentiate through those calls to Python functions. PyCallChainRules.jl aims for that ideal. DLPack.jl is leveraged to pass CPU or GPU arrays between Julia and Python without any copies.
juliacon-2022-17919-pycallchainrules-jl-reusing-differentiable-python-code-in-julia
JuliaCon
Jayesh K. Gupta
en
Auto differentiation interfaces are rapidly converging with [`functorch`](https://github.com/pytorch/functorch) and [`jax`](https://github.com/google/jax) on the Python side and more explicit interfaces for dealing with gradients in Julia with [`Functors.jl`](https://github.com/FluxML/Functors.jl) and [Optimisers.jl](https://github.com/FluxML/Optimisers.jl) as well as more explicit machine learning layers in [Lux.jl](https://github.com/avik-pal/Lux.jl). While it is relatively easy to implement new functionality in Julia, Python still remains the standard interface layer for most state-of-the-art functionality especially for GPU kernels. Even if not the most performant, there is value in being able to call existing differentiable functions in Python, as a developer slowly implements equivalent functionality in Julia.
false
https://pretalx.com/juliacon-2022/talk/HBERVN/
https://pretalx.com/juliacon-2022/talk/HBERVN/feedback/
Green
Cosmological Emulators with Flux.jl and DifferentialEquations.jl
Lightning talk
2022-07-28T19:10:00+00:00
19:10
00:10
In the next decade, forthcoming galaxy surveys will provide the astrophysical community with an unprecedented wealth of data. The standard analysis pipelines usually employed to analyze these surveys are quite expensive from a computational point of view.
In this presentation I will show how, using Flux.jl and DifferentialEquations.jl, it is possible to accelerate the standard analysis by several orders of magnitude.
juliacon-2022-18058-cosmological-emulators-with-flux-jl-and-differentialequations-jl
JuliaCon
Marco Bonici
en
We are living in the Golden Age of Cosmology: over the 20th century, our comprehension of the Universe evolved rapidly, eventually leading to the establishment of a concordance model, the so-called ΛCDM model. Despite the remarkable success of this model, which is able to explain a great wealth of observations with only a few parameters, there are several unanswered questions.
What is the origin of the primordial fluctuations in the Universe? Is this due to some form of inflationary scenario?
What is Dark Matter? Is it a new particle, not present in the Standard Model of Particle Physics? Is it composed by Primordial Black Holes?
What is the nature of Dark Energy? Can the Cosmological Constant really explain its effects, or is this a sign of the breakdown of Einstein's theory of General Relativity?
In the next decade several galaxy surveys will start taking data, data that will be used to study the universe using different observational probes, such as weak lensing, galaxy clustering and their cross-correlation: studying these probes jointly will enhance the scientific outcome from galaxy surveys.
However, this improvement does not come for free.
The analysis of a galaxy survey requires the evaluation of a complicated theoretical model with about a hundred parameters. Computing this theoretical prediction takes about 1-10 seconds; although this is not an expensive step per se, the computation is repeated 10^5-10^7 times, so a complete analysis requires either a very long time or dedicated hardware.
In order to overcome this issue, I am developing several surrogate models, based on DifferentialEquations.jl and Flux.jl. The combination of these two packages is quite useful for this particular case: while several papers on this topic have usually relied solely on Neural Networks to build emulators, solving some of the differential equations involved in the model evaluation reduces the dimensionality of the emulated parameter space, yielding a more precise surrogate model. The result of this work is a set of surrogate models with a precision of ~0.1% (matching the requirement for the scientific analysis) and a speed-up of about 100-1000x. The developed models will be released after the publication of the related papers.
false
https://pretalx.com/juliacon-2022/talk/VWGBAL/
https://pretalx.com/juliacon-2022/talk/VWGBAL/feedback/
Green
Automatic Differentiation for Solid Mechanics in Julia
Lightning talk
2022-07-28T19:20:00+00:00
19:20
00:10
Automatic Differentiation (AD) is widely applied in many different fields of computer science and engineering to accurately evaluate derivatives of functions expressed in a computer programming language. In this talk we illustrate the use of AD for the solution of Finite Elements (FE) problems with special emphasis on solid mechanics.
juliacon-2022-16742-automatic-differentiation-for-solid-mechanics-in-julia
JuliaCon
/media/juliacon-2022/submissions/UPQFKL/SpringFineMeshNHb_TMEDxsr.png
Andrea Vigliotti
en
The standard implementation of the Finite Element Method for solid mechanics is based on discretizing the domain into elements and solving the weak form of the point-wise Cauchy equilibrium equations. This process involves evaluating the components of complex tensorial quantities, such as the stress and stiffness tensors, which are required to calculate the residual force vector and the tangent stiffness matrix. On the other hand, the residual force vector and the tangent stiffness matrix coincide, both formally and numerically, with the gradient and the Hessian of the free energy of the system; therefore they can be evaluated directly by taking automatic derivatives of this quantity. The advantage, in this case, is that the free energy is a scalar quantity, which is significantly simpler to evaluate.
In particular, forward-mode AD seems particularly suited to the solution of solid mechanics FE problems. In fact, even if FE models can have a very large number of degrees of freedom (DoFs), the free energy of the system in a given configuration is obtained as a sum over the elements of the mesh, and only the degrees of freedom of a single element are involved in calculating its contribution to the global residual force vector and tangent stiffness matrix. Therefore we only deal with a limited number of independent variables at a time when evaluating individual element contributions. In this situation, a forward-mode automatic differentiation scheme implemented through hyper-dual numbers is very efficient for calculating higher-order derivatives of scalar expressions, however complicated.
The definition of a hyper-dual number system in Julia is particularly straightforward, in fact a structure capable of storing the gradient and the hessian of a quantity, alongside its value, can be simply defined as
```julia
struct D2{T,N,M} <: Number
    v::T
    g::NTuple{N,T}
    h::NTuple{M,T}
end
```
with
- `v` the value of the variable,
- `g` the components of the gradient of `v`,
- `h` the Hessian,
and where the type parameters are
- `T` the type of the values,
- `N` the number of independent variables controlling the gradient,
- `M = N(N+1)/2` the number of independent elements in the Hessian; since the Hessian is symmetric, we only store and operate on half of it.
Thanks to Julia's multiple dispatch, it is sufficient to implement the needed mathematical operators and functions for this new numeric type, and the same code that evaluates an expression will also evaluate its derivatives, without any change to the original source code. In addition, using macros to implement the operations on the gradient and Hessian tuples makes the resulting code particularly efficient.
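As an illustration (not the package's actual implementation), the sum rule on such a type reduces to component-wise addition:

```julia
# The D2 type from the abstract, repeated here for self-containment.
struct D2{T,N,M} <: Number
    v::T
    g::NTuple{N,T}
    h::NTuple{M,T}
end

import Base: +

# Sum rule: values, gradients and Hessians all add component-wise,
# since (f+g)' = f' + g' and (f+g)'' = f'' + g''.
+(a::D2{T,N,M}, b::D2{T,N,M}) where {T,N,M} =
    D2{T,N,M}(a.v + b.v, a.g .+ b.g, a.h .+ b.h)
```

The product rule would likewise combine the gradient tuples via the Leibniz rule and fill the packed symmetric Hessian tuple.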
In this talk we will present [AD4SM.jl](https://github.com/avigliotti/AD4SM.jl/), a package that implements a second-order hyper-dual number system for evaluating both the gradient and the Hessian of the free energy of a deformable body, allowing the solution of nonlinear equilibrium problems. The implementation of the dual number system was inspired by the [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) package, but here the Hessian is explicitly introduced, since it is essential for the evaluation of the tangent stiffness matrix, and its symmetry is exploited to maximize efficiency. A number of examples are also presented illustrating how complex nonlinear problems, with non-trivial constraints and boundary conditions, in both two and three dimensions, can be numerically stated and solved [1].
#### References
[1] [Vigliotti, A., Auricchio, F., Automatic Differentiation for Solid Mechanics, Archives of Computational Methods in Engineering, Volume 28, Issue 3, Pages 875 - 895 May 2021](https://rdcu.be/b0yx2)
false
https://pretalx.com/juliacon-2022/talk/UPQFKL/
https://pretalx.com/juliacon-2022/talk/UPQFKL/feedback/
Green
ChainRules.jl meets Unitful.jl: Autodiff via Unit Analysis
Lightning talk
2022-07-28T19:30:00+00:00
19:30
00:10
Tools for performing autodifferentiation (AD) and dimensional work in Julia are robust, but not always compatible. This talk explores how we can understand rule-based AD in Julia by showing how to make dimensional quantities from `Unitful.jl` compose with `ChainRules.jl`. Combining these two projects produces an intuitive look at the building blocks of AD in Julia using only rudimentary calculus and dimensional analysis.
juliacon-2022-18065-chainrules-jl-meets-unitful-jl-autodiff-via-unit-analysis
JuliaCon
Sam Buercklin
en
`Unitful.jl` provides efficient type-level support for dimensional quantities we encounter when simulating physical systems. Likewise, `ChainRules.jl` forms the backbone of robust but easily-extensible autodifferentiation (AD) systems. Exploring these two systems together yields an insightful look at Julia's rule-based AD. Calculus, dimensional analysis, and physical intuition are sufficient to explain how `ChainRules.jl` works by building AD rules for `Unitful.jl`.
The versatility of the `ChainRules.jl` ecosystem arises from implementing and extending a ruleset for fundamental functions, such as `*` and `inv`, as `rrule`s and `frule`s. What can often seem like a mysterious black box that computes derivatives is actually composed of many individual `rrule`s or `frule`s built on rudimentary calculus.
These `rrule`s and `frule`s are interpreted by thinking about differentiation as a problem of physical dimensions, and `Unitful.jl` is used to confirm these findings. However, arithmetic between `Unitful.jl` quantities is not immediately compatible with `ChainRules.jl`-based AD. This talk presents the pertinent AD rules to enable basic `ChainRules.jl` compatibility. These rules are also used as a lens to understand how to read and write AD rules for the `ChainRules.jl` ecosystem.
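As a hypothetical sketch (not necessarily the rules presented in the talk), a reverse-mode rule for multiplying two scalar quantities might look like:

```julia
using ChainRulesCore, Unitful

# For y = a * b, unit analysis fixes the cotangents: if the cotangent ȳ
# carries units 1/[y] = 1/([a][b]) (e.g. from a dimensionless loss), then
# ā = ȳ * b has units 1/[a] and b̄ = a * ȳ has units 1/[b], as required.
function ChainRulesCore.rrule(::typeof(*), a::Unitful.Quantity, b::Unitful.Quantity)
    y = a * b
    times_pullback(ȳ) = (NoTangent(), ȳ * b, a * ȳ)
    return y, times_pullback
end
```

In practice such rules would live in a glue package to avoid type piracy on `*` for types owned by other packages.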
false
https://pretalx.com/juliacon-2022/talk/G9SQQD/
https://pretalx.com/juliacon-2022/talk/G9SQQD/feedback/
Green
Using Optimization.jl to seek the optimal optimiser in SciML
Talk
2022-07-28T19:40:00+00:00
19:40
00:30
Optimization.jl seeks to bring together all of the optimization packages it can find, local and global, into one unified Julia interface. This means you learn one package, and you learn them all! Optimization.jl adds a few high-level features, such as integration with automatic differentiation, to make its usage fairly simple for most cases, while allowing all of the options in a single unified interface.
juliacon-2022-17259-using-optimization-jl-to-seek-the-optimal-optimiser-in-sciml
JuliaCon
Vaibhav Dixit
en
Optimization.jl wraps most of the major optimisation packages currently available in Julia, namely BlackBoxOptim, CMAEvolutionStrategy, Evolutionary, Flux, GCMAES, MultistartOptimization, Metaheuristics, NOMAD, NLopt, Nonconvex, Optim and QuadDIRECT. Additionally, the integration with ModelingToolkit and MathOptInterface allows it to leverage the state-of-the-art symbolic manipulation capabilities offered by these packages, specifically using them to construct the objective and constraints, and their Jacobians and Hessians, efficiently. This talk will show how to use the Optimization.jl package and its various AD backends in combination with various optimiser backends. The interface is broken into three components: `OptimizationFunction`, `OptimizationProblem`, and then `solve` on an `OptimizationProblem`. We will cover each of these components and discuss the pros and cons of the choices available for specific problems. We will also show how such a flexible system is necessary for scientific machine learning by demonstrating some popular SciML models on real-world problems.
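A minimal example of the three-component interface, following Optimization.jl's documented usage:

```julia
using Optimization, OptimizationOptimJL  # OptimizationOptimJL wraps Optim.jl

# Rosenbrock function; p holds the problem parameters.
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

# 1. OptimizationFunction: attach an AD backend for gradients/Hessians.
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())

# 2. OptimizationProblem: initial point and parameter values.
prob = OptimizationProblem(optf, zeros(2), [1.0, 100.0])

# 3. solve with any wrapped optimiser, e.g. L-BFGS from Optim.jl.
sol = solve(prob, LBFGS())
```

Swapping optimisers means changing only the argument to `solve`, e.g. `solve(prob, NelderMead())`.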
false
https://pretalx.com/juliacon-2022/talk/PDCANR/
https://pretalx.com/juliacon-2022/talk/PDCANR/feedback/
Green
GatherTown -- Social break
Social hour
2022-07-28T20:30:00+00:00
20:30
01:00
Join us on [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP) for a social hour.
juliacon-2022-21377-gathertown-social-break
en
false
https://pretalx.com/juliacon-2022/talk/VLZBNZ/
https://pretalx.com/juliacon-2022/talk/VLZBNZ/feedback/
Red
Adaptive Radial Basis Function Surrogates in Julia
Talk
2022-07-28T12:30:00+00:00
12:30
00:30
This talk focuses on an iterative algorithm, called active learning, to update radial basis function surrogates by adaptively choosing points across its input space. This work extensively uses the SciML ecosystem, and in particular, Surrogates.jl.
juliacon-2022-18151-adaptive-radial-basis-function-surrogates-in-julia
JuliaCon
Ranjan Anantharaman
en
Active Learning algorithms have been applied to fine-tune surrogate models. In this talk, we analyze these algorithms in the context of dynamical systems with a large number of input parameters. The talk will demonstrate:
1. An adaptive learning algorithm for radial basis functions
2. Its efficacy on dynamical systems with high dimensional input parameter spaces
This will make use of Surrogates.jl and the rest of the SciML ecosystem.
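A hedged sketch of fitting and adaptively refining a radial basis surrogate with Surrogates.jl (`SRBF` is one of the package's adaptive sampling criteria; the exact refinement strategy used in the talk may differ):

```julia
using Surrogates

f(x) = sin(x) + 0.05x^2
lb, ub = 0.0, 10.0

# Initial design: sample the input space and fit a radial basis surrogate.
x = sample(25, lb, ub, SobolSample())
y = f.(x)
rbf = RadialBasis(x, y, lb, ub)

# Adaptive step: add points where the surrogate is most promising or
# uncertain, refining the surrogate in place.
surrogate_optimize(f, SRBF(), lb, ub, rbf, SobolSample())
```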
false
https://pretalx.com/juliacon-2022/talk/MDCJKK/
https://pretalx.com/juliacon-2022/talk/MDCJKK/feedback/
Red
Lux.jl: Explicit Parameterization of Neural Networks in Julia
Lightning talk
2022-07-28T13:00:00+00:00
13:00
00:10
Julia already has quite a few well-established Neural Network Frameworks -- [Flux](https://fluxml.ai/) & [Knet](https://denizyuret.github.io/Knet.jl/latest/). However, certain design elements -- **Coupled Model and Parameters** & **Internal Mutations** -- associated with these frameworks make them less compiler- and user-friendly. Making changes to address these problems in the respective frameworks would be too disruptive for users. To address these challenges, we designed `Lux`, a new NN framework.
juliacon-2022-18647-lux-jl-explicit-parameterization-of-neural-networks-in-julia
Avik Pal
en
`Lux` is a neural network framework built entirely from pure functions, to make it both compiler- and automatic-differentiation-friendly. Relying on a straightforward pure-function API ensures there are no reference issues to debug, lets compilers optimize the code as much as possible, and makes the framework compatible with Symbolics/XLA/etc. without any tricks.
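A minimal sketch of the explicit-parameter style, following Lux's documented API:

```julia
using Lux, Random

rng = Random.default_rng()

# The model object is a pure description of the architecture;
# parameters (ps) and state (st) live outside it.
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 2, 8)   # a batch of 8 two-dimensional inputs
y, st = model(x, ps, st)       # the forward pass takes ps and st explicitly
```

Because `ps` is an ordinary nested named tuple rather than mutable fields inside layers, it can be handed directly to an AD engine or an optimiser.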
Repository: https://github.com/avik-pal/ExplicitFluxLayers.jl/
false
https://pretalx.com/juliacon-2022/talk/NLDVYU/
https://pretalx.com/juliacon-2022/talk/NLDVYU/feedback/
Red
GraphPPL.jl: a package for specification of probabilistic models
Lightning talk
2022-07-28T13:10:00+00:00
13:10
00:10
We present GraphPPL.jl - a package for user-friendly specification of probabilistic models with variational inference constraints. GraphPPL.jl creates a model as a factor graph and supports the specification of factorization and form constraints on the variational posterior for the latent variables. The package collection GraphPPL.jl, ReactiveMP.jl and Rocket.jl provide together a full reactive programming-based ecosystem for running efficient and customizable variational Bayesian inference.
juliacon-2022-17223-graphppl-jl-a-package-for-specification-of-probabilistic-models
JuliaCon
Dmitry Bagaev
en
**Background**
Bayesian modeling has become a popular framework for important real-time machine learning applications, such as speech recognition and robot navigation. Unfortunately, many useful probabilistic time-series models contain a large number of latent variables, and consequently real-time Bayesian inference based on Monte Carlo sampling or other black-box methods in these models is not feasible.
**Problem statement**
Existing packages for automated Bayesian inference in the Julia language ecosystem, such as Turing.jl, Stan.jl, and Soss.jl, support probabilistic model specification through well-designed macro-based meta-languages. These packages assume that inference is executed by black-box variational or sampling-based methods. In principle, for conjugate probabilistic time-series models, message passing-based variational inference by minimization of a constrained Bethe Free Energy yields approximate inference solutions at lower computational cost. In this contribution, we develop a user-friendly and comprehensive meta-language for specifying both a probabilistic model and variational inference constraints that balance the accuracy of inference results with computational costs.
**Solution proposal**
The GraphPPL.jl package implements a user-friendly specification language for both the model and the inference constraints. GraphPPL.jl exports the `@model` macro to create a probabilistic model in the form of a factor graph that is compatible with ReactiveMP.jl's reactive message passing-based inference engine. To enable fast and accurate inference, all message update rules default to precomputed analytical solutions. The ReactiveMP.jl package already implements a selection of precomputed rules. If an analytical solution is not available, then the GraphPPL.jl package provides ways to tweak, relax, and customize local constraints in selected parts of the factor graph. To simplify this process, the package exports the `@constraints` macro to specify extra factorization and form constraints on the variational posterior [1]. For advanced use cases, GraphPPL.jl exports the `@meta` macro that enables custom message passing inference modifications for each node in a factor graph representation of the model. This approach enables local approximation methods only if necessary and allows for efficient variational Bayesian inference.
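A hypothetical sketch in the style of the GraphPPL.jl documentation (node names such as `NormalMeanPrecision` and `GammaShapeRate` come from ReactiveMP.jl's rule catalogue and may differ across versions):

```julia
using GraphPPL, ReactiveMP, Rocket

# Gaussian observations with unknown mean μ and precision τ.
@model function gaussian_model(n)
    y = datavar(Float64, n)                 # observed data
    μ ~ NormalMeanVariance(0.0, 100.0)      # vague prior on the mean
    τ ~ GammaShapeRate(1.0, 1.0)            # prior on the precision
    for i in 1:n
        y[i] ~ NormalMeanPrecision(μ, τ)
    end
end

# Extra factorization constraint on the variational posterior:
# a mean-field split between the mean and the precision.
@constraints function mean_field()
    q(μ, τ) = q(μ)q(τ)
end
```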
**Evaluation**
Over the past two years, our probabilistic modeling ecosystem comprising GraphPPL.jl, ReactiveMP.jl, and Rocket.jl has been battle tested with many sophisticated models that led to several publications in high-ranked journals such as Entropy [1] and Frontiers [2], and conferences like MLSP-2021 [3] and ISIT-2021 [4]. The current contribution enables a user-friendly approach to very sophisticated Bayesian modeling problems.
**Conclusions**
We believe that a user-friendly specification of efficient Bayesian inference solutions for complex models is a key factor to expedite application of Bayesian methods. We developed a complete ecosystem for running efficient, fast, and reactive variational Bayesian inference with a user-friendly specification language for the probabilistic model and variational constraints. We are excited to present GraphPPL.jl as a part of our complete variational Bayesian inference ecosystem and discuss the advantages and drawbacks of this approach.
**References**
[1] Ismail Senoz, Thijs van de Laar, Dmitry Bagaev, Bert de Vries. Variational Message Passing and Local Constraint Manipulation in Factor Graphs, Entropy. Special Issue on Approximate Bayesian Inference, 2021.
[2] Albert Podusenko, Bart van Erp, Magnus Koudahl, Bert de Vries. AIDA: An Active Inference-Based Design Agent for Audio Processing Algorithms, Frontiers in Signal Processing, 2022.
[3] Albert Podusenko, Bart van Erp, Dmitry Bagaev, Ismail Senoz, Bert de Vries. Message Passing-Based Inference in the Gamma Mixture Model, 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP).
[4] Ismail Senoz, Albert Podusenko, Semih Akbayrak, Christoph Mathys, Bert de Vries. The Switching Hierarchical Gaussian Filter, 2021 IEEE International Symposium on Information Theory (ISIT).
false
https://pretalx.com/juliacon-2022/talk/MVRVTP/
https://pretalx.com/juliacon-2022/talk/MVRVTP/feedback/
Red
TuringGLM.jl: Bayesian Generalized Linear models using @formula
Lightning talk
2022-07-28T13:20:00+00:00
13:20
00:10
TuringGLM makes it easy to specify Bayesian **G**eneralized **L**inear **M**odels using the formula syntax and returns an instantiated [Turing](https://github.com/TuringLang/Turing.jl) model.
Example:
```julia
@formula(y ~ x1 + x2 + x3)
```
Heavily inspired by [brms](https://github.com/paul-buerkner/brms/) (uses RStan or CmdStanR) and [bambi](https://github.com/bambinos/bambi) (uses PyMC3).
juliacon-2022-16261-turingglm-jl-bayesian-generalized-linear-models-using-formula
Jose Storopoli
en
# TuringGLM
TuringGLM makes it easy to specify Bayesian **G**eneralized **L**inear **M**odels using the formula syntax and returns an instantiated [Turing](https://github.com/TuringLang/Turing.jl) model.
Heavily inspired by [brms](https://github.com/paul-buerkner/brms/) (uses RStan or CmdStanR) and [bambi](https://github.com/bambinos/bambi) (uses PyMC3).
## `@formula`
The `@formula` macro is extended from [`StatsModels.jl`](https://github.com/JuliaStats/StatsModels.jl) along with [`MixedModels.jl`](https://github.com/JuliaStats/MixedModels.jl) for the random-effects (a.k.a. group-level predictors).
A formula is written with the `@formula` macro: the dependent variable, followed by a tilde `~`, followed by the independent variables separated by plus signs `+`.
Example:
```julia
@formula(y ~ x1 + x2 + x3)
```
Moderations/interactions can be specified with the asterisk sign `*`, e.g. `x1 * x2`.
This expands to `x1 + x2 + x1:x2`: following the principle of hierarchy,
the main effects are included along with the interaction effect. Here `x1:x2`
means that the values of `x1` are multiplied (interacted) with the values of `x2`.
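Numerically, the `x1:x2` column is just the elementwise product of the two predictors. A plain-Julia illustration with made-up data (showing the expansion itself, not TuringGLM's internals):

```julia
# Hypothetical predictor columns, for illustration only.
x1 = [1.0, 2.0, 3.0]
x2 = [0.5, 1.0, 2.0]

# The design matrix for `x1 * x2` holds both main effects plus the
# interaction column `x1:x2`, i.e. the elementwise product x1 .* x2.
X = hcat(x1, x2, x1 .* x2)
```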
Random-effects (a.k.a. group-level effects) can be specified with the `(term | group)` inside
the `@formula`, where `term` is the independent variable and `group` is the **categorical**
representation (i.e., either a column of `String`s or a `CategoricalArray` in `data`).
You can specify a random-intercept with `(1 | group)`.
Example:
```julia
@formula(y ~ (1 | group) + x1)
```
## Data
TuringGLM supports any `Tables.jl`-compatible data interface.
The most popular ones are `DataFrame`s and `NamedTuple`s.
## Supported Models
TuringGLM supports non-hierarchical and hierarchical models.
For hierarchical models, only single random-intercept hierarchical models are supported.
For likelihoods, `TuringGLM.jl` supports:
* `Gaussian()` (the default if not specified): linear regression
* `Student()`: robust linear regression
* `Logistic()`: logistic regression
* `Pois()`: Poisson count data regression
* `NegBin()`: negative binomial robust count data regression
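The "robust" label on the Student-t likelihood comes from its heavy tails: an outlier is far less surprising under a t distribution than under a Gaussian, so it distorts the fit less. A package-free sketch comparing the two standard densities at an outlying point (ν = 3, densities written out by hand):

```julia
# Standard normal pdf and standard Student-t pdf with ν = 3 degrees of
# freedom, written out directly so no packages are needed.
normal_pdf(x) = exp(-x^2 / 2) / sqrt(2π)
student3_pdf(x) = 2 / (π * sqrt(3)) * (1 + x^2 / 3)^(-2)

# Six standard units out, the t density is orders of magnitude larger,
# so a t-based regression down-weights such an outlier far less harshly.
ratio = student3_pdf(6.0) / normal_pdf(6.0)
```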
false
https://pretalx.com/juliacon-2022/talk/8JWMG8/
https://pretalx.com/juliacon-2022/talk/8JWMG8/feedback/
Red
Text Segmentation with Julia
Lightning talk
2022-07-28T13:30:00+00:00
13:30
00:10
Introducing TextSegmentation.jl, a package for text segmentation with Julia. Text segmentation is a method of dividing an unstructured document containing varied content into several parts according to its topics, making it an important technique that supports natural language processing tasks such as summarization, extraction, and question answering. Attendees will learn what text segmentation is and how to use the package to perform it easily.
juliacon-2022-18032-text-segmentation-with-julia
JuliaCon
Kento Kawasaki
en
TextSegmentation.jl (https://github.com/kawasaki-kento/TextSegmentation.jl) provides a Julia implementation of unsupervised text segmentation methods. Text segmentation is a method for dividing an unstructured document containing varied content into several parts according to their topics. A typical use is preprocessing for natural language processing: tasks such as summarization, extraction, and question answering achieve higher accuracy when documents are first segmented by topic. The package provides the following three segmentation methods:
- TextTiling
+ TextTiling is a method for finding segment boundaries based on lexical cohesion and similarity between adjacent blocks.
- C99
+ C99 is a method for determining segment boundaries by divisive clustering.
- TopicTiling
+ TopicTiling is an extension of TextTiling that uses the topic IDs of words in a sentence to calculate the similarity between blocks.
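The block-comparison idea shared by these methods can be sketched in plain Julia (a generic illustration, not the package's API): represent adjacent blocks of sentences as term-frequency vectors, score the lexical cohesion between neighbours by cosine similarity, and place a segment boundary where the similarity dips.

```julia
using LinearAlgebra

# Cosine similarity between two term-frequency vectors.
cossim(a, b) = dot(a, b) / (norm(a) * norm(b))

# Toy term-frequency vectors for three adjacent sentence blocks:
# the first two share vocabulary, the third switches topic.
blocks = [[2.0, 1, 0, 0], [1.0, 2, 0, 0], [0.0, 0, 3, 1]]

# Similarity between consecutive blocks; a low value suggests a boundary.
sims = [cossim(blocks[i], blocks[i+1]) for i in 1:length(blocks)-1]
```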
The planned presentation is as follows:
1. Introduction
+ I will introduce the purpose of TextSegmentation.jl and what it is useful for.
2. Text segmentation
+ Specific methods of text segmentation will be explained.
3. Overview of the package
+ Introduce how to use the package and how to perform the tasks.
4. Example
+ Using simple text data, this section explains how to actually perform text segmentation with the package.
5. Future work
+ Share future prospects for TextSegmentation.jl.
false
https://pretalx.com/juliacon-2022/talk/VGEWU7/
https://pretalx.com/juliacon-2022/talk/VGEWU7/feedback/
Red
Recommendation.jl: Modeling User-Item Interactions in Julia
Lightning talk
2022-07-28T13:40:00+00:00
13:40
00:10
A recommender system is a data-driven application that generates personalized content for users. This talk shows how Julia can be a deeply satisfying option for capturing the unique characteristics of recommenders, which rely heavily on repetitive matrix computations in multi-stage data pipelines. To build systems that are trustworthy in terms of not only accuracy and scalability but also usability and fairness at large, we particularly focus on the API design and evaluation methods implemented in Recommendation.jl.
juliacon-2022-17284-recommendation-jl-modeling-user-item-interactions-in-julia
JuliaCon
/media/juliacon-2022/submissions/VWVY9S/overview_UGhf2Z2.png
Takuya Kitazawa
en
This talk demonstrates **Recommendation.jl**, a Julia package for building recommender systems, with a special emphasis on its design principles and evaluation framework. While the package was first presented at JuliaCon 2019 to collect early feedback from the community, this talk highlights how the implementation has evolved since then and gives a preview of the upcoming "v1.0.0" major release, accompanied by a proceedings paper.
An underlying question for the audience throughout the talk is: *What makes a "good" recommender system?* On one hand, improving the accuracy of recommendations with sophisticated algorithms is indeed desirable. At the same time, however, recommendation is not just another machine learning problem, and non-accuracy aspects of the systems are equally or even more important in practice; we particularly discuss the importance of decoupling data from business logic and validating data/model quality against a diverse set of decision criteria.
First of all, the core of a recommendation engine relies largely on simple math and matrix computations over sparse user-item data, where we can take full advantage of numerical computing methods. Thus, Julia is a great choice to efficiently and effectively implement an end-to-end recommendation pipeline, which typically consists of multiple sub-tasks as follows:
1. preprocessing user-item data;
2. building a recommendation model;
3. evaluating a ranked list of recommended contents;
4. post-processing the recommendation.
Here, Recommendation.jl provides a unified abstraction layer, namely `DataAccessor`, which represents user-item interactions in an accessible form. Since data for recommender systems is readily standardizable as a collection of user, item, and contextual features, the common interface helps us to follow the separation of concerns principle and ensure the easiness and reliability of data manipulation. To be more precise, raw data is always converted into a `DataAccessor` instance at the data preprocessing phase (Phase#1) with proper validation (e.g., data type check, missing value handling), and hence the subsequent steps can simply access the data (or metadata) through the instance without worrying about unexpected input.
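The kind of sparse user-item data a `DataAccessor`-style layer wraps can be built with stdlib `SparseArrays` (a plain-Julia sketch of the data shape, not Recommendation.jl's actual interface):

```julia
using SparseArrays

# Toy user-item interaction matrix: rows = users, cols = items,
# values = ratings. Only observed interactions are stored.
users   = [1, 1, 2, 3]
items   = [1, 2, 2, 3]
ratings = [5.0, 3.0, 4.0, 2.0]
R = sparse(users, items, ratings, 3, 3)
```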
Moreover, when it comes to generating recommendations at later phases (Phase#2-4), Recommendation.jl enables developers to optimize recommenders against not only standard accuracy metrics (e.g., recall, precision) but non-accuracy measures such as novelty, diversity, and serendipity. Even though the idea of diverse or serendipitous recommendation is not new in the literature, the topic has rapidly gained traction as society realizes the importance of fairness in intelligent systems. In this talk, we dive deep into the concept of these non-accuracy metrics and their implementation in Julia.
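As a concrete instance of a standard accuracy metric, recall@k can be written in a few lines of plain Julia (a generic sketch, not Recommendation.jl's implementation):

```julia
# Recall@k for a single user: the fraction of relevant items that made
# it into the top-k positions of the ranked recommendation list.
function recall_at_k(recommended, relevant, k)
    topk = recommended[1:min(k, length(recommended))]
    length(intersect(topk, relevant)) / length(relevant)
end

# Items 1 and 2 are relevant; only item 1 appears in the top 3.
recall_at_k([3, 1, 5, 2], Set([1, 2]), 3)  # → 0.5
```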
Last but not least, there are a couple of new recommendation models recently added to the package, including matrix factorization with Bayesian personalized ranking loss and factorization machines. We plan to provide comprehensive benchmark results for supported recommender-metric pairs to undergo trade-off discussion. Furthermore, we compare Recommendation.jl with other publicly available recommendation toolkits like LensKit (Python), MyMediaLite (C#), and LibRec (Java).
false
https://pretalx.com/juliacon-2022/talk/VWVY9S/
https://pretalx.com/juliacon-2022/talk/VWVY9S/feedback/
Red
G Research Sponsored Talk
Silver sponsor talk
2022-07-28T13:50:00+00:00
13:50
00:05
G-Research is Europe’s leading quantitative finance research firm
juliacon-2022-21253-g-research-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/JKKXTS/
https://pretalx.com/juliacon-2022/talk/JKKXTS/feedback/
Red
Pumas Sponsored Talk
Silver sponsor talk
2022-07-28T13:55:00+00:00
13:55
00:05
With deep expertise in allied fields of clinical pharmacology, pharmacometrics, drug development, regulations and advanced data analytics including machine learning, Pumas-AI works with companies, laboratories and universities as their healthcare intelligence partner.
juliacon-2022-21249-pumas-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/JC3GKD/
https://pretalx.com/juliacon-2022/talk/JC3GKD/feedback/
Red
HPC sparse linear algebra in Julia with PartitionedArrays.jl
Lightning talk
2022-07-28T16:30:00+00:00
16:30
00:10
PartitionedArrays is a distributed sparse linear algebra engine that allows Julia users to easily prototype and deploy large computations on distributed-memory HPC platforms. The long-term goal is to provide a Julia alternative to the parallel vectors and sparse matrices available in well-known distributed algebra packages such as PETSc. Using PartitionedArrays, application libraries have shown excellent strong and weak scaling results up to tens of thousands of CPU cores.
juliacon-2022-17952-hpc-sparse-linear-algebra-in-julia-with-partitionedarrays-jl
JuliaCon
/media/juliacon-2022/submissions/ZZ3HGF/logo_EjpCY6N.png
Francesc Verdugo, Alberto Francisco Martin Huertas
en
PartitionedArrays (https://github.com/fverdugo/PartitionedArrays.jl) is a distributed sparse linear algebra engine that allows Julia programmers to easily prototype and deploy large computations on distributed-memory, high performance computing (HPC) platforms. The package provides a data-oriented parallel implementation of vectors and sparse matrices, ready to use in several applications, including (but not limited to) the discretization of partial differential equations (PDEs) with grid-based algorithms such as finite differences, finite volumes, or finite element methods. The long-term goal of this package is to provide a Julia alternative to the parallel vectors and sparse matrices available in well-known distributed algebra packages such as PETSc or Trilinos. It also aims at providing the basic building blocks for the implementation in Julia of other linear algebra algorithms such as distributed sparse linear solvers. We started this project motivated by the fact that using bindings to PETSc or Trilinos for parallel computations in Julia can be cumbersome in many situations. One is forced to use MPI as the parallel execution model, and drivers need to be executed non-interactively with commands like `mpiexec -n 4 julia input.jl`, which poses serious difficulties for the development process. Some typos and bugs can be debugged interactively with a single MPI rank in the Julia REPL, but genuine parallel bugs often need to be debugged non-interactively using `mpiexec`. In this case, one cannot use development tools such as Revise or Debugger, which is a serious limitation, especially for complex codes that take a lot of time to JIT-compile, since one ends up running code in fresh Julia sessions. To overcome these limitations, PartitionedArrays provides a data-oriented parallel execution model that allows one to implement parallel algorithms in a generic way, independently of the underlying message passing software that is eventually used at the production stage.
At this moment, the library provides two backends for running the generic parallel algorithms: a sequential backend and an MPI backend. In the former, the parallel data structures are logically parallel from the user perspective, but they are stored in a conventional (sequential) Julia session using standard serial arrays. The sequential backend does not mean the data is kept in a single part: the data can be split into an arbitrary number of parts, but they are processed one after the other in a standard sequential Julia process. This configuration is especially handy for developing new parallel codes. The sequential backend runs in a standard Julia session, and one can use tools like Revise and Debugger, which dramatically improves the developer experience. Once the code works with the sequential backend, it can be automatically deployed in a supercomputer via the MPI backend. In the latter case, the data layout of the distributed vectors and sparse matrices is compatible with the linear solvers provided by libraries like PETSc or MUMPS. This allows one to use these libraries for solving large systems of linear algebraic equations efficiently until competitive Julia alternatives are available. The API of PartitionedArrays allows the programmer to write efficient parallel algorithms since it enables fine control over data exchanges. In particular, asynchronous communication directives are provided, making it possible to overlap communication and computation. This is useful, e.g., to efficiently implement the distributed sparse matrix-vector product, where the product on the owned entries can be overlapped with the communication of the off-processor vector components. Application codes using PartitionedArrays, such as the parallel finite element library GridapDistributed, have shown excellent strong and weak scaling results up to tens of thousands of CPU cores.
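The data-oriented model behind the sequential backend can be sketched in plain Julia: data is split into several logical parts that are processed one after another in a single session, followed by a reduction across parts (an illustration of the execution model only, not the PartitionedArrays.jl API):

```julia
# Split the indices 1:n into np contiguous parts, mimicking the way
# distributed data is divided among logical "processors".
function partition_ranges(n, np)
    chunk = cld(n, np)  # ceiling division: size of each part
    [(p - 1) * chunk + 1 : min(p * chunk, n) for p in 1:np]
end

parts = partition_ranges(10, 3)       # three logical parts: 1:4, 5:8, 9:10
local_sums = [sum(r) for r in parts]  # each part works only on its own data
total = sum(local_sums)               # reduction across all parts
```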
In the near future, we plan to add hierarchical/multilevel parallel data structures to the library to extend its support to multilevel parallel algorithms such as multigrid, multilevel domain decomposition, and multilevel Monte Carlo methods. In this talk, we will provide an overview of the main components of the library and show users how to get started by means of simple examples. PartitionedArrays can be easily installed from the official Julia language package registry and is distributed under the MIT license.
false
https://pretalx.com/juliacon-2022/talk/ZZ3HGF/
https://pretalx.com/juliacon-2022/talk/ZZ3HGF/feedback/
Red
Calling Julia from MATLAB using MATDaemon.jl
Lightning talk
2022-07-28T16:40:00+00:00
16:40
00:10
MATLAB is a proprietary programming language popular for scientific computing. Calling MATLAB code from Julia via the C API has been supported for many years via MATLAB.jl. The reverse direction is more complex. One approach is to compile Julia via the C++ MEX API as in Mex.jl. In MATDaemon.jl (https://bit.ly/3JxTFFU), we instead communicate by writing data to .mat files. This method is robust across Julia and MATLAB versions, and easy to use: just download jlcall.m from the GitHub repository.
juliacon-2022-18173-calling-julia-from-matlab-using-matdaemon-jl
JuliaCon
Jonathan Doucette
en
MATLAB is a popular programming language in the scientific community. Unfortunately, it is proprietary, closed-source, and expensive. Within academia, purchasing MATLAB licenses can feel like a tax on research development, especially given the growing global trends towards open, reproducible, and transparent science. For this reason, many scientists are switching as much as possible to open software development using open-source programming languages such as Python, R, and Julia.
Transitioning between programming languages can be a daunting task, however. Beyond the obvious requirement of learning a new language, translating and rewriting existing codebases in a new language can be difficult to do in a modular fashion. Modularity is important in order to be able to translate and test as you go. Indeed, it is often not even necessary to translate an entire library to gain nontrivial improvements; for example, by rewriting only performance-critical code paths.
A convenient way to ease such transitions is through language interoperability. From the Julia side, calling out to the MATLAB C API has been possible for many years using the fantastic `MATLAB.jl` package, made possible by Julia’s support for calling C code. Calling Julia from MATLAB, however, is more complex for several reasons. The first approach one might try is to use Julia’s C API in conjunction with the MATLAB C/C++ MEX API in order to build a MEX – that is, (M)ATLAB (EX)ecutable – function which embeds Julia. This is the approach taken by the `Mex.jl` Julia package. When this approach works, it is extremely effective: calling Julia is convenient with little overhead. Unfortunately, writing scripts to compile MEX functions across operating systems and MATLAB versions is notoriously fragile. Indeed, the current version of `Mex.jl` only supports Julia v1.5.
For these reasons, we created `MATDaemon.jl` (https://github.com/jondeuce/MATDaemon.jl). This package aims to call Julia from MATLAB in as simple a manner as possible while being robust across both Julia and MATLAB versions – it should “just work”. `MATDaemon.jl` does this by communicating with Julia via writing MATLAB variables to disk as `.mat` files. These variables are then read by Julia using the `MAT.jl` package. The Julia function indicated is then called and the output variables are similarly written to `.mat` files and read back by MATLAB. Naturally, this comes at the cost of some overhead which would not be present when using the MEX approach. In order to alleviate some of the overhead, a Julia daemon is created using the `DaemonMode.jl` package (https://github.com/dmolina/DaemonMode.jl). This helps to avoid Julia startup time by running Julia code on a persistent server. While this package is still not recommended for use in tight performance critical loops due to overhead on the order of seconds, it is certainly fast enough for use in rewriting larger bottlenecks and for interactive use in the MATLAB REPL.
Due to its simplicity, `MATDaemon.jl` is easy to use: just download the jlcall.m MATLAB function from the GitHub repository (https://github.com/jondeuce/MATDaemon.jl/blob/master/api/jlcall.m) and call Julia. For example, running `jlcall('sort', {rand(2,5)}, struct('dims', int64(2)))` will sort a 2x5 MATLAB double array along the second dimension. A temporary workspace folder `.jlcall` is created containing a local `Project.toml` and `Manifest.toml` file in order to not pollute the global Julia environment. And that’s it! See the documentation in the GitHub repository for example usage, including loading local Julia projects, Base modules, running setup scripts, and more.
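The file-based round trip described above can be mimicked in a few lines of plain Julia, with stdlib `Serialization` standing in for the `.mat` files that MATDaemon.jl and MAT.jl actually exchange (an analogy, not the package's code):

```julia
using Serialization

# Client side: write a request (function name + arguments) to disk.
input_file = tempname()
serialize(input_file, Dict("f" => "sort", "args" => [4, 1, 3]))

# Daemon side: read the request, run the function, write the result back.
req = deserialize(input_file)
result = sort(req["args"])
output_file = tempname()
serialize(output_file, result)

# Client side: read the result back.
deserialize(output_file)
```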
false
https://pretalx.com/juliacon-2022/talk/CB3PEY/
https://pretalx.com/juliacon-2022/talk/CB3PEY/feedback/
Red
LinearSolve.jl: because A\b is not good enough
Lightning talk
2022-07-28T16:50:00+00:00
16:50
00:10
Need to solve Ax=b for x? Then use A\b! Or wait, no. Don't. If you use that method, how do you swap that out for a method that performs GPU offloading? How do you switch between UMFPACK and KLU for sparse matrices? Krylov subspace methods? What does all of this mean and why is A\b not good enough? Find out all of this and more at 11. P.S. LinearSolve.jl is the answer.
juliacon-2022-17928-linearsolve-jl-because-a-b-is-not-good-enough
JuliaCon
Chris Rackauckas
en
We tell people that to solve Ax=b, you use A\b. But in reality, that is insufficient for many problems. For dense matrices, LU-factorization, QR-factorization, and SVD-factorization approaches are all possible ways to solve this, each making an engineering trade-off between performance and accuracy. While with Julia's Base you can use lu(A)\b, qr(A)\b, and svd(A)\b, this idea does not scale to all of the cases that can arise. For example, Krylov subspace methods require you to set a tolerance `tol`... how do you expect to do that? krylov(A;tol=1e-7)\b? No, get outta here, the libraries don't support that. And even if they did, this still isn't as efficient as... you get the point.
This becomes a major issue with packages. Say Optim.jl uses a linear solve within its implementation of BFGS (it does). Let's say the code is A\b. Now you know in your case A is a sparse matrix which is irregular, and thus KLU is 5x faster than the UMFPACK that Julia's \ defaults to. How do you tell Optim.jl to use KLU instead? Oops, you can't. But wouldn't it be nice if you could just pass `linsolve = KLUFactorization()` and have it do that?
Okay, we can keep belaboring the point, which is that the true interface of linear solvers needs to have many features and performance, and it needs to be a multiple dispatching interface so that it can be used within other packages and have the algorithms swapped around by passing just one type. What a great time for the SciML ecosystem to swoop in! This leads us to LinearSolve.jl, a common interface for linear solver libraries. What we will discuss is the following:
- Why there are so many different linear solver methods. What are they used for? When are which ones recommended? Short list: LU, QR, SVD, RecursiveFactorization.jl (pure Julia, and the fastest?), GPU-offload LU, UMFPACK, KLU, CG, GMRES, Pardiso, ...
- How do you swap between linear solvers in the LinearSolve.jl interface. It's easy: solve(prob,UMFPACKFactorization()) vs solve(prob,KLUFactorization()).
- How do you efficiently reuse factorizations? For example, the numerical factorization stage can be reused when swapping out `b` if doing many `A\b` operations. But did you know that if A is a sparse matrix you only need to perform the symbolic factorization stage once for each sparsity pattern? How do you do all of this efficiently? LinearSolve.jl has a caching interface that automates all of this!
- What is a preconditioner? How do you use preconditioners?
We will showcase examples where stiff differential equation solving is accelerated by over 20x just by swapping out to the correct linear solvers (https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/). This will showcase that it's not a small detail, and in fact, every library should adopt this swappable linear solver interface.
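The Base-only starting point that the talk argues is insufficient looks like this (plain `LinearAlgebra`, shown for contrast with the swappable LinearSolve.jl interface):

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]

x_backslash = A \ b   # the default: Base picks a factorization for you
F = lu(A)             # factor once...
x_lu = F \ b          # ...then reuse the factorization for each new b
x_qr = qr(A) \ b      # a different accuracy/performance trade-off, same answer
```

The point is that the choice of method is baked into the call site; passing something like `linsolve = KLUFactorization()` into a downstream package is exactly what this syntax cannot express.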
false
https://pretalx.com/juliacon-2022/talk/RUQAHC/
https://pretalx.com/juliacon-2022/talk/RUQAHC/feedback/
Red
CALiPPSO.jl: Jamming of Hard-Spheres via Linear Optimization
Lightning talk
2022-07-28T17:00:00+00:00
17:00
00:10
The CALiPPSO.jl package implements a new algorithm for producing *disordered* [spheres packings](https://en.wikipedia.org/wiki/Sphere_packing) with very high accuracy. The algorithm reaches the critical jamming point of hard spheres through a chain of constrained linear optimization problems. CALiPPSO.jl exploits the functionality of JuMP for modelling and is thus compatible with several optimizers. In collaboration with C. Artiaco, G. Parisi, and F. Ricci Tersenghi.
juliacon-2022-17263-calippso-jl-jamming-of-hard-spheres-via-linear-optimization
/media/juliacon-2022/submissions/HCEGRV/CanicasEnRCP_yLvoHXh.png
Rafael Diaz
en
You can find the complete description of our algorithm in [this preprint](https://arxiv.org/abs/2203.05654).
The package can be installed directly from Julia's package manager, and the documentation is available [here](https://rdhr.github.io/CALiPPSO.jl/dev/index.html).
false
https://pretalx.com/juliacon-2022/talk/HCEGRV/
https://pretalx.com/juliacon-2022/talk/HCEGRV/feedback/
Red
Writing a GenericArpack library in Julia.
Talk
2022-07-28T17:10:00+00:00
17:10
00:30
Arpack is a library for computing eigenvalues and eigenvectors of a linear operator. It has been used in many technical computing packages. The goal of the `GenericArpack.jl` package is to create a Julia translation of Arpack. Right now, the Julia `GenericArpack.jl` methods produce _bitwise identical_ results to the `Arpack_jll` methods for Float64 types in all testcases. The new library has zero dependency on BLAS and supports element types beyond those in Arpack, such as `DoubleFloats.jl`.
juliacon-2022-18064-writing-a-genericarpack-library-in-julia-
JuliaCon
David Gleich
en
## Summary of key points
Arpack is a library for iteratively computing eigenvalues and eigenvectors of a linear operator. It has been widely used, debugged, and implemented in many technical computing packages. The goal of the `GenericArpack.jl` package is to create a Julia translation of the Arpack methods. (Currently, only the symmetric solver has been translated.)
The new library has zero dependency on system level BLAS, allowing it to support matrix element types beyond those in Arpack, such as those in `DoubleFloats.jl` and `MultiFloats.jl`. Other advantages of the `GenericArpack.jl` package include thread safety and using Julia features to allow one to optionally avoid the reverse communication interface. Despite not using the system BLAS, the goal was to make the Julia output equivalent to an alternative compilation of the Arpack source code for `Float64` types.
An anticipated future use is using the `GenericArpack.jl` package to give WebASM Julia implementations tools for iterative eigensolvers. This would enable a wide variety of in-browser analyses, including finite element, pseudospectra, and spectral graph theory.
## Talk overview
The talk will discuss some interesting challenges that arose:
- representing state information in Julia for the statically located Fortran `save` variables in Arpack.
- sensitivity to the `norm` operation and implementing an exact replacement for the OpenBLAS `dnrm2` on x86 architectures that uses 80-bit floating point features of x86 CPUs (without calling the OpenBLAS function)
- getting the same random initialization vectors as Arpack (i.e. porting the Fortran `dlarnv` function)
- designing an interface that allows us to compare results between `Arpack_jll` and `GenericArpack.jl` at internal methods in Arpack call chains.
- how much code it took to translate the symmetric eigensolver to a Hermitian eigensolver (which is not in Arpack)
It will also discuss some tools created that may be useful elsewhere
- a tridiagonal eigensolver that only computes a single row of the eigenvector matrix (the Arpack `dstqrb` function)
- allocation analysis that automatically runs, parses output, and cleans up after a `track-allocations` run of Julia
- Julia implementations of a few LAPACK/BLAS functions and the details needed to bitwise match OpenBLAS calls (on MacOS)
## Initial rough talk slide ideas
- teaser: the world's most precise estimate of the largest few singular values of the netflix ratings matrix. (100M non-zeros) ... or something similar.
- reveal: the code... using GenericArpack; svds(...)
- pitch: A drop-in replacement for Arpack.jl (for symmetric problems).
- what is Arpack and why is it important?
- Arpack and reverse communication.
- summary of project goals: why _translation, same input/same output_ and not something else (new algorithms, etc.), also why minimal dependencies.
- basic translation approach: an exercise in @view / sub-arrays.
- getting bitwise identical output -- the Lanczos/Arnoldi information seems close, but somewhat different from Arpack
- key issue: well, turns out this is _very_ sensitive to the norm function.
- real problem: OpenBlas norm uses 80-bit FP operations. (And why they can get away with sqrt(sum(x.^2)) and you can't!)
- solution 1: use double-double to simulate! (but it's slow)
- solution 2: just use ideas from `BitFloats.jl` and llvm intrinsics instead
- So, I've written everything, it passes tests, etc. Why does it use _so_ many allocations? (When it should use zero, like the Fortran code!)
- tools for hunting down allocations. (well, really just parsing track-allocation output)
- A curiosity: why does the line `while true` allocate?
- I wish there was a "strict" mode that doesn't allow quite so much flexibility.
- because we can: from Arpack `ido` (really what you the user do!) to Julia idonow to avoid reverse communication.
- because we can: going from symmetric real-valued Arpack methods to Hermitian complex-valued methods (which do not exist in Arpack)
- because someone will ask: comparing performance. This will show the current state of performance. At the moment, for a problem Arpack solves in 15ms, GenericArpack.jl takes 23ms; although there has been only minor performance tuning.
- A list of future work. Porting the non-Hermitian complex-valued case; an "AbstractEigenspace.jl" package that multiple people could implement; handling differences.
- The vision: Why this would be super useful. Iterative Eigenvalues in the browser for Pluto.jl running via WebASM... for really cool demos akin to pseudospectra... for finite elements in the browser ... for interactive spectral graph analysis in the browser... for mixed precision Arpack computations (Lanczos/Arnoldi info in high-precision, vectors in low-precision).
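The norm sensitivity mentioned in the slides is easy to demonstrate: a naive `sqrt(sum(x.^2))` overflows in Float64, which is why `dnrm2`-style implementations rescale before squaring (a generic sketch of the pitfall, not the GenericArpack.jl code):

```julia
# Naive two-norm: the intermediate squares overflow for large entries.
naive_norm(x) = sqrt(sum(abs2, x))

# Scaled two-norm: divide by the largest magnitude first, so every
# square is at most 1, then multiply the scale factor back in.
function scaled_norm(x)
    s = maximum(abs, x)
    s == 0 && return 0.0
    s * sqrt(sum(t -> abs2(t / s), x))
end

x = [1e200, 1e200]
naive_norm(x)    # Inf: abs2(1e200) = 1e400 overflows Float64
scaled_norm(x)   # finite, ≈ sqrt(2) * 1e200
```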
false
https://pretalx.com/juliacon-2022/talk/7H77WX/
https://pretalx.com/juliacon-2022/talk/7H77WX/feedback/
Red
OnlineSampling: online inference on reactive models
Talk
2022-07-28T19:00:00+00:00
19:00
00:30
OnlineSampling.jl is a Julia package for online Bayesian inference on reactive models, i.e., streaming probabilistic models.
OnlineSampling provides (1) a small macro-based domain-specific language to describe reactive models and (2) a semi-symbolic inference algorithm that combines exact solutions using belief propagation for trees of Gaussian random variables with approximate solutions using particle filtering.
juliacon-2022-18127-onlinesampling-online-inference-on-reactive-models
JuliaCon
Waïss Azizian, Marc Lelarge, Guillaume Baudart
en
[OnlineSampling](https://github.com/wazizian/OnlineSampling.jl) is a probabilistic programming language that focuses on reactive models, i.e., streaming probabilistic models based on the synchronous model of execution.
Programs execute synchronously in lockstep on a global discrete logical clock.
Inputs and outputs are data streams, programs are stream processors.
For such models, inference is a reactive process that returns the distribution of parameters at the current time step given the observations so far.
## Synchronous Reactive Programming
We use Julia's macro system to program reactive models in a style reminiscent of synchronous dataflow programming languages.
A stream function is introduced by the macro `@node`.
Inside a `node`, the macro `@init` can be used to initialize a variable.
Another macro `@prev` can then be used to access the value of a variable at the previous time step.
Then, the macro `@nodeiter` turns a node into a Julia iterator which unfolds the execution of a node and returns the current value at each step.
For example, the following function `cpt` implements a simple counter incremented at each step, and prints its value:
```julia
@node function cpt()
    @init x = 0
    x = @prev(x) + 1
    return x
end

for x in @nodeiter T = 10 cpt()
    println(x)
end
```
## Reactive Probabilistic Programming
Reactive constructs `@init` and `@prev` can be mixed with probabilistic constructs to program reactive probabilistic models.
Following recent probabilistic languages (e.g., Turing.jl), probabilistic constructs are the following:
- `x = rand(D)` introduces a random variable `x` with the prior distribution `D`.
- `@observe(x, v)` conditions the models assuming the random variable `x` takes the value `v`.
For example, the following model is an HMM where we try to estimate the position of a moving agent from noisy observations:
```julia
speed = 1.0
noise = 0.5
@node function model()
    @init x = rand(MvNormal([0.0], ScalMat(1, 1000.0)))  # x_0 ~ N(0, 1000)
    x = rand(MvNormal(@prev(x), ScalMat(1, speed)))      # x_t ~ N(x_{t-1}, speed)
    y = rand(MvNormal(x, ScalMat(1, noise)))             # y_t ~ N(x_t, noise)
    return x, y
end

@node function hmm(obs)
    x, y = @nodecall model()
    @observe(y, obs)  # assume y_t is observed with value obs_t
    return x
end
steps = 100
obs = rand(steps, 1)
cloud = @nodeiter particles = 1000 hmm(eachrow(obs))  # launch the inference with 1000 particles (returns an iterator)
for (x, o) in zip(cloud, obs)
    samples = rand(x, 1000)  # sample 1000 values from the posterior
    println("Estimated: ", mean(samples), " Observation: ", o)
end
```
## Semi-symbolic algorithm
The inference method is a Rao-Blackwellised particle filter, a semi-symbolic algorithm which tries to analytically compute closed-form solutions, and falls back to a particle filter when symbolic computations fail.
For Gaussian random variables with linear relations, we implemented belief propagation when the factor graph is a tree.
As a result, in the previous HMM example, belief propagation recovers the equations of a Kalman filter and computes the exact solution, so only one particle is necessary, as shown below.
```julia
cloud = @noderun particles = 1 algo = belief_propagation hmm(eachrow(obs))  # launch the inference with 1 particle for all observations
d = dist(cloud.particles[1])  # distribution for the last state
```
## Internals
This package relies on Julia's metaprogramming capabilities.
Under the hood, the macro `@node` generates a stateful stream processor that closely mimics Julia's `Iterator` interface. The state corresponds to the memory used to store all the variables accessed via `@prev`.
The heavy lifting to create these functions is done by a Julia macro which acts on the Abstract Syntax Tree. The transformations at this level include, for `t > 0`, adding the code to retrieve the previous internal state, update it and return it.
However, some transformations are best done at a later stage of the Julia pipeline.
One of them is the handling of calls to `@prev` during the initial step `t = 0`.
To seamlessly handle the various constructs of the Julia language, these calls are invalidated at the level of Intermediate Representation (IR) thanks to the package `IRTools`.
Another operation at the IR level is the automatic realization of a symbolic variable undergoing an unsupported transform: when a function is applied to a random variable and there is no method matching the variable type, this variable is automatically sampled.
We also provide a "pointer-minimal" implementation of belief propagation: during execution, when a random variable is no longer referenced by the program, it can be freed by the garbage collector (GC).
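As a rough illustration (deliberately simplified and hypothetical; not the code `@node` actually emits), the counter node `cpt` above behaves like a hand-written stateful iterator whose state holds the value that `@prev x` will read at the next step:

```julia
# Hand-written sketch of the kind of stateful stream processor that
# `@node` might generate for the `cpt` counter (hypothetical, simplified).
struct Counter end

# At t = 0 there is no previous state: run the `@init` branch.
function Base.iterate(::Counter)
    x = 0                 # @init x = 0
    return x, x           # (current value, state that @prev will read)
end

# At t > 0: retrieve the previous internal state, update it, return it.
function Base.iterate(::Counter, prev_x)
    x = prev_x + 1        # x = @prev(x) + 1
    return x, x
end

Base.IteratorSize(::Type{Counter}) = Base.IsInfinite()

first10 = collect(Iterators.take(Counter(), 10))  # [0, 1, 2, ..., 9]
```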
false
https://pretalx.com/juliacon-2022/talk/PFHGSD/
https://pretalx.com/juliacon-2022/talk/PFHGSD/feedback/
Red
Dynamical Low Rank Approximation in Julia
Lightning talk
2022-07-28T19:30:00+00:00
19:30
00:10
We present LowRankArithmetic.jl and LowRankIntegrators.jl. The conjunction of both packages forms the backbone of a computational infrastructure that enables simple and non-intrusive use of dynamical low rank approximation for on-the-fly compression of large matrix-valued data streams or the approximate solution of otherwise intractable matrix-valued ODEs. We showcase the utility of these packages for the quantification of uncertainty in scientific models.
juliacon-2022-18025-dynamical-low-rank-approximation-in-julia
JuliaCon
/media/juliacon-2022/submissions/YBH93M/program_figure.001_0nteiDR.png
Flemming Holtorf
en
Many scientific computing problems boil down to solving large matrix-valued ordinary differential equations (ODEs); prominent examples are the propagation of uncertainties through (partial-)differential equation models or on-the-fly compression of large-scale simulation or experimental data. While the naive integration of such matrix-valued ODEs often remains prohibitively expensive, it is in many cases found that their solutions admit an accurate low-rank approximation. Exploiting such a low-rank structure generally holds the potential for substantial computational resource savings (time and memory) over naive integration approaches, often recovering the tractability of integration.
Dynamical low rank approximation (DLRA), a concept also known under the names Dirac-Frenkel time-varying variational principle or dynamically orthogonal schemes, seeks to exploit the low-rank structure of the solution of matrix-valued ODEs by performing the integration within the manifold of fixed (low-)rank matrices. However, while theoretically elegant, the effective use of DLRA in practice is often cumbersome due to the need for custom implementations of integration routines that take advantage of the assumed low-rank structure. To alleviate this limitation, we present the packages LowRankArithmetic.jl and LowRankIntegrators.jl. The conjunction of both packages forms the backbone of a computational infrastructure that enables simple and non-intrusive use of DLRA. To that end, LowRankArithmetic.jl facilitates the propagation of low rank matrix representations through finite compositions of a rich set of algebraic operations, alleviating the need for custom implementations. Based on this key functionality, LowRankIntegrators.jl implements state-of-the-art integration routines for DLRA that automatically take advantage of low rank structure; the user needs to supply nothing more than the right-hand side of a matrix-valued ODE of interest.
In this talk, we briefly review the conceptual idea behind DLRA, outline the primitives underpinning LowRankArithmetic.jl and LowRankIntegrators.jl, and showcase their utility for the propagation of uncertainties through scientific models ranging from stochastic PDEs to the chemical master equation.
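The kind of factored arithmetic described above can be sketched with a minimal two-factor toy (hypothetical names; the actual package supports a much richer set of operations and SVD-based rank re-truncation):

```julia
using LinearAlgebra

# Hypothetical sketch of the core idea: represent X = U * V' in factored
# form and propagate algebraic operations without ever forming X densely.
struct LowRank
    U::Matrix{Float64}   # n × r
    V::Matrix{Float64}   # m × r
end

Base.Matrix(X::LowRank) = X.U * X.V'
rank_of(X::LowRank) = size(X.U, 2)

# A * X keeps the factorization; only the left factor changes.
Base.:*(A::AbstractMatrix, X::LowRank) = LowRank(A * X.U, X.V)

# X + Y concatenates factors: the rank grows additively (re-truncation
# via SVD, as done in practice, is omitted here for brevity).
Base.:+(X::LowRank, Y::LowRank) = LowRank(hcat(X.U, Y.U), hcat(X.V, Y.V))

n, m, r = 100, 80, 3
X = LowRank(randn(n, r), randn(m, r))
A = randn(n, n)
Y = A * X + X                            # still factored, rank 6
Matrix(Y) ≈ A * Matrix(X) + Matrix(X)    # true
```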
false
https://pretalx.com/juliacon-2022/talk/YBH93M/
https://pretalx.com/juliacon-2022/talk/YBH93M/feedback/
Red
Visualization Dashboards with Pluto!
Lightning talk
2022-07-28T19:40:00+00:00
19:40
00:10
Data visualization with intuitive interactions is an essential feature of many scientific investigations. I propose to go over use cases and examples on why/how to develop reactive dashboards in Julia using "Pluto.jl". Pluto provides a way to isolate cells in a separate page of which the style is editable as regular HTML/CSS. Alongside PlutoUI's experimental layout feature, this is a powerful tool to create immersive interactive experiences for users.
juliacon-2022-17986-visualization-dashboards-with-pluto-
JuliaCon
Guilherme Gomes Haetinger
en
In Python, one could assemble a Jupyter notebook to experiment with visualizations and turn it into a Dash dashboard with multiple plots, panes, and widgets. In R, someone could do the same with RShiny. Julia has support for Dash and Jupyter, but one could argue that dashboarding and experimentation should be part of the same workflow. Pluto's solution to this problem is complete and extensible, which is every scientist's dream.
false
https://pretalx.com/juliacon-2022/talk/SQJTRS/
https://pretalx.com/juliacon-2022/talk/SQJTRS/feedback/
Red
Visualizing astronomical data with AstroImages.jl
Lightning talk
2022-07-28T19:50:00+00:00
19:50
00:10
To study the cosmos, astronomers examine images captured of light exceeding human-visible colors and dynamic range. AstroImages.jl makes it easy to load, manipulate, and visualize astronomical data intuitively and efficiently using arbitrary color-schemes, stretched color scales, RGB composites, PNG rendering, and plot recipes. Come to our talk to see how you too can create beautiful images of the universe!
juliacon-2022-17990-visualizing-astronomical-data-with-astroimages-jl
JuliaCon
/media/juliacon-2022/submissions/VVPY9G/JuliaCon_2022_Talk_Image_BtwALWa.png
William Thompson
en
To study the cosmos, astronomers use data cubes with many dimensions representing images with axes for sky position, time, wavelength, polarization, and more. Since these large datasets often span many orders of magnitude in intensity and typically include colours invisible to humans, astronomers like to visualize their images using a variety of non-linear stretching and contrast adjustments.
Additionally, images may contain metadata specifying arbitrary mappings of pixel positions to multiple celestial coordinate systems.
Julia is a powerful language for processing astronomical data, but these visualization tasks are a challenge for any tool. Built on Images, DimensionalData, FITS, WCS, and Plots, AstroImages.jl makes it easy to load, manipulate, and visualize astronomical data intuitively and efficiently with support for arbitrary colorschemes, stretched color scales, RGB composites, lazy PNG rendering, and plot recipes.
Come to our talk to see how you too can create beautiful images of the universe!
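As a rough illustration of the non-linear stretching described above (a hypothetical helper, not AstroImages.jl's actual API), an asinh stretch compresses several orders of magnitude of intensity into the displayable range [0, 1]:

```julia
# Hypothetical sketch of an asinh stretch: linearly rescale to [0, 1],
# then apply asinh so faint and bright features are both visible.
function asinh_stretch(img; softening = 1e-3)
    lo, hi = extrema(img)
    scaled = (img .- lo) ./ (hi - lo)          # linear rescale to [0, 1]
    return asinh.(scaled ./ softening) ./ asinh(1 / softening)
end

img = [10.0^k for k in 0:6, _ in 1:1]          # 7 orders of magnitude
s = asinh_stretch(img)
extrema(s)                                     # (0.0, 1.0)
```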
false
https://pretalx.com/juliacon-2022/talk/VVPY9G/
https://pretalx.com/juliacon-2022/talk/VVPY9G/feedback/
Red
Microbiome.jl & BiobakeryUtils.jl for analyzing metagenomic data
Lightning talk
2022-07-28T20:00:00+00:00
20:00
00:10
Microbiome.jl is a Julia package to facilitate analysis of microbial community data. BiobakeryUtils.jl is built on top of Microbiome.jl, and provides utilities for working with a suite of command line tools (the bioBakery) that are widely used for converting raw metagenomic sequencing data into tables of taxon and gene function counts. Together, these packages provide an effective way to link microbial community data with the power of Julia’s numerical, statistical, and plotting libraries.
juliacon-2022-17890-microbiome-jl-biobakeryutils-jl-for-analyzing-metagenomic-data
JuliaCon
Deleted User, Kevin Bonham, Annelle Kayisire Abatoni
en
false
https://pretalx.com/juliacon-2022/talk/PXRENJ/
https://pretalx.com/juliacon-2022/talk/PXRENJ/feedback/
Purple
Tricks.jl: abusing backedges for fun and profit
Lightning talk
2022-07-28T13:00:00+00:00
13:00
00:10
Tricks.jl is a package that does cool tricks to do more work at compile time.
It does this by generating (`@generated`) functions that just return "hardcoded" values, and then re-triggering generation when (if) those values change. This re-triggering is done using backedges.
Tricks.jl can, for example, declare Tim Holy traits that depend on whether or not a method has been defined.
[Slides](https://raw.githack.com/oxinabox/TricksJuliaCon2022/main/build/index.html)
juliacon-2022-16961-tricks-jl-abusing-backedges-for-fun-and-profit
JuliaCon
Frames Catherine White
en
Tricks.jl was made at the JuliaCon 2019 hackathon in Baltimore, shortly after manual backedges were added in Julia 1.3, but it has never been explained at a JuliaCon.
This talk is expressly targeted at advanced Julia users wanting to understand the internals.
Attendees will learn a bunch about backedges: why they exist and how they work.
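The core mechanism can be sketched with a plain `@generated` function (a deliberately naive version, without the backedge machinery that is the actual point of Tricks.jl; the name is hypothetical):

```julia
# Simplified sketch of the idea behind Tricks.static_hasmethod (not the
# package's actual implementation): bake the answer to `hasmethod` into
# the generated code, so the check costs nothing at runtime.
@generated function naive_static_hasmethod(f, ::Type{T}) where {T <: Tuple}
    # This runs at code-generation time; the result is a hardcoded constant.
    return hasmethod(f.instance, T)
end

struct Duck end
quack(::Duck) = "quack!"

naive_static_hasmethod(quack, Tuple{Duck})   # true, as a compile-time constant
naive_static_hasmethod(quack, Tuple{Int})    # false
# Caveat: if a method `quack(::Int)` were defined *after* the call above
# compiles, this naive version would keep returning the stale `false`.
# Tricks.jl fixes exactly that by attaching backedges that re-trigger
# generation when the method table changes.
```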
false
https://pretalx.com/juliacon-2022/talk/DZNPL9/
https://pretalx.com/juliacon-2022/talk/DZNPL9/feedback/
Purple
Making Abstract Interpretation Less Abstract in Cthulhu.jl
Lightning talk
2022-07-28T13:10:00+00:00
13:10
00:10
Cthulhu.jl is a highly useful tool for performance engineering as well as general debugging of Julia programs. However, as the name implies, one can quickly descend into the abyss that is Julia's compilation pipeline and get lost in the vast amounts of code even modest looking Julia functions may end up generating. I present a combination of Cthulhu.jl with a step-by-step debugger, showing concrete results every step along the type lattice to make compilation more interpretable.
juliacon-2022-18062-making-abstract-interpretation-less-abstract-in-cthulhu-jl
JuliaCon
Simeon Schaub
en
One of the main motivations for this work was the use case of debugging source-to-source automatic differentiation on scientific codes, as exemplified by Zygote.jl, which emits code that is significantly more complex than that of the original program. That can make it difficult to correctly identify intermediate steps. This is often compounded by the fact that the more complex code can lead to results no longer being inferred to a concrete type.
I leverage the already existing infrastructure in JuliaInterpreter.jl to enable explorative analysis of what the code does by interpreting the program based on concrete input values. Because interpretation works on a statement-by-statement basis just as inference does, this allows the user to go back and look at what the interpreter computed for any intermediate steps or see which branch actually ended up being taken.
One of the main challenges was the fact that JuliaInterpreter.jl was designed to interpret untyped Julia IR, which has slightly different semantics to IR after inference which again differs from the semantics of IR after all other Julia-specific optimizations such as inlining. A prototype currently exists in https://github.com/JuliaDebug/Cthulhu.jl/pull/214. I plan to introduce a flexible plugin infrastructure for this to be able to develop most of this outside of Cthulhu first. Support for step-by-step execution and for interpreting optimized Julia IR is also being worked on.
In this talk I aim to first give an overview on how Cthulhu.jl differs from a debugger and the various advantages and disadvantages of both approaches. I will then explain how I combined the two and how users can take advantage of these new capabilities in their own workflows. While I will be primarily targeting intermediate to advanced Julia users, I believe this could even be of use to those who have not used Cthulhu.jl before, because it allows for a much more interactive exploration of the intricacies of Julia IR.
false
https://pretalx.com/juliacon-2022/talk/WTPZLZ/
https://pretalx.com/juliacon-2022/talk/WTPZLZ/feedback/
Purple
Reducing Running Time and Time to First X: A Walkthrough
Lightning talk
2022-07-28T13:20:00+00:00
13:20
00:10
Optimizing Julia isn't hard if you compare it to Python or R, where you have to be an expert in both Python or R and C/C++. I'll describe what type stability is and why it is important for performance. I'll discuss it in the context of performance (raw throughput) and in the context of time to first X (TTFX). Julia is somewhat notorious for having really bad TTFX in certain cases. This talk explains the workflow that you can use to reduce running time and TTFX.
juliacon-2022-17234-reducing-running-time-and-time-to-first-x-a-walkthrough
Rik Huijzer
en
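Type stability, the central concept of the abstract above, can be sketched in a few lines (an illustrative example, not taken from the talk): a function is type stable when the return type is determined by the argument types alone.

```julia
# A type-unstable function vs. a stable fix. `unstable` returns Int or
# Float64 depending on a runtime value, so the compiler must carry a
# Union; `stable` always returns the argument's type.
unstable(x) = x > 0 ? x : 0        # false branch returns Int, not Float64
stable(x)   = x > 0 ? x : zero(x)  # both branches return typeof(x)

# Base.return_types shows what inference concludes for Float64 input:
Base.return_types(unstable, (Float64,))   # [Union{Float64, Int64}]
Base.return_types(stable, (Float64,))     # [Float64]
```

Tools like `@code_warntype` highlight exactly these `Union`s, which is the usual starting point of the workflow the talk describes.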
false
https://pretalx.com/juliacon-2022/talk/LJHYAQ/
https://pretalx.com/juliacon-2022/talk/LJHYAQ/feedback/
Purple
Garbage Collection in Julia.
Lightning talk
2022-07-28T13:30:00+00:00
13:30
00:10
Garbage collection is one of those productivity tools that you don't think about until you need to. We will discuss the current state of Julia GC and what can be done to make it better.
juliacon-2022-16792-garbage-collection-in-julia-
JuliaCon
Christine Flood
en
"Garbage collection is like an omniscient housekeeper who can go through your things getting rid of those that you will never use and making more room for the things you need."
Most programmers are happy to have the memory management issues handled for them right up until the point that they aren't. Then they start trying to do unnatural things, like reusing arrays, creating off-heap storage, or turning off GC altogether.
This talk will focus on the internals of the current Julia collector, and what we can do to make things better.
false
https://pretalx.com/juliacon-2022/talk/8VQAAD/
https://pretalx.com/juliacon-2022/talk/8VQAAD/feedback/
Purple
Parallelizing Julia’s Garbage Collector
Lightning talk
2022-07-28T13:40:00+00:00
13:40
00:10
With the increasing popularity of Julia for memory intensive applications, garbage collection is becoming a performance bottleneck.
Julia currently uses a serial mark-and-sweep collector, in which objects are traced starting from a root-set (e.g. thread’s stacks, global variables, etc.) and unreachable objects are then deallocated.
We discuss in this talk how we recently parallelized tracing of live Julia objects and the performance improvements we got so far.
juliacon-2022-18076-parallelizing-julia-s-garbage-collector
Diogo Netto
en
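The tracing described in the abstract can be sketched as a toy mark-and-sweep in pure Julia (illustrative only; Julia's real collector is implemented in C):

```julia
# Toy mark-and-sweep: objects form a graph, marking traces from a root
# set, and anything left unmarked is swept (deallocated).
struct Obj
    id::Int
    children::Vector{Int}     # edges to other object ids
end

function mark(heap::Dict{Int,Obj}, roots::Vector{Int})
    marked = Set{Int}()
    worklist = copy(roots)
    while !isempty(worklist)              # depth-first trace
        id = pop!(worklist)
        id in marked && continue
        push!(marked, id)
        append!(worklist, heap[id].children)
    end
    return marked
end

# Sweep: drop every object that marking never reached.
sweep!(heap, marked) = filter!(p -> first(p) in marked, heap)

heap = Dict(1 => Obj(1, [2]), 2 => Obj(2, Int[]),
            3 => Obj(3, [4]), 4 => Obj(4, [3]))   # 3 ↔ 4: unreachable cycle
marked = mark(heap, [1])       # Set([1, 2])
sweep!(heap, marked)
sort(collect(keys(heap)))      # [1, 2]
```

Parallelizing the collector essentially means distributing the `worklist` above across threads, which is what makes work sharing and termination detection the interesting parts.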
false
https://pretalx.com/juliacon-2022/talk/LXSC3P/
https://pretalx.com/juliacon-2022/talk/LXSC3P/feedback/
Purple
Unbake the Cake (and Eat it Too!): Flexible and Performant GC
Lightning talk
2022-07-28T13:50:00+00:00
13:50
00:10
The tension between performance and flexibility is always present when developing new systems. Often, poor performance is unacceptable. But poor flexibility hinders experimentation and evolution, which may lead to bad performance later on. In this talk, we show how we used MMTk.io – a toolkit we are developing that provides language implementers with a powerful garbage collection framework – to implement a flexible (unbaking the cake) and performant (and eating it too) memory manager for Julia.
juliacon-2022-17900-unbake-the-cake-and-eat-it-too-flexible-and-performant-gc
JuliaCon
Luis Eduardo de Souza Amorim
en
When implementing a system such as a programming language runtime, decisions usually favor performance over flexibility. Lacking performance is often unacceptable but lacking flexibility can hinder experimentation and evolution, which may also affect performance in the long run. Consider memory management, for example. If we ignore flexibility and only favor performance, sticking to a particular type of garbage collector that "performs well" can have a huge effect later on, such that changing any aspect about it can be almost impossible without rewriting the whole system.
MMTk.io is a memory management toolkit providing language implementers with a powerful memory management framework and researchers with a multi-runtime platform for memory management research. Instead of a single, monolithic collector, MMTk efficiently implements various garbage collector strategies, increasing flexibility without compromising performance.
MMTk started with its original Java implementation, which was integrated into Jikes RVM in 2002. Since then, it has gained a fresh Rust implementation, which is under active development and even though it is not ready for production use, can currently be used experimentally.
To use MMTk, one must develop a binding, which contains three artefacts: (i) a logical extension of the VM, (ii) a logical extension of MMTk core, and (iii) an implementation of MMTk's API. At the moment, there are various bindings under development including bindings for V8, OpenJDK, Jikes RVM, Ruby, GHC, PyPy and now Julia.
In this talk we discuss our experience developing the MMTk binding for Julia. Julia currently implements a precise non-moving generational garbage collector. It relies on some LLVM features to calculate roots, but the code follows a monolithic approach, as described earlier.
We reuse some of Julia's strategies for calculating roots and processing objects, integrating these into an Immix implementation inside MMTk. Our implementation passes all but a few of Julia's correctness tests, and has shown promising results regarding GC performance. We hope that with MMTk-Julia we are able to easily explore different GC strategies, including a partially-moving GC, observing how these strategies affect the language's performance.
false
https://pretalx.com/juliacon-2022/talk/3XBUWE/
https://pretalx.com/juliacon-2022/talk/3XBUWE/feedback/
Purple
Unlocking Julia's LLVM JIT Compiler
Lightning talk
2022-07-28T16:30:00+00:00
16:30
00:10
Julia's compiler spends almost all of its time generating, optimizing, and compiling LLVM IR. Currently, much of this work is done under one giant lock, which is also held during type inference, reducing compiler throughput in a multithreaded environment. By using finer-grained locking and handling LLVM IR in a threadsafe manner, we can reduce contention of compilation resources. This work also leads into future JIT optimizations such as lazy, parallel, and speculative compilation of Julia code.
juliacon-2022-17927-unlocking-julia-s-llvm-jit-compiler
JuliaCon
Prem Chintalapudi
en
Julia's JIT compiler converts Julia IR to LLVM IR, optimizes it, and converts it to machine code for efficient subsequent execution. However, much of this process relies on shared global resources, such as a global LLVM context, the pass manager that runs the optimization, and various data caches. This has necessitated the presence of a global lock to prevent multiple threads from simultaneously modifying this data. Furthermore, as generation of LLVM IR and type inference may co-recurse indefinitely, type inference also acquires and holds the same lock during its execution. This serialized compilation process increases the startup time of multithreaded environments (often referred to as time-to-first-plot, TTFP) and prevents our execution environment from performing more complex transformations, such as speculative and parallel compilation.
Thus far, refactorings of our IR generation pipeline have reduced the number of global variables used in the compiler and added finer grained locks to our JIT stack in preparation for removing the global locks. At this stage, much of the remaining challenge in removing the global lock is in proving thread safety and progressively reducing the scope of the lock until the minimum amount of critical code is protected. Once that work has completed, work on speculative optimization and IR generation can begin, which should bring additional improvements to TTFP for situations without multiple contending compilation threads.
false
https://pretalx.com/juliacon-2022/talk/NEKFDC/
https://pretalx.com/juliacon-2022/talk/NEKFDC/feedback/
Purple
Metal.jl - A GPU backend for Apple hardware
Lightning talk
2022-07-28T16:40:00+00:00
16:40
00:10
In this talk, updates on the development of a GPU backend for Apple hardware (specifically the M-series chipset) will be presented along with a brief showcase of current capabilities and interface. The novel compilation flow will be explained and compared to the other GPU backends as well as the benefits and limitations of both a unified memory model and Apple's Metal capabilities. A brief overview of Apple's non-GPU hardware accelerators and their potential will also be discussed.
juliacon-2022-17880-metal-jl-a-gpu-backend-for-apple-hardware
JuliaCon
Max Hawkins, Tim Besard
en
The release of Apple's M-series chipset brings new hardware into play for Julia to target. Base CPU functionality is already highly used within the community, but so far, the M1 chip's hardware accelerators have primarily been inaccessible to Julia programmers. Metal.jl has been developed as a GPU backend (like CUDA.jl, AMDGPU.jl, and oneAPI.jl) specifically targeting the M-series GPUs. Given Apple's continued expansion of the M1 chipset and devotion to hardware accelerators, a Julia interface targeting these compute devices is becoming increasingly beneficial.
false
https://pretalx.com/juliacon-2022/talk/AAJJGP/
https://pretalx.com/juliacon-2022/talk/AAJJGP/feedback/
Purple
ArrayAllocators.jl: Arrays via calloc, NUMA, and aligned memory
Lightning talk
2022-07-28T16:50:00+00:00
16:50
00:10
ArrayAllocators.jl uses the standard array interface to allow faster `zeros` with `calloc`, allocation on specific NUMA nodes on multi-processor systems as well as aligned memory. The allocators are given as an argument to `Array{T}` in place of `undef`. Overall, this allows Julia to match the allocation performance of popular numerical libraries such as NumPy, which uses some of these techniques.
In this talk, we will also explore some of the unexpected properties of these allocation methods.
juliacon-2022-18174-arrayallocators-jl-arrays-via-calloc-numa-and-aligned-memory
JuliaCon
Mark Kittisopikul, Ph.D.
en
Julia offers an extensible array interface that allows array types to wrap around C pointers obtained from specialized or operating system specific application programming interfaces while integrating into the garbage collection system. ArrayAllocators.jl uses this array interface to allow faster `zeros` with `calloc`, allocation on specific NUMA nodes on multi-processor systems, and the allocation of aligned memory for vectorization. The allocators are given as an argument to `Array{T}` or other subtypes of `AbstractArray` in place of the `undef` initializer to provide a familiar interface to the user. In this talk, I will describe how to use ArrayAllocators.jl to optimize applications via `calloc`, NUMA, and aligned memory.
The easy availability of these allocation methods allows Julia to match the performance and caveats of other libraries or code that uses these methods. For example, NumPy's implementation of `numpy.zeros` uses `calloc` by default, which may make it appear that NumPy is outperforming Julia for certain microbenchmarks. On some operating systems, the initial allocation is significantly faster than explicitly filling the array with zeros, as is currently done in `Base`, since the operating system may defer the actual allocation of the memory until a later time. Often the initial allocation time is similar to the allocation time of `undef` arrays.
Another application is to make Julia NUMA-aware by allocating memory on specific NUMA nodes. I will demonstrate how to optimize the performance of common memory operations on systems with multiple NUMA nodes on modern processors, which may be counter-intuitive.
A final application is to align memory to power-of-two byte boundaries. This is useful to assist advanced vectorization applications where 64-byte aligned memory may accelerate the use of AVX-512 instructions.
Finally, I will discuss the integer overflow features of ArrayAllocators.jl and how other packages may extend ArrayAllocators.jl to easily add new ways of allocating memory for arrays.
In summary, ArrayAllocators.jl and its subpackages provide a familiar mechanism to allocate memory for arrays via low-level methods. This allows Julia programs to take advantage of advanced operating system features that may accelerate the initialization and use of the memory.
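The underlying mechanism, wrapping a C pointer into a GC-managed Julia array, can be sketched using only Base (a simplified illustration of the pattern the package builds on, not ArrayAllocators.jl's implementation; the function name is hypothetical):

```julia
# Ask libc for zero-initialized memory, then hand the pointer to Julia's
# GC so it is freed automatically when the array becomes unreachable.
function calloc_zeros(::Type{T}, n::Integer) where {T}
    ptr = Libc.calloc(n, sizeof(T))
    ptr == C_NULL && throw(OutOfMemoryError())
    # own = true: the GC will call `free` on this pointer for us.
    return unsafe_wrap(Array, convert(Ptr{T}, ptr), n; own = true)
end

z = calloc_zeros(Float64, 1_000_000)
all(iszero, z)   # true; the OS may not even have committed the pages yet
```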
false
https://pretalx.com/juliacon-2022/talk/SE8MEL/
https://pretalx.com/juliacon-2022/talk/SE8MEL/feedback/
Purple
Compile-time programming with CompTime.jl
Talk
2022-07-28T17:00:00+00:00
17:00
00:30
Inspired by the compile-time features of Zig, we present CompTime.jl, a package that wraps Julia’s features for generated functions into a seamless interface between compile-time and runtime semantics. The desire for this came from heavy use of @generated functions within Catlab.jl, and we have found that CompTime.jl makes our code more readable, maintainable, and debuggable. We will give a tutorial and then a brief peek into the implementation.
juliacon-2022-18020-compile-time-programming-with-comptime-jl
Owen Lynch
en
CompTime.jl presents a macro, `@ct_enable`, that can be applied to a function, and provides a DSL within the function to mark parts that should be run at compile time and parts that should be run at runtime. As with `@generated` functions, the code run at compile time can only depend on the types of the arguments. However, with this macro, arbitrary control structures (for loops, while loops, if statements, etc.) can be run at compile time and “unrolled”, so that, for instance, only the body of the successful branch of an if statement appears at runtime. Additionally, computation based on types that cannot itself be typechecked very well can be moved to compile time, and what is left at runtime can then be completely type-checked, unlocking the power of the Julia compiler to optimize.
This does not present any new technical capabilities beyond what is already provided by generated functions; rather, the chief benefit is the many conveniences enabled by moving all the syntax-processing to general functions. For instance, the code generated for a specific set of argument types can be printed out and inspected, and in backtraces the line numbers associated with the generated code are meaningful, pointing to the correct line in the `@ct_enable`-decorated function whether the error happens at runtime or compile time. Thus, the experience of working with generated functions becomes accessible to a Julia user that knows nothing about syntax trees.
Finally, if all of the CompTime annotations are stripped out of a `@ct_enable`-decorated function, one is left with a perfectly valid Julia function that runs completely at runtime. Thus, in situations where one expects to run the function only a couple times on each new datatype, the first-compile slowdown can be avoided.
In this talk, we will present a tutorial of how to use CompTime.jl, accessible to a novice Julia programmer with no previous knowledge of generated functions but of interest to all audiences. Then, at the end of the talk, we will take a peek under the hood of CompTime and show the audience a bit of the frightening delight that is writing code that generates code to generate code. This will mainly be as a fun brain-twister; however, the Julia programmer familiar with macro writing may learn a thing or two of interest.
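The kind of compile-time unrolling described above can be sketched with a raw `@generated` function, the machinery CompTime.jl wraps (an illustrative example, not CompTime.jl's own DSL):

```julia
# The loop over tuple fields runs at code-generation time, based only on
# the argument's type, and is fully unrolled in the emitted code.
@generated function unrolled_sum(t::Tuple)
    N = length(t.parameters)          # known from the type alone
    terms = [:(t[$i]) for i in 1:N]
    return :(+($(terms...)))          # e.g. t[1] + t[2] + t[3]
end

unrolled_sum((1, 2.5, 3))             # 6.5, with no runtime loop
```

CompTime.jl's contribution is letting you write such unrolled loops as ordinary annotated control flow instead of hand-building syntax trees like this.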
false
https://pretalx.com/juliacon-2022/talk/DSB7E3/
https://pretalx.com/juliacon-2022/talk/DSB7E3/feedback/
Purple
Monitoring Performance on a Hardware Level With LIKWID.jl
Talk
2022-07-28T17:30:00+00:00
17:30
00:30
Have you ever wondered how many FLOPS your CPU or GPU actually performs when executing (parts of) your Julia code? Or how much data it has read from main memory or a certain cache? Then this talk is for you! I will present LIKWID.jl (Like I Knew What I'm Doing), a Julia wrapper around the same-named performance benchmarking suite, that allows you to analyse the performance of your Julia code by monitoring various hardware performance counters sitting inside your CPU or GPU.
juliacon-2022-17996-monitoring-performance-on-a-hardware-level-with-likwid-jl
JuliaCon
Carsten Bauer
en
In my talk I will first lay the ground by telling you everything you need to know about hardware performance counters and will then introduce you to LIKWID.jl. Specifically, I will explain how to install LIKWID, how to use LIKWID.jl's Marker API, i.e. how to mark certain regions in your Julia code for performance monitoring, and how to properly run your Julia code under LIKWID. We will then use these techniques to analyse a few illustrative Julia examples running on CPUs and an NVIDIA GPU. Finally, I will discuss potential pitfalls (e.g. when benchmarking multithreaded Julia code) and future plans.
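For orientation, the Marker API workflow follows the pattern below, taken on trust from the LIKWID.jl documentation (exact names may differ in current releases; Linux only, and the script must be launched under `likwid-perfctr`):

```julia
# Run as: likwid-perfctr -C 0 -g FLOPS_DP -m julia script.jl
using LIKWID
using LinearAlgebra

Marker.init()

A = rand(1000, 1000)
B = rand(1000, 1000)

# Mark a region so LIKWID attributes its counter readings to "matmul".
@marker "matmul" A * B

Marker.close()
```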
Disclaimer: LIKWID.jl works on Linux :) but not on Windows or macOS :(
false
https://pretalx.com/juliacon-2022/talk/DAASVV/
https://pretalx.com/juliacon-2022/talk/DAASVV/feedback/
Purple
Platform-aware programming in Julia
Talk
2022-07-28T19:00:00+00:00
19:00
00:30
Heterogeneous computing resources, such as GPUs, TPUs, and FPGAs, are widely used to accelerate computations, or make them possible, in scientific/technical computing. We will talk about how loose addressing of heterogeneous computing requirements in programming language designs affects portability and modularity. We propose contextual types to answer the underlying research questions, where programs are typed by their execution platforms and Julia's multiple dispatch plays an essential role.
juliacon-2022-17969-platform-aware-programming-in-julia
JuliaCon
Francisco Heron de Carvalho Junior
en
The importance of heterogeneous computing in enabling computationally intensive solutions to problems addressed by scientific and technical computing applications is no longer new. In fact, heterogeneous computing plays a central role in the design of high-end parallel computing platforms for exascale computing. For this reason, the Julia community has concentrated efforts to support GPU programming through the JuliaGPU organization. Currently, there are packages that provide the functionality of existing GPU programming APIs, such as OpenCL, CUDA, AMD ROCm, and oneAPI, as well as high-level interfaces to launch common operations on GPUs (e.g. FFT). In particular, oneAPI is a recent cross-industry initiative to provide a unified, standards-based programming model for accelerators (XPUs).
In our work, it is convenient to distinguish between package developers and application programmers, where the former provide the high-level functionality necessary for the latter to solve problems of interest to them. Both are interested in performing computations as quickly as possible, taking advantage of hardware features often purchased by application programmers from IaaS cloud providers. Thus, since application programmers prefer packages enabled to exploit the capabilities of the target execution platform, package developers are interested in optimizing performance-critical functions by noting the presence of multicore support, SIMD extensions, accelerators, and so on. In fact, considering hardware features in programming is a common practice among HPC programmers.
However, to deal with the large number of alternative heterogeneous computing resources available, package developers face portability, maintenance, and modularity issues. First, they need APIs for inspecting hardware configurations at run time, but no general solution exists, since existing APIs alone do not cover all the architectural details that can influence programming decisions to accelerate the code. To work around this, developers can give application programmers the responsibility of selecting the appropriate package version for the target architecture, or ask them to provide details about the target architecture through parameters, making programming interfaces more complex. Second, package developers are often required to interleave code for different architectures in the same function, making it difficult to make changes as accelerator technology evolves, such as when implementations must be provided for new accelerators. A common situation occurs when a programming API is deprecated, as has been the case with some Julia packages that use OpenCL.jl (e.g. https://github.com/JuliaEarth/ImageQuilting.jl/issues/16).
We argue that the traditional view of programming language designers that programs should be viewed as abstract entities dissociated from the target execution platform is not adequate to a context in which programs must efficiently exploit heterogeneous computing resources provided by IaaS cloud providers, eager to sell their services. In fact, these features may vary between runs of the same program, as application programmers try to meet their schedules and satisfy their budget constraints. So, the design of programming languages should follow the assumption that the software is now closely related to the hardware on which it will run, still making it possible to control the level of independence in relation to hardware assumptions through abstraction mechanisms (in fact, independence in relation to the hardware is still needed most of the time). For that, we propose typing programs with their target execution platforms through a notion of contextual types.
Contextual types are inspired by our previous work with HPC Shelf, a component-based platform to provide HPC services (http://www.hpcshelf.org). Surprisingly, they can free application programmers from making assumptions about target execution environments, focusing that responsibility on package developers in a modular and scalable way. In fact, contextual types help package developers write different, independent methods of the same function for different hardware configurations. In addition, other developers, as well as application programmers, can provide their own methods for specific hardware configurations not supported by the chosen package. To do this, the runtime system must be aware of the underlying features of the execution platform.
We chose Julia as the appropriate language to evaluate our proposal for two main reasons. Firstly, Julia was designed with HPC requirements in mind, as it is primarily focused on scientific and technical computing applications. Second, it implements a multiple dispatch approach that fits contextual types into the task of selecting methods for different hardware configurations. In fact, multiple dispatch has a close analogy with HPC Shelf's contextual contract resolution mechanism.
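The dispatch idea behind contextual types can be sketched in plain Julia today; the type and function names below are hypothetical illustrations, not the authors' actual proposal:

```julia
# Encode platform assumptions as types...
abstract type AcceleratorKind end
struct NoAccelerator <: AcceleratorKind end
struct NVIDIAGPU     <: AcceleratorKind end

# ...so package developers can write one independent method per hardware
# configuration, and multiple dispatch selects among them.
kernel_backend(::NoAccelerator) = "multithreaded CPU fallback"
kernel_backend(::NVIDIAGPU)     = "CUDA kernel"

# A stub; a platform-aware runtime would probe the actual hardware here.
detect_platform() = NoAccelerator()

run_kernel() = kernel_backend(detect_platform())
```

Other developers (or application programmers) can extend support by adding a new subtype and a new method, without touching the existing ones.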
false
https://pretalx.com/juliacon-2022/talk/DQHQZ8/
https://pretalx.com/juliacon-2022/talk/DQHQZ8/feedback/
Purple
Optimizing Floating Point Math in Julia
Talk
2022-07-28T19:30:00+00:00
19:30
00:30
Why did `exp10` get 2x faster in Julia 1.6? One reason is that, unlike most other languages, Julia doesn't use the operating system's math library (libm). This talk will give an overview of improvements in Julia's math library since version 1.5, and areas for future improvement. We will cover computing optimal polynomials, table-based implementations, and bit-hacking for peak performance.
juliacon-2022-17949-optimizing-floating-point-math-in-julia
JuliaCon
Oscar Smith
en
In this talk we will cover the fundamental numerical techniques for implementing accurate and fast floating-point functions. We will start with a brief review of how floating-point math works, then use the changes made to `exp` and friends (`exp2`, `exp10`, and `expm1`) over the past two years to demonstrate the main techniques for computing such functions.
Specifically we will look at:
* Range reduction
* Polynomial kernels using the `Remez` algorithm
* Fast polynomial evaluation
* Table based methods
* Bit manipulation (to make everything fast)
We will also discuss how to test the accuracy of implementations using [FunctionAccuracyTests.jl](https://github.com/JuliaMath/FunctionAccuracyTests.jl), and areas for future improvements in Base and beyond. Present and future work includes optimized routines for the Bessel functions, cumulative distribution functions, and elementary functions for [DoubleFloats.jl](https://github.com/JuliaMath/DoubleFloats.jl); PRs across the entire package ecosystem are always welcome.
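To give a flavor of how these ingredients combine (a toy sketch, not the actual Base implementation), here is a low-accuracy `exp2` built from range reduction, a polynomial kernel evaluated with `evalpoly` (Horner's rule), and exponent-bit manipulation via `ldexp`:

```julia
function exp2_toy(x::Float64)
    # Range reduction: x = k + r with k an integer and r ∈ [-0.5, 0.5],
    # so 2^x = 2^k * 2^r and the kernel only needs a small interval.
    k = round(x)
    r = x - k
    # Polynomial kernel: 2^r = e^(r·log 2) ≈ truncated Taylor series,
    # evaluated efficiently with Horner's rule via evalpoly.
    t = r * log(2.0)
    p = evalpoly(t, (1.0, 1.0, 1/2, 1/6, 1/24, 1/120, 1/720, 1/5040))
    # Bit manipulation: ldexp multiplies by 2^k by adjusting exponent bits.
    return ldexp(p, Int(k))
end
```

The real implementations replace the Taylor coefficients with Remez-optimized ones and use tables to shrink the reduced interval further.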
false
https://pretalx.com/juliacon-2022/talk/S8KUPP/
https://pretalx.com/juliacon-2022/talk/S8KUPP/feedback/
Purple
JuliaSyntax.jl: A new Julia compiler frontend in Julia
Talk
2022-07-28T20:00:00+00:00
20:00
00:30
JuliaSyntax.jl is a new Julia language frontend designed for precise error reporting, speed and flexibility. In this talk we'll tour the JuliaSyntax parser implementation and tree data structures, highlighting benefits for users and tool builders. We'll discuss how to losslessly map Julia source text for character-precise error reporting and how a "parse stream" abstraction cleanly separates the parser from syntax tree creation while being 10x faster than Julia's reference parser.
juliacon-2022-17930-juliasyntax-jl-a-new-julia-compiler-frontend-in-julia
JuliaCon
Claire Foster
en
JuliaSyntax aims to be a complete compiler frontend (parser, data structures and code lowering) designed for the growing needs of the Julia community, split broadly into users, tool authors and core developers.
For users we need *interactivity* and *precision*
* Speed: Bare parsing is 20x faster; parsing to a basic tree 10x faster, and parsing to Expr around 6x faster
* Robustness: The parse tree covers the full source text regardless of syntax errors so partially complete source text can be processed in use cases like editor tooling and REPL completions
* Precision: We map every character of the source text so highlighting of errors or other compiler diagnostics can be fully precise
For tool authors, JuliaSyntax aims to be *accessible* and *flexible*
* Lossless parsing accounts for all source text, including comments and whitespace so that tools can faithfully work with the source code.
* We support Julia source code versions different from the Julia version running JuliaSyntax itself so only one tooling deployment is needed per machine
* JuliaSyntax is hackable and accessible to the community, due to being written in Julia itself
* Layered tree data structures support various use cases from code formatting to semantic analysis.
For core developers, JuliaSyntax aims to provide *familiarity* and *ease of integration*:
* The code mirrors the structure of the flisp parser
* It depends only on Base
* The syntax tree data structures are hackable independently from the parser implementation.
For a detailed description of the package aims and current status, see the source repository documentation on github at https://github.com/JuliaLang/JuliaSyntax.jl#readme
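A minimal taste of the parser API, with names taken from the package README (the interface may still evolve):

```julia
using JuliaSyntax

# Parse one statement into a lossless SyntaxNode tree; unlike Expr, every
# node remembers its exact byte range in the source text, which is what
# enables character-precise error highlighting.
node = JuliaSyntax.parsestmt(JuliaSyntax.SyntaxNode, "f(x) = x + 1")
show(stdout, MIME"text/plain"(), node)
```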
false
https://pretalx.com/juliacon-2022/talk/RKLTEP/
https://pretalx.com/juliacon-2022/talk/RKLTEP/feedback/
Blue
Improvements in package precompilation
Talk
2022-07-28T12:30:00+00:00
12:30
00:30
Julia code can be precompiled to save time loading and/or compiling it on first execution. Precompilation is nuanced because Julia code comes in many flavors, including source text, lowered code, type-inferred code, and various stages of optimization to reach native machine code. We will summarize what has (and hasn't) previously been precompiled, some of the challenges posed by Julia's dynamism, the nature of some recent changes, and prospects for near-term extensions to precompilation.
juliacon-2022-18154-improvements-in-package-precompilation
JuliaCon
Tim Holy
en
Package precompilation occurs when you first use a package or as triggered by changes to your package environment. The goal of precompilation is to re-use work that would otherwise have to be repeated each time you load a raw source file; potentially-saved work includes parsing the source text, type-inference, optimization, generation of LLVM IR, and/or compilation of native code. While there are many cases in computing where a previously-calculated result can be recomputed faster than it can be retrieved from storage, code compilation is not (yet) one such case. Indeed, the time needed for compilation is the dominant contribution to Julia's *latency*, the delay you experience when you first execute a task in a fresh session. In an effort to reduce this latency, Julia has long supported certain forms of precompilation.
Package precompilation occurs in a clean environment with just the package dependencies pre-loaded, and the results are written to disk (*serialization*). When loaded (*deserialization*), the results have to be "spliced in" to a running Julia session which may include a great deal of external code. Several of the most-loved features of Julia---its method polymorphism, aggressive type specialization, and support for dynamic code development allowing redefinition and/or changes in dispatch priority---conspire to make precompilation a significant challenge. Some examples include saving type-specialized code (which types should be precompiled?), code that may be valid in one environment but invalid in another (due to redefinition or having been superseded in dispatch priority), and code that needs to be compiled for types defined in external packages. While lowered code is essentially a direct translation of the raw source text, saving any later form of code requires additional information, specifically the types that methods should be specialized for. This information can be provided manually through explicit `precompile` directives, or indirectly from the state of a session that includes all necessary and/or useful specializations.
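As a concrete example of the manual route, a package can place explicit directives in its module body so the listed specializations are compiled during precompilation rather than on first call (`MyToyPkg` is a hypothetical package, for illustration):

```julia
module MyToyPkg

# A performance-critical function with many possible type specializations.
process(xs::Vector{Float64}) = sum(abs2, xs)

# Explicit directive: type-infer (and, where supported, natively compile)
# this particular specialization at precompile time.
precompile(process, (Vector{Float64},))

end # module
```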
Julia versions prior to 1.8 provide exhaustive support for precompiling lowered code (allowing re-use of the results of parsing). A subset of the results of type-inference could also be precompiled, but in practice much type-inferred code was excluded: it was not possible to save the results of type-inference for any method defined in a different package. That meant it was not possible to save the results of type-inference for new type-specializations of externally-defined methods. Finally, it was not possible to precompile native code except by generating a custom "system image" using a tool like PackageCompiler.
Julia 1.8 introduced the ability to save the results of type-inference for external methods, and thus provides exhaustive support for saving type-inferred code. As a result, packages generally seem to exhibit lower time-to-first task, with the magnitude of the savings varying considerably depending on the relative contributions of inference and native-code generation to experienced latency.
To go beyond these advances, we have begun to build support for serialization and deserialization of native code at package level. Native code would still be stored package-by-package (supporting Julia's famous composability), and this requires the ability to link this code after loading. Different from static languages like C and C++, this linking must be compatible with Julia's dynamic features like late specialization and code-invalidation. We will describe the progress made so far and the steps needed to bring this vision to fruition.
false
https://pretalx.com/juliacon-2022/talk/DUQQLN/
https://pretalx.com/juliacon-2022/talk/DUQQLN/feedback/
Blue
Hunting down allocations with Julia 1.8's Allocation Profiler
Lightning talk
2022-07-28T13:00:00+00:00
13:00
00:10
Ever written code that was too slow because of excessive allocations, but didn't know where in your code they were coming from? Julia 1.8 introduces a new Allocation Profiler for finding and understanding sources of allocations in your julia programs, providing stack traces and type info for allocation hotspots. In this talk we will introduce the allocation profiler, cover how to use it, and talk through a small success story in our own codebase.
juliacon-2022-17982-hunting-down-allocations-with-julia-1-8-s-allocation-profiler
JuliaCon
Nathan Daly, Pete Vilter
en
The Julia 1.8 release includes a sampling Allocation Profiler, producing a profile of sampled allocations from a running program, which you can use to understand, and hopefully reduce, the most expensive allocations in your program. The profiles are best viewed together with PProf.jl, which is a powerful (but complex) visual profile analysis tool.
Using this new profiler to track down and eliminate allocations can help improve performance, but there are some gotchas to keep in mind. What sample rate should you be using to get an accurate view of your program's behavior? How should you interpret the results? How do you navigate pprof's interface? We'll introduce these topics with quick practical guidance for the budding allocation hunters in the audience.
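The basic standard-library workflow looks like this (PProf.jl would then be used to visualize the fetched results):

```julia
using Profile

function churn(n)
    v = Any[]
    for i in 1:n
        push!(v, [i])   # one small array allocated per iteration
    end
    return v
end

churn(10)   # warm up first, so compilation itself isn't profiled

# sample_rate=1 records every allocation; use a lower rate (e.g. 0.01)
# for allocation-heavy production workloads.
Profile.Allocs.clear()
Profile.Allocs.@profile sample_rate=1 churn(1000)
results = Profile.Allocs.fetch()
println(length(results.allocs), " allocations recorded")
```

Each recorded allocation carries a stack trace, size, and type, which is the raw material for finding hotspots.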
false
https://pretalx.com/juliacon-2022/talk/YHYSEM/
https://pretalx.com/juliacon-2022/talk/YHYSEM/feedback/
Blue
HighDimPDE.jl: A Julia package for solving high-dimensional PDEs
Lightning talk
2022-07-28T13:10:00+00:00
13:10
00:10
High-dimensional PDEs cannot be solved with standard numerical methods, as their computational cost increases exponentially in the number of dimensions. This problem, known as the curse of dimensionality, vanishes with HighDimPDE.jl. The package implements novel solvers that can solve non-local nonlinear PDEs in potentially up to 1000 dimensions.
juliacon-2022-17250-highdimpde-jl-a-julia-package-for-solving-high-dimensional-pdes
Victor Boussange
en
High-dimensional partial differential equations (PDEs) arise in a variety of scientific domains including physics, engineering, finance and biology. High-dimensional PDEs cannot be solved with standard numerical methods, as their computational cost increases exponentially in the number of dimensions, a problem known as the curse of dimensionality. HighDimPDE.jl is a Julia package that breaks down the curse of dimensionality in solving PDEs. Building upon the [SciML ecosystem](https://sciml.ai/), the package implements novel solvers that can solve non-local nonlinear PDEs in potentially up to thousands of dimensions. The package already provides two solvers with different pros and cons, and it aims to host more.
In this talk, we first introduce the package, briefly present the two currently implemented solvers, and showcase their advantages with concrete examples.
false
https://pretalx.com/juliacon-2022/talk/ZNEVTB/
https://pretalx.com/juliacon-2022/talk/ZNEVTB/feedback/
Blue
Solving transient PDEs in Julia with Gridap.jl
Lightning talk
2022-07-28T13:20:00+00:00
13:20
00:10
In this talk we present a new feature of Gridap.jl focusing on the solution of transient Partial Differential Equations (PDEs). We will show a new API that: a) leads to weak forms with very simple syntax, b) supports automatic differentiation, c) enables the solution of multi-field and DAE systems, and d) can be used in parallel computing through GridapDistributed.jl. We will showcase the novel features for a variety of applications in fluid and solid dynamics.
juliacon-2022-18143-solving-transient-pdes-in-julia-with-gridap-jl
JuliaCon
Oriol Colomes
en
Gridap is an open-source, finite element (FE) library implemented in the Julia programming language. The main goal of Gridap is to adopt a more modern programming style than existing FE applications written in C/C++ or Fortran in order to simplify the simulation of challenging problems in science and engineering and improve productivity in the research of new discretization methods. The library is a feature-rich general-purpose FE code able to solve a wide range of partial differential equations (PDEs), including linear, nonlinear, and multi-physics problems. Gridap is extensible and modular. One can implement new FE spaces, new reference elements, and use external mesh generators, linear solvers, and visualization tools. In addition, it blends perfectly well with other packages of the Julia package ecosystem, since Gridap is implemented 100% in Julia.
In this presentation we highlight a new feature introduced in Gridap.jl during the last year, a new high-level API that allows the user to simulate complex transient PDEs with very few lines of code. This new API is in line with the distinctive features of Gridap.jl, allowing for the definition of weak forms in a syntax that is very similar to the mathematical notation used in academic works. The new API has a series of notable features: it supports ODEs of arbitrary order (provided a solver for the specific order is implemented), allows automatic differentiation of all the Jacobians associated with the transient problem, enables the solution of multi-field and Differential Algebraic Equation (DAE) systems, and can be used in parallel computing through the extension of the API to the GridapDistributed.jl package.
In JuliaCon2022 we will showcase this novel feature with a number of real applications in fluid and solid dynamics. The applications will include problems resulting in 1st and 2nd order ODEs, problems with constant and time-dependent coefficients, and problems with time-dependent geometries.
false
https://pretalx.com/juliacon-2022/talk/JBVLSK/
https://pretalx.com/juliacon-2022/talk/JBVLSK/feedback/
Blue
Progradio.jl - Projected Gradient Optimization
Lightning talk
2022-07-28T13:30:00+00:00
13:30
00:10
Most mathematical optimization problems are subject to bounds on the decision variables. In general, a nonlinear cost function `f(x)` is to be minimized, with the vector `x` constrained by simple bounds `l <= x <= u`. The *Projected Gradient* class of methods is tailored for this very optimization problem. Our package includes various Projected Gradient methods, fully implemented in Julia. We make use of Julia's iterator interface, allowing for user-defined termination criteria.
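The core iteration is compact; the sketch below (an illustration, not Progradio.jl's API) alternates a gradient step with a clamp-based projection onto the box:

```julia
# Minimize f subject to l .<= x .<= u.
project(x, l, u) = clamp.(x, l, u)

function projected_gradient(∇f, x0, l, u; step=0.1, iters=100)
    x = project(x0, l, u)
    for _ in 1:iters
        x = project(x .- step .* ∇f(x), l, u)   # gradient step, then project
    end
    return x
end

# Example: f(x) = sum((x .- 2).^2) with upper bound 1; the constrained
# optimum sits on the bound at x = [1, 1].
∇f(x) = 2 .* (x .- 2.0)
xstar = projected_gradient(∇f, [0.0, 0.0], [-1.0, -1.0], [1.0, 1.0])
```

Exposing such a loop through the iterator interface lets the caller decide when to stop, instead of hard-coding `iters`.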
juliacon-2022-17956-progradio-jl-projected-gradient-optimization
/media/juliacon-2022/submissions/Z9Y73V/logo256_oniouhN.png
Eduardo M. G. Vila
en
false
https://pretalx.com/juliacon-2022/talk/Z9Y73V/
https://pretalx.com/juliacon-2022/talk/Z9Y73V/feedback/
Blue
Transformer models and framework in Julia
Lightning talk
2022-07-28T13:40:00+00:00
13:40
00:10
An introduction to Transformers.jl and related packages for building transformer models.
juliacon-2022-17281-transformer-models-and-framework-in-julia
JuliaCon
Peter Cheng
en
I will talk about the new API design in Transformers.jl, from the new text encoder built on top of TextEncoderBase.jl, to the new model implementation built on top of NeuralAttentionlib.jl, and the new pretrained-model management API based on HuggingFaceApi.jl.
false
https://pretalx.com/juliacon-2022/talk/KX9NAV/
https://pretalx.com/juliacon-2022/talk/KX9NAV/feedback/
Blue
Automating Reinforcement Learning for Solving Economic Models
Lightning talk
2022-07-28T16:30:00+00:00
16:30
00:10
I present a new package which aims to automate the process of using reinforcement learning to solve discrete-time heterogeneous-agent macroeconomic models. Models with discrete choice, matching, aggregate uncertainty, and multiple locations are supported. The pure-Julia package, tentatively named Bucephalus.jl, also defines a data structure for describing this class of models, allowing new solvers to be easily implemented and models to be defined once and solved many ways.
juliacon-2022-16942-automating-reinforcement-learning-for-solving-economic-models
JuliaCon
Jeffrey Sun
en
Heterogeneous-agent macroeconomic models, though relatively recent in their development, have been applied across macroeconomics, and have contributed to our understanding of inequality, trade, business cycles, migration, epidemics, and the transmission of monetary policy.
Conventional methods of solving these models, which generally require computing policy or value functions on a grid which covers the model's entire state space, are subject to a curse of dimensionality. High-dimensional state spaces make a model infeasible to solve. Using neural networks instead of grids to approximate policy and value functions solves this problem, and has become an important and active area of research. Because these models must be trained by simulating agents and updating based on simulated outcomes, these solution methods are a form of reinforcement learning.
At present, the ability to use reinforcement learning to solve economic models is limited to economists who are also trained in these techniques. Bucephalus.jl aims to make these techniques accessible by automating the process while remaining applicable to a broad class of models. The user describes a model using a simple model description syntax built on Julia macros. The models are then automatically compiled to a standard data structure, to which, in principle, many solvers could then be applied. I present a solver that uses deep reinforcement learning to solve for steady state, impulse responses, and transition paths.
The package furthermore implements reinforcement learning techniques never before applied to this domain, including discrete-choice policy networks and nested generalized moments.
false
https://pretalx.com/juliacon-2022/talk/J7RCP7/
https://pretalx.com/juliacon-2022/talk/J7RCP7/feedback/
Blue
Bender.jl: A utility package for customizable deep learning
Lightning talk
2022-07-28T16:40:00+00:00
16:40
00:10
A wide range of research on feedforward neural networks requires "bending" the chain rule during backpropagation. The package Bender.jl provides neural network layers (compatible with Flux.jl), which give users more freedom to choose every aspect of the forward mapping. This makes it easy to leverage ChainRules.jl to compose a wide range of experiments, such as training binary neural networks, Feedback Alignment, and Direct Feedback Alignment, in just a few lines of code.
juliacon-2022-18092-bender-jl-a-utility-package-for-customizable-deep-learning
JuliaCon
Rasmus Kjær Høier
en
In this lightning talk we will explore two different use cases of [Bender.jl](https://github.com/Rasmuskh/Bender.jl), namely training binary neural networks and training neural networks using the biologically motivated Feedback Alignment algorithm. Binary neural networks and feedback alignment might seem like very different areas of research, but from an implementation point of view they are very similar, as both amount to modifying the chain rule during backpropagation. Implementing a binary neural network requires modifying backpropagation in order to allow non-zero error signals to propagate through binary activation functions, and feedback alignment requires modifying backpropagation to use a set of auxiliary weights for transporting errors backwards (in order to avoid the biologically implausible weight symmetry requirement inherent to backpropagation). By allowing the user to specify the exact nature of the forward mapping when initializing a layer it is possible to leverage ChainRules.jl to easily implement these and similar experiments.
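The common core of both use cases, modifying the backward pass, can be illustrated with a hand-rolled straight-through estimator for a binary activation (a self-contained sketch; in practice one would register this pair as a ChainRules.jl `rrule` so Flux picks it up automatically):

```julia
# Forward pass: hard binarization (non-differentiable almost everywhere).
binarize(x) = sign(x)

# Backward pass, "bent": pass the incoming gradient straight through
# wherever |x| <= 1, and block it elsewhere (the straight-through estimator).
binarize_pullback(x, ȳ) = abs(x) <= 1 ? ȳ : zero(ȳ)
```

Feedback alignment follows the same pattern: its pullback substitutes a fixed random matrix for the transposed forward weights.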
false
https://pretalx.com/juliacon-2022/talk/7S9YZV/
https://pretalx.com/juliacon-2022/talk/7S9YZV/feedback/
Blue
Effortless Bayesian Deep Learning through Laplace Redux
Lightning talk
2022-07-28T16:50:00+00:00
16:50
00:10
Treating deep neural networks probabilistically comes with numerous advantages including improved robustness and greater interpretability. These factors are key to building artificial intelligence (AI) that is trustworthy. A drawback commonly associated with existing Bayesian methods is that they increase computational costs. Recent work has shown that Bayesian deep learning can be effortless through Laplace approximation. This talk presents an implementation in Julia: `BayesLaplace.jl`.
juliacon-2022-18103-effortless-bayesian-deep-learning-through-laplace-redux
JuliaCon
/media/juliacon-2022/submissions/Z7MXFS/intro_uz7fH3F.gif
Patrick Altmeyer
en
#### Problem: Bayes can be costly 😥
Deep learning models are typically heavily underspecified by the data, which makes them vulnerable to adversarial attacks and impedes interpretability. Bayesian deep learning promises an intuitive remedy: instead of relying on a single explanation for the data, we are interested in computing averages over many compelling explanations. Multiple approaches to Bayesian deep learning have been put forward in recent years including variational inference, deep ensembles and Monte Carlo dropout. Despite their usefulness these approaches involve additional computational costs compared to training just a single network. Recently, another promising approach has entered the limelight: Laplace approximation (LA).
#### Solution: Laplace Redux 🤩
While LA was first proposed in the 18th century, it has so far not attracted serious attention from the deep learning community, largely because it involves a possibly large Hessian computation. The authors of this recent [NeurIPS paper](https://arxiv.org/abs/2106.14806) are on a mission to change the perception that LA has no use in DL: they demonstrate empirically that LA can be used to produce Bayesian model averages that are at least on par with existing approaches in terms of uncertainty quantification and out-of-distribution detection, while being significantly cheaper to compute. Our package [`BayesLaplace.jl`](https://github.com/pat-alt/BayesLaplace.jl) provides a light-weight implementation of this approach in Julia that allows users to recover Bayesian representations of deep neural networks in an efficient post-hoc manner.
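The post-hoc recipe is compact: take the MAP weights from ordinary training, evaluate (an approximation of) the loss Hessian there, and use its inverse as the posterior covariance. A one-parameter toy sketch (not `BayesLaplace.jl`'s API):

```julia
# Laplace approximation: p(w | data) ≈ N(w_map, H⁻¹) with H = L''(w_map).
L(w)   = 0.5 * 3.0 * (w - 1.2)^2   # stand-in for a trained network's loss
d2L(w) = 3.0                        # its (here constant) second derivative

w_map = 1.2          # the mode, found by ordinary MAP training
H     = d2L(w_map)   # Hessian of the loss at the mode
σ²    = 1 / H        # posterior variance, recovered post hoc
```

For a deep network, H is the (approximate) Hessian over the weights, which is exactly the "possibly large" computation the paper makes tractable.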
#### Limitations and Goals 🚩
The package functionality is still limited to binary classification models trained in Flux. It also lacks any framework for optimizing with respect to the Bayesian prior. In future work we aim to extend the functionality. We would like to develop a library that is at least on par with an existing Python library: [Laplace](https://aleximmer.github.io/Laplace/). In contrast to that library, we would like to leverage Julia's support for language interoperability to also facilitate applications to deep neural networks trained in other programming languages like Python and R.
#### Further reading 📚
For more information on this topic please feel free to check out my introductory blog post: [[TDS](https://towardsdatascience.com/go-deep-but-also-go-bayesian-ab25efa6f7b)], [[blog](https://www.paltmeyer.com/blog/posts/effortsless-bayesian-dl/)]. Presentation slides can be found [here](https://www.paltmeyer.com/LaplaceRedux.jl/dev/resources/juliacon22/presentation.html#/title-slide).
false
https://pretalx.com/juliacon-2022/talk/Z7MXFS/
https://pretalx.com/juliacon-2022/talk/Z7MXFS/feedback/
Blue
Large-Scale Machine Learning Inference with BanyanONNXRunTime.jl
Lightning talk
2022-07-28T17:00:00+00:00
17:00
00:10
BanyanONNXRunTime.jl is an open-source Julia package for running PyTorch/TensorFlow models on large distributed arrays. In this talk, we show how you can use BanyanONNXRunTime.jl with BanyanDataFrames.jl for running ML models on tabular data and with BanyanImages.jl for running ML models on image data.
juliacon-2022-18081-large-scale-machine-learning-inference-with-banyanonnxruntime-jl
JuliaCon
/media/juliacon-2022/submissions/HPAHBV/image_4_g03MeGW.png
Caleb Winston, Cailin Winston
en
More information about BanyanONNXRunTime.jl can be found on GitHub:
https://github.com/banyan-team/banyan-julia
https://github.com/banyan-team/banyan-julia-examples
false
https://pretalx.com/juliacon-2022/talk/HPAHBV/
https://pretalx.com/juliacon-2022/talk/HPAHBV/feedback/
Blue
SpeedyWeather.jl: A 16-bit weather model with machine learning
Lightning talk
2022-07-28T17:10:00+00:00
17:10
00:10
We present SpeedyWeather.jl, a global atmospheric model currently developed as a prototype for a 16-bit climate model incorporating machine learning for accuracy and computational efficiency on different hardware. SpeedyWeather.jl is designed for type flexibility with low precision, and automatic differentiation to replace parts of the model with neural networks for a more accurate representation of climate processes and computational efficiency.
juliacon-2022-18121-speedyweather-jl-a-16-bit-weather-model-with-machine-learning
JuliaCon
/media/juliacon-2022/submissions/DZFPGX/frame0240_UfuTcsB.png
Milan Klöwer
en
Computational resources are a major limitation to improving reliability in numerical predictions of weather and climate. Most simulations run on conventional CPUs in 64-bit floats, although some weather forecast centres now use 32 bits operationally for higher performance. Successful 16-bit simulations have been previously demonstrated with projects like ShallowWaters.jl, increasing performance by 4x with respect to a 64-bit simulation on Fujitsu’s A64FX CPU. However, it remains to be seen whether these results can also be achieved for global atmospheric models, like those used for weather and climate simulation.
A new model, SpeedyWeather.jl, aims to address this question. As with ShallowWaters.jl, SpeedyWeather.jl aims to support hardware-accelerated low precision arithmetic, yet will be substantially more complex. Much like state-of-the-art numerical weather prediction models, SpeedyWeather.jl includes a “dynamical core” for advancing forward the basic equations describing fluid flow in the Earth’s atmosphere and “parametrizations” for representing physical processes that take place below the scale of the model’s spatial grid, such as the development of clouds from convective updrafts. As such, it is intended to be a simple model for exploring weather and climate simulation in the Julia ecosystem.
SpeedyWeather.jl is, like ShallowWaters.jl, fully type-flexible to support arbitrary number formats for performance and analysis (like Sherlogs.jl) simultaneously. This means the model development is precision-agnostic, which allows us to address the common problems of dynamic range and critical precision loss often incurred from using low-precision number formats. The aim of this project is to develop a prototype towards the first global 16-bit weather and climate models.
Beyond numerical weather prediction, low-precision arithmetic is now routinely used in deep learning and neural networks. SpeedyWeather.jl is developed so that entire parts of the model may be replaced by artificial neural networks, thereby complementing conventional physics-based climate modelling with a data-driven approach. Such “hybrid” climate models promise to improve the representation of climate processes that are conventionally poorly resolved, either by training against higher resolution simulations or simulations based on more sophisticated, yet expensive, algorithms. In addition, hybrid models offer the prospect of fitting climate models to observational data. In order to train the neural network components of the model, SpeedyWeather.jl aims to be fully differentiable using automatic differentiation. Implementing parts of weather and climate models with artificial neural networks can also improve computational efficiency and facilitate low precision linear algebra. This talk presents the concept, implementation details, challenges and first results in the development of SpeedyWeather.jl towards a hybrid model incorporating both differential equation solvers and machine learning.
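The type-flexible design described above can be sketched in a few lines: when the numerics are written generically over the element type, the same code runs at any precision. The snippet below is only an illustration of the concept with a toy relaxation step, not SpeedyWeather.jl's actual code:

```julia
# Sketch of precision-agnostic model code: one forward-Euler step of the
# linear relaxation du/dt = -u/τ, written generically over the float type T.
# The identical code path runs in Float64, Float32 and Float16.
function step(u::AbstractVector{T}, dt::T, τ::T) where {T<:AbstractFloat}
    return u .- dt .* u ./ τ
end

for T in (Float64, Float32, Float16)
    u = T[1, 2, 3]
    println(T, " => ", step(u, T(0.1), T(1)))   # same algorithm, different precision
end
```

Because no precision is hard-coded, number formats can be swapped for performance experiments or for analysis types that log precision loss.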
Co-Authors:
- Tom Kimpson (University of Oxford, UK)
- Alistair White and Maximilian Gelbrecht (Potsdam Institute for Climate Impact Research and Technical University of Munich, Germany)
- Sam Hatfield (European Centre for Medium-Range Weather Forecasts, Reading, UK)
false
https://pretalx.com/juliacon-2022/talk/DZFPGX/
https://pretalx.com/juliacon-2022/talk/DZFPGX/feedback/
Blue
ExplainableAI.jl: Interpreting neural networks in Julia
Lightning talk
2022-07-28T17:20:00+00:00
17:20
00:10
In pursuit of interpreting black-box models such as deep image classifiers, a number of techniques have been developed that attribute and visualize the importance of input features with respect to the output of a model.
ExplainableAI.jl brings several of these methods to Julia, building on top of primitives from the Flux ecosystem. In this talk, we will give an overview of current features and show how the package can easily be extended, allowing users to implement their own methods and rules.
juliacon-2022-17973-explainableai-jl-interpreting-neural-networks-in-julia
JuliaCon
Adrian Hill
en
false
https://pretalx.com/juliacon-2022/talk/MFU9MN/
https://pretalx.com/juliacon-2022/talk/MFU9MN/feedback/
Blue
Training Spiking Neural Networks in pure Julia
Lightning talk
2022-07-28T17:30:00+00:00
17:30
00:10
Training artificial neural networks to recapitulate the dynamics of biological neuronal recordings has become a prominent tool to understand computations in the brain. We present an implementation of a recursive-least squares algorithm to train units in a recurrent spiking network. Our code can reproduce the activity of 50,000 neurons of a mouse performing a decision-making task in less than an hour of training time. It can scale to a million neurons on a GPU with 80 GB of memory.
juliacon-2022-17748-training-spiking-neural-networks-in-pure-julia
JuliaCon
Ben Arthur, Christopher Kim
en
Spiking networks that operate in a fluctuation driven regime are a common way to model brain activity. Individual neurons within a spiking artificial neural network are trained to reproduce the spiking activity of individual neurons in the brain. In doing so, they capture the structured activity patterns of the recorded neurons, as well as the spiking irregularities and the trial-to-trial variability. Such trained networks can be analyzed in silico to gain insights into the dynamics and connectivity of cortical circuits underlying the recorded neural activity that would be otherwise difficult to obtain in vivo.
The number of simultaneously recorded neurons in behaving animals has been increasing in the last few years at an exponential rate. It is now possible to simultaneously record from about 1000 neurons using electrophysiology in behaving animals, and up to 100,000 using calcium imaging. When combining several sessions of recordings, the amount of data becomes huge and could grow to millions of recorded neurons in the next few years. There is a need then for fast algorithms and code bases to train networks of spiking neurons on ever larger data sets.
Here we use a recursive least-squares training algorithm (RLS; also known as FORCE; Sussillo and Abbott, 2009), adapted for spiking networks (Kim and Chow 2018, 2021), which uses an on-line estimation of the inverse covariance matrix between connected neurons to update the strength of plastic synapses. We make the code more performant through a combination of data parallelism, leveraging of BLAS, use of symmetric packed arrays, reduction in storage precision, and refactoring for GPUs.
Our goal is to train the synaptic current input to each neuron such that the resulting spikes follow the target activity pattern over an interval in time. We use a leaky integrate-and-fire neuron model with current-based synapses. The peri-stimulus time histograms of the spike trains are converted to the equivalent target synaptic currents using the transfer function of the neuron model. We treat every neuron’s synaptic current as a read-out, which makes our task equivalent to training a recurrently connected read-out for each neuron. Since a neuron's synaptic current can be expressed as a weighted sum of the spiking activities of its presynaptic neurons, we adjust the strength of the incoming synaptic connections by the RLS algorithm in order to generate the target activity.
This training scheme allows us to set up independent objective functions for each neuron and to update them in parallel. A CPU version of the algorithm partitions the neurons onto threads and uses the standard BLAS libraries to perform the matrix operations. As the vast majority of memory is consumed by the inverse covariance matrix, larger models can be accommodated by reducing precision for all state variables and using a packed symmetric matrix for the covariance (see SymmetricFormats.jl for the SymmetricPacked type definition). These memory-use optimizations have the added benefit of being faster. For the GPU version, custom batched BLAS kernels were written for packed symmetric matrices (see BatchedBLAS.jl for the batched_spmv! and batched_spr! functions).
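The packed-symmetric storage mentioned above can be illustrated with Julia's built-in BLAS wrappers: only the n(n+1)/2 entries of the upper triangle are stored, roughly halving memory, and `LinearAlgebra.BLAS.spmv!` computes a symmetric matrix-vector product directly from that packed vector. The `pack_upper` helper below is illustrative, not the SymmetricFormats.jl API:

```julia
using LinearAlgebra

# Pack the upper triangle of a symmetric matrix column by column,
# storing n(n+1)/2 numbers instead of n^2 (roughly halving memory).
function pack_upper(A::AbstractMatrix{T}) where T
    n = size(A, 1)
    AP = Vector{T}(undef, n * (n + 1) ÷ 2)
    k = 1
    for j in 1:n, i in 1:j      # column-major order over the upper triangle
        AP[k] = A[i, j]
        k += 1
    end
    return AP
end

A = [4.0 1.0 2.0;
     1.0 5.0 3.0;
     2.0 3.0 6.0]               # symmetric
x = [1.0, 2.0, 3.0]
y = zeros(3)

AP = pack_upper(A)
BLAS.spmv!('U', 1.0, AP, x, 0.0, y)  # y = A*x using only the packed triangle
@assert y ≈ A * x
```

The batched GPU kernels mentioned above apply the same packed layout to many such matrices at once.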
We benchmarked on synthetic targets consisting of sinusoids with identical frequencies and random phases. For a model with one million neurons, 512 static connections per neuron, and 45 plastic connections per neuron, the CPU code took 1260 seconds per training iteration on a 48-core Intel machine and the GPU code took 48 seconds on an Nvidia A100. For this connectivity pattern, one million is the largest number of neurons (within a factor of two) that could fit in the 80 GB GPU. The CPU cores, with 768 GB of RAM, accommodated four million neurons with this connectivity.
We also tested our algorithm's ability to learn real target functions using 50,000 neurons recorded in five different brain regions from a mouse performing a decision-making task. The recording intervals were 3 sec long and spike rates averaged 7 Hz. Replacing the static connections with random Gaussian noise and using 256 plastic connections, the model achieved a correlation of 0.8 between the desired and learned currents in 30 minutes of training time.
Our work enables one to train spiking recurrent networks to reproduce the spiking activity of huge data sets of recorded neurons in a reasonable amount of time. By doing so, it facilitates analyzing the relations between connectivity patterns, network dynamics and brain functions in networks of networks in the brain. We also introduce two new Julia packages to better support packed symmetric matrices.
false
https://pretalx.com/juliacon-2022/talk/UTTHUM/
https://pretalx.com/juliacon-2022/talk/UTTHUM/feedback/
Blue
Simple Chains: Fast CPU Neural Networks
Lightning talk
2022-07-28T17:40:00+00:00
17:40
00:10
SimpleChains is an open source pure-Julia machine learning library developed by PumasAI and JuliaComputing in collaboration with Roche and the University of Maryland, Baltimore.
It is specialized for relatively small models and NeuralODEs, attaining best-in-class performance for these problems. The performance advantage remains significant when scaling to tens of thousands of parameters, where it is still >5x faster than Flux or PyTorch when all run on a CPU, and it even outperforms GPUs.
juliacon-2022-18130-simple-chains-fast-cpu-neural-networks
JuliaCon
Chris Elrod
en
SimpleChains is a pure-Julia library that is simple in two ways:
1. All kernels are simple loops (it leverages LoopVectorization.jl for performance).
2. It only supports simple (feedforward) neural networks.
It additionally manages memory manually, and currently relies on hand written pull back definitions.
In combination, these allow it to be 50x faster than Flux training an MNIST example on a 10980XE.
This talk will focus on introducing the library, showing off a few examples, and explaining some of the "why" behind its performance.
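To illustrate the "all kernels are simple loops" point above, here is a dense layer written as plain nested loops. This is a conceptual sketch, not SimpleChains' internals, and it omits the `@turbo` annotation from LoopVectorization.jl that would vectorize such a loop:

```julia
# A dense layer y = tanh.(W*x .+ b) as a simple loop nest. SimpleChains
# compiles kernels of this shape with LoopVectorization.jl; this dependency-free
# version shows the structure only.
function dense!(y, W, x, b)
    for i in eachindex(y)
        acc = b[i]
        for j in eachindex(x)
            acc += W[i, j] * x[j]
        end
        y[i] = tanh(acc)
    end
    return y
end

W = [0.5 -0.25; 0.1 0.3]
b = [0.0, 0.1]
x = [1.0, 2.0]
y = similar(b)
dense!(y, W, x, b)
@assert y ≈ tanh.(W * x .+ b)   # matches the vectorized formulation
```

Keeping kernels this simple is what lets the compiler specialize and vectorize them aggressively for small, fixed-size problems.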
false
https://pretalx.com/juliacon-2022/talk/9RFTHY/
https://pretalx.com/juliacon-2022/talk/9RFTHY/feedback/
Blue
Automated Geometric Theorem Proving in Julia
Lightning talk
2022-07-28T19:00:00+00:00
19:00
00:10
This talk introduces [GeometricTheoremProver.jl](https://github.com/lucaferranti/GeometricTheoremProver.jl), a Julia package for automated deduction in Euclidean geometry. The talk will give a short overview of geometric theorem proving concepts and hands-on demos on how to use the package to write and prove statements in Euclidean geometry. A roadmap of the package for future development plans will also be presented.
juliacon-2022-18085-automated-geometric-theorem-proving-in-julia
JuliaCon
Luca Ferranti
en
Geometry has been central in formal reasoning, with Euclid's *Elements* being the first example of an axiomatic system. Centuries later, Euclidean geometry is still central, e.g. in mathematical education as an introduction to formal proofs. To make things more exciting, Tarski proved in the early 1900s that Euclidean geometry is decidable, that is, a computer program should be able to answer questions like "is this statement true?". This opened several interesting questions: How can I *efficiently* prove statements in Euclidean geometry? Can I generate *readable* proofs? During the talk, I will touch on those questions while introducing [GeometricTheoremProver.jl](https://github.com/lucaferranti/GeometricTheoremProver.jl), a package for automated reasoning in Euclidean geometry written fully in Julia. The talk will combine a short overview of geometric theorem proving concepts with hands-on demos on how to use the package to write and prove statements in Euclidean geometry. Finally, the talk will also present a roadmap for the package, hopefully giving pointers to the interested listener on how to contribute.
false
https://pretalx.com/juliacon-2022/talk/DFYH73/
https://pretalx.com/juliacon-2022/talk/DFYH73/feedback/
Blue
SIMD-vectorized implementation of high order IRK integrators
Lightning talk
2022-07-28T19:10:00+00:00
19:10
00:10
We present a preliminary version of a SIMD-vectorized implementation of the sixteenth order 8-stage implicit Runge-Kutta integrator IRKGL16 implemented in the Julia package IRKGaussLegendre.jl. For numerical integrations of typical non-stiff problems performed in double precision, we show that a vectorized implementation of IRKGL16 that exploits the SIMD-based parallelism can clearly outperform high order explicit Runge-Kutta schemes available in the standard package DifferentialEquations.jl.
juliacon-2022-18156-simd-vectorized-implementation-of-high-order-irk-integrators
JuliaCon
Mikel, Joseba Makazaga, Ander Murua
en
We present a preliminary version of a SIMD-vectorized implementation of the sixteenth order implicit Runge-Kutta integrator IRKGL16 implemented in the Julia package IRKGaussLegendre.jl.
The solver IRKGL16 is an implicit Runge-Kutta integrator of collocation type based on the Gauss-Legendre quadrature formula of 8 nodes. It is intended for high precision numerical integration of non-stiff systems of ordinary differential equations. In its sequential implementation, the scheme has interesting properties (symplecticness and time-symmetry) that make it particularly useful for long-term integrations of conservative problems. Such properties are also very useful for Scientific Machine Learning applications, as gradients can be exactly calculated by integrating backward in time the adjoint equations.
For numerical integration of typical non-stiff problems with very high accuracy, beyond the precision offered by double precision (i.e., standard IEEE binary64 floating-point) arithmetic, our sequential implementation of IRKGL16 is more efficient than high order explicit Runge-Kutta schemes implemented in the standard package DifferentialEquations.jl. However, our sequential implementation of IRKGL16 is generally unable to outperform them in double precision arithmetic.
We show that a vectorized implementation of IRKGL16 that exploits the SIMD-based parallelism offered by modern processors can be more efficient than high order explicit Runge-Kutta methods even for double precision computations. We demonstrate this by comparing our vectorized implementation of IRKGL16 with a 9th order explicit Runge-Kutta method (Vern9 from DifferentialEquations.jl) on different benchmark problems.
Our current implementation (https://github.com/mikelehu/IRKGL_SIMD.jl) depends on the Julia package SIMD.jl to efficiently perform computations on vectors with eight Float64 numbers. The right-hand side of the system of ODEs to be integrated has to be implemented as a generic function defined in terms of the arithmetic operations and elementary functions implemented for vectors in the package SIMD.jl. The state variables must be collected in an array of Float64 or Float32 floating point numbers. The SIMD-based vectorization process is performed automatically under the hood.
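The requirement above, that the right-hand side be generic, can be illustrated with a toy example: a function written only in terms of generic arithmetic works unchanged for Float64 and Float32, and under the same constraint would equally accept SIMD.jl's `Vec` types, letting eight copies of the system be evaluated per call. The Lotka-Volterra-style ODE below is a hypothetical stand-in, not one of our benchmark problems:

```julia
# A type-generic RHS: only arithmetic operations that SIMD.jl also defines
# for its Vec types. The same method works for Float64, Float32, and
# (with SIMD.jl loaded) Vec{8,Float64} arguments, with no code changes.
function f(u1::T, u2::T) where T
    du1 = u1 - u1 * u2
    du2 = u1 * u2 - u2
    return du1, du2
end

@show f(1.5, 0.5)       # Float64
@show f(1.5f0, 0.5f0)   # Float32, identical code path
```

The vectorization then happens "under the hood": the integrator simply calls the same generic RHS with vector-valued arguments.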
false
https://pretalx.com/juliacon-2022/talk/VAZYBR/
https://pretalx.com/juliacon-2022/talk/VAZYBR/feedback/
Blue
Zero knowledge proofs of shuffle with ShuffleProofs.jl
Lightning talk
2022-07-28T19:20:00+00:00
19:20
00:10
Many remote electronic voting systems use the ElGamal re-encryption mixnet as the foundation of their design, motivated by a number of ways authorities can be held accountable. In particular, zero-knowledge proofs of shuffle as implemented in the Verificatum library offer an elegant and well-established solution. In ShuffleProofs.jl, I implement a Verificatum-compatible verifier and prover for non-interactive zero-knowledge proofs of shuffle, making it more accessible, as I shall demonstrate.
juliacon-2022-17676-zero-knowledge-proofs-of-shuffle-with-shuffleproofs-jl
JuliaCon
Janis Erdmanis
en
Zero-knowledge proofs (ZKP) are the key for making distributed applications privacy-preserving while keeping participants accountable. Widely used in remote electronic voting system designs and cryptocurrencies, they are still hard to understand, tinker with and thus are accessible only to a tiny minority of skilled cryptographers, dampening the creation of new innovative solutions.
An exciting ZKP application is making a re-encryption mix in the ElGamal cryptosystem accountable for not adding, removing, or modifying ciphertexts. While multiple protocols exist for the purpose, none is as well-tested as the Wikström-Terelius variant implemented in the Verificatum library, used to make election systems verifiable in Estonia, Norway, Switzerland and elsewhere. But it is far from ideal to tinker with, as it is implemented in Java. In ShuffleProofs.jl, I implement a Verificatum-compatible non-interactive zero-knowledge verifier and prover for correct re-encryption, improving its accessibility for non-practitioners.
To demonstrate the usefulness and bring every listener on the same line, I shall discuss a most typical ElGamal voting system used widely as foundations for many designs representing it in only 30 lines of Julia code. After discussing the properties of the system, I will demonstrate how to add verifiability so that even if an adversary controlled the re-encryption mix server, it would not be able to add, remove or modify votes without being noticed.
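To give a flavour of the kind of compact construction mentioned above, here is a toy ElGamal encryption with re-encryption over Z_p^*, using deliberately tiny, insecure parameters. This is an illustrative sketch of the cryptosystem only, not the ShuffleProofs.jl API:

```julia
# Toy ElGamal over Z_p^* (insecure parameters, for illustration only).
const p, g = 23, 5                  # 5 is a primitive root mod 23
const x = 6                         # secret key
const h = powermod(g, x, p)         # public key h = g^x mod p

encrypt(m, r) = (powermod(g, r, p), mod(m * powermod(h, r, p), p))

# Re-encryption multiplies by a fresh encryption of 1: the plaintext is
# unchanged, but the new ciphertext is unlinkable to the original.
reencrypt((a, b), r) = (mod(a * powermod(g, r, p), p),
                        mod(b * powermod(h, r, p), p))

decrypt((a, b)) = mod(b * powermod(a, p - 1 - x, p), p)

c  = encrypt(7, 3)                  # ciphertext of the "vote" 7
c2 = reencrypt(c, 4)                # looks unrelated to c...
@assert c2 != c && decrypt(c2) == 7 # ...but decrypts to the same vote
```

The accountability problem is exactly that the mix server performing `reencrypt` could silently swap plaintexts; a zero-knowledge proof of shuffle rules this out without revealing the permutation or the randomness.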
I shall also demonstrate how we can use the ShuffleProofs.jl to verify Verificatum generated proofs of shuffle, which can help independent auditors to verify real elections on the field. In addition, I shall touch a bit on how one can implement their own verifier as a finite state machine making ShuffleProofs.jl futureproof with all sorts of implementations. Lastly, I will recap and articulate some practices on how zero-knowledge proofs can be implemented in Julia and how they could be made accessible for wider audiences to tinker with.
false
https://pretalx.com/juliacon-2022/talk/XJTDWH/
https://pretalx.com/juliacon-2022/talk/XJTDWH/feedback/
Blue
MagNav.jl: airborne Magnetic anomaly Navigation
Lightning talk
2022-07-28T19:30:00+00:00
19:30
00:10
MagNav.jl is an open-source Julia package that contains a full suite of tools for aeromagnetic compensation and airborne magnetic anomaly navigation. This talk will describe the high-level functionalities of the package, then provide a brief tutorial using real flight data that is available within the package. The functionalities can be divided into the four essential components of MagNav: sensors (flight data), magnetic anomaly maps, aeromagnetic compensation models, and navigation algorithms.
juliacon-2022-18016-magnav-jl-airborne-magnetic-anomaly-navigation
JuliaCon
Deleted User
en
false
https://pretalx.com/juliacon-2022/talk/8GLBMW/
https://pretalx.com/juliacon-2022/talk/8GLBMW/feedback/
Blue
Validating a tsunami model for coastal inundation
Lightning talk
2022-07-28T19:40:00+00:00
19:40
00:10
How do we trust that a given fluid model is suitable for simulating water waves as they approach and wash over the land? This talk presents some of the benchmark tests used to validate a tsunami model. Using our [Julia implementation of a fluid model](https://github.com/justinmimbs/WaveTank.jl), we check how well it conserves mass, matches analytical solutions, and reproduces laboratory experiments.
juliacon-2022-17879-validating-a-tsunami-model-for-coastal-inundation
JuliaCon
Justin Mimbs
en
A computational model of a physical process requires careful validation before it can be trusted to tell something useful about the world. In this research, we did not set out to formulate a new model, but to implement an existing formulation in Julia. The tsunami model presented by Yamazaki et al. in [1] is a depth-averaged, nonhydrostatic fluid model with a free surface, capable of simulating tsunami waves as they transform and run up on land. Though the authors presented their validation results, we still needed a suite of tests to help verify that our implementation matches the specification, and that it is suitable for our application area.
In this talk, we will explore the following kinds of validation tests for numerical tsunami models by walking through examples with our Julia implementation.
- Conservation of mass
- Solution convergence
- Comparison to analytical solutions
- Comparison to laboratory experiments
As a first-principles measure of validity, a fluid model needs to conserve mass--that is, as the model progresses over time, there should always be the same amount of fluid in the model.
Another basic test of a numerical model is solution convergence. It is necessary to discretize space and time for a fluid model, and as the resolution increases (i.e., as the discretization size decreases) it is expected that the solutions converge.
Centuries of study of fluid mechanics have provided analytical solutions to many idealized wave scenarios. These are useful for comparing against numerical models. We will look at the translation of a solitary wave (a wave that propagates without changing shape).
Analytical wave theories can't describe all the ways that waves interact with complex bottom surfaces, so next we turn to laboratory experiments. Over recent decades, researchers have performed experiments in large wave tanks, generating waves for various scenarios and measuring the effects. We recreate several laboratory experiments with our model and compare the results.
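The conservation-of-mass test described first can be sketched with a toy stand-in (not the WaveTank.jl model): advect a depth field with a first-order upwind scheme on a periodic grid and check that the total is unchanged by time stepping, since the fluxes telescope around the periodic boundary:

```julia
# Toy conservation-of-mass check: first-order upwind advection of a
# "water depth" field on a periodic 1D grid. With periodic boundaries the
# flux terms cancel in the sum, so total mass is conserved to roundoff.
function advect!(h, c)               # c = CFL number, 0 < c ≤ 1
    n = length(h)
    h0 = copy(h)
    for i in 1:n
        im = i == 1 ? n : i - 1      # periodic boundary
        h[i] = h0[i] - c * (h0[i] - h0[im])
    end
    return h
end

h = [1.0 + exp(-((i - 32) / 8)^2) for i in 1:64]   # a bump on flat depth
mass0 = sum(h)
for _ in 1:1000
    advect!(h, 0.5)
end
@assert isapprox(sum(h), mass0; rtol = 1e-12)      # mass conserved
```

The real model applies the same bookkeeping in two dimensions with a moving wet/dry boundary, which is precisely where conservation bugs tend to hide.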
----
[1] Yamazaki, Y., Kowalik, Z. and Cheung, K.F. (2009), Depth-integrated, non-hydrostatic model for wave breaking and run-up. International Journal for Numerical Methods in Fluids, 61: 473-497. https://doi.org/10.1002/fld.1952
false
https://pretalx.com/juliacon-2022/talk/UTT8SE/
https://pretalx.com/juliacon-2022/talk/UTT8SE/feedback/
Blue
JCheck.jl: Randomized Property Testing Made Easy
Lightning talk
2022-07-28T19:50:00+00:00
19:50
00:10
JCheck is a native Julia implementation of a randomized property testing (RPT) framework. It aims at integrating as seamlessly as possible to the Test.jl package in order to enable developers to easily use RPT along with more "traditional" approaches. Although a fair number of generators are included, designing novel ones for custom data types is a straightforward process. Additional features such as shrinkage and specification of "special" non-random input are available.
juliacon-2022-18150-jcheck-jl-randomized-property-testing-made-easy
JuliaCon
/media/juliacon-2022/submissions/U7WAAD/logo_JciFPRn.svg
Patrick Fournier
en
[Slides](https://www.patrickfournier.ca/juliacon2022/)
[JuliaHub](https://juliahub.com/ui/Packages/JCheck/xkdfQ/)
Since Julia's main purpose is technical computing, we believe its users could benefit from an easy-to-use framework for property testing. A lot of Julia code is in fact implementation of various kinds of abstract objects for which at least some theoretical properties are known beforehand. While randomized property testing alone is not usually sufficient for serious software development, it might definitely be a great addition to a battery of tests. For that reason, we designed JCheck.jl so it integrates seamlessly with Test.jl.
To make JCheck agreeable to use, a lot of care has been put into the efficiency of the input generation process. Random inputs are reused to reduce the number of generated data to a minimum. "Built-in" generators which can be used as a building block for more complex ones have been designed to be as efficient as possible.
JCheck can be extended to support custom types in two ways. Unions of types for which generators are implemented are supported automatically. More intricate types are supported through method dispatch. Note that it is trivial to define a generator for a type for which we can already generate random instances.
JCheck supports so-called "special cases", i.e. non-random cases that are always checked.
In order to make the analysis of failing cases easier, JCheck supports shrinking. When a failing case is detected, it will try to make it as simple as possible. Whether shrunk or not, failing cases can be serialized to a file to make further investigation easier.
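The core idea of randomized property testing can be shown with nothing but the Test standard library; the snippet below illustrates the concept of random inputs plus an always-checked special case, not JCheck's actual API:

```julia
using Test

# Randomized property testing in miniature: sample many random inputs and
# assert a property that should hold for all of them.
property(v) = reverse(reverse(v)) == v

@testset "reverse is an involution" begin
    for _ in 1:100
        v = rand(rand(0:20))         # random length, random contents
        @test property(v)
    end
    @test property(Float64[])        # a "special case" checked on every run
end
```

JCheck wraps this pattern in reusable generators, special-case registration, and shrinking of failing inputs, while reporting through the same `Test` machinery.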
false
https://pretalx.com/juliacon-2022/talk/U7WAAD/
https://pretalx.com/juliacon-2022/talk/U7WAAD/feedback/
Blue
Juliaup - The Julia installer and version multiplexer
Lightning talk
2022-07-28T20:00:00+00:00
20:00
00:10
This talk will present a deep dive into juliaup, the upcoming new official Julia installer and version multiplexer. The talk will give a brief presentation of the features of Juliaup, and then dive into design decisions, integration with existing system package managers, and an outlook on planned future work.
juliacon-2022-18129-juliaup-the-julia-installer-and-version-multiplexer
JuliaCon
David Anthoff
en
false
https://pretalx.com/juliacon-2022/talk/J9AX9Y/
https://pretalx.com/juliacon-2022/talk/J9AX9Y/feedback/
Blue
Contributing to Open Source with Technical Writing.
Lightning talk
2022-07-28T20:10:00+00:00
20:10
00:10
The goal of this talk is to enlighten members of the Julia ecosystem on how they can make an impact by contributing to open source with technical writing. While this talk is targeted at beginners, there will be something for even the more experienced members.
juliacon-2022-17976-contributing-to-open-source-with-technical-writing-
JuliaCon
Ifihanagbara Olusheye
en
This talk will cover two main topics: Open Source and Technical Writing. It will also explore how they relate to each other. The following aspects will be touched on:
- What Open Source is
- What Technical Writing is
- How to contribute to Open Source with Technical Writing
- How it helps the Julia Ecosystem
- Steps on beginning a Technical Writing Journey
- Tips
false
https://pretalx.com/juliacon-2022/talk/8SWDQG/
https://pretalx.com/juliacon-2022/talk/8SWDQG/feedback/
BoF
JuliaGPU
Birds of Feather
2022-07-28T12:30:00+00:00
12:30
00:45
The JuliaGPU community welcomes both long-standing contributors and newcomers to a birds-of-a-feather event on the state of the JuliaGPU ecosystem.
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel in discord.
Voice your feedback and experiences.
juliacon-2022-18730-juliagpu
JuliaCon
Valentin Churavy, Tim Besard
en
false
https://pretalx.com/juliacon-2022/talk/RHYB8M/
https://pretalx.com/juliacon-2022/talk/RHYB8M/feedback/
BoF
Julia in HPC
Birds of Feather
2022-07-28T16:30:00+00:00
16:30
01:30
The Julia HPC community has been growing over the last years with monthly meetings to coordinate development and to solve problems arising in the use of Julia for high-performance computing.
The Julia in HPC Birds of a Feather is an ideal opportunity to join the community and to discuss your experiences with using Julia in an HPC context.
**Note:** We will host the BoF via Zoom and share the meeting link 15 min before start time in the #hpc channels of JuliaCon Discord and Julia Slack.
juliacon-2022-18063-julia-in-hpc
JuliaCon
Valentin Churavy, Johannes Blaschke, Michael Schlottke-Lakemper
en
false
https://pretalx.com/juliacon-2022/talk/QVESXM/
https://pretalx.com/juliacon-2022/talk/QVESXM/feedback/
BoF
BoF - JuliaLang en Español
Birds of Feather
2022-07-28T19:00:00+00:00
19:00
01:30
Congratulations, the time has finally come for a dedicated forum for Spanish-speaking JuliaLang users.
We will discuss:
- forums and hubs where Julia is used in Spanish
- educational materials (courses, books, articles, video tutorials), and future plans
- diversity, inclusion, and support for Spanish speakers
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel in discord.
juliacon-2022-16548-bof-julialang-en-espaol
JuliaCon
Miguel Raz Guzmán Macedo, Pamela Alejandra Bustamante Faúndez, Agustín Covarrubias, Argel Ramírez Reyes
en
We will open a space for discussion where Spanish-speaking JuliaLang users can meet each other, share, and coordinate so that the community of Spanish-speaking Julia users can grow. No matter your level of proficiency with Julia, everyone is welcome here.
We will try to spend 15-20 minutes on each topic and allow feedback from everyone, so that we end the meeting with concrete goals through a structured and moderated discussion.
false
https://pretalx.com/juliacon-2022/talk/7M3BKA/
https://pretalx.com/juliacon-2022/talk/7M3BKA/feedback/
JuMP
Improving nonlinear programming support in JuMP
Talk
2022-07-28T16:30:00+00:00
16:30
00:30
In JuMP 1.0, support for nonlinear programming is a second-class citizen. You must use the separate `@NL` macros, the automatic differentiation engine is a JuMP-specific implementation that cannot be swapped for alternative implementations, and vector-valued nonlinear expressions are not supported. In this talk, we discuss our plans and progress to address these issues and make nonlinear programming a first-class citizen. This work is supported by funding from Los Alamos National Laboratory.
juliacon-2022-17229-improving-nonlinear-programming-support-in-jump
JuMP
Oscar Dowson
en
false
https://pretalx.com/juliacon-2022/talk/XBX9BH/
https://pretalx.com/juliacon-2022/talk/XBX9BH/feedback/
JuMP
Benchmarking Nonlinear Optimization with AC Optimal Power Flow
Talk
2022-07-28T17:00:00+00:00
17:00
00:30
This work discusses some of the requirements for deploying non-convex nonlinear optimization methods to solve large-scale problems in practice. AC Optimal Power Flow is proposed as a proxy-application for testing the viability of nonlinear optimization frameworks for solving such problems. The current performance of several Julia frameworks for nonlinear optimization is evaluated using a standard benchmark library for AC Optimal Power Flow.
juliacon-2022-16937-benchmarking-nonlinear-optimization-with-ac-optimal-power-flow
JuMP
Carleton Coffrin
en
The AC Optimal Power Flow problem (AC-OPF) is one of the most foundational optimization problems that arises in the design and operations of power networks. Mathematically the AC-OPF is a large-scale, sparse, non-convex nonlinear continuous optimization problem. In practice AC-OPF is most often solved to local optimality conditions using interior point methods. This project proposes AC-OPF as a _proxy-application_ for testing the viability of different nonlinear optimization frameworks, as performant solutions to AC-OPF have proven to be a necessary (but not always sufficient) condition for solving a wide range of industrial network optimization tasks.
### Objectives
* Communicate the technical requirements for solving real-world continuous non-convex mathematical optimization problems.
* Highlight scalability requirements for the problem sizes that occur in practice.
* Provide a consistent implementation for solving AC-OPF in different nonlinear optimization frameworks.
### AC-OPF Implementations
This work adopts the mathematical model and data format that is used in the IEEE PES benchmark library for AC-OPF, [PGLib-OPF](https://github.com/power-grid-lib/pglib-opf). The Julia package [PowerModels](https://github.com/lanl-ansi/PowerModels.jl) is used for parsing the problem data files and making standard data transformations.
The implementations of the AC-OPF problem in various Julia NonLinear Programming (NLP) frameworks are available in the [Rosetta-OPF](https://github.com/lanl-ansi/rosetta-opf) project, which currently includes implementations in [JuMP](https://github.com/jump-dev/JuMP.jl), [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl), [Nonconvex](https://github.com/JuliaNonconvex/Nonconvex.jl), [Optim](https://github.com/JuliaNLSolvers/Optim.jl), and [Optimization](https://github.com/SciML/Optimization.jl). This work reports on the solution quality and runtime of solving the PGLib-OPF datasets with each of these NLP frameworks.
false
https://pretalx.com/juliacon-2022/talk/XQMLCH/
https://pretalx.com/juliacon-2022/talk/XQMLCH/feedback/
JuMP
Advances in Transformations and NLP Modeling for InfiniteOpt.jl
Talk
2022-07-28T17:30:00+00:00
17:30
00:30
InfiniteOpt.jl is built on a unifying abstraction for infinite-dimensional optimization problems that enables it to tackle a wide variety of problems in innovative ways. We present recent advances to InfiniteOpt.jl that significantly increase its flexibility to model and solve these challenging problems. We have developed a general transformation API to facilitate diverse solution methodologies, and we have created an intuitive nonlinear interface that overcomes the current shortcomings of JuMP.jl.
juliacon-2022-17243-advances-in-transformations-and-nlp-modeling-for-infiniteopt-jl
JuMP
Joshua Pulsipher
en
false
https://pretalx.com/juliacon-2022/talk/LEG8TJ/
https://pretalx.com/juliacon-2022/talk/LEG8TJ/feedback/
JuMP
The JuliaSmoothOptimizers (JSO) Organization
Talk
2022-07-28T19:00:00+00:00
19:00
00:30
The JSO organization is a set of Julia packages for smooth and nonsmooth optimization and numerical linear algebra, intended to work consistently together and exploit the structure present in problems. It provides modeling facilities and widely known methods, either in the form of interfaces or pure Julia implementations, as well as unique methods that are the product of active research. We review the main features of JSO, its current status, and hint at future developments.
juliacon-2022-18057-the-juliasmoothoptimizers-jso-organization
JuMP
Dominique Orban
en
false
https://pretalx.com/juliacon-2022/talk/YTTXMK/
https://pretalx.com/juliacon-2022/talk/YTTXMK/feedback/
JuMP
PDE-constrained optimization using JuliaSmoothOptimizers
Talk
2022-07-28T19:30:00+00:00
19:30
00:30
In this presentation, we showcase a new optimization infrastructure within JuliaSmoothOptimizers for PDE-constrained optimization problems in Julia. We introduce PDENLPModels.jl, a package that discretizes PDE-constrained optimization problems using finite element methods via Gridap.jl. The resulting problem can then be solved by solvers tailored for large-scale optimization and implemented in pure Julia, such as DCISolver.jl and FletcherPenaltyNLPSolver.jl.
juliacon-2022-18048-pde-constrained-optimization-using-juliasmoothoptimizers
JuMP
Tangi Migot
en
The study of algorithms for optimization problems has become the backbone of data science and its multiple applications. Nowadays, new challenges involve ever-increasing amounts of data and model complexity. Examples include optimization problems constrained by partial differential equations (PDEs), which are frequent in imaging, signal processing, shape optimization, and seismic inversion. In this presentation, we showcase a new optimization infrastructure to model and solve PDE-constrained problems in the Julia programming language. We build upon the JuliaSmoothOptimizers infrastructure for modeling and solving continuous optimization problems. We introduce PDENLPModels.jl, a package that discretizes PDE-constrained optimization problems using finite element methods via Gridap.jl. The resulting problem can then be solved by solvers tailored for large-scale optimization and implemented in pure Julia, such as DCISolver.jl and FletcherPenaltyNLPSolver.jl.
false
https://pretalx.com/juliacon-2022/talk/UDKTPD/
https://pretalx.com/juliacon-2022/talk/UDKTPD/feedback/
JuMP
Generalized Disjunctive Programming via DisjunctiveProgramming
Talk
2022-07-28T20:00:00+00:00
20:00
00:30
We present a Julia package (DisjunctiveProgramming.jl) that extends the functionality in JuMP to allow modeling problems via logical propositions and disjunctive constraints. Logical propositions are converted into algebraic expressions by converting the Boolean expressions to Conjunctive Normal Form and then to algebraic inequalities. The package allows the user to specify the technique to reformulate the disjunctions (Big-M or Convex-Hull reformulation) into mixed-integer constraints.
juliacon-2022-18242-generalized-disjunctive-programming-via-disjunctiveprogramming
JuMP
Hector D. Perez
en
Modeling systems with discrete-continuous decisions is commonly done in algebraic form with mixed-integer programming models, which can be linear or nonlinear in the continuous variables. A more systematic approach to modeling such systems is to use Generalized Disjunctive Programming (GDP) (Chen & Grossmann, 2019; Grossmann & Trespalacios, 2013), which generalizes the Disjunctive Programming paradigm proposed by Balas (2018). GDP allows modeling systems from a logic-based level of abstraction that captures the fundamental rules governing such systems via algebraic constraints and logic. The models obtained via GDP can then be reformulated into the pure algebraic form best suited for the application of interest. The two main reformulation strategies are the Big-M reformulation (Nemhauser & Wolsey, 1999; Trespalacios & Grossmann, 2015) and the Convex-Hull reformulation (Lee & Grossmann, 2000), the latter of which yields tighter models than those typically used in standard mixed-integer programming (Grossmann & Lee, 2003).
DisjunctiveProgramming.jl supports reformulations for disjunctions containing linear, quadratic, and/or nonlinear constraints. When using the Big-M reformulation, the user can specify the Big-M value to be used, which can either be general to the disjunction or specific to each constraint expression in the disjunction. Alternatively, the user can allow the package to determine the tightest Big-M value based on the variable bounds and constraint functions using interval arithmetic (IntervalArithmetic.jl [Sanders, et al., 2022]). When the Convex-Hull reformulation is selected, the perspective function approximation from Furman, et al. (2020) is used for nonlinear constraints with a specified ϵ tolerance value. This is done by relying on manipulation of symbolic expressions via Symbolics.jl (Gowda, et al., 2022).
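To illustrate the Big-M idea, here is a hand-written sketch in plain JuMP (not the DisjunctiveProgramming.jl API; the disjunction and bounds are invented for the example). The disjunction (x ≤ 3) ∨ (x ≥ 7) is encoded with one binary indicator, choosing M from the bounds on x so each constraint becomes vacuous when its disjunct is not selected:

```julia
using JuMP

model = Model()
@variable(model, 0 <= x <= 10)
@variable(model, y, Bin)     # y = 1 selects the first disjunct
M = 10                       # tightest M recoverable from the bounds on x
@constraint(model, x <= 3 + M * (1 - y))  # reduces to x ≤ 13 (vacuous) when y = 0
@constraint(model, x >= 7 - M * y)        # reduces to x ≥ -3 (vacuous) when y = 1
```

A loose M keeps the model valid but weakens its linear relaxation, which is why the package's automatic bound-based tightening matters.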
false
https://pretalx.com/juliacon-2022/talk/L9SQZ9/
https://pretalx.com/juliacon-2022/talk/L9SQZ9/feedback/
Sponsored forums
Julius Tech Sponsored Forum
Sponsor forum
2022-07-28T19:00:00+00:00
19:00
00:45
Enterprise adoption of Julia can be a difficult process for developers and engineers to champion. In this sponsored forum, we invite leading industry experts to talk about the common challenges organizations face when bringing Julia and Julia-based solutions onboard. Join [here](https://discord.com/channels/995757799076282478/1000106391371010259).
juliacon-2022-21245-julius-tech-sponsored-forum
en
Enterprise adoption of Julia can be a difficult process for developers and engineers to champion. In this sponsored forum, we invite leading industry experts to talk about the common challenges organizations face when bringing Julia and Julia-based solutions onboard. We will also discuss best practices and practical ways to approach integration. The audience will have the opportunity to submit questions as well.
1. James Lee, Julius Technologies
2. Tom Kwong, Meta
3. Dr. Chris Rackauckas, Julia Computing
4. Jarrett Revels, Beacon Biosignals
false
https://pretalx.com/juliacon-2022/talk/FSCXXS/
https://pretalx.com/juliacon-2022/talk/FSCXXS/feedback/
Green
which(methods)
Lightning talk
2022-07-29T13:00:00+00:00
13:00
00:10
I will talk about which methods get called by `which(methods)` calls. How does Julia decide? How fast does it decide? And when does it figure it all out?
Let’s take a lightning-fast dive together into the complexities of the method selection algorithm we affectionately call multiple dispatch.
juliacon-2022-17947-which-methods-
JuliaCon
Jameson Nash
en
The semantics of multiple dispatch can sound simple and obvious at first glance, but there are lots of strange cases to consider once you really explore the details. So how do we do this in a mere few hundred lines of C code? What happens if two methods overlap in applicability? What happens when the user calls `invoke` instead? How do we track when something changes after loading a new package, or after Revise-ing an existing one?
I will walk through the process of taking the full list of methods in Julia and picking out exactly which method to call. Then take a look at how we extend that action to re-evaluate the results after every new method that gets added.
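The building blocks the title puns on can be seen at the REPL; in this illustrative example (the function `f` is made up), `which` answers "which method would this call run?" while `methods` lists all candidates:

```julia
# Two methods for the same generic function:
f(x::Int) = "integer"
f(x::Number) = "number"

f(1)      # dispatches to the Int method: the most specific applicable one wins
f(2.5)    # no Int method applies, so it falls back to the Number method

methods(f)            # lists both methods of f
which(f, Tuple{Int})  # the Method object that f(::Int) would invoke
```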
false
https://pretalx.com/juliacon-2022/talk/RQZBFB/
https://pretalx.com/juliacon-2022/talk/RQZBFB/feedback/
Green
Building an inclusive (and fun!) Julia community
Lightning talk
2022-07-29T13:10:00+00:00
13:10
00:10
A critical component of any programming language’s potential for impact is the diversity of its community! A supportive, inclusive community draws in new learners, brings fresh perspectives to package development, and ultimately expands the reach a language has.
juliacon-2022-19354-building-an-inclusive-and-fun-julia-community
JuliaCon
Kyla McConnell, Julia Müller
en
In this talk, we focus on the advances in gender diversity that have been made by the Julia community this year: from the development of the new beginner-level live course “Learn Julia with Us” to the continued outreach, community building, and mutual support by Julia Gender Inclusive as a whole.
Kyla and Julia will share their experiences in co-organizing Julia Gender Inclusive and an equivalent group in the R community. The talk hopes to inspire and share tips for fostering a community that is inclusive and accessible, encouraging underrepresented groups to learn and lead with confidence, and creating an atmosphere that is supportive for all, no matter their background!
false
https://pretalx.com/juliacon-2022/talk/B9DJ9G/
https://pretalx.com/juliacon-2022/talk/B9DJ9G/feedback/
Green
Help! How to grow a corporate Julia community?
Talk
2022-07-29T13:30:00+00:00
13:30
00:30
For many years we simply accepted the two language problem at our company and spent our time converting MATLAB/Python prototypes into C/C++/Java production code. But during the last two years we have been growing an internal Julia community from 3 initial enthusiasts to over 300 Julians. We would like to share our ongoing journey with you and inspire other Julians who want to kickstart similar communities at their company.
juliacon-2022-16803-help-how-to-grow-a-corporate-julia-community-
JuliaCon
Matthijs Cox
en
ASML is a 30,000-employee company and the world leader in photolithography systems, which are crucial for semiconductor manufacturing. For many years we accepted the two language problem and spent our time converting MATLAB/Python prototypes into C/C++/Java production code. But during the last two years we have been growing an internal ASML Julia community from 3 initial enthusiasts to over 300 Julians. We would like to share our ongoing journey with you and inspire other Julians who want to kickstart similar communities at their company.
Our journey included many obstacles, but we are now in a good position with Julia at ASML. Thanks to Julia’s package manager and LocalRegistry.jl, it was easy to set up an internal registry. This led to a flourishing internal Julia package ecosystem with currently over 50 registered packages used by several of ASML’s research and development departments.
To grow further, we foresee plenty of challenges. Changing the software culture from a project-driven to an inner-source approach is one such challenge. Another challenge relates to deployment of Julia into all of our existing software platforms, ranging from embedded hardware systems to cloud services.
By overcoming these challenges, we will finally solve the two language problem at ASML and bring different engineering competencies together. It shouldn’t matter if you are a domain expert, data scientist, data engineer, software engineer, software architect, machine learning engineer, business analyst or anything else. If you can code then you can learn Julia and join in on the fun.
false
https://pretalx.com/juliacon-2022/talk/EKZHPS/
https://pretalx.com/juliacon-2022/talk/EKZHPS/feedback/
Green
Keynote - Husain Attarwala
Keynote
2022-07-29T14:30:00+00:00
14:30
00:45
Keynote - Husain Attarwala, Moderna
juliacon-2022-21233-keynote-husain-attarwala
JuliaCon
Husain Attarwala
en
Keynote - Husain Attarwala, Moderna
false
https://pretalx.com/juliacon-2022/talk/9N9HZ3/
https://pretalx.com/juliacon-2022/talk/9N9HZ3/feedback/
Green
Relational AI Sponsored Talk
Platinum sponsor talk
2022-07-29T15:15:00+00:00
15:15
00:15
At RelationalAI, we are building the world’s fastest, most scalable, most expressive, most open knowledge graph management system, built on top of the world’s only complete relational reasoning engine that uses the knowledge and data captured in enterprise databases to learn and reason.
juliacon-2022-21243-relational-ai-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/SFJBMG/
https://pretalx.com/juliacon-2022/talk/SFJBMG/feedback/
Green
The State of Julia in 2022
Talk
2022-07-29T15:30:00+00:00
15:30
00:30
An update on Julia from the core development team.
juliacon-2022-20833-the-state-of-julia-in-2022
JuliaCon
Viral B Shah
en
false
https://pretalx.com/juliacon-2022/talk/SSUNVP/
https://pretalx.com/juliacon-2022/talk/SSUNVP/feedback/
Green
Closing remarks
Lightning talk
2022-07-29T16:00:00+00:00
16:00
00:10
Closing remarks
juliacon-2022-21374-closing-remarks
en
false
https://pretalx.com/juliacon-2022/talk/HRZUUY/
https://pretalx.com/juliacon-2022/talk/HRZUUY/feedback/
Green
Dagger.jl Development and Roadmap
Lightning talk
2022-07-29T16:30:00+00:00
16:30
00:10
Dagger.jl is a Julia library aiming to improve the way Julia users do distributed programming. With its functional task-focused API, distributed table and array implementations, and intelligent scheduler, Dagger is quickly becoming the de-facto distributed programming interface for many parts of our ecosystem.
This talk is focused on Dagger's development over the last year, and where we see Dagger going over the next few years. I'll also provide examples of how to use Dagger's new features.
juliacon-2022-18164-dagger-jl-development-and-roadmap
JuliaCon
Julian P Samaroo
en
false
https://pretalx.com/juliacon-2022/talk/8A83SB/
https://pretalx.com/juliacon-2022/talk/8A83SB/feedback/
Green
DTables.jl - quickstart, current state and next steps!
Lightning talk
2022-07-29T16:40:00+00:00
16:40
00:10
DTables.jl is a distributed table implementation based on Dagger.jl. It aims to provide distributed and out-of-core tabular data processing for the Julia programming language. The DTables package consists of data structures and distributed algorithms, and it is built to be compatible with our rich data-processing ecosystem.
The talk covers a quick intro on how to use the DTable, what functionality is currently available and what are our plans for the future!
juliacon-2022-18155-dtables-jl-quickstart-current-state-and-next-steps-
JuliaCon
Krystian Guliński
en
false
https://pretalx.com/juliacon-2022/talk/ZA9RYG/
https://pretalx.com/juliacon-2022/talk/ZA9RYG/feedback/
Green
`BesselK.jl`: a fast differentiable implementation of `besselk`
Lightning talk
2022-07-29T16:50:00+00:00
16:50
00:10
The modified Bessel function of the second kind, provided by `SpecialFunctions.jl` as `besselk`, is an important function in several fields. Despite its significance, convenient numerical implementations of its derivatives with respect to the order parameter are not easily available. In this talk, we discuss a solution to this problem that leverages Julia's exceptional automatic differentiation ecosystem to provide fast and accurate derivatives with respect to order.
juliacon-2022-18073--besselk-jl-a-fast-differentiable-implementation-of-besselk-
JuliaCon
Christopher J Geoga
en
false
https://pretalx.com/juliacon-2022/talk/N3J9KR/
https://pretalx.com/juliacon-2022/talk/N3J9KR/feedback/
Green
2022 Update: Diversity and Inclusion in the Julia community
Lightning talk
2022-07-29T17:00:00+00:00
17:00
00:10
In this talk, we will give an annual update on the current diversity and inclusion efforts underway in the community. We will also present stats from Google Analytics showing aggregate country of origin, gender, and age. These stats will help provide additional context for the continued challenge of D&I in the Julia community and will set the stage for the Julia Inclusive BoF session.
juliacon-2022-17998-2022-update-diversity-and-inclusion-in-the-julia-community
JuliaCon
Logan Kilpatrick
en
D&I continues to be a challenge in technical communities around the world. While the Julia community has done a lot of work trying to address this challenge, the work is very much still ongoing. As part of our commitment to D&I, we have been providing yearly updates on the state of D&I in the Julia community (with a slight focus on gender diversity since we can get those stats from Google Analytics). We hope that this process keeps us accountable to continue to do more to promote equity and inclusion.
false
https://pretalx.com/juliacon-2022/talk/LZELWV/
https://pretalx.com/juliacon-2022/talk/LZELWV/feedback/
Green
The JuliaCon Proceedings
Lightning talk
2022-07-29T17:10:00+00:00
17:10
00:10
In this talk, we will present the JuliaCon Proceedings: the purpose, scope, and target audience of this venue. The proceedings are a community-driven initiative to publish articles of interest to the research and developer communities gathered by JuliaCon; they require neither article processing fees nor a paywall on articles, making both producing and accessing the articles possible for all. We will then give a quick tour of the reviewing and publication process, which happens transparently.
juliacon-2022-18646-the-juliacon-proceedings
Mathieu Besançon, Carsten Bauer, Ranjan Anantharaman, Valentin Churavy
en
false
https://pretalx.com/juliacon-2022/talk/S8D3NM/
https://pretalx.com/juliacon-2022/talk/S8D3NM/feedback/
Green
Interplay between chaos and stochasticity in celestial mechanics
Lightning talk
2022-07-29T17:20:00+00:00
17:20
00:10
This work is focused on the development of an open-source Julia package for the stochastic characterization and study of chaotic motion in astrodynamics. We focus on the computation of various chaos indicators, among them Fast Lyapunov Indicators (FLI), Finite-Time Lyapunov Exponents (FTLE), and the Mean Exponential Growth factor of Nearby Orbits (MEGNO).
juliacon-2022-18166-interplay-between-chaos-and-stochasticity-in-celestial-mechanics
/media/juliacon-2022/submissions/7Q8WXA/4.ppp_r7A2hNr.png
Matteo Manzi
en
Chaotic behavior is omnipresent in celestial-mechanics dynamical systems, and it is relevant both for understanding and for leveraging the stability of planetary systems, the inner solar system in particular. Examples include the quantification of the probability of impacts of near-Earth objects after close encounters with celestial bodies; the design of robust low-energy transfer trajectories, not limited to invariant manifolds but also leveraging the weak stability boundary for ballistic-capture trajectories in time-dependent dynamical systems; and the characterization of diffusion processes in nearly integrable Hamiltonian systems in celestial mechanics. In order to obtain a robust description of chaos, that is, to describe chaotic motion in dynamical systems characterized by parametric uncertainties and, in parallel, to investigate the effect of random perturbations (e.g. the Langevin equation, jump-diffusion processes), this work builds on “Polynomial Stochastic Dynamic Indicators” (Vasile, Manzi), in which tools from functional analysis, such as orthogonal polynomials (e.g. PolyChaos.jl) and, more generally, feature maps from the theory of support vector machines and kernel methods, are used to approximate the functional describing a positive measure defining the state of the system.
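For reference, the FTLE mentioned above is conventionally defined via the Cauchy-Green strain tensor of the flow map $\phi$ (standard textbook notation, not necessarily the package's):

```latex
\sigma_{t_0}^{T}(x_0) = \frac{1}{|T|}
\ln \sqrt{\lambda_{\max}(\Delta)},
\qquad
\Delta = \left(\frac{\mathrm{d}\phi_{t_0}^{t_0+T}}{\mathrm{d}x}(x_0)\right)^{\top}
\frac{\mathrm{d}\phi_{t_0}^{t_0+T}}{\mathrm{d}x}(x_0),
```

where $\lambda_{\max}(\Delta)$ is the largest eigenvalue of $\Delta$; large values flag exponential separation of initially nearby trajectories over the horizon $T$.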
These probabilistic generalizations of existing chaos indicators will be computed for a number of dynamical systems (e.g. the Duffing oscillator, the circular and elliptic restricted three-body problems, etc.), and the relevance of uncertainty quantification for robust trajectory design will be discussed.
This framework will be used to understand the effect of uncertainty and stochasticity on the behaviour of both individual trajectories and ensembles of trajectories obtained by sampling the probabilistic space; its influence on the overall goal of predicting chaotic dynamical systems characterized by parametric uncertainties will be assessed. Bifurcation phenomena and invariant sets in time-dependent dynamical systems will be discussed, particularly in the context of Lagrangian coherent structures.
Moreover, the relation between memory effects in non-Markovian processes, fractional calculus and time-delay embedding will be outlined using the aforementioned tools.
The computational efficiency of numerical integration schemes for ordinary and stochastic differential equations will be exploited to produce animations describing bifurcation phenomena and the chaotic nature of dynamical systems.
false
https://pretalx.com/juliacon-2022/talk/7Q8WXA/
https://pretalx.com/juliacon-2022/talk/7Q8WXA/feedback/
Green
How to be an effective Julia advocate?
Lightning talk
2022-07-29T17:30:00+00:00
17:30
00:10
A major part of Julia's success as a language has come from a large community of user advocates. User advocacy continues to be one of the most effective outreach mechanisms and this talk aims to help improve the approach of those seeking to advocate for Julia.
juliacon-2022-17999-how-to-be-an-effective-julia-advocate-
JuliaCon
Logan Kilpatrick
en
This talk will share the lessons learned from speaking, posting, and presenting about Julia over the last 2+ years. It doesn't matter whether you are an expert user or a novice; these principles will let you highlight the benefits of Julia effectively, in a way that makes your audience receptive.
We will cover:
- How to frame Julia as a language
- Sharing use cases
- What to avoid
- Getting people involved
And more!
false
https://pretalx.com/juliacon-2022/talk/TSQ8XC/
https://pretalx.com/juliacon-2022/talk/TSQ8XC/feedback/
Green
Optimize your marketing spend with Julia!
Lightning talk
2022-07-29T17:40:00+00:00
17:40
00:10
"Half the money I spend on advertising is wasted; the trouble is I don't know which half." (J.Wanamaker, 19th-century retailer)
Optimizing marketing spend is still difficult, but this talk introduces a modern marketing analysis: Media Mix Modelling (MMM).
We will combine the strength of Julia with Bayesian decision-making to optimize marketing spend for a hypothetical business.
Find more details in the associated [GitHub Repo](https://github.com/svilupp/JuliaCon2022/)
juliacon-2022-18146-optimize-your-marketing-spend-with-julia-
JuliaCon
/media/juliacon-2022/submissions/QTN3ZY/diggity-marketing-SB0WARG16HI-unsplash_nCyH4LJ.jpeg
Jan Siml
en
This talk requires no previous knowledge.
Media Mix Modelling (MMM) is the go-to analysis for deciding how to spend your precious marketing budget. It has been around for more than half a century, and its importance is poised to increase with the rise of the privacy-conscious consumer.
There are a few key marketing concepts that we will cover, e.g., ad stock, saturation and ROAS.
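To make two of these concepts concrete, here is a minimal sketch of the geometric adstock (carryover) and a Hill-type saturation transform; the function names and parameter values are illustrative, not the notebook's:

```julia
# Geometric adstock: today's effective exposure carries over a fraction θ
# of yesterday's effective exposure.
function adstock(x::Vector{Float64}, θ::Float64)
    y = similar(x)
    y[1] = x[1]
    for t in 2:length(x)
        y[t] = x[t] + θ * y[t-1]
    end
    return y
end

# Hill-type saturation: diminishing returns as effective spend grows;
# `half` is the spend level producing half the maximum effect.
saturate(x, half, shape) = x^shape / (x^shape + half^shape)

spend = [100.0, 0.0, 0.0, 50.0]
adstock(spend, 0.5)  # → [100.0, 50.0, 25.0, 62.5]
```

In the Bayesian MMM setting, θ, `half`, and `shape` become parameters inferred with Turing.jl rather than fixed constants.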
We will leverage the power of Bayesian inference with Turing.jl to establish the effectiveness of our campaigns (/marketing channels). The main advantage of the Bayesian approach will be the quantification of uncertainty, which we will channel into our decision-making when deciding on the budget allocations.
The "optimal" spend strategy ("budget") will be found with the help of Metaheuristics.jl.
Overall, we will draw on Julia's core strengths, such as composability and speed.
The implementation closely follows the methodology of the amazing Robyn package, but it leverages Bayesian inference for the marketing parameters. While there are many resources available for Python and R, I believe this is the first tutorial for MMM in Julia.
Following the talk, you can use the provided notebook and scripts to replicate this analysis for your marketing budget.
You can find the notebook, presentation and additional resources in the following repository:
- [GitHub Repo](https://github.com/svilupp/JuliaCon2022/)
- [PDF of the presentation](https://github.com/svilupp/JuliaCon2022/blob/main/MediaMixModellingDemo/presentation/presentation.pdf)
Session photo thanks to <a href="https://unsplash.com/@diggitymarketing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Diggity Marketing</a> on <a href="https://unsplash.com/s/photos/digital-marketing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
false
https://pretalx.com/juliacon-2022/talk/QTN3ZY/
https://pretalx.com/juliacon-2022/talk/QTN3ZY/feedback/
Green
GatherTown -- Social break
Social hour
2022-07-29T18:00:00+00:00
18:00
01:00
Join us on [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP) for a social hour.
juliacon-2022-21376-gathertown-social-break
en
false
https://pretalx.com/juliacon-2022/talk/XLZ7ZN/
https://pretalx.com/juliacon-2022/talk/XLZ7ZN/feedback/
Green
Large-Scale Tabular Data Analytics with BanyanDataFrames.jl
Talk
2022-07-29T19:00:00+00:00
19:00
00:30
BanyanDataFrames.jl is an open-source library for processing massive Parquet/CSV/Arrow datasets in your Virtual Private Cloud. One of the key goals of the project is to match the API of DataFrames.jl as much as possible. In this talk, we will provide an overview of BanyanDataFrames.jl and discuss challenges and success so far in achieving massively scalable data analytics with the Julia language.
juliacon-2022-18082-large-scale-tabular-data-analytics-with-banyandataframes-jl
JuliaCon
/media/juliacon-2022/submissions/ZMKHUZ/image_4_KS3cRLS.png
Caleb Winston, Cailin Winston
en
More information about BanyanDataFrames.jl can be found on GitHub:
https://github.com/banyan-team/banyan-julia
https://github.com/banyan-team/banyan-julia-examples
false
https://pretalx.com/juliacon-2022/talk/ZMKHUZ/
https://pretalx.com/juliacon-2022/talk/ZMKHUZ/feedback/
Green
How to debug Julia simulation codes (ODEs, optimization, etc.!)
Talk
2022-07-29T19:30:00+00:00
19:30
00:30
The ODE solver spit out dt<dtmin, what do you do? MethodError Dual{...}, what does it mean? Plenty of people ask these questions every day. In this talk I'll walk through the steps of debugging Julia simulation codes and help you get something working!
juliacon-2022-17920-how-to-debug-julia-simulation-codes-odes-optimization-etc-
JuliaCon
Chris Rackauckas
en
Debugging simulation codes can be very different from "standard" or "simple" codes. There are many details that can show up that the user needs to be aware of. Thus, while there have been many beginner tutorials on using Julia, and many tutorials on how to use SciML ecosystem tools like DifferentialEquations.jl, there has never been a tutorial that says "okay, I got this error when using Optim.jl, what do I do now?". Some major pieces have been written which condense such information, such as the DifferentialEquations.jl PSA on Discourse (https://discourse.julialang.org/t/psa-how-to-help-yourself-debug-differential-equation-solving-issues/62489), but we believe there remain many things to say.
And also, a video walkthrough is simply the best way to "show some how I do it" so to speak.
So let's do it! But what would this entail? There are many topics to cover, including:
- How to read the gigantic stack traces that arise from dual number issues. Why does `f(du, u::Array{Float64}, p, t)` fail with this error? Why can dual numbers cause issues in some mutating code? How do you use https://github.com/SciML/PreallocationTools.jl to solve these Dual number issues?
- When trying to debug code deep within some package context, how do you do it in a "nice" way (i.e. without the slow interpreted mode of Debugger.jl)? The answer is using Revise with tools like `@show` and `x=Ref{Any}()` and then in the package you can do `Main.x[] = ...`. Never seen this trick before? Well then you'll be interested in this talk. We'll showcase how to use these tricks in real-world contexts where such debugging arises.
- You get `dt<dtmin` or other ODE solver exit warnings: what are you supposed to do? `u'=u^2-u` with `u(0)=2`... wait, analytically that should error? How do I find out whether my model is written incorrectly (it is), and how do I figure out what I should be changing?
- When doing optimization, say using GalacticOptim.jl or Flux.jl, what are these "Zygote does not support mutation" errors? Why do they exist and how do I work around them?
- Everyone is asking for an MWE. How do I make a good MWE? How do I figure out what in the package that is likely causing the issue, and use this to help developers help me?
I will continue to grow the list by keeping tabs on what comes up the most often in Github issues and Discourse posts. At the end of the day, I hope this can be a video that is pasted onto thousands of Discourse questions to give people a much more in-depth view of how to fix issues, and potentially train the next generation of "Discourse answerers".
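The `Ref{Any}` trick mentioned above can be sketched as follows; `solver_step` is a hypothetical stand-in for code deep inside a package being live-edited with Revise.jl:

```julia
# A global scratch slot you can fill from anywhere, then inspect at the REPL.
x = Ref{Any}()

function solver_step(u)
    residual = sum(abs2, u)
    Main.x[] = residual    # stash the intermediate value; from inside a
    return residual        # package you would write exactly `Main.x[] = ...`
end

solver_step([1.0, 2.0])
x[]   # the stashed residual, available after the run completes
```

Unlike Debugger.jl's interpreted mode, this costs almost nothing at runtime, which matters when the bug only appears thousands of steps into a simulation.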
false
https://pretalx.com/juliacon-2022/talk/VVTKRB/
https://pretalx.com/juliacon-2022/talk/VVTKRB/feedback/
Green
Scaling up Training of Any Flux.jl Model Made Easy
Talk
2022-07-29T20:00:00+00:00
20:00
00:30
In this talk, we will be discussing some of the state of the art techniques to scale training of ML models beyond a single GPU, why they work and how to scale your own ML pipelines. We will be demonstrating how we have scaled up training of Flux models both by means of data parallelism and by model parallelism. We will be showcasing ResNetImageNet.jl and DaggerFlux.jl to accelerate training of deep learning and scientific ML models such as PINNs and the scaling it achieves.
juliacon-2022-18028-scaling-up-training-of-any-flux-jl-model-made-easy
JuliaCon
Dhairya Gandhi
en
With the scale of the datasets and the size of the models growing rapidly, one cannot reasonably train these models on a single GPU. It is no secret that training big ML models, be they large language models, image recognition tasks, or large PINNs, requires an immense amount of hardware and engineering knowledge.
So far, our tools in FluxML have been limited to training on a single GPU, and there is a pressing need for tooling that can scale up training beyond a single GPU. This is important not just for current Deep Learning models but also to scale training of scientific machine learning models as we see more sophisticated neural surrogates emerge for simulations and modelling. To fulfil this need, we have developed some tools that can reliably and generically scale training of differentiable pipelines beyond a single machine or GPU device. We will be showcasing [ResNetImageNet.jl](https://github.com/DhairyaLGandhi/ResNetImageNet.jl) and [DaggerFlux.jl](https://github.com/FluxML/DaggerFlux.jl) which uses Dagger.jl to accelerate training of various model types and the scaling it achieves.
false
https://pretalx.com/juliacon-2022/talk/LJRHQR/
https://pretalx.com/juliacon-2022/talk/LJRHQR/feedback/
Green
GatherTown -- Social break
Social hour
2022-07-29T20:30:00+00:00
20:30
01:00
Join us on [Gather.town](https://app.gather.town/invite?token=2ucLB9IpmCAXZIex4Dvh2VFCeR6QLEdP) for a social hour.
juliacon-2022-21380-gathertown-social-break
en
false
https://pretalx.com/juliacon-2022/talk/FSDULA/
https://pretalx.com/juliacon-2022/talk/FSDULA/feedback/
Red
Fractional Order Computing and Modeling with Julia
Talk
2022-07-29T12:30:00+00:00
12:30
00:30
As the generalization of classical calculus and differential equations, fractional calculus and fractional differential equations have been important areas since their inception. To provide a comprehensive differential equations package, [SciFracX](https://github.com/SciFracX) is here to explore the fractional order area with Julia. At JuliaCon 2022, we will talk about the progress we have made in FractionalDiffEq.jl and FractionalCalculus.jl, and how Julia helped us speed up fractional order modeling and computing.
juliacon-2022-17962-fractional-order-computing-and-modeling-with-julia
JuliaCon
Qingyu Qu
en
In 1695, a letter from L'Hopital to Leibniz marked the birth of fractional calculus: "Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders? What if the order were 1/2?" Leibniz replied on September 30, 1695: "It will lead to a paradox, from which one day useful consequences will be drawn."
Fractional order computing and modeling has become an increasingly appealing topic, especially in recent decades, since natural phenomena can often be modeled more faithfully with fractional orders. Ever since Leibniz and L'Hopital raised the "non-integer" calculus question, many giants of science have worked hard on fractional calculus to promote its development. Fractional calculus is very helpful in describing linear viscoelasticity, acoustics, rheology, polymeric chemistry, and so forth. Moreover, fractional derivatives have proved to be a very suitable tool for describing the memory and hereditary properties of various materials and processes.
The SciML organization has done outstanding work on numerical solvers for differential equations, but there are still some kinds of differential equations that SciML has [not supported yet](https://github.com/SciML/DifferentialEquations.jl/issues/461). As the generalization of integer order differential equations, fractional calculus and fractional differential equations began to attract increasing interest for their important role in science and engineering in the early 20th century. Yet most of the existing numerical software is written in Matlab and is not well maintained, so users have lacked a unifying tool for fractional order modeling and computing. Inspired by SciML, we initiated the SciFracX organization. Our mission is to make fractional order computing and modeling easier using Julia, to speed up research, and to provide powerful scientific tools. Right now, there are four Julia packages in this organization: FractionalSystems.jl, FractionalCalculus.jl, FractionalDiffEq.jl and FractionalTransforms.jl. We will introduce the FractionalSystems.jl, FractionalDiffEq.jl and FractionalCalculus.jl packages.
The [FractionalSystems.jl](https://github.com/SciFracX/FractionalSystems.jl) package focuses on fractional order control. Inspired by the [FOMCON](https://fomcon.net/) and [FOTF](https://www.mathworks.com/matlabcentral/fileexchange/60874-fotf-toolbox) toolboxes, FractionalSystems.jl aims to provide fractional order modeling in Julia. Building on [ControlSystems.jl](https://github.com/JuliaControl/ControlSystems.jl), it provides similar functionality, mainly time domain and frequency domain modeling and analysis. FractionalSystems.jl is only several months old, so there is still a lot we need to do in the future.
The [FractionalDiffEq.jl](https://github.com/SciFracX/FractionalDiffEq.jl) package follows the design pattern of DifferentialEquations.jl. To solve a problem, we first define a ```***Problem``` according to our model, then pass the defined problem to the ```solve``` function, choosing an algorithm to solve it with.
```julia
prob = ***Problem(fun, α, u0, T)
#prob = ***Problem(parrays, oarrays, RHS, T)
solve(prob, h, Alg())
```
Now, FractionalDiffEq.jl is capable of solving fractional order ODEs, PDEs, DDEs, integral equations and nonlinear FDE systems. What impressed us most was a two-times speedup over MATLAB when we solved the same problem in Julia (without any performance optimization).
[FractionalCalculus.jl](https://github.com/SciFracX/FractionalCalculus.jl) supports the common senses of fractional derivatives and integrals, including the Caputo, Riemann-Liouville, Hadamard and Riesz senses, among others. To keep it simple and stupid, we use two intuitive functions: ```fracdiff``` for the fractional derivative and ```fracint``` for the fractional integral. All we need to do is pass the function, the order, the point of evaluation, the step size and the algorithm we want to use.
```julia
fracdiff(fun, α, point, h, Alg())
fracint(fun, α, point, h, Alg())
```
By using Julia for fractional order computing and modeling, we have indeed observed amazing progress in both [speed](http://scifracx.org/FractionalDiffEq.jl/dev/system_of_FDE/#System-of-fractional-differential-equations) and ease of use.
With no similar organizations in the Python or R communities doing high performance computing in the fractional order area, the future of SciFracX is promising, and we envision a bright future for Julia in this field. Our next round of work is clear:
* Keep adding more high performance algorithms.
* Make the API simpler and more elegant.
* Write more illustrative documentation for usability.
* Integrate with the SciML ecosystem to provide users with more useful features.
false
https://pretalx.com/juliacon-2022/talk/UQNLK3/
https://pretalx.com/juliacon-2022/talk/UQNLK3/feedback/
Red
PointSpreadFunctions.jl - optical point spread functions
Talk
2022-07-29T13:00:00+00:00
13:00
00:30
Various ways of calculating three-dimensional optical point spread functions (PSFs) are presented. The methods account for the vector nature of the optical field as well as phase aberrations. Quantitative comparisons in terms of speed and accuracy will be presented.
juliacon-2022-17959-pointspreadfunctions-jl-optical-point-spread-functions
JuliaCon
/media/juliacon-2022/submissions/X89FYS/PSFs.jl_Demo_ldm8gKK.png
Rainer Heintzmann, Felix Wechsler
en
Methods of calculating optical point spread functions (PSFs), as implemented in the toolbox https://github.com/RainerHeintzmann/PointSpreadFunctions.jl,
are presented. These methods range from propagating field components via the angular spectrum method using Fourier transforms to a version that applies spatial constraints in each propagation step to avoid wrap-around effects. Another method starts from the analytical solution of a related scalar problem, sinc(r), with r denoting the distance to the focus, which is then modified to account for various influences of high-NA aplanatic optical systems.
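As a rough sketch of the angular spectrum idea in general (this is generic textbook code, not the toolbox's implementation, and it assumes the FFTW.jl package), the field is Fourier-transformed, each plane-wave component is multiplied by its propagation phase, and the result is transformed back:

```julia
using FFTW  # assumed dependency, for fft/ifft/fftfreq

# Generic angular spectrum propagation of a 2-D complex field over
# distance z, with wavelength λ and sample spacing dx. Evanescent
# components decay automatically via the complex square root.
function angular_spectrum(field::AbstractMatrix, λ, dx, z)
    N = size(field, 1)
    f = fftfreq(N, 1 / dx)  # spatial frequency samples along each axis
    kz = [2π * sqrt(complex(1 / λ^2 - fx^2 - fy^2)) for fx in f, fy in f]
    ifft(fft(field) .* exp.(im .* kz .* z))
end
```

Propagating over z = 0 returns the input field unchanged, which is a convenient sanity check.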
The toolbox supports aberrations as specified via Zernike coefficients.
The toolbox also contains practical tools such as a PSF distillation tool which automatically identifies single point sources and averages their measured images with sub-pixel precision.
Future directions may include ways to identify aberrations from measured PSFs.
The toolbox will also be extended towards supporting a wider range of microscopy modes.
false
https://pretalx.com/juliacon-2022/talk/X89FYS/
https://pretalx.com/juliacon-2022/talk/X89FYS/feedback/
Red
Control-systems analysis and design with JuliaControl
Talk
2022-07-29T16:30:00+00:00
16:30
00:30
The Julia language is uniquely suitable for control-systems analysis and design. Features like a mathematical syntax, powerful method overloading, strong and generic linear algebra, and arbitrary-precision arithmetic, all while allowing high performance, create a compelling platform to build a control ecosystem upon. We will present the JuliaControl packages and illustrate how they make use of Julia to enable novel and sophisticated features while keeping implementations readable and maintainable.
juliacon-2022-17195-control-systems-analysis-and-design-with-juliacontrol
JuliaCon
/media/juliacon-2022/submissions/ZSCNR7/logo_YlQUJC4.png
Fredrik Bagge Carlson
en
The control engineer typically carries a large metaphorical toolbox, full of both formal algorithms and heuristic methods. Mathematical modeling, simulation, system identification, frequency-domain analysis and uncertainty modeling and quantification are typical examples of elements of a control-design workflow, all of which may be required to complete a control project. Bits and pieces of this workflow have been present in open-source packages for a long time, but a comprehensive and integrated solution has previously been limited to proprietary and/or legacy languages.
[JuliaControl](https://github.com/JuliaControl/) has been around since 2015, and has steadily grown into a highly capable, open-source ecosystem for control using linear methods. With comparatively low effort, algorithms and data structures in the ecosystem have been made generic with respect to the number type used, opening the doors for high-precision arithmetic, uncertainty propagation, automatic differentiation and symbolic computations in every step of the control workflow from simulation to design and verification. We believe that this feature is unique among control software, and will demonstrate its usefulness to the control theorist and engineer with a few examples.
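The type-genericity property can be illustrated outside the package itself (a sketch, not JuliaControl code): a naive simulation routine written once runs unchanged with Float64, BigFloat, or dual numbers.

```julia
# Sketch (not JuliaControl code): a single type-generic simulation routine.
# Forward-Euler integration of ẋ = A x + B u; the element type of x0
# (Float64, BigFloat, a dual number, ...) simply flows through the code.
function simulate(A, B, u, x0, dt, n)
    x = x0
    for _ in 1:n
        x = x + dt * (A * x + B * u)
    end
    x
end
```

Calling `simulate` with a `BigFloat` initial state yields an arbitrary-precision trajectory with no change to the code; this is the same mechanism JuliaControl's generic algorithms exploit.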
While JuliaControl is largely limited in scope to linear control methods, the full breadth of the scientific computing ecosystem in Julia is just around the corner, offering nonlinear optimization, optimal control, and equation-based modeling and simulation. In this talk, we will demonstrate how JuliaControl interoperates with [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl/) and the [DifferentialEquations](https://diffeq.sciml.ai/stable/) ecosystem to extend the scope of the capabilities to simulation and design for nonlinear control systems.
Finally, we will share some of the control-related developments in the proprietary [JuliaSim](https://juliacomputing.com/products/juliasim/) platform, offering advanced functionality like controller autotuning, nonlinear Model-Predictive Control (MPC) and LMI-based methods (Linear Matrix Inequality) for robust analysis and design.
false
https://pretalx.com/juliacon-2022/talk/ZSCNR7/
https://pretalx.com/juliacon-2022/talk/ZSCNR7/feedback/
Red
Explaining Black-Box Models through Counterfactuals
Talk
2022-07-29T17:00:00+00:00
17:00
00:30
We propose [`CounterfactualExplanations.jl`](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/): a package for explaining black-box models through counterfactuals. Counterfactual explanations are based on the simple idea of strategically perturbing model inputs to change model predictions. Our package is novel, easy-to-use and extensible. It can be used to explain custom predictive models including those developed and trained in other programming languages.
juliacon-2022-17904-explaining-black-box-models-through-counterfactuals
JuliaCon
/media/juliacon-2022/submissions/HU8FVH/juliacon_EFHHMx8.gif
Patrick Altmeyer
en
### The Need for Explainability ⬛
Machine learning models like deep neural networks have become so complex, opaque and underspecified in the data that they are generally considered black boxes. Nonetheless, they often form the basis for data-driven decision-making systems. This creates the following problem: human operators in charge of such systems have to rely on them blindly, while the individuals subject to them generally have no way of challenging an undesirable outcome:
> “You cannot appeal to (algorithms). They do not listen. Nor do they bend.”
> — Cathy O'Neil in *Weapons of Math Destruction*, 2016
### Enter: Counterfactual Explanations 🔮
Counterfactual Explanations can help human stakeholders make sense of the systems they develop, use or endure: they explain how inputs into a system need to change for it to produce different decisions. Explainability benefits internal as well as external quality assurance. Explanations that involve realistic and actionable changes can be used for the purpose of algorithmic recourse (AR): they offer human stakeholders a way to not only understand the system's behaviour, but also strategically react to it. Counterfactual Explanations have certain advantages over related tools for explainable artificial intelligence (XAI) like surrogate explainers (LIME and SHAP). These include:
- Full fidelity to the black-box model, since no proxy is involved.
- Connection to Probabilistic Machine Learning and Causal Inference.
- No need for (reasonably) interpretable features.
- Less susceptible to adversarial attacks than LIME and SHAP.
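The basic mechanism (nudge the input until the prediction changes) can be sketched for a logistic classifier in a few lines of plain Julia. This is an illustrative sketch only, not the package's generator API; `counterfactual`, `target`, and the gradient are all hand-rolled here:

```julia
# Illustrative sketch (not the CounterfactualExplanations.jl API):
# gradient-descent counterfactual search for a logistic classifier.
σ(z) = 1 / (1 + exp(-z))
predict(w, x) = σ(sum(w .* x))

# Nudge x until predict(w, x) approaches `target`, following the
# hand-derived gradient of the squared gap for the logistic model.
function counterfactual(w, x; target = 0.9, η = 1.0, iters = 1000)
    x = copy(x)
    for _ in 1:iters
        p = predict(w, x)
        g = 2 * (p - target) * p * (1 - p) .* w  # ∂/∂x of (p - target)^2
        x = x .- η .* g
    end
    x
end
```

Real counterfactual generators add regularization toward the original input and plausibility constraints, which is where the package's different generators come in.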
### Problem: Limited Availability in Julia Ecosystem 😔
Software development in the space of XAI has largely focused on various global methods and surrogate explainers with implementations available for both Python and R. In the Julia space we have only been able to identify one package that falls into the broader scope of XAI, namely [`ShapML.jl`](https://github.com/nredell/ShapML.jl). Support for Counterfactual Explanations has so far not been implemented in Julia.
### Solution: `CounterfactualExplanations.jl` 🎉
Through this project we aim to close that gap and thereby contribute to broader community efforts towards explainable AI. Highlights of our new package include:
- **Simple and intuitive interface** to generate counterfactual explanations for differentiable classification models trained in Julia, Python and R.
- **Detailed documentation** involving illustrative example datasets and various counterfactual generators for binary and multi-class prediction tasks.
- **Interoperability** with other popular programming languages as demonstrated through examples involving deep learning models trained in Python and R (see [here](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/tutorials/interop/)).
- **Seamless extensibility** through custom models and counterfactual generators (see [here](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/tutorials/models/)).
### Ambitions for the Package 🎯
Our goal is to provide a go-to place for counterfactual explanations in Julia. To this end, the following is a non-exhaustive list of exciting future developments we envision:
1. Additional counterfactual generators and predictive models.
2. Additional datasets for testing, evaluation and benchmarking.
3. Improved preprocessing including native support for categorical features.
4. Support for regression models.
The package is designed to be extensible, which should facilitate contributions from the community.
### Further Resources 📚
For some additional colour you may find the following resources helpful:
- [Slides](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/resources/juliacon22/presentation.html#/title-slide).
- [Blog post](https://towardsdatascience.com/individual-recourse-for-black-box-models-5e9ed1e4b4cc) and [motivating example](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/cats_dogs/).
- Package docs: [[stable]](https://pat-alt.github.io/CounterfactualExplanations.jl/stable), [[dev]](https://pat-alt.github.io/CounterfactualExplanations.jl/dev).
- [GitHub repo](https://github.com/pat-alt/CounterfactualExplanations.jl).
false
https://pretalx.com/juliacon-2022/talk/HU8FVH/
https://pretalx.com/juliacon-2022/talk/HU8FVH/feedback/
Red
Building an Immediate-Mode GUI (IMGUI) from scratch
Talk
2022-07-29T17:30:00+00:00
17:30
00:30
Broadly, there are two paradigms of interfacing with a UI library to create a Graphical User Interface (GUI) - Retained-Mode (RM) and Immediate-Mode (IM). This talk is for anyone who wants to understand how to make an immediate-mode GUI from scratch. I will explain the inner workings of an immediate-mode UI library and show one possible way to implement simple widgets like buttons, sliders, and text-boxes from scratch.
Link: https://github.com/Sid-Bhatia-0/SimpleIMGUI.jl
juliacon-2022-18004-building-an-immediate-mode-gui-imgui-from-scratch
JuliaCon
Siddharth Bhatia
en
Needless to say, Graphical User Interfaces (GUIs) are used in a wide variety of applications. For example, several desktop applications like web browsers, computer games etc. have some form of a GUI. Typically, a GUI has some widgets like buttons, text-boxes etc. and the user can interact with those widgets with the help of a mouse or a keyboard in order to use the application.
Broadly, there are two paradigms of interfacing with a UI library to create a GUI - Retained-Mode (RM) and Immediate-Mode (IM). This talk is for anyone who wants to understand how to make an immediate-mode GUI from scratch. I will attempt to explain the inner workings of an immediate-mode UI library and show one possible way to implement simple widgets like buttons, sliders, and text-boxes from scratch.
We will look at one possible way to structure the render loop for a desktop application and dive deeper into input handling and widget interaction. The goal is to strip out as many unnecessary features as possible and explain the barebones structure of how to make an IMGUI from scratch. For this purpose, I will stick to the lightweight libraries GLFW.jl (to create and manage windows) and SimpleDraw.jl (to draw the user interface).
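To make the "immediate-mode" point concrete, here is a hypothetical base-Julia sketch (independent of SimpleIMGUI.jl's actual API) of an immediate-mode button: there is no retained widget object; the button is re-declared every frame and simply returns whether it was clicked:

```julia
# Hypothetical immediate-mode button sketch (not SimpleIMGUI.jl's API).
# No persistent Button object exists: the call both "draws" the widget
# for this frame and reports interaction from the current input state.
struct Rect
    x::Int; y::Int; w::Int; h::Int
end

inside(r::Rect, mx, my) = r.x <= mx < r.x + r.w && r.y <= my < r.y + r.h

function button(r::Rect, mouse_x, mouse_y, mouse_released)
    hovered = inside(r, mouse_x, mouse_y)
    # a real implementation would issue draw commands here,
    # e.g. highlighting the rectangle while hovered
    return hovered && mouse_released
end
```

Inside the render loop one then writes `if button(Rect(10, 10, 80, 24), mx, my, released) ... end` each frame, which is exactly the control flow the talk walks through.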
SimpleIMGUI.jl: https://github.com/Sid-Bhatia-0/SimpleIMGUI.jl
Supplementary material: https://github.com/Sid-Bhatia-0/JuliaCon2022Talk
false
https://pretalx.com/juliacon-2022/talk/GXTYSA/
https://pretalx.com/juliacon-2022/talk/GXTYSA/feedback/
Red
GeneDrive.jl: Simulate and Optimize Biological Interventions
Talk
2022-07-29T19:00:00+00:00
19:00
00:30
This talk introduces GeneDrive.jl, a package designed to study the effect of biotic and abiotic interactions on metapopulations, outlining functionalities and use cases. GeneDrive.jl is a 3-part framework for building and analyzing simulations wherein organisms are subjected to anthropogenic and environmental change. It includes: (1) Data models that exploit the power of Julia's type system. (2) Dynamic models that build on DifferentialEquations.jl. (3) Decision models that employ JuMP.jl.
juliacon-2022-17984-genedrive-jl-simulate-and-optimize-biological-interventions
/media/juliacon-2022/submissions/878K9K/logo_rL3VOk5.png
Valeri Vasquez
en
Understanding and controlling biological dynamics is a concern in arenas as diverse as public health, agriculture, or conservation. Both environmental and human factors influence those dynamics, often in complex ways. Decisions about the timing, magnitude, and location where interventions are required to control the presence of harmful organisms – be they disease vectors, crop pests, or invasive species – must be made amid this ever-changing reality of biotic and abiotic interactions.
The GeneDrive.jl package facilitates replicable, scalable, and extensible computational experiments on the topic of biological dynamics and control by drawing on several pre-existing tools within the Julia ecosystem. It formalizes Julia data structures to store information and dispatch methods unique to species and genotype, enabling the straightforward incorporation of empirical knowledge. Once constructed, problems can be solved using either dynamic or optimization methods by building on the extensively developed DifferentialEquations.jl and JuMP.jl packages.
This one-time specification of the experimental data, on which both ODE and optimization solving algorithms can be called, encourages experimentation with operational levers in addition to biological ones. GeneDrive.jl employs mathematical programming for its optimization routines rather than the optimal control approaches more common in the biological sciences. This enables the inclusion of more detailed genetic and ecological information than would otherwise be tractable.
This package is named for gene drives: DNA sequences that spread through a population at higher frequencies than Mendelian inheritance patterns would predict. These tools furnish a promising new approach to mitigating diseases carried by mosquito vectors and circumvent the problems of traditional prevention practices (e.g., growing insecticide resistance). GeneDrive.jl is applicable to biological tools beyond gene drives (see examples in the documentation); however, it is named in honor of this new technological horizon.
false
https://pretalx.com/juliacon-2022/talk/878K9K/
https://pretalx.com/juliacon-2022/talk/878K9K/feedback/
Purple
Reproducible Publications with Julia and Quarto
Talk
2022-07-29T12:30:00+00:00
12:30
00:30
Quarto is an open-source scientific and technical publishing system that builds on standard markdown with features essential for scientific communication. The system has support for reproducible embedded computations, equations, citations, crossrefs, figure panels, callouts, advanced layout, and more. In this talk we'll explore the use of Quarto with Julia, describing both integration with IJulia and the Julia VS Code extension, as well as areas for future improvement and exploration.
juliacon-2022-18035-reproducible-publications-with-julia-and-quarto
JuliaCon
/media/juliacon-2022/submissions/KKBXAZ/hello-julia_dzCQKrs.png
J.J. Allaire
en
Quarto is an open-source scientific and technical publishing system that builds on standard markdown with features essential for scientific communication. One of the most important enhancements is embedded computations, which enable documents to be fully reproducible. There are also a wide variety of technical authoring features including equations, citations, crossrefs, figure panels, callouts, advanced layout, and more. In this talk we'll explore the use of Quarto with Julia, describing both integration with IJulia and the Julia VS Code extension, as well as areas for future improvement and exploration.
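A minimal Quarto document with an embedded Julia cell looks roughly like the following (the exact kernel name in the YAML header depends on your IJulia installation, so treat `julia-1.7` as an assumption):

````markdown
---
title: "Hello Julia"
jupyter: julia-1.7
---

The cell below is executed when the document is rendered:

```{julia}
sum(1:10)
```
````

Rendering with `quarto render hello.qmd` executes the cell and embeds its output in the chosen format.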
Quarto is built on Pandoc and as a result can target dozens of output formats including HTML, PDF, MS Word, OpenOffice, and ePub. Quarto also includes a project system that enables publishing collections of documents as a blog, full website, or book. Output formats are extensible, making it possible to create Journal ready LaTeX and HTML output from the same source code. Several examples of creating these output types with Julia will be presented, and we will take advantage of integration between the Quarto and Jupyter VS Code extensions to demonstrate productive workflows.
After reviewing the basics of the system and presenting examples, we'll dive more into the technical details of how Quarto works. One of the things that makes Pandoc so capable is that it is not merely a markdown system but rather a generalized system for computing on documents. We'll describe the Pandoc AST for documents and how users of Quarto can write filters to transform the AST during rendering. Examples of filters authored with both Lua (the Pandoc embedded language for filters) and Julia (via the PandocFilters.jl package) will be presented.
Embedded computations present the opportunity for fully reproducible workflows, but also create new performance challenges. The system needs to support expensive, long-running computations but at the same time interactive and iterative use (especially for content authoring). Quarto includes a variety of facilities for managing these tradeoffs, including daemonized Jupyter kernels for interactive use, caching computations, and the ability to freeze computational documents. We'll demonstrate using all of these techniques with Julia, and discuss their benefits, drawbacks, and potential for future improvement.
Quarto interfaces with embedded Julia code using its Jupyter computational engine and the IJulia kernel. Documents can be authored in either a plain text markdown format or as Jupyter notebooks. There are several other literate programming systems available in the Julia ecosystem (Pluto, Neptune, Weave.jl, etc.) which have their own benefits and tradeoffs. We'll discuss why we chose IJulia along with an exploration of how we could integrate with other systems.
false
https://pretalx.com/juliacon-2022/talk/KKBXAZ/
https://pretalx.com/juliacon-2022/talk/KKBXAZ/feedback/
Purple
WhereTraits.jl has now a disambiguity resolution system!
Lightning talk
2022-07-29T13:00:00+00:00
13:00
00:10
When [WhereTraits.jl](https://github.com/jolin-io/WhereTraits.jl) was published 2 years ago, the key missing feature was a way to address ambiguities between trait function definitions. It is implemented now!
If you as a user encounter a trait conflict, you are now prompted with a concrete example resolution. You simply specify an ordering between the traits and everything is resolved automatically for you.
This feature is only available in WhereTraits - even normal Julia functions cannot do this.
juliacon-2022-18008-wheretraits-jl-has-now-a-disambiguity-resolution-system-
JuliaCon
Stephan Sahm
en
Method disambiguation is one of the top Julia problems and can become tricky to resolve. This is especially true if one of your dependent packages defines one conflicting part and another package defines the other. As a user, you just want to say that the one function should be used instead of the other. Unfortunately, that is not possible; instead you have to look into the actual source code and implement a resolution yourself.
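The plain-Julia situation being described can be reproduced in a few lines (the function `f` here is hypothetical):

```julia
# Two methods that overlap without a most-specific winner:
f(x::Integer, y::Real) = "first"
f(x::Real, y::Integer) = "second"

f(1, 2.0)   # unambiguous: only the first method applies
f(1.0, 2)   # unambiguous: only the second method applies
try
    f(1, 2)  # both methods apply and neither is more specific
catch err
    err isa MethodError  # true; plain Julia offers no way to simply
end                      # declare which method should win
```

WhereTraits' new ordering declaration is precisely the missing "declare which should win" step, applied to traits.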
The same problem occurs with traits, perhaps even more pronounced. If your function has two different trait specializations, let's say one for `MyAwesomeTrait` and another for `GreatGreatTrait`, it is unclear what to do if your type is both a MyAwesomeTrait and a GreatGreatTrait. You will get exactly such a method disambiguation error.
WhereTraits.jl resolved this difficulty in its most recent release by adding support for an ordering between traits, which is used for automatic disambiguation. That is, the user gets exactly the mentioned power of deciding that one trait should be preferred over the other, and all this without any performance penalty. The disambiguation system is designed so that trait definitions and trait orderings can live in different packages and can be defined multiple times, making the system very flexible and generic.
This new feature is outstanding as even normal Julia function dispatch does not support it.
In this talk I am going to present this new feature, and explain how it is implemented.
false
https://pretalx.com/juliacon-2022/talk/ME9GW8/
https://pretalx.com/juliacon-2022/talk/ME9GW8/feedback/
Purple
Invenia Sponsored Talk
Silver sponsor talk
2022-07-29T13:10:00+00:00
13:10
00:05
We are a team of scientists and engineers working together to solve the social, economic and environmental issues that we face in the world today.
juliacon-2022-21254-invenia-sponsored-talk
en
false
https://pretalx.com/juliacon-2022/talk/FTMX73/
https://pretalx.com/juliacon-2022/talk/FTMX73/feedback/
Purple
Juliacon Experiences
BoF (45 mins)
2022-07-29T13:15:00+00:00
13:15
00:45
This session hosts all of this year's experience talks.
juliacon-2022-21240-juliacon-experiences
Julia Frank, Agustin Covarrubias, Valeria Perez, Saranjeet Kaur Bhogal, Marina Cagliari, Patrick Altmeyer, Garrek Stemo, Jeremiah Lasquety-Reyes, Dr. Vikas Negi, Martin Smit, Fábio Rodrigues Sodré, Arturo Erdely, Olga Eleftherakou, Charlie Kawczynski
en
false
https://pretalx.com/juliacon-2022/talk/PTFQER/
https://pretalx.com/juliacon-2022/talk/PTFQER/feedback/
Purple
Metaheuristics.jl: Towards Any Optimization
Talk
2022-07-29T16:30:00+00:00
16:30
00:30
Real-world problems require sophisticated methodologies providing feasible and efficient solutions. Metaheuristics are algorithms proposed to approximate optimal solutions in a short time, making them suitable for applications where saving time is important. The Metaheuristics.jl package implements relevant state-of-the-art algorithms for constrained, multi-, many-objective and bilevel optimization. Moreover, performance indicators are implemented in this package.
juliacon-2022-17107-metaheuristics-jl-towards-any-optimization
JuliaCon
Jesús-Adolfo Mejía-de-Dios
en
This talk presents the main features of Metaheuristics.jl, which is a package for global optimization to approximate solutions for single-, multi-, and many-objective optimization. Several examples are given to illustrate the implementation and the resolution of the different optimization problems.
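A minimal single-objective run looks roughly like the following. The names below (`optimize`, `ECA`, `minimizer`, the 2×D bounds matrix) follow the package's documented interface as I recall it; treat them as assumptions and double-check against the Metaheuristics.jl docs:

```julia
using Metaheuristics  # API names below are assumptions; verify in the docs

f(x) = sum(x .^ 2)                   # objective to minimize
bounds = [-5ones(3) 5ones(3)]'       # row 1: lower bounds, row 2: upper bounds
result = optimize(f, bounds, ECA())  # ECA is one of the provided algorithms
minimizer(result), minimum(result)   # best point found and its objective value
```

Multi-objective problems follow the same pattern with a vector-valued objective and a suitable algorithm.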
false
https://pretalx.com/juliacon-2022/talk/8LRJVY/
https://pretalx.com/juliacon-2022/talk/8LRJVY/feedback/
Purple
InferOpt.jl: combinatorial optimization in ML pipelines
Talk
2022-07-29T17:00:00+00:00
17:00
00:30
We present InferOpt.jl, a generic package for combining combinatorial optimization algorithms with machine learning models. It has two purposes:
- Increasing the expressivity of learning models thanks to new types of structured layers.
- Increasing the efficiency of optimization algorithms thanks to an additional inference step.
Our library provides wrappers for several state-of-the-art methods in order to make them compatible with Julia's automatic differentiation ecosystem.
juliacon-2022-17902-inferopt-jl-combinatorial-optimization-in-ml-pipelines
JuliaCon
Guillaume Dalle, Louis Bouvier, Léo Baty
en
### Overview
We focus on a generic prediction problem: given an instance `x`, we want to predict an output `y` that minimizes the cost function `c(y)` on a feasible set `Y(x)`. When `Y(x)` is combinatorially large, a common approach in the literature is to exploit a surrogate optimization problem, which is usually a Linear Program (LP) `max_y θᵀy`.
A typical use of InferOpt.jl is integrating the optimization problem (LP) into a structured learning pipeline of the form `x -> θ -> y`, where the cost vector `θ = φ_w(x)` is given by an ML encoder. Our goal is to learn the weights `w` in a principled way. To do so, we consider two distinct paradigms:
1. *Learning by experience*, whereby we want to minimize the cost induced by our pipeline using only past instances `x`.
2. *Learning by imitation*, for which we have "true" solutions `y` or cost vectors `θ` associated with each past instance `x`.
We provide a unified framework to derive well-known loss functions, and we pave the way for new ones. Our package will be open-sourced in time for JuliaCon 2022.
### Related works
InferOpt.jl gathers many previous approaches to derive (sub-)differentiable layers in structured learning:
- [_Differentiation of Blackbox Combinatorial Solvers_](https://arxiv.org/pdf/1912.02175.pdf) for linear interpolations of piecewise constant functions
- [_Learning with Fenchel-Young Losses_](https://arxiv.org/abs/1901.02324) for regularized optimizers and the associated structured losses
- [_Learning with Differentiable Perturbed Optimizers_](https://arxiv.org/abs/2002.08676) for stochastically-perturbed optimizers
- [_Structured Support Vector Machines_](https://pub.ist.ac.at/~chl/papers/nowozin-fnt2011.pdf) for cases in which we have a distance on the output space
- [_Smart "Predict, then Optimize"_](https://arxiv.org/abs/1710.08005) for two-stage decision frameworks in which we know past true costs
In addition, we provide several tools for directly minimizing the cost function using smooth approximations.
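One of the wrapped techniques is easy to sketch in isolation: a perturbed optimizer replaces the piecewise-constant argmax map with a Monte-Carlo average over Gaussian perturbations of `θ`, yielding an output that varies smoothly with `θ`. This is an illustrative sketch of the idea, not InferOpt.jl's implementation:

```julia
using Statistics

# One-hot maximizer over simplex vertices: piecewise constant in θ,
# hence useless for gradient-based learning as-is.
onehot_argmax(θ) = (y = zeros(length(θ)); y[argmax(θ)] = 1.0; y)

# Stochastic smoothing in the spirit of "Learning with Differentiable
# Perturbed Optimizers": average the maximizer over Gaussian noise on θ.
function perturbed(maximizer, θ; ε = 1.0, M = 1000)
    mean(maximizer(θ .+ ε .* randn(length(θ))) for _ in 1:M)
end
```

For `onehot_argmax`, the smoothed output is a probability vector over the vertices, which is exactly what makes a loss on top of it differentiable in `θ`.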
### Package content
Since we want our package to be as generic as possible, we do not make any assumption on the kind of algorithm used to solve combinatorial problems. We only ask the user to provide a callable `maximizer`, which takes the cost vector `θ` as argument and returns a solution `y`: regardless of the implementation, our wrappers can turn it into a differentiable layer.
As such, our approach is different from that of [DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl), in which the optimizer has to be a convex [JuMP.jl](https://github.com/jump-dev/JuMP.jl) model. It is also different from [ImplicitDifferentiation.jl](https://github.com/gdalle/ImplicitDifferentiation.jl), which implements a single approach for computing derivatives (whereas we provide several), and does not include structured loss functions.
All of our wrappers come with their own forward and reverse differentiation rules, defined using [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl). As a result, they are compatible with a wide range of automatic differentiation backends and machine learning libraries. For instance, if the encoder `φ_w` is a [Flux.jl](https://github.com/FluxML/Flux.jl) model, then the wrapped optimizer can also be included as a layer in a `Flux.Chain`.
### Examples
We include various examples and tutorials to apply this generic framework to concrete problems. Since our wrappers are model- and optimizer-agnostic, we can accommodate a great variety of algorithms for both aspects.
On the optimization side, our examples make use of:
- Mixed-Integer Linear Programs;
- Shortest path algorithms;
- Scheduling algorithms;
- Dynamic Programming.
On the model side, we exploit the following classes of predictors:
- Generalized Linear Models;
- Convolutional Neural Networks;
- Graph Neural Networks.
false
https://pretalx.com/juliacon-2022/talk/P7XJCV/
https://pretalx.com/juliacon-2022/talk/P7XJCV/feedback/
Purple
Time to Say Goodbye to Good Old PCA
Lightning talk
2022-07-29T17:30:00+00:00
17:30
00:10
In this talk, I present ProjectionPursuit.jl, a package designed to address a key limitation of PCA: its lack of flexibility for dimension reduction. I also discuss the background of projection pursuit, explain why the result of PCA can be misleading, and compare projection pursuit and PCA on some data examples.
An alternative title of this talk is “Bring Your Own Objective Function: Why PCA can be a bad idea?”.
juliacon-2022-16864-time-to-say-goodbye-to-good-old-pca
JuliaCon
/media/juliacon-2022/submissions/YMFXKE/pp_pxvtPHr.jpg
Yijun Xie
en
Principal Component Analysis (PCA) is arguably the most popular dimension reduction method in practice. The basic idea of PCA is to reduce the dimension of the original data while retaining as much variance as possible. While sometimes effective, in practice PCA has some pitfalls, especially when the components of interest in the data are orthogonal to the leading principal components. A possible alternative, the projection pursuit technique, was proposed by Kruskal (1972) and Friedman and Tukey (1974). However, unlike PCA, projection pursuit does not have a closed-form solution, and its computational complexity has thus limited its prevalence. In this talk, we present the Julia package ProjectionPursuit.jl, which combines the new computational tools needed to implement high-dimensional projection pursuit with the speed of Julia. We show that with the help of this package, one can easily apply the idea of projection pursuit in various applications.
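The contrast between PCA and projection pursuit can be sketched in a few lines of plain Julia; `pursue` below is a toy random search, not ProjectionPursuit.jl's actual interface.

```julia
using LinearAlgebra, Random, Statistics

# Projection pursuit in a nutshell: search for the unit direction w that
# maximizes a user-chosen objective of the projected data X * w.
# PCA is the special case where the objective is the variance.
function pursue(X, objective; ntries=5000, rng=MersenneTwister(1))
    best_w, best_val = normalize(ones(size(X, 2))), -Inf
    for _ in 1:ntries
        w = normalize(randn(rng, size(X, 2)))   # random unit direction
        val = objective(X * w)
        if val > best_val
            best_w, best_val = w, val
        end
    end
    return best_w
end

rng = MersenneTwister(0)
# Two clusters split along dimension 1, swamped by noise along dimension 2.
X = [randn(rng, 200, 2) .* [1 5] .+ [4 0];
     randn(rng, 200, 2) .* [1 5] .- [4 0]]

w_var = pursue(X, var)   # variance objective ≈ first principal component
# Minimizing kurtosis rewards bimodal (two-cluster) projections instead.
w_pp = pursue(X, v -> -mean(((v .- mean(v)) ./ std(v)) .^ 4))
```

On this data, the variance objective follows the high-variance noise dimension, while the kurtosis-based objective recovers the direction separating the two clusters, illustrating how PCA can be misleading.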
false
https://pretalx.com/juliacon-2022/talk/YMFXKE/
https://pretalx.com/juliacon-2022/talk/YMFXKE/feedback/
Purple
RegressionFormulae.jl: familiar `@formula` syntax for regression
Lightning talk
2022-07-29T17:40:00+00:00
17:40
00:10
StatsModels.jl provides the `@formula` mini-language for conveniently specifying table-to-matrix transformations for statistical modeling. [RegressionFormulae.jl](https://github.com/kleinschmidt/RegressionFormulae.jl) extends this mini-language with additional syntax that users coming from other statistical modeling ecosystems such as R may be familiar with. This package also serves as a template for developers who wish to extend the StatsModels.jl `@formula` syntax in their own packages.
juliacon-2022-18125-regressionformulae-jl-familiar-formula-syntax-for-regression
JuliaCon
Dave Kleinschmidt, Phillip Alday
en
StatsModels.jl provides the `@formula` mini-language for conveniently specifying table-to-matrix transformations for statistical modeling. This mini-language is designed with extensibility and composability in mind, using normal Julia mechanisms of multiple dispatch to implement additional syntax both inside StatsModels.jl and in external packages. RegressionFormulae.jl takes advantage of this extensibility to provide _additional syntax_ that is familiar to many users of other statistical software (e.g., R) in an "opt-in" manner, without forcing _all_ downstream packages that depend on StatsModels.jl/`@formula` to support this syntax.
The StatsModels.jl `@formula` syntax is based on the Wilkinson-Rogers Formula Notation which has been a widely-used standard in multi-factor regression modeling since it was first described in Wilkinson and Rogers (1973). The basic syntax includes operators for _addition_ (`+`) and _crossing_ (`&` and `*`) of regressors, as well as the `~` operator to link outcome and regressor terms. As the conventions around this syntax have evolved in the last 50 years, other systems have introduced additional operators.
RegressionFormulae.jl expands the StatsModels.jl `@formula` to support two commonly-used operators from R: `^` (incomplete crossing) and `/` (nesting). Specifically, it implements
- `(a + b + c + ...) ^ n` to create all interactions up to `n`-way, corresponding to an incomplete cross of `a, b, c, ...`.
- `a / b` to create `a + a & b`, which results in a "nested" model of `b`, with a separate coefficient for `b` for each level of `a`.
Both of these operators are particularly useful for creating _interpretable_ models. Models with high-order interactions are extremely challenging to interpret and require considerable care, and are prone to over-fitting since the number of coefficients grows very quickly with additional terms participating in the interactions. The incomplete cross `^` syntax can ameliorate these difficulties, limiting the highest degree of the resulting interaction terms and reducing the overall number of predictors. Nesting (`a / b`) similarly provides an alternative to fully crossed models (`a * b`) that is more directly interpretable in situations where the analytic questions are focused on the effects of a predictor `b` within each individual level of some other variable `a`, without concern for direct _comparison_ of these effects to each other.
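To illustrate the semantics of the incomplete cross, here is a toy string-level expansion; RegressionFormulae.jl itself operates on StatsModels term objects, and `incomplete_cross` is purely illustrative.

```julia
# (a + b + c)^n generates every interaction of the named terms up to order n.
function incomplete_cross(terms::Vector{Symbol}, n::Integer)
    m = length(terms)
    out = String[]
    for order in 1:n, mask in 0:(2^m - 1)
        # subsets of terms encoded as bitmasks, grouped by interaction order
        idx = [i for i in 1:m if (mask >> (i - 1)) & 1 == 1]
        length(idx) == order && push!(out, join(terms[idx], " & "))
    end
    return out
end

incomplete_cross([:a, :b, :c], 2)
# → ["a", "b", "c", "a & b", "a & c", "b & c"]
```

By contrast, the full cross `a * b * c` would also include the three-way interaction; nesting `a / b` simply expands to `a + a & b`.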
Finally, this syntax is implemented in a way that does not _require_ other modeling packages that use `@formula` to support them, or even _prevent_ other packages from defining _alternative_ meaning to the `^` or `/` operators. Within a `@formula`, the special syntax is implemented by methods like
```julia
function StatsModels.apply_schema(
t::FunctionTerm{typeof(/)},
...
```
and
```julia
function Base.:(/)(outer::CategoricalTerm, inner::AbstractTerm)
...
```
The result of this is that if RegressionFormulae.jl is not loaded, then `/` and `^` inside a `@formula` behave exactly as they normally would (e.g., as calls to the normal Julia functions `/` and `^`). Moreover, if a user loads RegressionFormulae.jl at the same time as some other package that defines special syntax for `/` or `^` (for `RegressionModel`), they will receive a warning about method redefinition or method ambiguity.
false
https://pretalx.com/juliacon-2022/talk/HFG3AW/
https://pretalx.com/juliacon-2022/talk/HFG3AW/feedback/
Purple
Random utility models with DiscreteChoiceModels.jl
Lightning talk
2022-07-29T17:50:00+00:00
17:50
00:10
Random utility models are widely used in social science. While most statistical software, including Julia, has some facilities for estimating multinomial logit models, more advanced models such as mixed logit models and models with different utility functions for different outcomes generally require specialized choice modeling software. This presentation describes a new package, DiscreteChoiceModels.jl, which provides flexible, high-performance multinomial logit estimation, with mixed logit estimation forthcoming.
juliacon-2022-18069-random-utility-models-with-discretechoicemodels-jl
JuliaCon
Matthew Wigginton Bhagat-Conway
en
Random utility models are ubiquitous in fields including economics, transportation, and marketing [1]. While estimation of simple multinomial logit models is available in many statistical packages, including Julia via Econometrics.jl [2], more advanced choice models are generally fit with choice-model-specific packages, e.g., [3], [4]. These packages allow more-flexible utility specifications by letting utility function definitions vary over outcomes, and by supporting additional forms of random utility model, such as the mixed logit model, which allows random parameter variation [5].
DiscreteChoiceModels.jl provides such a package for Julia. It has an intuitive syntax for specifying discrete-choice models, allowing users to directly write out utility functions. For instance, the code below specifies the Swissmetro example mode-choice model distributed with Biogeme [3]:
```julia
multinomial_logit(
    @utility(begin
        1 ~ αtrain + βtravel_time * TRAIN_TT / 100 + βcost * (TRAIN_CO * (GA == 0)) / 100
        2 ~ αswissmetro + βtravel_time * SM_TT / 100 + βcost * SM_CO * (GA == 0) / 100
        3 ~ αcar + βtravel_time * CAR_TT / 100 + βcost * CAR_CO / 100
    end),
    :CHOICE,
    data,
    availability=[
        1 => :avtr,
        2 => :avsm,
        3 => :avcar,
    ]
)
```
Within the utility function specification (@utility), the first three lines specify the utility functions for each of the three modes encoded by the CHOICE variable: train, the hypothetical Swissmetro, and car. Any variable starting with α or β is treated as a coefficient to be estimated, while other variables are assumed to be data columns. The remainder of the model specification indicates that the choice is recorded in the variable CHOICE, what data to use, and, optionally, which columns indicate the availability of each alternative.
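What estimation maximizes under the hood can be sketched generically (this is a sketch of the multinomial logit model itself, not the package internals): choice probabilities are a softmax of the systematic utilities, restricted to available alternatives.

```julia
# Multinomial logit choice probabilities over available alternatives.
function choice_probs(utilities, available)
    u_max = maximum(utilities[available])   # shift for numerical stability
    z = [available[j] ? exp(utilities[j] - u_max) : 0.0 for j in eachindex(utilities)]
    return z ./ sum(z)
end

p = choice_probs([0.5, 1.0, -0.2], [true, true, true])
# The log-likelihood contribution of an observed choice j is log(p[j]).
```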
Mixed logit models
Support for mixed logit models is under development. Mixed logit models will specify random coefficients as distributions from Distributions.jl [6]. For instance, to specify that αtrain should be normally distributed with mean 0 and standard deviation 1 as starting values, you would add
```julia
αtrain = Normal(0, exp(0))
```
with the exponent indicating that the value will be exponentiated to ensure that the standard deviation will always be positive.
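The simulation approach behind mixed logit can be sketched in plain Julia (a generic sketch of the method, not the forthcoming API): draw the random coefficient, compute the conditional logit probability for each draw, and average.

```julia
using Random, Statistics

# Conditional logit probability of alternative j given utilities u.
logit_prob(u, j) = exp(u[j]) / sum(exp.(u))

# Simulated mixed logit probability with a scalar random coefficient
# β ~ Normal(μ, σ) applied to alternative attributes x.
function mixed_logit_prob(x, j; μ=0.0, logσ=0.0, ndraws=2000, rng=MersenneTwister(7))
    σ = exp(logσ)   # estimating log σ keeps the standard deviation positive
    return mean(logit_prob(β .* x, j) for β in μ .+ σ .* randn(rng, ndraws))
end

# Two alternatives with attribute levels 1 and 0; with μ = 0 the simulated
# probability is close to one half by symmetry.
pm = mixed_logit_prob([1.0, 0.0], 1)
```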
Performance
Julia is designed for high-performance computing, so a major goal of DiscreteChoiceModels.jl is to estimate models more quickly than other modeling packages. To that end, two multinomial logit models were developed and benchmarked with three packages (DiscreteChoiceModels.jl, Biogeme [3], and Apollo [4]), using default settings for all three. The first model is the Swissmetro example from Biogeme, with 6,768 observations, 3 alternatives, and 4 free parameters. The second is a vehicle ownership model using the 2017 US National Household Travel Survey, with 129,696 observations, 5 alternatives, and 35 free parameters. All runtimes are the median of 10 runs, executed serially on a quad-core Intel i7 with 16GB of RAM, running Debian 11.1. DiscreteChoiceModels.jl outperforms the other packages when used with a DataFrame, while using Dagger introduces distributed-computing overhead on a single machine.
| Model | DiscreteChoiceModels.jl: DataFrame | DiscreteChoiceModels.jl: Dagger | Biogeme | Apollo |
| --- | --- | --- | --- | --- |
| Swissmetro | 188 ms | 2047 ms | 252 ms | 824 ms |
| Vehicle ownership | 35.1 s | 46.9 s | 163.4 s | 227.2 s |
References
[1] M. Ben-Akiva and S. R. Lerman, Discrete choice analysis: Theory and application to travel demand. MIT Press, 1985.
[2] J. B. S. Calderón, “Econometrics.jl,” Proc JuliaCon Conf, doi: 10.21105/jcon.00038.
[3] M. Bierlaire, “A short introduction to PandasBiogeme,” Ecole Poltechnique Fédérale de Lausanne, Lausanne, TRANSP-OR 200605, Jun. 2020. Available: https://transp-or.epfl.ch/documents/technicalReports/Bier20.pdf
[4] S. Hess and D. Palma, “Apollo: A flexible, powerful and customisable freeware package for choice model estimation and application,” J Choice Model, doi: 10.1016/j.jocm.2019.100170.
[5] K. Train, Discrete Choice Methods with Simulation. Cambridge, UK: Cambridge University Press, 2009.
[6] M. Besançon et al., “Distributions.jl: Definition and Modeling of Probability Distributions in the JuliaStats Ecosystem,” J Stat Soft, doi: 10.18637/jss.v098.i16.
false
https://pretalx.com/juliacon-2022/talk/DHYP8U/
https://pretalx.com/juliacon-2022/talk/DHYP8U/feedback/
Blue
A Fresh Approach to Open Source Voice Assistant Development
Lightning talk
2022-07-29T12:30:00+00:00
12:30
00:10
We present JustSayIt.jl, a software package and high-level API for offline, low latency and secure translation of human speech to computer commands or text, leveraging the Vosk Speech Recognition Toolkit. The API includes an unprecedented, highly generic extension to the Julia programming language, which makes it possible to declare arguments in standard function definitions as obtainable by voice. As a result, it empowers any programmer to quickly write new commands that take arguments from human voice.
juliacon-2022-17955-a-fresh-approach-to-open-source-voice-assistant-development
JuliaCon
Samuel Omlin
en
Leading software companies have invested heavily in voice assistant software since the dawn of the century. However, they naturally prioritize use cases that directly or indirectly bring economic profit. As a result, their developments cover, e.g., the needs of the entertainment sector abundantly, but those of academia and software development only poorly. There is particularly little support for Linux, even though it is the preferred operating system of many software developers and computational scientists. The open source voice assistant project MyCroft fully supports Linux, but provides few tools that are helpful for productive work in academia and software development; moreover, adding new skills to MyCroft appears to be complex for average users and to require considerable knowledge of MyCroft's specifics. [JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) addresses these shortcomings by providing a lightweight framework for easily extensible, offline, low latency, highly accurate and secure speech-to-command or speech-to-text translation on Linux, MacOS and Windows.
[JustSayIt](https://github.com/omlins/JustSayIt.jl)'s high-level API makes it possible to declare arguments in standard Julia function definitions as obtainable by voice, which constitutes an unprecedented, highly generic extension to the Julia programming language. For such functions, [JustSayIt](https://github.com/omlins/JustSayIt.jl) automatically generates a wrapper method that takes care of the complexity of retrieving the arguments from the speaker's voice, including interpretation and conversion of the voice arguments to potentially any data type. [JustSayIt](https://github.com/omlins/JustSayIt.jl) commands are implemented with such voice argument functions, triggered by a user-definable mapping of command names to functions. As a result, it empowers programmers without any knowledge of speech recognition to quickly write new commands that take their arguments from the speaker's voice. Moreover, [JustSayIt](https://github.com/omlins/JustSayIt.jl) unites the Julia and Python communities by using both languages: it leverages Julia's performance and metaprogramming capabilities and Python's larger ecosystem where no Julia package is considered suitable. [JustSayIt](https://github.com/omlins/JustSayIt.jl) relies on PyCall.jl and Conda.jl, which render installing and calling Python packages from within Julia almost trivial. [JustSayIt](https://github.com/omlins/JustSayIt.jl) is ideally suited for development by the worldwide open source community, as it provides an intuitive high-level API that is readily understandable by any programmer.
[JustSayIt](https://github.com/omlins/JustSayIt.jl) implements a novel algorithm for high-performance, context-dependent recognition of spoken commands, which leverages the [Vosk Speech Recognition Toolkit](https://github.com/alphacep/vosk-api/). A specialized high-performance recognizer is defined for each function argument that is obtainable by voice and has a restriction on the valid input. In addition, when beneficial for recognition accuracy, the recognizer for a voice argument is generated dynamically depending on the command path taken before the argument. To enable minimal latency for single-word commands (latency here refers to the time elapsed between when a command is spoken and when it is executed), the latter can in certain conditions be triggered upon bare recognition of the corresponding sounds, without waiting for the silence that normally confirms a recognition. Thus, [JustSayIt](https://github.com/omlins/JustSayIt.jl) is suitable for commands where a perceivable latency would be unacceptable, such as mouse clicks. Single-word commands' latency is typically on the order of a few milliseconds on a regular notebook. [JustSayIt](https://github.com/omlins/JustSayIt.jl) achieves this high performance using only one CPU core and can therefore run continuously without harming the computer usage experience.
In conclusion, [JustSayIt](https://github.com/omlins/JustSayIt.jl) demonstrates that the development of our future voice assistants can take a fresh and new path that is neither driven by the priorities and economic interests of global software companies nor by a small open source community of speech recognition experts; instead, the entire world-wide open source community is empowered to contribute in shaping our future daily assistants.
false
https://pretalx.com/juliacon-2022/talk/H3N8UN/
https://pretalx.com/juliacon-2022/talk/H3N8UN/feedback/
Blue
ImplicitDifferentiation.jl: differentiating implicit functions
Lightning talk
2022-07-29T12:40:00+00:00
12:40
00:10
We present a Julia package for differentiating through functions that are defined implicitly. It can be used to compute derivatives for a wide array of "black box" procedures, from optimization algorithms to fixed point iterations or systems of nonlinear equations.
Since it mostly relies on defining custom chain rules, our code is lightweight and integrates nicely with Julia's automatic differentiation and machine learning ecosystem.
juliacon-2022-17893-implicitdifferentiation-jl-differentiating-implicit-functions
JuliaCon
Guillaume Dalle, Mohamed Tarek
en
### Introduction
Differentiable programming is a core ingredient of modern machine learning, and it is one of the areas where Julia truly shines. By defining new kinds of differentiable layers, we can hope to increase the expressivity of deep learning pipelines without having to scale up the number of parameters.
For instance, in structured prediction settings, domain knowledge can be encoded into optimization problems of many flavors: linear, quadratic, conic, nonlinear or even combinatorial. In domain adaptation, differentiable distances based on optimal transport are often computed using the Sinkhorn fixed point iteration algorithm. Last but not least, in differential equation-constrained optimization and neural differential equations, one often needs to obtain derivatives for solutions of nonlinear equation systems with respect to equation parameters.
Note that these complex functions are all defined *implicitly*, through a condition that their output must satisfy. As a consequence, differentiating said output (e.g. the minimizer of an optimization problem) with respect to the input (e.g. the cost vector or constraint matrix) requires automating the application of the implicit function theorem.
### Related works
When trying to differentiate through iterative procedures, unrolling the loop is a natural approach. However, it is computationally demanding and it only works for pure Julia code with no external "black box" calls. On the other hand, using the implicit function theorem means we can decouple the derivative from the function itself: see [_Efficient and Modular Implicit Differentiation_](https://arxiv.org/abs/2105.15183) for an overview of the related theory.
In the last few years, this implicit differentiation paradigm has given rise to several Python libraries such as [OpenMDAO](https://github.com/OpenMDAO/OpenMDAO), [cvxpylayers](https://github.com/cvxgrp/cvxpylayers) and [JAXopt](https://github.com/google/jaxopt). In Julia, the most advanced one is [DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl), which allows the user to differentiate through a [JuMP.jl](https://github.com/jump-dev/JuMP.jl) optimization model. A more generic approach was recently experimented with in [NonconvexUtils.jl](https://github.com/JuliaNonconvex/NonconvexUtils.jl): our goal with [ImplicitDifferentiation.jl](https://github.com/gdalle/ImplicitDifferentiation.jl) is to make it more efficient, reliable and easily usable for everyone.
### Package content
Our package provides a simple toolbox that can differentiate through any kind of user-specified function `x -> y(x)`. The only requirement is that its output be characterized by a condition of the form `F(x,y(x)) = 0`.
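The mechanics can be seen in a self-contained one-dimensional toy (this illustrates the implicit function theorem itself, not the package API):

```julia
# y(x) = √x, defined implicitly by F(x, y) = y² - x = 0 and computed by a
# "black-box" Newton solver that we never differentiate through.
function forward(x)
    y = one(x)
    for _ in 1:50
        y -= (y^2 - x) / (2y)   # Newton step on F(x, ⋅)
    end
    return y
end

F(x, y) = y^2 - x
∂F∂x(x, y) = -1.0
∂F∂y(x, y) = 2y

# Implicit function theorem: differentiating F(x, y(x)) = 0 gives
# y′(x) = -(∂F/∂y)⁻¹ ∂F/∂x, independent of how `forward` is implemented.
implicit_derivative(x) = (y = forward(x); -∂F∂x(x, y) / ∂F∂y(x, y))

implicit_derivative(4.0)   # 1 / (2√4) = 0.25
```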
Beyond the generic machinery of implicit differentiation, we also include several use cases as tutorials: unconstrained and constrained optimization, fixed point algorithms and nonlinear equation systems, etc.
### Technical details
The central construct of our package is a wrapper of the form
```julia
struct ImplicitFunction{O,C}
forward::O
conditions::C
end
```
where `forward` computes the mapping `x -> y(x)`, while `conditions` corresponds to `(x,y) -> F(x, y)`. By defining custom pushforwards and pullbacks, we ensure that `ImplicitFunction` objects can be used with any [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl)-compatible automatic differentiation backend, in forward or reverse mode.
To attain maximum efficiency, we never actually store a full Jacobian matrix: we only reason with vector-Jacobian and Jacobian-vector products. Thus, when solving linear systems (which is a key requirement of implicit differentiation), we exploit iterative Krylov subspace methods for their ability to handle lazy linear operators.
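Why lazy operators suffice can be seen with a bare-bones conjugate gradient solver (a sketch, not the Krylov solver the package actually uses): the matrix is only ever touched through products `v -> A * v`, so a Jacobian never has to be materialized.

```julia
using LinearAlgebra

# Conjugate gradients for a symmetric positive-definite operator Aop,
# given only as a matrix-vector product.
function cg(Aop, b; tol=1e-10, maxiter=100)
    x = zero(b)
    r = b - Aop(x)
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:maxiter
        Ap = Aop(p)
        α = rs / dot(p, Ap)
        x .+= α .* p
        r .-= α .* Ap
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && break
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x
end

A = [4.0 1.0; 1.0 3.0]          # stands in for ∂F/∂y, never stored in practice
x = cg(v -> A * v, [1.0, 2.0])  # solves A * x = [1.0, 2.0]
```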
false
https://pretalx.com/juliacon-2022/talk/DTHTBC/
https://pretalx.com/juliacon-2022/talk/DTHTBC/feedback/
Blue
Du Bois Data Visualizations: A Julia Recipe
Lightning talk
2022-07-29T12:50:00+00:00
12:50
00:10
We introduce a new plotting package allowing users to easily create publication-quality figures in W.E.B. Du Bois’s unique style of data visualizations. A groundbreaking sociologist and historian, Du Bois collected data on Black Georgia residents in the late 19th century and designed over 60 eye-catching graphs to depict these data at the 1900 Paris Exposition. We showcase our package by replicating the original figures exactly and by revisiting them with new data.
juliacon-2022-17834-du-bois-data-visualizations-a-julia-recipe
JuliaCon
/media/juliacon-2022/submissions/FZ38VW/Plate_25_old_4KYrheV.png
Eirik Brandsaas & Kyra Sadovi
en
This collection of plotting recipes uses the Makie.jl plotting package to recreate some of Du Bois’s most famous and complex data visualizations on the state of African-Americans at the turn of the 20th century. In addition to replicating the originals, users can easily create figures with their own data in the same style and format, ready for publication.
W.E.B. Du Bois (b. 1868, d. 1963) was a sociologist, historian, and civil rights activist. Du Bois originally presented these plates at the 1900 Paris Exposition. He and his team of researchers at Atlanta University collected data on Black Georgia residents in order to create a comprehensive view of the quality of life and aggregate characteristics of what was at the time the largest African-American population in any U.S. state. Using these data, Du Bois and his team created two series of data visualizations, 63 plates in total, six of which are recreated by this package. They are unusual in both shape and color palette as they were meant to capture viewers’ attention at the Paris Exposition – a venue where audiences might not otherwise stop at a table with information on African-American populations if not for eye-catching data visualizations.
A number of groups and individuals have contributed to the project of digitizing and recreating these plates. The Du Boisian Visualization Toolkit, from the Dignity + Debt Network, provides information on Du Bois’s most-used color palettes and fonts. Multiple R packages, style guides, Excel, Tableau, and Python replications, and other resources have been contributed to this project to replicate the original figures. Our package contributes on two fronts. First, we present the first replication using Julia. Second, and more importantly, our package is designed to allow users to present their own data instead of merely replicating the originals.
We have attached one picture that shows one example of what users can obtain from our package.
References
Du Bois, W. E. B., Battle-Baptiste, W., & Rusert, B. (2018). W.E.B du Bois's Data Portraits: Visualizing Black America. W.E.B. Du Bois Center at the University of Massachusetts Amherst.
Link to Makie.jl package: https://makie.juliaplots.org/stable/
Link to Du Bois challenge and original exhibits: https://github.com/ajstarks/dubois-data-portraits/tree/master/challenge
false
https://pretalx.com/juliacon-2022/talk/FZ38VW/
https://pretalx.com/juliacon-2022/talk/FZ38VW/feedback/
Blue
Data Analysis and Visualization with AlgebraOfGraphics
Lightning talk
2022-07-29T13:00:00+00:00
13:00
00:10
Based on the Makie library, AlgebraOfGraphics offers visualizations for common analyses (frequency table, 1- or 2-D histogram and kernel density, linear and non-linear regression...), as well as functions to express how the data should be processed, grouped, styled, and visualized. These building blocks can be combined using the `*` and `+` operators, thus forming a rich algebra of visualizations. The unified syntax layer simplifies the creation of AlgebraOfGraphics-based UIs for data analysis.
juliacon-2022-17743-data-analysis-and-visualization-with-algebraofgraphics
JuliaCon
/media/juliacon-2022/submissions/V3NPES/aog_demo_YKNlEXE.png
Pietro Vertechi
en
In this talk, I will give an overview of the Algebra of Graphics approach to data visualizations in Julia.
Algebra of Graphics is an adaptation of the Grammar of Graphics—a declarative language to define visualizations by mapping columns of a dataset to plot attributes—to the Julia programming language and philosophy, with some important differences.
I will first discuss the key components of AlgebraOfGraphics (data selection, mapping of data to plot attributes, analysis and visualization selection) and show how they can be combined multiplicatively (by merging information) or additively (by drawing distinct visualizations on separate layers).
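The algebraic structure can be sketched in miniature with toy structs (these are illustrative, not AlgebraOfGraphics types): `*` merges the options of two layers, `+` superimposes layers, and `*` distributes over `+`.

```julia
# A "layer" is just a bag of visualization options here.
struct Layer
    opts::NamedTuple
end
struct Layers
    layers::Vector{Layer}
end

Base.:*(a::Layer, b::Layer) = Layer(merge(a.opts, b.opts))   # merge information
Base.:+(a::Layer, b::Layer) = Layers([a, b])                 # separate layers
Base.:*(ls::Layers, b::Layer) = Layers([l * b for l in ls.layers])  # distributes

data_sel = Layer((data = :penguins,))
scatter  = Layer((plot = :scatter,))
density  = Layer((plot = :density,))

# Two visualizations of the same data, drawn on separate layers:
spec = (scatter + density) * data_sel
```

The distributive law is what lets a single data selection be reused across every visualization it is multiplied with.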
Then, I will delve into the AlgebraOfGraphics philosophy. The aim of AlgebraOfGraphics is to empower users to produce visualizations that answer _questions_ about their data. This is achieved via _reusable building blocks_, which the users define based on their knowledge of the data. Rich visualizations can be built by combining these building blocks: I will demonstrate this technique on an example dataset. I will also show how AlgebraOfGraphics attempts to lessen the cognitive burden on the user by providing opinionated graphical defaults as well as wide format support. That way, the user can focus on the question at hand, rather than visual fine-tuning or data wrangling.
As the AlgebraOfGraphics syntax is uniform, it can be used as the backend to a Graphical User Interface for data analysis and visualization. I will show a prototype of such a GUI, based on AlgebraOfGraphics, web technologies, and the web-based backend of Makie.
false
https://pretalx.com/juliacon-2022/talk/V3NPES/
https://pretalx.com/juliacon-2022/talk/V3NPES/feedback/
Blue
QuEra Computing Sponsor Talk
Silver sponsor talk
2022-07-29T13:25:00+00:00
13:25
00:05
QuEra is a neutral-atom based quantum computing startup located in the heart of Boston near Harvard University.
juliacon-2022-21250-quera-computing-sponsor-talk
en
false
https://pretalx.com/juliacon-2022/talk/ENV8KX/
https://pretalx.com/juliacon-2022/talk/ENV8KX/feedback/
Blue
Working with Firebase in Julia
Talk
2022-07-29T13:30:00+00:00
13:30
00:30
In this talk, I discuss the use of Firebase in Julia through Firebase.jl
https://github.com/ashwani-rathee/Firebase.jl
Many databases are well supported in Julia, but support for Firebase has been rather limited. We want to attract more young people to the Julia community, and a sizable share of them prefer Firebase for their relatively small projects. Through this talk, I want to demonstrate how to use Firebase.jl for project development.
juliacon-2022-18123-working-with-firebase-in-julia
JuliaCon
Ashwani Rathee
en
Firebase.jl is the solution for working with Firebase from the Julia programming language. Firebase.jl provides support for the Realtime Database, Cloud Firestore, Storage, and Authentication, which are quite useful in small and large projects alike.
false
https://pretalx.com/juliacon-2022/talk/HFNKTC/
https://pretalx.com/juliacon-2022/talk/HFNKTC/feedback/
Blue
Interactive Julia data dashboards with Genie
Talk
2022-07-29T16:30:00+00:00
16:30
00:30
Genie provides a powerful set of features for fast and easy creation of interactive data dashboards, helping data and research scientists to design, build, and publish production ready interactive apps and dashboards using pure Julia. In this talk we'll explain and demonstrate how to build a production ready, powerful data dashboard, going from 0 to live in 20 minutes!
juliacon-2022-18030-interactive-julia-data-dashboards-with-genie
JuliaCon
/media/juliacon-2022/submissions/BH8WSR/Screenshot_2022-04-10_at_12.45.36_ezj55PR.png
Adrian Salceanu, Helmut Hänsel
en
Building upon over 5 years of experience with open source web development in Julia, Genie's powerful dashboarding features allow Julia users to develop and publish data apps without needing any web development techniques. Genie exposes a smooth workflow and rich programming APIs that allow data and research scientists to control all aspects of building and deploying data apps and dashboards using only their favourite programming language: Julia.
false
https://pretalx.com/juliacon-2022/talk/BH8WSR/
https://pretalx.com/juliacon-2022/talk/BH8WSR/feedback/
Blue
Declarative data transformation via graph transformation
Talk
2022-07-29T17:00:00+00:00
17:00
00:30
SQL is far from the only declarative paradigm for specifying the dynamics of data! The field of graph transformation formalizes a generalization of term rewriting systems that is visual, intuitive, and applicable to a wide array of data structures, including Catlab's ACSet datatypes. We will describe the basic theory of graph transformation and show how our implementation in Catlab.jl can be applied to e-graph equality saturation and general agent-based model simulations.
juliacon-2022-17989-declarative-data-transformation-via-graph-transformation
JuliaCon
/media/juliacon-2022/submissions/SHU83W/Screen_Shot_2022-04-08_at_7.05.24_PM_XO3hz9j.png
Kristopher Brown
en
What data structure is able to characterize the process of transforming data? Certainly a pointer to a function is sufficient, but this is opaque, making static analysis difficult and lowering code intelligibility. SQL statements offer more transparency at the cost of some expressivity, yet they remain difficult to read and interpret as complexity scales. The graph transformation paradigm expresses data update, addition, and deletion in terms of rewrite rules, which are combinatorial structures characterizing patterns of data for matching or replacement.
Graph rewriting techniques are specified in the notoriously abstract language of category theory, although historically concrete implementations have been restricted to labelled graphs. Catlab.jl is an applied category theory package which has recently added a performant implementation of graph rewriting (double, single, and sesqui pushout paradigms) at a novel level of generality, allowing a generic interface to the major application areas of graph rewriting: graph languages (rewriting defines a graph grammar), graph relations (rewriting is a relation between input and output data structures), and graph transition systems (rewriting evolves a system in time).
The scientific community particularly benefits from its code being interpretable and having a straightforward semantics. Just like many low-level data manipulations are made safer and more transparent when redescribed as SQL operations, we argue that many data transformation applications would benefit from replacing arbitrary code with a set of declarative rewrite rules. We demonstrate this through some complex applications constructed from these building blocks. This includes an extended example of an epidemiological agent-based model (in collaboration with epidemiologist Sean Wu, a research scientist at the Institute for Health Metrics and Evaluation) and an example of equational laws defining an e-graph data structure, with rewrite rules inducing an equality saturation procedure for free. The pattern-based API is easy to use and interpret - no category theory is needed to understand or use these tools!
false
https://pretalx.com/juliacon-2022/talk/SHU83W/
https://pretalx.com/juliacon-2022/talk/SHU83W/feedback/
Blue
How to recover models from data using DataDrivenDiffEq.jl
Talk
2022-07-29T17:30:00+00:00
17:30
00:30
In this talk, we will address the problem of data-driven estimation and approximation of completely or partially unknown systems using DataDrivenDiffEq.jl.
We will start by giving a short introduction to the field of symbolic regression in general followed by an example of its practical use.
Here we learn how to
(a) set up a DataDrivenProblem,
(b) use ModelingToolkit.jl to incorporate prior knowledge,
(c) use different algorithms to recover the underlying equations.
juliacon-2022-18044-how-to-recover-models-from-data-using-datadrivendiffeq-jl
JuliaCon
Carl Julius Martensen
en
How do we model the friction in the joint of a robot, a biological feedback signal, or the influence of seemingly unrelated parameters on our dynamical system?
With the rise of machine learning the classical domain of modeling is becoming more and more driven by data. While the automated discovery of possibly complex relations can help in gaining new insights, classical equations still dominate state-of-the-art machine learning models in terms of extrapolation capabilities and explainability. DataDrivenDiffEq.jl provides a unified application programming interface to define and solve these problems. It brings together operator-based inference, sparse, and symbolic regression to bridge the gap from black to white-box models.
After a brief introduction to the theory of system identification, the currently implemented algorithms, and their underlying models, we will explore the conceptual layer of the software. Within the hands-on example, we will see how the DataDrivenDiffEq.jl API mimics the mathematical formulation, builds upon and extends ModelingToolkit.jl, SymbolicUtils.jl, and Symbolics.jl to allow expression-based modeling, and seamlessly integrates into the Scientific Machine Learning ecosystem to `solve` a variety of estimation problems.
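The sparse-regression component mentioned above can be illustrated with a toy sequentially thresholded least-squares (STLSQ) loop in plain Julia. This is only a sketch of the SINDy-style idea, with made-up data and a hypothetical `stlsq` helper; it is not DataDrivenDiffEq.jl's implementation or API.

```julia
using LinearAlgebra

# Toy sequentially thresholded least squares (STLSQ): alternate a least-squares
# fit with zeroing out small coefficients, then refit the surviving terms.
# Hypothetical helper for illustration only, not DataDrivenDiffEq.jl's API.
function stlsq(Θ, dX; λ = 0.1, iters = 10)
    Ξ = Θ \ dX                           # initial dense least-squares fit
    for _ in 1:iters
        small = abs.(Ξ) .< λ             # coefficients below the threshold
        Ξ[small] .= 0
        for j in axes(dX, 2)             # refit only the active terms per column
            big = .!small[:, j]
            any(big) && (Ξ[big, j] = Θ[:, big] \ dX[:, j])
        end
    end
    return Ξ
end

# Recover ẋ = 2x from exact data with the candidate library [1, x, x²]:
x  = range(-1, 1; length = 50)
Θ  = hcat(ones(50), x, x .^ 2)           # candidate basis evaluated on the data
dX = reshape(2 .* x, 50, 1)              # "measured" derivatives
Ξ  = stlsq(Θ, dX)                        # only the x term survives, Ξ ≈ [0, 2, 0]
```

DataDrivenDiffEq.jl wraps this kind of procedure behind problem and basis abstractions and provides many more algorithms.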
false
https://pretalx.com/juliacon-2022/talk/F3J8QM/
https://pretalx.com/juliacon-2022/talk/F3J8QM/feedback/
BoF
Julia for Space Engineering
Birds of Feather
2022-07-29T12:30:00+00:00
12:30
01:30
On paper, Julia and its ecosystem are a perfect match for space engineering. We have got all the cool tools at our disposal. The number of people working on great Julia-based solutions for space engineering is increasing year over year. What else do we need to gain orbital velocity?
Let's have a discussion, come up with a plan, and let's go light this candle!
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel on Discord.
juliacon-2022-18086-julia-for-space-engineering
JuliaCon
Helge EichhornJorge A. Pérez-Hernández
en
false
https://pretalx.com/juliacon-2022/talk/AG7CGF/
https://pretalx.com/juliacon-2022/talk/AG7CGF/feedback/
BoF
Production Data Engineering in Julia
Birds of Feather
2022-07-29T16:30:00+00:00
16:30
01:30
We believe that Julia is uniquely well-positioned to pioneer new approaches to dataflow orchestration that are currently dominated by monolithic frameworks. In this BoF, Julia's nascent Data Engineering community will swap experiences and identify opportunities to collaborate on open-source next-generation data engineering tools.
Join the discussion on the [bof-voice](https://discord.com/channels/995757799076282478/997898697578913853) channel on Discord.
juliacon-2022-18152-production-data-engineering-in-julia
JuliaCon
Curtis VogtJacob QuinnJarrett Revels
en
Julia has already succeeded by empowering many scientists and engineers to author their own high-performance compute kernels without the usual ergonomics/composability sacrifices that high-performance code often entails. However, actually leveraging these kernels within production contexts often requires packaging them into an automated service, usually within the context of wider automated pipelines. It is not surprising that in the past few years, many new capabilities and packages have emerged that facilitate this by enabling Julia to be executed atop Kubernetes, to interoperate with tabular data sources/sinks via Apache Arrow, and to integrate with other popular cloud-native technologies. This blossoming ecosystem within the wider Julia community demonstrates both the desire and opportunity for Julia's usage in production data engineering contexts.
Topics of discussion for this BoF include:
- current data engineering efforts/challenges faced by industry Julia users maintaining production systems
- containerization of Julia processes and Julia-functions-as-a-service
- executing Julia-based jobs/services via Kubernetes
- Julia-centric workflow/dataflow orchestration
- the intersection of Julia's tabular data ecosystem and enterprise data architectures
Our goal is two-fold:
- uncover the shared data engineering problems, tools, and opportunities that characterize Julia's nascent Data Engineering community
- identify concrete opportunities for open-source and cross-organization collaboration (hackathons, blogs, package development, etc.)
false
https://pretalx.com/juliacon-2022/talk/PWSDHS/
https://pretalx.com/juliacon-2022/talk/PWSDHS/feedback/
JuMP
DiffOpt.jl differentiating your favorite optimization problems
Talk
2022-07-29T16:30:00+00:00
16:30
00:30
DiffOpt aims at differentiating optimization problems written in MathOptInterface. Moreover, everything “just works” in JuMP. The current framework is based on existing techniques for differentiating the solution of optimization problems with respect to the input parameters. We will show the current state of the package that supports Quadratic Programs and Conic Programs. Moreover, we will highlight how other packages are used to keep the library generic and efficient.
juliacon-2022-17972-diffopt-jl-differentiating-your-favorite-optimization-problems
JuMP
Joaquim Dias Garcia
en
Joint work with: Mathieu Besançon, Benoît Legat, Akshay Sharma.
false
https://pretalx.com/juliacon-2022/talk/CUJU8K/
https://pretalx.com/juliacon-2022/talk/CUJU8K/feedback/
JuMP
Recent developments in ParametricOptInterface.jl
Lightning talk
2022-07-29T17:00:00+00:00
17:00
00:10
ParametricOptInterface.jl is a MathOptInterface extension that helps users deal with parameters in MOI/JuMP. The package started as a GSOC project in 2020 and has seen some new developments in recent months. The goal of this talk is to show the current state of ParametricOptInterface amid the JuMP ecosystem as well as to show some interesting use cases of the package.
juliacon-2022-18019-recent-developments-in-parametricoptinterface-jl
JuMP
Guilherme Bodin
en
false
https://pretalx.com/juliacon-2022/talk/L79WHV/
https://pretalx.com/juliacon-2022/talk/L79WHV/feedback/
JuMP
Risk Budgeting Portfolios from simulations
Lightning talk
2022-07-29T17:10:00+00:00
17:10
00:10
Risk budgeting is a portfolio strategy where each asset contributes a pre-specified amount to the total portfolio risk. We propose a numerical framework in JuMP that uses only simulations of returns for estimating risk budgeting portfolios, and provide a Sample Average Approximation algorithm. We leveraged automatic differentiation and JuMP's modeling flexibility to build clear and concise code. We also report on memory issues encountered when solving for every day in a 14-year horizon.
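For background, the defining condition is easy to verify numerically: with weights w and covariance Σ, asset i's risk contribution is w_i(Σw)_i/σ(w), and the contributions must match the prescribed budgets. Below is a toy check in plain Julia with hypothetical numbers and a diagonal covariance (where equal budgets admit the closed-form inverse-volatility weights); this is not the talk's JuMP/SAA code.

```julia
using LinearAlgebra

# Risk contributions rc_i = w_i * (Σw)_i / σ(w), which sum to the portfolio
# volatility σ(w). For diagonal Σ and equal budgets, w ∝ 1/σ_i in closed form.
# Hypothetical asset volatilities for illustration only.
σs = [0.10, 0.20, 0.40]
Σ  = Diagonal(σs .^ 2)
w  = (1 ./ σs) ./ sum(1 ./ σs)      # inverse-volatility (equal-risk) weights

σp = sqrt(dot(w, Σ * w))            # portfolio volatility
rc = w .* (Σ * w) ./ σp             # per-asset risk contributions
# All three contributions are equal, and they sum to σp.
```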
juliacon-2022-17980-risk-budgeting-portfolios-from-simulations
JuMP
Bernardo Freitas Paulo da Costa
en
false
https://pretalx.com/juliacon-2022/talk/NPHSNW/
https://pretalx.com/juliacon-2022/talk/NPHSNW/feedback/
JuMP
Optimising Fantasy Football with JuMP
Lightning talk
2022-07-29T17:20:00+00:00
17:20
00:10
Fantasy Premier League is an online fantasy sports game where you select a team of 15 players and score points based on their performance each week. You have a finite budget and each player costs a certain amount, plus a number of other constraints, which together make this an optimisation problem that JuMP can solve. In this talk I will work through this problem and show how each constraint is translated into the JuMP language. It will be a fun introduction to optimisation in an alternative domain.
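The core of such a model is a binary selection problem. Here is a minimal sketch with standard JuMP macros and made-up player data (three picks and a small budget standing in for the real 15-player squad and position/club rules):

```julia
using JuMP, HiGHS

# Pick at most 3 of 5 hypothetical players to maximize expected points
# subject to a budget; a stand-in for the real 15-player squad rules.
points = [90.0, 80.0, 70.0, 60.0, 50.0]   # made-up expected points
cost   = [10.0, 8.5, 7.0, 5.5, 4.0]       # made-up prices (£m)
budget = 20.0
n = length(points)

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, pick[1:n], Bin)                               # select player i?
@constraint(model, sum(cost[i] * pick[i] for i in 1:n) <= budget)
@constraint(model, sum(pick) <= 3)                             # squad-size limit
@objective(model, Max, sum(points[i] * pick[i] for i in 1:n))
optimize!(model)
```

Each additional game rule (positions, max three players per club, captaincy) becomes one more `@constraint` in the same style.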
juliacon-2022-17076-optimising-fantasy-football-with-jump
JuMP
Dean Markwick
en
false
https://pretalx.com/juliacon-2022/talk/QNAEBY/
https://pretalx.com/juliacon-2022/talk/QNAEBY/feedback/
JuMP
Stochastic Optimal Control with MarkovBounds.jl
Lightning talk
2022-07-29T17:30:00+00:00
17:30
00:10
We present MarkovBounds.jl -- a meta-package for SumOfSquares.jl which enables the computation of guaranteed bounds on the optimal value of a large class of stochastic optimal control problems via a high-level, practitioner-friendly interface.
juliacon-2022-18074-stochastic-optimal-control-with-markovbounds-jl
JuMP
Flemming Holtorf
en
The optimal control of stochastic processes is arguably one of the most fundamental questions in the context of decision-making under uncertainty. When the controlled process is a jump-diffusion process characterized by polynomial data (drift and diffusion coefficients, jumps, etc.), it is well known that polynomial optimization and the machinery of the moment-sum-of-squares (SOS) hierarchy provide a systematic way to construct informative (and often tight) convex relaxations for the associated optimal control problems. While the JuMP ecosystem, with SumOfSquares.jl, in principle offers everything that is required to study stochastic optimal control problems from this perspective, it remains a cumbersome and error-prone process to translate a concrete stochastic optimal control problem into its SOS relaxation. Moreover, this translation process requires expert knowledge, rendering it inaccessible to a large audience. MarkovBounds.jl is intended to close this gap by providing a high-level interface which allows the user to define stochastic optimal control problems in symbolic form using, for example, DynamicPolynomials.jl or Symbolics.jl, and automates the subsequent translation to the associated SOS relaxations. Finite and (discounted) infinite horizon problems are supported, as well as several common objective function types. Furthermore, MarkovBounds.jl supports the combination of standard SOS relaxations with discretization approaches to tighten the relaxations.
In this talk, we will briefly review the conceptual ideas behind constructing SOS relaxations for stochastic optimal control problems and showcase the use of MarkovBounds.jl for the optimal control of populations in a predator-prey system, protein expression in a stochastic biocircuit, and the bounding of rare-event probabilities.
false
https://pretalx.com/juliacon-2022/talk/QJ8YRN/
https://pretalx.com/juliacon-2022/talk/QJ8YRN/feedback/
JuMP
Streamlining nonlinear programming on GPUs
Talk
2022-07-29T19:00:00+00:00
19:00
00:30
We propose a prototype for a vectorized modeler written in pure Julia, targeting the resolution of large-scale nonlinear optimization problems. The prototype has been designed to seamlessly evaluate the problem's expression tree on GPU accelerators. We discuss the implementation and the challenges we have encountered, as well as preliminary results comparing our prototype with JuMP's AD backend.
juliacon-2022-17220-streamlining-nonlinear-programming-on-gpus
JuMP
François PacaudMichel Schanen
en
How fast can you evaluate the derivatives of a nonlinear optimization problem? Most real-world optimization instances come with thousands of variables and constraints; in such a large-scale setting, using sparse automatic differentiation (AD) is often a non-negotiable requirement. It is well known that by choosing the partials appropriately (for instance with a coloring algorithm), the evaluations of the Jacobian and of the Hessian in sparse format translate respectively to one vectorized forward pass and one vectorized forward-over-reverse pass. Thus, any good AD library should be able to efficiently visit the problem's expression tree back and forth. In this talk, we propose a prototype for a vectorized modeler, where the expression tree manipulates vector expressions. By doing so, the forward and reverse evaluations rewrite into the universal language of sparse linear algebra. By chaining linear algebra calls together, we show that the evaluation of the expression tree can be deported to any sparse linear algebra backend. Notably, we show that both the forward and reverse passes can be streamlined efficiently, using either MKLSparse on Intel CPUs or cuSPARSE on CUDA GPUs. We discuss the prototype's performance on the optimal power flow problem, using JuMP's AD backend as a comparison. We show that on the largest instances, we can speed up the evaluation of the sparse Hessian by a factor of 50. Although our code remains a prototype, we hope that the emergence of new AD libraries like Enzyme will allow this idea to be extended further. We finish the talk by discussing ideas on how to vectorize the evaluation of the derivatives inside JuMP's AD backend.
false
https://pretalx.com/juliacon-2022/talk/8QUDVW/
https://pretalx.com/juliacon-2022/talk/8QUDVW/feedback/
JuMP
Fast optimization via randomized numerical linear algebra
Talk
2022-07-29T19:30:00+00:00
19:30
00:30
We introduce RandomizedPreconditioners.jl, a package for preconditioning linear systems using randomized numerical linear algebra. Crucially, our preconditioners do not require a priori knowledge of structure present in the linear system, making them especially useful for general-purpose algorithms. We demonstrate significant speedups of positive semidefinite linear system solves, which we use to build fast constrained optimization solvers.
juliacon-2022-17985-fast-optimization-via-randomized-numerical-linear-algebra
JuMP
Theo Diamandis
en
In this talk, we discuss how techniques in randomized numerical linear algebra can dramatically speed up linear system solves, which are a fundamental primitive for most constrained optimization algorithms.
We start with randomized approximations of matrices and introduce the Nyström sketch. Following the approach developed by Frangella et al. [1], we use this sketch to construct preconditioners for positive definite linear systems.
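The construction behind these preconditioners can be sketched in a few lines: sketch A with a Gaussian test matrix Ω and form the Nyström approximation (AΩ)(ΩᵀAΩ)⁺(AΩ)ᵀ. The following is a toy illustration on synthetic data, not RandomizedPreconditioners.jl's API:

```julia
using LinearAlgebra, Random

# Nyström approximation A ≈ (AΩ)(ΩᵀAΩ)⁺(AΩ)ᵀ of a PSD matrix with fast
# spectral decay; textbook construction on synthetic data, not the package API.
Random.seed!(42)
n, r, s = 100, 20, 30                     # dimension, effective rank, sketch size
Q = Matrix(qr(randn(n, n)).Q)             # random orthonormal eigenbasis
λ = [i <= r ? 100.0 / i : 1e-6 for i in 1:n]
A = Symmetric(Q * Diagonal(λ) * Q')       # PSD test matrix

Ω = randn(n, s)                           # Gaussian test matrix
Y = A * Ω
Anys = Y * pinv(Ω' * Y) * Y'              # rank-≤s Nyström approximation

relerr = opnorm(A - Anys) / opnorm(A)     # small: the sketch captures the range
```

Because the approximation is accurate on the dominant eigenspace, it can be turned into a cheap preconditioner for solves with A.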
We then introduce RandomizedPreconditioners.jl, a lightweight package which includes these randomized preconditioners and sketches. We show how this package allows preconditioners to be added with only a few extra lines of code, and we use the package to dramatically speed up a convex optimization solver that uses the alternating direction method of multipliers (ADMM) algorithm. We demonstrate how we can amortize the cost of this preconditioner over all solver iterations, allowing us to capture this speedup for minimal additional cost.
Finally, we conclude with future work to address other types of linear systems and other ways to speed up optimization solvers with randomized numerical linear algebra primitives that are implemented in RandomizedPreconditioners.jl.
[1] Zachary Frangella, Joel A. Tropp, and Madeleine Udell. “Randomized Nyström Preconditioning.” arXiv preprint arXiv:2110.02820 (2021). https://arxiv.org/abs/2110.02820
false
https://pretalx.com/juliacon-2022/talk/EBBQDB/
https://pretalx.com/juliacon-2022/talk/EBBQDB/feedback/
Sponsored forums
Relational AI Sponsored Forum
Sponsor forum
2022-07-29T16:30:00+00:00
16:30
00:45
At RelationalAI, we are building the world’s fastest, most scalable, most expressive, most open knowledge graph management system, built on top of the world’s only complete relational reasoning engine that uses the knowledge and data captured in enterprise databases to learn and reason. Join the [sponsored forum here](https://discord.com/channels/995757799076282478/1000106747232538644).
juliacon-2022-21247-relational-ai-sponsored-forum
en
false
https://pretalx.com/juliacon-2022/talk/HAURQJ/
https://pretalx.com/juliacon-2022/talk/HAURQJ/feedback/
Green
JuliaCon Hackathon
Social hour
2022-07-30T12:30:00+00:00
12:30
06:00
Join us on [Gather.town](https://gather.town) for this year's Julia hackathon.
juliacon-2022-21383-juliacon-hackathon
en
Like in previous years, we will have another legendary JuliaCon hackathon! Join us to build something you are excited about in Julia. We will also have mentors available to help if you run into issues.
false
https://pretalx.com/juliacon-2022/talk/8MRLPJ/
https://pretalx.com/juliacon-2022/talk/8MRLPJ/feedback/