2.0
-//Pentabarf//Schedule//EN
PUBLISH
WLNWBV@@pretalx.com
-WLNWBV
Introduction to Julia
en
en
20220719T140000
20220719T170000
3.00000
Introduction to Julia
We'll cover how to download and install Julia on Windows, macOS, and Linux.
Next, we will show how to use Julia in the terminal (REPL), in VSCode,
and also in an interactive notebook with Pluto.
We will contrast Julia with Python, a popular beginner's language,
showcasing the major differences between the two.
Additionally, we will teach how to install and uninstall packages.
The bulk of the workshop will cover how to run Julia commands,
what statements are, and an overview of Julia syntax.
We encourage everyone who wants to know more about Julia, regardless of
skill level, to join us.
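The flavor of commands covered can be sketched in a few lines (illustrative only; `Example` is simply the registry's demo package, not part of the workshop material):

```julia
# Sketch of the kind of commands the workshop covers. In the REPL,
# pressing `]` enters Pkg mode, where `add Example` installs a package;
# the programmatic equivalent is:
#     using Pkg; Pkg.add("Example")    # and Pkg.rm("Example") to uninstall
# Basic statements and syntax, as typed at the `julia>` prompt:
square(x) = x^2                    # a one-line function definition
xs = [square(i) for i in 1:5]      # a comprehension over a range
total = sum(xs)                    # reduce with a built-in function
println(total)                     # prints 55
```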
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/WLNWBV/
Green
Jose Storopoli
PUBLISH
WL9FZZ@@pretalx.com
-WL9FZZ
Introduction to Graph Computing
en
en
20220720T140000
20220720T170000
3.00000
Introduction to Graph Computing
Graph computing is an innovative and hyper-efficient technology for building large and distributed systems or applications, where common challenges include scalability, transparency, explainability, lineage, adaptability and reproducibility. We coined the acronym STELAR for these challenges.
In almost every organization, significant engineering resources and efforts are devoted to addressing the STELAR needs for their core enterprise systems. These efforts are not portable because they are specific to the particular organization and architecture. For example, the solution to improve the scalability of the trading system at JP Morgan is not applicable to Goldman Sachs, as their technology stacks are fundamentally different. There is an enormous waste of time, money, energy and human talent in re-creating bespoke solutions to the same STELAR problems across the industry. The world would be a much better place if we could solve these problems once and for all in enterprise systems. This is the promise of graph computing.
Instead of functions in the traditional programming paradigm, directed acyclic graphs (DAG) are the fundamental building blocks in graph computing. A DAG is a special type of graph, which consists of a collection of nodes and directional connections between them. Acyclic means that these connections do not form any loops. A DAG can also be used as a generic representation of any kind of computing or workflow. Conceptually, any computation, from the simplest formula in a spreadsheet to the most complex enterprise systems, reduces to a DAG. In graph computing, complex DAGs representing entire applications or systems are built by composing smaller and modular DAGs, analogous to function compositions in the traditional programming paradigm.
Compared to the function-centric representation in traditional programming, a DAG is a much better and more convenient representation for building generic solutions to the STELAR problems. Once built, these solutions are applicable to any enterprise system, as they all reduce to DAGs.
The outline of this workshop is as follows:
* Introduction to the key ideas and benefits of graph computing, and how DAGs can help solve the STELAR problems generically.
* Survey of existing graph computing solutions, approaches and tools. We will cover both Python and Julia tools, as well as commercial and open-source solutions.
* Discussion of the key challenges and approaches in graph computing, including graph creation and distribution.
* Hands-on sessions on building real-world applications/systems using graph computing. In these sessions, we will use available graph computing tools such as Dask, Dagger.jl and Julius.
* Build a simple task graph and execute it
* Query the graph data after execution
* Build an ML data processing pipeline
* Building generic and reusable patterns using graph composition
* Graph distribution, for building distributed systems
* End to end AAD (adjoint algorithmic differentiation) with graphs
* Build streaming pipelines in graph
* Plenty of time for questions.
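The core idea that "any computation reduces to a DAG" can be sketched in plain Julia: nodes hold a function plus the ids of their input nodes, and evaluation walks the dependencies with memoization. This is an illustrative toy, not the API of Dask, Dagger.jl or Julius:

```julia
# A node is a function plus the ids of the nodes it depends on.
struct Node
    f::Function
    inputs::Vector{Symbol}
end

# Evaluate a node by recursively evaluating its inputs first,
# caching each result so shared dependencies run only once.
function evaluate(dag::Dict{Symbol,Node}, id::Symbol, cache=Dict{Symbol,Any}())
    haskey(cache, id) && return cache[id]
    node = dag[id]
    args = [evaluate(dag, i, cache) for i in node.inputs]
    cache[id] = node.f(args...)
end

# The computation (x + y) * y, expressed as a DAG:
dag = Dict(
    :x    => Node(() -> 2, Symbol[]),
    :y    => Node(() -> 3, Symbol[]),
    :sum  => Node(+, [:x, :y]),
    :prod => Node(*, [:sum, :y]),
)
evaluate(dag, :prod)   # (2 + 3) * 3 = 15
```

Composing larger DAGs from smaller ones then amounts to merging node dictionaries and wiring inputs, which mirrors the "graph composition" pattern in the outline above.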
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/WL9FZZ/
Green
Yadong Li
PUBLISH
Q7WJ9F@@pretalx.com
-Q7WJ9F
Getting started with Julia and Machine Learning
en
en
20220720T180000
20220720T210000
3.00000
Getting started with Julia and Machine Learning
## Overview
In their simplest manifestation, machine learning algorithms extract,
or "learn", from historical data some essential properties enabling
them to respond intelligently to new data (typically,
automatically). For example, spam filters predict whether to designate
a new email as "junk", based on how a user previously designated a
large number of previous messages. A property valuation site suggests
the sale price for a new home, given its location and other
attributes, based on a database of previous sales.
Julia is uniquely positioned to accelerate developments in machine
learning and there has been an explosion of Julia machine learning
libraries. [MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/)
(Machine Learning in Julia) is a popular toolbox providing a common
interface for interacting with over 180 machine learning models
written in Julia and other languages. This workshop will introduce
basic machine learning concepts, and walk participants through enough
Julia to get started using MLJ.
## Prerequisites
- **Essential.** A computer with [Julia 1.7.3](https://github.com/ablaom/HelloJulia.jl/blob/dev/FIRST_STEPS.md) installed.
- **Strongly recommended.** Workshop resources pre-installed. See [here](https://github.com/ablaom/HelloJulia.jl/wiki/JuliaCon-2022-workshop:-Getting-started-with-Julia-and-MLJ).
- **Recommended.** Basic linear algebra and statistics, such
as covered in first year university courses.
- **Recommended but not essential.** Prior experience with a scripting
  language, such as Python, MATLAB or R.
## Objectives
- Be able to carry out basic mathematical operations using Julia,
perform random sampling, define and apply functions, carry out
iterative tasks
- Be able to load data sets and do basic plotting
- Understand what supervised learning models are, and how to evaluate
them using a holdout test set or using cross-validation
- Be able to train and evaluate a supervised learning model using
the MLJ package
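One of the objectives above, evaluating a model with a holdout test set, can be sketched in a few lines of plain Julia. This is an illustrative least-squares toy, not MLJ's API (MLJ wraps this pattern behind `evaluate!` with resampling strategies such as `Holdout`):

```julia
using Random, Statistics

Random.seed!(1)
x = randn(200)
y = 2 .* x .+ 0.1 .* randn(200)     # a known linear signal plus noise

# Holdout split: train on 70% of the data, test on the rest.
idx = shuffle(1:200)
train, test = idx[1:140], idx[141:200]

# "Training": least-squares slope through the origin.
β = sum(x[train] .* y[train]) / sum(abs2, x[train])

# Evaluation on the held-out set only.
ŷ = β .* x[test]
rmse = sqrt(mean(abs2, ŷ .- y[test]))
```

Cross-validation repeats this split several times and averages the resulting scores.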
## Resources
[HelloJulia.jl](https://github.com/ablaom/HelloJulia.jl)
## Format
This workshop will be a combination of formal presentation and live
coding.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/Q7WJ9F/
Green
Anthony Blaom
Samuel
PUBLISH
9PKGGH@@pretalx.com
-9PKGGH
GPU accelerated medical image segmentation framework
en
en
20220721T140000
20220721T170000
3.00000
GPU accelerated medical image segmentation framework
In preparation for the workshop, participants are asked to download in advance the dataset we will work on; it can be found under link [2]. Additionally, you can install the required packages into the environment you will work in; see [3]. In order to fully participate, you need to have an Nvidia GPU available.
Medical image segmentation is a rapidly developing field of computer vision. This area of research requires knowledge of radiologic imaging, mathematics and computer science. Multiple software packages have been developed to assist researchers; however, because of the rapidly changing scientific environment, those tools are no longer effective for some users.
Such a situation arises for Julia language users, who require support for an interactive programming style that is not common among traditional software tools. Another characteristic of modern programming for three-dimensional medical imaging data is GPU acceleration, which can deliver outstanding improvements in algorithm performance when working with 3D medical images. Hence, in this work the author presents a set of new Julia language software tools designed to fulfil these emerging needs. These tools include a GPU-accelerated medical image viewer with annotation capabilities and a very convenient programming interface; a CUDA-accelerated medical segmentation metrics tool that supplies state-of-the-art implementations of the algorithms required to quantify the similarity between an algorithm's output and the gold standard; and, lastly, a set of utility tools connecting the two packages above with the HDF5 file system and with preprocessing using MONAI and PythonCall.
The main unique feature of the presented framework is its ease of interoperability with other Julia packages, which, in the author's opinion, may spark the application of algorithms from fields not widely used in medical image segmentation, such as differentiable programming and topology, given the rapidly developing ecosystem of scientific computing.
The workshop assumes only basic knowledge of Julia programming and no medical knowledge at all. Most of the time will be devoted to walking through an end-to-end medical image segmentation example, as in the tutorial available under link [1] below, with code executed live during the workshop. To run some parts of the workshop, users will need a CUDA environment. Because of the complex nature of the problem, some theoretical introductions will also be given.
Plan for the workshop:
1. Introduction to medical imaging data formats
2. Presentation of loading data and simple preprocessing using MONAI and PythonCall
3. Tutorial presenting how to use the MedEye3d viewer and annotator
4. Implementing the first phase of the example algorithm on the CPU, showing some Julia features supporting work on multidimensional arrays
5. Presenting a further part of the example algorithm using GPU acceleration with CUDA.jl and ParallelStencil, with a short introduction to GPU programming
6. Presenting how to save and retrieve data using HDF5.jl
7. Showing how to apply medical segmentation metrics from MedEval3D, with an introduction to choosing the proper metric for the problem
8. Discussing how one can improve the performance of the algorithm, and some planned future directions
[1] https://github.com/jakubMitura14/MedPipe3DTutorial
[2] Participants can download data before task 9 from https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2
[3] ]add Flux Hyperopt Plots UNet MedEye3d Distributions Clustering IrrationalConstants ParallelStencil CUDA HDF5 MedEval3D MedPipe3D Colors
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/9PKGGH/
Green
Jakub Mitura
PUBLISH
7JNQCM@@pretalx.com
-7JNQCM
Statistics symposium
en
en
20220722T080000
20220722T110000
3.00000
Statistics symposium
1. "Doing applied statistics research in Julia", Ajay Shah (20 minutes)
We show the journey of two applied statistics research papers, done fully in Julia, by researchers who were previously working in R: what was convenient, what the chokepoints were, and what the gains in expressivity and performance were. Based on this, we evaluate the maturity of Julia for doing applied statistics. We propose practical pathways for statisticians, and speak to the Julia community about what is required next. We report on recent developments at the intersection of Julia and statistics.
2. "CRRao: A unified framework for statistical models", Sourish Das (20 minutes)
Many statistical models are available in Julia, and many more will come. CRRao is a consistent framework through which callers interact with a large suite of models. For the end-user, it reduces the cost and complexity of estimating statistical models. It offers convenient guidelines through which development of additional statistical models can take place in the future.
3. "TSx: A time series class for Julia", Chirag Anand (20 minutes)
DataFrames.jl is a powerful system, but expressing the standard tasks of manipulating time series -- e.g. as seen in finance or macroeconomics -- is often cumbersome. We draw on the work of the R community, which has built zoo and xts, to build a time series class, TSx, which delivers a simple set of operators and functions for people working with time series. It constitutes syntactic sugar on top of the capabilities of DataFrames.jl and thus harnesses the capabilities and efficiency of that package. We conduct comparisons of capabilities and performance against zoo and xts in R.
4. "Comparing glm in Julia, R and SAS", Mousum Datta (10 minutes)
glm is an unusually important class of statistical models. We compare the capabilities, correctness and performance of the present glm systems in Julia, R and SAS. We report on recent improvements that have been injected into GLM.jl.
5. "Working with survey data", Ayush Patnaik (10 minutes)
The Julia package Survey.jl builds some of the functionality required for statistical estimators with stratified random sampling. For a limited subset of the capabilities of Thomas Lumley's R package "survey", we show the correctness and the performance gains of the Julia package.
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon-2022/talk/7JNQCM/
Green
Ayush Patnaik
PUBLISH
JYDQEB@@pretalx.com
-JYDQEB
JuliaMolSim: Computation with Atoms
en
en
20220722T140000
20220722T170000
3.00000
JuliaMolSim: Computation with Atoms
The JuliaMolSim community is open to anyone who uses/develops Julia code that is used for simulating/analyzing systems that are resolved at the level of atomic/molecular coordinates. You can learn more about the packages we maintain and join conversations on our Slack workspace by going to our website at [https://juliamolsim.github.io](https://juliamolsim.github.io/) .
Our BoF session from JuliaCon 2021, “Building a Chemistry and Materials Science Ecosystem in Julia,” helped jumpstart the Slack community. A major subsequent output from those ongoing conversations was the development of the AtomsBase interface, defining a common set of functions for specifying atomic geometries. We’re really excited about the prospect of this effort enabling greater interoperability between different types of simulation and analysis, as well as shared code for tasks like visualization and I/O. In fact, it has already begun to have this impact in a number of academic projects with international collaborators, funded by major agencies such as the US Department of Energy. A major part of the strength and impact of these efforts has been the substantial investment, from the beginning, by mathematicians, computer scientists, and domain scientists working together, a hallmark of the Julia community writ large and a major part of the reason we’re building this community in Julia.
This year, we’re hosting a minisymposium to keep the community going strong, make new connections, show off cool projects, and collect new ideas! Our planned agenda (so far!) is as follows:
1. Introduction to JuliaMolSim in general and AtomsBase in particular with brief showcase of packages adopting the interface so far
2. Some “deeper-dive” talks on packages now using AtomsBase, focusing on updates since last JuliaCon and also elucidating other emerging themes such as support for automatic differentiation (AD) and GPU utilization
1. Chemellia machine learning ecosystem (Rachel Kurchin)
2. Molly.jl particle simulation package (Joe Greener)
3. DFTK.jl density functional theory package (Michael Herbst)
4. CESMIX project (Emmanuel Lujan)
3. Other contributed talks from the JuliaMolSim community, including:
1. Fermi.jl (Gustavo Aroeira)
2. ACE.jl (Christoph Ortner)
3. NQCDynamics.jl (James Gardner)
4. “Quick pitch” session: what’s the next community project a la AtomsBase? Pitch your idea and find collaborators! (If you are interested in pitching, contact the minisymposium organizers and we will be in touch with more details). Some example topics could include:
1. Plotting recipes (e.g. in Makie) for AtomsBase systems
2. An ab initio MD engine based in Molly, utilizing DFTK for energy/force calculations via AtomsBase
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon-2022/talk/JYDQEB/
Green
Rachel Kurchin
PUBLISH
VAHYFE@@pretalx.com
-VAHYFE
Hands-on ocean modeling and ML with Oceananigans.jl
en
en
20220722T150000
20220722T180000
3.00000
Hands-on ocean modeling and ML with Oceananigans.jl
Oceananigans.jl is a state-of-the-art ocean modeling tool written from scratch in Julia. Oceananigans uses an underlying finite-volume, locally orthogonal, staggered-grid fluid modeling paradigm. This allows Oceananigans to support everything from highly idealized large-eddy-simulation studies of geophysical turbulence to large-scale planetary circulation projects. The code is configured for different problems using native Julia scripting. Julia metaprogramming supports wide flexibility in numerical methods and supports large ensemble experiments. This latter style of experiment facilitates semi-automated Bayesian searches that can be used to produce reduced-order models that accurately emulate more detailed physical process models. Julia typing and dispatch are used to support discrete numerics involving staggered numerical grid locations and to support (through the KernelAbstractions.jl package) GPU and CPU execution from a single code base. Advanced graphics with Makie.jl and data management using NCDatasets.jl and JLD2.jl are fully integrated. Integration of Oceananigans within SciML workflows for developing neural differential equation improvements to physics-based schemes is also possible.
In this workshop we will cover hands-on execution of a variety of model configurations and explore how key features of the Julia language and packages from the Julia ecosystem are used to enable a range of use cases.
The workshop will include two parts. The first part will consist of breakout-room sessions. Each room will have a lead who will walk participants through configuring and running an Oceananigans instance, either on a cloud resource or on participants' local systems. The second part will involve multiple Oceananigans.jl team members walking through the key Julia language aspects that make Oceananigans a flexible and fun tool to use for all manner of scientific ocean modeling problems on Earth and beyond.
Workshop participants will get hands-on experience with real-world, high-end scientific modeling for ocean and fluid problems in Julia. They will also learn how many elements of the Julia language and ecosystem can be combined to create a performant and expressive modeling tool that is also easy to engage with.
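As a flavor of the "native Julia scripting" configuration style described above, a minimal setup might look like the following sketch. It assumes Oceananigans.jl is installed and follows its documented API, though exact constructor keywords may differ between versions:

```julia
using Oceananigans

# A small box domain, discretized with the finite-volume staggered grid.
grid = RectilinearGrid(size=(16, 16, 16), extent=(1, 1, 1))

# A nonhydrostatic model with default numerics on that grid.
model = NonhydrostaticModel(; grid)

# Time-step the model for ten iterations.
simulation = Simulation(model; Δt=0.05, stop_iteration=10)
run!(simulation)
```

Swapping the grid, model type, or architecture (CPU vs. GPU) is done by changing these few constructor arguments, which is what makes ensemble and cross-platform experiments convenient.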
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/VAHYFE/
Red
Chris Hill
Francis Poulin
Gregory Wagner
Valentin Churavy
Simone Silvestri
Tomas Chor
Suyash Bire
Rodrigo Duran
Jean-Michel Campin
PUBLISH
F7WDXE@@pretalx.com
-F7WDXE
Introduction to Julia with a focus on statistics (in Hebrew)
en
en
20220723T100000
20220723T130000
3.00000
Introduction to Julia with a focus on statistics (in Hebrew)
This JuliaCon 2022 workshop in Hebrew (עברית) is aimed at data scientists, machine learning engineers, and statisticians who have experience with a language like Python or R, but have not used Julia previously. In learning to use Julia, a contemporary "stats based" approach is taken, focusing on short scripts that achieve concrete goals. This is similar to the approach of the [Statistics with Julia book](https://statisticswithjulia.org/).
The primary focus is on statistical applications and packages. The Julia language is covered as a by-product of the applications. Thus, this workshop is much more of a "how to use Julia for stats" course than a "how to program in Julia" course. This approach may be suitable for statisticians and data scientists who tend to do their day-to-day scripting with a data- and model-based approach, as opposed to a software development approach.
An extensive Jupyter notebook for the workshop together with data files is [here](https://github.com/yoninazarathy/StatisticsWithJuliaFromTheGroundUp-2022). You can install it to follow along. The Jupyter notebook is not in Hebrew.
If you don't already have Julia with IJulia (Jupyter) installed, you can follow the instructions in [this video](https://www.youtube.com/watch?v=KJleqSITuRo). It is recommended that you have Julia 1.7.3 or higher installed.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/F7WDXE/
Green
Yoni Nazarathy
PUBLISH
FBLWD3@@pretalx.com
-FBLWD3
Interactive data visualizations with Makie.jl
en
en
20220723T140000
20220723T170000
3.00000
Interactive data visualizations with Makie.jl
The participants will follow along while different small interactive visualization projects are coded live, showing how to go from idea to implementation.
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/FBLWD3/
Green
Julius Krumbiegel
Simon Danisch
PUBLISH
PFUHDL@@pretalx.com
-PFUHDL
Julia REPL Mastery Workshop
en
en
20220724T140000
20220724T170000
3.00000
Julia REPL Mastery Workshop
This workshop will be a jam-packed, hands-on tour of the Julia REPL so that beginners and experts alike can learn a few tips and tricks. Every Julia user spends a significant amount of coding time interacting with the REPL - my claim for this workshop is that all Julia users can save themselves more than 3 hours of productive coding time over their careers should they attend this workshop, so why not invest in yourself now?
Plan (pending review) for the material that will be covered:
* Navigation - moving around, basic commands, variables, shortcuts and keyboard combinations, cross language comparison of REPL features, Vim Mode homework
* Internals and configuration - Basic APIs, display control codes, terminals and font support, startup file options, prompt changing, flag configurations
* REPL Modes - Shell mode, Pkg mode, help mode, workflow demos for contributing code fixes, BuildYourOwnMode demo, Term.jl
* Tools and packages - OhMyREPL.jl, PkgTemplates.jl, Eyeball.jl, TerminalPager.jl, AbstractTrees.jl, Debugger.jl, UnicodePlots.jl, ProgressMeter.jl, PlutoREPL.jl assignment
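Part of the startup-file configuration above can be illustrated with a short, hypothetical `~/.julia/config/startup.jl` sketch; the package and editor choices here are examples, not recommendations from the workshop itself:

```julia
# Hypothetical ~/.julia/config/startup.jl
atreplinit() do repl
    # Load REPL niceties only when an interactive REPL actually starts.
    try
        @eval using OhMyREPL              # syntax highlighting at the prompt
    catch err
        @warn "OhMyREPL not available; run `] add OhMyREPL`" exception = err
    end
end

ENV["JULIA_EDITOR"] = "code"              # editor opened by `@edit` and friends
```

`atreplinit` is the Base hook for customizing the REPL at startup; anything placed there runs after the REPL is constructed but before the first prompt.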
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/PFUHDL/
Green
Miguel Raz Guzmán Macedo
PUBLISH
UNVUDM@@pretalx.com
-UNVUDM
Differentiable Earth system models in Julia
en
en
20220725T140000
20220725T170000
3.00000
Differentiable Earth system models in Julia
The differentiable programming paradigm offers great potential to improve Earth system models (ESMs) in at least two ways: (i) in the context of parameter calibration, state estimation, initialization for prediction, and uncertainty quantification, derivative information (tangent-linear, adjoint and Hessian) is a key ingredient; (ii) combining PDE-constrained optimization with SciML approaches may be performed naturally, in a composable way, and within the same programming framework. This minisymposium is organized in three parts (all speakers listed are tentative):
1/ Why differentiable programming for ESMs? Speakers will discuss the use of derivative information for PDE-constrained optimization in ice sheet (M. Morlighem, N. Petra), ocean (P. Heimbach) and solid Earth (B. Kaus) modeling; the use of SciML in the context of ESMs (J. Le Sommer, A. Ramadhan); The use of adjoints for sensitivity analysis and uncertainty quantification (N. Loose).
2/ What ESM applications are we targeting? The minisymposium will feature ESM applications including global ocean modeling (C. Hill) and ice sheet modeling (J. Bolibar, L. Raess).
3/ How are we realizing differentiable ESMs? A key algorithmic framework is the use of general-purpose automatic differentiation (AD). The Julia community is developing a number of AD packages. ESM applications will likely push the envelope of the capability of existing AD tools. The minisymposium will present how these tools are being used in the context of ESMs (S. Williamson, M. Morlighem). Furthermore, specific algorithmic challenges in ongoing AD tool development will be highlighted (S. Narayanan/M. Schanen/...).
The minisymposium seeks to engage both the ESM and the AD tool communities to advance their respective capabilities. There will be time for discussion. Ideally we are targeting a 3-hour minisymposium.
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon-2022/talk/UNVUDM/
Green
Patrick Heimbach
Nora Loose
Mathieu Morlighem
Boris Kaus
Chris Hill
Sri Hari Krishna Narayanan
Sarah Williamson
PUBLISH
98UQX3@@pretalx.com
-98UQX3
Modeling of Chemical Reaction Networks using Catalyst.jl
en
en
20220725T180000
20220725T210000
3.00000
Modeling of Chemical Reaction Networks using Catalyst.jl
Workshop Pluto notebooks will be available at https://github.com/TorkelE/JuliaCon2022_Catalyst_Workshop
At the highest level, Catalyst models can be specified via a domain-specific language (DSL), where they can be concisely written as a list of chemical reactions. Such models are converted into a Symbolics.jl-based intermediate representation (IR), represented as a ModelingToolkit.jl AbstractSystem. This IR acts as a common target for many tools within SciML, enabling them to be applied to Catalyst-based models. Symbolic models can also be directly constructed using the symbolic IR, allowing programmatic construction of CRNs or extensions of DSL-defined CRNs.
In this workshop, we will demonstrate how to generate CRN models through the Catalyst DSL and programmatically via the IR. Catalyst features such as custom rate laws, component-based modeling, and parametric stoichiometry will be explored to demonstrate the breadth of models supported by Catalyst. We will then illustrate how such models can be translated to other symbolic ModelingToolkit-based mathematical representations, and simulated with SciML tooling. Such representations include deterministic ODE models (based on reaction rate equations), stochastic SDE models (based on chemical Langevin equations), and stochastic jump process models (based on the chemical master equation and Gillespie's method). For each of these representations, the DifferentialEquations.jl package provides a variety of solvers that can accurately and efficiently simulate the model's dynamics. We will also demonstrate further tools for analysis of CRN-based models, including methods for parameter fitting, network analysis, calculation of steady states, and bifurcation analysis (through the BifurcationKit.jl package).
To help users with real-world applicability, we will demonstrate how to appropriately use the Catalyst and SciML tooling to scale simulations to tens of thousands of reactions in ways that exploit sparsity, giving easy access to methodologies which outperform competitor packages by orders of magnitude in performance. Aspects such as parallelization of simulations, automatic differentiation usage (in model calibration), and more will be discussed throughout the various topics to give users a complete view of how Catalyst.jl can impact their modeling workflows.
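To make the jump-process representation concrete, Gillespie's direct method can be sketched in a few lines of plain Julia for the single reaction A --> B. This is an illustrative toy, not Catalyst's generated simulation code:

```julia
using Random

# Gillespie's direct method for the single reaction A --> B with rate k.
function gillespie(nA::Int, k::Float64, tend::Float64; rng=Random.default_rng())
    t = 0.0
    while nA > 0
        a = k * nA                  # total propensity of A --> B
        t += randexp(rng) / a       # exponential waiting time to next firing
        t > tend && break           # stop once the time horizon is exceeded
        nA -= 1                     # fire the reaction: one A becomes B
    end
    return nA                       # copies of A remaining at time tend
end

gillespie(100, 1.0, 10.0)
```

A Catalyst model compiles networks of many such reactions into an efficient jump problem automatically, so users write only the reaction list, not the simulation loop.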
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/98UQX3/
Green
Torkel Loman
Samuel Isaacson
PUBLISH
DFA3RD@@pretalx.com
-DFA3RD
Julia in Astronomy & Astrophysics Research
en
en
20220725T180000
20220725T210000
3.00000
Julia in Astronomy & Astrophysics Research
This mini-symposium aims to help accelerate the adoption of Julia among astronomers and astrophysicists. Astrophysicists have long been among the leaders in high-performance computing. Large astronomical surveys continue to create new opportunities for researchers with the skills and tools to harness Big Data efficiently. Early adopters of Julia have developed packages providing functionality commonly needed by the astronomical community (e.g., AstroLib.jl, AstroTime.jl, Cosmology.jl, FITSIO.jl, UnitfulAstro.jl) and/or gained experience applying Julia to their research problems. According to NASA’s Astrophysical Data System, ~30 astronomy papers include “Julia” and “Bezanson et al. (2017)”, with over half of those published since 2021. This mini-symposium invites researchers with experience applying Julia to astronomical research to share their experiences through a series of short talks followed by a panel discussion.
Talks should not emphasize the astronomical methods or conclusions, but rather how using Julia impacted their project. How did Julia enhance their science or their productivity? What challenges related to Julia did they encounter? What work-arounds did they find? What additions or upgrades to the Julia package ecosystem would be helpful for their future projects? …or for accelerating adoption of Julia among the astronomical community? What resources did they use for integrating their research groups and/or collaborators into the Julia community? Where could filling a gap in documentation and or developing improved training materials be particularly impactful for helping astronomers to transition to Julia?
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon-2022/talk/DFA3RD/
Red
Eric B. Ford
PUBLISH
LUWYRJ@@pretalx.com
-LUWYRJ
Julia for High-Performance Computing
en
en
20220726T140000
20220726T170000
3.00000
Julia for High-Performance Computing
**YouTube Link:** https://www.youtube.com/watch?v=fog1x9rs71Q
As we approach the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important for high-performance computing (HPC). Scientists and developers interested in Julia for HPC need to know how to leverage the capabilities of the language and ecosystem to address these issues, and which tools and best practices can help them achieve their performance goals.
What do we mean by HPC? While HPC is often associated with running large-scale physical simulations like computational fluid dynamics, molecular dynamics, high-energy physics, climate models, etc., we use a more inclusive definition beyond the scope of computational science and engineering. More recently, rapid prototyping with high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization and in-situ analysis have expanded the scope for defining HPC. For us, the core of HPC is not to run simple test problems faster but involves everything that enables solving challenging problems in simulation or data science, on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered by different vendors' CPUs and accelerators (e.g. GPUs).
In this two-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of eight 10-minute talks. The focus of these overview talks is to introduce and motivate the audience by highlighting aspects making the Julia language beneficial for scientific HPC workflows such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved some time for participants to interact, discuss and share the current landscape of their investments in Julia HPC, while encouraging networking with their colleagues over topics of common interest.
The minisymposium schedule, with confirmed speakers and topics, is as follows:
* 0:00: *William F Godoy (ORNL) & Michael Schlottke-Lakemper (U Stuttgart/HLRS):* **Julia for High-Performance Computing**
* 0:05: *Samuel Omlin (CSCS):* **Scalability of the Julia/GPU stack**
* 0:15: *Simon Byrne (Caltech/CliMA):* **MPI.jl**
* 0:25: Q&A
* 0:30: *Tim Besard (Julia Computing):* **CUDA.jl: Update on new features and developments**
* 0:40: *Julian Samaroo (MIT):* **AMDGPU.jl: State of development and roadmap to the future**
* 0:50: Q&A
* 1:00: *Albert Reuther (MIT):* **Supporting Julia Users at MIT LL Supercomputing Center**
* 1:10: *Johannes Blaschke (NERSC):* **Supporting Julia users on NERSC’s “Cori” and “Perlmutter” systems**
* 1:20: Q&A
* 1:25: *Michael Schlottke-Lakemper (U Stuttgart/HLRS):* **Running Julia code in parallel with MPI: Lessons learned**
* 1:35: *Ludovic Räss (ETH Zurich):* **Julia and GPU-HPC for geoscience applications**
* 1:45: Q&A, Discussion & Wrap up
The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia experiences growth and positions itself in the larger HPC community thanks to its appeal in scientific computing. It also exemplifies the strength of the existing Julia HPC community that collaboratively prepared this event. We are an international, multi-institutional, and multi-disciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We would like to welcome new people from diverse backgrounds who share our interest and bring them together in this minisymposium.
In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2022. During the main conference, **a Birds of a Feather session** will provide an opportunity to bring the community together for more discussions and to allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant to HPC developers and users alike.
PUBLIC
CONFIRMED
Minisymposium
https://pretalx.com/juliacon-2022/talk/LUWYRJ/
Green
Michael Schlottke-Lakemper
Carsten Bauer
Hendrik Ranocha
Johannes Blaschke
Jeffrey Vetter
PUBLISH
83E8CW@@pretalx.com
-83E8CW
A Complete Guide to Efficient Transformations of data frames
en
en
20220726T140000
20220726T170000
3.00000
A Complete Guide to Efficient Transformations of data frames
The operation specification language that is part of DataFrames.jl can be used to perform transformations of data frames and split-apply-combine operations on grouped data frames. It is supported by the following functions: `combine`, `select`, `select!`, `transform`, `transform!`, `subset`, and `subset!`.
Over the years, following users' requests, the DataFrames.jl operation specification language has evolved to efficiently support virtually any operation typically needed when working with tabular data. However, this means that it has become relatively complex, and new users often feel overwhelmed by the number of options it provides.
This workshop aims to give a comprehensive guide to the DataFrames.jl operation specification language. The presented material will help users learn this language and will serve as a reference resource.
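As a minimal sketch of the operation specification language (using toy data, not the workshop materials), the core pattern is a `source => function => target` pair:

```julia
using DataFrames, Statistics

# A toy data frame (illustrative only)
df = DataFrame(group=["a", "a", "b", "b"], x=[1.0, 2.0, 3.0, 4.0])

# `source => function => target`: compute the mean of :x within each group
# and store the result in a column named :mean_x
res = combine(groupby(df, :group), :x => mean => :mean_x)
```

The same `source => function => target` syntax is accepted by `select`, `transform`, and `subset` (and their in-place `!` variants).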
Workshop materials are available for download [here](https://github.com/bkamins/JuliaCon2022-DataFrames-Tutorial).
PUBLIC
CONFIRMED
Workshop
https://pretalx.com/juliacon-2022/talk/83E8CW/
Red
Bogumił Kamiński
PUBLISH
WKNY78@@pretalx.com
-WKNY78
`do block` considered harmless
en
en
20220727T123000
20220727T124000
0.01000
`do block` considered harmless
First we go through the similarities between for-loops, comprehensions, and higher-order functions such as `map` and `reduce`. We discuss programming as the building of useful abstractions, how "constraints liberate", and how abstract ideas can impact concrete computational performance. Finally, we discuss a couple of new features in Julia 1.9, especially `Unfold`.
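The correspondence the talk starts from can be sketched in plain Julia:

```julia
xs = 1:5

# The same computation expressed several ways:
squares_loop = Int[]
for x in xs
    push!(squares_loop, x^2)      # explicit for-loop with mutation
end

squares_comp = [x^2 for x in xs]  # comprehension
squares_map  = map(x -> x^2, xs)  # higher-order function

# `map` with do-block syntax -- the same call, just written differently
squares_do = map(xs) do x
    x^2
end
```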
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/WKNY78/
Green
Nicolau Leal Werneck
PUBLISH
KGLNUH@@pretalx.com
-KGLNUH
Teaching with Julia (my experience)
en
en
20220727T124000
20220727T125000
0.01000
Teaching with Julia (my experience)
I am a tenured professor at the university, where I teach several Computer Science courses. Some time ago I started to use Julia in my research, and more recently I have also started to use it as a teaching resource. In this talk I do not refer to Julia as the programming language for students (these are final courses in which students can use whatever programming language they want), but as a resource to create tools that help me in teaching.
In this regard, I have used Julia in three different approaches, which I will quickly cover:
- For explaining concepts, I have used Pluto notebooks to visualize them. Also, recently I have used PlutoSliderServer (like Pluto but without the editing option) to allow students to check some calculations they have to do during their exercises. An example is https://mh.danimolina.net/ (in Spanish).
- During the pandemic, the use of online tests in Moodle increased a lot. There are several formats to create them, but they are not intuitive enough for non-technical people. To solve that, I have created a website to easily create online quizzes for Moodle (https://pradofacil.danimolina.net/, in Spanish), with a simple syntax that people from different backgrounds can easily use.
- Finally, in some courses students must implement several algorithms. In order to identify the best parameter values and the best approaches to implement them, I first solve them in Julia. This has also served to predict how much computational time they will require. In my experience, although a specific C++ implementation can be faster, the average implementation takes a similar time to the Julia one; many implementations are slower than my Julia implementation due to certain development decisions.
I consider this talk could be interesting for the audience for the following reasons:
- It gives a very general view of the possibilities of Julia as a teaching resource.
- It can be useful for other teachers, giving them ideas for integrating Julia into their portfolio.
- It shows my personal experience, and the feedback obtained.
I will be able to prepare the talk both in English and in Spanish. If it is accepted, I will create an English version of the shown resources.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/KGLNUH/
Green
Daniel Molina
PUBLISH
X37FHS@@pretalx.com
-X37FHS
Simulation of atmospheric turbulence with MicroHH.jl
en
en
20220727T125000
20220727T130000
0.01000
Simulation of atmospheric turbulence with MicroHH.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/X37FHS/
Green
Chiel van Heerwaarden
PUBLISH
XRQRK3@@pretalx.com
-XRQRK3
ProtoSyn.jl: a package for molecular manipulation and simulation
en
en
20220727T130000
20220727T131000
0.01000
ProtoSyn.jl: a package for molecular manipulation and simulation
The ever-increasing expansion in computing power felt over the last few decades has fuelled a revolution in the way we do science. Modern labs are empowered by large databases, efficient collaboration tools, and fast molecular simulation software packages that save both time and money. Preliminary screening of new drug targets or protein designs are just some examples of recent applications of such tools. However, in this field, users have been experiencing a widening gap between expectations and the available technology: existing solutions are quickly becoming outdated, with legacy code and poor documentation.
In the field of protein design, for example, the Rosetta software (and its Python wrapper, PyRosetta) has become ubiquitous in any modern lab, despite suffering from the two-language problem and being virtually opaque to any attempt to modify or improve the source code. This impediment has caused a severe lag in implementing new and modern solutions, such as GPU usage, cloud-based distributed computing, or molecular energy/force calculations using machine learning models. Implementations are eventually added as single in-house scripts or patch code that lacks cohesion and proper documentation, steepening the learning curve for inexperienced users. Despite Rosetta’s massive and warranted success, there’s room for improvement.
ProtoSyn.jl, taking advantage of the growing Julia programming language and community, intends to provide an open-source, robust, and simple-to-use package for molecular manipulation and simulation. Some of its functionalities include a complete set of molecular manipulation tools (add, remove, and mutate residues; apply dihedral rotations and/or rotamers; apply secondary structures; copy and paste fragments, loops, and other structures; include non-canonical amino acids, post-translational modifications, and even branched polymer structures such as glycoproteins or polysaccharides; among others), common simulation algorithms (such as Monte Carlo or Steepest Descent), custom energy functions, etc. Much like setting up a puzzle, ProtoSyn.jl offers blocks of functions that can be mixed and matched to produce arbitrarily complex simulation algorithms. Capitalizing on recent advances, ProtoSyn.jl delivers a “plug-and-play” experience: users are encouraged to include novel applications, such as machine learning models for energy/force calculations, by following clean documentation guides, complete with examples and tutorials. Enjoying the advantages of Julia, ProtoSyn.jl can perform calculations on the GPU (using CUDA.jl), employ SIMD technology (using SIMD.jl), carry out distributed computing tasks (using Distributed.jl), and even directly call Python code (using PyCall.jl).
In a nutshell, ProtoSyn.jl intends to be an open-source alternative to molecular manipulation and simulation software, focusing on modularity and proper documentation, and offering a clean canvas where new protocols, algorithms, and models can be tested, benchmarked, and shared. Learn more on the project’s GitHub page:
https://github.com/sergio-santos-group/ProtoSyn.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/XRQRK3/
Green
José Pereira
PUBLISH
A9SRVU@@pretalx.com
-A9SRVU
PDDL.jl: A fast and flexible interface for automated planning
en
en
20220727T131000
20220727T132000
0.01000
PDDL.jl: A fast and flexible interface for automated planning
The [Planning Domain Definition Language (PDDL)](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) is a formal specification language for symbolic planning problems and domains that is widely used by the AI planning community. However, most implementations of PDDL are closely tied to particular planning systems and algorithms, and are not designed for interoperability or modular use within larger AI systems. This limitation makes it difficult to support extensions to PDDL without implementing a dedicated planner for that extension, inhibiting the generality, reach, and adoption of automated planning.
To address these limitations, we present [**PDDL.jl**](https://github.com/JuliaPlanners/PDDL.jl), an extensible parser, interpreter, and compiler interface for fast and flexible AI planning. PDDL.jl exposes the semantics of planning domains through a common interface for executing actions, querying state variables, and other basic operations used within AI planning applications. PDDL.jl also supports the extension of PDDL semantics (e.g. to stochastic and continuous domains), domain abstraction for generalized heuristic search (via abstract interpretation), and domain compilation for efficient planning, enabling speed and flexibility for PDDL and its many descendants.
Collectively, these features allow PDDL.jl to serve as a general high-performance platform for AI applications and research programs that leverage the integration of symbolic planning with other AI technologies, such as neuro-symbolic reinforcement learning, probabilistic programming, and Bayesian inverse planning for value learning and goal inference.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/A9SRVU/
Green
Xuan (Tan Zhi Xuan)
PUBLISH
ZPVDSR@@pretalx.com
-ZPVDSR
Real-Time, I/O, and Multitasking: Julia for Medical Imaging
en
en
20220727T132000
20220727T133000
0.01000
Real-Time, I/O, and Multitasking: Julia for Medical Imaging
Medical imaging devices are complex distributed systems that can feature a large variety of different parts from power amplifiers and circuitry for safety and control to robots, motors and pumps to signal generation and acquisition units. During measurements all these heterogeneous devices need to be coordinated to produce the data from which a tomographic image can be reconstructed. A central part of a measurement is the synchronous multi-channel acquisition and generation of signals. Contrary to other parts of the measurement process, hard real-time requirements typically apply here.
In this talk, we showcase a Julia software stack for the new tomographic imaging modality Magnetic Particle Imaging (MPI). Our software stack is composed of the MPIMeasurements.jl package and a Julia client library from the RedPitayaDAQServer project. The resulting system is a framework that allows us to load different device combinations from configuration files and perform different measurements with them. In particular, the talk will outline two of the main challenges we faced during development and how they have been resolved using several of Julia's features.
The first challenge is the coordination of a varying number of heterogeneous devices in a maintainable and extendable manner. This is especially important for MPI, as a very common approach to image reconstruction requires a very time-intensive calibration process, where quick and intertwined coordination of devices can save hours of invaluable scan time. Julia tasks and multi-threading, with threads dedicated to specific tasks, allowed us to implement a very flexible architecture for managing all the devices.
The second challenge is the configuration of a cluster of data acquisition boards and the transmission of real-time signals from this cluster. The cluster is realized using the low-cost RedPitaya STEMlab hardware and open-source software components provided in our RedPitayaDAQServer project, which include a Julia client library. To achieve real-time signal transmission, the Julia client needs to communicate with the servers running on each data acquisition board of the homogeneous cluster and maintain consistently high network performance to ensure that no data loss occurs. Our solutions here involve Julia tasks, channels, metaprogramming, and multiple dispatch to implement an interface to our data acquisition boards that allows for batch execution of commands and high-performance continuous data transmission.
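The task/channel pattern described above can be sketched in base Julia (an illustrative toy, not the MPIMeasurements.jl implementation):

```julia
# A bounded Channel decouples a data-producing task from a consumer,
# mimicking continuous acquisition feeding downstream processing.
samples = Channel{Vector{Float64}}(16)   # buffer up to 16 data blocks

producer = @async begin
    for _ in 1:4
        put!(samples, rand(8))   # stand-in for one block of acquired samples
    end
    close(samples)               # signal end of acquisition
end

blocks = collect(samples)        # consumer drains the channel until it closes
wait(producer)
```

In the real system the producer would be fed by the network connection to the acquisition boards, and dedicated threads (`Threads.@spawn`) would replace `@async` where true parallelism is needed.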
Core packages being presented:
• https://github.com/MagneticParticleImaging/MPIMeasurements.jl
• https://github.com/tknopp/RedPitayaDAQServer
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZPVDSR/
Green
Niklas Hackelberg
PUBLISH
ZFUAHG@@pretalx.com
-ZFUAHG
Build an extensible gallery of examples
en
en
20220727T133000
20220727T134000
0.01000
Build an extensible gallery of examples
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZFUAHG/
Green
Johnny Chen
PUBLISH
3LHDTD@@pretalx.com
-3LHDTD
Comrade: High-Performance Black Hole Imaging
en
en
20220727T143000
20220727T144000
0.01000
Comrade: High-Performance Black Hole Imaging
In 2019 the global Event Horizon Telescope (EHT) made history by producing the first-ever image of a black hole on horizon scales. However, imaging a black hole is a complicated task. The EHT is a radio interferometer and does not directly produce the on-sky image. Instead, it measures the Fourier transform of the image. Furthermore, the telescope only samples the image at a handful of points in the Fourier domain. As a result of the incomplete Fourier sampling, infinitely many images are consistent with the EHT observations. Quantifying this uncertainty is imperative for any EHT analyses and black hole science as a whole.
Bayesian inference provides a natural avenue to quantify image uncertainty. However, this approach is computationally demanding. Due to computational complexity, low-level languages (e.g., C++) are required to make the calculation feasible. On the other hand, interactivity is critical when modeling, as the usual workflow involves choosing an image structure, applying it to the data, and graphically assessing the results. Incorporating interactivity into the modeling pipeline requires a second package written in Python. Historically, this separation has increased the learning curve and limited the adoption of Bayesian methods.
In the first part of the talk, I will introduce Comrade, a Julia Bayesian black hole imaging package geared towards EHT and next-generation EHT (ngEHT) analyses. This package aims to be highly flexible, including many image models such as geometric, imaging, and physical accretion models. Additionally, Comrade is fast: it is over 100x faster than other EHT modeling packages while using far fewer resources. This drastic speed increase is due to Julia's excellent introspection, package management, and automatic differentiation libraries.
In the second part of my talk, I will detail how this performance increase has enabled novel black hole research and will be vital for future black hole science. Within the next decade, the ngEHT will increase its number of observations and its data volume per observation by an order of magnitude to produce higher-quality images. As a result of this significant increase in data, the ngEHT will require new tools. I will explain how Julia can play a vital role in next-generation black hole science and what additional language features are needed.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/3LHDTD/
Green
Paul Tiede
PUBLISH
LJYRGJ@@pretalx.com
-LJYRGJ
Reaction rates and phase diagrams in ElectrochemicalKinetics.jl
en
en
20220727T144000
20220727T145000
0.01000
Reaction rates and phase diagrams in ElectrochemicalKinetics.jl
In electrochemical reaction modeling, there are a variety of mathematical models (such as Butler-Volmer, Marcus, or Marcus-Hush-Chidsey kinetics) used to describe the relationship between the overpotential and the reaction rate (or electric current). Another important entity is the inverse of this function, i.e. given a current, what overpotential would be needed to drive it? Most of the models used do not have analytical inverses, so inverting them requires an optimization problem to be solved.
In ElectrochemicalKinetics.jl, I created a generic interface for computing these reaction rates and overpotentials, as well as using these quantities for other analyses such as fitting model parameters or constructing nonequilibrium phase diagrams, important for predicting, for example, behavior of a battery under fast charge or discharge conditions. Given a `KineticModel` object `m` we can always compute the rate constant at a given overpotential with the same syntax, no matter if `m isa ButlerVolmer` or `m isa Marcus` or any other implemented model type. This allows for easy comparison between these models, including when analyzing real data.
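The dispatch pattern behind this uniform syntax can be sketched as follows (type and function names here are illustrative, not necessarily the package's actual API):

```julia
# Multiple dispatch gives every kinetic model the same calling syntax.
abstract type KineticModel end

struct ButlerVolmer <: KineticModel
    A::Float64   # exchange prefactor
    α::Float64   # transfer coefficient
end

# Butler-Volmer rate at dimensionless overpotential η (in units of kT/e):
# forward and backward branches with transfer coefficients α and (1 - α)
rate(m::ButlerVolmer, η) = m.A * (exp(m.α * η) - exp(-(1 - m.α) * η))

m = ButlerVolmer(1.0, 0.5)
k = rate(m, 0.1)   # the same `rate(m, η)` call works for any KineticModel subtype
```

Adding, say, a `Marcus` model is then just a new subtype with its own `rate` method; downstream code that only calls `rate(m, η)` needs no changes.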
We can also construct nonequilibrium phase diagrams, to, for example, understand and predict lithium intercalation behavior in a battery at various charge or discharge rates. Building these phase diagrams requires calling the inverse function mentioned above and using it within another optimization (to satisfy the thermodynamic common-tangent condition), making automatic differentiation challenging. I will also discuss some of these challenges and the solutions I have found for them so far.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LJYRGJ/
Green
Rachel Kurchin
PUBLISH
BLBKZM@@pretalx.com
-BLBKZM
RVSpectML: Precision Velocities from Spectroscopic Time Series
en
en
20220727T145000
20220727T150000
0.01000
RVSpectML: Precision Velocities from Spectroscopic Time Series
*Purpose:* The RVSpectML family of packages provides performant implementations of both traditional methods for measuring precise radial velocities (e.g., computing RVs from CCFs or template matching) and a variety of physics-informed machine learning-based approaches to mitigating stellar variability (e.g., Doppler-constrained PCA, Scalpels, custom line lists, Gaussian process latent variable models). It aims to make it practical for researchers to experiment with new approaches. Additionally, it aims to help astronomers improve the robustness of exoplanet discoveries by exploring the sensitivity of their results to choice of data analysis algorithm.
*Context:* Recently, NASA and NSF chartered the [Extreme Precision Radial Velocity Working Group](https://exoplanets.nasa.gov/exep/NNExplore/EPRV/) to recommend a plan for detecting potentially Earth-like planets around other stars. Their recommendations included developing a modular, customizable, and open-source pipeline for analyzing spectroscopic timeseries data from multiple instruments. The [RVSpectML](https://rvspectml.github.io/RvSpectML-Overview/) family of packages directly addresses this need.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/BLBKZM/
Green
Eric B. Ford
PUBLISH
WCKJQB@@pretalx.com
-WCKJQB
State of JuliaGeo
en
en
20220727T150000
20220727T151000
0.01000
State of JuliaGeo
[JuliaGeo](https://juliageo.org) is a community that contains several related Julia packages for manipulating, querying, and processing geospatial geometry data. We aim to provide a common interface between geospatial packages. In 2022 there has been a big push to have parity with the Python geospatial packages, such as rasterio and geopandas. In this 10 minute talk, we'd like to show these improvements---both in code and documentation---during a tour of the geospatial ecosystem.
We'll showcase the new traits-based release of GeoInterface.jl and work on GeoDataFrames.jl, GeoArrays.jl, and Rasters.jl. This includes new packages such as GeoFormatTypes, Extents.jl, and GeoAcceleratedArrays.jl. We will conclude with future plans, such as enabling geospatial operations in DTables using Dagger.jl.
Links to the [slides](https://app.box.com/s/7dysp78eqlo2b201nx795f0efaci2il6) and [demo](https://app.box.com/s/r5ulbktmqinl732ixjw5h9xyxy6h0mus).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/WCKJQB/
Green
Maarten Pronk
Josh Day
Rafael Schouten
PUBLISH
GUQBSE@@pretalx.com
-GUQBSE
Towards Using Julia for Real-Time applications in ASML
en
en
20220727T151000
20220727T154000
0.03000
Towards Using Julia for Real-Time applications in ASML
... requirements, which means that software execution must be highly performant & deterministic (i.e., predictable and reproducible). As such, design engineers must look at various aspects when developing software, like fine control of memory, optimal design of data types and modeling algorithms, as well as efficient CPU cache utilization.
User-controlled garbage collection (or memory management) and system image binary contents (e.g., absence of JIT, removal of metadata, etc.) proved to be essential aspects to consider in making Julia accepted as a language of choice in such a complex domain, compared to more established low-level languages like C and C++. The goal of this talk is to discuss the strategies and techniques that we explored to enable the use of Julia in the on-line execution of lithography models.
We believe that this work will provide new insights into Julia's future by opening new opportunities for further adoption of Julia in complex industrial software systems.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/GUQBSE/
Green
Francesco Fucci
PUBLISH
YN8QPM@@pretalx.com
-YN8QPM
Opening remarks
en
en
20220727T163000
20220727T164000
0.01000
Opening remarks
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YN8QPM/
Green
PUBLISH
NPPKUW@@pretalx.com
-NPPKUW
Keynote- Erin LeDell
en
en
20220727T164000
20220727T172500
0.04500
Keynote- Erin LeDell
Keynote- Erin LeDell
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon-2022/talk/NPPKUW/
Green
Erin LeDell
PUBLISH
AL8VGC@@pretalx.com
-AL8VGC
Julia Computing Sponsored Talk
en
en
20220727T172500
20220727T174000
0.01500
Julia Computing Sponsored Talk
PUBLIC
CONFIRMED
Platinum sponsor talk
https://pretalx.com/juliacon-2022/talk/AL8VGC/
Green
PUBLISH
R7AYWY@@pretalx.com
-R7AYWY
AWS Sponsor Talk
en
en
20220727T174000
20220727T175000
0.01000
AWS Sponsor Talk
PUBLIC
CONFIRMED
Gold sponsor talk
https://pretalx.com/juliacon-2022/talk/R7AYWY/
Green
PUBLISH
DBS3SS@@pretalx.com
-DBS3SS
Optimization of bike manufacturing and distribution (use-case)
en
en
20220727T191000
20220727T192000
0.01000
Optimization of bike manufacturing and distribution (use-case)
Kross S.A. (https://kross.eu/) is one of the largest bicycle manufacturers in Europe, with a production capacity of up to 1 million bikes a year. The company also exports its products to over 50 countries around the globe. The problem that the entire bicycle manufacturing industry is currently facing is the shortage of various key bike components due to COVID-19 logistic chain disturbances. The goal of the company is to maximize customer (retailer) satisfaction while simultaneously meeting all business constraints with regard to production (part availability, assembly line capacity) and the observed demand for bikes (taking into consideration possible bike substitution, pricing, and discount policies).
In order to optimize bicycle production and the distribution plan, we built a mathematical model of the manufacturing plant. The basic model formulation is an NP-hard Mixed Integer Linear Programming optimization problem with 4,000,000 decision variables and over 100,000,000 business constraints. The mathematical model was implemented in the Julia programming language using the JuMP package along with Julia's linear algebra features and several heuristics and algebraic transformations. The model was subsequently solved using custom-designed heuristics as well as solver packages.
This data science project had a huge overall effect on the customer's business. The computational model made it possible to manufacture 25% more bikes and yields a 10% higher total profitability of the bike factory compared to the best recommendations by a leading ERP solution that had previously been used by the company for production planning.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DBS3SS/
Green
Przemysław Szufel
PUBLISH
PMAYRF@@pretalx.com
-PMAYRF
TintiNet.jl: a language model for protein 1D property estimation
en
en
20220727T192000
20220727T193000
0.01000
TintiNet.jl: a language model for protein 1D property estimation
The objective of TintiNet.jl is to improve the current state of single-sequence-based prediction of 1D protein structural properties by drastically reducing the size of the models employed while preserving or improving their raw predictive power.
Our main design principles were to avoid intra-serialized processing layers (such as recurrent neural networks) and to employ encoding layers that could grow deeper without a steep increase in computational complexity. Our solution was to develop a hybrid convolutional-transformer architecture, employing the Julia Language, The Flux.jl framework and the Transformers.jl contributed layers to Flux.jl, as well as some BioJulia packages (BioSequences.jl, BioStructures.jl, BioAlignments.jl and FASTX.jl). The project is 100% open-source and open-data, and scripts and procedures to implement the methodology presented are available at https://github.com/hugemiler/TintiNet.jl.
By training and evaluating our model on an extensive collection of over 30,000 protein sequences, we demonstrate that this architecture can achieve a similar degree of merit (classification accuracy and regression error) when compared to the three most modern, state-of-the-art models. Since it has a much smaller number of parameters than its alternatives, it occupies much less memory and generates predictions up to 10 times faster.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/PMAYRF/
Green
Guilherme Fahur Bottino
PUBLISH
YUYXMM@@pretalx.com
-YUYXMM
PtyLab.jl - Ptychography Reconstruction
en
en
20220727T193000
20220727T194000
0.01000
PtyLab.jl - Ptychography Reconstruction
Conventional ptychography is a powerful technique, since it can retrieve the phase and amplitude of an object, which are usually not accessible by most common imaging techniques.
The drawback of this method is that it requires a stack of images taken at different displacements of an object with respect to a probe laser beam (such as a Gaussian laser beam).
The recorded images are the intensity of the diffraction pattern of the object illuminated with the probe.
Via iterative reconstruction algorithms one can retrieve amplitude and phase of both the probe and the object. To achieve reasonable runtimes, the algorithms require low memory consumption.
In PtyLab.jl we achieve this with a functional style of programming in which buffers are allocated once at the beginning of the reconstruction and stored implicitly inside the relevant functions.
Furthermore, we demonstrate that this style, combined with Julia, achieves reasonable speed-ups in comparison to MATLAB and Python.
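The buffer-in-closure idea can be sketched in a few lines of base Julia (illustrative only, not PtyLab.jl's actual code):

```julia
# A closure allocates its working buffer once; every subsequent call reuses it,
# avoiding per-iteration allocations inside the reconstruction loop.
function make_update(n)
    buf = zeros(n)                 # allocated once, captured by the closure
    function update!(x)
        buf .= 2 .* x              # in-place work on the stored buffer
        return sum(buf)
    end
    return update!
end

update! = make_update(3)
s = update!([1.0, 2.0, 3.0])       # reuses `buf`; no fresh buffer per call
```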
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YUYXMM/
Green
Felix Wechsler
Lars Loetgering
PUBLISH
ET78DS@@pretalx.com
-ET78DS
Cropbox.jl: A Declarative Crop Modeling Framework
en
en
20220727T194000
20220727T195000
0.01000
Cropbox.jl: A Declarative Crop Modeling Framework
Crop models describe how agricultural crops grow under dynamic environmental conditions and management practices. The models have many applications in agricultural science including, but not limited to, predicting yields of the crops under climate change scenarios and finding an optimal strategy for maximizing the yield. Crop modeling can encompass multiple aspects of research activities, but practically speaking, it is a task of formulating quantitative knowledge about the crops and translating them into a computer program.
Many crop models were traditionally developed in imperative programming languages, where unrestricted control flow and state mutation could easily lead to error-prone code and inevitable technical debt. Also, model developers and model users were often left in two disconnected workflows due to the lack of an interactive programming environment.
Cropbox is a new modeling framework that brings a declarative approach to crop modeling and consolidates model development and use in a streamlined workflow implemented on the Julia ecosystem. With the insight that a crop model is essentially an integrated network of generalized state variables, modelers can write down a high-level specification of the model *system*, represented by a collection of *variables* with specific behaviors attached. The framework then analyzes the specification and automatically generates lower-level Julia code that works with regular functions implementing common features like simulation running, configuration management, evaluation with common metrics, calibration of parameters, visualization of results, and manipulation of interactive plots.
In this talk, I will briefly introduce the design and implementation of Cropbox and demonstrate some modeling applications such as a coupled leaf gas-exchange model ([LeafGasExchange.jl](https://github.com/cropbox/LeafGasExchange.jl)), a whole-plant garlic growth model ([Garlic.jl](https://github.com/cropbox/Garlic.jl)), and a 3D root structure growth model ([CropRootBox.jl](https://github.com/cropbox/CropRootBox.jl)).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ET78DS/
Green
Kyungdahm Yun
PUBLISH
CJ3XLV@@pretalx.com
-CJ3XLV
GapTrain: a faster and automated way to generate GA potentials
en
en
20220727T195000
20220727T200000
0.01000
GapTrain: a faster and automated way to generate GA potentials
## Introduction
Molecular simulations are a key tool in computational chemistry for reproducing experimental reality. The accuracy of these models involves a number of elements, such as the inclusion of the solvation medium. Thus, interatomic potentials combined with molecular dynamics and Monte Carlo (MC) methods have been widely applied to explore potential energy surfaces. However, most of these potentials are parameterised for isolated entities with fixed connectivity and are thus unable to describe bond breaking/forming processes.
Machine learning approaches have revolutionized force field-based simulations and can be implemented for the entire periodic table. Within small chemical subspaces, models can be achieved using neural networks (NNs), kernel-based methods such as the Gaussian Approximation Potential (GAP) framework or gradient-domain machine learning (GDML), and linear fitting with properly chosen basis functions, each with different data requirements and transferability. GAPs have been used to study a range of elemental, multicomponent inorganic, gas-phase organic molecular, and, more recently, condensed-phase systems, such as methane and phosphorus. These potentials, while accurate, have required considerable computational effort and human oversight. Indeed, condensed-phase NN and GAP fitting approaches typically require several thousand reference (“ground truth”) evaluations.
In the present work – with a view to developing potentials to simulate solution-phase reactions – we consider bulk water as a test case and develop a strategy which requires just hundreds of total ground truth evaluations and no a priori knowledge of the system, apart from the molecular composition. We show how this methodology is directly transferable to different chemical systems in the gas phase as well as in implicit and explicit solvent, focusing on the applicability to a range of scenarios that are relevant in computational chemistry.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/CJ3XLV/
Green
Letícia Madureira
PUBLISH
CPS73H@@pretalx.com
-CPS73H
Using Julia for Observational Health Research
en
en
20220727T200000
20220727T201000
0.01000
Using Julia for Observational Health Research
A single patient encounter with a health care provider can produce an enormous amount of Real World Data (RWD). Per the United States Food and Drug Administration, RWD "relates to patient health status and/or the delivery of health care routinely collected from a variety of sources." Some examples of RWD are electronic health records, medical claims, and mobile device data. Julia is primed to handle the computation required to generate clinical significance from RWD in the domain of observational health research.
Historically, however, Julia's ecosystem has not been mature enough to participate directly in observational health research concerning large amounts of RWD. To utilize this data effectively, the open-science community OHDSI (Observational Health Data Sciences and Informatics) was formed. The core standard OHDSI has developed, now being rapidly adopted worldwide for handling RWD, is the Observational Medical Outcomes Partnership Common Data Model, commonly referred to as the OMOP CDM. Traditionally, the tools built by OHDSI to extract and analyze patient information from the OMOP CDM have been written in the R programming language. As a result, other research communities have been precluded from participating directly in this space.
I am pleased to announce in this talk that the Julia ecosystem has now reached a level of maturity to bridge to observational health research communities such as OHDSI to enable future observational health researchers to leverage the benefits of Julia. In this talk, I will provide a gentle introduction to observational health research and popular Common Data Models such as OMOP. This will lead into a discussion on lessons learned from an observational health study I performed called "Assessing Health Equity in Mental Healthcare Delivery Using a Federated Network Research Model" which used Julia as its main driving engine. Finally, tools available in the Julia ecosystem from JuliaHealth, JuliaInterop, and others that enable bridging between these two communities will be highlighted.
By the end of this talk, it should be clear to potential researchers from the Julia community that the Julia ecosystem is mature enough to participate in observational health research endeavors. Furthermore, through the lessons I share, potential researchers can take inspiration from these methods for their own work. My end goal is to show how these communities can be bridged, so that novel collaborations can be made and the benefits of using Julia can be easily accessed in observational health research.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/CPS73H/
Green
Jacob Zelko
PUBLISH
ML8N7S@@pretalx.com
-ML8N7S
Finding Fast Radio Bursts, Faster
en
en
20220727T201000
20220727T202000
0.01000
Finding Fast Radio Bursts, Faster
One of the main bottlenecks in a fast radio burst (FRB) detection pipeline is first-pass pulse detection. FRBs, pulsars, airplane radars, and prematurely opened microwaves can all produce pulse-like profiles. Processing the dynamic spectral data in real time to limit the search space and produce a list of possible candidates is an important first step.
When wideband radio pulses travel through space, the ionized interstellar medium disperses the pulse in time. The received spectrum therefore shows a peak descending in frequency as a function of time, instead of all frequencies arriving at once. For a given received time/frequency data point, the pulse signature may be buried under noise. Since we don't know the dispersion of an arbitrary pulse a priori, we have to integrate along every possible dispersion curve for every start time to find a possible correlation.
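As a hedged illustration, the brute-force search described above can be sketched on the CPU as follows. This is a toy sketch under invented names, not the talk's GPU pipeline: `spectrum` holds power per frequency channel and time sample, and each trial dispersion measure is represented by a precomputed vector of per-channel time delays.

```julia
# Toy brute-force dedispersion search (illustrative only; not the talk's code).
# spectrum[f, t]: power at frequency channel f, time sample t.
# delays[k][f]: time delay of channel f for the k-th trial dispersion measure.
function dedisperse_search(spectrum::Matrix{Float64}, delays::Vector{Vector{Int}})
    nchan, ntime = size(spectrum)
    best_score, best_dm, best_t0 = -Inf, 0, 0
    for (dm_idx, delay) in enumerate(delays)       # one delay curve per trial DM
        maxdelay = maximum(delay)
        for t0 in 1:(ntime - maxdelay)             # every possible start time
            s = 0.0
            for f in 1:nchan                       # integrate along the curve
                s += spectrum[f, t0 + delay[f]]
            end
            if s > best_score
                best_score, best_dm, best_t0 = s, dm_idx, t0
            end
        end
    end
    return best_score, best_dm, best_t0
end
```

The quadratic cost over start times and trial dispersion measures is what motivates the divide-and-conquer GPU approach discussed below.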
Using both hand-written CUDA.jl kernels and Julia's GPU-array abstractions, we can implement a performant divide and conquer approach to search for these pulses. Then, leveraging the Julia ecosystem, we can embed this transformation into a modern, integrated FRB detection pipeline.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ML8N7S/
Green
Kiran Shila
PUBLISH
RTBE9E@@pretalx.com
-RTBE9E
Using contexts and capabilities to provide privacy protection
en
en
20220727T202000
20220727T203000
0.01000
Using contexts and capabilities to provide privacy protection
Privacy is an important aspect of the internet today. When you need to use a particular service, you often need to hand over some personal information. The service provider typically provides some protection about the use of your personal information based upon its privacy policy.
From the service provider’s perspective, this is not a simple task. Suppose that you have collected your users’ email addresses and made the promise that you do not share them with any third party vendor. In a large company, there could be many systems and processes that make use of email addresses. How do you ensure that none of your code leaks information to any third party vendors?
The problem can be solved with contexts and capabilities. Contexts are environmental information that tracks the purpose of your code. Capabilities represent the set of purposes that your code may be used for. As an example, suppose `bar` is a function that writes sensitive information, such as an email address, to a user database, and it has the capability "user-management". A call from `foo()` to `bar()` is then allowed as long as `foo`'s stated capabilities also include "user-management".
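The `foo`/`bar` example above can be sketched in a few lines. This is a hypothetical minimal sketch, not the prototype presented in the talk: capabilities live in a plain dictionary and are validated at runtime before the call.

```julia
# Hypothetical sketch of runtime capability validation (not the talk's prototype).
# Each function declares the purposes (capabilities) it may be used for;
# a call is allowed only if the callee's capabilities are a subset of the caller's.
const CAPABILITIES = Dict{Symbol,Set{String}}(
    :foo => Set(["user-management"]),
    :bar => Set(["user-management"]),
)

function check_capability(caller::Symbol, callee::Symbol)
    required = CAPABILITIES[callee]
    granted  = CAPABILITIES[caller]
    issubset(required, granted) ||
        error("$caller lacks capabilities needed to call $callee")
    return true
end

bar(email) = "stored $email"          # writes sensitive data to the user database

function foo(email)
    check_capability(:foo, :bar)      # validated at runtime before the call
    return bar(email)
end
```

A caller whose declared capabilities do not include "user-management" would fail the `check_capability` call with an error instead of reaching `bar`.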
This talk will cover more about the why’s and the general mechanics of context and capabilities. I will also present a prototype that provides some basic functionalities of tracking contexts, defining capabilities and validating capabilities at runtime.
Contexts are also known as coeffects. You can find more information about the theory of context-aware programming languages at http://tomasp.net/coeffects/.
More information about context and capabilities can be found at this Hack language’s documentation: https://docs.hhvm.com/hack/contexts-and-capabilities/introduction.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RTBE9E/
Green
Tom Kwong
PUBLISH
J3VNLN@@pretalx.com
-J3VNLN
GatherTown -- Social break
en
en
20220727T203000
20220727T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/J3VNLN/
Green
PUBLISH
YSLKZJ@@pretalx.com
-YSLKZJ
From Mesh Generation to Adaptive Simulation: A Journey in Julia
en
en
20220727T123000
20220727T130000
0.03000
From Mesh Generation to Adaptive Simulation: A Journey in Julia
Applications of interest in computational fluid mechanics typically occur on domains with curved boundaries. Further, the solution of a non-linear physical model can develop complex phenomena such as discontinuities, singularities, and turbulence.
Attacking such complex flow problems may seem daunting. In this talk, however, we present a toolchain with components entirely available in the Julia ecosystem to do just that. In broad strokes the workflow is:
1. Use HOHQMesh.jl to interactively prototype and visualize a domain with curved boundaries.
2. HOHQMesh.jl then generates an all-quadrilateral mesh amenable to high-order numerical methods.
3. The mesh file is passed to Trixi.jl, a numerical simulation framework for conservation laws.
4. Solution-adaptive refinement of the mesh within Trixi.jl is handled by P4est.jl.
5. After the simulation, a first visualization is made using either Plots.jl or Makie.jl.
6. Solution data can also be exported with Trixi2Vtk.jl for visualization in external software like ParaView.
The strength and simplicity of this workflow lie in the combination of several packages, some originally written in Julia, like Trixi.jl, and some wrappers, like P4est.jl or HOHQMesh.jl, that give Julia users access to powerful, well-developed numerical libraries and tools written in other programming languages.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/YSLKZJ/
Red
Andrew Winters
PUBLISH
9J3PGX@@pretalx.com
-9J3PGX
CUPofTEA, versioned analysis and visualization of land science
en
en
20220727T130000
20220727T133000
0.03000
CUPofTEA, versioned analysis and visualization of land science
Terrestrial ecosystems (i.e., everything but the ocean) have been absorbing about a third of human CO2 emissions, mitigating climate change as atmospheric carbon goes into biomass or soil organic matter. However, it is unclear whether this carbon sink will continue in the future. The scientific community uses measurements from field sites and satellite remote sensing to understand the mechanisms regulating this behavior and to create predictive models. However, efforts are segmented into specific disciplines (e.g., field experimentalists, modelers, plant ecologists, hydrologists) that rarely collaborate. This is due partly to different programming languages (experimentalists use scripting languages such as Python or R, while modelers use fast languages such as Fortran) and partly to the nature of scientific publications, which encourages small teams. In recent decades, globally standardized databases have been created, along with community open-source research tools and Julia, a scripting language as fast as Fortran. This opens the door for collaboration in land-atmosphere exchange science. We use Julia, GitHub, and packages such as Franklin.jl and WGLMakie.jl to create CUPofTEA, a community platform hosting versioned analyses and visualizations of land-atmosphere exchange science across fields. We demonstrate the workflow with DAMMmodel.jl, a package to analyze and visualize the response of ecosystem CO2 emissions to soil moisture and temperature, together with global databases of ecosystem (FLUXNET) and soil (COSORE) fluxes.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/9J3PGX/
Red
Alexandre A. Renchon
PUBLISH
RQP9TG@@pretalx.com
-RQP9TG
ModalDecisionTrees: Decision Trees, meet Modal Logics
en
en
20220727T143000
20220727T144000
0.01000
ModalDecisionTrees: Decision Trees, meet Modal Logics
Symbolic learning provides *transparent* (or *interpretable*) models, and is becoming increasingly popular as AI permeates more and more aspects of our lives while simultaneously raising ethical concerns. Mainly based on decision trees and rule-based models, symbolic modeling has largely been studied with either propositional or first-order logic as the underlying logical formalism. These logics are two extremes in terms of *expressive power* and *computational tractability*: on one hand, propositional logic can only express a simple form of reasoning, which makes classical decision trees easy to learn but also unable to deal with non-tabular data; on the other hand, first-order logics can express complex sentences in terms of entities and relations, but at the cost of higher computational complexity. A middle point between the two has been overlooked: modal logic.
ModalDecisionTrees.jl offers a set of symbolic machine learning methods based on extensions of classical decision tree learning algorithms (CART and C4.5), that leverage modal logics to perform a rather simple (but powerful) form of entity-relation reasoning; this allows *"Modal Decision Trees"* (MDTs) to capture temporal, spatial, and spatio-temporal patterns, and makes them suitable to natively deal (= no need for feature extraction) with data such as multivariate time-series and image data.
To fix ideas, consider the case of time-series classification. While classical trees can only make decisions based on scalar values, and thus can only deal with time series after they have been *flattened* into a set of scalar descriptors (a feature-extraction step), a *modal* time-series classification rule can speak in terms of temporal patterns such as: *there exists an interval in the time series where variable i has a certain property, _containing_ another interval where variable j has another property*.
Modal logic can express the existence of entities (for example, a time interval, or an image region) with given properties, and properties can be *local*, such as the value of a variable being always lower than a certain threshold within the time interval, or *relational*, such as one entity being *contained* in, or *overlapping* with another one.
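As a hedged illustration of the kind of rule just described, the following toy snippet evaluates one such modal temporal pattern by brute force. All names are invented for illustration; this is not the ModalDecisionTrees.jl API.

```julia
# Toy evaluation of a modal temporal rule (illustrative only):
# "there exists an interval I where the mean of variable x1 exceeds θ1,
#  containing an interval J where the maximum of variable x2 stays below θ2".
intervals(n) = [(a, b) for a in 1:n for b in a:n]
covers(I, J) = I[1] <= J[1] && J[2] <= I[2]   # J is contained in I

function modal_rule(x1, x2; θ1 = 0.5, θ2 = 0.2)
    n = length(x1)
    any(covers(I, J) &&
        sum(x1[I[1]:I[2]]) / (I[2] - I[1] + 1) > θ1 &&   # local property on I
        maximum(x2[J[1]:J[2]]) < θ2                      # local property on J
        for I in intervals(n), J in intervals(n))
end
```

The `covers` relation is the *relational* part of the rule; the threshold conditions on each interval are the *local* properties.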
This process involves an intermediate step where data samples are represented as graphs (Kripke structures, in logical jargon) representing entities, their local properties, and their relations.
Note that rules and patterns can, of course, be as complex as the reality they are trying to capture; however, they can always be straightforwardly translated into natural language, which is the essence of the *transparency* of these models, as well as the main reason why one may want to use this package.
MDTs have been shown to outperform classical decision trees, and often to perform comparably to functional gradient-based methods (e.g., neural networks), in tasks such as multivariate time-series classification (e.g., COVID-19 diagnosis from audio recordings of coughs and breaths) and image classification (e.g., land-cover classification).
Despite being in its infancy, ModalDecisionTrees.jl can be used with the Machine Learning in Julia (MLJ) framework, and provides:
- support for *bagging* (i.e., forests, ensembles of trees);
- support for *multimodal* learning;
- tools for inspecting models and analyzing single rules.
Package available at: https://github.com/giopaglia/ModalDecisionTrees.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RQP9TG/
Red
Giovanni Pagliarini
PUBLISH
TRFSJY@@pretalx.com
-TRFSJY
Multivariate polynomials in Julia
en
en
20220727T144000
20220727T145000
0.01000
Multivariate polynomials in Julia
Multivariate polynomials appear in applications such as computer algebra systems, homotopy continuation, and Sum-of-Squares programming. Different applications have different requirements, making it challenging to choose a representation that is the most efficient for all use cases. It is well understood, for instance, that the best concrete representation for these polynomials depends on whether they are sparse or not. An abstract interface allows not only applications to be independent of the actual representation used, but also lower-level operations, such as polynomial division or gcd, to be written generically.
In fact, this abstraction is even more important in Julia than in other languages, because Julia allows yet another aspect to enter the design of multivariate polynomials. We show in this talk that, for basic operations, the Julia compiler can either compile a generic method working for any set of variables or a method specialized to a specific one. This is achieved by moving part of the polynomial description from field values to type parameters, which also reduces the memory footprint of polynomials. These two aspects, sparsity and specialization, make for four different representations, each with specific use cases where it is most appropriate. Packages relying on multivariate polynomial computation for which more than one use case can occur should therefore be implemented on top of an abstract multivariate polynomial interface, letting the user choose the implementation via the type of the polynomials given as input.
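The field-value versus type-parameter design axis can be sketched with two invented toy types (a hedged illustration, not the actual MultivariatePolynomials.jl interface):

```julia
# Toy illustration of the design axis discussed above (invented types).
# Variables as run-time data: one generic compiled method for all variable sets.
struct DynPoly
    vars::Vector{Symbol}
    terms::Dict{Vector{Int},Float64}   # exponent vector => coefficient
end

# Variables as a type parameter: the compiler specializes per variable set,
# and the variable names no longer occupy memory in each polynomial value.
struct TypedPoly{Vars}
    terms::Dict{Vector{Int},Float64}
end

nvariables(p::DynPoly) = length(p.vars)
nvariables(::TypedPoly{Vars}) where {Vars} = length(Vars)  # known at compile time

p = TypedPoly{(:x, :y)}(Dict([2, 0] => 1.0, [0, 1] => 3.0))  # x^2 + 3y
```

Code written against a shared `nvariables`-style interface runs unchanged on either representation, which is the point of the abstract interface.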
We illustrate this with actual Julia packages implementing these representations: DynamicPolynomials.jl (sparse, non-specialized), TypedPolynomials.jl (sparse, specialized), SIMDPolynomials.jl (sparse, specialized), and TaylorSeries.jl (dense, specialized). We analyze the impact of the choice of representation in a benchmark of gcd computation, whose implementation is written generically thanks to the abstract MultivariatePolynomials.jl interface.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/TRFSJY/
Red
Benoît Legat
Chris Elrod
PUBLISH
UL3T8K@@pretalx.com
-UL3T8K
PHCpack.jl: Solving polynomial systems via homotopy continuation
en
en
20220727T145000
20220727T150000
0.01000
PHCpack.jl: Solving polynomial systems via homotopy continuation
Systems of many polynomial equations in several variables occur in various areas of science and engineering, such as mechanism design, Nash equilibria, and computer vision. Use cases of PHCpack can be found in more than one hundred scientific papers. In addition to this need in applications, theorems from algebraic geometry have led to efficient algorithms to compute all isolated solutions and to compute the degrees and dimensions of all positive-dimensional solution sets. PHCpack contains many of the first implementations of algorithms in numerical algebraic geometry.
PHCpack allows users to provide polynomial systems to the solver in a variety of formats, including symbolically. The Julia interface to PHCpack takes symbolic input from the user and, using native Julia data structures, processes it and returns the results numerically via PHCpack.
As one approach, using only the phc executable file, one can call the relevant features of PHCpack from a Julia program. Alternatively, we have compiled a C interface into a shared object, which can be imported into a Julia session.
As a use case, we consider the design of a 4-bar mechanism. The mechanism traces a curve, and given sufficiently many points that one wants it to trace, one can compute all necessary parameters of the mechanism. This computation requires solving a system with many equations and variables.
All the code is available in public GitHub repositories:
https://github.com/kviswa5
https://github.com/janverschelde/PHCpack/tree/master/src
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/UL3T8K/
Red
Kylash Viswanathan
Jan Verschelde
PUBLISH
KPRZAM@@pretalx.com
-KPRZAM
A Tax-Benefit model for Scotland in Julia
en
en
20220727T150000
20220727T151000
0.01000
A Tax-Benefit model for Scotland in Julia
A tax-benefit model is a computer program that calculates the effects of possible changes to the fiscal system on a sample of households. We take each household in a household survey dataset, calculate how much tax its members are liable for under some proposed tax and benefit regime and how much benefit they are entitled to, and add up the results. If the sample is representative of the population, and the modelling sufficiently accurate, the model can then tell you, for example, the net cost of the proposals, the numbers who are made better or worse off, the effective tax rates faced by individuals, the numbers taken in or out of poverty by some change, and much else.
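The calculate-and-aggregate loop just described can be sketched in a few lines. This is a hypothetical toy, not the ScottishTaxBenefitModel.jl API: a flat tax with a personal allowance is applied to weighted survey households, and the revenue change of a reform against a baseline is summed up.

```julia
# Hypothetical toy micro-simulation (invented names; not the model's real API).
struct Household
    income::Float64
    weight::Float64    # how many real households this survey record represents
end

# A flat income tax with a personal allowance.
tax(income; rate, allowance) = rate * max(income - allowance, 0.0)

# Net cost of a reform = weighted sum of revenue lost per household.
function net_cost(households, baseline, reform)
    sum(h.weight * (tax(h.income; baseline...) - tax(h.income; reform...))
        for h in households)
end

hhs = [Household(15_000, 2.0), Household(40_000, 1.5)]
baseline = (rate = 0.20, allowance = 12_500)
reform   = (rate = 0.19, allowance = 12_500)   # a 1p rate cut
```

With representative weights, the same loop also yields gainers/losers counts or poverty transitions by inspecting each household's before/after position instead of summing.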
I want to discuss a new tax-benefit model for Scotland written in Julia (https://github.com/grahamstark/ScottishTaxBenefitModel.jl).
There are currently three web interfaces you can play with:
* https://ubi.virtual-worlds.scot/ (models a Universal Basic Income)
* https://stb.virtual-worlds.scot/scotbud (constructs a national budget)
* https://stb.virtual-worlds.scot/bcd/ (explores the incentive effects of the fiscal system)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/KPRZAM/
Red
Graham Stark
PUBLISH
Z98GWK@@pretalx.com
-Z98GWK
Bayesian Estimation of Macroeconomic Models in Julia
en
en
20220727T151000
20220727T154000
0.03000
Bayesian Estimation of Macroeconomic Models in Julia
In this talk, I will discuss how the Federal Reserve Bank of New York (FRBNY) uses Julia for forecasting. I will first present the FRBNY model and the basics of our estimation methods, noting recent adjustments made necessary by the rapid changes in economic conditions over the last two years. During this discussion I will introduce our packages DSGE.jl, SMC.jl, and ModelConstructors.jl, which provide a user-friendly API for creating and estimating a variety of models, including our workhorse DSGE model.
I will then discuss how Julia allows us to prototype and test new estimation methods, providing examples through our research into adaptive Metropolis-Hastings and sequential Monte Carlo algorithms. Because DSGE models take significant time to estimate, being able to stay on the cutting edge of Bayesian estimation algorithms allows us to provide results efficiently. Metropolis-Hastings algorithms, a class of random-walk Markov Chain Monte Carlo estimators, use a fixed proposal distribution throughout the estimation process. Adaptive Metropolis-Hastings algorithms update the proposal distribution throughout the estimation process in an attempt to gain efficiency. SMC methods combine MH and importance sampling to create an easily parallelizable sampling algorithm. I will show how these two families of algorithms can speed up the estimation process while illustrating potential pitfalls.
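The fixed-proposal random-walk Metropolis-Hastings scheme described above can be sketched as a self-contained toy with a standard normal target (an illustration of the general algorithm, not the DSGE.jl/SMC.jl implementation):

```julia
# Toy random-walk Metropolis-Hastings (illustrative only).
# The proposal distribution is fixed throughout, which is exactly what
# adaptive variants improve upon.
using Random

function metropolis_hastings(logtarget, x0; steps = 10_000, scale = 1.0)
    x = x0
    draws = Vector{typeof(x0)}(undef, steps)
    for i in 1:steps
        cand = x + scale * randn()                     # fixed Gaussian proposal
        if log(rand()) < logtarget(cand) - logtarget(x)
            x = cand                                    # accept
        end
        draws[i] = x                                    # reject keeps current x
    end
    return draws
end

Random.seed!(1)
draws = metropolis_hastings(x -> -x^2 / 2, 0.0; steps = 50_000)  # N(0,1) target
```

An adaptive variant would re-estimate `scale` (or a full proposal covariance) from the chain's history during warm-up rather than keeping it fixed.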
This presentation will be useful to anyone who regularly conducts Bayesian estimation, especially in the context of time series and forecasting.
Disclaimer: This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the author and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the author.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/Z98GWK/
Red
Aidan Gleich
PUBLISH
QG8VUX@@pretalx.com
-QG8VUX
A Data Integration Framework for Microbiome Research
en
en
20220727T154000
20220727T155000
0.01000
A Data Integration Framework for Microbiome Research
Microorganisms shape every aspect of our life: from the soil of our farmland to the human gut, from the ocean to the municipal wastewater of our cities, microorganisms seem to inhabit and even dominate most ecosystems of this planet. As we expand our knowledge on the role that microbes play within and beyond our bodies, the need arises to store and analyze such information in a systematic and reproducible manner.
Standardized data objects can greatly support the collaborative development of new data science methods. In particular, commonly agreed data standards provide improved efficiency and reliability in complex data integration tasks. We implement this approach to microbiome research in a new Julia package, MicrobiomeAnalysis.jl (MIA), which introduces a new approach to microbiome data integration and analysis based on state-of-the-art data containers designed for robust data integration tasks: SummarizedExperiments.jl (SE) and MultiAssayExperiment.jl (MAE).
Our approach provides a general framework for studying complex microbiome profiling data sets. Not only do the data containers make it intuitive to work with abundance assays, they also integrate those assays with the corresponding metadata into a comprehensive data object. We demonstrate the approach on common analysis tasks in microbial ecology, including alpha and beta diversity analysis and visualization of microbial community dynamics. The proposed approach is inspired by closely related and active efforts in R/Bioconductor. Developing a similar framework in the Julia language is a promising endeavour that can provide drastic performance improvements in certain computational tasks, such as dimension reduction and time-series analysis, while taking advantage of a shared conceptual framework.
Overall, our environment offers a starting point for developing effective standardized methods for microbiome research. The methodology is general and can thus be easily applied to other multi-source study designs and data integration tasks.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/QG8VUX/
Red
Giulio Benedetti
PUBLISH
E99KP7@@pretalx.com
-E99KP7
Building workflows for materials modeling on HPC Systems
en
en
20220727T193000
20220727T200000
0.03000
Building workflows for materials modeling on HPC Systems
`Express.jl`, together with its "plugins" (such as `QuantumESPRESSOExpress.jl`), is shipped with well-tested workflow templates, including structure optimization, equation-of-state fitting, lattice dynamics calculations, and thermodynamic property calculations. It is designed to be highly modular, so that its components can be reused in various settings and customized workflows can be built on top of it. It helps users prepare inputs, execute simulations, and analyze data. Users can also track the status of workflows in real time and rerun failed jobs, thanks to the data lineage feature `Express.jl` provides.
To achieve the goals mentioned above, we built several independent packages during the development of `Express.jl` that address common problems in the physics, geoscience, and materials science communities, e.g., `EquationsOfStateOfSolids.jl`, `Geotherm.jl`, `Spglib.jl`, and `Crystallography.jl`. As a project aimed at automating the mundane operations of *ab initio* calculations, we also wrote a package (`SimpleWorkflows.jl`) to construct workflows from basic jobs and track their execution status. Because the most time-consuming part of the workflows is running external software (such as Quantum ESPRESSO), we also built packages to interact with it, e.g., `QuantumESPRESSO.jl` and `Pseudopotentials.jl`. In addition, we discovered many valuable Julia packages and integrated them into our code, such as `Configurations.jl`, `Comonicon.jl`, and `Setfield.jl`. In this talk, we explain how Julia made our complicated codebase possible and share some experience about when and how to utilize these wonderful projects.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/E99KP7/
Red
Qi (Ryan) Zhang
PUBLISH
DRLYT8@@pretalx.com
-DRLYT8
Modeling a Crash Simulation System with ModelingToolkit.jl
en
en
20220727T200000
20220727T203000
0.03000
Modeling a Crash Simulation System with ModelingToolkit.jl
Instron's "Catapult" Crash Simulation System releases 2.75 MN of energy with micron-level control over a fraction of a second to reproduce a recorded crash-force signal. This machine requires a model for many reasons: command-signal generation, operational prediction and optimization, and engineering research and development. The model should therefore be compilable for software but also flexible enough for engineering exploration (i.e., scriptable in REPL mode). Julia opens the door to making this more efficient through its solution to the two-language problem, while also providing a full-featured programming language and a modular package system with integrated unit testing that greatly help with model development. ModelingToolkit offers the tools needed to easily rewrite and move the model from traditional modeling frameworks, providing not just the benefit of modeling in Julia but a more flexible and open modeling tool.
To make the transition successful, a few missing features needed to be developed: (1) integrating Julia code into software, and (2) enhancing ModelingToolkit with parameter management, global parameters, algebraic ODE tearing, and static model code generation. We now have a faster model that is easier and better organized to develop, with a testing and benchmarking suite that lets us easily track and publish versioned changes. There is still work to do, but we now have what we need to develop our future products with Julia.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/DRLYT8/
Red
Bradley Carman
PUBLISH
LGWRV8@@pretalx.com
-LGWRV8
Automatic Differentiation for Quantum Electron Structure
en
en
20220727T123000
20220727T130000
0.03000
Automatic Differentiation for Quantum Electron Structure
The quantum-chemical simulation of electronic structure is an established approach in materials research. The desire to tackle ever bigger systems and more involved materials, however, keeps posing challenges with respect to the physical models, reliability, and performance of methods such as Density Functional Theory (DFT). For instance, many relevant physical properties of materials, such as interatomic forces, stresses, or polarizability, depend on the derivatives of quantities of interest with respect to some input data. To perform such computations efficiently, Automatic Differentiation has recently been implemented in DFTK (https://dftk.org), a Julia package for DFT which aims to be fast enough for practical calculations.
Automatic Differentiation (AD, also known as Algorithmic Differentiation) allows the efficient and accurate calculation of derivatives of first and higher order of mathematical expressions, implicitly defined by source code.
The two most common modes of AD are tangent (forward) and adjoint (reverse) mode.
Of special interest is the reverse mode, as it allows propagating derivative information from the outputs of a computation back to its inputs. This yields a computational complexity which scales with the number of outputs, as opposed to scaling with the number of inputs, as traditional finite differences or tangent-mode AD do.
In many applications in computational math, engineering, ML and finance the number of outputs is small (e.g. 1 for a least squares cost function), while the number of inputs is bigger by orders of magnitude.
Julia is based on the LLVM stack and allows inspection and modification of its own AST, as well as of other, already optimized code structures, at run time.
This promises to combine the strengths of operator-overloading-style AD tools (flexibility, never running out of sync with the primal, coverage of all language features) and source-code-transformation-style AD tools (less memory overhead, generated derivative code that can be optimized by the compiler).
This has spawned a variety of AD tools in the Julia ecosystem (see e.g. https://juliadiff.org for an enumeration), each with its own design goals but also limitations.
The need to make these tools work together under a common interface has been identified by the Julia community and led to the development of the ChainRules.jl package.
We use Zygote to generate adjoint code automatically wherever possible.
There are two major reasons Zygote might not be used:
- The code to be differentiated uses features not supported by Zygote (e.g. mutation) and cannot be sensibly refactored into a version conforming to Zygote's rules (e.g. due to performance requirements of the primal)
- Mathematical insight allows us to implement the adjoint pullback more efficiently by hand (e.g. terms with cancelling derivatives, symbolic differentiation of linear solvers, FFTs, etc.)
For both of these use cases we use the ChainRules interface to specify custom rrules.
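A minimal sketch of such a custom rule (the primal `mysolve` is a hypothetical stand-in, not DFTK code): for a linear solve, the pullback can be expressed with one extra adjoint solve instead of differentiating the solver's internals:

```julia
using ChainRulesCore
using LinearAlgebra

# Hypothetical primal (illustration only): solve the linear system A*y = b.
mysolve(A, b) = A \ b

function ChainRulesCore.rrule(::typeof(mysolve), A, b)
    y = mysolve(A, b)
    function mysolve_pullback(ybar)
        # One adjoint solve replaces differentiating the solver internals:
        bbar = A' \ unthunk(ybar)   # cotangent w.r.t. b
        Abar = -bbar * y'           # cotangent w.r.t. A
        return (NoTangent(), Abar, bbar)
    end
    return y, mysolve_pullback
end
```

Any ChainRules-aware AD tool (e.g. Zygote) then uses this rule transparently whenever `mysolve` appears in a differentiated program.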
For the performance critical parts of the primal we plan to investigate tools that support mutation (e.g. Enzyme), though we expect this to come with its own challenges.
Some of the custom rrules we implemented in ChainRules required mathematical investigation to achieve numerical stability of response properties. In particular, the variation of the ground state density with respect to a perturbative external potential solves a linear system which is ill-conditioned when working with metals. We propose a unified mathematical framework from the literature to enhance stability, via appropriate gauge choices and a Schur complement.
We will present our approach to introducing AD into an existing codebase, lessons learned, and which design patterns are suitable for both good performance and good compatibility with existing AD tools.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/LGWRV8/
Purple
Markus Towara
Niklas Schmitz
Gaspard Kemlin
PUBLISH
X3UUFD@@pretalx.com
-X3UUFD
Fast Forward and Reverse-Mode Differentiation via Enzyme.jl
en
en
20220727T130000
20220727T133000
0.03000
Fast Forward and Reverse-Mode Differentiation via Enzyme.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/X3UUFD/
Purple
William Moses
Ludger Paehler
Tim Gymnich
Valentin Churavy
PUBLISH
AGW8BR@@pretalx.com
-AGW8BR
JunctionTrees: Bayesian inference in discrete graphical models
en
en
20220727T143000
20220727T144000
0.01000
JunctionTrees: Bayesian inference in discrete graphical models
GitHub repo: https://github.com/mroavi/JunctionTrees.jl
Docs: https://mroavi.github.io/JunctionTrees.jl
JunctionTrees.jl encapsulates the result of the research we have been conducting in the context of improving the efficiency of Bayesian inference in probabilistic graphical models.
The junction tree algorithm is a core component of discrete inference in probabilistic graphical models. It lies at the heart of many courses that are taught at different universities around the world including MIT, Berkeley, and Stanford. Moreover, it serves as the backbone of successful commercial software, such as Hugin Expert, that aims to discover insight and provide predictive capabilities to effectively combat fraud and risk.
JunctionTrees.jl is mainly tailored towards students and researchers. This library offers a great starting point for understanding the implementation details of this algorithm thanks to the intrinsic readability of the Julia language and the thoroughly commented codebase. Moreover, this package constitutes an optimization framework that other researchers can make use of to experiment with different ideas to improve the performance of runtime Bayesian inference.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/AGW8BR/
Purple
Martin Roa-Villescas
PUBLISH
FPZVML@@pretalx.com
-FPZVML
Automated Finite Elements: a comparison between Julia and C++
en
en
20220727T144000
20220727T145000
0.01000
Automated Finite Elements: a comparison between Julia and C++
This talk will be about comparing the implementation of the stabilized Navier-Stokes equations for incompressible flow, both using the [Gridap](https://github.com/gridap/Gridap.jl) package in Julia and using a [Boost.Proto](https://www.boost.org/doc/libs/1_78_0/doc/html/proto.html) based C++ code. The concrete C++ implementation can be found [here](https://github.com/barche/coolfluid3/blob/688173daa1a7cf32929b43fc1a0d9c0655e20660/plugins/UFEM/src/UFEM/NavierStokesAssembly.hpp#L57-L65), while the equivalent Gridap code is [here](https://github.com/barche/Channel_flow/blob/94aeb2982e01b08ff41848091a3b5d0b7b2a3983/Channel_2d_3d.jl#L104-L110). Aside from the obvious differences due to the use of unicode and the fact that Gridap operates at a higher level of abstraction, there are also some striking similarities in both approaches. The main point is that both packages operate on expressions that are valid code in the programming language that is used (i.e. Julia for Gridap, C++ for [Coolfluid 3](https://github.com/barche/coolfluid3)). This is possible because both languages offer a lot of flexibility in terms of operator overloading and strong typing. In the case of C++, the Boost.Proto library helps with building a structured framework for the interpretation of the expressions, based on the idea of expression templates and thus avoiding runtime overhead of inheritance in C++. In Julia, this step is taken care of using the built-in type system and generated functions.
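For readers unfamiliar with Gridap's expression-level interface, a minimal sketch (the standard Poisson tutorial problem, not the Navier-Stokes code compared in the talk) shows how the weak form is written as ordinary Julia code:

```julia
using Gridap

# Discretize the unit square with an 8x8 Cartesian mesh.
model = CartesianDiscreteModel((0, 1, 0, 1), (8, 8))

# First-order Lagrangian FE spaces with homogeneous Dirichlet boundary.
reffe = ReferenceFE(lagrangian, Float64, 1)
V = TestFESpace(model, reffe; dirichlet_tags="boundary")
U = TrialFESpace(V, 0.0)

# Integration domain and measure (quadrature degree 2).
Ω  = Triangulation(model)
dΩ = Measure(Ω, 2)

# Weak form of -Δu = f: valid Julia expressions, interpreted by Gridap.
f = 1.0
a(u, v) = ∫( ∇(v) ⋅ ∇(u) )*dΩ
l(v)    = ∫( f*v )*dΩ

op = AffineFEOperator(a, l, U, V)
uh = solve(op)
```

The bilinear and linear forms `a` and `l` are the expressions both frameworks operate on; in C++ the analogous expressions are captured as Boost.Proto expression templates.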
The whole objective of this type of machinery is to offer a simple interface to the user, but end up with a finite element assembly loop that is as fast as possible. To this end, information such as the size of element matrices and vector dimensions must be known to the compiler. We will show that both systems indeed achieve this, and result in good performance for the assembly loop.
Due to the similarity in approach, the experience visible to the end user is also similar: both systems exhibit long compilation times and long error messages in case of user errors such as mixing up incompatible matrix dimensions. This will be illustrated using examples.
Finally, more advanced numerical techniques, such as the stabilized methods used in fluid simulations, require the user to be able to define custom functions that are used during assembly, e.g. to calculate the value of stabilization coefficients. This is where Julia really shines, as it is possible to simply define a normal function, while in C++ some extensive boilerplate code is required, as will be shown.
The conclusion is that Gridap has reached a level of maturity that makes it very attractive to use Julia for this kind of work. Even if some performance optimization may still be needed, development of and experimenting with new numerical methods is much easier than in a complicated C++ code.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/FPZVML/
Purple
Bart Janssens
PUBLISH
K7VNZJ@@pretalx.com
-K7VNZJ
Distributed AutoML Pipeline Search in PC/RasPi K8s Cluster
en
en
20220727T145000
20220727T150000
0.01000
Distributed AutoML Pipeline Search in PC/RasPi K8s Cluster
There is a growing need for low-power computing devices due to their minimal thermal and energy footprint to be used in many HPC applications such as weather forecasting, ocean engineering, smart-home computing, biocomputing, AI modeling, etc. ARM-based processors such as RasPis provide an attractive solution because they are cheap, versatile, and have great Linux hardware support as well as stable Julia releases. Due to their full Linux compatibility, making a K8s cluster from a bunch of RasPis becomes a trivial exercise, as does running Julia's cluster manager on top of K8s. This talk will provide an overview and an example walk-through of how to leverage the Julia+RasPi+K8s combination to solve certain ML pipeline optimization tasks.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/K7VNZJ/
Purple
Paulito Palmes
PUBLISH
VME3D8@@pretalx.com
-VME3D8
Comonicon, a full stack solution for building CLI applications
en
en
20220727T150000
20220727T151000
0.01000
Comonicon, a full stack solution for building CLI applications
[Comonicon](https://github.com/comonicon/Comonicon.jl) is a CLI generator that aims to provide a full solution for CLI applications. This includes:
### Clean and Julian syntax
The interface has only `@main` and `@cast`, which collect all the information from the docstring and function signature to create the CLI. The usage is extremely simple: just put `@main` or `@cast` in front of the functions or modules you would like to convert to a CLI node. It has proven to offer a very intuitive and user-friendly experience over the past two years.
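A minimal sketch of this interface (the `greet` command is hypothetical; the docstring sections follow Comonicon's documented conventions):

```julia
using Comonicon

"""
Greet someone from the command line.

# Args

- `name`: the person to greet.

# Options

- `--times <int>`: how many times to repeat the greeting.
"""
@main function greet(name; times::Int = 1)
    for _ in 1:times
        println("Hello, ", name, "!")
    end
end
```

Running the script with `julia greet.jl --help` would then print a help message generated from the docstring, and `julia greet.jl world --times=2` would invoke the function with the parsed arguments.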
### Powerful and extensible code generators
Comonicon is built around an intermediate representation (IR) for CLIs. This means the Comonicon frontend is decoupled from its backends; as a more advanced feature, one can also construct the IR directly to generate a CLI. Different backends then generate the corresponding code, like a standard compiler's codegen. This currently includes:
- a zero dependency command line arguments parsing function in Julia
- shell autocompletion (only ZSH is supported at the moment)
and this can easily be extended to generate code for other interfaces; one very experimental project is [generating GUI directly from the Comonicon IR](https://github.com/comonicon/ComoniconGUI.jl).
### Mitigating startup latencies
Most Julia CLI generators suffer from startup latencies because of JIT compilation; we have put a relatively large effort into mitigating the latency caused by the CLI generator. Because Comonicon is able to generate a zero-dependency function `command_main` that parses the CLI arguments, in extreme cases one can completely get rid of `Comonicon` and use the generated code directly to reach the lowest latency achievable in current Julia.
### Build System
Comonicon provides a full build system for shipping your CLIs to other people. This means Comonicon can handle the installation of a Julia CLI application and guarantee its reproducibility by handling the corresponding project environment correctly. It can also build the CLI application into a binary via PackageCompiler and package the application as a tarball. A glance at its build CLI:
```
Comonicon - Builder CLI.
Builder CLI for Comonicon Applications. If not specified, run the command install by default.
USAGE
julia --project deps/build.jl [command]
COMMAND
install install the CLI locally.
app [tarball] build the application, optionally make a tarball.
sysimg [tarball] build the system image, optionally make a tarball.
tarball build application and system image then make tarballs
for them.
EXAMPLE
julia --project deps/build.jl install
install the CLI to ~/.julia/bin.
julia --project deps/build.jl sysimg
build the system image in the path defined by Comonicon.toml or in deps by default.
julia --project deps/build.jl sysimg tarball
build the system image then make a tarball on this system image.
julia --project deps/build.jl app tarball
build the application based on Comonicon.toml and make a tarball from it.
```
### Configurable
The generated CLI application and the build options are all configurable via a `Comonicon.toml` file; one can easily change various default options directly from the configuration file to create one's favorite CLI:
- enable/disable colorful help message
- set Julia compile options
- bundle custom assets
- installation options
- ...
### Summary
Comonicon is currently the only CLI generator designed for Julia that handles the entire workflow of creating a serious CLI application and shipping it to users. It still has a few directions in which to improve; with the progress of static compilation in future Julia versions, we hope to eventually be able to build small binaries and ship them to all platforms in a simple workflow via Comonicon, so that one day Julia can also do what Go/Rust/C++/... can do in CLI application development.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VME3D8/
Purple
Xiu-zhe (Roger) Luo
PUBLISH
ZSARJD@@pretalx.com
-ZSARJD
Cycles and Julia Sets: Novel algorithms for Numerical Analysis
en
en
20220727T151000
20220727T152000
0.01000
Cycles and Julia Sets: Novel algorithms for Numerical Analysis
In this talk we will present a new collection of algorithms dedicated to computing the basins of attraction of any complex rational map, and to studying the dynamical behaviour of its fixed points and attracting n-cycles. By doing this, one can visualize and study amazingly beautiful and complex fractal objects like Julia Sets. The study of the basins of attraction of a dynamical system is a very relevant matter not only in Numerical Analysis and Holomorphic Dynamics, but also in other fields like Physics or Mechanical Engineering.
We are going to describe the methods implemented in the Lyapunov Cycle Detector module, available in the following GitHub repository: https://github.com/valvarezapa/LCD. There you will find the code itself, explanations of how the methods work and what they do, a lot of practical examples and even a User Guide.
Our work is based on very relevant theorems of Complex Dynamics, like the Ergodic Theorem, and is motivated by Sullivan's work on Dynamical Systems, recently awarded the Abel Prize. However, no previous knowledge of any of these topics is required, since we will focus on the algorithms and the implementation of the code. The graphics we will be able to generate are both rich and beautiful, and the concepts behind them are easy to grasp. Everyone interested in the mathematical framework or in any specific technicalities behind these algorithms can consult our paper "Algorithms for computing attraction basins of a self-map of the Hopf fibration based on Lyapunov functions" (currently in preprint).
From a scientific computing point of view, this new collection of algorithms solves most of the computational problems that often arise in Numerical Analysis, like overflows or mathematical indeterminations. We achieve this by considering the Hopf endomorphism induced by the given rational map, and iterating it over the complex projective line (P^1(C)). This approach also allows us to work easily with the point at infinity. Since this kind of calculation often has a high computational cost, we benefit from Julia's efficiency and some built-in multi-threading macros in order to be able to visualize the results in a reasonable amount of time.
From a mathematical perspective, we will be considering a discrete-time dynamical system given by a complex rational map. The techniques we use to compute the basins of attraction are based on Lyapunov functions and Lyapunov coefficients (which are closely related to Lyapunov exponents, a very powerful and commonly used concept in dynamical systems). The Lyapunov function we define is constant in each basin of attraction, and depends on the notion of the spherical derivative of the given rational map. This way, we can divide the Riemann Sphere (the plane of complex numbers together with the point at infinity) into the different basins of attraction (each one with an associated constant) and the Julia set. The most famous Julia sets can be computed and visualized this way. Also, our algorithms provide more information about the dynamics of the system than most traditional algorithms on this topic generally do. Our methods are focused on detecting the attracting n-cycles of the given rational map and their basins. Note that fixed points are just the particular case of 1-cycles. In addition, by using Lyapunov coefficients we are able to measure how attracting each n-cycle is.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZSARJD/
Purple
Víctor Álvarez Aparicio
PUBLISH
AKVUKM@@pretalx.com
-AKVUKM
High-performance xPU Stencil Computations in Julia
en
en
20220727T152000
20220727T153000
0.01000
High-performance xPU Stencil Computations in Julia
Our approach for expressing architecture-agnostic high-performance stencil computations relies on Julia's powerful metaprogramming capabilities, zero-cost high-level abstractions and multiple dispatch. We have instantiated the approach in the Julia package `ParallelStencil.jl`. Using `ParallelStencil`, a simple call to the macro `@parallel` is enough to parallelize and launch a kernel that contains stencil computations, which can be expressed explicitly or with math-close notation. The package used underneath for parallelization is defined in an initialization call beforehand. Currently supported are `CUDA.jl` for running on GPU and `Base.Threads` for CPU. Leveraging metaprogramming, `ParallelStencil` automatically generates high-performance code suitable for the target hardware, and automatically derives kernel launch parameters from the kernel arguments by analyzing the extents of the contained arrays. A set of architecture-agnostic low-level kernel language constructs allows for explicit low-level kernel programming when useful, e.g., for the explicit control of shared memory on the GPU (these low-level constructs are GPU-computing-biased).
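As a hedged illustration of this workflow (adapted in spirit from the package's documented 2-D heat-diffusion example; macro names as documented by `ParallelStencil`):

```julia
using ParallelStencil
using ParallelStencil.FiniteDifferences2D

# Select the parallelization backend once; swap Threads for CUDA to target GPUs.
@init_parallel_stencil(Threads, Float64, 2)

# Math-close stencil kernel: explicit 2-D diffusion step on inner points.
@parallel function diffusion_step!(T2, T, Ci, lam, dt, dx, dy)
    @inn(T2) = @inn(T) + dt*lam*@inn(Ci)*(@d2_xi(T)/dx^2 + @d2_yi(T)/dy^2)
    return
end

nx, ny = 128, 128
T  = @rand(nx, ny)   # allocated on the device chosen at initialization
T2 = copy(T)
Ci = @ones(nx, ny)

# Launch: ranges and (on GPU) block/grid sizes are derived automatically.
@parallel diffusion_step!(T2, T, Ci, 1.0, 1e-4, 0.01, 0.01)
```

The same kernel source runs unchanged on CPU threads or GPUs; only the argument of `@init_parallel_stencil` changes.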
Arrays are automatically allocated on the hardware chosen for the computations (GPU or CPU) when using the allocation macros provided by `ParallelStencil`, avoiding any need of code duplication. Moreover, the allocation macros are fully declarative in order to let `ParallelStencil` choose the best data layout in memory. Notably, logical arrays of structs (or of small arrays) can be either laid out in memory as arrays of structs or as structs of arrays accounting for the fact that each of these allocation approaches has its use cases where it performs best.
`ParallelStencil` is seamlessly interoperable with packages for distributed parallelization, such as `ImplicitGlobalGrid.jl` or `MPI.jl`, in order to enable high-performance stencil computations on GPU or CPU supercomputers. Communication can be hidden behind computation with a simple macro call. The usage of this feature solely requires that communication can be triggered explicitly, as is possible with, e.g., `ImplicitGlobalGrid` and `MPI.jl`.
We demonstrate the wide applicability of our approach by reporting on several multi-GPU solvers for geosciences, e.g., 3-D solvers for poro-visco-elastic two-phase flow and for reactive porosity waves. As reference, the latter solvers were ported from MPI+CUDA C to Julia using `ParallelStencil` and `ImplicitGlobalGrid` and achieve 90% and 98% of the performance of the original solvers, respectively, and a nearly ideal parallel efficiency on thousands of NVIDIA Tesla P100 GPUs at the Swiss National Supercomputing Centre. Moreover, we have shown in recent contributions that the approach is in no way limited to geosciences: we have showcased a computational cognitive neuroscience application modelling visual target selection using `ParallelStencil` and `MPI.jl`, and a quantum fluid dynamics solver using the nonlinear Gross-Pitaevskii equation implemented with `ParallelStencil` and `ImplicitGlobalGrid`.
Co-authors: Ludovic Räss¹ ²
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/AKVUKM/
Purple
Samuel Omlin
Ludovic Räss
PUBLISH
RJYBLA@@pretalx.com
-RJYBLA
Distributed Parallelization of xPU Stencil Computations in Julia
en
en
20220727T153000
20220727T154000
0.01000
Distributed Parallelization of xPU Stencil Computations in Julia
The approach presented here renders the distributed parallelization of stencil-based GPU and CPU applications on a regular staggered grid almost trivial. We have instantiated the approach in the Julia package `ImplicitGlobalGrid.jl`. A highlight in the design of `ImplicitGlobalGrid` is the automatic implicit creation of the global computational grid based on the number of processes the application is run with (and based on the process topology, which can be explicitly chosen by the user or automatically defined). As a consequence, the user only needs to write code to solve their problem on one GPU/CPU (local grid); then, as little as three functions can be enough to transform a single-GPU/CPU application into a massively scaling multi-GPU/CPU application: a first function creates the implicit global staggered grid, a second function performs a halo update on it, and a third function finalizes the global grid.
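The three functions mentioned above can be sketched as follows (illustrative only; the program is assumed to be launched with an MPI launcher, one process per GPU/CPU):

```julia
using ImplicitGlobalGrid

nx, ny, nz = 64, 64, 64          # local (per-process) grid size
init_global_grid(nx, ny, nz)     # 1) implicitly defines the global grid
                                 #    from the number of processes and topology
A = zeros(nx, ny, nz)
# ... update A on each process's local grid ...
update_halo!(A)                  # 2) exchange boundary (halo) values with neighbors
finalize_global_grid()           # 3) tear down the global grid
```

The computational code between initialization and halo update is exactly the single-device code; the global grid exists only implicitly through the process topology.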
`ImplicitGlobalGrid` relies on `MPI.jl` to perform halo updates close to hardware limits. For GPU applications, `ImplicitGlobalGrid` leverages remote direct memory access when CUDA- or ROCm-aware MPI is available, and uses highly optimized asynchronous data transfer routines to move the data through the hosts when CUDA- or ROCm-aware MPI is not present. In addition, pipelining is applied on all stages of the data transfers, improving the effective throughput between GPUs. Low-level management of memory, CUDA streams and ROCm queues permits efficient reuse of send and receive buffers and streams throughout an application without putting the burden of their management on the user. Moreover, all data transfers are performed on non-blocking high-priority streams, allowing the communication to overlap optimally with computation. `ParallelStencil.jl`, e.g., can do so with a simple macro call.
`ImplicitGlobalGrid` is fully interoperable with `MPI.jl`. By default, it creates a Cartesian MPI communicator, which can be easily retrieved together with other MPI variables. Alternatively, an MPI communicator can be passed to `ImplicitGlobalGrid` for usage. As a result, `ImplicitGlobalGrid`'s functionality can be seamlessly extended using `MPI.jl`.
The modular design of `ImplicitGlobalGrid`, which heavily relies on multiple dispatch, enables adding support for other hardware with little development effort. Support for AMD GPUs using the recently matured `AMDGPU.jl` package has already been implemented as a result. `ImplicitGlobalGrid` supports at present distributed parallelization for CUDA- and ROCm-capable GPUs as well as for CPUs.
We show that our approach is broadly applicable by reporting scaling results of a 3-D multi-GPU solver for poro-visco-elastic two-phase flow and of various mini-apps which represent common building blocks of geoscience applications. For all these applications, nearly ideal parallel efficiency on thousands of NVIDIA Tesla P100 GPUs at the Swiss National Supercomputing Centre is demonstrated. Moreover, we have shown in a recent contribution that the approach is in no way limited to geosciences: we have showcased a quantum fluid dynamics solver using the nonlinear Gross-Pitaevskii equation implemented with `ParallelStencil` and `ImplicitGlobalGrid`.
Co-authors: Ludovic Räss¹ ², Ivan Utkin¹
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RJYBLA/
Purple
Samuel Omlin
Ludovic Räss
PUBLISH
CJZ3MV@@pretalx.com
-CJZ3MV
Building Julia proxy mini apps for HPC system evaluation
en
en
20220727T154000
20220727T155000
0.01000
Building Julia proxy mini apps for HPC system evaluation
We are developing Julia proxy applications, also known as mini apps, to understand the effects of parallel computation, memory, network and input/output (I/O) on the latest U.S. Department of Energy (DOE) extremely heterogeneous high-performance computing (HPC) systems. Our initial targets are the systems hosted at the Oak Ridge Leadership Computing Facility (OLCF): the Summit supercomputer, powered by IBM CPUs and NVIDIA GPUs; and the upcoming Frontier exascale system, powered by AMD CPUs and GPUs. Proxy applications, or mini apps, are simple yet powerful programs that isolate the important computational aspects that drive fully featured science applications. In this lightning talk, we present our efforts in developing two open-source proxy applications: i) XSBench.jl, a port of the original C-based XSBench proxy app used to simulate on-node scalability of the OpenMC Monte Carlo computational kernel on CPUs and on AMD and NVIDIA GPUs, and ii) RIOPA.jl, a Julia proxy application designed to mimic parallel I/O application characteristics and payloads. In particular, we are interested in the feasibility of using Julia as an HPC language, similar to Fortran, C and C++, by evaluating its current state and integration with HPC heterogeneous programming models and backends: MPI.jl and Julia's Base.Threads; GPU programming with CUDA.jl, AMDGPU.jl and KernelAbstractions.jl; parallel I/O with HDF5.jl and ADIOS2.jl; and the portability of the resulting Julia proxy applications across heterogeneous systems. We will share with the Julia community the current challenges and gaps, and highlight potential opportunities to balance the trade-offs between programmer productivity and performance in an HPC environment as we prepare for the exascale era in supercomputing. Our goal is to showcase the value added by the Julia language in our early work constructing proxy apps for rapid prototyping as part of our efforts in the U.S. DOE Exascale Computing Project (ECP).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/CJZ3MV/
Purple
William F Godoy
Jeffrey Vetter
Philip Fackler
PUBLISH
FEWD7V@@pretalx.com
-FEWD7V
ASML Sponsored Talk
en
en
20220727T155000
20220727T155500
0.00500
ASML Sponsored Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/FEWD7V/
Purple
PUBLISH
VCSHJ3@@pretalx.com
-VCSHJ3
MetaLenz Sponsored Talk
en
en
20220727T155500
20220727T160000
0.00500
MetaLenz Sponsored Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/VCSHJ3/
Purple
PUBLISH
CSARPH@@pretalx.com
-CSARPH
Quantum computing with ITensor and PastaQ
en
en
20220727T190000
20220727T193000
0.03000
Quantum computing with ITensor and PastaQ
Quantum computers provide a new computational paradigm with far-reaching implications for a variety of scientific disciplines. Small quantum computers exist in today’s laboratories, but due to imperfections and noise, these machines can only handle problems of limited complexity. The successful development of larger quantum devices requires improved qubit manufacturing and control, active error correction, as well as theoretical advances.
In practice, when building a quantum computer, tasks throughout the quantum computing stack rely on efficient classical algorithms running on conventional computers. These tasks include simulations for designing quantum gates and circuits, qubit calibration, and device characterization/benchmarking. Tensor networks are a powerful framework for describing and simulating quantum systems. They play an important role in simulating the quantum dynamics underlying the experimental hardware, reconstructing quantum processes from measurements, and correcting errors in quantum devices.
PastaQ.jl is a new Julia package for quantum computing built on top of ITensors.jl. ITensors.jl is an established tensor network software library with a unique memory-independent array/tensor interface. ITensor provides a high-level and flexible interface for easily calling state-of-the-art tensor network algorithms and for developing new ones. Recently, in ITensor, we have been adding support for automatic differentiation by adding differentiation rules with ChainRules.jl. These range from basic rules for tensor contraction to higher-level rules for differentiating quantum state evolution. PastaQ builds on top of this new differentiation support in ITensor to enable a variety of quantum computing applications, like the design/optimization of quantum gates via optimal control theory, simulation of quantum circuits, classical optimization of variational circuits, quantum tomography, etc. In this talk, we will discuss some basics of tensor network differentiation in ITensor, and discuss how these new differentiation tools are leveraged in PastaQ for advanced applications in designing and analyzing quantum computers.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/CSARPH/
Purple
Giacomo Torlai
Matthew Fishman
PUBLISH
KJTGC3@@pretalx.com
-KJTGC3
QuantumCircuitOpt for Provably Optimal Quantum Circuit Design
en
en
20220727T193000
20220727T200000
0.03000
QuantumCircuitOpt for Provably Optimal Quantum Circuit Design
In recent years, the quantum computing community has seen an explosion of novel methods to implement non-trivial quantum computations on near-term intermediate-scale quantum (NISQ) hardware. An important direction of research has been to decompose an arbitrary entangled state, represented as a unitary, into a quantum circuit, that is, a sequence of gates supported by a quantum processor. It is well known that circuits with longer decompositions and more entangling multi-qubit gates are error-prone on current noisy, intermediate-scale quantum devices. To this end, we present the "QuantumCircuitOpt" package, which is aimed at provably optimal quantum circuit design.
"QuantumCircuitOpt.jl" (QCOpt for short) is an open-source, Julia-based package for provably optimal quantum circuit design. QCOpt implements mathematical optimization formulations and algorithms for decomposing arbitrary unitary gates into a sequence of hardware-native gates, with global optimality guarantees on the quality of the designed circuit. To this end, QCOpt takes the following inputs: the total number of qubits, the set of hardware-native elementary gates, the target gate to be decomposed, and the maximum allowable size (depth) of the circuit. Given these inputs, QCOpt invokes appropriate gates from a menagerie of gates implemented within the package, reformulates the optimization problem into a mixed-integer program (MIP), applies feasibility-based bound propagation, derives various hardware-relevant valid constraints to reduce the search space, and finally provides an optimal circuit with error guarantees. On a variety of benchmark quantum gates, we show that QCOpt can find up to a 57% reduction in the number of necessary gates on circuits with up to four qubits, with run times of less than a few minutes on commodity computing hardware. We also validate the efficacy of QCOpt as a tool for quantum circuit design in comparison with a naive brute-force enumeration algorithm. Finally, we show how the QCOpt package can be adapted to various built-in types of native gate sets, based on different hardware platforms like those produced by IBM, Rigetti and Google.
Package link: https://github.com/harshangrjn/QuantumCircuitOpt.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/KJTGC3/
Purple
Harsha Nagarajan
PUBLISH
XNRBWC@@pretalx.com
-XNRBWC
Simulating and Visualizing Quantum Annealing in Julia
en
en
20220727T200000
20220727T203000
0.03000
Simulating and Visualizing Quantum Annealing in Julia
The field of Quantum Computation has been rapidly growing in recent years. One driving factor behind this growth is the computational intractability of simulating quantum systems. The classical overhead for simulating quantum systems grows exponentially as the system size increases, making quantum computers an appealing option for performing these simulations. Algorithms have also been developed to perform tasks such as search and optimization on quantum computers. Quantum Annealing is an optimization method which makes use of an adiabatic quantum computer to try to find a global minimum. It relies on the Adiabatic Theorem, which states that if a quantum system is prepared in its ground state and evolves slowly enough, it will stay in its ground state. A few companies have created quantum annealing hardware, most notably D-Wave Systems, so it is useful to be able to simulate small anneals to look for signatures that imply that the quantum annealing hardware is behaving as expected. To accomplish this, we can solve the Schrödinger equation with the time-varying Hamiltonian of the quantum annealer we wish to simulate.
That is where this package comes into play. QuantumAnnealing.jl allows for the simulation of a quantum annealer with an arbitrary annealing schedule (two functions which dictate how the system evolves from the initial "easy" state to the final "problem" state) and an arbitrary target Hamiltonian (the encoding of the problem which is supposed to be solved by the quantum annealer). This package also provides functionality for implementing controls on the annealing schedules, such as holding the schedules constant for a set amount of time (often called a pause) or increasing the speed of the anneal (often called a quench), as well as the ability to directly process D-Wave hardware schedules from CSV files into the annealing schedule functions used by the simulator. The simulation can be performed either by using a wrapper around DifferentialEquations.jl, or by using a specialized solver we have written to quickly and accurately simulate the closed-system evolution of the quantum annealing Hamiltonian. This solver makes use of the Magnus expansion and includes hard-coded implementations up to the fourth order, as well as a general implementation if a higher-order solver is needed. The hard-coded solver has empirically produced a 20-30x speed improvement over the DifferentialEquations.jl wrapper.
Alongside QuantumAnnealing.jl, we have released a plotting package, QuantumAnnealingAnalytics.jl, which covers common use cases of QuantumAnnealing.jl. It includes functions to plot the instantaneous ground state of the Hamiltonian (useful for determining how quickly the system can be expected to evolve without leaving the ground state), to plot the probabilities of the various energy levels of the final system (useful for comparison against output statistics from hardware), and to plot output statistics from data files in the bqpjson format. This makes it much easier to understand the system under study and can be used to quickly reproduce figures found in seminal works in the field of Quantum Annealing.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/XNRBWC/
Purple
Zachary Morrell
PUBLISH
QMZUZH@@pretalx.com
-QMZUZH
Julia to the NEC SX-Aurora Tsubasa Vector Engine
en
en
20220727T123000
20220727T130000
0.03000
Julia to the NEC SX-Aurora Tsubasa Vector Engine
The talk introduces the VectorEngine.jl [1] package, the first port of the Julia programming language to the NEC SX-Aurora Tsubasa Vector Engine (VE) [2]. It describes the design choices made for enabling the Julia programming language, architecture-specific details, similarities and differences between VEs and GPUs, and the currently supported features.
The current instances of the VE are equipped with 6 HBM2 modules that deliver 1.55 TB/s memory bandwidth to 8 or 10 cores. Each core consists of a full-fledged scalar processing unit (SPU) and a vector processing unit (VPU) running with very long vector lengths of up to 256 x 64-bit or 512 x 32-bit words. With C, C++ and Fortran the VE can run programs natively, completely on the VE, parallelized with OpenMP and MPI, with Linux system calls being processed on the host machine. Native VE programs can offload function calls to the host CPU (reverse offloading). Alternatively, the VE can be used as an accelerator, with the main program running on the host CPU and performance-critical kernels being offloaded to the VE with the help of libraries like AVEO or VEDA [3]. Prominent users of the SX-Aurora Vector Engines are in weather and climate research (e.g. Deutscher Wetterdienst), earth sciences research (JAMSTEC, Earth Simulator), and fusion research (National Institute for Fusion Science, Japan).
For enabling the VE for Julia use we chose the normal offloading programming paradigm that treats the VE as an accelerator running particular code kernels. The GPUCompiler.jl module was slightly expanded and used in VectorEngine.jl to support VEDA on the VE, similar to the GPU-specific implementations CUDA.jl, AMDGPU.jl and oneAPI.jl. Although VEs are very different from GPUs, choosing a usage pattern similar to GPUs is the most promising approach for reducing porting effort and making multi-accelerator Julia code maintainable. With VectorEngine.jl we can declare device-side arrays and structures, copy data between host and device, declare kernel functions, create cross-compiled objects that can be executed on the accelerator, or use a simple macro like `@veda` to run a function on the device side, hiding steps like compilation and argument transfer from the user.
For cross-compiling VE device code we use the LLVM-VE compiler. It is a slightly extended version of the upstream LLVM compiler that supports VE as an official architecture since late 2021. For vectorization inside the Julia device code we use the Region Vectorizer [4], an advanced outer loop vectorizer capable of handling divergent control flow. The Region Vectorizer does not do data-dependency analysis, therefore loops that need to be vectorized must be annotated by the programmer.
At the time of submission of this talk proposal, device-side Julia on the VE supports a rather limited runtime, quite similar to that of the GPUs. It includes device arrays, transfer of structures, vectorization using the Region Vectorizer, and device-side ccalls to libc functions as well as other VE libraries. We discuss the goal of implementing most of the Julia runtime on the device side, a step that would enable a much wider range of code on the accelerator.
[1] VectorEngine.jl github repository: https://github.com/sx-aurora-dev/VectorEngine.jl
[2] K. Komatsu et al, Performance evaluation of a vector supercomputer sx-aurora TSUBASA, https://dl.acm.org/doi/10.5555/3291656.3291728
[3] VEDA github repository: https://github.com/sx-aurora/veda
[4] Simon Moll, Vectorization system for unstructured codes with a Data-parallel Compiler IR, 2021, dissertation https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32453
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/QMZUZH/
Blue
Erich Focht
Valentin Churavy
PUBLISH
YPGNCS@@pretalx.com
-YPGNCS
Teaching GPU computing, experiences from our Master-level course
en
en
20220727T130000
20220727T133000
0.03000
Teaching GPU computing, experiences from our Master-level course
In the Fall Semester 2021 at ETH Zurich, we designed and taught a new Master-level course: [**Solving PDEs in parallel on GPUs with Julia**](https://eth-vaw-glaciology.github.io/course-101-0250-00/).
While we had prior experience teaching workshops and individual lectures based on Julia, this was our first end-to-end Julia-based lecture course. It filled a niche at ETH Zurich, Switzerland: numerical GPU computing for domain scientists.
Whilst we had great prior experience with the GPU tech-stack used (we're developing part of it), we had much to learn about the presentation tech-stack needed to create a website, slides and assignments. The presentation will focus on both the GPU stack (`CUDA.jl`, `ParallelStencil.jl` and `ImplicitGlobalGrid.jl`) and the presentation stack (`Literate.jl`, `Franklin.jl`, `IJulia.jl`/Jupyter).
Co-authors: Mauro Werder¹ ² , Samuel Omlin³
¹ Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) | ² ETH Zurich | ³ Swiss National Supercomputing Centre (CSCS)
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/YPGNCS/
Blue
Ludovic Räss
Mauro Werder
Samuel Omlin
PUBLISH
7FVVF3@@pretalx.com
-7FVVF3
GPU4GEO - Frontier GPU multi-physics solvers in Julia
en
en
20220727T134000
20220727T135000
0.01000
GPU4GEO - Frontier GPU multi-physics solvers in Julia
Computational Earth sciences leverage numerical modelling to understand and predict the evolution of complex multi-physical systems. Ice sheet dynamics and solid Earth geodynamics are, despite their apparent differences, two domains that build upon analogous physical descriptions and share similar computational challenges. Resolving the interactions among various physical processes in three dimensions at high spatio-temporal resolution is crucial to capture rapid changes in the system leading to the formation of, e.g., ice streams or mountain ranges.
Within the [**GPU4GEO**](https://ptsolvers.github.io/GPU4GEO/) project, we propose software tools which provide a way forward in ice dynamics, geodynamics and computational Earth sciences by exploiting two powerful emerging paradigms in HPC: supercomputing with Julia on graphical processing units (GPUs) and massively parallel iterative solvers. We use Julia as the main language because it features high-level and high-performance capabilities and performance portability amongst multiple backends (e.g., multi-core CPUs, and AMD and NVIDIA GPUs).
We will discuss our experience using `ParallelStencil.jl` and `ImplicitGlobalGrid.jl` as software building blocks in combination with `CUDA.jl`, `AMDGPU.jl` and `MPI.jl` for designing massively parallel and scalable solvers based on the pseudo-transient relaxation method, namely `FastIce.jl` and `JustRelax.jl`. Our work shows great promise for solving a wide range of mechanical multi-physics problems in geoscience, at scale and on GPU-accelerated supercomputers.
Co-authors: Ivan Utkin¹ ², Albert De Montserrat¹, Boris Kaus³, Samuel Omlin⁴
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) | ³ Johannes Gutenberg University Mainz | ⁴ Swiss National Supercomputing Centre (CSCS)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/7FVVF3/
Blue
Ludovic Räss
Albert de Montserrat
Boris Kaus
Samuel Omlin
PUBLISH
PRYQ8N@@pretalx.com
-PRYQ8N
Using Hawkes Processes in Julia: Finance and More!
en
en
20220727T143000
20220727T144000
0.01000
Using Hawkes Processes in Julia: Finance and More!
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/PRYQ8N/
Blue
Dean Markwick
PUBLISH
FXMQPQ@@pretalx.com
-FXMQPQ
Dithering in Julia with DitherPunk.jl
en
en
20220727T144000
20220727T145000
0.01000
Dithering in Julia with DitherPunk.jl
In this talk I will present [DitherPunk.jl](https://github.com/JuliaImages/DitherPunk.jl), a Julia package implementing over 30 dithering algorithms: from ordered dithering with Bayer matrices to digital halftoning and error diffusion methods such as Floyd-Steinberg.
DitherPunk.jl can be used for binary dithering, channel-wise dithering and for dithering with custom color palettes.
Typically, color dithering algorithms are implemented using Euclidean distances in RGB color space. By building on top of packages from the JuliaImages ecosystem such as Colors.jl and ColorVectorSpace.jl, algorithms can be applied in any color space using any color distance metric, allowing for a lot of creative experimentation.
Due to its modular design, DitherPunk.jl is highly extensible. This will be demonstrated by creating an ordered dithering algorithm from a signed distance function.
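The core of an error-diffusion method such as Floyd-Steinberg is compact enough to sketch from scratch. The following is a plain-array illustration of the principle, not DitherPunk.jl's implementation (which operates on JuliaImages color types):

```julia
# Minimal Floyd-Steinberg error diffusion on a grayscale image in [0, 1].
# Each pixel is quantized to black or white, and the quantization error is
# pushed onto not-yet-visited neighbors with the classic 7/16, 3/16, 5/16,
# 1/16 weights.
function floyd_steinberg(img::Matrix{Float64})
    out = copy(img)
    h, w = size(out)
    for y in 1:h, x in 1:w          # row-major scan, so neighbors below
        old = out[y, x]             # and to the right are still unvisited
        new = old < 0.5 ? 0.0 : 1.0
        out[y, x] = new
        err = old - new
        x < w          && (out[y, x+1]   += err * 7/16)
        y < h && x > 1 && (out[y+1, x-1] += err * 3/16)
        y < h          && (out[y+1, x]   += err * 5/16)
        y < h && x < w && (out[y+1, x+1] += err * 1/16)
    end
    return out
end

# a flat 50% gray patch dithers into a roughly half-black, half-white pattern
dithered = floyd_steinberg(fill(0.5, 8, 8))
```

Swapping the Euclidean quantization step for an arbitrary color distance metric is precisely the generalization that building on Colors.jl and ColorVectorSpace.jl enables.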
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/FXMQPQ/
Blue
Adrian Hill
PUBLISH
SV9TS9@@pretalx.com
-SV9TS9
FdeSolver.jl: Solving fractional differential equations
en
en
20220727T145000
20220727T150000
0.01000
FdeSolver.jl: Solving fractional differential equations
Differential equations with fractional operators describe many real-world phenomena more accurately than integer-order calculus. Fractional calculus has been recognized as a powerful method for capturing the memory and nonlocal correlations of dynamic processes, phenomena, or structures. However, the Julia programming language has lacked a package for solving differential equations of fractional order in an accurate, reliable, and efficient way. Hence, we developed the FdeSolver Julia package specifically for solving two general classes of fractional-order problems: fractional differential equations (FDEs) and multi-order systems of FDEs. We implement explicit and implicit predictor-corrector algorithms with sufficient convergence and accuracy for nonlinear systems, including a fast Fourier transform technique that provides high computation speed and efficient treatment of the persistent memory term. The following document provides an overview of the package: https://juliaturkudatascience.github.io/FdeSolver.jl/stable/readme
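The persistent memory term that makes FDEs expensive shows up already in the simplest explicit product-integration scheme. The sketch below is a from-scratch rectangular rule for a Caputo problem, not FdeSolver's predictor-corrector implementation; Γ(3/2) is hard-coded to avoid a special-function dependency:

```julia
# Explicit product-integration ("fractional Euler") scheme for the Caputo
# problem  D^α y(t) = f(t, y),  y(0) = y0.  Unlike an ODE step, y[k] needs
# the WHOLE history F[1:k-1] weighted by a power-law kernel: that
# convolution is the persistent memory term.
function fde_euler(f, y0, α, Γα1, tspan, h)
    t = collect(tspan[1]:h:tspan[2])
    n = length(t)
    y = zeros(n); y[1] = y0
    F = zeros(n); F[1] = f(t[1], y[1])
    c = h^α / Γα1
    for k in 2:n
        acc = 0.0
        for j in 1:k-1
            acc += ((k - j)^α - (k - j - 1)^α) * F[j]   # memory kernel
        end
        y[k] = y0 + c * acc
        F[k] = f(t[k], y[k])
    end
    return t, y
end

# Fractional relaxation D^{1/2} y = -y, y(0) = 1; here Γ(3/2) = √π/2.
# The exact solution is the Mittag-Leffler function E_{1/2}(-√t).
t, y = fde_euler((t, y) -> -y, 1.0, 0.5, sqrt(pi) / 2, (0.0, 1.0), 0.01)
```

The O(n²) cost of the naive history sum above is exactly what FFT-based treatment of the memory term avoids.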
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/SV9TS9/
Blue
Moein Khalighi
PUBLISH
8AYKB7@@pretalx.com
-8AYKB7
Automated PDE Solving in Julia with MethodOfLines.jl
en
en
20220727T151000
20220727T154000
0.03000
Automated PDE Solving in Julia with MethodOfLines.jl
MethodOfLines.jl is a system for the automated discretization of symbolically defined partial differential equations (PDEs) by the method of lines. By recognizing different linear and nonlinear terms in the specified system, we build a performant semidiscretization by symbolically applying effective finite difference schemes, which is then used to generate optimized Julia code. Consequently, one can solve the system with an appropriate ordinary differential equation (ODE) solver.
In this 30 minute talk, the audience will learn how to use MethodOfLines.jl to discretize and solve an example PDE that represents a physical simulation which arises in research, gaining the knowledge and skills to apply these tools to their own problems. They will also learn about some of the internals of MethodOfLines.jl. This will arm them with the knowledge required to implement improved finite difference schemes, benefiting their own research and others in the community. Finally, we will outline the proposed direction of development for the package moving forwards.
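The discretization the package automates can be written by hand for the 1-D heat equation ∂u/∂t = ∂²u/∂x². This is a manual stencil plus an explicit Euler time step, purely to illustrate what a method-of-lines semidiscretization is; MethodOfLines.jl derives such stencils symbolically and hands the resulting ODE system to a proper solver:

```julia
# Semidiscretize ∂u/∂t = ∂²u/∂x² on [0, 1] with a second-order central
# difference stencil, then integrate the ODE system with explicit Euler.
function heat_mol(u0, dx; dt = 1e-3, nsteps = 200)
    u = copy(u0)
    n = length(u)
    for _ in 1:nsteps
        du = zeros(n)                # Dirichlet boundaries stay fixed at 0
        for i in 2:n-1
            du[i] = (u[i-1] - 2u[i] + u[i+1]) / dx^2
        end
        u .+= dt .* du               # one Euler step of the ODE system
    end
    return u
end

x = range(0, 1; length = 21)
u = heat_mol(sin.(π .* x), step(x))  # initial condition u(x, 0) = sin(πx)
maximum(u)                           # decays toward exp(-π²t), t = 0.2
```

Note the stability constraint dt ≤ dx²/2 for this explicit pairing, one of the pitfalls that an automatically generated semidiscretization plus a stiff ODE solver sidesteps.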
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/8AYKB7/
Blue
Alex Jones
PUBLISH
EA7NVT@@pretalx.com
-EA7NVT
Automatic generation of C++ -- Julia bindings
en
en
20220727T190000
20220727T193000
0.03000
Automatic generation of C++ -- Julia bindings
Interfacing Julia with C++ libraries can be done with the help of the CxxWrap.jl package. With this package, bindings to C++ classes, their methods, and global functions can easily be implemented with a clean Julia interface. To provide the bindings, a wrapper that defines the C++-Julia mapping must be written.
We will show in this talk that this wrapper code can be automatically generated from the C++ library source code. We will present a code generator called WrapIt! (https://github.com/grasph/wrapit), developed as a proof of concept.
The clang libraries (https://clang.llvm.org/), and in particular their C API, libclang, were used to interpret the C++ code. The tool is already well advanced. We will show the challenges of deducing the library interface directly from the C++ header files, in particular for large libraries with hundreds or thousands of C++ classes, the technical choices made in WrapIt!, the status of the tool, and what would be needed to make it a full-fledged wrapper generator.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/EA7NVT/
Blue
Philippe Gras
PUBLISH
3VSJHV@@pretalx.com
-3VSJHV
Extending PyJL to Translate Python Libraries to Julia
en
en
20220727T193000
20220727T200000
0.03000
Extending PyJL to Translate Python Libraries to Julia
PyJL is part of the Py2Many transpiler, which is a rule-based transpilation tool. PyJL builds upon Py2Many to translate Python source code to Julia. Parsing is performed through Python's _ast_ module, which generates an Abstract Syntax Tree. Then, several intermediate transformations convert the input Python source code into Julia source code.
In terms of our results, we managed to translate two commonly used benchmarks:
1. The N-Body problem, achieving a speedup of 19.5x when compared to Python, after adding only one type hint
2. An implementation of the Binary Trees benchmark to test Garbage Collection, which resulted in 8.6x faster execution time without requiring any user intervention
The current major limitations of PyJL are type inference and mapping Python's OO paradigm to Julia. Regarding type inference, PyJL requires type hints on function arguments and return types, and we are currently integrating pytype, a type inference mechanism, to verify the soundness of type hints. Regarding the OO paradigm, PyJL currently maps Python's classes, including class constructors, and single inheritance to Julia. However, Python's special methods, such as \_\_repr\_\_ or \_\_str\_\_, still require proper translation.
Although the development of our transpilation tool is still at an early stage, our preliminary results show that the transpiler generates human-readable code that can achieve high performance with few changes to the generated source code.
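As an illustration of the special-method mapping problem, here is one plausible hand translation of a Python class with a \_\_repr\_\_ method into Julia. This is a hypothetical sketch of the target idiom (struct plus a `Base.show` overload), not PyJL's actual generated output:

```julia
# Python source being translated (shown as a comment):
#   class Point:
#       def __init__(self, x, y):
#           self.x, self.y = x, y
#       def __repr__(self):
#           return f"Point({self.x}, {self.y})"

# The class becomes a struct; the default constructor plays __init__'s role.
struct Point
    x::Float64
    y::Float64
end

# __repr__ maps naturally to a Base.show method, which repr() then uses.
Base.show(io::IO, p::Point) = print(io, "Point(", p.x, ", ", p.y, ")")

repr(Point(1.0, 2.0))   # "Point(1.0, 2.0)"
```

The mapping is straightforward for \_\_repr\_\_; methods such as \_\_getitem\_\_ or \_\_call\_\_ have analogous Julia hooks (`Base.getindex`, callable structs) but require more care.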
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/3VSJHV/
Blue
Miguel Marcelino
PUBLISH
JPYJS8@@pretalx.com
-JPYJS8
Julia in VS Code - What's New
en
en
20220727T200000
20220727T203000
0.03000
Julia in VS Code - What's New
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/JPYJS8/
Blue
David Anthoff
Sebastian Pfitzner
PUBLISH
V7CCYF@@pretalx.com
-V7CCYF
Simulating neural physiology & networks in Julia
en
en
20220727T123000
20220727T140000
1.03000
Simulating neural physiology & networks in Julia
Julia’s software ecosystem certainly lessens the technical burden for computational neuroscientists—it boasts federated development of high-quality packages for solving differential equations, machine learning, automatic differentiation, and symbolic algebra. Deep language support for multithreaded, distributed, and GPU parallelism also makes the case for models that can span multiple scales, both in biological detail and overall network size.
Come join us for a community discussion about what a fresh Julian take on modeling the brain might look like. Together we will lay out an initial set of goals for building up a domain-specific ecosystem of packages for computational neuroscience.
The discussion will be moderated by Wiktor Phillips, Alessio Quaresima, and Tushar Chauhan.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/V7CCYF/
BoF
Alessio Quaresima
Wiktor Phillips
Tushar Chauhan
PUBLISH
YLSWBC@@pretalx.com
-YLSWBC
Discussing Gender Diversity in the Julia Community
en
en
20220727T143000
20220727T160000
1.03000
Discussing Gender Diversity in the Julia Community
The objective of this BoF is to find more people who feel their gender is underrepresented within the Julia community or want to support people who feel so. We aim to create a safe and fruitful discussion about gender diversity, increase awareness of our current initiatives, and receive input on new actions we can take as Julia Gender Inclusive.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/YLSWBC/
BoF
Julia Gender Inclusive
PUBLISH
M7JDGG@@pretalx.com
-M7JDGG
Poster session
en
en
20220727T180000
20220727T193000
1.03000
Poster session
PUBLIC
CONFIRMED
Virtual poster session
https://pretalx.com/juliacon-2022/talk/M7JDGG/
BoF
PUBLISH
RDLDYD@@pretalx.com
-RDLDYD
UnitJuMP: Automatic unit handling in JuMP
en
en
20220727T123000
20220727T124000
0.01000
UnitJuMP: Automatic unit handling in JuMP
When setting up complex optimization models for real-world problems, one often encounters
parameters and variables that represent physical quantities with specified units. To ensure
the correctness of the optimization model, considerable care has to be taken to avoid errors
due to inconsistent use of units. An example is investment models in the energy sector, where
multiyear investments measured in GW can be combined with operational decisions on an hourly or minute basis, with parameters being provided in a wide range of units (kWh, MJ, kcal, MMBTU).
The package UnitJuMP is an extension to the JuMP package that handles modeling of units within JuMP models.
The implementation is based on use of the Unitful package for generic handling of physical units.
The package is still in an early stage of development, and currently only supports the use of units in combination with linear and mixed integer linear optimization problems.
The package is available for download and testing at https://github.com/trulsf/UnitJuMP.jl.
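The kind of mistake automatic unit handling guards against can be shown with a toy quantity type, hand-rolled here purely for illustration (UnitJuMP delegates this to Unitful.jl; the two-exponent dimension tuple is an assumption of this sketch, not either package's representation):

```julia
# A minimal quantity type carrying dimension exponents, e.g. (energy, time):
# energy = (1, 0), power = energy/time = (1, -1).  Adding quantities with
# different dimensions is the unit error that should be caught.
struct Quantity
    val::Float64
    dims::Tuple{Int,Int}
end

function Base.:+(a::Quantity, b::Quantity)
    a.dims == b.dims || throw(ArgumentError("unit mismatch: $(a.dims) vs $(b.dims)"))
    Quantity(a.val + b.val, a.dims)
end
Base.:*(a::Quantity, b::Quantity) = Quantity(a.val * b.val, a.dims .+ b.dims)

energy = Quantity(3.0, (1, 0))    # e.g. GWh
power  = Quantity(2.0, (1, -1))   # e.g. GW
energy + Quantity(1.5, (1, 0))    # fine: same dimensions
# energy + power                  # would throw ArgumentError: unit mismatch
```

UnitJuMP applies the same consistency checks to the variables and constraints of a JuMP model, so a GW term cannot silently be added to a GWh term.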
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RDLDYD/
JuMP
Truls Flatberg
PUBLISH
STM8PM@@pretalx.com
-STM8PM
SparseVariables - Efficient sparse modelling with JuMP
en
en
20220727T124000
20220727T125000
0.01000
SparseVariables - Efficient sparse modelling with JuMP
## Motivation
Industry-scale optimization problems, e.g. in supply chain management or energy systems modelling, often involve constructing and solving very large, very sparse linear programs. For such problems, problem construction time can rival solution time when the problem is solved by highly efficient commercial linear programming solvers.
Julia and JuMP provide an elegant and fun modelling environment which integrates nicely with data management, versioning, reproducibility and portability.
The default containers and macros in JuMP do present some challenges for this class of problems, related to performance gotchas and incremental variable construction.
## What it is
Thanks to the unique hackability of Julia and JuMP, it has been straightforward to create custom containers and macros to investigate alternative approaches to modelling large sparse systems with JuMP.
We present SparseVariables which provides a nice and compact syntax for these problems with good performance by default.
## Performance
To demonstrate SparseVariables, we present a demo supply chain optimization problem, which may be modelled in multiple ways, and benchmark the problem construction time and the number of lines of code for each approach.
## Elegance
With SparseVariables we avoid the boilerplate necessary with the suggested workarounds for performance issues in JuMP, and also allow incremental construction of variables, which is useful for modular modelling code. Our container features slicing and other variable selection methods that allow for short and concise code, leveraging JuMP, MathOptInterface and the Julia package ecosystem.
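The sparse-plus-slicing idea can be illustrated with a plain dictionary keyed on index tuples. This is a toy stand-in for the concept only, not SparseVariables' container (which holds JuMP variables and offers far richer selection):

```julia
# Sparse, incrementally built storage: only valid (plant, product)
# combinations ever get an entry, so no dense index product is materialized.
vars = Dict{Tuple{Symbol,Symbol},Float64}()

vars[(:factory_a, :widgets)] = 10.0
vars[(:factory_b, :gadgets)] = 5.0   # (:factory_b, :widgets) simply never exists

# "Slicing": select all stored values whose first index matches a pattern.
slice(d, plant) = [v for ((p, _), v) in d if p == plant]

slice(vars, :factory_a)   # values stored under :factory_a only
```

In a real model, entries would be added only as data validates each combination, which is what keeps both memory use and model construction time proportional to the nonzero structure.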
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/STM8PM/
JuMP
Lars Hellemo
PUBLISH
LJC7R8@@pretalx.com
-LJC7R8
JuMP ToQUBO Automatic Reformulation
en
en
20220727T125000
20220727T130000
0.01000
JuMP ToQUBO Automatic Reformulation
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LJC7R8/
JuMP
Pedro Xavier
PUBLISH
VGSB89@@pretalx.com
-VGSB89
A multi-precision algorithm for convex quadratic optimization
en
en
20220727T130000
20220727T133000
0.03000
A multi-precision algorithm for convex quadratic optimization
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/VGSB89/
JuMP
Geoffroy Leconte
PUBLISH
UNRVRP@@pretalx.com
-UNRVRP
Interior-point conic optimization with Clarabel.jl
en
en
20220727T133000
20220727T140000
0.03000
Interior-point conic optimization with Clarabel.jl
The talk will introduce Clarabel.jl, a new package for conic convex optimization implemented in pure Julia. The package is based on an interior point optimization method and can solve optimization problems in the form of linear and quadratic programs (LPs and QPs), second-order cone programs (SOCPs), semidefinite programs (SDPs), and problems with exponential cone constraints.
The package implements a novel homogeneous embedding technique that offers substantially faster solve times relative to existing open-source and commercial solvers for some problem types. This improvement is due to both a reduction in the number of required interior point iterations as well as an improvement in both the size and sparsity of the linear system that must be solved at each iteration. The talk will describe details of this embedding and show performance results with respect to solvers based on the standard homogeneous self-dual embedding, including ECOS, Hypatia and MOSEK.
Our implementation of Clarabel.jl adopts design ideas from several existing solver packages. Based on our group’s prior experience implementing first-order optimization techniques in the ADMM-based solver COSMO.jl, Clarabel.jl adopts a modular implementation for convex cones that is easily extensible to new types. The solver organises its core internal data types following the design of the C++ QP solver OOQP, allowing for future extensions of the solver to exploit optimization problems with special internal structure, e.g. optimal control or support vector machine problems. Finally, the package works with generic types throughout, allowing for simple extension to abstract matrix or vector representations or use with arbitrary precision floating point types. The talk will describe these features and their implementation through Julia’s multiple dispatch system.
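The generic-typing point can be shown in miniature with an unrelated toy, not Clarabel.jl's internals: a numeric kernel written against `AbstractFloat` runs unchanged at any precision, which is exactly how a generically typed solver gains arbitrary-precision support for free.

```julia
# Newton's iteration for √a, written once for any AbstractFloat.
# The same method instance pattern serves Float64, Float32, or BigFloat.
function newton_sqrt(a::T; iters = 10) where {T<:AbstractFloat}
    x = a
    for _ in 1:iters
        x = (x + a / x) / 2     # quadratically convergent update
    end
    return x
end

newton_sqrt(2.0)        # Float64 result
newton_sqrt(big"2.0")   # BigFloat result, extra precision without new code
```

Multiple dispatch specializes and compiles the loop for each element type, so the generality costs nothing at runtime.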
Clarabel.jl provides a simple native interface for solving cone programs in a standard format. The package also fully supports Julia's MathOptInterface package, and can therefore be used via both JuMP and Convex.jl.
The package will be available as an open-source package via Github under the Apache 2.0 license. An initial public release is planned for June 2022, but full documentation and examples are already available at:
https://oxfordcontrol.github.io/Clarabel.jl/
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/UNRVRP/
JuMP
Paul Goulart
PUBLISH
KCL3JM@@pretalx.com
-KCL3JM
JuMP 1.0: What you need to know
en
en
20220727T143000
20220727T150000
0.03000
JuMP 1.0: What you need to know
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/KCL3JM/
JuMP
Miles Lubin
PUBLISH
CFGAUV@@pretalx.com
-CFGAUV
A user’s perspective on using JuMP in an academic project
en
en
20220727T150000
20220727T153000
0.03000
A user’s perspective on using JuMP in an academic project
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/CFGAUV/
JuMP
Mathieu Tanneau
PUBLISH
XRDNVT@@pretalx.com
-XRDNVT
COPT and its Julia interface
en
en
20220727T153000
20220727T154000
0.01000
COPT and its Julia interface
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/XRDNVT/
JuMP
Qi Huangfu
PUBLISH
ZPUZPU@@pretalx.com
-ZPUZPU
JuMP and HiGHS: the best open-source linear optimization solvers
en
en
20220727T154000
20220727T155000
0.01000
JuMP and HiGHS: the best open-source linear optimization solvers
Almost from the moment the development of HiGHS was proposed in 2018, the prospect of it offering top-class open-source linear optimization solvers with a well-designed and fully supported API was attractive to JuMP. Since then, as HiGHS has developed from outstanding "gradware" into the world's best open-source linear optimization software, there have been invaluable contributions from the JuMP team. This great example of community cooperation means that HiGHS is now the default MILP solver in JuMP's documentation, and Julia users have a slick interface to HiGHS. One area of activity exploiting this is the rapidly growing world of open-source energy systems planning, where the high license fees for commercial optimization solvers mean that open-source alternatives are critically important for small-scale commercial enterprises, NGOs, and organisations in developing countries. Some high-profile use cases of the JuMP-HiGHS interface in this field will be presented.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZPUZPU/
JuMP
Julian Hall
PUBLISH
Y8BCSL@@pretalx.com
-Y8BCSL
Pajarito's MathOptInterface Makeover
en
en
20220727T190000
20220727T193000
0.03000
Pajarito's MathOptInterface Makeover
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/Y8BCSL/
JuMP
Chris Coey
PUBLISH
JLYCL8@@pretalx.com
-JLYCL8
A matrix-free fix-propagate-and-project heuristic for MILPs
en
en
20220727T193000
20220727T200000
0.03000
A matrix-free fix-propagate-and-project heuristic for MILPs
The talk will present our work on Scylla in two aspects. The first will be the presentation of the method and different components and ideas they link to, from feasibility pump to primal-dual hybrid gradients for linear optimization and fix-and-propagate procedures. The second aspect of the talk will include lessons learned on asynchronous programming using the Task-Channel model, working with and interfacing native libraries or time management for time-constrained experimental runs.
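The Task-Channel model mentioned above can be sketched generically: a producer task streams candidate solutions through a `Channel` to a consumer that keeps the incumbent. This is a minimal illustration of the pattern, not Scylla's actual code:

```julia
# Producer/consumer over a bounded Channel: the producer runs as an
# asynchronous Task, the consumer drains the channel until it is closed.
function best_of(objectives)
    ch = Channel{Tuple{Int,Float64}}(32)   # bounded buffer applies backpressure
    @async begin
        for (i, v) in enumerate(objectives)
            put!(ch, (i, v))               # here v stands in for an evaluated
        end                                # objective value of candidate i
        close(ch)                          # closing ends the consumer's loop
    end
    best_i, best_v = 0, Inf
    for (i, v) in ch                       # blocks until items arrive
        if v < best_v
            best_i, best_v = i, v
        end
    end
    return best_i, best_v
end

best_of([3.0, 1.5, 2.7])   # returns the index and value of the minimum
```

In a heuristic like this, the producer slot would run fixing/propagation and projection workers concurrently, while the consumer tracks the best feasible solution found within the time budget.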
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/JLYCL8/
JuMP
Mathieu Besançon
PUBLISH
TSN8ZR@@pretalx.com
-TSN8ZR
Verifying Inverse Model Neural Networks Using JuMP
en
en
20220727T200000
20220727T201000
0.01000
Verifying Inverse Model Neural Networks Using JuMP
The talk is based on https://arxiv.org/abs/2202.02429
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/TSN8ZR/
JuMP
Chelsea Sidrane
PUBLISH
LSUKYX@@pretalx.com
-LSUKYX
Complex number support in JuMP
en
en
20220727T201000
20220727T202000
0.01000
Complex number support in JuMP
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LSUKYX/
JuMP
Benoît Legat
PUBLISH
YGFHB7@@pretalx.com
-YGFHB7
Julia Computing Sponsored Forum
en
en
20220727T193000
20220727T201500
0.04500
Julia Computing Sponsored Forum
PUBLIC
CONFIRMED
Sponsor forum
https://pretalx.com/juliacon-2022/talk/YGFHB7/
Sponsored forums
PUBLISH
78XMRJ@@pretalx.com
-78XMRJ
Keynote - Jeremy Howard
en
en
20220728T090000
20220728T094500
0.04500
Keynote - Jeremy Howard
Keynote - Jeremy Howard
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon-2022/talk/78XMRJ/
Green
Jeremy Howard
PUBLISH
SZSESM@@pretalx.com
-SZSESM
Quiqbox.jl: Basis set generator for electronic structure problem
en
en
20220728T103000
20220728T104000
0.01000
Quiqbox.jl: Basis set generator for electronic structure problem
Quantum and classical computers are being applied to solve ab initio problems in physics and chemistry. In the NISQ era, solving the "electronic structure problem" has become one of the major benchmarks for identifying the boundary between classical and quantum computational power. Electronic structure in condensed matter physics is often defined on a lattice grid while electronic structure methods in quantum chemistry rely on atom-centered single-particle basis functions. Grid-based methods require a large number of single-particle basis functions to obtain sufficient resolution when expanding the N-body wave function. Typically, fewer atomic orbitals are needed than grid points but the convergence to the continuum limit is less systematic. To investigate the consequences and compromises of the single-particle basis set selection on electronic structure methods, we need more flexibility than is offered in standard solid-state and molecular electronic structure packages. Thus, we have developed an open-source software tool called "Quiqbox" in the Julia programming language that allows for easy construction of highly customized floating basis sets. This package allows for versatile configurations of single-particle basis functions as well as variational optimization based on automatic differentiation of basis set parameters.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/SZSESM/
Green
Weishi Wang
PUBLISH
KPXD3Y@@pretalx.com
-KPXD3Y
MathLink(Extras): The powers of Mathematica and Julia combined
en
en
20220728T104000
20220728T105000
0.01000
MathLink(Extras): The powers of Mathematica and Julia combined
Mathematica is arguably the go-to tool for your everyday mathematical needs. It can efficiently perform integrals, solve equations, find roots, refine expression, plot functions, and many more things.
However, there are tasks where Mathematica performs poorly or is just plain inconvenient to work with.
One such area is if/else statements and control flow.
Try, for instance, to construct programs where the algebraic manipulations depend on the functional form of the expression.
Or if you want to make non-trivial variable changes inside an expression.
These limitations and many more are solved by Julia's MathLink (https://github.com/JuliaInterop/MathLink.jl) and MathLinkExtras (https://github.com/fremling/MathLinkExtras.jl) packages.
The first package provides access to Mathematica/Wolfram Engine, via the Wolfram Symbolic Transfer Protocol (WSTP).
The second is "sugar on top" and provides the basic algebraic operations (+,-,*,/) for the MathLink variable types.
As a practical example, I will show how MathLink and MathLinkExtras were used in a research project [1] to compute nested Gaussian integrals.
[1] M. Fremling, "Exact gap-ratio results for mixed Wigner surmises of up to 4 eigenvalues", arXiv preprint arXiv:2202.01090 (2022). (https://arxiv.org/abs/2202.01090)
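The "sugar on top" idea, overloading Julia's arithmetic operators so that ordinary syntax builds symbolic expressions, can be shown with a toy wrapper type. This is a self-contained stand-in for the pattern, not MathLink's actual `WExpr`/`WSymbol` types:

```julia
# A toy symbolic expression: a head (like Mathematica's Plus/Times) and args.
struct MExpr
    head::String
    args::Vector{Any}
end
MExpr(name::String) = MExpr("Symbol", Any[name])   # leaf symbol

# Operator overloads: Julia syntax now assembles expression trees.
Base.:+(a::MExpr, b::MExpr) = MExpr("Plus", Any[a, b])
Base.:*(a::MExpr, b::MExpr) = MExpr("Times", Any[a, b])

x, y = MExpr("x"), MExpr("y")
expr = x + x * y    # builds Plus[x, Times[x, y]], respecting precedence
```

With the real packages, such a tree is shipped over WSTP to the Wolfram Engine for evaluation, while control flow around it stays in Julia.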
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/KPXD3Y/
Green
Mikael Fremling
PUBLISH
Y8G9VJ@@pretalx.com
-Y8G9VJ
Dates with Nanoseconds
en
en
20220728T105000
20220728T110000
0.01000
Dates with Nanoseconds
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/Y8G9VJ/
Green
Jeffrey Sarnoff
PUBLISH
9EM3P7@@pretalx.com
-9EM3P7
Exploring audio circuits with ModelingToolkit.jl
en
en
20220728T110000
20220728T113000
0.03000
Exploring audio circuits with ModelingToolkit.jl
This talk is targeted at people who want to start using ModelingToolkit.jl for simulations in their domain. It shows a workflow that is possible today. Additionally, it hints that composability is the key to unlock future breakthroughs. As a bonus, we can implement recent audio processing papers in a few lines of code!
We begin with a survey of the numerical and symbolic software commonly employed in the field and explain our decision to use ModelingToolkit.jl. Typically, engineers working with audio systems face a three-language problem: use SPICE-style software to analyze a circuit, then move on to Matlab/Scilab/Python to deliver a high-level prototype, and finally rewrite that as a high-performance implementation in C/C++. Usually, the simplification of a circuit turns into a laborious, multi-week manual process. ModelingToolkit.jl covers these use cases and more.
Afterwards, we will explore increasingly complex audio circuits via simulation. Topics include:
- implementing Kirchhoff laws
- defining simple models (capacitor, diode)
- defining a circuit
- simulating the circuit and plotting the result
- defining hierarchical models (VCCS, vacuum tube)
- animated plotting
- defining controls (potentiometers)
- exploring variations on a venerable guitar pedal
Lastly, audio demos of the simulated circuits will be featured.
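As a taste of what "simulating the circuit" entails, here is an RC low-pass filter charging toward a step input, integrated by hand with explicit Euler (a plain-Julia sketch with made-up component values, not ModelingToolkit.jl itself):

```julia
# dV/dt = (Vin - V) / (R*C), integrated with explicit Euler.
function charge(Vin, R, C; dt = 1e-6, T = 10e-3)
    V = 0.0
    for _ in 1:round(Int, T / dt)
        V += dt * (Vin - V) / (R * C)   # one Euler step of the capacitor voltage
    end
    return V
end

V = charge(1.0, 10e3, 100e-9)   # 10 kΩ, 100 nF → τ = 1 ms; after 10τ, V ≈ Vin
```

ModelingToolkit.jl replaces this kind of hand-written integration with symbolically defined components and battle-tested solvers.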
Assumed background:
Attendees are expected to have some programming experience in e.g. Python. It is helpful, although not required, to have experience working with analog circuits and/or SPICE simulation software.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/9EM3P7/
Green
George Gkountouras
PUBLISH
AYAUPK@@pretalx.com
-AYAUPK
Universal Differential Equation models with wrong assumptions
en
en
20220728T113000
20220728T114000
0.01000
Universal Differential Equation models with wrong assumptions
### Introduction
Julia’s SciML ecosystem introduced an effective way to model natural phenomena as dynamical systems with Universal Differential Equations (UDEs). The UDE framework enriches both classic and neural-network differential equation modelling by combining an explicitly “known” term (a term whose functional expression is known) with an “unknown” term (a term whose functional expression is not known). Within a UDE, the unknown term, and therefore the overall functional form of the dynamical system, is learned from observational data by fitting a Neural Network. The task of the Neural Network is facilitated by the domain knowledge embodied in the known term; moreover, the interpolation and, importantly, extrapolation performance of the fitted model is greatly improved by that knowledge (and by a simplification step, such as SINDy).
All of this relies on the tacit assumption that what we think about the natural phenomenon is correctly expressed in the known term. Most research has focused on the robust identification of the unknown term and on the properties of the Neural Network. We focus instead on the impact of possible pathologies in the design of a UDE system, and in particular on errors we may introduce in the expression of the known term. That is, we ask what happens if our domain knowledge is not correctly expressed. In the spirit of the quote attributed to Mark Twain, “It ain’t what you don’t know that gets you in trouble. It’s what you know for sure that just ain’t so”, we explore the magnitude of the trouble you get into.
### Details
More in detail, for a set of variables X, we consider a dynamical system of the form
`dX(t)=F(X,t)=K(t)+U(X,t)`
where `K(t)` is the part of the dynamical equation assumed as “known”, and `U(X,t)` is the part assumed as “unknown”.
In this scenario, the observational data are samples from `X(t)` at various points in time.
Let `K*(t)` be a perturbed version of `K(t)` (say, for a certain `ω`, `K*(t)=sin(t+ω)` when `K(t)=sin(t)`).
Our aim is to recover `F(X,t)` from the observed data by training a UDE of the form
`dX(t)=K*(t)+NN(X,t)`.
Under the perturbed scenario, we ask some simple questions whose answers are far from trivial:
- Can we recover the functional form of `F(X,t)`?
- Can we at least approximate it accurately?
- How does the perturbation we imposed on `K(t)` impact our model's accuracy?
In order to explore the discrepancy between expected and obtained results, we needed: synthetic data from the original dynamical system, that is, a family of functions for `K(t)`; a family of perturbed versions of `K(t)`; and a way to assess how far off we are from recovering the true `F(X,t)`. All three tasks were facilitated by the interoperability of Julia, and in the presentation we will show how that plays out.
1. We considered trigonometric, exponential, and polynomial functions, as well as linear combinations of these, to create the original dynamical system and generate synthetic observational data. This was made efficient by the symbolic computation capabilities of Julia, e.g., Symbolics.jl.
2. We fitted a family of UDEs to the data we generated under three scenarios: (a) a correctly specified known term, i.e., `K*(t)=K(t)`; (b) the lack of a known term, i.e., `K*(t)=0`; and (c) a perturbation of the known term. The UDE was subsequently simplified to recover a sparse representation of the dynamical system in terms of simple functions. This step was done within Julia’s SciML framework.
3. Finally, we evaluated the goodness of fit of the recovered dynamical system (both simplified and not) against the original dynamical system. For this we developed a package, FunctionalDistances.jl, to automate as much as possible the estimation of the distance between two functions (the package will shortly be available in a GitHub repository).
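To make the "distance between two functions" step concrete, here is a hedged sketch of a discrete L2 distance between a known term and its perturbed version (FunctionalDistances.jl is not yet released, so this is illustrative only, not its API):

```julia
K(t)  = sin(t)            # the true known term
Kp(t) = sin(t + 0.3)      # a perturbed version, ω = 0.3

# Discrete L2 distance between two functions on a grid over [0, 2π].
ts = range(0, 2π; length = 1000)
l2(f, g) = sqrt(sum(abs2, f.(ts) .- g.(ts)) * step(ts))

d = l2(K, Kp)             # grows with ω; zero when ω = 0
```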
### Future Development
The preliminary results we obtained suggest that no UDE with a strongly perturbed known term provided a better model than its counterpart with a correctly specified term. Yet a few perturbed models give better fits than unspecified ones, raising the question of whether errors in the UDE specification are indeed detectable.
Our talk will interest both people who study UDEs, for our cautionary and surprising results, and the wider audience interested in the use of Julia in mathematical modelling, for the encouraging examples of interoperability we present.
The talk will present how Julia helped us in this experimental mathematical exercise, and offer many opportunities for further investigations.
The presentation will be as light as possible on the mathematical side, will present ample examples of how the interoperability of Julia helped our analysis, and assumes little or no prior knowledge of UDEs. Graphs and examples will also be used to aid understanding of the topic.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/AYAUPK/
Green
Luca Reale
PUBLISH
BEY33E@@pretalx.com
-BEY33E
Using SciML to predict the time evolution of a complex network.
en
en
20220728T114000
20220728T115000
0.01000
Using SciML to predict the time evolution of a complex network.
**Introduction**
Complex networks can change over time as vertices and edges get added or removed. Modeling the temporal evolution of networks and predicting their structure is an open challenge across a wide variety of disciplines: from the study of ecological networks such as food webs, to predictions about the structure of economic networks; from the analysis of social networks, to the modeling of how our brains develop and adapt during our lives.
In their usual representation, networks are binary (an edge is either observed or not), sparse (each vertex is linked to a very small subset of the network), and large (up to billions of nodes), and changes are discrete rewiring events. These properties make them hard to handle with classic machine learning techniques and have barred the use of more traditional mathematical modeling such as differential equations. In this talk, we show how we used Julia, and in particular the Scientific Machine Learning (SciML) framework, to model the temporal evolution of complex networks as continuous, multivariate dynamical systems learned from observational data. We took an approach cutting across different mathematical disciplines (machine learning, differential equations, and graph theory): this was possible largely thanks to the integration of packages like Graphs.jl (and the companion MetaGraphs.jl package), LinearAlgebra (and other matrix decomposition packages), and the SciML ecosystem, e.g., DiffEqFlux.jl.
**Methodology**
1. To translate the discrete, high-dimensional, dynamical system into a continuous, low-dimensional one, we rely on a network embedding technique. A network embedding maps the vertices of a network to points in a (low-dimensional) metric space. We adopt the well-studied Random Dot-Product Graphs statistical model: the mapping is provided by a truncated Singular Value Decomposition of the network’s adjacency matrix; to reconstruct the network we use the fact that the probability of interaction between two vertices is given by the dot product of the points they map to. The decomposition of the adjacency matrices and their alignment is a computationally intensive step, and we tackle it thanks to the fast matrix algorithms available for Julia and their integration with packages that allow for network data wrangling (like Graphs.jl).
2. In the embedding framework, a discrete change in the network is modeled as the effect of a continuous displacement of the points in the metric space. Our goal, then, is to discover from the data (the network observed at various points in time) an adequate dynamical system capturing the laws governing the temporal evolution of the complex network. This is made possible by a pipeline that combines Neural ODEs with the identification of nonlinear dynamical systems (e.g., SINDy).
In general, each node may influence the temporal evolution of every other node in the network; if we work in a space of dimension d with N nodes, this translates to a dynamical system with N*N*d variables. As networks may often have thousands or millions of nodes, that number can be huge. In our talk we are going to discuss various strategies we adopted to tame the complexity of the dynamical system.
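The embedding step (item 1 above) can be sketched in a few lines with the standard library's LinearAlgebra; this toy version (matrix and dimension chosen for illustration) computes a rank-d, Random Dot-Product Graph style embedding via a truncated SVD:

```julia
using LinearAlgebra

A = Float64[0 1 1 0;      # toy symmetric adjacency matrix, 4 vertices
            1 0 1 0;
            1 1 0 1;
            0 0 1 0]

d = 2                     # embedding dimension
F = svd(A)
X = F.U[:, 1:d] * Diagonal(sqrt.(F.S[1:d]))   # one point in R^d per vertex

P = X * X'                # dot products approximate edge probabilities
```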
**Future Development**
As a proof of concept, we tested our modeling approach on a network of wild birds interacting over the span of 6 days, collected by the team behind the Animal Social Network Repository (ASNR). The network has 202 vertices (birds) and 11899 edges (contacts between birds). In the talk we will showcase this application to show the strengths and current limitations of our novel approach.
We are now working on three fronts:
- we need to scale up the framework so as to model very large networks (for example social networks, where being able to predict which links might form in the future could be a tool to fight the growing problem of misinformation and disinformation);
- we are considering stepping from Neural Differential Equations to Universal Differential Equations, both to capture any preexisting knowledge of the network dynamical system, and to help with the training complexity;
- we are exploring data augmentation techniques to interpolate between the estimated embeddings, other embeddings, and other neural network architectures.
These three development directions constitute interesting challenges for the Julia practitioner interested in extending Julia's modeling abilities. We will discuss them in the talk and suggest how everyone can contribute.
All the code will be made available in a dedicated git repository and we are preparing a detailed publication to illustrate our approach.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/BEY33E/
Green
Andre Macleod
PUBLISH
BASTLY@@pretalx.com
-BASTLY
Fast, Faster, Julia: High Performance Implementation of the NFFT
en
en
20220728T123000
20220728T130000
0.03000
Fast, Faster, Julia: High Performance Implementation of the NFFT
The non-equidistant fast Fourier transform (NFFT) is an extension of the well-known fast Fourier transform (FFT) in which the sample points in one domain can be non-equidistant. The NFFT is an approximate algorithm and allows the approximation error to be controlled to achieve machine precision while keeping the algorithmic complexity in the same order of magnitude as a regular FFT. The NFFT plays an important role in many signal processing applications and has been intensively studied from both theoretical and computational perspectives. The fastest NFFT libraries are implemented in the low-level programming languages C and C++ and require a trade-off between generic code, code readability, and code efficiency.
In this talk, we show that Julia provides new ways to optimize these three conflicting goals. We outline the architecture and implementation of the NFFT.jl package, which has recently been refactored to match the performance of the modern C++ implementation FINUFFT. NFFT.jl is fully generic, dimension-independent, and has a flexible architecture that allows parts of the algorithm to be exchanged through different code paths. This is crucial for the realization of different precomputation strategies tailored to optimize either the computation time or the required main memory. NFFT.jl makes intensive use of the Cartesian macros in Julia Base, allowing for zero-overhead and dimension-agnostic implementation. In contrast, the two modern C (NFFT3) and C++ (FINUFFT) libraries use dedicated 1D, 2D and 3D code paths to achieve maximum performance. The generic Julia implementation thus avoids code duplication and requires 3-4 times less code than its C/C++ counterparts. NFFT.jl is multi-threaded and uses a cache-aware blocking technique to achieve decent speedups.
Package being presented:
- https://github.com/JuliaMath/NFFT.jl
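For orientation, the transform that NFFT.jl approximates in O(N log N) time is the direct non-equispaced DFT below, which costs O(N·M) when evaluated naively (this reference implementation is illustrative only, not NFFT.jl's API):

```julia
# f̂(x_j) = Σ_k f_k · exp(-2πi k x_j),  k = -N/2, …, N/2 - 1, x_j ∈ [-1/2, 1/2)
function ndft(f::Vector{<:Complex}, x::Vector{<:Real})
    N  = length(f)
    ks = -N÷2 : N÷2 - 1
    return [sum(f[i] * cispi(-2 * x[j] * ks[i]) for i in 1:N) for j in eachindex(x)]
end

f = ComplexF64[1, 0, 0, 0]       # single nonzero coefficient at k = -2
x = [-0.25, 0.0, 0.25]           # non-equidistant sample points
g = ndft(f, x)                   # → [-1, 1, -1]
```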
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/BASTLY/
Green
Tobias Knopp
PUBLISH
VC9YHN@@pretalx.com
-VC9YHN
Julia's latest in high performance sorting
en
en
20220728T130000
20220728T133000
0.03000
Julia's latest in high performance sorting
Julia's radix sort implementation: https://github.com/JuliaLang/julia/blob/fc1093ff1560b47611293bf71f8074030116edcc/base/sort.jl#L681
Benchmark implementations: https://github.com/LilithHafner/InterLanguageSortingComparisons
Extended discussion surrounding the introduction of radix sort to Julia: https://github.com/JuliaLang/julia/pull/44230
Ongoing work: https://github.com/JuliaLang/julia/pull/45222
Potential future work: https://github.com/JuliaLang/julia/discussions/44876#discussioncomment-2890020
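For readers new to the topic, the core idea behind the new Base sort is LSD radix sort; the following is a self-contained textbook version for `UInt32` keys (Base's actual implementation differs in many details, e.g. key transformations and fallbacks):

```julia
function radix_sort(v::Vector{UInt32})
    out = similar(v)
    for shift in 0:8:24                        # four 8-bit digits, least significant first
        counts = zeros(Int, 256)
        for x in v                             # histogram of the current digit
            counts[((x >> shift) & 0xff) + 1] += 1
        end
        cumsum!(counts, counts)                # inclusive prefix sums = final positions
        for i in reverse(eachindex(v))         # backwards pass keeps the sort stable
            b = ((v[i] >> shift) & 0xff) + 1
            out[counts[b]] = v[i]
            counts[b] -= 1
        end
        v, out = out, v                        # output becomes next pass's input
    end
    return v
end
```

Four stable counting-sort passes over 8-bit digits sort the full 32-bit keys in O(n) time.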
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/VC9YHN/
Green
Lilith Hafner
PUBLISH
N7DSLT@@pretalx.com
-N7DSLT
Julia Gaussian Processes
en
en
20220728T133000
20220728T140000
0.03000
Julia Gaussian Processes
Gaussian processes provide a way to place prior distributions over unknown functions, and are used throughout probabilistic machine learning, statistics and numerous application domains (climate science, epidemiology, geostatistics, and model-based RL, to name a few). Their popularity stems from their flexibility, their interpretability, the ease with which exact and approximate Bayesian inference can be performed with them, and their ability to be used as a single module in a larger probabilistic model.
The goal of the [JuliaGPs organisation](https://github.com/JuliaGaussianProcesses/) is to provide a range of software which is suitable for both methodological research and deployment of GPs. We achieve this through a variety of clearly-defined abstractions, interfaces, and libraries of code. These are designed to interoperate with each other, and the rest of the Julia ecosystem (Distributions.jl, probabilistic programming languages, AD, plotting, etc), instead of providing a single monolithic package which attempts to do everything. This modular approach allows a GP researcher to straightforwardly build on top of lower-level components of the ecosystem which are useful in their work, without compromising on convenience when applying a GP in a more applications-focused fashion.
This talk will briefly introduce GPs, and discuss the JuliaGPs ecosystem: what its design principles are and how they relate to existing GP software, how it is structured, what is available, what it lets you do, what it doesn’t try to do, and where there are gaps that we are trying to fill (and could use some assistance!). It will provide some examples of standard use (e.g. regression and classification tasks), making use of the core packages ([AbstractGPs](https://github.com/JuliaGaussianProcesses/AbstractGPs.jl), [ApproximateGPs](https://github.com/JuliaGaussianProcesses/ApproximateGPs.jl/), [KernelFunctions](https://github.com/JuliaGaussianProcesses/KernelFunctions.jl)), and how to move forward from there. It will also show how the abstractions have been utilised in the existing contributors’ research, for example with [Stheno](https://github.com/JuliaGaussianProcesses/Stheno.jl), [TemporalGPs](https://github.com/JuliaGaussianProcesses/TemporalGPs.jl), [AugmentedGPLikelihoods](https://github.com/JuliaGaussianProcesses/AugmentedGPLikelihoods.jl), [GPDiffEq](https://github.com/Crown421/GPDiffEq.jl), [BayesianLinearRegressors](https://github.com/JuliaGaussianProcesses/BayesianLinearRegressors.jl), and [LinearMixingModels](https://github.com/invenia/LinearMixingModels.jl), with the aim of providing inspiration for how you might do the same in your own work.
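To convey the underlying computation the ecosystem packages wrap, here is exact GP regression with an RBF kernel in plain Julia (a hedged sketch; AbstractGPs.jl provides this behind a much cleaner API):

```julia
using LinearAlgebra

k(x, y; ℓ = 1.0) = exp(-abs2(x - y) / (2ℓ^2))   # RBF (squared-exponential) kernel

x  = [-1.0, 0.0, 1.0]         # training inputs
y  = sin.(x)                  # training targets
σ² = 1e-6                     # observation noise variance

K  = [k(a, b) for a in x, b in x] + σ² * I   # kernel matrix plus noise
xs = [0.5]                    # test input
Ks = [k(a, b) for a in xs, b in x]           # cross-covariances

μ = Ks * (K \ y)              # posterior mean at xs
```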
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/N7DSLT/
Green
Will Tebbutt
PUBLISH
UZBZRQ@@pretalx.com
-UZBZRQ
Restreaming of Jeremy Howard Keynote
en
en
20220728T143000
20220728T151500
0.04500
Restreaming of Jeremy Howard Keynote
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon-2022/talk/UZBZRQ/
Green
PUBLISH
XKGBAM@@pretalx.com
-XKGBAM
oneAPI.jl: Programming Intel GPUs (and more) in Julia
en
en
20220728T152000
20220728T153000
0.01000
oneAPI.jl: Programming Intel GPUs (and more) in Julia
oneAPI is a framework, developed by Intel but intended to be cross-platform, that can be used to program various hardware accelerators. This includes Intel GPUs, which exist as integrated solutions in many processors, and dedicated hardware that will be part of the Aurora supercomputer at Argonne National Laboratory.
To program these GPUs from Julia, we have created the oneAPI.jl package based on existing GPU infrastructure like GPUCompiler.jl and GPUArrays.jl. It builds on the low-level Level Zero APIs that are part of oneAPI, and relies on Khronos tools to compile Julia code to SPIR-V. With it, Intel GPUs can be programmed using the familiar programming styles supported by other GPU back-ends: high-level array abstractions that automatically exploit the implicit parallelism, and low-level kernels where the programmer is responsible for doing so.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/XKGBAM/
Green
Tim Besard
PUBLISH
R8VHSS@@pretalx.com
-R8VHSS
Julius Tech Sponsored Talk
en
en
20220728T153000
20220728T154500
0.01500
Julius Tech Sponsored Talk
Julius offers an auto-scaling, low-code graph computing solution that allows firms to quickly build transparent and adaptable data analytics pipelines. Graph computing is an innovative technology that enables developers to organize pipelines as directed acyclic graphs (DAGs). With Julius, DAGs representing complex workflows are created by composing smaller modular DAGs, and can be applied to many enterprise use cases, including explainable ML, big data analytics, data visualization and transformation, AAD, and more.
For Julia users, we provide a dynamic platform to help developers make Julia more manageable and adoptable for enterprise computing. Engineers can produce enterprise scale solutions in a fraction of the time and cost.
PUBLIC
CONFIRMED
Platinum sponsor talk
https://pretalx.com/juliacon-2022/talk/R8VHSS/
Green
PUBLISH
UKMZJV@@pretalx.com
-UKMZJV
Annual Julia Developer Survey
en
en
20220728T154500
20220728T155500
0.01000
Annual Julia Developer Survey
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/UKMZJV/
Green
PUBLISH
AEXDKT@@pretalx.com
-AEXDKT
BlockDates: A Context-Aware Fuzzy Date Matching Solution
en
en
20220728T163000
20220728T170000
0.03000
BlockDates: A Context-Aware Fuzzy Date Matching Solution
The date is often a critical piece of information for safety data analysis. It provides context and is necessary for measurement of event frequency and time-based trends. In some data sources, such as narrative information about an event or subject, the date is provided in various non-standardized formats. The Bureau of Transportation Statistics uses data provided in narrative, free-text format to validate and supplement reported safety event data.
We developed the open-source software package BlockDates using the Julia programming language to allow the extraction of fuzzy-matched dates from a block of text. The tool leverages contextual information and draws on external date data to find the best date matches. For each identified date, multiple interpretations are proposed and scored to find the best fit. The output includes several record-level variables that help explain the result and prioritize error detection.
In a sample of 59,314 narrative records that include dates, the tool returned positive scores for 96.5% of records, meaning high confidence the selected date is valid. Of those with no matching date, 77.9% were recognized correctly as having no viable match.
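As a toy illustration of the multiple-interpretation idea (this is not BlockDates' actual API; the regex and format list are hypothetical), one can try several `DateFormat`s against each candidate substring and keep whichever parses:

```julia
using Dates

const FORMATS = DateFormat.(["m/d/yyyy", "yyyy-mm-dd", "U d, yyyy"])

function extract_dates(text::AbstractString)
    found = Date[]
    # candidate substrings: 3/14/2021, 2021-03-16, March 16, 2021
    pat = r"\d{1,2}/\d{1,2}/\d{4}|\d{4}-\d{2}-\d{2}|[A-Z][a-z]+ \d{1,2}, \d{4}"
    for m in eachmatch(pat, text)
        for f in FORMATS                  # first interpretation that parses wins
            d = tryparse(Date, m.match, f)
            if d !== nothing
                push!(found, d)
                break
            end
        end
    end
    return found
end

extract_dates("Incident on 3/14/2021; report filed 2021-03-16.")
```

BlockDates additionally scores competing interpretations with context, rather than taking the first parse.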
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/AEXDKT/
Green
Francis Smart
PUBLISH
XRDKTW@@pretalx.com
-XRDKTW
An introduction to BOMBs.jl.
en
en
20220728T170000
20220728T171000
0.01000
An introduction to BOMBs.jl.
We designed BOMBs.jl intending to contribute to the widespread adoption of mathematical models in the biological sciences. Users only need basic Julia knowledge to use the package; the only requirement is to know how dictionaries work. Users define the set of ordinary differential equations (and some other model information) in the contents of a dictionary, and BOMBs.jl automatically generates all the scripts necessary to simulate the model (including models with external time-varying inputs) and estimate parameters (MLE).
The package also generates all the scripts required to perform Bayesian inference using Stan or Turing.jl, leaving as much freedom as possible in prior definitions. BOMBs will also generate the scripts needed to perform optimal experimental design for model selection (driving pairs of competing model simulations as far apart as possible) and model inference (using model prediction uncertainty), aiming to reduce the time and resources allocated to in vivo experiments.
The package is documented, with functions generating the dictionary structures for the user, complementary functions explaining what the contents and structures of the dictionaries should be, a document briefly describing each function in the package, and a set of Jupyter notebooks showing how to use all the package functionalities.
The Jupyter Notebook of this talk is included in the GitHub repository of the package at https://github.com/csynbiosysIBioEUoE/BOMBs.jl/blob/main/Examples/JuliaCon2022Notebook.ipynb
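The dictionary-driven idea can be sketched as follows (a hypothetical schema for illustration; BOMBs.jl's actual dictionary fields differ, and its generated code uses proper ODE solvers rather than this toy Euler stepper):

```julia
# A model as a dictionary: states, parameters, and the ODE right-hand side.
model = Dict(
    :states => [:x],
    :pars   => [:k],
    :rhs    => (u, p, t) -> [-p[1] * u[1]],    # dx/dt = -k*x
)

function simulate(model, u0, p, tspan; dt = 1e-3)
    u, t = copy(u0), first(tspan)
    while t < last(tspan)
        u .+= dt .* model[:rhs](u, p, t)       # explicit Euler step
        t += dt
    end
    return u
end

u = simulate(model, [1.0], [2.0], (0.0, 1.0))  # x(1) ≈ exp(-2) for k = 2
```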
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/XRDKTW/
Green
David Gomez-Cabeza
PUBLISH
YED3MP@@pretalx.com
-YED3MP
Build, Test, Sleep, Repeat: Modernizing Julia's CI pipeline
en
en
20220728T171000
20220728T172000
0.01000
Build, Test, Sleep, Repeat: Modernizing Julia's CI pipeline
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YED3MP/
Green
Elliot Saba
Dilum Aluthge
PUBLISH
XMXJTD@@pretalx.com
-XMXJTD
Extreme Value Analysis in Julia with Extremes.jl
en
en
20220728T172000
20220728T175000
0.03000
Extreme Value Analysis in Julia with Extremes.jl
Risk assessment and impact analysis of extreme values is an important aspect of climate science. Recently, the Intergovernmental Panel on Climate Change (IPCC) reported that extreme meteorological events are expected to increase in frequency and intensity with climate change, leading to important impacts on many sectors of activity (IPCC 2013). Extreme value theory is the only statistical discipline that develops a rigorous framework for the study of extreme events. However, unlike in other programming languages commonly used by statisticians, tools for the analysis of extreme values have been lacking in Julia, despite the growing popularity of the language in the scientific community.
In this talk, we present [Extremes.jl](https://github.com/jojal5/Extremes.jl), a package that provides exhaustive high-performance functions for the statistical analysis of extreme values. In particular, methods for the usual block maxima and peaks-over-threshold models are implemented. Model parameter estimation can be achieved by using the probability weighted moments, the maximum likelihood, and the Bayesian paradigm. Non-stationary models are also implemented as well as diagnostic plots for assessing model accuracy and high quantile estimation.
The proposed package is designed to be used by the statistical community as well as by engineers who need estimates of extremes. We illustrate the package's functionalities by reproducing many of the results obtained by Coles (2001).
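For readers unfamiliar with the block maxima approach: the data-preparation step simply splits a series into blocks and retains each block's maximum, to which a generalized extreme value (GEV) distribution is then fitted (Extremes.jl handles the fitting; the sketch below covers only the blocking step):

```julia
# Split x into blocks of length b and keep each block's maximum.
block_maxima(x, b) = [maximum(@view x[i:min(i + b - 1, end)]) for i in 1:b:length(x)]

x = [1.0, 3.0, 2.0, 7.0, 5.0, 4.0, 9.0, 0.0]
block_maxima(x, 4)   # → [7.0, 9.0]
```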
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/XMXJTD/
Green
Gabriel Gobeil
PUBLISH
ZT7AZZ@@pretalx.com
-ZT7AZZ
Manopt.jl – Optimisation on Riemannian manifolds
en
en
20220728T175000
20220728T180000
0.01000
Manopt.jl – Optimisation on Riemannian manifolds
In many applications and optimization tasks, non-linear data appears naturally.
For example, data may be measured on the sphere, diffusion data can be captured as a signal or even as multivariate data of symmetric positive definite matrices, and orientations arise in electron backscatter diffraction (EBSD) data. Another example is fixed-rank matrices, appearing in matrix completion.
Working with these data, for example doing data interpolation and approximation, denoising, inpainting, or performing matrix completion, can usually be phrased as an optimization problem.
Manopt.jl (manoptjl.org) provides a set of optimization algorithms for problems posed on a Riemannian manifold. Built upon a generic optimization framework, together with the ManifoldsBase.jl interface for Riemannian manifolds, classical and recently developed methods are provided in an efficient implementation. Algorithms include the derivative-free Particle Swarm and Nelder–Mead algorithms, as well as classical gradient, conjugate gradient and stochastic gradient descent. Furthermore, quasi-Newton methods like a Riemannian L-BFGS and nonsmooth optimization algorithms like the Cyclic Proximal Point algorithm, a (parallel) Douglas–Rachford algorithm and a Chambolle–Pock algorithm are provided, together with several basic cost functions, gradients and proximal maps, as well as debug and record capabilities.
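To illustrate the common pattern behind these solvers (a hedged toy in plain Julia, not Manopt.jl's API): Riemannian gradient descent on the unit sphere projects the Euclidean gradient onto the tangent space at the current iterate, takes a step, and retracts back to the manifold, here by normalizing:

```julia
using LinearAlgebra

# Minimize f(p) = p[3] over the unit sphere S²; the minimizer is [0, 0, -1].
function sphere_descent(p; η = 0.1, iters = 200)
    for _ in 1:iters
        g = [0.0, 0.0, 1.0] .- p[3] .* p   # project ∇f = e₃ onto the tangent space at p
        p = normalize(p .- η .* g)          # retract the step back onto the sphere
    end
    return p
end

p = sphere_descent(normalize([1.0, 1.0, 1.0]))   # converges to ≈ [0, 0, -1]
```

Manopt.jl abstracts exactly these ingredients (gradient, projection, retraction) over many manifolds and solvers.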
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZT7AZZ/
Green
Ronny Bergmann
PUBLISH
HBYSDD@@pretalx.com
-HBYSDD
GatherTown -- Social break
en
en
20220728T180000
20220728T190000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/HBYSDD/
Green
PUBLISH
HBERVN@@pretalx.com
-HBERVN
PyCallChainRules.jl: Reusing differentiable Python code in Julia
en
en
20220728T190000
20220728T191000
0.01000
PyCallChainRules.jl: Reusing differentiable Python code in Julia
Automatic differentiation interfaces are rapidly converging, with [`functorch`](https://github.com/pytorch/functorch) and [`jax`](https://github.com/google/jax) on the Python side, and, on the Julia side, more explicit interfaces for dealing with gradients in [`Functors.jl`](https://github.com/FluxML/Functors.jl) and [Optimisers.jl](https://github.com/FluxML/Optimisers.jl) as well as more explicit machine learning layers in [Lux.jl](https://github.com/avik-pal/Lux.jl). While it is relatively easy to implement new functionality in Julia, Python remains the standard interface layer for most state-of-the-art functionality, especially GPU kernels. Even if it is not the most performant option, there is value in being able to call existing differentiable Python functions while a developer gradually implements equivalent functionality in Julia.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/HBERVN/
Green
Jayesh K. Gupta
PUBLISH
VWGBAL@@pretalx.com
-VWGBAL
Cosmological Emulators with Flux.jl and DifferentialEquations.jl
en
en
20220728T191000
20220728T192000
0.01000
Cosmological Emulators with Flux.jl and DifferentialEquations.jl
We are living in the Golden Age of Cosmology: over the 20th century, our comprehension of the Universe evolved rapidly, eventually leading to the establishment of a concordance model, the so-called ΛCDM model. Despite the remarkable success of this model, which is able to explain a great wealth of observations with few parameters, several questions remain unanswered.
What is the origin of the primordial fluctuations in the Universe? Is this due to some form of inflationary scenario?
What is Dark Matter? Is it a new particle, not present in the Standard Model of Particle Physics? Is it composed of Primordial Black Holes?
What is the nature of Dark Energy? Can the Cosmological Constant really explain its effects, or is this a sign of the breakdown of Einstein's theory of General Relativity?
In the next decade several galaxy surveys will start taking data, data that will be used to study the universe using different observational probes, such as weak lensing, galaxy clustering and their cross-correlation: studying these probes jointly will enhance the scientific outcome from galaxy surveys.
However, this improvement does not come for free.
The analysis of a galaxy survey requires the evaluation of a complicated theoretical model with about a hundred parameters. The computation of this theoretical prediction takes about 1-10 seconds; although this is not an expensive step per se, the computation is repeated 10^5-10^7 times, so a complete analysis requires either a very long time or dedicated hardware.
In order to overcome this issue, I am developing several surrogate models based on DifferentialEquations.jl and Flux.jl. The combination of these two packages is quite useful in this particular case: while several papers on this topic have usually relied solely on Neural Networks to build emulators, solving some of the differential equations involved in the model evaluation reduces the dimensionality of the emulated parameter space, yielding a more precise surrogate model. The result of this work is the development of several surrogate models with a precision of ~0.1% (matching the requirement for the scientific analysis) and a speed-up of about 100-1000x. The developed models will be released after publication of the related papers.
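The emulator idea itself is independent of neural networks; as a hedged stand-in for the Flux.jl models described above, a least-squares polynomial fit to precomputed evaluations of an "expensive" function shows the pattern (the function and degree here are made up for illustration):

```julia
using LinearAlgebra

expensive(θ) = sin(3θ) + 0.5θ^2          # stand-in for a 1-10 s cosmology code

θs = range(-1, 1; length = 50)           # precomputed training grid
Φ  = [θ^k for θ in θs, k in 0:7]         # polynomial feature matrix
w  = Φ \ expensive.(θs)                  # least-squares surrogate weights

surrogate(θ) = sum(w[k + 1] * θ^k for k in 0:7)   # cheap to evaluate
```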
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VWGBAL/
Green
Marco Bonici
PUBLISH
UPQFKL@@pretalx.com
-UPQFKL
Automatic Differentiation for Solid Mechanics in Julia
en
en
20220728T192000
20220728T193000
0.01000
Automatic Differentiation for Solid Mechanics in Julia
The standard implementation of the Finite Element Method for solid mechanics is based on discretizing the domain into elements and solving the weak form of the point-wise Cauchy equilibrium equations. This process involves the evaluation of the components of complex tensorial quantities, such as the stress and the stiffness tensor, that are required to calculate the residual force vector and the tangent stiffness matrix. On the other hand, the residual force vector and the tangent stiffness matrix coincide, both formally and numerically, with the gradient and the Hessian of the free energy of the system; therefore they can be evaluated directly by taking automatic derivatives of this quantity. The advantage, in this case, is that the free energy is a scalar quantity, which is significantly simpler to evaluate.
In particular, forward-mode AD seems particularly suited to the solution of solid mechanics FE problems. Even though FE models can have a very large number of degrees of freedom (DoFs), the free energy of the system in a given configuration is obtained as a sum over the elements of the mesh, and only the degrees of freedom of a single element are involved in the calculation of its contribution to the global residual force vector and tangent stiffness matrix. We therefore only deal with a limited number of independent variables at a time when evaluating individual element contributions. In this situation a forward-mode automatic differentiation scheme implemented through hyper-dual numbers is very efficient for the calculation of higher-order derivatives of arbitrarily complicated scalar expressions.
The definition of a hyper-dual number system in Julia is particularly straightforward: a structure capable of storing the gradient and the Hessian of a quantity, alongside its value, can be defined simply as
```julia
struct D2{T,N,M} <: Number
    v::T
    g::NTuple{N,T}
    h::NTuple{M,T}
end
```
with
- `v` the value of the variable,
- `g` the components of the gradient of `v`,
- `h` the Hessian,
and where the type parameters are
- `T` the type of the values,
- `N` the number of independent variables controlling the gradient,
- `M = N(N+1)/2`, the number of independent elements in the Hessian; since the Hessian is symmetric, only half of it is stored and operated on.
Thanks to the multiple dispatch feature of Julia, it is sufficient to implement the needed mathematical operators and functions for this new numeric type, and the same code that evaluates an expression will also evaluate its derivatives, without any change to the original source code. In addition, the use of macros for implementing the operations on the gradient and Hessian tuples makes the resulting code particularly efficient.
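The mechanism can be illustrated with a simplified first-order dual number (a sketch for exposition only, not the package's actual `D2` implementation); the same pattern extends to the second-order type above with gradient and Hessian tuples:

```julia
# Simplified first-order sketch: value plus one derivative component.
struct D1{T} <: Number
    v::T   # value
    g::T   # derivative
end

# Overloading the arithmetic operators propagates derivatives automatically.
Base.:+(a::D1, b::D1) = D1(a.v + b.v, a.g + b.g)
Base.:*(a::D1, b::D1) = D1(a.v * b.v, a.g * b.v + a.v * b.g)

f(x) = x * x + x          # unchanged user code
y = f(D1(3.0, 1.0))       # seed the derivative with 1
# y.v == 12.0 and y.g == 7.0, i.e. f(3) and f'(3)
```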
In this talk we will present [AD4SM.jl](https://github.com/avigliotti/AD4SM.jl/), a package that implements a second-order hyper-dual number system for evaluating both the gradient and Hessian of the free energy of a deformable body, allowing the solution of nonlinear equilibrium problems. The implementation of the dual number system was inspired by the [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) package, but here the Hessian is explicitly introduced, since it is essential for the evaluation of the tangent stiffness matrix, and its symmetry is exploited to maximize efficiency. A number of examples are also presented that illustrate how complex nonlinear problems, with nontrivial constraints and boundary conditions, in both two and three dimensions, can be numerically stated and solved [1].
#### References
[1] [Vigliotti, A., Auricchio, F., Automatic Differentiation for Solid Mechanics, Archives of Computational Methods in Engineering, Volume 28, Issue 3, Pages 875 - 895 May 2021](https://rdcu.be/b0yx2)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/UPQFKL/
Green
Andrea Vigliotti
PUBLISH
G9SQQD@@pretalx.com
-G9SQQD
ChainRules.jl meets Unitful.jl: Autodiff via Unit Analysis
en
en
20220728T193000
20220728T194000
0.01000
ChainRules.jl meets Unitful.jl: Autodiff via Unit Analysis
`Unitful.jl` provides efficient type-level support for dimensional quantities we encounter when simulating physical systems. Likewise, `ChainRules.jl` forms the backbone of robust but easily-extensible autodifferentiation (AD) systems. Exploring these two systems together yields an insightful look at Julia's rule-based AD. Calculus, dimensional analysis, and physical intuition are sufficient to explain how `ChainRules.jl` works by building AD rules for `Unitful.jl`.
The versatility of the `ChainRules.jl` ecosystem arises from implementing and extending a ruleset for fundamental functions, such as `*` and `inv`, as `rrule`s and `frule`s. What can often seem like a mysterious black box that computes derivatives is actually composed of many individual `rrule`s or `frule`s built on rudimentary calculus.
These `rrule`s and `frule`s are interpreted by thinking about differentiation as a problem of physical dimensions, and `Unitful.jl` is used to confirm these findings. However, arithmetic between `Unitful.jl` quantities is not immediately compatible with `ChainRules.jl`-based AD. This talk presents the pertinent AD rules to enable basic `ChainRules.jl` compatibility. These rules are also used as a lens to understand how to read and write AD rules for the `ChainRules.jl` ecosystem.
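The `rrule` pattern described above can be sketched without any packages (names here are illustrative, not `ChainRules.jl`'s actual API): the primal result is returned together with a pullback closure that maps the output cotangent back to input cotangents, and, dimensionally, `z̄ * y` carries the units of `∂z/∂x`, exactly as unit analysis predicts.

```julia
# Package-free sketch of the rrule pattern for z = x * y.
function rrule_times(x, y)
    z = x * y
    pullback(z̄) = (z̄ * y, z̄ * x)   # cotangents with respect to x and y
    return z, pullback
end

z, pb = rrule_times(3.0, 4.0)
x̄, ȳ = pb(1.0)   # seeding with 1 recovers the partial derivatives
# z == 12.0, x̄ == 4.0, ȳ == 3.0
```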
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/G9SQQD/
Green
Sam Buercklin
PUBLISH
PDCANR@@pretalx.com
-PDCANR
Using Optimization.jl to seek the optimal optimiser in SciML
en
en
20220728T194000
20220728T201000
0.03000
Using Optimization.jl to seek the optimal optimiser in SciML
Optimization.jl wraps most of the major optimisation packages currently available in Julia, namely BlackBoxOptim, CMAEvolutionStrategy, Evolutionary, Flux, GCMAES, MultistartOptimization, Metaheuristics, NOMAD, NLopt, Nonconvex, Optim and QuadDIRECT. Additionally, the integration with ModelingToolkit and MathOptInterface allows it to leverage the state-of-the-art symbolic manipulation capabilities offered by these packages, specifically to construct the objective, constraints, Jacobian, and Hessian efficiently. This talk will show how to use the Optimization.jl package and its various AD backends in combination with various optimiser backends. The interface is broken into three components: `OptimizationFunction`, `OptimizationProblem`, and `solve` on an `OptimizationProblem`. We will cover each of these components and discuss the pros and cons of the choices available for specific problems. We will also show how such a flexible system is necessary for scientific machine learning by demonstrating some popular SciML models on real-world problems.
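As a rough sketch of the three-component interface (assuming Optimization.jl with the Optim and ForwardDiff backends installed), minimizing the Rosenbrock function looks like this:

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

# Rosenbrock with parameters p; the minimum is at u = [p[1], p[1]^2].
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

f    = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(f, zeros(2), [1.0, 100.0])
sol  = solve(prob, BFGS())   # swap BFGS() for any other wrapped optimiser
```

Swapping the AD backend or the optimiser is a one-argument change, which is the point of the common interface.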
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/PDCANR/
Green
Vaibhav Dixit
PUBLISH
VLZBNZ@@pretalx.com
-VLZBNZ
GatherTown -- Social break
en
en
20220728T203000
20220728T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/VLZBNZ/
Green
PUBLISH
MDCJKK@@pretalx.com
-MDCJKK
Adaptive Radial Basis Function Surrogates in Julia
en
en
20220728T123000
20220728T130000
0.03000
Adaptive Radial Basis Function Surrogates in Julia
Active learning algorithms have been applied to fine-tune surrogate models. In this talk, we analyze these algorithms in the context of dynamical systems with a large number of input parameters. The talk will demonstrate:
1. An adaptive learning algorithm for radial basis functions
2. Its efficacy on dynamical systems with high dimensional input parameter spaces
This will make use of Surrogates.jl and the rest of the SciML ecosystem.
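As a package-free illustration of the underlying building block, a 1D Gaussian radial basis function interpolant can be sketched in a few lines (names are illustrative, not Surrogates.jl's actual API):

```julia
using LinearAlgebra

# Gaussian kernel with shape parameter ε.
φ(r; ε = 1.0) = exp(-(ε * r)^2)

# Fit weights so the surrogate interpolates the samples (xs, ys) exactly.
function rbf_surrogate(xs, ys)
    Φ = [φ(abs(xi - xj)) for xi in xs, xj in xs]
    w = Φ \ ys
    return x -> sum(w[j] * φ(abs(x - xs[j])) for j in eachindex(xs))
end

xs = collect(0.0:0.5:2.0)
s  = rbf_surrogate(xs, sin.(xs))   # s reproduces sin at the sample nodes
```

Adaptive variants add new sample points where the surrogate is most uncertain, which is the topic of the talk.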
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/MDCJKK/
Red
Ranjan Anantharaman
PUBLISH
NLDVYU@@pretalx.com
-NLDVYU
Lux.jl: Explicit Parameterization of Neural Networks in Julia
en
en
20220728T130000
20220728T131000
0.01000
Lux.jl: Explicit Parameterization of Neural Networks in Julia
`Lux.jl` is a neural network framework built entirely from pure functions to make it both compiler and automatic differentiation friendly. Relying on a straightforward pure-function API ensures there are no reference issues to debug, lets compilers optimize the code as much as possible, and makes it compatible with Symbolics/XLA/etc. without any tricks.
Repository: https://github.com/avik-pal/ExplicitFluxLayers.jl/
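The explicit-parameterization idea can be sketched without the package (names here are illustrative, not Lux's actual API): the layer object is immutable and parameter-free, and the forward pass is a pure function of input, parameters, and state.

```julia
# A layer type that carries only its shape, never its parameters.
struct PlainDense
    in::Int
    out::Int
end

# Parameters are created outside the layer and passed in explicitly.
init(l::PlainDense) = (W = ones(l.out, l.in), b = zeros(l.out))

# Pure forward pass: (input, parameters, state) in, (output, state) out.
apply(l::PlainDense, x, ps, st) = (ps.W * x .+ ps.b, st)

layer = PlainDense(2, 3)
ps    = init(layer)
y, st = apply(layer, ones(2), ps, (;))
# y == [2.0, 2.0, 2.0]; the same ps always yields the same output
```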
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/NLDVYU/
Red
Avik Pal
PUBLISH
MVRVTP@@pretalx.com
-MVRVTP
GraphPPL.jl: a package for specification of probabilistic models
en
en
20220728T131000
20220728T132000
0.01000
GraphPPL.jl: a package for specification of probabilistic models
**Background**
Bayesian modeling has become a popular framework for important real-time machine learning applications, such as speech recognition and robot navigation. Unfortunately, many useful probabilistic time-series models contain a large number of latent variables, and consequently real-time Bayesian inference based on Monte Carlo sampling or other black-box methods in these models is not feasible.
**Problem statement**
Existing packages for automated Bayesian inference in the Julia language ecosystem, such as Turing.jl, Stan.jl, and Soss.jl, support probabilistic model specification by well-designed macro-based meta languages. These packages assume that inference is executed by black-box variational or sampling-based methods. In principle, for conjugate probabilistic time-series models, message passing-based variational inference by minimization of a constrained Bethe Free Energy yields approximate inference solutions obtained with cheaper computational costs. In this contribution, we develop a user-friendly and comprehensive meta language for specification of both a probabilistic model and variational inference constraints that balance accuracy of inference results with computational costs.
**Solution proposal**
The GraphPPL.jl package implements a user-friendly specification language for both the model and the inference constraints. GraphPPL.jl exports the `@model` macro to create a probabilistic model in the form of a factor graph that is compatible with ReactiveMP.jl's reactive message passing-based inference engine. To enable fast and accurate inference, all message update rules default to precomputed analytical solutions. The ReactiveMP.jl package already implements a selection of precomputed rules. If an analytical solution is not available, then the GraphPPL.jl package provides ways to tweak, relax, and customize local constraints in selected parts of the factor graph. To simplify this process, the package exports the `@constraints` macro to specify extra factorization and form constraints on the variational posterior [1]. For advanced use cases, GraphPPL.jl exports the `@meta` macro that enables custom message passing inference modifications for each node in a factor graph representation of the model. This approach enables local approximation methods only if necessary and allows for efficient variational Bayesian inference.
**Evaluation**
Over the past two years, our probabilistic modeling ecosystem comprising GraphPPL.jl, ReactiveMP.jl, and Rocket.jl has been battle-tested with many sophisticated models, leading to several publications in high-ranked journals such as Entropy [1] and Frontiers [2], and at conferences such as MLSP-2021 [3] and ISIT-2021 [4]. The current contribution enables a user-friendly approach to very sophisticated Bayesian modeling problems.
**Conclusions**
We believe that a user-friendly specification of efficient Bayesian inference solutions for complex models is a key factor to expedite application of Bayesian methods. We developed a complete ecosystem for running efficient, fast, and reactive variational Bayesian inference with a user-friendly specification language for the probabilistic model and variational constraints. We are excited to present GraphPPL.jl as a part of our complete variational Bayesian inference ecosystem and discuss the advantages and drawbacks of this approach.
**References**
[1] Ismail Senoz, Thijs van de Laar, Dmitry Bagaev, Bert de Vries. Variational Message Passing and Local Constraint Manipulation in Factor Graphs, Entropy. Special Issue on Approximate Bayesian Inference, 2021.
[2] Albert Podusenko, Bart van Erp, Magnus Koudahl, Bert de Vries. AIDA: An Active Inference-Based Design Agent for Audio Processing Algorithms, Frontiers in Signal Processing, 2022.
[3] Albert Podusenko, Bart van Erp, Dmitry Bagaev, Ismail Senoz, Bert de Vries. Message Passing-Based Inference in the Gamma Mixture Model, 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP).
[4] Ismail Senoz, Albert Podusenko, Semih Akbayrak, Christoph Mathys, Bert de Vries. The Switching Hierarchical Gaussian Filter, 2021 IEEE International Symposium on Information Theory (ISIT).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/MVRVTP/
Red
Dmitry Bagaev
PUBLISH
8JWMG8@@pretalx.com
-8JWMG8
TuringGLM.jl: Bayesian Generalized Linear models using @formula
en
en
20220728T132000
20220728T133000
0.01000
TuringGLM.jl: Bayesian Generalized Linear models using @formula
# TuringGLM
TuringGLM makes it easy to specify Bayesian **G**eneralized **L**inear **M**odels using the formula syntax and returns an instantiated [Turing](https://github.com/TuringLang/Turing.jl) model.
Heavily inspired by [brms](https://github.com/paul-buerkner/brms/) (uses RStan or CmdStanR) and [bambi](https://github.com/bambinos/bambi) (uses PyMC3).
## `@formula`
The `@formula` macro is extended from [`StatsModels.jl`](https://github.com/JuliaStats/StatsModels.jl) along with [`MixedModels.jl`](https://github.com/JuliaStats/MixedModels.jl) for the random-effects (a.k.a. group-level predictors).
A formula is written with the `@formula` macro: specify the dependent variable, followed by a tilde `~`, then the independent variables separated by plus signs `+`.
Example:
```julia
@formula(y ~ x1 + x2 + x3)
```
Moderations/interactions can be specified with the asterisk sign `*`, e.g. `x1 * x2`.
This will be expanded to `x1 + x2 + x1:x2`, since, following the principle of hierarchy,
the main effects must be included along with the interaction effects. Here `x1:x2`
means that the values of `x1` are multiplied (interacted) with the values of `x2`.
Random-effects (a.k.a. group-level effects) can be specified with the `(term | group)` inside
the `@formula`, where `term` is the independent variable and `group` is the **categorical**
representation (i.e., either a column of `String`s or a `CategoricalArray` in `data`).
You can specify a random-intercept with `(1 | group)`.
Example:
```julia
@formula(y ~ (1 | group) + x1)
```
## Data
TuringGLM supports any `Tables.jl`-compatible data interface.
The most popular ones are `DataFrame`s and `NamedTuple`s.
## Supported Models
TuringGLM supports non-hierarchical and hierarchical models.
For hierarchical models, only single random-intercept hierarchical models are supported.
For likelihoods, `TuringGLM.jl` supports:
* `Gaussian()` (the default if not specified): linear regression
* `Student()`: robust linear regression
* `Logistic()`: logistic regression
* `Pois()`: Poisson count data regression
* `NegBin()`: negative binomial robust count data regression
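Putting the pieces together, a minimal usage sketch (with synthetic data, assuming TuringGLM.jl and DataFrames.jl are installed) looks like:

```julia
using TuringGLM, DataFrames

# Synthetic data purely for illustration.
df = DataFrame(y = randn(50), x1 = randn(50))

# turing_model is TuringGLM's entry point; with no likelihood specified,
# Gaussian() (linear regression) is the default.
model = turing_model(@formula(y ~ x1), df)
# The returned Turing model can then be sampled as usual, e.g. with NUTS.
```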
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/8JWMG8/
Red
Jose Storopoli
PUBLISH
VGEWU7@@pretalx.com
-VGEWU7
Text Segmentation with Julia
en
en
20220728T133000
20220728T134000
0.01000
Text Segmentation with Julia
TextSegmentation.jl (https://github.com/kawasaki-kento/TextSegmentation.jl) provides a Julia implementation of unsupervised text segmentation methods. Text segmentation is a method for dividing an unstructured document containing varied content into several parts according to topic. A typical use is pre-processing in natural language processing. Natural language processing includes tasks such as summarization, extraction, and question answering, and achieving higher accuracy on them requires text preprocessing. Text segmentation helps improve the accuracy of those tasks by allowing documents to be segmented according to their topics. The package provides the following three segmentation methods:
- TextTiling
+ TextTiling is a method for finding segment boundaries based on lexical cohesion and similarity between adjacent blocks.
- C99
+ C99 is a method for determining segment boundaries by divisive clustering.
- TopicTiling
+ TopicTiling is an extension of TextTiling that uses the topic IDs of words in a sentence to calculate the similarity between blocks.
The planned presentation is as follows:
1. Introduction
+ I will introduce the purpose of TextSegmentation.jl and what it is useful for.
2. Text segmentation
+ Specific methods of text segmentation will be explained.
3. Overview of the package
+ I will introduce how to use the package and how to perform the tasks.
4. Example
+ Using simple text data, I will show how to actually perform text segmentation with the package.
5. Future work
+ I will share future prospects for TextSegmentation.jl.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VGEWU7/
Red
Kento Kawasaki
PUBLISH
VWVY9S@@pretalx.com
-VWVY9S
Recommendation.jl: Modeling User-Item Interactions in Julia
en
en
20220728T134000
20220728T135000
0.01000
Recommendation.jl: Modeling User-Item Interactions in Julia
This talk demonstrates **Recommendation.jl**, a Julia package for building recommender systems, with a special emphasis on its design principles and evaluation framework. While the package was first presented at JuliaCon 2019 to collect early feedback from the community, this talk highlights how the implementation has evolved since then and gives a preview of the upcoming "v1.0.0" major release, accompanied by a proceedings paper.
An underlying question for the audience throughout the talk is: *What should "good" recommender systems look like?* On one hand, improving the accuracy of recommendation with sophisticated algorithms is indeed desirable. At the same time, however, recommendation is not just a machine learning problem, and non-accuracy aspects of these systems are equally or even more important in practice; we particularly discuss the importance of decoupling data from business logic and validating data/model quality against a diverse set of decision criteria.
First of all, the core of a recommendation engine relies largely on simple math and matrix computation over sparse user-item data, where we can take full advantage of numerical computing methods. Thus, Julia is a great choice for efficiently and effectively implementing an end-to-end recommendation pipeline, which typically consists of the following sub-tasks:
1. preprocessing user-item data;
2. building a recommendation model;
3. evaluating a ranked list of recommended contents;
4. post-processing the recommendation.
Here, Recommendation.jl provides a unified abstraction layer, namely `DataAccessor`, which represents user-item interactions in an accessible form. Since data for recommender systems is readily standardizable as a collection of user, item, and contextual features, the common interface helps us to follow the separation of concerns principle and ensure the easiness and reliability of data manipulation. To be more precise, raw data is always converted into a `DataAccessor` instance at the data preprocessing phase (Phase#1) with proper validation (e.g., data type check, missing value handling), and hence the subsequent steps can simply access the data (or metadata) through the instance without worrying about unexpected input.
Moreover, when it comes to generating recommendations at later phases (Phase#2-4), Recommendation.jl enables developers to optimize recommenders against not only standard accuracy metrics (e.g., recall, precision) but non-accuracy measures such as novelty, diversity, and serendipity. Even though the idea of diverse or serendipitous recommendation is not new in the literature, the topic has rapidly gained traction as society realizes the importance of fairness in intelligent systems. In this talk, we dive deep into the concept of these non-accuracy metrics and their implementation in Julia.
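As a package-free illustration of the two kinds of metrics discussed (names are illustrative, not Recommendation.jl's actual API), recall@k measures accuracy while novelty rewards recommending less popular items:

```julia
using Statistics

# Accuracy: fraction of relevant items recovered in the top-k recommendations.
recall_at_k(recommended, relevant, k) =
    length(intersect(recommended[1:k], relevant)) / length(relevant)

# Non-accuracy: mean self-information of recommended items; items consumed
# by fewer of the n_users are more novel (more "surprising").
novelty(recommended, popularity, n_users) =
    mean(-log2(popularity[i] / n_users) for i in recommended)

r = recall_at_k([1, 2, 3, 4], Set([2, 4, 5]), 3)    # one of three relevant hit
n = novelty([1, 2], Dict(1 => 1, 2 => 2), 4)        # in bits
```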
Last but not least, a couple of new recommendation models have recently been added to the package, including matrix factorization with Bayesian personalized ranking loss and factorization machines. We plan to provide comprehensive benchmark results for the supported recommender-metric pairs to support trade-off discussions. Furthermore, we compare Recommendation.jl with other publicly available recommendation toolkits such as LensKit (Python), MyMediaLite (C#), and LibRec (Java).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VWVY9S/
Red
Takuya Kitazawa
PUBLISH
JKKXTS@@pretalx.com
-JKKXTS
G Research Sponsored Talk
en
en
20220728T135000
20220728T135500
0.00500
G Research Sponsored Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/JKKXTS/
Red
PUBLISH
JC3GKD@@pretalx.com
-JC3GKD
Pumas Sponsored Talk
en
en
20220728T135500
20220728T140000
0.00500
Pumas Sponsored Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/JC3GKD/
Red
PUBLISH
ZZ3HGF@@pretalx.com
-ZZ3HGF
HPC sparse linear algebra in Julia with PartitionedArrays.jl
en
en
20220728T163000
20220728T164000
0.01000
HPC sparse linear algebra in Julia with PartitionedArrays.jl
PartitionedArrays (https://github.com/fverdugo/PartitionedArrays.jl) is a distributed sparse linear algebra engine that allows Julia programmers to easily prototype and deploy large computations on distributed-memory, high performance computing (HPC) platforms. The package provides a data-oriented parallel implementation of vectors and sparse matrices, ready to use in several applications, including (but not limited to) the discretization of partial differential equations (PDEs) with grid-based algorithms such as finite differences, finite volumes, or finite element methods. The long-term goal of this package is to provide a Julia alternative to the parallel vectors and sparse matrices available in well-known distributed algebra packages such as PETSc or Trilinos. It also aims at providing the basic building blocks for implementing other linear algebra algorithms in Julia, such as distributed sparse linear solvers. We started this project motivated by the fact that using bindings to PETSc or Trilinos for parallel computations in Julia can be cumbersome in many situations. One is forced to use MPI as the parallel execution model, and drivers need to be executed non-interactively with commands like `mpiexec -n 4 julia input.jl`, which poses serious difficulties for the development process. Some typos and bugs can be debugged interactively with a single MPI rank in the Julia REPL, but genuine parallel bugs often need to be debugged non-interactively using `mpiexec`. In this case, one cannot use development tools such as Revise or Debugger, which is a serious limitation, especially for complex codes that take a long time to JIT-compile, since one ends up running code in fresh Julia sessions. To overcome these limitations, PartitionedArrays provides a data-oriented parallel execution model that allows one to implement parallel algorithms in a generic way, independently of the underlying message passing software that is eventually used at the production stage.
At this moment, the library provides two backends for running the generic parallel algorithms: a sequential backend and an MPI backend. In the former, the parallel data structures are logically parallel from the user perspective, but they are stored in a conventional (sequential) Julia session using standard serial arrays. The sequential backend does not mean the data is kept in a single part: the data can be split into an arbitrary number of parts, but they are processed one after the other in a standard sequential Julia process. This configuration is especially handy for developing new parallel codes. The sequential backend runs in a standard Julia session, so one can use tools like Revise and Debugger, which dramatically improves the developer experience. Once the code works with the sequential backend, it can be automatically deployed on a supercomputer via the MPI backend. In the latter case, the data layout of the distributed vectors and sparse matrices is compatible with the linear solvers provided by libraries like PETSc or MUMPS. This allows one to use these libraries for solving large systems of linear algebraic equations efficiently until competitive Julia alternatives are available. The API of PartitionedArrays allows the programmer to write efficient parallel algorithms since it enables fine control over data exchanges. In particular, asynchronous communication directives are provided, making it possible to overlap communication and computation. This is useful, e.g., to efficiently implement the distributed sparse matrix-vector product, where the product on the owned entries can be overlapped with the communication of the off-processor vector components. Application codes using PartitionedArrays, such as the parallel finite element library GridapDistributed, have shown excellent strong and weak scaling results up to tens of thousands of CPU cores.
In the near future, we plan to add hierarchical/multilevel parallel data structures to the library to extend its support to multilevel parallel algorithms such as multigrid, multilevel domain decomposition, and multilevel Monte Carlo methods. In this talk, we will provide an overview of the main components of the library and show users how to get started by means of simple examples. PartitionedArrays can be easily installed from the official Julia language package registry and is distributed under the MIT license.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZZ3HGF/
Red
Francesc Verdugo
ALBERTO FRANCISCO MARTIN HUERTAS
PUBLISH
CB3PEY@@pretalx.com
-CB3PEY
Calling Julia from MATLAB using MATDaemon.jl
en
en
20220728T164000
20220728T165000
0.01000
Calling Julia from MATLAB using MATDaemon.jl
MATLAB is a popular programming language in the scientific community. Unfortunately, it is proprietary, closed-source, and expensive. Within academia, purchasing MATLAB licenses can feel like a tax on research development, especially given the growing global trends towards open, reproducible, and transparent science. For this reason, many scientists are switching as much as possible to open software development using open-source programming languages such as Python, R, and Julia.
Transitioning between programming languages can be a daunting task, however. Beyond the obvious requirement of learning a new language, translating and rewriting existing codebases in a new language can be difficult to do in a modular fashion. Modularity is important in order to be able to translate and test as you go. Indeed, it is often not even necessary to translate an entire library to gain nontrivial improvements; for example, by rewriting only performance-critical code paths.
A convenient way to ease such transitions is through language interoperability. From the Julia side, calling out to the MATLAB C API has been possible for many years using the fantastic `MATLAB.jl` package, made possible by Julia’s support for calling C code. Calling Julia from MATLAB, however, is more complex for several reasons. The first approach one might try is to use Julia’s C API in conjunction with the MATLAB C/C++ MEX API in order to build a MEX – that is, (M)ATLAB (EX)ecutable – function which embeds Julia. This is the approach taken by the `Mex.jl` Julia package. When this approach works, it is extremely effective: calling Julia is convenient with little overhead. Unfortunately, writing scripts to compile MEX functions across operating systems and MATLAB versions is notoriously fragile. Indeed, the current version of `Mex.jl` only supports Julia v1.5.
For these reasons, we created `MATDaemon.jl` (https://github.com/jondeuce/MATDaemon.jl). This package aims to call Julia from MATLAB in as simple a manner as possible while being robust across both Julia and MATLAB versions – it should “just work”. `MATDaemon.jl` does this by communicating with Julia via writing MATLAB variables to disk as `.mat` files. These variables are then read by Julia using the `MAT.jl` package. The Julia function indicated is then called and the output variables are similarly written to `.mat` files and read back by MATLAB. Naturally, this comes at the cost of some overhead which would not be present when using the MEX approach. In order to alleviate some of the overhead, a Julia daemon is created using the `DaemonMode.jl` package (https://github.com/dmolina/DaemonMode.jl). This helps to avoid Julia startup time by running Julia code on a persistent server. While this package is still not recommended for use in tight performance critical loops due to overhead on the order of seconds, it is certainly fast enough for use in rewriting larger bottlenecks and for interactive use in the MATLAB REPL.
Due to its simplicity, `MATDaemon.jl` is easy to use: just download the jlcall.m MATLAB function from the GitHub repository (https://github.com/jondeuce/MATDaemon.jl/blob/master/api/jlcall.m) and call Julia. For example, running `jlcall('sort', {rand(2,5)}, struct('dims', int64(2)))` will sort a 2x5 MATLAB double array along the second dimension. A temporary workspace folder `.jlcall` is created containing local `Project.toml` and `Manifest.toml` files so as not to pollute the global Julia environment. And that’s it! See the documentation in the GitHub repository for example usage, including loading local Julia projects and Base modules, running setup scripts, and more.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/CB3PEY/
Red
Jonathan Doucette
PUBLISH
RUQAHC@@pretalx.com
-RUQAHC
LinearSolve.jl: because A\b is not good enough
en
en
20220728T165000
20220728T170000
0.01000
LinearSolve.jl: because A\b is not good enough
We tell people that to solve Ax=b, you use A\b. But in reality, that is insufficient for many problems. For dense matrices, LU-factorization, QR-factorization, and SVD-factorization approaches are all possible ways to solve this, each making an engineering trade-off between performance and accuracy. While with Julia's Base you can use lu(A)\b, qr(A)\b, and svd(A)\b, this idea does not scale to all of the cases that can arise. For example, Krylov subspace methods require you set a tolerance `tol`... how do you expect to do that? krylov(A;tol=1e-7)\b? No, get outta here, the libraries don't support that. And even if they did, this still isn't as efficient as... you get the point.
This becomes a major issue with packages. Say Optim.jl uses a linear solve within its implementation of BFGS (it does). Let's say the code is A\b. Now you know in your case A is a sparse matrix which is irregular, and thus KLU is 5x faster than the UMFPACK that Julia's \ defaults to. How do you tell Optim.jl to use KLU instead? Oops, you can't. But wouldn't it be nice if you could just pass `linsolve = KLUFactorization()` and have it do that?
Okay, we can keep belaboring the point, which is that the true interface of linear solvers needs to have many features and performance, and it needs to be a multiple dispatching interface so that it can be used within other packages and have the algorithms swapped around by passing just one type. What a great time for the SciML ecosystem to swoop in! This leads us to LinearSolve.jl, a common interface for linear solver libraries. What we will discuss is the following:
- Why there are so many different linear solver methods. What are they used for? When are which ones recommended? Short list: LU, QR, SVD, RecursiveFactorization.jl (pure Julia, and the fastest?), GPU-offload LU, UMFPACK, KLU, CG, GMRES, Pardiso, ...
- How do you swap between linear solvers in the LinearSolve.jl interface. It's easy: solve(prob,UMFPACKFactorization()) vs solve(prob,KLUFactorization()).
- How do you efficiently reuse factorizations? For example, the numerical factorization stage can be reused when swapping out `b` if doing many `A\b` operations. But did you know that if A is a sparse matrix you only need to perform the symbolic factorization stage once per sparsity pattern? How do you do all of this efficiently? LinearSolve.jl has a caching interface that automates all of this!
- What is a preconditioner? How do you use preconditioners?
We will showcase examples where stiff differential equation solving is accelerated by over 20x just by swapping out to the correct linear solvers (https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/). This will showcase that it's not a small detail, and in fact, every library should adopt this swappable linear solver interface.
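As a rough sketch of the interface described above (assuming LinearSolve.jl is installed), swapping solvers really is a one-argument change:

```julia
using LinearSolve

# A small well-conditioned system, purely for illustration.
A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
prob = LinearProblem(A, b)

sol_lu = solve(prob, LUFactorization())    # dense LU
sol_kr = solve(prob, KrylovJL_GMRES())     # same problem, a Krylov method
```

A package like Optim.jl accepting a `linsolve` argument could then forward whichever algorithm type the user passes, which is exactly the swappability argument made above.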
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RUQAHC/
Red
Chris Rackauckas
PUBLISH
HCEGRV@@pretalx.com
-HCEGRV
CALiPPSO.jl: Jamming of Hard-Spheres via Linear Optimization
en
en
20220728T170000
20220728T171000
0.01000
CALiPPSO.jl: Jamming of Hard-Spheres via Linear Optimization
You can find the complete description of our algorithm in [this preprint](https://arxiv.org/abs/2203.05654).
The package can be installed directly from Julia's package manager, and the documentation is available [here](https://rdhr.github.io/CALiPPSO.jl/dev/index.html).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/HCEGRV/
Red
Rafael Diaz
PUBLISH
7H77WX@@pretalx.com
-7H77WX
Writing a GenericArpack library in Julia.
en
en
20220728T171000
20220728T174000
0.03000
Writing a GenericArpack library in Julia.
## Summary of key points
Arpack is a library for iteratively computing eigenvalues and eigenvectors of a linear operator. It has been widely used, debugged, and implemented in many technical computing packages. The goal of the `GenericArpack.jl` package is to create a Julia translation of the Arpack methods. (Currently, only the symmetric solver has been translated.)
The new library has zero dependence on the system-level BLAS, allowing it to support matrix element types beyond those in Arpack, such as those in `DoubleFloats.jl` and `MultiFloats.jl`. Other advantages of the `GenericArpack.jl` package include thread safety and using Julia features to optionally avoid the reverse communication interface. Despite not using the system BLAS, the goal was to make the Julia output equivalent to an alternative compilation of the Arpack source code for `Float64` types.
One anticipated future use is giving WebASM Julia implementations tools for iterative eigensolvers. This would enable a wide variety of in-browser analyses, including finite elements, pseudospectra, and spectral graph theory.
## Talk overview
The talk will discuss some interesting challenges that arose:
- representing state information in Julia for the statically located Fortran `save` variables in Arpack.
- sensitivity to the `norm` operation and implementing an exact replacement for the OpenBLAS `dnrm2` on x86 architectures that uses 80-bit floating point features of x86 CPUs (without calling the OpenBLAS function)
- getting the same random initialization vectors as Arpack (i.e. porting the Fortran `dlarnv` function)
- designing an interface that allows us to compare results between `Arpack_jll` and `GenericArpack.jl` at internal methods in Arpack call chains.
- how much code it took to translate the symmetric eigensolver to a Hermitian eigensolver (which is not in Arpack)
It will also discuss some tools created that may be useful elsewhere
- a tridiagonal eigensolver that only computes a single row of the eigenvector matrix (the Arpack `dstqrb` function)
- allocation analysis that automatically runs, parses output, and cleans up after a `track-allocations` run of Julia
- Julia implementations of a few LAPACK/BLAS functions and the details needed to bitwise-match OpenBLAS calls (on macOS)
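The norm sensitivity mentioned above comes down to overflow; a minimal stdlib illustration of why the naive formula fails where a dnrm2-style scaled norm does not:

```julia
using LinearAlgebra

x = fill(1e200, 3)
naive = sqrt(sum(abs2, x))   # Inf: the intermediate squares overflow Float64
safe  = norm(x)              # finite: scaling keeps intermediates in range
```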
## Initial rough talk slide ideas
- teaser: the world's most precise estimate of the largest few singular values of the netflix ratings matrix. (100M non-zeros) ... or something similar.
- reveal: the code... using GenericArpack; svds(...)
- pitch: a drop-in replacement for Arpack.jl (for symmetric problems).
- what is Arpack and why is it important?
- Arpack and reverse communication.
- summary of project goals: why _translation, same input/same output_ and not something else (new algorithms, etc.), also why minimal dependencies.
- basic translation approach: an exercise in @view / sub-arrays.
- getting bitwise identical output -- the Lanczos/Arnoldi information seems close, but somewhat different from Arpack
- key issue: well, turns out this is _very_ sensitive to the norm function.
- real problem: the OpenBLAS norm uses 80-bit FP operations. (And why they can get away with sqrt(sum(x.^2)) and you can't!)
- solution 1: use double-double to simulate! (but it's slow)
- solution 2: just use ideas from `BitFloats.jl` and llvm intrinsics instead
- So, I've written everything, it passes tests, etc. Why does it use _so_ many allocations? (When it should use zero, like the Fortran code!)
- tools for hunting down allocations. (well, really just parsing track-allocation output)
- A curiosity: why does the line `while true` allocate?
- I wish there was a "strict" mode that doesn't allow quite so much flexibility.
- because we can: from Arpack `ido` (really what you the user do!) to Julia idonow to avoid reverse communication.
- because we can: going from symmetric real-valued Arpack methods to Hermitian complex-valued methods (which do not exist in Arpack)
- because someone will ask: comparing performance. This will show the current state of performance. At the moment, for a problem Arpack solves in 15ms, GenericArpack.jl takes 23ms, although there has been only minor performance tuning so far.
- A list of future work: porting the non-Hermitian complex-valued case; an "AbstractEigenspace.jl" package that multiple people could implement; handling differences.
- The vision: Why this would be super useful. Iterative Eigenvalues in the browser for Pluto.jl running via WebASM... for really cool demos akin to pseudospectra... for finite elements in the browser ... for interactive spectral graph analysis in the browser... for mixed precision Arpack computations (Lanczos/Arnoldi info in high-precision, vectors in low-precision).
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/7H77WX/
Red
David Gleich
PUBLISH
PFHGSD@@pretalx.com
-PFHGSD
OnlineSampling: online inference on reactive models
en
en
20220728T190000
20220728T193000
0.03000
OnlineSampling: online inference on reactive models
[OnlineSampling](https://github.com/wazizian/OnlineSampling.jl) is a probabilistic programming language that focuses on reactive models, i.e., streaming probabilistic models based on the synchronous model of execution.
Programs execute synchronously in lockstep on a global discrete logical clock.
Inputs and outputs are data streams; programs are stream processors.
For such models, inference is a reactive process that returns the distribution of parameters at the current time step given the observations so far.
## Synchronous Reactive Programming
We use Julia's macro system to program reactive models in a style reminiscent of synchronous dataflow programming languages.
A stream function is introduced by the macro `@node`.
Inside a `node`, the macro `@init` can be used to initialize a variable.
Another macro `@prev` can then be used to access the value of a variable at the previous time step.
Then, the macro `@nodeiter` turns a node into a Julia iterator which unfolds the execution of a node and returns the current value at each step.
For example, the following function `cpt` implements a simple counter incremented at each step and prints its value:
```julia
@node function cpt()
    @init x = 0
    x = @prev(x) + 1
    return x
end

for x in @nodeiter T = 10 cpt()
    println(x)
end
```
## Reactive Probabilistic Programming
Reactive constructs `@init` and `@prev` can be mixed with probabilistic constructs to program reactive probabilistic models.
Following recent probabilistic languages (e.g., Turing.jl), probabilistic constructs are the following:
- `x = rand(D)` introduces a random variable `x` with the prior distribution `D`.
- `@observe(x, v)` conditions the model, assuming the random variable `x` takes the value `v`.
For example, the following is a hidden Markov model (HMM) where we try to estimate the position of a moving agent from noisy observations.
```julia
speed = 1.0
noise = 0.5

@node function model()
    @init x = rand(MvNormal([0.0], ScalMat(1, 1000.0))) # x_0 ~ N(0, 1000)
    x = rand(MvNormal(@prev(x), ScalMat(1, speed)))     # x_t ~ N(x_{t-1}, speed)
    y = rand(MvNormal(x, ScalMat(1, noise)))            # y_t ~ N(x_t, noise)
    return x, y
end

@node function hmm(obs)
    x, y = @nodecall model()
    @observe(y, obs) # assume y_t is observed with value obs_t
    return x
end

steps = 100
obs = rand(steps, 1)
cloud = @nodeiter particles = 1000 hmm(eachrow(obs)) # launch the inference with 1000 particles (returns an iterator)
for (x, o) in zip(cloud, obs)
    samples = rand(x, 1000) # sample 1000 values from the posterior
    println("Estimated: ", mean(samples), " Observation: ", o)
end
```
## Semi-symbolic algorithm
The inference method is a Rao-Blackwellised particle filter, a semi-symbolic algorithm which tries to analytically compute closed-form solutions, and falls back to a particle filter when symbolic computations fail.
For Gaussian random variables with linear relations, we implemented belief propagation if the factor graph is a tree.
As a result, in the previous HMM example, belief propagation is able to recover the equations of a Kalman filter and compute the exact solution, so only one particle is necessary, as shown below.
```julia
cloud = @noderun particles = 1 algo = belief_propagation hmm(eachrow(obs)) # launch the inference with one particle for all observations
d = dist(cloud.particles[1]) # distribution of the last state
```
## Internals
This package relies on Julia's metaprogramming capabilities.
Under the hood, the macro `@node` generates a stateful stream processor that closely mimics Julia's `Iterator` interface. The state corresponds to the memory used to store all the variables accessed via `@prev`.
The heavy lifting to create these functions is done by a Julia macro that acts on the abstract syntax tree (AST). The transformations at this level include, for `t > 0`, adding the code to retrieve the previous internal state, update it, and return it.
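As a rough picture of what such a generated stream processor looks like, here is a hand-written counter implementing Julia's iteration protocol; the names are illustrative, not the code `@node` actually emits:

```julia
struct Counter end

Base.IteratorSize(::Type{Counter}) = Base.IsInfinite()
Base.eltype(::Type{Counter}) = Int

# the iterator state plays the role of the memory backing `@prev`
Base.iterate(::Counter) = (0, 0)                      # t = 0: `@init x = 0`
Base.iterate(::Counter, prev) = (prev + 1, prev + 1)  # t > 0: `x = @prev(x) + 1`

first_steps = collect(Iterators.take(Counter(), 4))   # [0, 1, 2, 3]
```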
However, some transformations are best done at a later stage of the Julia pipeline.
One of them is the handling of calls to `@prev` during the initial step `t = 0`.
To seamlessly handle the various constructs of the Julia language, these calls are invalidated at the level of Intermediate Representation (IR) thanks to the package `IRTools`.
Another operation at the IR level is the automatic realization of a symbolic variable undergoing an unsupported transform: when a function is applied to a random variable and no method matches the variable's type, the variable is automatically sampled.
We also provide a "pointer-minimal" implementation of belief propagation: during execution, when a random variable is no longer referenced by the program, it can be freed by the garbage collector (GC).
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/PFHGSD/
Red
Waïss Azizian
marc lelarge
Guillaume Baudart
PUBLISH
YBH93M@@pretalx.com
-YBH93M
Dynamical Low Rank Approximation in Julia
en
en
20220728T193000
20220728T194000
0.01000
Dynamical Low Rank Approximation in Julia
Many scientific computing problems boil down to solving large matrix-valued ordinary differential equations (ODEs); prominent examples are the propagation of uncertainties through (partial) differential equation models and the on-the-fly compression of large-scale simulation or experimental data. While the naive integration of such matrix-valued ODEs often remains prohibitively expensive, in many cases their solutions admit an accurate low-rank approximation. Exploiting such low-rank structure generally holds the potential for substantial savings in computational resources (time and memory) over naive integration approaches, often recovering the tractability of integration.
Dynamical low rank approximation (DLRA), a concept also known under the names Dirac-Fraenkel time-varying variational principle or dynamically orthogonal schemes, seeks to exploit the low-rank structure of the solution of matrix-valued ODEs by performing the integration within the manifold of fixed (low-)rank matrices. However, while theoretically elegant, the effective use of DLRA in practice is often cumbersome due to the need for custom implementations of integration routines that take advantage of the assumed low-rank structure. To alleviate this limitation, we present the packages LowRankArithmetic.jl and LowRankIntegrators.jl. Together, these packages form the backbone of a computational infrastructure that enables simple and non-intrusive use of DLRA. To that end, LowRankArithmetic.jl facilitates the propagation of low-rank matrix representations through finite compositions of a rich set of algebraic operations, alleviating the need for custom implementations. Based on this key functionality, LowRankIntegrators.jl implements state-of-the-art integration routines for DLRA that automatically take advantage of low-rank structure; the user needs to supply nothing more than the right-hand side of the matrix-valued ODE of interest.
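To make the arithmetic side concrete, here is a toy version of the idea behind LowRankArithmetic.jl; the `LowRank` type and `materialize` function are hypothetical, not the package's API:

```julia
using LinearAlgebra

struct LowRank                  # represents A = U * V' without ever forming it
    U::Matrix{Float64}
    V::Matrix{Float64}
end

# propagate through right-multiplication using only the small factors:
# (U * V') * B = U * (B' * V)'
Base.:*(A::LowRank, B::AbstractMatrix) = LowRank(A.U, B' * A.V)
materialize(A::LowRank) = A.U * A.V'

U, V = randn(100, 2), randn(100, 2)
B = randn(100, 100)
C = LowRank(U, V) * B           # stores two 100×2 factors instead of a dense 100×100 product
```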
In this talk, we briefly review the conceptual idea behind DLRA, outline the primitives underpinning LowRankArithmetic.jl and LowRankIntegrators.jl, and showcase their utility for the propagation of uncertainties through scientific models ranging from stochastic PDEs to the chemical master equation.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YBH93M/
Red
Flemming Holtorf
PUBLISH
SQJTRS@@pretalx.com
-SQJTRS
Visualization Dashboards with Pluto!
en
en
20220728T194000
20220728T195000
0.01000
Visualization Dashboards with Pluto!
In Python, one could assemble a Jupyter notebook to experiment with visualizations and turn it into a Dash dashboard with multiple plots, panes, and widgets. In R, someone could do the same with RShiny. Julia has support for Dash and Jupyter, but one could argue that dashboarding and experimentation should be part of the same workflow. Pluto's solution to this problem is complete and extensible, which is every scientist's dream.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/SQJTRS/
Red
Guilherme Gomes Haetinger
PUBLISH
VVPY9G@@pretalx.com
-VVPY9G
Visualizing astronomical data with AstroImages.jl
en
en
20220728T195000
20220728T200000
0.01000
Visualizing astronomical data with AstroImages.jl
To study the cosmos, astronomers use data cubes with many dimensions representing images with axes for sky position, time, wavelength, polarization, and more. Since these large datasets often span many orders of magnitude in intensity and typically include colours invisible to humans, astronomers like to visualize their images using a variety of non-linear stretching and contrast adjustments.
Additionally, images may contain metadata specifying arbitrary mappings of pixel positions to multiple celestial coordinate systems.
Julia is a powerful language for processing astronomical data, but these visualization tasks are a challenge for any tool. Built on Images, DimensionalData, FITS, WCS, and Plots, AstroImages.jl makes it easy to load, manipulate, and visualize astronomical data intuitively and efficiently with support for arbitrary colorschemes, stretched color scales, RGB composites, lazy PNG rendering, and plot recipes.
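As a flavor of the non-linear stretches mentioned above, here is a toy asinh stretch written in plain Julia (illustrative only; it is not AstroImages.jl's actual interface):

```julia
# map normalized pixel intensities in [0, 1] through an asinh stretch;
# small values are boosted, the full range [0, 1] is preserved
stretch(x; a = 0.1) = asinh(x / a) / asinh(1 / a)

stretch(0.0)    # the background stays at zero
stretch(1.0)    # the brightest pixel stays at full scale
stretch(0.01)   # faint features land well above their linear value
```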
Come to our talk to see how you too can create beautiful images of the universe!
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VVPY9G/
Red
William Thompson
PUBLISH
PXRENJ@@pretalx.com
-PXRENJ
Microbiome.jl & BiobakeryUtils.jl for analyzing metagenomic data
en
en
20220728T200000
20220728T201000
0.01000
Microbiome.jl & BiobakeryUtils.jl for analyzing metagenomic data
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/PXRENJ/
Red
Deleted User
Kevin Bonham
Annelle Kayisire Abatoni
PUBLISH
DZNPL9@@pretalx.com
-DZNPL9
Tricks.jl: abusing backedges for fun and profit
en
en
20220728T130000
20220728T131000
0.01000
Tricks.jl: abusing backedges for fun and profit
Tricks.jl was made at the JuliaCon 2019 hackathon in Baltimore, shortly after manual backedges were added in Julia 1.3, but it has never been explained at a JuliaCon.
This talk is expressly targeted at advanced Julia users wanting to understand the internals.
Attendees will learn a great deal about backedges: why they exist and how they work.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DZNPL9/
Purple
Frames Catherine White
PUBLISH
WTPZLZ@@pretalx.com
-WTPZLZ
Making Abstract Interpretation Less Abstract in Cthulhu.jl
en
en
20220728T131000
20220728T132000
0.01000
Making Abstract Interpretation Less Abstract in Cthulhu.jl
One of the main motivations for this work was the use case of debugging source-to-source automatic differentiation on scientific code, as exemplified by Zygote.jl, which emits code that is significantly more complex than the original program. That can make it difficult to correctly identify intermediate steps, and it is often compounded by the fact that the more complex code can prevent results from being inferred to a concrete type.
I leverage the already existing infrastructure in JuliaInterpreter.jl to enable explorative analysis of what the code does by interpreting the program based on concrete input values. Because interpretation works on a statement-by-statement basis just as inference does, this allows the user to go back and look at what the interpreter computed for any intermediate steps or see which branch actually ended up being taken.
One of the main challenges was the fact that JuliaInterpreter.jl was designed to interpret untyped Julia IR, which has slightly different semantics from IR after inference, which in turn differs from the semantics of IR after all other Julia-specific optimizations such as inlining. A prototype currently exists in https://github.com/JuliaDebug/Cthulhu.jl/pull/214. I plan to introduce a flexible plugin infrastructure for this, in order to develop most of it outside of Cthulhu first. Support for step-by-step execution and for interpreting optimized Julia IR is also being worked on.
In this talk I aim to first give an overview on how Cthulhu.jl differs from a debugger and the various advantages and disadvantages of both approaches. I will then explain how I combined the two and how users can take advantage of these new capabilities in their own workflows. While I will be primarily targeting intermediate to advanced Julia users, I believe this could even be of use to those who have not used Cthulhu.jl before, because it allows for a much more interactive exploration of the intricacies of Julia IR.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/WTPZLZ/
Purple
Simeon Schaub
PUBLISH
LJHYAQ@@pretalx.com
-LJHYAQ
Reducing Running Time and Time to First X: A Walkthrough
en
en
20220728T132000
20220728T133000
0.01000
Reducing Running Time and Time to First X: A Walkthrough
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LJHYAQ/
Purple
Rik Huijzer
PUBLISH
8VQAAD@@pretalx.com
-8VQAAD
Garbage Collection in Julia.
en
en
20220728T133000
20220728T134000
0.01000
Garbage Collection in Julia.
"Garbage collection is like an omniscient housekeeper who can go through your things getting rid of those that you will never use and making more room for the things you need."
Most programmers are happy to have memory management handled for them right up until the point that they aren't. Then they start trying to do unnatural things, like reusing arrays, creating off-heap storage, or turning off GC altogether.
This talk will focus on the internals of the current Julia collector, and what we can do to make things better.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/8VQAAD/
Purple
Christine Flood
PUBLISH
LXSC3P@@pretalx.com
-LXSC3P
Parallelizing Julia’s Garbage Collector
en
en
20220728T134000
20220728T135000
0.01000
Parallelizing Julia’s Garbage Collector
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LXSC3P/
Purple
Diogo Netto
PUBLISH
3XBUWE@@pretalx.com
-3XBUWE
Unbake the Cake (and Eat it Too!): Flexible and Performant GC
en
en
20220728T135000
20220728T140000
0.01000
Unbake the Cake (and Eat it Too!): Flexible and Performant GC
When implementing a system such as a programming language runtime, decisions usually favor performance over flexibility. Lacking performance is often unacceptable but lacking flexibility can hinder experimentation and evolution, which may also affect performance in the long run. Consider memory management, for example. If we ignore flexibility and only favor performance, sticking to a particular type of garbage collector that "performs well" can have a huge effect later on, such that changing any aspect about it can be almost impossible without rewriting the whole system.
MMTk.io is a memory management toolkit providing language implementers with a powerful memory management framework and researchers with a multi-runtime platform for memory management research. Instead of a single, monolithic collector, MMTk efficiently implements various garbage collector strategies, increasing flexibility without compromising performance.
MMTk started with its original Java implementation, which was integrated into Jikes RVM in 2002. Since then, it has gained a fresh Rust implementation, which is under active development and even though it is not ready for production use, can currently be used experimentally.
To use MMTk, one must develop a binding, which contains three artefacts: (i) a logical extension of the VM, (ii) a logical extension of MMTk core, and (iii) an implementation of MMTk's API. At the moment, there are various bindings under development including bindings for V8, OpenJDK, Jikes RVM, Ruby, GHC, PyPy and now Julia.
In this talk we discuss our experience developing the MMTk binding for Julia. Julia currently implements a precise non-moving generational garbage collector. It relies on some LLVM features to calculate roots, but the code follows a monolithic approach, as described earlier.
We reuse some of Julia's strategies for calculating roots and processing objects, integrating these into an Immix implementation inside MMTk. Our implementation passes all but a few of Julia's correctness tests, and has shown promising results regarding GC performance. We hope that with MMTk-Julia we are able to easily explore different GC strategies, including a partially-moving GC, observing how these strategies affect the language's performance.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/3XBUWE/
Purple
Luis Eduardo de Souza Amorim
PUBLISH
NEKFDC@@pretalx.com
-NEKFDC
Unlocking Julia's LLVM JIT Compiler
en
en
20220728T163000
20220728T164000
0.01000
Unlocking Julia's LLVM JIT Compiler
Julia's JIT compiler converts Julia IR to LLVM IR, optimizes it, and converts it to machine code for efficient subsequent execution. However, much of this process relies on shared global resources, such as a global LLVM context, the pass manager that runs the optimization, and various data caches. This has necessitated the presence of a global lock to prevent multiple threads from simultaneously modifying this data. Furthermore, as generation of LLVM IR and type inference may co-recurse indefinitely, type inference also acquires and holds the same lock during its execution. This serialized compilation process increases the startup time of multithreaded environments (often referred to as time-to-first-plot, TTFP) and prevents our execution environment from performing more complex transformations, such as speculative and parallel compilation.
Thus far, refactorings of our IR generation pipeline have reduced the number of global variables used in the compiler and added finer grained locks to our JIT stack in preparation for removing the global locks. At this stage, much of the remaining challenge in removing the global lock is in proving thread safety and progressively reducing the scope of the lock until the minimum amount of critical code is protected. Once that work has completed, work on speculative optimization and IR generation can begin, which should bring additional improvements to TTFP for situations without multiple contending compilation threads.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/NEKFDC/
Purple
Prem Chintalapudi
PUBLISH
AAJJGP@@pretalx.com
-AAJJGP
Metal.jl - A GPU backend for Apple hardware
en
en
20220728T164000
20220728T165000
0.01000
Metal.jl - A GPU backend for Apple hardware
The release of Apple's M-series chipset brings new hardware for Julia to target. Base CPU functionality is already heavily used within the community, but so far, the M1 chip's hardware accelerators have been largely inaccessible to Julia programmers. Metal.jl has been developed as a GPU backend (like CUDA.jl, AMDGPU.jl, and oneAPI.jl) specifically targeting the M-series GPUs. Given Apple's continued expansion of the M1 chipset and commitment to hardware accelerators, a Julia interface targeting these compute devices is becoming increasingly beneficial.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/AAJJGP/
Purple
Max Hawkins
Tim Besard
PUBLISH
SE8MEL@@pretalx.com
-SE8MEL
ArrayAllocators.jl: Arrays via calloc, NUMA, and aligned memory
en
en
20220728T165000
20220728T170000
0.01000
ArrayAllocators.jl: Arrays via calloc, NUMA, and aligned memory
Julia offers an extensible array interface that allows array types to wrap around C pointers obtained from specialized or operating system specific application programming interfaces while integrating into the garbage collection system. ArrayAllocators.jl uses this array interface to allow faster `zeros` with `calloc`, allocation on specific NUMA nodes on multi-processor systems, and the allocation of aligned memory for vectorization. The allocators are given as an argument to `Array{T}` or other subtypes of `AbstractArray` in place of the `undef` initializer to provide a familiar interface to the user. In this talk, I will describe how to use ArrayAllocators.jl to optimize applications via `calloc`, NUMA, and aligned memory.
The easy availability of these allocation methods allows Julia to match the performance, and caveats, of other libraries or code that use them. For example, NumPy's implementation of `numpy.zeros` uses `calloc` by default, which may make it appear that NumPy outperforms Julia on certain microbenchmarks. On some operating systems, the initial allocation is significantly faster than explicitly filling the array with zeros, as is currently done in `Base`, since the operating system may defer the actual allocation of the memory until a later time. Often the initial allocation time is similar to that of `undef` arrays.
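To see what a calloc-backed array means at the lowest level, here is a plain-Julia sketch using `Libc.calloc` (ArrayAllocators.jl wraps this kind of pattern behind the allocator-argument interface described above):

```julia
n = 1_000_000
ptr = Ptr{Float64}(Libc.calloc(n, sizeof(Float64)))  # the OS hands back zeroed pages, often lazily
A = unsafe_wrap(Array, ptr, n; own = true)           # own = true: the GC frees the buffer with A
all(iszero, A)   # the array reads as zeros without Julia ever writing them
```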
Another application is to make Julia NUMA-aware by allocating memory on specific NUMA nodes. I will demonstrate how to optimize the performance of common memory operations on systems with multiple NUMA nodes on modern processors, which may be counter-intuitive.
A final application is to align memory to power-of-two byte boundaries. This is useful for advanced vectorization, where 64-byte-aligned memory may accelerate the use of AVX-512 instructions.
Finally, I will discuss the integer overflow features of ArrayAllocators.jl and how other packages may extend ArrayAllocators.jl to easily add new ways of allocating memory for arrays.
In summary, ArrayAllocators.jl and its subpackages provide a familiar mechanism for allocating array memory via low-level methods. This allows Julia programs to take advantage of advanced operating system features that may accelerate the initialization and use of that memory.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/SE8MEL/
Purple
Mark Kittisopikul, Ph.D.
PUBLISH
DSB7E3@@pretalx.com
-DSB7E3
Compile-time programming with CompTime.jl
en
en
20220728T170000
20220728T173000
0.03000
Compile-time programming with CompTime.jl
CompTime.jl presents a macro, `@ct_enable`, that can be applied to a function and provides a DSL within the function to mark parts that should run at compile time and parts that should run at runtime. As with `@generated` functions, the code run at compile time can only depend on the types of the arguments. However, with this macro, arbitrary control structures (for loops, while loops, if statements, etc.) can be run at compile time and "unrolled", so that, for instance, only the body of the branch that succeeded in an if statement appears at runtime. Additionally, computation based on types that cannot itself be typechecked very well can be moved to compile time, and what is left at runtime can then be completely type-checked, unlocking the power of the Julia compiler to optimize.
This does not present any new technical capabilities beyond what is already provided by generated functions; rather, the chief benefit is the many conveniences enabled by moving all the syntax-processing to general functions. For instance, the code generated for a specific set of argument types can be printed out and inspected, and in backtraces the line numbers associated with the generated code are meaningful, pointing to the correct line in the `@ct_enable`-decorated function whether the error happens at runtime or compile time. Thus, the experience of working with generated functions becomes accessible to a Julia user that knows nothing about syntax trees.
Finally, if all of the CompTime annotations are stripped out of a `@ct_enable`-decorated function, one is left with a perfectly valid Julia function that runs completely at runtime. Thus, in situations where one expects to run the function only a couple times on each new datatype, the first-compile slowdown can be avoided.
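For readers unfamiliar with the underlying mechanism, here is a plain `@generated` function of the kind CompTime.jl makes ergonomic; it uses only Base Julia, not CompTime's `@ct_enable` syntax:

```julia
@generated function tuplesum(t::Tuple)
    # inside the body, `t` is the argument's *type*, so this loop runs at compile time…
    ex = :(zero($(eltype(t))))
    for i in 1:fieldcount(t)
        ex = :($ex + t[$i])    # …and unrolls into straight-line code at runtime
    end
    return ex
end

tuplesum((1, 2, 3))      # computed via the unrolled expression t[1] + t[2] + t[3]
tuplesum((1.0, 2.5))     # a fresh unrolling is generated for the new type
```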
In this talk, we will present a tutorial of how to use CompTime.jl, accessible to a novice Julia programmer with no previous knowledge of generated functions but of interest to all audiences. Then, at the end of the talk, we will take a peek under the hood of CompTime and show the audience a bit of the frightening delight that is writing code that generates code to generate code. This will mainly be as a fun brain-twister; however, the Julia programmer familiar with macro writing may learn a thing or two of interest.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/DSB7E3/
Purple
Owen Lynch
PUBLISH
DAASVV@@pretalx.com
-DAASVV
Monitoring Performance on a Hardware Level With LIKWID.jl
en
en
20220728T173000
20220728T180000
0.03000
Monitoring Performance on a Hardware Level With LIKWID.jl
In my talk I will first lay the ground by telling you everything you need to know about hardware performance counters and will then introduce you to LIKWID.jl. Specifically, I will explain how to install LIKWID, how to use LIKWID.jl's Marker API, i.e. how to mark certain regions in your Julia code for performance monitoring, and how to properly run your Julia code under LIKWID. We will then use these techniques to analyse a few illustrative Julia examples running on CPUs and an NVIDIA GPU. Finally, I will discuss potential pitfalls (e.g. when benchmarking multithreaded Julia code) and future plans.
Disclaimer: LIKWID.jl works on Linux :) but not on Windows or macOS :(
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/DAASVV/
Purple
Carsten Bauer
PUBLISH
DQHQZ8@@pretalx.com
-DQHQZ8
Platform-aware programming in Julia
en
en
20220728T190000
20220728T193000
0.03000
Platform-aware programming in Julia
The importance of heterogeneous computing in enabling computationally intensive solutions to problems addressed by scientific and technical computing applications is no longer new. In fact, heterogeneous computing plays a central role in the design of high-end parallel computing platforms for exascale computing. For this reason, the Julia community has concentrated efforts to support GPU programming through the JuliaGPU organization. Currently, there are packages that provide the functionality of existing GPU programming APIs, such as OpenCL, CUDA, AMD ROCm, and oneAPI, as well as high-level interfaces to launch common operations on GPUs (e.g. FFT). In particular, oneAPI is a recent cross-industry initiative to provide a unified, standards-based programming model for accelerators (XPUs).
In our work, it is convenient to distinguish between package developers and application programmers, where the former provide the high-level functionality necessary for the latter to solve problems of interest to them. Both are interested in performing computations as quickly as possible, taking advantage of hardware features often purchased by application programmers from IaaS cloud providers. Thus, since application programmers prefer packages able to exploit the capabilities of the target execution platform, package developers are interested in optimizing performance-critical functions by detecting the presence of multicore support, SIMD extensions, accelerators, and so on. In fact, considering hardware features in programming is a common practice among HPC programmers.
However, to deal with the large number of alternative heterogeneous computing resources available, package developers face portability, maintenance, and modularity issues. First, they need APIs that let them inspect hardware configurations at run time, but no general alternative exists, as such APIs alone do not cover all the architectural details that can influence programming decisions to accelerate the code. To work around this, developers can give application programmers the responsibility of selecting the appropriate package version for the target architecture, or ask them to provide details about the target architecture through parameters, making programming interfaces more complex. Second, package developers are often required to interleave code for different architectures in the same function, making changes difficult as accelerator technology evolves, such as when implementations must be provided for new accelerators. A common situation occurs when a programming API is deprecated, as has been the case with some Julia packages that use OpenCL.jl (e.g. https://github.com/JuliaEarth/ImageQuilting.jl/issues/16).
We argue that the traditional view of programming language designers, that programs should be abstract entities dissociated from the target execution platform, is not adequate for a context in which programs must efficiently exploit heterogeneous computing resources provided by IaaS cloud providers eager to sell their services. In fact, these resources may vary between runs of the same program, as application programmers try to meet their schedules and satisfy their budget constraints. Thus, the design of programming languages should follow the assumption that software is now closely related to the hardware on which it will run, while still making it possible to control the level of independence from hardware assumptions through abstraction mechanisms (in fact, independence from the hardware is still needed most of the time). To that end, we propose typing programs with their target execution platforms through a notion of contextual types.
Contextual types are inspired by our previous work with HPC Shelf, a component-based platform to provide HPC services (http://www.hpcshelf.org). Surprisingly, they can free application programmers from making assumptions about target execution environments, focusing that responsibility on package developers in a modular and scalable way. In fact, contextual types help package developers write different, independent methods of the same function for different hardware configurations. In addition, other developers, as well as application programmers, can provide their own methods for specific hardware configurations not supported by the chosen package. To do this, the runtime system must be aware of the underlying features of the execution platform.
We chose Julia as the appropriate language to evaluate our proposal for two main reasons. First, Julia was designed with HPC requirements in mind, as it is primarily focused on scientific and technical computing applications. Second, it implements a multiple-dispatch approach that makes contextual types a natural fit for the task of selecting methods for different hardware configurations. In fact, multiple dispatch has a close analogy with HPC Shelf's contextual contract resolution mechanism.
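To give a flavour of this last point, multiple dispatch already lets one select a method implementation by platform traits encoded as types. The following is a minimal sketch of the general idea only; the type and function names are invented for illustration and are not the talk's actual API:

```julia
# Hypothetical sketch: encode platform features as types and let Julia's
# multiple dispatch choose the implementation, in the spirit of contextual types.
abstract type AcceleratorKind end
struct NoAccelerator <: AcceleratorKind end   # portable CPU fallback
struct NVIDIAGPU <: AcceleratorKind end       # would select a CUDA-backed method

# Package developers provide one method per platform configuration:
sumsq(::NoAccelerator, x) = sum(abs2, x)
sumsq(::NVIDIAGPU, x) = sum(abs2, x)  # placeholder; a real method would call into CUDA.jl

sumsq(NoAccelerator(), [3.0, 4.0])  # dispatch picks the CPU method; returns 25.0
```

Other developers, or application programmers themselves, could then add a `sumsq` method for a configuration the package does not cover, without touching the package's code.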
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/DQHQZ8/
Purple
Francisco Heron de Carvalho Junior
PUBLISH
S8KUPP@@pretalx.com
-S8KUPP
Optimizing Floating Point Math in Julia
en
en
20220728T193000
20220728T200000
0.03000
Optimizing Floating Point Math in Julia
In this talk we will cover the fundamental numerical techniques for implementing accurate and fast floating-point functions. We will start with a brief review of how floating-point math works. Then we will use the changes made to `exp` and friends (`exp2`, `exp10`, and `expm1`) over the past two years as a demonstration of the main techniques for computing such functions.
Specifically we will look at:
* Range reduction
* Polynomial kernels using the `Remez` algorithm
* Fast polynomial evaluation
* Table based methods
* Bit manipulation (to make everything fast)
We will also discuss how to test the accuracy of implementations using [FunctionAccuracyTests.jl](https://github.com/JuliaMath/FunctionAccuracyTests.jl), and areas for future improvements in Base and beyond. Present and future targets for optimized routines include the Bessel functions, cumulative distribution functions, and optimized elementary functions for [DoubleFloats.jl](https://github.com/JuliaMath/DoubleFloats.jl); PRs across the entire package ecosystem are always welcome.
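As a toy illustration of the first three techniques (range reduction, a polynomial kernel, and fast polynomial evaluation), here is a sketch of `2^x`. This is not Base's actual implementation: the coefficients below come from a plain Taylor expansion rather than a Remez-optimized minimax fit, and the function name is invented.

```julia
# Toy 2^x: reduce x = k + r with integer k and r in [-0.5, 0.5], evaluate a
# short polynomial for 2^r, then scale by 2^k via exponent (bit) manipulation.
function toy_exp2(x::Float64)
    k = round(x)
    r = x - k
    # Taylor coefficients of 2^r = exp(r * log(2)); a production kernel
    # would use a Remez-optimized minimax polynomial instead.
    c = (1.0, 0.6931471805599453, 0.2402265069591007,
         0.05550410866482158, 0.009618129107628477, 0.0013333558146428443)
    p = evalpoly(r, c)          # fast Horner-scheme evaluation
    return ldexp(p, Int(k))     # multiply by 2^k by adjusting the exponent bits
end

toy_exp2(3.7)  # ≈ 12.996, within roughly 1e-6 of 2^3.7
```

Even this naive kernel is accurate to a few units in the last place over most inputs; the Remez step and table-based methods close the remaining gap.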
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/S8KUPP/
Purple
Oscar Smith
PUBLISH
RKLTEP@@pretalx.com
-RKLTEP
JuliaSyntax.jl: A new Julia compiler frontend in Julia
en
en
20220728T200000
20220728T203000
0.03000
JuliaSyntax.jl: A new Julia compiler frontend in Julia
JuliaSyntax aims to be a complete compiler frontend (parser, data structures and code lowering) designed for the growing needs of the Julia community, split broadly into users, tool authors and core developers.
For users we need *interactivity* and *precision*
* Speed: bare parsing is 20x faster, parsing to a basic tree is 10x faster, and parsing to Expr is around 6x faster
* Robustness: The parse tree covers the full source text regardless of syntax errors so partially complete source text can be processed in use cases like editor tooling and REPL completions
* Precision: We map every character of the source text so highlighting of errors or other compiler diagnostics can be fully precise
For tool authors, JuliaSyntax aims to be *accessible* and *flexible*
* Lossless parsing accounts for all source text, including comments and whitespace so that tools can faithfully work with the source code.
* We support Julia source code versions different from the Julia version running JuliaSyntax itself so only one tooling deployment is needed per machine
* JuliaSyntax is hackable and accessible to the community, due to being written in Julia itself
* Layered tree data structures support various use cases from code formatting to semantic analysis.
For core developers, JuliaSyntax aims to provide *familiarity* and *ease of integration*:
* The code mirrors the structure of the flisp parser
* It depends only on Base
* The syntax tree data structures are hackable independently from the parser implementation.
For a detailed description of the package aims and current status, see the source repository documentation on github at https://github.com/JuliaLang/JuliaSyntax.jl#readme
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/RKLTEP/
Purple
Claire Foster
PUBLISH
DUQQLN@@pretalx.com
-DUQQLN
Improvements in package precompilation
en
en
20220728T123000
20220728T130000
0.03000
Improvements in package precompilation
Package precompilation occurs when you first use a package or as triggered by changes to your package environment. The goal of precompilation is to re-use work that would otherwise have to be repeated each time you load a raw source file; potentially-saved work includes parsing the source text, type-inference, optimization, generation of LLVM IR, and/or compilation of native code. While there are many cases in computing where a previously-calculated result can be recomputed faster than it can be retrieved from storage, code compilation is not (yet) one such case. Indeed, the time needed for compilation is the dominant contribution to Julia's *latency*, the delay you experience when you first execute a task in a fresh session. In an effort to reduce this latency, Julia has long supported certain forms of precompilation.
Package precompilation occurs in a clean environment with just the package dependencies pre-loaded, and the results are written to disk (*serialization*). When loaded (*deserialization*), the results have to be "spliced in" to a running Julia session which may include a great deal of external code. Several of the most-loved features of Julia---its method polymorphism, aggressive type specialization, and support for dynamic code development allowing redefinition and/or changes in dispatch priority---conspire to make precompilation a significant challenge. Some examples include saving type-specialized code (which types should be precompiled?), code that may be valid in one environment but invalid in another (due to redefinition or having been superseded in dispatch priority), and code that needs to be compiled for types defined in external packages. While lowered code is essentially a direct translation of the raw source text, saving any later form of code requires additional information, specifically the types that methods should be specialized for. This information can be provided manually through explicit `precompile` directives, or indirectly from the state of a session that includes all necessary and/or useful specializations.
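To make the explicit-`precompile` route concrete, here is a minimal, hypothetical package module (the module and function names are invented for illustration):

```julia
# Hypothetical package module illustrating an explicit `precompile` directive,
# the manual way of listing which type specializations should be cached.
module ToyPkg

frobnicate(x) = 2x + 1

# Request that type inference (and compilation) be run for this exact
# signature when the package is precompiled:
precompile(frobnicate, (Float64,))

end # module

ToyPkg.frobnicate(1.5)  # → 4.0
```

During package precompilation, the results of inferring `frobnicate(::Float64)` would be written to the cache file instead of being recomputed at each load.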
Julia versions prior to 1.8 provide exhaustive support for precompiling lowered code (allowing re-use of the results of parsing). A subset of the results of type-inference could also be precompiled, but in practice much type-inferred code was excluded: the results of type-inference could not be saved for any method defined in a different package, and hence not for new type-specializations of externally-defined methods. Finally, native code could not be precompiled at all, except by generating a custom "system image" using a tool like PackageCompiler.
Julia 1.8 introduced the ability to save the results of type-inference for external methods, and thus provides exhaustive support for saving type-inferred code. As a result, packages generally seem to exhibit lower time-to-first task, with the magnitude of the savings varying considerably depending on the relative contributions of inference and native-code generation to experienced latency.
To go beyond these advances, we have begun to build support for serialization and deserialization of native code at package level. Native code would still be stored package-by-package (supporting Julia's famous composability), and this requires the ability to link this code after loading. Different from static languages like C and C++, this linking must be compatible with Julia's dynamic features like late specialization and code-invalidation. We will describe the progress made so far and the steps needed to bring this vision to fruition.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/DUQQLN/
Blue
Tim Holy
PUBLISH
YHYSEM@@pretalx.com
-YHYSEM
Hunting down allocations with Julia 1.8's Allocation Profiler
en
en
20220728T130000
20220728T131000
0.01000
Hunting down allocations with Julia 1.8's Allocation Profiler
The Julia 1.8 release includes a sampling Allocation Profiler, producing a profile of sampled allocations from a running program, which you can use to understand, and hopefully reduce, the most expensive allocations in your program. The profiles are best viewed together with PProf.jl, which is a powerful (but complex) visual profile analysis tool.
Using this new profiler to track down and eliminate allocations can help improve performance, but there are some gotchas to keep in mind. What sample rate should you be using to get an accurate view of your program's behavior? How should you interpret the results? How do you navigate pprof's interface? We'll introduce these topics with quick practical guidance for the budding allocation hunters in the audience.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YHYSEM/
Blue
Nathan Daly
Pete Vilter
PUBLISH
ZNEVTB@@pretalx.com
-ZNEVTB
HighDimPDE.jl: A Julia package for solving high-dimensional PDEs
en
en
20220728T131000
20220728T132000
0.01000
HighDimPDE.jl: A Julia package for solving high-dimensional PDEs
High-dimensional partial differential equations (PDEs) arise in a variety of scientific domains including physics, engineering, finance and biology. High-dimensional PDEs cannot be solved with standard numerical methods, as their computational cost increases exponentially with the number of dimensions, a problem known as the curse of dimensionality. HighDimPDE.jl is a Julia package that breaks the curse of dimensionality in solving PDEs. Building upon the [SciML ecosystem](https://sciml.ai/), the package implements novel solvers that can handle non-local nonlinear PDEs in potentially up to thousands of dimensions. It already provides two solvers with different pros and cons, and aims to host more.
In this talk, we first introduce the package, briefly present the two currently implemented solvers, and showcase their advantages with concrete examples.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZNEVTB/
Blue
Victor Boussange
PUBLISH
JBVLSK@@pretalx.com
-JBVLSK
Solving transient PDEs in Julia with Gridap.jl
en
en
20220728T132000
20220728T133000
0.01000
Solving transient PDEs in Julia with Gridap.jl
Gridap is an open-source, finite element (FE) library implemented in the Julia programming language. The main goal of Gridap is to adopt a more modern programming style than existing FE applications written in C/C++ or Fortran in order to simplify the simulation of challenging problems in science and engineering and improve productivity in the research of new discretization methods. The library is a feature-rich general-purpose FE code able to solve a wide range of partial differential equations (PDEs), including linear, nonlinear, and multi-physics problems. Gridap is extensible and modular. One can implement new FE spaces, new reference elements, and use external mesh generators, linear solvers, and visualization tools. In addition, it blends perfectly well with other packages of the Julia package ecosystem, since Gridap is implemented 100% in Julia.
In this presentation we highlight a new feature introduced in Gridap.jl during the last year: a new high-level API that allows the user to simulate complex transient PDEs with very few lines of code. This new API is in line with the distinctive features of Gridap.jl, allowing for the definition of weak forms in a syntax that is very similar to the mathematical notation used in academic works. The new API has a series of noticeable features: it supports ODEs of arbitrary order, provided that a solver for the specific order is implemented; allows automatic differentiation of all the Jacobians associated with the transient problem; enables the solution of multi-field and Differential Algebraic Equation (DAE) systems; and can be used in parallel computing through the extension of the API to the GridapDistributed.jl package.
At JuliaCon 2022 we will showcase this novel feature with a number of real applications in fluid and solid dynamics. The applications will include problems resulting in 1st- and 2nd-order ODEs, problems with constant and time-dependent coefficients, and problems with time-dependent geometries.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/JBVLSK/
Blue
Oriol Colomes
PUBLISH
Z9Y73V@@pretalx.com
-Z9Y73V
Progradio.jl - Projected Gradient Optimization
en
en
20220728T133000
20220728T134000
0.01000
Progradio.jl - Projected Gradient Optimization
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/Z9Y73V/
Blue
Eduardo M. G. Vila
PUBLISH
KX9NAV@@pretalx.com
-KX9NAV
Transformer models and framework in Julia
en
en
20220728T134000
20220728T135000
0.01000
Transformer models and framework in Julia
I will talk about the new API design in Transformers.jl: the new text encoder built on top of TextEncodeBase.jl, the new model implementation built on top of NeuralAttentionlib.jl, and the new pretrained-model management API based on HuggingFaceApi.jl.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/KX9NAV/
Blue
Peter Cheng
PUBLISH
J7RCP7@@pretalx.com
-J7RCP7
Automating Reinforcement Learning for Solving Economic Models
en
en
20220728T163000
20220728T164000
0.01000
Automating Reinforcement Learning for Solving Economic Models
Heterogeneous-agent macroeconomic models, though relatively recent in their development, have been applied across macroeconomics, and have contributed to our understanding of inequality, trade, business cycles, migration, epidemics, and the transmission of monetary policy.
Conventional methods of solving these models, which generally require computing policy or value functions on a grid that covers the model's entire state space, are subject to a curse of dimensionality: high-dimensional state spaces make a model infeasible to solve. Using neural networks instead of grids to approximate policy and value functions solves this problem, and has become an important and active area of research. Because these models must be trained by simulating agents and updating based on simulated outcomes, these solution methods are a form of reinforcement learning.
At present, the ability to use reinforcement learning to solve economic models is limited to economists who are also trained in these techniques. Bucephalus.jl aims to make these techniques accessible by automating the process while remaining applicable to a broad class of models. The user describes a model using a simple model description syntax built on Julia macros. The models are then automatically compiled to a standard data structure, to which, in principle, many solvers could then be applied. I present a solver that uses deep reinforcement learning to solve for steady state, impulse responses, and transition paths.
The package furthermore implements reinforcement learning techniques never before applied to this domain, including discrete-choice policy networks and nested generalized moments.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/J7RCP7/
Blue
Jeffrey Sun
PUBLISH
7S9YZV@@pretalx.com
-7S9YZV
Bender.jl: A utility package for customizable deep learning
en
en
20220728T164000
20220728T165000
0.01000
Bender.jl: A utility package for customizable deep learning
In this lightning talk we will explore two different use cases of [Bender.jl](https://github.com/Rasmuskh/Bender.jl): training binary neural networks and training neural networks with the biologically motivated feedback alignment algorithm. Binary neural networks and feedback alignment might seem like very different areas of research, but from an implementation point of view they are very similar, as both amount to modifying the chain rule during backpropagation. Implementing a binary neural network requires modifying backpropagation to allow non-zero error signals to propagate through binary activation functions, while feedback alignment requires modifying backpropagation to use a set of auxiliary weights for transporting errors backwards (in order to avoid the biologically implausible weight-symmetry requirement inherent to backpropagation). By allowing the user to specify the exact nature of the forward mapping when initializing a layer, Bender.jl makes it possible to leverage ChainRules.jl to easily implement these and similar experiments.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/7S9YZV/
Blue
Rasmus Kjær Høier
PUBLISH
Z7MXFS@@pretalx.com
-Z7MXFS
Effortless Bayesian Deep Learning through Laplace Redux
en
en
20220728T165000
20220728T170000
0.01000
Effortless Bayesian Deep Learning through Laplace Redux
#### Problem: Bayes can be costly 😥
Deep learning models are typically heavily under-specified by the data, which makes them vulnerable to adversarial attacks and impedes interpretability. Bayesian deep learning promises an intuitive remedy: instead of relying on a single explanation for the data, we are interested in computing averages over many compelling explanations. Multiple approaches to Bayesian deep learning have been put forward in recent years, including variational inference, deep ensembles and Monte Carlo dropout. Despite their usefulness, these approaches involve additional computational costs compared to training just a single network. Recently, another promising approach has entered the limelight: the Laplace approximation (LA).
#### Solution: Laplace Redux 🤩
While the LA was first proposed in the 18th century, it has so far not attracted serious attention from the deep learning community, largely because it involves a possibly large Hessian computation. The authors of this recent [NeurIPS paper](https://arxiv.org/abs/2106.14806) are on a mission to change the perception that the LA has no use in DL: they demonstrate empirically that it can be used to produce Bayesian model averages that are at least on par with existing approaches in terms of uncertainty quantification and out-of-distribution detection, while being significantly cheaper to compute. Our package [`BayesLaplace.jl`](https://github.com/pat-alt/BayesLaplace.jl) provides a light-weight implementation of this approach in Julia that allows users to recover Bayesian representations of deep neural networks in an efficient post-hoc manner.
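For reference, the approximation itself in its standard textbook form (stated generically, not in the paper's specific notation) replaces the posterior with a Gaussian centred at the maximum a posteriori (MAP) estimate, with covariance given by the inverse Hessian of the negative log posterior:

```latex
p(\theta \mid \mathcal{D}) \approx \mathcal{N}\!\left(\hat{\theta}_{\mathrm{MAP}},\; H^{-1}\right),
\qquad
H = -\left.\nabla^2_\theta \log p(\theta \mid \mathcal{D})\right|_{\theta = \hat{\theta}_{\mathrm{MAP}}}
```

The size of $H$ (quadratic in the number of network parameters) is the cost the paper's structured approximations are designed to tame.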
#### Limitations and Goals 🚩
The package functionality is still limited to binary classification models trained in Flux. It also lacks any framework for optimizing with respect to the Bayesian prior. In future work we aim to extend the functionality. We would like to develop a library that is at least on par with an existing Python library: [Laplace](https://aleximmer.github.io/Laplace/). Contrary to the existing Python library, we would like to leverage Julia's support for language interoperability to also facilitate applications to deep neural networks trained in other programming languages like Python and R.
#### Further reading 📚
For more information on this topic please feel free to check out my introductory blog post: [[TDS](https://towardsdatascience.com/go-deep-but-also-go-bayesian-ab25efa6f7b)], [[blog](https://www.paltmeyer.com/blog/posts/effortsless-bayesian-dl/)]. Presentation slides can be found [here](https://www.paltmeyer.com/LaplaceRedux.jl/dev/resources/juliacon22/presentation.html#/title-slide).
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/Z7MXFS/
Blue
Patrick Altmeyer
PUBLISH
HPAHBV@@pretalx.com
-HPAHBV
Large-Scale Machine Learning Inference with BanyanONNXRunTime.jl
en
en
20220728T170000
20220728T171000
0.01000
Large-Scale Machine Learning Inference with BanyanONNXRunTime.jl
More information about BanyanONNXRunTime.jl can be found on GitHub:
https://github.com/banyan-team/banyan-julia
https://github.com/banyan-team/banyan-julia-examples
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/HPAHBV/
Blue
Caleb Winston
Cailin Winston
PUBLISH
DZFPGX@@pretalx.com
-DZFPGX
SpeedyWeather.jl: A 16-bit weather model with machine learning
en
en
20220728T171000
20220728T172000
0.01000
SpeedyWeather.jl: A 16-bit weather model with machine learning
Computational resources are a major limitation to improving reliability in numerical predictions of weather and climate. Most simulations run on conventional CPUs in 64-bit floats, although some weather forecast centres now use 32 bits operationally for higher performance. Successful 16-bit simulations have been previously demonstrated with projects like ShallowWaters.jl, increasing performance by 4x with respect to a 64-bit simulation on Fujitsu’s A64FX CPU. However, it remains to be seen whether these results can also be achieved for global atmospheric models, like those used for weather and climate simulation. A new model, SpeedyWeather.jl, aims to address this question. As with ShallowWaters.jl, SpeedyWeather.jl aims to support hardware-accelerated low-precision arithmetic, yet will be substantially more complex. Much like state-of-the-art numerical weather prediction models, SpeedyWeather.jl includes a “dynamical core” for advancing forward the basic equations describing fluid flow in the Earth’s atmosphere and “parametrizations” for representing physical processes that take place below the scale of the model’s spatial grid, such as the development of clouds from convective updrafts. As such, it is intended to be a simple model for exploring weather and climate simulation in the Julia ecosystem. SpeedyWeather.jl is, like ShallowWaters.jl, fully type-flexible to support arbitrary number formats for performance and analysis (like Sherlogs.jl) simultaneously. This means the model development is precision-agnostic, which allows us to address the common problems of dynamic range and critical precision loss often incurred when using low-precision number formats. The aim of this project is to develop a prototype towards the first global 16-bit weather and climate models.
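The type-flexible, precision-agnostic style described above can be sketched in a few lines (illustrative only, not SpeedyWeather.jl's code; the function name is invented): write the numerics generically in an element type `T`, then instantiate at 64 or 16 bits.

```julia
# Toy forward-Euler step for linear decay du/dt = -u, written generically in T
# so the identical code runs in Float64, Float32, or Float16.
function decay_step(u::AbstractVector{T}, dt::T) where {T<:AbstractFloat}
    return u .- dt .* u
end

u64 = decay_step(fill(1.0, 3), 0.1)                    # 64-bit run
u16 = decay_step(fill(Float16(1), 3), Float16(0.1))    # same code, 16-bit run
eltype(u16)  # Float16
```

Because no concrete float type is hard-coded, the same model can be run at full precision for reference and at 16 bits (or with an analysis type such as Sherlogs.jl's) without code changes.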
Beyond numerical weather prediction, low-precision arithmetic is now routinely used in deep learning and neural networks. SpeedyWeather.jl is developed so that entire parts of the model may be replaced by artificial neural networks, thereby complementing conventional physics-based climate modelling with a data-driven approach. Such “hybrid” climate models promise to improve the representation of climate processes that are conventionally poorly resolved, either by training against higher resolution simulations or simulations based on more sophisticated, yet expensive, algorithms. In addition, hybrid models offer the prospect of fitting climate models to observational data. In order to train the neural network components of the model, SpeedyWeather.jl aims to be fully differentiable using automatic differentiation. Implementing parts of weather and climate models with artificial neural networks can also improve computational efficiency and facilitate low precision linear algebra. This talk presents the concept, implementation details, challenges and first results in the development of SpeedyWeather.jl towards a hybrid model incorporating both differential equation solvers and machine learning.
Co-Authors:
- Tom Kimpson (University of Oxford, UK)
- Alistair White and Maximilian Gelbrecht (Potsdam Institute for Climate Impact Research and Technical University of Munich, Germany)
- Sam Hatfield (European Centre for Medium-Range Weather Forecasts, Reading, UK)
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DZFPGX/
Blue
Milan Klöwer
PUBLISH
MFU9MN@@pretalx.com
-MFU9MN
ExplainableAI.jl: Interpreting neural networks in Julia
en
en
20220728T172000
20220728T173000
0.01000
ExplainableAI.jl: Interpreting neural networks in Julia
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/MFU9MN/
Blue
Adrian Hill
PUBLISH
UTTHUM@@pretalx.com
-UTTHUM
Training Spiking Neural Networks in pure Julia
en
en
20220728T173000
20220728T174000
0.01000
Training Spiking Neural Networks in pure Julia
Spiking networks that operate in a fluctuation-driven regime are a common way to model brain activity. Individual neurons within a spiking artificial neural network are trained to reproduce the spiking activity of individual neurons in the brain. In doing so, they capture the structured activity patterns of the recorded neurons, as well as the spiking irregularities and the trial-to-trial variability. Such trained networks can be analyzed in silico to gain insights into the dynamics and connectivity of cortical circuits underlying the recorded neural activity that would be otherwise difficult to obtain in vivo.
The number of simultaneously recorded neurons in behaving animals has been increasing in the last few years at an exponential rate. It is now possible to simultaneously record from about 1000 neurons using electrophysiology in behaving animals, and up to 100,000 using calcium imaging. When combining several sessions of recordings, the amount of data becomes huge and could grow to millions of recorded neurons in the next few years. There is a need then for fast algorithms and code bases to train networks of spiking neurons on ever larger data sets.
Here we use a recursive least-squares training algorithm (RLS; also known as FORCE; Sussillo and Abbott, 2009), adapted for spiking networks (Kim and Chow 2018, 2021), which uses an on-line estimation of the inverse covariance matrix between connected neurons to update the strength of plastic synapses. We make the code more performant through a combination of data parallelism, leveraging of BLAS, use of symmetric packed arrays, reduction in storage precision, and refactoring for GPUs.
Our goal is to train the synaptic current input to each neuron such that the resulting spikes follow the target activity pattern over an interval in time. We use a leaky integrate-and-fire neuron model with current-based synapses. The peri-stimulus time histograms of the spike trains are converted to the equivalent target synaptic currents using the transfer function of the neuron model. We treat every neuron’s synaptic current as a read-out, which makes our task equivalent to training a recurrently connected read-out for each neuron. Since a neuron's synaptic current can be expressed as a weighted sum of the spiking activities of its presynaptic neurons, we adjust the strength of the incoming synaptic connections by the RLS algorithm in order to generate the target activity.
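The per-neuron RLS update has a compact closed form. Below is a minimal single-read-out sketch under our stated setup (the function name is invented; the actual implementation batches this across neurons and stores `P` in packed symmetric form):

```julia
using LinearAlgebra

# One recursive least-squares (FORCE-style) step: r is the presynaptic
# activity vector, w the plastic weights, and P the running estimate of the
# inverse covariance of r.
function rls_step!(w, P, r, target)
    Pr = P * r
    k = Pr ./ (1 + dot(r, Pr))   # Kalman-like gain vector
    e = dot(w, r) - target       # error of the current read-out
    w .-= e .* k                 # error-reducing weight update
    P .-= k * Pr'                # rank-1 downdate of the inverse covariance
    return w
end
```

Each step shrinks the instantaneous error by a factor of 1/(1 + rᵀP r) for the same input, which is what makes this family of methods converge quickly.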
This training scheme allows us to set up independent objective functions for each neuron and to update them in parallel. A CPU version of the algorithm partitions the neurons onto threads and uses the standard BLAS libraries to perform the matrix operations. As the vast majority of memory is consumed by the inverse covariance matrix, larger models can be accommodated by reducing precision for all state variables and using a packed symmetric matrix for the covariance (see SymmetricFormats.jl for the SymmetricPacked type definition). These memory-use optimizations have the added benefit of being faster. For the GPU version, custom batched BLAS kernels were written for packed symmetric matrices (see BatchedBLAS.jl for the batched_spmv! and batched_spr! functions).
We benchmarked on synthetic targets consisting of sinusoids with identical frequencies and random phases. For a model with one million neurons, 512 static connections per neuron, and 45 plastic connections per neuron, the CPU code took 1260 seconds per training iteration on a 48-core Intel machine and the GPU code took 48 seconds on an Nvidia A100. For this connectivity pattern, one million is the largest number of neurons (within a factor of two) that could fit in the 80 GB GPU. The CPU cores, with 768 GB of RAM, accommodated four million neurons with this connectivity.
We also tested our algorithm's ability to learn real target functions using 50,000 neurons recorded in five different brain regions from a mouse performing a decision-making task. The recording intervals were 3 sec long and spikes rates averaged 7 Hz. Replacing the static connections with random Gaussian noise and using 256 plastic connections, the model achieved a correlation of 0.8 between the desired and learned currents in 30 minutes of training time.
Our work enables one to train spiking recurrent networks to reproduce the spiking activity of huge data sets of recorded neurons in a reasonable amount of time. By doing so, it facilitates analyzing the relations between connectivity patterns, network dynamics and brain functions in networks of networks in the brain. We also introduce two new Julia packages to better support packed symmetric matrices.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/UTTHUM/
Blue
Ben Arthur
Christopher Kim
PUBLISH
9RFTHY@@pretalx.com
-9RFTHY
Simple Chains: Fast CPU Neural Networks
en
en
20220728T174000
20220728T175000
0.01000
Simple Chains: Fast CPU Neural Networks
SimpleChains is a pure-Julia library that is simple in two ways:
1. All kernels are simple loops (it leverages LoopVectorization.jl for performance).
2. It only supports simple (feedforward) neural networks.
It additionally manages memory manually, and currently relies on hand-written pullback definitions.
In combination, these allow it to be 50x faster than Flux training an MNIST example on a 10980XE.
This talk will focus on introducing the library, showing off a few examples, and explaining some of they "why" behind it's performance.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/9RFTHY/
Blue
Chris Elrod
PUBLISH
DFYH73@@pretalx.com
-DFYH73
Automated Geometric Theorem Proving in Julia
en
en
20220728T190000
20220728T191000
0.01000
Automated Geometric Theorem Proving in Julia
Geometry has been central in formal reasoning, with Euclid's *Elements* being the first example of an axiomatic system. Centuries later, Euclidean geometry is still central, e.g. in mathematical education as an introduction to formal proofs. To make things more exciting, Tarski proved in the early 1900s that Euclidean geometry is decidable, that is, a computer program should be able to answer questions like "is this statement true?". This opened several interesting questions: How can I *efficiently* prove statements in Euclidean geometry? Can I generate *readable* proofs? During the talk, I will touch on those questions while introducing [GeometricTheoremProver.jl](https://github.com/lucaferranti/GeometricTheoremProver.jl), a package for automated reasoning in Euclidean geometry written fully in Julia. The talk will combine a short overview of geometric theorem proving concepts with hands-on demos on how to use the package to write and prove statements in Euclidean geometry. Finally, the talk will also present a roadmap for the package, hopefully giving pointers to the interested listener on how to contribute.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DFYH73/
Blue
Luca Ferranti
PUBLISH
VAZYBR@@pretalx.com
-VAZYBR
SIMD-vectorized implementation of high order IRK integrators
en
en
20220728T191000
20220728T192000
0.01000
SIMD-vectorized implementation of high order IRK integrators
We present a preliminary version of a SIMD-vectorized implementation of the sixteenth order implicit Runge-Kutta integrator IRKGL16 implemented in the Julia package IRKGaussLegendre.jl.
The solver IRKGL16 is an implicit Runge-Kutta integrator of collocation type based on the Gauss-Legendre quadrature formula of 8 nodes. It is intended for high precision numerical integration of non-stiff systems of ordinary differential equations. In its sequential implementation, the scheme has interesting properties (symplecticness and time-symmetry) that make it particularly useful for long-term integrations of conservative problems. Such properties are also very useful for Scientific Machine Learning applications, as gradients can be exactly calculated by integrating backward in time the adjoint equations.
For numerical integration of typical non-stiff problems with very high accuracy, beyond the precision offered by double precision (i.e., standard IEEE binary64 floating-point) arithmetic, our sequential implementation of IRKGL16 is more efficient than the high order explicit Runge-Kutta schemes implemented in the standard package DifferentialEquations.jl. However, our sequential implementation of IRKGL16 is generally unable to outperform them in double precision arithmetic.
We show that a vectorized implementation of IRKGL16 that exploits the SIMD-based parallelism offered by modern processors can be more efficient than high order explicit Runge-Kutta methods even for double precision computations. We demonstrate this by comparing our vectorized implementation of IRKGL16 with a 9th order explicit Runge-Kutta method (Vern9 from DifferentialEquations.jl) on different benchmark problems.
Our current implementation (https://github.com/mikelehu/IRKGL_SIMD.jl) depends on the Julia package SIMD.jl to efficiently perform computations on vectors with eight Float64 numbers. The right-hand side of the system of ODEs to be integrated has to be implemented as a generic function defined in terms of the arithmetic operations and elementary functions implemented for vectors in the package SIMD.jl. The state variables must be collected in an array of Float64 or Float32 floating point numbers. The SIMD-based vectorization process is performed automatically under the hood.
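The generic-RHS idea described above can be sketched in plain Julia (this is not the IRKGL_SIMD.jl API; the function and variable names are illustrative): because the right-hand side is written generically, the same code evaluates a scalar state or a pack of eight values at once, which SIMD.jl then maps onto vector registers.

```julia
# Sketch of the vectorization idea in plain Julia (not the IRKGL_SIMD.jl API):
# the ODE right-hand side is written generically, so the same code runs on a
# scalar or on a pack of eight values processed together.
rhs(u, p) = p .* u                    # toy linear ODE, du/dt = p*u
u_pack = ntuple(_ -> 2.0, 8)          # eight stage values packed together
@assert rhs(2.0, -1.0) == -2.0        # scalar evaluation
@assert rhs(u_pack, -1.0) == ntuple(_ -> -2.0, 8)  # packed evaluation
```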
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/VAZYBR/
Blue
Mikel
Joseba Makazaga
Ander Murua
PUBLISH
XJTDWH@@pretalx.com
-XJTDWH
Zero knowledge proofs of shuffle with ShuffleProofs.jl
en
en
20220728T192000
20220728T193000
0.01000
Zero knowledge proofs of shuffle with ShuffleProofs.jl
Zero-knowledge proofs (ZKP) are key to making distributed applications privacy-preserving while keeping participants accountable. Widely used in remote electronic voting system designs and cryptocurrencies, they are still hard to understand and tinker with, and thus are accessible only to a tiny minority of skilled cryptographers, dampening the creation of new innovative solutions.
An exciting ZKP application is making a re-encryption mix in the ElGamal cryptosystem accountable for not adding, removing, or modifying ciphertexts. While multiple protocols exist for the purpose, none is as battle-tested as the Wikstrom-Terelius variant implemented in the Verificatum library, used to make election systems verifiable in Estonia, Norway, Switzerland and elsewhere. But it is far from ideal to tinker with, as it is implemented in Java. In ShuffleProofs.jl, I implement a Verificatum-compatible non-interactive zero-knowledge verifier and prover for correct re-encryption, improving its accessibility for non-practitioners.
To demonstrate its usefulness and bring every listener onto the same page, I shall discuss a typical ElGamal voting system, widely used as the foundation for many designs, representing it in only 30 lines of Julia code. After discussing the properties of the system, I will demonstrate how to add verifiability so that even if an adversary controlled the re-encryption mix server, it would not be able to add, remove or modify votes without being noticed.
I shall also demonstrate how we can use ShuffleProofs.jl to verify Verificatum-generated proofs of shuffle, which can help independent auditors verify real elections in the field. In addition, I shall touch a bit on how one can implement their own verifier as a finite state machine, making ShuffleProofs.jl future-proof for all sorts of implementations. Lastly, I will recap and articulate some practices for implementing zero-knowledge proofs in Julia and for making them accessible for wider audiences to tinker with.
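To make the re-encryption idea concrete, here is a toy ElGamal sketch in plain Julia (not the ShuffleProofs.jl API; the group parameters are illustrative and far too small to be secure): re-encrypting multiplies a ciphertext by a fresh encryption of 1, so the ciphertext changes while the vote does not.

```julia
# Toy ElGamal re-encryption over a small prime modulus (illustrative only).
p, g = 1019, 2                      # toy prime modulus and generator
sk = 77                             # secret key
pk = powermod(g, sk, p)             # public key
enc(m, r) = (powermod(g, r, p), mod(m * powermod(pk, r, p), p))
dec(c) = mod(c[2] * invmod(powermod(c[1], sk, p), p), p)
reenc(c, r) = (mod(c[1] * powermod(g, r, p), p), mod(c[2] * powermod(pk, r, p), p))
c  = enc(123, 5)
c2 = reenc(c, 9)                    # multiply by a fresh encryption of 1
@assert c2 != c                     # ciphertext looks different...
@assert dec(c2) == dec(c) == 123    # ...but the plaintext is unchanged
```

A proof of shuffle then certifies that a mix output is exactly such a permuted re-encryption of its input, without revealing the permutation.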
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/XJTDWH/
Blue
Janis Erdmanis
PUBLISH
8GLBMW@@pretalx.com
-8GLBMW
MagNav.jl: airborne Magnetic anomaly Navigation
en
en
20220728T193000
20220728T194000
0.01000
MagNav.jl: airborne Magnetic anomaly Navigation
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/8GLBMW/
Blue
Deleted User
PUBLISH
UTT8SE@@pretalx.com
-UTT8SE
Validating a tsunami model for coastal inundation
en
en
20220728T194000
20220728T195000
0.01000
Validating a tsunami model for coastal inundation
A computational model of a physical process requires careful validation before it can be trusted to tell us something useful about the world. In this research, we did not set out to formulate a new model, but to implement an existing formulation in Julia. The tsunami model presented by Yamazaki et al. in [1] is a depth-averaged, nonhydrostatic fluid model with a free surface, capable of simulating tsunami waves as they transform and run up on land. Though the authors presented their validation results, we still needed a suite of tests to help verify that our implementation matches the specification, and that it is suitable for our application area.
In this talk, we will explore the following kinds of validation tests for numerical tsunami models by walking through examples with our Julia implementation.
- Conservation of mass
- Solution convergence
- Comparison to analytical solutions
- Comparison to laboratory experiments
As a first-principles measure of validity, a fluid model needs to conserve mass--that is, as the model progresses over time, there should always be the same amount of fluid in the model.
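This check can be sketched in a few lines of Julia (field names are hypothetical; the real model tracks a 2-D depth field with wet/dry cells):

```julia
# Minimal mass-conservation check: total fluid volume should be invariant
# as the model steps forward in time.
h  = ones(100)                 # water-column depth per cell [m]
dx = 0.5                       # cell width [m]
total_mass(h, dx) = sum(h) * dx
m0 = total_mass(h, dx)
# ... advance the model in time, then recompute and compare:
m1 = total_mass(h, dx)
@assert isapprox(m0, m1; rtol=1e-12)
```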
Another basic test of a numerical model is solution convergence. It is necessary to discretize space and time for a fluid model, and as the resolution increases (i.e., as the discretization size decreases) it is expected that the solutions converge.
Centuries of study of fluid mechanics have provided analytical solutions to many idealized wave scenarios. These are useful for comparing against numerical models. We will look at the translation of a solitary wave (a wave that propagates without changing shape).
Analytical wave theories can't describe all the ways that waves interact with complex bottom surfaces, so next we turn to laboratory experiments. Over recent decades, researchers have performed experiments in large wave tanks, generating waves for various scenarios and measuring the effects. We recreate several laboratory experiments with our model and compare the results.
----
[1] Yamazaki, Y., Kowalik, Z. and Cheung, K.F. (2009), Depth-integrated, non-hydrostatic model for wave breaking and run-up. International Journal for Numerical Methods in Fluids, 61: 473-497. https://doi.org/10.1002/fld.1952
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/UTT8SE/
Blue
Justin Mimbs
PUBLISH
U7WAAD@@pretalx.com
-U7WAAD
JCheck.jl: Randomized Property Testing Made Easy
en
en
20220728T195000
20220728T200000
0.01000
JCheck.jl: Randomized Property Testing Made Easy
[Slides](https://www.patrickfournier.ca/juliacon2022/)
[JuliaHub](https://juliahub.com/ui/Packages/JCheck/xkdfQ/)
Since Julia's main purpose is technical computing, we believe its users could benefit from an easy-to-use framework for property testing. A lot of Julia code is in fact implementations of various kinds of abstract objects for which at least some theoretical properties are known beforehand. While randomized property testing alone is not usually sufficient for serious software development, it can definitely be a great addition to a battery of tests. For that reason, we designed JCheck.jl so it integrates seamlessly with Test.jl.
To make JCheck agreeable to use, a lot of care has been put into the efficiency of the input generation process. Random inputs are reused to keep the amount of generated data to a minimum. "Built-in" generators, which can be used as building blocks for more complex ones, have been designed to be as efficient as possible.
JCheck can be extended to support custom types in two ways. Type unions of types for which generators are implemented are supported automatically. More intricate types are supported through method dispatch. Note that it is trivial to define a generator for a type for which we can already generate random instances.
JCheck supports so-called "special cases", i.e. non-random cases that are always checked.
In order to make the analysis of failing cases easier, JCheck supports shrinking. When a failing case is detected, JCheck tries to make it as simple as possible. Whether shrunk or not, failing cases can be serialized to a file to make further investigation easier.
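The core idea can be sketched by hand in a few lines of plain Julia (this is not the JCheck.jl API, just the underlying technique): generate random inputs and assert that a stated property holds for all of them.

```julia
# Hand-rolled randomized property test: reversing a vector twice
# should be the identity, for vectors of any length and content.
using Random
Random.seed!(42)                             # reproducible inputs
prop_reverse(v) = reverse(reverse(v)) == v   # the property under test
for _ in 1:100
    v = rand(Int, rand(0:10))                # random vector, random length
    @assert prop_reverse(v)
end
```

A framework like JCheck adds the missing pieces on top of this loop: efficient generators, special cases, shrinking of failures, and Test.jl integration.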
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/U7WAAD/
Blue
Patrick Fournier
PUBLISH
J9AX9Y@@pretalx.com
-J9AX9Y
Juliaup - The Julia installer and version multiplexer
en
en
20220728T200000
20220728T201000
0.01000
Juliaup - The Julia installer and version multiplexer
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/J9AX9Y/
Blue
David Anthoff
PUBLISH
8SWDQG@@pretalx.com
-8SWDQG
Contributing to Open Source with Technical Writing.
en
en
20220728T201000
20220728T202000
0.01000
Contributing to Open Source with Technical Writing.
This talk covers two main headings: Open Source and Technical Writing. It also explores how they relate to each other. The following aspects will be touched on:
- What Open Source is
- What Technical Writing is
- How to contribute to Open Source with Technical Writing
- How it helps the Julia Ecosystem
- Steps on beginning a Technical Writing Journey
- Tips
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/8SWDQG/
Blue
Ifihanagbara Olusheye
PUBLISH
RHYB8M@@pretalx.com
-RHYB8M
JuliaGPU
en
en
20220728T123000
20220728T131500
0.04500
JuliaGPU
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/RHYB8M/
BoF
Valentin Churavy
Tim Besard
PUBLISH
QVESXM@@pretalx.com
-QVESXM
Julia in HPC
en
en
20220728T163000
20220728T180000
1.03000
Julia in HPC
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/QVESXM/
BoF
Valentin Churavy
Johannes Blaschke
Michael Schlottke-Lakemper
PUBLISH
7M3BKA@@pretalx.com
-7M3BKA
BoF - JuliaLang en Español
en
en
20220728T190000
20220728T203000
1.03000
BoF - JuliaLang en Español
We will open a space for discussion where Spanish speakers in the JuliaLang community can meet, share, and coordinate so that the Spanish-speaking Julia user community can grow. No matter your level of proficiency with Julia, everyone is welcome here.
We will aim to spend 15-20 minutes on each topic and gather feedback from everyone, so that through a structured and moderated discussion we leave the meeting with concrete goals.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/7M3BKA/
BoF
Miguel Raz Guzmán Macedo
Pamela Alejandra Bustamante Faúndez
Agustín Covarrubias
Argel Ramírez Reyes
PUBLISH
XBX9BH@@pretalx.com
-XBX9BH
Improving nonlinear programming support in JuMP
en
en
20220728T163000
20220728T170000
0.03000
Improving nonlinear programming support in JuMP
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/XBX9BH/
JuMP
Oscar Dowson
PUBLISH
XQMLCH@@pretalx.com
-XQMLCH
Benchmarking Nonlinear Optimization with AC Optimal Power Flow
en
en
20220728T170000
20220728T173000
0.03000
Benchmarking Nonlinear Optimization with AC Optimal Power Flow
The AC Optimal Power Flow problem (AC-OPF) is one of the most foundational optimization problems arising in the design and operation of power networks. Mathematically, AC-OPF is a large-scale, sparse, non-convex nonlinear continuous optimization problem. In practice, AC-OPF is most often solved to local optimality conditions using interior point methods. This project proposes AC-OPF as a _proxy application_ for testing the viability of different nonlinear optimization frameworks, as performant solutions to AC-OPF have proven to be a necessary (but not always sufficient) condition for solving a wide range of industrial network optimization tasks.
### Objectives
* Communicate the technical requirements for solving real-world continuous non-convex mathematical optimization problems.
* Highlight scalability requirements for the problem sizes that occur in practice.
* Provide a consistent implementation for solving AC-OPF in different nonlinear optimization frameworks.
### AC-OPF Implementations
This work adopts the mathematical model and data format that is used in the IEEE PES benchmark library for AC-OPF, [PGLib-OPF](https://github.com/power-grid-lib/pglib-opf). The Julia package [PowerModels](https://github.com/lanl-ansi/PowerModels.jl) is used for parsing the problem data files and making standard data transformations.
The implementations of the AC-OPF problem in various Julia NonLinear Programming (NLP) frameworks are available in [Rosetta-OPF](https://github.com/lanl-ansi/rosetta-opf) project, which currently includes implementations in [JuMP](https://github.com/jump-dev/JuMP.jl), [NLPModels](https://github.com/JuliaSmoothOptimizers/NLPModels.jl), [Nonconvex](https://github.com/JuliaNonconvex/Nonconvex.jl), [Optim](https://github.com/JuliaNLSolvers/Optim.jl) and [Optimization](https://github.com/SciML/Optimization.jl). This work reports on the solution quality and runtime of solving the PGLib-OPF datasets with each of these NLP frameworks.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/XQMLCH/
JuMP
Carleton Coffrin
PUBLISH
LEG8TJ@@pretalx.com
-LEG8TJ
Advances in Transformations and NLP Modeling for InfiniteOpt.jl
en
en
20220728T173000
20220728T180000
0.03000
Advances in Transformations and NLP Modeling for InfiniteOpt.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/LEG8TJ/
JuMP
Joshua Pulsipher
PUBLISH
YTTXMK@@pretalx.com
-YTTXMK
The JuliaSmoothOptimizers (JSO) Organization
en
en
20220728T190000
20220728T193000
0.03000
The JuliaSmoothOptimizers (JSO) Organization
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/YTTXMK/
JuMP
Dominique Orban
PUBLISH
UDKTPD@@pretalx.com
-UDKTPD
PDE-constrained optimization using JuliaSmoothOptimizers
en
en
20220728T193000
20220728T200000
0.03000
PDE-constrained optimization using JuliaSmoothOptimizers
The study of algorithms for optimization problems has become the backbone of data science and its many applications. Nowadays, new challenges involve ever-increasing amounts of data and model complexity. Examples include optimization problems constrained by partial differential equations (PDE), which are frequent in imaging, signal processing, shape optimization, and seismic inversion. In this presentation, we showcase a new optimization infrastructure for modeling and solving PDE-constrained problems in the Julia programming language. We build upon the JuliaSmoothOptimizers infrastructure for modeling and solving continuous optimization problems. We introduce PDENLPModels.jl, a package that discretizes PDE-constrained optimization problems using finite element methods via Gridap.jl. The resulting problem can then be solved by solvers tailored for large-scale optimization and implemented in pure Julia, such as DCISolver.jl and FletcherPenaltyNLPSolver.jl.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/UDKTPD/
JuMP
Tangi Migot
PUBLISH
L9SQZ9@@pretalx.com
-L9SQZ9
Generalized Disjunctive Programming via DisjunctiveProgramming
en
en
20220728T200000
20220728T203000
0.03000
Generalized Disjunctive Programming via DisjunctiveProgramming
Modeling systems with discrete-continuous decisions is commonly done in algebraic form with mixed-integer programming models, which can be linear or nonlinear in the continuous variables. A more systematic approach to modeling such systems is to use Generalized Disjunctive Programming (GDP) (Chen & Grossmann, 2019; Grossmann & Trespalacios, 2013), which generalizes the Disjunctive Programming paradigm proposed by Balas (2018). GDP allows modeling systems from a logic-based level of abstraction that captures the fundamental rules governing such systems via algebraic constraints and logic. The models obtained via GDP can then be reformulated into the pure algebraic form best suited for the application of interest. The two main reformulation strategies are the Big-M reformulation (Nemhauser & Wolsey, 1999; Trespalacios & Grossmann, 2015) and the Convex-Hull reformulation (Lee & Grossmann, 2000), the latter of which yields tighter models than those typically used in standard mixed-integer programming (Grossmann & Lee, 2003).
DisjunctiveProgramming.jl supports reformulations for disjunctions containing linear, quadratic, and/or nonlinear constraints. When using the Big-M reformulation, the user can specify the Big-M value to be used, which can either be general to the disjunction or specific to each constraint expression in the disjunction. Alternatively, the user can allow the package to determine the tightest Big-M value based on the variable bounds and constraint functions using interval arithmetic (IntervalArithmetic.jl [Sanders, et al., 2022]). When the Convex-Hull reformulation is selected, the perspective function approximation from Furman, et al. (2020) is used for nonlinear constraints with a specified ϵ tolerance value. This is done by relying on manipulation of symbolic expressions via Symbolics.jl (Gowda, et al., 2022).
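The Big-M idea can be illustrated numerically in a few lines of plain Julia (illustrative values, not the DisjunctiveProgramming.jl API): a constraint x <= 5 that should hold only when a binary indicator y equals 1 is written as x <= 5 + M*(1 - y), with M large enough that the constraint is vacuous when y = 0.

```julia
# Big-M reformulation of an indicator constraint: "if y == 1 then x <= 5".
M = 100.0                        # must exceed any feasible violation of x <= 5
holds(x, y) = x <= 5 + M * (1 - y)
@assert holds(3.0, 1)            # constraint active and satisfied
@assert !holds(6.0, 1)           # constraint active and violated
@assert holds(50.0, 0)           # constraint relaxed when y == 0
```

The tightest valid M follows from the bounds on x, which is exactly what the interval-arithmetic option in the package automates; a smaller M yields a tighter continuous relaxation.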
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/L9SQZ9/
JuMP
Hector D. Perez
PUBLISH
FSCXXS@@pretalx.com
-FSCXXS
Julius Tech Sponsored Forum
en
en
20220728T190000
20220728T194500
0.04500
Julius Tech Sponsored Forum
Enterprise adoption for Julia can be a difficult process for developers and engineers to champion. In this sponsored forum, we invite leading industry experts to talk about the common challenges organizations face when bringing Julia and Julia based solutions onboard. We will also discuss best practices and practical ways to approach integration. The audience will be provided the opportunity to submit questions as well.
1. James Lee, Julius Technologies
2. Tom Kwong, Meta
3. Dr. Chris Rackauckas, Julia Computing
4. Jarrett Revels, Beacon Biosignals
PUBLIC
CONFIRMED
Sponsor forum
https://pretalx.com/juliacon-2022/talk/FSCXXS/
Sponsored forums
PUBLISH
RQZBFB@@pretalx.com
-RQZBFB
which(methods)
en
en
20220729T130000
20220729T131000
0.01000
which(methods)
The semantics of multiple dispatch can sound simple and obvious at first glance, but there are lots of strange cases to consider when you start to really explore the details. So how do we do this in a mere few hundred lines of C code? What happens if two methods overlap in applicability? What happens when the user calls `invoke` instead? How do we track when something changes after loading a new package or Revise-ing an existing one?
I will walk through the process of taking the full list of methods in Julia and picking out exactly which method to call. Then take a look at how we extend that action to re-evaluate the results after every new method that gets added.
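The behavior being explained can be seen from the language side in a short example (standard Julia only): dispatch picks the most specific applicable method, `which` reveals the choice, `invoke` bypasses it, and adding a new method changes the answer for future calls.

```julia
# Method selection, introspection with `which`, and bypass with `invoke`.
f(x::Number) = "number"
f(x::Int) = "int"
@assert f(1) == "int"                      # most specific applicable method wins
@assert f(1.0) == "number"
m = which(f, Tuple{Int})                   # inspect the method dispatch chose
@assert m.sig == Tuple{typeof(f), Int}
@assert invoke(f, Tuple{Number}, 1) == "number"  # call the less specific method
f(x::Bool) = "bool"                        # new method invalidates prior choice
@assert f(true) == "bool"                  # previously dispatched to ::Number
```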
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/RQZBFB/
Green
Jameson Nash
PUBLISH
B9DJ9G@@pretalx.com
-B9DJ9G
Building an inclusive (and fun!) Julia community
en
en
20220729T131000
20220729T132000
0.01000
Building an inclusive (and fun!) Julia community
In this talk, we focus on the advances in gender diversity that have been made by the Julia community this year: from the development of the new beginner-level live course "Learn Julia with Us" to the continued outreach, community building and mutual support by Julia Gender Inclusive as a whole.
Kyla and Julia will share their experiences in co-organizing Julia Gender Inclusive and an equivalent group in the R community. The talk hopes to inspire and share tips for fostering a community that is inclusive and accessible, encouraging underrepresented groups to learn and lead with confidence, and creating an atmosphere that is supportive for all, no matter their background!
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/B9DJ9G/
Green
Kyla McConnell
Julia Müller
PUBLISH
EKZHPS@@pretalx.com
-EKZHPS
Help! How to grow a corporate Julia community?
en
en
20220729T133000
20220729T140000
0.03000
Help! How to grow a corporate Julia community?
ASML is a 30,000-employee company and the world leader in photolithography systems, which are crucial for semiconductor manufacturing. For many years we accepted the two-language problem and spent our time converting MATLAB/Python prototypes into C/C++/Java production code. But over the last two years we have grown an internal ASML Julia community from 3 initial enthusiasts to over 300 Julians. We would like to share our ongoing journey with you and inspire other Julians who want to kickstart similar communities at their companies.
Our journey included many obstacles, but we are now in a good position with Julia at ASML. Thanks to Julia’s package manager and LocalRegistry.jl, it was easy to set up an internal registry. This led to a flourishing internal Julia package ecosystem with currently over 50 registered packages used by several of ASML’s research and development departments.
To grow further, we foresee plenty of challenges. Changing the software culture from a project-driven to an inner-source approach is one such challenge. Another challenge relates to deployment of Julia into all of our existing software platforms, ranging from embedded hardware systems to cloud services.
By overcoming these challenges, we will finally solve the two language problem at ASML and bring different engineering competencies together. It shouldn’t matter if you are a domain expert, data scientist, data engineer, software engineer, software architect, machine learning engineer, business analyst or anything else. If you can code then you can learn Julia and join in on the fun.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/EKZHPS/
Green
Matthijs Cox
PUBLISH
9N9HZ3@@pretalx.com
-9N9HZ3
Keynote - Husain Attarwala
en
en
20220729T143000
20220729T151500
0.04500
Keynote - Husain Attarwala
Keynote - Husain Attarwala, Moderna
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon-2022/talk/9N9HZ3/
Green
Husain Attarwala
PUBLISH
SFJBMG@@pretalx.com
-SFJBMG
Relational AI Sponsored Talk
en
en
20220729T151500
20220729T153000
0.01500
Relational AI Sponsored Talk
PUBLIC
CONFIRMED
Platinum sponsor talk
https://pretalx.com/juliacon-2022/talk/SFJBMG/
Green
PUBLISH
SSUNVP@@pretalx.com
-SSUNVP
The State of Julia in 2022
en
en
20220729T153000
20220729T160000
0.03000
The State of Julia in 2022
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/SSUNVP/
Green
Viral B Shah
PUBLISH
HRZUUY@@pretalx.com
-HRZUUY
Closing remarks
en
en
20220729T160000
20220729T161000
0.01000
Closing remarks
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/HRZUUY/
Green
PUBLISH
8A83SB@@pretalx.com
-8A83SB
Dagger.jl Development and Roadmap
en
en
20220729T163000
20220729T164000
0.01000
Dagger.jl Development and Roadmap
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/8A83SB/
Green
Julian P Samaroo
PUBLISH
ZA9RYG@@pretalx.com
-ZA9RYG
DTables.jl - quickstart, current state and next steps!
en
en
20220729T164000
20220729T165000
0.01000
DTables.jl - quickstart, current state and next steps!
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ZA9RYG/
Green
Krystian Guliński
PUBLISH
N3J9KR@@pretalx.com
-N3J9KR
`BesselK.jl`: a fast differentiable implementation of `besselk`
en
en
20220729T165000
20220729T170000
0.01000
`BesselK.jl`: a fast differentiable implementation of `besselk`
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/N3J9KR/
Green
Christopher J Geoga
PUBLISH
LZELWV@@pretalx.com
-LZELWV
2022 Update: Diversity and Inclusion in the Julia community
en
en
20220729T170000
20220729T171000
0.01000
2022 Update: Diversity and Inclusion in the Julia community
D&I continues to be a challenge in technical communities around the world. While the Julia community has done a lot of work trying to address this challenge, the work is very much still ongoing. As part of our commitment to D&I, we have been providing yearly updates on the state of D&I in the Julia community (with a slight focus on gender diversity since we can get those stats from Google Analytics). We hope that this process keeps us accountable to continue to do more to promote equity and inclusion.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/LZELWV/
Green
Logan Kilpatrick
PUBLISH
S8D3NM@@pretalx.com
-S8D3NM
The JuliaCon Proceedings
en
en
20220729T171000
20220729T172000
0.01000
The JuliaCon Proceedings
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/S8D3NM/
Green
Mathieu Besançon
Carsten Bauer
Ranjan Anantharaman
Valentin Churavy
PUBLISH
7Q8WXA@@pretalx.com
-7Q8WXA
Interplay between chaos and stochasticity in celestial mechanics
en
en
20220729T172000
20220729T173000
0.01000
Interplay between chaos and stochasticity in celestial mechanics
Chaotic behavior is omnipresent in the dynamical systems of celestial mechanics, and it is relevant both for understanding and for leveraging the stability of planetary systems, the inner solar system in particular. Examples include quantifying the probability of impact of near-Earth objects after close encounters with celestial bodies; designing robust low-energy transfer trajectories, not limited to invariant manifolds but also leveraging the weak stability boundary for the design of ballistic capture trajectories in time-dependent dynamical systems; and characterizing diffusion processes in nearly integrable Hamiltonian systems. In order to obtain a robust description of chaos, one able to describe chaotic motion in dynamical systems characterized by parametric uncertainties and, in parallel, to investigate the effect of random perturbations (e.g. the Langevin equation, jump-diffusion processes), this work builds on "Polynomial Stochastic Dynamic Indicators" (Vasile, Manzi), in which tools from functional analysis, such as orthogonal polynomials (e.g. PolyChaos.jl) and, more generally, feature maps from the theory of support vector machines and kernel methods, are used to approximate the functional describing a positive measure defining the state of the system.
These probabilistic generalizations of existing chaos indicators will be computed for a number of dynamical systems (e.g. the Duffing oscillator, the circular and elliptic restricted three-body problems, etc.), and the relevance of uncertainty quantification for robust trajectory design will be discussed.
This framework will be used to understand the effect of uncertainty and stochasticity on the behaviour of both individual trajectories and ensembles of trajectories coming from the sampling of the probabilistic space; the influence of this on the overall goal of predicting chaotic dynamical systems characterized by parametric uncertainties will be assessed. Bifurcating phenomena and invariant sets in time-dependent dynamical systems will be discussed, particularly in the context of Lagrangian coherent structures.
Moreover, the relation between memory effects in non-Markovian processes, fractional calculus and time-delay embedding will be outlined using the aforementioned tools.
The computational efficiency of numerical integration schemes of Ordinary and Stochastic Differential Equations will be exploited to produce animations describing bifurcating phenomena and the chaotic nature of dynamical systems.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/7Q8WXA/
Green
Matteo Manzi
PUBLISH
TSQ8XC@@pretalx.com
-TSQ8XC
How to be an effective Julia advocate?
en
en
20220729T173000
20220729T174000
0.01000
How to be an effective Julia advocate?
This talk will share the lessons learned from speaking, posting, and presenting about Julia over the last 2+ years. It doesn't matter if you are an expert user or a novice; these principles will allow you to effectively highlight the benefits of Julia in a way that makes your audience receptive to it.
We will cover:
- How to frame Julia as a language
- Sharing use cases
- What to avoid
- Getting people involved
And more!
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/TSQ8XC/
Green
Logan Kilpatrick
PUBLISH
QTN3ZY@@pretalx.com
-QTN3ZY
Optimize your marketing spend with Julia!
en
en
20220729T174000
20220729T175000
0.01000
Optimize your marketing spend with Julia!
This talk requires no previous knowledge.
Media Mix Modelling (MMM) is the go-to analysis for deciding how to spend your precious marketing budget. It has been around for more than half a century, and its importance is poised to increase with the rise of the privacy-conscious consumer.
There are a few key marketing concepts that we will cover, e.g., ad stock, saturation and ROAS.
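As a taste of one of these concepts, here is a sketch of geometric ad stock in plain Julia (the decay rate and function name are illustrative, not from the talk's notebook): past spend carries over into later periods with exponential decay.

```julia
# Geometric ad stock, a standard MMM transform: a_t = x_t + theta * a_{t-1},
# where x is the raw spend series and theta in [0, 1) is the carryover rate.
function adstock(x::AbstractVector{<:Real}, theta::Real)
    a = zeros(Float64, length(x))
    carry = 0.0
    for (i, xi) in enumerate(x)
        carry = xi + theta * carry   # today's spend plus decayed past effect
        a[i] = carry
    end
    return a
end
@assert adstock([100.0, 0.0, 0.0], 0.5) == [100.0, 50.0, 25.0]
```

In the Bayesian setup, theta becomes a model parameter whose posterior Turing.jl infers from the data, alongside the saturation curve parameters.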
We will leverage the power of Bayesian inference with Turing.jl to establish the effectiveness of our campaigns (/marketing channels). The main advantage of the Bayesian approach will be the quantification of uncertainty, which we will channel into our decision-making when deciding on the budget allocations.
The "optimal" spend strategy ("budget") will be found with the help of Metaheuristics.jl.
Overall, we will draw on Julia's core strengths, such as composability and speed.
The implementation closely follows the methodology of the amazing Robyn package, but it leverages Bayesian inference for the marketing parameters. While there are many resources available for Python and R, I believe this is the first tutorial for MMM in Julia.
Following the talk, you can use the provided notebook and scripts to replicate this analysis for your marketing budget.
You can find the notebook, presentation and additional resources in the following repository:
- [GitHub Repo](https://github.com/svilupp/JuliaCon2022/)
- [PDF of the presentation](https://github.com/svilupp/JuliaCon2022/blob/main/MediaMixModellingDemo/presentation/presentation.pdf)
Session photo thanks to <a href="https://unsplash.com/@diggitymarketing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Diggity Marketing</a> on <a href="https://unsplash.com/s/photos/digital-marketing?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/QTN3ZY/
Green
Jan Siml
PUBLISH
XLZ7ZN@@pretalx.com
-XLZ7ZN
GatherTown -- Social break
en
en
20220729T180000
20220729T190000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/XLZ7ZN/
Green
PUBLISH
ZMKHUZ@@pretalx.com
-ZMKHUZ
Large-Scale Tabular Data Analytics with BanyanDataFrames.jl
en
en
20220729T190000
20220729T193000
0.03000
Large-Scale Tabular Data Analytics with BanyanDataFrames.jl
More information about BanyanDataFrames.jl can be found on GitHub:
https://github.com/banyan-team/banyan-julia
https://github.com/banyan-team/banyan-julia-examples
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/ZMKHUZ/
Green
Caleb Winston
Cailin Winston
PUBLISH
VVTKRB@@pretalx.com
-VVTKRB
How to debug Julia simulation codes (ODEs, optimization, etc.!)
en
en
20220729T193000
20220729T200000
0.03000
How to debug Julia simulation codes (ODEs, optimization, etc.!)
Debugging simulation codes can be very different from debugging "standard" or "simple" codes. There are many details that can show up that the user needs to be aware of. Thus, while there have been many beginner tutorials on using Julia, and many tutorials on how to use SciML ecosystem tools like DifferentialEquations.jl, there has never been a tutorial that says "okay, I got this error when using Optim.jl, what do I do now?". Some major pieces have been written which condense such information, such as the DifferentialEquations.jl PSA on Discourse (https://discourse.julialang.org/t/psa-how-to-help-yourself-debug-differential-equation-solving-issues/62489), but we believe there remains much to say.
Besides, a video walkthrough is simply the best way to show how I actually do it.
So let's do it! But what would this entail? There are many topics to cover, including:
- How to read the gigantic stack traces that arise from dual number issues. Why does f(du,u::Array{Float64},p,t) fail with this error? Why can dual numbers cause issues in some mutating code? How do you use https://github.com/SciML/PreallocationTools.jl to solve these dual number issues?
- When trying to debug code deep within some package context, how do you do it in a "nice" way (i.e. without the slow interpreted mode of Debugger.jl)? The answer is using Revise with tools like `@show` and `x = Ref{Any}()`, so that inside the package you can do `Main.x[] = ...`. Never seen this trick before? Then you'll be interested in this talk. We'll showcase how to use these tricks in real-world contexts where such debugging arises.
- What are you supposed to do when you get dt<dtmin or other ODE solver exit warnings? Take u'=u^2-u with u(0)=2; oh wait, analytically that should blow up? How do I find out whether my model is written incorrectly (it is), and how do I figure out what I should be changing?
- When doing optimization, say using GalacticOptim.jl or Flux.jl, what are these "Zygote does not support mutation" errors? Why do they exist and how do I work around them?
- Everyone is asking for an MWE. How do I make a good MWE? How do I figure out what in the package that is likely causing the issue, and use this to help developers help me?
I will continue to grow the list by keeping tabs on what comes up most often in GitHub issues and Discourse posts. At the end of the day, I hope this can be a video that is pasted into thousands of Discourse questions to give people a much more in-depth view of how to fix issues, and potentially train the next generation of "Discourse answerers".
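The Ref-based debugging trick mentioned above can be sketched in a few lines (the function and variable names here are purely illustrative):

```julia
# Sketch of the Revise-friendly stashing trick described above.
# In the REPL (i.e. in Main), create a stash that any code can write into:
x = Ref{Any}()

# Inside the package function you are debugging, stash an intermediate value.
# In a real session, this line would be added to the package source live via Revise.
function troubled_step(u)
    du = u .^ 2 .- u
    Main.x[] = du        # capture the intermediate for inspection after the call
    return du
end

troubled_step([2.0, 3.0])
x[]                       # inspect the captured intermediate: [2.0, 6.0]
```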
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/VVTKRB/
Green
Chris Rackauckas
PUBLISH
LJRHQR@@pretalx.com
-LJRHQR
Scaling up Training of Any Flux.jl Model Made Easy
en
en
20220729T200000
20220729T203000
0.03000
Scaling up Training of Any Flux.jl Model Made Easy
With the scale of datasets and the size of models growing rapidly, one cannot reasonably train these models on a single GPU. It is no secret that training big ML models - be they large language models, image recognition tasks, large PINNs etc. - requires an immense amount of hardware and engineering knowledge.
So far, our tools in FluxML have been limited to training on a single GPU, and there is a pressing need for tooling that can scale up training beyond a single GPU. This is important not just for current Deep Learning models but also to scale training of scientific machine learning models as we see more sophisticated neural surrogates emerge for simulations and modelling. To fulfil this need, we have developed some tools that can reliably and generically scale training of differentiable pipelines beyond a single machine or GPU device. We will be showcasing [ResNetImageNet.jl](https://github.com/DhairyaLGandhi/ResNetImageNet.jl) and [DaggerFlux.jl](https://github.com/FluxML/DaggerFlux.jl), which use Dagger.jl to accelerate training of various model types, and the scaling they achieve.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/LJRHQR/
Green
Dhairya Gandhi
PUBLISH
FSDULA@@pretalx.com
-FSDULA
GatherTown -- Social break
en
en
20220729T203000
20220729T213000
1.00000
GatherTown -- Social break
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/FSDULA/
Green
PUBLISH
UQNLK3@@pretalx.com
-UQNLK3
Fractional Order Computing and Modeling with Julia
en
en
20220729T123000
20220729T130000
0.03000
Fractional Order Computing and Modeling with Julia
In 1695, a letter from L'Hopital to Leibniz marked the birth of fractional calculus: "Can the meaning of derivatives with integer order be generalized to derivatives with non-integer orders? What if the order were 1/2?" Leibniz replied on September 30, 1695: "It will lead to a paradox, from which one day useful consequences will be drawn."
Fractional order computing and modeling has become an increasingly appealing topic in recent decades, as natural phenomena can often be described more faithfully with fractional orders. Since L'Hopital and Leibniz first raised the "non-integer" calculus question, many giants of science have worked hard to advance fractional calculus. It is very helpful in describing linear viscoelasticity, acoustics, rheology, polymeric chemistry, and so forth. Moreover, fractional derivatives have proved to be a very suitable tool for describing the memory and hereditary properties of various materials and processes.
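To give a flavour of what a fractional derivative looks like numerically, here is a minimal Grünwald-Letnikov sketch in plain Julia (for illustration only; FractionalCalculus.jl implements this and many other senses properly):

```julia
# Grünwald-Letnikov fractional derivative of order α at point x with step h.
# The binomial coefficients (-1)^k * C(α, k) are built up recursively.
function gl_fracdiff(f, α, x, h)
    n = Int(floor(x / h))
    acc, coeff = 0.0, 1.0
    for k in 0:n
        acc += coeff * f(x - k * h)
        coeff *= (k - α) / (k + 1)   # next coefficient (-1)^(k+1) * C(α, k+1)
    end
    return acc / h^α
end

# Known result: the half-derivative of f(x) = x at x = 1 is 2/sqrt(π) ≈ 1.1284
gl_fracdiff(identity, 0.5, 1.0, 1e-4)
```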
The SciML organization has done outstanding work on numerical solvers for differential equations, but there are still some kinds of differential equations that SciML has [not supported yet](https://github.com/SciML/DifferentialEquations.jl/issues/461). As the generalization of integer order differential equations, fractional calculus and fractional differential equations have attracted increasing interest since the early 20th century for their important role in science and engineering. Yet most of the existing numerical software is written in Matlab and not well maintained, so users have lacked a unifying tool for fractional order modeling and computing. Inspired by SciML, we initiated the SciFracX organization. Our mission is to make fractional order computing and modeling easier using Julia, to speed up research, and to provide powerful scientific tools. Right now, there are four Julia packages in this organization: FractionalSystems.jl, FractionalCalculus.jl, FractionalDiffEq.jl and FractionalTransforms.jl. We will introduce the FractionalSystems.jl, FractionalDiffEq.jl and FractionalCalculus.jl packages.
The [FractionalSystems.jl](https://github.com/SciFracX/FractionalSystems.jl) package focuses on fractional order control. Inspired by the [FOMCON](https://fomcon.net/) and [FOTF](https://www.mathworks.com/matlabcentral/fileexchange/60874-fotf-toolbox) toolboxes, it aims to provide fractional order modeling in Julia. Building on [ControlSystems.jl](https://github.com/JuliaControl/ControlSystems.jl), FractionalSystems.jl provides similar functionality, mainly time domain and frequency domain modeling and analysis. FractionalSystems.jl is only a few months old, and there is still a lot we plan to do.
The [FractionalDiffEq.jl](https://github.com/SciFracX/FractionalDiffEq.jl) package follows the design pattern of DifferentialEquations.jl. To solve a problem, we first define a ```***Problem``` according to our model, then pass the defined problem to the ```solve``` function and choose an algorithm.
```julia
prob = ***Problem(fun, α, u0, T)
#prob = ***Problem(parrays, oarrays, RHS, T)
solve(prob, h, Alg())
```
Now, FractionalDiffEq.jl is capable of solving fractional order ODEs, PDEs, DDEs, integral equations and nonlinear FDE systems. What impressed us most was a two-times speedup over MATLAB when solving the same problem in Julia (without any performance optimization).
[FractionalCalculus.jl](https://github.com/SciFracX/FractionalCalculus.jl) supports the common senses of fractional derivatives and integrals, including the Caputo, Riemann-Liouville, Hadamard and Riesz senses, among others. To keep it simple and stupid, we use two intuitive functions: ```fracdiff``` for fractional derivatives and ```fracint``` for fractional integrals. All we need to do is pass the function, the order, the evaluation point, the step size and the algorithm we want to use.
```julia
fracdiff(fun, α, point, h, Alg())
```
```julia
fracint(fun, α, point, h, Alg())
```
By using Julia for fractional order computing and modeling, we have indeed observed amazing progress, both in [speed](http://scifracx.org/FractionalDiffEq.jl/dev/system_of_FDE/#System-of-fractional-differential-equations) and in ease of use.
With no comparable organizations in the Python or R communities doing high-performance computing in the fractional order area, the future of SciFracX is promising, and we envision a bright future for Julia here. Our next round of work is clear:
* Keep adding more high performance algorithms.
* Make the API simpler and more elegant.
* Write more illustrative documentation for usability.
* Integrate with the SciML ecosystem to provide users more useful features.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/UQNLK3/
Red
Qingyu Qu
PUBLISH
X89FYS@@pretalx.com
-X89FYS
PointSpreadFunctions.jl - optical point spread functions
en
en
20220729T130000
20220729T133000
0.03000
PointSpreadFunctions.jl - optical point spread functions
Methods of calculating optical point spread functions (PSFs), as implemented in the toolbox https://github.com/RainerHeintzmann/PointSpreadFunctions.jl,
are presented. These methods range from propagating field components via the angular spectrum method using Fourier transforms to a version that applies spatial constraints in each propagation step to avoid wrap-around effects. Another method starts with the analytical solution of a related scalar problem, sinc(r), with r denoting the distance to the focus, which is then modified to account for various influences of high-NA aplanatic optical systems.
The toolbox supports aberrations as specified via Zernike coefficients.
The toolbox also contains practical tools such as a PSF distillation tool which automatically identifies single point sources and averages their measured images with sub-pixel precision.
Future directions may include ways to identify aberrations from measured PSFs.
The toolbox will also be extended towards supporting a wider range of microscopy modes.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/X89FYS/
Red
Rainer Heintzmann
Felix Wechsler
PUBLISH
ZSCNR7@@pretalx.com
-ZSCNR7
Control-systems analysis and design with JuliaControl
en
en
20220729T163000
20220729T170000
0.03000
Control-systems analysis and design with JuliaControl
The control engineer typically carries a large metaphorical toolbox, full of both formal algorithms and heuristic methods. Mathematical modeling, simulation, system identification, frequency-domain analysis and uncertainty modeling and quantification are typical examples of elements of a control-design workflow, all of which may be required to complete a control project. Bits and pieces of this workflow have been present in open-source packages for a long time, but a comprehensive and integrated solution has previously been limited to proprietary and/or legacy languages.
[JuliaControl](https://github.com/JuliaControl/) has been around since 2015, and has steadily grown into a highly capable, open-source ecosystem for control using linear methods. With comparatively low effort, algorithms and data structures in the ecosystem have been made generic with respect to the number type used, opening the doors for high-precision arithmetics, uncertainty propagation, automatic differentiation and symbolic computations in every step of the control workflow from simulation to design and verification. We believe that this feature is unique among control software, and will demonstrate its usefulness to the control theorist and engineer with a few examples.
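The kind of number-type genericity described above can be illustrated in plain Julia (this is a toy discrete-time step response, not the ControlSystems.jl API: the same code, written once, runs with Float64, BigFloat, dual numbers, and so on):

```julia
# A discrete first-order system x[k+1] = a*x[k] + b*u[k] driven by a unit step,
# written generically over any number type T.
function step_response(a::T, b::T, n::Int) where {T<:Number}
    x = zero(T)
    out = Vector{T}(undef, n)
    for k in 1:n
        x = a * x + b          # unit input u ≡ 1 absorbed into b
        out[k] = x
    end
    return out
end

step_response(0.5, 1.0, 3)            # Float64 -> [1.0, 1.5, 1.75]
step_response(big"0.5", big"1.0", 3)  # BigFloat, same code, high precision
```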
While JuliaControl is largely limited in scope to linear control methods, the full breadth of the scientific computing ecosystem in Julia is just around the corner, offering nonlinear optimization, optimal control, and equation-based modeling and simulation. In this talk, we will demonstrate how JuliaControl interoperates with [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl/) and the [DifferentialEquations](https://diffeq.sciml.ai/stable/) ecosystem to extend the scope of the capabilities to simulation and design for nonlinear control systems.
Finally, we will share some of the control-related developments in the proprietary [JuliaSim](https://juliacomputing.com/products/juliasim/) platform, offering advanced functionality like controller autotuning, nonlinear Model-Predictive Control (MPC) and LMI-based methods (Linear Matrix Inequality) for robust analysis and design.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/ZSCNR7/
Red
Fredrik Bagge Carlson
PUBLISH
HU8FVH@@pretalx.com
-HU8FVH
Explaining Black-Box Models through Counterfactuals
en
en
20220729T170000
20220729T173000
0.03000
Explaining Black-Box Models through Counterfactuals
### The Need for Explainability ⬛
Machine learning models like deep neural networks have become so complex, opaque and underspecified in the data that they are generally considered black boxes. Nonetheless, they often form the basis for data-driven decision-making systems. This creates the following problem: human operators in charge of such systems have to rely on them blindly, while those individuals subject to them generally have no way of challenging an undesirable outcome:
> “You cannot appeal to (algorithms). They do not listen. Nor do they bend.”
> — Cathy O'Neil in *Weapons of Math Destruction*, 2016
### Enter: Counterfactual Explanations 🔮
Counterfactual Explanations can help human stakeholders make sense of the systems they develop, use or endure: they explain how inputs into a system need to change for it to produce different decisions. Explainability benefits internal as well as external quality assurance. Explanations that involve realistic and actionable changes can be used for the purpose of algorithmic recourse (AR): they offer human stakeholders a way to not only understand the system's behaviour, but also strategically react to it. Counterfactual Explanations have certain advantages over related tools for explainable artificial intelligence (XAI) like surrogate explainers (LIME and SHAP). These include:
- Full fidelity to the black-box model, since no proxy is involved.
- Connection to Probabilistic Machine Learning and Causal Inference.
- No need for (reasonably) interpretable features.
- Less susceptible to adversarial attacks than LIME and SHAP.
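The core idea can be conveyed with a toy counterfactual search on a logistic classifier (illustrative only; all names are made up and this is not the CounterfactualExplanations.jl API):

```julia
# Toy counterfactual search: nudge the input via gradient ascent on the
# classifier score until the predicted class flips.
σ(z) = 1 / (1 + exp(-z))
w, b = [2.0, -1.0], 0.0
predict(x) = σ(w' * x + b)          # probability of the positive class

x = [-1.0, 0.5]                     # factual input, classified negative
x′ = copy(x)                        # candidate counterfactual
η = 0.1                             # step size
while predict(x′) < 0.5
    # gradient of σ(wᵀx + b) with respect to x is σ'(z) * w = p*(1-p)*w
    x′ .+= η .* w .* predict(x′) .* (1 - predict(x′))
end

(predict(x), predict(x′))           # original below 0.5, counterfactual at or above
```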
### Problem: Limited Availability in Julia Ecosystem 😔
Software development in the space of XAI has largely focused on various global methods and surrogate explainers with implementations available for both Python and R. In the Julia space we have only been able to identify one package that falls into the broader scope of XAI, namely [`ShapML.jl`](https://github.com/nredell/ShapML.jl). Support for Counterfactual Explanations has so far not been implemented in Julia.
### Solution: `CounterfactualExplanations.jl` 🎉
Through this project we aim to close that gap and thereby contribute to broader community efforts towards explainable AI. Highlights of our new package include:
- **Simple and intuitive interface** to generate counterfactual explanations for differentiable classification models trained in Julia, Python and R.
- **Detailed documentation** involving illustrative example datasets and various counterfactual generators for binary and multi-class prediction tasks.
- **Interoperability** with other popular programming languages as demonstrated through examples involving deep learning models trained in Python and R (see [here](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/tutorials/interop/)).
- **Seamless extensibility** through custom models and counterfactual generators (see [here](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/tutorials/models/)).
### Ambitions for the Package 🎯
Our goal is to provide a go-to place for counterfactual explanations in Julia. To this end, the following is a non-exhaustive list of exciting future developments we envision:
1. Additional counterfactual generators and predictive models.
2. Additional datasets for testing, evaluation and benchmarking.
3. Improved preprocessing including native support for categorical features.
4. Support for regression models.
The package is designed to be extensible, which should facilitate contributions through the community.
### Further Resources 📚
For some additional colour you may find the following resources helpful:
- [Slides](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/resources/juliacon22/presentation.html#/title-slide).
- [Blog post](https://towardsdatascience.com/individual-recourse-for-black-box-models-5e9ed1e4b4cc) and [motivating example](https://www.paltmeyer.com/CounterfactualExplanations.jl/dev/cats_dogs/).
- Package docs: [[stable]](https://pat-alt.github.io/CounterfactualExplanations.jl/stable), [[dev]](https://pat-alt.github.io/CounterfactualExplanations.jl/dev).
- [GitHub repo](https://github.com/pat-alt/CounterfactualExplanations.jl).
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/HU8FVH/
Red
Patrick Altmeyer
PUBLISH
GXTYSA@@pretalx.com
-GXTYSA
Building an Immediate-Mode GUI (IMGUI) from scratch
en
en
20220729T173000
20220729T180000
0.03000
Building an Immediate-Mode GUI (IMGUI) from scratch
Needless to say, Graphical User Interfaces (GUIs) are used in a wide variety of applications. For example, several desktop applications like web browsers, computer games etc. have some form of a GUI. Typically, a GUI has some widgets like buttons, text-boxes etc. and the user can interact with those widgets with the help of a mouse or a keyboard in order to use the application.
Broadly, there are two paradigms of interfacing with a UI library to create a GUI - Retained-Mode (RM) and Immediate-Mode (IM). This talk is for anyone who wants to understand how to make an immediate-mode GUI from scratch. I will attempt to explain the inner workings of an immediate-mode UI library and show one possible way to implement simple widgets like buttons, sliders, and text-boxes from scratch.
We will look at one possible way to structure the render loop for a desktop application and dive deeper into input handling and widget interaction. The goal is to strip out as many unnecessary features as possible and explain the barebones structure of how to make an IMGUI from scratch. For this purpose, I will stick to the lightweight libraries GLFW.jl (to create and manage windows) and SimpleDraw.jl (to draw the user interface).
SimpleIMGUI.jl: https://github.com/Sid-Bhatia-0/SimpleIMGUI.jl
Supplementary material: https://github.com/Sid-Bhatia-0/JuliaCon2022Talk
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/GXTYSA/
Red
Siddharth Bhatia
PUBLISH
878K9K@@pretalx.com
-878K9K
GeneDrive.jl: Simulate and Optimize Biological Interventions
en
en
20220729T190000
20220729T193000
0.03000
GeneDrive.jl: Simulate and Optimize Biological Interventions
Understanding and controlling biological dynamics is a concern in arenas as diverse as public health, agriculture, or conservation. Both environmental and human factors influence those dynamics, often in complex ways. Decisions about the timing, magnitude, and location where interventions are required to control the presence of harmful organisms – be they disease vectors, crop pests, or invasive species – must be made amid this ever-changing reality of biotic and abiotic interactions.
The GeneDrive.jl package facilitates replicable, scalable, and extensible computational experiments on the topic of biological dynamics and control by drawing on several pre-existing tools within the Julia ecosystem. It formalizes Julia data structures to store information and dispatch methods unique to species and genotype, enabling the straightforward incorporation of empirical knowledge. Once constructed, problems can be solved using either dynamic or optimization methods by building on the extensively developed DifferentialEquations.jl and JuMP.jl packages.
This one-time specification of the experimental data, on which both ODE and optimization solving algorithms can be called, encourages experimentation with operational levers in addition to biological ones. GeneDrive.jl employs mathematical programming for its optimization routines rather than the optimal control approaches more common in the biological sciences. This enables the inclusion of more detailed genetic and ecological information than would otherwise be tractable.
This package is named for gene drives: DNA sequences that spread through a population at higher frequencies than Mendelian inheritance patterns would predict. These tools furnish a promising new approach to mitigating diseases carried by mosquito vectors and circumvent the problems of traditional prevention practices (e.g., growing insecticide resistance). GeneDrive.jl is applicable to biological tools beyond gene drives (see examples in the documentation); however, it is named in honor of this new technological horizon.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/878K9K/
Red
Valeri Vasquez
PUBLISH
KKBXAZ@@pretalx.com
-KKBXAZ
Reproducible Publications with Julia and Quarto
en
en
20220729T123000
20220729T130000
0.03000
Reproducible Publications with Julia and Quarto
Quarto is an open-source scientific and technical publishing system that builds on standard markdown with features essential for scientific communication. One of the most important enhancements is embedded computations, which enable documents to be fully reproducible. There are also a wide variety of technical authoring features including equations, citations, crossrefs, figure panels, callouts, advanced layout, and more. In this talk we'll explore the use of Quarto with Julia, describing both integration with IJulia and the Julia VS Code extension, as well as areas for future improvement and exploration.
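As a small flavour of the source format, a Quarto document with an embedded Julia cell might look like this (the kernel name and front-matter options here are assumptions for illustration, not taken from the talk):

````markdown
---
title: "A reproducible report"
jupyter: julia-1.7
---

The value below is computed when the document is rendered:

```{julia}
sum(abs2, 1:10)
```
````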
Quarto is built on Pandoc and as a result can target dozens of output formats including HTML, PDF, MS Word, OpenOffice, and ePub. Quarto also includes a project system that enables publishing collections of documents as a blog, full website, or book. Output formats are extensible, making it possible to create Journal ready LaTeX and HTML output from the same source code. Several examples of creating these output types with Julia will be presented, and we will take advantage of integration between the Quarto and Jupyter VS Code extensions to demonstrate productive workflows.
After reviewing the basics of the system and presenting examples, we'll dive more into the technical details of how Quarto works. One of the things that makes Pandoc so capable is that it is not merely a markdown system but rather a generalized system for computing on documents. We'll describe the Pandoc AST for documents and how users of Quarto can write filters to transform the AST during rendering. Examples of filters authored with both Lua (the Pandoc embedded language for filters) and Julia (via the PandocFilters.jl package) will be presented.
Embedded computations present the opportunity for fully reproducible workflows, but also create new performance challenges. The system needs to support expensive, long-running computations but at the same time interactive and iterative use (especially for content authoring). Quarto includes a variety of facilities for managing these tradeoffs, including daemonized Jupyter kernels for interactive use, caching computations, and the ability to freeze computational documents. We'll demonstrate using all of these techniques with Julia, and discuss their benefits, drawbacks, and potential for future improvement.
Quarto interfaces with embedded Julia code using its Jupyter computational engine and the IJulia kernel. Documents can be authored in either a plain text markdown format or as Jupyter notebooks. There are several other literate programming systems available in the Julia ecosystem (Pluto, Neptune, Weave.jl, etc.) which have their own benefits and tradeoffs. We'll discuss why we chose IJulia along with an exploration of how we could integrate with other systems.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/KKBXAZ/
Purple
J.J. Allaire
PUBLISH
ME9GW8@@pretalx.com
-ME9GW8
WhereTraits.jl has now a disambiguity resolution system!
en
en
20220729T130000
20220729T131000
0.01000
WhereTraits.jl has now a disambiguity resolution system!
Method ambiguity is one of the top Julia problems and can become tricky to resolve. This is especially true if one of your dependencies defines one conflicting part and another package defines the other. As a user, you just want to say that one function should be used instead of the other. Unfortunately, that is not possible; instead, you have to look into the actual source code and implement a resolution yourself.
The same problem occurs with traits, perhaps even more pronounced. If your function has two different trait specializations, say one for `MyAwesomeTrait` and another for `GreatGreatTrait`, it is unclear what to do if your type is both a MyAwesomeTrait and a GreatGreatTrait. You will get exactly such a method ambiguity error.
WhereTraits.jl resolves this difficulty in its most recent release by adding support for an ordering between traits, which is used for automatic disambiguation. That is, the user gets exactly the power mentioned above: deciding that one trait should be preferred over the other. And all this without any performance penalty. The disambiguation system is defined such that trait definitions and trait orderings can live in different packages and can be defined multiple times, making the system very flexible and generic.
This new feature is outstanding, as even normal Julia function dispatch does not support it.
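To see the analogous problem in plain method dispatch (this is ordinary Julia, not WhereTraits.jl syntax), consider two methods where neither is more specific for a given call:

```julia
# Two methods that overlap: for f(1, 2) both match and neither is more specific.
f(x::Integer, y::Number) = "first"
f(x::Number, y::Integer) = "second"

f(1, 2.0)   # -> "first"  (only the first method applies)
f(1.0, 2)   # -> "second" (only the second method applies)

# f(1, 2) throws a MethodError: the call is ambiguous. Plain Julia offers no way
# to simply declare a preference; this is the gap WhereTraits.jl closes for traits.
ambiguous = try
    f(1, 2)
    false
catch err
    err isa MethodError
end
```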
In this talk I am going to present this new feature, and explain how it is implemented.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/ME9GW8/
Purple
Stephan Sahm
PUBLISH
FTMX73@@pretalx.com
-FTMX73
Invenia Sponsored Talk
en
en
20220729T131000
20220729T131500
0.00500
Invenia Sponsored Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/FTMX73/
Purple
PUBLISH
PTFQER@@pretalx.com
-PTFQER
Juliacon Experiences
en
en
20220729T131500
20220729T140000
0.04500
Juliacon Experiences
PUBLIC
CONFIRMED
BoF (45 mins)
https://pretalx.com/juliacon-2022/talk/PTFQER/
Purple
Julia Frank
Agustin Covarrubias
Valeria Perez
Saranjeet Kaur Bhogal
Marina Cagliari
Patrick Altmeyer
Garrek Stemo
Jeremiah Lasquety-Reyes
Dr. Vikas Negi
Martin Smit
Fábio Rodrigues Sodré
Arturo Erdely
Olga Eleftherakou
Charlie Kawczynski
PUBLISH
8LRJVY@@pretalx.com
-8LRJVY
Metaheuristics.jl: Towards Any Optimization
en
en
20220729T163000
20220729T170000
0.03000
Metaheuristics.jl: Towards Any Optimization
This talk presents the main features of Metaheuristics.jl, which is a package for global optimization to approximate solutions for single-, multi-, and many-objective optimization. Several examples are given to illustrate the implementation and the resolution of the different optimization problems.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/8LRJVY/
Purple
Jesús-Adolfo Mejía-de-Dios
PUBLISH
P7XJCV@@pretalx.com
-P7XJCV
InferOpt.jl: combinatorial optimization in ML pipelines
en
en
20220729T170000
20220729T173000
0.03000
InferOpt.jl: combinatorial optimization in ML pipelines
### Overview
We focus on a generic prediction problem: given an instance `x`, we want to predict an output `y` that minimizes the cost function `c(y)` on a feasible set `Y(x)`. When `Y(x)` is combinatorially large, a common approach in the literature is to exploit a surrogate optimization problem, which is usually a Linear Program (LP) `max_y θᵀy`.
A typical use of InferOpt.jl is integrating the optimization problem (LP) into a structured learning pipeline of the form `x -> θ -> y`, where the cost vector `θ = φ_w(x)` is given by an ML encoder. Our goal is to learn the weights `w` in a principled way. To do so, we consider two distinct paradigms:
1. *Learning by experience*, whereby we want to minimize the cost induced by our pipeline using only past instances `x`.
2. *Learning by imitation*, for which we have "true" solutions `y` or cost vectors `θ` associated with each past instance `x`.
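In code, such a pipeline is just the composition of an encoder with a combinatorial maximizer; InferOpt.jl's wrappers are what make the maximizer step differentiable. A toy, non-differentiable version for intuition (all names and the feasible set are purely illustrative):

```julia
# Toy structured pipeline x -> θ -> y.
W = [1.0 0.0; 0.0 -1.0]            # encoder weights w (fixed here; learned in practice)
encoder(x) = W * x                  # θ = φ_w(x)
maximizer(θ) = Float64.(θ .>= 0)    # solves max_y θᵀy over the toy set Y = {0,1}²
pipeline(x) = maximizer(encoder(x))

pipeline([2.0, 3.0])                # θ = [2.0, -3.0] -> y = [1.0, 0.0]
```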
We provide a unified framework to derive well-known loss functions, and we pave the way for new ones. Our package will be open-sourced in time for JuliaCon 2022.
### Related works
InferOpt.jl gathers many previous approaches to derive (sub-)differentiable layers in structured learning:
- [_Differentiation of Blackbox Combinatorial Solvers_](https://arxiv.org/pdf/1912.02175.pdf) for linear interpolations of piecewise constant functions
- [_Learning with Fenchel-Young Losses_](https://arxiv.org/abs/1901.02324) for regularized optimizers and the associated structured losses
- [_Learning with Differentiable Perturbed Optimizers_](https://arxiv.org/abs/2002.08676) for stochastically-perturbed optimizers
- [_Structured Support Vector Machines_](https://pub.ist.ac.at/~chl/papers/nowozin-fnt2011.pdf) for cases in which we have a distance on the output space
- [_Smart "Predict, then Optimize"_](https://arxiv.org/abs/1710.08005) for two-stage decision frameworks in which we know past true costs
In addition, we provide several tools for directly minimizing the cost function using smooth approximations.
### Package content
Since we want our package to be as generic as possible, we do not make any assumption on the kind of algorithm used to solve combinatorial problems. We only ask the user to provide a callable `maximizer`, which takes the cost vector `θ` as argument and returns a solution `y`: regardless of the implementation, our wrappers can turn it into a differentiable layer.
As such, our approach is different from that of [DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl), in which the optimizer has to be a convex [JuMP.jl](https://github.com/jump-dev/JuMP.jl) model. It is also different from [ImplicitDifferentiation.jl](https://github.com/gdalle/ImplicitDifferentiation.jl), which implements a single approach for computing derivatives (whereas we provide several), and does not include structured loss functions.
All of our wrappers come with their own forward and reverse differentiation rules, defined using [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl). As a result, they are compatible with a wide range of automatic differentiation backends and machine learning libraries. For instance, if the encoder `φ_w` is a [Flux.jl](https://github.com/FluxML/Flux.jl) model, then the wrapped optimizer can also be included as a layer in a `Flux.Chain`.
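As a minimal sketch of such a pipeline, the snippet below wraps a toy maximizer and chains it after a Flux.jl encoder. Since the package was not yet released at the time of writing, the wrapper names `PerturbedAdditive` and `FenchelYoungLoss` (and their keyword arguments) are assumptions about the API rather than a definitive interface:

```julia
using Flux, InferOpt

# Toy combinatorial maximizer: y(θ) is the one-hot argmax of the cost vector θ.
maximizer(θ; kwargs...) = (y = zero(θ); y[argmax(θ)] = 1; y)

# Assumed InferOpt.jl wrappers: a stochastically perturbed (hence differentiable)
# optimizer, and the associated Fenchel-Young loss for learning by imitation.
perturbed = PerturbedAdditive(maximizer; ε=1.0, nb_samples=10)
loss = FenchelYoungLoss(perturbed)

encoder = Dense(5 => 3)              # φ_w: features x ↦ cost vector θ
pipeline = Chain(encoder, perturbed) # x ↦ θ ↦ smoothed solution y
```

Because `perturbed` is just a callable with its own chain rules, it composes with any ChainRules.jl-compatible training loop.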
### Examples
We include various examples and tutorials to apply this generic framework on concrete problems. Since our wrappers are model- and optimizer-agnostic, we can accommodate a great variety of algorithms for both aspects.
On the optimization side, our examples make use of:
- Mixed-Integer Linear Programs;
- Shortest path algorithms;
- Scheduling algorithms;
- Dynamic Programming.
On the model side, we exploit the following classes of predictors:
- Generalized Linear Models;
- Convolutional Neural Networks;
- Graph Neural Networks.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/P7XJCV/
Purple
Guillaume Dalle
Louis Bouvier
Léo Baty
PUBLISH
YMFXKE@@pretalx.com
-YMFXKE
Time to Say Goodbye to Good Old PCA
en
en
20220729T173000
20220729T174000
0.01000
Time to Say Goodbye to Good Old PCA
Principal Component Analysis (PCA) is arguably the most popular dimension reduction method in practice. The basic idea of PCA is to reduce the dimension of the original data while retaining as much variance as possible. While sometimes effective, in practice PCA has some pitfalls, especially when the components of interest in the data are orthogonal to the leading principal components. A possible alternative, the projection pursuit technique, was proposed by Kruskal (1972) and Friedman and Tukey (1974). However, unlike PCA, projection pursuit does not have a closed-form solution, and its computational complexity has limited its adoption. In this talk, we present the Julia package ProjectionPursuit.jl, which combines the new computational tools needed for high-dimensional projection pursuit with the speed of Julia. We show that with the help of this package, one can easily apply projection pursuit to a variety of applications.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/YMFXKE/
Purple
Yijun Xie
PUBLISH
HFG3AW@@pretalx.com
-HFG3AW
RegressionFormulae.jl: familiar `@formula` syntax for regression
en
en
20220729T174000
20220729T175000
0.01000
RegressionFormulae.jl: familiar `@formula` syntax for regression
StatsModels.jl provides the `@formula` mini-language for conveniently specifying table-to-matrix transformations for statistical modeling. This mini-language is designed with extensibility and composability in mind, using normal Julia mechanisms of multiple dispatch to implement additional syntax both inside StatsModels.jl and in external packages. RegressionFormulae.jl takes advantage of this extensibility to provide _additional syntax_ that is familiar to many users of other statistical software (e.g., R) in an "opt-in" manner, without forcing _all_ downstream packages that depend on StatsModels.jl/`@formula` to support this syntax.
The StatsModels.jl `@formula` syntax is based on the Wilkinson-Rogers formula notation, which has been a widely used standard in multi-factor regression modeling since it was first described by Wilkinson and Rogers (1973). The basic syntax includes operators for _addition_ (`+`) and _crossing_ (`&` and `*`) of regressors, as well as the `~` operator to link outcome and regressor terms. As the conventions around this syntax have evolved over the last 50 years, other systems have introduced additional operators.
RegressionFormulae.jl expands the StatsModels.jl `@formula` to support two commonly-used operators from R: `^` (incomplete crossing) and `/` (nesting). Specifically, it implements
- `(a + b + c + ...) ^ n` to create all interactions up to `n`-way, corresponding to an incomplete cross of `a, b, c, ...`.
- `a / b` to create `a + a & b`, which results in a "nested" model of `b`, with a separate coefficient for `b` for each level of `a`.
Both of these operators are particularly useful for creating _interpretable_ models. Models with high-order interactions are extremely challenging to interpret and require considerable care, and are prone to over-fitting since the number of coefficients grows very quickly with additional terms participating in the interactions. The incomplete cross `^` syntax can ameliorate these difficulties, limiting the highest degree of the resulting interaction terms and reducing the overall number of predictors. Nesting (`a / b`) similarly provides an alternative to fully crossed models (`a * b`) that is more directly interpretable in situations where the analytic questions are focused on the effects of a predictor `b` within each individual level of some other variable `a`, without concern for direct _comparison_ of these effects to each other.
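For reference, the two operators can be written as follows (a sketch using the expansions described above; fitting would then proceed through any StatsModels.jl-based modeling package):

```julia
using StatsModels, RegressionFormulae

# Incomplete crossing: all main effects plus all interactions up to 2-way,
# i.e. a + b + c + a&b + a&c + b&c (but NOT the 3-way term a&b&c).
f1 = @formula(y ~ (a + b + c)^2)

# Nesting: a / b expands to a + a&b, giving a separate coefficient
# for b within each level of a.
f2 = @formula(y ~ a / b)
```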
Finally, this syntax is implemented in a way that does not _require_ other modeling packages that use `@formula` to support them, or even _prevent_ other packages from defining _alternative_ meanings for the `^` or `/` operators. Within a `@formula`, the special syntax is implemented by methods like
```julia
function StatsModels.apply_schema(
t::FunctionTerm{typeof(/)},
...
```
and
```julia
function Base.:(/)(outer::CategoricalTerm, inner::AbstractTerm)
...
```
The result of this is that if RegressionFormulae.jl is not loaded, then `/` and `^` inside a `@formula` behave exactly as they normally would (e.g., as calls to the normal Julia functions `/` and `^`). Moreover, if a user loads RegressionFormulae.jl at the same time as some other package that defines special syntax for `/` or `^` (for `RegressionModel`), they will receive a warning about method redefinition or method ambiguity.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/HFG3AW/
Purple
Dave Kleinschmidt
Phillip Alday
PUBLISH
DHYP8U@@pretalx.com
-DHYP8U
Random utility models with DiscreteChoiceModels.jl
en
en
20220729T175000
20220729T180000
0.01000
Random utility models with DiscreteChoiceModels.jl
Random utility models are ubiquitous in fields including economics, transportation, and marketing [1]. Estimation of simple multinomial logit models is available in many statistical packages, including in Julia via Econometrics.jl [2], but more advanced choice models are generally fit with choice-model-specific packages, e.g., [3], [4]. These packages support more flexible utility specifications by letting utility function definitions vary over outcomes, and by offering additional forms of random utility model, such as the mixed logit model, which allows random parameter variation [5].
DiscreteChoiceModels.jl provides such a package for Julia. It has an intuitive syntax for specifying discrete-choice models, allowing users to directly write out utility functions. For instance, the code below specifies the Swissmetro example mode-choice model distributed with Biogeme [3]:
```julia
multinomial_logit(
    @utility(begin
        1 ~ αtrain + βtravel_time * TRAIN_TT / 100 + βcost * (TRAIN_CO * (GA == 0)) / 100
        2 ~ αswissmetro + βtravel_time * SM_TT / 100 + βcost * SM_CO * (GA == 0) / 100
        3 ~ αcar + βtravel_time * CAR_TT / 100 + βcost * CAR_CO / 100
    end),
    :CHOICE,
    data,
    availability=[
        1 => :avtr,
        2 => :avsm,
        3 => :avcar,
    ]
)
```
Within the utility function specification (@utility), the first three lines specify the utility functions for each of the three modes specified by the CHOICE variable: train, car, and the hypothetical Swissmetro. Any variable starting with α or β is treated as a coefficient to be estimated, while other variables are assumed to be data columns. The remainder of the model specification indicates that the choice is indicated by the variable CHOICE, what data to use, and, optionally, what columns indicate availability for each alternative.
### Mixed logit models
Support for mixed logit models is under development. Mixed logit models will specify random coefficients as distributions from Distributions.jl [6]. For instance, to specify that αtrain should be normally distributed with mean 0 and standard deviation 1 as starting values, you would add
```julia
αtrain = Normal(0, exp(0))
```
with the exponent indicating that the value will be exponentiated to ensure that the standard deviation will always be positive.
### Performance
Julia is designed for high-performance computing, so a major goal of DiscreteChoiceModels.jl is to estimate models more quickly than other modeling packages. To that end, two multinomial logit models were developed and benchmarked with three packages (DiscreteChoiceModels.jl, Biogeme [3], and Apollo [4]), using default settings for all three. The first model is the Swissmetro example from Biogeme, with 6,768 observations, 3 alternatives, and 4 free parameters. The second is a vehicle ownership model using the 2017 US National Household Travel Survey, with 129,696 observations, 5 alternatives, and 35 free parameters. All runtimes are the median of 10 runs, executed serially on a quad-core Intel i7 with 16GB of RAM running Debian 11.1. DiscreteChoiceModels.jl outperforms the other packages when used with a DataFrame, while the Dagger backend incurs distributed-computing overhead on a single machine.
| Model             | DiscreteChoiceModels.jl: DataFrame | DiscreteChoiceModels.jl: Dagger | Biogeme | Apollo |
| ----------------- | ---------------------------------- | ------------------------------- | ------- | ------ |
| Swissmetro        | 188 ms                             | 2047 ms                         | 252 ms  | 824 ms |
| Vehicle ownership | 35.1 s                             | 46.9 s                          | 163.4 s | 227.2 s |
### References
[1] M. Ben-Akiva and S. R. Lerman, Discrete choice analysis: Theory and application to travel demand. MIT Press, 1985.
[2] J. B. S. Calderón, “Econometrics.jl,” Proc JuliaCon Conf, doi: 10.21105/jcon.00038.
[3] M. Bierlaire, “A short introduction to PandasBiogeme,” École Polytechnique Fédérale de Lausanne, Lausanne, TRANSP-OR 200605, Jun. 2020. Available: https://transp-or.epfl.ch/documents/technicalReports/Bier20.pdf
[4] S. Hess and D. Palma, “Apollo: A flexible, powerful and customisable freeware package for choice model estimation and application,” J Choice Model, doi: 10.1016/j.jocm.2019.100170.
[5] K. Train, Discrete Choice Methods with Simulation. Cambridge, UK: Cambridge University Press, 2009.
[6] M. Besançon et al., “Distributions.jl: Definition and Modeling of Probability Distributions in the JuliaStats Ecosystem,” J Stat Soft, doi: 10.18637/jss.v098.i16.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DHYP8U/
Purple
Matthew Wigginton Bhagat-Conway
PUBLISH
H3N8UN@@pretalx.com
-H3N8UN
A Fresh Approach to Open Source Voice Assistant Development
en
en
20220729T123000
20220729T124000
0.01000
A Fresh Approach to Open Source Voice Assistant Development
Leading software companies have invested heavily in voice assistant software since the dawn of the century. However, they naturally prioritize use cases that directly or indirectly bring economic profit. As a result, their products cover, e.g., the needs of the entertainment sector abundantly, but those of academia and software development only poorly. There is particularly little support for Linux, even though it is the preferred operating system of many software developers and computational scientists. The open source voice assistant project MyCroft fully supports Linux, but provides few tools useful for productive work in academia and software development; moreover, adding new skills to MyCroft seems complex for average users and appears to require considerable knowledge of MyCroft's specificities. [JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) addresses these shortcomings by providing a lightweight framework for easily extensible, offline, low-latency, highly accurate and secure speech-to-command or speech-to-text translation on Linux, MacOS and Windows.
[JustSayIt](https://github.com/omlins/JustSayIt.jl)'s high-level API allows declaring arguments in standard Julia function definitions to be obtainable by voice, which constitutes an unprecedented, highly generic extension to the Julia programming language. For such functions, [JustSayIt](https://github.com/omlins/JustSayIt.jl) automatically generates a wrapper method that handles the complexity of retrieving the arguments from the speaker's voice, including interpretation and conversion of the voice arguments to potentially any data type. [JustSayIt](https://github.com/omlins/JustSayIt.jl) commands are implemented with such voice-argument functions, triggered by a user-definable mapping of command names to functions. As a result, it empowers programmers without any knowledge of speech recognition to quickly write new commands that take their arguments from the speaker's voice. Moreover, [JustSayIt](https://github.com/omlins/JustSayIt.jl) unites the Julia and Python communities by using both languages: it leverages Julia's performance and metaprogramming capabilities, and Python's larger ecosystem where no Julia package is considered suitable. [JustSayIt](https://github.com/omlins/JustSayIt.jl) relies on PyCall.jl and Conda.jl, which render installing and calling Python packages from within Julia almost trivial. [JustSayIt](https://github.com/omlins/JustSayIt.jl) is ideally suited for development by the worldwide open source community, as it provides an intuitive high-level API that is readily understandable by any programmer and unites the Python and Julia communities.
[JustSayIt](https://github.com/omlins/JustSayIt.jl) implements a novel algorithm for high-performance, context-dependent recognition of spoken commands, which leverages the [Vosk Speech Recognition Toolkit](https://github.com/alphacep/vosk-api/). A specialized high-performance recognizer is defined for each function argument that is obtainable by voice and has a restriction on the valid input. In addition, when beneficial for recognition accuracy, the recognizer for a voice argument is generated dynamically depending on the command path taken before the argument. To enable minimal latency for single-word commands (latency here refers to the time elapsed between when a command is spoken and when it is executed), such commands can under certain conditions be triggered upon bare recognition of the corresponding sounds, without waiting for the silence that normally confirms a recognition. Thus, [JustSayIt](https://github.com/omlins/JustSayIt.jl) is suitable for commands where a perceivable latency would be unacceptable, such as mouse clicks. Single-word command latency is typically on the order of a few milliseconds on a regular notebook. [JustSayIt](https://github.com/omlins/JustSayIt.jl) achieves this performance using only one CPU core and can therefore run continuously without harming the computer usage experience.
In conclusion, [JustSayIt](https://github.com/omlins/JustSayIt.jl) demonstrates that the development of our future voice assistants can take a fresh and new path that is neither driven by the priorities and economic interests of global software companies nor by a small open source community of speech recognition experts; instead, the entire world-wide open source community is empowered to contribute in shaping our future daily assistants.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/H3N8UN/
Blue
Samuel Omlin
PUBLISH
DTHTBC@@pretalx.com
-DTHTBC
ImplicitDifferentiation.jl: differentiating implicit functions
en
en
20220729T124000
20220729T125000
0.01000
ImplicitDifferentiation.jl: differentiating implicit functions
### Introduction
Differentiable programming is a core ingredient of modern machine learning, and it is one of the areas where Julia truly shines. By defining new kinds of differentiable layers, we can hope to increase the expressivity of deep learning pipelines without having to scale up the number of parameters.
For instance, in structured prediction settings, domain knowledge can be encoded into optimization problems of many flavors: linear, quadratic, conic, nonlinear or even combinatorial. In domain adaptation, differentiable distances based on optimal transport are often computed using the Sinkhorn fixed point iteration algorithm. Last but not least, in differential equation-constrained optimization and neural differential equations, one often needs to obtain derivatives for solutions of nonlinear equation systems with respect to equation parameters.
Note that these complex functions are all defined *implicitly*, through a condition that their output must satisfy. As a consequence, differentiating said output (e.g. the minimizer of an optimization problem) with respect to the input (e.g. the cost vector or constraint matrix) requires the automatization of the implicit function theorem.
### Related works
When trying to differentiate through iterative procedures, unrolling the loop is a natural approach. However, it is computationally demanding and it only works for pure Julia code with no external "black box" calls. On the other hand, using the implicit function theorem means we can decouple the derivative from the function itself: see [_Efficient and Modular Implicit Differentiation_](https://arxiv.org/abs/2105.15183) for an overview of the related theory.
In the last few years, this implicit differentiation paradigm has given rise to several Python libraries such as [OpenMDAO](https://github.com/OpenMDAO/OpenMDAO), [cvxpylayers](https://github.com/cvxgrp/cvxpylayers) and [JAXopt](https://github.com/google/jaxopt). In Julia, the most advanced one is [DiffOpt.jl](https://github.com/jump-dev/DiffOpt.jl), which allows the user to differentiate through a [JuMP.jl](https://github.com/jump-dev/JuMP.jl) optimization model. A more generic approach was recently experimented with in [NonconvexUtils.jl](https://github.com/JuliaNonconvex/NonconvexUtils.jl): our goal with [ImplicitDifferentiation.jl](https://github.com/gdalle/ImplicitDifferentiation.jl) is to make it more efficient, reliable and easily usable for everyone.
### Package content
Our package provides a simple toolbox that can differentiate through any kind of user-specified function `x -> y(x)`. The only requirement is that its output be characterized by a condition of the form `F(x,y(x)) = 0`.
Beyond the generic machinery of implicit differentiation, we also include several use cases as tutorials: unconstrained and constrained optimization, fixed point algorithms and nonlinear equation systems, etc.
### Technical details
The central construct of our package is a wrapper of the form
```julia
struct ImplicitFunction{O,C}
forward::O
conditions::C
end
```
where `forward` computes the mapping `x -> y(x)`, while `conditions` corresponds to `(x,y) -> F(x, y)`. By defining custom pushforwards and pullbacks, we ensure that `ImplicitFunction` objects can be used with any [ChainRules.jl](https://github.com/JuliaDiff/ChainRules.jl)-compatible automatic differentiation backend, in forward or reverse mode.
To attain maximum efficiency, we never actually store a full jacobian matrix: we only reason with vector-jacobian and jacobian-vector products. Thus, when solving linear systems (which is a key requirement of implicit differentiation), we exploit iterative Krylov subspace methods for their ability to handle lazy linear operators.
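As a toy illustration of the wrapper above, the elementwise square root can be defined implicitly through the condition `y .^ 2 .- x = 0`. The `forward` and `conditions` fields follow the struct shown earlier; the call and differentiation syntax is our assumption about the package API:

```julia
using ImplicitDifferentiation, Zygote

# Define y(x) = sqrt.(x) implicitly via the residual F(x, y) = y.^2 .- x.
forward(x) = sqrt.(x)
conditions(x, y) = y .^ 2 .- x

implicit = ImplicitFunction(forward, conditions)

x = [4.0, 9.0]
y = implicit(x)  # [2.0, 3.0]

# Reverse-mode derivatives now flow through the implicit function theorem,
# not through the internals of `forward`:
J = Zygote.jacobian(implicit, x)[1]
```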
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/DTHTBC/
Blue
Guillaume Dalle
Mohamed Tarek
PUBLISH
FZ38VW@@pretalx.com
-FZ38VW
Du Bois Data Visualizations: A Julia Recipe
en
en
20220729T125000
20220729T130000
0.01000
Du Bois Data Visualizations: A Julia Recipe
This collection of plotting recipes uses the Makie.jl plotting package to recreate some of Du Bois’s most famous and complex data visualizations on the state of African-Americans at the turn of the 20th century. In addition to replicating the originals, users can easily create figures with their own data in the same style and format, ready for publication.
W.E.B. Du Bois (b. 1868, d. 1963) was a sociologist, historian, and civil rights activist. Du Bois originally presented these plates at the 1900 Paris Exposition. He and his team of researchers at Atlanta University collected data on Black Georgia residents in order to create a comprehensive view of the quality of life and aggregate characteristics of what was at the time the largest African-American population in any U.S. state. Using these data, Du Bois and his team created two series of data visualizations, 63 plates in total, six of which are recreated by this package. They are unusual in both shape and color palette as they were meant to capture viewers’ attention at the Paris Exposition – a venue where audiences might not otherwise stop at a table with information on African-American populations if not for eye-catching data visualizations.
A number of groups and individuals have contributed to the project of digitizing and recreating these plates. The Du Boisian Visualization Toolkit, from the Dignity + Debt Network, provides information on Du Bois’s most-used color palettes and fonts. Multiple R packages, style guides, Excel, Tableau, and Python replications, and other resources have been contributed to this project to replicate the original figures. Our package contributes on two fronts. First, we present the first replication using Julia. Second, and more importantly, our package is designed to allow users to present their own data instead of merely replicating the originals.
We have attached a picture showing an example of what users can obtain from our package.
References
Du Bois, W. E. B., Battle-Baptiste, W., & Rusert, B. (2018). W.E.B du Bois's Data Portraits: Visualizing Black America. W.E.B. Du Bois Center at the University of Massachusetts Amherst.
Link to Makie.jl package: https://makie.juliaplots.org/stable/
Link to Du Bois challenge and original exhibits: https://github.com/ajstarks/dubois-data-portraits/tree/master/challenge
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/FZ38VW/
Blue
Eirik Brandsaas & Kyra Sadovi
Eirik Brandsaas
PUBLISH
V3NPES@@pretalx.com
-V3NPES
Data Analysis and Visualization with AlgebraOfGraphics
en
en
20220729T130000
20220729T131000
0.01000
Data Analysis and Visualization with AlgebraOfGraphics
In this talk, I will give an overview of the Algebra of Graphics approach to data visualizations in Julia.
Algebra of Graphics is an adaptation of the Grammar of Graphics—a declarative language to define visualizations by mapping columns of a dataset to plot attributes—to the Julia programming language and philosophy, with some important differences.
I will first discuss the key components of AlgebraOfGraphics (data selection, mapping of data to plot attributes, analysis and visualization selection) and show how they can be combined multiplicatively (by merging information) or additively (by drawing distinct visualizations on separate layers).
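As a small illustration of these two combinations (the table and column names here are invented for the example):

```julia
using AlgebraOfGraphics, CairoMakie

# A toy table: any Tables.jl-compatible source works, including a NamedTuple.
df = (x = randn(100), y = randn(100), group = rand(["a", "b"], 100))

# Multiplication merges information: data, mapping, and visual are combined.
scatter_layer = data(df) * mapping(:x, :y, color = :group) * visual(Scatter)

# Addition superimposes layers: raw points plus a linear fit, per group.
fig = draw(data(df) * mapping(:x, :y, color = :group) * (visual(Scatter) + linear()))
```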
Then, I will delve into the AlgebraOfGraphics philosophy. The aim of AlgebraOfGraphics is to empower users to produce visualizations that answer _questions_ about their data. This is achieved via _reusable building blocks_, which the users define based on their knowledge of the data. Rich visualizations can be built by combining these building blocks: I will demonstrate this technique on an example dataset. I will also show how AlgebraOfGraphics attempts to lessen the cognitive burden on the user by providing opinionated graphical defaults as well as wide format support. That way, the user can focus on the question at hand, rather than visual fine-tuning or data wrangling.
As the AlgebraOfGraphics syntax is uniform, it can be used as the backend to a Graphical User Interface for data analysis and visualization. I will show a prototype of such a GUI, based on AlgebraOfGraphics, web technologies, and the web-based backend of Makie.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/V3NPES/
Blue
Pietro Vertechi
PUBLISH
ENV8KX@@pretalx.com
-ENV8KX
QuEra Computing Sponsor Talk
en
en
20220729T132500
20220729T133000
0.00500
QuEra Computing Sponsor Talk
PUBLIC
CONFIRMED
Silver sponsor talk
https://pretalx.com/juliacon-2022/talk/ENV8KX/
Blue
PUBLISH
HFNKTC@@pretalx.com
-HFNKTC
Working with Firebase in Julia
en
en
20220729T133000
20220729T140000
0.03000
Working with Firebase in Julia
Firebase.jl is a solution for working with Firebase from the Julia programming language. It provides support for the Realtime Database, Cloud Firestore, Storage, and Authentication, which are quite useful in small and large projects alike.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/HFNKTC/
Blue
Ashwani Rathee
PUBLISH
BH8WSR@@pretalx.com
-BH8WSR
Interactive Julia data dashboards with Genie
en
en
20220729T163000
20220729T170000
0.03000
Interactive Julia data dashboards with Genie
Building upon over 5 years of experience with open source web development with Julia, Genie's powerful dashboarding features allow Julia users to develop and publish data apps without needing to use any web development techniques. Genie exposes a smooth workflow and rich programming APIs that allow data and research scientists to control all the aspects of building and deploying data apps and dashboards using only their favourite programming language: Julia.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/BH8WSR/
Blue
Adrian Salceanu
Helmut Hänsel
PUBLISH
SHU83W@@pretalx.com
-SHU83W
Declarative data transformation via graph transformation
en
en
20220729T170000
20220729T173000
0.03000
Declarative data transformation via graph transformation
What data structure can characterize the process of transforming data? A pointer to a function is certainly sufficient, but it is opaque, making static analysis difficult and lowering code intelligibility. SQL statements offer more transparency at the cost of some expressivity, yet they are still difficult to read and interpret as complexity scales. The graph transformation paradigm expresses data update, addition, and deletion in terms of rewrite rules: combinatorial structures characterizing patterns of data for matching or replacement.
Graph rewriting techniques are specified in the notoriously abstract language of category theory, although historically concrete implementations have been restricted to labelled graphs. Catlab.jl is an applied category theory package which has recently added a performant implementation of graph rewriting (double, single, and sesqui pushout paradigms) at a novel level of generality, allowing a generic interface to the major application areas of graph rewriting: graph languages (rewriting defines a graph grammar), graph relations (rewriting is a relation between input and output data structures), and graph transition systems (rewriting evolves a system in time).
The scientific community particularly benefits from code that is interpretable and has straightforward semantics. Just as many low-level data manipulations become safer and more transparent when redescribed as SQL operations, we argue that many data transformation applications would benefit from replacing arbitrary code with a set of declarative rewrite rules. We demonstrate this through some complex applications constructed from these building blocks. This includes an extended example of an epidemiological agent-based model (in collaboration with epidemiologist Sean Wu, a research scientist at the Institute for Health Metrics and Evaluation) and an example of equational laws defining an e-graph data structure, with rewrite rules inducing an equality saturation procedure for free. The pattern-based API is easy to use and interpret: no category theory is needed to understand or use these tools!
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/SHU83W/
Blue
Kristopher Brown
PUBLISH
F3J8QM@@pretalx.com
-F3J8QM
How to recover models from data using DataDrivenDiffEq.jl
en
en
20220729T173000
20220729T180000
0.03000
How to recover models from data using DataDrivenDiffEq.jl
How do we model the friction in the joint of a robot, a biological feedback signal, or the influence of seemingly unrelated parameters on our dynamical system?
With the rise of machine learning, the classical domain of modeling is becoming more and more data-driven. While the automated discovery of possibly complex relations can help in gaining new insights, classical equations still outperform state-of-the-art machine learning models in terms of extrapolation capabilities and explainability. DataDrivenDiffEq.jl provides a unified application programming interface to define and solve these problems. It brings together operator-based inference, sparse regression, and symbolic regression to bridge the gap from black-box to white-box models.
After a brief introduction to the theory of system identification, the currently implemented algorithms, and their underlying models, we will explore the conceptual layer of the software. In a hands-on example, we will see how the DataDrivenDiffEq.jl API mimics the mathematical formulation; builds upon and extends ModelingToolkit.jl, SymbolicUtils.jl, and Symbolics.jl to allow expression-based modeling; and seamlessly integrates into the Scientific Machine Learning ecosystem to `solve` a variety of estimation problems.
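A minimal sketch of such a workflow on synthetic data might look as follows. The names `ContinuousDataDrivenProblem`, `Basis`, `polynomial_basis`, and `STLSQ` reflect our reading of the DataDrivenDiffEq.jl documentation and may differ across versions:

```julia
using DataDrivenDiffEq, ModelingToolkit, LinearAlgebra

# Synthetic trajectory from a known linear system ẋ = A x, sampled analytically.
A = [-0.9 0.2; 0.0 -0.5]
t = collect(0.0:0.05:5.0)
X = reduce(hcat, (exp(A * τ) * [2.0, -1.0] for τ in t))  # matrix exponential solution
DX = A * X                                               # true derivatives

# Candidate library of polynomial terms, then a sparse-regression (SINDy-style) solve.
@variables u[1:2]
basis = Basis(polynomial_basis(u, 2), u)
prob = ContinuousDataDrivenProblem(X, t, DX = DX)
res = solve(prob, basis, STLSQ())
```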
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/F3J8QM/
Blue
Carl Julius Martensen
PUBLISH
AG7CGF@@pretalx.com
-AG7CGF
Julia for Space Engineering
en
en
20220729T123000
20220729T140000
1.03000
Julia for Space Engineering
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/AG7CGF/
BoF
Helge Eichhorn
Jorge A. Pérez-Hernández
PUBLISH
PWSDHS@@pretalx.com
-PWSDHS
Production Data Engineering in Julia
en
en
20220729T163000
20220729T180000
1.03000
Production Data Engineering in Julia
Julia has already succeeded by empowering many scientists and engineers to author their own high-performance compute kernels without the usual ergonomics/composability sacrifices that high-performance code often entails. However, actually leveraging these kernels within production contexts often requires packaging them into an automated service, usually within the context of wider automated pipelines. It is not surprising that in the past few years, many new capabilities and packages have emerged that facilitate this by enabling Julia to be executed atop Kubernetes, interop with tabular data sources/sinks via Apache Arrow, and integrate with other popular cloud-native technologies. This blossoming ecosystem within the wider Julia community demonstrates both the desire and opportunity for Julia's usage in production data engineering contexts.
Topics of discussion for this BoF include:
- current data engineering efforts/challenges faced by industry Julia users maintaining production systems
- containerization of Julia processes and Julia-functions-as-a-service
- executing Julia-based jobs/services via Kubernetes
- Julia-centric workflow/dataflow orchestration
- the intersection of Julia's tabular data ecosystem and enterprise data architectures
Our goal is two-fold:
- uncover the shared data engineering problems, tools, and opportunities that characterize Julia's nascent Data Engineering community
- identify concrete opportunities for open-source and cross-organization collaboration (hackathons, blogs, package development, etc.)
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon-2022/talk/PWSDHS/
BoF
Curtis Vogt
Jacob Quinn
Jarrett Revels
PUBLISH
CUJU8K@@pretalx.com
-CUJU8K
DiffOpt.jl differentiating your favorite optimization problems
en
en
20220729T163000
20220729T170000
0.03000
DiffOpt.jl differentiating your favorite optimization problems
Joint work with: Mathieu Besançon, Benoît Legat, Akshay Sharma.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/CUJU8K/
JuMP
Joaquim Dias Garcia
PUBLISH
L79WHV@@pretalx.com
-L79WHV
Recent developments in ParametricOptInterface.jl
en
en
20220729T170000
20220729T171000
0.01000
Recent developments in ParametricOptInterface.jl
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/L79WHV/
JuMP
Guilherme Bodin
PUBLISH
NPHSNW@@pretalx.com
-NPHSNW
Risk Budgeting Portfolios from simulations
en
en
20220729T171000
20220729T172000
0.01000
Risk Budgeting Portfolios from simulations
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/NPHSNW/
JuMP
Bernardo Freitas Paulo da Costa
PUBLISH
QNAEBY@@pretalx.com
-QNAEBY
Optimising Fantasy Football with JuMP
en
en
20220729T172000
20220729T173000
0.01000
Optimising Fantasy Football with JuMP
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/QNAEBY/
JuMP
Dean Markwick
PUBLISH
QJ8YRN@@pretalx.com
-QJ8YRN
Stochastic Optimal Control with MarkovBounds.jl
en
en
20220729T173000
20220729T174000
0.01000
Stochastic Optimal Control with MarkovBounds.jl
The optimal control of stochastic processes is arguably one of the most fundamental questions in the context of decision-making under uncertainty. When the controlled process is a jump-diffusion process characterized by polynomial data (drift and diffusion coefficients, jumps, etc.), it is well known that polynomial optimization and the machinery of the moment-sum-of-squares (SOS) hierarchy provide a systematic way to construct informative (and often tight) convex relaxations for the associated optimal control problems. While the JuMP ecosystem, with SumOfSquares.jl, in principle offers everything required to study stochastic optimal control problems from this perspective, translating a concrete stochastic optimal control problem into its SOS relaxation remains a cumbersome and error-prone process. Moreover, this translation requires expert knowledge, rendering it inaccessible to a large audience. MarkovBounds.jl is intended to close this gap by providing a high-level interface that allows the user to define stochastic optimal control problems in symbolic form, using for example DynamicPolynomials.jl or Symbolics.jl, and automates the subsequent translation to the associated SOS relaxations. Finite and (discounted) infinite horizon problems are supported, as well as several common objective function types. Furthermore, MarkovBounds.jl supports combining standard SOS relaxations with discretization approaches to tighten the relaxations.
In this talk, we will briefly review the conceptual ideas behind constructing SOS relaxations for stochastic optimal control problems and showcase the use of MarkovBounds.jl for the optimal control of populations in a predator-prey system, protein expression in a stochastic biocircuit, and bounding rare-event probabilities.
PUBLIC
CONFIRMED
Lightning talk
https://pretalx.com/juliacon-2022/talk/QJ8YRN/
JuMP
Flemming Holtorf
PUBLISH
8QUDVW@@pretalx.com
-8QUDVW
Streamlining nonlinear programming on GPUs
en
en
20220729T190000
20220729T193000
0.03000
Streamlining nonlinear programming on GPUs
How fast can you evaluate the derivatives of a nonlinear optimization problem? Most real-world optimization instances come with thousands of variables and constraints; at such a large scale, sparse automatic differentiation (AD) is often a non-negotiable requirement. It is well known that by choosing the partials appropriately (for instance with a coloring algorithm), the evaluations of the Jacobian and the Hessian in sparse format translate respectively to one vectorized forward pass and one vectorized forward-over-reverse pass. Thus, any good AD library should be able to traverse the problem's expression tree efficiently in both directions. In this talk, we propose a prototype for a vectorized modeler, in which the expression tree manipulates vector expressions. By doing so, the forward and reverse evaluations rewrite into the universal language of sparse linear algebra. By chaining linear algebra calls together, we show that the evaluation of the expression tree can be offloaded to any sparse linear algebra backend. Notably, we show that both the forward and reverse passes can be streamlined efficiently, using either MKLSparse on Intel CPUs or cuSPARSE on CUDA GPUs. We discuss the prototype's performance on the optimal power flow problem, using JuMP's AD backend as a comparison. We show that on the largest instances, we can speed up the evaluation of the sparse Hessian by a factor of 50. Although our code remains a prototype, we hope that the emergence of new AD libraries like Enzyme will allow the idea to be extended further. We finish the talk by discussing ideas on how to vectorize the evaluation of derivatives inside JuMP's AD backend.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/8QUDVW/
JuMP
François Pacaud
Michel Schanen
PUBLISH
EBBQDB@@pretalx.com
-EBBQDB
Fast optimization via randomized numerical linear algebra
en
en
20220729T193000
20220729T200000
0.03000
Fast optimization via randomized numerical linear algebra
In this talk, we discuss how techniques in randomized numerical linear algebra can dramatically speed up linear system solves, which are a fundamental primitive for most constrained optimization algorithms.
We start with randomized approximations of matrices and introduce the Nyström sketch. Following the approach developed by Frangella et al. [1], we use this sketch to construct preconditioners for positive definite linear systems.
We then introduce RandomizedPreconditioners.jl, a lightweight package which includes these randomized preconditioners and sketches. We show how this package allows preconditioners to be added with only a few extra lines of code, and we use the package to dramatically speed up a convex optimization solver based on the alternating direction method of multipliers (ADMM) algorithm. We demonstrate how the cost of this preconditioner can be amortized over all solver iterations, allowing us to capture this speedup at minimal additional cost.
Finally, we conclude with future work to address other types of linear systems and other ways to speed up optimization solvers with randomized numerical linear algebra primitives that are implemented in RandomizedPreconditioners.jl.
[1] Zachary Frangella, Joel A. Tropp, and Madeleine Udell. "Randomized Nyström Preconditioning." arXiv preprint arXiv:2110.02820 (2021). https://arxiv.org/abs/2110.02820
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon-2022/talk/EBBQDB/
JuMP
Theo Diamandis
PUBLISH
HAURQJ@@pretalx.com
-HAURQJ
Relational AI Sponsored Forum
en
en
20220729T163000
20220729T171500
0.04500
Relational AI Sponsored Forum
PUBLIC
CONFIRMED
Sponsor forum
https://pretalx.com/juliacon-2022/talk/HAURQJ/
Sponsored forums
PUBLISH
8MRLPJ@@pretalx.com
-8MRLPJ
JuliaCon Hackathon
en
en
20220730T123000
20220730T183000
6.00000
JuliaCon Hackathon
As in previous years, we will have another legendary JuliaCon hackathon! Join us to build something you are excited about in Julia. We will also have mentors available to help if you run into issues.
PUBLIC
CONFIRMED
Social hour
https://pretalx.com/juliacon-2022/talk/8MRLPJ/
Green