1.7
JuliaCon 2023
juliacon2023
2023-07-25
2023-07-29
5
00:05
https://pretalx.com
US/Eastern
32-123
Differentiable modelling on GPUs
Workshop
2023-07-25T09:00:00-04:00
09:00
03:00
Why wait hours for computations to complete when they could take only a few seconds? Tired of prototyping code in an interactive, high-level language, only to rewrite it in a lower-level language to get high performance? Unsure about the applicability of differentiable programming? Or simply curious about parallel and GPU computing and automatic differentiation as game changers in physics-based and data-driven modelling?
juliacon2023-26918-differentiable-modelling-on-gpus
HPC
Ludovic RässSamuel OmlinIvan Utkin
en
This workshop covers trendy areas in modern high-performance computing with examples from geoscientific applications. The physical processes governing the evolution of natural systems are often mathematically described as systems of differential equations. In addition, the incredible amount of available data relating to these natural systems increases at an unprecedented pace. Understanding and predicting the dynamics of natural systems requires accurate predictive models whose parameters are constrained by combining differential equation solvers with a data-driven approach; fast and accurate solutions in turn require numerical implementations that leverage modern parallel hardware.
The goal of this workshop is to offer an interactive, hands-on session on solving and constraining systems of differential equations in parallel on GPUs, using distributed stencil computations and recent developments in GPU-ready automatic differentiation (AD). We plan to focus on conciseness and efficiency, using the [ParallelStencil.jl](https://github.com/omlins/ParallelStencil.jl), [ImplicitGlobalGrid.jl](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) and [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl) Julia packages as tools.
The workshop consists of two parts:
1. You will learn about parallel and distributed GPU computing and iterative solvers.
2. You will implement a PDE constrained optimisation on GPUs.
By the end of this workshop, you will:
- Have a GPU PDE solver that predicts ice flow and an AD-powered inverse solver to calibrate the forward model;
- Have a Julia code that achieves similar performance to legacy C, CUDA, MPI code or outperforms it;
- Know how the Julia language solves the "two-language problem";
- Be able to leverage the computing power of modern GPU accelerated servers and supercomputers.
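To give a flavour of the stencil computations at the heart of part 1, here is a minimal, CPU-only sketch of an explicit diffusion step in plain Julia. This is illustrative only and not the workshop's code; in the workshop, ParallelStencil.jl expresses such kernels so that they run portably on GPUs.

```julia
# Explicit 1D diffusion, dC/dt = D * d2C/dx2, on a regular grid.
# Plain-Julia sketch; ParallelStencil.jl turns stencils like this into
# portable @parallel kernels for CPU and GPU backends.
function diffusion_step!(C2, C, D, dt, dx)
    for ix in 2:length(C)-1
        C2[ix] = C[ix] + dt * D * (C[ix-1] - 2C[ix] + C[ix+1]) / dx^2
    end
    return C2
end

function run_diffusion(C0, D, dx, nt)
    dt = 0.4 * dx^2 / D          # stable explicit time step
    C, C2 = copy(C0), copy(C0)
    for _ in 1:nt
        diffusion_step!(C2, C, D, dt, dx)
        C, C2 = C2, C            # swap buffers
    end
    return C
end

x  = range(-3, 3, length=64)
C0 = exp.(-x .^ 2)               # Gaussian initial condition
C  = run_diffusion(C0, 1.0, step(x), 100)
```

The same update rule, written once, is what the workshop then maps onto thousands of GPU threads.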
We look forward to having you on board and will make sure to foster an exchange of ideas and knowledge, and to make the event as inclusive as possible.
Workshop GitHub repo: https://github.com/PTsolvers/gpu-workshop-JuliaCon23
Co-authors: Ivan Utkin¹ ², Samuel Omlin¹
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
false
https://pretalx.com/juliacon2023/talk/GTKJZL/
https://pretalx.com/juliacon2023/talk/GTKJZL/feedback/
32-123
Lunch Workshop Day (Room 2)
Lunch Break
2023-07-25T12:00:00-04:00
12:00
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28116-lunch-workshop-day-room-2-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/GLLENN/
https://pretalx.com/juliacon2023/talk/GLLENN/feedback/
32-123
Interactive Data Dashboards with Genie: Design to Deployment
Workshop
2023-07-25T13:30:00-04:00
13:30
03:00
Learn how to create production ready data applications and interactive data exploration dashboards in Julia, using Genie.
juliacon2023-26970-interactive-data-dashboards-with-genie-design-to-deployment
Statistics and Data Science
/media/juliacon2023/submissions/JJ3ZBP/blog-multiple-sources_VE463U2.jpg
Adrian SalceanuPere Giménez
en
In this comprehensive hands-on session, we'll build a fully functional, production-grade interactive data application using Genie. The key components of the application will include:
- An exploratory data analysis display of a dataset.
- A setup for configuring and training a machine learning model on the data.
- A REST API to serve the model.
We'll cover the entire process: setting up a new app, designing and building the UI, writing the Julia backend for data processing, and adding interactivity. Once ready, we'll deploy the application on AWS using modern DevOps techniques.
If time allows, we'll add more valuable features, such as restricting access with user authentication, improving performance with caching, and adding unit and integration tests.
false
https://pretalx.com/juliacon2023/talk/JJ3ZBP/
https://pretalx.com/juliacon2023/talk/JJ3ZBP/feedback/
32-144
Integrated assessment modeling using WorldDynamics.jl
Workshop
2023-07-25T09:00:00-04:00
09:00
03:00
Integrated Assessment Models (IAMs) are scientific models which enable the evaluation of the social and economic impact of environmental policies. WorldDynamics.jl is an open-source package that allows the development and reproduction of modern and historical IAMs in Julia. In this workshop, we demonstrate the main functionalities of the package while guiding the audience with comprehensive hands-on examples.
juliacon2023-26931-integrated-assessment-modeling-using-worlddynamics-jl
JuliaCon
Paulo Bruno Serafim
en
Integrated Assessment Models (IAMs) are valuable tools for a broad range of fields, including Economics and Social Science, enabling the evaluation of the economic and social impact of various environmental policies. They rely on scientific computing to analyze and understand complex and dynamic systems. We present WorldDynamics.jl, an open-source Julia package that allows the development and reproduction of IAMs using modern scientific computing techniques. In this workshop, we are going to interactively demonstrate the main functionalities of WorldDynamics.jl to the Julia community while guiding the audience with comprehensive hands-on examples.
Currently, the package reproduces the whole family of system dynamics models, from Jay Forrester's WORLD1 to the recent Earth4All, including the notable WORLD3, which was reimplemented directly from the DYNAMO code in Julia. Using the ModelingToolkit structure, each model is treated as a collection of subsystems, each a self-contained ODESystem with its own parameters, variables, and equations. All subsystems are connected into a single system that forms the complete model, with interactions represented as a system of differential equations solved using DifferentialEquations.jl and ModelingToolkit.jl. This modular structure allows us to work with subsystems individually and combine them in different ways into a unique model with little effort. Besides the original models, it is also straightforward to add existing updated versions as extensions, which exemplifies how a user can create their own update of a well-known model.
In this workshop, we are going to present the package's functionalities with hands-on examples using several of the currently available models. Initially, we start with a general explanation of the package, emphasizing how it is being developed. Subsequently, we proceed with the main demonstration of the package's features via practical coding cases, including the reproduction of the original figures from implemented historical models using PlotlyJS.jl. We also show how to alter the values of parameters and tables and edit model equations, leveraging the package's modular approach to combine different subsystems in ways that were not easily achievable previously. Finally, we present the development roadmap, which includes current work under construction, our expectations for the near future, and what we intend to achieve in the next major releases.
After the workshop, we expect the audience to have a clear understanding of the current capabilities and functionalities of WorldDynamics.jl. Alongside this workshop, we are also submitting a talk proposal to present the motivation and goals behind the package, share the model development process, and discuss the challenges we face.
false
https://pretalx.com/juliacon2023/talk/8DLMGU/
https://pretalx.com/juliacon2023/talk/8DLMGU/feedback/
32-144
Lunch Workshop Day (Room 4)
Lunch Break
2023-07-25T12:00:00-04:00
12:00
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28118-lunch-workshop-day-room-4-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/M9NMLS/
https://pretalx.com/juliacon2023/talk/M9NMLS/feedback/
32-144
Working with DataFrames.jl beyond CSV files
Workshop
2023-07-25T13:30:00-04:00
13:30
03:00
Data scientists need to work with various data sources and sinks in their projects. During the workshop you will learn how to work with standard data formats using DataFrames.jl. A special focus will be put on working with data that is larger than available RAM.
juliacon2023-26048-working-with-dataframes-jl-beyond-csv-files
Statistics and Data Science
Bogumił KamińskiJacob Quinn
en
Data science pipelines created in Julia typically need to be integrated into larger workflows involving various tools and technologies. An important aspect is therefore ensuring interoperability, especially in the case of large data that does not fit in the RAM of a single machine.
During the workshop we discuss working with the following data formats:
Section 1: https://github.com/bkamins/JuliaCon2023-Tutorial
* statistical packages (Stata/SAS/SPSS and RData);
* databases (SQLite and DuckDB);
* Apache Parquet.
Section 2: https://github.com/quinnj/JuliaCon2023-Tutorial
* CSV;
* Apache Arrow;
* JSON.
The examples will use DataFrames.jl, which provides a representative implementation of the Tables.jl table interface.
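To illustrate the interoperability contract involved, note that even a plain NamedTuple of equal-length column vectors already has the shape of a Tables.jl column table, which is why the same code can move data between DataFrames.jl and the sources and sinks above. A package-free sketch (the variable names are illustrative):

```julia
# A NamedTuple of column vectors is the simplest Tables.jl-style table;
# DataFrames.jl, CSV.jl, Arrow.jl and database drivers all speak this shape.
tbl = (id = [1, 2, 3], name = ["a", "b", "c"], score = [0.5, 0.9, 0.1])

# Column access is plain property access:
high = tbl.name[tbl.score .> 0.4]

# Rows can be derived from the columns when a sink wants row iteration:
rows = [NamedTuple{keys(tbl)}(getindex.(values(tbl), i)) for i in 1:length(tbl.id)]
```

With a real table type, `DataFrame(tbl)` or writing `tbl` to any Tables.jl-aware sink works the same way.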
false
https://pretalx.com/juliacon2023/talk/CMVSB9/
https://pretalx.com/juliacon2023/talk/CMVSB9/feedback/
32-G449 (Kiva)
Image Processing with Images.jl Workshop
Workshop
2023-07-25T09:00:00-04:00
09:00
03:00
The objective of this workshop is to give exposure to various digital image processing methods in Julia. JuliaImages hosts the major Julia packages for image processing, with Images.jl acting as the umbrella package that provides access to various algorithms. Applications of image processing are found in areas such as machine vision, biomedical image processing and remote sensing. This workshop will help Julia enthusiasts and scholars get trained in Julia-based image processing.
juliacon2023-24336-image-processing-with-images-jl-workshop
JuliaCon
Ashwani RatheeTim Holy
en
The goal of this workshop is to provide an overview of the various packages provided by JuliaImages through Images.jl, an umbrella package for a collection of packages that handle common image processing tasks. The majority of these packages can be found in the JuliaImages, JuliaArrays, JuliaIO, JuliaGraphics, and JuliaMath organizations. We will demonstrate how Images.jl streamlines image processing in Julia. Furthermore, we will examine benchmarks, design choices, and how JuliaImages takes advantage of the benefits provided by Julia itself.
In this workshop, we will explore algorithms provided by various packages like:
- ImageCore.jl
- ImageFiltering.jl
- ImageSegmentation.jl
- ImageTransformations.jl
- ImageQualityIndexes.jl
- ColorQuantization.jl
These tools can prove very useful during various ML stages, such as preprocessing, postprocessing and evaluation, which we will also explore.
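As a taste of what these packages express declaratively, here is a hand-rolled 3x3 mean (box) filter on a grayscale image in plain Julia; ImageFiltering.jl provides the same operation, plus many kernels and boundary handling, through its `imfilter` interface. This sketch is illustrative and not the package API.

```julia
# Naive 3x3 box blur on a grayscale image stored as a matrix.
# Border pixels are left untouched for brevity; ImageFiltering.jl handles
# boundary conditions (replicate, reflect, ...) properly.
function boxblur(img::AbstractMatrix{<:Real})
    out = float(copy(img))
    for j in 2:size(img, 2)-1, i in 2:size(img, 1)-1
        out[i, j] = sum(@view img[i-1:i+1, j-1:j+1]) / 9
    end
    return out
end

img = zeros(5, 5); img[3, 3] = 9.0    # single bright pixel
blurred = boxblur(img)
```

The single bright pixel is spread evenly over its 3x3 neighbourhood, which is exactly the behaviour a mean kernel encodes.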
false
https://pretalx.com/juliacon2023/talk/TTAJXA/
https://pretalx.com/juliacon2023/talk/TTAJXA/feedback/
32-G449 (Kiva)
Lunch Workshop Day (Room 1)
Lunch Break
2023-07-25T12:00:00-04:00
12:00
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28115-lunch-workshop-day-room-1-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/JKDW7E/
https://pretalx.com/juliacon2023/talk/JKDW7E/feedback/
32-G449 (Kiva)
Building REST APIs with Julia
Workshop
2023-07-25T13:30:00-04:00
13:30
03:00
Although Julia is a growing, fast language with relatively easy syntax, and thus quite a productive one, its popularity still seems enormously lopsided towards the scientific community. It can therefore be quite an intimidating language and community to get into for newcomers without a strong scientific or mathematical background.
To address this intimidation, I would like to propose this workshop, in which we'll build and deploy a REST project for the web, a ubiquitous technology.
juliacon2023-26971-building-rest-apis-with-julia
JuliaCon
JAKE
en
Although Julia is a growing, fast language with relatively easy syntax, and thus quite a productive one, its popularity still seems enormously lopsided towards the scientific community. It can therefore be quite an intimidating language and community to get into for newcomers without a strong scientific or mathematical background.
To address the aforementioned intimidation, I would like to propose this workshop, in which we'll learn to build and deploy a REST project for the web. With the web being a ubiquitous technology, this workshop should have a potentially wide-reaching audience, with the target groups being experienced developers in other languages checking out Julia, and people with some basic Julia experience looking to build a project where they can hone the skills they've picked up and put them to practical use.
I'm looking at developing this workshop as a natural progression from the "Introduction to Julia" workshop.
false
https://pretalx.com/juliacon2023/talk/BX3MYR/
https://pretalx.com/juliacon2023/talk/BX3MYR/feedback/
32-D463 (Star)
Hands on lumped parameter models with CirculatorySystemModels.jl
Workshop
2023-07-25T09:00:00-04:00
09:00
03:00
Come and help us circulate the love for lumped parameter modelling with CirculatorySystemModels.jl, a fast, friendly, flexible and functional circulatory modelling framework. In this workshop, participants will be guided through running and analyzing a lumped parameter model, embodying the statement "if you can draw it, you can model it". We will then guide users through how they can begin to develop their own advanced features using the underlying ModelingToolkit.jl package.
juliacon2023-26852-hands-on-lumped-parameter-models-with-circulatorysystemmodels-jl
SciML
/media/juliacon2023/submissions/AYQRRE/CirculatorySystemModels2_xSsEUlR.png
Torsten SchenkelHarry Saxton
en
The lumped-parameter model (also called lumped-element model) simplifies the description of the behavior of spatially distributed physical systems, such as electrical circuits, into a topology consisting of discrete entities that approximate the behavior of the distributed system under certain assumptions. It is useful in electrical systems (including electronics), mechanical multi-body systems, heat transfer, acoustics and circulatory mechanics.
Our focus in this workshop will be on the dynamics of the circulatory (cardiovascular) system. Within the realm of circulatory system mechanics, lumped parameter (0D) modelling offers the unique ability to examine both cardiac function and global hemodynamics within a single model. Due to the interconnected nature of the cardiovascular system, being able to quantify both is crucial for the effective prediction and diagnosis of cardiovascular diseases. Lumped parameter modelling derives one of its main strengths from the minimal computation time required to solve ODEs and algebraic equations. Furthermore, the relatively simple structure of the model allows most personalized simulations to be automated, meaning that embedding these lumped parameter models into a clinical workflow could one day become trivial.
CirculatorySystemModels.jl is an acausal modelling library, built on top of ModelingToolkit.jl and the SciML ecosystem. This tight integration means that once the model is set up, global optimization, sensitivity analysis, uncertainty analysis, ..., are just a few lines of code away. And due to the speed of Julia, even high-dimensional parameter models become feasible for a high-abstraction level modeling system.
In this workshop we will introduce CirculatorySystemModels.jl in a hands-on fashion. We will walk users through the fundamentals of acausal modelling, and will then demonstrate how we can build up a complex cardiovascular network from simple elements, in an "if you can draw it, you can model it" paradigm, by simply connecting the individual components. We will then use the power of ModelingToolkit.jl, DifferentialEquations.jl, and other packages in the SciML ecosystem to show how the created models can be fed into the wider analysis framework.
We will demonstrate how new models and elements can be implemented. We will then take users through the event handling feature in ModelingToolkit to demonstrate how we can recreate complex physiological components, opening up modelling possibilities that other platforms can't support. In the second half of the workshop we will focus on model analysis and demonstrate how our pre-built circulation models can be used in relevant research. Participants will leave with a clear understanding of how to use the Julia package ecosystem to efficiently handle these difficult circulation models, and with a new perspective on the model analysis advances made in Julia in recent years.
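As a plain-Julia illustration of the mathematics behind a single lumped element, here is a two-element Windkessel model (peripheral resistance R and arterial compliance C driven by a pulsatile inflow) integrated with forward Euler. In the workshop the same element is declared acausally with CirculatorySystemModels.jl and solved via DifferentialEquations.jl; the parameter values below are purely illustrative.

```julia
# Two-element Windkessel: C * dP/dt = Q(t) - P / R.
windkessel(P, Q, R, C) = (Q - P / R) / C

# Pulsatile inflow: a sinusoidal ejection phase within each 0.8 s beat.
inflow(t) = t % 0.8 < 0.3 ? 400 * sin(pi * (t % 0.8) / 0.3) : 0.0

function simulate(P0, R, C; dt = 1e-4, T = 8.0)
    Ps = Float64[]
    P = P0
    for t in 0:dt:T
        P += dt * windkessel(P, inflow(t), R, C)
        push!(Ps, P)
    end
    return Ps
end

Ps = simulate(80.0, 1.0, 1.5)   # illustrative units: mmHg and seconds
```

Even this toy model shows the characteristic pressure waveform: a rise during ejection and an exponential decay (time constant R*C) during diastole, which is what makes 0D models so cheap to solve and calibrate.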
false
https://pretalx.com/juliacon2023/talk/AYQRRE/
https://pretalx.com/juliacon2023/talk/AYQRRE/feedback/
32-D463 (Star)
Lunch Workshop Day (Room 3)
Lunch Break
2023-07-25T12:00:00-04:00
12:00
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28117-lunch-workshop-day-room-3-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/FGD3VM/
https://pretalx.com/juliacon2023/talk/FGD3VM/feedback/
32-D463 (Star)
Using NeuralODEs in real life applications
Workshop
2023-07-25T13:30:00-04:00
13:30
03:00
NeuralODEs lead to amazing results in academic examples, but expectations are often disappointed as soon as one tries to adapt this concept to real-life use cases. Bad convergence behavior and the handling of discontinuities and/or instabilities are just some of the stumbling blocks that might pop up during the first steps. During the workshop, we will show how to integrate real-life industrial models into NeuralODEs using FMI, and present sophisticated training strategies.
juliacon2023-26880-using-neuralodes-in-real-life-applications
SciML
/media/juliacon2023/submissions/EWL3LC/fmifluxjl_logo_640_320_EcJsK1R.png
Tobias ThummererLars Mikelsons
en
Despite the great potential of NeuralODEs - the structural combination of an artificial neural network and an ODE solver - they are not yet a standard tool in the modeling of physical systems. We believe there are mainly two reasons for this: First, NeuralODEs develop their full potential when paired with existing models that capture some basic physics; however, existing models are typically set up in dedicated simulation tools and thus not available in Julia. Second, training NeuralODEs is tricky and not yet plug-and-play. In this tutorial, we share methods to employ models from various simulation tools in NeuralODEs, along with training strategies that can deal with challenges of industrial scale. Both were validated in real examples ranging from automotive to medical use cases.
For us, one of the key challenges in making the jump from science to application was allowing the re-use of existing simulation models from common modeling tools as parts of NeuralODEs. For this purpose, we picked the model exchange format FMI (https://fmi-standard.org/), which can be understood as a container format for ODEs and is supported by more than 170 simulation tools. Using our Julia package FMI.jl, it is possible to handle FMUs - as FMI-compliant models are called - in Julia. Further, the package connects FMUs to the common automatic differentiation packages in Julia. The library FMIFlux.jl enables the use of FMUs in machine learning applications. Together, the two packages allow for the following workflow:
(1) Export your model from your favorite simulation tool as FMU,
(2) Import the FMU in Julia,
(3) Use the FMU as layer of an ANN in Julia,
(4) Train the resulting NeuralODE, called NeuralFMU.
Unfortunately, the fourth step is more complicated than it looks at first glance. NeuralFMUs, and NeuralODEs in general, tend to exhibit expensive gradient computation and thus very long training times. Further, they tend to converge to local minima or enter unstable regions. To cope with these challenges, we present strategies to:
• design a target-oriented topology for such a model
• initialize or even pre-train such models
• deal with large data and use batching
• efficiently train such multi-domain models
Equipped with knowledge about typical pitfalls and their workarounds, workshop participants will have an easier time dealing with their own NeuralODE applications.
false
https://pretalx.com/juliacon2023/talk/EWL3LC/
https://pretalx.com/juliacon2023/talk/EWL3LC/feedback/
26-100
Breakfast in Stata Center
Breakfast
2023-07-26T07:00:00-04:00
07:00
01:30
Get a delicious breakfast.
juliacon2023-35793-breakfast-in-stata-center
JuliaCon
en
Get a delicious continental breakfast and fresh coffee served directly in the Stata Center at JuliaCon 2023.
false
https://pretalx.com/juliacon2023/talk/BWEHVS/
https://pretalx.com/juliacon2023/talk/BWEHVS/feedback/
26-100
Opening Ceremony
Ceremony
2023-07-26T08:30:00-04:00
08:30
00:30
Welcome to the opening ceremony of JuliaCon 2023! Join us as we kick off this exciting conference with a warm welcome, keynote speeches, and announcements about what's in store for the week. Get ready to meet new colleagues, exchange ideas, and discover the latest in Julia programming. Let's get started!
juliacon2023-28119-opening-ceremony
JuliaCon
en
Welcome to the opening ceremony of JuliaCon 2023! Join us as we kick off this exciting conference with a warm welcome, keynote speeches, and announcements about what's in store for the week. Get ready to meet new colleagues, exchange ideas, and discover the latest in Julia programming. Let's get started!
false
https://pretalx.com/juliacon2023/talk/LTBAW7/
https://pretalx.com/juliacon2023/talk/LTBAW7/feedback/
26-100
Keynote: Tim Davis
Keynote
2023-07-26T09:00:00-04:00
09:00
00:45
Timothy A. Davis, PhD, is a Professor in the Computer Science and Engineering Department at Texas A&M University, and a Fellow of SIAM, ACM, and IEEE. He serves as an associate editor for the ACM Transactions on Mathematical Software. In 2018 he was awarded the Walston Chubb Award for Innovation, by Sigma Xi.
juliacon2023-28060-keynote-tim-davis
JuliaCon
en
Prof. Davis is a world leader in the creation of innovative algorithms and widely used software for solving large sparse matrix problems that arise in a vast range of real-world technological and social applications. He is the author of SuiteSparse.
false
https://pretalx.com/juliacon2023/talk/U9ABUR/
https://pretalx.com/juliacon2023/talk/U9ABUR/feedback/
26-100
Keynote: Christopher Rackauckas
Keynote
2023-07-26T09:45:00-04:00
09:45
00:45
Dr. Rackauckas is a Research Affiliate and Co-PI of the Julia Lab at the Massachusetts Institute of Technology, VP of Modeling and Simulation at JuliaHub and Creator / Lead Developer of JuliaSim. He's also the Director of Scientific Research at Pumas-AI and Creator / Lead Developer of Pumas, and Lead Developer of the SciML Open Source Software Organization.
juliacon2023-28061-keynote-christopher-rackauckas
JuliaCon
en
Dr. Rackauckas's research and software is focused on Scientific Machine Learning (SciML): the integration of domain models with artificial intelligence techniques like machine learning. By utilizing the structured scientific (differential equation) models together with the unstructured data-driven models of machine learning, our simulators can be accelerated, our science can better approximate the true systems, all while enjoying the robustness and explainability of mechanistic dynamical models.
false
https://pretalx.com/juliacon2023/talk/YKK7AD/
https://pretalx.com/juliacon2023/talk/YKK7AD/feedback/
26-100
JuliaHub sponsor talk
Gold sponsor talk
2023-07-26T10:30:00-04:00
10:30
00:10
Sponsor talk of JuliaHub, platinum sponsor at JuliaCon.
juliacon2023-28190-juliahub-sponsor-talk
JuliaCon
en
Thanks to JuliaHub, platinum sponsor at JuliaCon 2023! Do check out their booth at the conference.
JuliaHub's mission is to bring the power of Julia to the world of scientific and technical computing. To equip the brightest minds working on the world’s most challenging problems with massive computational power, unparalleled speed and efficiency in a seamless and secure environment.
false
https://pretalx.com/juliacon2023/talk/XSAQWJ/
https://pretalx.com/juliacon2023/talk/XSAQWJ/feedback/
26-100
ASML & Julia
Gold sponsor talk
2023-07-26T10:40:00-04:00
10:40
00:10
Thank you to ASML, gold sponsor at JuliaCon 2023!
juliacon2023-31579-asml-julia
JuliaCon
en
Thank you to ASML, gold sponsor at JuliaCon 2023!
ASML is an innovation leader in the semiconductor industry. We provide chipmakers with everything they need – hardware, software and services – to mass produce patterns on silicon through lithography.
false
https://pretalx.com/juliacon2023/talk/WDKBSM/
https://pretalx.com/juliacon2023/talk/WDKBSM/feedback/
26-100
Julia Developer Survey Results
Ceremony
2023-07-26T11:00:00-04:00
11:00
00:15
Presentation of results from this year's Julia Developer Survey and discussion of progress made.
juliacon2023-32575-julia-developer-survey-results
JuliaCon
Andrew Claster
en
Presentation of questions and results from this year's Julia Developer Survey and discussion of progress made.
false
https://pretalx.com/juliacon2023/talk/W93AJB/
https://pretalx.com/juliacon2023/talk/W93AJB/feedback/
26-100
Morning Break Day 1 Room 7
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28129-morning-break-day-1-room-7
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/E7VTQ7/
https://pretalx.com/juliacon2023/talk/E7VTQ7/feedback/
26-100
Patterns for portable parallelism: porting CliMA to GPUs
Talk
2023-07-26T11:30:00-04:00
11:30
00:30
Despite many efforts, it can be difficult to find abstractions that are efficient in both CPU and GPU code. In our effort to add GPU support to ClimaCore.jl, we have established several useful patterns for describing common spatial operations at a high level, which can then be specialized in different ways for different computational backends.
juliacon2023-27027-patterns-for-portable-parallelism-porting-clima-to-gpus
HPC
/media/juliacon2023/submissions/HQ7V3N/logo_0wNli0G.png
Simon Byrne
en
[ClimaCore.jl](https://github.com/CliMA/ClimaCore.jl) is a high-level framework for constructing spatial discretizations for earth system models, and is currently being used by the CliMA project for atmosphere and land modeling. It makes use of many features of the Julia language, such as broadcast operator fusion, code specialization and aggressive inlining.
We have recently undertaken an effort to add GPU support, with the aim of maintaining users' ability to describe operations at a high level, while allowing the CPU and GPU backends to use very different threading and memory access patterns for optimal performance. This talk will describe the context, and how we have made use of Julia's syntax and its implementation of broadcasting, as well as features of the GPU support libraries, to design a flexible yet powerful mechanism for describing climate models.
false
https://pretalx.com/juliacon2023/talk/HQ7V3N/
https://pretalx.com/juliacon2023/talk/HQ7V3N/feedback/
26-100
HDF5.jl: Hierarchical data storage for the Julia ecosystem
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
HDF5.jl is a Julia package for reading and writing data using the Hierarchical Data Format version 5 (HDF5) C library. HDF5 is a flexible, self-describing format suitable for storing complex scientific data, and is used as a container for many other formats.
This talk will give an overview of the HDF5 format and give an introduction and examples of basic usage of the HDF5.jl package. We will highlight some recent features and discuss future plans for the package.
juliacon2023-28010-hdf5-jl-hierarchical-data-storage-for-the-julia-ecosystem
HPC
/media/juliacon2023/submissions/WGQEUM/image_2D1zhUZ.png
Mark Kittisopikul, Ph.D.Simon ByrneMustafa Mohamad
en
As the name suggests, the HDF5 format allows storing data in a hierarchical layout of groups and datasets. It is self-describing, meaning that type and dimension metadata are stored in the file, alongside any custom metadata, known as attributes. Its flexibility means that it is widely used in many scientific domains, and as a container format by other libraries, including NetCDF, JLD.jl/JLD2.jl, PyTables and MATLAB's MAT-files.
HDF5.jl is a Julia package for accessing HDF5 files, using the HDF5 C library maintained by The HDF Group. It provides a simple, high-level interface that makes it easy to save and load data, as well as a more flexible interface that lets users take advantage of many of HDF5's features. Although the HDF5.jl package has been around since 2012, we have recently undertaken some significant changes to improve its modularity and make newer features available.
Some recent feature additions to HDF5.jl package include:
* Filter pipeline API that supports custom plugins and several advanced compression filter subpackages.
* Distributed reading and writing with MPI.jl.
* Reentrant API locks to prevent errors when accessing from multiple threads.
* Virtual datasets, which support storing data across multiple HDF5 files.
* Direct access to remote files stored on AWS S3.
Finally, we will discuss future plans:
* A path for thread-parallel I/O operations with HDF5 files via the raw chunk API to access the byte-level layout of chunked datasets.
* BinaryBuilder.jl provided binaries across all supported platforms.
false
https://pretalx.com/juliacon2023/talk/WGQEUM/
https://pretalx.com/juliacon2023/talk/WGQEUM/feedback/
26-100
Lunch Day 1 (Room 7)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue, with food available for purchase.
juliacon2023-28072-lunch-day-1-room-7-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue, with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/8ZTXLH/
https://pretalx.com/juliacon2023/talk/8ZTXLH/feedback/
26-100
Making hard decisions: from influence diagrams to optimization
Talk
2023-07-26T14:00:00-04:00
14:00
00:30
We present the Decision Programming framework for solving multi-stage stochastic problems. The problem is first formulated as an influence diagram and then converted to a mixed-integer linear programming problem. The DecisionProgramming.jl package is implemented as an extension to JuMP, taking advantage of the versatility of JuMP in using different solvers and accessing different solver attributes.
juliacon2023-25258-making-hard-decisions-from-influence-diagrams-to-optimization
JuliaCon
Olli Herrala
en
Decision Programming combines ideas from the fields of decision analysis and stochastic programming, using the strengths of each approach to overcome the limitations of the other. Stochastic optimization has struggled with representing endogenous uncertainties, which can easily be represented in an influence diagram. On the other hand, solution methods for influence diagrams are often problem-specific and require strong assumptions, such as the no-forgetting assumption. Using the solution methods available for mixed-integer linear problems helps us overcome these challenges.
While Decision Programming is still in a relatively early development phase as a mathematical optimization framework, we have published the package DecisionProgramming.jl, implementing the framework in its current state. We believe that Decision Programming can be a valuable tool for users in the field of mathematical optimization and decision-making. A significant amount of work has gone into making the framework user-friendly, making it easy for an inexperienced user to formulate decision problems and obtain solutions without strong knowledge of the underlying optimization methods.
In addition to the core framework, we present examples of primal heuristics and valid inequalities for improving the computational performance of the framework considerably. DecisionProgramming.jl is built as an extension to JuMP, allowing more experienced users to easily modify and further extend the formulations in the package. This combination of being easy to use for beginners and easy to build on for experienced users is what has strongly contributed to the popularity of JuMP, and we hope this will also be the case for DecisionProgramming.jl.
false
https://pretalx.com/juliacon2023/talk/VGMMMP/
https://pretalx.com/juliacon2023/talk/VGMMMP/feedback/
26-100
Anti-Patterns: How not to do things in Julia
Talk
2023-07-26T14:30:00-04:00
14:30
00:30
Design patterns offer general solutions to common problems, and it is standard practice to include them in software development. Anti-patterns are their complement: while design patterns focus on how to do things by following best practices, anti-patterns focus on how not to do things. In this talk, I will speak about anti-patterns in Julia and the worst practices behind them. The talk follows a problem-solution approach.
juliacon2023-27048-anti-patterns-how-not-to-do-things-in-julia
JuliaCon
Gajendra Deshpande
en
Design patterns offer general solutions to common problems, and it is standard practice to include them in software development. Anti-patterns are their complement: while design patterns focus on how to do things by following best practices, anti-patterns focus on how not to do things. In this talk, I will speak about anti-patterns in Julia and the worst practices behind them. The talk follows a problem-solution approach. I will explain anti-patterns related to correctness, maintainability, readability, security, and performance, and possible solutions to avoid them in code.
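As a taste of the performance category, one classic Julia anti-pattern is the untyped global variable; a minimal sketch (names are illustrative, not from the talk):

```julia
# Anti-pattern: a non-const global is type-unstable inside functions,
# so `scale` must be looked up and boxed on every call.
scale = 2.0
bad(v) = [scale * x for x in v]

# Fix: declare the global `const` (or pass it as an argument),
# letting the compiler specialize on its type.
const SCALE = 2.0
good(v) = [SCALE * x for x in v]
```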
Outline
1. Introduction to Anti-Patterns (02 minutes)
2. Correctness anti-patterns related to breaking code or doing wrong things (04 minutes)
3. Maintainability anti-patterns that make code hard to maintain (04 minutes)
4. Readability anti-patterns that make code hard to read and understand (03 minutes)
5. Security anti-patterns that make code vulnerable (02 minutes)
6. Performance anti-patterns that make code slow (02 minutes)
7. General anti-patterns such as Spaghetti Code, Golden Hammer, Boat Anchor, Dead Code, Proliferation of Code, and Packaging (05 minutes)
8. Switching from other languages such as Python, R and MATLAB (03 minutes)
9. Summary and Questions (05 minutes)
false
https://pretalx.com/juliacon2023/talk/PBWWFV/
https://pretalx.com/juliacon2023/talk/PBWWFV/feedback/
26-100
Package extensions
Talk
2023-07-26T15:00:00-04:00
15:00
00:30
[Package extensions](https://pkgdocs.julialang.org/dev/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) are a new feature available in Julia 1.9. They aim to solve the problem where you might be reluctant to take on a package as a dependency just to extend a function to a type in that package, due to e.g. the increased load time of your package.
juliacon2023-26758-package-extensions
Julia Base and Tooling
Kristoffer Carlsson
en
The multiple dispatch paradigm of Julia makes it very easy to extend functionality to new types. A package can just define a new method of a generic function that accepts a type in another package, and anyone that calls that function with that type will get the extended behavior. An example of this could be a plotting package that extends a plotting method to various types in the Julia ecosystem.
However, to extend a method with a type from another package, one needs to make that package a dependency. This has a cost, most noticeably in package load time. This means that for a package author, there is always a conflict between wanting to provide useful method extensions and keeping the load time of the package small.
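The extension mechanism described above can be sketched with a toy module standing in for the other package (all names here are illustrative, not from a real package):

```julia
module Shapes                        # stands in for the "other package"
    export area
    struct Circle; r::Float64; end
    area(c::Circle) = π * c.r^2      # generic function owned by this module
end

using .Shapes

struct Square                        # a new type defined outside Shapes
    s::Float64
end

# Extend Shapes' generic function for the new type; any caller of
# `area` now gets the extended behavior via multiple dispatch.
Shapes.area(sq::Square) = sq.s^2
```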
The go-to solution for the problem above has been the Requires.jl package. In Requires.jl one defines a piece of code that gets automatically executed when another package is loaded. This code typically adds some extra method for one of the types in the package. Since you are not directly depending on the package, you no longer unconditionally pay the loading cost of it.
Unfortunately, Requires.jl comes with some drawbacks:
- The code that gets executed when a package gets loaded is `eval`-ed into the module, which means that it cannot be precompiled, leading to worse latency.
- It is not possible to set compatibility constraints on the packages triggering the conditional code.
- Requires.jl had issues with PackageCompiler since it needed access to files at runtime.
The new ["package extension"](https://pkgdocs.julialang.org/dev/creating-packages/#Conditional-loading-of-code-in-packages-(Extensions)) functionality in 1.9 is meant to fully replace the usage of Requires.jl while solving the issues mentioned above. This talk will give an overview of how the new extension system works, discuss best practices and give some examples of its usage from the package ecosystem.
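A minimal extension layout under the new system looks roughly like this (package names are hypothetical; the Contour.jl UUID is shown as an example weak dependency):

```toml
# Project.toml of MyPkg (hypothetical package)
[weakdeps]
Contour = "d38c429a-6771-53c6-b99e-75d170b6e991"

[extensions]
ContourExt = "Contour"
```

The extension module itself lives in `ext/ContourExt.jl` and is loaded automatically, and precompiled ahead of time, once both MyPkg and Contour are present in the session.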
false
https://pretalx.com/juliacon2023/talk/YQQ9ZV/
https://pretalx.com/juliacon2023/talk/YQQ9ZV/feedback/
26-100
Continuous Improvement of the CI ecosystem in Julia
Talk
2023-07-26T15:30:00-04:00
15:30
00:30
This past year, the CI ecosystem in Julia has seen some notable improvements. Attend this talk to learn how we’ve built workflows to support the growing needs of Base Julia CI, our friends at SciML and even some other ecosystem projects that require very wide platform testing. Our efforts have also made Base Julia CI more reliable, reproducible, and easily analyzed. This talk will showcase some of the tools available to ecosystem projects in need of a deeper degree of testing.
juliacon2023-26890-continuous-improvement-of-the-ci-ecosystem-in-julia
Julia Base and Tooling
Anant ThazhemadamElliot Saba
en
Julia’s CI landscape has seen some notable improvements, both with respect to the compute and the tooling available. In this talk, we’ll demonstrate how you can make the best use of Buildkite for your packages in the Julia ecosystem from start to finish, including testing on GPUs!
Since each package’s CI requirements are likely to vary from each other, we’ll talk about plugins that can help make your lives a bit easier, including tasks such as:
- Securely using secrets in your workflows
- Running workflows within isolated, reproducible sandboxes
- Easily customizing such sandboxes on the fly
- Avoiding boilerplate with template workflows
- Ensuring code integrity is maintained before running sensitive workflows
For each plugin, we’ll delve into the motivation behind the plugin first, followed by some small implementation details users need to know about to properly use each.
We’ll also present simple, toy examples of all this tooling so that users can reference them while building their own, more complex pipelines.
Finally, we’ll showcase some real examples where these plugins are put to good use by some of the packages in our ecosystem. Come, let’s talk CI!
false
https://pretalx.com/juliacon2023/talk/XJKBZY/
https://pretalx.com/juliacon2023/talk/XJKBZY/feedback/
26-100
PackageAnalyzer.jl: analyzing the open source ecosystem & more!
Talk
2023-07-26T16:00:00-04:00
16:00
00:30
We have developed [PackageAnalyzer.jl](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) in order to analyze the whole open source Julia ecosystem, but it can also be used to analyze private registries and dependency graphs too. In this talk we will give an update on the growth of the Julia ecosystem and how well best-practices such as CI, tests, and documentation have kept up with this growth, as well as show you how to use PackageAnalyzer to easily analyze your own dependency graph.
juliacon2023-26933-packageanalyzer-jl-analyzing-the-open-source-ecosystem-more-
Julia Base and Tooling
Eric P. HansonMosè Giordano
en
[PackageAnalyzer.jl](https://github.com/JuliaEcosystem/PackageAnalyzer.jl) lets you statically inspect the content of a package and collect information about its use of documentation, test suites, and continuous integration, as well as the licenses used, the number of lines of code, and the number of contributors. Additionally, using JuliaSyntax.jl, PackageAnalyzer can introspect the source code and count the number of struct definitions, exports, method definitions, and more. PackageAnalyzer also supports analyzing an entire `Manifest.toml` or even a whole registry, and takes care to look at exactly the code specified in a manifest rather than the latest version of the code in its source repository. This can be used to check that e.g. all dependencies in an application have an open-source license, or have tests, and is new to PackageAnalyzer v1.0.
We will show how to use PackageAnalyzer in an example application, and the kinds of analyses which can be performed. We will also show statistics about the whole open-source ecosystem, as an update to the JuliaCon 2021 talk called _Code, docs, and tests: what's in the General registry?_. We'll talk about the current state of the ecosystem as well as how it has grown and changed since 2021.
false
https://pretalx.com/juliacon2023/talk/BTUFJX/
https://pretalx.com/juliacon2023/talk/BTUFJX/feedback/
26-100
Development of a meta analysis package for Julia
Talk
2023-07-26T16:30:00-04:00
16:30
00:30
Meta-analysis is a widely used statistical technique for pooling diverse study results. Julia does not have a meta-analysis package. The goal of this talk is to present the development and usage of a pure-Julia meta-analysis package that I have developed, currently in the pre-alpha stage. I will explain the steps and processes of implementing fixed-effects and random-effects meta-analysis in Julia, and the construction of several plotting functions and advanced methods that are unique features of this package.
juliacon2023-26861-development-of-a-meta-analysis-package-for-julia
Julia Base and Tooling
Arindam Basu
en
The goal of this presentation is to present the first-ever meta-analysis package written entirely in Julia and describe its various advanced features in detail. Meta-analysis is an important data-analysis technique, widely used in health and medicine for the generation of evidence; it is a pivotal technique for evidence-based health and is widely used in other fields (economics, environmental health, and the social sciences, among others). To date, Julia does not have a dedicated package for meta-analysis, and this package will fill that important gap in the Julia package ecosystem.
I have developed a Julia-based meta-analysis package; it is in early alpha and I am testing it with my colleagues and students before release. The package is written in Julia using a range of existing packages and implements the meta-analysis techniques described in leading texts. It includes the following features: meta-analysis of single variables and of comparative studies; fixed-effects and random-effects meta-analysis implementing the DerSimonian-Laird and REML methods; testing for heterogeneity; forest and funnel plots; regression tests for publication bias; sensitivity analyses; meta-regression; and multilevel meta-analysis. The package will enable users to conduct meta-analysis of binary variables, continuous variables, and time-to-event data.
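For reference, the DerSimonian-Laird estimator mentioned above computes the between-study variance from Cochran's Q statistic (standard textbook form; the notation here is generic, not the package's API):

```latex
Q = \sum_{i=1}^{k} w_i \left(\hat\theta_i - \bar\theta\right)^2,
\qquad
\hat\tau^2_{\mathrm{DL}} = \max\!\left(0,\;
  \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 \big/ \sum_i w_i}\right)
```

where the $w_i$ are the fixed-effect (inverse-variance) weights, $\hat\theta_i$ the study estimates, and $\bar\theta$ the fixed-effect pooled estimate over $k$ studies.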
In the presentation, I will present the functions and packages used, discuss the code, and share experiences from building the package. I will also provide several examples of graphs and different types of data sets. The presentation will end with plans for future development and for setting up a user community. It will serve as both a demonstration and a tutorial on meta-analysis.
false
https://pretalx.com/juliacon2023/talk/7SUERE/
https://pretalx.com/juliacon2023/talk/7SUERE/feedback/
26-100
Poster Session + Appetizers
Poster Session
2023-07-26T18:00:00-04:00
18:00
02:00
Come check out this year's JuliaCon 2023 posters. The poster session will take place in Stata, and there will be food around.
juliacon2023-28059-poster-session-appetizers
JuliaCon
en
JuliaCon 2023 at MIT will host a poster session for Julia users to showcase their work and engage with the community. Join us for a vibrant exchange of ideas and insights!
See https://juliacon.org/2023/posters/ for the poster list and abstracts.
false
https://pretalx.com/juliacon2023/talk/GB7PYN/
https://pretalx.com/juliacon2023/talk/GB7PYN/feedback/
32-082
Pre-Recorded Talks
Pre-recorded Talks
2023-07-26T10:00:00-04:00
10:00
06:00
Pre-recorded talks will be played in a playlist at 32-082 on Wednesday from 10 AM - 4 PM.
juliacon2023-35914-pre-recorded-talks
Pre-recorded Talks
en
List of talks that will be played, in no particular order:
1. ctrl-VQE: Julianic simulations of a pulse-level VQE - https://pretalx.com/juliacon2023/talk/WUTK8E/
2. VLLimitOrderBook.jl, simulation of electronic order book dynamic - https://pretalx.com/juliacon2023/talk/RRL3KQ/
3. Streaming real-time financial market data with AWS cloud - https://pretalx.com/juliacon2023/talk/8DFWR3/
4. Nested approaches for multi-stage stochastic planning problems - https://pretalx.com/juliacon2023/talk/KB8HS7/
5. ModelOrderReduction.jl -- Symbolic-Enhanced Model Simplification - https://pretalx.com/juliacon2023/talk/HZV83P/
6. Nerf.jl a real-time Neural 3D Scene Reconstruction in pure Julia - https://pretalx.com/juliacon2023/talk/DESQVZ/
7. Becoming a research software engineer with Julia - https://pretalx.com/juliacon2023/talk/UTJDAF/
8. Ignite.jl: A brighter way to train neural networks - https://pretalx.com/juliacon2023/talk/B3GMAE/
9. Fast Higher-order Automatic Differentiation for Physical Models - https://pretalx.com/juliacon2023/talk/D8RHE7/
10. Julia usecases in actuarial science related fields - https://pretalx.com/juliacon2023/talk/Z3WWTN/
11. Vahana.jl - A framework for large-scale agent-based models - https://pretalx.com/juliacon2023/talk/B9DG8U/
12. Julia at NCI - https://pretalx.com/juliacon2023/talk/DT3B7Z/
13. Quantum Monte Carlo in Julia - https://pretalx.com/juliacon2023/talk/S339TU/
14. Lessons learned while working as a technical writer at FluxML - https://pretalx.com/juliacon2023/talk/7AYUSN/
15. RxInfer.jl: a package for real-time Bayesian Inference - https://pretalx.com/juliacon2023/talk/WQNE9L/
16. Wrapping Up Offline RL as part of AutoMLPipeline Workflow - https://pretalx.com/juliacon2023/talk/UXE8SJ/
17. Sorting gene trees by their path within a species network - https://pretalx.com/juliacon2023/talk/ZVLMVZ/
18. Pipelines & JobSchedulers for computational workflow development - https://pretalx.com/juliacon2023/talk/ZCNXEB/
19. NLP and human behavior with Julia (and a bit of R) - https://pretalx.com/juliacon2023/talk/LCVKZS/
false
https://pretalx.com/juliacon2023/talk/V3BP98/
https://pretalx.com/juliacon2023/talk/V3BP98/feedback/
32-123
Morning Break Day 1 Room 2
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28124-morning-break-day-1-room-2
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/MXMFWS/
https://pretalx.com/juliacon2023/talk/MXMFWS/feedback/
32-123
Third Millennium Symbolic Learning with Sole.jl
Talk
2023-07-26T11:30:00-04:00
11:30
00:30
[Sole.jl](https://bit.ly/3X5pQUJ) is a framework for *symbolic learning*, i.e., machine learning with symbolic logic.
It comprises packages for:
- Computational logic ([SoleLogics.jl](https://bit.ly/3QwLRJF));
- Operating with multimodal (un)structured data ([SoleData.jl](https://bit.ly/3GYh4SF), [SoleFeatures.jl](https://bit.ly/3GVDNPj));
- Learning, inspecting and analyzing symbolic models ([SoleModels.jl](https://bit.ly/3GVDPqp), [SolePostHoc.jl](https://bit.ly/3QAIL7b)).
juliacon2023-26908-third-millennium-symbolic-learning-with-sole-jl
General Machine Learning
Giovanni Pagliarini
en
Symbolic learning is machine learning based on formal logic. Its peculiarity lies in the fact that the learned models enclose an **explicit knowledge representation**, which offers many opportunities:
- Verifying that the model's thought process is adequate for a given task;
- Learning of new insights by simple inspection of the model;
- Manual refinement of the model at a later time.
These levels of **transparency** (or *interpretability*) are generally not available with standard machine learning methods; thus, as AI permeates more and more aspects of our lives, symbolic learning is becoming increasingly popular. In spite of this, implementations of symbolic algorithms (e.g., extraction of decision trees or rules) are mostly scattered across different languages and machine learning frameworks.
*Enough with this!* The small but growing community of symbolic learning deserves a programming framework of its own. So, here comes [Sole.jl](https://bit.ly/3X5pQUJ), a collection of Julia packages for symbolic modeling and learning. Sole.jl covers **a relatively wide range of functionality** that is of interest to the symbolic community, but it also fills some gaps with functionality for standard machine learning pipelines (e.g., feature selection on multimodal (un)structured data). At the time of writing, the framework comprises the following released packages:
+ [SoleLogics.jl](https://bit.ly/3QwLRJF) lays the **logical foundations** for symbolic learning. It provides a useful codebase for [*computational logic*](https://bit.ly/3ZOzs8a), which features easy manipulation of:
+ Propositional and (multi)modal logics (propositions, logical constants, alphabet, grammars, fuzzy algebras);
+ [Logical formulas](https://bit.ly/3w5tNgf) (random generation, parsing, minimization);
+ [Logical interpretations](https://bit.ly/3w5t26M) (or models, e.g., [Kripke structures](https://bit.ly/3XbLXcc));
+ Algorithms for [model checking](https://bit.ly/3kgAoSj) (that is, checking that a formula is satisfied by an interpretation).
+ [SoleData.jl](https://bit.ly/3GYh4SF) provides a **data layer** built on top of DataFrames.jl. Its codebase is machine-learning oriented and allows you to:
+ Instantiate and manipulate [*multimodal*](https://bit.ly/3J7TCEj) datasets for (un)supervised machine learning;
+ Deal with [*(un)structured* data](https://bit.ly/3ILl7mN) (e.g., graphs, images, time-series, etc.);
+ Describe datasets via basic statistical measures;
+ Save to/load from *npy/npz* format;
+ Perform basic data processing operations (e.g., windowing, moving average, etc.).
+ [SoleModels.jl](https://bit.ly/3GVDPqp) defines the building blocks of **symbolic modeling/learning**. It is the core of the framework, and it features:
+ Definitions for symbolic models (decision trees/forests, rules, etc.);
+ Optimized data structures, useful when learning models from datasets;
+ Support for mixed, neuro-symbolic computation.
Altogether, Sole.jl makes for a powerful tool built with an eye to **formal correctness**, and it can be of use for both machine learning practitioners and computational logicians.
**Q:** Ok, so what symbolic learning methods do you people provide?
**A:** At the moment, [ModalDecisionTrees.jl](https://github.com/aclai-lab/ModalDecisionTrees.jl) is the only package compatible with Sole.jl, and it provides novel decision tree algorithms based on multimodal temporal and spatial logics for time-series and image classification. Check out the related [talk at JuliaCon22](https://live.juliacon.org/talk/RQP9TG).
**Q:** Why the name?
**A:** *Sole* stands for SymbOlic LEarning; it also means "sun" in Italian, a hint to the enlightening power of transparent modeling.
false
https://pretalx.com/juliacon2023/talk/LYSQWS/
https://pretalx.com/juliacon2023/talk/LYSQWS/feedback/
32-123
What's new with Progradio.jl - Projected Gradient Optimization
Lightning talk
2023-07-26T12:00:00-04:00
12:00
00:10
Box-constrained optimization problems are ubiquitous in many areas of science and engineering. Our package includes methods tailored to this class of optimization problems. Due to Julia's `Iterator` interface, Progradio's solvers can be paused, resumed, or terminated early. Since the first release, we have included a stricter line-search procedure, and support for simplex constraints (Σ x_j = 1). Progradio's unique features make it attractive to be used as a sub-routine for dynamic optimization.
juliacon2023-25356-what-s-new-with-progradio-jl-projected-gradient-optimization
General Machine Learning
/media/juliacon2023/submissions/QZ3PSY/logo256px_WHxrJfs.svg
Eduardo M. G. Vila
en
We implement methods for box-constrained optimization problems: `min_x f(x) s.t. l <= x <= u`. This class of problems is relevant in its own right, but frequently appears as a sub-routine of nonlinearly constrained optimization methods, such as penalty or barrier algorithms.
Progradio makes use of Julia's `Iterator` interface, allowing greater control at each iteration (pausing, resuming, user-defined termination criteria). This feature (which most solvers lack) truly allows the solvers to be used as sub-routines to other algorithms. The traditional `solve()` interface is also available.
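The idea of projected-gradient methods on a box can be sketched in a few lines of plain Julia (an illustrative sketch, not Progradio's actual API):

```julia
# Minimal projected gradient descent on the box l .<= x .<= u.
# g is the gradient of the objective; each step follows the negative
# gradient and then projects back onto the box via clamping.
function projected_gradient(g, x, l, u; α = 0.1, iters = 200)
    for _ in 1:iters
        x = clamp.(x .- α .* g(x), l, u)
    end
    return x
end

# Minimize (x - 2)^2 subject to 0 <= x <= 1; the unconstrained minimum
# lies outside the box, so the solution sits on the upper bound.
xstar = projected_gradient(x -> 2 .* (x .- 2.0), [0.5], [0.0], [1.0])
```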
Since `v0.1`, we have implemented a stricter line-search algorithm. In addition to the existing Armijo method, this Wolfe-like procedure uses left and right-derivative information for the step-size selection. A more accurate line-search results in a higher rate of convergence, at the expense of the added derivative evaluations.
Additionally, we have added support for problems with simplex constraints (Σ x_j = 1). This is achieved with a transformation of variables at each iteration. The added constraint is effectively handled by a modified line-search, while ensuring feasibility at every iteration.
Progradio's early-termination and feasibility capabilities are attractive for infinite-dimensional optimization, such as solving optimal control problems.
We give examples of how dynamic optimization packages in our JuDO-dev organization call Progradio's solvers as sub-routines.
false
https://pretalx.com/juliacon2023/talk/QZ3PSY/
https://pretalx.com/juliacon2023/talk/QZ3PSY/feedback/
32-123
Lunch Day 1 (Room 2)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28067-lunch-day-1-room-2-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/3T8883/
https://pretalx.com/juliacon2023/talk/3T8883/feedback/
32-123
the status of parquet in Julia
Lightning talk
2023-07-26T14:20:00-04:00
14:20
00:10
The parquet tabular data storage format has become one of the most widely used, particularly in "big data" contexts, where it is arguably the only binary format to have successfully supplanted CSV. Despite this, there are relatively few implementations of parquet, which historically has presented challenges for Julia. I will give a brief overview of Parquet2.jl, a pure-Julia parquet implementation, including comparisons to other tools and formats and what is still needed to reach parity with pyarrow.
juliacon2023-25057-the-status-of-parquet-in-julia
Statistics and Data Science
Expanding Man (Michael Savastio)
en
We will touch on the following:
- Why did I write Parquet2.jl when Parquet.jl already existed?
- Extremely quick overview of features.
- Answering the often asked question: which format should I use?
- A very brief mention of some idiosyncrasies of the format, some challenges of testing against the JVM implementation and why edge cases pop up.
- What features are missing? How far is this from parity with the `pyarrow` implementation?
false
https://pretalx.com/juliacon2023/talk/QGVBDY/
https://pretalx.com/juliacon2023/talk/QGVBDY/feedback/
32-123
SyntheticEddyMethod.jl for fluid dynamics
Lightning talk
2023-07-26T14:30:00-04:00
14:30
00:10
Synthetic Eddy Methods (SEM) are a family of techniques used to generate realistic inflow data for Large Eddy Simulation (LES) in fluid dynamics. It is one of the simplest and least memory-consuming inflow generation methods. The package **SyntheticEddyMethod.jl** implements the SEM and recreates a statistically correct and coherent turbulent flow.
juliacon2023-25517-syntheticeddymethod-jl-for-fluid-dynamics
JuliaCon
/media/juliacon2023/submissions/N7UQRU/SEM2_6CQUMUp.png
Carlo Brunelli
en
The generation of inflow data for spatially developing turbulent flows is one of the challenges that must be addressed prior to the application of LES to industrial flows and complex geometries. The specification of realistic inlet boundary conditions is widely acknowledged to play a significant role in the accuracy of a numerical simulation. The Reynolds-Averaged Navier-Stokes (RANS) approach needs only mean profiles for the velocity and turbulence variables, making the definition of inflow data relatively simple but the results less accurate. The generation of inflow data is much more difficult in **large-eddy** and **direct numerical simulations**. The SEM has to be adequately tuned to depend on the turbulence intensity and to trigger the transition to turbulence over the tested geometries in order to reproduce experimental data. Using generic white noise to create turbulence is not advisable, mostly because it generates extremely high local gradients, so the turbulence is almost completely dissipated.
In a few lines, this package allows the user to obtain a time-dependent, statistically correct and coherent turbulent flow. The problem is initialized by the user setting the dimensions of the inlet, the mesh size and the turbulence intensity.
In the program logic, the eddies, which are coherent fluid structures resembling vortices, are generated in a **virtual box**. This is a computational domain placed such that the inlet surface is contained within it. In this environment, eddies are simply translated, and the properties of the flow are computed on the inlet surface.
The instantaneous flow field is saved in a cache object and is updated when the user wants to advance one time-step. Due to the customizable size of the **virtual box**, it is also possible to use this package with curved surfaces.
The flow properties are computed only on the points of the inlet surface. Each eddy is characterised by a single coordinate set defining the centre and an intensity level (+1, -1).
The creation of synthetic eddies relies upon mathematical and statistical definitions. More advanced users can also select between different mathematical functions used to generate the eddies (at the moment just tent and Gaussian; more can easily be added).
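For reference, synthetic fluctuations in SEM-type methods take the following form (following the standard formulation of Jarrin et al.; the symbols here are the conventional ones, not necessarily the package's notation):

```latex
u'_i(\mathbf{x}) = \frac{1}{\sqrt{N}} \sum_{k=1}^{N}
  a_{ij}\, \varepsilon_j^{(k)}\,
  f_\sigma\!\left(\mathbf{x} - \mathbf{x}^{(k)}\right)
```

where $N$ is the number of eddies, $\varepsilon_j^{(k)} = \pm 1$ are the random eddy intensities, $f_\sigma$ is the shape function (e.g. tent or Gaussian), and $a_{ij}$ is the Cholesky factor of the prescribed Reynolds stress tensor.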
It was written in Julia because, at the time of development, such a method was implemented only in other programming languages. In comparison with similar projects (in Python or Fortran), there is less code and the use is straightforward. Furthermore, the code stays really close to the mathematical notation, which makes it easier for future contributors to understand.
Finally, the results of the package have been tested: spectral analysis shows that the generated flow field exhibits the expected turbulence characteristics. The final goal is to couple the turbulence generation with a FEM solver for incompressible flows.
In conclusion, this package can be widely adopted in the field of fluid dynamics for obtaining more realistic and accurate results.
Package repository: https://github.com/carlodev/SyntheticEddyMethod.jl
false
https://pretalx.com/juliacon2023/talk/N7UQRU/
https://pretalx.com/juliacon2023/talk/N7UQRU/feedback/
32-123
the new XGBoost wrapper
Lightning talk
2023-07-26T14:40:00-04:00
14:40
00:10
Gradient boosted trees are a wonderfully flexible tool for machine learning, and XGBoost is a state-of-the-art, widely used C++ implementation. Thanks to the library's C bindings, XGBoost has been usable from Julia for quite a long time. Recently, the wrapper has been rewritten for its 2.0 release and offers many fun new features, some of which were previously only available in the Python, R or JVM wrappers.
juliacon2023-25060-the-new-xgboost-wrapper
JuliaCon
Expanding Man (Michael Savastio)
en
We will discuss some new features as of 2.0 of the package including:
- More flexible training via public-facing calls for single update rounds.
- Tables.jl compatibility.
- Automated Clang.jl wrapping of the full `libxgboost`.
- Introspection of XGBoost internal data (`DMatrix`, now an `AbstractMatrix`).
- Handling of `missing` data.
- Introspection of the trees themselves via AbstractTrees.jl compatible tree objects.
- Updated feature importance API.
- Now fully documented!
- Upcoming GPU support.
false
https://pretalx.com/juliacon2023/talk/RBKKSS/
https://pretalx.com/juliacon2023/talk/RBKKSS/feedback/
32-123
Simulation of fracture and damage with Peridynamics.jl
Lightning talk
2023-07-26T14:50:00-04:00
14:50
00:10
We introduce the new package Peridynamics.jl, which allows users to perform simulations of fracture and damage with the fast performance of Julia. Peridynamics is a non-local formulation of continuum mechanics based on a material point approach that has been of increasing interest in recent years. The purpose of the talk is to give a short introduction to the package and to showcase its capabilities.
juliacon2023-27019-simulation-of-fracture-and-damage-with-peridynamics-jl
JuliaCon
/media/juliacon2023/submissions/ZS8NCE/logo_IQiT3kc.png
Kai Partmann
en
Peridynamics uses an integral equation to describe the relative displacements and forces between material particles. Because this equation is also defined at material discontinuities, the method can conveniently be employed to model the spontaneous formation of cracks and fragments. Multiple peridynamic material models have been shown to correctly capture complex fracture problems with various materials.
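The integral equation referred to above is the peridynamic equation of motion (standard notation, following Silling's formulation; not necessarily the package's internal notation):

```latex
\rho(\mathbf{x})\, \ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}}
    \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t) - \mathbf{u}(\mathbf{x},t),\,
                   \mathbf{x}' - \mathbf{x}\big)\, \mathrm{d}V_{\mathbf{x}'}
  + \mathbf{b}(\mathbf{x},t)
```

where $H_{\mathbf{x}}$ is the neighborhood (horizon) of the material point $\mathbf{x}$, $\mathbf{f}$ the pairwise force function, and $\mathbf{b}$ an external body force density.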
The package Peridynamics.jl is designed to make it easily usable and includes various features that help to set up simulations. The capability to model damage and crack growth due to contact with multiple bodies makes it a valuable tool for many applications. Extending the package with new material models is easily possible due to multiple-dispatch and Julia's powerful type system.
false
https://pretalx.com/juliacon2023/talk/ZS8NCE/
https://pretalx.com/juliacon2023/talk/ZS8NCE/feedback/
32-123
NetworkHawkesProcesses.jl
Talk
2023-07-26T15:00:00-04:00
15:00
00:30
NetworkHawkesProcesses.jl implements methods to simulate and estimate a class of probabilistic models that combines mutually-exciting Hawkes processes with network structure. It allows researchers to construct such models from a flexible set of model components, run inference from a list of compatible methods (including maximum-likelihood estimation, Markov chain Monte Carlo sampling, and variational inference), and explore results with visualization and diagnostic utilities.
juliacon2023-26921-networkhawkesprocesses-jl
JuliaCon
/media/juliacon2023/submissions/E8ESXU/logo_klPnjw4.png
Colin Swaney
en
[NetworkHawkesProcesses.jl](https://github.com/cswaney/NetworkHawkesProcesses.jl) is a pure Julia framework for defining, simulating, and performing inference on a class of probabilistic models that permit simultaneous inference on the structure of a network and its event-generating process—the network Hawkes processes ([Linderman, 2016](https://dash.harvard.edu/handle/1/33493391)). The event-generating process is assumed to follow an auto-regressive, multivariate Poisson process known as a Hawkes process. Connections between nodes—the network "structure"—are assumed to follow any standard network model (e.g., independent connections). Combining these models provides a disciplined method for discovering latent network structure from event data observed in neuroscience, finance, and beyond.
**Package features**
- Supports continuous and discrete processes
- Uses modular design to support extensible components
- Implements simulation via Poisson thinning
- Provides multiple estimation/inference methods
- Supports a wide range of network specifications
- Supports non-homogeneous baselines
- Accelerates methods via Julia's built-in `Threads` module
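The "simulation via Poisson thinning" feature can be illustrated with a generic sketch of Ogata's thinning algorithm for a univariate Hawkes process with an exponential kernel. This is not the package's API—just a minimal, self-contained illustration of the technique, with illustrative parameter values:

```julia
using Random

# Conditional intensity of a univariate Hawkes process with exponential
# kernel: λ(t) = μ + Σ_{tᵢ ≤ t} α β exp(-β (t - tᵢ))
intensity(t, events, μ, α, β) =
    μ + sum((α * β * exp(-β * (t - tᵢ)) for tᵢ in events); init=0.0)

# Ogata's thinning algorithm: simulate event times on [0, T].
# Between events λ is non-increasing, so λ evaluated at the current time
# is a valid dominating rate for the next candidate point.
function simulate_hawkes(μ, α, β, T; rng=Random.default_rng())
    events = Float64[]
    t = 0.0
    while true
        λbar = intensity(t, events, μ, α, β)   # dominating rate
        t += randexp(rng) / λbar               # candidate inter-arrival time
        t ≥ T && break
        # accept the candidate with probability λ(t)/λbar
        rand(rng) * λbar ≤ intensity(t, events, μ, α, β) && push!(events, t)
    end
    return events
end

events = simulate_hawkes(0.5, 0.4, 1.0, 100.0)  # branching ratio α < 1 → stable
```

With this kernel parameterization the branching ratio equals `α`, so `α < 1` guarantees a stationary process and an expected event count of roughly `μT/(1 - α)`.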
false
https://pretalx.com/juliacon2023/talk/E8ESXU/
https://pretalx.com/juliacon2023/talk/E8ESXU/feedback/
32-123
Tips for writing and maintaining Dash.jl applications
Talk
2023-07-26T15:30:00-04:00
15:30
00:30
Dash.jl is the Julia version of Plotly's Dash, a framework for building data science web applications. Relying on the same frontend components (written in JavaScript) as the popular Python version of Dash, Dash.jl is a robust library and a trustworthy choice for Julia users. This talk presents industry-tested tips on how to write maintainable Dash.jl applications, Julian code patterns for Dash.jl as well as tricks to work around some of Dash.jl's limitations.
juliacon2023-26814-tips-for-writing-and-maintaining-dash-jl-applications
JuliaCon
Etienne Tétreault-Pinard
en
Dash.jl is an open-source Julia package. Built on top of Plotly.js, React and HTTP.jl, Dash.jl ties modern UI elements like dropdowns, sliders, and graphs directly to your analytical Julia code. Relying on the same frontend components (written in JavaScript) as the popular Python version of Dash, Dash.jl is a robust library and a trustworthy choice for Julia users.
The talk begins by going over some of Dash.jl’s fundamentals with the help of an example interactive Gantt chart application. We follow by covering tricks that can help users work around some of Dash.jl’s limitations, like caching the results of slow-executing callbacks, enabling keyboard triggers for callbacks and adding custom mode-bar interactions to the stock graph component. More advanced subjects are discussed next, such as writing scalable and testable multi-page applications. Implementing sticky state that can be shared via URL query strings is also covered, all this while putting emphasis on Julian code patterns.
The talk should allow Dash.jl newcomers to get an overview of the framework's capabilities. In turn, experienced Dash.jl users may learn a new trick or two.
Slides: https://etpinard.gitlab.io/dash.jl-tricks/juliacon2023/
Other useful links:
- https://github.com/plotly/dash.jl
- https://community.plotly.com/c/plotly-r-matlab-julia-net/julia/23
- https://discourse.julialang.org/tag/dash
- https://gitlab.com/etpinard/dash.jl-tricks
false
https://pretalx.com/juliacon2023/talk/YGJ9SL/
https://pretalx.com/juliacon2023/talk/YGJ9SL/feedback/
32-123
FuzzyLogic.jl: productive fuzzy inference in Julia
Lightning talk
2023-07-26T16:00:00-04:00
16:00
00:10
This talk introduces FuzzyLogic.jl, a library for fuzzy inference, giving a tour of its features and design principles. Write your fuzzy model with an expressive Julia-like DSL or read your existing model from common formats, tune and explore with available algorithms and visualization tools, and generate efficient stand-alone Julia or C++ code for your final model. Finally, the talk will show how to use the library to solve engineering problems in control theory and image processing.
juliacon2023-27029-fuzzylogic-jl-productive-fuzzy-inference-in-julia
JuliaCon
/media/juliacon2023/submissions/HMVG8Q/logo_Gtdl5bT.svg
Luca Ferranti
en
Since their introduction in the 60s, fuzzy logic inference methods have been successfully applied in several engineering domains, such as power electronics, control theory and signal processing.
This talk introduces [FuzzyLogic.jl](https://github.com/lucaferranti/FuzzyLogic.jl), a library for fuzzy logic and fuzzy inference, and gives a tour of its features and design principles.
**User-friendliness**: exploiting metaprogramming, it allows users to implement fuzzy inference systems using expressive and concise Julia syntax. It also offers tools to visualize the inference system and different steps of the inference pipeline.
**Interoperability**: read fuzzy models from popular formats such as the MATLAB .fis format, with no need to manually rewrite or translate legacy code.
**Flexibility**: Easily(ish) tune the inference pipeline with your own algorithms.
**Efficiency**: after prototyping and exploring, generate efficient stand-alone Julia or C++ code for your model.
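As a generic illustration of the inference pipeline behind such libraries (not FuzzyLogic.jl's DSL, whose syntax differs), here is a minimal Mamdani-style rule evaluation with a triangular membership function and centroid defuzzification; all names and numbers are illustrative:

```julia
# Triangular membership function with feet at a and c and peak at b.
triangular(x, a, b, c) = max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Centroid defuzzification of an output membership sampled on a grid.
centroid(xs, μs) = sum(μs) == 0 ? zero(eltype(xs)) : sum(xs .* μs) / sum(μs)

# One Mamdani-style rule, "if error is small then output is low",
# using min as the implication operator.
firing = triangular(0.2, -1.0, 0.0, 1.0)   # firing strength of "error is small"
xs     = range(0, 10; length=101)
μ_low  = triangular.(xs, -5.0, 0.0, 5.0)   # membership of "output is low"
μ_out  = min.(μ_low, firing)               # clipped consequent
crisp  = centroid(xs, μ_out)               # defuzzified crisp output
```

A real inference system would aggregate several such rules (typically with max) before defuzzifying; the single-rule version above keeps the three stages—fuzzification, implication, defuzzification—visible.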
false
https://pretalx.com/juliacon2023/talk/HMVG8Q/
https://pretalx.com/juliacon2023/talk/HMVG8Q/feedback/
32-123
MarkdownAST.jl: abstract syntax tree interface for Markdown
Lightning talk
2023-07-26T16:10:00-04:00
16:10
00:10
The [MarkdownAST.jl](https://github.com/JuliaDocs/MarkdownAST.jl) package provides the APIs to work with Markdown documents in an abstract syntax tree (AST) representation. It positions itself as an interface package between packages that can generate the AST (e.g. parsers), and code that consumes it (e.g. renderers).
juliacon2023-26160-markdownast-jl-abstract-syntax-tree-interface-for-markdown
JuliaCon
Morten Piibeleht
en
The [MarkdownAST.jl](https://github.com/JuliaDocs/MarkdownAST.jl) package provides the APIs to work with Markdown documents in an abstract syntax tree (AST) representation.
The aim is to have a universal interface package between parsers and other packages that can generate the AST, and code that consumes ASTs (e.g. to analyze, render, or transform it).
In addition to the data structures used to represent the ASTs, it also has a library of functions to create and modify ASTs. It is also compatible with [AbstractTrees.jl](https://github.com/JuliaCollections/AbstractTrees.jl), particularly useful for different kinds of tree traversal.
While the core package itself only supports representing CommonMark and Julia Flavored Markdown documents, the design of the AST data structure is more general than that.
Users can define their own document _elements_ (i.e. types of nodes in the AST) whose semantics differ from the pre-existing ones.
This can be used to implement Markdown extensions, or even more general nodes, but still take advantage of the types and functions provided by the package.
Current users:
* [Documenter.jl](https://github.com/JuliaDocs/Documenter.jl) uses MarkdownAST internally to represent the Markdown document it processes.
* [CommonMark.jl](https://github.com/MichaelHatherly/CommonMark.jl) can parse Markdown documents into MarkdownAST AST.
Acknowledgments: the basic data structure is derived from the implementation in Michael Hatherly's CommonMark package.
false
https://pretalx.com/juliacon2023/talk/GPRCYA/
https://pretalx.com/juliacon2023/talk/GPRCYA/feedback/
32-123
Quick Assembly of Personalized Voice Assistants with JustSayIt
Lightning talk
2023-07-26T16:20:00-04:00
16:20
00:10
We present an approach to quickly assemble fully personalized voice assistants with JustSayIt.jl. To assemble a voice assistant, it is sufficient to define a dictionary with command names as keys and objects representing actions as values. Objects of type `Cmd`, for example, will automatically open the corresponding application. To define application-specific commands - a key feature for voice assistants - a command dictionary can simply be tied to the `Cmd`-object triggering the application.
juliacon2023-26917-quick-assembly-of-personalized-voice-assistants-with-justsayit
JuliaCon
Samuel Omlin
en
Creating a feature-complete voice assistant for desktop computing is practically impossible, because it would mean supporting every piece of software that exists, including every tiny application written by individuals. Moreover, the way computers are used varies strongly from one user to the next, making personalizability indispensable for voice assistants. We address these challenges by empowering the users themselves to quickly assemble the voice assistant they desire.
[JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) makes it possible to quickly assemble fully personalized voice assistants. Users solely need to define a normal Julia dictionary with command names as keys and objects representing actions as values. The object type determines the kind of action that will be taken at runtime. For example, if the object is a `Function`, then it will be called; if it is a `Tuple` of keyboard keys representing a keyboard shortcut, then the keys will be pressed; and if it is a `Cmd`, then the corresponding application will be opened. Furthermore, the object can also be an array of action objects, representing a sequence of commands. In addition, it is trivial to define application-specific commands, which is key to effective voice control: it is sufficient to create a dictionary with the application-specific commands and tie it to the `Cmd`-object triggering the application. [JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) will then automatically take care of activating the application-specific commands when the application is opened, and deactivating them again as soon as another application is opened. The activation and deactivation of commands requires adaptation to the speech recognizers used, and the transition between them is a challenge when required within a contiguously spoken word group; [JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) solves this challenge by dynamically generating those recognizers based on the word-group context, when beneficial for recognition accuracy.
The action semantics associated with the object types in the command dictionaries, combined with the possibility to define command sequences on the fly and application-specific commands without effort, results in an unprecedented, highly expressive, and effective way to assemble personalized voice assistants. Naturally, it is trivial to share the command dictionaries defining the assembly of a voice assistant, and the [JustSayItRecipes.jl](https://github.com/omlins/JustSayItRecipes.jl) repository provides a platform for doing so. As a result, [JustSayIt.jl](https://github.com/omlins/JustSayIt.jl) empowers the worldwide open-source community to collaboratively shape each user's personal daily assistant.
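The dictionary-of-actions design described above can be sketched in plain Julia. The command names and actions below are hypothetical examples in the spirit of the package, not JustSayIt defaults, and the tuple-of-symbols representation of a shortcut is a stand-in for the package's actual key types:

```julia
# A hypothetical command dictionary: keys are spoken command names,
# values are action objects whose type determines what happens.
show_help() = println("showing help")

commands = Dict{String,Any}(
    "help"     => show_help,               # Function: called when spoken
    "copy"     => (:ctrl, 'c'),            # Tuple of keys: shortcut pressed
    "internet" => `firefox`,               # Cmd: application opened
    "setup"    => [`firefox`, show_help],  # Array: sequence of actions
)
```

Tying a further dictionary of application-specific commands to the `Cmd` value would, as described above, scope those commands to the application it launches.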
false
https://pretalx.com/juliacon2023/talk/HC8HLQ/
https://pretalx.com/juliacon2023/talk/HC8HLQ/feedback/
32-123
Julia through the lens of Policy Analysis: Applications
Talk
2023-07-26T16:30:00-04:00
16:30
00:30
Standard use cases for Julia appeal to the scientific community writ large. In contrast, Julia has not been widely adopted by the public policy community. This talk demonstrates how Julia is useful for public policy through several use cases: misinformation, and adversarial machine learning in decision-critical systems.
juliacon2023-27090-julia-through-the-lens-of-policy-analysis-applications
JuliaCon
Joshua Steier
en
The first use case will be centered around agent-based modeling to understand misinformation spread and mitigations. The second use case will be focused on the practicality of adversarial attacks in machine learning systems, specifically from the financial, biological, and health domains, which is meant to inform public policy and drive mitigations.
false
https://pretalx.com/juliacon2023/talk/B3LFUE/
https://pretalx.com/juliacon2023/talk/B3LFUE/feedback/
32-124
Morning Break Day 1 Room 3
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28125-morning-break-day-1-room-3
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/YJ8PDN/
https://pretalx.com/juliacon2023/talk/YJ8PDN/feedback/
32-124
MRI Compressed Sensing and Denoising in Julia
Talk
2023-07-26T11:30:00-04:00
11:30
00:30
Compressed sensing is an area of mathematics increasingly being applied in magnetic resonance imaging. MRI imagery is highly sparse and noisy, making it a natural application for these techniques. Compressed sensing complements existing techniques such as JPEG compression in computationally intensive applications such as MRI image processing, reducing acquisition time and AI training time, and can be used to build more robust systems with greater resilience to noise.
juliacon2023-26919-mri-compressed-sensing-and-denoising-in-julia
Biology and Medicine
Alexander Leong
en
Compressed sensing is used in numerous applications, including medical imaging, network tomography, radio astronomy, and recommendation systems, to name a few.
This talk focuses on applying compressed sensing to Magnetic Resonance Imaging as an example. Several algorithms will be discussed, including singular value thresholding, total variation denoising and Wiener filtering. I will highlight how Julia libraries were used to rapidly prototype algorithms and experiments, and to explore key mathematical and optimization methods that practitioners can readily apply.
Julia has made it easy to apply common linear algebra subroutines such as the SVD, and libraries exist to perform key tasks. These include:
(i) reading .h5 data used to store MRI images in k-space, using HDF5.jl,
(ii) performing the IFFT (Inverse Fast Fourier Transform), using FFTW.jl,
(iii) generating noise and performing digital signal processing routines, using Noise.jl and Deconvolution.jl respectively.
The relevant libraries and methodologies, and how they fit into an overall data-processing architecture, will be discussed in this presentation.
A demonstration will be performed on how Pluto.jl is used to communicate results and visualize MRI imagery using Plots.jl. Pluto.jl is a great tool especially for testing out imaging parameters that achieve a desirable result.
The talk describes how to take an equation, implement it in Julia, import data, and visualize results in 30 minutes. It is well suited to a diverse audience of engineers, applied mathematicians, and scientists alike who are short on time and need to understand how to apply compressed sensing to their problem in Julia.
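One of the algorithms mentioned above, singular value thresholding, can be sketched in a few lines using only the standard library's `LinearAlgebra`; the threshold value and the toy rank-1 data below are illustrative, not taken from the talk:

```julia
using LinearAlgebra, Random

# Singular value thresholding (SVT): soft-threshold the singular values,
# keeping dominant low-rank structure and damping noise components.
function svt(A::AbstractMatrix, τ::Real)
    F = svd(A)
    σ = max.(F.S .- τ, 0.0)          # soft-threshold the spectrum
    return F.U * Diagonal(σ) * F.Vt  # low-rank reconstruction
end

# Toy demo: a rank-1 "image" corrupted by additive Gaussian noise.
rng = MersenneTwister(0)
x = randn(rng, 64)
A = x * x' .+ 0.1 .* randn(rng, 64, 64)
denoised = svt(A, 1.0)
```

With `τ = 0` the reconstruction reproduces `A` exactly; larger thresholds progressively discard small singular values, which carry most of the noise energy in sparse imagery.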
false
https://pretalx.com/juliacon2023/talk/3REQKW/
https://pretalx.com/juliacon2023/talk/3REQKW/feedback/
32-124
Exploring the State of Machine Learning for Biological Data
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
Exploring the use of Julia in analyzing biological data. We discuss libraries and packages, the challenges and opportunities of using machine learning on biological data, and examples of past and future applications.
juliacon2023-27055-exploring-the-state-of-machine-learning-for-biological-data
Biology and Medicine
Edmund Miller
en
This talk, "Exploring the State of Machine Learning for Biological Data in Julia," will delve into the use of the high-performance programming language, Julia, in analyzing biological data. We will discuss various libraries and packages available in Julia, such as BioJulia and Flux.jl, and the benefits of using Julia for machine learning in the field of biology. Additionally, the challenges and opportunities that arise when using machine learning techniques on biological data, such as dealing with high-dimensional and heterogeneous data, will be addressed. The talk will also include examples of how machine learning has been applied to biological data in the past and what the future holds for this field.
false
https://pretalx.com/juliacon2023/talk/CSG8NU/
https://pretalx.com/juliacon2023/talk/CSG8NU/feedback/
32-124
Lunch Day 1 (Room 3)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28068-lunch-day-1-room-3-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/YVPHK3/
https://pretalx.com/juliacon2023/talk/YVPHK3/feedback/
32-124
Learning Hybridization Networks Using Phylogenetic Invariants
Lightning talk
2023-07-26T14:00:00-04:00
14:00
00:10
Phylogenetic networks represent the evolutionary process of reticulate organisms by the explicit modeling of gene flow. While most existing network methods are not scalable to tackle big data, we introduce a novel method to reconstruct phylogenetic networks based on algebraic invariants without the heuristic search of network space. Our methodology is available in the Julia package phylo-diamond.jl, and it is at least 10 times faster than the fastest-to-date network methods.
juliacon2023-24737-learning-hybridization-networks-using-phylogenetic-invariants
Biology and Medicine
Zhaoxing Wu
en
The abundance of gene flow in the Tree of Life challenges the notion that evolution can be represented with a fully bifurcating process, as this process cannot capture important biological realities like hybridization, introgression, or horizontal gene transfer. Coalescent-based network methods are increasingly popular, yet not scalable for big data, because they need to perform a heuristic search in the space of networks as well as numerical optimization that can be NP-hard. Here, we introduce a novel method to reconstruct phylogenetic networks based on algebraic invariants. While there is a long tradition of using algebraic invariants in phylogenetics, our work is the first to define phylogenetic invariants on concordance factors (frequencies of 4-taxon splits in the input gene trees) to identify level-1 phylogenetic networks under the multispecies coalescent model. Our novel inference methodology is optimization-free as it only requires evaluation of polynomial equations, and as such, it bypasses the traversal of network space yielding a computational speed at least 10 times faster than the fastest-to-date network methods. We illustrate the accuracy and speed of our new method on a variety of simulated scenarios as well as in the estimation of a phylogenetic network for the genus Canis. We implement our novel theory on an open-source publicly available Julia package phylo-diamond.jl with broad applicability within the evolutionary biology community.
false
https://pretalx.com/juliacon2023/talk/SDVJ9Y/
https://pretalx.com/juliacon2023/talk/SDVJ9Y/feedback/
32-124
BioMakie.jl - Plotting and interface tools for biology
Lightning talk
2023-07-26T14:10:00-04:00
14:10
00:10
BioMakie.jl provides plotting methods for protein data such as structures and multiple sequence alignments. Interactive elements allow users to give additional functionality to plotted data, to facilitate inspection, manipulation, and presentation. A simple event handling system enables custom triggers and synchronization. Plotting and interface tools are further extended via Julia's interoperability with other programming languages.
juliacon2023-27125-biomakie-jl-plotting-and-interface-tools-for-biology
Biology and Medicine
Daniel Kool
en
This package provides plotting methods for protein data such as structures and multiple sequence alignments. Interactive elements allow users to give additional functionality to plotted data, to facilitate inspection, manipulation, and presentation. A simple event handling system enables custom triggers and synchronization.
The main goals are:
- Provide plotting/visualization tools for existing Bio packages
- Make it easier for people to get started with Bio in Julia
- Introduce tools and methods for creating custom datasets
- Connect with and complement other interfaces and tools from other languages
false
https://pretalx.com/juliacon2023/talk/EYFPLZ/
https://pretalx.com/juliacon2023/talk/EYFPLZ/feedback/
32-124
A Comparison of 4 Multiple Sclerosis Rx from Patient's Feedback
Lightning talk
2023-07-26T14:20:00-04:00
14:20
00:10
We identified 4 Multiple Sclerosis drugs: Gilenya, Tysabri, Copaxone, and Tecfidera as the basis for conducting a manual textual analysis of feedback provided by patients at the WebMD website. The objective was twofold: (1) complete a textual analysis, evaluating each drug on the dimensions of Ease of Use, Effectiveness, and Satisfaction. After categorizing the feedback into various groups, these data were visualized and analyzed using Julia
juliacon2023-25710-a-comparison-of-4-multiple-sclerosis-rx-from-patient-s-feedback
Biology and Medicine
Aditi Kowta
en
We identified 4 Multiple Sclerosis drugs: Gilenya, Tysabri, Copaxone, and Tecfidera as the basis for conducting a manual textual analysis of feedback provided by patients at the WebMD website. The objective was twofold: (1) complete a textual analysis, evaluating each drug on the dimensions of Ease of Use, Effectiveness, and Satisfaction. After categorizing the feedback into various groups, these data were visualized and analyzed using Julia. Each brand had at least 100 unique patients providing feedback: Tysabri had 222, Gilenya 102, Tecfidera 162, and Copaxone 198. The analysis will include basic comparisons of means using ANOVA-based analysis in Julia.
false
https://pretalx.com/juliacon2023/talk/YPSFNQ/
https://pretalx.com/juliacon2023/talk/YPSFNQ/feedback/
32-124
Unlocking the Power of Genomic Analysis in Julia
Talk
2023-07-26T14:30:00-04:00
14:30
00:30
Learn how Julia, a high-performance programming language, can be used to analyze genomic data. Discussion of libraries, specific challenges and opportunities, past examples, and future possibilities of using Julia in genomic data analysis.
juliacon2023-27058-unlocking-the-power-of-genomic-analysis-in-julia
Biology and Medicine
Edmund Miller
en
Genomic data is becoming an increasingly valuable resource in the study of biology and medicine, as it allows for a deeper understanding of the underlying mechanisms of diseases and the development of more effective therapies. However, the sheer volume and complexity of genomic data can make it challenging to analyze. Julia, a high-performance programming language, has emerged as a powerful tool for genomic data analysis. In this talk, we will explore the use of Julia for genomic data analysis, including the various libraries and packages available, such as IntervalTrees and GenomicFeatures. We will also discuss some of the specific challenges and opportunities that arise when analyzing genomic data, such as dealing with large-scale data and integrating multiple data types. We will also show some examples of how Julia has been used in the past to analyze genomic data and what the future holds for this field. This talk will be beneficial for biologists, bioinformaticians, and data scientists interested in the application of Julia to genomic data analysis.
Expected Outcomes:
- Understanding of the power and capabilities of Julia for genomic data analysis
- Knowledge of the available libraries and packages for genomic data analysis in Julia
- Insights into the challenges and opportunities of using Julia for genomic data analysis
- Familiarity with examples of how Julia has been used in the past for genomic data analysis
- Ideas for potential future applications of Julia in genomic data analysis.
false
https://pretalx.com/juliacon2023/talk/AJJRS3/
https://pretalx.com/juliacon2023/talk/AJJRS3/feedback/
32-124
BiosimVS.jl: Virtual screening of ultra-large chemical libraries
Lightning talk
2023-07-26T15:00:00-04:00
15:00
00:10
Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules and identify the compounds most likely to bind to a drug target. We discuss the BiosimVS.jl package, which enables virtual screening of ultra-large chemical libraries containing billions of molecules.
juliacon2023-26925-biosimvs-jl-virtual-screening-of-ultra-large-chemical-libraries
Biology and Medicine
Garik Petrosyan
en
We discuss the BiosimVS.jl package that enables virtual screening of ultra-large scale chemical libraries, containing billions of molecules. This is an important capability that is needed to keep pace with the rapid growth of sizes of modern virtual libraries. BiosimVS.jl relies on another library that we have developed, BiosimDock.jl, which can predict the binding affinity and the pose of a ligand inside the target. We have implemented an active learning pipeline to accelerate the VS process, training machine learning models on binding affinity predictions from BiosimDock.jl to predict best candidates for docking. BiosimVS.jl and BiosimDock.jl are wholly written in Julia, which made it possible to match and surpass the performance of state-of-the-art C++ docking and screening software. I will speak about how we use Julia and its packages to solve this complex problem.
false
https://pretalx.com/juliacon2023/talk/FRZKFJ/
https://pretalx.com/juliacon2023/talk/FRZKFJ/feedback/
32-124
An introduction to UnsupervisedClustering.jl package
Lightning talk
2023-07-26T15:10:00-04:00
15:10
00:10
We introduce UnsupervisedClustering.jl, a package that implements traditional unsupervised clustering algorithms and proposes advanced global optimization algorithms that allow escape from local optima.
juliacon2023-27023-an-introduction-to-unsupervisedclustering-jl-package
JuliaCon
Raphael Araujo Sampaio
en
In this talk, we will delve into the limitations of the traditional k-means algorithm, which often struggles to fit data that deviates from spherical distributions. In comparison, general Gaussian Mixture Models (GMMs) can fit richer structures but require estimating a quadratic number of parameters per cluster to represent the covariance matrices. Our research addresses these issues by proposing advanced global optimization algorithms that effectively combine with regularization strategies, leading to superior performance in cluster recovery compared to classical GMMs or k-means algorithms. Through a wide range of experiments on synthetic data, we demonstrate the effectiveness of the proposed methods. We provide two Julia packages, UnsupervisedClustering.jl and RegularizedCovarianceMatrices.jl, that implement the proposed techniques for easy use and further research.
false
https://pretalx.com/juliacon2023/talk/CDPRWB/
https://pretalx.com/juliacon2023/talk/CDPRWB/feedback/
32-124
Polyhedral Computation
Lightning talk
2023-07-26T15:20:00-04:00
15:20
00:10
Polyhedra are at the foundation of many engineering tools such as Mixed-Integer Programming. Manipulating them can give key insights.
juliacon2023-27014-polyhedral-computation
JuliaCon
Benoît Legat
en
Polyhedra are at the foundation of many engineering tools such as Mixed-Integer Programming and Computational Geometry. Manipulating them can give key insights, but the operations on polyhedra, commonly referred to as Polyhedral Computation, can be very costly.
This talk introduces an interface for Polyhedral Computation that is implemented by the main libraries that exist nowadays, as well as the implementation of these algorithms in Julia.
false
https://pretalx.com/juliacon2023/talk/JP3SPX/
https://pretalx.com/juliacon2023/talk/JP3SPX/feedback/
32-124
AdaptiveHierarchicalRegularBinning
Talk
2023-07-26T15:30:00-04:00
15:30
00:30
[`AdaptiveHierarchicalRegularBinning.jl`](https://github.com/pitsianis/AdaptiveHierarchicalRegularBinning.jl) computes a hierarchical space-partitioning tree for a given set of points of arbitrary dimension, dividing the space and storing the reordered points for efficient access. Space-partitioning data structures are vital for algorithms that exploit spatial distance to reduce computational complexity; see, for example, the Fast Multipole Method and nearest-neighbor search.
juliacon2023-26983-adaptivehierarchicalregularbinning
JuliaCon
Antonis Skourtis
en
# Model
Assuming a set of `n` points `V` in a `d`-dimensional space, we partition the space into regular hierarchical bins. The resulting data structure is a sparse tree `T` with, at most, `2^d` child nodes per node and a maximum depth of `L`.
## Normalization
The given set of points `V` is mapped into a `d`-dimensional unit hypercube via an affine transformation that scales and translates `V` resulting in `Vn`.
## Binning
We split each dimension of the unit hypercube in half and recursively continue splitting each resulting hypercube in the same manner. Each partition is called a bin. The subdivision stops at a maximum depth `L` or when a bin contains `k` or fewer points. The recursive splitting process is recorded as a hierarchical tree data structure.
## 1D Encoding
We use Morton encoding to map the `d`-dimensional set `Vn` to a one-dimensional space-filling curve. Each point in `Vn` is assigned an index in the reduced space, resulting in `R`. Elements of `R` are bit-fields, each one consisting of `L` groups of `d` bits. These groups describe the position of the point in the corresponding level of the tree `T`. Points described by Morton indices with equal most significant digits belong in the same bin. Thus, sorting `R` results in `Rs`, which defines the sparse tree.
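The encoding step described above can be sketched as a plain bit-interleaving routine. This is a generic illustration of Morton encoding, not the package's internal implementation:

```julia
# Morton (Z-order) encoding: quantize each coordinate in [0, 1) to L bits,
# then interleave the bits, most significant group of d bits first.
function morton(coords::NTuple{d,Float64}, L::Integer) where {d}
    q = map(c -> UInt64(floor(c * (1 << L))), coords)  # L-bit bin indices
    code = zero(UInt64)
    for level in (L - 1):-1:0        # most significant level first
        for c in q
            code = (code << 1) | ((c >> level) & 1)
        end
    end
    return code
end

morton((0.5, 0.5), 2)  # == 0b1100: both coordinates in the upper half at level 1
```

Points whose codes agree in their most significant groups of `d` bits fall in the same bin, which is why sorting the codes groups points bin by bin, as described above.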
# Implementation
Cache locality and parallel programming are some of the techniques we used to make our implementation performant.
## Cache locality
Cache locality offers fast memory access that greatly improves the performance of our algorithm.
- The set of points `V`, and by extension `Vn`, is defined as a `Matrix` of size `(d, n)`. The leading dimension describes the number of dimensions in the hyperspace, resulting in a cache-friendly access pattern, since all points of `V` have their corresponding coordinates densely packed in memory.
- Sorting `R` and `Vn` offers a memory layout that is cache-friendly. Since we access `V` through the `T` tree, we only access points that are in the same bin. `Rs` describes a bin of `T` with a contiguous block of memory, thus preserving cache-locality.
- `T` is not a linked tree. `T` is a tree stored densely in memory as a `Vector{Node}`. Each `Node` of `T` is aware of its children and parent via their indices in this dense `Vector{Node}`.
## Parallel Partial Sorting
The Morton curve bit-field denotes the tree node of each point. Partial sorting using Most Significant Digit (MSD) radix sort places the points into their corresponding bins. Points that fall within the same leaf node do not get sorted. The radix sort runs in parallel: the partition of digits is done with a parallel count sort, and then each digit subset is processed independently in parallel.
## Adaptive Tree
Empty bins, that is, nodes that do not contain any points, are not stored or referenced explicitly.
# References
- Sun, X., & Pitsianis, N. P. (2001). A Matrix Version of the Fast Multipole Method. In SIAM Review (Vol. 43, Issue 2, pp. 289–300). Society for Industrial & Applied Mathematics (SIAM). https://doi.org/10.1137/s0036144500370835
- Curtin, R., March, W., Ram, P., Anderson, D., Gray, A., & Isbell, C. (2013). Tree-Independent Dual-Tree Algorithms. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):1435-1443. Available from https://proceedings.mlr.press/v28/curtin13.html.
- Carrier, J., Greengard, L., & Rokhlin, V. (1988). A Fast Adaptive Multipole Algorithm for Particle Simulations. In SIAM Journal on Scientific and Statistical Computing (Vol. 9, Issue 4, pp. 669–686). Society for Industrial & Applied Mathematics (SIAM). https://doi.org/10.1137/0909044
- Yokota, R. (2013). An FMM Based on Dual Tree Traversal for Many-Core Architectures. In Journal of Algorithms & Computational Technology (Vol. 7, Issue 3, pp. 301–324). SAGE Publications. https://doi.org/10.1260/1748-3018.7.3.301
- Curtin, R.R. (2015). Faster Dual-Tree Traversal for Nearest Neighbor Search. In: Amato, G., Connor, R., Falchi, F., Gennaro, C. (eds) Similarity Search and Applications. SISAP 2015. Lecture Notes in Computer Science(), vol 9371. Springer, Cham. https://doi.org/10.1007/978-3-319-25087-8_7
- Greengard, L. (1990). The numerical solution of the n-body problem. Computers in Physics, 4(2), 142-152.
- Cho, M., Brand, D., Bordawekar, R., Finkler, U., Kulandaisamy, V., & Puri, R. (2015). PARADIS: An efficient parallel algorithm for in-place radix sort. Proceedings of the VLDB Endowment, 8(12), 1518-1529.
- Zagha, M., & Blelloch, G. E. (1991, August). Radix sort for vector multiprocessors. In Proceedings of the 1991 ACM/IEEE conference on Supercomputing (pp. 712-721).
- Morton, G. M. (1966). A computer oriented geodetic data base and a new technique in file sequencing.
false
https://pretalx.com/juliacon2023/talk/8B3QWF/
https://pretalx.com/juliacon2023/talk/8B3QWF/feedback/
32-124
Julia in the Catastrophe Business
Lightning talk
2023-07-26T16:00:00-04:00
16:00
00:10
This talk will illustrate the implementation of a pricing engine for the Reinsurance business, specifically for the Catastrophe Business. It will explain the specific problems we encountered at Ark Bermuda and the tools that Julia provides to aid in creating a corporate application.
juliacon2023-27086-julia-in-the-catastrophe-business
JuliaCon
Alessio Bellisomi
en
In this talk, I will explain how we are using Julia to implement and execute all the tasks required to run the Reinsurance business at a reinsurance startup like Ark. This involves various tasks like Portfolio Rollup, Program Pricing and Reporting.
I will detail how the different tools that Julia provides fit our business requirements and software workflows. I will also highlight how the Julia community's help has been crucial in making this a success story.
false
https://pretalx.com/juliacon2023/talk/LZ78HV/
https://pretalx.com/juliacon2023/talk/LZ78HV/feedback/
32-124
Building a web API for dispersion modeling with Genie.jl
Lightning talk
2023-07-26T16:10:00-04:00
16:10
00:10
For the real-time online risk assessment in case of hazardous material release into the atmosphere, Genie.jl has been used to build a web service to run atmospheric dispersion models upon client requests that follow the OpenApi specification. Typical issues related to web API development have been addressed (authentication, DB management, server-to-client communication...). Moreover, multiple Julia packages for running dispersion models and operating on geospatial data have been developed.
juliacon2023-24633-building-a-web-api-for-dispersion-modeling-with-genie-jl
JuliaCon
Tristan Carion
en
An atmospheric transport and dispersion modeling framework for the online impact assessment of CBRN-type incidents (Chemical, Biological, Radiological and Nuclear) is currently being developed by the Royal Military Academy of Belgium and is hosted on the European Weather Cloud system. The backend of the framework is entirely written in Julia and uses [Genie.jl](https://github.com/GenieFramework/Genie.jl) to serve a REST API following the OpenApi specification (OAS). The goal of this presentation is twofold:
- describing how Genie.jl has been set up along with OAS to provide authentication, API testing, DB management, server-to-client communication and routing logics,
- showing the modeling ecosystem that has been built in Julia to support the framework.
Genie.jl is a comprehensive framework for building full-stack web applications in Julia. Due to the complexity of this modeling framework, the computing and data management parts are decoupled from the frontend GUI, and Genie.jl is used to set up a server listening to specific routes defined and documented following the OpenApi specification. It uses [SwagUI.jl](https://github.com/GenieFramework/SwagUI.jl) to render a user-friendly version of the API documentation. A JSON Web Token authentication standard has been implemented to authenticate the client requests to the API, and relies on [JSONWebTokens.jl](https://github.com/felipenoris/JSONWebTokens.jl). Database management and interaction are handled by Genie's ORM layer, [SearchLight.jl](https://github.com/GenieFramework/SearchLight.jl).
Besides the web development-related part of the project, various new packages for atmospheric transport and dispersion modeling have been developed, along with utility packages for geospatial computing:
- [ATP45.jl](https://github.com/tcarion/ATP45.jl) is a Julia implementation of a simple military prediction procedure in case of CBRN-type incidents
- [Flexpart.jl](https://github.com/tcarion/Flexpart.jl) is a Julia interface to the FLEXPART Lagrangian dispersion model. It provides the FLEXPART executable in the Julia ecosystem with BinaryBuilder.jl.
- [GaussianDispersion.jl](https://github.com/tcarion/GaussianDispersion.jl) is a Julia implementation of Gaussian dispersion models.
- [EcRequests.jl](https://github.com/tcarion/EcRequests.jl) interfaces the ECMWF services to retrieve weather forecast data.
- [GRIBDatasets.jl](https://github.com/tcarion/GRIBDatasets.jl) provides a high-level interface to read GRIB encoded files, a widely used data format for storing meteorological fields. It has been integrated as a source for [Rasters.jl](https://github.com/rafaqz/Rasters.jl), giving a powerful way of operating on this data format in Julia.
These packages will be briefly described and some typical usage examples will be shown.
false
https://pretalx.com/juliacon2023/talk/MFGSMK/
https://pretalx.com/juliacon2023/talk/MFGSMK/feedback/
32-144
Morning Break Day 1 Room 5
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28127-morning-break-day-1-room-5
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/N3VVPT/
https://pretalx.com/juliacon2023/talk/N3VVPT/feedback/
32-144
SARProcessing.jl: A flexible package for the SAR data processing
Lightning talk
2023-07-26T11:50:00-04:00
11:50
00:10
With a low barrier to entry and a large ecosystem of tools and libraries that allow quick prototyping, Julia has great potential for geospatial development. SARProcessing.jl is an open-source project with the aim of making SAR data processing easy and fast for everyone. This talk provides a gentle hands-on introduction to setting up and using the SARProcessing.jl package.
juliacon2023-25532-sarprocessing-jl-a-flexible-package-for-the-sar-data-processing
Geosciences
Iga SzczesniakKristian Aalling Sørensen
en
Synthetic Aperture Radar (SAR) is an imaging radar technique that uses the relative motion of the sensor and advanced radar signal processing to synthesize a long antenna and generate high-resolution images of the terrain. The analysis of these data is extremely computationally demanding. Julia can have a great impact on work in Earth Observation, which mostly uses satellite-based data and is severely limited by traditional serial computing on central processing units (CPUs). Our group is developing the software package SARProcessing.jl for the Julia programming language, which facilitates and greatly enhances the potential for carrying out work with SAR data. Our goals are to both facilitate and significantly accelerate data pipelines, making, e.g., InSAR available to a broader audience. In this talk, we will explore the functionalities of the new software package SARProcessing.jl and provide an overview of its current features.
false
https://pretalx.com/juliacon2023/talk/MMCQLP/
https://pretalx.com/juliacon2023/talk/MMCQLP/feedback/
32-144
JuliaEO 2023: Outcomes, Overview & Impact
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
A community of Julia developers working with Earth Observation was brought together at the _JuliaEO 2023: Global Workshop on Earth Observation with Julia_. 300 people registered and 40 attended in person. All major aspects were covered: big geospatial data, remote sensing, data processing, visualization, modelling, data science, machine learning, artificial intelligence, and cloud computing. A Docker container and a Dataverse archive complement the notebook collection for reproducibility.
juliacon2023-26885-juliaeo-2023-outcomes-overview-impact
Geosciences
/media/juliacon2023/submissions/WJHSMD/JuliaEOLogoNegativeOpaque_Border_UMbVmdI.png
Joao PineloGael ForgetAdriano Coutinho de LimaIga SzczesniakAndre Valente
en
The talk will describe the context and motivation for organizing the _JuliaEO 2023: Global Workshop on Earth Observation with Julia_, which took place in January 2023. It will proceed with reporting the topics covered and summarize the notebooks used by the 14 speakers. Finally, it will highlight how the Julia ecosystem currently supports most Earth Observation pipelines and the opportunities to expand it.
false
https://pretalx.com/juliacon2023/talk/WJHSMD/
https://pretalx.com/juliacon2023/talk/WJHSMD/feedback/
32-144
Lunch Day 1 (Room 5)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28070-lunch-day-1-room-5-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/ZLJGD8/
https://pretalx.com/juliacon2023/talk/ZLJGD8/feedback/
32-144
Phoenix or cyborg: The anatomy of Earth system software in Julia
Minisymposium
2023-07-26T14:00:00-04:00
14:00
01:00
Several large projects on Earth system modelling are based on Julia. In a field where Fortran traditionally dominates model development we want to gather developers to share how we build Earth system software in Julia. Instead of sharing project surfaces, this minisymposium wants to look deeper into their anatomy. The phoenix: are we reinventing traditional coding structures in a new language? Or the cyborg: does Julia allow us to replace these structures with new, enhanced machinery?
juliacon2023-26910-phoenix-or-cyborg-the-anatomy-of-earth-system-software-in-julia
Geosciences
/media/juliacon2023/submissions/TEW9BP/frame0240_xjZiyRH.png
Milan Klöwer
en
Best practices had to be developed to make the most of a new language in the development of Earth system models. Newer concepts in research software engineering are finding their way into the Earth system modelling community, particularly through the adoption of new languages. User demands on interfaces to and interoperability with these models have also changed.
With custom types, a domain-specific language is created within our models, and abstraction minimizes code redundancy. Ideally we would like to execute the same code on all computing architectures at high performance, but what are the best practices to achieve that? Earth system models are large, with many interoperating parts. How can we build them with the flexibility for any user to swap parts in and out without diving deep into the code base? And we are aiming towards differentiable models that learn from data, ideally without sacrificing anything from the above. This minisymposium aims to cover the following topics, among others:
- Abstraction, multiple dispatch, types and type hierarchies in large projects
- Flexibility with respect to the computing architecture: CPUs, GPUs, and others
- Modularity: easily extensible models and their components
- Differentiability and optimization: Learning from data
- Integration of Julia’s machine learning frameworks into models
- Performance: Achieving and monitoring performance during development
- Usage of high-performance and parallelism frameworks
- Interconnectivity of packages across the Julia ecosystem
- Bad practices: Julia’s detours, unnecessary complexity, or missing features
Schedule, Part 1
- 14:00 Intro
- 14:10 Gregory Wagner (MIT) on ocean modelling with Oceananigans.jl
- 14:30 Skylar Gering (Caltech) on "Discrete Element Sea-Ice Modeling in Julia: Successes and Challenges"
- 14:50 Milan Klöwer (MIT) on atmospheric modelling with SpeedyWeather.jl
- 15:10 Julia Sloan/Lenka Novak (Caltech) on "ClimaCoupler.jl: An Extensible ESM Coupler in Julia"
- 15:30-15:40 Break
Schedule, Part 2
- 15:40 Sarah Williamson (UT Austin) on "Differentiable Ocean models with Enzyme.jl"
- 16:00 Lisa Rennels (UC Berkeley) on integrated assessment modelling with Mimi.jl
- 16:20 Gaël Forget (MIT) on Julia-Fortran interfaces with ClimateModels.jl
- 16:40 Overflow discussion time
- 16:50 Outro
Chairs: Milan Klöwer (MIT), Simon Byrne (Caltech), Chris Hill (MIT)
Recording: https://www.youtube.com/watch?v=x9d6WtePul0&t=6475s
false
https://pretalx.com/juliacon2023/talk/TEW9BP/
https://pretalx.com/juliacon2023/talk/TEW9BP/feedback/
32-144
Phoenix or cyborg (2)
Minisymposium
2023-07-26T15:00:00-04:00
15:00
01:00
Several large projects on Earth system modelling are based on Julia. In a field where Fortran traditionally dominates model development we want to gather developers to share how we build Earth system software in Julia. Instead of sharing project surfaces, this minisymposium wants to look deeper into their anatomy. The phoenix: are we reinventing traditional coding structures in a new language? Or the cyborg: does Julia allow us to replace these structures with new, enhanced machinery?
juliacon2023-30798-phoenix-or-cyborg-2-
Geosciences
en
Best practices had to be developed to make the most of a new language in the development of Earth system models. Newer concepts in research software engineering are finding their way into the Earth system modelling community, particularly through the adoption of new languages. User demands on interfaces to and interoperability with these models have also changed.
With custom types, a domain-specific language is created within our models, and abstraction minimizes code redundancy. Ideally we would like to execute the same code on all computing architectures at high performance, but what are the best practices to achieve that? Earth system models are large, with many interoperating parts. How can we build them with the flexibility for any user to swap parts in and out without diving deep into the code base? And we are aiming towards differentiable models that learn from data, ideally without sacrificing anything from the above. This minisymposium aims to cover the following topics, among others:
- Abstraction, multiple dispatch, types and type hierarchies in large projects
- Flexibility with respect to the computing architecture: CPUs, GPUs, and others
- Modularity: easily extensible models and their components
- Differentiability and optimization: Learning from data
- Integration of Julia’s machine learning frameworks into models
- Performance: Achieving and monitoring performance during development
- Usage of high-performance and parallelism frameworks
- Interconnectivity of packages across the Julia ecosystem
- Bad practices: Julia’s detours, unnecessary complexity, or missing features
Potential projects to be presented:
- ClimaAtmos.jl and other projects in the CliMA framework
- Oceananigans.jl
- SpeedyWeather.jl
- dJUICE.jl
- ClimateModels.jl
- ClimateTools.jl
- NCDatasets.jl
- DiskArrays.jl
Chair: Milan Klöwer (MIT);
Co-chairs: Simon Byrne (Caltech), Chris Hill (MIT)
false
https://pretalx.com/juliacon2023/talk/SN7TSJ/
https://pretalx.com/juliacon2023/talk/SN7TSJ/feedback/
32-144
Phoenix or cyborg (3)
Minisymposium
2023-07-26T16:00:00-04:00
16:00
01:00
Several large projects on Earth system modelling are based on Julia. In a field where Fortran traditionally dominates model development we want to gather developers to share how we build Earth system software in Julia. Instead of sharing project surfaces, this minisymposium wants to look deeper into their anatomy. The phoenix: are we reinventing traditional coding structures in a new language? Or the cyborg: does Julia allow us to replace these structures with new, enhanced machinery?
juliacon2023-30799-phoenix-or-cyborg-3-
Geosciences
en
Best practices had to be developed to make the most of a new language in the development of Earth system models. Newer concepts in research software engineering are finding their way into the Earth system modelling community, particularly through the adoption of new languages. User demands on interfaces to and interoperability with these models have also changed.
With custom types, a domain-specific language is created within our models, and abstraction minimizes code redundancy. Ideally we would like to execute the same code on all computing architectures at high performance, but what are the best practices to achieve that? Earth system models are large, with many interoperating parts. How can we build them with the flexibility for any user to swap parts in and out without diving deep into the code base? And we are aiming towards differentiable models that learn from data, ideally without sacrificing anything from the above. This minisymposium aims to cover the following topics, among others:
- Abstraction, multiple dispatch, types and type hierarchies in large projects
- Flexibility with respect to the computing architecture: CPUs, GPUs, and others
- Modularity: easily extensible models and their components
- Differentiability and optimization: Learning from data
- Integration of Julia’s machine learning frameworks into models
- Performance: Achieving and monitoring performance during development
- Usage of high-performance and parallelism frameworks
- Interconnectivity of packages across the Julia ecosystem
- Bad practices: Julia’s detours, unnecessary complexity, or missing features
Potential projects to be presented:
- ClimaAtmos.jl and other projects in the CliMA framework
- Oceananigans.jl
- SpeedyWeather.jl
- dJUICE.jl
- ClimateModels.jl
- ClimateTools.jl
- NCDatasets.jl
- DiskArrays.jl
Chair: Milan Klöwer (MIT);
Co-chairs: Simon Byrne (Caltech), Chris Hill (MIT)
false
https://pretalx.com/juliacon2023/talk/REUUTX/
https://pretalx.com/juliacon2023/talk/REUUTX/feedback/
32-G449 (Kiva)
Morning Break Day 1 Room 6
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28128-morning-break-day-1-room-6
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/8WKLZC/
https://pretalx.com/juliacon2023/talk/8WKLZC/feedback/
32-G449 (Kiva)
Quantum Minis Introduction
Minisymposium
2023-07-26T11:30:00-04:00
11:30
00:10
In recent years the Julia language has seen increasing adoption in the quantum community, broadly defined. We propose to organize two back-to-back minisymposia on the theme of "Julia and quantum," including quantum computing, condensed matter physics, quantum chemistry. The talks would be addressed to both current and future experts.
juliacon2023-30794-quantum-minis-introduction
Quantum
en
The community of Julia users in academic fields involving quantum phenomena and in the quantum computing industry continues to grow. There is already a vibrant ecosystem of quantum-related Julia packages, with users and developers at a variety of career stages: high school, undergraduate, and graduate students; postdoctoral scholars; faculty; staff scientists; and industry researchers and developers. In these conjoined minisymposia, the community would be able to present their work to a quantum-focused audience, allowing package developers to present material more relevant to an expert audience (which is often not appropriate for the wider JuliaCon audience), junior community members to find mentors in their area, and community members working in different areas to find common interests and needs.
For example, the minisymposium might contain a talk about tensor network methods in condensed matter physics, and another talk about these methods as applied to quantum circuit simulation, allowing developers in both areas to understand where current Julia implementations are lacking and find new contributors. Having a dedicated stream (minisymposia) for this kind of talk would facilitate this kind of interaction.
In addition, a quantum-focused series of minisymposia would allow the "quantum curious" -- students, those in industry but without a background in the relevant fields -- to see the "lay of the land" and find opportunities to contribute, for research, or for job searches.
We propose a double or conjoined minisymposium -- one session in the morning, one in the afternoon -- because of the breadth of the community and the wide variety of use cases and fields of study covered. Two sessions will allow us to include enough full length talks to treat each area fairly, as even with much background in common, a full length talk is often needed to treat a topic in the "quantum" area in sufficient depth. We also hope to solicit lightning talks from more junior members (e.g. graduate or undergraduate students).
Confirmed speakers:
Matt Fishman & Miles Stoudenmire (ITensor.jl)
Katharine Hyatt (Amazon Braket)
QuEra computing
false
https://pretalx.com/juliacon2023/talk/UQ3LZA/
https://pretalx.com/juliacon2023/talk/UQ3LZA/feedback/
32-G449 (Kiva)
Qurt.jl: Compiling Quantum Circuits (in Julia)
Lightning talk
2023-07-26T11:40:00-04:00
11:40
00:10
I'll present Qurt.jl, a package for creating and transforming quantum circuits. I'll do a
brief demo, including the obligatory, slick, macro-enabled builder interface. I'll show how
it is faster than some Python-Rust and C++ implementations.
juliacon2023-32380-qurt-jl-compiling-quantum-circuits-in-julia-
Quantum
John Lapeyre
en
I'll talk about the design in light of the goals and requirements for Qurt. These include:
it should be easy to round-trip translate between Qurt and Qiskit, and to reproduce Qiskit
functionality, including compiling circuits. The interfaces should be straightforward and
employ familiar concepts, similar to Qiskit or Cirq. It should be easy to quickly define
and use a custom gate. It should be designed with minimal barriers to achieving high
performance; for instance, runtime dispatch must be avoided when handling circuit elements,
and data should, to the extent possible, be stored in vectors of bitstypes. Obviously some
of these goals are in conflict, so trade-offs will be adjusted as Qurt evolves. In fact,
another goal of Qurt is to explore data structures dictated by these trade-offs and to use
this exploration to inform implementations in static languages where experimentation is
expensive.
false
https://pretalx.com/juliacon2023/talk/KRN7VM/
https://pretalx.com/juliacon2023/talk/KRN7VM/feedback/
32-G449 (Kiva)
Convenient time dependence in QuantumOptics.jl
Lightning talk
2023-07-26T11:50:00-04:00
11:50
00:10
QuantumOptics.jl is a numerical framework for simulating quantum systems. It provides a very general set of tools that go far beyond quantum optics. We recently added "time-dependent operators", which allow easy creation and composition of time-dependent Hamiltonians and Lindbladians for use in dynamical simulations. This talk briefly introduces QuantumOptics.jl and these convenient new time-dependence features.
juliacon2023-32381-convenient-time-dependence-in-quantumoptics-jl
Quantum
Ashley Milsted
en
QuantumOptics.jl is a numerical framework for simulating quantum systems. It provides a very general set of tools that go far beyond quantum optics. We recently added "time-dependent operators", which allow easy creation and composition of time-dependent Hamiltonians and Lindbladians for use in dynamical simulations. This talk briefly introduces QuantumOptics.jl and these convenient new time-dependence features.
false
https://pretalx.com/juliacon2023/talk/UZE9HU/
https://pretalx.com/juliacon2023/talk/UZE9HU/feedback/
32-G449 (Kiva)
Braket.jl: A Julia SDK for Quantum Computing
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
In recent years the Julia language has seen increasing adoption in the quantum community, broadly defined. We propose to organize two back-to-back minisymposia on the theme of "Julia and quantum," including quantum computing, condensed matter physics, quantum chemistry. The talks would be addressed to both current and future experts.
juliacon2023-26982-braket-jl-a-julia-sdk-for-quantum-computing
Quantum
Katharine Hyatt
en
The community of Julia users in academic fields involving quantum phenomena and in the quantum computing industry continues to grow. There is already a vibrant ecosystem of quantum-related Julia packages, with users and developers at a variety of career stages: high school, undergraduate, and graduate students; postdoctoral scholars; faculty; staff scientists; and industry researchers and developers. In these conjoined minisymposia, the community would be able to present their work to a quantum-focused audience, allowing package developers to present material more relevant to an expert audience (which is often not appropriate for the wider JuliaCon audience), junior community members to find mentors in their area, and community members working in different areas to find common interests and needs.
For example, the minisymposium might contain a talk about tensor network methods in condensed matter physics, and another talk about these methods as applied to quantum circuit simulation, allowing developers in both areas to understand where current Julia implementations are lacking and find new contributors. Having a dedicated stream (minisymposia) for this kind of talk would facilitate this kind of interaction.
In addition, a quantum-focused series of minisymposia would allow the "quantum curious" -- students, those in industry but without a background in the relevant fields -- to see the "lay of the land" and find opportunities to contribute, for research, or for job searches.
We propose a double or conjoined minisymposium -- one session in the morning, one in the afternoon -- because of the breadth of the community and the wide variety of use cases and fields of study covered. Two sessions will allow us to include enough full length talks to treat each area fairly, as even with much background in common, a full length talk is often needed to treat a topic in the "quantum" area in sufficient depth. We also hope to solicit lightning talks from more junior members (e.g. graduate or undergraduate students).
Confirmed speakers:
Matt Fishman & Miles Stoudenmire (ITensor.jl)
Katharine Hyatt (Amazon Braket)
QuEra computing
Others to reach out to:
Roger Melko (University of Waterloo/Perimeter Institute)
Amazon Center for Quantum Computing (Ash Milsted, Andrew Keller)
Zapata Computing
Paul Brehmer, RWTH Aachen University
Michael Goerz (QuantumControl.jl)
false
https://pretalx.com/juliacon2023/talk/QHHSUV/
https://pretalx.com/juliacon2023/talk/QHHSUV/feedback/
32-G449 (Kiva)
Lunch Day 1 (Room 6)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue, with food available for purchase.
juliacon2023-28071-lunch-day-1-room-6-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue, with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/ZBCGFG/
https://pretalx.com/juliacon2023/talk/ZBCGFG/feedback/
32-G449 (Kiva)
Yao.jl & Bloqade.jl: towards a symbolic engine for quantum
Talk
2023-07-26T14:00:00-04:00
14:00
00:30
In recent years the Julia language has seen increasing adoption in the quantum community, broadly defined. We propose to organize two back-to-back minisymposia on the theme of "Julia and quantum," including quantum computing, condensed matter physics, quantum chemistry. The talks would be addressed to both current and future experts.
juliacon2023-32378-yao-jl-bloqade-jl-towards-a-symbolic-engine-for-quantum
Quantum
Xiu-zhe (Roger) Luo
en
In recent years the Julia language has seen increasing adoption in the quantum community, broadly defined. We propose to organize two back-to-back minisymposia on the theme of "Julia and quantum," including quantum computing, condensed matter physics, quantum chemistry. The talks would be addressed to both current and future experts.
false
https://pretalx.com/juliacon2023/talk/SDW7EX/
https://pretalx.com/juliacon2023/talk/SDW7EX/feedback/
32-G449 (Kiva)
Quantum Dynamics and Control with QuantumControl.jl
Talk
2023-07-26T14:30:00-04:00
14:30
00:30
The `QuantumControl.jl` package provides a framework for open-loop quantum optimal control: finding classical control fields to drive the dynamics of a quantum system in a particular way, e.g., to create a particular entangled state or to realize a quantum gate.
juliacon2023-31206-quantum-dynamics-and-control-with-quantumcontrol-jl
Quantum
Michael Goerz
en
Quantum control seeks to design classical "control fields" in order to steer a quantum system in some desired way. It is a cornerstone of modern quantum technology: quantum control is how one generates entangled states for quantum sensing, or realizes the logic gates in a quantum computer. Solving the control problem requires numerically simulating the dynamics of the quantum system and then iteratively tuning the control fields to minimize some figure of merit. I will give an overview of the [`QuantumControl.jl` framework](https://github.com/JuliaQuantumControl), which implements state-of-the-art methods for simulation and optimization.
Julia provides unique advantages in flexibility and numerical efficiency, both of which are critical to quantum control. Multiple dispatch makes it easy to adapt to different physical systems and efficient representations. Furthermore, integration with the wider Julia ecosystem makes it possible to leverage "semi-automatic differentiation" to efficiently optimize arbitrary, non-analytical figures of merit such as entanglement measures. Benchmarks show that Julia matches the performance of existing Fortran code for quantum control with ease. Using more specialized data structures, such as `SparseArrays`, `StaticArrays` or `GPUArrays`, depending on the use case, can further improve performance by a considerable margin.
false
https://pretalx.com/juliacon2023/talk/EKWZ3D/
https://pretalx.com/juliacon2023/talk/EKWZ3D/feedback/
32-G449 (Kiva)
Symmetries in Tensor Networks—TensorOperations.jl + TensorKit.jl
Talk
2023-07-26T15:00:00-04:00
15:00
00:30
Tensor networks have emerged as a versatile and efficient framework for analyzing a wide range of quantum systems. A key advantage lies in their ability to leverage the underlying structure of the problems they address. We present [TensorOperations.jl](https://github.com/Jutho/TensorOperations.jl) and [TensorKit.jl](https://github.com/Jutho/TensorKit.jl), which are designed to facilitate the seamless and efficient implementation of tensor network algorithms and incorporate arbitrary symmetries.
juliacon2023-32379-symmetries-in-tensor-networks-tensoroperations-jl-tensorkit-jl
Quantum
Lukas Devos
en
We begin with [TensorOperations.jl](https://github.com/Jutho/TensorOperations.jl), a tool that streamlines the specification of tensor networks using the popular Einstein summation notation. By optimizing critical aspects of this process at compile time, this tool can significantly enhance performance, a pivotal factor for many tensor network algorithms.
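To make the notation concrete, here is a package-free sketch of what Einstein summation denotes (with TensorOperations.jl the same contraction would be written `@tensor C[i, j] := A[i, k] * B[k, j]`): every repeated index is simply summed over.

```julia
# Package-free sketch of Einstein summation:
# in C[i, j] = A[i, k] * B[k, j], the repeated index k is summed over.
function einsum_matmul(A::AbstractMatrix, B::AbstractMatrix)
    I, K = size(A)
    K2, J = size(B)
    K == K2 || error("contracted dimensions must match")
    C = zeros(I, J)
    for i in 1:I, j in 1:J, k in 1:K
        C[i, j] += A[i, k] * B[k, j]  # sum over the repeated index k
    end
    return C
end

A = rand(3, 4)
B = rand(4, 2)
@assert einsum_matmul(A, B) ≈ A * B  # agrees with the built-in matrix product
```

The `@tensor` macro generates optimized code for such contractions rather than naive loops like these.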
In the latest release (v4.0.0) we have added numerous quality-of-life updates alongside support for different backends, as well as support for automatic differentiation via the ChainRules.jl ecosystem.
Next, we delve into the realm of symmetries within tensors, by exploring the implications of (non-)Abelian symmetry groups, as well as the more exotic categorical symmetries. We show how [TensorKit.jl](https://github.com/Jutho/TensorKit.jl) provides a way to develop symmetry-independent algorithms, which are still able to leverage the advantages given by the additional structure.
To illustrate the practicality of these advancements, we briefly showcase the capabilities of [MPSKit.jl](https://github.com/maartenvd/MPSKit.jl), a matrix product state library that incorporates these symmetries. We give examples that underline the computational benefits and fundamental insights into the underlying physics.
false
https://pretalx.com/juliacon2023/talk/SRNPDJ/
https://pretalx.com/juliacon2023/talk/SRNPDJ/feedback/
32-G449 (Kiva)
An update on the ITensor ecosystem
Talk
2023-07-26T15:30:00-04:00
15:30
00:30
ITensor is a library for running and developing tensor network algorithms, a set of algorithms in which high-order tensors are represented as a network of lower-order, low-rank tensors. I will give an update on the ITensor Julia ecosystem. In particular, I will discuss efforts to support more GPU backends and block-sparse operations on GPU, as well as ITensorNetworks.jl, a new library for tensor network algorithms on general graphs.
juliacon2023-31204-an-update-on-the-itensor-ecosystem
Quantum
Matthew Fishman
en
ITensor is a library for running and developing tensor network algorithms, a set of algorithms in which high-order tensors are represented as a network of lower-order, low-rank tensors. I will give an update on the ITensor Julia ecosystem. In particular, I will discuss updates to our tensor operation backend library, NDTensors.jl, and efforts to make it more extensible and support dense and block-sparse operations on a variety of GPU backends through package extensions. In addition, I will discuss the new ITensorNetworks.jl library, a library built on top of ITensors.jl for tensor network algorithms on general graphs, which has a graph-like interface and graph operations based on Graphs.jl.
false
https://pretalx.com/juliacon2023/talk/TXKXKP/
https://pretalx.com/juliacon2023/talk/TXKXKP/feedback/
32-G449 (Kiva)
Quantum Mini Coffee Break
Break
2023-07-26T16:00:00-04:00
16:00
00:15
Taking a break from nonstop quantum information to recharge.
juliacon2023-32382-quantum-mini-coffee-break
Quantum
en
Taking a break from nonstop quantum information to recharge with caffeine, fresh air, and possibly pastries.
false
https://pretalx.com/juliacon2023/talk/7AUWD9/
https://pretalx.com/juliacon2023/talk/7AUWD9/feedback/
32-G449 (Kiva)
Tensor network contraction order optimization algorithms
Lightning talk
2023-07-26T16:15:00-04:00
16:15
00:15
In this talk, I will introduce the algorithms used to find optimal contraction orders for tensor networks, which are implemented in the OMEinsumContractionOrders.jl package. These algorithms have a wide range of applications, including simulating quantum circuits, solving inference problems, and solving combinatorial optimization problems.
juliacon2023-33972-tensor-network-contraction-order-optimization-algorithms
Quantum
JinGuo Liu
en
In this talk, I will introduce the algorithms used to find optimal contraction orders for tensor networks, which are implemented in the OMEinsumContractionOrders.jl package. These algorithms have a wide range of applications, including simulating quantum circuits, solving inference problems, and solving combinatorial optimization problems.
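As a toy illustration of the underlying problem (plain Julia, not the OMEinsumContractionOrders.jl API), consider contracting a chain of three matrices: the two parenthesizations compute the same result at very different cost.

```julia
# Cost of multiplying an (m×k) matrix by a (k×n) matrix: m*k*n scalar multiplications.
cost(m, k, n) = m * k * n

# Chain A(10×1000) * B(1000×10) * C(10×1000).
a, b, c, d = 10, 1000, 10, 1000  # A is a×b, B is b×c, C is c×d

left_first  = cost(a, b, c) + cost(a, c, d)  # (A*B)*C
right_first = cost(b, c, d) + cost(a, b, d)  # A*(B*C)

@assert left_first == 200_000
@assert right_first == 20_000_000  # 100x more work for the same result
```

Contraction-order optimizers search for the cheapest such grouping over an entire tensor network, where the cost gap between orders can be exponential.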
false
https://pretalx.com/juliacon2023/talk/PX99DC/
https://pretalx.com/juliacon2023/talk/PX99DC/feedback/
32-G449 (Kiva)
QuantumCumulants.jl
Talk
2023-07-26T16:30:00-04:00
16:30
00:30
QuantumCumulants.jl is a package for the symbolic derivation of generalized mean-field equations for quantum mechanical operators in open quantum systems. The equations are derived using fundamental commutation relations of operators. When these equations are averaged, they can be automatically expanded in terms of cumulants to an arbitrary order. This results in a closed set of symbolic differential equations, which can also be solved numerically.
juliacon2023-30795-quantumcumulants-jl
Quantum
Christoph Hotter
en
A full quantum mechanical treatment of open quantum systems via a master equation is often limited by the size of the underlying Hilbert space. As an alternative, the dynamics can also be formulated in terms of systems of coupled differential equations for operators in the Heisenberg picture. This typically leads to an infinite hierarchy of equations for products of operators. A well-established approach to truncate this infinite set at the level of expectation values is to neglect quantum correlations of high order. This is systematically realized with a so-called cumulant expansion, which decomposes expectation values of operator products into products of a given lower order, leading to a closed set of equations. Here we present an open-source framework that fully automates this approach: first, the equations of motion of operators up to a desired order are derived symbolically using predefined canonical commutation relations. Next, the resulting equations for the expectation values are expanded employing the cumulant expansion approach, where moments up to a chosen order specified by the user are included. Finally, a numerical solution can be directly obtained from the symbolic equations. After reviewing the theory we present the framework and showcase its usefulness in a few example problems.
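As a purely numerical illustration of the lowest-order truncation (a Base-Julia sketch; QuantumCumulants.jl itself works symbolically with operators), the second cumulant ⟨ab⟩ − ⟨a⟩⟨b⟩ is exactly the correlation that the first-order expansion ⟨ab⟩ ≈ ⟨a⟩⟨b⟩ neglects:

```julia
using Random, Statistics  # Julia standard libraries

Random.seed!(1)
n = 1_000_000
a = randn(n)
b = randn(n)              # statistically independent of a
c = a .+ 0.5 .* randn(n)  # strongly correlated with a

# Second cumulant (covariance): the term dropped by the truncation <ab> ~ <a><b>.
second_cumulant(x, y) = mean(x .* y) - mean(x) * mean(y)

@assert abs(second_cumulant(a, b)) < 1e-2  # near zero: truncation accurate
@assert second_cumulant(a, c) > 0.9        # large: higher orders needed
```

The package automates the analogous decomposition for operator expectation values at any chosen order.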
false
https://pretalx.com/juliacon2023/talk/NLYAH3/
https://pretalx.com/juliacon2023/talk/NLYAH3/feedback/
32-G449 (Kiva)
Formalism-agnostic Quantum Hardware Simulation
Talk
2023-07-26T17:00:00-04:00
17:00
00:30
The design of quantum hardware and the study of quantum algorithms require classical simulations of quantum dynamics. A rich ecosystem of simulation methods and algorithms has been developed over the last 20 years, each applicable to different sub-problems and efficient in different settings. We present a family of symbolic and numeric Julia packages that abstract away many of the methodology decisions, providing a way to focus on the hardware under study instead of the minutia of the methods.
juliacon2023-27011-formalism-agnostic-quantum-hardware-simulation
Quantum
/media/juliacon2023/submissions/DQCQKF/logo_rzSUkgs.png
Stefan Krastanov
en
Quantum hardware, if successfully deployed in the future, would provide computational resources unavailable in classical computation. Nonetheless, we still need as-efficient-as-possible classical simulators for quantum dynamics: both in the experimental design process for quantum hardware and in the theoretical study of potential quantum algorithms.
A vast array of techniques has been developed in the last 20 years: the direct, non-scalable wavefunction simulators, capable of simulating a wide variety of analog noise processes; but also efficient special-purpose or approximate methods (e.g., Clifford circuit simulators applicable in error correction over thousands of qubits, or tensor-network methods for approximate large-scale dynamics).
Thus, a lot of questions arise for the scientist designing such machines: which simulation method would permit the study of the particular dynamics of interest, how to model the environmental noise most efficiently, and, crucially, how to couple an efficient simulation of one type of quantum dynamics with a completely different methodology that has to be used for a different part of the hardware.
Our answer to these issues is the **QuantumSavory.jl** family of packages. **QuantumSavory** provides for a formalism-agnostic way to describe the quantum hardware you want to study. Then it dispatches to the appropriate backend simulator: **QuantumClifford.jl** for Clifford circuits and error correction; **BPGates.jl** for even faster simulations of entanglement purification; **QuantumOptics.jl** for general purpose wavefunction simulations; and other special-purpose packages that can be easily plugged into the interface provided by QuantumSavory.
**QuantumSymbolics.jl** is another crucial tool enabling the formalism-agnostic simulator: a symbolic quantum-focused CAS built on top of Symbolics.jl. In QuantumSymbolics one can operate on the "Platonic" representation of various quantum states and processes before converting them to a special-purpose numerical object to be used by the backend simulator (e.g. a density matrix for QuantumOptics or a tableau for QuantumClifford).
Of course, these tools would not be complete without support for discrete event simulations (e.g. in quantum networks) and a rich library of plotting recipes for visualization and debugging, of which we provide a brief overview at the end.
false
https://pretalx.com/juliacon2023/talk/DQCQKF/
https://pretalx.com/juliacon2023/talk/DQCQKF/feedback/
32-G449 (Kiva)
Tree sweeping algorithms with ITensorNetworks.jl
Lightning talk
2023-07-26T17:30:00-04:00
17:30
00:15
[ITensorNetworks.jl](https://github.com/mtfishman/ITensorNetworks.jl) is an experimental package which aims to provide general tools for working with higher-dimensional tensor networks based on [ITensors.jl](https://github.com/ITensor/ITensors.jl). Among many novel features, I will focus on the generalization of one-dimensional sweeping algorithms to systems with arbitrary tree-like geometries, and highlight some promising applications.
juliacon2023-32377-tree-sweeping-algorithms-with-itensornetworks-jl
Quantum
Lander Burgelman
en
Tensor networks have proven an invaluable numerical tool in computational physics, with particular success in simulating quantum many-body systems in one and two dimensions. With more recent applications outside of this conventional setting, there has been an increased interest in the use of higher-dimensional tensor networks for the efficient representation and manipulation of high-dimensional data structures in general. [ITensorNetworks.jl](https://github.com/mtfishman/ITensorNetworks.jl) is an experimental package which aims to provide general tools for working with higher-dimensional tensor networks based on [ITensors.jl](https://github.com/ITensor/ITensors.jl). Its features include methods for graph partitioning in general networks, gauging and approximate contraction tools, as well as optimization and evolution routines for several classes of networks.
Arguably the most widely used tensor-network tools are [DMRG](https://tensornetwork.org/mps/algorithms/dmrg/)-like sweeping routines for finite one-dimensional systems, which use iterative local updates to perform tasks such as ground state searches, simulating time evolution and targeting excitations. I will focus on the generalization of these algorithms to systems with arbitrary tree-like geometries and their implementation within the framework of ITensorNetworks.jl. The goal is to provide an intuitive and easy-to-use tool that retains the efficiency and robustness of the original algorithms, while also exploiting the characteristic geometry of the system at hand. Aside from more straightforward applications, a promising use of this framework would be as an approximate backend in established (quantum) simulation tools.
Slides and examples: https://github.com/leburgel/JuliaCon2023_tree_sweeping
false
https://pretalx.com/juliacon2023/talk/H7GAYD/
https://pretalx.com/juliacon2023/talk/H7GAYD/feedback/
32-G449 (Kiva)
Evolution of tensor network states with ITensorNetworks.jl
Lightning talk
2023-07-26T17:45:00-04:00
17:45
00:15
ITensorNetworks.jl is a new Julia package for manipulating and optimising tensor networks of arbitrary structure. I will introduce methods available in the package for evolving these tensor networks in time, allowing the simulation of a wide range of problems in many-body quantum systems.
I will then demonstrate the application of these methods to simulate the kicked Ising model on the heavy-hex lattice.
juliacon2023-35909-evolution-of-tensor-network-states-with-itensornetworks-jl
Quantum
Joseph Tindall
en
ITensorNetworks.jl is a new Julia package for manipulating and optimising tensor networks of arbitrary structure. I will introduce methods available in the package for evolving these tensor networks in time, allowing the simulation of a wide range of problems in many-body quantum systems.
I will then demonstrate the application of these methods to simulate the kicked Ising model on the heavy-hex lattice. A quantum simulation of this problem was recently performed on the IBM Eagle processor as an indicator of quantum utility. I will show how ITensorNetworks.jl can be leveraged to solve this problem accurately and with very modest computational resources.
false
https://pretalx.com/juliacon2023/talk/AXHNYJ/
https://pretalx.com/juliacon2023/talk/AXHNYJ/feedback/
32-G449 (Kiva)
Quantum BoF
Birds of a Feather (BoF)
2023-07-26T18:00:00-04:00
18:00
01:00
In recent years the Julia language has seen increasing adoption in the quantum community, broadly defined. We propose to organize two back-to-back minisymposia on the theme of "Julia and quantum," including quantum computing, condensed matter physics, and quantum chemistry. The talks would be addressed to both current and future experts.
juliacon2023-31207-quantum-bof
Quantum
en
The community of Julia users in academic fields involving quantum phenomena and in the quantum computing industry continues to grow. There is already a vibrant ecosystem of quantum-related Julia packages, with users and developers at a variety of career stages: high school, undergraduate, and graduate students; postdoctoral scholars; faculty; staff scientists; and industry researchers and developers. In these conjoined minisymposia, the community would be able to present their work to a quantum-focused audience, allowing package developers to present material more relevant to an expert audience (which is often not really appropriate for the wider JuliaCon audience), more junior community members to find mentors in their area, and community members working in different areas to find areas of common interest or need.
For example, the minisymposium might contain a talk about tensor network methods in condensed matter physics, and another talk about these methods as applied to quantum circuit simulation, allowing developers in both areas to understand where current Julia implementations are lacking and find new contributors. Having a dedicated stream (minisymposia) for this kind of talk would facilitate this kind of interaction.
In addition, a quantum-focused series of minisymposia would allow the "quantum curious" -- students, those in industry but without a background in the relevant fields -- to see the "lay of the land" and find opportunities to contribute, for research, or for job searches.
We propose a double or conjoined minisymposium -- one session in the morning, one in the afternoon -- because of the breadth of the community and the wide variety of use cases and fields of study covered. Two sessions will allow us to include enough full length talks to treat each area fairly, as even with much background in common, a full length talk is often needed to treat a topic in the "quantum" area in sufficient depth. We also hope to solicit lightning talks from more junior members (e.g. graduate or undergraduate students).
Confirmed speakers:
Matt Fishman & Miles Stoudenmire (ITensor.jl)
Katharine Hyatt (Amazon Braket)
QuEra computing
Others to reach out to:
Roger Melko (University of Waterloo/Perimeter Institute)
Amazon Center for Quantum Computing (Ash Milsted, Andrew Keller)
Zapata Computing
Paul Brehmer, RWTH Aachen University
Michael Goerz (QuantumControl.jl)
false
https://pretalx.com/juliacon2023/talk/9XH33L/
https://pretalx.com/juliacon2023/talk/9XH33L/feedback/
32-G449 (Kiva)
Julia productivity enhancement with Boilerplate.jl
Online Only Talk
2023-07-26T19:00:00-04:00
19:00
00:03
Anybody who works with data is likely to face challenges in understanding the underlying data. Many times we use boilerplate code. Boilerplate.jl is here to the rescue, and we have also collected much more useful code.
juliacon2023-26654-julia-productivity-enhancement-with-boilerplate-jl
JuliaCon
Marcell Havlik
en
Julia is actually the most powerful language of our time, even if people haven't realized it yet. I developed some code that is extremely simple but all the more useful. I want to share it with everyone, so it can help anyone reach extremely fast development speed. Everything will be here: https://github.com/Cvikli/Boilerplate.jl
Most important: @sizes, @typeof, @display, @get for dicts, @asyncsafe and some more.
Have you ever typed:
`@show typeof(vari)` or `@show size(arr[1][1]), size(arr[2][1]), size(arr[2][2])`, or anything like this because `size` doesn't handle your type? `@sizes` should be able to handle it as long as you stay within Base types.
Also, note that parentheses are really slow to type. But more on this during the very short and concise talk.
Please note, I am really happy to hear any new idea. Share it, so we can all grow together! ;)
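As a hypothetical sketch of how such a convenience macro could be written (illustrative only; the real Boilerplate.jl implementation may differ), a minimal `@sizes` can echo the expression, print its size, and return it:

```julia
# Hypothetical minimal version of a `@sizes`-style macro (not Boilerplate.jl's code).
macro sizes(ex)
    quote
        local v = $(esc(ex))                     # evaluate the expression once
        local s = v isa AbstractArray ? size(v) : ()
        println($(string(ex)), " has size ", s)  # echo the original expression
        s                                        # return the size tuple
    end
end

@assert (@sizes zeros(2, 3)) == (2, 3)
```

A real implementation would additionally recurse into nested containers, which is the main convenience the talk describes.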
false
https://pretalx.com/juliacon2023/talk/S8JVGK/
https://pretalx.com/juliacon2023/talk/S8JVGK/feedback/
32-G449 (Kiva)
C/C++ fans
Experience
2023-07-26T19:05:00-04:00
19:05
00:03
Julia is different from C++ for many reasons. Part of the process is the learning curve for many new academics and researchers joining the community in droves. There are many interesting obstacles and features that could make Julia a more mainstream choice for many academics.
juliacon2023-24750-c-c-fans
JuliaCon
Min Khant Zaw
en
Some of the interesting features I want to highlight from my learning experience as an ML/AI research student are the discovery of C functions and how convenient it was for me, and how I experimented with distributed computation in my HPC class.
false
https://pretalx.com/juliacon2023/talk/WRB8D8/
https://pretalx.com/juliacon2023/talk/WRB8D8/feedback/
32-D463 (Star)
Morning Break Day 1 Room 4
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28126-morning-break-day-1-room-4
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/7U7QWK/
https://pretalx.com/juliacon2023/talk/7U7QWK/feedback/
32-D463 (Star)
Attitude control subsystem development using Julia
Talk
2023-07-26T11:30:00-04:00
11:30
00:30
The development of a satellite attitude and orbit control subsystem (AOCS) imposes challenges since the dynamics cannot be reproduced on the ground. The entire process relies on simulation. We developed a novel workflow that improved productivity by using the outstanding adaptability of the Julia language and its ecosystem. We could adapt a validated simulator to test and verify all the AOCS algorithms at each development phase, drastically reducing error propagation and, consequently, test costs.
juliacon2023-26707-attitude-control-subsystem-development-using-julia
SciML
/media/juliacon2023/submissions/AWQCSX/JuliaCon2023_seuqh2k.jpeg
Ronan Arraes Jardim Chagas
en
The satellite attitude is defined as its orientation in space. Most satellites require attitude control to point their payloads to acquire the data. The satellite subsystem in charge of estimating and controlling the attitude is the attitude determination and control subsystem (ADCS). When this subsystem can also maintain the satellite orbit, it is called the attitude and orbit control subsystem (AOCS).
It is almost impossible to reliably reproduce the space rigid-body dynamics on the ground. Hence, the development of the AOCS highly depends on simulation from the design phase up to the acceptance tests. This specificity leads to an "n-simulator" problem, where each development phase requires constructing some simulation tool. For example, when developing and tuning the control loop, the specialist must use a minimal dynamics simulation to verify the control law. In a more advanced phase, when the embedded software is being coded, the software engineer should also use a simulator to perform closed-loop tests to check whether the control laws are correctly implemented.
Given all the difficulties that arise from keeping all those simulators, the AOCS development usually happens through unit tests and simple scenarios. However, unit tests can hardly verify the entire AOCS algorithms, for they depend on Markov chains. Thus, more than a simple input/output verification is needed, and a vast number of internal states must be taken into account. Deeper tests only occur with the onboard computer in a hardware-in-the-loop simulation when everything is implemented. Even though this approach usually simplifies the simulation requirements, some errors can only be spotted late, leading to significant problems for the project. One of them is the time required for algorithm testing and debugging using real hardware. Those are expensive tests since they happen in real time, leading to a significant mobilization of the test team and the use of expensive flight hardware.
Previously, using the packages SatelliteToolbox.jl, ReferenceFrameRotations.jl, and DifferentialEquations.jl, we managed to create a high-fidelity simulator for the Brazilian Multi-Mission Platform (PMM, in Portuguese), called PmmAocsSimulator.jl. We validated this tool using on-orbit data from the Amazonia-1 satellite.
For our next AOCS generation, we want to rely on the validated simulator code for all the development phases. The algorithms for the embedded software are initially constructed in Julia inside the PmmAocsSimulator.jl. When the control engineer validates the dynamics, the Julia code is translated to C++ using the CxxWrap.jl package, which is also run inside the same environment. Afterward, the same C++ code is compiled for the onboard processor and run in its emulator. In this phase, the onboard software is also tested in a closed-loop environment using the PmmAocsSimulator.jl. Lastly, the AOCS software is integrated with the rest of the satellite computer software, and a final simulation takes place using the real onboard computer.
Since all the phases are simulated using the same validated environment, the results must be perfectly matched. This workflow allows us to develop the next AOCS generation with a huge gain compared to the classic scenario. The development can happen incrementally but also with a full test suite encompassing the most significant dynamics states of the satellite, which is impossible to reproduce without such an environment. Hence, most of the issues related to attitude control can be spotted and solved by the AOCS team before the final software integration. All this workflow is only possible given the considerable adaptability of the Julia language and the tools available in its ecosystem.
Finally, the gains obtained by this new workflow are massive. We have already implemented 50% of the AOCS functions for the next-generation software, and we have yet to identify a single algorithm bug at the final test stage with the flight hardware. This process allowed us to drastically reduce the test hours using the real satellite computer, which also helped diminish the development cost.
This talk will detail all the processes mentioned before, stating how the Julia language and its ecosystem are being used at the National Institute for Space Research (INPE) to solve the n-simulator problem by providing one single simulator for all the AOCS development phases.
false
https://pretalx.com/juliacon2023/talk/AWQCSX/
https://pretalx.com/juliacon2023/talk/AWQCSX/feedback/
32-D463 (Star)
DyVE, a Framework for Value Dynamics
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
DyVE (Dynamics of Value Evolution) is an open source framework, aimed at designing, fitting, and integrating complex real-world process models; accounting for the accrual of costs and rewards; and supporting complex decision making, particularly in pharmaceutical R&D and business.
Process specifications are compact and algebraically composable. Uncertainty about model values or structure is supported. DyVE is interoperable with Scientific Machine Learning (SciML).
juliacon2023-24356-dyve-a-framework-for-value-dynamics
SciML
/media/juliacon2023/submissions/DN387J/logo_WcmBxjB.png
Jan BimaOtto RitterSean L Wu
en
DyVE (Dynamics of Value Evolution) is aimed at designing, fitting, and integrating dynamical models of real-world processes; accounting for the accrual of costs and rewards; and supporting complex decision making, particularly in pharmaceutical R&D and business.
Process and resource models are specified in a compact DSL inspired by Applied Category Theory (ACT). Process specifications are algebraically composable. Uncertainty about model values, as well as structure, can be represented by distributions or by more general user-defined probabilistic expressions. The DSL is essentially a bridge from ACT semantics to Scientific Machine Learning (SciML).
DyVE can also co-integrate its models with pre-existing non-DyVE models, supporting the APIs of DifferentialEquations.jl, Agents.jl, or AlgebraicJulia.jl.
Processes are represented as hybrid state-rewriting systems. Several rewriting formalisms are supported: discrete maps, differential equations, discrete events, agent-based models, and combinations thereof.
Simulation, fitting, optimization, visualization, and other computational heavy lifting is delegated to the SciML ecosystem.
We will present and demonstrate component packages of DyVE on illustrative applications.
false
https://pretalx.com/juliacon2023/talk/DN387J/
https://pretalx.com/juliacon2023/talk/DN387J/feedback/
32-D463 (Star)
Lunch Day 1 (Room 4)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28069-lunch-day-1-room-4-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/G878MG/
https://pretalx.com/juliacon2023/talk/G878MG/feedback/
32-D463 (Star)
Digital Twins for Ocean Robots
Talk
2023-07-26T14:00:00-04:00
14:00
00:30
Ocean robots and satellites collect crucial data to monitor, understand, and predict climate change. Our digital twin framework accesses & simulates these complex data sets. It leverages multiple Julia packages developed by the author and linked organizations. This talk focuses on numerical modeling and artificial intelligence components of the DT framework. It touches on all major elements of the global observing system along with expected scientific, societal, and commercial applications.
juliacon2023-26930-digital-twins-for-ocean-robots
SciML
/media/juliacon2023/submissions/N8ZX8W/global_ocean_surface_currents_PABKSj3.png
Gael Forget
en
Topics discussed include
- two recent workshops: the _Symposium on Advances in Ocean Observation_ (to which G.F. was invited in 2022) and the _Global Workshop on Earth Observation with Julia_ (JuliaEO 2023, which G.F. co-organized)
- our model hierarchy: climate model emulators, data assimilative models, kilometer-scale global models, marine ecosystem simulations, trained neural networks, robot pathway simulations, and virtual ocean sensors.
- real-life and simulated data: at-sea data collection, autonomous vehicles, Earth Observation via satellites, ocean+atmospheric reanalyses, geospatial data, smart observational strategies, and edge computing.
- integration of multiple packages developed by the author (incl. `ClimateModels.jl`, `OceanRobots.jl`, `IndividualDisplacements.jl`, and `MeshArrays.jl`) and across the broader Julia ecosystem (incl. `JuliaClimate`, `JuliaOcean`, `SciML`, `MakieOrg`, `FluxML`, `JuliaGeo`, `JuliaSpace`, and `JuliaRobotics`).
- initial plans for real-life at sea experiments that will involve constellations of ocean robots within the digital twin framework
false
https://pretalx.com/juliacon2023/talk/N8ZX8W/
https://pretalx.com/juliacon2023/talk/N8ZX8W/feedback/
32-D463 (Star)
Interpretable Machine Learning with SymbolicRegression.jl
Talk
2023-07-26T14:30:00-04:00
14:30
00:30
SymbolicRegression.jl is a state-of-the-art symbolic regression library written from scratch in Julia using a custom evolutionary algorithm. The software emphasizes high-performance distributed computing, and can find arbitrary symbolic expressions to optimize a user-defined objective – thus offering a very interpretable type of machine learning. SymbolicRegression.jl and its Python frontend PySR have been used for model discovery in over 30 research papers, from astrophysics to economics.
juliacon2023-27126-interpretable-machine-learning-with-symbolicregression-jl
SciML
/media/juliacon2023/submissions/FGZBBB/Screenshot_2023-01-16_at_11.49.52_PM_gii9KoX.png
Miles Cranmer
en
SymbolicRegression.jl is an open-source library for practical symbolic regression, a type of machine learning that discovers human-interpretable symbolic models. SymbolicRegression.jl was developed to democratize and popularize symbolic regression for the sciences, and is built on a high-performance distributed backend, a flexible search algorithm, and interfaces with several deep learning packages. The hand-rolled internal search algorithm is a mixed evolutionary algorithm, which consists of a unique evolve-simplify-optimize loop, designed for optimization of unknown real-valued constants in newly-discovered empirical expressions. The backend is highly optimized, capable of fusing user-defined operators into SIMD kernels at runtime with LoopVectorization.jl, performing automatic differentiation with Zygote.jl, and distributing populations of expressions to thousands of cores across a cluster using ClusterManagers.jl. In describing this software, I will also share a new benchmark, “EmpiricalBench,” to quantify the applicability of symbolic regression algorithms in science. This benchmark measures recovery of historical empirical equations from original and synthetic datasets.
In this talk, I will describe the nuts and bolts of the search algorithm, its efficient evaluation scheme, DynamicExpressions.jl, and how SymbolicRegression.jl may be used in scientific workflows. I will review existing applications of the software (https://astroautomata.com/PySR/papers/). I will also discuss interfaces with other Julia libraries, including SymbolicUtils.jl, as well as SymbolicRegression.jl's PyJulia-enabled link to the ScikitLearn ecosystem in Python.
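To convey the core idea in miniature (a toy sketch in plain Julia; SymbolicRegression.jl's actual evolutionary search and API are far more sophisticated), symbolic regression selects, from a space of candidate expressions, the one that best explains the data:

```julia
# Toy symbolic regression: pick the candidate expression minimizing squared error.
candidates = [
    ("x^2"     , x -> x^2),
    ("2x + 1"  , x -> 2x + 1),
    ("sin(x)"  , x -> sin(x)),
    ("x^2 + x" , x -> x^2 + x),
]

xs = -2.0:0.5:2.0
ys = [2x + 1 for x in xs]  # hidden ground-truth law to be rediscovered

loss(f) = sum((f(x) - y)^2 for (x, y) in zip(xs, ys))

best_name, best_f = argmin(c -> loss(c[2]), candidates)
@assert best_name == "2x + 1"  # the true law is recovered
```

The real package evolves expression trees and optimizes their unknown constants rather than enumerating a fixed list, but the objective, minimizing a fit loss over symbolic expressions, is the same.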
false
https://pretalx.com/juliacon2023/talk/FGZBBB/
https://pretalx.com/juliacon2023/talk/FGZBBB/feedback/
32-D463 (Star)
Neuroblox.jl: biomimetic modeling of neural control circuits
Talk
2023-07-26T15:00:00-04:00
15:00
00:30
Neuroblox.jl is a Julia module designed for computational neuroscience and psychiatry applications. Our tools range from control circuit system identification to brain circuit simulations bridging scales from spiking neurons to fMRI-derived circuits, parameter-fitting models to neuroimaging data, interactions between the brain and other physiological systems, experimental optimization, and scientific machine learning.
juliacon2023-26949-neuroblox-jl-biomimetic-modeling-of-neural-control-circuits
SciML
Helmut Strey
en
Neuroblox.jl is based on a library of modular computational building blocks (“blox”) in the form of systems of symbolic dynamic differential equations that can be combined to describe large-scale brain dynamics. Once a model is built, it can be simulated efficiently and fit electrophysiological and neuroimaging data. Moreover, the circuit behavior of multiple model variants can be investigated to aid in distinguishing between competing hypotheses.
We employ ModelingToolkit.jl to describe the dynamical behavior of blox as symbolic (stochastic/delay) differential equations. Our libraries of modular blox consist of individual neurons (Hodgkin-Huxley, IF, QIF, LIF, etc.), neural mass models (Jansen-Rit, Wilson-Cowan, Larter-Breakspear, Next Generation, microcanonical circuits, etc.) and biomimetically-constrained control circuit elements. A GUI designed to be intuitive to neuroscientists allows researchers to build models that automatically generate high-performance systems of numerical ordinary/stochastic differential equations, from which one can run simulations with parameters fit to experimental data. Our benchmarks show that the increase in simulation speed often exceeds a factor of 100 compared to neural mass model implementations in The Virtual Brain (Python) and similar packages in MATLAB. For parameter fitting of brain circuit dynamical models, we use Turing.jl to perform probabilistic modeling, including Hamiltonian Monte Carlo sampling and Automatic Differentiation Variational Inference.
This talk will demonstrate Neuroblox.jl by building small dynamic brain circuits. We construct circuits by first defining the nodes (“blox”) and then either defining an adjacency matrix, building directed graphs, or sketching the circuit in our GUI to assemble the brain circuit ODE systems. Using symbolic equations through ModelingToolkit.jl allows us to define additional parameters that are varied across the system, which can later be used for fitting the model to data. We will also demonstrate the simulation of networks of several hundred spiking neurons representing a biomimetic reinforcement learning model. We present visual patterns to these spiking neuron networks and apply a learning algorithm that adjusts the weights to classify the different patterns. The reinforcement learning feedback is implemented by frequent callbacks into the ODE system and allows us to monitor the learning rate continuously. In addition, we will show examples of a larger brain circuit generating an emergent systems-level cortico-striatal-thalamo-cortical loop, which selectively responds to learned visual patterns and matches many of the dynamic behaviors observed in experimental data from non-human primates. Finally, we will show benchmarks comparing Neuroblox.jl to other implementations of brain circuit simulations. We hope to convince the audience that Julia is the ideal language for computational neuroscience applications.
false
https://pretalx.com/juliacon2023/talk/L9S3HT/
https://pretalx.com/juliacon2023/talk/L9S3HT/feedback/
32-D463 (Star)
Extending JumpProcesses.jl for fast point process simulation
Talk
2023-07-26T15:30:00-04:00
15:30
00:30
Point processes model the occurrence of a countable number of random points over some support. They are useful for describing phenomena including chemical reactions, stock market transactions and social interactions. In this talk, we show that JumpProcesses.jl is a fast, general purpose library for simulating point processes, and describe extensions to JumpProcesses.jl that significantly speed up the simulation of point processes with time-varying intensities.
juliacon2023-26745-extending-jumpprocesses-jl-for-fast-point-process-simulation
SciML
/media/juliacon2023/submissions/NNVBG8/screenshot-proposal_Dmzaqtw.png
Guilherme Augusto Zagatti
en
Jump and point processes share many similarities despite their somewhat independent developments. JumpProcesses.jl was first developed for simulating jump processes via stochastic simulation algorithms (SSAs) (including Doob’s method, Gillespie’s methods, and Kinetic Monte Carlo methods). Historically, jump processes have been developed in the context of dynamical systems with the objective of describing dynamics with discrete jumps. In contrast, the development of point processes has been more focused on describing the occurrence of random events. In this talk we bridge the gap between the treatment of point and jump process simulation. The algorithms previously included in JumpProcesses.jl can be mapped to three general methods developed in statistics for simulating evolutionary point processes. Our comparative exercise reveals that the library originally lacked an efficient algorithm for simulating processes with variable intensity rates. We therefore extended JumpProcesses.jl with a new simulation algorithm, Coevolve, that enables the rapid simulation of processes with locally-bounded variable intensity rates. It is now possible to efficiently simulate any point process on the real line with a non-negative, left-continuous, history-adapted and locally bounded intensity rate. This extension significantly improves the computational performance of JumpProcesses.jl when simulating such processes, enabling it to become one of the only readily available, fast, general purpose libraries for simulating evolutionary point processes.
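As a hedged sketch of the bounded `VariableRateJump` interface that `Coevolve` builds on (names follow the JumpProcesses.jl docs; the self-exciting rate function and parameter values are illustrative):

```julia
using JumpProcesses

# A self-exciting counting process: the intensity grows with the count u[1].
# Between jumps the rate is constant here, so it bounds itself locally.
rate(u, p, t) = p.λ + p.α * u[1]               # current intensity
lrate(u, p, t) = p.λ                           # lower bound on the rate
urate(u, p, t) = p.λ + p.α * u[1]              # local upper bound
rateinterval(u, p, t) = 1.0                    # window on which the bounds hold
affect!(integrator) = (integrator.u[1] += 1)   # a jump increments the count

jump = VariableRateJump(rate, affect!; lrate, urate, rateinterval)
dprob = DiscreteProblem([0], (0.0, 10.0), (λ = 0.5, α = 0.1))
jprob = JumpProblem(dprob, Coevolve(), jump; dep_graph = [[1]])
sol = solve(jprob, SSAStepper())
```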
false
https://pretalx.com/juliacon2023/talk/NNVBG8/
https://pretalx.com/juliacon2023/talk/NNVBG8/feedback/
32-D463 (Star)
Thoughts for the Next Generation
Talk
2023-07-26T16:00:00-04:00
16:00
00:30
The first version of Modelica was released in 1997. Today, over 25 years later, it is still going strong. At the same time, Julia is taking the numerical computing world by storm and ModelingToolkit is revisiting many of the topics and applications that drove the development of Modelica. This talk will highlight many of the important aspects of Modelica's design in the hope that these are taken to heart by the developers of the next generation of modeling and simulation tools.
juliacon2023-26212-thoughts-for-the-next-generation
SciML
Michael Tiller
en
Modelica is an interesting mix of basically three different topics. At the lowest level, Modelica is concerned with simulation (solving non-linear equations, integrating differential equations, computing Jacobians). In this way, it is very much aligned with Julia and my sense is that Julia has already exceeded what Modelica tools can offer at this level.
The next level above that is the ability to perform symbolic manipulation on the underlying equations so as to generate very efficient simulation code. The capabilities in this layer are what pushed Modelica forward into applications like real-time hardware-in-the-loop because symbolic manipulation provides enormous benefits here that other tools, lacking symbolic manipulation, could not match. There were some amazing contributions from Pantelides, Elmqvist, Otter, Cellier, and many others in this area and it is important that people understand these advances.
Finally, you have the language itself. While the symbolic manipulation opened many doors to extend what was possible with modeling and simulation, the language that was layered on top addressed very important ideas around usability. It brought in many important ideas from software engineering to make modeling and simulation a scalable enterprise (allowing model developers to create models with potentially thousands of components and tens or even hundreds of thousands of equations). But it also brought usability with a representation that quite naturally and seamlessly allowed graphical, schematic-based modeling to be implemented consistently by a number of different tools.
None of this is to say that Modelica is perfect or that better tools aren't possible. It is simply to raise awareness of all the things Modelica got right so that future generations can build on that and create even more amazing and capable tools to push the frontiers of modeling and simulation even further.
false
https://pretalx.com/juliacon2023/talk/HUCMKV/
https://pretalx.com/juliacon2023/talk/HUCMKV/feedback/
32-D463 (Star)
StochasticAD.jl: Differentiating discrete randomness
Talk
2023-07-26T16:30:00-04:00
16:30
00:30
Automatic differentiation (AD) is great: use gradients to optimize, sample faster, or just for fun! But what about coin flips? Agent-based models? Nope, these aren’t differentiable... or are they? StochasticAD.jl is an open-source research package for AD of stochastic programs, implementing AD algorithms for handling programs that can contain discrete randomness.
juliacon2023-26968-stochasticad-jl-differentiating-discrete-randomness
SciML
/media/juliacon2023/submissions/RRBDAA/path_skeleton_5Z2CCtM.png
Gaurav AryaFrank Schäfer
en
StochasticAD.jl is an open-source research package for automatic differentiation (AD) of stochastic programs. The particular focus is on implementing AD algorithms for handling programs that can contain *discrete* randomness. But what does this even mean?
Derivatives are all about how functions are affected by a tiny change ε in their input. For example, take the function sin(x). Perturb x by ε, and the output changes by approximately cos(x) * ε: tiny change in, tiny change out. And the coefficient cos(x)? That's the derivative!
But what happens if your function is discrete and random? For example, take a Bernoulli variable, with probability p of being 1 and probability 1-p of being 0. If we perturb p by ε, *the output of the Bernoulli variable cannot change by a tiny amount*. But in the probabilistic world, there is another way to change by a tiny amount *on average*: jump by a large amount, with tiny probability.
StochasticAD.jl generalizes the well-known concept of [dual numbers](https://en.wikipedia.org/wiki/Dual_number) by including a *third* component to describe large perturbations with infinitesimal probability. The resulting object is called a *stochastic triple*, and StochasticAD.jl develops the algorithms to propagate this triple through user-written code involving discrete randomness. Ultimately, the result is a *provably unbiased* estimate of the derivative of your program, even if it contains discrete randomness!
In this talk, we will discuss the workings of StochasticAD.jl, including the underlying theory and the technical implementation challenges.
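A hedged sketch of the package's documented interface (`stochastic_triple` and `derivative_estimate`), using the Bernoulli variable from the description above; here E[f(p)] = p, so the true derivative is 1:

```julia
using StochasticAD, Distributions, Statistics

# A program with discrete randomness: returns 0 or 1
f(p) = rand(Bernoulli(p))

# Propagate a stochastic triple through f at p = 0.5
st = stochastic_triple(f, 0.5)

# Average many unbiased single-sample estimates of d/dp E[f(p)] = 1
samples = [derivative_estimate(f, 0.5) for _ in 1:10_000]
println(mean(samples))  # should be close to 1
```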
false
https://pretalx.com/juliacon2023/talk/RRBDAA/
https://pretalx.com/juliacon2023/talk/RRBDAA/feedback/
32-141
Morning Break Day 1 Room 1
Break
2023-07-26T11:15:00-04:00
11:15
00:15
Morning break for coffee and snacks
juliacon2023-28123-morning-break-day-1-room-1
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/QDGWRG/
https://pretalx.com/juliacon2023/talk/QDGWRG/feedback/
32-141
Debugging time spent in LLVM with SnoopCompile
Lightning talk
2023-07-26T11:30:00-04:00
11:30
00:10
SnoopCompile can measure the time LLVM spends optimizing Julia code. In this talk I share my experience using the `@snoopl` macro to debug the time spent optimizing each function in LLVM.
juliacon2023-27017-debugging-time-spent-in-llvm-with-snoopcompile
Julia Base and Tooling
Guilherme Bodin
en
In this talk, I will present my current workflow for debugging the time LLVM spends optimizing functions, and share my experience using SnoopCompile to understand where the problems in a large code base are, in order to achieve a faster Time To First X (TTFX).
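As a hedged sketch of that workflow (per the SnoopCompile docs; `MyPkg`, its entry point, and the output file names are placeholders):

```julia
using SnoopCompileCore

# Record per-function LLVM optimization timings while compiling a workload
@snoopl "func_names.csv" "llvm_timings.yaml" begin
    using MyPkg
    MyPkg.main_entry_point()
end

using SnoopCompile
# Parse the logs: per-function LLVM time, filtered to expose the hotspots
times, info = SnoopCompile.read_snoopl("func_names.csv", "llvm_timings.yaml";
                                       tmin_secs = 0.025)
```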
false
https://pretalx.com/juliacon2023/talk/N3E8BX/
https://pretalx.com/juliacon2023/talk/N3E8BX/feedback/
32-141
Pinning Julia Threads to CPU-Cores with ThreadPinning.jl
Lightning talk
2023-07-26T11:40:00-04:00
11:40
00:10
Have you ever wondered how to bind a thread to a specific CPU-core in Julia? And why that might be useful in the first place? Then this talk is for you! I demonstrate how to easily query and control the affinity of your Julia threads using ThreadPinning.jl and present instructive examples that highlight the importance of thread pinning, especially on HPC clusters.
juliacon2023-26245-pinning-julia-threads-to-cpu-cores-with-threadpinning-jl
Julia Base and Tooling
Carsten Bauer
en
In this talk, I will introduce ThreadPinning.jl, a Julia package that makes controlling the affinity of your Julia threads simple (and fun!). First, you'll learn how to get an overview of the system topology and find out where your Julia threads are currently running. I'll then explain how to readily pin your Julia threads to specific cores, sockets, or memory domains (NUMA) either interactively from inside the Julia REPL or via environment variables or Julia preferences. Applying these techniques, we will study a few instructive examples, such as a (memory bound) streaming kernel, to understand the impact of thread pinning. In particular, we'll consider different pinning schemes on a typical HPC cluster node (dual-socket system). Finally, I'll give an outlook on future features (e.g. interactively pinning BLAS threads) and discuss potential Windows support.
Disclaimer: Linux only (for now)
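For orientation, a hedged sketch of the core calls (per the ThreadPinning.jl docs; Linux only):

```julia
using ThreadPinning

threadinfo()         # color-coded overview of system topology and thread placement
pinthreads(:cores)   # pin Julia threads to distinct physical cores
getcpuids()          # query which CPU-threads the Julia threads currently run on
pinthreads(:sockets) # alternatively, spread threads across sockets...
pinthreads(:numa)    # ...or across NUMA memory domains
```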
false
https://pretalx.com/juliacon2023/talk/MBS3YY/
https://pretalx.com/juliacon2023/talk/MBS3YY/feedback/
32-141
Shipping Julia Packages with System-Images
Lightning talk
2023-07-26T11:50:00-04:00
11:50
00:10
The Julia ecosystem has super fast and performant packages, but not always upon first use. This may be alleviated using system images and this talk presents a convenient workflow to ship them. Additionally, it discusses various challenges when doing so and shares our experience in reliably tackling them.
juliacon2023-26969-shipping-julia-packages-with-system-images
Julia Base and Tooling
Venkatesh PrasadJoris Kraak
en
Julia's packages usually compose well with each other. Hence it is common to see people using different packages together. It is often a design choice to split work across different packages.
Often, users add all the packages used in a tutorial or documentation to a local environment and end up with different versions, often leading to unintended results. A super-set Manifest.toml could partly solve this, but a simple Pkg.up or similar operation could alter it. Did you say you pinned it? Instantiation and precompilation still lurk around. It is unreasonable to keep users waiting to use your highly performant, super-fast package.
With system images, one can start using packages quickly. However, you can't hand them over and expect them to work all the time. System images can sometimes fail to build, and often fail to relocate. Occasionally, they fail to start even on the same machine. We will walk through how to interpret and fix these errors.
We need to deal with lots of releases of packages. One should ensure that the latest and greatest is delivered while maintaining consistency.
We want to talk about how to alleviate the issues in shipping with a mostly-contained and relocatable bundle that includes various components like a relocatable sysimage, additional packages for `stdlib` and a stackable depot with a pinned environment and more. These bundles should include all the artifacts necessary to run while not being bloated by them.
Distributing private registries and packages is tricky. We will discuss how to enable it without giving away the source code.
Our talk aims at decluttering the workflow of shipping packages by explaining how to set one up, and the pitfalls: how to recognize and avoid them. It should give a better idea of how people can reliably build and share relocatable products to increase user adoption.
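The abstract does not name a specific tool, but as a hedged illustration, a basic (non-relocatable) sysimage build with PackageCompiler.jl looks roughly like this; `MyPkg` and the file paths are placeholders:

```julia
using PackageCompiler

# Build a system image that bakes in MyPkg and its compiled code.
# `precompile_execution_file` runs a workload whose compiled methods are
# cached into the image, cutting time-to-first-X for end users.
create_sysimage(
    ["MyPkg"];
    sysimage_path = "MyPkgSysimage.so",
    precompile_execution_file = "precompile_workload.jl",
)
```

The resulting image is then loaded with `julia --sysimage MyPkgSysimage.so`; making such images relocatable across machines is exactly the hard part this talk addresses.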
false
https://pretalx.com/juliacon2023/talk/S8DTUG/
https://pretalx.com/juliacon2023/talk/S8DTUG/feedback/
32-141
IsDef.jl: maintainable type inference
Talk
2023-07-26T12:00:00-04:00
12:00
00:30
IsDef.jl provides maintainable type inference in that it
1. Uses code generation where possible to deterministically infer the types of your code
2. Allows you to overload type-inference with your custom implementation
3. If code generation does not apply and no custom type-inference rule is given, it falls back to Core.Compiler.return_type, wrapped in some safety nets
In this talk IsDef.jl is presented, along with typical applications and details about the implementation.
juliacon2023-27049-isdef-jl-maintainable-type-inference
Julia Base and Tooling
Stephan Sahm
en
I am super happy to announce the release of IsDef.jl. For years I have wanted to be able to check whether some function is defined for my types. Plain `applicable` was not enough for me, because it only inspects the first layer; if that layer is a generic function (like syntactic sugar), it won't give meaningful results. That was the motivation.
It turns out you need full-fledged type inference for this; however, Julia's default type inference is not intended for use in packages. Inference is not guaranteed to be stable, may be nondeterministic, and may change in any minor Julia version, which makes it really hard to rely on for maintainability. The purpose of Julia's default type inference is just code optimization, and as such it is an implementation detail of Julia.
Hence, there is a need for another type inference system, or at least a layer around Julia's default inference which makes it maintainable. Welcome to `IsDef`.
The high-level interface of IsDef is very simple, consisting of two functions, `isdef` and `Out`. Their usage will be explained, along with implementation details and design decisions.
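A hedged sketch of that interface, following the package README (the exact return values may differ by platform and version):

```julia
using IsDef

isdef(+, Int, Int)   # true: `+` is inferred to work on two Ints
isdef(sin, String)   # false: no meaningful method all the way down
Out(+, Int, Int)     # the inferred return type (Int on 64-bit systems)
```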
The package version is 0.1.x - suggestions, criticism, and ideas for improving the way inference is made maintainable are very welcome, so that IsDef.jl can become the go-to package for maintainable type inference.
false
https://pretalx.com/juliacon2023/talk/HXFG7X/
https://pretalx.com/juliacon2023/talk/HXFG7X/feedback/
32-141
Lunch Day 1 (Room 1)
Lunch Break
2023-07-26T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28066-lunch-day-1-room-1-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/UB73KY/
https://pretalx.com/juliacon2023/talk/UB73KY/feedback/
32-141
FlowFPX: Nimble tools for debugging floating-point exceptions
Talk
2023-07-26T14:00:00-04:00
14:00
00:30
Reliable numerical computations are central to HPC and ML. We present FlowFPX, a Julia-based tool for tracking the onset and flow of IEEE floating-point exceptions that signal numerical defects. FlowFPX's design exploits Julia's operator overloading to trace exception flows and even inject exceptions to accelerate testing. We present intuitive visualizations of summarized exception flows, including how they are generated, propagated, and killed, thus helping with debugging and repair.
juliacon2023-26903-flowfpx-nimble-tools-for-debugging-floating-point-exceptions
Julia Base and Tooling
Ashton Wiersdorf
en
Imagine a busy scientist developing a numerical program P in Julia using a mix of code running on CPUs and offloaded GPU codes. Suppose the program runs to completion, but it has many NaNs ("not a number" in IEEE floating-point arithmetic) in the result.
Unable to proceed meaningfully, the scientist resorts to printing out the NaNs by decoding the values (a hugely time-consuming, hit-or-miss proposition). After weeks of work, they discover that one particular division operation within an inner Julia library function J and another sqrt() inside a GPU library function G are two sources of NaNs.
Probing further, the scientist discovers that function J is called along two call paths P1 and P2. Path P1 conducts the NaN generated by function J to the output result. Path P2, on the other hand, goes through a Julia less-than (<) function that "kills" the NaN by silently consuming it in the computation.
The scientist decides to rewrite this function to conduct the NaN to the output (call it "failure manifestation" for debugging), but they are unsure if there is another call path P3 that also can be activated under a different input, and whether P3 might also kill the NaN sometimes.
Curious about why function G generates NaN, the scientist seeks the CUDA sources for it; unfortunately, they discover that G is supplied by NVIDIA in binary form with no documentation.
The scientist now hears about OUR NEW TOOL FlowFPX---a unique contribution that has many attractive features:
- (a) FlowFPX can run program P unmodified, and shows all the call-paths through the code impinging on functions J and G. Across the thousands of numerical iterations of code C, FlowFPX summarizes all the paths that cause J and G to GENERATE NaNs (gen), the paths that PROPAGATE NaNs (prop), and paths that KILL the NaNs (kill)---a much more comprehensive report that is generated automatically.
- (b) FlowFPX even produces a nice graphical visualization of gen, prop, and kill.
- (c) To find additional lurking paths such as P3, FlowFPX can induce stress by making any floating-point operator foo() artificially generate NaNs. This simulates an (as yet unseen) input which might have caused foo() to spit out a NaN. Using this facility, the scientist discovers ways to failure-manifest additional paths in program P.
- (d) The scientist also discovers that FlowFPX comes with a companion tool called GPU-FPX that can examine binary-only GPU codes using NVIDIA-provided binary instrumentation. This way, if G internally generates a NaN but silently kills this NaN inside the code, the scientist can do one of two things: (i) see if NVIDIA provides an alternative implementation of G that helps failure-manifest. (ii) artificially generate a NaN at the return site of the G call even if G silently kills the NaN inside. This helps keep the program P more transparent and reliable in that internal NaNs are not lost during test runs.
The scientist then goes and reads the documentation of FlowFPX and finds these important facts that further make them a fan of the FlowFPX tool:
- (a) NaN-bugs are not rare. A recent bug-report is https://forums.developer.nvidia.com/t/sqrt-of-positive-number-is-nan-with-newer-drivers/219078/15.
- (b) GPU-FPX is released at https://github.com/LLNL/GPU-FPX, and backed by a publication by PhD student Xinyi Li cited therein
- (c) FlowFPX was inspired by Sherlogs.jl https://github.com/milankl/Sherlogs.jl that has been proven useful in examining Julia codes run on the Fugaku supercomputer.
- (d) The study of floating-point exceptions is hugely important, and even for important libraries, there is disagreement on how to handle them, as discussed in https://arxiv.org/pdf/2207.09281.pdf written by Demmel et al.
false
https://pretalx.com/juliacon2023/talk/A3LVDS/
https://pretalx.com/juliacon2023/talk/A3LVDS/feedback/
32-141
Julia's Extensible High Performance Sorting Pipeline
Talk
2023-07-26T14:30:00-04:00
14:30
00:30
Julia 1.9 provides remarkable facilities for sorting including lightning fast default algorithms and an extensible system that supports both package authors who provide additional sorting options and users with domain-specific knowledge about their input distributions. This talk explains the Julia sorting pipeline: how these features are made possible and how you can benefit from using them.
juliacon2023-24007-julia-s-extensible-high-performance-sorting-pipeline
Julia Base and Tooling
/media/juliacon2023/submissions/XYHJXA/sorting_Xa1Bemb.svg
Lilith Hafner
en
Julia's new sorting pipeline uses nested algorithm types that allow a user to precisely specify a custom pipeline or modify existing pipelines by adding preprocessing or post-processing steps using either dispatch on custom types or explicit algorithm specification.
In sum, these improvements make Julia the programming language that provides the highest performance sorting in many common cases as well as the most power and flexibility in defining specialized systems for niche use cases.
Finally, this talk will touch on some of the directions for future work within the Julia sorting ecosystem, both in the core language and in SortingAlgorithms.jl.
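As a small illustration of the explicit-algorithm entry point (these keywords are standard `Base.sort` API; the custom-dispatch route is beyond this sketch):

```julia
v = rand(1:100, 1_000)

sort(v)                      # default pipeline: dispatches to a suitable algorithm
sort(v; alg = InsertionSort) # explicitly specify an algorithm
sort(v; by = abs2, rev = true) # pre-processing (`by`) and post-processing (`rev`) steps
```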
false
https://pretalx.com/juliacon2023/talk/XYHJXA/
https://pretalx.com/juliacon2023/talk/XYHJXA/feedback/
32-141
Julia meets the Intelligence Processing Unit
Talk
2023-07-26T15:00:00-04:00
15:00
00:30
In this talk I will describe the effort to run Julia code on the _Intelligence Processing Unit_ (IPU), a massively parallel accelerator developed by Graphcore and powered by 1472 cores.
juliacon2023-26980-julia-meets-the-intelligence-processing-unit
Julia Base and Tooling
Mosè Giordano
en
While Julia natively runs only on capable enough CPUs, the [`GPUCompiler.jl`](https://github.com/JuliaGPU/GPUCompiler.jl) package allows generating code for other devices where Julia itself wouldn't be able to run (for example because of memory constraints or lack of support), thus offloading specific tasks to accelerators or other dedicated hardware.
The _Intelligence Processing Unit_ (IPU) is an accelerator for massively parallel workflows developed by Graphcore and powered by 1472 cores. The IPU is typically employed in machine-learning applications through the optimised libraries developed by Graphcore, but the flexibility of this processor also makes it suitable for many other numerical programs. Julia offers the opportunity to write high-level custom code that can be compiled down to native code for the IPU with `GPUCompiler.jl`, without having to directly interface with the SDK provided by Graphcore.
In this talk I will describe the effort, in collaboration with researchers at the Simula Research Laboratory, to run Julia code on the IPU, presenting simple benchmarks and applications that also involve `DifferentialEquations.jl` and `Enzyme.jl`.
false
https://pretalx.com/juliacon2023/talk/7BCPNX/
https://pretalx.com/juliacon2023/talk/7BCPNX/feedback/
32-141
Quickdraw: The simplest Julia package deployment system
Lightning talk
2023-07-26T15:30:00-04:00
15:30
00:10
The Julia ecosystem provides rich interoperability among Julia packages but it has been nontrivial to deploy functionality from a Julia package to non-technical users. Quickdraw is a simple combination of existing tools that installs a runnable app from any Julia package that defines a `main` function. All the end user needs to do is run a single command; all the developer needs to provide is that command.
juliacon2023-24850-quickdraw-the-simplest-julia-package-deployment-system
Julia Base and Tooling
Lilith Hafner
en
This cross-platform deployment system already supports every public Julia package with a `main` function, so the development time required of package authors is minimal. Following a similar deployment principle as Juliaup, and using Juliaup to install Julia when it is not already installed, Quickdraw is tested against Mac and Linux systems with and without Julia pre-installed, and Windows systems that already have Julia installed. Unfortunately, this deployment system does not solve the ecosystem-wide time-to-first-X issues.
This talk will discuss the usability and security implications of the installation system.
---
# Live Example
To install a game of minesweeper on Linux or Mac, run the following command:
```
curl -fLsS https://lilithhafner.com/quickdraw | sh -s https://github.com/LilithHafner/Minesweeper.jl
```
To install this software on Windows, install Julia and then run the following command:
```
(echo julia -e "import Pkg; Pkg.activate(\"Minesweeper\", shared=true); try Pkg.add(url=\"https://github.com/LilithHafner/Minesweeper.jl\"); catch; println(\"Warning: update failed\") end; using Minesweeper: main; main()" %0 %* && echo pause) > Minesweeper.bat
```
In both cases, the command will create an executable called `Minesweeper` that can be double clicked to run.
false
https://pretalx.com/juliacon2023/talk/BXSCXM/
https://pretalx.com/juliacon2023/talk/BXSCXM/feedback/
32-141
GC Developments
Lightning talk
2023-07-26T15:40:00-04:00
15:40
00:10
RelationalAI, Australia National University, MIT Julia Lab and JuliaHub present our work improving Julia's Garbage Collector performance and stability, and adding support for alternate GC backends, starting with the Memory Management Toolkit (MMTk).
juliacon2023-30957-gc-developments
Julia Base and Tooling
Nathan DalyDiogo Netto
en
RelationalAI, Australia National University, MIT Julia Lab and JuliaHub present our work improving Julia's Garbage Collector performance and stability, and adding support for alternate GC backends, starting with the Memory Management Toolkit (MMTk).
false
https://pretalx.com/juliacon2023/talk/BMBEGY/
https://pretalx.com/juliacon2023/talk/BMBEGY/feedback/
32-141
What's new with JET.jl
Lightning talk
2023-07-26T15:50:00-04:00
15:50
00:10
JET.jl is a static code analyzer for Julia that is powered by Julia compiler's type inference system. This talk aims to introduce new features that have been added to JET over the past two years, as well as discuss the future plans of the project.
juliacon2023-26863-what-s-new-with-jet-jl
Julia Base and Tooling
Shuhei Kadowaki
en
[JET.jl](https://github.com/aviatesk/JET.jl), a static code analyzer for Julia, is now widely used in the community. The purpose of this talk is to introduce new features and enhancements that have been added to JET over the past two years. These include:
- Integration with Test.jl's unit-testing system
- Improvements in type inference, such as alias-analysis on immutable fields
- Various configurations
- Shipping with the base Julia
Additionally, if time allows, I will also talk a bit about future plans of the project.
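As a hedged sketch, JET's entry points and the Test.jl integration mentioned above look roughly like this (`@report_call` and `@test_call` are JET's documented macros; the example functions are illustrative):

```julia
using JET, Test

# `sum(::String)` has no method, so this branch would raise a MethodError
f(x) = x < 1 ? sum("hi") : x

@report_call f(2)   # static analysis: reports the potential error without running f

g(x) = x + 1
@testset "JET integration" begin
    @test_call g(2)  # passes: no errors detected in g
end
```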
false
https://pretalx.com/juliacon2023/talk/EM7FDX/
https://pretalx.com/juliacon2023/talk/EM7FDX/feedback/
32-141
Profiling parallel Julia code with Nsight Systems and NVTX.jl
Lightning talk
2023-07-26T16:00:00-04:00
16:00
00:10
Understanding the performance of parallel code is tricky, and Julia can make it even more opaque: with asynchronous tasks, multithreading, distributed computing, garbage collection, GPU support and calls to many external libraries, getting a full understanding of what your code is doing can be rather complicated. This talk will describe how to use Nvidia Nsight Systems to understand what your parallel Julia code is doing.
juliacon2023-27021-profiling-parallel-julia-code-with-nsight-systems-and-nvtx-jl
Julia Base and Tooling
/media/juliacon2023/submissions/FS7HWF/Screenshot_2023-01-15_at_1.23.57_PM_YJZQqEd.png
Simon Byrne
en
[Nvidia Nsight Systems](https://developer.nvidia.com/nsight-systems) is a powerful profiling tool for analyzing code performance, especially when working with parallel or asynchronous code, even without a GPU. This talk will give a short overview of Nsight, as well as the [NVTX.jl](https://github.com/simonbyrne/NVTX.jl) package for instrumenting Julia code, using examples of both multithreaded and MPI distributed code.
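A hedged sketch of instrumenting code with NVTX.jl (`NVTX.@annotate` and `NVTX.@range` are the package's documented macros; the workload is illustrative). The script would then be run under `nsys profile julia script.jl`:

```julia
using NVTX

# Annotate a whole function: every call appears as a range in the timeline
NVTX.@annotate function inner_work(n)
    sum(abs2, rand(n))
end

function main()
    data = NVTX.@range "setup" rand(10^6)  # mark an explicit region
    Threads.@threads for _ in 1:8          # ranges are recorded per thread
        inner_work(10^5)
    end
    sum(data)
end

main()
```

Without an attached profiler the NVTX calls are cheap no-ops, so instrumentation can stay in place.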
false
https://pretalx.com/juliacon2023/talk/FS7HWF/
https://pretalx.com/juliacon2023/talk/FS7HWF/feedback/
32-141
Expronicon: a modern toolkit for meta-programming in Julia
Lightning talk
2023-07-26T16:10:00-04:00
16:10
00:10
Expronicon is a toolkit for metaprogramming in Julia, offering a rich set of functions for analyzing, transforming, and generating Julia expressions, first-class support for MLStyle's pattern matching, and type-stable algebraic data types for efficient and simple code generation. Perfect for boosting productivity and improving coding efficiency.
juliacon2023-26846-expronicon-a-modern-toolkit-for-meta-programming-in-julia
Julia Base and Tooling
Xiu-zhe (Roger) Luo
en
Expronicon is a toolkit built on top of the awesome MLStyle package and its functional programming features. It features
- a rich set of tools for analyzing, transforming, and generating Julia expressions
- type-stable algebraic data types
- compile-time dependency expansion, helping you get rid of dependencies that are not needed at runtime and improve your package's loading time
- ExproniconLite: a lightweight version with zero dependencies for latency-sensitive applications
Please refer to the documentation website for more information: https://expronicon.rogerluo.dev/
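The analyze/generate workflow can be sketched in a few lines (field and function names follow the Expronicon documentation; treat the details as assumptions):

```julia
using Expronicon

# Analyze: split a function definition expression into structured parts.
jl = JLFunction(:(double(x) = 2x))

# Generate: build a new function expression from parts and evaluate it.
new_fn = JLFunction(; name = :twice, args = [:x], body = :(2x))
eval(codegen_ast(new_fn))
```

The same `JLFunction` type thus serves both directions: parsing existing expressions for inspection and assembling new ones for code generation.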
false
https://pretalx.com/juliacon2023/talk/CQEE8M/
https://pretalx.com/juliacon2023/talk/CQEE8M/feedback/
32-141
ScratchQuickSort: a faster version of QuickSort
Lightning talk
2023-07-26T16:20:00-04:00
16:20
00:10
ScratchQuickSort is a novel variation on QuickSort that is typically faster, always stable, and now Julia's default sorting algorithm for generic data.
juliacon2023-25663-scratchquicksort-a-faster-version-of-quicksort
Julia Base and Tooling
/media/juliacon2023/submissions/LZQA3Z/sorting_w9Lcanf.svg
Lilith Hafner
en
By leveraging a scratch space, ScratchQuickSort uses a faster partitioning strategy that emits simpler CPU instructions than QuickSort. Specifically, the order of comparisons and memory reads is independent of the results of those comparisons, allowing better ILP and faster iteration. Due to Julia's integrated sorting pipeline, the scratch space used by ScratchQuickSort is reused throughout the pipeline and is rarely a performance bottleneck.
Nevertheless, ScratchQuickSort is not always faster than QuickSort, and this talk will briefly explain how to avoid those limitations when they do occur.
Should time allow, this talk may also describe how ScratchQuickSort is robust to invalid user orderings that would cause a naive QuickSort to segfault, and how it ensures stability with negligible performance overhead.
ScratchQuickSort was first published (as far as I can tell) when it was introduced to the Julia programming language in https://github.com/JuliaLang/julia/pull/45222
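The comparison-order-independent partition idea can be sketched as follows (a simplified, unstable illustration; the actual implementation in the PR above differs, notably to preserve stability):

```julia
# Branchless partition using a scratch buffer: the WRITE position depends on
# the comparison result, but the order of reads and compares does not,
# enabling better instruction-level parallelism than branchy QuickSort.
function scratch_partition!(v::AbstractVector, scratch::AbstractVector, pivot)
    lo, hi = firstindex(v), lastindex(v)
    i, j = lo, hi
    for k in lo:hi
        x = v[k]
        lt = x < pivot
        scratch[ifelse(lt, i, j)] = x  # place low elements at the front, high at the back
        i += lt
        j -= !lt
    end
    copyto!(v, scratch)
    return i  # first index of the >= pivot block
end
```

Note the upper block comes out reversed here, so this sketch is not stable; it only illustrates why the inner loop compiles to simple, predictable instructions.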
false
https://pretalx.com/juliacon2023/talk/LZQA3Z/
https://pretalx.com/juliacon2023/talk/LZQA3Z/feedback/
32-141
What's new in the Julia extension for VS Code
Lightning talk
2023-07-26T16:30:00-04:00
16:30
00:10
The Julia extension for VS Code provides features that help with developing Julia packages and ease interactive coding. It is widely used in the Julia community and sees continuous development. This talk will go over features and enhancements introduced since last JuliaCon as well as future developments.
juliacon2023-27105-what-s-new-in-the-julia-extension-for-vs-code
Julia Base and Tooling
Sebastian Pfitzner
en
The Julia extension for VS Code already provides a plethora of IDE-like features as well as functionality for interactive workflows. Last year's development cycle was mainly focused on improving the existing features -- both in regards to correctness and robustness. In particular, the live testing functionality was significantly improved and static analysis is much more robust. We have also introduced static analysis for Julia environments, which will check Project.toml and Manifest.toml files for consistency and present notifications about breaking and non-breaking package updates.
false
https://pretalx.com/juliacon2023/talk/PZASV7/
https://pretalx.com/juliacon2023/talk/PZASV7/feedback/
32-141
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
Lightning talk
2023-07-26T16:40:00-04:00
16:40
00:10
My super tiny helpers, which may also help you. Let me present to you SimpleMatch.jl for nice inline dispatch, NotMacro.jl for using `@not` instead of `!`, and ProxyInterfaces.jl for quickly creating proxy types for `Dict`s, `Array`s, etc.
juliacon2023-27062-simplematch-jl-notmacro-jl-and-proxyinterfaces-jl
Julia Base and Tooling
Stephan Sahm
en
Three tiny packages of mine, so tiny that they can be combined into one lightning talk:
SimpleMatch.jl is a super lightweight alternative to Match.jl. It simply reuses standard function dispatch syntax, but makes it available in a simple inline macro syntax that is much easier to read than defining inner functions.
NotMacro.jl answers the request of many coming from Python who would really love to see `not` be part of the normal syntax. NotMacro.jl gives you `@not` to fill this gap.
ProxyInterfaces.jl is built around the idea of a proxy. Say you use `Dict` a lot, but sometimes wish your dictionary had a special extra feature that the default one just does not have. ProxyInterfaces.jl lets you focus on creating that extra feature by automatically defining all standard Dict functions for your proxy type. It also supports `Array` and other standard interfaces.
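To illustrate the proxy idea that ProxyInterfaces.jl automates, here is a hand-rolled sketch (all names hypothetical, not the package's API): a Dict wrapper that forwards the standard interface and adds one extra feature, counting lookups.

```julia
# A proxy around Dict that counts getindex calls. ProxyInterfaces.jl would
# generate the forwarding definitions below automatically.
struct CountingDict{K,V} <: AbstractDict{K,V}
    inner::Dict{K,V}
    hits::Base.RefValue{Int}
end
CountingDict(d::Dict{K,V}) where {K,V} = CountingDict{K,V}(d, Ref(0))

Base.getindex(d::CountingDict, k) = (d.hits[] += 1; d.inner[k])  # the extra feature
Base.setindex!(d::CountingDict, v, k) = (d.inner[k] = v; d)      # plain forwarding
Base.length(d::CountingDict) = length(d.inner)
Base.iterate(d::CountingDict, state...) = iterate(d.inner, state...)
```

Writing such forwarding methods by hand for every interface function is exactly the boilerplate the package removes.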
false
https://pretalx.com/juliacon2023/talk/QA9LQX/
https://pretalx.com/juliacon2023/talk/QA9LQX/feedback/
Online talks and posters
Julia usecases in actuarial science related fields
Online Only Lightning Talk
2023-07-26T09:20:00-04:00
09:20
00:10
We are interested in exploring how the Julia programming language can be applied to certain interesting use cases in the actuarial and financial space. In addition, we may also provide commentary on when to use certain languages in terms of productivity and performance.
juliacon2023-26839-julia-usecases-in-actuarial-science-related-fields
JuliaCon
Yun-Tien Lee
en
We have identified various fields in which to apply Julia, including
- performance improvement. It is fairly straightforward to convert existing code to Julia for immediate performance gains.
- package and library development. The self-contained package management system helps standardize and modularize core and add-on functions. This makes all components better structured and greatly increases the productivity of the team.
- data visualization. As data volume increases, visualizing summaries of the data correspondingly requires more computing power.
- simulation. Julia comes with a huge variety of libraries for simulation across different purposes; to name a few, time series analysis and interest rate models are heavily required components.
- other applications specific to life or health use cases.
We are also interested in how Julia compares with several trending languages, in terms of both productivity and efficiency.
false
https://pretalx.com/juliacon2023/talk/Z3WWTN/
https://pretalx.com/juliacon2023/talk/Z3WWTN/feedback/
Online talks and posters
Nerf.jl: real-time Neural 3D Scene Reconstruction in pure Julia
Online Only Talk
2023-07-26T09:25:00-04:00
09:25
00:30
We demonstrate a set of pure Julia packages to interactively reconstruct and render Neural Radiance Fields (NeRFs), a 3D neural representation of a scene from pictures, using heterogeneous kernel programming. This system was developed at AMD and supports other GPU backends as well.
juliacon2023-26876-nerf-jl-a-real-time-neural-3d-scene-reconstruction-in-pure-julia
JuliaCon
Anton Smirnov
en
Representing scenes as Neural Radiance Fields for novel view synthesis has become one of the most popular approaches in deep learning, in contrast to more classical approaches like triangle-model-based scene reconstruction.
However, training and evaluation of Neural Radiance Fields can be costly and slow,
which limits their real-time applicability.
Our package implements many state-of-the-art NeRF algorithms, including Instant Neural Graphics Primitives with Multiresolution Hash Encoding.
While a similar open-source C++ and CUDA system exists, we implemented this version from scratch in pure Julia, using heterogeneous kernel programming to support many backends.
We implement efficient kernels for ray marching, including acceleration structures,
multiresolution hash encoding and alpha compositing and provide gradient kernels for them.
Additionally, we integrate those kernels with Julia's AD ecosystem using ChainRules.jl.
This allows them to be used independently and with the AD system of your choice.
For visualization and interaction, we bundle all this into an interactive OpenGL application
that enables real-time tweaking of training parameters.
Additional features include the ability to record videos from custom camera paths,
convert a trained NeRF to a triangle mesh using Marching Cubes and
Marching Tetrahedra algorithms.
During the development of this package, we also improved the Julia GPU stack.
We now support ROCm artifacts for AMDGPU.jl and improved the support and stability for RDNA2 GPUs.
In conclusion, we provide a more user-friendly experience, running both on AMDGPU.jl and CUDA.jl. We believe this code is much simpler than its C++/CUDA counterpart, and still delivers competitive performance. We hope this can serve as a good example of how to integrate high-performance 3D graphics and Machine Learning in a GPU-agnostic manner, and contribute to the expansion of the Julia ecosystem.
false
https://pretalx.com/juliacon2023/talk/DESQVZ/
https://pretalx.com/juliacon2023/talk/DESQVZ/feedback/
Online talks and posters
JuliaHealth's Tools for Patient-Level Predictions: Strengthening
Poster
2023-07-26T09:30:00-04:00
09:30
00:30
Working with OMOP CDM involves managing large datasets, requiring efficient data extraction tools. To enhance JuliaHealth's infrastructure, we'll expand tools and enable diverse database connections. This aids in developing a patient-level prediction framework for cohort outcomes, tested on MIMIC and real patient data. The poster is influenced by a Google Summer of Code project, sharing aspects of this journey.
juliacon2023-35730-juliahealth-s-tools-for-patient-level-predictions-strengthening
Biology and Medicine
Fareeda Abdelazeez
en
Working with the OMOP CDM (Observational Medical Outcomes Partnership Common Data Model) involves handling large datasets that require a set of tools for extracting the necessary data efficiently. The first part of the project focuses on improving JuliaHealth's infrastructure by increasing the range of tools available to users. This involves enabling connections to various databases and working with observational health data. Our second goal is to leverage the capacity built in the previous phase to develop a comprehensive framework for patient-level prediction. This framework will predict patient cohort outcomes under given treatments and will be tested on MIMIC data, and potentially on real aggregated and anonymized patient data.
false
https://pretalx.com/juliacon2023/talk/QLUH7A/
https://pretalx.com/juliacon2023/talk/QLUH7A/feedback/
Online talks and posters
Interacting with reality: Julia and Jetson Nano.
Poster
2023-07-26T09:30:00-04:00
09:30
00:30
In this work, the first steps for the creation of a Julia package for the interaction with the GPIO and camera ports of a Jetson Nano are presented.
The input and output modes of the GPIO ports are addressed, while the problem of reading data using PWM is discussed.
In addition, interaction with the camera module for the Jetson Nano using the Julia interface is shown.
juliacon2023-27040-interacting-with-reality-julia-and-jetson-nano-
JuliaCon
Óscar Alvarado
en
Julia's community has grown rapidly in recent years; however, it is important to remember what a privilege it is to be able to use these new technologies. For this reason, expanding the use of Julia to low-cost, easily transportable computers opens up the possibilities of who can use and learn Julia, especially in rural areas or areas with difficult access to technology.
It is common for museums or science fairs to show interactive projects so that the non-science community has a more realistic approach and not just what is seen in books. This is why we believe that the interaction of Julia and reality could be a great opportunity in terms of education and the dissemination of science and technology.
In this project, a package is developed to be able to use the GPIO and camera ports included in the Jetson Nano, from the NVIDIA company.
Julia's interaction with the Jetson Nano's GPIO ports is shown in a variety of ways, including the use of LEDs, buttons, the camera module, and a variety of sensors.
This small-sized computer was chosen because it includes a graphics card, with which you can interact very easily from Julia, thus expanding the scope of projects that can be carried out.
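On Linux boards such as the Jetson Nano, GPIO pins are commonly driven through the kernel's sysfs interface; a package like this one might wrap something similar to the following sketch (`gpio_write` is a hypothetical name, with the sysfs base path parameterized so the logic can be exercised without hardware):

```julia
# Drive a GPIO pin high or low via the Linux sysfs interface.
# On a real board, `base` is "/sys/class/gpio" and requires permissions.
function gpio_write(pin::Integer, value::Bool; base = "/sys/class/gpio")
    dir = joinpath(base, "gpio$pin")
    # Export the pin if the kernel has not exposed it yet.
    isdir(dir) || write(joinpath(base, "export"), string(pin))
    write(joinpath(dir, "direction"), "out")          # configure as output
    write(joinpath(dir, "value"), value ? "1" : "0")  # set the level
    return nothing
end
```

Blinking an LED then reduces to calling `gpio_write(pin, true)` and `gpio_write(pin, false)` in a timed loop.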
false
https://pretalx.com/juliacon2023/talk/F89ZLV/
https://pretalx.com/juliacon2023/talk/F89ZLV/feedback/
Online talks and posters
Evolving Robust Facility Placements
Poster
2023-07-26T09:30:00-04:00
09:30
00:30
Climate change is leading to an increase in natural disasters, such as floods and earthquakes, which can damage critical infrastructure like hospitals and schools. To address this problem, we propose a project that uses evolutionary algorithms to design facility placements that are robust to disaster. By using evolutionary algorithms as a metaheuristic solution to the p-median problem, we show that our approach leads to facility layouts that are more robust to natural disasters.
juliacon2023-25479-evolving-robust-facility-placements
JuliaCon
/media/juliacon2023/submissions/GP9DMT/Screen_Shot_2022-12-15_at_8.17.59_PM_1fLmdP0.png
Will Thompson and Alex Friedrichsen
en
As climate change continues to worsen, natural disasters such as floods, storms, and earthquakes are becoming more frequent and destructive. These disasters often destroy or damage critical infrastructure such as hospitals, schools, and shelters, reducing access to essential services for affected populations. To address this problem, our project focuses on evolving facility placements that are robust to disasters, using evolutionary algorithms and highly optimized tolerance (HOT) as a mechanism for design. We show that our evolved facility layouts are more robust to natural disasters than empirical layouts, providing a potential solution to the challenges posed by climate change and other disasters. Our work also highlights the utility of the Julia programming language in scientific computing and specifically in evolutionary computation.
Our work draws on two bodies of literature, the p-median problem and highly optimized tolerance (HOT). We formulate the p-median problem as follows: given N facilities and a heterogeneous population distribution across an area, find the optimal placement of facilities that minimizes the average distance to the nearest facility. No exact analytical solution exists, but approximate solutions show that the optimal facility density should scale as the population density to the 2/3rds power.
We implement an evolutionary algorithm as a metaheuristic solution to the p-median problem. We compare its performance to other metaheuristics found in the literature such as simulated annealing, and show that evolutionary algorithms converge to near-optimal facility locations and follow the analytically predicted 2/3 scaling relationship.
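The p-median objective described above can be sketched in a few lines (a hedged illustration, not the project's code): the fitness the evolutionary algorithm minimizes is the mean distance from each population point to its nearest facility.

```julia
# Distance from one population point to its nearest facility (2D tuples).
nearest_dist(p, facilities) = minimum(hypot(p[1] - f[1], p[2] - f[2]) for f in facilities)

# p-median objective: mean nearest-facility distance over the population.
pmedian_cost(population, facilities) =
    sum(nearest_dist(p, facilities) for p in population) / length(population)
```

A catastrophe, in this framing, removes facilities within a radius and the robustness of a layout is the resulting increase in `pmedian_cost`.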
The second body of literature revolves around HOT, a mechanism through which engineered systems become robust to a pattern of perturbations through design. Previous work by Carlson, Doyle, and Zhou applied evolutionary algorithms to a percolation lattice model to demonstrate how complexity and robustness can arise in biological contexts.
We extend this work by applying evolutionary algorithms to evolve robust facility locations. We introduce perturbations in the form of catastrophes with locations drawn from a distribution over event locations. All facilities within a specified radius of the catastrophe are eliminated. These catastrophes mimic natural disasters, targeted attacks, or random failures. A robust facility layout minimizes the drop in fitness due to catastrophes. We evolve our facility layout with perturbations. Every generation we apply a set amount of catastrophes to each individual within our population before we evaluate the fitness. We show that the facility layouts evolved with perturbations are more robust than layouts evolved without perturbations. The average decrease in fitness due to a catastrophe is lower in facility layouts evolved with perturbations. We also show that evolutionary algorithms are a viable alternative to simulated annealing in the rugged landscape induced by the stochastic nature of the perturbations. We compare evolution to empirical data and show that our evolved facility layouts are more robust to natural disasters than empirical layouts. This work has the potential to improve the resilience of critical infrastructure to climate change.
We wrote our own evolutionary framework for this project that allows for the introduction of perturbations to the genome during evolution. While evolutionary computation packages exist in Julia, to our knowledge none allow for perturbation during the evolutionary process. This is a contribution to the community because the framework is general and extensible and could be applied to a diverse array of other problems.
Our project makes novel use of Julia's ecosystem of geospatial packages in its application to evolutionary algorithms. The speed of the Julia language is essential. Our algorithm involves tens of thousands of nearest-neighbor assessments for each individual every generation. This computation is prohibitively slow in Python. By leveraging static arrays and Julia's ample multiprocessing capabilities, we were able to make our code several orders of magnitude faster than its Python equivalent. Julia allowed us to implement a full-scale version of our evolutionary algorithm even though relatively little computational power was available to us. As such, our project is a demonstration of the incredible utility of the Julia programming language in scientific computing and specifically in evolutionary computation.
false
https://pretalx.com/juliacon2023/talk/GP9DMT/
https://pretalx.com/juliacon2023/talk/GP9DMT/feedback/
Online talks and posters
HighDimMixedModels.jl
Poster
2023-07-26T09:30:00-04:00
09:30
00:30
I present a package under development, HighDimMixedModels.jl, for fitting high-dimensional mixed-effects regression models. The motivation comes from the analysis of microbial datasets, but the model is well suited to many settings across bioinformatics. In my poster, I explain the usage of the package as well as the underlying statistical theory.
juliacon2023-27028-highdimmixedmodels-jl
JuliaCon
Evan Gorstein
en
I present a package currently under development at http://www.github.com/solislemuslab/HighDimMixedModels.jl for fitting high-dimensional mixed-effects regression models. The original motivation for this package and the mixed-effect sparse learning models it will support comes from the field of microbiology. Microbial communities are among the driving forces of biogeochemical processes, and standard approaches to studying the connection between microbial communities and these biogeochemical phenomena use abundance matrices representing the microbial compositions as the design matrix in a regression or machine learning analysis.
These types of regressions present two major challenges. First, there is often inter-sample correlation structure due to a grouping of samples in space—in soil studies, for example, samples may come from one of several different locations—or time. These inter-sample correlation structures lead to under-powered statistical estimators as well as incorrect inferences if not properly accounted for. Secondly, in these models, the number of microbial taxa present across the range of samples to be included in the analysis, which equals the number of regression coefficients to be estimated, is quite high.
There are existing Julia packages to deal with each of these problems individually. High-dimensional estimation with well-studied guarantees can be done using the Lasso, which has been implemented efficiently in the package Lasso.jl. On the other hand, MixedModels.jl is a popular package for fitting mixed-effects models in Julia. The package under development will deal with models that exhibit both at once: models that incorporate random effects and also have a high-dimensional vector of fixed effects. In particular, I am implementing the estimator proposed by Schelldorfer et al. (2011), which uses a coordinate-gradient-descent algorithm. The package will be called HighDimMixedModels.jl, and it translates the R package lmmlasso built by Schelldorfer, which was subsequently removed from CRAN. My hope is that by harnessing the speed of Julia, my implementation of Schelldorfer's model and algorithm will allow researchers in biology to fit high-dimensional, mixed-effects regression models more efficiently. The software will be accompanied by step-by-step tutorials and examples similar to the ones found in the MixedModels.jl documentation.
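The ℓ1-penalized estimation at the core of this approach can be illustrated with a minimal coordinate-descent Lasso sketch (fixed effects only, ignoring the random effects; `soft_threshold` and `lasso_cd` are hypothetical names, not the package's API):

```julia
using LinearAlgebra

# Soft-thresholding operator, the proximal map of the ℓ1 penalty.
soft_threshold(z, λ) = sign(z) * max(abs(z) - λ, zero(z))

# Coordinate descent for (1/2n)‖y - Xβ‖² + λ‖β‖₁.
function lasso_cd(X, y, λ; iters = 200)
    n, p = size(X)
    β = zeros(p)
    for _ in 1:iters, j in 1:p
        r = y - X * β + X[:, j] * β[j]  # partial residual excluding feature j
        β[j] = soft_threshold(dot(X[:, j], r) / n, λ) / (dot(X[:, j], X[:, j]) / n)
    end
    return β
end
```

The mixed-effects estimator of Schelldorfer et al. interleaves updates like these with updates of the variance components, which is the part the sketch omits.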
While at first glance, the intersection of hierarchical/grouped sampling structure with high dimensional feature space might seem like a niche, highly-specialized setting, it may actually be a common occurrence in biology in the age of -omics data. While the impetus for the development of this package is regression analysis for microbial data, the hope is that this package will be useful to researchers working with other types of data, including metagenomic data, metatranscriptomic data, or even continuous measurements like metabolites, methylation or gene expression. For an example of this last category, the model was successfully applied in [1] to a gene expression data matrix in order to identify which genes are most relevant for the production of riboflavin (vitamin B) in the bacterium Bacillus subtilis.
In my poster, I will explain how to use my package, its high level API and its implementation details. I will also explain the details of the underlying statistical theory. I hope that by introducing my package at JuliaCon, it can be useful to the many bioinformatics researchers who attend the conference.
References:
[1] SCHELLDORFER, J., BÜHLMANN, P., & VAN DE GEER, S. (2011). Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization. Scandinavian Journal of Statistics, 38(2), 197–214. http://www.jstor.org/stable/23015490
false
https://pretalx.com/juliacon2023/talk/THCJ7H/
https://pretalx.com/juliacon2023/talk/THCJ7H/feedback/
Online talks and posters
Pipelines & JobSchedulers for computational workflow development
Online Only Talk
2023-07-26T09:30:00-04:00
09:30
00:30
Computational workflows and pipelines are essential in large-scale data analysis and bioinformatics. Here, we present Pipelines.jl and JobSchedulers.jl, a reproducible and scalable pipeline builder and workload manager. Pipelines.jl provides simple but powerful methods to wrap Julia functions or external commands, and provides data and dependency validation and job retry or skipping. JobSchedulers.jl can allocate CPU and memory to jobs and defer a job until other jobs are finished.
juliacon2023-24317-pipelines-jobschedulers-for-computational-workflow-development
HPC
Jiacheng Chuan, Li Sean
en
Computational workflows and pipelines are essential in large-scale data analysis and bioinformatics. Julia is a fast and easy-to-learn language, with built-in parallelism, well suited to gluing together multiple languages and data types. These features make Julia a strong candidate for a workflow language.
Here, we present two packages, Pipelines.jl and JobSchedulers.jl, as a Julia-based pipeline builder and workload manager. Pipelines.jl is a lightweight and powerful package for building reusable pipelines. JobSchedulers.jl was inspired by Slurm and PBS, and supports allocating CPU and memory to a specific job and deferring a job until other jobs are finished. Julia or external code can be wrapped in a Program type, and Pipelines.jl equips the code with multiple features, including input and output validation, dependency checks, resuming interrupted tasks, retrying failed tasks, and skipping finished tasks.
Pipelines.jl and JobSchedulers.jl have been used in Clasnip (www.clasnip.com), a web-based microorganism classification service with Julia back-end. In addition, BioPipelines.jl is an implementation of Pipelines.jl. It integrates a collection of bioinformatics programs and is fully compatible with PackageCompiler.jl. It solves the relocation of Julia applications and the configuration of external dependencies used in workflows.
Therefore, by combining JobSchedulers.jl and Pipelines.jl, developers and researchers can conveniently build reproducible and scalable workflows in Julia.
false
https://pretalx.com/juliacon2023/talk/ZCNXEB/
https://pretalx.com/juliacon2023/talk/ZCNXEB/feedback/
Online talks and posters
Julia at NCI
Online Only Lightning Talk
2023-07-26T09:45:00-04:00
09:45
00:10
We demonstrate how to use NCI services to run and profile Julia code at scale.
juliacon2023-26747-julia-at-nci
HPC
/media/juliacon2023/submissions/DT3B7Z/NCI_Australia_logo_black_square_mjiAcyY.png
Yue Sun
en
NCI Overview
The National Computational Infrastructure (NCI) is Australia's leading organization for high-performance data, storage, and computing.
As the home of high-performance computational science for Australian research, our highly integrated scientific computing facility provides world-class services to thousands of Australian researchers and their collaborators every year. They consume 1.2 billion CPU hours annually and access our curated datasets from more than 100 countries worldwide.
Julia Support at NCI
Our user community has a very broad range of research interests, ranging from astronomy to bioinformatics, material science to particle physics, and climate simulations to geostorage. To accommodate this diverse demand, NCI maintains a wide selection of software applications, with the number of accumulated software versions exceeding 1000 just three years after the initial launch.
We started supporting Julia at NCI in 2018 on Raijin, the predecessor to Gadi. Now, more Singularity modules containing Julia are available to users. These modules ship with over 500 popular packages to help users quickly start their computations and try different workflows on many cores/nodes without worrying about installation.
Users can run their Julia code in Jupyter notebooks using the prepared module through the Australian Research Environment (ARE). The ARE environment provides an interface to all types of compute architectures available on our HPC system, Gadi. For example, users can run their Julia code on GPUs with CUDA support across multiple nodes using either native SSHManager or MPIManager, while monitoring the GPU performance.
We are going to show three examples in this talk. First, an Oceananigans ShallowWaterModel using two GPUs in a JupyterLab instance while monitoring GPU usage. Second, training the game Connect Four with AlphaZero.jl across two GPU nodes and profiling the code using NVIDIA Nsight Systems. Third, AEM inversion using HiQGA.jl on 7104 CPU cores, profiling the code with PProf.jl and Intel VTune.
false
https://pretalx.com/juliacon2023/talk/DT3B7Z/
https://pretalx.com/juliacon2023/talk/DT3B7Z/feedback/
Online talks and posters
Parallel Power System Dynamics Simulation toolbox in Julia
Poster
2023-07-26T09:55:00-04:00
09:55
00:30
This talk presents performance improvements in a parallel time-domain simulation toolbox under development in Julia for executing simulations of power system dynamics. The algorithm applies a branch splitting parallel-in-space decomposition scheme to create parallelizable subnetworks in the network solution of the power system analysis problem. The performance of the improved algorithm is evaluated on a supercomputing cluster and shows enhanced computation speedup in complex networks.
juliacon2023-26911-parallel-power-system-dynamics-simulation-toolbox-in-julia
JuliaCon
Michael Kyesswa
en
Computational simulations are important in the design, operation and analysis of power systems in order to ensure a secure and stable operation of power grids. The current power system operating environment, however, shows several transformations in the grid structure as a result of increasing operation of large interconnected networks, an increase in electricity demand from e.g. electric vehicles and heat pumps, and the increasing integration of renewable energy sources in the energy transition context. These changes directly impose additional requirements to the stability analysis process, whereby the time-domain simulations widely used for dynamic stability studies are faced with an increase in computational burden due to the increasing complexity of the system under analysis. In order to address the complexity in the analysis of large networks, parallel and distributed computing techniques are frequently applied to improve the computational speed by taking advantage of multi-core processors and cluster computing.
This talk presents an extension in a Julia-based parallel simulation algorithm to address the need for improved computation methods in power system stability analysis. The algorithm achieves an improvement in computational speedup by reformulating an inherently sequential numerical solution to a parallel approach using a parallel-in-space decomposition scheme and the Julia computing environment. The talk will focus on the parallelization approach applied to restructure the numerical formulation in order to solve the resulting power system differential and algebraic equations in parallel. The basis of the parallelization is a parallel-in-space decomposition to partition the network into independent subnetworks and the equations of the subnetworks are assigned to different processors. The in-space decomposition uses the branch splitting Multi-Area Thevenin Equivalent (MATE) algorithm to divide the network coefficient matrix into submatrices that can be solved in parallel.
The talk will describe the multi-level graph partitioning technique which is used to achieve optimal balancing of tasks in the parallel solution process. The partitions are extended to the dynamic simulation problem to obtain balanced subnetworks that are solved in parallel and only linked via a link subsystem. Furthermore, simulation results will be presented to highlight the difference between the original parallel approach based on the node-splitting Block Bordered Diagonal Formulation (BBDF) and the improved extension of the algorithm based on the branch-splitting MATE algorithm.
false
https://pretalx.com/juliacon2023/talk/33HMQB/
https://pretalx.com/juliacon2023/talk/33HMQB/feedback/
Online talks and posters
Ignite.jl: A brighter way to train neural networks
Online Only Talk
2023-07-26T10:00:00-04:00
10:00
00:30
We introduce Ignite.jl, a package that streamlines neural network training and validation by replacing traditional for/while loops and callback functions with a powerful and flexible event-based pipeline. The key feature of Ignite.jl is the separation of the training step from the training loop, which gives users the flexibility to easily incorporate events such as artifact saving, metric logging, and model validation into their training pipelines without having to modify existing code.
juliacon2023-26850-ignite-jl-a-brighter-way-to-train-neural-networks
JuliaCon
Jonathan Doucette
en
We introduce Ignite.jl (https://github.com/jondeuce/Ignite.jl), a package for simplifying neural network training and validation loops in Julia. Based on the popular Python library ignite, Ignite.jl provides a high-level, flexible, event-based approach to training and evaluating neural networks in Julia.
The traditional way of training a neural network typically involves using for/while loops and callback functions. However, this approach can become complex and difficult to maintain as the number of events increases and the training pipeline becomes more intricate. Ignite.jl addresses this issue by providing a simple yet flexible engine and event system that allows for the easy composition of training pipelines with various events such as artifact saving, metric logging, and model validation.
Event-based training abstracts away the traditional training loop, replacing it with:
1. a training engine which executes a single training step,
2. a data loader iterator which provides data batches, and
3. events and corresponding handlers which are attached to the engine and are configured to fire at specific points during training.
Event handlers are much more flexible compared to other approaches like callbacks: they can be any callable, multiple handlers can be attached to a single event, multiple events can trigger the same handler, and custom events can be defined to fire at user-specified points during training. This makes adding functionality to your training pipeline easy, minimizing the need to modify existing code.
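The engine/event/handler pattern described above can be made concrete with a minimal sketch in plain Julia (the names `Engine`, `attach!`, `fire!`, and `run!` are hypothetical illustrations of the pattern, not the actual Ignite.jl API):

```julia
# Sketch of an event-based training loop: handlers are plain callables
# attached to named events, and the engine fires them at fixed points.
struct Engine
    handlers::Dict{Symbol, Vector{Function}}
end
Engine() = Engine(Dict{Symbol, Vector{Function}}())

# Any callable can be a handler; multiple handlers per event are allowed.
attach!(e::Engine, event::Symbol, h) = push!(get!(e.handlers, event, Function[]), h)
fire!(e::Engine, event::Symbol, state) = foreach(h -> h(state), get(e.handlers, event, Function[]))

function run!(e::Engine, train_step, data; epochs = 2)
    state = Dict{Symbol, Any}(:epoch => 0, :loss => NaN)
    for epoch in 1:epochs
        state[:epoch] = epoch
        for batch in data
            state[:loss] = train_step(batch)      # the training step stays isolated
            fire!(e, :iteration_completed, state)
        end
        fire!(e, :epoch_completed, state)         # e.g. log metrics, save a checkpoint
    end
    state
end

engine = Engine()
attach!(engine, :epoch_completed, s -> println("epoch $(s[:epoch]): loss = $(s[:loss])"))
run!(engine, batch -> sum(abs2, batch), [rand(3) for _ in 1:4])
```

Adding functionality (say, checkpointing) means attaching one more handler; the loop and the training step are untouched, which is the separation of concerns the package is built around.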
To illustrate the benefits of using Ignite.jl, we provide a minimal working example in the README.md file that demonstrates how to train a simple neural network while periodically logging evaluation metrics. In particular, we highlight that periodic model and optimizer saving can be added with no changes to the rest of the training pipeline: simply add another event. This example illustrates the benefits of separating the training step from the training loop, and how simple it is to customize the training pipeline.
In summary, Ignite.jl is a simple and flexible package that improves the process of training neural networks by using events and handlers. It allows for easy modifications to training pipelines without needing to change existing code, and it is well documented and thoroughly tested. We believe Ignite.jl will be a valuable addition to the Julia machine learning ecosystem.
false
https://pretalx.com/juliacon2023/talk/B3GMAE/
https://pretalx.com/juliacon2023/talk/B3GMAE/feedback/
Online talks and posters
Julia in machining: optimizing drilling positions
Poster
2023-07-26T10:00:00-04:00
10:00
00:30
In the machining of cast blank parts, small geometrical deviations between different lots of blanks can cause serious issues. These differences need to be compensated to 1) make sure that every feature on the blank is properly machined and 2) dimensional tolerances between features are respected. This poster shows via a case study in drilling that Julia is a great tool for solving this problem, including the steps of data processing, optimization, and visualization.
juliacon2023-26873-julia-in-machining-optimizing-drilling-positions
JuliaCon
/media/juliacon2023/submissions/DQ9BDU/julia-in-machining-poster_w3TYqp3.png
Tamás Cserteg
en
In a drilling application, CNC code is built up from two sections: positions of hole features relative to a workpiece local coordinate frame called part zero, and the coordinates of those part zeros in the workspace of the machining center. Usually, there are different part zeros for different feature groups (e.g., one for each side of the part). For quality control reasons, feature coordinates should not be changed, while part zeros can be modified if there is a good reason for it.
One such reason can be the small geometrical deviation between different lots of cast blanks; using the old CNC code for a new lot may result in tool breakage or in producing scrap. To ensure that no time, energy, or material is wasted on finding appropriate part zeros with the good old trial-and-error method, an automated optimization-based method is needed.
On every new lot, one cast blank is measured with a 3D scanner, and important geometrical features are exported. Rough feature (on the cast blank) and machined feature (CNC code) positions can be compared to ensure that every surface that needs to be machined will eventually be machined. An appropriate machining allowance, i.e., the distance between a rough feature and the corresponding machined feature, can ensure this. Other aspects that need to be considered are the dimensional tolerances between machined features. Tolerances between features in the same feature group are ensured by the CNC code, while inter-operation tolerances must be overseen when setting the different part zeros.
With a proper optimization model, the allowance criteria as well as tolerance requirements can be balanced and optimal part zeros can be computed.
This poster showcases how this complex engineering problem can be programmed and solved in Julia. Input geometrical data can be read from files or queried from a database. The JuMP ecosystem is used to implement and solve the optimization problem, and Makie to visualize and interpret the results.
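To give a flavor of the kind of JuMP model involved, here is a deliberately simplified one-dimensional sketch with invented numbers (not the formulation from the paper or BlankLocalizationCore.jl): a part-zero shift is chosen to maximize the worst-case machining allowance while staying within a tolerance band.

```julia
# Toy part-zero balancing: shift the part zero `z` so every machined feature
# keeps at least `amin` allowance to its rough counterpart (one-sided, 1D).
using JuMP, HiGHS

rough     = [10.2, 20.5, 30.1]   # measured rough-feature positions (mm), invented
nominal   = [10.0, 20.0, 30.0]   # machined-feature positions in the CNC code (mm)
amin, tol = 0.1, 1.0             # minimum allowance, allowed part-zero shift

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, -tol <= z <= tol)   # part-zero shift
@variable(model, a >= amin)          # worst-case allowance, to be maximized
@constraint(model, [i in 1:3], rough[i] - (nominal[i] + z) >= a)
@objective(model, Max, a)
optimize!(model)
value(z), value(a)
```

The real problem is three-dimensional, has several part zeros, and adds inter-operation tolerance constraints, but the structure (bounded offsets, allowance lower bounds, a balancing objective) is the same.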
---
The description of this methodology is currently under publication and will be available as an Open Access paper. The implementation is available here: https://github.com/cserteGT3/BlankLocalizationCore.jl
false
https://pretalx.com/juliacon2023/talk/DQ9BDU/
https://pretalx.com/juliacon2023/talk/DQ9BDU/feedback/
Online talks and posters
FairPortfolio ~ simple and stable portfolio optimization
Poster
2023-07-26T10:00:00-04:00
10:00
00:30
Removing sources of large errors in financial data leads to an analytical solution for portfolio optimization that is more stable and performant than several tested alternatives.
juliacon2023-26348-fairportfolio-simple-and-stable-portfolio-optimization
Finance and Economics
1m1
en
We are going to minimize portfolio risk after applying the following 3 simplifications:
• consider only 2nd moments (simplification 1)
• homogenize variances (simplification 2)
• algebraically reduce dimension (simplification 3)
We assume that we have chosen a list of 𝑛 assets that we want to invest in. Choosing these assets implies that we believe that each of them has a long-term positive return.
The above simplifications allow us to analytically derive a formula for the weights of each asset.
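As an illustration of what such an analytical formula can look like (this is the classical minimum-variance solution under the stated simplifications; the poster's exact derivation may differ), homogenizing variances reduces the covariance matrix to the correlation matrix C, and minimizing w'Cw subject to the weights summing to one gives w = C⁻¹1 / (1'C⁻¹1):

```julia
# Minimum-variance weights after homogenizing variances (illustration only).
using LinearAlgebra

C = [1.0 0.3 0.1;
     0.3 1.0 0.2;
     0.1 0.2 1.0]            # correlation matrix after simplifications 1-2
ones_n = ones(size(C, 1))
w = C \ ones_n               # solve C w = 1
w ./= sum(w)                 # normalize so the weights sum to one
```

By construction, this `w` has portfolio variance w'Cw no larger than, for example, the equal-weight portfolio's.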
false
https://pretalx.com/juliacon2023/talk/LC9AZE/
https://pretalx.com/juliacon2023/talk/LC9AZE/feedback/
Online talks and posters
ctrl-VQE: Julianic simulations of a pulse-level VQE
Online Only Talk
2023-07-26T10:00:00-04:00
10:00
00:30
We discuss the ctrl-VQE algorithm, a variant of the popular Variational Quantum Eigensolver for solving molecular electronic states which has the potential to reduce runtimes on a quantum computer by orders of magnitude. Our most recent implementation of ctrl-VQE using Julia allows us to simulate our algorithm much faster than our previous Python code, enabling a more thorough analysis which has revealed strategies to improve the runtime of the algorithm even more.
juliacon2023-32391-ctrl-vqe-julianic-simulations-of-a-pulse-level-vqe
Quantum
Kyle Sherbert
en
This research was performed by a joint collaboration between the Virginia Tech Chemistry and Physics departments.
This research was supported by the US. Department of Energy (DoE) Award No. DE-SC0019199 and by the DoE Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704.
false
https://pretalx.com/juliacon2023/talk/WUTK8E/
https://pretalx.com/juliacon2023/talk/WUTK8E/feedback/
Online talks and posters
NLP and human behavior with Julia (and a bit of R)
Online Only Talk
2023-07-26T10:00:00-04:00
10:00
00:30
This talk covers NLP analysis with Julia, with a focus on graph analysis.
A basic pipeline is shown, including pre-processing, graph embeddings, and semantic embeddings with TextGraphs.jl.
We briefly study the internals of this package along with a use case: the heteronyms of the Portuguese poet Fernando Pessoa are contrasted with each other. We see how structural properties of speech reflect different writing styles.
juliacon2023-24194-nlp-and-human-behavior-with-julia-and-a-bit-of-r-
JuliaCon
/media/juliacon2023/submissions/LCVKZS/drummond_J3uzsuj.png
Felipe Coelho Argolo
en
This talk covers NLP analysis with Julia, with a focus on graph analysis.
A basic pipeline with TextGraphs.jl is shown, including stemming, lemmatization, grammatical tagging, graph embeddings, and semantic embeddings.
We briefly study software writing in Julia and the internals of this package, proceeding to a use case: the heteronyms of the Portuguese poet Fernando Pessoa are contrasted with each other. We see how structural properties of speech reflect different writing styles.
Graph embeddings transform text into networks, considering recurrence of symbols. Their properties (e.g. density, centrality, size of largest connected component) contain information about text structure and are useful in psychometrics.
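A toy version of this text-to-network step may help (hypothetical helper names, not the TextGraphs.jl API): build a word co-occurrence graph from adjacent tokens, then read off a structural property such as density.

```julia
# Build an undirected co-occurrence graph: nodes are word types, edges link
# words that appear next to each other in the token stream.
function cooccurrence_graph(tokens)
    edges = Set{Tuple{String, String}}()
    for (a, b) in zip(tokens[1:end-1], tokens[2:end])
        a != b && push!(edges, minmax(a, b))   # canonical order so (a,b) == (b,a)
    end
    unique(tokens), edges
end

# Fraction of possible edges that are present, a common psychometric marker.
function graph_density(nodes, edges)
    n = length(nodes)
    n < 2 ? 0.0 : length(edges) / (n * (n - 1) / 2)
end

tokens = split("the cat saw the dog and the dog saw the cat")
nodes, edges = cooccurrence_graph(String.(tokens))
graph_density(nodes, edges)   # 5 nodes, 7 distinct edges → 0.7
```

Properties like the size of the largest connected component follow the same pattern: compute them on the graph, then compare across authors or heteronyms.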
TextGraphs.jl makes use of RCall.jl and R::udpipe for some functionalities, which gives us a glimpse of the effortless interoperability between R and Julia.
Semantic embeddings provide measures of speech coherence. These are also useful to characterize ideas in text.
false
https://pretalx.com/juliacon2023/talk/LCVKZS/
https://pretalx.com/juliacon2023/talk/LCVKZS/feedback/
Online talks and posters
Sharing Julia with the world
Poster
2023-07-26T10:00:00-04:00
10:00
00:30
Julia is a 21st-century programming language, and one of its advantages is the ability to create a healthy and diverse community and to share our experiences and issues on the internet.
juliacon2023-27097-sharing-julia-with-the-world
Julia Community
Letícia Madureira
en
I have a YouTube channel with tutorials about the Julia language. In this poster, I would like to share best practices for creating new videos, statistics comparing views and likes between Shorts and long videos (since we are in the TikTok era and short videos attract more attention), and present how Julia can help connect people in multiple environments. Sharing experiences and following the changes of the world with Julia is the purpose of having a world conference. The YouTube channel has also helped share the Julia language in Brazil and some other Latin American countries, spreading the power of programming computers to economic minorities and underprivileged groups.
false
https://pretalx.com/juliacon2023/talk/ELRURU/
https://pretalx.com/juliacon2023/talk/ELRURU/feedback/
Online talks and posters
PerfChecker.jl: tools for performance checking over versions
Poster
2023-07-26T10:05:00-04:00
10:05
00:30
*PerfChecker.jl* is an early version of a set of tools I made to check the performance of the packages in JuliaConstraints over time and versions.
The user provides a set of instructions to run, in the same vein as Test.jl, that can be used for allocation checks and benchmarks. Finally, the results are plotted as a function of the versions.
An example of the pipeline is available at https://github.com/JuliaConstraints/PerfChecker.jl
juliacon2023-27069-perfchecker-jl-tools-for-performance-checking-over-versions
Julia Base and Tooling
/media/juliacon2023/submissions/DCEGGJ/benchmark-evolutions_qV46n83.png
Jean-François BAFFIER (azzaare@github)
en
This poster presents an early version of *PerfChecker.jl*, a set of tools made to check the performance of packages over time and versions. Although the general workflow of this package is designed, we expect several changes to take place before a stable release.
Currently, we invite the users to use it as follows:
- Prepare a set of instructions to be evaluated for package **P**
- Separate instructions for allocations and benchmarks are possible
- For each version of **P** to be checked *do*
- Checkout the targeted version of **P**
- Load *PerfChecker* and *P*
- Execute the checks
- Leave the current Julia session (if in REPL)
- Launch a new REPL and compile the checks of all the different versions
An example of this workflow is given on the README at https://github.com/JuliaConstraints/PerfChecker.jl
The final goal of *PerfChecker.jl* is to have a behavior similar to *Test.jl*. Contributions are more than welcome.
false
https://pretalx.com/juliacon2023/talk/DCEGGJ/
https://pretalx.com/juliacon2023/talk/DCEGGJ/feedback/
Online talks and posters
Quantum Monte Carlo in Julia.
Online Only Talk
2023-07-26T10:15:00-04:00
10:15
00:30
Quantum computing may be the technology that will change the financial industry. One of the potential use cases for quantum computers is derivative instrument pricing, a computationally demanding task with a proven quadratic speed-up on a quantum machine.
The goal of the talk is to show how to do this using Julia's library QuantumCircuits.jl.
juliacon2023-25298-quantum-monte-carlo-in-julia-
Quantum
/media/juliacon2023/submissions/S339TU/QC_p9UwVX1.svg
Rafał Pracht
en
I will start with an introduction to quantum computing with a focus on why this will be a breakthrough technology. Next, I will introduce the basics of financial mathematics for derivative pricing. In the following part, I will show the QuantumCircuits.jl library and how we can use it to perform quantum computations in Julia.
Then I will introduce the Monte Carlo method in finance, and how we can harness the power of quantum computing to speed up the calculation. Next, a detailed description of the quantum algorithm for pricing options on gate-based quantum computers will be presented.
The last part will cover the implementation of the algorithm in Julia.
false
https://pretalx.com/juliacon2023/talk/S339TU/
https://pretalx.com/juliacon2023/talk/S339TU/feedback/
Online talks and posters
Sorting gene trees by their path within a species network
Online Only Talk
2023-07-26T10:20:00-04:00
10:20
00:30
Evolutionary relationships are often depicted in tree structures where internal nodes represent ancestral species and external nodes represent extant species, but directed networks are often necessary. Many species tree and network estimation procedures first estimate a set of gene trees from sequence data. The proportion of these gene trees that do not necessitate a network model is biologically interesting. In RANSANEC.jl, we expand random sample consensus to tree-space to tackle this problem.
juliacon2023-24331-sorting-gene-trees-by-their-path-within-a-species-network
Biology and Medicine
Nathan Kolbow
en
Phylogenetics is the study of evolutionary relationships between organisms or groups. These relationships are often depicted in a tree structure, where internal nodes represent hypothetical ancestral species and external nodes (leaves) represent extant organisms or species. Over the past few decades, though, the importance of biological processes which cannot be represented in a tree structure (such as hybridization, introgression, and more) has become more well-known, which has led to a drastic increase in depicting these relationships with directed networks.
Many common procedures for estimating such species trees and networks begin by estimating a collection of many gene trees from sequence data. These gene trees can be discordant even if a tree model is sufficient for modeling the species relationship, due to a process called incomplete lineage sorting (ILS). Thus, one of the main difficulties with model-based approaches is that it is difficult to disentangle what proportion of gene trees are adequately explained by a tree structure and ILS, and what proportion requires a network structure to be explained.
Our package RANSANEC.jl (RAndom SAmple NEtwork Consensus) implements the classical statistical method of RANSAC (RAndom SAmple Consensus) in phylogenetic tree-space in order to separate the set of estimated gene trees which are adequately explained by ILS and a tree structure and those which require a network structure. This software builds on the rapidly growing suite of phylogenetic analysis software available in Julia.
false
https://pretalx.com/juliacon2023/talk/ZVLMVZ/
https://pretalx.com/juliacon2023/talk/ZVLMVZ/feedback/
Online talks and posters
Fast Higher-order Automatic Differentiation for Physical Models
Online Only Talk
2023-07-26T10:30:00-04:00
10:30
00:30
Taking higher-order derivatives is crucial for physical models like ODEs and PDEs, and it would be great to get it done by automatic differentiation. Yet, existing packages in Julia either have exponential scaling w.r.t. order (nesting first-order AD) or exponential scaling w.r.t. dimension (nested Taylor polynomials in TaylorSeries.jl). The author presents TaylorDiff.jl (https://github.com/JuliaDiff/TaylorDiff.jl), which is specifically optimized for fast higher-order directional derivatives.
juliacon2023-26848-fast-higher-order-automatic-differentiation-for-physical-models
SciML
Songchen Tan
en
The author will present efficient higher-order automatic differentiation (AD) algorithms and their potential applications in scientific models where higher-order derivatives need to be calculated efficiently, such as solving ODEs and PDEs with neural functions. Existing methods for higher-order AD often suffer from one or more of the following problems: (1) nesting first-order AD results in exponential scaling w.r.t. order; (2) ad-hoc hand-written higher-order rules are hard to maintain and do not utilize existing first-order AD infrastructure; (3) inefficient data representation and manipulation causes a "penalty of abstraction" at first or second order when compared to highly optimized first-order AD libraries. By combining advanced techniques in computational science, i.e., aggressive type specialization, metaprogramming, and symbolic computing, TaylorDiff.jl addresses all three problems.
TaylorDiff.jl is currently capable of the following: (1) Taylor-mode AD algorithms which can compute the n-th order derivative of scalar functions in O(n) time; (2) automatic generation of higher-order rules from first-order rules in ChainRules.jl; (3) performance comparable to ForwardDiff.jl at lower orders.
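The core idea of Taylor-mode AD can be sketched in a few lines of plain Julia (an illustration of the representation, not the TaylorDiff.jl implementation; the naive multiplication below is O(n²) per operation, whereas the package uses optimized algorithms):

```julia
# Track the value and the first n derivatives of f at x0 as truncated Taylor
# coefficients: c[k+1] = f^(k)(x0) / k!.
struct Taylor
    c::Vector{Float64}
end
order(t::Taylor) = length(t.c) - 1

Base.:+(a::Taylor, b::Taylor) = Taylor(a.c .+ b.c)
function Base.:*(a::Taylor, b::Taylor)          # truncated Cauchy product
    n = order(a)
    c = zeros(n + 1)
    for i in 0:n, j in 0:(n - i)
        c[i + j + 1] += a.c[i + 1] * b.c[j + 1]
    end
    Taylor(c)
end

# Seed the identity function x at x0, tracking derivatives up to order n.
seed(x0, n) = Taylor([x0; 1.0; zeros(n - 1)])
# Recover the k-th derivative from the k-th coefficient.
deriv(t::Taylor, k) = t.c[k + 1] * factorial(k)

x = seed(2.0, 3)
f = x * x * x          # f(x) = x^3, propagated through ordinary arithmetic
deriv(f, 2)            # f''(x) = 6x, so 12.0 at x = 2
```

A single pass through the function yields all derivatives up to the chosen order, which is what distinguishes Taylor mode from repeatedly nesting first-order AD.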
false
https://pretalx.com/juliacon2023/talk/D8RHE7/
https://pretalx.com/juliacon2023/talk/D8RHE7/feedback/
Online talks and posters
E4ST.jl: Policy & Investment Analysis for the Electric Sector
Poster
2023-07-26T10:30:00-04:00
10:30
00:30
The Engineering, Economic, and Environmental Simulation Tool (E4ST) is a detailed power sector model from Resources for the Future for comprehensive cost-benefit analysis of policies and investments in the electricity sector. Originally written in MATLAB on top of MATPOWER, E4ST.jl is the Julia rewrite of the model, using JuMP, and Julia’s multiple dispatch programming paradigm to allow users to specify novel climate policies, introduce new technologies, and change spatial/temporal resolution.
juliacon2023-25455-e4st-jl-policy-investment-analysis-for-the-electric-sector
JuliaCon
/media/juliacon2023/submissions/LUVDCY/E4STLogo_eD8g3Sb.png
erussell@rff.org
en
At the heart of E4ST is a detailed engineering representation of the power grid, and an optimization problem that represents the decisions of the system operators, electricity end-users, generators, and generation developers. The model represents these operation, consumption, investment, and retirement decisions by minimizing the sum of generator variable costs, fixed costs, investment costs, and end-user consumer surplus losses. E4ST provides detailed analysis to better inform policymakers, investors, and stakeholders.
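A stylized slice of this kind of cost-minimization problem can be written in a few lines of JuMP (illustrative numbers and structure only, not the E4ST.jl formulation): dispatch two generators to meet demand at minimum variable cost, with a CO2 price standing in for a simple policy modification.

```julia
# Toy economic dispatch with an emissions price folded into variable cost.
using JuMP, HiGHS

demand   = 100.0                 # MW
cap      = [60.0, 80.0]          # generator capacities (MW)
varcost  = [20.0, 35.0]          # variable cost ($/MWh)
emisrate = [0.9, 0.1]            # emission rates (tCO2/MWh)
co2price = 50.0                  # $/tCO2, the policy lever

m = Model(HiGHS.Optimizer)
set_silent(m)
@variable(m, 0 <= g[i = 1:2] <= cap[i])
@constraint(m, sum(g) == demand)
@objective(m, Min, sum((varcost[i] + co2price * emisrate[i]) * g[i] for i in 1:2))
optimize!(m)
value.(g)    # the high-emitting unit is backed down once CO2 is priced
```

The full model adds transmission constraints, investment and retirement decisions, consumer surplus, and time resolution, but policy levers enter in the same way: as modifications to the objective and constraints.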
The power sector is increasingly complex, with challenging emission reduction aspirations, new energy technologies, an ever-changing policy backdrop, growing demand, and much uncertainty. Some of the challenges of representing the sector include:
* Regional and national markets for clean electricity credits
* Diverse generation mixes with temporal variations
* Markets for various fuel types and captured CO2
* Increasing energy storage requirements
To provide relevant analysis for such a complex and dynamic sector, models must be fast to adapt and use. The previous version of E4ST was written as a wrapper for MATPOWER, a powerful MATLAB-language package for solving steady-state power system simulation and optimization problems. However, as powerful as MATPOWER is, we desired the additional flexibility and speed that Julia can provide.
Our team is in the process of writing E4ST.jl with maximum flexibility and speed in mind. E4ST.jl is a bring-your-own-solver JuMP-based package. We leverage clever interfaces to inject custom modifications into the data loading, model setup, and results processing steps to allow for extreme configurability and extensibility. We allow for flexible time representations and time-varying inputs with space-and-time-efficient data retrieval.
E4ST.jl uses the speed and extensibility of Julia to enable faster deployment of detailed and adaptable models to inform policy decision-makers and technology developers.
false
https://pretalx.com/juliacon2023/talk/LUVDCY/
https://pretalx.com/juliacon2023/talk/LUVDCY/feedback/
Online talks and posters
VLLimitOrderBook.jl, simulation of electronic order book dynamic
Online Only Lightning Talk
2023-07-26T10:35:00-04:00
10:35
00:10
VLLimitOrderBook.jl is a package written in Julia that simulates order book dynamics and matching for equities, options, and cryptocurrency orders. The orders in the book are stored in an AVL Tree data structure and prioritized based on price and time.
juliacon2023-27003-vllimitorderbook-jl-simulation-of-electronic-order-book-dynamic
JuliaCon
Ruize Ren
en
An [order book](https://www.investopedia.com/terms/o/order-book.asp) is an electronic list of buy and sell orders for a specific security. Order books, which are maintained for each security traded on an exchange, catalog orders based upon the time and price at which each entry was received, and use different matching algorithms to match buyers and sellers for a particular security. Thus, order books, and their associated matching algorithms, play an essential role in determining the price at which a trade is executed. Order books can also be used as a measure of the supply and demand dynamics for a security.
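Price-time priority matching, the core of such a book, can be sketched in a few lines (illustration only: VLLimitOrderBook.jl stores orders in an AVL tree, while this toy uses a plain sorted vector). An incoming buy fills against the lowest-priced asks first, and earlier orders first at equal prices.

```julia
# Minimal price-time priority match against a vector of resting ask orders.
struct Order
    price::Float64
    qty::Int
    seq::Int          # arrival order: the time-priority tiebreaker
end

function match_buy!(asks::Vector{Order}, limit::Float64, qty::Int)
    sort!(asks, by = o -> (o.price, o.seq))       # price first, then time
    fills = Tuple{Float64, Int}[]
    while qty > 0 && !isempty(asks) && asks[1].price <= limit
        top = asks[1]
        take = min(qty, top.qty)
        push!(fills, (top.price, take))
        qty -= take
        take == top.qty ? popfirst!(asks) :
            (asks[1] = Order(top.price, top.qty - take, top.seq))
    end
    fills                                          # (price, quantity) executions
end

asks = [Order(101.0, 5, 1), Order(100.5, 3, 2), Order(100.5, 4, 3)]
match_buy!(asks, 101.0, 6)   # fills 3 + 3 at 100.5; 1 share at 100.5 remains
```

A production book replaces the sorted vector with a balanced tree so that inserts, cancels, and best-price lookups stay logarithmic as the book grows.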
The following section will briefly discuss this package's origins and how we modified and extended its features. Finally, we will provide a system evaluation and validation.
**Where does this package come from?**
This package is based on [LimitOrderBook.jl](https://github.com/p-casgrain/LimitOrderBook.jl), which previously implemented the submission and cancellation of market orders and limit orders. It also provided information regarding the order book, such as the depth of the book, total bid/ask volume, and best bid/ask prices. However, several aspects of the original package could be improved. We fixed several order-matching and account-tracking issues present in the original package. Further, we extended the package to include additional order types, such as stop-loss orders and buy-stop orders. We also implemented functionality to load or persist the state of the entire order book as a comma-separated value (CSV) file. Finally, we are working on functionality to integrate the book with the order submission process. A notification feature is being added to notify the client/broker regarding order status; for example, when a limit order is submitted, it may only get matched after some time. If matched, the matching engine will notify listeners that the limit order has been executed.
**How do we maintain this package?**
VLLimitOrderBook.jl is maintained on the Varnerlab GitHub site at Cornell University. The code is available under an MIT software license.
**Evaluation and Validation**
We are implementing a set of performance benchmarks. We measure throughput by submitting or canceling orders concurrently or sequentially. We also measure the time required to process an order from submission to completion of the matching process. Finally, to validate the accuracy and correctness of the order book simulation, we are working with [Polygon](https://polygon.io/), a leading market data provider, to compare simulated historical book data for a variety of assets, including equity, options, and cryptocurrency.
false
https://pretalx.com/juliacon2023/talk/RRL3KQ/
https://pretalx.com/juliacon2023/talk/RRL3KQ/feedback/
Online talks and posters
Nested approaches for multi-stage stochastic planning problems
Online Only Lightning Talk
2023-07-26T10:35:00-04:00
10:35
00:10
We present a JuMP-based solver that combines a nested primal-dual decomposition technique and convex relaxation approaches for tackling non-convex multi-stage stochastic programming problems. The approach addresses optimal long-term water supply infrastructure planning with feasibility constraints at operational timescales. We combine an outer primal decomposition of planning stages and inner dual ones of operating scenarios, with convexified non-anticipative constraints relaxed for scenarios.
juliacon2023-26963-nested-approaches-for-multi-stage-stochastic-planning-problems
JuMP
/media/juliacon2023/submissions/KB8HS7/Figures_and_Table_VOH7lVz.jpg
Alireza ShefaeiEdo Abraham
en
The efficient operation of critical infrastructures such as water networks is essential to sustainable futures. However, to sustain good infrastructure services, many of these systems need continuous expansion, mainly in response to population growth and rapid industrialization. As a specific example, consider a water supply network that supplies the demand of different consumers using the available resources. For the smooth operation of the network, it is necessary to find new resources and connect them to the growing demand points in an economical manner. If this expansion planning is to be done over a long-term horizon, it requires considering uncertainties in various system parameters, such as demands, the price of electricity, and the availability of resources. These conditions are largely similar for other infrastructure networks, like electric power, gas, and district heating networks, with their respective equations, variables, and parameters. As a result, long-term infrastructure planning problems that confront high levels of uncertainty need an appropriate modeling approach. For this purpose, multi-stage stochastic programming (MSSP) can be used as a well-established approach for dynamic decision-making under uncertainty. However, due to the nonlinear equations governing the operation of physical infrastructure networks, and the mixed-integer nature of planning decision variables, this becomes a complicated non-convex problem. In this regard, an iterative procedure based on primal-dual decomposition and convex relaxation for solving this MSSP problem is put forward. This technique is motivated by the planning problem described above, which requires the satisfaction of constraints that describe operational processes for water supply infrastructure at multiple timescales.
Since the aim is just to find a dual for the operation problem (pump scheduling or optimal power flow), any approach that can do this, such as copositive programming, can be used. Here, applying a dual decomposition technique, e.g., optimality condition decomposition or Lagrangian relaxation, breaks the time dependency of intra-operation intervals. It also makes the system amenable to operation in a decentralized mode by relaxing coupling constraints such as mass conservation and nodal power balance equations. If the operation problems are considered the subproblems (second stage), solving them generates the corresponding cuts for the master planning problem (first-stage problem). This procedure terminates when the outer primal decomposition converges to an acceptable gap. Figure 2 represents the primal/dual decomposition of the scenario tree of Figure 1 in two steps. Flowchart 3 also shows the proposed solution technique graphically. The packages for solving SP problems in the Julia programming language are reviewed in Table 1. It is clear from this table that there is no package to tackle the non-convex, nonlinear, mixed-integer MSSP problem; this work aims to develop a solver for this complicated MSSP problem. It must also be mentioned that although the proposed approach is applied to water supply networks, it is general enough to be used for any infrastructure network.
false
https://pretalx.com/juliacon2023/talk/KB8HS7/
https://pretalx.com/juliacon2023/talk/KB8HS7/feedback/
Online talks and posters
Wrapping Up Offline RL as part of AutoMLPipeline Workflow
Online Only Lightning Talk
2023-07-26T10:45:00-04:00
10:45
00:10
Unlike in Online RL where agents need to interact with real environment, Offline RL works similar to a typical machine learning workflow. Given a dataset, Offline RL processes data extracting state, action, reward, and terminal columns to optimize the policy Q. By wrapping up offline RL into the AutoMLPipeline workflow, it becomes trivial to search for the optimal preprocessing elements and their combinations to improve Offline RL optimal policy using symbolic workflow manipulation.
juliacon2023-24493-wrapping-up-offline-rl-as-part-of-automlpipeline-workflow
JuliaCon
Paulito Palmes
en
As part of the AutoMLPipeline workflow, it becomes trivial to search which preprocessing elements and their combinations provide the best policy Q by cross-validation, where the dataset is split into training and testing sets several times to estimate the average accumulated discounted reward (return) of a given policy Q. This talk will demonstrate how to set up the Offline RL pipeline to preprocess the dataset and learn the optimal policy Q, and how to incorporate a parallel search strategy to find the optimal workflow.
false
https://pretalx.com/juliacon2023/talk/UXE8SJ/
https://pretalx.com/juliacon2023/talk/UXE8SJ/feedback/
Online talks and posters
Lessons learned while working as a technical writer at FluxML
Online Only Lightning Talk
2023-07-26T10:45:00-04:00
10:45
00:03
Working as a technical writer at FluxML for the past six months has taught me some tips and tricks for writing and maintaining user-centric documentation. Documentation not only requires writing clear and concise content but also building infrastructures, testing, and a lot of discussions. This talk takes all the lessons I learned (myself and through FluxML maintainers) in the past six months and condenses it into 3 minutes!
juliacon2023-25070-lessons-learned-while-working-as-a-technical-writer-at-fluxml
JuliaCon
Saransh Chopra
en
This short talk will explain why working on documentation is not just writing content. We will walk through design ideas, implementations, user feedback, as well as failures. We will discuss how to write clear and maintainable documentation for users and how to make the documentation development process seamless for developers. As a final goal, this talk aims to enhance a user's journey through reading documentation and a developer's journey through developing documentation.
false
https://pretalx.com/juliacon2023/talk/7AYUSN/
https://pretalx.com/juliacon2023/talk/7AYUSN/feedback/
Online talks and posters
Streaming real-time financial market data with AWS cloud
Online Only Lightning Talk
2023-07-26T10:50:00-04:00
10:50
00:10
Historical data is useful in the financial industry, particularly for back-testing trading strategies. However, a single machine can't store real-time data for even a single ticker symbol. Thus, constructing a distributed scalable computer system to persist real-time financial data requires time and a significant financial commitment. Using cloud computing services reduces this barrier. This talk describes a real-time cloud-based financial data streaming and storage system implemented in Julia.
juliacon2023-26989-streaming-real-time-financial-market-data-with-aws-cloud
JuliaCon
/media/juliacon2023/submissions/8DFWR3/diag_9ESgTJu.png
Ruize Ren
en
This section will introduce our process for selecting cloud service providers and data sources, compare various system backend implementations, and conclude with a system evaluation.
**Reasons for AWS cloud**
Initially, we deployed our program on the Amazon Web Services (AWS) cloud because it offers scalable data storage and can handle concurrent requests from multiple customers/agents. Among the leading cloud service providers (such as Microsoft Azure or Google Cloud), AWS is currently the market leader and provides one-third of the world's cloud infrastructure services.
**Data Source**
We chose [Polygon](https://polygon.io/) as the data source for our data feed, in particular its WebSocket streaming mechanism. Polygon is a leading financial market data platform that supplies market data for stocks, options, futures, and cryptocurrencies. Its aggregated data are available at resolutions down to one second.
**Key implementation**
The backend, the main focus of this talk, was implemented in Julia using existing packages. We explored two approaches. The first uses various AWS services via [AWS.jl](https://github.com/JuliaCloud/AWS.jl), [LambdaMaker.jl](https://github.com/JuliaCloud/LambdaMaker.jl), etc.; these packages expose the managed microservices from which we built our system. The second sets up EC2 (Amazon Elastic Compute Cloud) instances and deploys the program to them. The first approach was the easiest to develop but depends strongly on AWS-specific solutions. For example, we used DynamoDB (an AWS NoSQL database service) to store real-time market data for certain tickers; if Amazon were to shut down DynamoDB or migrate it to another service, we would have to change our code accordingly. The second approach, by contrast, is not tied to particular Amazon services: it requires users to configure virtual machines themselves, but avoids vendor lock-in.
As for the frontend, we display real-time data from the data source. This component was constructed as a Typescript React App.
The following diagram describes how the frontend client/user interacts with the backend AWS services.
**Evaluations**
We considered several key factors when evaluating the real-time data storage system: latency, scalability, accuracy, reliability, and throughput. Of these, scalability and throughput may be influenced by our choice of cloud service. We have a preliminary implementation of the first backend using AWS solutions, and the frontend is a React application deployed on the AWS Amplify service. The frontend does not limit concurrency but, in the free tier, is constrained by total data transfer, request duration, and the total number of requests; we will be able to determine which limit is reached first after building out the frontend features. Our backend database, DynamoDB, limits concurrency to 25 read requests per second in the free tier. If each request for historical data requires one database read, the free tier therefore supports at most 25 concurrent requests.
Latency can be measured in various ways, including end-to-end latency and processing latency. End-to-end latency measures the time from when a request is made to when a response is received; in this scenario, it measures the data processing time from start to finish for a given ticker. With the backend implemented on AWS solutions, and despite occasional spikes of 2,000-4,000 ms, the average data processing time is now 100-200 ms. Processing latency, on the other hand, measures the time the system takes to process a specific input or request; here, it covers the individual backend steps: storing a ticker's data in the database (30-100 ms on average) and notifying the client in real time (70-120 ms on average). Once the second implementation is complete, we will also be able to compare the "real-time" performance of the two backend implementations.
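Such processing latencies can be instrumented with a simple wall-clock wrapper; a minimal sketch in Julia (the `store_tick` function below is a hypothetical stand-in for one real backend step such as a database write, not part of our system):

```julia
# `store_tick` is a hypothetical stand-in for one backend processing step
# (e.g., a database write); it is not part of the real system.
function store_tick(tick)
    sleep(0.01)                     # simulate a ~10 ms storage operation
    return tick
end

# Measure processing latency in milliseconds with a monotonic clock.
function measure_latency_ms(f, args...)
    t0 = time_ns()
    result = f(args...)
    return result, (time_ns() - t0) / 1e6
end

tick = (symbol = "AAPL", price = 189.25)
result, latency = measure_latency_ms(store_tick, tick)
```

Averaging many such samples (and keeping the spikes) yields the latency figures reported above.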
To ensure the accuracy and reliability of our system, we need to consistently serve correct and current data in an optimized way, even in the face of client changes or malicious attacks; the backend should handle these scenarios without degrading.
false
https://pretalx.com/juliacon2023/talk/8DFWR3/
https://pretalx.com/juliacon2023/talk/8DFWR3/feedback/
Online talks and posters
RxInfer.jl: a package for real-time Bayesian Inference
Online Only Talk
2023-07-26T10:50:00-04:00
10:50
00:30
We present RxInfer.jl, a Julia package for automated and fast Bayesian inference in a probabilistic model through reactive message passing on a factor graph representation of that model. RxInfer.jl unites different Julia packages and forms a user-friendly ecosystem for efficient real-time Bayesian processing of infinite data streams.
juliacon2023-24617-rxinfer-jl-a-package-for-real-time-bayesian-inference
Statistics and Data Science
/media/juliacon2023/submissions/WQNE9L/biglogo-blacktheme_HUIPMYq.svg
Dmitry Bagaev
en
**Background**: Bayesian inference realizes optimal information processing through a full commitment to reasoning by probability theory. The Bayesian framework is positioned at the core of modern AI technology for applications such as speech and image recognition and generation, medical analysis, robot navigation, etc., as it describes how a rational agent ought to update beliefs when new information is revealed by the agent's environment. Unfortunately, perfect Bayesian reasoning is generally intractable, since it requires the calculation of (often) very high-dimensional integrals. As a result, a number of numerical algorithms for approximating Bayesian inference have been developed and implemented in probabilistic programming packages. Successful methods include variants of Monte Carlo (MC) sampling, Variational Inference (VI), and Laplace approximation.
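Concretely (in standard notation, not specific to RxInfer), the intractability stems from the normalizing evidence term in Bayes' rule for latent variables z and observations y:

```latex
p(z \mid y) = \frac{p(y \mid z)\, p(z)}{p(y)},
\qquad
p(y) = \int p(y \mid z)\, p(z)\, \mathrm{d}z ,
```

where the evidence integral over z is very high-dimensional for realistic models and generally has no closed form.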
**Problem statement**: Many important AI applications, such as self-driving vehicles and extended reality video processing, require real-time Bayesian inference. However, sampling-based inference methods do not scale well to realistic probabilistic models with a significant number of latent states. As a result, Monte Carlo sampling-based methods are not suitable for real-time applications. Variational Inference promises to scale better than sampling-based inference, but VI requires derivation of gradients of a "variational Free Energy" cost function. For large models, manual derivation of these gradients is not feasible, and automated "black-box" gradient methods are too inefficient to be applicable to real-time inference applications. Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms.
**Solution proposal**: We present RxInfer.jl, a package for processing infinite data streams by real-time Bayesian inference in large probabilistic models. RxInfer is open source, available at http://rxinfer.ml, and enjoys the following features:
- A flexible probabilistic model specification. Through Julia macros, RxInfer is capable of automatically transforming a textual description of a probabilistic model to a factor graph representation of that model.
- A flexible inference engine. The inference engine supports a variety of well-known message passing-based inference methods such as belief propagation, structured and mean-field variational message passing, expectation propagation, etc.
- A customized trade-off between accuracy and speed. For each (node and edge) location in the graph, RxInfer enables a custom specification of inference constraints on the variational family of distributions. This enables the use of different Bayesian inference methods at different locations of the graph, leading to an optimized trade-off between accuracy and computational complexity.
- Support for real-time processing of infinite data streams. Since RxInfer is based on a reactive programming framework, implemented by the package Rocket.jl, an ongoing inference process is always interruptible and an inference result is always available.
- Support for large static data sets. The package is not limited to real-time processing of data streams and also scales well to batch processing of large data sets.
- RxInfer is extensible. A large and growing collection of precomputed analytical inference solutions for standard problems increases the efficiency of the inference process. Current methods include solutions for linear Gaussian dynamical systems, autoregressive models, Gaussian and Gamma mixture models, convolution of distributions, and conjugate pair primitives.
**Evaluation**: Over the past few years, the ecosystem has been tested on many advanced probabilistic models, leading to several publications in high-ranking journals such as Entropy [1] and Frontiers [2], and at conferences such as MLSP [3], ISIT [4], and PGM [5]. A fast and user-friendly automatic Bayesian inference framework is a key factor in expanding the applicability of Bayesian inference methods to real-time AI applications. More generally, access to fast Bayesian inference will benefit the wider statistical research community. We are excited to present our work on RxInfer and discuss its strengths and limitations.
**References**: More at biaslab.github.io/publication/.
[1] Message Passing and Local Constraint Manipulation in Factor Graphs, Entropy. Special Issue on Approximate Bayesian Inference.
[2] AIDA: An Active Inference-Based Design Agent for Audio Processing Algorithms, Frontiers in Signal Processing.
[3] Message Passing-Based Inference in the Gamma Mixture Model, IEEE 31st International Workshop on Machine Learning for Signal Processing.
[4] The Switching Hierarchical Gaussian Filter, IEEE International Symposium on Information Theory.
[5] Online Single-Microphone Source Separation using Non-Linear Autoregressive Models, PGM.
false
https://pretalx.com/juliacon2023/talk/WQNE9L/
https://pretalx.com/juliacon2023/talk/WQNE9L/feedback/
Online talks and posters
Teaching Quantitative Finance to Engineers using Julia
Poster
2023-07-26T10:50:00-04:00
10:50
00:30
There is a compelling opportunity for scientists and engineers to leverage their proficiency in mathematics combined with artificial intelligence and data science tools to drive value creation within businesses and ultimately democratize wealth through quantitative finance approaches. Toward this opportunity, this poster describes a course piloted at the Smith School of Chemical Engineering at Cornell University during Spring 2022 to teach quantitative finance to engineers and scientists in Julia.
juliacon2023-26922-teaching-quantitative-finance-to-engineers-using-julia
JuliaCon
/media/juliacon2023/submissions/A7883T/CHEME-5660-Course-Sell-Sheet-F22_r64CURo.svg
Jeffrey Varner
en
Finance and economic forecasting, modeling, and decision-making are typically not part of a traditional engineering or physical science curriculum. However, many engineers and scientists are migrating toward employment opportunities in the financial and consulting industries. Further, despite increased market access, many individuals in our society still face a significant barrier to entry to the wealth-creation opportunities offered by markets. Toward these unmet needs, we developed the [CHEME 5660 Financial Data, Markets, and Mayhem course](https://varnerlab.github.io/CHEME-5660-Markets-Mayhem-Book/infrastructure.html) in collaboration with [Polygon.io](https://polygon.io/), a leading financial market data provider. The class, which introduced financial systems, markets, and the tools to analyze and model financial data to engineers and scientists at Cornell University, had an initial enrollment of 60 students from CHEM, Physics/AEP, Engineering Management, CS/ORIE, CBE, BME, CEE, ECE, AEP, and the Johnson Business School. The course content was delivered via a combination of lectures and guided computational sessions enabled by Pluto and Jupyter notebooks. All course materials are open source, including notes, examples, and labs.
[CHEME 5660](https://varnerlab.github.io/CHEME-5660-Markets-Mayhem-Book/infrastructure.html) catalyzed the development of multiple Julia packages to support the course's educational goals. For example, [PQPolygonSDK.jl](https://github.com/Paliquant/PQPolygonSDK.jl.git) and [PQEcolaPoint.jl](https://github.com/Paliquant/PQEcolaPoint.jl) were developed to support the class. The [PQPolygonSDK.jl](https://github.com/Paliquant/PQPolygonSDK.jl.git) package is a software development kit for the [Polygon.io](https://polygon.io) financial data platform. [Polygon.io](https://polygon.io) provides real-time and historical data for various assets. A vital component of the success of [CHEME 5660](https://varnerlab.github.io/CHEME-5660-Markets-Mayhem-Book/infrastructure.html) was access to high-quality data sets supplied by [Polygon.io](https://polygon.io). This data allowed us to study the statistical properties of financial data and other topics, such as modeling and analysis tools for describing and ultimately predicting asset pricing dynamics and issues such as portfolio management and hedging. Further, we used tools from artificial intelligence, such as Markov Decision Processes (MDPs) and model-based and model-free reinforcement learning to study optimal decision-making, dynamic hedging, and trade management using actual data sets (including minute-resolution data). Thus, data provided by [Polygon.io](https://polygon.io) through the [PQPolygonSDK.jl](https://github.com/Paliquant/PQPolygonSDK.jl.git) package enabled Cornell students to learn and explore quantitative finance topics with actual data, which was a critical and differentiating feature of the course. Additionally, the [PQEcolaPoint.jl](https://github.com/Paliquant/PQEcolaPoint.jl) package was developed to study the pricing and trade mechanics of equity derivative products, i.e., options, a central topic in the course. 
Options are a huge market in the United States; the average daily notional value of traded single-stock options rose to more than $450 billion in 2021, compared with about $405 billion for stocks, according to Cboe Global Markets data (2021). To put these values in perspective, the annual global biopharmaceuticals market was valued at USD 401.32 billion in 2021. Thus, in a single day, the options market in the United States trades more than the entire annual global biopharmaceutical market.
Moving forward, several new packages will be developed to support the course. In particular, we are working on a new portfolio management package that will initially be focused on implementing traditional approaches, such as the data-driven and model-driven Markowitz problem. In addition, we are working on dynamic hedging and high-frequency trading packages that will take advantage of real-time data from [Polygon.io](https://polygon.io/). These packages will support new content in the course in market making, i.e., how leading liquidity providers such as [Citadel Securities](https://www.citadelsecurities.com/) drive efficient markets.
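To illustrate the kind of computation a portfolio package automates, here is a minimal, self-contained sketch of the classical minimum-variance Markowitz portfolio, which has a closed-form solution in terms of the asset covariance matrix; the covariance values below are made up for the example (this is not code from the course packages):

```julia
using LinearAlgebra

# Closed-form minimum-variance portfolio: w = Σ⁻¹·1 / (1ᵀ·Σ⁻¹·1),
# the fully invested portfolio (weights sum to 1) with smallest variance.
function min_variance_weights(Σ::AbstractMatrix)
    x = Σ \ ones(size(Σ, 1))
    return x / sum(x)
end

# Hypothetical covariance matrix for three assets (illustrative values only).
Σ = [0.04  0.006 0.0;
     0.006 0.09  0.01;
     0.0   0.01  0.16]

w = min_variance_weights(Σ)
portfolio_variance = w' * Σ * w
```

Estimating Σ from market data (e.g., via Polygon.io feeds) is where the data-driven variant of the problem comes in.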
Finally, we’ll share lessons learned from students in the course, including students with no computational background and students whose primary programming language was not Julia. Accessible and reliable notebook technologies enabled broad adoption of Julia among these students. However, there was significant resistance in some cases because of the well-known “time to first plot” issue and general configuration headaches, especially on Windows. This was especially true where heavy computations, e.g., Monte Carlo price simulations or extensive portfolio optimization calculations, were attempted on various student machines.
false
https://pretalx.com/juliacon2023/talk/A7883T/
https://pretalx.com/juliacon2023/talk/A7883T/feedback/
Online talks and posters
Vahana.jl - A framework for large-scale agent-based models
Online Only Talk
2023-07-26T10:55:00-04:00
10:55
00:30
Vahana.jl is a new open-source high-performance software framework for the development of large-scale agent-based models (ABM) of complex social systems.
Vahana models are expressed as synchronous graph dynamical systems (SyGDS). This allows Vahana to run a simulation in parallel and to distribute the simulation across multiple nodes of a cluster. New extensions to the traditional SyGDS approach allow the implementation of complex ABMs.
juliacon2023-26790-vahana-jl-a-framework-for-large-scale-agent-based-models
HPC
Steffen Fürst
en
This talk will introduce Vahana.jl, a new open-source software framework tailored to the development of large-scale agent-based models (ABMs), with a focus on complex social networks.
The foundation of Vahana.jl lies in an approach to ABM known as a synchronous graph dynamical system (SyGDS). This method advances the principles of cellular automata by transitioning from cells to vertices of a graph representing agents, and using directed edges to establish neighborhood interactions.
Vahana.jl extends the SyGDS concept in several ways, notably with a multi-layer approach that enables a more nuanced and detailed representation of complex systems by treating different kinds of interactions as distinct layers in the graph.
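To make the SyGDS idea concrete, here is a minimal hand-rolled sketch (not Vahana's API) of one synchronous step on a directed graph, where every vertex simultaneously updates from the previous states of its in-neighbors:

```julia
# Minimal synchronous graph dynamical system sketch (not Vahana's API).
# Each vertex updates from the *previous* states of its in-neighbors,
# so all updates within a step are simultaneous.
function sygds_step(state::Vector{Float64}, in_neighbors::Vector{Vector{Int}})
    new_state = similar(state)
    for v in eachindex(state)
        nbrs = in_neighbors[v]
        new_state[v] = isempty(nbrs) ? state[v] :
                       sum(state[n] for n in nbrs) / length(nbrs)
    end
    return new_state
end

# A 3-vertex cycle 1 → 2 → 3 → 1, listed as each vertex's in-neighbors.
in_neighbors = [[3], [1], [2]]
state = [0.0, 3.0, 6.0]
state = sygds_step(state, in_neighbors)
```

Because every vertex reads only the previous step's states, the loop body can be parallelized or distributed without changing the result, which is what makes this formulation attractive for large-scale ABMs.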
In addition to the theoretical discussion of SyGDS, the talk will also provide practical insights by implementing an (incomplete) toy model. Furthermore, we demonstrate how spatial information can be embedded in the graph structure and how Vahana integrates with the Julia ecosystem, including DataFrames.jl and Graphs.jl.
The development of Vahana.jl has been driven by a strong focus on performance. The presentation discusses how this emphasis influenced the design and implementation choices, highlighting the effective use of Julia's meta-programming features to enhance the execution speed of simulations. This performance aspect is substantiated with results from two distinct models, including a simulation of Covid-19 spread across Germany, demonstrating Vahana's applicability to real-world scenarios and its capacity for large-scale simulations.
Links:
Repository: https://github.com/s-fuerst/Vahana.jl
Documentation: https://s-fuerst.github.io/Vahana.jl/dev/
Episim example: https://git.zib.de/sfuerst/vahana-episim/
false
https://pretalx.com/juliacon2023/talk/B9DG8U/
https://pretalx.com/juliacon2023/talk/B9DG8U/feedback/
Online talks and posters
Progress on a solver for ideal MHD stability in stellarators
Poster
2023-07-26T10:55:00-04:00
10:55
00:30
In this work, we present progress towards the development of a new code written in the Julia programming language for evaluating global (linear) ideal MHD stability in stellarator geometry. We demonstrate the code’s efficiency and robustness, which are achieved by leveraging methods from high-performance mathematical libraries developed by the Julia community, such as Krylov subspace methods. Efficient evaluation of linear ideal MHD stability is crucial for stellarator optimization.
juliacon2023-27054-progress-on-a-solver-for-ideal-mhd-stability-in-stellarators
JuliaCon
/media/juliacon2023/submissions/ADLCE8/Caira_Anderson_JuliaCon_poster_Osu1LfM.jpg
Caira Anderson
en
A stellarator is a nuclear fusion device that uses external non-axisymmetric coils to generate and twist a magnetic field to contain plasma particles. Stable stellarator equilibria are necessary for sustained fusion energy production, but stability evaluation is challenging because of the geometric complexity of stellarators. Ideal Magnetohydrodynamics (MHD) is a model which assumes that the plasma is a perfectly conducting fluid in an electromagnetic field. Linear ideal MHD stability describes the global behavior of the plasma. The linear ideal MHD stability problem can be expressed as a generalized eigenvalue problem involving the ideal MHD force operator.
For strongly shaped 3D configurations such as stellarators, this problem must be solved numerically. By developing a new numerical tool in Julia for evaluating global (linear) ideal MHD stability in stellarator geometry, we make a vital contribution to the process of optimizing stellarator configurations for fusion energy.
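The underlying algebraic task can be illustrated with Julia's standard library: the discretized stability problem takes the form of a generalized eigenvalue problem A x = λ B x, where a negative eigenvalue signals an unstable mode. A toy example with invented 2×2 matrices (the real stellarator operators are huge and sparse, which is why Krylov subspace methods are used in practice):

```julia
using LinearAlgebra

# Toy generalized eigenvalue problem A x = λ B x, the same algebraic
# structure as the discretized linear ideal MHD stability problem.
# Matrices are invented for illustration only.
A = [2.0  0.0;
     0.0 -1.0]            # stand-in for the discretized force operator
B = [1.0 0.0;
     0.0 2.0]             # stand-in for the positive-definite mass matrix

λ = eigvals(A, B)                     # generalized eigenvalues of (A, B)
unstable = any(x -> real(x) < 0, λ)   # negative eigenvalue ⇒ unstable mode
```

For production-scale problems, the dense `eigvals` call is replaced by iterative Krylov solvers that only need matrix-vector products.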
false
https://pretalx.com/juliacon2023/talk/ADLCE8/
https://pretalx.com/juliacon2023/talk/ADLCE8/feedback/
Online talks and posters
Becoming a research software engineer with Julia
Online Only Lightning Talk
2023-07-26T10:55:00-04:00
10:55
00:03
This talk will present some Julia tools and features that helped an apprentice research software engineer (and newcomer to Julia) adopt good software engineering practices.
juliacon2023-26875-becoming-a-research-software-engineer-with-julia
JuliaCon
Branwen Snelling
en
Before becoming a research software engineer writing Julia, I was a PhD student writing Python, and most of the code I wrote was just for me. This meant I learned Julia and good software engineering practices simultaneously. Thankfully, Julia and its community made that easier. In this talk I will share some of the ways in which using Julia helped me adopt good software engineering practices, including monitoring performance, engaging with SemVer, and writing performant, type-stable code.
false
https://pretalx.com/juliacon2023/talk/UTJDAF/
https://pretalx.com/juliacon2023/talk/UTJDAF/feedback/
Online talks and posters
ModelOrderReduction.jl -- Symbolic-Enhanced Model Simplification
Online Only Talk
2023-07-26T11:00:00-04:00
11:00
00:30
We present ModelOrderReduction.jl -- a Julia package for the automatic simplification of scientific models by way of symbolic computation. As such, ModelOrderReduction.jl enables modelers without detailed knowledge of the domain of model order reduction to automatically generate simplified models that approximate the dominant behavior of otherwise too complex ones.
juliacon2023-26956-modelorderreduction-jl-symbolic-enhanced-model-simplification
SciML
Bowen S. ZhuFlemming Holtorf
en
The computational analysis of complex physicochemical phenomena or networked systems commonly relies on the simulation of detailed large-scale models. The computational burden of simulating such large-scale models, however, frequently renders a range of scientific and engineering tasks like design, control, or uncertainty quantification prohibitively expensive. To recover the tractability of such tasks, it is customary to build a compact, cheap-to-evaluate proxy for the original large-scale model that captures its dominant behaviors accurately, i.e., a reduced-order model. While a wide range of systematic techniques exists for building such reduced-order models, doing so typically remains a tedious, error-prone, and often ad hoc process that frequently requires substantial intrusion into the original model. With ModelOrderReduction.jl we set out to streamline and largely automate this process, keeping active intrusion into the model by the user to a minimum. To that end, we rely on a symbolic representation of the original model specified via ModelingToolkit.jl. The reduction task is then performed on this symbolic representation using the tools provided by Symbolics.jl.
For those who are non-technical, we will discuss how this approach fits into the paradigm of symbolic-numeric computing and as such applies automatic code transformations beyond automatic differentiation that benefit greatly from the features of the Julia programming language.
For modelers, we will show with scientific PDE and ODE models from the realms of chemical engineering and neuroscience how ModelOrderReduction.jl can help you reduce your models and speed up your computations.
For those who are technical, we will dive into greater mathematical detail about the model order reduction techniques that are supported in ModelOrderReduction.jl and why they lend themselves to being automated by way of symbolic computation. We will specifically focus on the automatic generation of efficient discrete empirical interpolators and polynomial chaos expansions as well as the associated reduced order models obtained from Petrov-Galerkin projection.
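As a flavor of the projection-based machinery involved, here is a self-contained sketch of proper orthogonal decomposition (POD), one standard way to obtain a reduced basis from snapshot data via the SVD. The data are synthetic and this is not ModelOrderReduction.jl's actual API:

```julia
using LinearAlgebra

# POD sketch: extract a rank-r basis from a snapshot matrix X whose
# columns are solution snapshots, then project states onto that subspace.
pod_basis(X, r) = svd(X).U[:, 1:r]     # leading r left singular vectors

# Synthetic snapshots that lie exactly in a 2-dimensional subspace.
n, m = 50, 20
v1, v2 = randn(n), randn(n)
X = hcat((rand() * v1 + rand() * v2 for _ in 1:m)...)

V  = pod_basis(X, 2)                   # reduced basis, size n × 2
Xr = V * (V' * X)                      # reconstruction from reduced coordinates
relerr = norm(X - Xr) / norm(X)        # ≈ 0, since rank(X) ≤ 2
```

In a Petrov-Galerkin reduction, a basis like `V` (together with a test basis `W`) projects the full dynamics onto the reduced coordinates.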
false
https://pretalx.com/juliacon2023/talk/HZV83P/
https://pretalx.com/juliacon2023/talk/HZV83P/feedback/
Online talks and posters
Modeling Glacier Ice Flow with Universal Differential Equations
Poster
2023-07-26T11:00:00-04:00
11:00
00:30
We introduce ODINN.jl, an open-source package that uses Universal Differential Equations to model and discover physical processes of climate-glacier interactions. We show how ODINN incorporates tools from the SciML ecosystem (differential equation solvers, automatic differentiation) to recover prescribed laws inside the differential equation describing glacier ice flow, paving the way for functional inversions of empirical laws governing glacier physical processes at a global scale.
juliacon2023-26995-modeling-glacier-ice-flow-with-universal-differential-equations
SciML
/media/juliacon2023/submissions/AZLD3E/ODINN_logo_dtp29qR.png
Facundo Sapienza
en
Global glacier models attempt to simulate different glacier processes in order to model the evolution and response to climate change of all 200,000 glaciers on Earth. Calibrating the model parameters with noisy and sparse observations, coming mostly from satellite data, is a very challenging task. Traditionally, this calibration is performed at a regional or even global level, or sometimes for each glacier individually if enough data are available. However, no global information is used to derive general laws governing the spatiotemporal variability of those parameters. With the increasing availability of remote-sensing datasets with global coverage, new opportunities arise to discover empirical laws describing the physical processes of climate-glacier interactions. There are two main reasons why this is technically difficult to achieve: (1) the computational cost of modelling massive glacier datasets and solving the differential equations that describe their dynamics; and (2) the statistical challenge of making constrained parameter or functional inversions from real satellite observations covering glaciers in widely diverse climates and topographies. Scientific Machine Learning is a modelling framework that can address both limitations.
We introduce ODINN.jl, an open-source model for global glacier modelling making use of tools from the Scientific Machine Learning Julia ecosystem. ODINN uses Universal Differential Equations (UDEs) to learn subparts of a differential equation governing glacier ice flow. The full code is differentiable using Zygote.jl, which allows gradient-based optimization for the parameters of the neural network embedded inside the differential equation. ODINN exploits the latest generation of glacier ice surface velocities and geodetic mass balance remote sensing products, as well as many preprocessing tools from the Open Global Glacier Model (OGGM) in Python. The retrieval and preprocessing of these datasets is done in ODINN using PyCall.jl to run Python code from OGGM.
We showcase the implementation of a 2D Shallow Ice Approximation for glacier ice dynamics (mathematically equivalent to a 2D heat equation with a spatio-temporally dependent diffusivity coefficient) and a temperature-index mass balance model per glacier (i.e., the source term). We then show how the model successfully infers parameters of ice rheology based on prescribed synthetic laws. This simple example illustrates the first steps of a new global glacier modelling framework in Julia that allows the estimation of global empirical laws for the physical parameters. Furthermore, the lessons learned while implementing UDEs for stiff nonlinear diffusivity PDEs are applicable to other domains, particularly in Earth sciences where the input data consist of gridded remote sensing products. To conclude, we also discuss some of the main challenges and limitations of the current SciML suite for implementing UDEs for 2D physical processes with real observations.
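For reference, the Shallow Ice Approximation solved here can be written in its standard textbook form (notation ours, not necessarily ODINN's) as a nonlinear diffusion equation for the ice thickness H with surface elevation S = b + H:

```latex
\frac{\partial H}{\partial t} = \nabla \cdot \left( D \,\nabla S \right) + \dot{b},
\qquad
D = \Gamma\, H^{\,n+2}\, \lVert \nabla S \rVert^{\,n-1},
```

where the surface mass balance ḃ is the source term, Γ lumps together the ice rheology and physical constants, and n ≈ 3 is Glen's flow-law exponent; the dependence of D on H and ∇S is what makes the diffusivity spatio-temporally varying and the PDE stiff and nonlinear.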
Work in collaboration with Jordi Bolibar, Redouane Lguensat, Bert Wouters and Fernando Pérez.
false
https://pretalx.com/juliacon2023/talk/AZLD3E/
https://pretalx.com/juliacon2023/talk/AZLD3E/feedback/
26-100
Breakfast
Breakfast
2023-07-27T07:00:00-04:00
07:00
01:30
Get a delicious breakfast.
juliacon2023-35794-breakfast
JuliaCon
en
Get a delicious continental breakfast and fresh coffee served directly in the Stata at JuliaCon 2023.
false
https://pretalx.com/juliacon2023/talk/ZYVZNP/
https://pretalx.com/juliacon2023/talk/ZYVZNP/feedback/
26-100
Keynote: Rumman Chowdhury
Keynote
2023-07-27T09:00:00-04:00
09:00
00:45
Dr. Chowdhury currently runs Parity Consulting, Parity Responsible Innovation Fund, and is a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University. She is also a Research Affiliate at the Minderoo Center for Democracy and Technology at Cambridge University and a visiting researcher at the NYU Tandon School of Engineering.
juliacon2023-28062-keynote-rumman-chowdhury
JuliaCon
en
Dr. Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She is a pioneer in the field of applied algorithmic ethics, creating cutting-edge socio-technical solutions for ethical, explainable and transparent AI. She is an active contributor to discourse around responsible technology with bylines in the Atlantic, Forbes, Harvard Business Review, Sloan Management Review, MIT Technology Review and VentureBeat.
false
https://pretalx.com/juliacon2023/talk/GRWSZN/
https://pretalx.com/juliacon2023/talk/GRWSZN/feedback/
26-100
Morning Break Day 2 Room 7
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28136-morning-break-day-2-room-7
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/SAYHDW/
https://pretalx.com/juliacon2023/talk/SAYHDW/feedback/
26-100
Welcome to Genie 6!
Talk
2023-07-27T10:30:00-04:00
10:30
00:30
Learn about the new productivity features and performance improvements in Genie 6, the new major Genie release of 2023!
juliacon2023-26972-welcome-to-genie-6-
JuliaCon
/media/juliacon2023/submissions/E98J3J/b322ed80-bc5a-11e9-807a-9b53749c40ef_PX5EWQV.png
Adrian Salceanu
en
Following on its yearly major release cycle, Genie 6 brings a wealth of new features and performance improvements. This talk will take you through a whirlwind tour of the best additions to Genie that you cannot miss! We'll cover everything from the new routing features, powerful additions to the UI libraries, extended support for cookies and sessions, improved caching, and enhanced database support -- to exciting new features like static website generation.
We'll see how to take advantage of these exciting updates in your apps, but also how to navigate the breaking changes and successfully upgrade your Genie app from version 5.
false
https://pretalx.com/juliacon2023/talk/E98J3J/
https://pretalx.com/juliacon2023/talk/E98J3J/feedback/
26-100
KomaMRI.jl: Framework for MRI Simulations with GPU Acceleration
Talk
2023-07-27T11:00:00-04:00
11:00
00:30
Koma is an MRI simulator utilizing CPU and GPU parallelization to solve the Bloch equations. Our simulator targets researchers and students, offering an easy-to-use GUI. The accuracy and speed of our simulator were compared against two open-source alternatives: JEMRIS (C++, using an ODE solver) and MRiLab (C++ and CUDA). The results show that Koma can simulate with high accuracy (MAEs below 0.1% compared to JEMRIS) and better GPU performance than MRiLab.
juliacon2023-27099-komamri-jl-framework-for-mri-simulations-with-gpu-acceleration
Biology and Medicine
/media/juliacon2023/submissions/YEUNMW/Koma_banner_TiQ6Nh7.png
Carlos Castillo Passi
en
I-INTRODUCTION
Numerical simulations are an important tool for analyzing and developing new acquisition and reconstruction methods in Magnetic Resonance Imaging (MRI). Simulations allow us to isolate and study phenomena by removing unwanted effects such as hardware imperfections. Additionally, with the increasing use of Machine Learning models, simulation becomes even more relevant, as it can be used to generate synthetic data for training, or to construct signal dictionaries to infer quantitative measurements from the acquired data. Moreover, simulations are an excellent tool for education and training, as hands-on experience is a great way to assimilate the theoretical and practical components of MRI.
We believe an ideal simulator should be general, fast, easy to use, extensible, open-source, and cross-platform. In this work, we developed an MRI simulation framework built from the ground up to satisfy these requirements. We chose the Julia programming language because its syntax is similar to MATLAB's (widely used by the MRI community), it has excellent GPU support, and its speed is comparable to C/C++. This has been shown to be the case in other MRI applications such as image reconstruction with MRIReco.jl, where the authors achieved speeds on par with state-of-the-art toolboxes.
We called our simulator “Koma”, inspired by the Japanese word for spinning top, as its physics resemble MRI’s.
II-METHODS
Koma simulates the magnetization of the spins in a virtual object by solving the Bloch equations. The solution of these equations for a single spin is independent of the state of the other spins in the system, a key feature that enables parallelization.
Our simulator uses a first-order splitting method to simplify the solution of the Bloch equations. This mathematically reflects the intuition of separating the Bloch equations into a two-step process, rotation and relaxation, for each time step. This method is exact in many common cases in MRI but, in general, has O(dt^3) convergence.
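The two-step splitting can be illustrated with a minimal single-spin sketch (a hedged illustration simplified to precession about the z-axis; `bloch_step` is an invented name, not Koma's actual kernel):

```julia
# One splitting step for a single spin: rotate about z by the accumulated
# phase ϕ, then apply exact T1/T2 relaxation towards equilibrium M0.
function bloch_step(M, ϕ, T1, T2, dt; M0 = 1.0)
    # Step 1: precession (rotation about the z-axis by angle ϕ)
    c, s = cos(ϕ), sin(ϕ)
    Mx = c * M[1] + s * M[2]
    My = -s * M[1] + c * M[2]
    # Step 2: relaxation (transverse decay with T2, longitudinal recovery with T1)
    E1, E2 = exp(-dt / T1), exp(-dt / T2)
    return [Mx * E2, My * E2, M[3] * E1 + M0 * (1 - E1)]
end

# Evolve transverse magnetization for 100 steps of dt = 1 ms with
# T1 = 1 s and T2 = 0.1 s: the transverse component decays with T2
# while the longitudinal component recovers towards M0 with T1.
M = reduce((m, _) -> bloch_step(m, 0.01, 1.0, 0.1, 1e-3), 1:100;
           init = [1.0, 0.0, 0.0])
```

Because rotation and relaxation are each applied exactly within a step, the only error comes from splitting them, which is what gives the O(dt^3) local behaviour mentioned above.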
For CPU parallelization, we used the Threads.@threads macro and the ThreadsX.jl package. To ensure thread safety, we stored the acquired signals in a separate matrix per thread and summed them into a single signal matrix afterwards. For GPU support, we operate on CuArrays from CUDA.jl.
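The per-thread buffering pattern can be sketched as follows (a minimal illustration, not Koma's actual code; `threaded_signal` and `contrib` are invented names):

```julia
using Base.Threads

# Spins are split into one chunk per thread; each chunk accumulates into
# its own buffer, and the buffers are summed at the end, avoiding data
# races without locks.
function threaded_signal(contrib, nspins, nsamples)
    nchunks = nthreads()
    buffers = [zeros(ComplexF64, nsamples) for _ in 1:nchunks]
    @threads for c in 1:nchunks
        buf = buffers[c]                 # private to this chunk
        for s in c:nchunks:nspins, t in 1:nsamples
            buf[t] += contrib(s, t)      # spin s's contribution to sample t
        end
    end
    return reduce(+, buffers)            # merge the partial signals
end

# Example: every spin contributes a unit phasor to every sample,
# so each sample sums to the number of spins.
signal = threaded_signal((s, t) -> 1.0 + 0.0im, 1000, 8)
```

Indexing buffers by an explicit chunk id (rather than `threadid()`) keeps the pattern race-free even when tasks migrate between threads.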
We ensured type stability to enable high performance. Moreover, we took special care to perform in-place operations and to avoid unnecessary variable copies by using the @view macro in the simulation functions. Finally, we used NVIDIA Nsight Systems to profile GPU performance via the NVTX.@range and CUDA.@profile macros.
For the GUI we used Blink.jl, a framework for developing applications using web technologies. The GUI allows the user to easily plot the sequence, 𝑘-space, phantom, acquired signal, and reconstructed images. Plots are made with the PlotlyJS.jl package, which also allows exporting them to .svg files.
To test the accuracy of our simulator, we compared Koma with the latest version of JEMRIS (v2.9), which has itself been validated against real MRI acquisitions. We also compared the speed of our simulations against MRiLab, an open-source GPU-accelerated MRI simulator. Finally, to compare ease of use for first-time users, we designed a pilot experience with students of an Imaging course in Engineering, where they learned some fundamentals of MRI.
III-RESULTS
For the simulated scenarios we obtained accurate results, with MAEs below 0.1% when compared to JEMRIS. When we tested simulation speed against MRiLab, we found slower CPU performance, but we were 2.6 times faster on the GTX 1650 Ti and 6.0 times faster on the RTX 2080 Ti.
We think the CPU results show that we still perform unwanted synchronizations between threads, a problem our GPU implementation does not suffer from, as it uses Nthreads=1 by default.
In the student experience, participants reported no problems installing Julia (mean 4.7/5), Koma (mean 4.2/5), JEMRIS (mean 3.8/5), and MRiLab (mean 4.3/5). Regarding installation time, most students were able to install Koma (mean 13.2 min), JEMRIS (mean 33.8 min), and MRiLab (mean 16.9 min) in under 40 minutes.
Their first simulation took them longer in JEMRIS (mean 19 min) and MRiLab (mean 13.9 min) than in Koma (mean 5.7 min). 31% of the students could not simulate with MRiLab (6 students using macOS), so we used only Koma and JEMRIS for the rest of the activities.
The reported median simulation speeds were 8.4 times faster with Koma than with JEMRIS, and 65% of the students recommended Koma over JEMRIS.
IV-CONCLUSIONS
In this work, we presented a new general MRI simulator programmed in Julia. This simulator is fast, easy to use, extensible, open-source, and cross-platform. These characteristics were achieved by choosing the appropriate technologies to write easy-to-understand and fast code with a flexible GUI.
false
https://pretalx.com/juliacon2023/talk/YEUNMW/
https://pretalx.com/juliacon2023/talk/YEUNMW/feedback/
26-100
Towards developing a production app with Julia
Talk
2023-07-27T11:30:00-04:00
11:30
00:30
When starting a company built on mathematical concepts, Julia seems like an obvious choice because of its capabilities for rapid prototyping, rewriting, and pushing for performance early. This talk presents our journey to build a Julia production app. In particular, we present how we use a monorepo within a distributed team and how we set up our continuous-integration infrastructure. Finally, we introduce an open source project which aims to make it easier to write Julia serverless apps.
juliacon2023-26942-towards-developing-a-production-app-with-julia
Julia Base and Tooling
/media/juliacon2023/submissions/3QFPST/PlantingSpace_logo_onWhite_800x300px_ddmtFvl.png
Steffen RidderbuschCédric Belmant
en
The Julia language enables rapid progress even with a small number of software developers, as evidenced by its rich ecosystem, which makes it a great language for startups building a product based on state-of-the-art technologies drawing from advanced mathematical concepts. However, there are challenges in building a serverless web application, notably regarding hosting, packaging and serving the final product with minimal latency.
In this talk, we present a few of our development practices and our experience with using Julia to build a web service. Firstly, we will present the benefits of using a monorepo with a substantial number of submodules in a fully-remote distributed team.
Secondly, we show how we organised our pipelines on GitLab and how we keep everything as fast as possible to ensure testing consistency and prompt developer feedback. We address the challenges of doing multiple daily deployments that depend on time-consuming custom system image builds, as well as of identifying and debugging issues.
Thirdly, we show our use of several specialised Docker and system images, which, combined with several unregistered packages, poses additional challenges in dependency management.
To conclude our talk, we introduce an [open source project](https://gitlab.com/plantingspace/awsexample) that can be used as a starting point to implement Julia web services that can be deployed to AWS Lambda.
false
https://pretalx.com/juliacon2023/talk/3QFPST/
https://pretalx.com/juliacon2023/talk/3QFPST/feedback/
26-100
Lunch Day 2 (Room 7)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28079-lunch-day-2-room-7-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/QYLWAY/
https://pretalx.com/juliacon2023/talk/QYLWAY/feedback/
26-100
Running Julia code baremetal on an Arduino
Talk
2023-07-27T14:00:00-04:00
14:00
00:30
Through the use of GPUCompiler.jl and LLVM.jl, it is possible to cross-compile Julia code to backends not officially supported by Julia itself. One of these is the AVR backend, the architecture used by the Arduino family of microcontrollers. This talk explores some experiments in compiling Julia code to AVR and running it baremetal on an Arduino, as well as looking into challenges in making Julia more suited to cross compilation.
juliacon2023-26101-running-julia-code-baremetal-on-an-arduino
Julia Base and Tooling
/media/juliacon2023/submissions/K3DPHH/talk_image_RtNNwQb.png
Valentin Bogad
en
Julia aims to be a single language for both prototyping research code and for highly optimized, high-performance deployed code, and this goal moves closer to completion year after year. One of the remaining challenges is cross compilation to CPU architectures other than that of the running Julia session. This talk will show some exploration in compiling Julia code for an AVR target and subsequently running that code, without a runtime, on an Arduino. For this, two new support packages are used: [AVRDevices.jl](https://github.com/Seelengrab/AVRDevices.jl), for CPU-specific definitions in the AVR family, and [AVRCompiler.jl](https://github.com/Seelengrab/AVRCompiler.jl), for the cross-compilation process. Afterwards, we'll look at some challenges of developing with such static compilation in mind, what could be done to make the experience smoother and easier to debug, and which features of Julia are unavailable in such a restricted environment.
false
https://pretalx.com/juliacon2023/talk/K3DPHH/
https://pretalx.com/juliacon2023/talk/K3DPHH/feedback/
26-100
Making a Julia release
Talk
2023-07-27T14:30:00-04:00
14:30
00:30
The Julia release process matters: it is how all the work that goes into the Julia language gets delivered to users.
This talk details the steps taken to ensure that Julia releases are of high quality, with few regressions in both correctness and performance.
juliacon2023-26755-making-a-julia-release
Julia Base and Tooling
Kristoffer Carlsson
en
The Julia release process is the process by which all the work that goes into the Julia language finally gets delivered to users in the form of an official release. Faults and regressions in Julia releases lead to confusion, disappointment, and potentially reduced trust in Julia as a language. Therefore, there are quite a few steps in the release process designed to catch potential issues before an official release is made.
This talk will show how these various quality checks are done in practice, with some examples of regressions they have found. It will also detail areas where improvements could be made, as well as how the general Julia community can help make creating new high-quality releases easier for the release managers.
false
https://pretalx.com/juliacon2023/talk/SSRBPL/
https://pretalx.com/juliacon2023/talk/SSRBPL/feedback/
26-100
Comparison of Choice models in Julia, Python, R, and STATA
Talk
2023-07-27T15:30:00-04:00
15:30
00:30
The proposal examines which Covid vaccine would be selected by a target group of patients, in a cross-cultural comparison of the US and India, as features related to the vaccines vary. The data is analyzed using packages from Julia, Python, R, and STATA. The objective is to present a comparison of tools while analyzing extremely relevant and time-critical data related to Covid vaccinations.
juliacon2023-27122-comparison-of-choice-models-in-julia-python-r-and-stata
Statistics and Data Science
Srinivas K Datta
en
As the initial vaccines became available in the fight against Covid, data were collected from a total of 800 patients (395 in the US, 405 in India) on their preferences among the different vaccines available. Key attributes related to efficacy, safety, dosing frequency, boosters, and cost were considered to identify the key factors, and their levels, that drove patients' choice in selecting a vaccine. Depending on their interest, domain, and exposure, academic and professional modelers and researchers prefer Julia, Python, R, or STATA for advanced modeling. The proposal provides a how-to in Discrete Choice Modeling and a comparison of results using each of these languages and tools.
false
https://pretalx.com/juliacon2023/talk/LN3C9W/
https://pretalx.com/juliacon2023/talk/LN3C9W/feedback/
26-100
Keynote: Stephen Wolfram
Keynote
2023-07-27T16:15:00-04:00
16:15
00:45
Dr. Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; the originator of the Wolfram Physics Project; and the founder and CEO of Wolfram Research.
juliacon2023-28063-keynote-stephen-wolfram
JuliaCon
en
Over the course of more than four decades, he has been a pioneer in the development and application of computational thinking—and has been responsible for many discoveries, inventions and innovations in science, technology and business. Based on both his practical and theoretical thinking, Dr. Wolfram has emerged as an authority on the implications of computation and artificial intelligence for society and the future, and the importance of computational language as a bridge between the capabilities of computation and human objectives.
Dr. Wolfram has been president and CEO of Wolfram Research since its founding in 1987. In addition to his corporate leadership, Wolfram is deeply involved in the development of the company's technology, personally overseeing the functional design of the company's core products on a daily basis, and constantly introducing new ideas and directions.
false
https://pretalx.com/juliacon2023/talk/VJRZDF/
https://pretalx.com/juliacon2023/talk/VJRZDF/feedback/
32-082
Morning Break Day 2 Room 1
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28130-morning-break-day-2-room-1
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/DBXJAN/
https://pretalx.com/juliacon2023/talk/DBXJAN/feedback/
32-082
The state of JuMP
Talk
2023-07-27T10:30:00-04:00
10:30
00:30
JuMP is a modeling language and collection of supporting packages for mathematical optimization in Julia. JuMP makes it easy to formulate and solve linear programming, semidefinite programming, integer programming, convex optimization, constrained nonlinear optimization, and related classes of optimization problems.
In this talk, we discuss the state of JuMP, preview some recently added features, and discuss our plans for the future.
juliacon2023-30210-the-state-of-jump
JuMP
Miles Lubin
en
JuMP is a modeling language and collection of supporting packages for mathematical optimization in Julia. JuMP makes it easy to formulate and solve linear programming, semidefinite programming, integer programming, convex optimization, constrained nonlinear optimization, and related classes of optimization problems.
In this talk, we discuss the state of JuMP, preview some recently added features, and discuss our plans for the future.
false
https://pretalx.com/juliacon2023/talk/XZP9U8/
https://pretalx.com/juliacon2023/talk/XZP9U8/feedback/
32-082
Solving the merchant collocated facilities with JuMP
Lightning talk
2023-07-27T11:00:00-04:00
11:00
00:10
In this presentation, we showcase using JuMP to formulate and solve a range of optimization problems related to the location and market bidding of energy management systems.
juliacon2023-28928-solving-the-merchant-collocated-facilities-with-jump
JuMP
Jose Daniel Lara
en
The development of new clean-generation technologies also leads to new plant-level architectures that combine several generation and storage assets behind the point of connection. These co-located generation resources (hybrid systems) primarily operate as merchant assets that employ automated market bidding models and internal Energy Management Systems (EMS) to comply with the operator's signals. Formulating an optimal bidding model requires embedding the EMS control model into the bidding algorithm, resulting in a bi-level optimization problem. In this presentation, we first showcase using JuMP to formulate and solve this problem effectively for multiple merchant systems, along with the bidding outcomes under different model formulations. Second, the bidding outcomes are integrated into a PowerSimulations.jl (also built with JuMP) simulation to study the system-level effects of the various merchant bids and the interactions between market-clearing models and the embedded EMS model. We will showcase simulations conducted in the RTS system considering different levels of merchant hybrid system participation. The presentation provides the following specific insights on JuMP usage: 1) the formulation of specialized bi-level problems with custom cuts to solve the merchant hybrid system bidding problem, 2) the integration of a modular model within a complex simulation workflow supported by JuMP in PowerSimulations.jl, and 3) accelerating the solution of power systems operations simulations that employ agent optimization problems using JuMP.
false
https://pretalx.com/juliacon2023/talk/Y9PV3X/
https://pretalx.com/juliacon2023/talk/Y9PV3X/feedback/
32-082
How JuMP enables abstract energy system models
Lightning talk
2023-07-27T11:10:00-04:00
11:10
00:10
In this talk we present feedback on our experience developing a large-scale energy system model, particularly concerning garbage collection pressure during model creation and methods for exploring parametric models.
juliacon2023-28922-how-jump-enables-abstract-energy-system-models
JuMP
Stefan Strömer
en
Energy system models are at the core of today’s energy-related research and have been for decades. However, global challenges, such as those related to climate and politics, have intensified the necessity for ever-evolving models, as well as the requirements regarding features and explainability. As energy sectors continue to become increasingly interconnected across different regions, model complexity has surpassed the advancements in computational performance during the last decade. In response, a diverse range of approaches, including various aggregation techniques and specialized solve methods, have been applied to address the issue of complexity, with varying degrees of popularity.
In recent years, a novel energy system model has been developed, which enables the representation of arbitrary systems as part of an abstract commodity (energy) flow model, thereby lowering entry barriers for researchers with diverse backgrounds. This model leverages cutting-edge operations research approaches, which have not yet been widely adopted within the energy sector. Leveraging the JuMP modeling language/framework facilitates scaling the model to continental-scale systems and outperforming many existing models.
Although the model is currently closed source, efforts are being made to address the remaining hurdles and release it as an open-source software in the near future. During the development phase, certain issues such as concerns surrounding garbage collection pressure during model creation or methods for exploring parametric models using JuMP have been raised and addressed through discussions with the community and the JuMP-team.
This talk will primarily address the following topics regarding the novel energy system model: (1) the unique features of its underlying model structure that distinguish it from conventional energy system models, (2) the advantages of using JuMP for solving various types of energy system models, (3) the automated conversion of optimization models formulated using JuMP into LaTeX code to facilitate model validation and formulation extraction, and (4) internal details pertaining to the model architecture. Please note that these points may be subject to change, as the model is still in an early stage of development and may incorporate additional novel features.
false
https://pretalx.com/juliacon2023/talk/C7DFSF/
https://pretalx.com/juliacon2023/talk/C7DFSF/feedback/
32-082
TimeStruct.jl: multi horizon time modelling in JuMP
Lightning talk
2023-07-27T11:20:00-04:00
11:20
00:10
This talk will present TimeStruct.jl, a package developed to support the modeling of optimization problems in JuMP involving time structures with different resolutions at the operational and strategic levels. The package also supports multiple operational scenarios and strategic tree structures to model uncertainty at both levels.
juliacon2023-26819-timestruct-jl-multi-horizon-time-modelling-in-jump
JuMP
/media/juliacon2023/submissions/FCFKS3/regtree_REyJROB.png
Truls Flatberg
en
The TimeStruct.jl package provides functionality for the efficient development of JuMP models with a flexible time structure. The package provides multiple time structures with a uniform interface from the modeling perspective. A time structure is iterable over its time periods, and these periods can be used to look up data values from associated time profiles.
The time structures have two levels, strategic and operational, with support for multiple operational scenarios and an associated probability for each scenario. A typical application is energy models where, e.g., you can have multiple scenarios for wind availability. At the strategic level it is also possible to represent uncertainty by having a tree structure for the strategic decisions.
Using the TimeStruct.jl package allows for efficient development of optimization models: a model can be validated on simple time structures before switching to more complex time structures, without modifying the model implementation.
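The uniform-interface pattern described above can be sketched with minimal hypothetical types (`SimplePeriods`, `FixedValue`, and `PeriodValues` are invented stand-ins for illustration, not TimeStruct.jl's actual API):

```julia
# A time structure is something iterable over its time periods.
struct SimplePeriods
    n::Int            # number of time periods
    duration::Float64 # duration of each period
end
Base.iterate(ts::SimplePeriods, t = 1) = t > ts.n ? nothing : (t, t + 1)
Base.length(ts::SimplePeriods) = ts.n

# Time profiles map a period to a data value via a shared indexing interface.
abstract type TimeProfile end
struct FixedValue <: TimeProfile      # constant value for all periods
    value::Float64
end
struct PeriodValues <: TimeProfile    # one value per period
    values::Vector{Float64}
end
Base.getindex(p::FixedValue, t::Int) = p.value
Base.getindex(p::PeriodValues, t::Int) = p.values[t]

# Model code is written once against the interface and works unchanged
# with either profile type:
total_demand(periods, demand) = sum(demand[t] * periods.duration for t in periods)

periods = SimplePeriods(4, 1.0)
total_demand(periods, FixedValue(10.0))                       # constant demand
total_demand(periods, PeriodValues([5.0, 10.0, 15.0, 20.0]))  # varying demand
```

Swapping `SimplePeriods` for a richer structure (scenarios, strategic levels) would leave `total_demand` untouched, which is the point of the design.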
false
https://pretalx.com/juliacon2023/talk/FCFKS3/
https://pretalx.com/juliacon2023/talk/FCFKS3/feedback/
32-082
Designing a flexible energy system model using multiple dispatch
Talk
2023-07-27T11:30:00-04:00
11:30
00:30
Energy system models are generally written in algebraic modelling languages like AMPL or GAMS. This limits the extensibility of the models due to the structure of the languages, and license costs can be a barrier to entry. Julia with JuMP is an excellent alternative, as it is both open source and allows for flexibility in model structure through multiple dispatch. The model presented is a novel energy system model framework with easy extensibility and high flexibility.
juliacon2023-26869-designing-a-flexible-energy-system-model-using-multiple-dispatch
JuMP
Julian StrausLars Hellemo
en
We present a modular framework for energy systems analyses. A specific goal of the development is to address the challenges of analyzing multi-energy-carrier systems with sector coupling, bringing together modelling of technologies that are traditionally modelled separately, with a large span in the need for temporal and geographic resolution, and where accurate models are often non-linear and come with high computational cost.
The model is based on existing (proprietary) models and is planned to become an open-source framework. It will serve as a basis for new extensions in the future and is designed to be modular and flexible to facilitate modelling at different resolutions and levels of technological detail, leveraging Julia’s multiple dispatch.
#### Time representation - TimeStruct.jl
Representation of time is provided by a first package, TimeStruct.jl, which abstracts the representation of time by providing time-related types and methods, thereby separating the technology description from the time structure. TimeStruct.jl makes it simple to change the time resolution and provides types for efficient representation of time series over different time structures, including the multi-level structures common in optimization models for investment analyses, which allow investments only in a subset of time periods. Iteration over the time periods is achieved through iterators and the application of multiple dispatch. TimeStruct.jl also implements both operational and strategic uncertainty, allowing for the formulation of stochastic problems.
#### The base package - EnergyModelsBase.jl
The core of the energy model is a base package that provides the fundamental building blocks of energy systems, such as production, conversion, and consumption of energy by different processes, implemented using JuMP. The base package focuses on operational models and can be easily extended by other packages to add new functionality (e.g., support for investments/capacity expansion or geography) or to add more precise technology descriptions, e.g., using mixed-integer or non-linear production functions or constraints, through the application of multiple dispatch on both the model type and the technology nodes. Implementing improved descriptions does not require any modifications to the base package, thanks to Julia's multiple dispatch. This modularity simplifies future development of, e.g., improved technology descriptions due to the separation between technology and framework descriptions. In addition, it allows for fast development of required technology descriptions.
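The dispatch-based extension mechanism can be sketched as follows (hypothetical node types and function names invented for illustration, not the framework's actual API):

```julia
# Base package: an abstract node type and a default, linear description.
abstract type AbstractNode end

struct LinearConversion <: AbstractNode
    efficiency::Float64
end
output(n::LinearConversion, input) = n.efficiency * input

# Extension package: a more precise piecewise description is added as a
# new type and a new method, without touching the base package.
struct PiecewiseConversion <: AbstractNode
    breakpoints::Vector{Float64}   # lower bound of each segment
    efficiencies::Vector{Float64}  # efficiency on each segment
end
function output(n::PiecewiseConversion, input)
    i = something(findlast(b -> b <= input, n.breakpoints), 1)
    return n.efficiencies[i] * input
end

# Framework code dispatches to the right description automatically.
total_output(nodes, input) = sum(output(n, input) for n in nodes)

nodes = [LinearConversion(0.9), PiecewiseConversion([0.0, 1.0], [0.5, 0.8])]
total_output(nodes, 2.0)
```

The base package only ever sees `AbstractNode`; adding a technology is adding a type plus methods, which is the modularity the abstract claims.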
#### Example extension - EnergyModelsInvestment.jl
While many larger optimization models in practice only provide users with a limited set of investment options (e.g., discrete or continuous), EnergyModelsInvestment.jl provides types for different classes of investments that can be used in different combinations, e.g., discrete, continuous, or semi-continuous, and provides extra functionality for dependencies between projects (e.g., not allowing investment in project A until project B is in operation). The modelling of investment options is separated from the modelling of technology and time structure, allowing easy customization to different use cases. The use of custom types and multiple dispatch also facilitates more efficient model creation, by only generating the necessary (binary) variables for the chosen investment options.
false
https://pretalx.com/juliacon2023/talk/TSJYFP/
https://pretalx.com/juliacon2023/talk/TSJYFP/feedback/
32-082
Learning JuMP by example
Talk
2023-07-27T12:00:00-04:00
12:00
00:30
Get an up-to-date overview of the modelling capabilities of JuMP through a number of worked examples. I'll cover the types of optimization problems you can solve effectively, give lots of unsolicited advice and briefly look at some extensions.
juliacon2023-28923-learning-jump-by-example
JuMP
James D Foster
en
Get an up-to-date overview of the modelling capabilities of JuMP through a number of worked examples. I'll cover the types of optimization problems you can solve effectively, give lots of unsolicited advice and briefly look at some extensions.
false
https://pretalx.com/juliacon2023/talk/SMUP8H/
https://pretalx.com/juliacon2023/talk/SMUP8H/feedback/
32-082
Lunch Day 2 (Room 1)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28073-lunch-day-2-room-1-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/NLQFQJ/
https://pretalx.com/juliacon2023/talk/NLQFQJ/feedback/
32-082
Exploring topological invariants of Quantum systems in Julia
Lightning talk
2023-07-27T14:00:00-04:00
14:00
00:10
This talk explores the capabilities of the Julia ecosystem for topological physics, specifically the calculation of topological invariants for periodically driven non-equilibrium quantum systems. Often, these quantities are related to topological properties of complicated geometric objects. Julia's ability to streamline these calculations and easily integrate mathematical libraries via multiple dispatch offers a powerful toolbox that simplifies the exploration of new phenomena considerably.
juliacon2023-27024-exploring-topological-invariants-of-quantum-systems-in-julia
Quantum
Volker Karle
en
In my current research, I focus on the evaluation of non-Abelian topological invariants in periodically driven non-equilibrium quantum systems. These invariants offer a rich phenomenology and can give rise to new and unexpected dynamic behavior in current-day experiments. They are related to the properties of quasi-energy manifolds that form knots in space and time. However, their calculation can be very challenging and computationally demanding.
To address these challenges, I have been utilizing the Julia ecosystem, which offers a powerful toolset for numerical calculations. Julia's ability to easily integrate mathematical libraries and perform multiple dispatch has enabled me to streamline the calculation of these topological invariants.
One of the key features of non-Abelian topological invariants is that they are not captured by traditional static and field-theoretic approaches. In this work, I will focus on a specific class of non-Abelian topological invariants known as the Euler invariants. These invariants are related to the topological properties of real gauge fields and have been shown to play an important role in the behavior of certain materials. By evaluating these invariants numerically, I aim to gain a deeper understanding of their properties and their potential applications in the field of quantum physics. In particular, I will demonstrate how the degeneracies of the quasi-energy bands give rise to an implicitly defined manifold with multiple components that form complicated knots. To evaluate the topological invariants, one has to calculate integrals of the wave function on the knotted structure of the manifold. By utilizing DifferentialEquations.jl, Manifolds.jl, and ForwardDiff.jl, I am able to parallelize these calculations and explore the space of topological phase transitions efficiently. I will present how these packages can work together effectively without going into details.
In summary, my research aims to utilize the Julia ecosystem to evaluate non-Abelian topological invariants numerically, with the goal of gaining a deeper understanding of the exotic material properties they give rise to. The results of this research have the potential to lead to new insights and applications in the field of quantum physics.
false
https://pretalx.com/juliacon2023/talk/GBD87E/
https://pretalx.com/juliacon2023/talk/GBD87E/feedback/
32-082
GreenFunc.jl: A Toolbox for Quantum Many-Body Problems.
Lightning talk
2023-07-27T14:10:00-04:00
14:10
00:10
GreenFunc.jl is a powerful package that offers a solution to the complex computational challenges of quantum many-body systems. The package is developed using native Julia language which offers both speed and flexibility. GreenFunc.jl implements state-of-the-art algorithms for solving quantum many-body problems using a Green's function approach, making it an invaluable tool for researchers in fields such as material design, high-temperature superconductivity, and quantum information technology.
juliacon2023-27010-greenfunc-jl-a-toolbox-for-quantum-many-body-problems-
Quantum
Xiansheng CaiTao Wang
en
In this talk, we will introduce GreenFunc.jl, a powerful tool for solving complex quantum many-body problems. Developed under the Numerical Effective Field Theory (NEFT) project, which aims to develop modern quantum field theory frameworks for modeling real-world problems, GreenFunc.jl is based on a set of more fundamental packages such as Lehmann.jl, CompositeGrids.jl, and BrillouinZoneMeshes.jl. We will demonstrate how it can be used to analyze the SYK model, which is crucial for understanding the mechanism of high-temperature superconductivity and important for black hole physics. If time allows, we will also showcase its ability to explore ultralow-temperature superconductivity, which was not possible to calculate before. Join us to learn how GreenFunc.jl can help you tackle challenging quantum many-body problems in the fields of materials science, condensed matter physics, high energy physics, and quantum information science. Detailed tutorials and the source code can be found at https://github.com/numericalEFT/GreenFunc.jl and https://github.com/numericalEFT.
false
https://pretalx.com/juliacon2023/talk/EQ9BSG/
https://pretalx.com/juliacon2023/talk/EQ9BSG/feedback/
32-082
Faster Simulation of Quantum Entanglement with BPGates.jl
Lightning talk
2023-07-27T14:20:00-04:00
14:20
00:10
**BPGates.jl** is a tool for extremely fast simulation of quantum circuits, as long as the circuit is limited to only performing entanglement purification over Bell pairs.
juliacon2023-27016-faster-simulation-of-quantum-entanglement-with-bpgates-jl
Quantum
Stefan Krastanov
en
Quantum dynamics is famously infeasible to simulate efficiently by classical hardware. However, there are important classes of processes that can be simulated efficiently: Gaussian optics and Clifford circuits being the prototypical examples. In particular, the ease of simulating Clifford circuits was crucial for the development of quantum error correcting codes.
The purification of entangled Bell pairs can already be efficiently (polynomially) simulated by Clifford circuits. However, if we restrict ourselves to only purification circuits, not general Clifford circuits, the simulation can be even faster, both asymptotically and practically.
**BPGates.jl** implements this new simulation algorithm, providing for simulating bilateral quantum gates in 𝒪(1) asymptotic time (and ns wall time), instead of the typical 𝒪(n) complexity. We introduce the new algorithm and its implementation, and discuss applications, including the immense effect it has had on simulating and optimizing entanglement purification circuits.
false
https://pretalx.com/juliacon2023/talk/UANL79/
https://pretalx.com/juliacon2023/talk/UANL79/feedback/
32-082
Surrogatising quantum spin systems using reduced basis methods
Talk
2023-07-27T14:30:00-04:00
14:30
00:30
Quantum spin systems are a central topic in condensed matter physics. Their simplicity enables systematic study of their quantum properties, which has notable applications in, e.g., quantum computing. With ReducedBasis.jl we provide a Julia package which uses reduced-basis methods to accelerate the modelling of parametrised eigenvalue problems, such as those typical of quantum spin Hamiltonians. The package integrates with ITensors.jl, enabling treatment using state-of-the-art tensor network methods.
juliacon2023-26288-surrogatising-quantum-spin-systems-using-reduced-basis-methods
Quantum
Michael F. HerbstPaul Brehmer
en
A central objective in quantum spin models is to understand the physical behaviour of the system across the parameter domain of the associated Hamiltonian. Mathematically, one wishes to compute the eigenfunction corresponding to the lowest eigenvalue (the ground state) at each parameter instance. Since the Hilbert space dimension (and thus the size of the Hamiltonian) scales exponentially with the size of the physical system, each ground state computation comes at considerable cost. In contrast to an exhaustive systematic scan over the parameter domain, [ReducedBasis.jl](https://github.com/mfherbst/ReducedBasis.jl) follows a recent approach based on the reduced basis method (RBM) [[1]](https://doi.org/10.1103/PhysRevE.105.045303). In this approach, a surrogate model is assembled by projecting the full problem onto a basis consisting of only a few tens of parameter snapshots, the only instances where the ground state needs to be computed. These snapshots are selected following a greedy strategy, which aims to maximally reduce the estimated error with each additional snapshot. Once the RBM surrogate has been assembled, physical observables (e.g. for mapping out phase diagrams) can be computed for any parameter value at modest complexity, notably scaling independently of the dimension of the Hilbert space.
Even though the motivating applications for ReducedBasis.jl are quantum spin systems, the package is intended to be generally applicable to parametrised eigenvalue problems with a low-dimensional parameter space. Key steps of the RBM procedure can therefore be easily customised. Most importantly, this concerns the "ground truth" method used for obtaining the eigenstates at the selected snapshots: currently, both broadly applicable standard iterative diagonalisation methods such as LOBPCG and specialised tensor network methods such as the density matrix renormalisation group (DMRG) approach are supported. Both modes of operation will be illustrated in our talk, where we apply ReducedBasis.jl to a simple parametrised eigenproblem, a chain of Rydberg atoms, as well as to one-dimensional quantum spin-1 models featuring rich quantum phase diagrams. Compared to a traditional approach using only DMRG and ITensors.jl, we demonstrate a manyfold speedup when DMRG calculations are complemented by ReducedBasis.jl.
This is joint work with Matteo Rizzi (Universität Köln), Benjamin Stamm (Universität Stuttgart) and Stefan Wessel (RWTH Aachen).
[1] [M. F. Herbst, S. Wessel, M. Rizzi and B. Stamm. Phys. Rev. E, 105, 45303 (2022)](https://doi.org/10.1103/PhysRevE.105.045303).
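The projected eigenproblem at the heart of an RBM surrogate can be sketched in a few lines of plain Julia. The affine decomposition H(μ) = H₀ + μH₁ and the name `rbm_ground_energy` are illustrative assumptions for this sketch, not ReducedBasis.jl's API:

```julia
using LinearAlgebra

# RBM sketch: for H(μ) = H₀ + μ H₁, project onto an orthonormal snapshot
# basis V and solve the small projected eigenproblem instead of the full one.
function rbm_ground_energy(H0, H1, V, μ)
    h = Hermitian(V' * (H0 + μ * H1) * V)   # size(V, 2) × size(V, 2), tiny
    return eigmin(h)
end

# Toy 2-level problem where the exact answer is known:
H0 = [1.0 0.0; 0.0 -1.0]
H1 = [0.0 1.0; 1.0 0.0]
V  = Matrix{Float64}(I, 2, 2)       # "snapshot basis" spanning everything
rbm_ground_energy(H0, H1, V, 0.5)   # exact ground energy: -√1.25 ≈ -1.118
```

In a realistic setting V holds only a few tens of greedily selected ground-state snapshots, so the projected matrix stays tiny regardless of the Hilbert space dimension.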
false
https://pretalx.com/juliacon2023/talk/TFGFPB/
https://pretalx.com/juliacon2023/talk/TFGFPB/feedback/
32-082
Quantum Chemistry: solving the Schrödinger equation with Julia
Talk
2023-07-27T15:00:00-04:00
15:00
00:30
The computational evaluation of the electronic properties of atoms and molecules entails the use of quantum mechanics. The computational cost of some routines, in most cases, does not scale linearly and becomes very dependent on the size of the system. Thus, over time, the development of effective tools to accelerate such computations can benefit from new programming languages, like Julia, focused on numerical, scientific and high-performance programming.
juliacon2023-27087-quantum-chemistry-solving-the-schrdinger-equation-with-julia
Quantum
Letícia Madureira
en
In this piece of research, the central aim is to join both approaches to present a new Julia library capable of calculating the molecular integrals proposed by Taketa, Huzinaga, and O-ohata in 1966. The performance of Julia code allows us to calculate electron repulsion integrals (ERIs) without taking too much time when compared with Python, for example. As the system gets larger, the computation of ERIs becomes more expensive, since it is one of the most time-consuming steps in the whole calculation.
These results were compared with those obtained by the ORCA software and by implementing the same integrals in Python. The results showed good agreement between the energy values obtained by the Julia implementation and by ORCA (a very well-established software in the computational chemistry field), and Julia was also the fastest implementation analyzed, showing promise for new electronic structure codes. The library developed for the calculation of the molecular integrals in question (S: overlap integrals matrix; T: kinetic integrals matrix; V: electron-nuclear attraction integrals matrix; G: electron-electron repulsion integrals tensor) was named QuantumFoca.jl and is available on GitHub as free and open-source code.
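As a flavour of the simplest of these integrals, the overlap of two unnormalised s-type Gaussian primitives has a closed form via the Gaussian product theorem. The sketch below is illustrative only; the name `overlap_s` is hypothetical and not QuantumFoca.jl's API:

```julia
# Overlap ⟨g_A|g_B⟩ of two unnormalised s-type Gaussians exp(-α|r-A|²) and
# exp(-β|r-B|²), via the Gaussian product theorem:
#   S = (π/(α+β))^(3/2) · exp(-αβ/(α+β) · |A-B|²)
function overlap_s(α, β, A, B)
    p = α + β
    return (π / p)^(3 / 2) * exp(-α * β / p * sum(abs2, A .- B))
end

overlap_s(0.5, 0.5, zeros(3), zeros(3))   # → π^(3/2) ≈ 5.568
```

Higher angular momenta and the T, V, and G integrals build on the same product-theorem machinery with recurrence relations, which is where the bulk of the implementation effort lies.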
false
https://pretalx.com/juliacon2023/talk/AZST33/
https://pretalx.com/juliacon2023/talk/AZST33/feedback/
32-082
Piccolo.jl: An Integrated Quantum Optimal Control Stack
Talk
2023-07-27T15:30:00-04:00
15:30
00:30
We are introducing **Piccolo.jl**, an integrated quantum optimal control stack. In our recent paper, "Direct Collocation for Quantum Optimal Control", we demonstrated -- in simulation and on hardware -- that our *direct collocation* based pulse optimization method (PICO) is a powerful alternative to existing *quantum optimal control* (QOC) methods. Piccolo.jl is designed to be a simple and powerful interface for utilizing this method for pulse optimization and *hardware-in-the-loop* control.
juliacon2023-27043-piccolo-jl-an-integrated-quantum-optimal-control-stack
Quantum
Aaron TrowbridgeAditya Bhardwaj
en
### overview
[Piccolo.jl](https://github.com/aarontrowbridge/Piccolo.jl) is a meta-package that reexports the following packages:
* [QuantumCollocation.jl](https://github.com/aarontrowbridge/QuantumCollocation.jl): set up and solve QOC problems using PICO \[1\].
* [IterativeLearningControl.jl](https://github.com/aarontrowbridge/IterativeLearningControl.jl): utilize PICO solutions to correct model mismatch errors *in situ* on an experimental system.
* [NamedTrajectories.jl](https://github.com/aarontrowbridge/NamedTrajectories.jl): intuitively and efficiently store trajectory data (underlies both of the above packages)
Please visit corresponding package links above for detailed description and documentation for each package.
### references
- \[1\] [Direct Collocation for Quantum Optimal Control](https://arxiv.org/abs/2305.03261)
- recently accepted to IEEE QCE23
false
https://pretalx.com/juliacon2023/talk/9SPAPT/
https://pretalx.com/juliacon2023/talk/9SPAPT/feedback/
32-123
Morning Break Day 2 Room 2
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28131-morning-break-day-2-room-2
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/PNWHPR/
https://pretalx.com/juliacon2023/talk/PNWHPR/feedback/
32-123
Analyzing Large Graphs with QuasiStableColors.jl
Talk
2023-07-27T10:30:00-04:00
10:30
00:30
Graphs (*aka* networks) are a key part of the data science pipeline at many organizations. However, scalability is the most frequently reported limitation by graph analysts. I introduce `QuasiStableColors.jl`, a Julia library for approximate graph analysis. On tasks such as ranking node importance (centrality), it enables a more than *10x* speedup while introducing less than 5% error. In this talk, I will demonstrate how to use this novel graph compression for your own workloads.
juliacon2023-27001-analyzing-large-graphs-with-quasistablecolors-jl
JuliaCon
Moe Kayali
en
In this talk I will give an overview of the new Julia package `QuasiStableColors.jl`. This package allows for accelerating graph analytics by compressing the underlying data. This results in big gains for supported tasks, like computing centrality or maximum-flow. Our approximation allows a speedup of >10x on a variety of real-life benchmark graphs while introducing a negligible relative error, often less than 5%.
The compression scheme is a novel one recently introduced in the database research community. It will be presented at the Very Large Databases 2023 conference under the title "Quasi-stable Coloring for Graph Compression: Approximating Max-Flow, Linear Programs, and Centrality" by Moe Kayali and Dan Suciu ([paper link](http://vldb.org/pvldb/volumes/16/paper/Quasi-stable%20Coloring%20for%20Graph%20Compression%3A%20Approximating%20Max-Flow%2C%20Linear%20Programs%2C%20and%20Centrality)). This package is the reference implementation.
I will focus on the practical usage of this library for data scientists rather than the theoretical aspects of the research work. By the end of the talk the audience will have understood the basics of this compression method and have the knowledge to start using it for their own graph analysis applications.
This package follows software development best practices, with thorough documentation, a tutorial, tests, and CI/CD. I intend to maintain this package into the future. I will also briefly discuss how using Julia and following these practices helped in achieving my research goals.
- package: https://github.com/mkyl/QuasiStableColors.jl
- documentation: https://mkyl.github.io/QuasiStableColors.jl/stable/
This material is based upon work supported by the National Science Foundation under Grant No. NSF-BSF 2109922 and NSF IIS 1907997.
false
https://pretalx.com/juliacon2023/talk/9JL3JZ/
https://pretalx.com/juliacon2023/talk/9JL3JZ/feedback/
32-123
Generating Extended Kalman Filters with Julia
Talk
2023-07-27T11:00:00-04:00
11:00
00:30
Extended Kalman filters are super useful in robotics and embedded systems, but require the derivation of large state derivative matrices. Julia's symbolic manipulation facilities can make this much easier! I will introduce TinyEKFGen.jl, a Julia package that converts nice Julia expressions to embeddable C-code that works with the TinyEKF library, and show some examples usages (including one that runs in space!).
juliacon2023-26103-generating-extended-kalman-filters-with-julia
JuliaCon
Thatcher Chamberlin
en
The Extended Kalman filter (EKF) is a commonly-used recursive filter that finds use in sensor fusion and state estimation applications, like robotics and spacecraft control. The filter takes in noisy observations, compares them to a provided model, and updates an estimate of a system's state. A key part of the state update process is computing a state transition matrix, which can become large when a large state or many observations are used. In systems with non-linear dynamics or observations, the elements of the matrix can be complicated expressions. Julia's excellent symbolic manipulation capabilities can greatly simplify the generation of these matrices, speeding up development of new EKFs. This approach has proven really useful at the satellite company where I work, providing a fun example of Julia in an enterprise setting.
I wrote a package called TinyEKFGen to generate EKF matrices for use in satellite attitude estimation, but also for any other estimation system that is suited to Kalman filtering. The package takes in Julia code describing a system's state, dynamics, and measurements, and emits C code that works with the TinyEKF library. This code is then suitable for use on embedded systems with limited memory or that don't allow dynamic allocation. In this talk I'll show how the TinyEKFGen code works and run through some example Kalman filter programs, including a tumbling satellite!
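The core idea, symbolically differentiating the state-transition function to obtain the EKF's F matrix, can be sketched with the Symbolics.jl package. TinyEKFGen's actual interface and its C code generation are not shown here; `F_func` is an illustrative name:

```julia
using Symbolics

# State: position p and velocity v; simple constant-velocity model.
@variables p v dt
f = [p + v * dt, v]               # state-transition function x⁺ = f(x)

# Jacobian F = ∂f/∂x, the matrix an EKF needs at every update step.
F = Symbolics.jacobian(f, [p, v])

# Emit a plain Julia function (TinyEKFGen instead emits C for TinyEKF).
F_func = eval(build_function(F, [p, v], dt)[1])
F_func([0.0, 1.0], 0.1)           # → [1.0 0.1; 0.0 1.0]
```

For non-linear dynamics the generated entries become the complicated expressions mentioned above, which is exactly where symbolic differentiation pays off over hand derivation.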
false
https://pretalx.com/juliacon2023/talk/9HVSQW/
https://pretalx.com/juliacon2023/talk/9HVSQW/feedback/
32-123
Graph alignment problem within GraphsOptim.jl
Lightning talk
2023-07-27T11:30:00-04:00
11:30
00:10
Graph alignment is the problem of recovering a bijection between vertex sets of two graphs, that minimizes the divergence between edges. This problem can be encountered in various fields where graphs are often very large, so an efficient algorithm is essential for many applications. In this talk we will discuss how to solve this problem approximately with the popular Fast Approximate Quadratic (FAQ) algorithm and its recent improved variants implemented in the Julia GraphsOptim.jl package.
juliacon2023-26849-graph-alignment-problem-within-graphsoptim-jl
JuliaCon
Aurora Rossi
en
Graph alignment, also known as graph matching, is the problem of recovering a bijection between the vertex sets of two graphs, that minimizes edge disagreements.
Given the adjacency matrices A,B ∈ 𝐑ᴺˣᴺ of the graphs G₁ and G₂ with vertex set {1, . . ., N} the problem can be formulated as follows:
min ||A−PBPᵀ||²
s.t. P ∈ 𝑷
where 𝑷 is the set of N×N permutation matrices and ||·|| is the Frobenius norm.
In the case of two isomorphic graphs its solution is an isomorphism. The calculation of its solution is NP-hard.
The graph alignment problem can be encountered in various fields in which graphs have to be compared, such as computer vision, social networks, molecular biology and neuroscience. In the latter, areas of connectomes (nodes of graphs representing neural connections in the brain) are analyzed. These kinds of graphs are often very large, so efficient graph alignment algorithms are essential for many applications. One of the most popular algorithms that solves the problem is the Fast Approximate Quadratic (FAQ) algorithm [1,2]. It is designed to solve a relaxed version of the above formulation, in which the convex hull of the permutation matrices, i.e. the doubly stochastic matrices, is taken as the feasible region. In this way, the Frank–Wolfe algorithm can be applied to find a doubly stochastic matrix, which is subsequently projected onto the set of permutation matrices by solving a linear assignment problem, yielding an approximate solution. The linear assignment problem is also used at each iteration of the Frank–Wolfe algorithm and is the bottleneck of the procedure. For this reason, it has been replaced in the recent GOAT algorithm [3] with the optimal transport problem solved via the Sinkhorn algorithm. In this talk, we will first briefly introduce the GraphsOptim.jl package [4] and then focus on the details and Julia implementation of the FAQ and GOAT algorithms.
[1] J. T. Vogelstein et al., “Fast Approximate Quadratic Programming for Graph Matching,” PLoS ONE, vol. 10, no. 4, p. e0121002, Apr. 2015, doi: 10.1371/journal.pone.0121002.
[2] https://github.com/microsoft/graspologic
[3] A. Saad-Eldin, B. D. Pedigo, C. E. Priebe, and J. T. Vogelstein, “Graph Matching via Optimal Transport.” arXiv, Nov. 09, 2021. Accessed: Dec. 21, 2022. [Online]. Available: http://arxiv.org/abs/2111.05366
[4] https://github.com/gdalle/GraphsOptim.jl
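The edge-disagreement objective in the formulation above is cheap to evaluate for a candidate alignment. A minimal sketch in plain Julia follows; the helper `alignment_cost` is illustrative and not part of GraphsOptim.jl:

```julia
using LinearAlgebra

# Edge-disagreement objective ‖A − P B Pᵀ‖² for a candidate alignment.
# perm[i] = j means vertex i of G₁ is matched to vertex j of G₂;
# with that permutation matrix P, (P B Pᵀ)[i, j] == B[perm[i], perm[j]].
alignment_cost(A, B, perm) = norm(A - B[perm, perm])^2

# Two relabelled copies of a triangle: any permutation is an isomorphism.
A = [0 1 1; 1 0 1; 1 1 0]
alignment_cost(A, A, [2, 3, 1])   # → 0.0 (exact alignment found)
```

FAQ and GOAT search for the permutation minimising exactly this quantity, working in the doubly stochastic relaxation so that gradient-based (Frank–Wolfe) steps apply.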
false
https://pretalx.com/juliacon2023/talk/AUBPU7/
https://pretalx.com/juliacon2023/talk/AUBPU7/feedback/
32-123
Realtime embedded systems testing with Julia
Lightning talk
2023-07-27T11:40:00-04:00
11:40
00:10
The talk focuses on the use of Julia in real-world aerospace applications in place of C++ for the purpose of simulating large, complex systems that interface with embedded hardware and software in real-time. Leveraging memory allocation management in Julia alongside an interface with C++ shared libraries is used to perform real-time hardware-in-the-loop simulations of models written in Julia.
juliacon2023-27107-realtime-embedded-systems-testing-with-julia
JuliaCon
/media/juliacon2023/submissions/UK9DCU/hermeus-2TW9iPDNRsc-unsplash_u0WEbIp.jpg
Corbin Klett
en
Fast-paced product development environments in industry can leverage Julia in their software stack to increase the pace of development and testing of cyber-physical systems. The flexibility and ease-of-use provided by a dynamically typed language, combined with the runtime speed made possible by pre-compiling, allows Julia to be used in place of C++ for the purpose of simulating large, complex systems that interface with embedded hardware and software in real-time. This talk will focus on a real-world aerospace application and survey the practices used to make this possible while suggesting capabilities for future development that would improve performance and usability. The specific techniques and features discussed include 1) running the Julia-based simulation models through a pre-compile step, 2) tracking and minimizing memory allocations to avoid garbage collection such that real-time integration with embedded hardware is possible, and 3) interfacing with C++ shared libraries that connect the Julia simulation models with the embedded hardware and software. Specific examples of applications to hardware-in-the-loop testing environments for aerospace systems will be presented. Further, we will note efforts to compile and run Julia code on an embedded system and why those efforts were abandoned.
false
https://pretalx.com/juliacon2023/talk/UK9DCU/
https://pretalx.com/juliacon2023/talk/UK9DCU/feedback/
32-123
MultilayerGraphs.jl: Multilayer Network Science in Julia
Lightning talk
2023-07-27T11:50:00-04:00
11:50
00:10
**MultilayerGraphs.jl** is a Julia package for the creation, manipulation and analysis of multilayer graphs, which have been adopted to model a wide range of complex systems from bio-chemical to socio-technical networks.
We will synthetically introduce multilayer network science, illustrate some of the main features of the current version of the package and talk about its future developments.
juliacon2023-26153-multilayergraphs-jl-multilayer-network-science-in-julia
JuliaCon
/media/juliacon2023/submissions/MS7YWQ/logo_uhNy56r.png
Pietro MonticoneClaudio Moroni
en
[**MultilayerGraphs.jl**](https://github.com/JuliaGraphs/MultilayerGraphs.jl) is a Julia package for the creation, manipulation and analysis of the structure, dynamics and functions of multilayer graphs.
A multilayer graph consists of multiple subgraphs called *layers* which can be interconnected through [bipartite graphs](https://en.wikipedia.org/wiki/Bipartite_graph) called *interlayers*.
In order to formally represent multilayer networks, several theoretical paradigms have been proposed (e.g. see [Bianconi (2018)](https://doi.org/10.1093/oso/9780198753919.001.0001) and [De Domenico (2022)](https://doi.org/10.1007/978-3-030-75718-2)) and adopted to model the structure and dynamics of a wide spectrum of high-dimensional, multi-scale, time-dependent complex systems including molecular, neuronal, social, ecological and economic networks (e.g. see [Amato et al. (2017)](https://doi.org/10.1038/s41598-017-06933-2), [De Domenico (2017)](https://doi.org/10.1093/gigascience/gix004), [Timóteo et al. (2018)](https://doi.org/10.1038/s41467-017-02658-y), [Aleta et al. (2020)](https://doi.org/10.1038/s41562-020-0931-9), [Aleta et al. (2022)](https://doi.org/10.1073/pnas.2112182119)).
The package features an implementation that maps a standard integer-labelled vertex representation to a more user-friendly framework exporting all the objects a practitioner would expect such as nodes, vertices, layers, interlayers, etc.
MultilayerGraphs.jl has been integrated with the [JuliaGraphs](https://github.com/JuliaGraphs) and the [JuliaDynamics](https://github.com/JuliaDynamics) ecosystems through:
- the extension of [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl) with several methods and metrics including the multilayer eigenvector centrality, the multilayer modularity and the von Neumann entropy;
- the compatibility with [Agents.jl](https://github.com/JuliaDynamics/Agents.jl) allowing for agent-based modelling on general multilayer networks.
In our talk we will briefly introduce the theory and applications of multilayer graphs and showcase some of the main features of the current version of the package through a quick tutorial including:
- how to install the package;
- how to define layers and interlayers with a variety of constructors and underlying graphs;
- how to construct a directed multilayer graph with those layers and interlayers;
- how to add nodes, vertices and edges to the multilayer graph;
- how to compute some standard multilayer metrics.
For a more comprehensive exploration of the package functionalities and further details on the future developments the user is invited to consult the package [README](https://github.com/JuliaGraphs/MultilayerGraphs.jl/blob/main/README.md), [documentation](https://juliagraphs.org/MultilayerGraphs) and [issues](https://github.com/JuliaGraphs/MultilayerGraphs.jl/issues).
false
https://pretalx.com/juliacon2023/talk/MS7YWQ/
https://pretalx.com/juliacon2023/talk/MS7YWQ/feedback/
32-123
Long range dependence modelling in Julia
Lightning talk
2023-07-27T12:10:00-04:00
12:10
00:10
This talk presents a package to analyse long-range dependence (LRD) in time series data. LRD manifests as effects from previous disturbances that take longer to dissipate than standard models can capture. Failing to account for LRD dynamics can perversely affect forecasting performance: a model that ignores LRD misrepresents the true prediction confidence intervals. LRD has been found in climate, political affiliation and finance data, to name a few examples.
juliacon2023-25344-long-range-dependence-modelling-in-julia
JuliaCon
/media/juliacon2023/submissions/NF7PAX/LM-temp_RiZYzAK.png
J. Eduardo Vera-Valdés
en
Long-range dependence has been a topic of interest in time series analysis since Granger's study on the shape of the spectrum of economic variables. The author found that long-term fluctuations in data, if decomposed into frequency components, are such that the amplitudes of the components decrease smoothly with decreasing periods. This type of dynamics implies long-lasting autocorrelations; that is, they exhibit long-range dependence. Long-range dependence has been estimated in temperature data, political affiliation data, financial volatility measures, inflation, and energy prices, to name a few. Moreover, it has been shown that the presence of long-range dependence on data can have perverse effects on statistical methods if not included in the modelling scheme.
This talk presents a package for modelling long-range dependence in the data. We develop methods to model long-range dependence by the commonly used fractional difference operator and the theoretically based cross-sectional aggregation scheme. The fast Fourier transform and recursive implementations of the algorithms are used to speed up computations. The proposed algorithms are exact in the sense that no approximation of the number of aggregating units is needed. We show that the algorithms can be used to reduce computational times for all sample sizes.
Moreover, estimators in the frequency domain are developed to test for long-range dependence in the data. A broad range of estimators are considered: the original Geweke and Porter-Hudak (GPH) estimator, local Whittle (LW) variants that allow for non-de-meaned data, bias-reduced versions of both GPH and LW methods, and Maximum Likelihood Estimators (MLE) in the frequency domain for the fractional differenced and cross-sectional aggregated data. For the latter, the profile likelihood is obtained for efficiency.
The proposed package is simple to implement in real applications. In particular, we present an exercise using temperature data modelled using standard and long-range dependence models. The experiment shows that the standard model misrepresents the prediction confidence intervals of future global temperatures. The misrepresentation can potentially explain some of the previous underestimations of temperature increases in the last decades.
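The fractional difference filter at the core of such models has coefficients given by a simple recursion. A minimal sketch follows; the name `frac_diff_coefs` is illustrative, not the package's exported API:

```julia
# Coefficients of the fractional difference filter (1 − L)^d = Σₖ ψₖ Lᵏ,
# via the standard recursion ψₖ = ψₖ₋₁ (k − 1 − d)/k with ψ₀ = 1.
# (Array index i holds ψ_{i−1}, since Julia arrays are 1-based.)
function frac_diff_coefs(d, n)
    ψ = ones(n)
    for k in 2:n
        ψ[k] = ψ[k-1] * (k - 2 - d) / (k - 1)
    end
    return ψ
end

frac_diff_coefs(0.4, 4)   # ≈ [1.0, -0.4, -0.12, -0.064]
```

The slow, hyperbolic decay of these coefficients (rather than the exponential decay of ARMA weights) is precisely what produces long-lasting autocorrelations; convolving them with white noise simulates an LRD series.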
false
https://pretalx.com/juliacon2023/talk/NF7PAX/
https://pretalx.com/juliacon2023/talk/NF7PAX/feedback/
32-123
MINDFul.jl A framework for intent-driven multi-domain networking
Lightning talk
2023-07-27T12:20:00-04:00
12:20
00:10
MINDFul.jl is a tool to research coordination mechanisms and intent-driven algorithms for multi-domain IP-Optical networks. It combines modern paradigms like Software Defined Networking (SDN) and Intent-Based Networking (IBN) to build a novel flexible architecture, which is appropriate but not limited to decentralized control. It provides a stateful representation of common metro/core network equipment and facilitates event-based simulations with a hackable interface and visualization support.
juliacon2023-26932-mindful-jl-a-framework-for-intent-driven-multi-domain-networking
JuliaCon
/media/juliacon2023/submissions/7UFCSM/MINDFul.jl.text_E1s6zLn.png
Filippos Christou
en
[MINDFul.jl](https://github.com/UniStuttgart-IKR/MINDFul.jl) is an effort to provide the scientific networking community with a flexible, easy-to-use tool aimed at state-of-the-art research in the algorithmic control and architecture of multi-domain IP-Optical networks. The tool offers interfaces to integrate novel mechanisms into the package and an easy way to evaluate them with event-based simulations.
The first part of the presentation is dedicated to clearly explaining some necessary networking terms and notions. We will introduce the ideas behind Software-Defined Networking (SDN) and Intent-Based Networking (IBN). We will talk about multi-domain and IP-Optical networks, which are globally used to enable data transfer and constitute the core of today's Internet.
Afterward, we will describe the architecture of MINDFul.jl, and how a network domain can be represented using [NestedGraphs.jl](https://github.com/UniStuttgart-IKR/NestedGraphs.jl).
In short, MINDFul.jl leverages the common IBN architecture by positioning the IBN framework on top of the SDN controller. In addition, a novel flexible technique is introduced for handling network intents in decentralized multi-domain scenarios.
We will mention some of the interfaces, like how to add a connectivity intent into an IBN framework instance, compile it down to a concrete implementation, and install it in the simulated network.
Later we will demonstrate the package usage and conduct a simple event-based simulation. The event-based simulation will be configured to enable link faults and repairs following an exponential distribution to showcase the influence on the connectivity intents' state. We will develop a simplistic heuristic algorithm, which we will integrate into the simulation. We can retrieve the data using the logs to evaluate the heuristic algorithm. In this stage, we will also show how to produce some visualizations using [MINDFulMakie.jl](https://github.com/UniStuttgart-IKR/MINDFulMakie.jl).
We will end the presentation with future directions, plans, and a discussion of how the networking community could benefit from such a tool and the promising role of Julia in this field.
MINDFul.jl source code and documentation can be found in this repository: https://github.com/UniStuttgart-IKR/MINDFul.jl.
Some notebook examples can be found here: https://github.com/UniStuttgart-IKR/MINDFulNotebookExamples.jl.
false
https://pretalx.com/juliacon2023/talk/7UFCSM/
https://pretalx.com/juliacon2023/talk/7UFCSM/feedback/
32-123
Lunch Day 2 (Room 2)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28074-lunch-day-2-room-2-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/KCXLNH/
https://pretalx.com/juliacon2023/talk/KCXLNH/feedback/
32-123
Creating Transport Maps with MParT.jl
Talk
2023-07-27T14:00:00-04:00
14:00
00:30
Measure Transport, "moving" from one measure to another, has been gaining momentum to perform generative sampling, conditional density estimation, and other statistical methods on a computer. However, transport software is primarily bespoke, is not portable, and can be slow. The Monotone Parameterization Toolkit (MParT) package provides a fast, tested base in C++ to train and use complicated maps for transport easily, and we highlight the Julia bindings for the package in this talk.
juliacon2023-24035-creating-transport-maps-with-mpart-jl
JuliaCon
/media/juliacon2023/submissions/XKGR3C/97112235_9rbDcvi.png
Daniel Sharp
en
The Monotone Parameterization Toolkit (MParT) is a software package written in C++ with bindings in Python, Matlab, and Julia to parameterize and train a subclass of functions that perform measure transport, called monotone transport maps. The tooling required includes adaptive quadrature, orthogonal polynomials, and more, which is partly why a comprehensive package for monotone transport has been lacking. Built around Kokkos, MParT allows users to effortlessly work with these maps, and do the vast majority of calculations in parallel, with ongoing efforts to allow GPU calculations. This talk is intended to be a small introduction to what MParT has to offer, with a few interesting examples of training and using maps for different simple statistical modeling problems.
false
https://pretalx.com/juliacon2023/talk/XKGR3C/
https://pretalx.com/juliacon2023/talk/XKGR3C/feedback/
32-123
Solving for student success with CurricularAnalytics.jl
Talk
2023-07-27T14:30:00-04:00
14:30
00:30
In this talk, we introduce CurricularAnalytics.jl, a package for studying and analyzing academic program curricula. By representing curricula as graphs, we utilize various graph-theoretic measures to quantify the complexity of curricula. In addition to analyzing curricular complexity, the toolbox supports the ability to visualize curricula, create optimal degree plans for completing curricula, and simulate the impact of various events on student progression through a curriculum.
juliacon2023-27031-solving-for-student-success-with-curricularanalytics-jl
JuliaCon
Hayden FreeGreg Heileman
en
The Curricular Analytics toolbox is designed to support meaningful interaction between the academic core and administrative shell of an institution. It also serves as a tool for researchers to easily investigate research questions around curricular design and for faculty to examine alternative curricular pathways. The toolbox is currently at the center of a three-year project in collaboration with the Association for Undergraduate Education at Research Universities (UERU), formerly the Reinvention Collaborative, and funded by the Ascendium Education Group, which seeks to validate the relationship between curricular complexity and student success.
We first introduce the curricular analytics framework, which we've implemented in Julia, and which involves decomposing curricular complexity into two independent parts: instructional complexity and structural complexity. The toolbox also allows for simulating student progression through various curricula, adjusting for both measures of complexity. Additionally, we demonstrate the one-to-many relationship between curricula and degree plans and examine why some plans might be better than others. This includes highlighting the degree plan optimization capabilities we've developed, and how this relates to transfer articulation between institutions and the associated equity implications of the transfer process.
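The structural side of this decomposition can be made concrete with a toy example. The sketch below computes a "blocking factor" for each course, one common graph-theoretic ingredient of structural complexity: the number of downstream courses a student cannot take until the course is passed. The course names and the exact metric are illustrative, not the toolbox's actual API.

```julia
# Toy prerequisite DAG as adjacency lists: course => courses that require it.
# (Hypothetical data; the real toolbox builds on richer graph machinery.)
prereqs = Dict(
    "Calc I"     => ["Calc II", "Physics I"],
    "Calc II"    => ["Diff Eq"],
    "Physics I"  => ["Physics II"],
    "Diff Eq"    => String[],
    "Physics II" => String[],
)

# Blocking factor: count all courses reachable from `course` via
# depth-first search over the prerequisite graph.
function blocking_factor(course, prereqs)
    seen = Set{String}()
    stack = copy(get(prereqs, course, String[]))
    while !isempty(stack)
        c = pop!(stack)
        c in seen && continue
        push!(seen, c)
        append!(stack, get(prereqs, c, String[]))
    end
    return length(seen)
end

blocking_factor("Calc I", prereqs)   # blocks Calc II, Diff Eq, Physics I, Physics II
```

Courses with high blocking factors are natural targets for curricular redesign, since failing them delays the most downstream coursework.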
Finally, we speak to the broader social implications of the toolbox and how it may inform a broader, more comprehensive, causal model of student success. Specifically, how may the tool guide change management within institutions, and can it empower faculty in their ownership of the curriculum?
false
https://pretalx.com/juliacon2023/talk/QEBHYB/
https://pretalx.com/juliacon2023/talk/QEBHYB/feedback/
32-123
Replacing legacy Fortran in a hydroelectrical critical system
Talk
2023-07-27T15:00:00-04:00
15:00
00:30
Hydro-Quebec is a public electricity utility for the province of Quebec in Canada. Its demand forecast team has been working for decades with Fortran models used daily in a critical system. Julia was selected as the replacement for Fortran. An optimisation library with callbacks was converted to Julia. The Julia version of the library is free of any Fortran interface, and we present how the Fortran applications are modified to call either version of the library so as to compare their behaviours.
juliacon2023-25602-replacing-legacy-fortran-in-a-hydroelectrical-critical-system
JuliaCon
Alain Marcotte
en
Hydro-Quebec is the main public electricity utility for the province of Quebec. Its demand forecast team has been working for decades with Fortran-coded models. Julia was selected as the replacement for Fortran. Since those models are used daily in a critical system and history vouches for their reliability, great care will be taken to ensure the quality of the Julia replacements. The first step undertaken in this transition is to replace one of the optimisation libraries, ENLSIP, written in the 1980s. It is a least-squares error minimisation library for non-linear equations with non-linear constraints. Conversion and upgrade of that library is in progress, but before improving the optimisation process, current efforts focus on ensuring that the Julia version reproduces the original Fortran version. To that effect, the Julia-Fortran interface is used to create a Julia application that allows comparing both versions of the library in the context of the calling applications, which are still in Fortran. The talk will describe the intertwining of Fortran and Julia, as the optimisation library is provided with callback functions to calculate the errors and derivatives. In a nutshell, Julia calls Fortran, which calls Julia, which calls Fortran. This approach isolated the Julia version of the library, which can be called by a Julia application devoid of Fortran code, as the callback subroutines were wrapped in Julia. An example, stripped of the proprietary code, illustrating the handling of data and functions will be made public. We will also cover in the talk the test methodology and the graphical tools used to analyze the search space.
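The nested call pattern described here, Julia calling compiled code that calls back into Julia, can be illustrated with a self-contained analogue: passing a Julia comparison function to the C library's `qsort` via `@cfunction`. The ENLSIP interface itself is proprietary, so this shows only the standard mechanism, not the actual library code.

```julia
# A Julia function exposed to compiled code as a C-compatible callback.
# With Ref{Cdouble} argument types, @cfunction dereferences the pointers
# so `a` and `b` arrive as plain Cdouble values.
function mycompare(a, b)::Cint
    return (a < b) ? -1 : ((a > b) ? +1 : 0)
end

mycompare_c = @cfunction(mycompare, Cint, (Ref{Cdouble}, Ref{Cdouble}))

# Julia calls the C library's qsort, which calls back into mycompare
# for every comparison (assumes a Unix-like libc exposing :qsort).
A = [1.3, -2.7, 4.4, 3.1]
ccall(:qsort, Cvoid, (Ptr{Cdouble}, Csize_t, Csize_t, Ptr{Cvoid}),
      A, length(A), sizeof(eltype(A)), mycompare_c)
A   # sorted in place by the C routine, via the Julia callback
```

The same mechanism, applied to a Fortran routine taking error and derivative callbacks, gives the "Julia calls Fortran calls Julia" sandwich described in the talk.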
false
https://pretalx.com/juliacon2023/talk/UVEDD3/
https://pretalx.com/juliacon2023/talk/UVEDD3/feedback/
32-123
Sound Synthesis with Julia
Lightning talk
2023-07-27T15:30:00-04:00
15:30
00:10
We describe and demonstrate a method to use Julia to generate music on a computer. While electronic music generation has had a long and distinguished history, the use of the Julia programming language provides benefits that are not available using traditional tools in this area.
juliacon2023-27000-sound-synthesis-with-julia
JuliaCon
Ahan SenguptaAvik Sengupta
en
Most electronic music synthesis software today is written in C/C++, usually due to the performance requirements of this domain. The use of Julia, however, brings two distinct advantages to this area.
First, using a high-level, dynamic programming language allows for a wider and more productive range of experimentation. The use of Julia allows the performance requirements to be met while working in an easy-to-use language. Second, the wide range of high-quality mathematical libraries in Julia, from FFT to differential equation solvers, allows for the use of high-level constructs, further increasing the productivity of the artist.
In this talk, we show a set of fundamental building blocks for music synthesis in Julia. From wave generators to filters to amplifiers, we will see how these can be built with simple Julia functions, leveraging the existing ecosystem. We will show that Julia's ability to build abstractions without sacrificing performance is crucial to this use case.
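As a flavour of such building blocks, here is a minimal, hypothetical sketch (illustrative names, not the speakers' actual code): an oscillator, an amplifier, and a one-pole low-pass filter composed as plain Julia functions.

```julia
const SR = 44_100  # sample rate in Hz

# Wave generator: `dur` seconds of a sine wave at frequency `f` (Hz).
osc(f, dur; sr = SR) = [sin(2pi * f * t / sr) for t in 0:round(Int, dur * sr) - 1]

# An "amplifier" is just a broadcasted gain.
amp(signal, gain) = gain .* signal

# One-pole low-pass filter: y[n] = a*x[n] + (1 - a)*y[n-1].
function lowpass(x, a)
    y = similar(x)
    acc = zero(eltype(x))
    for i in eachindex(x)
        acc = a * x[i] + (1 - a) * acc
        y[i] = acc
    end
    return y
end

# Half a second of filtered A4, built by composing ordinary functions.
note = lowpass(amp(osc(440.0, 0.5), 0.3), 0.2)
```

Because each stage is just a function on sample buffers, new blocks (envelopes, mixers, delays) compose the same way, and the compiler specializes the whole chain.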
false
https://pretalx.com/juliacon2023/talk/PYJVRU/
https://pretalx.com/juliacon2023/talk/PYJVRU/feedback/
32-124
Morning Break Day 2 Room 3
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28132-morning-break-day-2-room-3
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/YRJ7LZ/
https://pretalx.com/juliacon2023/talk/YRJ7LZ/feedback/
32-124
Julia : the unique solution to an optimisation problem
Lightning talk
2023-07-27T10:30:00-04:00
10:30
00:10
We present a problem of estimating multivariate convolutions of gamma random variables that has very poor numerical properties. This bad numerical behavior essentially forced us to use Julia. We describe why Python, R and C++ were not capable of solving our problem and argue that the multiple dispatch paradigm in Julia was the reason we were able to reuse existing code.
juliacon2023-24240-julia-the-unique-solution-to-an-optimisation-problem
JuliaCon
Oskar Laverny
en
The estimation of multivariate generalized Gamma convolutions via their projections onto the Laguerre basis is easy to deal with mathematically. However, this deconvolution problem gives a loss function that requires the optimization routine to be:
- Global, since the loss is not convex and has many local minima
- Compiled, since the loss is heavy and cannot be parallelized
- In arbitrary precision, due to combinatorial reasons.
In Python, R, and C++ alike, these three conditions were incompatible and forced us to re-code a full library. However, the ease of code reuse in the Julia ecosystem allowed us to solve our problem using existing libraries.
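The code-reuse point can be illustrated with a small sketch: a generic three-term recurrence for the Laguerre polynomials, written once, runs unchanged in Float64 or in arbitrary precision thanks to dispatch. This is illustrative of the Laguerre-basis computations, not the authors' actual library code.

```julia
# Laguerre polynomial L_n(x) via the standard three-term recurrence
# (n+1) L_{n+1}(x) = (2n + 1 - x) L_n(x) - n L_{n-1}(x),
# written generically: no concrete number type appears anywhere.
function laguerre(n, x)
    n == 0 && return one(x)
    Lm1, L = one(x), one(x) - x            # L_0 and L_1
    for k in 1:n-1
        Lm1, L = L, ((2k + 1 - x) * L - k * Lm1) / (k + 1)
    end
    return L
end

laguerre(5, 0.5)                  # ordinary Float64 arithmetic
setprecision(256) do
    laguerre(5, big"0.5")         # the SAME code, 256-bit arithmetic
end
```

In Python/R/C++ the switch to arbitrary precision typically means a different library with a different interface; here dispatch makes the swap a one-argument change.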
false
https://pretalx.com/juliacon2023/talk/AL7QJG/
https://pretalx.com/juliacon2023/talk/AL7QJG/feedback/
32-124
Airfoil meshing automatization with AirfoilGmsh.jl
Lightning talk
2023-07-27T10:40:00-04:00
10:40
00:10
In the field of aerodynamics, it is often required to test different geometries, most of them standard airfoils. The aim is usually to identify a promising geometry for a specific application. Many operations repeat mechanically and are tedious and time-consuming. In particular, mesh creation is a fundamental but routine task, similar but not identical for every airfoil and test case. This is where AirfoilGmsh.jl comes in handy.
juliacon2023-25529-airfoil-meshing-automatization-with-airfoilgmsh-jl
JuliaCon
/media/juliacon2023/submissions/9GR3PG/AifoilGMSH_Q5j2HeH.png
Carlo Brunelli
en
It allows the user to create, in a few clicks, a .geo file containing all the information for the widely used open-source GMSH software to create properly structured meshes. Before exporting the mesh, the user can visualize a preview, modify almost all the parameters (number of divisions and/or the growth ratio in the inlet, airfoil surface or shear region, ...), and see the new results in real time. The package also adds the physical tags (inlet, outlet, airfoil and limits) to the entities, relieving the user of that workload. The mesh created is compatible with the most popular CFD software (Fluent, STAR-CCM+). Originally, this package was born to create airfoil meshes compatible with the FEM package Gridap.
The primary purpose of this package is to create a .geo file that can be read by GMSH. It was decided to write a .geo file rather than the .msh file directly for three main reasons. Firstly, the .geo file is easy to read and interpret, so the user can make simple changes directly to it without the need to dig deeply into the code. Secondly, once it is imported into GMSH, all the parameters can be modified, so the user can visually verify that the mesh is good enough for their purpose. Lastly, the GMSH API for Julia does not currently provide all the features that are needed. The package is simple but quite powerful; it essentially provides only two functions:
- from_url_to_csv: the user can browse the airfoiltools.com website, the biggest online database of airfoils, looking for the profile to analyze. As the name of the function suggests, it creates a .csv file where the airfoil points are stored. This function comes in handy when the user does not have a file with the points of the profile.
- main_create_geofile: the heart of the package. It creates the .geo file and accepts plenty of optional arguments that the user can specify to achieve a result closer to their needs:
- Reynolds number: used to compute the boundary layer characteristics over the airfoil. An optimal combination of meshing parameters is defined by employing some empirical formulas and by setting constraints on the number of cells in the boundary layer and on the growth ratio. In the boundary layer, a sufficient number of cells is needed (at least 30 in a turbulent case), a growth ratio G between 1.05 and 1.2, and we have to ensure that y+ is approximately 1.
- First layer height: sometimes this is the value that the user wants to enforce. Usually, it comes from previous simulations or other authors' work.
- Chord: set to c = 1 by default, but this value can be overridden.
- Dimensions: 2D or 3D.
- Elements: TRI/TETRA or QUAD/HEXA, the shape of the individual elements.
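The Reynolds-number-driven sizing above can be sketched with a standard back-of-the-envelope estimate. The correlation Cf ~ 0.026 Re^(-1/7) below is a common flat-plate assumption for such y+ calculators; the package's actual empirical formulas may differ.

```julia
# Estimate the first cell height that yields a target y+ at a given
# chord Reynolds number, using the classic flat-plate chain:
#   Cf ~ 0.026 Re^(-1/7)        (skin-friction estimate, assumed)
#   tau_w = Cf * rho * U^2 / 2  (wall shear stress)
#   u_tau = sqrt(tau_w / rho)   (friction velocity)
#   dy    = y+ * mu / (rho * u_tau)
function first_layer_height(Re, yplus; U = 1.0, chord = 1.0, rho = 1.0)
    mu   = rho * U * chord / Re        # viscosity implied by Re
    Cf   = 0.026 * Re^(-1 / 7)
    tauw = Cf * rho * U^2 / 2
    utau = sqrt(tauw / rho)
    return yplus * mu / (rho * utau)
end

first_layer_height(1e6, 1.0)   # tens of microns for a unit chord at Re = 1e6
```

Combined with a growth ratio G in [1.05, 1.2] and a minimum cell count, this single number pins down the whole boundary-layer mesh.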
We have not yet explained why this mesher is described as "smart". As shown in the previous section, it automatically computes the optimal mesh characteristics, requiring no additional work from the user. Furthermore, it is able to distinguish the trailing edge (rear part), the leading edge, the suction side (top part) and the pressure side (bottom part). It automatically fixes two points close to the trailing edge, one on the suction side and one on the pressure side, in order to create the inlet region.
It can detect the shape of the trailing edge, which can be sharp or not, and the mesh will differ accordingly. Meshing the trailing edge has always been troublesome for engineers due to its strong curvatures and unusual shapes.
For a 3D mesh, the user can specify that the boundary surfaces normal to the x-axis be periodic.
The mesh, once the .geo file is open in GMSH, is highly customizable. The user can specify the number of nodes in the inlet section, over the airfoil, over the vertical lines, in the shear, as well as the geometrical growth of the cells. The user can also define the length of the domain. Furthermore, it creates a refinement region close to the airfoil in order to customize the parameters to have a better-refined mesh.
The user can also specify the angle of attack (AoA) of the profile, a fundamental parameter for evaluating airfoil performance; by default it is set to zero. Increasing the AoA rotates the airfoil, along with the shear and refinement regions of the mesh.
Class Shape Transformation (CST) is also implemented. It allows increasing the number of points defining the profile.
false
https://pretalx.com/juliacon2023/talk/9GR3PG/
https://pretalx.com/juliacon2023/talk/9GR3PG/feedback/
32-124
High-dimensional Monte Carlo Integration with Native Julia
Lightning talk
2023-07-27T10:50:00-04:00
10:50
00:10
This talk presents a new Julia package for efficient and generic Monte Carlo integration in high-dimensional and complex domains, featuring the Vegas algorithm for self-adaptive importance sampling and an improved algorithm for increased robustness. The package demonstrates Julia's superiority over C/C++/Fortran and Python for high-dimensional Monte Carlo integration by enabling the easy creation of user-defined integrand evaluation functions with the speed of C and the flexibility of Python.
juliacon2023-26905-high-dimensional-monte-carlo-integration-with-native-julia
JuliaCon
Kun ChenXiansheng Cai
en
Evaluating integrals in high-dimensional spaces and complex domains is a common task in scientific research, and Monte Carlo methods provide robust and efficient solutions to this problem. To our knowledge, the Julia community currently lacks a native and feature-complete package for generic high-dimensional Monte Carlo integration. Our package addresses this gap by providing a carefully tested, well-documented, officially registered solution that supports multiple algorithms and both MPI and threading parallelization. In Monte Carlo integration, the user must implement an integrand evaluation function evaluated millions or even billions of times. Julia's speed and flexibility make this task significantly easier for the user compared to similar packages in C/C++/Fortran and Python. As such, our package offers a highly competitive Monte Carlo integration tool on the market. Detailed tutorials and the source code can be found at the link https://github.com/numericalEFT/MCIntegration.jl.
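As a baseline for what adaptive algorithms like Vegas improve on, plain (non-adaptive) Monte Carlo integration over the unit hypercube fits in a few lines of Julia. This is an illustrative sketch, not the package's API.

```julia
using Statistics, Random

# Estimate the integral of f over [0,1]^d with N uniform samples.
# Returns the estimate and its standard error (which shrinks as 1/sqrt(N)
# regardless of dimension, the key appeal of Monte Carlo in high d).
function mc_integrate(f, d, N; rng = Random.default_rng())
    vals = [f(rand(rng, d)) for _ in 1:N]
    return mean(vals), std(vals) / sqrt(N)
end

# Integral of (x1 + x2 + x3 + x4)^2 over [0,1]^4 is exactly 13/3.
est, err = mc_integrate(x -> sum(x)^2, 4, 100_000)
```

Importance sampling as in Vegas reshapes the sampling density toward where the integrand is large, reducing the constant in front of that 1/sqrt(N) error.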
false
https://pretalx.com/juliacon2023/talk/YQVCML/
https://pretalx.com/juliacon2023/talk/YQVCML/feedback/
32-124
Cygnus.jl: Simulating Inertial Confinement Fusion in Julia
Talk
2023-07-27T11:00:00-04:00
11:00
00:30
A new, high-order multi-physics code, Cygnus.jl, has recently been used to simulate portions of inertial confinement fusion experiments at the University of Rochester's Laboratory for Laser Energetics. Methods of parallelization, code design, performance characterizations, and lessons learned will be presented alongside simulation results.
juliacon2023-24411-cygnus-jl-simulating-inertial-confinement-fusion-in-julia
JuliaCon
Sam Miller
en
Inertial confinement fusion (ICF) seeks to initiate fusion by using multiple high-power, nanosecond laser beams to compress and implode a millimeter-scale spherical target filled with deuterium-tritium (DT) fuel. Implosions are designed to increase the pressure and density of the fuel high enough such that DT fusion occurs and releases excess energy.
Simulating ICF experiments presents an extreme challenge and requires combining multiple physics models including multi-material hydrodynamics, nonlinear heat conduction, laser ray-tracing, and radiation transfer. Multi-physics codes are traditionally written in Fortran or C++ for performance, and are occasionally combined with a scripting language interface (such as Python) for user interactivity. Julia provides a compelling alternative to this model by improving developer productivity without sacrificing code performance.
Cygnus was written as part of a doctoral dissertation to simulate the impact of micron-scale defects embedded in the plastic outer-shell of ICF targets. Cygnus combines finite-volume hydrodynamics with non-linear thermal conduction and laser energy deposition on a domain-decomposed block structured mesh. Methods of parallelization (e.g., multi-threaded loops, BlockHaloArrays.jl), code design, performance characterizations (roofline model and strong scaling), and lessons learned will be presented alongside simulation results.
This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority.
false
https://pretalx.com/juliacon2023/talk/NJCTVF/
https://pretalx.com/juliacon2023/talk/NJCTVF/feedback/
32-124
Using Julia to Optimise Trajectories for Robots with Legs
Talk
2023-07-27T11:30:00-04:00
11:30
00:30
Planning trajectories for underactuated systems is a challenging problem in robotics. The dynamics governing such systems are quite complex, and mechanisms themselves have strict physical limits. In this talk, I will explain how we can use Julia (and packages from its robotics ecosystem) to frame motion planning problems as numerical optimisations. I will also share videos of robots solving practical tasks in the real world, tracking trajectories computed with this approach.
juliacon2023-26580-using-julia-to-optimise-trajectories-for-robots-with-legs
JuliaCon
/media/juliacon2023/submissions/GLGFZB/cover_QLZpeaJ.png
Henrique Ferrolho
en
In this talk, I will explain how direct transcription works: a numerical optimisation approach that uses the model of a robot and its dynamics to plan feasible motions. I will start with a brief introduction on underactuated systems, and then explain how we can model system states (joint positions, velocities, torques, and contact forces). Next, I will go over the equations of motion that govern the system and show how to write equality and inequality constraints to enforce system dynamics, kinematic goals, and contact stability. After that, I will explain how we can formulate direct transcription problems in Julia (using existing packages from its rich ecosystem). In short, these are:
- Ipopt.jl/KNITRO.jl for interfacing with off-the-shelf nonlinear programming solvers
- RigidBodyDynamics.jl for calculating the whole-body system dynamics of complex mechanism models
- SparsityDetection.jl, ForwardDiff.jl, and SparseDiffTools.jl for sparsity calculation and automatic differentiation of the NLP constraints and their Jacobians
- StaticArrays.jl for non-allocating arrays used within the NLP constraints
- MeshCat.jl for 3D visualisation of mechanisms
- RobotOS.jl for ROS-related communications
I will also mention TORA.jl, an open-source implementation of direct transcription for robot arms, and go over a Jupyter notebook with a demo. Finally, I will share some videos of robot experiments on quadrupeds and humanoids from my PhD and postdoc work.
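The core of direct transcription can be sketched without any solver: discretize states and controls on a time grid and convert the continuous dynamics into algebraic "defect" constraints that the NLP solver drives to zero. The double-integrator example below is illustrative only, not the talk's whole-body robot formulation.

```julia
using LinearAlgebra

# Double integrator: position' = velocity, velocity' = control input u.
f(x, u) = [x[2], u]

# Trapezoidal defect between knot points k and k+1:
# zeta_k = x_{k+1} - x_k - h/2 * (f(x_k, u_k) + f(x_{k+1}, u_{k+1}))
# Feasible trajectories are exactly those with all defects equal to zero.
defect(xk, xk1, uk, uk1, h) = xk1 - xk - h / 2 * (f(xk, uk) + f(xk1, uk1))

# A trajectory that satisfies the dynamics exactly under constant u = 1:
h  = 0.1
ts = 0:h:1
xs = [[t^2 / 2, t] for t in ts]   # analytic solution with x(0) = [0, 0]
us = ones(length(ts))

defects = [defect(xs[k], xs[k + 1], us[k], us[k + 1], h) for k in 1:length(ts) - 1]
maximum(norm.(defects))           # ~0: the trajectory is dynamically feasible
```

In the full problem, these defects become equality constraints handed to Ipopt/KNITRO, with ForwardDiff.jl and SparseDiffTools.jl supplying their sparse Jacobians.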
false
https://pretalx.com/juliacon2023/talk/GLGFZB/
https://pretalx.com/juliacon2023/talk/GLGFZB/feedback/
32-124
PRONTO.jl: Trajectory Optimization in Function Space
Talk
2023-07-27T12:00:00-04:00
12:00
00:30
PRONTO.jl is a Julia implementation of the Projection-Operator-Based Newton’s Method for Trajectory Optimization (PRONTO). PRONTO is a direct method for trajectory optimization which solves the optimal control problem directly in infinite-dimensional function space. It is capable of achieving quadratic convergence and has potential applications ranging from aerospace to quantum sensing.
juliacon2023-27063-pronto-jl-trajectory-optimization-in-function-space
JuliaCon
Mantas NarisJieqiu Shao
en
At the core of the PRONTO algorithm is a nonlinear projection operator which maps curves to the system's trajectory manifold. Encoding the dynamics constraint through this operator creates an unconstrained optimization problem which is then solved via Newton descent. In the best case, PRONTO can achieve quadratic convergence. PRONTO has proven to be an effective tool for solving large quantum control problems, finding optimal satellite maneuvers, and studying the dynamics of high-performance vehicles.
In spite of its efficacy, previously existing implementations of PRONTO had a substantial learning curve. The algorithm requires the definition of Jacobians and Hessians of the dynamics and cost functions, all of which needed to be manually implemented in C++. Systems with tens of states necessitated the calculation of hundreds of derivatives; this process was rather work-intensive and would often lead to mistakes hidden deep in byzantine code. Further analysis of results was typically done in MATLAB, and this two-language approach made it difficult to gain insights on some aspects of the algorithm's behavior. A pure MATLAB implementation of PRONTO was performance-limited to small systems at low resolutions.
To increase the accessibility and usability of this powerful algorithm without sacrificing performance, Julia was a natural choice. Initially, we were expecting to accept a small reduction in performance in exchange for dramatically increased usability. However, thanks to insights gleaned from an array of profiling and code introspection tools created by the Julia community, we were able to not just match the performance of the C++ implementation, but substantially exceed it.
The majority of computations in the PRONTO algorithm consist of solving a series of about 10 ODEs, some forward, and some backward in time. Consequently, the performance of the internal ODE solver is extremely important. Rather than being limited to an explicit Runge-Kutta (4,5) algorithm, in PRONTO.jl we make use of the power and flexibility of DifferentialEquations.jl, and can adjust algorithms and settings to suit the requirements of the problem. Furthermore, the solver-aware interpolation methods used by DifferentialEquations.jl result in a more efficient representation of each ODE solution, storing less data while being more accurate and requiring fewer extra solution steps than a fixed-step linear interpolation.
PRONTO.jl utilizes the symbolic differentiation provided by Symbolics.jl to compute the necessary Jacobians and Hessians, and the resulting functions are then fine-tuned using Julia's metaprogramming and expression manipulation functionalities. Since PRONTO.jl can automatically generate substantial portions of its own code, the user only needs to provide 5 functions: the dynamics, stage and terminal costs, and a pair of regulator matrices. This greatly reduces the burden on the user, reduces implementation errors, and results in more readable code. Ultimately, this substantially cuts down on the time required to implement new models, which can now be spent exploring novel problems.
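The code-generation idea can be miniaturized in a few lines of stdlib metaprogramming: symbolically differentiate an expression by walking its Expr tree, then eval the result into an ordinary compiled function. PRONTO.jl uses Symbolics.jl for this; the toy differentiator below handles only +, binary *, and integer powers of a single variable x, purely for illustration.

```julia
# Differentiate a Julia expression with respect to :x by recursion on
# its Expr tree. Returns a new expression (or a number).
function ddx(ex)
    ex === :x && return 1
    ex isa Number && return 0
    (ex isa Expr && ex.head == :call) || error("unsupported expression: $ex")
    op, args = ex.args[1], ex.args[2:end]
    if op == :+
        return Expr(:call, :+, map(ddx, args)...)     # sum rule
    elseif op == :* && length(args) == 2
        a, b = args                                    # product rule
        return :($(ddx(a)) * $b + $a * $(ddx(b)))
    elseif op == :^ && args[1] === :x && args[2] isa Integer
        n = args[2]                                    # power rule on x^n
        return :($n * x^$(n - 1))
    end
    error("unsupported expression: $ex")
end

# Generate and compile the derivative of 3x^2 + 2x, i.e. 6x + 2.
df = eval(:(x -> $(ddx(:(3 * x^2 + 2 * x)))))
```

The generated function is ordinary compiled Julia, which is why user-supplied dynamics can be turned into fast Jacobian code with no hand-written derivatives.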
PRONTO.jl enables us to study a set of problems that were previously prohibitively difficult to implement, and more importantly makes this powerful tool accessible to a broader research community. We hope it will enable some exciting research!
PRONTO.jl is available at https://github.com/narijauskas/PRONTO.jl and we expect to add it to the general registry in the near future.
false
https://pretalx.com/juliacon2023/talk/QUFUFV/
https://pretalx.com/juliacon2023/talk/QUFUFV/feedback/
32-124
Lunch Day 2 (Room 3)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28075-lunch-day-2-room-3-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/LXDXBL/
https://pretalx.com/juliacon2023/talk/LXDXBL/feedback/
32-124
Julia Application Builders
Birds of Feather (BoF)
2023-07-27T14:00:00-04:00
14:00
01:00
Julia is primarily considered to be a "scientific programming language". Quite often, what starts out as "scientific programming" ends up in products meant to be consumed by others. When this happens, these pieces of software end up in "applications", typically with a UI of some sort. This BoF is for those that build these applications, to learn from others, and to discuss what can be done to move the ecosystem forward.
juliacon2023-27026-julia-application-builders
JuliaCon
Joris Kraak
en
Building applications around a "Julia core" can be done in a variety of ways, hence there are also a variety of solutions. Some applications are shipped "natively", e.g. using Gtk.jl or Tk.jl; some target terminal users, e.g. using Term.jl; and others target browser-based environments, e.g. using JSServe.jl, Genie.jl, Pluto.jl, etc., which may in turn end up as "native" applications using Electron.jl.
Even though there is a wide variety of "application targets", most of these packages work in a similar fashion. Said differently, most applications take the form of a Julia process, a process rendering the UI and a means to communicate between the two. A potentially interesting question to try and answer during this BoF is whether there is a need or a desire to arrive at a common base framework to facilitate these types of applications.
We will talk about what people are building, what they are using to build applications around a "Julia core", how these applications are being deployed, and how we may improve Julia's ecosystem with respect to the challenges faced by these "application builders".
false
https://pretalx.com/juliacon2023/talk/XPPB3S/
https://pretalx.com/juliacon2023/talk/XPPB3S/feedback/
32-124
Copulas.jl : A fully `Distributions.jl`-compliant copula package
Talk
2023-07-27T15:00:00-04:00
15:00
00:30
The `Copulas.jl` package brings standard dependence modeling routines to native Julia. Copulas are distribution functions on the unit hypercube that are widely used (from probability theory and Bayesian statistics to applied finance and actuarial science) to model the dependence structure of random vectors apart from their marginals. This native implementation leverages the `Distributions.jl` framework and is therefore directly compatible with the broader ecosystem.
juliacon2023-24239-copulas-jl-a-fully-distributions-jl-compliant-copula-package
JuliaCon
Oskar Laverny
en
For over half a century, copulas, introduced by Sklar in 1959, have been the standard approach to modeling complex dependence structures in multivariate random vectors. In a broad range of domains, multivariate statistics and dependence structure modeling are an important part of the statistical treatment of information, and therefore many applied fields have widely adopted copula frameworks.
In Julia, the probabilistic and statistical ecosystem is centered on `Distributions.jl`, which provides the standard tools to deal with random variables. `Copulas.jl` provides many standard tools to model dependencies *between* random variables: evaluation of probabilities and moments, Kendall's tau, Spearman's rho, distribution function and density evaluation, fitting models to data through inverse moments or log-likelihood maximization, etc., are available for a wide range of classical parametric copula families. Moreover, the Sklar type, mimicking Sklar's theorem, allows building full models including the copula and the marginal specifications. These complex multivariate models are compatible with the broader `Distributions.jl` ecosystem, allowing one to, e.g., plug them directly into `Turing.jl` for Bayesian applications.
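To make one of these moment-based tools concrete, the sketch below computes an empirical Kendall's tau in plain Julia, the kind of rank statistic that can be inverted to fit a copula's parameter. This is an illustrative helper, not the package's implementation.

```julia
# Empirical Kendall's tau: the normalized count of concordant minus
# discordant pairs in a bivariate sample. O(n^2), fine for a sketch.
function kendall_tau(x, y)
    n = length(x)
    s = 0
    for i in 1:n-1, j in i+1:n
        s += sign((x[i] - x[j]) * (y[i] - y[j]))
    end
    return 2s / (n * (n - 1))
end

kendall_tau([1, 2, 3, 4], [1, 2, 3, 4])   # perfectly concordant sample
```

For an Archimedean family such as Clayton, tau relates to the parameter in closed form (tau = theta / (theta + 2)), so inverting the empirical tau gives a moment-style fit.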
In this talk, we present the new tools that we developed and their integration into the ecosystem, and we showcase the new functionalities that are now available to the practitioner. The fact that this is native Julia allows elegant applications to unusual number types that were not previously possible, and we believe this package is a great addition to the Julia ecosystem.
false
https://pretalx.com/juliacon2023/talk/YUNTJC/
https://pretalx.com/juliacon2023/talk/YUNTJC/feedback/
32-124
Joint Chance Constraints for successful microgrid islanding
Talk
2023-07-27T15:30:00-04:00
15:30
00:30
Joint Chance Constraints achieve a good trade-off between cost and robustness for optimization under uncertainty. This talk proposes a Julia implementation of Joint Chance Constraints and algorithms to solve the programming problems for different types of multivariate probabilities. The library focuses on unit commitment problems for decentralized, renewable-powered microgrids, connected to an unreliable higher-level, grid-balancing unit, but can be applied to similarly-structured problems.
juliacon2023-27033-joint-chance-constraints-for-successful-microgrid-islanding
JuliaCon
Nesrine Ouanes
en
Ensuring a certain level of reliability becomes challenging with renewable-powered and decentralized energy systems.
The problem is increasingly relevant for energy system dispatch planning, both in settings with advanced grid infrastructure and in severely resource-constrained ones. The capability of a local unit to balance its supply and demand successfully becomes crucial when the availability of a higher-level grid-balancing unit is unreliable.
In this talk, I explain how this unreliability can be accounted for using Joint Chance Constraints and introduce how these are implemented in JuMP optimization models. By comparing these models to both deterministic and Individual Chance Constrained programming problems, I show that Joint Chance Constraints help increase the reliability of the microgrids and their successful operation in "islanded" mode.
The library was conceived for the application described above, but is written generically such that it can be applied for other similarly-structured problems.
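The difference between the two formulations can be checked by simulation: an individual chance constraint bounds each violation probability separately, while a joint chance constraint bounds the probability that any constraint is violated. The toy scenario sketch below (not the talk's microgrid model) shows that the joint satisfaction rate is never higher than the worst individual one.

```julia
using Random

rng = Random.MersenneTwister(1)
S = 10_000                              # uncertainty scenarios

# Two uncertain demands (hypothetical numbers) against a fixed capacity.
demand1 = randn(rng, S) .* 0.1 .+ 1.0
demand2 = randn(rng, S) .* 0.1 .+ 1.0
cap = 1.15

ok1 = demand1 .<= cap                   # constraint 1 holds per scenario
ok2 = demand2 .<= cap                   # constraint 2 holds per scenario

p_indiv = min(sum(ok1), sum(ok2)) / S   # worst individual satisfaction rate
p_joint = sum(ok1 .& ok2) / S           # both hold simultaneously
```

Since the joint event is a subset of each individual one, p_joint <= p_indiv always; a dispatch plan sized only for individual constraints can therefore overstate the probability of successful islanding.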
false
https://pretalx.com/juliacon2023/talk/8JZPSG/
https://pretalx.com/juliacon2023/talk/8JZPSG/feedback/
32-124
Atomistic modelling ecosystem brainstorming at JuliaCon
BoF (45 mins)
2023-07-27T17:00:00-04:00
17:00
01:30
We will organise a discussion and brainstorming session centred around the current ecosystem of first-principles atomistic modelling in Julia. In particular, we want to discuss the development of interface packages like AtomsBase and what can be done to improve package integration across the existing ecosystem.
juliacon2023-35990-atomistic-modelling-ecosystem-brainstorming-at-juliacon
Quantum
en
If you are interested in applications in quantum chemistry, electronic structure theory, first-principles materials simulations, interatomic potentials, or molecular dynamics, please come along.
false
https://pretalx.com/juliacon2023/talk/NAGCKG/
https://pretalx.com/juliacon2023/talk/NAGCKG/feedback/
32-144
Morning Break Day 2 Room 5
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28134-morning-break-day-2-room-5
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/7ZGXCR/
https://pretalx.com/juliacon2023/talk/7ZGXCR/feedback/
32-144
Tools and techniques of working with tabular data (3)
Minisymposium
2023-07-27T10:30:00-04:00
10:30
01:00
The objective of this minisymposium is to gather people interested in working with tabular data to allow them to discuss various aspects of the Julia ecosystem in this area.
juliacon2023-30803-tools-and-techniques-of-working-with-tabular-data-3-
Statistics and Data Science
en
The proposed list of talks is as follows:
* Ingesting larger data in Julia: evolution of JSON, Arrow, CSV support (Jacob Quinn)
* Tidier.jl: Bringing the Tidyverse to Julia (Karandeep Singh)
* What is new in DataFrames.jl - highlights of 1.4 and 1.5 releases. (Bogumił Kamiński)
* Tips & tricks for printing tables using PrettyTables.jl. (Ronan Arraes J. Chagas)
* Working with geographical data using DataFrames.jl (Przemysław Szufel)
* Parallelizing your data with Dagger and DTables (Julian Samaroo)
false
https://pretalx.com/juliacon2023/talk/HLYTJZ/
https://pretalx.com/juliacon2023/talk/HLYTJZ/feedback/
32-144
Tools and techniques of working with tabular data (2)
Minisymposium
2023-07-27T11:30:00-04:00
11:30
01:00
The objective of this minisymposium is to gather people interested in working with tabular data to allow them to discuss various aspects of the Julia ecosystem in this area.
juliacon2023-30802-tools-and-techniques-of-working-with-tabular-data-2-
Statistics and Data Science
en
The proposed list of talks is as follows:
* Ingesting larger data in Julia: evolution of JSON, Arrow, CSV support (Jacob Quinn)
* Tidier.jl: Bringing the Tidyverse to Julia (Karandeep Singh)
* What is new in DataFrames.jl - highlights of 1.4 and 1.5 releases. (Bogumił Kamiński)
* Tips & tricks for printing tables using PrettyTables.jl. (Ronan Arraes J. Chagas)
* Working with geographical data using DataFrames.jl (Przemysław Szufel)
* Parallelizing your data with Dagger and DTables (Julian Samaroo)
false
https://pretalx.com/juliacon2023/talk/GL8JM9/
https://pretalx.com/juliacon2023/talk/GL8JM9/feedback/
32-144
Lunch Day 2 (Room 5)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28077-lunch-day-2-room-5-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/PW8B9S/
https://pretalx.com/juliacon2023/talk/PW8B9S/feedback/
32-144
Tools and techniques of working with tabular data
Minisymposium
2023-07-27T14:00:00-04:00
14:00
01:00
The objective of this minisymposium is to gather people interested in working with tabular data to allow them to discuss various aspects of the Julia ecosystem in this area.
juliacon2023-26050-tools-and-techniques-of-working-with-tabular-data
Statistics and Data Science
Bogumił KamińskiJacob Quinn
en
The talks are scheduled as follows:
Session 1 (10:30-11:30):
* What is new in DataFrames.jl - highlights of 1.4, 1.5, and 1.6 releases. (Bogumił Kamiński)
* Parallelizing your data with Dagger and DTables (Julian Samaroo)
* Working with geographical data using DataFrames.jl (Przemysław Szufel)
Session 2 (11:30-12:30):
* Ingesting larger data in Julia: evolution of JSON (Jacob Quinn)
* Tips & tricks for printing tables using PrettyTables.jl. (Ronan Arraes J. Chagas)
Session 3 (14:00-15:00):
* Tidier.jl: Bringing the Tidyverse to Julia (Karandeep Singh)
* Timeseries data manipulation using TSFrames.jl. (Chirag Anand)
false
https://pretalx.com/juliacon2023/talk/PHXZS8/
https://pretalx.com/juliacon2023/talk/PHXZS8/feedback/
32-144
Geometric Algebra at compile-time with SymbolicGA.jl
Talk
2023-07-27T15:00:00-04:00
15:00
00:30
Geometric Algebra is a high-level mathematical framework which expresses a variety of geometric computations with an intuitive language. While its rich structure unlocks deeper insight and an elegant simplicity, it often comes at a cost to numerical implementations. After giving an overview of geometric algebra and its applications, a Julia implementation is presented which uses metaprogramming to shift the work to compile-time, enabling a fast and expressive approach to computational geometry.
juliacon2023-26981-geometric-algebra-at-compile-time-with-symbolicga-jl
JuliaCon
/media/juliacon2023/submissions/KPXNR7/talk_28TX2hB.png
Cédric Belmant
en
Geometric Algebra is a high-level mathematical framework which expresses a large range of geometric computations in a simple and intuitive language. From a single set of rules and axioms, this framework allows you to create diverse and geometrically meaningful spaces which best suit your needs.
Complex numbers and quaternions may be identified as elements in such spaces which describe rotations in two and three dimensions. These spaces may express Euclidean transformations, such as reflections, rotations and translations; others express intersections of flat geometry such as lines and planes, and may include rounded geometry such as circles and spheres in slightly more complex spaces - all in a dimension-agnostic manner.
The price to pay for this unifying, high-level framework is extra mathematical structure that is generally not a zero-cost abstraction. However, by shifting the application of this structure to compile-time, it is possible to combine the expressive power of geometric algebra with highly performant code.
In this talk, pragmatic motivations for considering geometric algebra are provided, with a quick introduction to its formalism. Then, the open-source [SymbolicGA.jl](https://github.com/serenity4/SymbolicGA.jl) package is presented as a compile-time implementation of geometric algebra. It will be shown that the language of geometric algebra can be used to describe many geometric operations, all with a low symbolic complexity and in a performant manner.
---
- [Slides](https://docs.google.com/presentation/d/e/2PACX-1vQ9trJBYfvZXCEoArxRQwYhS_tzGBYOfeY-s7aGZhE8_J-VPbztXbPPgW9uNTjUUrNbf9JWIYjLLngW/pub?start=false&loop=false&delayms=3000&slide=id.p)
- [Talk](https://www.youtube.com/watch?v=lD4tNcHVjX4)
false
https://pretalx.com/juliacon2023/talk/KPXNR7/
https://pretalx.com/juliacon2023/talk/KPXNR7/feedback/
32-144
SimpleGA. A lightweight Geometric Algebra library.
Talk
2023-07-27T15:30:00-04:00
15:30
00:30
Geometric algebra (GA) is a powerful language for formulating and solving problems in geometry, physics, engineering and graphics. SimpleGA is designed as a straightforward implementation of the most useful geometric algebras, with the key focus on performance. In this talk we use the library to explain some key properties of GA, and explain the motivation behind the design and how it utilises Julia's unique features.
juliacon2023-26926-simplega-a-lightweight-geometric-algebra-library-
JuliaCon
Chris Doran
en
Geometric algebra is a powerful mathematical language that unites many disparate concepts including complex numbers, quaternions, exterior algebra, spinors and projective geometry. The goal of this talk is to use a simple implementation of the algebra to explain its main features. No prior knowledge of geometric (aka Clifford) algebra will be assumed, and by the end the audience should have a basic understanding of the properties of the geometric product - the key basis for the algebra. A novel implementation of this product in terms of binary operations will also be discussed. All of the SimpleGA source code is available, and there are many excellent free resources on geometric algebra for those interested in diving deeper.
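The idea of implementing the geometric product "in terms of binary operations" can be illustrated with a minimal sketch (this is not SimpleGA's actual code, just an assumed textbook encoding): each basis blade is a bitmask whose set bits name its basis-vector factors, the product of two blades is the XOR of their masks, and the sign comes from counting the swaps needed to sort the factors, assuming a Euclidean metric where every basis vector squares to +1.

```julia
# Basis blades of a Euclidean geometric algebra encoded as bitmasks:
# bit i set means basis vector e_{i+1} is a factor of the blade.
# The geometric product of two blades is XOR of the masks (repeated
# factors cancel), with a sign from counting factor crossings.
function blade_product(a::UInt32, b::UInt32)
    swaps = 0
    t = a >> 1
    while t != 0
        swaps += count_ones(t & b)  # each crossing of factors flips the sign
        t >>= 1
    end
    sign = iseven(swaps) ? 1 : -1   # Euclidean metric: e_i * e_i = +1
    return sign, a ⊻ b
end

# With e1 = 0b01, e2 = 0b10, e12 = 0b11:
blade_product(UInt32(0b01), UInt32(0b10))  # (1, 0b11):  e1 e2 = e12
blade_product(UInt32(0b10), UInt32(0b01))  # (-1, 0b11): e2 e1 = -e12
blade_product(UInt32(0b11), UInt32(0b11))  # (-1, 0b00): e12² = -1
```

The recovered anticommutativity (e1e2 = -e2e1) and e12² = -1 show how complex numbers emerge as the even subalgebra of the 2D algebra, one of the unifications the talk describes.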
false
https://pretalx.com/juliacon2023/talk/PK9C77/
https://pretalx.com/juliacon2023/talk/PK9C77/feedback/
32-G449 (Kiva)
Morning Break Day 2 Room 6
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28135-morning-break-day-2-room-6
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/PZ9WSU/
https://pretalx.com/juliacon2023/talk/PZ9WSU/feedback/
32-G449 (Kiva)
Julia for High-Performance Computing
Minisymposium
2023-07-27T10:30:00-04:00
10:30
01:00
The *Julia for HPC* minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.
juliacon2023-26934-julia-for-high-performance-computing
HPC
/media/juliacon2023/submissions/PC8PZ8/Julia_for_HPC_4_SyADKIq.png
Carsten BauerMichael Schlottke-LakemperJohannes BlaschkeSamuel OmlinLudovic Räss
en
As we embrace the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important aspects for high-performance computing (HPC). Scientists and developers with interest in Julia for HPC need to know how to leverage the capabilities of the language and ecosystem to address these issues and which tools and best practices can help them to achieve their performance goals.
What do we mean by HPC? While HPC is mainly associated with running large-scale physical simulations like computational fluid dynamics, molecular dynamics, high-energy physics, climate models etc., we use a more inclusive definition beyond the scope of computational science and engineering. More recently, rapid prototyping with high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization and in-situ analysis have expanded the scope for defining HPC. For us, the core of HPC is not running simple test problems faster but everything that enables solving challenging problems in simulation or data science, on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered by different vendors' CPUs and accelerators (e.g. GPUs).
In this three-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of ~10-minute talks. The focus of these overview talks is to introduce and motivate the audience by highlighting aspects making the Julia language beneficial for scientific HPC workflows such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved some time for participants to interact, discuss and share the current landscape of their investments in Julia HPC, while encouraging networking with their colleagues over topics of common interest.
### Minisymposium Schedule
* 10:30: Carsten Bauer (PC2) & Samuel Omlin (CSCS): Welcome and Overview
**Part I (Scaling Applications)**
* 10:40: Ludovic Räss (ETHZ): Scalability and HPC readiness of Julia’s AMD GPU stack
* 10:55: Dominik Kiese (Flatiron Institute): Large-scale vertex calculations in condensed matter physics with Julia
* 11:10: Michael Schlottke-Lakemper (RWTH Aachen) & Hendrik Ranocha (U Hamburg): Scaling Trixi.jl to more than 10,000 cores using MPI
* 11:25: Q&A
* 11:30: Short break
**Part II (Performance Evaluation & Tuning)**
* 11:40: William F Godoy (ORNL): Julia programming models evaluation on Oak Ridge Leadership Computing Facilities: Summit and Frontier
* 11:55: Mosé Giordano (UCL): MPI, SVE, 16-bit: using Julia on the fastest supercomputer
* 12:10: Carsten Bauer (PC2): HPC Tools for Julia: Inspecting, Monitoring, and Tuning Performance
* 12:25: Q&A
* 12:30: Time for lunch 😉
----
**Lunch break (1:30h)**
----
**Part III (Ecosystem Developments)**
* 2:00: Tim Besard (Julia Computing): Update on oneAPI.jl developments
* 2:15: Julian Samaroo (MIT): Dagger in HPC: GPUs, MPI, and Profiling at great speed
* 2:30: Johannes Blaschke (NERSC): Improvements to Distributed.jl for HPC
* 2:45: Q&A
* 3:00: Fin.
The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia experiences growth and positions itself in the larger HPC community due to its appeal in scientific computing. It also exemplifies the strength of the existing Julia HPC community that collaboratively prepared this event. We are an international, multi-institutional, and multi-disciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We would like to welcome new people from multiple backgrounds who share our interest and bring them together in this minisymposium.
In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2023. During the main conference, a Birds of Feather session will provide an opportunity to bring together the community for more discussions and to allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant for HPC developers and users alike.
false
https://pretalx.com/juliacon2023/talk/PC8PZ8/
https://pretalx.com/juliacon2023/talk/PC8PZ8/feedback/
32-G449 (Kiva)
Julia for High-Performance Computing (2)
Minisymposium
2023-07-27T11:30:00-04:00
11:30
01:00
The *Julia for HPC* minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.
juliacon2023-30796-julia-for-high-performance-computing-2-
HPC
en
As we embrace the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important aspects for high-performance computing (HPC). Scientists and developers with interest in Julia for HPC need to know how to leverage the capabilities of the language and ecosystem to address these issues and which tools and best practices can help them to achieve their performance goals.
What do we mean by HPC? While HPC is mainly associated with running large-scale physical simulations like computational fluid dynamics, molecular dynamics, high-energy physics, climate models etc., we use a more inclusive definition beyond the scope of computational science and engineering. More recently, rapid prototyping with high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization and in-situ analysis have expanded the scope for defining HPC. For us, the core of HPC is not running simple test problems faster but everything that enables solving challenging problems in simulation or data science, on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered by different vendors' CPUs and accelerators (e.g. GPUs).
In this three-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of ~10-minute talks. The focus of these overview talks is to introduce and motivate the audience by highlighting aspects making the Julia language beneficial for scientific HPC workflows such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved some time for participants to interact, discuss and share the current landscape of their investments in Julia HPC, while encouraging networking with their colleagues over topics of common interest.
### Minisymposium Schedule
* 10:30: Carsten Bauer (PC2) & Samuel Omlin (CSCS): Welcome and Overview
**Part I (Scaling Applications)**
* 10:40: Ludovic Räss (ETHZ): Scalability and HPC readiness of Julia’s AMD GPU stack
* 10:55: Dominik Kiese (Flatiron Institute): Large-scale vertex calculations in condensed matter physics with Julia
* 11:10: Michael Schlottke-Lakemper (RWTH Aachen) & Hendrik Ranocha (U Hamburg): Scaling Trixi.jl to more than 10,000 cores using MPI
* 11:25: Q&A
* 11:30: Short break
**Part II (Performance Evaluation & Tuning)**
* 11:40: William F Godoy (ORNL): Julia programming models evaluation on Oak Ridge Leadership Computing Facilities: Summit and Frontier
* 11:55: Mosé Giordano (UCL): MPI, SVE, 16-bit: using Julia on the fastest supercomputer
* 12:10: Carsten Bauer (PC2): HPC Tools for Julia: Inspecting, Monitoring, and Tuning Performance
* 12:25: Q&A
* 12:30: Time for lunch 😉
----
**Lunch break (1:30h)**
----
**Part III (Ecosystem Developments)**
* 2:00: Tim Besard (Julia Computing): Update on oneAPI.jl developments
* 2:15: Julian Samaroo (MIT): Dagger in HPC: GPUs, MPI, and Profiling at great speed
* 2:30: Johannes Blaschke (NERSC): Improvements to Distributed.jl for HPC
* 2:45: Q&A
* 3:00: Fin.
The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia experiences growth and positions itself in the larger HPC community due to its appeal in scientific computing. It also exemplifies the strength of the existing Julia HPC community that collaboratively prepared this event. We are an international, multi-institutional, and multi-disciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We would like to welcome new people from multiple backgrounds who share our interest and bring them together in this minisymposium.
In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2023. During the main conference, a Birds of Feather session will provide an opportunity to bring together the community for more discussions and to allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant for HPC developers and users alike.
false
https://pretalx.com/juliacon2023/talk/HMMKMF/
https://pretalx.com/juliacon2023/talk/HMMKMF/feedback/
32-G449 (Kiva)
Lunch Day 2 (Room 6)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28078-lunch-day-2-room-6-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/UTY8CB/
https://pretalx.com/juliacon2023/talk/UTY8CB/feedback/
32-G449 (Kiva)
Julia for High-Performance Computing (3)
Minisymposium
2023-07-27T14:00:00-04:00
14:00
01:00
The *Julia for HPC* minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.
juliacon2023-30797-julia-for-high-performance-computing-3-
HPC
en
As we embrace the era of exascale computing, scalable performance and fast development on extremely heterogeneous hardware have become ever more important aspects for high-performance computing (HPC). Scientists and developers with interest in Julia for HPC need to know how to leverage the capabilities of the language and ecosystem to address these issues and which tools and best practices can help them to achieve their performance goals.
What do we mean by HPC? While HPC is mainly associated with running large-scale physical simulations like computational fluid dynamics, molecular dynamics, high-energy physics, climate models etc., we use a more inclusive definition beyond the scope of computational science and engineering. More recently, rapid prototyping with high-productivity languages like Julia, machine learning training, data management, computer science research, research software engineering, large-scale data visualization and in-situ analysis have expanded the scope for defining HPC. For us, the core of HPC is not running simple test problems faster but everything that enables solving challenging problems in simulation or data science, on heterogeneous hardware platforms, from a high-end workstation to the world's largest supercomputers powered by different vendors' CPUs and accelerators (e.g. GPUs).
In this three-hour minisymposium, we will give an overview of the current state of affairs of Julia for HPC in a series of ~10-minute talks. The focus of these overview talks is to introduce and motivate the audience by highlighting aspects making the Julia language beneficial for scientific HPC workflows such as scalable deployments, compute accelerator support, user support, and HPC applications. In addition, we have reserved some time for participants to interact, discuss and share the current landscape of their investments in Julia HPC, while encouraging networking with their colleagues over topics of common interest.
### Minisymposium Schedule
* 10:30: Carsten Bauer (PC2) & Samuel Omlin (CSCS): Welcome and Overview
**Part I (Scaling Applications)**
* 10:40: Ludovic Räss (ETHZ): Scalability and HPC readiness of Julia’s AMD GPU stack
* 10:55: Dominik Kiese (Flatiron Institute): Large-scale vertex calculations in condensed matter physics with Julia
* 11:10: Michael Schlottke-Lakemper (RWTH Aachen) & Hendrik Ranocha (U Hamburg): Scaling Trixi.jl to more than 10,000 cores using MPI
* 11:25: Q&A
* 11:30: Short break
**Part II (Performance Evaluation & Tuning)**
* 11:40: William F Godoy (ORNL): Julia programming models evaluation on Oak Ridge Leadership Computing Facilities: Summit and Frontier
* 11:55: Mosé Giordano (UCL): MPI, SVE, 16-bit: using Julia on the fastest supercomputer
* 12:10: Carsten Bauer (PC2): HPC Tools for Julia: Inspecting, Monitoring, and Tuning Performance
* 12:25: Q&A
* 12:30: Time for lunch 😉
----
**Lunch break (1:30h)**
----
**Part III (Ecosystem Developments)**
* 2:00: Tim Besard (Julia Computing): Update on oneAPI.jl developments
* 2:15: Julian Samaroo (MIT): Dagger in HPC: GPUs, MPI, and Profiling at great speed
* 2:30: Johannes Blaschke (NERSC): Improvements to Distributed.jl for HPC
* 2:45: Q&A
* 3:00: Fin.
The overall goal of the minisymposium is to identify and summarize current practices, limitations, and future developments as Julia experiences growth and positions itself in the larger HPC community due to its appeal in scientific computing. It also exemplifies the strength of the existing Julia HPC community that collaboratively prepared this event. We are an international, multi-institutional, and multi-disciplinary group interested in advancing Julia for HPC applications in our academic and national laboratory environments. We would like to welcome new people from multiple backgrounds who share our interest and bring them together in this minisymposium.
In this spirit, the minisymposium will serve as a starting point for further Julia HPC activities at JuliaCon 2023. During the main conference, a Birds of Feather session will provide an opportunity to bring together the community for more discussions and to allow new HPC users to join the conversation. Furthermore, a number of talks will be dedicated to topics relevant for HPC developers and users alike.
false
https://pretalx.com/juliacon2023/talk/C3N9QF/
https://pretalx.com/juliacon2023/talk/C3N9QF/feedback/
32-G449 (Kiva)
Computational Radio Astronomy with Julia (2)
Minisymposium
2023-07-27T15:00:00-04:00
15:00
01:00
Radio interferometry produces the sharpest images in astronomy through the concept of a computational telescope, with software and algorithms integral to their success. In the coming decade, new facilities, including the ngEHT, ngVLA, and SKA, will revolutionize astronomy, leveraging breakthroughs in digital technology to generate Petabyte-scale datasets. This minisymposium will explore Julia's rapidly growing role in radio astronomy and highlight crucially needed developments.
juliacon2023-30800-computational-radio-astronomy-with-julia-2-
JuliaCon
en
Radio astronomy is a rich and growing field with success that is closely tied to high-performance computing (HPC). In particular, the technique of very long baseline interferometry (VLBI) – which recently culminated in the first images of a black hole – typically requires HPC at every level of its analysis. Indeed, even the "telescope" itself is created using HPC systems, with surface corrections and pointing performed digitally after the observations digitize and record the raw data. In the coming decades, new VLBI arrays such as the next-generation Very Large Array (ngVLA), Square Kilometer Array (SKA), and next-generation Event Horizon Telescope (ngEHT) will come online. These new telescopes will drastically increase the recorded data volumes, necessitating new data analysis paradigms. For instance, the ngEHT will observe over an order of magnitude more often than the EHT, with twice as many dishes providing petabytes of data. New computational software must be developed to process and calibrate this enormous volume of data.
The process of forming an image in radio interferometry is also computational since the telescope doesn't directly produce an image. Instead, specialized computational algorithms analyze the data and produce images consistent with the recorded data. Furthermore, traditional imaging methods, while efficient, tend to produce suboptimal images compared to newer methods that directly fit the data products recorded by the telescope. However, these tools have traditionally been underdeveloped since they require highly performant computing. In addition, conventional interferometric imaging is an interactive process, with some algorithms requiring constant input from a user. Thanks to its dynamic and performant nature, Julia is primed to become the new standard in radio astronomical imaging. Julia has already started to be adopted within the radio community through analysis of data from the Event Horizon Telescope, where Julia aided in the construction of the first-ever picture of the black hole at the center of our galaxy. Additionally, JuliaAstro already has interfaces to several commonly used software suites within radio astronomy.
This mini-symposium's focus will be on the progress and development of Julia-based software within the radio astronomy community. This mini-symposium will be split into three parts:
- The first part will highlight where Julia has already been used within the radio astronomy community. Speakers will describe how they used Julia within their research programs. Additionally, the speakers will discuss their experience using Julia.
- The second part will highlight future directions of radio astronomy and the role of Julia. Speakers will describe what computational obstacles (e.g., data volume, processing speed, algorithmic development) radio astronomers face and how Julia could help.
- The third part will be a roundtable discussion of the presented topics and what areas of development and outreach the community should prioritize. Part of this section will be the start of a technical report highlighting the future development goals of the Julia radio astronomy field that will be published shortly after the symposium.
false
https://pretalx.com/juliacon2023/talk/GEUWRY/
https://pretalx.com/juliacon2023/talk/GEUWRY/feedback/
32-G449 (Kiva)
Computational Radio Astronomy with Julia (3)
Minisymposium
2023-07-27T17:00:00-04:00
17:00
01:00
Radio interferometry produces the sharpest images in astronomy through the concept of a computational telescope, with software and algorithms integral to their success. In the coming decade, new facilities, including the ngEHT, ngVLA, and SKA, will revolutionize astronomy, leveraging breakthroughs in digital technology to generate Petabyte-scale datasets. This minisymposium will explore Julia's rapidly growing role in radio astronomy and highlight crucially needed developments.
juliacon2023-30801-computational-radio-astronomy-with-julia-3-
JuliaCon
en
Radio astronomy is a rich and growing field with success that is closely tied to high-performance computing (HPC). In particular, the technique of very long baseline interferometry (VLBI) – which recently culminated in the first images of a black hole – typically requires HPC at every level of its analysis. Indeed, even the "telescope" itself is created using HPC systems, with surface corrections and pointing performed digitally after the observations digitize and record the raw data. In the coming decades, new VLBI arrays such as the next-generation Very Large Array (ngVLA), Square Kilometer Array (SKA), and next-generation Event Horizon Telescope (ngEHT) will come online. These new telescopes will drastically increase the recorded data volumes, necessitating new data analysis paradigms. For instance, the ngEHT will observe over an order of magnitude more often than the EHT, with twice as many dishes providing petabytes of data. New computational software must be developed to process and calibrate this enormous volume of data.
The process of forming an image in radio interferometry is also computational since the telescope doesn't directly produce an image. Instead, specialized computational algorithms analyze the data and produce images consistent with the recorded data. Furthermore, traditional imaging methods, while efficient, tend to produce suboptimal images compared to newer methods that directly fit the data products recorded by the telescope. However, these tools have traditionally been underdeveloped since they require highly performant computing. In addition, conventional interferometric imaging is an interactive process, with some algorithms requiring constant input from a user. Thanks to its dynamic and performant nature, Julia is primed to become the new standard in radio astronomical imaging. Julia has already started to be adopted within the radio community through analysis of data from the Event Horizon Telescope, where Julia aided in the construction of the first-ever picture of the black hole at the center of our galaxy. Additionally, JuliaAstro already has interfaces to several commonly used software suites within radio astronomy.
This mini-symposium's focus will be on the progress and development of Julia-based software within the radio astronomy community. This mini-symposium will be split into three parts:
- The first part will highlight where Julia has already been used within the radio astronomy community. Speakers will describe how they used Julia within their research programs. Additionally, the speakers will discuss their experience using Julia.
- The second part will highlight future directions of radio astronomy and the role of Julia. Speakers will describe what computational obstacles (e.g., data volume, processing speed, algorithmic development) radio astronomers face and how Julia could help.
- The third part will be a roundtable discussion of the presented topics and what areas of development and outreach the community should prioritize. Part of this section will be the start of a technical report highlighting the future development goals of the Julia radio astronomy field that will be published shortly after the symposium.
false
https://pretalx.com/juliacon2023/talk/FUB7GF/
https://pretalx.com/juliacon2023/talk/FUB7GF/feedback/
32-G449 (Kiva)
Computational Radio Astronomy with Julia
Minisymposium
2023-07-27T18:00:00-04:00
18:00
01:00
Radio interferometry produces the sharpest images in astronomy through the concept of a computational telescope, with software and algorithms integral to their success. In the coming decade, new facilities, including the ngEHT, ngVLA, and SKA, will revolutionize astronomy, leveraging breakthroughs in digital technology to generate Petabyte-scale datasets. This minisymposium will explore Julia's rapidly growing role in radio astronomy and highlight crucially needed developments.
juliacon2023-26843-computational-radio-astronomy-with-julia
JuliaCon
Paul Tiede
en
**Complete schedule and abstracts [here](https://docs.google.com/document/d/1NOJZRak7y4Z2POXD-Fz3rGoUByRshY8WaZJ76BuRopA/edit?usp=sharing)**
This mini-symposium's focus will be on the progress and development of Julia-based software within the radio astronomy community. It will be split into three parts:
- The first part will highlight the direction of radio astronomy and what tools will be needed in the future.
- The second part will highlight what development is underway in Julia and where we need to focus in the future. Speakers will describe what computational obstacles (e.g., data volume, processing speed, algorithmic development) radio astronomers face and how Julia could help.
- The third part will be a roundtable discussion of the presented topics and what areas of development and outreach the community should prioritize. Part of this section will be the start of a technical report highlighting the future development goals of the Julia radio astronomy field, to be published shortly after the symposium.
Schedule
Introduction and Challenges of Radio Astronomy (3 – 4 pm)
- Sara Issaoun (3:00 - 3:20), Imaging Black Holes with the Event Horizon Telescope
- Kiran Shila (3:20 - 3:40), Real Time Stream Processing for Radio Interferometers in Julia
- Dave MacMahon (3:40 - 4:00), How I use Julia in radio astronomy/interferometry (to find aliens)
Break (4 pm - 5 pm): Wolfram Keynote
The State of Julia and Radio Interferometry (5:10 - 6:10 pm)
- Paul Tiede (5:10 - 5:30), Imaging Black Holes on Your Laptop
- Sasha Plavin (5:30 - 5:50), Accessing astronomy tools from Julia
- Frank Lind (5:50 - 6:10), Adaptive Radio Science with Julia
Round-Table Discussion (6:20 - 7:00 pm)
false
https://pretalx.com/juliacon2023/talk/PUY3SP/
https://pretalx.com/juliacon2023/talk/PUY3SP/feedback/
32-G449 (Kiva)
Day 2 Evening Hacking/Social
Social hour
2023-07-27T19:00:00-04:00
19:00
04:00
Come hang out in the evening after talks for some friendly hacking and social time!
juliacon2023-28178-day-2-evening-hacking-social
JuliaCon
en
Come hang out in the evening after talks for some friendly hacking and social time! Get advice, find new collaborators, or just enjoy the Julia atmosphere.
false
https://pretalx.com/juliacon2023/talk/7FJE3Z/
https://pretalx.com/juliacon2023/talk/7FJE3Z/feedback/
32-D463 (Star)
Morning Break Day 2 Room 4
Break
2023-07-27T10:15:00-04:00
10:15
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28133-morning-break-day-2-room-4
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/7X3GGX/
https://pretalx.com/juliacon2023/talk/7X3GGX/feedback/
32-D463 (Star)
Systems biology: community needs, plans, and visions
Birds of Feather (BoF)
2023-07-27T10:30:00-04:00
10:30
01:00
The Scientific Machine Learning (SciML) ecosystem is rapidly gaining momentum within the field of systems biology. With this Birds of a Feather discussion we want to bring the international community of systems biology tool developers and users together at one table to (a) brainstorm promising routes for future developments, and (b) facilitate collaborative projects.
juliacon2023-26759-systems-biology-community-needs-plans-and-visions
SciML
Paul LangAnand JainElisabeth Roesch
en
**Purpose:** The Scientific Machine Learning (SciML) ecosystem in Julia holds great potential for applications in the field of systems biology. Much of this potential can be attributed to fast ODE solvers and parameter estimation packages, and a convenient interface with neural differential equations and systems biology standards/formats. While an increasing number of systems biology groups are starting to use Julia to address biological questions, many existing systems biology tools are not yet available in Julia.
In this Birds of a Feather discussion we want to address the following questions:
1. What are the community needs, plans, and visions?
2. Which existing systems biology tools hold great potential for faster implementations if ported to Julia?
3. Which new tools are enabled by the features of the SciML ecosystem?
4. Where can we find synergies and common interests for joining efforts on systems biology tool development?
5. (How) can we improve on the existing communication channels to facilitate coordinated and collaborative tool development?
**Significance:** Bringing key players in the field of open-source systems biology tool development together at one table to discuss the above questions will facilitate a flourishing ecosystem of systems biology tools and pipelines in Julia, and increase the uptake of the language by the community as a consequence.
**Agenda:** the following points will be on the agenda.
- ~5 min: Introduction by Paul Lang or Anand Jain
- ~15 min: Every participant briefly addresses question (1) from above from their perspective
- ~50 min: Participants are split up into 2-5 groups to discuss questions (2)-(5) from above.
- ~15 min: One member of each group summarizes the main points from the group discussion for the other groups.
- ~5 min: Closing remarks by Paul Lang and Anand Jain
- (Optional if time is left: Talking to people from other discussion groups and unguided networking.)
A short document summarizing the Birds of Feather discussion may be published on Twitter, Discourse and the sysbio-sciml Slack channel on the Julia workspace for the broader community by the end of August.
**Moderators**: Paul Lang and Anand Jain
false
https://pretalx.com/juliacon2023/talk/PRLSQN/
https://pretalx.com/juliacon2023/talk/PRLSQN/feedback/
32-D463 (Star)
Open-Source Bayesian Hierarchical PBPK Modeling in Julia
Talk
2023-07-27T11:30:00-04:00
11:30
00:30
Physiologically based pharmacokinetic (PBPK) models characterize a drug’s distribution in the body using prior knowledge. Bayesian tools are well suited to infer PBPK model parameters using the informative prior knowledge available while quantifying the parameter uncertainty. The presentation will review a full Bayesian hierarchical PBPK modeling framework in Julia, using the SciML ecosystem and Turing.jl, to accurately infer the posterior distributions of the parameters of interest.
juliacon2023-26884-open-source-bayesian-hierarchical-pbpk-modeling-in-julia
SciML
Ahmed Elmokadem
en
Physiologically based pharmacokinetic (PBPK) models are mechanistic models that characterize how a drug is distributed in the body. These models are built based on an investigator’s prior knowledge of the in vivo system of interest. Bayesian inference incorporates an investigator’s prior knowledge of parameters while using the data to update this knowledge. As such, Bayesian tools are well suited to infer PBPK model parameters using the strong prior knowledge available while quantifying the uncertainty on these parameters. The presentation will review a full Bayesian hierarchical PBPK modeling framework in Julia, using the open-source SciML ecosystem and Turing.jl, that can accurately infer the posterior distributions of the parameters of interest. Additionally, diagnostics will be reviewed to evaluate general goodness-of-fit and the model predictive performance. The framework displays the composability of Julia packages that can be synced together using a single model definition to run various analyses, which include Bayesian analysis, sensitivity analysis, as well as population simulations that can explore alternative dosing scenarios. The general applicability of the proposed framework makes it a valuable tool for investigators interested in building Bayesian hierarchical PBPK models in an efficient, flexible, and convenient way.
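The hierarchical structure described above (population-level priors informing subject-level parameters) can be sketched in Turing.jl. This is a minimal, hypothetical example, not the PBPK framework from the talk — a real PBPK model would embed an ODE solve from the SciML ecosystem inside the likelihood, and all names and priors here are invented:

```julia
using Turing

# Hypothetical hierarchical model: a population-level mean μ and
# between-subject variability ω generate subject-level parameters θ.
@model function hierarchical(y, subject, n_subjects)
    μ ~ Normal(0, 1)                            # population ("typical") value
    ω ~ truncated(Normal(0, 1); lower = 0)      # between-subject variability
    θ ~ filldist(Normal(μ, ω), n_subjects)      # subject-level parameters
    σ ~ truncated(Normal(0, 1); lower = 0)      # residual error
    for i in eachindex(y)
        y[i] ~ Normal(θ[subject[i]], σ)         # observation model
    end
end

subject = repeat(1:4, 5)                        # 4 subjects, 5 samples each
chain = sample(hierarchical(randn(20), subject, 4), NUTS(), 500)
```

Posterior diagnostics (e.g. R̂, effective sample size) can then be read off `chain`, in the spirit of the goodness-of-fit checks the talk reviews.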
false
https://pretalx.com/juliacon2023/talk/YD7DUP/
https://pretalx.com/juliacon2023/talk/YD7DUP/feedback/
32-D463 (Star)
Linear analysis of ModelingToolkit models
Talk
2023-07-27T12:00:00-04:00
12:00
00:30
We detail recent work on linear analysis of ModelingToolkit models. We talk about the linearization itself, the subsequent simplification of models with algebraic equations into standard linear statespace models and about causal elements introduced to enable a more convenient workflow for analysis.
We end with examples: illustrating mode shapes of a series of masses and springs, computing gain and phase margins of an electrical circuit, and determining the stability properties of a control system.
juliacon2023-26816-linear-analysis-of-modelingtoolkit-models
SciML
Fredrik Bagge Carlson
en
ModelingToolkit is a powerful language for acausal modeling, capable of modeling everything from a single pendulum to the structural mechanics of an industrial robot or the HVAC system in a skyscraper. While detailed models enable high fidelity simulations, they pose challenges for analysis. Engineers often resort to linear analysis to perform tasks such as mode analysis to determine vibration patterns and closed-loop analysis of control systems to determine stability and performance properties.
This talk details the work that has been done over the last year enabling linear analysis of ModelingToolkit models. We start by talking about the linearization itself, and the subsequent simplification of models with algebraic equations into standard linear statespace models. We then talk about causal elements introduced in the otherwise acausal modeling language, enabling a more convenient workflow for analyzing models.
We end with some examples of linear analysis: illustrating mode shapes of a series of masses and springs, computing the gain and phase margins of an electrical circuit, and determining the stability and robustness properties of a feedback-control system.
Slides: https://docs.google.com/presentation/d/1UcCiMxVgwWKV38tKq1lMY0Glnt4ZWDGCuR3nJ2xISjM/edit?usp=sharing
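For flavor, a minimal sketch of linearizing a toy system with ModelingToolkit's `linearize` (the system and all names here are illustrative, not from the talk, and exact APIs may differ across ModelingToolkit versions):

```julia
using ModelingToolkit

@variables t x(t) = 0.0 u(t) = 0.0 y(t) = 0.0
D = Differential(t)

# Toy first-order system; u has no defining equation, so it acts as an input.
eqs = [D(x) ~ -x + u,
       y ~ 2x]
@named sys = ODESystem(eqs, t)

# linearize returns the statespace matrices (A, B, C, D) at the operating
# point, along with the simplified system.
matrices, simplified_sys = linearize(sys, [u], [y])
# For dx/dt = -x + u, y = 2x, the linearization is A = [-1], B = [1], C = [2], D = [0].
```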
false
https://pretalx.com/juliacon2023/talk/NN3URK/
https://pretalx.com/juliacon2023/talk/NN3URK/feedback/
32-D463 (Star)
Lunch Day 2 (Room 4)
Lunch Break
2023-07-27T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28076-lunch-day-2-room-4-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/LQPPHX/
https://pretalx.com/juliacon2023/talk/LQPPHX/feedback/
32-D463 (Star)
Surrogatizing Dynamic Systems using JuliaSim: An introduction.
Talk
2023-07-27T14:00:00-04:00
14:00
00:30
In this talk, we will discuss the use of surrogates in scientific simulations, introduce JuliaSim, a commercial offering built on top of the SciML ecosystem, and present some of the surrogates available in JuliaSim.
juliacon2023-26892-surrogatizing-dynamic-systems-using-juliasim-an-introduction-
SciML
Sharan Yalburgi
en
In recent years, the use of surrogates in scientific simulations has attracted increasing interest. Surrogates, also known as digital twins, are approximate models that are trained to mimic the output of a computationally expensive or complex simulation. They can be used to quickly explore the parameter space of a simulation, tune a controller, or optimize inputs and parameters.
The SciML ecosystem is an open-source project that aims to provide a suite of software tools for scientific modeling in the Julia programming language. It includes a wide range of modeling and simulation tools, including differential equations solvers, optimization algorithms, and surrogate models. The goal of SciML is to make it easy for scientists and engineers to use advanced modeling techniques in their work.
JuliaSim is a commercial offering built on top of the open-source SciML ecosystem. It provides a suite of tools for building and deploying surrogate models in Julia. JuliaSim makes it easy to interface with existing simulation codes and dynamic models and also to train, validate, and deploy surrogates using a wide range of algorithms.
In this talk, we will discuss the use of surrogates in scientific simulations, introduce JuliaSim, and survey the variety of surrogates available in JuliaSim, including their individual specialties.
false
https://pretalx.com/juliacon2023/talk/NLJFAX/
https://pretalx.com/juliacon2023/talk/NLJFAX/feedback/
32-D463 (Star)
SciML: Novel Scientific Discoveries through composability
Talk
2023-07-27T14:30:00-04:00
14:30
00:30
SciML provides tools for a wide problem space. It can be confusing for new users to decide between the packages and the kind of questions that can be answered using each of them. This talk will walk through various ecosystem components for tasks such as inverse problems, model augmentation, and equation discovery and showcase workflows for using these packages with examples based on real-world data.
juliacon2023-24496-sciml-novel-scientific-discoveries-through-composability
SciML
Vaibhav DixitTorkelTorkelUtkarsh
en
SciML provides tooling for various Scientific Machine Learning tasks, including parameter estimation, model augmentation, equation discovery, ML-based solvers for differential equations, and surrogatization. It can be confusing for new users to reason about the various packages, including DiffEqParamEstim, DiffEqFlux, DataDrivenDiffEq, NeuralPDE, and Surrogates, and their suitability for the problem they want to solve. We plan to provide a wide overview of the SciML ecosystem packages, describing the kinds of questions that each of these packages is suitable to answer. Additionally, we will demonstrate sample SciML workflows that show the composability of the ecosystem.
false
https://pretalx.com/juliacon2023/talk/MS7SVG/
https://pretalx.com/juliacon2023/talk/MS7SVG/feedback/
32-D463 (Star)
Intro to modeling with ModelingToolkitStandardLibrary
Talk
2023-07-27T15:00:00-04:00
15:00
00:30
ModelingToolkitStandardLibrary is a library of components to model the world and beyond. We will demonstrate how various Mechanical, Electrical, Magnetic, and Thermal components can be used as basic building blocks to simulate complex models. We will talk about design choices, vision, and future plans for the package.
juliacon2023-26951-intro-to-modeling-with-modelingtoolkitstandardlibrary
SciML
Venkatesh PrasadFredrik Bagge Carlson
en
Component-based acausal modeling is a system for quickly generating large-scale, efficient models by composing elements with known physics. All one has to do is define components that represent objects, such as transistors, air conditioning units, or pipes, and connect them to generate accurate physical models of real-world phenomena. However, these models can easily require hundreds of components. How can one avoid the tedious task of having to understand and write the physics of so many things?
The answer is ModelingToolkit’s system for component-based modeling. With ModelingToolkit, you can compose components and it will generate the resulting set of equations as differential equations amenable to simulation with the DifferentialEquations.jl environment. To aid in modeling standard systems, we introduce a new addition to the ModelingToolkit and SciML ecosystem called the ModelingToolkitStandardLibrary. The ModelingToolkitStandardLibrary is a standard library of pre-built components modeled with ModelingToolkit. We want this package to serve two purposes: as a starting place for anyone who wants to get started with modeling, and as a performant dependency for power users.
MTKStdLib is structured to provide extendable basic blocks. This talk intends to walk users through the process of composing custom models using these components. We will speak about the internal structure, best practices, and known gotchas.
We will briefly talk about all the signals and shapes we support and the utility and math blocks we provide; the thermal ports and thermal circuit components such as convective and thermal resistors and `HeatCapacitors`; the magnetic component: `FluxTubes`; the rotational library; and the electrical circuit components.
This talk should leave users supercharged to use MTK and the standard library to model the world and beyond.
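For flavor, a sketch of composing electrical components, modeled on the RC-circuit example in the ModelingToolkitStandardLibrary documentation (exact constructor names and keyword arguments may differ across package versions):

```julia
using ModelingToolkit, OrdinaryDiffEq
using ModelingToolkitStandardLibrary.Electrical
using ModelingToolkitStandardLibrary.Blocks

@variables t
@named resistor  = Resistor(R = 1.0)
@named capacitor = Capacitor(C = 1.0)
@named source    = Voltage()
@named constant  = Constant(k = 1.0)   # constant 1 V input signal
@named ground    = Ground()

# Wire the components together; the physics of each block is pre-built.
rc_eqs = [connect(constant.output, source.V)
          connect(source.p, resistor.p)
          connect(resistor.n, capacitor.p)
          connect(capacitor.n, source.n, ground.g)]

@named rc_model = ODESystem(rc_eqs, t;
                            systems = [resistor, capacitor, constant, source, ground])
sys  = structural_simplify(rc_model)    # reduce to the minimal ODE set
prob = ODEProblem(sys, Pair[], (0.0, 10.0))
sol  = solve(prob, Tsit5())
```

The user only states the connections; `structural_simplify` eliminates the algebraic equations introduced by the connectors before simulation.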
false
https://pretalx.com/juliacon2023/talk/V7SMGP/
https://pretalx.com/juliacon2023/talk/V7SMGP/feedback/
32-D463 (Star)
BSTModelKit.jl Building Biochemical Systems Theory Models
Lightning talk
2023-07-27T15:30:00-04:00
15:30
00:10
In this talk, we introduce the BSTModelKit.jl package, which enables the construction and analysis of biochemical systems theory (BST) models of metabolic and signal transduction networks. We demonstrate the capabilities of BSTModelKit.jl by analyzing the thrombin generation dynamics of a population of synthetic patients before and during pregnancy developed from an ongoing study at the University of Vermont supported by the National Institutes of Health (NIH).
juliacon2023-26923-bstmodelkit-jl-building-biochemical-systems-theory-models
SciML
Jeffrey Varner
en
Biochemical systems theory (BST), developed beginning in the 1960s by Savageau, Voit, and others, is a modeling framework based on ordinary differential equations (ODE) in which biochemical processes are represented using power-law expansions in the system's variables. In this talk, we introduce [BSTModelKit.jl](https://github.com/varnerlab/BSTModelKit.jl), a Julia package dedicated to the automatic generation and analysis of BST models of metabolic and signal transduction networks. [BSTModelKit.jl](https://github.com/varnerlab/BSTModelKit.jl) features a simple domain-specific language (DSL) that specifies the model reaction network, methods to estimate steady-state and dynamic solutions to BST models, and methods to conduct global sensitivity analysis of BST model parameters.
We construct and analyze BST models of the thrombin generation dynamics of synthetic patients before and during pregnancy to demonstrate the features of the [BSTModelKit.jl](https://github.com/varnerlab/BSTModelKit.jl) package. Women are at higher risk for a blood clot during pregnancy, childbirth, and up to 3 months after delivering a baby. The Centers for Disease Control and Prevention estimates that pregnant women are up to 5 times more likely to experience a blood clot than women who are not pregnant. A synthetic population of pregnant women was developed from patient data to better understand this increased clotting risk. Measurements of 11 coagulation factors involved with the regulation of thrombin generation and measurements of the hormones Estradiol and Progesterone were collected longitudinally from N = 38 women at three visits: V1 non-pregnant, V2 first trimester, and V3 third trimester. A corresponding Thrombin Generation Assay (TGA) was conducted for each patient sample, and parameters describing the dynamics were extracted.
A joint probability model (assumed to be a multivariate normal distribution) was constructed from this matched data by computing the mean vector and covariance arrays from the experimental measurements. The probability model was then used to generate synthetic patient populations (dimension 1k, 10k, and 100k patients) that could be used in subsequent Machine Learning (ML) studies. One such ML study gauged how representative the synthetic population was of the original data. Next, an ordinary differential equation BST model of coagulation dynamics in individual patients was developed from the synthetic population and used to predict the patient TGA parameters. Patient-specific BST model parameters were estimated and used to simulate TGA patient data; the synthetic-patient BST models were consistent with true-patient TGA measurements. Further, global sensitivity analysis of the BST models identified which model parameters controlled the different aspects of the TGA measurements. Finally, by clustering the synthetic and actual patient data, using a radial basis function distance metric, our synthetic population recapitulated the empirically measured differences between the non-pregnant and pregnant states.
The following grants supported this work: The Interaction of Basal Risk, Pharmacological Ovulation Induction, Pregnancy and Delivery on Hemostatic Balance NIH NHLBI R-33 HL 141787 (PI’s [Bernstein](https://www.uvmhealth.org/medcenter/provider/ira-m-bernstein-md), [Orfeo](https://www.med.uvm.edu/biochemistry/lab_orfeo_research)) and the Pregnancy Phenotype and Predisposition to Preeclampsia NIH NHLBI R01 HL 71944 (PI [Bernstein](https://www.uvmhealth.org/medcenter/provider/ira-m-bernstein-md)).
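For flavor, a hand-written two-variable power-law (S-system) model of the kind BST is built on; this is illustrative only — it does not use BSTModelKit.jl's DSL, and all rate constants and kinetic orders are invented:

```julia
using OrdinaryDiffEq

# S-system form: each rate is a difference of power-law terms,
#   dx_i/dt = α_i * ∏_j x_j^g_ij  -  β_i * ∏_j x_j^h_ij
function s_system!(du, u, p, t)
    α1, β1, α2, β2 = p
    du[1] = α1 * u[2]^0.5 - β1 * u[1]^0.8
    du[2] = α2 * u[1]^0.7 - β2 * u[2]^0.9
end

prob = ODEProblem(s_system!, [1.0, 1.0], (0.0, 10.0), (2.0, 1.2, 1.5, 1.0))
sol  = solve(prob, Tsit5())   # dynamic solution; approaches steady state as t grows
```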
false
https://pretalx.com/juliacon2023/talk/C7WNPJ/
https://pretalx.com/juliacon2023/talk/C7WNPJ/feedback/
32-D463 (Star)
UDEs for parameter estimation in Systems Biology
Lightning talk
2023-07-27T15:40:00-04:00
15:40
00:10
Neural universal differential equations (UDEs) are one of the comparatively new concepts combining aspects of purely knowledge-based mechanistic models with black-box and data-based approaches. We explore the applicability of UDEs for parameter estimation in the context of systems biology. For this, we consider different levels of noise, data sparsity, and prior knowledge about the underlying system on a diverse set of problems.
juliacon2023-26864-udes-for-parameter-estimation-in-systems-biology
SciML
Nina Schmid
en
There are two common paradigms for mathematical modelling of dynamics in life science: Knowledge-based mechanistic models like systems of ordinary differential equations or black-box approaches such as neural networks. Over the last years, methods for combining these concepts, sometimes called hybrid models, gained increasing attention in various fields of science. Neural universal differential equations (UDEs) represent one of the most prominent ideas. UDEs describe the dynamics of a system in the form of an ordinary differential equation (ODE), summing over a knowledge-based mechanistic term and a neural network-based universal approximator term. Since mechanistic parameters can jointly be optimized with the parameters of the neural network, UDEs raise the hope of solving both forward and inverse problems with arbitrary levels of prior knowledge.
Using SciML's DiffEqFlux.jl, we explore the applicability of UDEs for parameter estimation in the context of systems biology. For this, we consider different levels of noise, data sparsity, and prior knowledge about the underlying system on problems described by ordinary differential equations. The setting of systems biology introduces difficulties like observable mappings, high-dimensional mechanistic parameter spaces and stiffness.
We integrate ideas from conventional mechanistic modeling with ideas from neural network training to overcome these challenges and balance the contributions from knowledge-based mechanistic models and data-driven neural network components.
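The UDE structure described above — one ODE right-hand side summing a known mechanistic term and a neural-network term, with both parameter sets optimized jointly — can be sketched as follows (a hypothetical Lotka-Volterra-style example in the spirit of the SciML tutorials; all parameter values are illustrative):

```julia
using Lux, Random, ComponentArrays, OrdinaryDiffEq

rng = Random.default_rng()
nn = Lux.Chain(Lux.Dense(2, 16, tanh), Lux.Dense(16, 2))  # universal approximator
p_nn, st = Lux.setup(rng, nn)
p_nn = ComponentArray(p_nn)

α, δ = 1.3, 1.8          # known mechanistic parameters

function ude!(du, u, p, t)
    û = first(nn(u, p, st))          # learned interaction terms
    du[1] =  α * u[1] + û[1]         # known growth + neural correction
    du[2] = -δ * u[2] + û[2]         # known decay  + neural correction
end

prob = ODEProblem(ude!, [0.44, 4.6], (0.0, 5.0), p_nn)
sol  = solve(prob, Tsit5())
# In practice, p_nn (and optionally α, δ) are then fit to data by
# differentiating through the solve, e.g. via SciML's sensitivity tooling.
```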
false
https://pretalx.com/juliacon2023/talk/9GLADM/
https://pretalx.com/juliacon2023/talk/9GLADM/feedback/
32-D463 (Star)
Geometric Control of a Quadrotor : Simulation and Visualization
Lightning talk
2023-07-27T15:50:00-04:00
15:50
00:10
This talk will present a simulation of geometric control on a quadrotor model built using ModelingToolkit.jl. The resulting trajectories will be visualized using Makie.jl. The talk will be a detailed overview of how to use these Julia packages to simulate and visualize the control of quadrotors.
juliacon2023-26920-geometric-control-of-a-quadrotor-simulation-and-visualization
SciML
Rajeev Voleti
en
This talk will take you through the process of simulating geometric control on a quadrotor model using the Julia programming language. We will begin by discussing the importance of quadrotors in various fields and the need for precise control. Next, we will delve into building a quadrotor model using the ModelingToolkit.jl package. You will learn about the advantages of using nonlinear geometric control methods over traditional linear control techniques.
After building the model, we will perform a simulation to visualize the control of the quadrotor. The simulation results will be animated using Makie.jl, a powerful visualization package in Julia. You will see the quadrotor's trajectories in real-time, which will help you understand how the control inputs affect the motion of the quadrotor.
Throughout the talk, you will learn about the Julia packages and the concepts used in simulating and visualizing geometric control. By the end of the talk, you will have a good understanding of how to use Julia to simulate and visualize the control of quadrotors.
The talk will be an overview with some hands-on examples.
false
https://pretalx.com/juliacon2023/talk/QXZFDL/
https://pretalx.com/juliacon2023/talk/QXZFDL/feedback/
32-141
Roman Vershynin: Revisiting Grothendieck
Talk
2023-07-27T08:30:00-04:00
08:30
00:30
Revisiting Grothendieck: Szemeredi regularity, random submatrices, and covariance loss
juliacon2023-35754-roman-vershynin-revisiting-grothendieck
ASE60
en
Alan Edelman's 60th Birthday Celebration Talk: Revisiting Grothendieck: Szemeredi regularity, random submatrices, and covariance loss
false
https://pretalx.com/juliacon2023/talk/Z8RQZT/
https://pretalx.com/juliacon2023/talk/Z8RQZT/feedback/
32-141
Misha Kilmer: Spotlight on Structure: From Matrices to Tensors
Talk
2023-07-27T09:00:00-04:00
09:00
00:30
Spotlight on Structure: from Matrix to Tensor Algebra for Optimal Approximation of Non-random Data
juliacon2023-35755-misha-kilmer-spotlight-on-structure-from-matrices-to-tensors
ASE60
en
Alan Edelman's 60th Birthday Celebration Talk: Spotlight on Structure: from Matrix to Tensor Algebra for Optimal Approximation of Non-random Data
false
https://pretalx.com/juliacon2023/talk/XYFJZG/
https://pretalx.com/juliacon2023/talk/XYFJZG/feedback/
32-141
Avoiding Discretization Issues for Nonlinear Eigenvalue Problems
Talk
2023-07-27T09:30:00-04:00
09:30
00:30
Alex Townsend: Avoiding Discretization Issues for Nonlinear Eigenvalue Problems
juliacon2023-35756-avoiding-discretization-issues-for-nonlinear-eigenvalue-problems
ASE60
en
Alan Edelman's 60th Birthday Celebration Talk: Avoiding discretization issues for nonlinear eigenvalue problems
false
https://pretalx.com/juliacon2023/talk/3QFRKP/
https://pretalx.com/juliacon2023/talk/3QFRKP/feedback/
32-141
Gil Strang: Elimination and Factorization
Talk
2023-07-27T10:00:00-04:00
10:00
00:30
Elimination and Factorization
juliacon2023-35757-gil-strang-elimination-and-factorization
ASE60
en
Alan Edelman's 60th Birthday Talk: Elimination and Factorization talk by Gil Strang. Held in Stata room 32-141
false
https://pretalx.com/juliacon2023/talk/DGYBYB/
https://pretalx.com/juliacon2023/talk/DGYBYB/feedback/
32-141
The Special Math of Translating Theory To Software in DiffEq
Talk
2023-07-27T11:00:00-04:00
11:00
00:30
Chris Rackauckas: The Special Math of Translating Theory To Software in Differential Equations
juliacon2023-35758-the-special-math-of-translating-theory-to-software-in-diffeq
ASE60
en
The Special Math of Translating Theory To Software in Differential Equations by Chris Rackauckas in 32-141
false
https://pretalx.com/juliacon2023/talk/HPT87A/
https://pretalx.com/juliacon2023/talk/HPT87A/feedback/
32-141
Folkmar Bornemann: Unapologetically Beyond Universality
Talk
2023-07-27T11:30:00-04:00
11:30
00:30
Unapologetically Beyond Universality by Folkmar Bornemann
juliacon2023-35759-folkmar-bornemann-unapologetically-beyond-universality
ASE60
en
Alan Edelman's 60th Birthday Celebration Talk: Unapologetically Beyond Universality by Folkmar Bornemann
false
https://pretalx.com/juliacon2023/talk/YGHAFS/
https://pretalx.com/juliacon2023/talk/YGHAFS/feedback/
32-141
Steven Smith: Say It With Matrices?
Talk
2023-07-27T12:00:00-04:00
12:00
00:30
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35762-steven-smith-say-it-with-matrices-
ASE60
en
My career and contributions have been greatly influenced by Alan Edelman’s work on random matrices, optimization, and scientific computing, along with his cherished collaboration and advice. This talk starts with a brief survey of how Alan and his ideas provide a strong foundation for applied research in important areas: random matrices and optimization are applied extensively in diverse fields from sensor arrays to social media networks. The recent, interwoven developments of networked multimedia content sharing and neural-network-based large language and diffusion models would appear to provide a natural home for this theory, which has a great deal to say about the underlying matrices and algorithms that describe both the data and nonlinear optimization methods used in AI. Yet progress in these AI fields has evolved rapidly and spectacularly almost wholly without explicit insights from matrix theory, in spite of their deep reliance on random matrices. The second part of the talk uses related experience from recent work on MCMC- and LLM-based causal inference of real-world network influence to describe the challenges and potential opportunities of applying matrix theory to these recent developments.
false
https://pretalx.com/juliacon2023/talk/3MMRYJ/
https://pretalx.com/juliacon2023/talk/3MMRYJ/feedback/
32-141
Daniel Spielman: Laplacians.jl
Talk
2023-07-27T14:00:00-04:00
14:00
00:30
Laplacians.jl by Daniel Spielman
juliacon2023-35760-daniel-spielman-laplacians-jl
ASE60
en
Alan Edelman's 60th Birthday Celebration Talk: Laplacians.jl by Daniel Spielman held in Stata Center 32-141
false
https://pretalx.com/juliacon2023/talk/TF77V8/
https://pretalx.com/juliacon2023/talk/TF77V8/feedback/
32-141
ASE60 Poster Session
Poster Session
2023-07-27T14:30:00-04:00
14:30
01:30
ASE60 Poster Session
juliacon2023-35761-ase60-poster-session
ASE60
en
Alan Edelman's 60th Birthday Conference Poster Session, held in the Stata Center 4th floor R&D Commons
false
https://pretalx.com/juliacon2023/talk/KHQKQY/
https://pretalx.com/juliacon2023/talk/KHQKQY/feedback/
26-100
Breakfast
Breakfast
2023-07-28T07:00:00-04:00
07:00
01:30
Get a delicious breakfast.
juliacon2023-35792-breakfast
JuliaCon
en
Get a delicious continental breakfast and fresh coffee served in the Stata Center at JuliaCon 2023.
false
https://pretalx.com/juliacon2023/talk/XVAH7X/
https://pretalx.com/juliacon2023/talk/XVAH7X/feedback/
26-100
State of Julia
Keynote
2023-07-28T09:00:00-04:00
09:00
00:45
Welcome to JuliaCon 2023 Keynotes! We're excited to feature leading experts in the Julia community, who will share their latest insights and developments. Stay tuned for inspiring talks and lively discussions!
juliacon2023-28064-state-of-julia
JuliaCon
Tim HolyValentin ChuravyJameson Nash
en
Welcome to JuliaCon 2023 Keynotes! We're excited to feature leading experts in the Julia community, who will share their latest insights and developments. Stay tuned for inspiring talks and lively discussions!
false
https://pretalx.com/juliacon2023/talk/RTCDVR/
https://pretalx.com/juliacon2023/talk/RTCDVR/feedback/
26-100
Morning Break Day 3 Room 7
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28143-morning-break-day-3-room-7
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/RPJLW9/
https://pretalx.com/juliacon2023/talk/RPJLW9/feedback/
26-100
Learning smoothly: machine learning with RobustNeuralNetworks.jl
Talk
2023-07-28T10:00:00-04:00
10:00
00:30
Neural networks are typically sensitive to small input perturbations, leading to unexpected or brittle behaviour. We present RobustNeuralNetworks.jl: a Julia package for neural network models that are constructed to naturally satisfy robustness constraints. We discuss the theory behind our model parameterisation, give an overview of the package, and demonstrate its use in image classification, reinforcement learning, and nonlinear robotic control.
juliacon2023-26792-learning-smoothly-machine-learning-with-robustneuralnetworks-jl
General Machine Learning
/media/juliacon2023/submissions/QN3XGU/logo_7RNNoQq.svg
Nicholas Barbara
en
Modern machine learning relies heavily on rapidly training and evaluating neural networks in problems ranging from image classification to robotic control. However, most existing neural network architectures have no robustness certificates, making them sensitive to even small input perturbations and highly susceptible to poor data quality, adversarial attacks, and other forms of input disturbances. The few neural network architectures proposed in recent years that offer solutions to this brittle behaviour rely on explicitly enforcing constraints during training to “smooth” the network response. These methods are computationally expensive, making them slow and difficult to scale up to complex real-world problems.
Recently, we proposed the Recurrent Equilibrium Network (REN) architecture as a computationally efficient solution to these problems. The REN architecture is flexible in that it includes all commonly used neural network models, such as fully-connected networks, convolutional neural networks, and recurrent neural networks. The weight matrices and bias vectors in a REN are directly parameterised to naturally satisfy behavioural constraints chosen by the user. For example, the user can build a REN with a given Lipschitz constant to ensure the output of the network is quantifiably less sensitive to unexpected input perturbations. Other common options include contracting RENs and input/output passive RENs.
The direct parameterisation of RENs means that no additional constrained optimization methods are needed to train the networks to be less sensitive to attacks or perturbations. We can therefore train RENs with standard, unconstrained optimization methods (such as gradient descent) while also guaranteeing their robustness. Achieving the “best of both worlds” in this way is unique to our REN model class, and allows us to freely train RENs for common machine learning problems as well as more difficult applications where safety and robustness are critical.
In this talk, we will present our RobustNeuralNetworks.jl package. The package is built around the AbstractREN type, encoding the REN model class. It relies heavily on key features of the Julia language (such as multiple dispatch) for a neat, efficient implementation of RENs, and can be used alongside Flux.jl to solve machine learning problems with and without robustness requirements, all in native Julia.
We will give a brief introduction to the fundamental theory behind our direct parameterisation of neural networks, and outline what we mean by nonlinear robustness. We will follow this with a detailed overview of the RobustNeuralNetworks.jl package structure, including the key types and methods used to construct and implement a REN. To conclude, we will demonstrate some interesting applications of our Julia package for REN in our own research, including in:
- Image classification
- System identification
- Learning-based control for dynamical systems
- Real-time control of robotic systems via the Julia C API
Ultimately, we hope to show how RENs will be useful to the wider Julia machine learning community in both research and industry applications. For more information on the REN model class and its uses, please see our two recent papers https://arxiv.org/abs/2104.05942 and https://doi.org/10.1109/LCSYS.2022.3184847.
false
https://pretalx.com/juliacon2023/talk/QN3XGU/
https://pretalx.com/juliacon2023/talk/QN3XGU/feedback/
26-100
Predictive Uncertainty Quantification in Machine Learning
Talk
2023-07-28T10:30:00-04:00
10:30
00:30
We propose [`ConformalPrediction.jl`](https://github.com/pat-alt/ConformalPrediction.jl): a Julia package for Predictive Uncertainty Quantification in Machine Learning (ML) through Conformal Prediction. It works with supervised models trained in [`MLJ.jl`](https://alan-turing-institute.github.io/MLJ.jl/dev/), a popular comprehensive ML framework for Julia. Conformal Prediction is easy to understand, easy to use, and model-agnostic, and it works under minimal distributional assumptions.
juliacon2023-24045-predictive-uncertainty-quantification-in-machine-learning
General Machine Learning
/media/juliacon2023/submissions/JQWNNP/wide_logo_gy1jE4d.png
Patrick Altmeyer
en
### 📈 The Need for Predictive Uncertainty Quantification
A first crucial step towards building trustworthy AI systems is to be transparent about predictive uncertainty. Machine Learning model parameters are random variables and their values are estimated from noisy data. That inherent stochasticity feeds through to model predictions and should be addressed, at the very least in order to avoid overconfidence in models.
Beyond that obvious concern, it turns out that quantifying model uncertainty actually opens up a myriad of possibilities to improve up- and down-stream tasks like active learning and model robustness. In Bayesian Active Learning, for example, uncertainty estimates are used to guide the search for new input samples, which can make ground-truthing tasks more efficient ([Houlsby et al., 2011](https://arxiv.org/abs/1112.5745)). With respect to model performance in downstream tasks, predictive uncertainty quantification can be used to improve model calibration and robustness ([Lakshminarayanan et al., 2016](https://arxiv.org/abs/1612.01474)).
### 👉 Enter: Conformal Prediction
Conformal Prediction (CP) is a scalable frequentist approach to uncertainty quantification and coverage control ([Angelopoulos and Bates, 2022](https://arxiv.org/abs/2107.07511)). CP can be used to generate prediction intervals for regression models and prediction sets for classification models. There is also some recent work on conformal predictive distributions and probabilistic predictions. The following characteristics make CP particularly attractive to the ML community:
- The underlying methods are easy to implement.
- CP can be applied almost universally to any supervised ML model, which has allowed us to easily tap into the existing [`MLJ.jl`](https://alan-turing-institute.github.io/MLJ.jl/dev/) toolkit.
- It comes with a frequentist marginal coverage guarantee that ensures that conformal prediction sets contain the true value with a user-chosen probability.
- Only minimal distributional assumptions are needed.
- Though frequentist in nature, CP can also be effectively combined with Bayesian Methods.
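The core split-conformal recipe behind these coverage guarantees is short enough to sketch in plain Julia (an illustrative sketch on synthetic residuals, not the package's internal code):

```julia
# Split conformal regression: calibrate a quantile of absolute residuals
# on held-out data, then widen point predictions by that quantile.
α = 0.1                                    # target miscoverage (90% intervals)
y_cal = randn(100)                         # synthetic calibration targets
ŷ_cal = y_cal .+ 0.1 .* randn(100)         # synthetic point predictions
scores = abs.(y_cal .- ŷ_cal)              # nonconformity scores
n = length(scores)
q̂ = sort(scores)[ceil(Int, (n + 1) * (1 - α))]   # conformal quantile
interval(ŷ) = (ŷ - q̂, ŷ + q̂)               # marginal coverage ≥ 1 − α
```

The marginal guarantee only requires the calibration and test data to be exchangeable, hence the minimal distributional assumptions.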
### 😔 Problem: Limited Availability in Julia Ecosystem
Open-source development in the Julia AI space has been very active in recent years. [MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/) is just one great example testifying to these community efforts. As we gradually build up an AI ecosystem, it is important to also pay attention to the risks and challenges facing AI today. With respect to Predictive Uncertainty Quantification, there is currently good support for Bayesian Methods and Ensembling. A fully-fledged implementation of Conformal Prediction in Julia has so far been lacking.
### 🎉 Solution: `ConformalPrediction.jl`
Through this project we aim to close that gap and thereby contribute to broader community efforts towards trustworthy AI. Highlights of our new package include:
- **Interface to [MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/)**: turning your machine learning model into a conformal predictor is just one API call away: `conformal_model(model::MLJ.Supervised)`.
- **Many SOTA approaches**: the number of implemented approaches to Conformal Regression and Classification is already large and growing.
- **Detailed [Diátaxis](https://diataxis.fr/) Documentation**: tutorials and blog posts, hands-on guides, in-depth explanations and a detailed reference including docstrings that document the mathematical underpinnings of the different approaches.
- **Active Community Engagement**: we have coordinated our efforts with the core dev team of [`MLJ.jl`](https://alan-turing-institute.github.io/MLJ.jl/dev/) and some of the leading researchers in the field. Thankfully we have also already received a lot of useful feedback and contributions from the community.
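The one-call workflow looks roughly like this (a sketch: the particular model and the synthetic data are illustrative assumptions, not verified against the package docs):

```julia
using MLJ, ConformalPrediction

X, y = make_regression(100, 2)                        # synthetic regression data
Tree = @load DecisionTreeRegressor pkg=DecisionTree   # assumed model choice
conf_model = conformal_model(Tree())                  # the one API call
mach = machine(conf_model, X, y)
fit!(mach)
predict(mach, X)   # prediction intervals rather than point predictions
```

From here the usual MLJ machinery (evaluation, tuning, pipelines) applies unchanged.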
### 🎯 Future Developments
Our primary goal for this package is to become the go-to place for conformalizing supervised machine learning models in Julia. To this end we currently envision the following future developments:
- Best of both worlds through **Conformalized Bayes**: combining the power of Bayesian methods with conformal coverage control.
- Additional approaches to Conformal Regression (including time series) and Conformal Classification (including Venn-ABER) as well as support for Conformal Predictive Distributions.
For more information see the list of outstanding [issues](https://github.com/pat-alt/ConformalPrediction.jl/issues).
### 🧐 Curious?
Take a quick interactive tour to see what this package can do: [link](https://binder.plutojl.org/v0.19.12/open?url=https%253A%252F%252Fraw.githubusercontent.com%252Fpat-alt%252FConformalPrediction.jl%252Fmain%252Fdocs%252Fpluto%252Fintro.jl). Aside from this `Pluto.jl` 🎈 notebook you will find links to many more resources on the package repository: [`ConformalPrediction.jl`](https://github.com/pat-alt/ConformalPrediction.jl).
false
https://pretalx.com/juliacon2023/talk/JQWNNP/
https://pretalx.com/juliacon2023/talk/JQWNNP/feedback/
26-100
Machine Learning Property Loans for Fun and Profit
Talk
2023-07-28T11:00:00-04:00
11:00
00:30
This talk will demonstrate the use of the MLJ.jl package in building a machine-learning pipeline for a dataset of property loans. The goal is to predict which loans might default and build a strategy to minimize losses. Several machine learning models such as ElasticNet, XGBoost, and KNN will be explored, and then combined into a stacked model. I will also show how the output of these models can be used to drive investment decisions and the final results of the strategy. This talk will provide a practical example of using machine learning in Julia.
juliacon2023-26976-machine-learning-property-loans-for-fun-and-profit
General Machine Learning
Dean Markwick
en
In this talk, I'll demonstrate the MLJ.jl package and how to build a machine-learning pipeline for a dataset of property loans. By predicting which loans might default, we can build a machine-learning-driven strategy that avoids the loans most likely to lose us money.
I'll explore several different machine learning models, such as ElasticNet, XGBoost, and KNN, before combining them all into a stacked model that attempts to use all of the different techniques. I'll also show how we can use the output of all the models to drive the investment decision and the final results of this strategy.
Overall this talk will give a practical example of using machine learning in Julia and how MLJ.jl provides a comprehensive API for everything machine learning.
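A model stack of the kind described can be sketched with MLJ's `Stack` constructor (the component models and synthetic data below are illustrative assumptions, not the talk's actual pipeline):

```julia
using MLJ

X, y = make_regression(200, 5)   # synthetic stand-in for the loan dataset

# Load component models from the MLJ registry (assumed choices):
Ridge = @load RidgeRegressor pkg=MLJLinearModels
Tree  = @load DecisionTreeRegressor pkg=DecisionTree
KNN   = @load KNNRegressor pkg=NearestNeighborModels

# Base learners are combined by a meta-learner trained on their
# out-of-fold predictions.
stack = Stack(metalearner = Ridge(),
              resampling  = CV(nfolds = 5),
              ridge = Ridge(), tree = Tree(), knn = KNN())

mach = machine(stack, X, y)
fit!(mach)
```

The stack itself is just another MLJ model, so it can be evaluated and tuned like any single learner.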
false
https://pretalx.com/juliacon2023/talk/GDRTNK/
https://pretalx.com/juliacon2023/talk/GDRTNK/feedback/
26-100
Massively parallel inverse modelling on GPUs with Enzyme
Talk
2023-07-28T11:30:00-04:00
11:30
00:30
We present an efficient and scalable approach to inverse PDE-based modelling with the adjoint method. We use automatic differentiation (AD) with Enzyme to automatically generate the building blocks for the inverse solver. We utilize the efficient pseudo-transient iterative method to achieve performance close to the hardware limit for both forward and adjoint problems. We demonstrate close-to-optimal parallel efficiency on GPUs in a series of benchmarks.
juliacon2023-26952-massively-parallel-inverse-modelling-on-gpus-with-enzyme
JuliaCon
Ivan UtkinLudovic RässSamuel Omlin
en
Massively parallel hardware architectures such as graphics processing units (GPUs) make it possible to run numerical simulations at unprecedented resolution. Matching the results of simulations to the available observational and experimental data requires estimating the sensitivity of the model to changes in its parameters. The adjoint method for computing sensitivities is gaining attention in the scientific and engineering communities. This method allows for computing the sensitivities in the entire parameter space using the results of only one forward solve, in contrast to the direct method, which would require running the forward simulation for each parameter separately.
Recent breakthrough innovations in compiler technologies resulted in wide adoption of the paradigm of differentiable programming, in which the entire computer program could be differentiated via automatic differentiation (AD). Differentiable programming allows for automated generation of adjoints from the forward model. In this work, we demonstrate the applications of the adjoint method to inverse modelling in geosciences, and share our experiences using the Julia ecosystem for high-performance computing. We developed massively parallel 3D forward and inverse solvers with full GPU support. We used [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl) for the AD implementation, [ParallelStencil.jl](https://github.com/omlins/ParallelStencil.jl) to generate efficient computational kernels for multiple backends including GPUs, and [ImplicitGlobalGrid.jl](https://github.com/eth-cscs/ImplicitGlobalGrid.jl) for distributed parallelism.
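At its simplest, generating an adjoint with Enzyme.jl amounts to a single reverse-mode `autodiff` call; a minimal scalar sketch (a toy loss, not the talk's PDE solver):

```julia
using Enzyme

# A toy "forward model": a scalar loss depending on one parameter p.
loss(p) = (p^2 - 4.0)^2

# Reverse-mode AD gives dloss/dp in one call:
# d/dp (p² − 4)² = 4p(p² − 4), which at p = 3 equals 60.
g = first(autodiff(Reverse, loss, Active, Active(3.0)))[1]
```

In the actual solver the same mechanism differentiates whole stencil kernels, yielding the adjoint building blocks automatically.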
Co-authors: Ludovic Räss¹ ², Samuel Omlin ³
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) | ³ Swiss National Supercomputing Centre (CSCS)
false
https://pretalx.com/juliacon2023/talk/YKUD8Q/
https://pretalx.com/juliacon2023/talk/YKUD8Q/feedback/
26-100
Three Musketeers: Sherlock Holmes, Mathematics and Julia
Talk
2023-07-28T12:00:00-04:00
12:00
00:30
Mathematics is a science and one of the most important discoveries of the human race on earth. In our daily life, we use mathematics knowingly and unknowingly. Many of us are unaware that forensic experts use mathematics to solve crime mysteries. In this talk, we will explore how Sherlock Holmes, the famous fictional detective character created by Sir Arthur Conan Doyle uses Mathematics and Julia programming language to solve crime mysteries.
juliacon2023-27053-three-musketeers-sherlock-holmes-mathematics-and-julia
JuliaCon
Gajendra Deshpande
en
Mathematics is a science and one of the most important discoveries of the human race on earth. Math is everywhere and around us. It is in nature, music, sports, economics, engineering, and so on. In our daily life, we use mathematics knowingly and unknowingly. Many of us are unaware that forensic experts use mathematics to solve crime mysteries. In this talk, we will explore how Sherlock Holmes, the famous fictional detective character created by Sir Arthur Conan Doyle uses Mathematics and Julia programming language to solve crime mysteries. We will solve simple crime puzzles using mathematics and Julia scripts. Finally, we will solve a few complex hypothetical crime mysteries using advanced Julia concepts. The participants will learn how to use the concepts of mathematics such as statistics, probability, trigonometry, and graph theory, and Julia and its packages such as Measurements.jl, Plots.jl, Graphs.jl, DataFrames.jl, RecursiveArrayTools.jl, and MatrixNetworks.jl to solve the crime puzzles.
Outline
1. Why Julia for forensics? (05 Minutes)
2. Review of Mathematics concepts required to solve crimes (05 Minutes)
3. Solving simple puzzles (10 Minutes)
i) Estimate the pressure of a shoe print on soft ground
ii) Calculate the percentage of concentrations
iii) Compute bloodstain thickness
iv) Ricochet analysis and aspects of ballistics
v) Suicide, Accident or murder?
4. Advanced Problems (10 Minutes)
i) A Game of Shadows
ii) Bicycle Problem
iii) Detect the location of a serial killer
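The first puzzle in the outline reduces to the pressure formula p = F/A = m·g/A; a tiny Julia sketch with assumed example values (not case data):

```julia
# Pressure exerted by a shoe print: p = F / A = m·g / A.
# All numbers below are assumed example values.
mass = 80.0            # kg, suspect's estimated mass
g    = 9.81            # m/s², gravitational acceleration
area = 0.025           # m², measured area of one shoe print

force    = mass * g    # N: weight of the suspect
pressure = force / area   # Pa ≈ 31392 (about 31.4 kPa)
```

Comparing the estimated pressure with the measured depth of the print in soft ground is what lets the detective bound the suspect's weight.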
false
https://pretalx.com/juliacon2023/talk/SFCJDX/
https://pretalx.com/juliacon2023/talk/SFCJDX/feedback/
26-100
Lunch Day 3 (Room 7)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28086-lunch-day-3-room-7-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/FESQMX/
https://pretalx.com/juliacon2023/talk/FESQMX/feedback/
26-100
ExprParsers.jl: Object Orientation for Macros
Talk
2023-07-28T14:00:00-04:00
14:00
00:30
You want to build a complex macro? ExprParsers.jl gives you many prebuilt expression parsers - for functions, calls, args, wheres, macros, ... - so that you don't need to care about the different ways these high-level Expr-types can be represented in Julia syntax. Everything is well typed, so that you can use familiar julia multiple dispatch to extract the needed information from your input Expr.
juliacon2023-27046-exprparsers-jl-object-orientation-for-macros
Julia Base and Tooling
Stephan Sahm
en
The need to abstract over Expr-types like functions is already recognized by the widely used MacroTools.jl, which supports functions (and arguments) with a set of helpers like `splitdef` and `combinedef` that convert between Expr and Dict and back.
ExprParsers.jl is different from MacroTools.jl in that it 100% focuses on this kind of object-orientation, extended to many more Expr-types like where syntax, type annotations, keyword arg, etc. In addition, ExprParsers are well typed, composable and extendable in that you can easily write your own parser object.
When working with ExprParsers, you first construct your (possibly nested) parser, describing in detail what you expect as the input. Then you safely parse given expressions and dispatch on the precise ExprParser types. Finally, you can mutate the parsed results and return the manipulated version, or simply extract information from it.
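For comparison, the MacroTools Dict-based workflow mentioned above looks like this (ExprParsers.jl replaces the untyped Dict with typed, dispatchable parser objects):

```julia
using MacroTools

ex = :(function f(x, y::Int) x + y end)

d = splitdef(ex)     # Expr → Dict with keys like :name, :args, :body
d[:name]             # :f
d[:args]             # [:x, :(y::Int)]

combinedef(d)        # Dict → Expr, reconstructing the function definition
```

Because the Dict is untyped, mistakes only surface at `combinedef` time; typed parser objects let multiple dispatch catch them earlier.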
false
https://pretalx.com/juliacon2023/talk/WRHJPD/
https://pretalx.com/juliacon2023/talk/WRHJPD/feedback/
26-100
REPL Without a Pause: Bringing VimBindings.jl to the Julia REPL
Talk
2023-07-28T14:30:00-04:00
14:30
00:30
VimBindings.jl is a Julia package that emulates vim, the popular text editor, directly in the Julia REPL. This talk will illuminate the context in which a REPL-hacking package runs by taking a deep dive into the Julia REPL code, and articulate the modifications VimBindings.jl makes to introduce novel functionality. The talk will also describe design problems that emerge at the intersection of the REPL and vim paradigms, and the choices made to attempt a coherent fusion of the two.
juliacon2023-27044-repl-without-a-pause-bringing-vimbindings-jl-to-the-julia-repl
Julia Base and Tooling
Caleb Allen
en
Vim is a ubiquitous text editor found on almost every modern operating system. Vim (and its predecessor vi) has a storied history as a primary contender in the “editor wars”, its modal editing paradigm often pitted against the modeless, extensibility-oriented Emacs.
Vim users often tout its speed and ease of use, at least after stomaching a steep learning curve. Once a user has learned vim they might question why their fingers should leave home-row, even when they aren’t using vim. Their muscle memory can be applied across many applications by using vim emulation plugins or packages: browsers (vimium and vim vixen), email clients (mutt), IDE plugins (vscode-neovim for vs-code, ideavim for IntelliJ), and shell modes (zsh, bash, fish). Vim emulation can even be used to interact with an operating system: sway for Linux users, AppGrid for MacOS users, or evil mode for Emacs users.
Finally, users can use vim emulation in the Julia REPL. In this talk I will describe how VimBindings.jl works, as well as the design considerations borrowed from other vim emulation implementations in its development. I will take a deep dive into the Julia REPL code and describe how the package introduces new functionality to the REPL, I will also discuss the unique challenges faced during the creation of VimBindings.jl, and the not-so-elegant solutions developed to solve them.
Github repo: https://github.com/caleb-allen/VimBindings.jl
false
https://pretalx.com/juliacon2023/talk/BFQVMX/
https://pretalx.com/juliacon2023/talk/BFQVMX/feedback/
26-100
OpenTelemetry.jl: Collecting Logs, Traces, and Metrics Together
Talk
2023-07-28T15:00:00-04:00
15:00
00:30
[OpenTelemetry.jl](https://github.com/oolong-dev/OpenTelemetry.jl) is a pure Julia implementation of the [OpenTelemetry specification](https://opentelemetry.io/). It enables developers to collect logs, traces, and metrics in a unified approach to improve the observability of complex systems. With OpenTelemetry.jl, users can not only analyze the telemetry data in Julia but also across many other different languages or services.
juliacon2023-27051-opentelemetry-jl-collecting-logs-traces-and-metrics-together
Julia Base and Tooling
Jun Tian
en
## Introduction
Logs, traces, and metrics have long been used independently to help diagnose systems. As applications become more complex and more heterogeneous, pinpointing a problem is more difficult than before. Formed through a merger of the OpenTracing and OpenCensus projects, [OpenTelemetry](https://opentelemetry.io/) aims to improve the situation. In addition to [the concrete specification](https://github.com/open-telemetry/opentelemetry-specification), OpenTelemetry comes with a collection of APIs and SDKs in many different programming languages. As of now, Julia is not yet included, and that's why we created [OpenTelemetry.jl](https://github.com/oolong-dev/OpenTelemetry.jl). By fully respecting the OpenTelemetry specification, data collected in Julia with OpenTelemetry.jl can be analyzed together with data from other languages uniformly.
## Highlights of OpenTelemetry.jl
- **Simple API**
The APIs are carefully designed to balance Julia conventions with the specifications stated in OpenTelemetry.
- **Fully Configurable**
Most components in `OpenTelemetrySDK` are configurable (either explicitly through keyword arguments or with environment variables). Users can decide how to collect the data, when to send the data, and where to store them. With the architecture of OpenTelemetry, users can choose any APM which supports the OpenTelemetry collector.
- **Pluggable auto instrumentation**
Several commonly used packages in Julia are already auto-instrumented (Downloads.jl, Genie.jl, HTTP.jl, CUDA.jl, etc). Users can simply enable them by importing corresponding instrumentation packages.
- **It's FAST**
The API layer and SDK layer are separated by design. The API layer is lightweight enough so that package developers can safely add it as a dependency with only very little overhead introduced. Our benchmark results show that the implementation in Julia is much faster than many SDKs in other languages.
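Usage is intended to be as lightweight as the claim above suggests; a sketch of tracing a unit of work (assuming the package's `with_span` helper, analogous to other OpenTelemetry SDKs):

```julia
using OpenTelemetry

# Wrap a unit of work in a span; nested calls produce nested spans,
# and log records emitted inside are correlated with the active trace.
with_span("process_request") do
    @info "handling request"       # captured with trace context attached
    with_span("db_query") do
        # ... the actual work ...
    end
end
```

Because the API layer is separate from the SDK, this instrumentation is a no-op until an application installs an SDK provider.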
## Agenda
This talk contains the following three parts:
1. A general introduction to OpenTelemetry and its benefits.
2. An in-depth explanation of how OpenTelemetry.jl is implemented plus some personal experiences on how to implement and organize a mega Julia package **in a more Julian way**.
3. Demonstrations of how we apply OpenTelemetry.jl in real-world products.
## Target Audience
Both package developers and application developers who would like to improve the observability of their services in production will benefit from this talk. General Julia users can also learn how to better collect, manage, and analyze telemetry data in a unified approach after this talk.
false
https://pretalx.com/juliacon2023/talk/JUSWPE/
https://pretalx.com/juliacon2023/talk/JUSWPE/feedback/
26-100
Logging in Julia: Logging stdlib and LoggingExtras.jl
Talk
2023-07-28T15:30:00-04:00
15:30
00:30
This talk will go into how to use Julia's Logging standard library, in particular how to configure it using LoggingExtras.jl.
LoggingExtras.jl is a suite of extra functionality on top of the Logging standard library to make configuring log handling simpler.
Primarily it works by separating all the ways you can configure the logger into a series of composable objects, each with a single job: filtering, splitting, or transforming.
This allows for easy and comprehensive control of logging.
juliacon2023-27076-logging-in-julia-logging-stdlib-and-loggingextras-jl
Julia Base and Tooling
/media/juliacon2023/submissions/H7GADX/julialogging_03zYr8n.png
Frames Catherine White
en
LoggingExtras was developed in 2018.
Its functionality has seen only minimal changes since then, apart from some polish and battle-testing.
In 2022 it had its 1.0 release.
The main focus of this talk will be on the practicalities of configuring logging for larger applications and libraries.
Things like turning on Debug logging for a particular package, or muting warnings within a particular function.
The talk will also go into some discussion of how the Logging standard library works,
and why it works that way.
It will conclude with some commentary on issues with the current design and discussion of the future.
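The composable-objects design translates directly into code; a sketch of the two use cases mentioned above (the package name `"NoisyPkg"` being muted is hypothetical):

```julia
using Logging, LoggingExtras

# Split: send every record to both the console and a file.
tee = TeeLogger(ConsoleLogger(stderr, Logging.Debug),
                FileLogger("app.log"))

# Filter: drop warnings originating from one noisy (hypothetical) package.
quiet = EarlyFilteredLogger(tee) do log
    !(log.level == Logging.Warn && string(log._module) == "NoisyPkg")
end

global_logger(quiet)   # install the composed logger
```

Each wrapper does one job, so arbitrary routing policies are built by nesting them.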
false
https://pretalx.com/juliacon2023/talk/H7GADX/
https://pretalx.com/juliacon2023/talk/H7GADX/feedback/
26-100
Convex Optimization for Quantum Control in Julia
Lightning talk
2023-07-28T16:00:00-04:00
16:00
00:10
Feedback control policies for quantum systems often lack performance targets and certificates of optimality. Here, we will show how bounds on the best possible control performance are readily computable for a wide range of quantum control problems by means of convex optimization using Julia's optimization ecosystem. We discuss how these bounds provide targets and certificates to improve the design of quantum feedback controllers.
juliacon2023-26946-convex-optimization-for-quantum-control-in-julia
Quantum
/media/juliacon2023/submissions/MX7J3F/thumbnail.001_j5qsRfN.png
Flemming HoltorfFrank Schäfer
en
Optimal feedback control of quantum systems plays an important role in fields such as quantum information processing and quantum sensing. Despite its relevance, however, all but the simplest quantum control problems have no known analytical solutions and even rigorous numerical approximations are usually unavailable. This can be attributed to the complex dynamics associated with quantum systems subjected to continuous observation, such as in photon counting or homodyne detection setups; these systems are described by nonlinear jump-diffusion processes.
As a consequence, the use of heuristics and approximations, often based on reinforcement learning or expert intuition, is common practice for the design of quantum control policies. While these heuristics often perform remarkably well in practice, they seldom possess a mechanism to evaluate the degree of suboptimality they introduce, leaving it purely to intuition when to terminate the controller design process. We present a convex (sum-of-squares) programming-based framework for the computation of informative bounds on the best possible control performance and show how Julia's optimization ecosystem enables its implementation. We demonstrate the utility of the approach by constructing certifiably near-optimal control policies for a continuously monitored qubit.
false
https://pretalx.com/juliacon2023/talk/MX7J3F/
https://pretalx.com/juliacon2023/talk/MX7J3F/feedback/
26-100
Automating the composition of ML interatomic potentials in Julia
Lightning talk
2023-07-28T16:10:00-04:00
16:10
00:10
Scaling up atomistic simulation models is hampered by expensive calculations of interatomic forces. Machine learning potentials address this challenge and promise the accuracy of first-principles methods at a lower computational cost. This talk presents, as part of the research activities of the CESMIX project, how Julia is used to facilitate automating the composition of a novel neural potential based on the Atomic Cluster Expansion.
juliacon2023-27061-automating-the-composition-of-ml-interatomic-potentials-in-julia
Quantum
Emmanuel Lujan
en
Simplifying the composition of machine learning (ML) interatomic potentials is key to finding combinations of data, descriptors, and learning methods that exceed the accuracy and performance of the state-of-the-art. The Julia programming language, and its burgeoning atomistic ecosystem, can facilitate the composition of neural networks and other ML models with cutting-edge interatomic potentials, through mechanisms such as multiple dispatch, differentiable programming, ML and GPU abstractions, as well as specialized scientific computing libraries. Here, the use of Julia to automate the composition of a novel neural potential based on the Atomic Cluster Expansion (ACE) is presented as part of the research activities of the Center for the Exascale Simulation of Materials in Extreme Environments (CESMIX). The proposed scheme aims to facilitate the execution of parallel fitting experiments that search for hyper-parameter values that significantly improve the accuracy in training and test metrics (e.g., MAE, MSE, RSQ, mean cos) of energies and forces with respect to different Density Functional Theory (DFT) data sets.
More information about composing ML potentials in Julia [here](https://docs.google.com/presentation/d/1XI9zqF_nmSlHgDeFJqq2dxdVqiiWa4Z-WJKvj0TctEk/edit#slide=id.g169df3c161f_63_123).
Take a look at our growing atomistic CESMIX suite in GitHub [here](https://github.com/cesmix-mit).
false
https://pretalx.com/juliacon2023/talk/GGEKXE/
https://pretalx.com/juliacon2023/talk/GGEKXE/feedback/
26-100
WTP.jl: A library for readable electronic structure code
Lightning talk
2023-07-28T16:20:00-04:00
16:20
00:10
Electronic structure theory is critical for understanding materials, but it has been challenging to develop readable yet efficient electronic structure packages. `WTP.jl` identifies a layer of abstractions that simplifies such development through an interface resembling the mathematical notation of electronic structure theory. Using `WTP.jl`, we built another package `SCDM.jl` for electron localization with code far more readable than the widely used alternative `Wannier90`.
juliacon2023-26929-wtp-jl-a-library-for-readable-electronic-structure-code
Quantum
/media/juliacon2023/submissions/LE8M9X/wtp_hv4cOut.svg
Kangbo Li
en
Electronic structure theory describes the motion of electrons within a material, which has been critical for understanding the type of a material (e.g. insulator, conductor, semiconductor) as well as quantitatively predicting the properties of a material (e.g. thermal and electrical conductivity). The mathematics of electronic structure theory is both conceptually difficult and computationally intensive, making the development of readable packages challenging.
With `WTP.jl`, we simplify the development process by providing abstractions for commonly used concepts in electronic structure theory that are tricky to implement. Moreover, we design our interface to resemble the mathematical notation of electronic structure theory to minimize the effort in translating the theory into code.
Specifically, `WTP.jl` includes functions for working with nonorthogonal periodic grids and objects defined on such grids. Physically, the grids can be, for example, the reciprocal lattice or the first Brillouin zone. Then, objects defined on grids can be the periodic part of the Bloch orbitals (defined on the reciprocal lattice) or the set of Bloch orbitals (defined on the first Brillouin zone). Last but not least, it includes IO functionalities with the output of `pw.x` from Quantum Espresso, a popular distribution of programs for electronic structure theory.
A second package, `SCDM.jl`, builds on top of `WTP.jl` and provides electron localization through two different approaches. The first is through the selected columns of the density matrix (SCDM), which is a noniterative method that has no reliance on the initial guess. The second is based on our novel variational formulation of the localized Wannier functions, which is an iterative method that takes 10-70 fewer iterations than `Wannier90` with the same per-iteration cost.
Both packages will be free and open source upon release.
false
https://pretalx.com/juliacon2023/talk/LE8M9X/
https://pretalx.com/juliacon2023/talk/LE8M9X/feedback/
26-100
InverseStatMech.jl: Extract Interactions from Materials' Spectra
Talk
2023-07-28T16:30:00-04:00
16:30
00:30
This talk introduces InverseStatMech.jl, a Julia package that provides various efficient and robust algorithms to infer geometric structures of ordered and disordered materials from their spectra and other structural descriptors, which is a crucial inverse problem in statistical mechanics, crystallography and soft materials sciences.
(Package currently under development.)
juliacon2023-24808-inversestatmech-jl-extract-interactions-from-materials-spectra
Quantum
/media/juliacon2023/submissions/ELHUTR/Spec2Struc_RrTYr71.png
Haina Wang
en
The relationship between materials structure, statistical descriptors and intermolecular forces is of paramount importance in statistical mechanics. With its collection of state-of-the-art inverse algorithms, InverseStatMech.jl enables researchers to infer structures and forces from given statistical descriptors for materials in one, two and three dimensions. Input statistical descriptors include pair correlation functions and structure factors for point patterns, as well as two-point correlation functions for multi-phase media. Key algorithms available in the package are
- reverse Monte Carlo based on simulated annealing; see J. Phys.: Condens. Matter 13, R877–R913 (2001).
- iterative Boltzmann inversion; see Chem. Phys. 202, 295–306 (1996).
- iterative HNC inversion; see Phys. Rev. Lett. 54, 451–454 (1985) and J. Comput. Chem. 39, 1531–1543 (2018).
- ensemble-based algorithm; see Phys. Rev. E 101, 032124 (2020).
- algorithm based on optimizing parametrized potentials; see Phys. Rev. E 106 (4), 044122 (2022).
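To give a flavour of the second algorithm above, the iterative Boltzmann inversion update can be sketched in a few lines of Julia. This is an illustrative standalone helper, not the InverseStatMech.jl API, and the potential is assumed to be in units of kB·T:

```julia
# One iterative Boltzmann inversion (IBI) step: nudge the pair potential so
# the pair correlation produced by the current potential (g_cur) moves
# toward the target (g_tgt). All three vectors live on the same radial grid.
# Hypothetical helper for illustration -- not the InverseStatMech.jl API.
function ibi_update(V, g_cur, g_tgt; α = 1.0)
    # Standard IBI correction: V ← V + α · kB·T · ln(g_cur / g_tgt)
    return V .+ α .* log.(g_cur ./ g_tgt)
end
```

Where the current correlation overshoots the target (`g_cur > g_tgt`), the potential is raised to make that separation less favourable, and vice versa; the damping factor `α` controls the step size.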
false
https://pretalx.com/juliacon2023/talk/ELHUTR/
https://pretalx.com/juliacon2023/talk/ELHUTR/feedback/
26-100
Closing Ceremony
Ceremony
2023-07-28T17:00:00-04:00
17:00
00:30
As JuliaCon 2023 comes to a close, join us for a memorable farewell ceremony to celebrate a week of learning, collaboration, and innovation. We'll recap the highlights of the conference, thank our sponsors and volunteers, and recognize outstanding contributions to the Julia community. Don't miss this opportunity to say goodbye to old and new friends, and leave with inspiration for your next Julia project. Safe travels!
juliacon2023-28120-closing-ceremony
JuliaCon
en
As JuliaCon 2023 comes to a close, join us for a memorable farewell ceremony to celebrate a week of learning, collaboration, and innovation. We'll recap the highlights of the conference, thank our sponsors and volunteers, and recognize outstanding contributions to the Julia community. Don't miss this opportunity to say goodbye to old and new friends, and leave with inspiration for your next Julia project. Safe travels!
false
https://pretalx.com/juliacon2023/talk/N3RRSG/
https://pretalx.com/juliacon2023/talk/N3RRSG/feedback/
32-123
Morning Break Day 3 Room 2
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28138-morning-break-day-3-room-2
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/KQG7UF/
https://pretalx.com/juliacon2023/talk/KQG7UF/feedback/
32-123
Julia-fying Your Data Team: the Ultimate Upgrade
Lightning talk
2023-07-28T10:00:00-04:00
10:00
00:10
This talk is designed to give data/insights/decision intelligence team leads a better understanding of the potential of Julia and how it can be effectively adopted in their teams. I'll be discussing the advantages and disadvantages of adopting Julia, drawing on my own experience and sharing some of the lessons I've learned along the way. I'll also be sharing several examples of Julia's unreasonable effectiveness that have supercharged our small team.
juliacon2023-26811-julia-fying-your-data-team-the-ultimate-upgrade
Statistics and Data Science
Jan Siml
en
This talk covers using Julia in data, insights, and decision intelligence teams. As a leader in a small data organization, you have to be careful about adopting new technologies, because you don't have any capacity to spare. That's why I'm here today to share my experiences with adopting Julia and give you a better understanding of its potential for your team.
First, I'll cover the advantages of using Julia in your data team. You've heard that Julia is faster and easier for setting up your projects (and replicating your results in the future). But it's the Julia design philosophy and its ecosystem that make it so productive to use. In addition, its tooling and community provide tons of learning opportunities, so you'll keep growing and improving just from your everyday work. Additionally, Julia has an unbeatable time to insight - not just the execution time, but end-to-end, from starting the project to sharing the insights with stakeholders. I'll demonstrate what I mean and why that is.
Of course, no technology is perfect. I'll also be discussing some of the downsides to using Julia, such as internal challenges, the difficulty of finding talent with Julia skills, and some red flags that signal when you shouldn't adopt Julia (e.g., a large existing codebase, strict cross-dependencies, or deployment requirements).
But the most compelling reason to consider Julia is its unreasonable effectiveness. I'll be sharing a few examples of how Julia has supercharged our small team and how you can benefit from it too (e.g., workflow setup, reuse via small packages, the documentation system, composability of tools, and user-friendly APIs on custom types).
In conclusion, while there are certainly some downsides to adopting Julia, the advantages and its unreasonable effectiveness make it a worthy consideration for data, insights and decision intelligence teams. I hope that by sharing my experiences and examples, I've been able to give you a better understanding of whether Julia is the right choice for your team and how you can effectively adopt it.
false
https://pretalx.com/juliacon2023/talk/EGDAR7/
https://pretalx.com/juliacon2023/talk/EGDAR7/feedback/
32-123
Pigeons.jl: Distributed sampling from intractable distributions
Lightning talk
2023-07-28T10:10:00-04:00
10:10
00:10
Pigeons.jl enables users to leverage distributed computation to obtain samples from complicated probability distributions, such as multimodal posteriors arising in Bayesian inference and high-dimensional distributions in statistical mechanics. Pigeons is easy to use single-threaded, multi-threaded and/or distributed over thousands of MPI-communicating machines. We demo Pigeons.jl and offer advice to Julia developers who wish to implement correct distributed and randomized algorithms.
juliacon2023-26227-pigeons-jl-distributed-sampling-from-intractable-distributions
Statistics and Data Science
Nikola Surjanovic
en
In this talk we provide an overview of Pigeons.jl and describe how we addressed the challenges of implementing a distributed, parallelized, and randomized algorithm, exploiting a strong notion of “parallelism invariance” which we will exemplify and motivate. The talk appeals to practitioners who want to leverage distributed computation to perform challenging Bayesian inference tasks or sample from complex distributions such as those arising in statistical mechanics. We briefly describe how Pigeons uses a state-of-the-art method known as non-reversible parallel tempering to efficiently explore challenging posterior distributions. The talk also appeals to a broad array of Julia developers who may want to implement distributed randomized algorithms. The open-source code for Pigeons.jl is available at https://github.com/Julia-Tempering/Pigeons.jl.
Ensuring code correctness at the intersection of randomized, parallel, and distributed algorithms is a challenge. To address this challenge, we designed Pigeons based on a notion of “parallelism invariance”: the output for a given input should be **identical** regardless of which of the following four scenarios is used: 1. one machine running on one thread, 2. one machine running on several threads, 3. several machines running, each using one thread (in our case, communicating via MPI.jl), and 4. several machines running, each using several threads. Since (1) is significantly simpler to debug and implement than (2, 3, 4), being able to exactly compare the four outputs pointwise (instead of distributional equality checks, which have false positive rates), is a powerful tool to detect software defects.
Two factors tend to cause violations of parallelism invariance: (a) task-local and thread-local random number generators, (b) non-associativity of floating point operations. We discuss libraries we have developed to work around (a) and (b) while preserving the same running-time complexity, including a Julia splittable random stream library (https://github.com/UBC-Stat-ML/SplittableRandoms.jl) and a custom distributed MPI reduction in pure Julia.
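The per-task random stream idea behind parallelism invariance can be sketched as follows. This is an illustrative toy that derives streams by seeding a fresh Base `Xoshiro` generator per task; Pigeons.jl itself uses SplittableRandoms.jl for this:

```julia
using Random

# Toy illustration of parallelism invariance: each logical task i draws from
# its own deterministic stream derived from a root seed, so the output is
# identical whether the loop runs on one thread or many. Deriving streams
# via `hash` is only a sketch of the splittable-RNG idea.
function simulate(ntasks::Integer, root_seed::Integer)
    results = Vector{Float64}(undef, ntasks)
    Threads.@threads for i in 1:ntasks
        rng = Xoshiro(hash((root_seed, i)))  # per-task stream, scheduler-independent
        results[i] = rand(rng)
    end
    return results
end
```

Because no RNG state is shared across tasks, `simulate(4, 1)` returns the same vector regardless of `JULIA_NUM_THREADS`, which is exactly the property that makes single-threaded and distributed runs bitwise comparable.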
Joint work with: Alexandre Bouchard-Côté, Paul Tiede, Miguel Biron-Lattes, Trevor Campbell, and Saifuddin Syed.
false
https://pretalx.com/juliacon2023/talk/YGMJXD/
https://pretalx.com/juliacon2023/talk/YGMJXD/feedback/
32-123
Writing a Julia Data Science book like a software engineer
Lightning talk
2023-07-28T10:20:00-04:00
10:20
00:10
We present the package and book that we developed to write the Julia for Data Science book. Unlike many other books, ours treats functions as first-class citizens and is fully (re)built with CI. We discuss how to develop a code of conduct for communication guidelines and a workflow for coauthoring using GitHub features such as pull requests and issues as project management tools.
juliacon2023-24152-writing-a-julia-data-science-book-like-a-software-engineer
Statistics and Data Science
Jose Storopoli
en
Like many people before us, we started working on a book only to realize that the tools were not helpful enough.
So, as a proper case of yak shaving, we first created a software package to write books.
We will present our [package](https://github.com/JuliaDataScience/JuliaDataScience) and book that we've developed to write [Julia for Data Science Book](https://juliadatascience.io).
We write this book for researchers from all fields, while keeping robust software development practices in mind.
Unlike many other books, ours treats functions as first-class citizens and is fully (re)built with CI.
And, finally, we'll discuss how to develop a code of conduct for communication guidelines and a workflow for coauthoring using GitHub features such as pull requests, issues, and projects as software project management tools.
false
https://pretalx.com/juliacon2023/talk/TQE7GF/
https://pretalx.com/juliacon2023/talk/TQE7GF/feedback/
32-123
Julia / Statistics symposium
Minisymposium
2023-07-28T10:30:00-04:00
10:30
01:00
We report on the progress of Julia for statistics
juliacon2023-27081-julia-statistics-symposium
Statistics and Data Science
Ayush Patnaik
en
One of the important application areas for Julia lies in statistics. Dataset sizes have exploded, thus creating demands on high performance. In addition, there are elegant possibilities to harness the language foundations of Julia to create useful abstractions and better code reuse. In this mini-symposium, we report on some of this progress. The minisymposium will feature:
1. Survey.jl: a package for studying complex survey data (Ayush Patnaik)
Handling complex survey data is a crucial task in statistics, requiring the incorporation of survey design to accurately estimate standard errors associated with survey estimates. While established software such as SAS, STATA, and SUDAAN provide this capability, the survey package in R is a popular open-source option. However, as dataset sizes grow, the need for a more efficient computing framework arises. This talk introduces the Survey package in Julia, which aims to address this need.
This talk provides an overview of surveys and survey design, emphasizing the importance of accounting for survey design when estimating standard errors. It explores design-based standard errors and various methods to estimate them accurately. The presentation then delves into the implementation details and design choices of the Survey package in Julia.
2. Lessons learned from doing introductory econometrics with GLM.jl (Bogumil Kaminski)
GLM.jl is a fast and easy to use package, allowing its users to estimate generalized linear regression models.
As a researcher in economics, Bogumil explored if the package had sufficient functionality for a standard introductory econometrics course. He implemented all examples contained in Part 1 (chapters 1 to 9) of “Introductory Econometrics: A Modern Approach”, Seventh Edition textbook by Jeffrey M. Wooldridge.
In the talk, he shares his experience of the process, in particular discussing the missing functionalities.
The talk is accompanied by a GitHub repository containing the source code for all the exercises, along with custom functions that fill the gaps he found.
The code is ready for introductory econometrics teachers to use in their classes.
3. CRRao.jl: A consistent API for many useful models (Sourish Das)
CRRao.jl is built as a single API for diverse statistical models. Drawing inspiration from the Zelig package in the R world, the CRRao package provides a simple and consistent API for end-users. In this talk, we will present how to implement Bayesian analysis with the horseshoe prior using CRRao.jl. We will demonstrate how the Poisson regression model can be implemented for the English Premier League dataset, using ridge, Laplace, Cauchy, and horseshoe priors. Furthermore, we will show how logistic regression with the horseshoe prior can be implemented using the Friedreich's ataxia dataset from genome research. Additionally, we will illustrate how Gaussian process regression can be implemented using the CRRao API call.
4. Improving the precision of GLM (Mousum Dutta)
In this talk, we will explore the impact of different decomposition methods on Generalized Linear Models (GLM). The presentation will be divided into the following sections:
i. Overview of GLM (5 mins): A brief introduction to Generalized Linear Models, a statistical framework used for diverse response variables.
ii. Understanding Decomposition Methods in GLM (3 mins)
iii. Comparison of Cholesky and QR Decompositions (5 mins): Highlighting their strengths and weaknesses in GLM estimation.
iv. Improving Numerical Stability with QR Decomposition (7 mins): Exploring how QR decomposition can enhance numerical stability in GLM estimation.
v. Performance Advantage of Cholesky Decomposition (3 mins)
vi. Conclusions (2 mins)
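The Cholesky/QR trade-off outlined in items iii–v can be illustrated with the standard library alone. This is an illustrative least-squares comparison, not GLM.jl internals:

```julia
using LinearAlgebra

# Two routes to the same least-squares coefficients (illustrative; not
# GLM.jl internals). The Cholesky route factors the normal-equations matrix
# X'X -- cheap, but it squares the condition number of X. The QR route
# factors X directly -- more flops, but numerically more stable.
X = [1.0 1.0; 1.0 2.0; 1.0 3.0; 1.0 4.0]   # intercept + one predictor
y = [2.1, 3.9, 6.2, 8.0]

β_chol = cholesky(Symmetric(X'X)) \ (X'y)   # normal equations via Cholesky
β_qr   = qr(X) \ y                          # thin QR applied to X itself
```

On this well-conditioned toy problem both routes agree to machine precision; the differences only become visible for nearly collinear designs, which is the case motivating the QR work in the talk.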
This will be a simplified version of the Statistics in Julia symposium that was in JuliaCon 2022 (https://www.youtube.com/watch?v=Fewunew8wU4). It reflects the areas in which good new work has happened in the past year. Of course, the material in the minisymposium will be self-contained: it will target a new viewer, it will not be produced as a diff on the previous one.
false
https://pretalx.com/juliacon2023/talk/8Z8S3R/
https://pretalx.com/juliacon2023/talk/8Z8S3R/feedback/
32-123
Julia / Statistics symposium (2)
Minisymposium
2023-07-28T11:30:00-04:00
11:30
01:00
We report on progress of Julia for statistics
juliacon2023-30792-julia-statistics-symposium-2-
Statistics and Data Science
en
One of the important application areas for Julia lies in statistics. Dataset sizes have exploded, thus creating demands on high performance. In addition, there are elegant possibilities to harness the language foundations of Julia to create useful abstractions and better code reuse. In this mini-symposium, we report on some of this progress. The minisymposium will feature:
1. Julia for statistics by Ajay Shah
An overview talk with a sense of the landscape.
2. Survey.jl: a package for studying complex survey data by Iulia Dimitru, Shikhar Mishra, Ayush Patnaik
Handling complex survey data is an important problem in statistics, and the authors have made considerable progress in building an elegant Julia package that has many of the most-used capabilities.
3. CRRao.jl: A consistent API for many useful models by Sourish Das
Students and practitioners find their path into statistical modelling is eased by using the consistent framework of CRRao.jl.
4. TSFrames.jl by Chirag Anand
Time series data is ubiquitous, and can benefit from specialised functions and user abstractions. The TSFrames.jl package fills this need.
We expect a 20-minute talk from each of these four, adding up to 80 minutes.
This will be a simplified version of the Statistics in Julia symposium that was in JuliaCon 2022 (https://www.youtube.com/watch?v=Fewunew8wU4). It reflects the areas in which good new work has happened in the past year. Of course, the material in the minisymposium will be self-contained: it will target a new viewer, it will not be produced as a diff on the previous one.
false
https://pretalx.com/juliacon2023/talk/ARPCQJ/
https://pretalx.com/juliacon2023/talk/ARPCQJ/feedback/
32-123
Lunch Day 3 (Room 2)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28081-lunch-day-3-room-2-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/ST9N7V/
https://pretalx.com/juliacon2023/talk/ST9N7V/feedback/
32-123
Julia / Statistics symposium (3)
Minisymposium
2023-07-28T14:00:00-04:00
14:00
01:00
We report on progress of Julia for statistics
juliacon2023-30793-julia-statistics-symposium-3-
Statistics and Data Science
en
One of the important application areas for Julia lies in statistics. Dataset sizes have exploded, thus creating demands on high performance. In addition, there are elegant possibilities to harness the language foundations of Julia to create useful abstractions and better code reuse. In this mini-symposium, we report on some of this progress. The minisymposium will feature:
1. Julia for statistics by Ajay Shah
An overview talk with a sense of the landscape.
2. Survey.jl: a package for studying complex survey data by Iulia Dimitru, Shikhar Mishra, Ayush Patnaik
Handling complex survey data is an important problem in statistics, and the authors have made considerable progress in building an elegant Julia package that has many of the most-used capabilities.
3. CRRao.jl: A consistent API for many useful models by Sourish Das
Students and practitioners find their path into statistical modelling is eased by using the consistent framework of CRRao.jl.
4. TSFrames.jl by Chirag Anand
Time series data is ubiquitous, and can benefit from specialised functions and user abstractions. The TSFrames.jl package fills this need.
We expect a 20-minute talk from each of these four, adding up to 80 minutes.
This will be a simplified version of the Statistics in Julia symposium that was in JuliaCon 2022 (https://www.youtube.com/watch?v=Fewunew8wU4). It reflects the areas in which good new work has happened in the past year. Of course, the material in the minisymposium will be self-contained: it will target a new viewer, it will not be produced as a diff on the previous one.
false
https://pretalx.com/juliacon2023/talk/Q9R93T/
https://pretalx.com/juliacon2023/talk/Q9R93T/feedback/
32-123
Future of JuliaData ecosystem
Birds of Feather (BoF)
2023-07-28T15:00:00-04:00
15:00
01:00
During the BoF we want to discuss future directions of development of JuliaData ecosystem. The objective is to identify priorities of development of current packages in the ecosystem and discuss what functionalities are missing and should be added in new packages.
juliacon2023-26049-future-of-juliadata-ecosystem
Statistics and Data Science
Bogumił KamińskiJacob Quinn
en
Selected planned topics for discussion are:
* The state and future of integration databases (both SQL and NoSQL);
* Creation, execution, and monitoring of data processing pipelines; especially considering containerization and deployment in the cloud (including discussion of integration with standard tools available in this domain);
* Do we need, or could we benefit from, GPU acceleration for processing tables?
* Scaling computations (out-of-core processing, supercomputer processing): what is needed, what is the current status? What should be the roadmap for Dagger.jl and DTables.jl?
* How can we make it easier for users to move from R/Python to Julia?
false
https://pretalx.com/juliacon2023/talk/CXYFUL/
https://pretalx.com/juliacon2023/talk/CXYFUL/feedback/
32-123
Robust data management made simple: Introducing DataToolkit
Talk
2023-07-28T16:00:00-04:00
16:00
00:30
An ad-hoc approach to acquiring and using data can seem simple at first, but leaves one ill-prepared to deal with questions like: "where did the data come from?", "how was the data processed?", or "am I looking at the same data?". Generic tools for managing data (including some in Julia) exist, but suffer from limitations that reduce their broad utility. DataToolkit.jl provides a highly extensible and integrated approach, making robust and reproducible treatment of data and results convenient.
juliacon2023-26855-robust-data-management-made-simple-introducing-datatoolkit
Statistics and Data Science
/media/juliacon2023/submissions/9BTTRL/logotype_xPnV07q.svg
Timothy Chapman
en
In this talk, I will discuss the DataToolkit*.jl family of packages, which aim
to enable end-to-end robust treatment of data. The small number of other
projects that attempt to tackle subsets of data management —DataLad, the Kedro
data catalogue, Snakemake, Nextflow, Intake, Pkg.jl's Artifacts, DataSets.jl—
have a collection of good ideas, but all fall short of the convenience and
robustness that is possible.
Poor data management practices are rampant. This has been particularly visible
with computational pipelining tools, and so most have rolled their own data
management systems (Snakemake, Nextflow, Kedro). These may work well when
building/running computational pipelines, but are harder to use interactively;
besides which, not everything is best expressed as a computational pipeline.
Scientific research is another area where robust data management is vitally
important, yet often doesn't follow one or more of the FAIR principles
(findable, accessible, interoperable, reusable). For this domain, a more general
approach is needed than is offered by computational pipeline tools. DataLad and
Intake both represent more general solutions, however both also fall short in
major ways. DataLad lacks any sort of declarative data file (embedding info as
JSON in git commit messages), and while Intake has a (YAML — see
https://noyaml.com for why this isn't a great choice) data file format it only
provides read-only data access, making it less helpful for writing/publishing
results.
There is space for a new tool, providing a better approach to data management.
Furthermore, there are strong arguments for building such a tool with Julia, not
least due to the strong project management capabilities of Pkg.jl and the
independence from system state provided by JLL packages. An analysis performed
in Julia, with the environment specified by a Manifest.toml, the data processing
captured within the same project, and the input data itself verified by
checksumming, should provide strong reproducibility guarantees — beyond what is
easily offered by existing tools. A data analysis project can be hermetic.
One of the aims of DataToolkit.jl is to make this not just possible, but easy to
achieve. This is done by providing a capable "Data CLI" for working with the
Data.toml file. Obtaining and using a dataset is as easy as:
data> create iris https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv
julia> sum(d"iris".sepal_length) # do something with the data
This exercise generates a Data.toml file that declaratively specifies the data
sets of a project, how they are obtained and verified, and even captures the
preprocessing required before the main analysis. DataToolkit.jl provides a
declarative data catalogue file, easy data access, and automatic validation;
enabling data management in accordance with the FAIR principles.
Pkg.jl's Artifact system also provides these reproducibility guarantees, but is
designed specifically for small blobs of data required by packages. This shows
in the design, with a number of crippling caveats for general usage: a maximum
file size of 2 GB, support for libcurl downloads only (i.e. no GCP- or
S3-stored data), and a lack of extensibility.
Even if DataToolkit.jl works well in some situations, to provide a truly general
solution it needs to be able to adapt to unforeseen use cases — such as custom
data formats/access methods, and integration with other tools. To this end,
DataToolkit.jl also contains a highly flexible plugin framework that allows for
the fundamental behaviours of the system to be drastically altered. Several
major components of DataToolkit.jl are in fact provided as plugins, such as the
default value, logging, and memorisation systems.
While there is some overlap between DataToolkit.jl and DataSets.jl,
DataToolkit.jl currently provides a superset of the capabilities (for instance
the ability to express composite data sets, built from multiple input data sets)
with more features in development; and the design of DataToolkitBase.jl allows
for much more to be built on it.
The current plan for the talk itself is:
- An overview of the problem domain and background
- A description of the DataToolkit.jl family of packages
- A demonstration or example of using DataToolkit.jl
- Showing what it takes to implement a new data loader/storage driver
- An overview of the plugin system
- A demonstration of the plugin system
false
https://pretalx.com/juliacon2023/talk/9BTTRL/
https://pretalx.com/juliacon2023/talk/9BTTRL/feedback/
32-123
A Data Persistence Architecture for the SimJulia Framework
Talk
2023-07-28T16:30:00-04:00
16:30
00:30
We present a novel transparent data persistence architecture as an extension of the SimJulia package. We integrated PostgresORM into the ResumableFunctions library using Julia's metaprogramming support. As such, we were able to remove the dependency on a user's knowledge of persistence architectures. Our contribution aims to improve usability while demonstrating the power of macro expansion to move towards a dynamic object-relational mapping configuration.
juliacon2023-25313-a-data-persistence-architecture-for-the-simjulia-framework
Statistics and Data Science
/media/juliacon2023/submissions/L88UBE/comprehensiveFig_kj33ks2.png
piet.vanderpaelt@mil.be
en
SimJulia is an open source discrete event process simulation framework (Lauwens et al.). The framework transforms processes expressed as functions into resumable functions using the ResumableFunctions library (Lauwens et al.). Both SimJulia and ResumableFunctions are available through the general registry.
The current SimJulia implementation lacks functionality to transparently store state variables of such a process. The exploitation of the data generated by a SimJulia simulation depends entirely on the user's knowledge of persistence technology and how to interact with it from within the simulation model. To mitigate this shortcoming, we extended both the SimJulia and the ResumableFunctions packages by implementing a transparent probing and data persistence architecture, employing Julia's metaprogramming support. Our implementation is based on the Object-Relational Mapping (ORM) concept (Russell et al.) using the PostgresORM package (Laugier et al.), supported by the PostgreSQL Relational Database Management System (RDBMS), and Julia's macro expansion.
A simulation model expressed using functions is transformed into resumable functions ready to be run by SimJulia. Our solution exploits the code analysis occurring during a first phase of running a simulation to store a monitored function’s state variables configuration in the database. This is achieved by using a static ORM configuration which maps an argument of such a function to a tuple in the configuration table.
At the beginning of the second phase, in which the simulation runs, macro expansion takes place, creating the object definitions based on the configuration saved earlier. During this macro expansion, the static ORM configuration is used to retrieve the tuples that define an object within the dynamic ORM. The dynamic ORM provides an object definition for each process within the simulation model, offering the possibility to describe the state of a process through a materialised object. This aspect aside, the dynamic mapping configuration between object and table is also provided through macro expansion.
During the simulation, the standard working of the SimJulia package is to invoke an array of callback functions to be executed for successive events. The last function in such an array is the newly added probing function. When calling the latter, the process under consideration is matched against the object definitions created earlier through the dynamic ORM. An object describing the state of the process gets instantiated and is persisted again using the dynamically created ORM.
Configuration is kept in a versioned way in the database, enabling alteration of the simulation model while retaining the configuration of previous versions. The data and object definitions related to any version of the model are subsequently available in the RDBMS and can be visualised using the technology of choice.
The current proof-of-concept implementation demonstrates the usage of the stored data and the versioned object definitions created dynamically during the simulations using the VueJS package. Through the intermediary of an abstraction layer which employs the dynamic ORM, we were able to provide VueJS with the required view on the data. We are currently considering externalising the data via a REST API for the presented architecture.
Our contribution is twofold. For end users of the SimJulia package, we provide a transparent probing and persistence mechanism, removing the requirement to provide any configuration on this aspect. To the Julia community we demonstrate the usage of metaprogramming to divert from a static ORM configuration as often seen in web applications, to a transparent and dynamic configuration.
false
https://pretalx.com/juliacon2023/talk/L88UBE/
https://pretalx.com/juliacon2023/talk/L88UBE/feedback/
32-124
Morning Break Day 3 Room 3
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28139-morning-break-day-3-room-3
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/HBZWNY/
https://pretalx.com/juliacon2023/talk/HBZWNY/feedback/
32-124
Bruno.jl - Financial derivative asset pricing and modeling
Talk
2023-07-28T10:30:00-04:00
10:30
00:30
The Bruno.jl package allows for pricing financial derivative assets under different theoretical models over varying time frames. This enables technical traders to formulate and test trading strategies within the package based on the derivatives themselves, rather than relying solely on the underlying assets. Using multiple dispatch, the simulation environment is left generic, allowing for a wide range of uses, from finance practitioners to academics.
juliacon2023-26997-bruno-jl-financial-derivative-asset-pricing-and-modeling
Finance and Economics
Mitchell PoundSpencer Clemens
en
Bruno.jl provides four main types of tools for working with financial derivatives: a descriptive type system, time-series data simulation, derivative asset pricing tools, and a trading-strategy simulation environment. The package takes advantage of multiple dispatch and parametric types to allow for more possibilities for financial analysis.
Multiple dispatch allows for more flexibility and extensions of the code. For example, Bruno can calculate theoretical historical derivatives prices for a variety of pricing models, such as the Black-Scholes model and Monte Carlo analysis, by simply switching the input types. With a few lines of code, theoretical prices can be calculated for many financial derivatives at different points in time. It is important to note that long-term derivative prices and exotic option prices may not be readily accessible without incurring a cost. Thus, many trading and hedging strategies are currently, by necessity, based on asset prices. Bruno.jl will allow them to be based on theoretical derivative prices instead.
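The dispatch pattern described above can be sketched with hypothetical types and a textbook Cox–Ross–Rubinstein pricer; these are not Bruno.jl's actual types or API:

```julia
# Hypothetical sketch of model-switching via multiple dispatch; Bruno.jl's
# real types and methods differ.
abstract type PricingModel end
struct BinomialTree <: PricingModel
    nsteps::Int
end
struct Intrinsic <: PricingModel end   # trivial model for contrast

call_payoff(S, K) = max(S - K, 0.0)    # European call payoff

# Cox–Ross–Rubinstein binomial pricer for a European call
function price(m::BinomialTree, S0, K, r, σ, T)
    n  = m.nsteps
    Δt = T / n
    u  = exp(σ * sqrt(Δt)); d = 1 / u
    p  = (exp(r * Δt) - d) / (u - d)   # risk-neutral up-probability
    disc = exp(-r * Δt)
    v = [call_payoff(S0 * u^j * d^(n - j), K) for j in 0:n]
    for step in n:-1:1                  # backward induction through the tree
        v = [disc * (p * v[j + 2] + (1 - p) * v[j + 1]) for j in 0:step-1]
    end
    return v[1]
end

# Same call signature, different model: immediate-exercise (intrinsic) value
price(::Intrinsic, S0, K, r, σ, T) = call_payoff(S0, K)
```

Switching from `BinomialTree(200)` to `Intrinsic()` changes the pricing model without touching any calling code, which is the kind of input-type switching the abstract describes.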
The modular architecture of Bruno.jl lets finance professionals use the different parts of the package interchangeably, leading to many novel combinations of tools. For example, a key feature of Bruno.jl is that it can produce a distribution of the maximum loss that could result from a trading or hedging strategy. These distributions can be estimated with several different time-series simulation processes or derivative pricing models. This information would be valuable to financial practitioners, hedge funds, and market makers, as it would help to quantify the risk of a potential strategy before putting it into place. Therefore, Bruno.jl allows for better comparison of different trading and hedging strategies under different assumed market conditions.
Bruno was designed to be used by academic researchers as well as finance practitioners such as derivatives analysts, hedge fund managers, or market makers. It allows for complete analysis of financial assets and strategies in a single package.
false
https://pretalx.com/juliacon2023/talk/VYVNCU/
https://pretalx.com/juliacon2023/talk/VYVNCU/feedback/
32-124
Introducing a financial simulation ecosystem in Julia
Talk
2023-07-28T11:00:00-04:00
11:00
00:30
In this talk, I’ll introduce a suite of software packages built end-to-end in Julia to represent the modeling and analysis of complex financial systems. This package ecosystem leverages agent-based modeling and machine learning to produce a flexible simulation environment for studying market phenomena, generating scenarios, and building real-time trading applications.
juliacon2023-27047-introducing-a-financial-simulation-ecosystem-in-julia
Finance and Economics
Aaron Wheeler
en
Financial markets are composed of complex and evolving interactions that are difficult to define. Systems that exhibit these dynamics are known as *complex adaptive systems* (CAS). In these systems, the micro-scale behaviors of individual actors or agents (e.g., retail investors, trading firms, etc.) coalesce to generate *emergent properties* at the macro-scale. In financial systems, emergent properties can manifest unexpectedly and have varying consequences. One example of emergence is the price discovery process, in which the price series of an asset is determined through the individual orders submitted by various buyers and sellers. Other emergent phenomena can be irrational and deviate significantly from historical events, such as a *flash crash* (i.e., a sudden and extreme drop in asset value that can have lingering and widespread effects on the market).
Agent-based models (ABMs) capture critical features of complex systems—emergent properties, non-linearity, and heterogeneity. These features arise naturally from the micro-scale interactions; that is, macroscopic properties of the system emerge without being explicitly constrained or assumed to do so. ABMs typically undergo both a calibration and a validation procedure to generate realistic behavior. This entire process is facilitated by a combination of two new packages: Brokerage.jl and TradingAgents.jl.
In this talk, we calibrate an agent-based model of risky asset price dynamics and trading behavior by configuring agent-specific parameters through heuristics and machine learning (online recursive least squares). To validate the model, we run statistical tests on the simulation outputs and compare these against actual market behavior (i.e., check for the presence of empirical regularities, or “stylized facts”, through the use of empirical macroscopic data). We’ll show that our market ABM produces price and volume time series with many statistical features of actual market data, e.g., non-Gaussian returns distributions and volatility clustering. Further, we’ll demonstrate our market ABM’s ability to function across multiple assets and large agent population sizes. Taken together, this new package ecosystem presents an opportunity for researchers to model and analyze complex market phenomena in a flexible, scalable, and fully open-source environment.
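For reference, the online recursive least-squares update mentioned above for calibrating agent-specific parameters can be sketched in a few lines. This is a generic textbook RLS with a forgetting factor, not code from Brokerage.jl or TradingAgents.jl:

```julia
using LinearAlgebra

# Generic online recursive least squares (RLS) with a forgetting factor;
# illustrative only, not the packages' actual calibration code.
mutable struct RLS
    theta::Vector{Float64}  # current parameter estimates
    P::Matrix{Float64}      # scaled inverse sample covariance
    lambda::Float64         # forgetting factor in (0, 1]
end
RLS(d::Int; lambda = 0.99, p0 = 1e3) = RLS(zeros(d), p0 * Matrix(1.0I, d, d), lambda)

# One observation (x, y): move theta toward minimizing the discounted squared error.
function update!(m::RLS, x::Vector{Float64}, y::Float64)
    k = (m.P * x) ./ (m.lambda + dot(x, m.P * x))  # gain vector
    e = y - dot(m.theta, x)                        # a-priori prediction error
    m.theta .+= k .* e
    m.P .= (m.P .- k * (x' * m.P)) ./ m.lambda
    return e
end
```

Each incoming observation updates the estimates in O(d²) work, which is what makes the scheme suitable for configuring agents on the fly during a running simulation.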
false
https://pretalx.com/juliacon2023/talk/VJC39L/
https://pretalx.com/juliacon2023/talk/VJC39L/feedback/
32-124
Fighting Money Laundering with Julia
Lightning talk
2023-07-28T11:30:00-04:00
11:30
00:10
In this talk, we will discuss tracing chains of fraudulent transactions and helping investigation agencies fight money laundering with the Julia programming language and its packages.
juliacon2023-27127-fighting-money-laundering-with-julia
Finance and Economics
Gajendra Deshpande
en
The working of the proposed solution is described below:
Step 1: The investigation officer obtains data of suspicious accounts across banks.
Step 2: Using Benford’s Law the accounts data will be checked for possible fraud and marked for further analytics.
Step 3: The account details will also be matched with Politically Exposed Persons (PEP), Relatives and Close Associates (RCA), and Sanctions Data. If a match is found, it increases the probability of possible money laundering.
Step 4: Generate graphs showing the links between transactions of different bank accounts for step 2 and step 3.
Step 5: Apply Graph Machine Learning techniques and graph algorithms to identify the fraudulent chains between depositor and receiver accounts.
Step 6: Find a correlation between transactions and bank accounts to form a fraudulent chain.
Step 7: Generate results in the form of reports and interactive visualizations.
Step 8: Verify the result for genuineness and false positive rate.
Step 9: Keep track of all the activities and tasks executed from steps 2 through 8.
Step 10: Generate a report for step 9 in a human-readable and understandable form.
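Step 2 can be sketched as a simple first-digit goodness-of-fit test in Julia. This is an illustrative sketch of the idea only, not the proposed system's code, and it assumes strictly positive transaction amounts:

```julia
# Benford's Law: the leading digit d of naturally occurring amounts follows
# P(d) = log10(1 + 1/d).  Flag accounts whose transaction amounts deviate strongly.
function firstdigit(x::Real)
    x = abs(float(x))
    while x < 1;   x *= 10; end
    while x >= 10; x /= 10; end
    floor(Int, x)
end

const BENFORD = [log10(1 + 1 / d) for d in 1:9]

# Pearson chi-square statistic of the observed first-digit counts vs Benford.
function benford_chisq(amounts)
    counts = zeros(Int, 9)
    for a in amounts
        counts[firstdigit(a)] += 1
    end
    n = length(amounts)
    sum((counts[d] - n * BENFORD[d])^2 / (n * BENFORD[d]) for d in 1:9)
end

# Mark for further analytics when the statistic exceeds the chi-square
# critical value with 8 degrees of freedom at the 5% level.
suspicious(amounts; crit = 15.507) = benford_chisq(amounts) > crit
```

Accounts flagged this way would then feed into the graph-based analysis of steps 4–6.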
false
https://pretalx.com/juliacon2023/talk/W9GE7P/
https://pretalx.com/juliacon2023/talk/W9GE7P/feedback/
32-124
What we learned building a modelling language for macroeconomics
Lightning talk
2023-07-28T11:40:00-04:00
11:40
00:10
When we set out to build the modelling language for StateSpaceEcon.jl, our package for macroeconomic models, we did not yet know that Julia was the perfect language for the task. When faced with hundreds of variables, shocks, equations, parameters, lags and expectation terms, what the economist needs most is an intuitive and expressive domain-specific language to help keep all that complexity under control. Join us as we share our experience – it might help you enhance your own packages.
juliacon2023-26897-what-we-learned-building-a-modelling-language-for-macroeconomics
Finance and Economics
Boyan BejanovNicholas L. St-PierreJason Jensen
en
[StateSpaceEcon.jl](https://github.com/bankofcanada/StateSpaceEcon.jl) is designed to work with discrete-time macroeconomic state space models. These models typically include from tens to hundreds of variables, equations and parameters. Variables come in several different kinds (exogenous, endogenous, observed, shocks, etc.), parameters are often linked to other parameters, and equations can be linear or non-linear containing past and expected future values of the variables as well as mathematical expressions (arithmetic operations, powers, exponents, logarithms, etc.) and time-series operations (e.g., moving averages or weighted sums). In order to keep the complexity of expressing the model separate from that of working with it, we have designed and implemented a modelling language that supports the necessary features.
In this presentation, we would like to share with our audience how we designed and implemented the domain-specific language for StateSpaceEcon.jl and what we learned along the way. With concrete and practical examples, we will explain the motivations behind our design decisions and demonstrate a few key techniques we employed in our implementations. Specifically, we plan to cover the following aspects.
1. How the model developer can declare the various properties of the model, namely variables, shocks, parameters and equations.
2. Explain why we encapsulate each model into its own Julia module.
3. How we manipulate model equations to automatically compute the dynamic and steady state residuals and their derivatives.
4. How we implement the ability to express time series operations in the model equations, such as moving averages, and how we allow for user-defined extensions.
5. How we deal with parameters that are functions of other parameters.
If time permits, we can also discuss the following.
6. How we allow additional constraints for the steady state system of the model.
7. How we assemble the residual functions into a stacked-time global system and how we construct its sparse Jacobian matrix.
8. How we linearize individual equations or the entire system.
9. Our discrete time series type, descended from AbstractVector, together with its primitive frequency type.
false
https://pretalx.com/juliacon2023/talk/VN8YWA/
https://pretalx.com/juliacon2023/talk/VN8YWA/feedback/
32-124
Simulating RFQ Trading in Julia
Lightning talk
2023-07-28T11:50:00-04:00
11:50
00:10
This talk will explore using Julia to simulate the Request for Quote (RFQ) trading method. RFQ is a trading method that puts counterparties in competition by asking banks for prices to buy or sell an asset. I will simulate the Executing in an Aggregator model (Oomen 2017) and demonstrate why Julia's high performance and ease of use make it a perfect choice for simulating this type of trading. I'll finally show how we can learn from these simulations, educate clients and guide pricing strategies.
juliacon2023-26973-simulating-rfq-trading-in-julia
Finance and Economics
Dean Markwick
en
The 'Request for Quote' (RFQ) is a trading method that puts counterparties in competition: you ask banks for the price at which they would buy or sell an amount of some asset. For example, you might ask three banks for their price to buy or sell 1 million Apple shares, and you trade with whoever gives you the best price.
The more banks you ask, the better the price you get; but you are also showing more people your interest, thereby leaking information, and once you trade, the price moves against you.
In this talk, I'll show you how to simulate this problem in Julia and build a framework for RFQ trading using the popular Executing in an Aggregator model (Oomen 2017). Julia provides an excellent experience for converting the equations in the paper into working code, and its speed benefits both building and using the simulator.
I'll use the framework to show how the price does improve when you ask more counterparties, but at the cost of adverse market movements post-trade. I'll then illustrate how we use these simulations to guide our thought process when designing quoting strategies, and how the results help educate clients.
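The basic price-improvement effect is easy to see in a toy simulation. This sketch is purely illustrative and is not the Oomen (2017) model: each of n dealers quotes an ask at mid plus a random positive half-spread, and the client pays the lowest ask:

```julia
using Random, Statistics

# Toy RFQ: n dealers each quote an ask above mid; the client hits the best one.
rfq_best_ask(mid, n; halfspread = 0.02) =
    minimum(mid .+ halfspread .* (1 .+ abs.(randn(n))))

# Average execution price as a function of panel size n.
avg_best_ask(mid, n; trials = 10_000) =
    mean(rfq_best_ask(mid, n) for _ in 1:trials)
```

Averaged over many trials, the execution price falls as more counterparties are asked; what this sketch leaves out is the information-leakage cost that grows with n, which is exactly the trade-off the aggregator model quantifies.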
false
https://pretalx.com/juliacon2023/talk/DBSAMT/
https://pretalx.com/juliacon2023/talk/DBSAMT/feedback/
32-124
Accelerating Economic Research with Julia
Lightning talk
2023-07-28T12:00:00-04:00
12:00
00:10
Economic research depends strongly upon the economist's ability to efficiently identify relevant information for causal inference and forecast accuracy. We address this goal in our ParallelGSReg project for Julia, developing several econometric machine-learning packages. At JuliaCon 2023, we will present an improved version of our dimensionality reduction package (including non-linear algorithms) and a new "research acceleration" package with automated LaTeX output and AI-assisted bibliographic features.
juliacon2023-26978-accelerating-economic-research-with-julia
Finance and Economics
Demian PanigoAlexis TcachPablo GluzmannAdán Mauri UngaroJuan MenduiñaAlejoNicolás MonzónNahuel Panigo
en
In their recent volume of “Econometrics with Machine Learning”, Chan & Mátyás (2022) remind us of the well-established distinction in which Econometrics and Machine Learning are perceived as alternative methodological cultures: one focused on prediction (model selection, sampling properties, accuracy metrics) and the other on explanation (causal inference, hypothesis testing, coefficient robustness). Moving away from this false dichotomy, we introduce ParallelGSReg (https://github.com/ParallelGSReg): a Julia research project with several packages (GlobalSearchRegression.jl, GlobalSearchRegressionGUI.jl and ModelSelection.jl) aimed at 1) building bridges between those complementary cultures; and 2) encouraging economic researchers to use Julia in order to improve computational efficiency in model selection tasks (particularly those using dimensionality reduction techniques with causal-inference requirements).
In JuliaCon 2018, the focus was on “efficiency”. We presented the world-fastest all-subset-regression command (GlobalSearchRegression.jl, which runs up to 3165 times faster than the original Stata-code and up to 197 times faster than well-known R-alternatives; see https://github.com/ParallelGSReg/JuliaCon2019/blob/master/GlobalSearchRegression.jl-paper.pdf).
In 2019, the goal was “ease of use”, for which we improved our graphical user interface (GlobalSearchRegressionGUI.jl) and developed a basic package (ModelSelection.jl) to automate Julia-to-LaTeX migration of dimensionality reduction results (which also includes all GlobalSearchRegression.jl functions and additional features like regularization and cross-fold validation).
For JuliaCon 2023 the target is “scope and integration”, for which we are:
1) updating all packages (removing compatibility issues with the newest Julia versions);
2) improving ModelSelection.jl with:
2.a) new classification algorithms (logistic, probit, etc) for regularization and all-subset-regression functions;
2.b) additional tests for causal inference (unit root tests);
2.c) extended cross-fold validation capabilities (to deal with re-sampling requirements of panel data and time-series databases); and
2.d) higher computational efficiency, reducing the Time-to-First-Result (TTFR) by focusing on statistical functions (moving Julia-to-LaTeX migration capabilities to a complementary package).
3) developing ResearchAccelerator.jl, a new package with:
3.a) extended Julia-to-LaTeX migration functions that work as an “automatic research assistant”. Using ModelSelection.jl results, it generates a LaTeX document with relevant tables, graphics, and metrics.
3.b) AI integration for references and literature review. Using user-provided keywords or phrases, ResearchAccelerator.jl will interact with Google Scholar to obtain a potentially relevant bibliography. A subset of it with available abstracts, references, and keywords will then be used to provide citation networks and keyword/citation statistics. Finally, a machine learning system with modern NLP models will be used to generate, based on the articles' abstracts, a similarity network providing users with additional information for a deeper search among related bibliography. This network will be exported to the LaTeX document as a table and a figure, and to a standard output file viewable with graph plotting and analysis tools such as Gephi.
4) including a JuliaCall-based Stata integration, which allows all packages in our ParallelGSReg project to be used in batch mode through Stata's gsreg.ado package. This feature is designed to give change-averse economic researchers the simplest way to verify the substantial runtime reduction they can obtain by progressively switching to Julia.
We will introduce all these contributions (including some new benchmark figures) in the first 5 minutes of our Lightning talk. Then, a live hands-on example will be developed in 3 minutes to leave the last 2 minutes for audience questions.
false
https://pretalx.com/juliacon2023/talk/TCFXVE/
https://pretalx.com/juliacon2023/talk/TCFXVE/feedback/
32-124
JACC: on-node performance portable programming model in Julia
Lightning talk
2023-07-28T12:10:00-04:00
12:10
00:10
We present our research efforts in creating a performance-portable programming model, JACC, in Julia targeting heterogeneous hardware at the US Department of Energy Leadership Computing Facilities. JACC leverages the high-productivity aspect of Julia and the CUDA.jl and AMDGPU.jl vendor-specific GPU implementations, and expands to many-core CPUs (Arm, x86) with automatic memory management. The goal is to allow Julia applications to write performant code once, leveraging existing infrastructure.
juliacon2023-26896-jacc-on-node-performance-portable-programming-model-in-julia
HPC
William F Godoy
en
In this talk we present our efforts towards a performance-portable programming model in Julia: JACC. We leverage existing implementations targeting vendor-specific GPU hardware acceleration: CUDA.jl and AMDGPU.jl. Similar to KernelAbstractions.jl for GPUs and Kokkos for C++ on multiple platforms, we aim to understand the requirements and feasibility of a performance-portable layer targeting heterogeneous hardware at the US Department of Energy Leadership Computing Facilities. We share the gaps in coarse/fine-granularity kernel implementations, lessons learned, and results from testing JACC on Summit (IBM+NVIDIA) and Crusher (AMD) using the Gray-Scott diffusion-reaction proxy application running simulations on heterogeneous CPU and GPU hardware. We aim to understand the feasibility of this capability as an important need towards the adoption of Julia in the High-Performance Computing (HPC) communities. JACC leverages Julia's high-productivity and high-performance capabilities on top of LLVM, allowing users to develop software focused on their science while offloading performance-portability aspects to our efforts.
false
https://pretalx.com/juliacon2023/talk/AY8EUX/
https://pretalx.com/juliacon2023/talk/AY8EUX/feedback/
32-124
Falra.jl : Distributed Computing with AI Source Code Generation
Lightning talk
2023-07-28T12:20:00-04:00
12:20
00:10
Falra.jl in Julia provides a straightforward approach to implementing distributed computing, equipped with an AI-assisted feature for generating source code. This addition facilitates more efficient big-data transformations: tasks such as preprocessing 16 TB of IoT data can be completed in 1/100 of the original time. Developers can now generate Julia source code more easily with the aid of AI, further aiding distributed computing tasks.
juliacon2023-26858-falra-jl-distributed-computing-with-ai-source-code-generation
HPC
Bowen ChiuChialo Lee
en
This is a real development scenario we encountered: preprocessing a 6-year, 16 TB historical raw IoT dataset for data cleaning and transformation. Processing would take 100 days to complete in a single-machine environment, which is prohibitively time-consuming.
Falra.jl was therefore developed to let us divide the required data cleaning and transformation work into smaller tasks. Falra.jl then automatically distributes these tasks for distributed processing. This architecture saves a lot of computing time and development cost. Through Falra.jl, we were able to complete all IoT data transformations in 1/100 of the time.
Compared to Julia's native Distributed module, the advantage of Falra.jl is that developers do not need to learn Julia's distributed programming syntax; they can use their single-machine programs as they always have. In addition, Falra.jl can be deployed on any network reachable via HTTPS, so there is no need to deal with TCP or other network or firewall issues.
Moreover, we've enhanced our approach by integrating AI-assisted auto-generation of Julia source code. This novel feature allows developers to efficiently create Julia code using artificial intelligence: rather than manually crafting each line of code, the AI can generate source code based on the developer's requirements, accelerating the development process. It makes it feasible for developers, even those unfamiliar with Julia, to quickly produce distributed programs. This AI-driven tool not only simplifies code creation but also enables the rapid adaptation and extension of applications built on Falra.jl. The fusion of distributed computing and AI-assisted auto-generation of Julia source code significantly boosts productivity.
We have released Falra.jl on GitHub (https://github.com/bohachu/Falra.jl) for everyone to use.
false
https://pretalx.com/juliacon2023/talk/9ES8NF/
https://pretalx.com/juliacon2023/talk/9ES8NF/feedback/
32-124
Lunch Day 3 (Room 3)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28082-lunch-day-3-room-3-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/B9PX7D/
https://pretalx.com/juliacon2023/talk/B9PX7D/feedback/
32-124
Scalable 3-D PDE Solvers Tackling Hardware Limit
Talk
2023-07-28T14:00:00-04:00
14:00
00:30
We present an efficient approach for the development of 3-D partial differential equation solvers that are able to tackle the hardware limit of modern GPUs and scale to the world's largest supercomputers. The approach relies on the accelerated pseudo-transient method and, for its implementation, on the automatic generation of on-chip memory usage-optimized computation kernels. We report performance and scaling results on LUMI and Piz Daint, an AMD-GPU and an NVIDIA-GPU supercomputer, respectively.
juliacon2023-26915-scalable-3-d-pde-solvers-tackling-hardware-limit
HPC
Samuel OmlinLudovic RässIvan Utkin
en
For the development of massively scalable high-performance 3-D partial differential equation (PDE) solvers, we rely on a powerful matrix-free pseudo-transient iterative algorithm. The algorithm allows for formulating the update rules for fields in a form close to explicit time integration. This formulation is optimally suited for both shared- and distributed-memory parallelization.
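The core idea can be illustrated on a 1-D steady diffusion problem: iterate an explicit pseudo-time update of the residual until it vanishes. This minimal serial sketch is for illustration only and is not ParallelStencil.jl code:

```julia
# Solve  D * d2T/dx2 + b = 0  on (0,1) with T(0) = T(1) = 0 by pseudo-transient
# iteration: advance T in pseudo-time dtau until the residual R drops below tol.
function pseudo_transient_1d(; nx = 51, D = 1.0, b = 1.0, tol = 1e-8, maxiter = 100_000)
    dx   = 1.0 / (nx - 1)
    dtau = dx^2 / D / 2.1                   # explicit stability bound on the pseudo-step
    T    = zeros(nx)
    for it in 1:maxiter
        R = [D * (T[i-1] - 2T[i] + T[i+1]) / dx^2 + b for i in 2:nx-1]
        T[2:end-1] .+= dtau .* R            # update rule close to explicit time integration
        maximum(abs, R) < tol && return T, it
    end
    return T, maxiter
end
```

Each update touches only a grid point and its neighbors, which is why the same formulation maps naturally onto stencil kernels, shared-memory parallelism and distributed halo exchanges; in practice the plain iteration above is further accelerated, e.g. with damping terms.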
With respect to implementation, our approach is instantiated in the packages ParallelStencil.jl and ImplicitGlobalGrid.jl. It is fully architecture-agnostic and, for GPUs, it includes the automatic generation of on-chip memory usage-optimized computation kernels as well as the automatic determination of their launch parameters. On-chip memory usage optimization is achieved by explicitly reading large arrays through the memory shared by the threads of a block - when beneficial and when hardware resources allow for it - and by steering register usage in order to keep data that is to be reused on the chip whenever possible. The optimizations are applicable to complex real-world applications: a simplified optimization strategy requiring less on-chip memory can be used for a part of the kernel input arrays when the most aggressive strategy would lead to resource exhaustion. Kernel launch parameters such as blocks, threads and required shared memory can be defined automatically in agreement with the applied optimizations. Furthermore, communication can be automatically hidden behind computation, including with on-chip memory usage optimization activated.
We compare the performance achieved for some representative computation kernels against the performance obtained using alternative ways to express equivalent computations in Julia, for example using GPU array broadcasting or straightforward CUDA.jl kernel programming. Finally, we report the scaling of some earth science applications on the European flagship supercomputer LUMI at CSC in Finland, hosting AMD GPUs, and on the Piz Daint supercomputer at the Swiss National Supercomputing Centre in Switzerland, hosting NVIDIA GPUs.
Co-authors: Ludovic Räss¹ ², Ivan Utkin¹
¹ ETH Zurich | ² Swiss Federal Institute for Forest, Snow and Landscape Research (WSL)
false
https://pretalx.com/juliacon2023/talk/BLCWQW/
https://pretalx.com/juliacon2023/talk/BLCWQW/feedback/
32-124
Accelerating the Migration of Large-Scale Simulation to Julia
Talk
2023-07-28T14:30:00-04:00
14:30
00:30
Julia Accelerator Interfaces (JAI, github.com/grnydawn/AccelInterfaces.jl) tries to solve the issues in code migration from Fortran to Julia GPU code by using shared libraries. JAI consists of: 1) a Julia GPU programming interface using Julia macros whose syntax is similar to OpenACC; 2) automated shared-library generation that implements kernels and vendor API interfaces using vendor-provided compilers; 3) automated calls to functions implemented in the shared libraries using Julia's ccall interface.
juliacon2023-24316-accelerating-the-migration-of-large-scale-simulation-to-julia
HPC
Youngsung Kim
en
The emergence of various micro-architectures gives us hope that application programmers can continue to use higher-performance hardware even in the era without Dennard scaling. We still see advertisements for newer processors, such as GPUs, claiming to be a few times faster than the previous generation. On the other hand, however, this divergence of micro-architectures creates trouble for application programmers. To enjoy the performance of new hardware, they have to “migrate” their code to work with it, which creates two major costs: the cost of code migration and the cost of maintaining multiple versions.
For GPU programming, Julia offers several packages such as CUDA.jl, AMDGPU.jl and oneAPI.jl; using these packages is currently the de-facto standard of Julia GPU programming. However, I argue that this approach has demerits from the porting point of view: 1) it does not reduce the cost of code migration or the cost of maintaining multiple versions; 2) it does not support the coexistence of multiple GPU programming paradigms such as CUDA and OpenACC; 3) the latest updates in the Julia GPU packages always come after the vendor's updates.
First, the user has to port the entire code to Julia. Even after completing the porting, the user may need to maintain the old application, possibly written in Fortran or C/C++; there can be many reasons for maintaining both versions, such as the ecosystem a community has built around the old application. Secondly, supporting the coexistence of multiple GPU programming frameworks is especially important now, when there is no single winner in GPU programming: instead of betting on a single framework, you may want to transition from one framework to another smoothly when a newer framework becomes desirable. Lastly, the maintainer of a Julia GPU programming framework obviously must wait until the vendor publicly distributes its latest updates.
Julia Accelerator Interfaces (JAI, https://github.com/grnydawn/AccelInterfaces.jl) tries to solve the three issues identified above by using shared libraries that include GPU kernels and GPU vendor interfaces. JAI consists of three main functions: 1) a Julia GPU programming interface using Julia macros. With the macros, Julia programmers can create and run GPU kernels in a way similar to directive-based GPU programming such as OpenACC or OpenMP target. 2) Automated shared-library generation that implements kernels and vendor API interfaces. To create the shared library, JAI relies on external vendor-provided compilers instead of Julia's internal GPU compilation infrastructure. In this way, JAI can leverage the newest features of vendor-provided compilers as well as the benefits of just-in-time (JIT) compilation. 3) Automated calls to functions implemented in the shared libraries using Julia's ccall interface. Because this boilerplate work is hidden behind the JAI user interface, users can write JAI GPU code at a high level of abstraction, and JAI can accommodate API changes on the vendor side.
To demonstrate the features of JAI, we ported the Fortran OpenACC version of miniWeather (github.com/mrnorman/miniWeather), which simulates weather-like flows for training in parallelizing accelerated HPC architectures, to jlweather (https://github.com/grnydawn/jlweather), which utilizes an OpenACC-enabled compiler. For performance comparison, we also ported miniWeather to a pure Julia version and to a manually GPU-ported version. The versions were deployed and executed at two US-based supercomputing centers (https://docs.google.com/presentation/d/17pDiiMnTuy8oscQ9-NmxEZ7SltWU4dhUrD2-51ot6ew/edit?usp=sharing). The experiments show that the jlweather OpenACC version is about 25% faster than the Fortran OpenACC version on a medium workload, while slower on a small workload.
In JAI, kernels are not ported to Julia. Instead, the user provides JAI with the body of the kernels in the original language, such as Fortran. JAI then generates kernel source files and compiles them into shared libraries. The kernel body is located in an external text file in a simple INI format, called a KNL file. In a KNL file, multiple versions of the kernel body can coexist, and JAI automatically selects the best version. For example, if a KNL file contains a kernel body in CUDA, HIP, and Fortran versions, JAI can first try to select the HIP or CUDA version and check whether the system supports that framework. If none of them is supported on the system, JAI may select the Fortran version as a backup. This JAI feature for the coexistence of multiple GPU programming frameworks allows users to migrate code gradually, one kernel at a time, rather than the entire application at once. As of this writing, JAI supports kernel bodies written in CUDA, HIP, Fortran-OpenACC, Fortran-OpenMP-Target, Fortran, and C/C++.
false
https://pretalx.com/juliacon2023/talk/YNGAV7/
https://pretalx.com/juliacon2023/talk/YNGAV7/feedback/
32-124
Exploring synthesis of flexible neural machines with Zygote.jl
Talk
2023-07-28T15:00:00-04:00
15:00
00:30
We were able to successfully synthesize simple compact high-level neural machines via a novel algorithm for neural architecture search using flexible differentiable programming capabilities of Zygote.jl.
juliacon2023-26856-exploring-synthesis-of-flexible-neural-machines-with-zygote-jl
General Machine Learning
Mishka (Michael Bukatin)
en
We are studying an interesting class of flexible neural machines where single neurons process streams of vectors with tree-shaped indices rather than streams of numbers (see https://anhinga.github.io/ and https://arxiv.org/abs/1712.07447, "Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets"). These machines are expressive enough to be used for general-purpose stream-oriented programming (dataflow or functional reactive programs). They are also expressive enough to flexibly and compositionally modify their own weight-connectivity matrices on the fly. They can be thought of as flexible attention machines, with neuron inputs computing linear combinations of vectors with tree-shaped indices and thus working as attention devices.
We would like to explore a variety of machine learning experiments using this class of neural machines. In particular, it would be nice to be able to use differentiable programming and gradient methods in those experiments. Traditional machine learning frameworks would necessitate difficult engineering work reshaping those flexible tree-shaped vectors into flat tensors.
Zygote.jl provides flexible differentiable programming facilities and, in particular, allows users to take gradients with respect to variables aggregated inside a tree. That makes it a perfect fit for our machine learning experiments with flexible neural machines.
We consider the following novel algorithm for neural architecture search. Consider neural architectures where connections between neural modules are gated with scalar multipliers: start with a sufficiently large network and perform sparsifying training on a set of test problems, using sparsifying regularization and gradually pruning inter-module connections with low gating weights.
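The search loop described above can be sketched on a toy problem. This is an illustrative reconstruction of the idea (gates trained with an L1 sparsifier, then pruned) and not the project's code; the quadratic "task loss" here stands in for the real training objective:

```julia
using Random

# Gate each candidate inter-module connection with a scalar g[i];
# train with task loss + lambda * |g|_1, then prune gates that stay near zero.
function sparsify_gates(; lambda = 0.2, lr = 0.1, steps = 500, prune_tol = 0.05)
    Random.seed!(0)
    # Stand-in task: only connections 1, 3 and 6 are actually useful.
    target = [1.0, 0.0, -1.0, 0.0, 0.0, 0.5, 0.0, 0.0]
    g = randn(length(target))
    for _ in 1:steps
        grad = 2 .* (g .- target)                  # gradient of the task loss
        g .-= lr .* (grad .+ lambda .* sign.(g))   # plus the L1 sparsifying subgradient
    end
    findall(x -> abs(x) > prune_tol, g)            # surviving connections
end
```

The L1 term drives unneeded gates toward zero while useful gates settle at soft-thresholded values, so the pruning step recovers the sparse sub-network.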
In our case, the neural modules are powerful "super-neurons" of dataflow matrix machines. In our exploratory set of experiments we consider a duplicate character detector problem (a hand-crafted compact neural machine solving this problem is described in Section 3 of https://arxiv.org/abs/1606.09470, "Programming Patterns in Dataflow Matrix Machines and Generalized Recurrent Neural Nets").
The history of our preliminary experiments is documented at https://github.com/anhinga/DMM-synthesis-lab-journal/blob/main/history.md
These are moderate scale experiments successfully performed on a home laptop CPU.
Zygote.jl is quite awesome, but using it is still not without difficulties. In particular, while the "backpropagation theorem" promises reverse-mode gradients at the cost of the corresponding forward computation multiplied by a small constant, in practice we often observe larger slowdowns (presumably when the forward computation is friendly to compiler optimizations while the induced gradient computation is not). When this slowdown is moderate, one can still use Zygote.jl successfully. When it starts to exceed two to three orders of magnitude, it makes sense to switch to a stochastic approximation of the gradient via the OpenAI flavor of evolution strategies, even when computing sequentially rather than on a cluster (see https://arxiv.org/abs/1712.06564, "On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent" by Xingwen Zhang, Jeff Clune, and Kenneth O. Stanley).
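The evolution-strategies alternative mentioned above estimates the gradient from forward evaluations only. A minimal antithetic-sampling sketch of that estimator (illustrative, not the cited paper's code):

```julia
using Random

# Antithetic evolution-strategies gradient estimate:
#   grad f(theta) ≈ (1 / (2 n sigma)) * sum_i (f(theta + sigma e_i) - f(theta - sigma e_i)) e_i
# with e_i ~ N(0, I).  Only forward evaluations of f are needed.
function es_gradient(f, theta; sigma = 0.1, n = 500)
    g = zeros(length(theta))
    for _ in 1:n
        e = randn(length(theta))
        g .+= (f(theta .+ sigma .* e) - f(theta .- sigma .* e)) .* e
    end
    g ./ (2 * n * sigma)
end
```

Because each sample only needs two evaluations of the forward pass, this estimator is unaffected by how compiler-unfriendly the induced reverse pass would be.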
In our case, we initially ran into heavy slowdowns and memory swapping when trying to perform this neural synthesis in the most ambitious, prior-free manner, starting with a fully connected recurrent machine and long backpropagation through time.
Because we were aiming to synthesize feedforward circuits with local recurrences, we decided to scale our ambitions down, rely on some priors, and start with fully connected feedforward machines with skip connections and local recurrences.
With this setup we obtained acceptable performance and were able to synthesize very compact, human-readable neural machines with rather remarkable generalization properties using a tiny training set.
These are very exciting preliminary results where nice-looking compact, human-readable neural circuits approximating a hand-written neural program have been automatically synthesized for the first time.
We hope that this line of exploration will be continued, by ourselves and by other people. (The speaker is looking for collaboration opportunities.)
false
https://pretalx.com/juliacon2023/talk/F7WKU7/
https://pretalx.com/juliacon2023/talk/F7WKU7/feedback/
32-124
Machine Learning on Server Side with Julia and WASM
Talk
2023-07-28T15:30:00-04:00
15:30
00:30
Julia is a high-performance programming language that has gained traction in the machine-learning community due to its simplicity and speed. This talk looks at how Julia can potentially be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI), but also looks at some of the major hurdles along the way. The talk will go over the benefits of using WASM and WASI for deployment, such as improved performance and security.
juliacon2023-27123-machine-learning-on-server-side-with-julia-and-wasm
General Machine Learning
Shivay Lamba
en
As the demand for machine learning applications grows, so does the need for efficient and performant solutions. Julia is a high-performance programming language that has gained traction in the machine learning community due to its simplicity and speed. In this talk, we will look at how Julia could ideally be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). But we will focus on what is limiting Julia from being able to do so on the backend. We will go over the benefits of using WASM and WASI for deployment, such as improved performance and security. Attendees will have a better understanding of the subject by the end of this talk.
Table of Contents:
1. Introduction to server side machine learning
2. How Julia can be used for machine learning
3. What WebAssembly (WASM) and the WebAssembly System Interface (WASI) are
4. How Julia can potentially be used to build machine learning models on the server using WASM and WASI, covering some of the limitations in being able to do so
false
https://pretalx.com/juliacon2023/talk/M8PLZV/
https://pretalx.com/juliacon2023/talk/M8PLZV/feedback/
32-124
Automatic Differentiation for Statistical and Topological Losses
Talk
2023-07-28T16:00:00-04:00
16:00
00:30
We present a new Julia library, `TDAOpt.jl`, which provides a unified framework for automatic differentiation and gradient-based optimization of statistical and topological losses using persistent homology. `TDAOpt.jl` is designed to be efficient and easy to use as well as highly flexible and modular. This allows users to easily incorporate topological regularization into machine learning models in order to optimize shapes, encode domain-specific knowledge, and improve model interpretability.
juliacon2023-26625-automatic-differentiation-for-statistical-and-topological-losses
General Machine Learning
/media/juliacon2023/submissions/YFN8CY/tda_cUUKVQu.gif
Siddharth Vishwanath
en
Persistent homology is a mathematical framework for studying topological features of data, such as connected components, loops, and voids. It has a wide range of applications, including data analysis, computer vision, and shape optimization. However, the use of persistent homology in optimization and machine learning has been limited by the difficulty of computing derivatives of topological quantities.
In our presentation, we will introduce the basics of persistent homology and demonstrate how to use our library to optimize statistical and topological losses in a variety of settings, including shape optimization of point clouds and generative models. We will also discuss the benefits of using Julia for this type of work and how our library fits into the broader Julia ecosystem.
We believe it will be of interest to a wide range of practitioners, including machine learning researchers and practitioners, as well as those working in fields related to topology and scientific computing.
false
https://pretalx.com/juliacon2023/talk/YFN8CY/
https://pretalx.com/juliacon2023/talk/YFN8CY/feedback/
32-124
OndaBatches.jl: Continuous, repeatable, and distributed batching
Talk
2023-07-28T16:30:00-04:00
16:30
00:30
At Beacon Biosignals we don't want to have to reinvent the wheel for data
loading and batch randomization every time we stand up a new machine learning
project. So we've collected a set of patterns that have proven useful across
multiple projects into
[OndaBatches.jl](https://github.com/beacon-biosignals/OndaBatches.jl), which
serves as a foundation for building the specific batch randomization,
featurization, and data movement systems that each machine learning project
requires.
juliacon2023-26906-ondabatches-jl-continuous-repeatable-and-distributed-batching
General Machine Learning
Dave Kleinschmidt
en
At Beacon Biosignals we don't want to have to reinvent the wheel for data
loading and batch randomization every time we stand up a new machine learning
project. So we've collected a set of patterns that have proven useful across
multiple projects into
[OndaBatches.jl](https://github.com/beacon-biosignals/OndaBatches.jl), which
serves as a foundation for building the specific batch randomization,
featurization, and data movement systems that each machine learning project
requires.
Our typical machine learning task involves time series datasets composed of at
least thousands of multichannel recordings, each of which has on the order of
100 million individual samples, with accompanying dense or sparse labels. While
not the largest machine learning datasets known to humankind, these are large
enough to be generally inconvenient. The size, shape, and structure of these
datasets (and the associated learning tasks) require some modifications of a
typical machine learning workflow (e.g. one in which the entire dataset is
processed in each training epoch).
In this talk, I will present
[OndaBatches.jl](https://github.com/beacon-biosignals/OndaBatches.jl), a Julia
package that implements a set of patterns that have proven to be useful across a
number of projects at Beacon. OndaBatches.jl serves as a foundation for
building the specific batch randomization, featurization, and data movement
systems that each machine learning project requires. Its purpose is to build
and serve batches for machine learning workflows based on densely labeled time
series data, in a way that is:
- distributed (cloud native, throw more resources at it to make sure data
movement is not the bottleneck)
- scalable (handle out-of-core datasets, both for signal data and labels)
- deterministic + reproducible (pseudo-random)
- resumable
- flexible and extensible via normal Julia mechanisms of multiple dispatch
This talk focuses on two aspects of OndaBatches.jl design and development.
First, I'll describe the process of moving a local workflow into a distributed
setting in order to support scalability. Second, I'll discuss how Julia's
composability has shaped the design and functionality of OndaBatches.jl. In
particular, OndaBatches.jl builds on...
- ...[Onda.jl](https://github.com/beacon-biosignals/Onda.jl) to represent both
the multi-channel time series that is the input data and the regularly-sampled
labels.
- ...[Distributed.jl](https://docs.julialang.org/en/v1/stdlib/Distributed/) to
compose well with various cluster managers (including Kubernetes via
[K8sClusterManagers.jl](https://github.com/beacon-biosignals/K8sClusterManagers.jl/))
in service of scalability.
- ...base Julia patterns around iteration in order to separate batch
_state_ from batch _content_ (in service of reproducibility and resumability)
- ...Julia's multiple dispatch pattern to allow our machine learning
teams to customize behavior where needed without having to re-invent basic
functionality every time.
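The separation of batch _state_ from batch _content_ via base Julia's iteration protocol can be sketched roughly like this (`BatchSpec` and `materialize` are hypothetical names for illustration, not the actual OndaBatches.jl API):

```julia
# Iteration yields lightweight batch *descriptions*; the (expensive)
# content can be materialized elsewhere, e.g. on a remote worker.
struct BatchSpec
    rng_seed::UInt
    nbatches::Int
end

function Base.iterate(b::BatchSpec, state = 1)
    state > b.nbatches && return nothing
    # A batch "item" is just (seed, index): enough to reproduce content
    # deterministically, which makes batching resumable.
    return ((b.rng_seed, state), state + 1)
end
Base.length(b::BatchSpec) = b.nbatches

# Stand-in for the actual data-loading / featurization step.
materialize((seed, i)) = "batch $i from seed $seed"

for item in BatchSpec(0x42, 3)
    println(materialize(item))
end
```

Because the iterator state is a small, serializable value, a training loop can be checkpointed and resumed mid-epoch, and the heavy materialization step can be farmed out to distributed workers.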
false
https://pretalx.com/juliacon2023/talk/TWS3EJ/
https://pretalx.com/juliacon2023/talk/TWS3EJ/feedback/
32-144
Morning Break Day 3 Room 5
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28141-morning-break-day-3-room-5
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/LSUDNT/
https://pretalx.com/juliacon2023/talk/LSUDNT/feedback/
32-144
Creating smooth surface models with ElasticSurfaceEmbedding.jl
Lightning talk
2023-07-28T10:00:00-04:00
10:00
00:10
In this talk, I introduce [ElasticSurfaceEmbedding.jl](https://github.com/hyrodium/ElasticSurfaceEmbedding.jl), a package for creating holdable surfaces by weaving paper strips. I'll discuss the process of embedding pieces of a target surface into a plane and minimizing their elastic strain energy. The presentation aims to engage a diverse audience such as mathematicians, physicists, and handicraftsmen, and explore interdisciplinary implications. Physical examples will be showcased onsite.
juliacon2023-25304-creating-smooth-surface-models-with-elasticsurfaceembedding-jl
JuliaCon
/media/juliacon2023/submissions/RBHAER/catenoid_helicoid_EhyI6UO.jpg
Yuto Horikawa
en
In general, creating a surface from a planar material requires splitting the surface into small pieces to avoid large strain in the medium. The most non-trivial part of this project is finding the planar shapes which become deformed into each piece of the surface. We characterized this shape by minimizing the strain energy in the medium. The minimization problem is formulated as a weak-form PDE, and it can be solved numerically with [ElasticSurfaceEmbedding.jl](https://github.com/hyrodium/ElasticSurfaceEmbedding.jl). This package uses a B-spline-based Galerkin method and the Newton-Raphson method internally. See [my recent paper](https://arxiv.org/abs/2211.06372) for more information.
I will bring some curved woven surfaces I made onsite. The top image is a catenoid and a helicoid which can be deformed into each other ([watch video on YouTube](https://www.youtube.com/watch?v=Gp6XkPLCw7s)).
The presentation slides are available at the following URL:
https://www.docswell.com/s/hyrodium/5JL8EQ-JuliaCon2023.
false
https://pretalx.com/juliacon2023/talk/RBHAER/
https://pretalx.com/juliacon2023/talk/RBHAER/feedback/
32-144
Interesso - Integrated Residual Solver for Dynamic Optimization
Lightning talk
2023-07-28T10:10:00-04:00
10:10
00:10
Dynamic optimization problems include optimal control, state estimation, and system identification. Our newly developed integrated residual methods generalize the state-of-the-art direct collocation method. `Interesso.jl` implements a selection of Lagrange polynomial and Gauss quadrature node distributions. The iterative `Progradio.jl` optimizer allows for efficient mesh refinement. We include an example of optimizing the trajectory of a space-shuttle landing.
juliacon2023-26830-interesso-integrated-residual-solver-for-dynamic-optimization
JuliaCon
/media/juliacon2023/submissions/A97CNW/logoInteresso_WLJIhsl.svg
Eduardo M. G. Vila
en
Optimal control, state estimation, and system identification are examples of dynamic optimization problems (DOPs). In general, these need to be discretized in time into finite-dimensional problems, suitable for optimization solvers.
Transcribing DOPs into (nonlinear) optimization problems can be done in various ways. The state of the art is direct collocation methods, where the differential equations are enforced at each of the discretized time nodes and the number of collocation points does not exceed the degrees of freedom. Integrated residual methods (IRMs) are a generalization of collocation, where errors in the differential equation violations are penalized instead. This generalization robustifies the method against certain DOPs for which the collocation method fails.
We parameterize the continuous trajectories using Lagrange polynomials in the barycentric form. This representation allows for fast and numerically stable interpolations. The user can choose the degree of polynomials, as well as the node distribution (e.g. Chebyshev, Legendre) to be used.
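The barycentric form mentioned above can be sketched in a few lines (a minimal, illustrative implementation of second-form barycentric Lagrange interpolation, not Interesso's actual code):

```julia
# Barycentric weights: w_j = 1 / ∏_{k≠j} (x_j - x_k)
function bary_weights(x)
    n = length(x)
    [1 / prod(x[j] - x[k] for k in 1:n if k != j) for j in 1:n]
end

# Second barycentric form: p(t) = Σ w_j y_j/(t-x_j) / Σ w_j/(t-x_j)
function bary_eval(x, y, w, t)
    for (j, xj) in enumerate(x)
        t == xj && return y[j]   # avoid division by zero at nodes
    end
    num = sum(w[j] * y[j] / (t - x[j]) for j in eachindex(x))
    den = sum(w[j] / (t - x[j]) for j in eachindex(x))
    num / den
end

x = [-1.0, 0.0, 1.0]
y = x .^ 2                  # interpolate f(t) = t^2
w = bary_weights(x)
bary_eval(x, y, w, 0.5)     # ≈ 0.25
```

Once the weights are precomputed, each evaluation is O(n) and numerically stable, which is why this form suits repeated interpolation inside an optimizer.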
A central part of IRMs is the integration of the dynamic constraints violation. The numerical integration is done using Gauss quadrature methods. The user can choose the degree of quadrature, as well as the distribution (e.g. Gauss-Legendre, Clenshaw-Curtis) to be used.
The transcribed optimization problem is solved with `Progradio.jl`, our projected gradient solver. Because the package implements Julia's `Iterator` interface, the optimization can be arbitrarily terminated by Interesso. This is a key feature which allows efficient mesh refinement.
The Julia language and community have been crucial in allowing programming abstractions. Parameterizing types with `AbstractFloat` optimizes the code to a user-specified floating point precision, and `AbstractDifferentiation.jl` allows us to easily support many AD engines, dependence-free.
We demonstrate our code with an example optimal control problem: optimizing the trajectory of a space-shuttle landing.
false
https://pretalx.com/juliacon2023/talk/A97CNW/
https://pretalx.com/juliacon2023/talk/A97CNW/feedback/
32-144
Julia Para Gente Con Prisa (Spanish)
Lightning talk
2023-07-28T10:20:00-04:00
10:20
00:10
This talk shares the experience of having presented a course called Julia Para Gente Con Prisa ("Julia for People in a Hurry") at IIMAS, UNAM, in the summer of 2022. It is a course designed for first-timers, but with applications for people who want to get the most out of their cores in data science or projects of social interest.
juliacon2023-26993-julia-para-gente-con-prisa-spanish-
Julia Community
Miguel Raz Guzmán Macedo
en
The course consisted of 20 total hours, at 2 hours per day, and assumed little computational experience, as expected. The syllabus, videos, and exercises will be shared completely openly so that more groups can replicate this model of intensive Julia courses focused on learning and applications with high computational impact.
false
https://pretalx.com/juliacon2023/talk/AZWAFU/
https://pretalx.com/juliacon2023/talk/AZWAFU/feedback/
32-144
Teaching Introductory Materials Science with Pluto Demos!
Lightning talk
2023-07-28T10:30:00-04:00
10:30
00:10
In this talk, I'll share my experience with Julia in a new (to me) context: in a classroom of first-year undergraduate students who have no coding experience! I plan to use Pluto notebooks to make a series of interactive computational (and low- or no-code) demonstrations of concepts from the introductory materials science course at Carnegie Mellon which I am teaching in the Spring 2023 semester. I look forward to sharing the resources I create as well as my reflections on the experience!
juliacon2023-26734-teaching-introductory-materials-science-with-pluto-demos-
Julia Community
Rachel Kurchin
en
In Spring 2023, I will teach the introductory materials science and engineering course at Carnegie Mellon University to approximately 80 students. These first-year undergraduate students have no programming prerequisites. I plan to use Pluto notebooks to create some interactive computational demonstrations of various ideas in the course, and potentially also to integrate further exploration of these into homework assignments for the course. As one example, I have already created a notebook with an interactive (thanks to PlotlyJS) "make-your-own-Ashby-plot" activity, where students can plot values of different materials properties against each other (using simple PlutoUI dropdown menus) and hypothesize about the mechanisms underlying the trends they observe.
By the time JuliaCon happens, I will know how all this went! In this talk (sort of like an "extended experience" talk from someone who's not actually new to Julia but is new to teaching with it), I will share with the community what went well, what didn't work, lessons learned, and thoughts on how to improve and adapt going forward! I hope this will be useful for others thinking about using Julia/Pluto in similar educational contexts.
false
https://pretalx.com/juliacon2023/talk/KCWTWK/
https://pretalx.com/juliacon2023/talk/KCWTWK/feedback/
32-144
The Slack thread that would not die
Lightning talk
2023-07-28T10:40:00-04:00
10:40
00:10
A handful of Julians get nerdsniped into a ridiculous benchmarking spat with all the other languages.
After a 1,000+ message Slack thread that defied numerical stability, common sense, and message quotas, our intrepid Julians won nothing but a few lost days of productive heckling.
It was beautiful.
juliacon2023-26996-the-slack-thread-that-would-not-die
Julia Community
Miguel Raz Guzmán Macedo
en
Nothing about this benchmark makes sense, but we all had a kick of fun participating in it.
Hopefully you will as well from this whimsical little gem of a corner of the Julia community.
false
https://pretalx.com/juliacon2023/talk/X8QZZX/
https://pretalx.com/juliacon2023/talk/X8QZZX/feedback/
32-144
Learn Julia by creating Pull Requests on Github
Lightning talk
2023-07-28T10:50:00-04:00
10:50
00:10
The talk is about the creation of a GitHub repository that teaches users Julia: learners fork the repository, and its CI/CD automatically teaches Julia best practices.
juliacon2023-27041-learn-julia-by-creating-pull-requests-on-github
Julia Community
James Hennessy
en
The project is a GitHub repository that users can fork and submit pull requests to; leveraging GitHub's CI/CD, it gives corrections and tips on Julia building blocks like arrays, vectors, and matrices. Additional topics, such as statistics and dispatch, have been added as new modules. Since the project is open source, Julia enthusiasts can create their own learning track by forking the repository. This was created to allow users to familiarize themselves with Julia, general programming concepts, and CI/CD pipelines. This is a critical tool for the community that can increase adoption of the language. The project hopes to gain more traction during the Hacktoberfest event, where users look for repositories to contribute to in order to win badges and status in the open source community. The idea was inspired by a similar module for the Haskell programming language.
false
https://pretalx.com/juliacon2023/talk/NLB33K/
https://pretalx.com/juliacon2023/talk/NLB33K/feedback/
32-144
Diversity and Inclusion Efforts in the Julia Community
Lightning talk
2023-07-28T11:00:00-04:00
11:00
00:10
The Julia community aims to be welcoming, diverse, and inclusive towards people from all backgrounds. However, there is still room for improvement in terms of engagement and representation of people from underrepresented backgrounds. In this talk, we will present different initiatives aimed at supporting underrepresented individuals and groups within the Julia community, such as Julia Gender Inclusive, JuliaCN, and the Development and Diversity Fund.
juliacon2023-26954-diversity-and-inclusion-efforts-in-the-julia-community
Julia Community
Jacob Zelko
en
During the presentation, we will provide updates on the Julia Gender Inclusive initiative. We will discuss statistics on gender diversity within the Julia community, our organized coffee hours, and mentored Hackathons to foster an inclusive environment for people of all genders.
Additionally, we will discuss new funding opportunities within the Julia community to support specifically those from underrepresented populations. One example will be The Julia Community Development and Diversity Fund which focuses on providing micro-grants to support individuals or groups lacking access to traditional computing resources within the Julia Community.
Another initiative discussed will also highlight the JuliaCN Community Fund, which promotes translation efforts and makes resources more accessible to Chinese Julia users. The mission of this fund is to propose a long-term English-to-Chinese translation program to cover anything that is stable and useful to Chinese Julia users.
For each initiative, we will discuss efforts from the past year, highlight progress and ongoing work, and explain how attendees can get involved and support these efforts, whether financially or as volunteers.
false
https://pretalx.com/juliacon2023/talk/TC79NT/
https://pretalx.com/juliacon2023/talk/TC79NT/feedback/
32-144
A brief history of the Julia repository
Lightning talk
2023-07-28T11:10:00-04:00
11:10
00:10
This is a lighthearted talk that looks at some interesting statistics and tidbits from the Julia repo over the almost 15 years it has existed.
juliacon2023-27013-a-brief-history-of-the-julia-repository
Julia Community
Kristoffer Carlsson
en
The Julia repository has existed for almost 15 years, with the first commit made on Sat Aug 22 20:39:06 2009. Over 50,000 commits have been made, 22,000+ issues and 25,000+ PRs have been opened, and 130+ releases have been tagged.
This lighthearted talk presents some of the more interesting statistics from the Julia repo. Among others, here are some of the questions that will be answered:
- How has the ratio of open to closed PRs and issues evolved over time? How does this compare to the repositories of other languages? Are there any conclusions to be drawn from this?
- How has the frequency of Julia releases changed over time?
- What PR has the longest title? What PR was the fastest from creation to merge? What PR took the longest from getting created to getting merged? Etc.
As a bonus, the presentation for this talk is made completely programmatically and can be regenerated with a single command to incorporate future data.
false
https://pretalx.com/juliacon2023/talk/7TFQMP/
https://pretalx.com/juliacon2023/talk/7TFQMP/feedback/
32-144
Qualitative study on challenges faced in multithreading in Julia
Lightning talk
2023-07-28T11:20:00-04:00
11:20
00:10
Developers have been using multithreading to obtain increased performance. However, developers find it difficult to write multithreaded code. To help them, it is important to understand the difficulties they face. The goal of this study is to evaluate the challenges developers face with multithreading in Julia, using Julia Discourse and Stack Overflow. Conversations between developers on these online discussion forums were analyzed using inductive qualitative content analysis.
juliacon2023-26798-qualitative-study-on-challenges-faced-in-multithreading-in-julia
Julia Community
Harshita
en
With the consistent need for efficiency, developers are required to write efficient and correct multithreaded code. Writing correct and efficient code is quite challenging. To better help developers with multithreading, it is imperative to understand their interests and difficulties in terms of the multithreaded code they are writing. The main aim of the study is to measure the challenges faced by developers with multithreading in Julia. This will help the maintainers of Julia improve its support for multithreading and help developers fine-tune their code to make it more optimized. More importantly, it will help developers ask the right questions on online discussion forums. This study involves the analysis of Julia Discourse and Stack Overflow posts, which were used to evaluate the type of questions on multithreading asked by Julia developers. This study uses qualitative content analysis, which allows the categories to flow from the data. The research findings indicate that developers face a lot of performance-related issues, specifically the slow speed of their multithreaded code. This study also indicates that even though Julia provides official documentation on the usage of macros and threads, developers tend to inquire about minimal working examples to learn about their proper implementation. It has also been found that programmers find it difficult to understand the pattern of memory allocation in their code, which is one of the major causes of slow performance of multithreaded code. The study was expected to find queries related to testing and debugging of multithreaded code, but surprisingly, there were negligible queries on that. This study also analyzes the different approaches developers use to ask their queries on Julia Discourse and Stack Overflow.
false
https://pretalx.com/juliacon2023/talk/LBCZDX/
https://pretalx.com/juliacon2023/talk/LBCZDX/feedback/
32-144
Discussing Gender Diversity in the Julia Community
Birds of Feather (BoF)
2023-07-28T11:30:00-04:00
11:30
01:00
Julia Gender Inclusive is an initiative that supports gender diversity in the Julia community. We are a group of people whose gender is underrepresented in the community and aim to provide a supportive space for all gender minorities in the Julia community. Over the last year, we have worked toward doing so with our Learn Julia with Us workshops, regular coffee meetings, and our inaugural hackathon. With this BoF session, we hope to discuss current and future initiatives with other people with underrepresented genders.
juliacon2023-26882-discussing-gender-diversity-in-the-julia-community
Julia Community
Julia Gender Inclusive
en
The objective of this BoF is to create space for discussion and community building among people who feel their gender is underrepresented within the Julia community, as well as allies who want to support us. We aim to create a safe and fruitful discussion about gender diversity, increase awareness of our current initiatives, receive input on new actions we can take as Julia Gender Inclusive, and reach out to others who want to get involved.
false
https://pretalx.com/juliacon2023/talk/MHCCJF/
https://pretalx.com/juliacon2023/talk/MHCCJF/feedback/
32-144
Lunch Day 3 (Room 5)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28084-lunch-day-3-room-5-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/VX8S9V/
https://pretalx.com/juliacon2023/talk/VX8S9V/feedback/
32-144
Julia and Rust BoF
Birds of Feather (BoF)
2023-07-28T14:00:00-04:00
14:00
01:00
Julia and Rust both punch above their weight, but how well do they gel together? We'll invite community members to speak on their experiences interfacing the languages in both directions, future possibilities of collaboration and leveraging the best of both worlds.
juliacon2023-26991-julia-and-rust-bof
JuliaCon
Miguel Raz Guzmán Macedo
en
Where and when should Julia or Rust be considered? What are the common pitfalls to avoid? What should people really know about before embarking on learning or interfacing with Rust?
Although Julia has seen a great increase in usage in the past few years, Rust's rise has been stratospheric; it even has a place in the Linux kernel now.
This BoF will discuss
a) that there is much to learn from cooperation on both languages
b) how performance sparring will compound over time
c) that both languages are now pushing into domains not carved out in their initial designs (but where one may learn from the other)
d) the case for and against Rust in Julia internals
This BoF is open to beginner and experienced Julia and Rust users alike.
false
https://pretalx.com/juliacon2023/talk/3PSF39/
https://pretalx.com/juliacon2023/talk/3PSF39/feedback/
32-144
Julia in HPC BoF
Birds of Feather (BoF)
2023-07-28T15:00:00-04:00
15:00
01:00
The Julia HPC community has been growing over the last few years, with monthly meetings to coordinate development and to solve problems arising in the use of Julia for high-performance computing.
The Julia in HPC Birds of a Feather is an ideal opportunity to join the community and discuss your experiences with using Julia in an HPC context.
juliacon2023-26867-julia-in-hpc-bof
HPC
Carsten BauerValentin ChuravyWilliam F Godoy
en
Building on last year's success of this format at JuliaCon 2022 and the well-received Julia BoF at SC22 in Dallas, we are looking forward to talking with the wider Julia HPC community again!
false
https://pretalx.com/juliacon2023/talk/NKVYHZ/
https://pretalx.com/juliacon2023/talk/NKVYHZ/feedback/
32-144
An opinionated, but configurable, GitLab CI process for Julia
Lightning talk
2023-07-28T16:00:00-04:00
16:00
00:10
As an alternative code-hosting/project management platform, GitLab is not as well supported within the Julia community. This talk presents a GitLab CI process for Julia aiming to change this, making it easier for those using GitLab as their preferred platform to build and ship Julia-based software.
juliacon2023-27022-an-opinionated-but-configurable-gitlab-ci-process-for-julia
Julia Base and Tooling
Joris Kraak
en
The Julia community is very GitHub-centric, i.e. the majority of Julia package development happens on that platform. Hence, the community has created various CI solutions for it.
As an alternative platform, GitLab has not seen as much use, and hence support, from the Julia community. However, the platform can be, and is, used for Julia development by various people and organizations. It is typically chosen because of its excellent feature set and the ability to host private instances, with limited capabilities, for free.
The [Ethima organization](https://gitlab.com/ethima) provides opinionated, composable, and reusable CI processes for GitLab. It even provides one for Julia! In this talk, you will learn how to get set up (it doesn't take much), what functionality the process provides you with, how it can be extended and fine-tuned to your needs, and how to contribute.
false
https://pretalx.com/juliacon2023/talk/9AUDXW/
https://pretalx.com/juliacon2023/talk/9AUDXW/feedback/
32-144
When type instability matters
Lightning talk
2023-07-28T16:10:00-04:00
16:10
00:10
Type instabilities are not always bad! Using non-concrete types, and avoiding method specialization and type inference can help with improving latency and, in specific cases, runtime performance. The latter is observed in inherently dynamic contexts with no way to compile all possible method signatures upfront, because code needs to be compiled at points of dynamic dispatch by design. We present a concrete case we face in our production environment, additional examples, and related trade-offs.
juliacon2023-26883-when-type-instability-matters
Julia Base and Tooling
/media/juliacon2023/submissions/CNBSS3/PlantingSpace_logo_onWhite_800x300px_LaTYTqK.png
Cédric BelmantThéo Galy-Fajou
en
- [Repository](https://gitlab.com/plantingspace/juliacon10min)
- [Slides](https://gitlab.com/plantingspace/juliacon10min/-/blob/main/assets/slides.pdf)
Writing type-stable code has largely been accepted as a standard good practice. When the compiler can fully infer the types which flow through a program, it can apply optimizations and specialize to emit fast code for the specific function signatures that get compiled.
The process of method specialization and type inference is expensive and a main source of the perceived latency when using Julia. While in most cases, this one-time cost is acceptable for the performance gains at runtime, there are specific contexts in which the cost of such analyses becomes prohibitive.
In this talk, we present situations which may benefit from preventing the compiler from reasoning too much about the code, considering the trade-off between compilation and execution time. Such situations include evaluating generated code, especially code that generates new types. We also discuss other cases in which certain data structures benefit from having non-concrete fields, such as `Expr` and other tree-shaped data structures.
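As an illustrative sketch of this trade-off (not the speakers' production code; `Node` and `count_nodes` are hypothetical names), a tree-shaped structure can deliberately use a non-concrete child container, mirroring `Expr.args`, while `@nospecialize` discourages per-signature compilation:

```julia
# A tree-shaped structure with a deliberately non-concrete field:
# children are stored as Vector{Any}, like Expr.args, so one compiled
# method body handles any mix of child types.
struct Node
    head::Symbol
    args::Vector{Any}
end

# @nospecialize asks the compiler not to specialize this method
# for each concrete type of `x`, trading some runtime dispatch
# for reduced compilation latency.
function count_nodes(@nospecialize(x))
    x isa Node || return 1   # a leaf of any type counts as one node
    n = 1
    for a in x.args
        n += count_nodes(a)
    end
    return n
end

tree = Node(:call, Any[Node(:ref, Any[1, 2]), 3.0, "s"])
@assert count_nodes(tree) == 6
```

Because `args` is `Vector{Any}`, pushing a new child type never forces recompilation of code that walks the tree; the cost is a dynamic dispatch per element.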
false
https://pretalx.com/juliacon2023/talk/CNBSS3/
https://pretalx.com/juliacon2023/talk/CNBSS3/feedback/
32-144
Declaratively imposing constraints using ValueConstraints.jl
Lightning talk
2023-07-28T16:20:00-04:00
16:20
00:10
`ValueConstraints.jl` provides a framework for declaratively expressing constraints on values, e.g. "minimum", "maximum", "needs to be within this set", etc. along with a small standard library of common constraints. It is fast, i.e. on par with using regular function calls, provides friendly error messages and warnings, and, perhaps most importantly, can serve as a foundation for the creation of schemas.
juliacon2023-27101-declaratively-imposing-constraints-using-valueconstraints-jl
Julia Base and Tooling
Joris Kraak
en
The need to be able to express constraints on data is common in various aspects of software engineering. A particular example that might come to mind is API design, especially inter-process API design, where it is useful to be able to expose these constraints to consumers of the API. For instance, the [OpenAPI specification](https://spec.openapis.org/oas/v3.1.0) is a specification for REST APIs and is in turn an extension of the [JSON Schema specification](https://json-schema.org/specification.html). The OpenAPI specification leverages [the JSON Schema Validation specification](https://json-schema.org/draft/2020-12/json-schema-validation.html) to document constraints on data submitted to API endpoints.
Although the Julia ecosystem has packages that handle the parsing of these types of schemas, such as [`JSONSchema.jl`](https://github.com/fredo-dedup/JSONSchema.jl) or [`OpenAPI.jl`](https://github.com/JuliaComputing/OpenAPI.jl), there is no such package making it easy to generate these types of schemas _from Julia code_. `ValueConstraints.jl` aims to fill that gap. It takes a declarative and introspectable approach to expressing constraints on values, so that other packages like [`StructTypes.jl`](https://github.com/JuliaData/StructTypes.jl) can be used to serialize the constraints into and from formats such as JSON Schema.
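The general idea of declarative, introspectable constraints can be sketched roughly as follows (hypothetical types and functions for illustration only; the actual `ValueConstraints.jl` API may differ):

```julia
# Hypothetical sketch: constraints as plain data, so they can be both
# checked against values and serialized into schema keywords.
abstract type Constraint end

struct Minimum{T} <: Constraint
    value::T
end
struct Maximum{T} <: Constraint
    value::T
end

# Checking a value against a constraint is an ordinary function call.
satisfies(x, c::Minimum) = x >= c.value
satisfies(x, c::Maximum) = x <= c.value

constraints = Constraint[Minimum(0), Maximum(100)]
@assert all(c -> satisfies(42, c), constraints)
@assert !all(c -> satisfies(-1, c), constraints)

# Because constraints are introspectable data, they can also be mapped
# to JSON Schema validation keywords for serialization.
to_schema(c::Minimum) = "minimum" => c.value
to_schema(c::Maximum) = "maximum" => c.value
@assert Dict(to_schema.(constraints)) == Dict("minimum" => 0, "maximum" => 100)
```

The key design point is that constraints are values rather than opaque validation closures, which is what makes generating schemas from Julia code possible at all.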
false
https://pretalx.com/juliacon2023/talk/HDTGHB/
https://pretalx.com/juliacon2023/talk/HDTGHB/feedback/
32-144
NEOs.jl: a jet transport-based package for Near-Earth asteroids
Talk
2023-07-28T16:30:00-04:00
16:30
00:30
NEOs.jl is an open source Near Earth Object orbit determination software package in the Julia programming language. NEOs.jl features exploitation of high-order automatic differentiation techniques, known as jet transport, in order to quantify orbital uncertainties in a versatile, semi-analytical manner. Using NEOs.jl we have estimated the Yarkovsky acceleration acting on the potentially hazardous asteroid Apophis and have ruled out potential impacts on 2036 and 2068.
juliacon2023-26939-neos-jl-a-jet-transport-based-package-for-near-earth-asteroids
JuliaCon
Jorge A. Pérez-HernándezLuis Eduardo Ramírez MontoyaLuis Benet
en
NEOs.jl is an open source Near Earth Object orbit determination software package in the Julia programming language. It enables high-accuracy orbit determination for near-Earth asteroids and comets from optical and radar astrometry. NEOs.jl features exploitation of high-order automatic differentiation techniques, known as jet transport, in order to quantify orbital uncertainties in a versatile, semi-analytical manner. The optical astrometry error model incorporates state-of-the-art dynamical and observational models, and accounts for biases present in star catalogs, as well as other sources of systematic errors via an appropriate weighting scheme; these considerations are important when dealing with high-precision orbit determination. As part of the development of NEOs.jl, we also implemented a Julia package with our own planetary ephemeris integrator, PlanetaryEphemeris.jl, based on the DE430 planetary and lunar ephemeris dynamical model produced by the Jet Propulsion Laboratory. Using NEOs.jl we have estimated the Yarkovsky acceleration acting on asteroid Apophis and have ruled out potential impacts on 2036 and 2068. These results were published in the Communications Earth & Environment journal: https://www.nature.com/articles/s43247-021-00337-x. This is joint work with Dr. Luis Benet (ICF-UNAM) and Luis Eduardo Ramírez Montoya (FC-UNAM).
false
https://pretalx.com/juliacon2023/talk/B8FHNU/
https://pretalx.com/juliacon2023/talk/B8FHNU/feedback/
32-G449 (Kiva)
Morning Break Day 3 Room 6
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28142-morning-break-day-3-room-6
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/87N8YF/
https://pretalx.com/juliacon2023/talk/87N8YF/feedback/
32-G449 (Kiva)
MEDYAN.jl: Agent-based modeling of active matter and whole cells
Talk
2023-07-28T10:00:00-04:00
10:00
00:30
Agent-based modeling of the whole cell has emerged as a frontier of modern research. To address this challenge, we developed MEDYAN, a mechano-chemical forcefield and simulation software, in C++. Our new Julia package is 10x faster. To achieve this result, the overall architecture was redesigned and various Julia packages were leveraged. I will describe our strategy for combining stochastic reaction diffusion dynamics with movable membrane and filament mechanics. See medyan.org for more details.
juliacon2023-26941-medyan-jl-agent-based-modeling-of-active-matter-and-whole-cells
Biology and Medicine
Nathan Zimmerberg
en
MEDYAN.jl is the next-generation implementation of our lab’s agent-based model of the cell cytoskeleton. Our goal is to describe the emergent behavior of a cell in terms of basic chemical and mechanical interactions between individual proteins. By rewriting our object-oriented C++ implementation in Julia we were able to speed up simulations by over 10x. We changed the overall framework architecture and used various high-performance Julia packages, including Optim.jl, CellListMap.jl, and JumpProcesses.jl to achieve this speed up. I will describe our strategy for combining stochastic reaction diffusion with deformable membrane and filament mechanics. Lastly, I will show some of the current applications of our model to study neuron growth and the T-cell immune synapse in collaboration with various experimental groups. For more details of the C++ based framework, please go to medyan.org.
false
https://pretalx.com/juliacon2023/talk/WV8YE3/
https://pretalx.com/juliacon2023/talk/WV8YE3/feedback/
32-G449 (Kiva)
Julia Systems Biology
Minisymposium
2023-07-28T10:30:00-04:00
10:30
01:00
Julia has long had the most developed ecosystem for differential equation modeling and simulation through the SciML organization. Here we present a collection of talks by computational systems biologists in the community. The focus of the symposium will be to look at how SciML tools are being used in systems biology, how they can improve, and how we can take steps to increase collaboration throughout industry and academia.
juliacon2023-24763-julia-systems-biology
SciML
Anand JainPaul LangHarry SaxtonTorsten Schenkel
en
Julia SciML is a collection of hundreds of scientific computing packages for the Julia programming language. Historically, SciML began as a group of fast ordinary differential equation (ODE) solvers. However, over the last half-decade, SciML has grown to include an extensive collection of packages geared toward modern scientific computing, with its own Computer Algebra System (CAS) and Domain Specific Language (DSL), support for heterogeneous computing (e.g., running code on GPUs), and seamless connection to machine learning packages.
The Julia SysBio community has significantly benefited from the SciML ecosystem. One of the first efforts to make SciML usable to the SysBio community was to provide importers and bridging packages (e.g., CellMLToolkit.jl, SBMLToolkit.jl) that allow standard and well-curated models in open-source repositories (CellML, SBML) to be imported and used by the SciML packages. CellML models are generally in the form of ODEs and are usually solved with the help of ODE and partial differential equation (PDE) solvers. On the other hand, SBML models are usually presented as reaction networks and are better suited to Catalyst.jl.
With the maturation of the SciML infrastructure and the availability of bridging packages between the biological models and the SciML ecosystem, it is time for applications! In this mini-symposium, we showcase some recent applications of SciML in SysBio in academia and industry.
In addition to presenting various applications, our plan is to have a forum to provide feedback from the community to the core SciML developers in order to guide the future development of SciML and related packages and to make Julia and SciML the best environment for SysBio applications.
Presenters:
* Alex Cohen will be presenting on techniques for learning physically-constrained models in mode space to characterize the locomotion and behavioral states of animals that use undulatory locomotion.
* Andrew Stine (United Therapeutics) will be presenting how Julia and SciML tools are used in QSP modeling teams at United Therapeutics.
* Torsten Schenkel/Harry Saxton will be presenting lumped parameter models, introducing CirculatorySystemModels.jl (CSM), an acausal model library built on ModelingToolkit.jl containing a wide range of cardiovascular elements for creating complex models. CSM provides a common API with automatic generation of ODEs, meaning every 0D model can be solved with the same workflow, allowing for full integration with the SciML framework.
* Katy Norman (Sanofi) will be presenting how Julia and SciML tools are used in QSP modeling teams at Sanofi.
* Otto Ritter (Merck) will be presenting how Julia, Applied Category Theory, and SciML tools are used in QSP modeling teams at Merck with ReactiveDynamics.jl.
* Sam Isaacson will be discussing high level progression of the systems biology ecosystem in Julia, providing important context on future steps and ways that the community can improve and collaborate.
* Don Elbert will be presenting on variable volume models and his experience as a SciML user. This will provide important context on pain points for adoption and what we can do to make Julia the primary choice for modeling and simulation of biology.
* Wiktor Phillips will be presenting his acausal neuronal modeling and simulation package, Conductor.jl.
* Shahriar Iravanian will be presenting simulating cardiac electrophysiological ionic models as 2D/3D PDEs with the help of Julia and SciML and showcase the power of composition in Julia by using dual complex numbers in CUDA kernels to solve a biological problem (calculating the membrane impedance of cardiac cells).
* Torkel Loman will be presenting Catalyst.jl and how chemical reaction networks are simulated in Julia.
* Sebastian Micluța-Câmpeanu/Paul Lang/Elisabeth Roesch (JuliaHub) will explore strategies to handle parameter unidentifiability and integrate open standards and neural model autocomplete in QSP workflows.
false
https://pretalx.com/juliacon2023/talk/GQPUWV/
https://pretalx.com/juliacon2023/talk/GQPUWV/feedback/
32-G449 (Kiva)
Julia Systems Biology (2)
Minisymposium
2023-07-28T11:30:00-04:00
11:30
01:00
Julia has long had the most developed ecosystem for differential equation modeling and simulation through the SciML organization. Here we present a collection of talks by computational systems biologists in the community. The focus of the symposium will be to look at how SciML tools are being used in systems biology, how they can improve, and how we can take steps to increase collaboration throughout industry and academia.
juliacon2023-30804-julia-systems-biology-2-
SciML
en
Julia SciML is a collection of hundreds of scientific computing packages for the Julia programming language. Historically, SciML began as a group of fast ordinary differential equation (ODE) solvers. However, over the last half-decade, SciML has grown to include an extensive collection of packages geared toward modern scientific computing, with its own Computer Algebra System (CAS) and Domain Specific Language (DSL), support for heterogeneous computing (e.g., running code on GPUs), and seamless connection to machine learning packages.
The Julia SysBio community has significantly benefited from the SciML ecosystem. One of the first efforts to make SciML usable to the SysBio community was to provide importers and bridging packages (e.g., CellMLToolkit.jl, SBMLToolkit.jl) that allow standard and well-curated models in open-source repositories (CellML, SBML) to be imported and used by the SciML packages. CellML models are generally in the form of ODEs and are usually solved with the help of ODE and partial differential equation (PDE) solvers. On the other hand, SBML models are usually presented as reaction networks and are better suited to Catalyst.jl.
With the maturation of the SciML infrastructure and the availability of bridging packages between the biological models and the SciML ecosystem, it is time for applications! In this mini-symposium, we showcase some recent applications of SciML in SysBio in academia and industry.
In addition to presenting various applications, our plan is to have a forum to provide feedback from the community to the core SciML developers in order to guide the future development of SciML and related packages and to make Julia and SciML the best environment for SysBio applications.
Presenters:
* Alex Cohen will be presenting on techniques for learning physically-constrained models in mode space to characterize the locomotion and behavioral states of animals that use undulatory locomotion.
* Shahriar Iravanian will be presenting simulating cardiac electrophysiological ionic models as 2D/3D PDEs with the help of Julia and SciML and showcase the power of composition in Julia by using dual complex numbers in CUDA kernels to solve a biological problem (calculating the membrane impedance of cardiac cells).
* Torsten Schenkel/Harry Saxton will be presenting lumped parameter models, introducing CirculatorySystemModels.jl (CSM), an acausal model library built on ModelingToolkit.jl containing a wide range of cardiovascular elements for creating complex models. CSM provides a common API with automatic generation of ODEs, meaning every 0D model can be solved with the same workflow, allowing for full integration with the SciML framework.
* Katy Norman (Sanofi) will be presenting how Julia and SciML tools are used in QSP modeling teams at Sanofi.
* Otto Ritter (Merck) will be presenting how Julia, Applied Category Theory, and SciML tools are used in QSP modeling teams at Merck with ReactiveDynamics.jl.
* Sam Isaacson will be discussing high level progression of the systems biology ecosystem in Julia, providing important context on future steps and ways that the community can improve and collaborate.
* Don Elbert will be presenting on variable volume models and his experience as a SciML user. This will provide important context on pain points for adoption and what we can do to make Julia the primary choice for modeling and simulation of biology.
* Wiktor Phillips will be presenting his acausal neuronal modeling and simulation package, Conductor.jl.
* Joe Bender (United Therapeutics) will be presenting how Julia and SciML tools are used in QSP modeling teams at United Therapeutics.
* Torkel Loman will be presenting Catalyst.jl and how chemical reaction networks are simulated in Julia.
false
https://pretalx.com/juliacon2023/talk/JS8ZFN/
https://pretalx.com/juliacon2023/talk/JS8ZFN/feedback/
32-G449 (Kiva)
Lunch Day 3 (Room 6)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
juliacon2023-28085-lunch-day-3-room-6-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Take a break and grab some lunch to recharge for the afternoon sessions. We have a delicious spread waiting for you in the dining hall. Bon appétit!
false
https://pretalx.com/juliacon2023/talk/XPA7EQ/
https://pretalx.com/juliacon2023/talk/XPA7EQ/feedback/
32-G449 (Kiva)
Julia Systems Biology (3)
Minisymposium
2023-07-28T14:00:00-04:00
14:00
01:00
Julia has long had the most developed ecosystem for differential equation modeling and simulation through the SciML organization. Here we present a collection of talks by computational systems biologists in the community. The focus of the symposium will be to look at how SciML tools are being used in systems biology, how they can improve, and how we can take steps to increase collaboration throughout industry and academia.
juliacon2023-30805-julia-systems-biology-3-
SciML
en
Julia SciML is a collection of hundreds of scientific computing packages for the Julia programming language. Historically, SciML began as a group of fast ordinary differential equation (ODE) solvers. However, over the last half-decade, SciML has grown to include an extensive collection of packages geared toward modern scientific computing, with its own Computer Algebra System (CAS) and Domain Specific Language (DSL), support for heterogeneous computing (e.g., running code on GPUs), and seamless connection to machine learning packages.
The Julia SysBio community has significantly benefited from the SciML ecosystem. One of the first efforts to make SciML usable to the SysBio community was to provide importers and bridging packages (e.g., CellMLToolkit.jl, SBMLToolkit.jl) that allow standard and well-curated models in open-source repositories (CellML, SBML) to be imported and used by the SciML packages. CellML models are generally in the form of ODEs and are usually solved with the help of ODE and partial differential equation (PDE) solvers. On the other hand, SBML models are usually presented as reaction networks and are better suited to Catalyst.jl.
With the maturation of the SciML infrastructure and the availability of bridging packages between the biological models and the SciML ecosystem, it is time for applications! In this mini-symposium, we showcase some recent applications of SciML in SysBio in academia and industry.
In addition to presenting various applications, our plan is to have a forum to provide feedback from the community to the core SciML developers in order to guide the future development of SciML and related packages and to make Julia and SciML the best environment for SysBio applications.
Presenters:
* Alex Cohen will be presenting on techniques for learning physically-constrained models in mode space to characterize the locomotion and behavioral states of animals that use undulatory locomotion.
* Shahriar Iravanian will be presenting simulating cardiac electrophysiological ionic models as 2D/3D PDEs with the help of Julia and SciML and showcase the power of composition in Julia by using dual complex numbers in CUDA kernels to solve a biological problem (calculating the membrane impedance of cardiac cells).
* Torsten Schenkel/Harry Saxton will be presenting lumped parameter models, introducing CirculatorySystemModels.jl (CSM), an acausal model library built on ModelingToolkit.jl containing a wide range of cardiovascular elements for creating complex models. CSM provides a common API with automatic generation of ODEs, meaning every 0D model can be solved with the same workflow, allowing for full integration with the SciML framework.
* Katy Norman (Sanofi) will be presenting how Julia and SciML tools are used in QSP modeling teams at Sanofi.
* Otto Ritter (Merck) will be presenting how Julia, Applied Category Theory, and SciML tools are used in QSP modeling teams at Merck with ReactiveDynamics.jl.
* Sam Isaacson will be discussing high level progression of the systems biology ecosystem in Julia, providing important context on future steps and ways that the community can improve and collaborate.
* Don Elbert will be presenting on variable volume models and his experience as a SciML user. This will provide important context on pain points for adoption and what we can do to make Julia the primary choice for modeling and simulation of biology.
* Wiktor Phillips will be presenting his acausal neuronal modeling and simulation package, Conductor.jl.
* Joe Bender (United Therapeutics) will be presenting how Julia and SciML tools are used in QSP modeling teams at United Therapeutics.
* Torkel Loman will be presenting Catalyst.jl and how chemical reaction networks are simulated in Julia.
false
https://pretalx.com/juliacon2023/talk/8TVH3V/
https://pretalx.com/juliacon2023/talk/8TVH3V/feedback/
32-G449 (Kiva)
100 Million Patients: Julia for International Health Studies
Talk
2023-07-28T15:00:00-04:00
15:00
00:30
This talk explores the use of Julia in a novel observational health research study examining health equity and mental health in ~100 million patients, an international collaborative effort spanning more than 4 countries. Contributions and efforts within the JuliaHealth and adjacent communities have made working with this data possible. The approaches and results shared will be valuable for potential researchers and will open new frontiers for high performance computing and health analytics.
juliacon2023-26999-100-million-patients-julia-for-international-health-studies
Biology and Medicine
Jacob Zelko
en
Conducting health research studies at scale to understand the health of specific communities and subpopulations has long been a struggle. This has been due to a variety of issues, such as a lack of international standards in the structure of electronic health records, patient claims data, and diagnoses. Moreover, the investigation of questions related to the topic of health equity (that is, the skewed distribution of health resources or services to various subpopulations seeking healthcare) has been largely stalled due to these problems.
In a previous talk I gave, [Using Julia for Observational Health Research](https://www.youtube.com/watch?v=5XsWUZX6lFM), I presented early work on the success of using Julia within the space of observational health research in utilizing the [OMOP Common Data Model](https://jacobzelko.com/02082021170353-cdm-standardized-tables/). In that previous work, I conducted a pilot study to characterize prevalence rates in mental health care for [intersectional subpopulations](https://jacobzelko.com/11042022141714-what-intersectionality-theory/#workable_definitions_of_intersectionality) suffering from bipolar disorder, depression, and/or suicidality. This work utilized novel tooling and approaches created within Julia to successfully analyze data from ~2.5 million Medicaid subscribers within the U.S. state of Georgia. This work earned the [highest awards at the top observational health research venue](https://www.ohdsi.org/2022-collaborator-showcase), drove another successful grant proposal, and resulted in [multiple invited talks](https://www.nahdo.org/conference/2022/agenda). Buoyed by the interest and success of this pilot work, my team and I have moved this project into the next phase: the examination of more than 100 million patients from more than 4 countries across the globe.
In this talk, I will present advances within the JuliaHealth community and the broader Julia ecosystem that have made possible such large scale and federated analyses. In particular, novel JuliaHealth tools such as [OMOPCDMCohortCreator.jl](https://juliahealth.org/OMOPCDMCohortCreator.jl/) will be highlighted to show how to analyze "big" [real world data](https://jacobzelko.com/10282021140730-real-world-evidence/#united_states_food_and_drug_administration_definitions), how using Julia can be of huge benefit within this space, and how Julia community members could start using these tools for their own research. As this study now takes place across multiple countries, time will also be spent discussing how Julia lends itself very well to robust analyses using literate programming tools such as [Quarto](https://quarto.org) or [Weave.jl](https://github.com/JunoLab/Weave.jl) and versioning processes through [DrWatson.jl](https://github.com/JuliaDynamics/DrWatson.jl) or [Data Version Control](https://github.com/iterative/dvc), which can be utilized to handle each country's specific needs. Additionally, I will spend some time discussing issues encountered (both technical and anthropological), ways that the Julia ecosystem could potentially grow to support future work in this research domain, and opportunities for Julia users to get involved. Finally, I will share my personal thoughts on what open questions there are to be addressed in observational health research and how Julia can be a tool to address public health questions and provide insight into questions of health disparities.
In conclusion, this talk will highlight the real world use of Julia in large-scale health research studies built on real world data. Moreover, it will show the potential of the various ecosystems within Julia to analyze and tackle complex questions within health equity. Through this talk, I invite future Julia users and researchers to join me in pursuing the potential of Julia within the space of observational health research.
false
https://pretalx.com/juliacon2023/talk/F3P9FX/
https://pretalx.com/juliacon2023/talk/F3P9FX/feedback/
32-G449 (Kiva)
SingleCellProjections.jl - Fast Single Cell Expression analysis
Talk
2023-07-28T15:30:00-04:00
15:30
00:30
We present an easy to use and powerful package that enables analysis of Single Cell Expression data in Julia.
It is faster and uses less memory than existing solutions since the data is internally represented as expressions of sparse and low rank matrices, instead of storing huge dense matrices.
In particular, it efficiently performs PCA (Principal Component Analysis), a natural starting point for downstream analysis, and supports both standard workflows and projections onto a base data set.
juliacon2023-26962-singlecellprojections-jl-fast-single-cell-expression-analysis
Biology and Medicine
Rasmus Henningsson
en
Using Single Cell RNA sequencing, it is now possible to generate data sets with gene expression levels for >30k genes and hundreds of thousands of cells.
Explorative analysis of these rich data sets is important - but challenging using existing tools that store the transformed and normalized data as dense matrices. (A single dense matrix with 30k genes and 500k cells takes 120GB RAM.)
SingleCellProjections.jl is the first comprehensive Julia package for processing Single Cell Expression data.
It is at least 5 times faster and uses less than 1/5 of the memory when doing the same analysis as existing packages written in other languages.
SingleCellProjections.jl supports the standard workflow for Single Cell Expression data:
Sparse matrices with raw gene expression counts (~5-10% nonzeros) are transformed (e.g. using SCTransform or log transform) and then normalized (e.g. mean-center, regress out covariates).
Next, a truncated SVD (i.e. Principal Component Analysis) is computed to bring the data down from 30k dimensions to ~100 dimensions, which also serves as noise reduction.
This is a great starting point for downstream analysis, partly because the data set is now much smaller.
UMAP and t-SNE visualization are supported using external packages, and Force Layout (also known as SPRING plot) support is built-in.
As the name indicates, SingleCellProjections.jl is also built for projections.
A common use case is that there is a good, well-annotated reference data set (e.g. healthy cells) that the user wants to relate their own, newly generated data set (e.g. cancer cells) to.
By projecting the new data onto the reference data set, similarities and differences can be interpreted in terms of the reference data set.
Projection is here used in a broad sense, describing all steps after loading the raw count data until the analysis is done.
Computing the projection in SingleCellProjections.jl is very simple:
```julia
new_data = load_counts(filepaths)
project(new_data, base)
```
Under the hood, SingleCellProjections.jl has stored a `ProjectionModel` for each analysis step that was applied to the base data set, with all the information needed to compute the projection for that step, and applies the models one by one to project the new data.
Note that projecting is rarely the same as running the same analysis step independently, as the model is built from the source data.
This can be subtle and easy to forget, but the simple interface hides this complexity from the user, minimizing the risk for mistakes.
At the same time, it is easy to customize some steps if needed.
The key to performance and low memory usage in SingleCellProjections.jl is to never store large, dense matrices.
Instead, matrix expressions are created and manipulated to implicitly represent the same information internally.
As a motivating example, consider a sparse matrix `S` and let `A := S - m1ᵀ` be the matrix after mean-centering.
Here, `m` is a vector with the mean for each gene (variable) and `1` is a vector of ones.
SingleCellProjections.jl avoids materializing the large dense matrix `A` by working with the expression object directly.
This strategy generalizes to more advanced transforms and normalizations, yielding slightly more complicated matrix expressions, consisting of sparse and/or low-rank terms and factors.
Continuing the example from above, note that it is much more efficient to compute `AX` for some matrix `X` by distributing over the sum and evaluating `SX - m(1ᵀX)`, than to work with the materialized matrix `A`.
Randomized subspace iterations (Halko et al) are used to compute the truncated SVD (i.e. PCA), relying only on such matrix-matrix products.
To efficiently compute the matrix-matrix products, SingleCellProjections.jl internally solves a generalized Matrix Chain Multiplication problem, taking both size and structure of the matrices into account.
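The centering trick above can be checked with a minimal sketch (assumed for illustration; not the package's internal code): the implicit product `AX = SX - m(1ᵀX)` costs one sparse matmul plus a rank-1 correction, and agrees with the materialized dense computation.

```julia
using SparseArrays, LinearAlgebra, Statistics

S = sprand(1_000, 500, 0.05)   # sparse raw counts (genes × cells)
m = vec(mean(S, dims=2))       # per-gene means
X = randn(500, 10)             # tall-skinny matrix, e.g. from subspace iteration

# Implicit centered product: never forms the dense 1000×500
# centered matrix A = S - m*1ᵀ.
AX_implicit = S * X - m * sum(X, dims=1)

# Reference: materialize A densely (what SingleCellProjections.jl avoids).
AX_dense = (Matrix(S) .- m) * X

@assert AX_implicit ≈ AX_dense
```

At real data scales (30k genes × 500k cells) the dense route needs ~120GB just to hold `A`, while the implicit route touches only the sparse nonzeros, `m`, and the small row vector `1ᵀX`.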
Julia is very well suited for working with complicated, high-dimensional, biological data.
In particular, the ability to write both high level code and efficient low-level code has been immensely useful when implementing this package.
Reproducibility, which is very important for scientific analyses, is also greatly improved by Julia - in part by Manifests, but also by execution speed, since the user is more likely to rerun an analysis rather than loading some partial result from disk.
We hope that SingleCellProjections.jl is a good starting point for anyone who wants to perform Single Cell expression analyses and benefit from the Julia language and its ecosystem.
References:
Halko et al, "Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions"
false
https://pretalx.com/juliacon2023/talk/NPADF7/
https://pretalx.com/juliacon2023/talk/NPADF7/feedback/
32-G449 (Kiva)
BoF: Julia in health and medicine
Birds of a Feather (BoF)
2023-07-28T16:00:00-04:00
16:00
01:00
A casual, open-ended discussion for anyone interested in using Julia in health and medicine. In particular, we'll discuss strategies for growing and strengthening the Julia health and medicine community.
juliacon2023-26937-bof-julia-in-health-and-medicine
Biology and Medicine
Dilum AluthgeJacob Zelko
en
The goal of this session is to gather individuals interested in discussing and promoting the use of Julia in health and medicine. The Birds of a Feather (BoF) will be a casual, open-ended discussion for anyone interested in utilizing Julia to improve healthcare outcomes. This includes healthcare professionals, Julia developers, and anyone else interested in using technology to improve healthcare and the JuliaHealth ecosystem.
During the BoF, we will focus on strategies for growing and strengthening the JuliaHealth community. This includes discussing the JuliaHealth ecosystem, an initiative aimed at promoting the use of Julia in healthcare and medicine, and identifying ways to increase its visibility and reach within the healthcare community, as well as highlighting the existing needs in healthcare that JuliaHealth could help address.
We will also foster the exchange of best practices, tips, and thoughts on how to use Julia in healthcare. This could include discussing specific Julia packages or tools that are particularly useful in healthcare, as well as sharing examples of how Julia is currently being used in healthcare, such as in medical imaging, electronic health records, and/or clinical decision support systems.
Overall, the BoF will provide a valuable opportunity for individuals in the Julia and healthcare communities to connect, share knowledge, and collaborate on ways to improve healthcare with Julia.
false
https://pretalx.com/juliacon2023/talk/SU9FGK/
https://pretalx.com/juliacon2023/talk/SU9FGK/feedback/
32-G449 (Kiva)
Day 3 Evening Hacking/Social
Social hour
2023-07-28T17:30:00-04:00
17:30
05:30
Come hang out in the evening after talks for some friendly hacking and social time!
juliacon2023-28179-day-3-evening-hacking-social
JuliaCon
en
Come hang out in the evening after talks for some friendly hacking and social time! Get advice, find new collaborators, or just enjoy the Julia atmosphere.
false
https://pretalx.com/juliacon2023/talk/BGGH3H/
https://pretalx.com/juliacon2023/talk/BGGH3H/feedback/
32-D463 (Star)
Morning Break Day 3 Room 4
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28140-morning-break-day-3-room-4
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/RZAWKF/
https://pretalx.com/juliacon2023/talk/RZAWKF/feedback/
32-D463 (Star)
Tenet.jl: Composable Tensor Network Library
Lightning talk
2023-07-28T10:10:00-04:00
10:10
00:10
In this talk, I present a collection of Julia packages developed for Tensor Network simulation experiments ([Tenet.jl](https://github.com/bsc-quantic/Tenet.jl) and [EinExprs.jl](https://github.com/bsc-quantic/EinExprs.jl)). We examine which Julia features and design choices enabled us to offer an intuitive interface for users, increasing the tunability and flexibility without loss of performance.
juliacon2023-26924-tenet-jl-composable-tensor-network-library
Quantum
Sergio Sánchez Ramírez
en
In this talk, I present the Julia library ecosystem that we have developed at the Barcelona Supercomputing Center for large-scale tensor network simulations. Specifically, I present:
- [**Tenet.jl**](https://github.com/bsc-quantic/Tenet.jl), a composable Tensor Network library that gives users tunable execution. Its design has been carefully crafted to provide great expressiveness, flexibility and performance.
- [**EinExprs.jl**](https://github.com/bsc-quantic/EinExprs.jl), a contraction path search library that offers state-of-the-art heuristics, visualization utilities and optimizers. It powers Tenet, but the constructions it introduces can be of use in other libraries.
The talk includes code examples and introductions to the topics for listeners outside the field. I will give an example of the expressive power of **Tenet** and **EinExprs** by showing how Google's quantum ~supremacy~ advantage experiment can be recreated in <15 lines of code.
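To see why a contraction path search such as EinExprs' matters, consider the simplest possible case, a chain of three matrices; the dimensions below are our own illustrative choice:

```julia
# Multiplying a p×q matrix by a q×r matrix costs p*q*r scalar multiplications.
cost(p, q, r) = p * q * r

# A: 10×1000, B: 1000×10, C: 10×1000
left  = cost(10, 1000, 10) + cost(10, 10, 1000)      # order ((A*B)*C)
right = cost(1000, 10, 1000) + cost(10, 1000, 1000)  # order (A*(B*C))
println((left, right))   # (200000, 20000000): a 100× difference from ordering alone
```

A contraction path optimizer generalizes this search from matrix chains to arbitrary tensor networks, where the number of possible orders grows combinatorially with the network size.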
false
https://pretalx.com/juliacon2023/talk/TC7TVY/
https://pretalx.com/juliacon2023/talk/TC7TVY/feedback/
32-D463 (Star)
BiosimMD.jl: Fast and Versatile Molecular Dynamics on CPU
Lightning talk
2023-07-28T10:20:00-04:00
10:20
00:10
We have developed BiosimMD.jl, a package for performing molecular dynamics (MD) simulations significantly faster than state-of-the-art engines. We present performance benchmarks of the package and demonstrate its versatility. The package allows scientists to implement novel methods for MD without compromising the speed of simulation. We also discuss aspects of Julia critical to BiosimMD’s development, including access to many levels of computational abstraction, metaprogramming, and ease of multi-threading.
juliacon2023-26998-biosimmd-jl-fast-and-versatile-molecular-dynamics-on-cpu
Biology and Medicine
Hayk Saribekyan
en
Molecular dynamics (MD) is a computational method for simulating the microscopic motions of atoms in a system, in order to understand the system's macroscopic properties. One of the main uses of MD is in computational drug discovery, where simulating a target protein with a candidate drug (ligand) allows one to predict their binding affinity before conducting expensive wet-lab experiments. MD computations are by no means cheap: to get meaningful results, multi-day simulations on high-performance machines are often necessary. For that reason, most popular MD engines are written and performance-engineered in C++ in order to make the best use of the hardware.
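The time-stepping loop at the heart of a typical MD engine can be sketched in a few lines; this toy velocity-Verlet integrator for a one-dimensional harmonic "atom" is our own illustration, not BiosimMD.jl's actual interface:

```julia
# One velocity-Verlet step: advance positions, recompute forces, advance velocities.
function verlet_step!(x, v, a, dt, accel!)
    @. x += v * dt + 0.5 * a * dt^2
    a_old = copy(a)
    accel!(a, x)                         # forces at the new positions
    @. v += 0.5 * (a_old + a) * dt
    return nothing
end

accel!(a, x) = (@. a = -x; nothing)      # unit-mass harmonic force F = -x

x, v = [1.0], [0.0]
a = similar(x); accel!(a, x)
dt = 1e-3
for _ in 1:round(Int, 2π / dt)           # integrate one oscillation period
    verlet_step!(x, v, a, dt, accel!)
end
println(x[1])                            # ≈ 1.0: back to the start, energy conserved
```

A production engine spends almost all of its time in the force computation (`accel!` here), which is where SIMD, neighbor lists, and multi-threading pay off.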
BiosimMD.jl is wholly written in Julia, which made it possible to match and surpass the performance of state-of-the-art C++ engines. The package makes extensive use of the ability to access low-level compiler and CPU features, e.g., SIMD operations. At the same time, the ability to test new algorithms at a high level made it possible to quickly research and prototype the best-performing code. The package also relies heavily on Julia’s metaprogramming, greatly reducing the size and complexity of the codebase.
The design of BiosimMD.jl enables users to implement custom MD methods without compromising simulation speed or modifying BiosimMD’s internal code. One example is the use of machine learning on the fly during the simulation. A challenge in MD is the design of high-quality force fields, the sets of parameters that determine how the atoms move in a simulation. While there are good off-the-shelf force fields for proteins, that is not the case for ligands (candidate drugs). We have implemented a package, MoleculeMD.jl, which integrates with BiosimMD.jl in a few lines of code, providing a machine-learnt force field trained on accurate and extensive quantum chemistry data. BiosimMD.jl provides an interface for implementing a wide array of similar techniques.
Currently, BiosimMD.jl is implemented and optimized for Intel and AMD CPUs. In the future, a GPU implementation will also be available. There is also an accompanying package to import molecular systems from popular MD engines such as GROMACS and OpenMM.
false
https://pretalx.com/juliacon2023/talk/RQ7BJZ/
https://pretalx.com/juliacon2023/talk/RQ7BJZ/feedback/
32-D463 (Star)
Surrogate-Assisted Multi-Objective Optimization with Constraints
Lightning talk
2023-07-28T10:30:00-04:00
10:30
00:10
We present the key ideas for finding first-order critical points of multi-objective optimization problems with nonlinear objectives and constraints. A gradient-based trust-region algorithm is modified to employ local, derivative-free surrogate models instead, and a so-called Filter ensures convergence towards feasibility. We show results of a prototype implementation in Julia, relying heavily on JuMP and suitable LP or QP solvers, which confirm that surrogates reduce the number of true function calls.
juliacon2023-26961-surrogate-assisted-multi-objective-optimization-with-constraints
JuliaCon
/media/juliacon2023/submissions/8FAGEC/session_image_Pl1tBCL.png
Manuel Berkemeier
en
The presentation is aimed at anyone with a basic understanding of gradient-based optimization or with an interest in trust-region methods and surrogate modeling. That is because many of the concepts in our multi-objective setting are generalizations of well-known single-objective counterparts.
As in the single-objective case, optimization problems with multiple objectives and nonlinear constraint functions arise in many areas of the natural sciences, engineering, and economics. Whereas trust-region ideas allow for a relatively straightforward modeling of objective functions to design derivative-free descent algorithms, they cannot handle “difficult” constraint functions without modifications. But the composite-step approach and the Filter mechanism can be adapted from single-objective optimization, enabling the same modeling techniques for constraints as well. It then becomes possible to use suitable, “fully linear” models, such as linear polynomials or radial basis function models, to achieve convergence to first-order critical points without gradient queries for the true functions.
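A toy version of such a derivative-free surrogate is a radial basis function (RBF) interpolant of a black-box objective built from a handful of samples; the Gaussian kernel and all names below are our own choices, not the prototype's API:

```julia
using LinearAlgebra

φ(r) = exp(-r^2)                                 # Gaussian radial basis

function rbf_fit(X::Vector{Float64}, y::Vector{Float64})
    Φ = [φ(abs(xi - xj)) for xi in X, xj in X]   # interpolation matrix
    return Φ \ y                                 # interpolation weights
end

rbf_eval(X, w, x) = sum(w[j] * φ(abs(x - X[j])) for j in eachindex(X))

f(x) = sin(3x) + x^2                             # pretend this is expensive
X = collect(range(-1.0, 1.0; length=7))          # samples inside the trust region
w = rbf_fit(X, f.(X))

# The surrogate reproduces f exactly at the sample points:
println(maximum(abs(rbf_eval(X, w, x) - f(x)) for x in X))
```

The trust-region algorithm optimizes such a cheap model inside the current region and only queries the true functions to accept or reject the resulting step.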
In our pre-print (https://arxiv.org/abs/2208.12094), we refer to a prototype implementation in a Pluto notebook. This proof-of-concept uses JuMP and COSMO to find the inexact steps. Moreover, we currently use NLopt for nonlinear restoration and Makie for plotting. Our main goal, at the moment, is to package the algorithm and integrate it with the SciML ecosystem for nonlinear optimization. We hope to benefit from existing solutions, such as automatic differentiation or symbolic manipulations. For example, we currently investigate the possibility of only modeling subcomponents of a symbolically defined function. Of course, we hope to be able to provide a glimpse of this, too.
false
https://pretalx.com/juliacon2023/talk/8FAGEC/
https://pretalx.com/juliacon2023/talk/8FAGEC/feedback/
32-D463 (Star)
LotteryTickets.jl: Sparsify your Flux Models
Lightning talk
2023-07-28T10:40:00-04:00
10:40
00:10
We present `LotteryTickets.jl`, a library for finding *lottery tickets* in deep neural networks: pruned, sparse sub-networks that retain much of the performance of their fully parameterized architectures. `LotteryTickets.jl` provides prunable wrappers for all `Flux.jl` defined layers as well as an easy macro for making a predefined Flux model prunable.
juliacon2023-24151-lotterytickets-jl-sparsify-your-flux-models
JuliaCon
Marco Cognetta
en
Roughly, the *lottery ticket hypothesis* says that only a small fraction of parameters in deep neural models are responsible for most of the model's performance. Further, if you were to initialize a network with just these parameters, that network would converge much more quickly than the fully parameterized model. We call the parameters of such subnetworks *winning lottery tickets*. A straightforward way to search for lottery tickets is to iteratively train, prune, and reinitialize a model until the performance begins to suffer.
We introduce `LotteryTickets.jl`, a forthcoming Julia library for pruning Flux models in order to find *winning lottery tickets*. `LotteryTickets.jl` provides wrappers for Flux layers so that one can define a normal Flux model and then prune it to recover the lottery tickets. All of the layers defined in Flux are supported, and defining prunable wrappers for custom Flux layers is straightforward.
In addition to a brief primer on model sparsification, this talk will discuss the main interface for `LotteryTickets.jl`, the key implementation choices, and an example of training and pruning a model end-to-end, even in the presence of custom Flux layers.
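A stripped-down sketch of one magnitude-pruning round on a single weight matrix (the function name is our own; the library wraps this idea at the layer level):

```julia
# Build a binary mask that zeroes the `fraction` smallest-magnitude weights.
function prune_mask(W::AbstractArray, fraction::Real)
    k = floor(Int, fraction * length(W))    # number of weights to drop (0 < fraction < 1)
    threshold = sort!(abs.(vec(W)))[k]      # k-th smallest magnitude
    return abs.(W) .> threshold             # true where the weight survives
end

W = [0.9 -0.05 0.3; -0.01 0.7 -0.2]
mask = prune_mask(W, 0.5)      # prune the 50% smallest weights
W .*= mask                     # sparsified layer
println(count(!iszero, W))     # 3 of the 6 weights survive
```

In iterative magnitude pruning, this train-prune step repeats, with the surviving weights reset to their original initialization, until performance degrades.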
false
https://pretalx.com/juliacon2023/talk/J7VRPB/
https://pretalx.com/juliacon2023/talk/J7VRPB/feedback/
32-D463 (Star)
An optimization package for constrained nonlinear least-squares
Lightning talk
2023-07-28T10:50:00-04:00
10:50
00:10
The ENLSIP algorithm is designed to solve nonlinear least-squares problems under nonlinear constraints. Implemented in Fortran77, it has been successfully used for decades by Hydro-Québec, the main electricity supplier for the province of Quebec in Canada, to calibrate its short-term electricity demand forecast models. A Julia conversion has been developed to improve the reliability and readability of the original code. We now present it as an open-source Julia numerical optimization package.
juliacon2023-25648-an-optimization-package-for-constrained-nonlinear-least-squares
JuliaCon
Pierre Borie
en
ENLSIP, which stands for Easy Nonlinear Least-Squares Inequality Program and is available at https://plato.asu.edu/sub/nonlsq.html#lsqres, is the name of an optimization algorithm and open-source Fortran77 library, developed and released in the 1980s, that solves nonlinear least-squares problems under nonlinear constraints using a Gauss-Newton-type method.
This library has been successfully used for decades by Hydro-Québec, the main electricity supplier for the province of Quebec in Canada, to calibrate its short-term electricity demand forecast models, also coded in Fortran. Since Hydro-Québec is starting to switch from Fortran to Julia, and because its systems are used in a very critical context, the first goal of this transition is to ensure that the replacement Julia version reproduces the results given by the original Fortran version. The Julia conversion of the above-mentioned ENLSIP library is part of this process. Comparisons of results and performance on operational Hydro-Québec optimization problems have been performed using a Julia-Fortran interface and show very encouraging results, which leads us to think that the current version of our implementation can be published as a Julia package.
We recognize that this algorithm does not benefit from state-of-the-art least-squares optimization improvements, but we think it can still be relevant today. Indeed, its scope remains very general, covering nonlinearity and nonconvexity of both the constraints and the objective function. This category of least-squares problems is seldom mentioned in the literature compared to others, such as unconstrained linear cases. The second part of this project is to improve the optimization method, and we hope the release of this package and its eventual use by the community can help us gather improvements useful to all.
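To make the method concrete, here is a hedged sketch of the Gauss-Newton iteration at the core of ENLSIP-type solvers, shown on an unconstrained toy residual (the full algorithm adds an active-set treatment of the nonlinear constraints, which we omit):

```julia
using LinearAlgebra

r(p) = [10 * (p[2] - p[1]^2), 1 - p[1]]     # Rosenbrock residuals; minimum at (1, 1)
J(p) = [-20*p[1] 10.0; -1.0 0.0]            # Jacobian ∂r/∂p

function gauss_newton(p; iters=10, tol=1e-10)
    for _ in 1:iters
        Jp = J(p)
        p = p - (Jp' * Jp) \ (Jp' * r(p))   # Gauss-Newton step via normal equations
        norm(r(p)) < tol && break
    end
    return p
end

println(gauss_newton([-1.2, 1.0]))          # ≈ [1.0, 1.0]
```

Because the Hessian of the residual sum is approximated by `J'J`, only first derivatives of the residuals are needed, which is what makes the approach attractive for calibration problems.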
false
https://pretalx.com/juliacon2023/talk/RHTLPD/
https://pretalx.com/juliacon2023/talk/RHTLPD/feedback/
32-D463 (Star)
FastOPInterpolation.jl
Lightning talk
2023-07-28T11:00:00-04:00
11:00
00:10
We present the package [FastOPInterpolation.jl](https://github.com/NicolasW1/FastOPInterpolation.jl). It provides interpolation on arbitrary tensor product domains. The main goal is fast, repeated evaluation at fixed order via forward recursion, and fast reinterpolation of updated or new functions. The implemented domains include lines, disks, and triangles; extension to other domains is straightforward. It was originally developed with integral equations on product spaces in mind.
juliacon2023-25339-fastopinterpolation-jl
JuliaCon
Nicolas Wink
en
The package [FastOPInterpolation.jl](https://github.com/NicolasW1/FastOPInterpolation.jl) provides interpolation on an arbitrary number of tensor product domains. For example, a prism as a product of a line and a triangle, a cylinder as a product of a line and a disk, or simply hypercubes as products of lines.
The interpolation is based on nodal values using a lazy Kronecker product of Vandermonde matrices. The evaluation utilizes forward recurrence relations of orthogonal polynomials on the elementary domains. This provides numerical stability and speed while being entirely non-allocating. Additionally, the forward recurrence structure allows for an easy extension to other domains, provided a set of orthogonal polynomials together with their recurrence relation is known.
The initial version covers Jacobi polynomials on the line (with Chebyshev and Legendre polynomials as special cases), with evaluation nodes provided for arbitrary polynomial order; the triangle with Koornwinder IV polynomials (nodes up to order 18); and the disk with Koornwinder II polynomials, which include the Zernike polynomials as a special case (nodes up to order 30).
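The forward recurrence that makes such evaluations fast and allocation-free can be illustrated with the Chebyshev special case; this standalone sketch is ours, not the package's API:

```julia
# Evaluate the Chebyshev polynomial T_n(x) by the three-term forward recurrence,
# with no trigonometric calls and no allocations.
function chebyshev(n::Integer, x::Real)
    n == 0 && return one(x)
    Tprev, T = one(x), x
    for _ in 2:n
        Tprev, T = T, 2x * T - Tprev    # T_{k+1}(x) = 2x·T_k(x) - T_{k-1}(x)
    end
    return T
end

x = 0.3
println(chebyshev(5, x) ≈ cos(5 * acos(x)))   # true: matches the closed form
```

The same pattern applies to any orthogonal polynomial family once its recurrence coefficients are known, which is what makes adding new domains straightforward.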
The package was originally developed with applications to integral and integral-differential equations of several functions on tensor product domains in mind.
false
https://pretalx.com/juliacon2023/talk/W78W98/
https://pretalx.com/juliacon2023/talk/W78W98/feedback/
32-D463 (Star)
The role of (un)knowns in Julia's UDE modeling
Lightning talk
2023-07-28T11:20:00-04:00
11:20
00:10
In the context of the famous quote “It ain’t what you don’t know that gets you in trouble. It’s what you know for sure that just ain’t so”, attributed to Mark Twain, we explore the magnitude of the trouble you get into when you introduce pathological assumptions in scientific machine learning modeling. Considering Universal Differential Equations, we ask “what happens if our domain knowledge is incorrectly specified?” and answer it by showcasing the high interoperability of Julia.
juliacon2023-26868-the-role-of-un-knowns-in-julia-s-ude-modeling
SciML
Luca Reale
en
One of the most promising evolutions in Machine Learning and AI is the approximation and analysis of differential equation systems with deep learning, i.e. Neural Network Differential Equations. Julia’s SciML ecosystem introduced an effective way to model natural phenomena as dynamical systems with Universal Differential Equations (UDEs). The UDE framework enriches both classic and Neural Network differential equation modeling by combining an explicitly “known” term (which comes from our scientific knowledge of the natural phenomenon we are studying) with an “unknown” term (what remains to be discovered). Researchers are looking at how noise in the data observations, irregular observations, or inherent variability affect the performance of these techniques. This implicitly assumes that what we think about the natural phenomenon is correctly expressed in the known term. In this talk, instead, we are going to use Julia’s SciML ecosystem to study how an erroneous or partial understanding of a phenomenon (a perturbation of the known term) impacts our ability to discover the unknown component.
Within a UDE framework, the unknown terms, and therefore the overall functional forms of the dynamical system, are learned from the observational data by fitting a Neural Network. Methods such as sparse identification of non-linear dynamics (SINDy) can then be applied to simplify the fitted neural network and improve the performance of our model. We focus instead on the impact of possible pathologies in the design of a UDE system, and in particular, on possible errors we introduce in the expression of the known term. We pose the question, “what happens if our domain’s knowledge is incorrectly specified?”. In the context of the famous quote “It ain’t what you don’t know that gets you in trouble. It’s what you know for sure that just ain’t so” attributed to Mark Twain, we explore the magnitude of the trouble you get into.
### Mathematical Setting
More in detail, for a set of variables X, we consider a dynamical system of the form
`dX/dt=F(X)=K(X)+U(X)`.
In the following, we consider `K(X)` as known (e.g., from domain knowledge), and `U(X)` as the “unknown” component we wish to discover.
We sample observational data from `X(t)` at various points in time.
Let `Kp(X)` be a perturbed version of K, that is, an erroneous specification of the domain knowledge. We try to recover `F(X)` from the data by optimizing a UDE of the form
`dX/dt=Kp(X)+NN(X)`.
For example, choosing `K(x)=sin(x)` we sample data from the system `dX/dt=sin(x)+exp(x)` and we try to recover it from a UDE such as `dX/dt=cos(x)+NN(X)`, where `Kp(x)=sin(x + π/2)`.
In this setting, we ask: how strongly can we misspecify `K(X)` and still recover the functional form of `F(X)`?
We study the problem within Julia’s SciML framework, exploiting its capacity to interoperate with symbolic computation systems such as Symbolics.jl, and its high performance, which lets us explore a large space of original and perturbed functions.
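An NN-free illustration of this setting (the perturbation size 0.1 below is our own choice): the residual that the network term must absorb is `dX/dt - Kp(X)`, so any misspecification of the known term leaks directly into what is "discovered".

```julia
K(x)  = sin(x)                  # true known term
U(x)  = exp(x)                  # true unknown term
Kp(x) = sin(x + 0.1)            # perturbed "domain knowledge"

xs = range(-1.0, 1.0; length=201)
dXdt = K.(xs) .+ U.(xs)                      # exact right-hand side along the samples
target = dXdt .- Kp.(xs)                     # what NN(X) would have to fit
println(maximum(abs.(target .- U.(xs))))     # ≈ 0.1: the perturbation contaminates NN
```

Any symbolic regression step applied afterwards (e.g., SINDy-style simplification) therefore recovers a distorted `U`, which is exactly the effect the talk quantifies.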
### Future Development
The preliminary results raise interesting questions about the presence of undetectable domain knowledge errors.
Our talk will interest both people who study UDEs, through our cautionary and surprising results, and the wider audience interested in the use of Julia in mathematical modeling, through the encouraging examples of interoperability we present.
The talk will present how Julia helped us in this experimental mathematical exercise, and offer many opportunities for further investigations.
The presentation will be as light as possible on the mathematical side, will present ample examples of how the interoperability of Julia helped our analysis, and will assume little or no prior knowledge of UDEs. Graphs and examples will also be used to aid understanding of the topic.
false
https://pretalx.com/juliacon2023/talk/WQVFEE/
https://pretalx.com/juliacon2023/talk/WQVFEE/feedback/
32-D463 (Star)
Differentiable isospectral flows for matrix diagonalization
Lightning talk
2023-07-28T11:30:00-04:00
11:30
00:10
In this talk, we present a differentiable Julia implementation of eigenvalue algorithms based on isospectral flows, i.e., matrix systems of ordinary differential equations (ODEs) that continuously drive Hermitian matrices toward a diagonal steady state. We discuss different options for suitable ODE solvers as well as methods for computing sensitivities, and showcase applications in quantum many-body physics.
juliacon2023-26950-differentiable-isospectral-flows-for-matrix-diagonalization
SciML
/media/juliacon2023/submissions/FJXRMJ/differentiable_isospectral_flows_dI5hI2a.png
Julian Arnold
en
One of the most important problems in numerical analysis is the design of algorithms for solving the eigenvalue problem, i.e., to find the eigenvalues and eigenvectors of a matrix. For a Hermitian matrix, this amounts to constructing a unitary transformation that brings it into a diagonal form. Traditional iterative algorithms, such as the Jacobi eigenvalue algorithm or the QR algorithm, accomplish this in a series of discrete steps. In the continuum limit, the unitary transformation is instead performed in a continuous fashion, as described by isospectral flows. Solving the ODEs modeling such an isospectral flow numerically in turn corresponds to an iterative eigenvalue algorithm. The QR algorithm, for example, emerges when sampling a particular isospectral flow at unit intervals [1].
Eigenvalue algorithms based on continuous isospectral flows have proven particularly useful for studying closed quantum many-body systems whose properties are determined by their Hamiltonian - a Hermitian matrix whose dimension grows exponentially with the number of system constituents. In this case, the operator structure of typical physical Hamiltonians can be exploited to introduce approximations that enable the analysis of larger systems [2].
Formulating eigenvalue algorithms based on continuous isospectral flows in a differentiable manner allows for the efficient and accurate computation of derivatives of physical quantities of interest, such as (time-dependent) expectation values of observables, with respect to Hamiltonian parameters. The derivative information can be utilized to perform, e.g., parameter estimation, inverse Hamiltonian design, or sensitivity analysis. For the forward simulation of the ODEs, we utilize OrdinaryDiffEq.jl and custom implementations of solvers respecting the isospectrality of the flow. To compute derivatives we utilize the automatic differentiation (AD) framework of Julia, particularly SciMLSensitivity.jl, and define custom rules for forward- and reverse-mode AD tailored toward quantum many-body problems.
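A self-contained sketch of such a flow (plain explicit Euler here, not the structure-preserving solvers mentioned above): the Brockett double-bracket flow `dH/dt = [H, [H, N]]` drives a symmetric matrix toward a diagonal steady state while, up to integration error, preserving its spectrum.

```julia
using LinearAlgebra

comm(A, B) = A * B - B * A                   # matrix commutator [A, B]

H = [2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0]  # symmetric test matrix
N = Diagonal([1.0, 2.0, 3.0])                # fixes the ordering of the limit
λ = sort(eigvals(Symmetric(H)))              # reference spectrum: {3 - √3, 3, 3 + √3}

dt = 1e-3
for _ in 1:20_000                            # integrate to t = 20 with explicit Euler
    H .+= dt .* comm(H, comm(H, N))
end

println(round.(diag(H); digits=3))           # ≈ the eigenvalues, sorted to match N
println(norm(H - Diagonal(diag(H))))         # off-diagonal part has decayed
```

An isospectrality-respecting solver would instead update `H` by exact conjugations, eliminating the small spectral drift that plain Euler introduces.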
[1] U. Helmke and J. B. Moore, Optimization and Dynamical Systems, Communications and Control Engineering Series (Springer, 1994).
[2] S. Kehrein, The Flow Equation Approach to Many-Particle Systems, Springer Tracts in Modern Physics (Springer, 2006).
false
https://pretalx.com/juliacon2023/talk/FJXRMJ/
https://pretalx.com/juliacon2023/talk/FJXRMJ/feedback/
32-D463 (Star)
Differentiation of discontinuities in ODEs arising from dosing
Lightning talk
2023-07-28T11:40:00-04:00
11:40
00:10
In this talk, we present continuous-adjoint sensitivity methods for hybrid differential equations (i.e., ordinary or stochastic differential equations with callbacks) modeling explicit and implicit events. The methods are implemented in the SciMLSensitivity.jl package. As a concrete example, we consider the sensitivity analysis of dosing times in pharmacokinetic models. We discuss different options for the automatic differentiation backend.
juliacon2023-26802-differentiation-of-discontinuities-in-odes-arising-from-dosing
SciML
/media/juliacon2023/submissions/YRTLEC/ex1_ayXLUhb.png
Frank Schäfer
en
Sensitivity analysis, uncertainty quantification, and inverse design tasks typically involve computing a gradient with respect to a loss function modeling the objective in a computer program. Handling objectives that require the numerical simulation of a differential equation with discontinuities, such as in pharmacology applications involving drug dosing, is of great interest. In the forward simulation of an (ordinary or stochastic) differential equation, discontinuities can be implemented using callbacks. However, the computation of the derivatives can be challenging: Discrete sensitivity analysis techniques based on automatic differentiation (AD) packages may scale poorly with the number of parameters (in the case of forward-mode AD) or have a large memory footprint due to the caching of intermediate values (in the case of reverse-mode AD). Therefore, it is highly desirable to make continuous adjoints compatible with callbacks as well. In this talk, we present continuous-adjoint sensitivity methods for hybrid differential equations that model explicit and implicit events and are available within the SciMLSensitivity.jl package.
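As a concrete, heavily simplified picture of the forward problem (plain Euler; not SciMLSensitivity.jl's adjoint machinery): a one-compartment model `dC/dt = -k*C` where bolus doses enter as discrete jumps in the state, which is exactly the kind of event a callback implements.

```julia
# Euler integration of first-order elimination with bolus doses at fixed times.
function simulate(k, doses, dose_times; dt=1e-3, T=24.0)
    C, t, next = 0.0, 0.0, 1
    while t < T
        if next <= length(dose_times) && t >= dose_times[next]
            C += doses[next]            # the "callback": a discontinuous jump in C
            next += 1
        end
        C += dt * (-k * C)              # continuous elimination between doses
        t += dt
    end
    return C
end

println(simulate(0.1, [5.0, 5.0], [0.0, 12.0]))   # remaining amount at T = 24
```

The sensitivity question the talk addresses is how this output changes with the dose *times*, which is nontrivial precisely because the jumps make the trajectory non-smooth in those parameters.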
false
https://pretalx.com/juliacon2023/talk/YRTLEC/
https://pretalx.com/juliacon2023/talk/YRTLEC/feedback/
32-D463 (Star)
ML-Based Surrogate Modeling of Particle Accelerators with Julia
Lightning talk
2023-07-28T11:50:00-04:00
11:50
00:10
As physicists build ever more advanced particle accelerators, the corresponding simulation software demands more computational resources. Our experiment, IsoDAR, is no exception. To reduce the computational overhead of high-fidelity simulations, we used Julia to develop machine learning models that can, with reasonable accuracy, predict the behavior of a beam traversing our accelerator. These surrogate models have the potential to transform the way physicists design and optimize accelerators.
juliacon2023-27091-ml-based-surrogate-modeling-of-particle-accelerators-with-julia
SciML
/media/juliacon2023/submissions/ZJKYVS/isodar_fFRwBtk.png
Joshua Villarreal
en
The IsoDAR (Isotope Decay-At-Rest) experiment is a proposed source of neutrinos: light, electrically neutral fundamental particles with many properties that physics has yet to explain. IsoDAR creates neutrinos by irradiating a Li-7 target with a beam of protons. This beam of protons is proposed to carry an unprecedented 10 mA of current, made possible in part by the inclusion of a linear accelerator called a Radiofrequency Quadrupole (RFQ). Simulating the behavior of a beam as it traverses an RFQ of arbitrary design is nontrivial already, but once the beam current becomes as high as 10 mA, nonlinear space charge effects make these computations even more difficult. In response, we have used Julia to develop neural networks that can predict throughgoing beam dynamics accurately and quickly, orders of magnitude faster than traditional high-fidelity simulation. In this contribution, we present the current performance of such surrogate models, discuss their pitfalls in predicting less straightforward beam summary parameters, and highlight their utility for accelerator engineering and design optimization.
false
https://pretalx.com/juliacon2023/talk/ZJKYVS/
https://pretalx.com/juliacon2023/talk/ZJKYVS/feedback/
32-D463 (Star)
Accelerating Model Predictive Control with Machine Learning
Lightning talk
2023-07-28T12:00:00-04:00
12:00
00:10
We present a tool for user-friendly generation of machine learning surrogates that allows for fast and optimizer-free Model Predictive Control. Using a variety of practically-relevant examples, we demonstrate its utility to the workflow of a control engineer in the context of Julia simulation tools (such as ModelingToolkit.jl).
juliacon2023-26945-accelerating-model-predictive-control-with-machine-learning
SciML
Avinash Subramanian
en
Model Predictive Control (MPC) provides a powerful framework to control a dynamical system and is particularly useful for nonlinear systems that may have constraints on states as well as a general objective function. The concept is to use a prediction model of the future response of the system and then determine the control inputs that optimize the predicted trajectory. Thus, an (expensive) online optimizer is called at each time step, with the requirement that a solution be available before the next time step. The first of these control inputs is then passed on to the plant model (or actual system). An observer (such as a Kalman filter or its variants) is used to provide an estimate of the system state, which is fed back to the predictive controller for the next time step. The need for an online optimizer commonly limits the sample rate of the system or the length of the prediction horizon, which may make MPC intractable for nonlinear systems with fast dynamics. Furthermore, it necessitates verification of the optimizer, especially for safety-critical applications.
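A deliberately naive sketch of that loop for a scalar plant, where exhaustive search over a coarse input grid stands in for the online optimizer; all names and numbers here are our own:

```julia
plant_step(x, u) = 0.9x + u                  # simple stable scalar plant

# "Online optimizer": pick the input minimizing the predicted tracking cost,
# holding it constant over the horizon for simplicity.
function mpc_input(x, r; horizon=5, grid=-1.0:0.1:1.0)
    best_u, best_cost = 0.0, Inf
    for u in grid
        xk, cost = x, 0.0
        for _ in 1:horizon
            xk = plant_step(xk, u)
            cost += (xk - r)^2
        end
        if cost < best_cost
            best_u, best_cost = u, cost
        end
    end
    return best_u
end

function run_mpc(x, r; steps=30)
    for _ in 1:steps
        x = plant_step(x, mpc_input(x, r))   # apply only the first input, then re-solve
    end
    return x
end

println(run_mpc(0.0, 2.0))                   # settles near the set point r = 2
```

The surrogate approach described below replaces `mpc_input` with a trained network evaluating in microseconds, with no solver to verify at run time.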
This work presents an alternative approach for optimizer-free control that both results in a substantial speedup over online optimization and removes the need for optimizer verification. The method followed is to simulate the MPC controller for several sampled reference conditions as well as initial states and train a neural network-based surrogate to learn the control law from the simulated data. We discuss the philosophy followed and highlight the benefits and drawbacks of constructing a surrogate for the optimizer versus for both the optimizer and the observer. We apply and benchmark the tool developed to several practically-relevant problems arising from various industries such as aerospace and chemical engineering. We also demonstrate the ability of the surrogate controller to reject disturbances and deal with set-point changes. With these examples, we show 100-150x speed-ups with low training errors compared to online optimization.
Lastly, we present the use of the toolkit in a typical workflow of a control engineer starting from the development of a dynamic system model (e.g., using ModelingToolkit.jl) to the development of the MPC model and finally to surrogatization and analysis in order to highlight its use as an enabling technology especially for control of fast nonlinear systems.
false
https://pretalx.com/juliacon2023/talk/ZVWPTP/
https://pretalx.com/juliacon2023/talk/ZVWPTP/feedback/
32-D463 (Star)
Machine learning phase transitions: A probabilistic framework
Lightning talk
2023-07-28T12:10:00-04:00
12:10
00:10
In recent years, it has been extensively demonstrated that phase transitions can be detected from data by analyzing the output of neural networks (NNs) trained to solve specific classification problems. In this talk, we present a framework for the autonomous detection of phase transitions based on analytical solutions to these problems. We discuss the conditions that enable such approaches and showcase their computational advantage compared to NNs based on our Julia implementation.
juliacon2023-26987-machine-learning-phase-transitions-a-probabilistic-framework
SciML
/media/juliacon2023/submissions/9FD73V/ML_for_PT_Julia_Con_2023_7GLjleg.png
Julian Arnold
en
The identification of phase transitions and the classification of different phases of matter from data are among the most popular applications of machine learning (ML) in condensed matter physics. NN-based approaches have proven to be particularly powerful due to the ability of NNs to learn arbitrary functions. Many such approaches work by computing indicators of phase transitions from the output of NNs trained to solve specific classification problems.
The optimal solutions to these classification problems are given by Bayes classifiers that take into account the probability distributions underlying the physical system under consideration. We show that in many scenarios arising in (quantum) many-body physics, the Bayes optimal indicators can be well-approximated (or even computed exactly) based on readily available data by leveraging prior system knowledge. This constitutes an alternative approach to detecting phase transitions from data compared to NN-based classification. Here, we contrast these two approaches based on our Julia implementation.
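As a toy illustration of the Bayes-optimal idea (the two-Gaussian model below is invented for this sketch and is not one of the talk's physical systems): when the distributions underlying the two phases are known, the optimal classifier output is available in closed form, with no neural-network training.

```python
import math

# Hypothetical toy model: in phase I an observable x is drawn from
# N(-1, 0.5^2), in phase II from N(+1, 0.5^2).
def gauss(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_posterior_phase2(x, mu1=-1.0, mu2=1.0, sigma=0.5):
    """P(phase II | x) for equal priors, computed exactly from the model."""
    p1, p2 = gauss(x, mu1, sigma), gauss(x, mu2, sigma)
    return p2 / (p1 + p2)

# The posterior crosses 0.5 exactly at the decision boundary between phases;
# scanning such an indicator over a control parameter is the approach in miniature.
boundary = bayes_posterior_phase2(0.0)
deep_in_phase2 = bayes_posterior_phase2(2.0)
```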
false
https://pretalx.com/juliacon2023/talk/9FD73V/
https://pretalx.com/juliacon2023/talk/9FD73V/feedback/
32-D463 (Star)
Temporal Network analysis with Julia SciML and DotProductGraphs
Lightning talk
2023-07-28T12:20:00-04:00
12:20
00:10
Many complex networks, e.g. social networks, are temporal in nature, and modelling them is still a challenge. In this talk we'll show how our research group uses Julia's SciML ecosystem to understand and predict the temporal evolution of networks. In particular, we'll present a new package, DotProductGraphs.jl, which implements tools from the statistical theory of graph embeddings (RDPG) to model temporal networks as dynamical systems.
juliacon2023-26860-temporal-network-analysis-with-julia-sciml-and-dotproductgraphs
SciML
Connor Stirling Smith
en
**Introduction**
Complex networks can change over time as vertices and edges are added or removed. Modelling the temporal evolution of networks and predicting their structure is an open challenge across a wide variety of disciplines: from the study of ecological networks such as food webs, to predictions about the structure of economic networks; from the analysis of social networks, to modelling how our brain develops and adapts during our lives.
In their usual representation, networks are binary (an edge is either observed or not), sparse (each vertex is linked to a very small subset of the network), large (up to billions of nodes), and change through discrete rewiring events. These properties make them hard to handle with classic machine learning techniques and have barred the use of some mathematical modelling frameworks such as differential equations. In this talk, we show how we used Julia, and in particular the Scientific Machine Learning (SciML) framework, to model the temporal evolution of complex networks as continuous, multivariate dynamical systems from observational data. We took an approach cutting across different mathematical disciplines (machine learning, differential equations, and graph theory): this was possible largely thanks to the integration of packages like Graphs.jl (and the associated MetaGraphs.jl package) with the SciML ecosystem, e.g., DiffEqFlux.jl, via graph embeddings in metric spaces.
For this latter task, we will present a novel Julia package called [DotProductGraphs.jl](https://www.github.com/gvdr/DotProductGraphs.jl) to easily and efficiently compute graph embeddings, building on the interpretation of networks as Random Dot Product Graphs (RDPGs).
**Methodology**
1. To translate the discrete, high-dimensional dynamical system into a continuous, low-dimensional one, we rely on a network embedding technique. A network embedding maps the vertices of a network to points in a (low-dimensional) metric space. We adopt the well-studied Random Dot Product Graphs statistical model: the mapping is provided by a truncated Singular Value Decomposition of the network's adjacency matrix; to reconstruct the network, we use the fact that the probability of interaction between two vertices is given by the dot product of the points they map to. The decomposition of the adjacency matrices and their alignment is a computationally intensive step, which we tackle using the fast matrix algorithms available in Julia, implemented in our novel DotProductGraphs.jl package.
2. In the embedding framework, a discrete change in the network is modelled as the effect of a continuous displacement of the points in the metric space. Our goal, then, is to discover from the data (the network observed at various points in time) an adequate dynamical system capturing the laws governing the temporal evolution of the complex network. This is possible thanks to a pipeline that combines Neural or Universal ODEs with the identification of nonlinear dynamical systems, e.g., SINDy.
In general, each node may influence the temporal evolution of every other node in the network, and, if we are working in a space of dimension d with N nodes, this translates to a dynamical system with N*N*d variables. As networks often have thousands or millions of nodes, that number can be huge. In our talk we will discuss various strategies we adopted to tame the complexity of the dynamical system.
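The dot-product mechanism at the heart of the RDPG model can be sketched in a few lines of pure Python (the vertex positions below are invented for illustration; DotProductGraphs.jl obtains them from a truncated SVD of the adjacency matrix):

```python
import itertools
import random

# Toy RDPG sketch: each vertex is a point in R^2; the probability of an edge
# between two vertices is the dot product of their points.
positions = {
    "a": (0.9, 0.1),   # two vertices in one "community"
    "b": (0.8, 0.2),
    "c": (0.1, 0.9),   # one vertex in another community
}

def edge_prob(u, v):
    return sum(pu * pv for pu, pv in zip(positions[u], positions[v]))

# Sampling a network from the model: a discrete rewiring event then
# corresponds to a continuous displacement of the points.
random.seed(0)
graph = {(u, v) for u, v in itertools.combinations(positions, 2)
         if random.random() < edge_prob(u, v)}
```

Vertices whose embedded points are close have a high connection probability, which is what lets a continuous dynamical system on the points drive discrete changes in the graph.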
**Future Development**
We are now working on three fronts:
- we are scaling up the framework, so as to model larger networks (for example social networks, where being able to predict which links people might form in the future could be a tool to fight the growing problem of misinformation and disinformation);
- we are progressively moving from Neural Differential Equations to Universal Differential Equations, both to capture any preexisting knowledge of the network dynamical system, and to help with the training complexity;
- we are exploring data augmentation techniques to interpolate between the estimated embeddings, other embeddings, and other neural network architectures.
These three development directions constitute interesting challenges for the Julia practitioner interested in extending Julia's modeling abilities. We will discuss them in the talk and suggest how everyone can contribute.
All the code will be made available in a dedicated git repository and we are preparing a detailed publication to illustrate our approach.
false
https://pretalx.com/juliacon2023/talk/WGZK8E/
https://pretalx.com/juliacon2023/talk/WGZK8E/feedback/
32-D463 (Star)
Lunch Day 3 (Room 4)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
juliacon2023-28083-lunch-day-3-room-4-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside the venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/RMSBYV/
https://pretalx.com/juliacon2023/talk/RMSBYV/feedback/
32-D463 (Star)
A Julia framework for world dynamics modeling and simulation
Talk
2023-07-28T14:00:00-04:00
14:00
00:30
WorldDynamics.jl is a Julia package which aims to provide a modern framework to investigate Integrated Assessment Models (IAMs) of sustainable development, benefiting from Julia's ecosystem for scientific computing. Its goal is to allow users to easily use and adapt different IAMs, from World3 to recent proposals. In this talk, we present the motivations behind the package, its major goals and current functionalities, and outline our expectations for the following releases.
juliacon2023-26881-a-julia-framework-for-world-dynamics-modeling-and-simulation
SciML
/media/juliacon2023/submissions/CMAMLV/logo_MdWC42F.png
Pierluigi Crescenzi
en
In the early 1970s, Jay Forrester, a notable and pioneering researcher in several fields, designed the first system dynamics model intended to analyze the complex interactions between high-level subsystems of the world, in order to understand the evolution of a major variable in each subsystem (in particular, the population, the capital investment, the natural resources, the fraction of capital devoted to agriculture, and the pollution level variables). His model, called World2, was published in the 1971 book "World Dynamics" and subsequently evolved into the World3 model, described in the well-known 1972 book "The Limits to Growth" by Meadows et al. Since then, several similar models have been developed, such as the quite popular DICE model by Nobel laureate William Nordhaus; they are categorized as Integrated Assessment Models (IAMs), which can be roughly divided into two general classes: policy optimization and policy evaluation models. DICE is a policy optimization model in which, intuitively, we look for a set of parameters (that is, a policy) which maximizes a specific objective variable (or function). On the other hand, the WorldX models (which recently evolved into the Earth4All model) can be considered policy evaluation models, in which we choose a policy (that is, a set of parameter values) and analyze the behaviour of some major variables (or functions). Benefiting from Julia's ecosystem for scientific computing and referring to this classification, WorldDynamics.jl has been developed to support the construction of policy evaluation IAMs and has already been applied to the reproduction of several historical models.
WorldDynamics.jl leverages ModelingToolkit.jl's ability to compose differential-algebraic systems of equations. Indeed, an IAM is usually structured into several different subsystems whose variable interactions are modelled by differential-algebraic equations. As such, the goal of the model is to understand the evolution of some major variables. WorldDynamics.jl emphasizes this modular structure of IAMs by facilitating the coding of the systems of equations corresponding to the different subsystems and their composition by automatically deriving the connections among them (that is, their shared variables). By doing so, WorldDynamics.jl allows a designer to focus on a specific subsystem without necessarily knowing how another person is developing a different subsystem (interestingly, this seems to be how the well-known World3 model was described and, most likely, developed). Moreover, this approach easily allows the substitution of one system of equations (representing one subsystem) with another system of equations (still representing the same subsystem), provided that it correctly interacts with the other subsystems (intuitively, respecting the required input/output variable interface).
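To make the compositional idea concrete, here is a minimal, hypothetical sketch (plain functions and a forward-Euler loop, not WorldDynamics.jl's or ModelingToolkit.jl's actual API): two invented toy subsystems are connected purely through their shared state variables, so either one can be swapped out as long as it respects the shared-variable interface.

```python
# Toy subsystem 1: population growth damped by pollution (invented rates).
def population(state):
    return {"pop": 0.02 * state["pop"] - 0.01 * state["pol"] * state["pop"]}

# Toy subsystem 2: pollution generated by population, with natural decay.
def pollution(state):
    return {"pol": 0.005 * state["pop"] - 0.03 * state["pol"]}

def compose(*subsystems):
    """Merge subsystem derivatives; shared variables connect the parts."""
    def model(state):
        d = {}
        for sub in subsystems:
            for var, dv in sub(state).items():
                d[var] = d.get(var, 0.0) + dv
        return d
    return model

def euler(model, state, dt, steps):
    for _ in range(steps):
        d = model(state)
        state = {k: v + dt * d.get(k, 0.0) for k, v in state.items()}
    return state

world = compose(population, pollution)
final = euler(world, {"pop": 1.0, "pol": 0.0}, dt=0.5, steps=200)
```

Each subsystem author only needs to know which variables are shared, mirroring (in miniature) how the package derives connections between subsystems automatically.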
As an open-source package, WorldDynamics.jl also tries to democratize access to distinct models as well as to promote transparency among them. Even though most of the current models are freely available for reproduction, they are usually implemented using proprietary software, which prevents us from precisely verifying their internal operation and, hence, how the models are simulated exactly. By using Julia and its notable packages, WorldDynamics.jl provides a flexible framework that allows the usage of several solvers and different integration methods with reduced effort. Its current features include the implementation of the entire Club of Rome series of models, with the possibility of easily replicating all the plots of their major variables that appeared in the literature, and of changing parameter values and systems of equations in order to evaluate different policies. In conclusion, WorldDynamics.jl allows model construction in a simplified way while enabling the application of modern scientific computing techniques to new and classical models.
In this talk, we are going to present the WorldDynamics.jl package as a whole.
We start with the motivations behind its creation, followed by a historical overview of classic models. Then, we emphasize our main goals and the challenges encountered during the development process (including how we translated equations from the DYNAMO language), explain the modular approach for implementing IAMs, and describe all the currently implemented models. Finally, we conclude the talk by indicating future work and a roadmap for the next releases. The presentation is intended to give the audience a general explanation of WorldDynamics.jl's goals and capabilities. Along with this talk, we are also submitting a workshop proposal to interactively demonstrate the functionalities of WorldDynamics.jl with several hands-on examples.
false
https://pretalx.com/juliacon2023/talk/CMAMLV/
https://pretalx.com/juliacon2023/talk/CMAMLV/feedback/
32-D463 (Star)
Knowledge-Informed Learning in MagNav.jl for Magnetic Navigation
Talk
2023-07-28T14:30:00-04:00
14:30
00:30
The Earth’s crustal magnetic field is a powerful tool for navigation as an alternative to GPS. MagNav.jl is an open-source Julia package containing algorithms for both aeromagnetic compensation and navigation. Alongside baseline algorithms, such as Tolles-Lawson, this package enables multiple scientific machine learning approaches for compensation. This talk will cover some of these techniques and advanced use cases for MagNav.jl in navigation.
juliacon2023-26909-knowledge-informed-learning-in-magnav-jl-for-magnetic-navigation
SciML
Jonathan TaylorAllan WollaberAlbert R. Gnadt
en
Building on previous talks given at JuliaCon 2021 and 2022 that covered the basics of airborne magnetic navigation (MagNav) and MagNav.jl, this talk will expand on the development of scientific machine learning approaches enabled in its recent version 1.0 release. At a high level, MagNav.jl provides tooling to compensate for an aircraft's magnetic field, removing most of the corruption to enable comparing magnetometer readings with detailed maps of the Earth's magnetic field. It can then use the resultant signal, alongside other readings, as input to a navigation filter, such as an extended Kalman filter, in order to estimate position. Julia is integral to our research in this field due to its strengths in automatic differentiation, ease of neural network construction, and performance. This talk will showcase the ease with which classical compensation models (Tolles-Lawson) can be mixed with machine learning to reduce the required training data and enhance magnetic compensation accuracy.
false
https://pretalx.com/juliacon2023/talk/9AKT3R/
https://pretalx.com/juliacon2023/talk/9AKT3R/feedback/
32-D463 (Star)
Finite Element Modeling of Assets in Future Distribution Grids
Talk
2023-07-28T15:00:00-04:00
15:00
00:30
We describe a project-based assignment in the EE4375 master's course held at TU Delft. The assignment asks students to perform finite element simulations of the magnetic and thermal fields of power transformers in distribution grids. Students can choose to employ the codes developed earlier in the course or to resort to existing finite element packages. See github.com/ziolai/finite_element_electrical_engineering. The assignment is developed in collaboration with the local system administrator.
juliacon2023-27012-finite-element-modeling-of-assets-in-future-distribution-grids
SciML
/media/juliacon2023/submissions/WTSTWE/sc_normB_UHlIbuB.png
Domenico Lahaye
en
The operation of electrical distribution grids is changing drastically. The infeed of renewables and the loading by battery charging are causing unforeseen challenges. The finite element modeling of electrical power transformers is indispensable to chart the location of electromagnetic losses. This talk illustrates tools in Julia that make meeting this challenge possible.
false
https://pretalx.com/juliacon2023/talk/WTSTWE/
https://pretalx.com/juliacon2023/talk/WTSTWE/feedback/
32-D463 (Star)
On solving optimal control problems with Julia
Talk
2023-07-28T15:30:00-04:00
15:30
00:30
The numerical solution of optimal control of dynamical systems is a rich process that typically involves modelling, optimisation, differential equations (most notably Hamiltonian ones), nonlinear equations (*e. g.* for shooting), pathfollowing methods… and, at every step, automatic differentiation. We report on recent experiments with Julia on a variety of applied or more theoretical problems in optimal control and try to survey what the practitioner will find, and would like to find.
juliacon2023-26994-on-solving-optimal-control-problems-with-julia
SciML
/media/juliacon2023/submissions/HDC8F7/IMG_2361_cxAxRup.jpeg
Jean-Baptiste CaillauOlivier CotsGergaudPierre martinon
en
While tremendous efforts are being made in every one of the above-mentioned fields by the Julia community, there are still some missing parts needed to bridge the gap between them and obtain a fully satisfactory solving process, meaning: an abstract description, as close as possible to the mathematical problem formulation, and an efficient and reliable numerical computation. The Julia language has a lot to offer in this respect, and there are already excellent codes available [1, 2, 3, 4], though mostly oriented towards "direct solving" (that is, direct transcription of the original optimal control problem into a mathematical program).
As a group of researchers with a 20+ year history in the numerical solution of optimal control problems, we are very interested in what Julia offers out of the box as a high-level and efficient language. We have been involved in the development of several libraries in Fortran, C/C++, Matlab, and Python [5, 6], and in the associated use of automatic differentiation (with Adifor, Tapenade, CppAD…), and are eager to share our experience and get feedback from the Julia community. Several examples of the numerical solution of optimal control problems can be found in [7]; they include applications in biology, aerospace engineering, marine navigation, turnpike computation, and hybrid problems.
More at [control-toolbox.org](https://control-toolbox.org), including tutorials on [OptimalControl.jl](https://control-toolbox.org/docs/optimalcontrol/stable)
1. [ControlSystems](https://juliacontrol.github.io/ControlSystems.jl/dev/#JuliaControl)
2. [InfiniteOpt](https://github.com/infiniteopt/InfiniteOpt.jl)
3. [TrajectoryOptimization](http://roboticexplorationlab.org/TrajectoryOptimization.jl)
4. [Enzyme](https://enzyme.mit.edu/julia)
5. [Bocop](https://www.bocop.org)
6. [Hampath](http://www.hampath.org)
7. [ct: control toolbox gallery](https://ct.gitlabpages.inria.fr/gallery)
false
https://pretalx.com/juliacon2023/talk/HDC8F7/
https://pretalx.com/juliacon2023/talk/HDC8F7/feedback/
32-D463 (Star)
Immuno-Oncology QSP Modeling Using Open-Science Julia Solvers
Talk
2023-07-28T16:00:00-04:00
16:00
00:30
As Julia usage continues to grow within regulated biomedical environments, it is vital to ensure analyses are traceable and reproducible. Conducting analyses in an open-science manner is also critical to expand the adoption of Julia and to facilitate the infrastructure growth of Julia as an accessible ecosystem. A step-by-step model-building example of a classic monoclonal antibody-drug conjugate PBPK/tumor dynamics system illustrates how to develop such a reproducible open-science framework.
juliacon2023-27052-immuno-oncology-qsp-modeling-using-open-science-julia-solvers
SciML
Ahmed Elmokadem
en
The Julia ecosystem has been developed as a powerful open-source language, but it currently lacks some of the infrastructure typically required by regulatory agencies to be seen as a reproducible and dependable open-source platform for submission-quality work within the highly regulated biomedical field. For Julia to be a truly open-source platform for pharmacometric and QSP analyses, it needs infrastructure such as documentation of how the platform is "validated" (as R has done: https://www.r-project.org/doc/R-FDA.pdf), a package for routinely incorporating inputs like dosing events and other covariates, model libraries, general (traceable and reproducible) modeling workflows and best practices, model specification approaches, package testing, functional outputs, etc. The open-source community should not have to rely on commercial entities to supply such infrastructure, nor is an open-source language viable and healthy under those circumstances.
The authors have been working, with others in the field, to develop models and some of the aforementioned workflows and infrastructure to produce submission-quality analyses within the open-source and open-science Julia ecosystem. A classic set of monoclonal antibody-drug conjugate Physiologically Based PK (PBPK)/tumor dynamics system models has been implemented, parameterized, and exercised (through simulation) to illustrate such a reproducible workflow. The background behind the models and the Julia scripts used to develop, visualize, and process them will be presented as a step-by-step "how-to" guide for deploying a reproducible open-science Julia workflow. A GitHub page with the details and code behind this vignette will also be made public, in the spirit of knowledge sharing.
false
https://pretalx.com/juliacon2023/talk/PHRQZ9/
https://pretalx.com/juliacon2023/talk/PHRQZ9/feedback/
32-D463 (Star)
Working with spatial data in Julia
Lightning talk
2023-07-28T16:30:00-04:00
16:30
00:10
In this talk, various Julia packages for processing spatial data from the OpenStreetMap (OSM) project will be presented. OSM is an excellent source of information about the road system that can be explored with tools such as OpenStreetMapX.jl. However, OSM files also contain useful information about points of interest (restaurants, hotels, schools, stores, etc.) that can be extracted and analyzed in Julia.
juliacon2023-26740-working-with-spatial-data-in-julia
JuliaCon
Przemysław Szufel
en
The OpenStreetMapX.jl package is capable of parsing *.osm and *.pbf formatted data from the OpenStreetMap.org project. This data can subsequently be utilized to extract information about a city's POIs (points of interest, such as schools, hotels, restaurants, and tourist attractions), measure actual distances, perform routing, and build numerical simulation models that make it possible to understand the dynamics of a city. A new package currently under development, https://github.com/pszufe/OSMToolset.jl, is aimed at mass extraction of various types of POI data from OSM files as DataFrames for further processing with other tools from the Julia ecosystem. Additionally, OSM data contains links to other data sources such as Wikipedia and Wikimedia. This information can be processed in Julia and used to build various visualizations of the attractiveness of urban regions. This lightning talk will take the form of a Jupyter notebook with a live demo of various processing patterns for spatial data.
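The extraction pattern can be sketched as follows (a hypothetical miniature .osm document and a hand-rolled parser; this is not OSMToolset.jl's API, which additionally returns DataFrames and handles full-size files):

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature .osm document; real files follow the same
# node/tag structure, just at vastly larger scale.
osm = """<osm version="0.6">
  <node id="1" lat="52.23" lon="21.01">
    <tag k="amenity" v="restaurant"/><tag k="name" v="Pierogi Bar"/>
  </node>
  <node id="2" lat="52.24" lon="21.02">
    <tag k="amenity" v="school"/><tag k="name" v="SGH"/>
  </node>
  <node id="3" lat="52.25" lon="21.03"/>
</osm>"""

def extract_pois(xml_text, key="amenity"):
    """Return one row per node that carries the requested POI tag."""
    rows = []
    for node in ET.fromstring(xml_text).iter("node"):
        tags = {t.get("k"): t.get("v") for t in node.iter("tag")}
        if key in tags:
            rows.append({"id": node.get("id"),
                         "lat": float(node.get("lat")),
                         "lon": float(node.get("lon")),
                         key: tags[key],
                         "name": tags.get("name")})
    return rows

pois = extract_pois(osm)   # the untagged node is skipped
```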
The development of OSMToolset.jl was funded in whole by National Science Centre, Poland grant number 2021/41/B/HS4/03349. Presentation of this tutorial has been supported by the Polish National Agency for Academic Exchange under the Strategic Partnerships programme, grant number BPI/PST/2021/1/00069/U/00001
false
https://pretalx.com/juliacon2023/talk/AGYFPY/
https://pretalx.com/juliacon2023/talk/AGYFPY/feedback/
32-D463 (Star)
Graphical Displays for Understanding Mixed Models
Lightning talk
2023-07-28T16:40:00-04:00
16:40
00:10
Using MixedModels.jl and MixedModelsMakie.jl, I will show several different ways to visualize different aspects of the model fit as well as the model fitting process. I will focus especially on shrinkage (downward bias of the random effects relative to similar estimates from a classical OLS model) and how MixedModels.jl uses BOBYQA and a profiled log likelihood to efficiently explore the parameter space.
juliacon2023-26904-graphical-displays-for-understanding-mixed-models
JuliaCon
Phillip Alday
en
Visualization of mixed-effects models often focuses on the same plots as classical regression models: effects plots and diagnostic plots that largely ignore the complexity and subtlety introduced by random effects.
MixedModelsMakie.jl provides a shrinkage plot, which displays the change from classical OLS estimates to the conditional modes (random effects) for the block-level predictions.
In addition to demonstrating the concept of shrinkage, these displays also provide informative diagnostic information on random-effects structure.
For example, models with a degenerate random-effects structure, i.e. singular models, generally show the excess dimensionality quite clearly in shrinkage plots.
Shrinkage plots also provide a convenient way to visualize the tradeoffs of a restricted covariance structure: fewer parameters to optimize, but less efficient shrinkage.
MixedModels.jl also allows tracing of the optimization procedure, i.e. exploration of the parameter space.
We can take advantage of this trace to visualize and better understand the behavior of the optimizer and the challenges involved in fitting large or complex models.
For example, we can observe that optimization generally follows three phases: an initial phase of broad exploration of the parameter space, a phase of rapid convergence to the neighborhood of the optimum, and a final phase of fine-tuning of parameter estimates and verification.
In large models, the final phase tends to dominate, which has implications for a speed-accuracy tradeoff in certain applications.
Finally, we can also examine animation of shrinkage across the course of optimization.
The change in shrinkage is relevant as a practical implication for speed-accuracy tradeoffs in model fits and also serves to highlight how shrinkage -- like all regularization -- is an example of the bias-variance tradeoff.
For mixed models, this means a tradeoff between the observation-level variance and the between-group variance.
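The shrinkage phenomenon itself can be illustrated with a toy partial-pooling calculation (the variance components and data below are invented for this sketch; MixedModels.jl estimates them by optimizing a profiled log likelihood rather than taking them as given):

```python
# Group means are shrunk toward the grand mean; the smaller the group, the
# stronger the pull -- the downward bias relative to per-group OLS estimates
# that a shrinkage plot displays.
def shrunk_means(groups, sigma2_within=1.0, sigma2_between=1.0):
    all_obs = [x for obs in groups.values() for x in obs]
    grand = sum(all_obs) / len(all_obs)
    out = {}
    for name, obs in groups.items():
        n, mean = len(obs), sum(obs) / len(obs)
        # classic BLUP-style shrinkage weight: large groups keep their own mean
        w = sigma2_between / (sigma2_between + sigma2_within / n)
        out[name] = grand + w * (mean - grand)
    return out

# A 50-observation group barely moves; a 2-observation group is pulled hard.
est = shrunk_means({"big": [5.0] * 50, "small": [1.0] * 2})
```

The weight `w` also shows the bias-variance tradeoff directly: as the between-group variance shrinks relative to the within-group variance, every group estimate collapses toward the grand mean.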
false
https://pretalx.com/juliacon2023/talk/UKSPXU/
https://pretalx.com/juliacon2023/talk/UKSPXU/feedback/
32-D463 (Star)
cadCAD.jl: A Modeling Library for Generalized Dynamical Systems
Lightning talk
2023-07-28T16:50:00-04:00
16:50
00:10
This talk introduces cadCAD.jl, a high-performance open-source Julia library for modeling and simulating dynamical systems with generic attributes. Our goal with this talk is to show the main ideas behind the library, how it promotes open science, how we used Julia to achieve higher performance in comparison to its Python implementation, and how it would fit into a data science workflow, by running an example simulation.
juliacon2023-24980-cadcad-jl-a-modeling-library-for-generalized-dynamical-systems
JuliaCon
Emanuel Lima
en
This talk introduces cadCAD.jl, a high-performance open-source Julia library for modeling and simulating dynamical systems with generic attributes. Instead of evolving numbers over time, system engineers and data scientists can now evolve any kind of data structure in their dynamical systems simulations. With cadCAD.jl, models of these systems can yield:
1. all possible trajectories produced by transformations;
2. identification of properties associated with underlying assumptions;
3. iterated insights about the model.
Our goal with this talk is to show the main ideas behind the library, how it promotes open science, how we used Julia to achieve higher performance in comparison to its Python implementation, and how it would fit into a data science workflow, by running an example simulation.
false
https://pretalx.com/juliacon2023/talk/ERXTXE/
https://pretalx.com/juliacon2023/talk/ERXTXE/feedback/
32-141
Eigenvalues and condition numbers of random quasimatrices
Talk
2023-07-28T09:00:00-04:00
09:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35763-eigenvalues-and-condition-numbers-of-random-quasimatrices
ASE60
en
Alan first hit the headlines with his wonderful paper "Eigenvalues and condition numbers of random matrices". Here we explore what happens when, in one or both directions, the discrete random entries of a matrix are replaced by smooth random functions.
false
https://pretalx.com/juliacon2023/talk/S97ZGF/
https://pretalx.com/juliacon2023/talk/S97ZGF/feedback/
32-141
Optimizations with orthogonality constraints
Talk
2023-07-28T09:30:00-04:00
09:30
00:25
Full title: Optimizations with orthogonality constraints and eigenvector-dependent nonlinear eigenvalue problems. Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35764-optimizations-with-orthogonality-constraints
ASE60
en
Optimizations with orthogonality constraints are ubiquitous in scientific computing, such as finding projection matrices for dimensionality reduction in high dimensional data analysis and energy minimization for electronic structure calculations. Edelman, Arias and Smith's 1998 paper on the geometry of algorithms with orthogonality constraints is a seminal work that has a great influence in the fields of optimizations and numerical linear algebra over two decades. We will present recent development of optimization with orthogonality constraints from the perspective of algebraic nonlinear eigenvalue problems with eigenvector dependency (NEPv). This is a joint work with Ren-Cang Li of University of Texas, Arlington and Ding Lu of University of Kentucky.
false
https://pretalx.com/juliacon2023/talk/GPUAJ8/
https://pretalx.com/juliacon2023/talk/GPUAJ8/feedback/
32-141
Three Years of Computing with Alan
Talk
2023-07-28T10:00:00-04:00
10:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday
juliacon2023-35766-three-years-of-computing-with-alan
ASE60
en
Over the past three years, Alan has served as an informal advisor for me, meeting every Saturday without fail. (We rescheduled, reluctantly, for the birth of my son.) We've talked all things numerical linear algebra, which I will touch on in this talk. Our conversations have also influenced my other work — and have given me many stories about Alan, which I will share with you as well.
false
https://pretalx.com/juliacon2023/talk/CTHHQJ/
https://pretalx.com/juliacon2023/talk/CTHHQJ/feedback/
32-141
Sparsity: Practice-to-Theory-to-Practice
Talk
2023-07-28T11:00:00-04:00
11:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35767-sparsity-practice-to-theory-to-practice
ASE60
en
As we all know, the entire world of computation is mostly matrix multiplies. Within this universe we do allow some variation: specifically, all the world is mostly either dense matrix multiplies or sparse matrix multiplies. Sparse matrices are often used as a trick to solve larger problems by storing only non-zero values. As a result, there is a large toolkit of powerful sparse matrix software. The availability of sparse matrix tools inspires representing a wide range of problems as sparse matrices. Notably, graphs have many wonderful sparse matrix properties, and many graph algorithms can be written as matrix multiplies using a variety of semirings. This inspires developing new sparse matrix software that encompasses a wide range of semiring operations. In the context of graphs, where vertex labels are diverse, it is natural to relax strict dimension constraints and make hyper-sparse matrices a full-fledged member of the sparse matrix software world. The wide availability of hyper-sparse matrices allows addressing a wide range of problems and enables completely new approaches to parallel computing.
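The "graph algorithms as semiring matrix multiplies" idea can be sketched concretely (a toy dense implementation for clarity; real libraries such as GraphBLAS store only the nonzeros): one BFS wavefront step is a matrix-vector product over the boolean (OR, AND) semiring, and swapping in (min, +) would instead give a shortest-path relaxation step.

```python
from functools import reduce

def semiring_matvec(A, x, add, mul, zero):
    """y = A.x with the usual (+, *) replaced by a user-chosen semiring."""
    n = len(A)
    return [reduce(add, (mul(A[i][j], x[j]) for j in range(n)), zero)
            for i in range(n)]

# Adjacency matrix of a 4-node path graph 0-1-2-3.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

def step(x):
    # boolean semiring: "add" is OR, "multiply" is AND
    return semiring_matvec(A, x, add=lambda a, b: a or b,
                           mul=lambda a, b: a and b, zero=0)

frontier = [1, 0, 0, 0]      # BFS from node 0
one_hop = step(frontier)     # nodes reachable in one hop
two_hops = step(one_hop)     # nodes reachable in two hops
```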
false
https://pretalx.com/juliacon2023/talk/FVZXUF/
https://pretalx.com/juliacon2023/talk/FVZXUF/feedback/
32-141
Construction of Hierarchically SemiSeparable Mat. Representation
Talk
2023-07-28T11:30:00-04:00
11:30
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35768-construction-of-hierarchically-semiseparable-mat-representation
ASE60
en
We extend our earlier work on an adaptive, partially matrix-free Hierarchically Semi-Separable (HSS) matrix construction algorithm using Gaussian sketching operators to a broader class of Johnson-Lindenstrauss (JL) sketching operators. We present theoretical work which justifies this extension. In particular, we extend the earlier concentration bounds to all JL sketching operators and examine this bound for specific classes of such operators, including the original Gaussian sketching operators, the subsampled randomized Hadamard transform (SRHT), and the sparse Johnson-Lindenstrauss transform (SJLT). We demonstrate experimentally that using the SJLT instead of Gaussian sketching operators leads to 1.5-2.5x speedups of the HSS construction implementation in the STRUMPACK C++ library. The generalized algorithm allows users to select their own JL sketching operators, with theoretical lower bounds on the size of the operators, which may lead to faster runtimes with similar HSS construction accuracy.
false
https://pretalx.com/juliacon2023/talk/AFCL3Z/
https://pretalx.com/juliacon2023/talk/AFCL3Z/feedback/
32-141
Modeling and Duality in Domain Specific Languages
Talk
2023-07-28T12:00:00-04:00
12:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35769-modeling-and-duality-in-domain-specific-languages
ASE60
en
Domain specific languages (DSL) for mathematical optimization allow users to write problems in a natural algebraic format. However, what is considered natural can vary from user to user. For instance, JuMP’s interface makes a distinction between conic formulations and nonlinear programming formulations whose constraints are level-sets of nonlinear functions. Tradeoffs between such alternative modeling formats are further amplified when dual solutions are considered. In this talk we describe work related to these tradeoffs in JuMP. In particular, we consider modeling using a wide range of non-symmetric cones and their solution with Hypatia.jl.
false
https://pretalx.com/juliacon2023/talk/PSHBWQ/
https://pretalx.com/juliacon2023/talk/PSHBWQ/feedback/
32-141
Graphs, matrices, and programming: There and back again
Talk
2023-07-28T14:00:00-04:00
14:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35770-graphs-matrices-and-programming-there-and-back-again
ASE60
en
Matrices and graphs are in some sense the same thing. In applications, though, one often supports computation on the other, and the direction of the relationship swings back and forth. Ideas about how to write programs for matrices and graphs swing back and forth too. This talk will review a little bit of the back-and-forth, nodding to sparse matrix computation, graph analysis libraries, and models of parallel computation, including some moments when Alan's many contributions to numerical linear algebra, high-performance computing, and programming language design have influenced the speaker.
false
https://pretalx.com/juliacon2023/talk/UPWCQ3/
https://pretalx.com/juliacon2023/talk/UPWCQ3/feedback/
32-141
Alan, Julia and Climate
Talk
2023-07-28T14:30:00-04:00
14:30
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35771-alan-julia-and-climate
ASE60
en
This is a story of how Julia catalyzed a connection between computational and climate sciences at MIT. It was only a few years ago that a group of us from EAPS crossed campus to meet Alan’s group and discuss how to move climate modeling forward. We now have a numerical model of the ocean written in Julia that runs on GPUs and is nearly ten times faster than any other existing ocean model. The atmospheric model is also being completed. We have started teaching classes that introduce MIT students to Julia as a tool to study climate. All this was made possible at that first meeting and thanks to Alan's vision.
false
https://pretalx.com/juliacon2023/talk/MWEDQL/
https://pretalx.com/juliacon2023/talk/MWEDQL/feedback/
32-141
Hidden Structures in Shape Optimization Problems
Talk
2023-07-28T15:00:00-04:00
15:00
00:25
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35772-hidden-structures-in-shape-optimization-problems
ASE60
en
A variety of tasks in computer graphics and 3D modeling involve optimization problems whose variables encode a shape or geometric quantity. These problems can be extremely stiff, nonlinear, and strongly constrained. In this talk, I will share how hidden structures in shape optimization problems can lead to efficient and even convex formulations, enabling a variety of applications.
false
https://pretalx.com/juliacon2023/talk/LAB8MH/
https://pretalx.com/juliacon2023/talk/LAB8MH/feedback/
32-141
So you think you know how to take derivatives?
Talk
2023-07-28T16:30:00-04:00
16:30
00:30
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
juliacon2023-35773-so-you-think-you-know-how-to-take-derivatives-
ASE60
en
Derivatives are seen as the "easy" part of learning calculus: a few simple rules, and every function's derivatives are at your fingertips! But these basic techniques can turn bewildering if you are faced with much more complicated functions like a matrix determinant (what is a derivative "with respect to a matrix" anyway?), the solution of a differential equation, or a huge engineering calculation like a fluid simulation or a neural-network model. And needing such derivatives is increasingly common thanks to the growing prevalence of machine learning, large-scale optimization, and many other problems demanding sensitivity analysis of complex calculations. Although many techniques for generalizing and applying derivatives are known, that knowledge is currently scattered across a diverse literature, and requires students to put aside their memorized rules and re-learn what a derivative really is: linearization. In 2022 and 2023, Alan and I put together a one-month, 16-hour "Matrix Calculus" course at MIT that refocuses differential calculus on the linear algebra at its heart, and we hope to remind you that derivatives are not a subject that is "done" after your second semester of calculus.
false
https://pretalx.com/juliacon2023/talk/LENGPQ/
https://pretalx.com/juliacon2023/talk/LENGPQ/feedback/
32-155
Morning Break Day 3 Room 1
Break
2023-07-28T09:45:00-04:00
09:45
00:15
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
juliacon2023-28137-morning-break-day-3-room-1
JuliaCon
en
Morning break for coffee and snacks, and transit time from the keynote to the rest of the day's talks.
false
https://pretalx.com/juliacon2023/talk/PWH7ZN/
https://pretalx.com/juliacon2023/talk/PWH7ZN/feedback/
32-155
MathOpt: solver independent modeling in Google's OR-Tools
Talk
2023-07-28T10:00:00-04:00
10:00
00:30
Google Optimization Tools (a.k.a., OR-Tools) is an open-source, fast and portable software suite for solving combinatorial optimization problems. In this talk, we present our work on a new interface to OR-Tools, and we describe the lessons that we learnt along the way.
juliacon2023-28926-mathopt-solver-independent-modeling-in-google-s-or-tools
JuMP
Ross Anderson
en
This talk introduces MathOpt, a software tool for modeling mathematical optimization problems (e.g. linear programs). MathOpt is part of Google's open source project OR-Tools and is used extensively within Google. MathOpt consists of three parts: (1) client libraries in various programming languages (including C++, Java and Python) to formulate a model, independently of any underlying solver, (2) an efficient, language-independent data format for models, model updates and solutions, and (3) a core library to solve models in this data format with existing commercial and open source optimization solvers (e.g. Gurobi, GLOP). MathOpt provides solver-independent support for a range of advanced features, including nonlinear constraints, duality, rays, LP basis, callbacks, and solver parameters. The core library of MathOpt and the C++ client are written in portable C++ and can run on servers, mobile devices, or even in the browser via WASM. MathOpt supports remote execution for all features (including callbacks) via RPC. For non-C++ languages, each client library is written entirely in that language, so users relying on remote solve do not require any native dependency to install and run MathOpt. We provide benchmarks showing that the overhead introduced by MathOpt is in most cases negligible.
false
https://pretalx.com/juliacon2023/talk/VZDG9G/
https://pretalx.com/juliacon2023/talk/VZDG9G/feedback/
32-155
QUBO.jl: Quadratic Unconstrained Binary Optimization
Lightning talk
2023-07-28T10:30:00-04:00
10:30
00:10
We present QUBO.jl, an end-to-end Julia package for working with QUBO (Quadratic Unconstrained Binary Optimization) instances.
juliacon2023-28925-qubo-jl-quadratic-unconstrained-binary-optimization
JuMP
Joaquim Dias Garcia
en
We present QUBO.jl, an end-to-end Julia package for working with QUBO (Quadratic Unconstrained Binary Optimization) instances. This tool aims to convert a broad range of JuMP problems for straightforward application in many physics and physics-inspired solution methods whose standard optimization form is equivalent to the QUBO. These methods include quantum annealing, quantum gate-circuit optimization algorithms (Quantum Alternating Operator Ansatz, Variational Quantum Eigensolver), other hardware-accelerated platforms, such as Coherent Ising Machines and Simulated Bifurcation Machines, and more traditional methods such as simulated annealing. Besides working with reformulations, QUBO.jl allows its users to interface with the aforementioned hardware, sending QUBO models in various file formats and retrieving results for subsequent analysis. QUBO.jl was written as a JuMP / MathOptInterface (MOI) layer that automatically maps between the input and output frames, thus providing a smooth modeling experience.
false
https://pretalx.com/juliacon2023/talk/QZPLLS/
https://pretalx.com/juliacon2023/talk/QZPLLS/feedback/
32-155
ConstraintLearning: Ever wanted to learn more about constraints?
Lightning talk
2023-07-28T10:40:00-04:00
10:40
00:10
In many fields of optimization, there is often a tradeoff between efficiency and the simplicity of the model. *ConstraintLearning.jl* is an interface to several tools designed to smooth that tradeoff.
- *CompositionalNetworks.jl*: a scaling glass-box method to learn highly combinatorial functions [JuliaCon 2021]
- *QUBOConstraints.jl*: a package to automatically learn QUBO matrices from optimization constraints.
Applications are not limited to Constraint Programming, but are focused on it.
juliacon2023-26865-constraintlearning-ever-wanted-to-learn-more-about-constraints-
JuMP
Jean-François BAFFIER (azzaare@github)
en
In Constraint Programming, a problem can be modeled as simply as
- a set of variables' domains
- a set of predicates over those variables called constraints
- an optional objective
Often, efficient solvers expect more complex models to provide additional efficiency. For instance, Constraint-Based Local Search (CBLS) solvers have significant speedups when the constraint is encoded as a more refined function than a predicate. We designed *CompositionalNetworks.jl* to learn those functions from simple predicates, effectively removing the modeling complexity.
Similarly, we designed *QUBOConstraints.jl* such that QUBO matrices are learned from simple predicates. Among other things, the learned QUBO encodings can be used on QUBO-based solvers and quantum annealing machines.
Finally, **ConstraintLearning.jl** provides a common interface for both learning techniques. It also allows both packages to contain only minimal data structures and generic solving interfaces, so that they can be included in appropriate solvers.
false
https://pretalx.com/juliacon2023/talk/87RGBV/
https://pretalx.com/juliacon2023/talk/87RGBV/feedback/
32-155
Fast Convex Optimization with GeNIOS.jl
Lightning talk
2023-07-28T10:50:00-04:00
10:50
00:10
We introduce GeNIOS.jl, a package for large-scale data-driven convex optimization. This package leverages randomized numerical linear algebra and inexact subproblem solves to dramatically speed up the alternating direction method of multipliers (ADMM). We showcase performance on a logistic regression problem and a constrained quadratic program. Finally, we show how this package can be extended to almost any convex optimization problem that appears in practice.
juliacon2023-26857-fast-convex-optimization-with-genios-jl
JuMP
Theo Diamandis
en
We introduce GeNIOS.jl, a Julia-language convex optimization solver based on the alternating direction method of multipliers (ADMM). We built this solver with an eye towards large-scale data-driven problems, such as those that come up in robust optimization and machine learning. However, the solver can tackle any convex optimization problem that can be tractably solved with ADMM.
We first detail the algorithm. This solver builds on a line of work in randomized numerical linear algebra and inexact ADMM, i.e., ADMM in which the subproblems are only solved approximately. In GeNIOS.jl, we use a quadratic approximation to the smooth subproblem, an idea explored for the unconstrained case in [1]. We then leverage the Nyström sketch to dramatically speed up the linear system solve [2] in this subproblem. When this subproblem comprises most of the computational effort, as is frequently the case in practice, our use of approximation and randomization yields significant speedups.
We will then introduce the package's interface. We provide a "standard library" which makes it easy to solve a number of problems, including logistic regression and quadratic programs. We will also demonstrate how to define custom problems and show examples coming from different application areas. Finally, we will showcase the performance improvement of GeNIOS.jl compared to other methods.
This work builds on ideas presented at JuMP-dev last year, extending the class of problems tackled from quadratic programs to all convex optimization problems. We will conclude with more future directions and promising applications of these ideas and this solver.
[1] Zhao, S., Frangella, Z., & Udell, M. (2022). NysADMM: faster composite convex optimization via low-rank approximation. arXiv preprint arXiv:2202.11599.
[2] Frangella, Z., Tropp, J. A., & Udell, M. (2021). Randomized Nyström Preconditioning. arXiv preprint arXiv:2110.02820.
false
https://pretalx.com/juliacon2023/talk/UNDFLS/
https://pretalx.com/juliacon2023/talk/UNDFLS/feedback/
32-155
Polynomial Optimization
Talk
2023-07-28T11:00:00-04:00
11:00
00:30
Polynomial Optimization can be solved in a variety of ways. JuMP provides a unified interface for modelling these problems. In this talk, we show how to interface each type of polynomial optimization solver to this model and how they compare on a variety of benchmark problems.
juliacon2023-27018-polynomial-optimization
JuMP
Benoît Legat
en
Polynomial Optimization can be solved in a variety of ways. Black-box solvers using first- and second-order derivative callbacks can be used, but these only find a local extremum and cannot guarantee its global optimality. Several approaches exist to find the global optimum. These include Sum-of-Squares, Sums of AM/GM Exponentials, multivariate partitioning algorithms, and algebraic solving of the KKT conditions. In this talk, we detail the work of bringing all these possible solving strategies to the common JuMP nonlinear interface. We then compare the efficiency of these approaches both in theory and in practice on a variety of benchmark problems.
false
https://pretalx.com/juliacon2023/talk/XLT8H3/
https://pretalx.com/juliacon2023/talk/XLT8H3/feedback/
32-155
Computable General Equilibrium (CGE) Models in Julia JuMP
Lightning talk
2023-07-28T11:30:00-04:00
11:30
00:10
Computable General Equilibrium (CGE) models are large systems of non-linear equations that combine economic theory with real economic data to describe the impacts of policies or shocks in the economy. Currently these problems are predominantly solved using GAMS, which is an expensive, closed-source solution. We have created a package called GamsStructure.jl to emulate GAMS data manipulation in Julia. We will discuss several models created in both GAMS and Julia to highlight key similarities.
juliacon2023-26207-computable-general-equilibrium-cge-models-in-julia-jump
JuMP
Mitch Phillipson
en
GAMS, or General Algebraic Modelling System, is used extensively to solve economic optimization problems. Originally released in 1987, the language lacks many conveniences of a modern language such as case sensitivity. Julia JuMP is a natural successor to GAMS allowing for a complete model description, from data cleaning to solving a model to displaying the data.
A large issue is converting GAMS users to Julia. GAMS has been in use for 35 years, and the people that use GAMS are accustomed to the compact, self-documenting syntax that GAMS allows. To bridge this gap, we have created a Julia package called GamsStructure.jl. This package has been designed to emulate loading and manipulating data in GAMS, and has built-in mechanisms to attach a description to Sets and Parameters.
Being an old language in heavy use, GAMS has a rich library of examples and sample code to assist modelers in building complex models. We are working on translating many of these models into Julia and building a repository to share with the general community. We are also preparing a paper comparing the impacts of policing on tourism in Jamaica using both GAMS and Julia.
false
https://pretalx.com/juliacon2023/talk/9UA9PH/
https://pretalx.com/juliacon2023/talk/9UA9PH/feedback/
32-155
Constructing Optimal Optimization Methods using BnB-PEP
Lightning talk
2023-07-28T11:40:00-04:00
11:40
00:10
We present the Branch-and-Bound Performance Estimation Programming (BnB-PEP), a unified methodology for constructing optimal first-order methods for convex and nonconvex optimization. BnB-PEP poses the problem of finding the optimal optimization method as a nonconvex but practically tractable quadratically constrained quadratic optimization problem and solves it to certifiable global optimality using a customized branch-and-bound algorithm.
juliacon2023-27038-constructing-optimal-optimization-methods-using-bnb-pep
JuMP
Shuvomoy Das Gupta
en
By directly confronting the nonconvexity, BnB-PEP offers significantly more flexibility and removes the many limitations of the prior methodologies. Our customized branch-and-bound algorithm, through exploiting specific problem structures, outperforms the latest off-the-shelf implementations by orders of magnitude, accelerating the solution time from hours to seconds and weeks to minutes. Finally, we apply BnB-PEP to several setups for which the prior methodologies do not apply and obtain methods with bounds that improve upon prior state-of-the-art results. Open source Julia implementation of BnB-PEP is available at: https://github.com/Shuvomoy/BnB-PEP-code.
false
https://pretalx.com/juliacon2023/talk/LFPAKE/
https://pretalx.com/juliacon2023/talk/LFPAKE/feedback/
32-155
Stochastic programming application for LNGC logistics
Lightning talk
2023-07-28T11:50:00-04:00
11:50
00:10
I would like to share with the JuMP community some of my experience building stochastic programming models in production, this talk will discuss design patterns used for an LNGC logistics problem developed in an R&D project.
juliacon2023-27020-stochastic-programming-application-for-lngc-logistics
JuMP
Guilherme Bodin
en
In this talk, I will go through useful first principles on how to create maintainable stochastic programming JuMP applications by discussing different design patterns that make it easier to maintain a pipeline of building, modifying, solving, and retrieving solutions of JuMP models in an orderly manner. The design patterns go from how to structure the code directory to ways of separating code into functions of the complete stochastic programming pipeline. Everything will be discussed with an LNGC logistics problem as background.
false
https://pretalx.com/juliacon2023/talk/A7EA9E/
https://pretalx.com/juliacon2023/talk/A7EA9E/feedback/
32-155
When Enzyme meets JuMP: a tour de ronde
Talk
2023-07-28T12:00:00-04:00
12:00
00:30
Julia provides a vibrant automatic differentiation (AD) ecosystem, with numerous AD libraries. All these AD solutions are unique, and take diverse approaches to the various fundamental AD design choices for code transformations available in Julia. The recent refactoring of the JuMP nonlinear interface is giving us an opportunity to integrate some of these AD libraries into JuMP. However, how far can we go in the integration?
juliacon2023-27007-when-enzyme-meets-jump-a-tour-de-ronde
JuMP
François PacaudMichel Schanen
en
In this talk, we present our recent work with Enzyme, an AD backend working directly at the LLVM level, enabling massively parallel or vectorized modeling through GPUCompiler.jl. We put a special emphasis on the extraction of sparse Jacobian and sparse Hessian with Enzyme using respectively forward and forward-over-reverse automatic differentiation. We give a thorough investigation of the capability of Enzyme regarding nonlinear programming and present detailed results on the seminal optimal power flow (OPF) problem.
false
https://pretalx.com/juliacon2023/talk/A7YMT8/
https://pretalx.com/juliacon2023/talk/A7YMT8/feedback/
32-155
Lunch Day 3 (Room 1)
Lunch Break
2023-07-28T12:30:00-04:00
12:30
01:30
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
juliacon2023-28080-lunch-day-3-room-1-
JuliaCon
en
We hope you're enjoying JuliaCon 2023 so far! Please find our food trucks waiting right outside venue with food available for purchase.
false
https://pretalx.com/juliacon2023/talk/FVAMWF/
https://pretalx.com/juliacon2023/talk/FVAMWF/feedback/
32-155
Optimization solvers in JuliaSmoothOptimizers
Talk
2023-07-28T14:00:00-04:00
14:00
00:30
In this presentation, we give an overview of the recent progress regarding the continuous nonlinear nonconvex optimization solvers implemented in the JuliaSmoothOptimizers (JSO) organization. We introduce the new package JSOSuite.jl, a unique interface between users and JSO solvers.
juliacon2023-27037-optimization-solvers-in-juliasmoothoptimizers
JuMP
Tangi Migot
en
In this presentation, we give an overview of the recent progress regarding the continuous nonlinear nonconvex optimization solvers implemented in the JuliaSmoothOptimizers (JSO) organization. We introduce the new package JSOSuite.jl, a unique interface between users and JSO solvers.
The package JSOSuite.jl is very user-friendly as one no longer needs to know the different solvers and their corresponding packages (DCISolver.jl, FletcherPenaltySolver.jl, Percival.jl, RipQP.jl, etc.). Moreover, it makes benchmarking algorithms very simple and also opens the door to automatic algorithm selection based on the problem features.
Finally, we will also illustrate the recent improvements of the solvers in terms of performance and memory allocations as well as a new feature for parameter optimization.
false
https://pretalx.com/juliacon2023/talk/HQSUYM/
https://pretalx.com/juliacon2023/talk/HQSUYM/feedback/
32-155
Improving nonlinear programming support in JuMP
Talk
2023-07-28T14:30:00-04:00
14:30
00:30
In JuMP 1.0, support for nonlinear programming is a second-class citizen. In this talk, we discuss our efforts to build a first-class NonlinearExpression object in JuMP, banishing the need for the `@NL` macros.
juliacon2023-26815-improving-nonlinear-programming-support-in-jump
JuMP
Oscar Dowson
en
In JuMP 1.0, support for nonlinear programming is a second-class citizen. In this talk, we discuss our efforts to build a first-class NonlinearExpression object in JuMP, banishing the need for the `@NL` macros.
false
https://pretalx.com/juliacon2023/talk/UWKJU9/
https://pretalx.com/juliacon2023/talk/UWKJU9/feedback/
32-155
Multi-objective optimization with JuMP
Lightning talk
2023-07-28T15:00:00-04:00
15:00
00:10
vOptGeneric.jl is a package based on JuMP and MOI for modeling and solving multi-objective linear optimization (MOO) problems. vOptGeneric will soon be discontinued and replaced by a new package, fully based on the syntax of JuMP and offering a convenient interface entirely based on MOI to extend the pool of MOO algorithms. In this talk, we present feedback from our experiences with vOptGeneric.
juliacon2023-28921-multi-objective-optimization-with-jump
JuMP
Xavier Gandibleux
en
vOptGeneric.jl is a package based on JuMP and MOI for modeling and solving multi-objective linear optimization (MOO) problems. It was first released in 2017 and has since supported exercises and projects in MOO in our master's programme in operations research at Nantes University and the University of South Brittany (both in France), and recently at Johannes Kepler University Linz (Austria). It has also been used in research projects, often for computing the nondominated points in a comfortable manner, and for testing new MOO algorithms. vOptGeneric will soon be discontinued and replaced by a new package, fully based on the syntax of JuMP and offering a convenient interface entirely based on MOI to extend the pool of MOO algorithms. In this talk, we will present feedback on these experiences and an introduction to the new package from the user's perspective, as well as the development of future features.
false
https://pretalx.com/juliacon2023/talk/WCJUUH/
https://pretalx.com/juliacon2023/talk/WCJUUH/feedback/
32-155
Debugging JuMP optimization models using graph theory
Lightning talk
2023-07-28T15:10:00-04:00
15:10
00:10
Writing a large optimization model is a time-consuming and error-prone task. If and when a modeling error is suspected, the developer often must re-consider every constraint in the model for faulty assumptions causing singularities in the optimization model, which impede convergence of solvers. We introduce a package that computes the Dulmage-Mendelsohn and block triangular partitions of incidence graphs of JuMP models, which can be used to detect sources of structural and numeric singularities.
juliacon2023-26809-debugging-jump-optimization-models-using-graph-theory
JuMP
Robert Parker
en
The package we implement is named "JuMP-Incidence" or "JuMPIn.jl". It depends on JuMP and Graphs.jl, and may be of interest to the JuMP and Julia-Graphs communities. This package is of particular interest as model debugging is a notorious pain point in mathematical optimization. In the presentation, I intend to describe the algorithms implemented, describe their use cases, and demonstrate their use to detect a modeling error in an optimization problem from chemical engineering.
false
https://pretalx.com/juliacon2023/talk/EC88TM/
https://pretalx.com/juliacon2023/talk/EC88TM/feedback/
32-155
Plasmo.jl and MadNLP.jl-A Framework for Graph-Based Optimization
Lightning talk
2023-07-28T15:20:00-04:00
15:20
00:10
Many optimization problems can be represented as a graph to provide useful visualization and to allow for manipulating and exploiting problem structure. We use Plasmo.jl (which extends JuMP.jl) to build optimization problems within a graph structure (where optimization model components are stored within the nodes and edges of the graph), and we use the nonlinear interior-point solver, MadNLP.jl, to solve these models and exploit some of these graph structures.
juliacon2023-26943-plasmo-jl-and-madnlp-jl-a-framework-for-graph-based-optimization
JuMP
David Cole
en
Graph theory can provide a useful scheme for representing and solving some structured optimization problems. Using graph theory, optimization problem components (e.g., variables, constraints, objective functions, or data) are stored within nodes and edges of a graph. This provides convenient visualization of the model, and it can lead to applications of various decomposition schemes to exploit the structure of the model. In this talk, I will present how we use Plasmo.jl and MadNLP.jl for building and solving graph-based optimization problems and how these packages can be used for exploiting some problem structures.
We use the package Plasmo.jl for building graph-based optimization problems [1]. This package extends JuMP.jl, and provides an interface for defining the optimization problem components within nodes and edges of a graph. Plasmo.jl uses an abstract modeling object called an OptiGraph which is composed of OptiNodes (similar to JuMP.jl’s Model object and which can store variables, constraints, objective functions, and data) and OptiEdges (which store constraints for variables on different OptiNodes and which capture connectivity of components).
We also use the package MadNLP.jl for solving graph-based optimization problems [2]. This package is a nonlinear interior-point solver similar to IPOPT. MadNLP.jl can interface with Plasmo.jl to solve problems defined within an OptiGraph, and it has the capability of exploiting OptiGraph structures using different decomposition schemes. Both Schwarz [2] and Schur [3] decompositions are enabled within MadNLP.jl, and these have been shown to reduce solution times. We are also interested in developing further decomposition schemes for graph-structured problems.
1. Jalving, J., Shin, S., and Zavala, V.M. 2022. A graph-based modeling abstraction for optimization: concepts and implementation in Plasmo.jl. Math. Prog. Comp. 14: 699-747.
2. Shin, S., Coffrin, C., Sundar, K., and Zavala, V.M. 2021. Graph-based modeling and decomposition of energy infrastructures. IFAC-Papers OnLine, 54(3): 693-698.
3. Cole, D.L., Shin, S., and Zavala, V.M. 2022. A julia framework for graph-structured nonlinear optimization. Ind. Eng. Chem. Res., 61(26):9366-9380.
false
https://pretalx.com/juliacon2023/talk/JQWEQE/
https://pretalx.com/juliacon2023/talk/JQWEQE/feedback/
32-G449 (Kiva)
Hackathon
Hackathon
2023-07-29T09:55:00-04:00
09:55
06:00
The JuliaCon Hackathon!
juliacon2023-35739-hackathon
JuliaCon
en
Join us at the JuliaCon Hackathon at the Kiva and Star rooms, as well as the entire 4th floor of the Stata Center. Connect with other members of the community!
false
https://pretalx.com/juliacon2023/talk/MZXTLV/
https://pretalx.com/juliacon2023/talk/MZXTLV/feedback/
32-D463 (Star)
Hackathon!
Hackathon
2023-07-29T09:55:00-04:00
09:55
06:00
The JuliaCon Hackathon!
juliacon2023-35740-hackathon-
JuliaCon
en
Join us at the JuliaCon Hackathon on the 4th floor of the Stata Center, as well as the Kiva and Star rooms. Connect with other members of the community!
false
https://pretalx.com/juliacon2023/talk/3C3MFL/
https://pretalx.com/juliacon2023/talk/3C3MFL/feedback/