2.0
-//Pentabarf//Schedule//EN
PUBLISH
8SBXLD@@pretalx.com
-8SBXLD
Solving Differential Equations in Julia
en
en
20190722T083000
20190722T120000
3.03000
Solving Differential Equations in Julia
The exercises are described as follows:
- Exercise 1 takes the user through defining the same biological system with stochasticity, utilizing EnsembleProblems to understand 95% bounds on the solution, and performing Bayesian parameter estimation.
- Exercise 2 takes the user through defining a hybrid differential equation, that is, a differential equation with events, and using adjoints to perform gradient-based parameter estimation.
- Exercise 3 takes the user through differential-algebraic equation (DAE) modeling, the concept of index, and using both mass-matrix and implicit ODE representations.
- Exercise 4 takes the user through optimizing a PDE solver, utilizing automatic sparsity pattern recognition, automatic conversion of numerical codes to symbolic codes for analytical construction of the Jacobian, preconditioned GMRES, and setting up a solver for IMEX and GPUs.
- Exercise 5 focuses on a parameter sensitivity study, utilizing GPU-based ensemble solvers to quickly train a surrogate model to perform global parameter optimization.
- Exercise 6 takes the user through training a neural stochastic differential equation, using GPU-acceleration and adjoints through Flux.jl's neural network framework to build efficient training codes.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/8SBXLD/
PH 103N
Chris Rackauckas
PUBLISH
EHSJY3@@pretalx.com
-EHSJY3
Pharmaceutical Modeling and Simulation with Pumas
en
en
20190722T133000
20190722T170000
3.03000
Pharmaceutical Modeling and Simulation with Pumas
Pharmacometrics is the practice of using mathematical models to predict the effect of drugs on a patient’s internal biology. This field has become a standard part of pharmaceutical research, with major pharmaceutical companies routinely utilizing these methodologies to optimize dosing schedules and analyze clinical trial data for efficacy and toxicity before performing expensive clinical trials. These models are nonlinear mixed effects models where the nonlinearity is given by a system of differential equations. Because the process is limited by the speed and flexibility of the differential equation solvers, there has been increasing interest in using Julia for this practice.
In this workshop we will walk both Julia users and pharmaceutical practitioners through the process of pharmacometric modeling in Julia. Pharmaceutical practitioners will be paired with Julia users to work on guided exercises to learn both the pharmacometric modeling workflows and their implementation in Julia. Workshop participants will learn to make use of Pumas.jl and Bioequivalence.jl to perform clinical trial simulations, and analyze the results using the Pumas Non-Compartmental Analysis (NCA) functionality. Users will learn how to implement PK/PD models with complex dosing schedules, incorporate population models, and estimate population parameters from data. Advanced users can explore the function-based interface of Pumas.jl to define delay and stochastic differential equation models, and optimize the runtime of their simulations using the full functionality of DifferentialEquations.jl. The participants will leave with a clear understanding of how to use the Julia package ecosystem to efficiently handle these difficult pharmacometric models, and will have a new perspective for understanding the differential equation solver advances being discussed at JuliaCon.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/EHSJY3/
PH 103N
Chris Rackauckas
Vijay Ivaturi
PUBLISH
QYQNMW@@pretalx.com
-QYQNMW
Excelling at Julia: basics and beyond
en
en
20190722T083000
20190722T120000
3.03000
Excelling at Julia: basics and beyond
We will kick off this tutorial with an introduction to Julia, which should be accessible to anyone with technical computing needs and some exposure to another language. In the first part of the tutorial, we will cover Julia’s syntax, design paradigm, performance, basic plotting, and interfaces to other languages. We hope to show you why Julia is special, demonstrate how easy Julia is to learn, and get you writing your first Julia programs. In the second part of this tutorial, we will introduce you to data science tools for data management and machine learning algorithms and then delve into topics in performance optimization such as type stability and profiling. We will end this tutorial by going over the parallel computing infrastructure in Julia.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/QYQNMW/
PH 111N
Jane Herriman
Huda Nassar
PUBLISH
XKURRK@@pretalx.com
-XKURRK
Writing a package -- a thorough guide
en
en
20190722T133000
20190722T170000
3.03000
Writing a package -- a thorough guide
In Julia a lot of important functionality is implemented in packages living outside the core language. There are many reasons for this, for example that user-defined types and functions are first-class citizens and treated the same way as built-in types and functions, and that Julia's multiple dispatch makes it possible to work together with, and extend, other packages and the core language.
There are packages available in Julia for almost everything, for example differential equations, machine learning, data science, debugging and web applications. They are all very different: some are tens of thousands of source lines, while others are single-line packages. The common denominator is that a package is a reusable piece of code, wrapped in a specific structure, that solves a specific problem.
For newcomers to Julia, or to programming in general, it might seem like a difficult task to author a Julia package. Therefore, in this workshop, we will go through all the necessary steps to create a Julia package. In particular we will learn how to:
- set up the package structure;
- add package dependencies;
- write unit tests;
- write documentation;
- set up continuous integration (CI);
- release a package.
The goal of the workshop is that attendees should be well prepared for getting started with package writing in Julia.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/XKURRK/
PH 111N
Fredrik Ekre
Kristoffer Carlsson
PUBLISH
UEDNGH@@pretalx.com
-UEDNGH
Machine Learning Workshop
en
en
20190722T083000
20190722T120000
3.03000
Machine Learning Workshop
Interest and excitement in machine learning (ML) has skyrocketed in recent years due to its proven successes in many disparate domains. Julia is uniquely positioned as a strong language for ML due to its high performance, ease of use, and groundbreaking research in differentiable programming.
In this interactive workshop you will learn the core concepts that drive and underpin modern machine learning techniques. The first half incrementally introduces key ML terminology and concepts as you build and train your first neural network with Flux. Covered along the way are data representations, models, gradient descent, training, and testing. Then take a step back and explore the wide array of applications of machine learning with a handful of demonstrations of different tasks and models.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/UEDNGH/
PH 203N
Matt Bauman
PUBLISH
KD9RGB@@pretalx.com
-KD9RGB
Handling Data with DataFrames.jl
en
en
20190722T133000
20190722T170000
3.03000
Handling Data with DataFrames.jl
The workshop provides an overview of all major functionality provided in the [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) package. It is organized around solving several practical case-studies of working with tabular data and covers in particular:
* Reading/writing tabular data, getting summary information about data.
* Handling missing values and categorical data.
* Standard transformations of tabular data (sorting, filtering, mutating, joining, reshaping, grouped operations, tabulating etc.).
* Plotting of tabular data.
* Performance considerations of using the DataFrames.jl package.
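As a flavor of the grouped operations listed above, here is a minimal sketch using the `groupby`/`combine` API of DataFrames.jl (illustrative only; the workshop's own case studies and data are not reproduced here):

```julia
using DataFrames

# A small table: one of the standard transformations mentioned above,
# a grouped aggregation over a column.
df = DataFrame(name = ["a", "b", "a"], x = [1, 2, 3])

# Sum x within each name group; the result gets a column named x_sum.
gdf = combine(groupby(df, :name), :x => sum)
```

The same pattern extends to the other operations in the list (sorting with `sort`, filtering with `filter`, joining with `innerjoin`, and so on).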
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/KD9RGB/
PH 203N
Bogumił Kamiński
PUBLISH
PSSWXL@@pretalx.com
-PSSWXL
Intermediate Julia for Scientific Computing
en
en
20190722T083000
20190722T120000
3.03000
Intermediate Julia for Scientific Computing
In this workshop, we will explore two of the more advanced topics that make Julia special: types and metaprogramming.
We will start off by looking at different uses of *types* as glue in a scientific programming application: implementing a new arithmetic (automatic differentiation) and dispatch-based design.
In the second half, we will look at metaprogramming: how to get inside a Julia expression tree and apply that to write macros and "domain-specific languages".
This workshop is suitable for people who are comfortable with basic usage of Julia and wish to explore the language in more depth.
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/PSSWXL/
PH 211N
David P. Sanders
PUBLISH
RLDSVL@@pretalx.com
-RLDSVL
Parallel Computing Workshop
en
en
20190722T133000
20190722T170000
3.03000
Parallel Computing Workshop
This interactive workshop demonstrates how to write parallel Julia code in a variety of ways, including shared-memory computing with threads, multiple processes for distributed computing, and computing on the GPU. Julia makes all these modes of parallelism possible, and the Julia community is actively researching how to make high-performance parallel computing easier. Many national labs, major corporations, and universities are already using Julia for parallel computing.
The workshop will help you:
* identify the challenges in converting a program from serial to parallel
* discover the many forms of parallelism Julia offers and learn when to use each
* learn how to structure programs to take advantage of parallel computation
* write programs that use an appropriate form of parallelism
In this workshop, we will cover
* A quick primer on serial performance
* Multithreading
* Designing parallel algorithms
* Tasks (also known as co-routines or green threads)
* Multi-process parallelism
* A very quick introduction to GPU programming
* Future developments
Participants should have basic understanding of non-parallel programming techniques and of Julia itself.
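One of the forms of parallelism listed above, shared-memory multithreading, can be sketched in a few lines (an illustrative example, not the workshop's own material):

```julia
using Base.Threads

# Fill an array in parallel. Each iteration writes to a distinct slot,
# so there are no data races between threads.
n = 100
squares = zeros(Int, n)
@threads for i in 1:n
    squares[i] = i^2
end
```

Run with, e.g., `julia --threads=4` to actually use multiple threads; with a single thread the loop still runs correctly, just serially.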
PUBLIC
CONFIRMED
Workshop (half day)
https://pretalx.com/juliacon2019/talk/RLDSVL/
PH 211N
Avik Sengupta
Matt Bauman
PUBLISH
3RGNK9@@pretalx.com
-3RGNK9
Breakfast (Workshops)
en
en
20190722T073000
20190722T083000
1.00000
Breakfast (Workshops)
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/3RGNK9/
Other
PUBLISH
WVVALL@@pretalx.com
-WVVALL
Lunch
en
en
20190722T120000
20190722T131500
1.01500
Lunch
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/WVVALL/
Other
PUBLISH
BNSCFK@@pretalx.com
-BNSCFK
Intelligent Tensors in Julia
en
en
20190723T110000
20190723T113000
0.03000
Intelligent Tensors in Julia
Tensor network methods are an extremely useful class of simulation algorithms in physics. They work by constructing a graph of tensors -- of which matrices and vectors are low-dimensional examples -- and making local optimizations to these tensors to capture the essential physics of a many-body system. ITensor (Intelligent Tensor) is a leading C++ package created to make tensor network methods accessible to a wider group of scientists and programmers. In this talk, we present ITensors.jl, a ground-up rewrite of ITensor in Julia, which uses the lessons from the C++ project to offer much of the same powerful functionality in a more concise and elegant format, substantially lowering the "barrier to entry" for using tensor network techniques. We will present some usage examples that are common in physics applications to exemplify the ITensors.jl user interface and design philosophy. Using Julia, we can create a tensor network package expressive enough to capture a variety of physics that's also accessible enough for more physicists and computer scientists to use.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/BNSCFK/
Elm A
Katharine Hyatt, Matthew Fishman
PUBLISH
QAAUCS@@pretalx.com
-QAAUCS
A general-purpose toolbox for efficient Kronecker-based learning
en
en
20190723T113000
20190723T114000
0.01000
A general-purpose toolbox for efficient Kronecker-based learning
I would like to introduce the Kronecker kernel-based framework I developed during my PhD and explain why I would switch from Python to Julia for this.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/QAAUCS/
Elm A
Michiel Stock
PUBLISH
83YFMV@@pretalx.com
-83YFMV
Thread Based Parallelism part 2
en
en
20190723T114000
20190723T115000
0.01000
Thread Based Parallelism part 2
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/83YFMV/
Elm A
Jeff Bezanson
PUBLISH
3YQSSP@@pretalx.com
-3YQSSP
Thread Based Parallelism part 1
en
en
20190723T115000
20190723T120000
0.01000
Thread Based Parallelism part 1
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/3YQSSP/
Elm A
Jameson Nash
PUBLISH
TTBU37@@pretalx.com
-TTBU37
Generating documentation: under the hood of Documenter.jl
en
en
20190723T143000
20190723T150000
0.03000
Generating documentation: under the hood of Documenter.jl
Documenter can take Markdown files and inline docstrings and combine them into a manual for your Julia package. In addition, it can run code snippets, verify that the output from code examples is up to date (doctesting), and upload the manual automatically to GitHub from a Travis CI build to be published as a website. Documenter is used by many Julia packages, and for generating Julia's own manual.
Behind the scenes, Documenter needs to (1) parse and represent Markdown documents, done via the Markdown standard library, (2) run code snippets embedded in the Markdown documents, (3) work with meta-information about functions and types, such as method signatures, and (4) fetch docstrings from your Julia code. Once all that is done, it compiles the result into the chosen output format -- a set of HTML pages or a PDF document.
The talk explores how Documenter goes from a make.jl script to a completely rendered and deployed manual. It should give existing users a glimpse into how Documenter works, and provide prospective new users with a thorough overview of what is possible with Documenter.
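For orientation, a minimal `make.jl` script of the kind the pipeline starts from might look like this (a sketch, not the talk's actual script; `MyPackage` and the repository path are placeholders):

```julia
# make.jl -- a minimal Documenter build script (MyPackage is hypothetical).
using Documenter, MyPackage

makedocs(
    sitename = "MyPackage.jl",
    modules  = [MyPackage],            # modules whose docstrings are collected
    pages    = ["Home" => "index.md"], # Markdown sources under docs/src/
)

# Push the rendered HTML to the gh-pages branch from a CI build.
deploydocs(repo = "github.com/username/MyPackage.jl.git")
```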
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/TTBU37/
Elm A
Morten Piibeleht
PUBLISH
DAKCYM@@pretalx.com
-DAKCYM
Literate programming with Literate.jl
en
en
20190723T150000
20190723T151000
0.01000
Literate programming with Literate.jl
Literate programming was introduced by Donald Knuth in 1984 and is described as an _explanation of the program logic in a natural language, interspersed with traditional source code_. `Literate.jl` is a simple Julia package that can be used for literate programming. The original purpose was to facilitate writing example programs for documenting Julia packages.
Julia packages are often showcased and documented using "example notebooks". Notebooks are great for this purpose since they contain input source code, descriptive markdown, and rich output like plots and figures, and, from the description above, notebooks can be considered a form of literate programming. One downside with notebooks is that they are a pain to deal with in version control systems like git, since they contain lots of extra data. A small change to the notebook thus often results in a large and complicated diff, which makes it harder to review the actual changes. Another downside is that notebooks require external tools, like Jupyter and `IJulia.jl`, to be used effectively.
With `Literate.jl` it is possible to dynamically generate notebooks from a simple source file. The source file is a regular `.jl` file, where comments are used for describing the interspersed code snippets. This means that, for basic usage, there is no new syntax to learn in order to use `Literate.jl`; basically any valid Julia source file can be used as a source file. This solves the problem with notebooks described above, since the notebook itself does not need to be checked into version control -- it is just the source text file that is needed. `Literate.jl` can also, from the _same_ input source file, generate markdown files to be used with e.g. `Documenter.jl` to produce HTML pages in the package documentation. This makes it easy to maintain both a notebook and HTML version of examples, since they are based on the same source file.
This presentation will briefly cover the `Literate.jl` syntax, and show examples of how `Literate.jl` can be used.
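As a flavor of what such a source file looks like (a minimal sketch; see the Literate.jl documentation for details): comment lines beginning with `# ` become markdown cells, and everything else stays code.

```julia
# # A tiny Literate.jl source file
# Comment lines beginning with `# ` turn into markdown text
# in the generated notebook or Documenter page.

x = 1:5        # ordinary Julia code becomes a code cell
y = x .^ 2     # and its result is shown as cell output
```

Calling `Literate.notebook("example.jl", "out/")` or `Literate.markdown("example.jl", "out/")` then generates the notebook or markdown version from this one file.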
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/DAKCYM/
Elm A
Fredrik Ekre
PUBLISH
RUYDYR@@pretalx.com
-RUYDYR
Formatting Julia
en
en
20190723T151000
20190723T152000
0.01000
Formatting Julia
Formatting code has recently gained significant traction in the programming community, the most notable formatters being [gofmt](https://golang.org/cmd/gofmt/) (Go), [refmt](https://reasonml.github.io/) (Reason/OCaml), and [prettier](https://prettier.io/) (JS/CSS/HTML, etc.). In this talk I'll present Julia's own formatter, which formats Julia code into a canonical, width-aware output. I'll go over:
* Why you should format your code.
* How the Julia formatter works and how you can use it in your workflow.
* Lots of demos showing beautifully formatted code!
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/RUYDYR/
Elm A
Dominique Luna
PUBLISH
EHG87D@@pretalx.com
-EHG87D
Cleaning messy data with Julia and Gen
en
en
20190723T154500
20190723T161500
0.03000
Cleaning messy data with Julia and Gen
Julia is home to a growing ecosystem of probabilistic programming languages—but how can we put them to use for practical, everyday tasks? In this talk, we'll discuss our ongoing effort to automate common-sense data cleaning by building a declarative dataset description language on top of [Gen](https://github.com/probcomp/Gen). Users of the language can encode domain knowledge about their dataset and the ways in which it might be unclean in short, declarative probabilistic scripts, which are compiled to Gen programs that infer locations of probable errors, impute missing values, and propose likely corrections in tabular data.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/EHG87D/
Elm A
Alex Lew
PUBLISH
8WT7C8@@pretalx.com
-8WT7C8
LightQuery.jl
en
en
20190723T161500
20190723T164500
0.03000
LightQuery.jl
LightQuery.jl is a new package for querying tabular data. I'll discuss a number of things which make it special.
1) Careful use of constant propagation, so that named-tuple level operations are type stable when wrapped in a function.
2) The ability to simultaneously collect into (and iteratively widen) several sinks at once.
3) Very few macros: only two, one for chaining and one for anonymizing. Compare this with the number of macros required by Query, DataFramesMeta, and JuliaDBMeta.
4) Careful tracking and propagation of line number information.
5) Huge speed-ups when sources are pre-sorted.
6) Flexibility. Row-wise operations work with arbitrary containers, provided they can be indexed out of order. Column-wise operations work for anything which has propertynames.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/8WT7C8/
Elm A
Brandon Taylor
PUBLISH
DJY9HU@@pretalx.com
-DJY9HU
State of the Data: JuliaData
en
en
20190723T164500
20190723T165500
0.01000
State of the Data: JuliaData
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/DJY9HU/
Elm A
Jacob Quinn
PUBLISH
YZPSDK@@pretalx.com
-YZPSDK
Prototyping Visualizations for the Web with Vega and Julia
en
en
20190723T165500
20190723T170500
0.01000
Prototyping Visualizations for the Web with Vega and Julia
Using Julia and Vega to jumpstart development of interactive visualizations helps bridge the gap between analysis done on your laptop and publishing compelling results to the web. This talk will show how you can use the language, tools, and development environment you love with Julia and have web-ready interactive graphics ready to deploy. We'll highlight the use of DataVoyager.jl for data exploration, how to use DataVoyager plots to quick-start your visualizations in VegaLite.jl, adding interactivity to your visualizations, and finally how this translates to the web.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/YZPSDK/
Elm A
Mary McGrath
PUBLISH
ZCWD9M@@pretalx.com
-ZCWD9M
A Showcase for Makie
en
en
20190723T170500
20190723T173500
0.03000
A Showcase for Makie
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/ZCWD9M/
Elm A
Simon Danisch
PUBLISH
RAKLRV@@pretalx.com
-RAKLRV
The Linguistics of Puzzles: Solving Cryptic Crosswords in Julia
en
en
20190723T110000
20190723T113000
0.03000
The Linguistics of Puzzles: Solving Cryptic Crosswords in Julia
Cryptic (or British-style) crosswords are designed to be intentionally vague, misleading, or ambiguous. Each clue
combines a standard crossword clue with wordplay elements like anagrams, reversals, or homophones, so solving the clue requires understanding both crossword definitions and a combinatorial explosion of possible wordplays. Here are a couple of easy examples:
Clue: "Spin broken shingle"
Answer: "english"
Explanation: "broken" means to take an anagram of "shingle", which produces "english", and "english" can mean "spin" (at least in billiards).
Clue: "Initially babies are naked"
Answer "bare"
Explanation: "initially" means to take the first letter of "babies", giving "b". Combining "b" and "are" gives "bare", which means "naked".
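The anagram wordplay in the first clue can be checked mechanically; a minimal sketch (the `is_anagram` helper here is illustrative, not part of the actual solver linked below):

```julia
# Two strings are anagrams when their sorted character sequences match.
is_anagram(a::AbstractString, b::AbstractString) =
    sort(collect(a)) == sort(collect(b))

is_anagram("shingle", "english")   # true: "broken shingle" -> "english"
```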
We could try to enumerate every possible thing a word might mean, and every way those meanings might combine, but doing so would result in billions of possibilities, most of which are nonsense. Instead, I'll show how we can use tools from computational linguistics to attack this silly problem in a serious way, and I'll show how Julia makes doing so even easier.
In particular, I will talk about:
* Developing a formal grammar for cryptic crossword clues
* Implementing probabilistic parsers which can parse cryptic crossword grammars (or any other grammar, I suppose)
* Squeezing as much performance as possible out of string manipulation in Julia
* Analyzing the meaning of words and phrases with WordNet.jl and machine learning
To learn more, check out the code, all of which is available online right now. You can find the parsing code at https://github.com/rdeits/ChartParsers.jl and the solver itself at https://github.com/rdeits/CrypticCrosswords.jl
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/RAKLRV/
Elm B
Robin Deits
PUBLISH
VN7TVD@@pretalx.com
-VN7TVD
Counting On Floating Point
en
en
20190723T113000
20190723T114000
0.01000
Counting On Floating Point
Relax into more reliable floating point. Get more good digits, keep the ones that count.
Robustly accurate, `DoubleFloats` offers a way to develop resilient numerics reliably.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/VN7TVD/
Elm B
Jeffrey Sarnoff
PUBLISH
3FSTJF@@pretalx.com
-3FSTJF
Analyzing social networks with SimpleHypergraphs.jl
en
en
20190723T114000
20190723T115000
0.01000
Analyzing social networks with SimpleHypergraphs.jl
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/3FSTJF/
Elm B
Przemysław Szufel
Bogumił Kamiński
PUBLISH
FFXKCX@@pretalx.com
-FFXKCX
Recommendation.jl: Building Recommender Systems in Julia
en
en
20190723T115000
20190723T120000
0.01000
Recommendation.jl: Building Recommender Systems in Julia
[Recommendation.jl](https://github.com/takuti/Recommendation.jl) allows you to easily implement and experiment with your recommender systems by fully leveraging Julia's efficiency and applicability. This talk demonstrates the package as follows.
The speaker first gives a brief overview of the theoretical background in the field of recommender systems, along with corresponding Recommendation.jl functionalities. The package supports a variety of well-known recommendation techniques, including k-nearest-neighbors and matrix factorization. Meanwhile, dedicated evaluation metrics (e.g., recall, precision) and non-personalized baseline methods are available for your experiments.
Next, this talk discusses the pros and cons of using Julia for recommendation. On the one hand, many algorithms in this field fit well with Julia's high-performance scientific computing capabilities; at the same time, it is challenging to make Julia-based recommenders production-grade at scale. The discussion ends with ideas for how to improve the package in the future.
We will finally see the extensibility of the package with an example of building our own custom recommendation method. In practice, Recommendation.jl is designed to provide separate, flexible *data access*, *algorithm*, and *recommender* layers to end users. Consequently, users can quickly build and test a custom recommendation model with less effort.
Reference: [Recommendation.jl: Building Recommender Systems in Julia](https://takuti.me/note/recommendation-julia/), an article written by the speaker.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/FFXKCX/
Elm B
Takuya Kitazawa
PUBLISH
9D933F@@pretalx.com
-9D933F
A New Breed of Vehicle Simulation
en
en
20190723T143000
20190723T150000
0.03000
A New Breed of Vehicle Simulation
We’ll see how Julia’s combination of mathy notation, built-in numerical tools, compilation, and metaprogramming opens up new possibilities for creating a simulation environment fit for aircraft, spacecraft, autonomous underwater vehicles, and the like. We’ll examine the special requirements for these types of applications, why elegant solutions have historically been out of reach, and how that picture is beginning to change. Finally, we’ll see how one such simulation environment in Julia has become the backbone of flight algorithm development and testing for a large fleet of autonomous aircraft that deliver life-saving medical supplies in Rwanda and Ghana.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/9D933F/
Elm B
Tucker McClure
PUBLISH
WNWNYK@@pretalx.com
-WNWNYK
Modia3D: Modeling and Simulation of 3D-Systems in Julia
en
en
20190723T150000
20190723T151000
0.01000
Modia3D: Modeling and Simulation of 3D-Systems in Julia
The talk is about modeling and simulating mechanical 3D systems with the Julia package Modia3D.jl. Modia3D initially supports mechanical systems and will be expanded into other domains in the future. The package uses the multiple dispatch and metaprogramming concepts of Julia to implement features of modern game engines and multi-body programs such as component-based design, hierarchical structuring, and closed kinematic loops. The mechanical systems are treated as differential-algebraic equations, which are solved with the variable-step integrator IDA of the Sundials.jl package.
Modia3D performs collision handling with elastic response calculation for convex geometries, or for shapes approximated by a set of convex geometries. In the broad phase, each geometry is approximated by a bounding box; only if the bounding boxes intersect is the Euclidean distance or the penetration depth computed in the narrow phase, using an improved Minkowski Portal Refinement algorithm.
It is planned to combine 3D modeling closely with equation-based modeling. Therefore, the Julia packages Modia3D and Modia need to interact; for example, a joint of a Modia3D system may be driven by a Modia model of an electrical motor and gearbox.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/WNWNYK/
Elm B
Andrea Neumayr
PUBLISH
UPLPQW@@pretalx.com
-UPLPQW
TrajectoryOptimization.jl: A testbed for optimization-based robotic motion planning
en
en
20190723T151000
20190723T152000
0.01000
TrajectoryOptimization.jl: A testbed for optimization-based robotic motion planning
Trajectory optimization is a powerful tool for motion planning, enabling the synthesis of dynamic motion for complex underactuated robotic systems. This general framework can be applied to robots with nonlinear dynamics and constraints where other motion planning paradigms---such as sample-based planning, inverse dynamics, or differential flatness---are impractical or ineffective.
TrajectoryOptimization.jl has been developed for the purpose of collecting and developing state-of-the-art algorithms for trajectory optimization under a single, unified platform that offers the user state-of-the-art performance, an intuitive interface, and versatility. Initial results using a novel algorithm written in Julia already beat previous methods leveraging NLP solvers such as Ipopt and Snopt.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/UPLPQW/
Elm B
Brian Jackson
PUBLISH
G7LXYQ@@pretalx.com
-G7LXYQ
Non-Gaussian State-estimation with JuliaRobotics/Caesar.jl
en
en
20190723T152000
20190723T153000
0.01000
Non-Gaussian State-estimation with JuliaRobotics/Caesar.jl
We are actively using Julia in algorithmic research and development work for robotic navigation. Robotic navigation is generally done by combining data from multiple sensors such as odometers, cameras, inertial measurement units, lidars, GPS, sonar acoustics, etc.
The dream is to build factor-graph-based non-Gaussian state estimation into real-time capable systems. Julia has enabled the development of newer non-Gaussian inference techniques that would otherwise have been near intractable if attempted with older languages. Most SLAM systems today are built in C++ with some Python integration, while others use MATLAB with an eye on later C++ implementations. Switching to Julia has been worth it; our ongoing efforts are to formalize the benefits of Julia's one-language, fast, distributed, high-level numerical syntax with an open-source development model. Our approach includes not only on-board computations but also distributed inference with a cloud server model. JuliaRobotics/Caesar.jl is an umbrella framework alongside dedicated packages such as RoME.jl, IncrementalInference.jl, Arena.jl, ApproxManifoldProducts.jl, and GraffSDK.jl. The JuliaRobotics/Caesar.jl package depends on over 100 other Julia packages, creating challenges with first-run compile times and debugging efforts. Our challenge now is to continue software development, all-round performance improvement, and improved user experience, and to help grow the JuliaRobotics community. Although the JuliaRobotics community is still small, we believe that Julia could become a significant language in robotics. In the meantime, a multi-language interface is in the works too.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/G7LXYQ/
Elm B
Dehann Fourie
Sam Claassens
PUBLISH
H7TZTT@@pretalx.com
-H7TZTT
Solving Delay Differential Equations with Julia
en
en
20190723T154500
20190723T161500
0.03000
Solving Delay Differential Equations with Julia
Time delays are an inherent part of many dynamical systems in different scientific areas such as biology, physiology, chemistry, and control theory, which suggests modeling these systems with delay differential equations (DDEs), i.e., differential equations that include time delays. However, solving DDEs numerically in an efficient way is hard. In my talk I present [DelayDiffEq.jl](https://github.com/JuliaDiffEq/DelayDiffEq.jl), a Julia package for solving DDEs. I show how it integrates into the DifferentialEquations ecosystem and makes use of the large number of numerical algorithms in [OrdinaryDiffEq.jl](https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl) for solving ordinary differential equations.
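The method-of-steps interface can be sketched as follows; a hedged example based on the DelayDiffEq.jl README, assuming the package is installed:

```julia
using DelayDiffEq  # assumes DelayDiffEq.jl is installed

# Scalar DDE  u'(t) = -u(t - 1)  with constant history u(t) = 1 for t <= 0.
h(p, t) = 1.0                           # history function
f(u, h, p, t) = -h(p, t - 1.0)          # right-hand side; note the lagged term
prob = DDEProblem(f, 1.0, h, (0.0, 10.0); constant_lags = [1.0])

# MethodOfSteps wraps an ODE solver from OrdinaryDiffEq.jl for DDE use.
sol = solve(prob, MethodOfSteps(Tsit5()))
```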
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/H7TZTT/
Elm B
David Widmann
PUBLISH
8DTHDK@@pretalx.com
-8DTHDK
Open Source Power System Production Cost Modeling in Julia
en
en
20190723T161500
20190723T164500
0.03000
Open Source Power System Production Cost Modeling in Julia
Production Cost Modeling (PCM) of power systems captures all the costs of operating a fleet of generators. The model performs an hourly chronological security-constrained unit commitment and economic dispatch simulation, minimizing costs while adhering to a wide variety of operating constraints. In this talk, we will cover the basics of Production Cost Modeling and explain how we have implemented it in Julia using JuMP. We will also discuss our experiences using Julia and JuMP, the benefits to our users, and some challenges we faced.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/8DTHDK/
Elm B
Dheepak Krishnamurthy
PUBLISH
QJAUAT@@pretalx.com
-QJAUAT
Scientific AI: Domain Models with Integrated Machine Learning
en
en
20190723T164500
20190723T171500
0.03000
Scientific AI: Domain Models with Integrated Machine Learning
Dynamical models are often interesting due to the high-level qualitative behavior that they display. Differential equation descriptions of fluids accurately predict when drone flight will go unstable, and stochastic evolution models demonstrate how patterns may suddenly emerge from biological chemical reactions. However, utilizing these models in practice requires the ability to understand, predict, and control these outcomes. Traditional nonlinear control methods directly tie the complexity of the simulation to the control optimization process, making it difficult to apply these methods in real-time to highly detailed but computationally expensive models.
In this talk we will show how to decouple the computation time of a model from the ability to predict and control its qualitative behavior through a mixture of differential equation and machine learning techniques. These new methods directly utilize the language-wide differentiable programming provided by Flux.jl to perform automatic differentiation on differential equation models described using DifferentialEquations.jl. We demonstrate an adaptive data generation technique and show that common classification methods from the machine learning literature converge to >99% accuracy for determining qualitative model outcomes directly from the parameters of the dynamical model. Using a modification of methods from Generative Adversarial Networks (GANs), we demonstrate an inversion technique with the ability to predict dynamical parameters that meet user-chosen objectives. This method is demonstrated to be able to determine parameters which constrain predator-prey models to a specific chosen domain and to predict chemical reaction rates that result in Turing patterns for reaction-diffusion partial differential equations. Code examples will be shown and explained to show Julia users directly how to apply these new techniques. Together, these methods are scalable and real-time computational tools for predicting and controlling the relation between dynamical systems and their qualitative outcomes, with many possible applications.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/QJAUAT/
Elm B
Chris Rackauckas
PUBLISH
BXYJ8D@@pretalx.com
-BXYJ8D
HydroPowerModels.jl: A Julia/JuMP Package for Hydrothermal economic dispatch Optimization
en
en
20190723T171500
20190723T172500
0.01000
HydroPowerModels.jl: A Julia/JuMP Package for Hydrothermal economic dispatch Optimization
The hydrothermal dispatch problem is very important for the planning and operation of the electrical system, especially for the Brazilian system. It is an optimization problem in which generator output, energy distribution, and hydro storage management are coordinated in order to minimize the cost of operation. Often, this problem is formulated in the multi-stage stochastic optimization framework, where decisions are taken over multiple periods and in the presence of uncertainty, since generation resources are limited and often shared inter-temporally.
Solving multi-stage stochastic optimization problems is numerically challenging. They are therefore commonly solved by a methodology that approximates the Bellman equation of stochastic dynamic programming with a piecewise linear function, called Stochastic Dual Dynamic Programming (SDDP). This methodology is preferred because it avoids the high dimensionality present in classical stochastic dynamic programming.
The objective of this work is to build an open source tool for Hydrothermal Multistage Steady-State Power Network Optimization solved by Stochastic Dual Dynamic Programming (SDDP). Problem Specifications and Network Formulations are handled by [PowerModels.jl](https://github.com/lanl-ansi/PowerModels.jl). Solution method is handled by [SDDP.jl](https://github.com/odow/SDDP.jl).
The talk will consist of: (i) an overview of the package; (ii) a brief description of the dependent packages and their integration; (iii) a quick example of the package's usage.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/BXYJ8D/
Elm B
Andrew Rosemberg
PUBLISH
JAXM9R@@pretalx.com
-JAXM9R
Modeling in Julia at Exascale for Power Grids
en
en
20190723T172500
20190723T173500
0.01000
Modeling in Julia at Exascale for Power Grids
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/JAXM9R/
Elm B
Michel Schanen
PUBLISH
KAMHYJ@@pretalx.com
-KAMHYJ
Pkg, Project.toml, Manifest.toml and Environments
en
en
20190723T110000
20190723T113000
0.03000
Pkg, Project.toml, Manifest.toml and Environments
Julia's new package manager, Pkg, was released together with version 1.0 of the Julia language. The new package manager is a complete rewrite of the old one, and solves many of the problems observed in the old version. One major feature of the new package manager is the concept of _package environments_, which can be described as independent, sandboxed, sets of packages.
A package environment is represented by a `Project.toml` and `Manifest.toml` file pair. These files keep track of which packages, and which versions, are available in a given environment. Since environments are "cheap", just two files, they can be used liberally. It is often useful to create a new environment for every new coding project, instead of installing packages at the global level. Since the package manager only modifies the current project, e.g. when adding, removing or updating packages, there is no risk of these operations messing up other environments.
The fact that the exact versions of the packages in the environment are recorded means that Julia has reproducibility built-in. As long as the `Project.toml` and `Manifest.toml` file pair is available, it is possible to replicate exactly the same package environment. Typical use cases include replicating the same package environment on a different machine, and going back in time to run some old code that requires old versions of packages.
In this presentation we will discuss how environments work, how they interact with the package manager and Julia's code loading, and how to effectively use them. Hopefully you will be more comfortable working with, and seeing the usefulness of, environments after this presentation.
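The environment workflow described above can be sketched with the Pkg API (the directory name `MyProject` is just an illustrative placeholder):

```julia
using Pkg

Pkg.activate("MyProject")   # use (and create, on first add) MyProject/Project.toml
Pkg.add("Example")          # recorded in Project.toml; exact version pinned in Manifest.toml
Pkg.status()                # list the packages available in this environment

# On another machine, given the same Project.toml/Manifest.toml pair:
# Pkg.activate("MyProject"); Pkg.instantiate()   # recreates the exact same environment
```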
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/KAMHYJ/
Room 349
Fredrik Ekre
PUBLISH
CY3YQB@@pretalx.com
-CY3YQB
FilePaths: File system abstractions and why we need them
en
en
20190723T113000
20190723T114000
0.01000
FilePaths: File system abstractions and why we need them
We'll start by discussing filesystem libraries for other languages (e.g., [pathlib](https://docs.python.org/3/library/pathlib.html) for Python, [Data.FilePath](https://hackage.haskell.org/package/data-filepath-2.2.0.0/docs/Data-FilePath.html) for Haskell, [Paths](https://doc.rust-lang.org/std/path/struct.Path.html) in Rust) and how such abstractions may uniquely benefit from multiple dispatch in Julia.
I’ll review some examples of how we (myself and my colleagues) have used [FilePathsBase.jl](https://github.com/rofinn/FilePathsBase.jl) to simplify our application logic and avoid ambiguous code paths.
Finally, we'll conclude with an open discussion around how these abstractions can be better incorporated into the larger Julia ecosystem.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/CY3YQB/
Room 349
Rory Finnegan
PUBLISH
UUESUW@@pretalx.com
-UUESUW
Ultimate Datetime
en
en
20190723T114000
20190723T115000
0.01000
Ultimate Datetime
Ultimate datetime is a high performance, comprehensive datatype that was developed in C, then integrated into Julia. Ultimate datetime represents datetimes from the Big Bang through the year 100,000,000,000 with attosecond precision. Specifiable precision and uncertainty have been implemented along with a rich set of comparison and arithmetic functions. Leap seconds are handled properly, as is the pre-leap second atomic time period. Local times support the full range of time zones provided in the IANA database, including proper accounting for historical time zones, as well as the varying transitions from the Julian to the Gregorian calendar around the world.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/UUESUW/
Room 349
Jay Dweck
PUBLISH
G9Z3AG@@pretalx.com
-G9Z3AG
Smart House with JuliaBerry
en
en
20190723T115000
20190723T120000
0.01000
Smart House with JuliaBerry
This project is a miniaturised smart house that uses a Raspberry Pi and Julia to automatically run a small model home. It uses a Raspberry Pi with a Sense HAT and Explorer HAT Pro to implement several functions that could be used in a real house: a motion sensor that opens a door; a photoresistor that turns lights on and off; and a Sense HAT taking readings and scrolling through them on an LED matrix. All of these are functions used in real houses, scaled down using the Raspberry Pi and the Julia language.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/G9Z3AG/
Room 349
Ahan Sengupta
PUBLISH
JWEAN3@@pretalx.com
-JWEAN3
MLJ - Machine Learning in Julia
en
en
20190723T143000
20190723T150000
0.03000
MLJ - Machine Learning in Julia
MLJ, an open-source machine learning toolbox written in Julia, has evolved from an early proof of concept to a functioning, well-featured prototype. Features include:
1. A flexible API for complex model composition, such as stacking.
2. A repository of externally implemented model metadata, for facilitating composite model design and for matching models to problems through an MLR-like task interface.
3. Systematic tuning and benchmarking of models having possibly nested hyperparameters.
4. A unified interface for handling probabilistic predictors and multivariate targets.
5. Agnostic data containers.
6. Careful handling of categorical data types.
In addition to demonstrating some of these features, we discuss relationships with other Julia projects in the data science domain.
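The machine/fit!/predict workflow at the heart of MLJ can be sketched as follows; note that the model-loading syntax has varied across MLJ versions, so treat the `@load` line as an assumption based on current MLJ documentation:

```julia
using MLJ  # assumes MLJ.jl plus a model provider (here DecisionTree.jl) are installed

X, y = @load_iris                                    # small demo dataset shipped with MLJ
Tree = @load DecisionTreeClassifier pkg=DecisionTree # load the model type
mach = machine(Tree(), X, y)                         # bind model and data into a machine
fit!(mach)
yhat = predict(mach, X)                              # probabilistic predictions (feature 4)
```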
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/JWEAN3/
Room 349
Anthony Blaom
PUBLISH
7NRBXW@@pretalx.com
-7NRBXW
Merging machine learning and econometric algorithms to improve feature selection with Julia
en
en
20190723T150000
20190723T151000
0.01000
Merging machine learning and econometric algorithms to improve feature selection with Julia
Applied scientific research increasingly uses Fat Data (e.g. a large number of explanatory variables relative to the number of observations) for feature selection purposes. The previous version of our all-subset-regression Julia package was unable to deal with such databases. Existing ML packages (e.g. [Lasso.jl](https://github.com/JuliaStats/Lasso.jl)) overcome this problem at a cost in terms of statistical inference, coefficient robustness and feature selection optimality (because ML algorithms focus on prediction, not on explanation or causal prediction). The new GlobalSearchRegression.jl version combines regularization pre-processing with all-subset-regression algorithms to work efficiently with Fat Data without losing econometric (EC) strengths in terms of sensitivity analysis, residual properties and coefficient robustness.
In the first 3 minutes, our Lightning talk will discuss the new capabilities of GlobalSearchRegression.jl. We will focus on the main advantages of merging ML and EC algorithms for feature selection when the number of potential covariates is relatively large: ML provides efficiency and sub-sample uncertainty assessment, while EC guarantees in-sample and out-of-sample optimality with covariate uncertainty assessment.
Then, we will show benchmarks of the new GlobalSearchRegression.jl package against R and Stata counterparts, as well as against its own original version. Our updated ML-EC algorithm written in Julia is up to 100 times faster than similar R or Stata programs, and allows working with hundreds of potential covariates (while the upper limit for the original GlobalSearchRegression.jl version was 28).
Finally, we will use the last 4 minutes for a live hands-on example to show the Graphical User Interface, execute the ML-EC algorithm with Fat Data, and analyze the main results using new LaTeX/PDF output capabilities.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/7NRBXW/
Room 349
Demian Panigo
Adán Mauri Ungaro
Nicolás Monzón
Valentin Mari
PUBLISH
8T3FVZ@@pretalx.com
-8T3FVZ
Let's Play Hanabi!
en
en
20190723T151000
20190723T152000
0.01000
Let's Play Hanabi!
Hanabi is a card game for two to five players. What makes Hanabi special is that, unlike most card games, players can only see their partners' hands, and not their own. In this talk, I will focus on the following three parts:
1. A short introduction to Hanabi and how to implement the game in a client-server style in Julia.
2. The challenges of Hanabi and some typical approaches.
3. The implementation details of some state-of-the-art algorithms.
I hope this talk can arouse the interest of the audience and get more people involved in the reinforcement learning field in Julia.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/8T3FVZ/
Room 349
Jun Tian
PUBLISH
9GAZTS@@pretalx.com
-9GAZTS
TSML (Time Series Machine Learning)
en
en
20190723T152000
20190723T153000
0.01000
TSML (Time Series Machine Learning)
Over the past years, the industrial sector has seen many innovations brought about by automation. Inherent in this automation is the installation of sensor networks for status monitoring and data collection. One of the major challenges in these data-rich environments is how to extract and exploit information from these large volumes of data to detect anomalies, discover patterns to reduce downtimes and manufacturing errors, reduce energy usage, etc.
To address these issues, we developed the TSML package. It leverages AI and ML libraries from Scikit-learn, Caret, and Julia as building blocks for processing huge amounts of industrial time series data. It has the following characteristics:
- TS data type clustering/classification for automatic data discovery
- TS aggregation based on date/time interval
- TS imputation based on symmetric Nearest Neighbors
- TS statistical metrics for data quality assessment
- TS ML wrapper with more than 100 libraries from Caret, Scikit-learn, and Julia
- TS date/value matrix conversion of 1-D TS using sliding windows for ML input
- Common API wrappers for ML libs from JuliaML, PyCall, and RCall
- Pipeline API allows high-level description of the processing workflow
- Specific cleaning/normalization workflow based on data type
- Automatic selection of optimised ML model
- Automatic segmentation of time-series data into matrix form for ML training and prediction
- Easily extensible architecture by using just two main interfaces: fit and transform
- Meta-ensembles for robust prediction
- Support for distributed computation, for scalability, and speed
TSML uses a pipeline which iteratively calls the fit and transform families of functions, relying on multiple dispatch to select the correct algorithm. Machine learning functions in TSML are wrappers around the corresponding Scikit-learn, Caret, and native Julia ML libraries. There are more than a hundred classifiers and regression functions available through a common API.
Full TSML documentation: https://ibm.github.io/TSML.jl/stable/
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/9GAZTS/
Room 349
Dr. Paulito Palmes, PhD
PUBLISH
LGHLC3@@pretalx.com
-LGHLC3
Porting a massively parallel Multi-GPU application to Julia: a 3-D nonlinear multi-physics flow solver
en
en
20190723T154500
20190723T161500
0.03000
Porting a massively parallel Multi-GPU application to Julia: a 3-D nonlinear multi-physics flow solver
We showcase the port to Julia of a massively parallel Multi-GPU solver for spontaneous nonlinear multi-physics flow localization in 3-D. The original solver is itself the result of a translation from a Matlab prototype to CUDA C and MPI. Our contribution is an illustration of Julia solving "the two language problem": the Matlab prototype and the CUDA C + MPI production code are being replaced by a single Julia code that will serve both further prototyping and production. The solver's parallel and matrix-free design enables a short time to solution and is applicable to a wide variety of coupled and nonlinear systems of partial differential equations in 3-D. The employed stencil-based iterative method optimally suits both shared and distributed memory parallelization. As reference, the original Multi-GPU solver achieved high performance and a nearly ideal parallel efficiency on up to 5120 NVIDIA Tesla P100 GPUs on the hybrid Cray XC-50 "Piz Daint" supercomputer at the Swiss National Supercomputing Centre, CSCS. We report the first performance and scaling results obtained with the Julia port. We additionally present our porting approach and discuss the related challenges.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/LGHLC3/
Room 349
Ludovic Räss
PUBLISH
8ANSVY@@pretalx.com
-8ANSVY
XLA.jl: Julia on TPUs
en
en
20190723T161500
20190723T164500
0.03000
XLA.jl: Julia on TPUs
Machine Learning workloads continue to require greater and greater compute capability, spawning the development of multiple generations of specialized hardware designed to eke out ever greater efficiency in training and inference workloads. This talk will explore the state of Julia on this hardware platform, showcasing some of the impressive speedups the hardware can provide, alongside some of the restrictions the hardware model imposes upon the dynamic nature of the Julia language.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/8ANSVY/
Room 349
Elliot Saba
Keno Fischer
PUBLISH
3YBZLC@@pretalx.com
-3YBZLC
Targeting Accelerators with MLIR.jl
en
en
20190723T164500
20190723T165500
0.01000
Targeting Accelerators with MLIR.jl
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/3YBZLC/
Room 349
James Bradbury
PUBLISH
YDVQKM@@pretalx.com
-YDVQKM
SIMD and cache-aware sorting with ChipSort.jl
en
en
20190723T165500
20190723T170500
0.01000
SIMD and cache-aware sorting with ChipSort.jl
To attain the best performance with a modern computer, programmers are required to exploit thread- and instruction-level parallelism and make sure memory access follows suitable patterns. ChipSort.jl is a sorting package that implements techniques exploiting parallelism and memory locality. It uses SIMD instructions to implement basic operations such as sorting networks, merging networks, and in-place matrix transpose. These operations can be used to sort large arrays using merge sort, exploiting SIMD and cache memory for improved performance. The implementation is largely based on unique Julia features such as generated functions and parametric methods, allowing Julia to generate optimized custom machine code for different architectures from the same high-level Julia code. Experiments were made with both Intel (AVX2 and AVX-512) and ARM (NEON) processors, achieving speedups of 2 up to 17 times in different benchmarks.
Project documentation: https://nlw0.github.io/ChipSort.jl
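The compare-exchange primitive behind such sorting networks can be sketched in plain Julia; branch-free min/max pairs like these are what map onto SIMD instructions. This is a simplified illustration, not ChipSort.jl's actual code:

```julia
# Branch-free compare-exchange: returns the pair in sorted order.
@inline cswap(a, b) = (min(a, b), max(a, b))

# Optimal 5-comparator sorting network for 4 values.
function sort4(a, b, c, d)
    a, b = cswap(a, b)
    c, d = cswap(c, d)
    a, c = cswap(a, c)
    b, d = cswap(b, d)
    b, c = cswap(b, c)
    return (a, b, c, d)
end

sort4(3, 1, 4, 2)  # (1, 2, 3, 4)
```

Because every comparator is a data-independent min/max, the same network sorts lanes of a SIMD register with no branches.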
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/YDVQKM/
Room 349
Nicolau Leal Werneck
PUBLISH
BXVHJV@@pretalx.com
-BXVHJV
Generic Sparse Data Structures on GPUs
en
en
20190723T170500
20190723T171500
0.01000
Generic Sparse Data Structures on GPUs
Sparse matrices arising from structured grids generally possess rich structure, which is amenable to GPU parallelism. We implemented the DIA format, one of the most primitive sparse matrix storage formats, in Julia. BLAS routines for the DIA format are implemented on the GPU using `CUDAnative.jl` and `CuArrays.jl`. We also present a Geometric Multigrid (GMG) preconditioner, implemented on the GPU using the DIA format, and solve large ill-conditioned systems. Julia allows users to write generic code, which lets us exploit the blocked structure that arises from higher degrees of freedom. We benchmark and verify against the SPE10 problem, a standard oil reservoir simulation benchmark.
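As a rough illustration of the idea (not the package's actual implementation), a DIA matrix stores one padded vector per non-zero diagonal, and a matrix-vector product walks each diagonal with regular, coalescing-friendly accesses:

```julia
# Hypothetical minimal DIA (diagonal) storage: offsets[k] is the diagonal
# offset (0 = main), and diags[i, k] holds the entry A[i, i + offsets[k]].
struct DiaMatrix{T}
    n::Int
    offsets::Vector{Int}
    diags::Matrix{T}   # n × ndiags, padded with unused entries at the ends
end

function spmv(A::DiaMatrix{T}, x::Vector{T}) where {T}
    y = zeros(T, A.n)
    for (k, off) in enumerate(A.offsets)
        # Valid rows satisfy 1 <= i <= n and 1 <= i + off <= n.
        for i in max(1, 1 - off):min(A.n, A.n - off)
            y[i] += A.diags[i, k] * x[i + off]
        end
    end
    return y
end

# Tridiagonal example: 2 on the main diagonal, -1 on the off-diagonals.
n = 5
diags = hcat(fill(-1.0, n), fill(2.0, n), fill(-1.0, n))
A = DiaMatrix(n, [-1, 0, 1], diags)
spmv(A, ones(n))  # [1.0, 0.0, 0.0, 0.0, 1.0]
```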
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/BXVHJV/
Room 349
Sungwoo Jeong
Ranjan Anantharaman
PUBLISH
AADAJW@@pretalx.com
-AADAJW
Array Data Distribution with ArrayChannels.jl
en
en
20190723T171500
20190723T172500
0.01000
Array Data Distribution with ArrayChannels.jl
We introduce a new Julia library, ArrayChannels.jl, encapsulating several data-parallelism patterns in which serialisation of arrays between processes occurs in-place. This provides better handling of the processor cache while retaining the synchronous semantics of Julia's RemoteChannel constructs.
We then evaluate the performance of the library by comparison to MPI and standard Julia on a number of microbenchmarks and HPC kernels.
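For contrast, the standard-library pattern that ArrayChannels.jl refines looks like this; with a plain RemoteChannel, each message between workers is serialised into a freshly allocated buffer rather than deserialised in-place:

```julia
using Distributed

# A channel holding Float64 vectors, hosted on the current process.
rc = RemoteChannel(() -> Channel{Vector{Float64}}(1))
put!(rc, [1.0, 2.0, 3.0])   # between workers, this serialises the array
buf = take!(rc)             # and materialises it in a new buffer on receipt
```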
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/AADAJW/
Room 349
Rohan McLure
PUBLISH
LR9FW9@@pretalx.com
-LR9FW9
High-Performance Portfolio Risk Aggregation
en
en
20190723T172500
20190723T173500
0.01000
High-Performance Portfolio Risk Aggregation
Western Asset is a fixed income asset manager. We recently replaced the portfolio risk aggregation process with a Julia implementation, and the run-time performance has improved tremendously. This talk will focus on system architecture, performance optimization, and deployment to a Docker swarm environment.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/LR9FW9/
Room 349
Tom Kwong
PUBLISH
F8BBQW@@pretalx.com
-F8BBQW
Opening Remarks
en
en
20190723T083000
20190723T084000
0.01000
Opening Remarks
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/F8BBQW/
NS Room 130
JuliaCon Committee
PUBLISH
PMLSD9@@pretalx.com
-PMLSD9
Keynote: Professor Madeleine Udell
en
en
20190723T084000
20190723T092500
0.04500
Keynote: Professor Madeleine Udell
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/PMLSD9/
NS Room 130
Professor Madeleine Udell
PUBLISH
BB9VQZ@@pretalx.com
-BB9VQZ
Debugging code with JuliaInterpreter
en
en
20190723T093000
20190723T100000
0.03000
Debugging code with JuliaInterpreter
A Julia debugger, with support for breakpoints, trapping errors, and inspection of local variables, has been long desired in the Julia community. We describe an approach based on a new interpreter for Julia code, JuliaInterpreter.jl. JuliaInterpreter is able to evaluate Julia’s [lowered representation](https://docs.julialang.org/en/latest/devdocs/ast/) statement-by-statement, and thus serves as the foundation for inspecting and manipulating intermediate results. Compared with previous tools, JuliaInterpreter offers several new features, such as improved performance, the ability to evaluate top-level code, built-in support for breakpoints, and the ability to switch flexibly between compiled and interpreted evaluation. JuliaInterpreter can be used directly as a standalone interpreter, and this makes it interesting for other purposes such as exploring tradeoffs between compile-time and run-time efficiency.
To enable JuliaInterpreter’s use as a debugger, we developed three different front-ends. One is the Juno IDE, which supports graphical management of breakpoints and stepping through code in a manner integrated with its editing capabilities. In addition to Juno, there are two different console-based (REPL) interfaces. Debugger.jl offers the most powerful control over stepping, whereas Rebugger.jl emulates features of a graphical client. We will demonstrate these tools as means to access some of JuliaInterpreter’s capabilities.
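The standalone and debugger entry points can be sketched as follows (assuming the packages are installed; the debugger session itself is interactive):

```julia
using JuliaInterpreter
@interpret sum([1, 2, 3])   # evaluate through the interpreter instead of compiled code

using Debugger
@enter gcd(48, 18)          # step through gcd at the REPL debugger prompt
# At the prompt: `n` steps to the next line, `c` continues, `q` quits.
```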
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/BB9VQZ/
NS Room 130
Tim Holy
Sebastian Pfitzner
Kristoffer Carlsson
PUBLISH
LFGLXC@@pretalx.com
-LFGLXC
Sponsor Address: Intel
en
en
20190723T100000
20190723T100500
0.00500
Sponsor Address: Intel
PUBLIC
CONFIRMED
Sponsor's Address
https://pretalx.com/juliacon2019/talk/LFGLXC/
NS Room 130
Paul Petersen
PUBLISH
9NK3HY@@pretalx.com
-9NK3HY
Julia Survey Results
en
en
20190723T100500
20190723T101500
0.01000
Julia Survey Results
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/9NK3HY/
NS Room 130
Viral B. Shah
PUBLISH
CQM3RC@@pretalx.com
-CQM3RC
Sponsor Address: Relational AI
en
en
20190723T101500
20190723T102000
0.00500
Sponsor Address: Relational AI
PUBLIC
CONFIRMED
Sponsor's Address
https://pretalx.com/juliacon2019/talk/CQM3RC/
NS Room 130
Nathan Daly
PUBLISH
AY9C9Z@@pretalx.com
-AY9C9Z
Keynote: Dr Ted Rieger
en
en
20190723T133000
20190723T141500
0.04500
Keynote: Dr Ted Rieger
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/AY9C9Z/
NS Room 130
Dr Ted Rieger
PUBLISH
Q8KE7A@@pretalx.com
-Q8KE7A
Dynamical Modeling in Julia
en
en
20190723T110000
20190723T120000
1.00000
Dynamical Modeling in Julia
Many different aspects of dynamical modeling in Julia have seen a recent boom in popularity. A lot of package development focus has been given to the tooling for simulating dynamical models, such as the differential equation solvers of DifferentialEquations.jl and the relevant underlying pieces like IterativeSolvers.jl and NLsolve.jl. In addition, a community of domain-specific modeling tools such as DynamicalSystems.jl, Modia.jl, PuMaS.jl, QuantumOptics.jl, and more than can be listed here have all built their own user bases.
The purpose of this BoF is to gather the developers of these related tools to discuss the current state of the ecosystem and develop plans and priorities for next steps. A quick overview of the package space and its recent developments will be given to frame the conversation, with most of the time dedicated to discussion. Possible topics include (but are not limited to) understanding the domains most in need of new and more performant solvers, the utilization of parallelism (multithreading, multiprocessing, and GPU), incorporating symbolic tooling such as ModelingToolkit.jl, and the commonalities of analysis tooling (such as parameter estimation, neural network integration, and uncertainty propagation). We invite developers within the community to share their feedback and help guide our next moves within the package space.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/Q8KE7A/
BoF: Room 353
Chris Rackauckas
PUBLISH
GPZYS7@@pretalx.com
-GPZYS7
JuliaDB Code and Chat
en
en
20190723T143000
20190723T153000
1.00000
JuliaDB Code and Chat
Possible topics of conversation/things to work on:
- JuliaDB wishlist
- Utilities for feature engineering/other ML tasks
- Fixing bugs
- Creating benchmarks
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/GPZYS7/
BoF: Room 353
Josh Day
PUBLISH
TFFARR@@pretalx.com
-TFFARR
Julia and NumFocus, a discussion of how money works
en
en
20190723T154500
20190723T163500
0.05000
Julia and NumFocus, a discussion of how money works
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/TFFARR/
BoF: Room 353
Viral B. Shah
PUBLISH
APPUNN@@pretalx.com
-APPUNN
Cassette and company -- Dynamic compiler passes
en
en
20190723T163500
20190723T173500
1.00000
Cassette and company -- Dynamic compiler passes
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/APPUNN/
BoF: Room 353
Valentin Churavy
Jarrett Revels
PUBLISH
JRCQHU@@pretalx.com
-JRCQHU
Breakfast
en
en
20190723T073000
20190723T083000
1.00000
Breakfast
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/JRCQHU/
Other
PUBLISH
BY7ZUX@@pretalx.com
-BY7ZUX
Morning break
en
en
20190723T102000
20190723T110000
0.04000
Morning break
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/BY7ZUX/
Other
PUBLISH
ESLAXZ@@pretalx.com
-ESLAXZ
Lunch
en
en
20190723T120500
20190723T132000
1.01500
Lunch
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/ESLAXZ/
Other
PUBLISH
BZSMMV@@pretalx.com
-BZSMMV
Short break
en
en
20190723T153000
20190723T154500
0.01500
Short break
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/BZSMMV/
Other
PUBLISH
CNQ9JB@@pretalx.com
-CNQ9JB
Conference Dinner and Inner Harbor Cruise
en
en
20190723T190000
20190723T213000
2.03000
Conference Dinner and Inner Harbor Cruise
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/CNQ9JB/
Other
PUBLISH
ARX8CY@@pretalx.com
-ARX8CY
Why writing C interfaces in Julia is so easy*
en
en
20190724T110000
20190724T113000
0.03000
Why writing C interfaces in Julia is so easy*
This talk is titled "Why writing C interfaces in Julia is so easy", but as anyone who has written interfaces to a C ABI will know, interfacing with the C ABI is never easy. There can be segfaults, memory leaks, uninitialized memory issues and a host of other challenges to deal with when working through this process. In this talk, I'll briefly describe how the C ABI works, and then describe how `ccall` can be used. I'll also go through many best practices that I've used to ensure a nice clean Julian interface to a shared library: best practices for writing interfaces to a large number of functions, how you can use Julia's type system together with `unsafe_convert` to guarantee that users of your Julia library don't accidentally pass the wrong pointer to a function, and some general advice for programmers interested in writing their own libraries in a lower-level language (such as C, C++, Rust, Nim, etc.) and ensuring that they can be provided as pre-compiled binaries for Julia packages (using BinaryBuilder and alternatives).
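A minimal sketch of the pattern: a `ccall` against a symbol already loaded in the process, plus a hypothetical typed handle (the `Handle` type and `close_db` function are illustrative, not from a real library) showing how the type system plus `unsafe_convert` can prevent pointer mix-ups:

```julia
# Call C's strlen; the symbol resolves in the already-loaded C runtime.
len = ccall(:strlen, Csize_t, (Cstring,), "JuliaCon")   # == 8

# Hypothetical typed wrapper: Handle{:db} and Handle{:socket} are distinct
# types, so a function written for one cannot be handed the other by accident.
mutable struct Handle{Kind}
    ptr::Ptr{Cvoid}
end
Base.unsafe_convert(::Type{Ptr{Cvoid}}, h::Handle) = h.ptr

close_db(h::Handle{:db}) = nothing  # would ccall the library's close function here
```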
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/ARX8CY/
Elm A
Dheepak Krishnamurthy
PUBLISH
VANP8R@@pretalx.com
-VANP8R
Backticks and the Glorious Command Literal
en
en
20190724T113000
20190724T114000
0.01000
Backticks and the Glorious Command Literal
Like Perl, Ruby and Bash, Julia offers backtick syntax as an abstraction for dealing with processes. However, while other languages use this syntax to invoke a shell and grab the output, backticks in Julia invoke a mini-parser for Julia's own safe version of a shell language, and they evaluate to a command literal just waiting to be run, never invoking a shell.
This talk will go over the details of how command literals are parsed in Julia, what the resulting object looks like, and why the Julia approach to commands is a significant improvement over what shell wrappers in other languages provide (including and especially the POSIX shell itself). Of course, it will also include many examples of how to use command literals effectively.
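As a small taste of the behavior described above, a backtick expression evaluates to a `Cmd` value without running anything, and interpolation is safe by construction (each interpolated value becomes a single argument, no shell involved):

```julia
# Backticks build a Cmd object; nothing has executed yet.
cmd = `echo hello world`
@assert cmd isa Cmd

# Running the command and capturing its output:
out = read(cmd, String)

# Interpolated values become single arguments, even with spaces --
# there is no shell to word-split or glob them.
name = "two words"
cmd2 = `echo $name`
@assert cmd2.exec == ["echo", "two words"]
```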
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/VANP8R/
Elm A
Aaron Christianson
PUBLISH
FXU7DC@@pretalx.com
-FXU7DC
Re-designing Optim
en
en
20190724T114000
20190724T115000
0.01000
Re-designing Optim
Optim, NLsolve and LsqFit are three packages in the JuliaNLSolvers organization. They have all been around from the early Julia days, and serve some basic scientific computing needs such as minimizing a function, fitting a curve and solving a system of equations. Their age means that they are widely used and well-known. However, their age also shows in much of the design and abstractions, which predate many of the unique and powerful features and packages in Julia.
Based on my own experience as a maintainer of these packages, and learning from the discussions on mailing lists, forums, and github, I will talk about the failures of the three packages, and how a complete rewrite of the packages is the best way forward. Hopefully, my reflections and experiences can help future package writers avoid making the same mistakes over and over again.
The talk won't be heavy on the mathematical details, but will explore important things to design for from the start. Users will eventually request many of the features, but they might be difficult to retrofit, so come join the quest to find the best ways of satisfying the greedy Julia users and abusers out there!
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/FXU7DC/
Elm A
Patrick Kofod Mogensen
PUBLISH
AGNHPU@@pretalx.com
-AGNHPU
Towards Faster Sorting and Group-by operations
en
en
20190724T115000
20190724T120000
0.01000
Towards Faster Sorting and Group-by operations
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/AGNHPU/
Elm A
Dai ZJ
PUBLISH
QFEHQS@@pretalx.com
-QFEHQS
SemanticModels.jl: not just another modeling framework
en
en
20190724T143000
20190724T150000
0.03000
SemanticModels.jl: not just another modeling framework
SemanticModels is a system for extracting semantic information from scientific code and reconciling it with conceptual descriptions to automate machine understanding of scientific ideas. We represent the connections between elements of code (variables, values, functions, and expressions) and elements of scientific understanding (concepts, terms, relations), to facilitate several metamodeling tasks, including model augmentation, synthesis, and validation. We show how SemanticModels can be used to augment scientific workflows in the epidemiological domain.
SemanticModels builds on such great Julia packages as Cassette, Flux, DifferentialEquations, and LightGraphs. It conducts static and dynamic analysis of programs to increase the productivity of scientist-developers. SemanticModels is a complementary technology to modeling languages such as ModelingToolkit.jl and other DSLs used within the DiffEq ecosystem.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/QFEHQS/
Elm A
James Fairbanks
Christine R Herlihy
PUBLISH
MHUAX7@@pretalx.com
-MHUAX7
OmniSci.jl: Bringing the open-source, GPU-accelerated relational database to Julia
en
en
20190724T150000
20190724T153000
0.03000
OmniSci.jl: Bringing the open-source, GPU-accelerated relational database to Julia
For this talk, I will highlight the work-to-date in bringing the functionality of OmniSci to Julia, and how all of the work of others on packages for geospatial, decimal support, Thrift and Apache Arrow make OmniSci.jl possible. Specifically, I will discuss:
* Why OmniSci and Julia are a great fit (performance, LLVM)
* Connecting to OmniSci from Julia
* Performing basic queries in millisecond speed
* Future work towards end-to-end analytics on the GPU in Julia
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/MHUAX7/
Elm A
Randy Zwitch
PUBLISH
QZBKAU@@pretalx.com
-QZBKAU
Polynomial and Moment Optimization in Julia and JuMP
en
en
20190724T154500
20190724T174500
2.00000
Polynomial and Moment Optimization in Julia and JuMP
Polynomial and moment optimization problems are infinite dimensional optimization problems that can model a wide range of problems such as shape-constrained polynomial regression, optimal control of dynamical systems, region of attraction, polynomial matrix decomposition, smooth maximum-likelihood density estimation, AC power systems, experimental design, and computation of Nash equilibria. In this minisymposium we show how the [Julia](https://julialang.org) and [JuMP](https://github.com/JuliaOpt/JuMP.jl) ecosystems are particularly well suited for constructing and solving these problems. In particular, we show how the JuMP extensions [SumOfSquares](https://github.com/JuliaOpt/SumOfSquares.jl)/[PolyJuMP](https://github.com/JuliaOpt/PolyJuMP.jl) allow for an effortless construction of these problems and how they provide a flexible and customizable building block for additional packages such as JuliaMoments. We also show how various features of the Julia programming language are used in the state-of-the-art solvers Hypatia.jl and Aspasia.jl. Finally, we showcase specific uses of these tools for applications in engineering and statistics.
PUBLIC
CONFIRMED
Minisymposia / Extended Presentation
https://pretalx.com/juliacon2019/talk/QZBKAU/
Elm A
Juan Pablo Vielma
Lea Kapelevich
Chris Coey
Benoît Legat
Tillmann Weisser
PUBLISH
M8UDFK@@pretalx.com
-M8UDFK
Probabilistic Biostatistics: Adventures with Julia from Code to Clinic
en
en
20190724T110000
20190724T113000
0.03000
Probabilistic Biostatistics: Adventures with Julia from Code to Clinic
We know that Julia solves the “two-language problem”: it is both fast and efficient (performance), and easy to use (user friendliness). Using Julia combined with the Bayesian MCMC machinery can solve what we call “the two field problem” in clinical trials, which is that clinical researchers need expertise in more than one field.
Medical research - including clinical trials - is frequently conducted by physician researchers who have limited training in inferential statistics and computer programming. Typically, clinical research teams will have a biostatistician, though this may be an MS level individual who performs pre-specified “off the shelf” analyses, and generally is not someone well-versed in Bayesian inferential tools. The prevalence of “five percentitus,” i.e. looking only for and reporting p-values that are “statistically significant” (p<0.05), testifies to this fact. The advances in computing power and capabilities in the last several decades, along with the subsequent developments in Bayesian computational methods, are only just beginning to have an impact on this.
As those conducting and funding clinical RCTs recognize the high costs of these studies (e.g., medication expense, time required, and potential exposure of patients to ineffective treatments), there has been greater enthusiasm for (1) improving statistical analytic methods for RCTs, and (2) using evidence-based methods to examine existing naturalistically-collected clinical data to inform clinical practice without the need for RCTs. These approaches require far greater statistical and programming knowledge and sophistication from users. Thus, there is an urgent need to provide statistical tools to clinician-researchers that are intuitive and easy to use, yet sophisticated and powerful enough “under the hood” to answer questions that simpler methods cannot.
The “Bayesian machinery” of Markov chain Monte Carlo (MCMC) methods together with Julia offer a solution to this “two-field problem”. They enable exact small sample inference and hypothesis testing for complex models without requiring the restrictive assumptions necessary to obtain analytical tractability (performance), and facilitate the analysis of complex models with basic statistical concepts: frequency distributions, density plots, means, medians, modes, standard deviations, quantiles, and posterior odds (user friendliness).
The talk will demonstrate application of this approach using examples from our own research that illustrate our experiences with Bayesian inferential methods for clinical research using Julia. [the number and detail of examples will be modified to suit the length of the talk].
• Reevaluating the evidence from previously conducted RCTs.
• Analysis of abandoned trials.
• Joint evaluation of tolerability and efficacy in RCTs.
• Bayesian hierarchical modeling for meta-analysis evaluating adverse events (“side effects”) in trial participants,
and examining the difference between industry and federally sponsored randomized controlled trials.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/M8UDFK/
Elm B
Jeff Mills
PUBLISH
3JUN8D@@pretalx.com
-3JUN8D
Slow images, fast numbers: Using Julia in biomedical imaging and beyond
en
en
20190724T113000
20190724T114000
0.01000
Slow images, fast numbers: Using Julia in biomedical imaging and beyond
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/3JUN8D/
Elm B
Virginia Spanoudaki
PUBLISH
U9XTE7@@pretalx.com
-U9XTE7
Brain Tumour Classification with Julia
en
en
20190724T114000
20190724T115000
0.01000
Brain Tumour Classification with Julia
In this study, I have taken up a multi-class classification problem in order to distinguish four types of brain tumours from each other, in particular, medulloblastoma, malignant glioma, Atypical Teratoid Rhabdoid Tumor (ATRT), and normal cerebellar tissue. The dataset describes a few thousand genes, and their numerical levels of expression in each tumour sample. The aim of the study is to predict a tumour class, given the gene expression data for that tumour. The insights from this study will be particularly useful, especially for tumours like ATRT which are difficult to diagnose. Survival rates of such types of cancer are considerably higher with an early correct diagnosis.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/U9XTE7/
Elm B
Amita Varma
PUBLISH
HBBG8N@@pretalx.com
-HBBG8N
Mining Imbalanced Big Data with Julia
en
en
20190724T115000
20190724T120000
0.01000
Mining Imbalanced Big Data with Julia
In this era of big data, classifying imbalanced real-life data in supervised learning is a challenging research issue. The standard data sampling methods, under-sampling and over-sampling, have several limitations when dealing with big data. Typically, the under-sampling approach removes data points from the majority-class instances, while the over-sampling approach generates artificial minority-class instances to balance the data. However, we may lose informative instances with under-sampling, and over-sampling can cause overfitting. In this talk, we present a new cluster-based under-sampling approach combined with ensemble learning (e.g. a random forest classifier) for the classification of imbalanced data, which we implemented in Julia. We have collected real illegal-money-transaction telecom fraud data, which is highly imbalanced, with only 8,213 minority-class instances among 6,362,620 instances. The proposed method bifurcates the data into majority-class and minority-class instances. It then clusters the majority-class instances into several clusters and takes a set of instances from each cluster to create several sub-balanced datasets. Finally, a number of classifiers are trained on these balanced datasets, and a majority-voting technique is applied to classify unknown/new instances. We have tested the proposed method on a separate test dataset and achieved 97% accuracy.
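The sampling step of the approach above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the cluster `assignments` would come from any clustering step (e.g. k-means over the majority class), and all names are hypothetical.

```julia
using Random

# Draw an under-sampled set of majority-class indices, taking roughly equal
# numbers from each cluster so the result is about the size of the minority
# class. `majority_idx` are row indices; `assignments` their cluster labels.
function balanced_subset(majority_idx::Vector{Int}, assignments::Vector{Int},
                         n_minority::Int; rng=Random.default_rng())
    clusters = unique(assignments)
    per_cluster = max(1, n_minority ÷ length(clusters))
    chosen = Int[]
    for c in clusters
        members = majority_idx[assignments .== c]
        take = min(per_cluster, length(members))
        append!(chosen, shuffle(rng, members)[1:take])
    end
    return chosen   # indices of an under-sampled, cluster-representative majority class
end
```

Repeating this draw several times yields the several sub-balanced datasets on which the ensemble of classifiers is trained before majority voting.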
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/HBBG8N/
Elm B
Dewan Md. Farid
Swakkhar Shatabda
PUBLISH
HLLDQT@@pretalx.com
-HLLDQT
DataKnots.jl - an extensible, practical and coherent algebra of query combinators
en
en
20190724T143000
20190724T150000
0.03000
DataKnots.jl - an extensible, practical and coherent algebra of query combinators
[DataKnots](https://github.com/rbt-lang/DataKnots.jl) implements an algebraic query interface of [Query Combinators](https://arxiv.org/abs/1702.08409). This algebra’s elements, or queries, represent relationships among class entities and data types. This algebra’s operations, or combinators, are applied to construct query expressions.
We seek to prove that this query algebra has significant advantages over the state of the art:
* DataKnots is a practical alternative to SQL with a declarative syntax; this makes it suitable for use by domain experts.
* DataKnots' data model handles nested and recursive structures (unlike DataFrames or SQL); this makes it suitable for working with CSV, JSON, XML, and SQL databases.
* DataKnots has a formal semantic model based upon monadic composition; this makes it easy to reason about the structure and interpretation of queries.
* DataKnots is a combinator algebra (like XPath but unlike LINQ or SQL); this makes it easier to assemble queries dynamically.
* DataKnots is fully extensible with Julia; this makes it possible to specialize it into various domain specific query languages.
This talk will provide a conceptual introduction to DataKnots.jl with applications in medical informatics.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/HLLDQT/
Elm B
Clark C. Evans
PUBLISH
QYG3BZ@@pretalx.com
-QYG3BZ
Queryverse - Under the Hood
en
en
20190724T150000
20190724T153000
0.03000
Queryverse - Under the Hood
This talk will start with a quick end-to-end data science example that exercises all parts of the [Queryverse]( https://www.queryverse.org/) (file IO, data manipulation and plotting). I will then briefly introduce a number of new features that were added over the course of the last year (a native Queryverse table type, various new tabular query operators, some new UI tools and the fastest Julia CSV parsing). The bulk of the talk will center on the internal design of [Queryverse]( https://www.queryverse.org/). Topics will include the monadic design of [Query.jl]( https://github.com/queryverse/Query.jl) (inherited from LINQ) that allows us to easily bridge the tabular world with many other julia data structures, the design principles behind [TableTraits.jl]( https://github.com/queryverse/TableTraits.jl) and how it manages to combine extreme simplicity with great performance, the underlying architecture in [Query.jl]( https://github.com/queryverse/Query.jl) that allows full query analysis, rewrites and optimization, and the engineering principles (in terms of backwards compatibility and testing infrastructure) that drive the [Queryverse]( https://www.queryverse.org/).
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/QYG3BZ/
Elm B
David Anthoff
PUBLISH
GRZZBA@@pretalx.com
-GRZZBA
Raising Diversity & Inclusion among Julia users
en
en
20190724T154500
20190724T174500
2.00000
Raising Diversity & Inclusion among Julia users
Diversity of users is fundamental for the development of an open language such as Julia. Because it is a young but very promising programming language, the promotion of working groups that foster the spread of its usage among users from different regions, backgrounds, ages, and social contexts can shed light on bugs and on potential for growth, thanks to the different perspectives that diversity brings. The Julia Computing Diversity & Inclusion Award funded five projects aimed at promoting the usage of the Julia language with different approaches, in different regions of the planet. In this session, we will share our experience in our projects, talking about how we planned, executed and evaluated the outcomes, and what we learned.
For more information about individual projects funded, [see here](https://juliacomputing.com/blog/2018/11/30/DandI-grant-awards.html).
PUBLIC
CONFIRMED
Minisymposia / Extended Presentation
https://pretalx.com/juliacon2019/talk/GRZZBA/
Elm B
Kevin S Bonham
Elwin van 't Wout
Anna Harris
PUBLISH
8AM9JC@@pretalx.com
-8AM9JC
Yao.jl: Extensible, Efficient Quantum Algorithm Design for Humans.
en
en
20190724T110000
20190724T113000
0.03000
Yao.jl: Extensible, Efficient Quantum Algorithm Design for Humans.
## Introduction
Yao is an open source framework for
- quantum algorithm design;
- quantum [software 2.0](https://medium.com/@karpathy/software-2-0-a64152b37c35);
- quantum computation education.
## Motivation
Compared with state-of-the-art quantum simulators, our library is inspired by quantum circuit optimization.
Variational quantum optimization algorithms like the quantum circuit Born machine ([QCBM](https://arxiv.org/abs/1804.04168)), the quantum approximate optimization algorithm ([QAOA](http://arxiv.org/abs/1411.4028)), the variational quantum eigensolver ([VQE](https://doi.org/10.1038/ncomms5213)) and quantum circuit learning ([QCL](http://arxiv.org/abs/1803.00745)), among others, are promising killer apps on near-term quantum computers.
These algorithms require the flexibility to tune parameters and have well defined patterns such as "Arbitrary Rotation Block" and "CNOT Entangler".
In Yao, we call these patterns "blocks". If we regard every gate or gate pattern as a "block", then the framework can
* be flexible to dispatch parameters,
* cache matrices of blocks to speed up future runs,
* allow hierarchical design of quantum algorithms
Thanks to Julia's duck typing and multiple dispatch, users can
* easily **extend** the block system by overloading specific interfaces,
* dispatch quantum circuit blocks to **special methods** to improve performance in specific cases (e.g. a customized repeat block of H gates).
## Features
Yao is a framework designed to provide the following features:
- **Extensibility**
- define new operations with a minimum number of methods in principle.
- extending it with new operations on different hardware should be easy (e.g. GPUs, near-term quantum devices, FPGAs, etc.)
- **Efficiency**
- compared with Python, Julia has no significant overhead on small-scale circuits.
- special optimized methods are dispatched to frequently used blocks.
- double interfaces "apply!" and "cache server + mat" allow us to choose freely when to sacrifice memory for faster simulation and when to sacrifice the speed to simulate more qubits.
- **Easy to Use**
- As a white-box simulator, rather than a black box, users will be aware of what their simulations are doing right through the interface.
- **Hierarchical APIs** from **low abstraction quantum operators** to **highly abstract** circuit block objects.
The whole framework is highly **modularized**, researchers can extend this framework for different purposes.
## Author
This project is an effort of QuantumBFS, an open-source organization for quantum science. All the contributors are listed on the [contributors page](https://github.com/QuantumBFS/Yao.jl/graphs/contributors).
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/8AM9JC/
Room 349
Roger Luo
PUBLISH
7BKBZJ@@pretalx.com
-7BKBZJ
Guaranteed constrained and unconstrained global optimisation in Julia
en
en
20190724T113000
20190724T114000
0.01000
Guaranteed constrained and unconstrained global optimisation in Julia
We will show how set computations, using interval-based methods, enable us to find the global minimum for difficult nonlinear, non-convex optimization problems of functions $f:\mathbb{R}^n \to \mathbb{R}$, even when the number of local minima is huge, with guaranteed bounds on the optimum value and on the set of minimizers. We can often also find all stationary points in a given box.
We will explain the underlying ideas and some details of the Julia implementation in IntervalOptimisation.jl, which relies on spatial branch and bound, as well as showing examples. We can tackle some "weakly non-convex" functions ranging up to a few hundred variables, whereas highly oscillatory functions can be very challenging even for $n < 10$.
For constrained optimization, we apply **constraint propagation**, as implemented in `IntervalConstraintProgramming.jl`, to eliminate infeasible regions and prove the existence of feasible points. We will show how the above techniques are combined to allow efficient and guaranteed calculations for optimization problems.
The `CharibdeOptim.jl` package combines these methods with the heuristic Differential Evolution technique to get an efficient global optimisation tool.
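To make the branch-and-bound idea above concrete, here is a toy, self-contained sketch (deliberately not the actual IntervalOptimisation.jl API): boxes are bisected, and any box whose guaranteed lower bound exceeds the best upper bound found so far provably cannot contain the global minimum and is discarded.

```julia
# `range_of(a, b)` must return a guaranteed enclosure (flo, fhi) of f on [a, b];
# `midpoint_val(m)` evaluates f at a point to improve the upper bound.
function branch_and_bound(range_of, midpoint_val, lo, hi; tol=1e-6)
    queue = [(lo, hi)]
    best_upper, best_x = Inf, NaN
    while !isempty(queue)
        (a, b) = popfirst!(queue)
        flo, fhi = range_of(a, b)
        flo > best_upper && continue        # box cannot contain the global minimum
        m = (a + b) / 2
        fm = midpoint_val(m)
        fm < best_upper && ((best_upper, best_x) = (fm, m))
        if b - a > tol                      # bisect until boxes are narrow enough
            push!(queue, (a, m)); push!(queue, (m, b))
        end
    end
    return best_upper, best_x
end

# Example: f(x) = (x - 2)^2, whose exact range on [a, b] is easy to enclose.
range_sq(a, b) = begin
    va, vb = (a - 2)^2, (b - 2)^2
    a <= 2 <= b ? (0.0, max(va, vb)) : (min(va, vb), max(va, vb))
end
fmin, xmin = branch_and_bound(range_sq, m -> (m - 2)^2, -10.0, 10.0)
```

The real packages replace the hand-written `range_sq` with interval arithmetic, which produces such enclosures automatically for arbitrary expressions, and add constraint propagation to shrink boxes further.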
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/7BKBZJ/
Room 349
David P. Sanders
PUBLISH
LHU9UM@@pretalx.com
-LHU9UM
Pyodide: The scientific Python stack compiled to WebAssembly
en
en
20190724T114000
20190724T115000
0.01000
Pyodide: The scientific Python stack compiled to WebAssembly
Unlike the traditional data science interaction model where the web browser only acts as a front end to computation happening on a remote server, Pyodide allows the computation to happen right within the user's web browser. This makes interactivity more performant, and allows easier sharing of notebooks without using potentially costly or privacy-violating cloud services.
I hope to present this at JuliaCon as a success story in the hopes that a similar tool can be built for Julia.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/LHU9UM/
Room 349
Michael Droettboom
PUBLISH
JYLQ8H@@pretalx.com
-JYLQ8H
Julia for Battery Model Parameter Estimation
en
en
20190724T115000
20190724T120000
0.01000
Julia for Battery Model Parameter Estimation
The particular battery model employed in this application predicts the voltage, temperature, state-of-charge, and degradation (i.e. lithium lost due to aging factors). Due to the complex interactions among these properties --- along with other dynamic, codependent cell properties --- the behavior of the cell over the course of an arbitrary load cannot be accurately characterized from an initial state without simulating these interactions over time.
As a result, the model implementation discretely progresses the cell through discharge and charge using a time-step of 2 seconds, predicting forward the state properties. The only time-dependent input is a load profile, which can come in the form of the power over time or current over time associated with the discharge due to the load and charging protocol. Beyond that, user inputs are only required for the initial cell state.
Looking at an individual step, the mole fraction of lithium in different parts of the cell is found using either the initial conditions or the prediction from a prior step. Calculating the open circuit potential for both the anode and cathode depends on these mole fractions and the current cell temperature. Following this step the mole fractions for the next state are calculated by approximating their rate of change, which relies on the input current, and multiplying the rate by the 2-second time-step. The state-of-charge for the next step is also calculated at this point.
The cell voltage depends on a set of overpotentials on top of the open circuit potential already estimated. As with the mole fractions, these come from initial conditions or a previous step. To predict the overpotentials for the next state, properties from the current state are used to calculate the current rate of change, which is then multiplied by the time-step. Temperature is predicted for the next state in a similar manner, as is the cell voltage.
The computational challenge derives in part from the vast parameter space necessary to characterize the model to a real cell based on testing data. The model depends on roughly 20 parameters for a single discharge-charge cycle to predict the state over time. On top of this, keeping track of cell degradation requires an additional 5 parameters.
Working from the state-of-charge model described above, a state-of-health model can be set up using these additional parameters and running the discharge model for hundreds of cycles, updating the input parameters at the beginning of each cycle. At each state during an individual cycle, the amount of lithium lost either to reactions with the electrolyte or isolation into inactive lithium metal is added to a running total for the cycle. After a cycle completes, this total is removed from the initial lithium available to the cell. Resistance and diffusivity also change over multiple cycles, and the contribution to their decay is also maintained as a running total within each cycle.
Since local minima are pervasive in this parameter space, and error-minimizing strategies are too strongly influenced by initial guesses, a Monte Carlo implementation is necessary to properly train the model. This becomes prohibitively expensive computationally within Matlab, where the model was first implemented, because each cycle lasts for up to 10,000 seconds, and up to 2,000 cycles can be required to compare the aging model to the available aging data.
The search space defined by the parameters requires that the Monte Carlo method be able to perform several thousand iterations. Under the Matlab implementation, each Monte Carlo iteration would take approximately 0.03 seconds. This means that the algorithm can do 1 million iterations in 500 minutes, or about 8 hours. While this seems sufficient, there are 20 parameters, which means that on average there are only 50,000 changes to each variable, likely not enough iterations per variable to properly sample the space. In addition, more complex Monte Carlo methods such as Hamiltonian Monte Carlo take significantly more time to run, further limiting the number of iterations that can be performed.
By implementing the same code in Julia, the algorithm got a significant speed-up in addition to other benefits. Compared to the Matlab implementation, the Julia implementation completed one Monte Carlo iteration in about 0.003 seconds. This 10X speed-up allows 10X more iterations to be completed in the same time: in about 8 hours, 10 million Monte Carlo iterations could be performed. In addition, Julia enabled the code to be run in parallel on Arjuna, a high-performance computing cluster at Carnegie Mellon University. This means that in 8 hours, several of these algorithms can be run in parallel, each performing a phase-space search using 10 million iterations. Since each run has enough iterations to properly sample the space, the minimum error found across the collection of Monte Carlo simulations can be taken as the global minimum of the search space. The large number of iterations also lets the algorithm use simulated annealing to avoid getting stuck in local minima.
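The simulated-annealing search described above can be sketched as follows; this is a generic illustration on a toy objective, not the authors' battery-model code, and the cooling schedule and step size are arbitrary choices.

```julia
using Random

# Metropolis-style annealing: always accept downhill moves, accept uphill
# moves with Boltzmann probability exp(-Δf/T), and cool T over the run.
function anneal(f, x0; iters=50_000, T0=1.0, step=0.5, rng=MersenneTwister(42))
    x = copy(x0); fx = f(x)
    best_x, best_f = copy(x), fx
    for i in 1:iters
        T = T0 * (1 - i / iters) + 1e-6            # linear cooling schedule
        cand = x .+ step .* randn(rng, length(x))  # random perturbation
        fc = f(cand)
        if fc < fx || rand(rng) < exp(-(fc - fx) / T)
            x, fx = cand, fc
            fx < best_f && ((best_x, best_f) = (copy(x), fx))
        end
    end
    return best_x, best_f
end

# Toy stand-in for the parameter-fit error surface: the Rosenbrock function.
rosen(p) = (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
xbest, fbest = anneal(rosen, [-1.0, 1.0])
```

Because each iteration is cheap and independent chains don't communicate, many such searches can run in parallel across a cluster, exactly the strategy the abstract describes.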
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/JYLQ8H/
Room 349
Matthew Guttenberg
Shashank Sripad
Venkat Viswanathan
William L Fredericks
PUBLISH
K9BBNX@@pretalx.com
-K9BBNX
Heterogeneous Agent Dynamic Stochastic General Equilibrium (DSGE) Models in Julia at the Federal Reserve Bank of New York
en
en
20190724T143000
20190724T150000
0.03000
Heterogeneous Agent Dynamic Stochastic General Equilibrium (DSGE) Models in Julia at the Federal Reserve Bank of New York
**Abstract:** Over the past few decades, income and wealth inequality have emerged as defining fixtures of the modern U.S. economy. Born of a desire to study the differential effects of policy decisions on a variegated group of economic actors, heterogeneous agent (HA) models enable researchers to incorporate critical differentiation in household income, wealth, and consumption behavior into their analyses of economic phenomena. Yet, while great academic progress has been made developing such models, many policy institutions have been slow to shift from the “representative agent” paradigm. This is due, in large part, to computational strain surrounding HA models' solution and estimation. However, thanks to speed gains made possible by innovations in computing languages like Julia, what was once too computationally taxing for policy purposes has become both feasible to run and elegant to implement. This talk will provide an overview of the Federal Reserve Bank of New York's (NY Fed) HA dynamic stochastic general equilibrium (DSGE) model development process in Julia, walking through our navigation of Julia-specific functionality in the process. Comparisons of performance relative to MATLAB and FORTRAN will be provided.
**Description:** Heterogeneous agent models, in contrast to representative agent models, allow for heterogeneity in various features among agents in an economy, at both the household and firm level. Examples of such heterogeneity might include age, risk-tolerance, skills, and discounting of the future --- features that manifest themselves in heterogeneity in the wealth distribution. Recent work in macroeconomic literature reveals the monetary and fiscal policy implications for HA models can differ widely from their representative counterparts. Such models may be implemented in either discrete or continuous time, posing challenges for their intuitive out-of-the-box deployment as well as succinct tailoring of model-specific solution methods. The solving of large-scale macroeconomic models is sufficiently complex in the representative agent case; the dimensionality of problems increases considerably with the addition of heterogeneity, especially in continuous-time, lending itself well to Julia’s strengths.
In the past year, our team has ported several HA models—along with algorithms to discretize, linearize, and solve them—from both MATLAB and FORTRAN for integration into our codebase. This addition comes on the heels of the release of our [DSGE.jl](https://github.com/FRBNY-DSGE/DSGE.jl) package, whose other components, such as representative agent model solution, estimation, and forecasting, were the subject of past presentations at JuliaCons 2016-2018. I will discuss the respects in which Julia has provided us technical flexibility in constructing coherent type hierarchies, employing multiple dispatch, and utilizing distributed computing, as well as how we managed various design decisions pertaining to variable scope, typing, and parallelization, so as to optimize for memory usage and runtime. I will cover the technical constraints and considerations imposed by our production environment at the NY Fed, and offer advice on what we found accommodated our cluster setup. Finally, I will shed light on how HA models may expand the toolkit for policymakers and academics.
Disclaimer: This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/K9BBNX/
Room 349
Rebecca Sarfati
PUBLISH
N3BKPS@@pretalx.com
-N3BKPS
“Online” Estimation of Macroeconomic Models
en
en
20190724T150000
20190724T153000
0.03000
“Online” Estimation of Macroeconomic Models
In this talk, I will present DSGE.jl’s new Sequential Monte Carlo (SMC) sampler. SMC is a method of generating draws from a posterior distribution when direct sampling is not possible. In the past, modern macroeconomists have used random-walk Metropolis-Hastings for this purpose; however, as our models have become more complex, Metropolis-Hastings’ shortcomings have become more apparent: the algorithm produces serially correlated draws, has difficulty characterizing multimodal distributions, is slow and unable to be parallelized, and interfaces poorly with new data arrivals.
SMC resolves these problems. Instead of starting from scratch with every new piece of information, we initialize the sampling algorithm at the entire posterior distribution of an older estimation. This “online estimation” allows frequent re-estimation of our models as new data become available rather than waiting months for enough new data to justify a full re-estimation. The algorithm is fast and parallelizable, reducing runtimes from days to just hours. These massive speedups make possible estimation of a new class of heterogenous agent models (which are simply infeasible to estimate using Metropolis Hastings) and allow more rigorous forecast evaluations (by allowing us to estimate a larger suite of comparison models on different data).
I will discuss lessons learned regarding parallelization and improvements we've made to the algorithm, including parameter blocking, utilization of the Chandrasekhar recursions for likelihood evaluation, and fully adaptive hyper-parameter tuning (allowing the algorithm to flexibly accommodate the business cycle: it spends more time exploring the distribution when economic conditions are novel than when conditions are similar to those in the past). Our SMC methods may prove useful to any Julia users who conduct Bayesian estimation, and many of our developments can also be easily applied to alternative applications. Finally, I will present comparative performance benchmarks of the algorithm in MATLAB, Fortran, and Julia and discuss approaches we've taken to optimize code performance.
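As a rough illustration of one SMC building block (not DSGE.jl's actual implementation), a systematic resampling step, which maps normalized particle weights back to an equally weighted particle set, might look like:

```julia
using Random

# Systematic resampling: given normalized weights `w` (summing to 1),
# return indices of a new, equally weighted particle set.
function systematic_resample(w::Vector{Float64})
    n = length(w)
    positions = (rand() .+ (0:n-1)) ./ n   # one stratified position per particle
    c = cumsum(w)
    idx = similar(w, Int)
    j = 1
    for (i, p) in enumerate(positions)
        while c[j] < p                      # advance to the weight bin containing p
            j += 1
        end
        idx[i] = j
    end
    return idx
end

w = [0.1, 0.2, 0.3, 0.4]
idx = systematic_resample(w)
```

In a full SMC sampler this step would be combined with reweighting on new data and a mutation (e.g. Metropolis-Hastings) move.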
Disclaimer: This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/N3BKPS/
Room 349
Ethan Matlin
PUBLISH
JPNYCR@@pretalx.com
-JPNYCR
Differentiate All The Things!
en
en
20190724T154500
20190724T161500
0.03000
Differentiate All The Things!
Last JuliaCon I announced the [Zygote](https://github.com/FluxML/Zygote.jl) tool for automatic differentiation (AD) of Julia code. Flux now uses Zygote as its default AD,* enabling both a more elegant interface and all kinds of new models that weren't possible before.
Flux's new APIs are powerful and let us easily express advanced concepts like backpropagation through time. But really, Julia's power is in its awesome open-source ecosystem, with state of the art tools for differential equations, mathematical optimisation, and even colour theory! Come and see how we can take advantage of all of these tools in machine learning models, enabling "theory-driven" ML to tackle harder problems than ever.
* In theory; I write this from 4 months in the past, so who knows.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/JPNYCR/
Room 349
Mike Innes
PUBLISH
JGY7KC@@pretalx.com
-JGY7KC
Differentiable Rendering and its Applications in Deep Learning
en
en
20190724T161500
20190724T162500
0.01000
Differentiable Rendering and its Applications in Deep Learning
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/JGY7KC/
Room 349
Avik Pal
PUBLISH
HANGXH@@pretalx.com
-HANGXH
Neural Ordinary Differential Equations with DiffEqFlux
en
en
20190724T162500
20190724T163500
0.01000
Neural Ordinary Differential Equations with DiffEqFlux
This talk will demonstrate models described in [Neural Ordinary Differential Equations](https://arxiv.org/pdf/1806.07366.pdf) implemented in DiffEqFlux.jl, using DifferentialEquations.jl to solve ODEs with dynamics specified and trained with Flux.jl. In particular it will show how to use gradient optimization with the adjoint method to train a neural network which parameterizes an ODE for supervised learning and for Continuous Normalizing Flows. These demonstrations will be contributed to the Flux model-zoo.
The supervised learning demonstration will illustrate that neural ODEs can be drop-in replacements for [residual networks](https://arxiv.org/pdf/1512.03385.pdf) on supervised tasks such as image recognition.
The Continuous Normalizing Flow demo will show how a neural ODE, with the instantaneous change of variables, can learn a continuous transformation from tractable base distribution to a distribution over data which can be sampled from and evaluate densities under.
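As a minimal sketch of the core idea (the exact DiffEqFlux.jl API may differ), a neural ODE forward pass can be assembled directly from Flux.jl and DifferentialEquations.jl: a small network parameterizes the dynamics, and the ODE solver acts as the "layer":

```julia
using Flux, DifferentialEquations

# A small network parameterizes the right-hand side du/dt = NN(u).
dudt_nn = Chain(Dense(2, 16, tanh), Dense(16, 2))

f(u, p, t) = dudt_nn(u)                  # dynamics given by the network
u0 = Float32[2.0, 0.0]
tspan = (0.0f0, 1.5f0)
prob = ODEProblem(f, u0, tspan)

# Forward pass of the "neural ODE layer": solve the ODE, read off the states.
sol = solve(prob, Tsit5(), saveat=0.1f0)
pred = Array(sol)                        # 2 × n_timepoints matrix of states
```

Training then amounts to differentiating a loss on `pred` back through `solve`, which is what the adjoint method described in the talk provides.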
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/HANGXH/
Room 349
Jesse Bettencourt
PUBLISH
KGHF7T@@pretalx.com
-KGHF7T
Fitting Neural Ordinary Differential Equations with DiffeqFlux.jl
en
en
20190724T163500
20190724T170500
0.03000
Fitting Neural Ordinary Differential Equations with DiffeqFlux.jl
A [neural Ordinary Differential Equation](https://arxiv.org/abs/1806.07366) (ODE) is a differential equation whose evolution equation is a neural network. We can use neural ODEs to model nonlinear transformations by directly learning the governing equations from time course data. Neural ODEs therefore present an elegant method for modelling time series, as they bring sophisticated differential equation solvers into machine learning, an area of already high and still increasing demand.
In this talk we discuss [DiffEqFlux.jl](https://arxiv.org/abs/1902.02376), a package for designing and training neural ODEs. We demonstrate how to fit neural ODEs against data using the L2 loss function, and explain how Julia's automatic differentiation calculates gradients through the differential equation solvers to compute the gradient of the loss function. While this is the "standard" method, it involves solving an ODE at each step of the optimization, which can be very time consuming. Thus, we introduce new methodologies available in DiffEqFlux.jl to improve the efficiency and robustness of the fitting. First, we demonstrate new functionality provided by a bridge to the [two-stage collocation method](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631937/) of the package [DiffEqParamEstim.jl](https://docs.juliadiffeq.org/latest/analysis/parameter_estimation.html). Second, we show how to effectively use these functions in a mixed training loop to improve the speed and robustness of the fitting. Third, we demonstrate and explain a new loss function in DiffEqFlux.jl which allows for multiple shooting, and show its performance characteristics. Together, these three features improve the performance and robustness of the fitting process for neural ODEs in Julia and thus allow it to scale to more practical models and data.
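To make the "standard" method concrete, here is a toy example (an exponential-decay model invented for illustration, not the talk's) of an L2 loss that solves the ODE at candidate parameters and compares against data:

```julia
using DifferentialEquations

# Hypothetical setup: fit the parameter p of du/dt = -p*u to observations.
true_p = 1.3
f(u, p, t) = -p[1] * u
u0, tspan, ts = 1.0, (0.0, 2.0), 0.0:0.25:2.0
data = [exp(-true_p * t) for t in ts]     # noiseless "observations"

# L2 loss: one full ODE solve per evaluation, then sum of squared errors.
function l2_loss(p)
    prob = ODEProblem(f, u0, tspan, p)
    sol = solve(prob, Tsit5(), saveat=ts)
    return sum(abs2, sol.u .- data)
end
```

Each evaluation of `l2_loss` requires a complete ODE solve, which is exactly the cost that the collocation and multiple-shooting approaches in the talk aim to reduce.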
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/KGHF7T/
Room 349
Elisabeth Roesch
PUBLISH
ZDRQQA@@pretalx.com
-ZDRQQA
Randomized Sketching for Approximate Gradients: Applications to PDE Constrained Optimization
en
en
20190724T170500
20190724T171500
0.01000
Randomized Sketching for Approximate Gradients: Applications to PDE Constrained Optimization
This work is motivated by the challenge of expensive storage in optimization problems with PDE constraints, e.g., optimal flow control, full waveform inversion, and optical tomography.
These optimization problems are characterized by PDE constraints that uniquely determine the state of a physical system for a given control. The state matrix is typically much more expensive to store than the control matrix.
The optimization algorithm effects changes in the control that move a physical system towards optimal behavior. Any first- or second-order algorithm requires gradient computation, and as a first step we must solve the PDE and store its solution; this is the storage bottleneck.
Recently, randomized algorithms have been developed for matrix approximation; sketching is a high-performance algorithm for on-the-fly compression of matrices. We demonstrate how sketching can be used to compute approximate gradients and Hessian-times-vector quantities while avoiding the storage bottleneck caused by the PDE solution.
This cutting-edge algorithmic recipe has been applied successfully to linear parabolic boundary control and optimal fluid flow. We also explore its implications for efficient adjoint computation and back-propagation.
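As a self-contained sketch of the underlying primitive (not the authors' code), a Gaussian sketch compresses a low-rank matrix in a single pass, after which a near-optimal low-rank approximation is recovered from the sketch:

```julia
using LinearAlgebra, Random

# Randomized range finder: sketch a matrix A with a random test matrix,
# then form a rank-k approximation A ≈ Q*(Q'A).
Random.seed!(1)
m, n, k = 200, 100, 10
A = randn(m, k) * randn(k, n)       # exactly rank k, for illustration

Ω = randn(n, k + 5)                 # Gaussian test matrix (slight oversampling)
Y = A * Ω                           # the sketch: one pass over A
Q = Matrix(qr(Y).Q)                 # orthonormal basis for the sketched range
A_approx = Q * (Q' * A)             # low-rank reconstruction
```

Accumulating a sketch like `Y = A * Ω` while the PDE is solved forward is the kind of on-the-fly compression that avoids storing the full state trajectory.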
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/ZDRQQA/
Room 349
Ramchandran Muthukumar
PUBLISH
KJ9SGA@@pretalx.com
-KJ9SGA
Machine Learning for Social Good
en
en
20190724T172500
20190724T173500
0.01000
Machine Learning for Social Good
ML has provided us with the tools to think of data as a source of insight. It isn't a stretch to apply the same thinking towards some of the most pressing socioeconomic calamities we as a society are faced with.
Mask R-CNN is a state-of-the-art object detection and segmentation network developed by FAIR that has shown tremendous leaps in segmentation quality while being conceptually simple. We present an all-Julia Flux implementation of Mask R-CNN and our methodology for setting up the segmentation task. We use satellite images of cities and try to identify regions where slums exist.
Finally, we share our findings.
Code:
The code is currently private before it is released openly through contributions to various existing projects like the Flux model-zoo and Metalhead.jl.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/KJ9SGA/
Room 349
Dhairya Gandhi
PUBLISH
DRP3EF@@pretalx.com
-DRP3EF
Keynote: Professor Steven G Johnson
en
en
20190724T084000
20190724T092500
0.04500
Keynote: Professor Steven G Johnson
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/DRP3EF/
NS Room 130
Professor Steven G Johnson
PUBLISH
JHZRJS@@pretalx.com
-JHZRJS
Sponsor Address: J P Morgan Chase & Co.
en
en
20190724T093000
20190724T094500
0.01500
Sponsor Address: J P Morgan Chase & Co.
PUBLIC
CONFIRMED
Sponsor's Address
https://pretalx.com/juliacon2019/talk/JHZRJS/
NS Room 130
Jiahao Chen
PUBLISH
CYJRTK@@pretalx.com
-CYJRTK
Sponsor Address: Julia Computing
en
en
20190724T094500
20190724T095000
0.00500
Sponsor Address: Julia Computing
PUBLIC
CONFIRMED
Sponsor's Address
https://pretalx.com/juliacon2019/talk/CYJRTK/
NS Room 130
Stefan Karpinski
PUBLISH
JANJFM@@pretalx.com
-JANJFM
Using Julia in Secure Environments
en
en
20190724T095000
20190724T100000
0.01000
Using Julia in Secure Environments
This talk stems from the author's (eventually successful) attempt to deploy Julia along with associated packages on a secure network. Last year there was a fairly long Discourse thread on this subject; this talk will serve as a follow-up to the initial attempts and will discuss lessons learned and advice for those seeking to use Julia in restricted environments.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/JANJFM/
NS Room 130
Seth Bromberger
PUBLISH
PKLBXV@@pretalx.com
-PKLBXV
Keynote: Arch D. Robison
en
en
20190724T133000
20190724T141500
0.04500
Keynote: Arch D. Robison
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/PKLBXV/
NS Room 130
Arch D. Robison
PUBLISH
EKN9AD@@pretalx.com
-EKN9AD
Sustainable Development and Open Source Monetization
en
en
20190724T110000
20190724T120000
1.00000
Sustainable Development and Open Source Monetization
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/EKN9AD/
BoF: Room 353
Clark Evans
PUBLISH
PSAX8D@@pretalx.com
-PSAX8D
Diversity and Inclusion in Julia Community
en
en
20190724T143000
20190724T153000
1.00000
Diversity and Inclusion in Julia Community
JuliaCon 2018 was the most diverse JuliaCon yet. Though the success of this event demonstrated progress in the diversity and inclusivity of our community, the continued underrepresentation of individuals identifying as female or as racial/ethnic minorities, for example, indicates that we have a long way to go. As greater language stability increases the confidence of our community members to recruit new users and to invest further in the language and its community, interest in diversity efforts tied to the language seems to be gaining momentum. We would like to hold a birds of a feather session to discuss and brainstorm diversity and inclusion in the Julia community.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/PSAX8D/
BoF: Room 353
Kelly Shen
Nathan Daly
PUBLISH
B38TDU@@pretalx.com
-B38TDU
Julia In Production
en
en
20190724T154500
20190724T164500
1.00000
Julia In Production
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/B38TDU/
BoF: Room 353
Curtis Vogt
PUBLISH
BACLJQ@@pretalx.com
-BACLJQ
JuliaGPU
en
en
20190724T164500
20190724T173500
0.05000
JuliaGPU
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/BACLJQ/
BoF: Room 353
Tim Besard
Valentin Churavy
PUBLISH
VXFNVA@@pretalx.com
-VXFNVA
Breakfast
en
en
20190724T073000
20190724T083000
1.00000
Breakfast
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/VXFNVA/
Other
PUBLISH
TCVVZN@@pretalx.com
-TCVVZN
Poster Session
en
en
20190724T101000
20190724T110000
0.05000
Poster Session
Posters:
- "AMLET meets Julia" by Bastin, Fabian
- "JuliaDB: solving the two language problem in analytical databases" by Shashi Gowda
- "Mapping the Logic Item Domain with Julia" by Francis Smart
- "OrthogonalPolynomials.jl - Optimal evaluation of orthogonal polynomials in ~ 100 lines of Julia" by Miguel Raz Guzmán Macedo
- "Oceananigans.jl: fast, friendly, architecture-agnostic, high-performance ocean modeling" by Ali Ramadhan
- "Parallel Scenario Decomposition of Stochastic Programming" by Kibaek Kim
- "ARCH Models in Julia" by Simon Broda
- "Myths, Legends, and Other Amazing Adventures in CSV Parsing" by Jacob Quinn
- "Occasionally Binding Constraints in DSGE Models in Julia" by Vivaldo Mendes
- "Time Series Analysis and Forecasting with Julia: Nonlinear Autoregressive vs Machine Learning Models" by Diana A. Mendes
- "Julia applied in the Factory of the Future" by Thijs Van Hauwermeiren
- "NLPeterman.jl: Language processing from the ground up in Julia" by Mihir Paradkar
- "Materials.jl for crystal plasticity" by Ivan Yashchuk
- "Markov Chain-Monte Carlo Methods for Linear Algebra using Julia v.1.0" by Oscar A. Esquivel-Flores
- "Modelling environment for JuliaOpt" by Manuel Marin
- "BuyLibre - Cooperative Libre Software Cost-Sharing" by Clark C. Evans
- "StateSpaceModels.jl -- A Julia package for time-series analysis in a state-space framework" by Raphael Saavedra
- "TileDB: a data management solution tailored for data scientists" by Jake Bolewski
- "Growing Machine Learning Solutions on Kubernetes with Julia" by Patrick Barker
- "Econometrics.jl for econometric analysis in Julia" by José Bayoán Santiago Calderón
- "Bioequivalence Analysis in Julia using Bioequivalence.jl" by José Bayoán Santiago Calderón
- "Tackling Stochastic Delay Problems in Julia" by Henrik Sykora
- "Julia Web Apps with Mux.jl + Heroku" by Josh Day
- "A New Approach of Genre-Based Similarity for User-User Collaborative Filtering Recommender System" by Mahamudul Hasan
- "Real Time Mapping of Epidemic Spread, predict with SIR, Neural Network in Julia" by Rahul Kulkarni
### Info for presenters
(This matches the email you've all been sent)
We will have two poster sessions during the morning coffee breaks on Wednesday,
July 24 and Thursday, July 25 from 10:10AM to 11AM in room 349.
Please hang your poster in room 349 by 8:30AM on Wednesday and plan to present during both sessions.
If you are unable to present on both days, please let us know.
The poster boards provided will be landscape (3 feet by 4 feet).
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/TCVVZN/
Other
PUBLISH
YCGCVE@@pretalx.com
-YCGCVE
Lunch
en
en
20190724T120000
20190724T131500
1.01500
Lunch
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/YCGCVE/
Other
PUBLISH
PEY7CR@@pretalx.com
-PEY7CR
Short break
en
en
20190724T153000
20190724T154500
0.01500
Short break
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/PEY7CR/
Other
PUBLISH
7SDCJU@@pretalx.com
-7SDCJU
Julia + JavaScript = <3
en
en
20190725T110000
20190725T113000
0.03000
Julia + JavaScript = <3
This talk is an overview of the JuliaGizmos ecosystem. It starts with the basics of creating a simple page, showing it in various forms, and goes on to Interact.jl and beyond. I will present work done by many people that has been aggregated in this GitHub niche, mainly that of Mike Innes, Pietro Vertechi, Joel Mason, Travis DePrato, Sebastian Pfitzner and myself.
••••••••••••••
Outline of the talk:
1. Showing any Julia object, serving it on a web server
2. Executing and talking to JavaScript: the `@js` syntax
3. Interact.jl -- past and future
4. Syntax: Building your own JavaScript-based library with seamless Julia bindings
5. Deploying a native-looking app as a Julia script
6. A brief update on (other people's) work on transpiling Julia to JS.
A common question we get asked on Slack is -- "is there a replacement for R's Shiny in Julia?". The answer is YES, and you can help us build it out!
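For a flavor of the ecosystem, a minimal Interact.jl widget (a sketch; see the package documentation for details) binds a slider to an arbitrary Julia expression:

```julia
using Interact

# A slider from 1 to 100 bound to a computation; the body re-runs
# whenever the slider moves, and the result is re-rendered.
ui = @manipulate for n in 1:100
    sum(1:n)
end
```

Displaying `ui` in a notebook, Blink window, or web page is handled by the JuliaGizmos display machinery covered in the talk.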
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/7SDCJU/
Elm A
Shashi Gowda
PUBLISH
L983HR@@pretalx.com
-L983HR
Julia web servers deployment
en
en
20190725T113000
20190725T114000
0.01000
Julia web servers deployment
To deploy Julia web servers easily, we developed an open-source buildpack which is available on https://github.com/Optomatica/heroku-buildpack-julia.
It just requires `Project.toml` and `Manifest.toml` in the root of the repository being pushed, and it automatically downloads dependencies and makes the server available on [Heroku](https://heroku.com).
It also precompiles all dependencies for speedy server boot time.
We will present our experience building this buildpack and how we optimized it so that web server boot time is minimized. We compare its performance with a Docker-based deployment: the buildpack outperforms a Docker image and requires less maintenance overhead. This can help many Julia users deploy web servers in no time.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/L983HR/
Elm A
Amgad Naiem
Mohammed El-Beltagy
PUBLISH
YPGRDD@@pretalx.com
-YPGRDD
A case study of migrating Timelineapp.co to the Julia language
en
en
20190725T114000
20190725T115000
0.01000
A case study of migrating Timelineapp.co to the Julia language
[Timelineapp.co](https://timelineapp.co) is a platform that supports financial planners in analysis of different retirement income strategies. It interactively allows a user to specify the desired retirement income management plan and performs its backtesting to verify its profitability and risk profile. It shows the impact of asset allocation decisions, rebalancing, fees, and taxes and it prepares clients for market ups and downs.
The legacy development process at Timelineapp was that quantitative analysts specified application logic using Matlab and next software developers translated it to Elixir code that was deployed to production.
As the application's complexity increased, even after squeezing maximum performance out of the legacy technology stack, the team faced the challenge that the response time for a single financial scenario backtesting query grew to around 40 seconds. This was clearly not acceptable for an online application. Facing this performance bottleneck, the team researched different alternatives, ran benchmarks, and picked Julia. After migrating the code, it was possible to cut the response time down to 0.6 seconds per query.
Another benefit of moving to Julia was a dramatic simplification of the development process. In the past, Matlab code had to be translated to Elixir. That was quite cumbersome, and many times things got lost in translation from one programming language to the other. Now both the quantitative analysts and the developers write Julia. This way the code has fewer bugs, and the time from idea and experimental calculations (by the quantitative analyst team) to deployment to production (by the software development team) is much shorter and more agile.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/YPGRDD/
Elm A
Bogumił Kamiński
PUBLISH
QXF9AM@@pretalx.com
-QXF9AM
If Runtime isn't Funtime: Controlling Compile-time Execution
en
en
20190725T143000
20190725T150000
0.03000
If Runtime isn't Funtime: Controlling Compile-time Execution
Julia's ability to compile away complex logic is remarkable. Especially in recent releases, Const-propagation is a thing to behold! But we've found it can be hard to reason about _why_ some operations are compiled away (or why they aren't), and even harder to _control_ that behavior. What's more, that behavior can change as your code evolves, or Julia is updated, and it's difficult to test.
In Julia, the distinction between compile-time and runtime is deliberately muddy: compilation itself happens _during runtime_; a function may be compiled once, many times or never—it might even be compiled more times than it runs. Still, there are cases where we expect a function to be compiled once, early on, and then need it to run extremely quickly, many times, in a tight loop, where controlling the ability to move work from runtime to compile-time is critical.
This talk will explore a few such motivating cases we've seen at [RelationalAI](http://relational.ai), including in pieces of the [FixedPointDecimals.jl](https://github.com/JuliaMath/FixedPointDecimals.jl) library. We'll examine the options currently available in Julia for controlling compile-time execution, and their pros and cons, including some lessons-learned about the pain we've experienced with `@generated`. We'll study the approach modern C++ is taking to this problem, with its `constexpr` annotation. And we'll propose a few ideas for how we might add features to Julia that could increase our control over this fierce compiler.
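As one concrete example of the options discussed, a `@generated` function runs arbitrary Julia code at compile time (once per concrete type signature) and returns an expression to be compiled. This sketch unrolls a tuple sum into a fixed chain of additions for each length `N`:

```julia
# The function body below executes at compile time: it builds the
# expression t[1] + t[2] + ... + t[N], which is what actually compiles.
@generated function unrolled_sum(t::NTuple{N,T}) where {N,T}
    N == 0 && return :(zero(T))
    ex = :(t[1])
    for i in 2:N
        ex = :($ex + t[$i])
    end
    return ex
end

unrolled_sum((1, 2, 3, 4))   # runtime cost is just the unrolled additions
```

As the talk notes, `@generated` moves work to compile time but comes with sharp edges, which is part of the motivation for exploring `constexpr`-like alternatives.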
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/QXF9AM/
Elm A
Nathan Daly
PUBLISH
UUC9UJ@@pretalx.com
-UUC9UJ
Transducers: data-oriented abstraction for sequential and parallel algorithms on containers
en
en
20190725T150000
20190725T153000
0.03000
Transducers: data-oriented abstraction for sequential and parallel algorithms on containers
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/UUC9UJ/
Elm A
Takafumi Arakaki
PUBLISH
J39LVP@@pretalx.com
-J39LVP
Efficient Stiff Ordinary Differential Equation Solvers for Quantitative Systems Pharmacology (QsP)
en
en
20190725T154500
20190725T161500
0.03000
Efficient Stiff Ordinary Differential Equation Solvers for Quantitative Systems Pharmacology (QsP)
The solution of the stiff ordinary differential equation (ODE) systems resulting from QsP models is a rate-limiting step in large-scale population studies. Here we review the ongoing efficiency developments within the JuliaDiffEq numerical differential equation solver ecosystem with a focus on ODEs derived from QsP models. These models have specific features that can be specialized on in order to gain additional efficiency in the integration, such as their small size (normally <500 ODEs), frequent dosing events, and multi-rate behaviors. We demonstrate how new implementations of high-order Rosenbrock methods with PI-adaptive time stepping, exponential propagation iterative Runge-Kutta methods (EPIRK) with adaptive Krylov expmv calculations, and implicit-explicit singly diagonally implicit Runge-Kutta methods (IMEX SDIRK) can be advantageous over classical schemes like LSODA and Sundials' CVODE on these types of models, and we discuss future directions.
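For readers unfamiliar with the solver interface, here is a minimal illustrative example (not one of the talk's QsP models): the classic stiff Robertson chemical kinetics problem solved with a Rosenbrock method from the JuliaDiffEq ecosystem:

```julia
using DifferentialEquations

# Robertson problem: three species with rate constants spanning nine
# orders of magnitude, a standard stiff test case.
function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] = 0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] = 3e7 * y2^2
end

prob = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5))
sol = solve(prob, Rosenbrock23())   # a stiff-capable Rosenbrock solver
```

Swapping `Rosenbrock23()` for another algorithm is a one-line change, which is what makes the solver comparisons in the talk straightforward to run.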
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/J39LVP/
Elm A
Yingbo Ma
PUBLISH
38EHTQ@@pretalx.com
-38EHTQ
Simulation and estimation of Nonlinear Mixed Effects Models with PuMaS.jl
en
en
20190725T161500
20190725T164500
0.03000
Simulation and estimation of Nonlinear Mixed Effects Models with PuMaS.jl
Pharmacokinetic/Pharmacodynamic (PKPD) models are empirical models of the physiological and pharmacological systems often used to describe the kinetics and behavior of drugs in the human body. Nonlinear Mixed Effects (NLME) statistical methods help identify the parameters of the PKPD models and quantify the differences between individuals by integrating models at the population and individual scales. In this talk I introduce PuMaS.jl, Julia-based software for simulating and estimating the PKPD, physiologically-based PK (PBPK), quantitative systems pharmacology (QSP), and related models used in pharmacology. I will begin by describing approximations to the marginal likelihood which make these quantities efficiently computable, and demonstrate on real data how these models can be fit with Optim.jl to reveal population-level characteristics. Additionally, I will demonstrate the ability to utilize DynamicHMC.jl to perform Bayesian estimation of population and individual parameters. Together, this demonstrates a Julia-based, data-driven approach to handling complex problems in individualized dosing.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/38EHTQ/
Elm A
Vaibhav Dixit
PUBLISH
7SX9LN@@pretalx.com
-7SX9LN
An advanced electrodialysis process model in the Julia ecosystem
en
en
20190725T164500
20190725T165500
0.01000
An advanced electrodialysis process model in the Julia ecosystem
Electrodialysis is a separation technology that uses electric fields and ion-exchange membranes to separate charged components from solutions. Electrodialysis is a highly efficient technology with prominent applications in the production of drinking water from seawater and the recovery and upgrading of various bio-based resources.
The majority of physical and electrochemical phenomena are well understood and can be described by mechanistic models. However, the complex and intricate interplay of the various phenomena that occur at the surface of the ion-exchange membranes adds an incredible amount of complexity, and a mechanistic description is often too difficult. The Julia ecosystem provides a unique opportunity to couple differential equation solvers with machine learning techniques such as neural networks as a hybrid approach to model these kinds of systems. Automatic differentiation facilitates the optimisation procedure and is interesting from an engineering point of view.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/7SX9LN/
Elm A
Bram De Jaegher
PUBLISH
9BRTVV@@pretalx.com
-9BRTVV
IVIVC.jl: In vitro – in vivo correlation module as part of an integrated pharmaceutical modeling and simulation platform
en
en
20190725T165500
20190725T170500
0.01000
IVIVC.jl: In vitro – in vivo correlation module as part of an integrated pharmaceutical modeling and simulation platform
In this talk, I will introduce IVIVC.jl, a state-of-the-art package for pharmaceutical modelling and simulation in Julia. At its core, IVIVC.jl uses Optim.jl as its optimization library to model the in vitro data and then correlates the model with deconvoluted in vivo data. This package establishes three levels (A, B, and C) of in vitro-in vivo correlation (IVIVC). IVIVC is the relationship between a parameter derived from a pharmacokinetic property produced by a dosage form and a physicochemical property of the same dosage form. The pharmacokinetic properties include maximum plasma concentration (Cmax) and area under the plasma concentration-time curve (AUC). When an IVIVC is established, it is used for the development and optimization of drug formulations.
Using IVIVC.jl, one can accelerate drug development and bring a drug to market faster. IVIVC can be used to substitute for human bioequivalence studies in the initial approval process, the scale-up process, and post-approval changes. This project is based at the University of Maryland, Baltimore; the authors are myself, Jogarao Gobburu, and Vijay Ivaturi.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/9BRTVV/
Elm A
Shubham Maddhashiya
PUBLISH
9UWDLY@@pretalx.com
-9UWDLY
GigaSOM.jl: Huge-scale, high-performance flow cytometry clustering in Julia
en
en
20190725T170500
20190725T171500
0.01000
GigaSOM.jl: Huge-scale, high-performance flow cytometry clustering in Julia
Recent advances in single-cell technologies offer an unprecedented opportunity to comprehensively characterize the immune system, revealing a previously unparalleled complexity in the phenotype and function of immune cells. Mass cytometry, also known as CyTOF, was recently implemented to measure up to 40 different markers in several million single cells. A typical clinical study with hundreds of patients can therefore include billions of single cells (rows) and up to 40 markers (features).
Different dimension reduction methods have been implemented in commercial and open-source software, mainly written in R. The machine learning algorithm FlowSOM [1] is based on the famous Kohonen Self Organising Feature Maps (SOM) [2] and has shown various advantages over other methods.
However, all current implementations have a critical limitation on the total number of cells to be analyzed. This limitation often blocks the analysis of large-scale clinical studies with several hundred million cells.
Here, we present the open-source, high-level, and high-performance package GigaSOM.jl (https://github.com/LCSB-BioCore/GigaSOM.jl), which is HPC-ready and is written to handle very large datasets without limits. Julia is the natural language of choice when it comes to performing huge-scale cytometric analyses. With the GigaSOM.jl package, the possibilities for flow cytometry analysis are further broadened. The quality of the software package is assured using ARTENOLIS (https://artenolis.lcsb.uni.lu) [3]. Biological validation of the results will be performed on downsampled datasets by comparison to conventional implementations of the FlowSOM package and manual hierarchical analysis.
*References*
[1] Sofie Van Gassen, Britt Callebaut, Mary J. Van Helden, Bart N. Lambrecht, Piet Demeester, Tom Dhaene and Yvan Saeys. FlowSOM: Using self-organizing maps for visualization and interpretation of cytometry data. Cytometry A 2015, volume 87.7 (p. 636-645)
[2] Kohonen T. The self-organizing map. Proc IEEE 1990;78:1464–1480
[3] Heirendt, Laurent; Arreckx, Sylvain; Trefois, Christophe; Yarosz, Yohan; Vyas, Maharshi; Satagopam, Venkata P.; Schneider, Reinhard; Thiele, Ines; Fleming, Ronan M. T., ARTENOLIS: Automated Reproducibility and Testing Environment for Licensed Software, arXiv:1712.05236.
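As a rough sketch of the SOM training step underlying FlowSOM-style clustering (illustrative only, not GigaSOM.jl's API), each cell pulls its best-matching codebook vector, and that vector's grid neighbors, toward itself:

```julia
using LinearAlgebra, Random

# One SOM training step: find the best-matching unit (BMU) for a cell x,
# then move nearby codebook vectors toward x, weighted by grid distance.
function som_step!(codes::Matrix{Float64}, x::Vector{Float64}, grid, σ, η)
    dists = [norm(view(codes, i, :) .- x) for i in 1:size(codes, 1)]
    bmu = argmin(dists)                                  # best matching unit
    for i in 1:size(codes, 1)
        h = exp(-norm(grid[i] .- grid[bmu])^2 / (2σ^2))  # neighborhood kernel
        codes[i, :] .+= η * h .* (x .- view(codes, i, :))
    end
    return bmu
end

Random.seed!(0)
codes = rand(9, 3)                          # a 3×3 grid of 3-marker codebook vectors
grid = [(i, j) for i in 1:3 for j in 1:3]   # grid coordinates of each code
bmu = som_step!(codes, [0.5, 0.5, 0.5], grid, 1.0, 0.1)
```

Scaling this loop to billions of cells, across distributed workers, is the engineering problem GigaSOM.jl addresses.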
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/9UWDLY/
Elm A
Laurent Heirendt
Vasco Verissimo
PUBLISH
DLQMQU@@pretalx.com
-DLQMQU
MendelIHT.jl: How to fit Generalized Linear Models for High Dimensional Genetics (GWAS) Data
en
en
20190725T171500
20190725T172500
0.01000
MendelIHT.jl: How to fit Generalized Linear Models for High Dimensional Genetics (GWAS) Data
**Background:** Marginal regression is widely employed by the genomics community to identify variants associated with complex traits. Ideally one would consider all covariates in tandem, but existing multivariate methods are ill-suited to handling the common issues of a modern genome-wide association study (GWAS). Here we fill the gap with a new multivariate algorithm: iterative hard thresholding (IHT).
**Method:** We introduce a novel coefficient estimation scheme based on maximum likelihoods, extending the IHT algorithm to perform multivariate model estimation for any exponential family. We further discuss and implement doubly-sparse and prior knowledge-aided variants of IHT to tackle specific problems in genetics, such as linkage disequilibrium.
**Results:** We show how to apply IHT for any generalized linear model, and explicitly derive the updating algorithm and optimal step length for logistic and Poisson models. We provide an efficient implementation of IHT in Julia to analyze GWAS data as a module under [OpenMendel](https://github.com/OpenMendel). We tested our algorithm on real and simulated data to demonstrate model quality, algorithm robustness, and scalability. Then we investigate when and how (group)-(within-group) sparsity and knowledge-aided projections may help in discovering rare genetic variants with small effect size. Our implementation enjoys built-in parallelism, operates directly on raw genotype files, and is completely [open sourced](https://github.com/biona001/MendelIHT.jl).
**Significance:** For geneticists, our method offers enhanced multivariate model selection for big-data GWAS. For theorists, we demonstrate how to use IHT to find GLM coefficients, and we derive two variants of the thresholding operators and show when they are expected to perform better.
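The core projection in IHT can be sketched in a few lines (illustrative, not MendelIHT.jl's API): after each gradient step, the coefficient vector is projected onto the set of k-sparse vectors by keeping only the k largest-magnitude entries:

```julia
# Hard-thresholding projection: zero out all but the k largest-magnitude
# coefficients, in place.
function hard_threshold!(β::Vector{Float64}, k::Int)
    keep = partialsortperm(abs.(β), 1:k, rev=true)   # indices of top-k |β|
    mask = falses(length(β))
    mask[keep] .= true
    β[.!mask] .= 0.0
    return β
end

β = [0.1, -2.0, 0.5, 3.0, -0.2]
hard_threshold!(β, 2)     # keeps only the entries -2.0 and 3.0
```

The doubly-sparse and knowledge-aided variants mentioned in the abstract modify this projection, e.g. by enforcing group-level sparsity as well.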
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/DLQMQU/
Elm A
benjamin chu
PUBLISH
NF9XC7@@pretalx.com
-NF9XC7
Electrifying Transportation with Julia
en
en
20190725T172500
20190725T173500
0.01000
Electrifying Transportation with Julia
Modeling electric vehicle systems and their batteries helps to hasten the adoption and development of electric vehicles by clarifying their capabilities and requirements. Much progress has been made in electrifying automobiles, but less has been made in aircraft, which contribute a significant fraction of the greenhouse gas emissions from transportation. Aircraft have different battery requirements than automobiles and other modes of ground transportation. Specifically, the take-off and climb stages of flight can require discharge rates far greater than ground-based systems.
To study the effects of electrifying aviation, models of aircraft and batteries need to be integrated and studied together. It is essential to calculate the parameters of both the aircraft and the battery because these two systems depend on each other.
We use Julia to model both aircraft and batteries. To model aircraft, a physics-based performance model is used to calculate the energy and power requirements of the system. We use historical aircraft of various sizes and configurations to calculate energy and power requirements of different classes of aircraft. Our lab has modeled both eVTOL and conventional aircraft. Converting the model from MATLAB to Julia yielded a massive speedup, enabling us to test many more configurations and classes of aircraft.
We use several battery models depending on the size and battery requirements of the mode of transportation. For modes that are capable of being powered by Lithium-Ion batteries, we use pseudo-2D single particle models incorporating all relevant properties of batteries, including energy, discharge rates, mass, and temperature. These models involve large systems of differential equations which can be efficiently solved using the DifferentialEquations.jl package. Porting these battery models to Julia yielded massive speedups, enabling parameter sweeps and use-case specific optimization of battery cell parameters.
Larger and longer-range forms of transportation have battery requirements that exceed what Lithium-Ion batteries can provide. For example, a fully electric commercial aircraft could require a Lithium-Air battery. We also use the speed provided by Julia to increase our modeling capabilities for these forms of transportation. In this talk, we will show an integrated model of a Lithium-Air battery and an aircraft performance model.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/NF9XC7/
Elm A
Alec Bills
PUBLISH
EKTAAY@@pretalx.com
-EKTAAY
The Julia Language 1.0 Ephemeris and Physical Constants Reader for Solar System Bodies
en
en
20190725T173500
20190725T174500
0.01000
The Julia Language 1.0 Ephemeris and Physical Constants Reader for Solar System Bodies
With computation time being a critical factor in trajectory optimization, this code has aimed for and accomplished a higher computational efficiency than the original code designed in MATLAB®. The ephemeris reader acquires necessary data for mission design from public JPL websites and calculates positions, velocities, accelerations, and other characteristics of major and small bodies at any user-defined times using the extracted data. Additionally, the second generation of this ephemeris reader introduces shape models of known asteroids as well as spherical harmonics capabilities to support gravitational potential models. This version of the ephemeris reader is also compatible with Julia 1.0, which was released in August of 2018. This second-generation version continues to deliver the critical information needed for mission and trajectory design faster and more efficiently.
This lightning talk will discuss a more detailed overview as well as why Julia was chosen for this project.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/EKTAAY/
Elm A
Renee Spear
PUBLISH
KDBXKD@@pretalx.com
-KDBXKD
Interval methods for scientific computing in Julia
en
en
20190725T110000
20190725T113000
0.03000
Interval methods for scientific computing in Julia
We will give an overview of the suite of inter-related packages making up the `JuliaIntervals` organization. These are packages that use **set calculations** to solve nonlinear equations, minimize nonlinear functions, and solve ordinary differential equations, with results that are (in principle, modulo coding errors!) *guaranteed* to be correct.
The underlying technique for calculating with sets is **interval arithmetic**. Here, mathematical operations such as `x -> x^2` and `x + y` are defined on sets, represented as intervals of all real numbers between two endpoints. By defining these operations carefully, the computed `f(X)` is guaranteed to contain `f(x)` for every `x` in the set `X`, even though we use floating-point arithmetic for the calculations.
A key technique is **interval constraint propagation**, which allows us to calculate enclosures of the *inverse* of a given function. We will show how this can accelerate optimisation and root finding using interval methods.
The presentation will be based on practical applications.
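The enclosure property can be illustrated with a toy interval type (a minimal sketch that ignores the directed rounding a real implementation needs; `Interval` here is our own type, not the IntervalArithmetic.jl one):

```julia
# A toy interval type illustrating the enclosure property: every
# operation returns an interval containing all pointwise results.
# (Real interval arithmetic also rounds endpoints outward; omitted here.)
struct Interval
    lo::Float64
    hi::Float64
end

Base.:+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)

function Base.:^(a::Interval, n::Integer)
    vals = (a.lo^n, a.hi^n)
    # an even power of an interval straddling 0 has minimum 0
    lo = (iseven(n) && a.lo < 0 < a.hi) ? 0.0 : min(vals...)
    return Interval(lo, max(vals...))
end

X = Interval(-1.0, 2.0)
X^2  # Interval(0.0, 4.0): contains x^2 for every x in [-1, 2]
```

Note that the naive endpoint formula `[lo^2, hi^2]` would give the wrong answer `[1, 4]` here; defining each operation carefully is exactly the work the JuliaIntervals packages do.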
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/KDBXKD/
Elm B
David P. Sanders
PUBLISH
7S9BDD@@pretalx.com
-7S9BDD
Implicit Geometry with Multi-Dimensional Bisection Method
en
en
20190725T113000
20190725T114000
0.01000
Implicit Geometry with Multi-Dimensional Bisection Method
Multi-Dimensional Bisection Method (MDBM.jl) is an efficient and robust root-finding package which can be used to determine whole high-dimensional submanifolds (points, curves, surfaces…) of the roots of implicit non-linear equation systems, especially in cases where the number of unknowns surpasses the number of equations.
Engineering applications will be presented from different fields:
- determination of the workspace of a robot arm with collision avoidance
- stability (stabilizability) chart computations
- constrained parameter optimisation
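The underlying idea can be illustrated with classical one-dimensional bisection, the special case that MDBM generalizes to higher dimensions (a sketch with our own function name, not the MDBM.jl API):

```julia
# Classical 1D bisection: repeatedly halve a sign-changing bracket.
# MDBM generalizes this idea to grids in many dimensions, recovering
# whole solution manifolds rather than single points.
function bisect(f, a, b; tol = 1e-10)
    fa = f(a)
    fa * f(b) <= 0 || error("f must change sign on [a, b]")
    while b - a > tol
        m = (a + b) / 2
        if fa * f(m) <= 0
            b = m            # root lies in the left half
        else
            a, fa = m, f(m)  # root lies in the right half
        end
    end
    return (a + b) / 2
end

bisect(x -> x^2 - 2, 0.0, 2.0)  # ≈ sqrt(2)
```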
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/7S9BDD/
Elm B
Daniel Bachrathy
PUBLISH
3BKGNU@@pretalx.com
-3BKGNU
Computational topology and Boolean operations with Julia sparse arrays
en
en
20190725T114000
20190725T115000
0.01000
Computational topology and Boolean operations with Julia sparse arrays
This approach to computation of space arrangements may be used in disparate subdomains of geometric and visual computing, including geo-mapping, computer vision, computer graphics, medical imaging, geometric and solid modeling, and finite elements. In all such domains one must compute incidences, adjacencies and ordering of cells, generally using disparate and often incompatible data structures and algorithms. E.g., most earlier algorithms for space decomposition and Boolean operations work with data structures optimized for selected classes of geometric objects.
Conversely, we introduce a computational architecture based only on linear algebra with sparse arrays and basic matrix operations. Therefore our formulation, cast in terms of (co)chain complexes of (co)boundary maps, may be applied to very different geometric objects, ranging from solid models to engineering meshes, geographical systems, and biomedical images. In particular, we use rather general cellular complexes, with cells homeomorphic to polyhedra, i.e., to triangulable spaces, and hence possibly non convex and with holes.
The main stage of our computational pipeline operates independently on each 2-cell of the input data sets, according to an embarrassingly parallel data-driven approach. It is remarkable that the approach works with collections of 2-manifolds with- or without-boundary, sets of non-manifolds, sets of 3-manifolds, etc., and not only with triangle meshes.
Extending this approach to Boolean solid operations is straightforward. Among other strong points we cite: the compact representation of cellular complexes; the combinable nature of maps, allowing for multiple queries about the local topology by a single matrix multiplication; the parallel fragmentation of input cells empowered by cell congruence; and what we call the "topological gift wrapping" algorithm.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/3BKGNU/
Elm B
Alberto Paoluzzi
PUBLISH
CES8P9@@pretalx.com
-CES8P9
Geometric algebra in Julia with Grassmann.jl
en
en
20190725T115000
20190725T120000
0.01000
Geometric algebra in Julia with Grassmann.jl
The design of [Grassmann.jl](https://github.com/chakravala/Grassmann.jl) is based on the `TensorAlgebra` abstract type system interoperability from [AbstractTensors.jl](https://github.com/chakravala/AbstractTensors.jl) with a `VectorSpace` parameter from [DirectSum.jl](https://github.com/chakravala/DirectSum.jl). Abstract tangent vector space type operations happen at compile-time, resulting in a differential conformal geometric algebra of hyper-dual multivector forms.
The abstract nature of the product algebra code generation enables one to automate the extension of the product operations to any specific number field type (including symbolic coefficients with [Reduce.jl](https://github.com/chakravala/Reduce.jl) or SymPy.jl), by taking advantage of Julia's type system. With the type system, it is possible to construct mixed tensor products from the mixed tangent vector basis and its dual basis, such as bivector elements of Lie groups. `Grassmann` can be used to study unitary groups used in quantum computing by building efficient computational representations of their algebras. Applicability of the Grassmann computational package not only maps to quantum computing, but has the potential of impacting countless other engineering and scientific computing applications. It can be used to work with automatic differentiation and differential geometry, algebraic forms and invariant theory, electric circuits and wave scattering, spacetime geometry and relativity, computer graphics and photogrammetry, and much more.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/CES8P9/
Elm B
Michael Reed
PUBLISH
ZE9AVH@@pretalx.com
-ZE9AVH
Mimi.jl – Next Generation Climate Economics Modeling
en
en
20190725T143000
20190725T150000
0.03000
Mimi.jl – Next Generation Climate Economics Modeling
In 2016, the EPA commissioned a report from the National Academy of Sciences on research priorities for improving and updating the Social Cost of Carbon, a metric used by the federal government to account for the impacts of climate change within regulatory impact analyses. One central recommendation from the ensuing National Academies [report](https://www.nap.edu/catalog/24651/valuing-climate-damages-updating-estimation-of-the-social-cost-of) was to create a common, modular computational platform to better serve modelling work in this area.
Our team created the leading (and probably only) implementation following this call to action: the [Mimi Framework](https://www.mimiframework.org/). Mimi.jl is entirely implemented in Julia. This talk will present this computational platform, discuss its application, and dive into key design considerations.
The main design constraints for Mimi.jl were that we needed something:
a) computationally fast,
b) simple enough that a lack of significant programming experience is not a barrier for users,
c) that enables a modular work style for distributed, loosely coordinated teams, and
d) that creates a transparent framework enabling easy replication of computational experiments.
We will describe in some detail how we achieved this design using a macro-based domain-specific language for certain parts of the framework, while at the same time exposing the full power of the Julia language to users. We will also touch on our use of a custom Julia registry as the repository for different modules that different groups can work on, allowing us to use Julia’s package manager to solve the replication problem for computational experiments. We will also discuss a large number of specific design decisions that helped us make the system easy to use for novice programmers.
We will conclude the talk with a discussion of adoption and impact of this platform. We will outline how different groups at a number of leading universities and think tanks have adopted the platform for their work and outline why we believe it will power the next generation of the US federal climate economics work in the regulatory space going forward.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/ZE9AVH/
Elm B
Cora Kingdon
Lisa Rennels
David Anthoff
PUBLISH
FXBCLP@@pretalx.com
-FXBCLP
The Climate Machine: A New Earth System Model in Julia
en
en
20190725T150000
20190725T153000
0.03000
The Climate Machine: A New Earth System Model in Julia
The [Climate Modeling Alliance (CliMA)](https://clima.caltech.edu/) aims to build the first Earth system model that automatically learns from both Earth observations and embedded high-resolution simulations. The goal is to build a more accurate climate model with quantified uncertainties.
High computational performance on heterogeneous architectures, including CPUs, GPUs, and distributed computing architectures is essential for the project. The model is being developed in Julia to ensure portable performance at supercomputing scales for this ambitious scientific computing project.
This talk will give an overview of the project and its motivation. We will highlight how we’re pushing Julia to scale, including lessons learnt and challenges we are facing.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/FXBCLP/
Elm B
Simon Byrne
Charlie Kawczynski
PUBLISH
9R9A7M@@pretalx.com
-9R9A7M
Symbolic Manipulation in Julia
en
en
20190725T154500
20190725T161500
0.03000
Symbolic Manipulation in Julia
The manipulation of symbolic terms is fundamental to a variety of fields in computer science, including computer algebra, automated reasoning, and scientific modeling. Through the lens of symbolic transformations, we concisely represent and efficiently apply complex properties and equivalences. In this talk, we discuss a family of Julia packages for symbolic computation, establishing the foundations of term rewriting and showcasing extensions and applications.
After offering a formal definition of symbolic terms, we will examine various notions from the field of term rewriting and their implementations in Rewrite.jl. We will design sets of rewrite rules that simplify algebraic expressions into unique normal forms, based on domain-specific axioms. Building on this interpretation, we will finally explore symbolic differential equations in ModelingToolkit.jl, showcasing methods for incorporating context-aware symbolic variables and techniques for enforcing mathematical invariants.
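The flavour of rule-based rewriting can be sketched directly on Julia `Expr` trees (an illustrative toy, not the Rewrite.jl API), here applying the rule `x + 0 → x` recursively:

```julia
# A minimal term-rewriting sketch on Julia expressions:
# recursively apply the algebraic identity  x + 0 → x.
function simplify(ex)
    ex isa Expr || return ex
    args = map(simplify, ex.args)           # rewrite subterms first
    if ex.head == :call && args[1] == :+ && length(args) == 3
        args[2] == 0 && return args[3]      # 0 + x → x
        args[3] == 0 && return args[2]      # x + 0 → x
    end
    return Expr(ex.head, args...)
end

simplify(:(y + 0))        # :y
simplify(:((x + 0) * 3))  # :(x * 3)
```

A real rewriting system replaces the hard-coded pattern match with a library of rules and a matching engine, and must take care with associativity and commutativity; this sketch shows only the core recursion.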
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/9R9A7M/
Elm B
Harrison Grodin
PUBLISH
Y9B87L@@pretalx.com
-Y9B87L
Building a Debugger with Cassette
en
en
20190725T161500
20190725T164500
0.03000
Building a Debugger with Cassette
As the saying goes: _"You can solve that with Cassette"_.
You can do anything with Cassette, and debugging is a thing, therefore Cassette can be used for it.
Generally speaking, you shouldn't solve problems with Cassette that you can solve in any other way.
However if you want to build a debugger, the options include building [an entire interpreter](https://github.com/JuliaDebug/JuliaInterpreter.jl),
[going deep into the lowest levels of the compiler/LLVM](https://github.com/JuliaDebug/Gallium.jl/tree/v0.0.4),
or using [Cassette](https://github.com/jrevels/Cassette.jl).
The other options are certainly good, and indeed the interpreter work has yielded [Debugger.jl](https://github.com/JuliaDebug/Debugger.jl/).
But this talk is about doing it with Cassette.
[MagneticReadHead v0.1.0](https://github.com/oxinabox/MagneticReadHead.jl/tree/v0.1.0) is a usable debugger based on easy-to-use Cassette function overdubs.
It is under 300 lines of code, and allowed setting breakpoints and stepping between function calls.
[MagneticReadHead v0.2+](https://github.com/oxinabox/MagneticReadHead.jl) is a featureful debugger based on Cassette IR passes.
It is under 800 lines of code, and allows setting breakpoints at arbitrary locations and stepping between IR statements.
The talk will first explain the Cassette function overdubs used in v0.1.0 and how you can use them to fairly painlessly instrument Julia code at the function call level.
It will then move on to the much more complex IR passes used in current versions of MagneticReadHead,
explaining how you can recursively modify Julia code at run time to insert the extra functionality needed for debugging.
At first Julia IR may seem like a read-only language.
It is surprisingly easy to read, but actually modifying it... the concept brings on a special kind of head pain.
After this talk you will be able to experience that special pain for yourself,
and hopefully push through it to do something useful.
This tutorial-style presentation is for the advanced Julia user.
While knowledge of Cassette is not required, it is expected that attendees are broadly familiar with the idea of what IR is, even if they have no idea how to write it.
Everyone is welcome to come along for the ride.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/Y9B87L/
Elm B
Lyndon White (@oxinabox)
PUBLISH
WJZJXS@@pretalx.com
-WJZJXS
Static walks through dynamic programs -- a conversation with type-inference.
en
en
20190725T164500
20190725T165500
0.01000
Static walks through dynamic programs -- a conversation with type-inference.
Efficient performance engineering for Julia programs relies heavily on understanding the result of type inference on your program; type inference as a process is sensitive to local information and call context. Many Julia users use the information provided by `@code_typed` to analyse the behaviour of type inference on *a* function. This method becomes cumbersome and inefficient with deeply nested programs, where the user needs to reconstruct local information to inspect called methods. This talk introduces a tool that streamlines this process and allows users to take a static walk through their dynamic program. It simplifies the performance engineering process and is capable of handling code that uses tasks, threads, and GPUs.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/WJZJXS/
Elm B
Valentin Churavy
PUBLISH
RVXF7L@@pretalx.com
-RVXF7L
Concolic Fuzzing -- Or how to run a theorem prover on your Julia code
en
en
20190725T165500
20190725T170500
0.01000
Concolic Fuzzing -- Or how to run a theorem prover on your Julia code
In this talk we will explore how to use Cassette to extract a symbolic trace from a Julia program, and how to use that capability to prove properties of Julia programs. This also enables fuzzing and automated test-case development.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/RVXF7L/
Elm B
Valentin Churavy
PUBLISH
RH78UW@@pretalx.com
-RH78UW
Analyzing and updating code with JuliaInterpreter and Revise
en
en
20190725T170500
20190725T171500
0.01000
Analyzing and updating code with JuliaInterpreter and Revise
Revise.jl serves as a dynamic bridge between the text in source files and the compiled methods in your running Julia session. To faithfully bridge these two worlds, Revise needs to understand quite a bit about code. Historically, Revise analyzed expressions written in Julia’s [surface syntax](https://docs.julialang.org/en/latest/devdocs/ast/); however, recent examples revealed a number of weaknesses in this approach. To address these limitations, starting with version 2.0 Revise has leveraged JuliaInterpreter.jl to perform its analysis of code using Julia’s internal lowered-form representation. This has resulted in dramatic improvements in the number of methods that can be tracked by their signatures, and may allow new capabilities such as discovery of block-level interdependencies. At the same time, Revise’s internal data structures have been reorganized to simplify access by other packages, resulting in a lightweight standalone package CodeTracking.jl. I will describe the motivations for these changes and the solutions enabled by the new approach, and demonstrate some of the dramatic improvements this has netted for dependent packages like Rebugger.jl.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/RH78UW/
Elm B
Tim Holy
PUBLISH
M8G7DD@@pretalx.com
-M8G7DD
TimerOutputs.jl - a cheap and cheerful instrumenting profiler
en
en
20190725T171500
20190725T172500
0.01000
TimerOutputs.jl - a cheap and cheerful instrumenting profiler
Julia has for a long time come with a built-in profiler, available as the Profile standard library. This is a sampling profiler, which means that it collects snapshots of the stack trace at regular intervals. After execution of the profiled code, the number of traces collected in each function can be displayed, which gives a good estimate of where the code is spending its time. While sampling profiling is a very useful and cheap way of profiling code, there are a few drawbacks:
- Since only samples of the execution are collected, exact information such as the time spent in functions or the number of calls to a function is not available.
- Not being able to annotate which parts of the code you care about means that the output from the profiler is sometimes noisy and hard to interpret for non-experts.
- Allocations cannot be attributed to specific sections of the code.
TimerOutputs is a Julia package that provides an instrumenting profiler which requires you to annotate your code with labeled sections. While the code is running, TimerOutputs records timing and allocation data in these sections and can then print a summary back to the user. This can sometimes make it easier to get a high level overview of the performance characteristics of the code.
An example of the output from TimerOutputs is shown below.
 ──────────────────────────────────────────────────────────────────────────────
                                          Time                  Allocations
                                  ──────────────────────  ──────────────────────
         Tot / % measured:             6.89s / 97.8%          5.20GiB / 85.0%

 Section              ncalls     time   %tot     avg     alloc   %tot      avg
 ──────────────────────────────────────────────────────────────────────────────
 assemble                  6    3.27s  48.6%   545ms   3.65GiB  82.7%   624MiB
   inner assemble       240k    1.92s  28.4%  7.98μs   3.14GiB  71.1%  13.7KiB
 linear solve              5    2.73s  40.5%   546ms    108MiB  2.39%  21.6MiB
 create sparse matrix      6    658ms  9.77%   110ms    662MiB  14.6%   110MiB
 export                    1   78.4ms  1.16%  78.4ms   13.1MiB  0.29%  13.1MiB
 ──────────────────────────────────────────────────────────────────────────────
The timing and allocations for each section are presented, and the call graph (e.g. that assemble calls inner assemble) is shown by indentation.
This lightning talk gives a short summary of the implementation, syntax and use cases of TimerOutputs.
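The instrumenting idea can be sketched in plain Base Julia (the real package instead provides a `@timeit` macro and also tracks allocations; the names below are our own): accumulate wall time and call counts per labeled section.

```julia
# A minimal Base-Julia sketch of instrumenting profiling:
# accumulate wall time and call counts per labeled section.
# (TimerOutputs does this via a @timeit macro and also records allocations.)
const SECTIONS = Dict{String,NamedTuple{(:ncalls, :time),Tuple{Int,Float64}}}()

function timed(f, label::String)
    t = @elapsed result = f()
    old = get(SECTIONS, label, (ncalls = 0, time = 0.0))
    SECTIONS[label] = (ncalls = old.ncalls + 1, time = old.time + t)
    return result
end

for _ in 1:6
    timed("assemble") do
        sum(rand(1000))  # stand-in for real work
    end
end
SECTIONS["assemble"].ncalls  # 6
```

Unlike sampling, every call is counted exactly, which is why the `ncalls` column in the table above is precise rather than estimated.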
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/M8G7DD/
Elm B
Kristoffer Carlsson
PUBLISH
RMNXL7@@pretalx.com
-RMNXL7
PackageCompiler
en
en
20190725T172500
20190725T173500
0.01000
PackageCompiler
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/RMNXL7/
Elm B
Simon Danisch
PUBLISH
BCYWZJ@@pretalx.com
-BCYWZJ
The Unreasonable Effectiveness of Multiple Dispatch
en
en
20190725T110000
20190725T113000
0.03000
The Unreasonable Effectiveness of Multiple Dispatch
The former kind of sharing of types stems from the external nature of multiple dispatch: the source of many problems in class-based OOP is that methods have to live "inside of" classes. Simply by associating methods with generic functions rather than the type that they operate on, multiple dispatch avoids many of the problems that OOP has.
The latter kind of sharing stems from the ability to correctly choose specialized code based on the types of all arguments of a function. There are patterns like double dispatch to try to deal with this in single dispatch languages, but they are cumbersome, brittle and opt-in, meaning that unless someone else planned for you to extend their code, you are unlikely to be able to do so.
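A small sketch of this (a hypothetical example, not from the talk): with multiple dispatch, specializing on the types of both arguments, and reusing code by symmetry, each take a single method definition.

```julia
# Multiple dispatch selects code based on the types of ALL arguments.
# In a single-dispatch language this requires the double-dispatch pattern;
# here each case is just one method. (Hypothetical example.)
abstract type Shape end
struct Circle <: Shape end
struct Square <: Shape end

intersect_kind(::Shape,  ::Shape)  = "generic shape-shape test"
intersect_kind(::Circle, ::Circle) = "fast circle-circle test"
intersect_kind(::Circle, ::Square) = "specialized circle-square test"
intersect_kind(s::Square, c::Circle) = intersect_kind(c, s)  # reuse by symmetry

intersect_kind(Square(), Circle())  # "specialized circle-square test"
```

Crucially, a third party can add a new `Shape` and new specialized methods without touching, or even planning with, the original code.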
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/BCYWZJ/
Room 349
Stefan Karpinski
PUBLISH
JVUMQJ@@pretalx.com
-JVUMQJ
Julia's Killer App(s): Implementing State Machines Simply using Multiple Dispatch
en
en
20190725T113000
20190725T114000
0.01000
Julia's Killer App(s): Implementing State Machines Simply using Multiple Dispatch
In the universe of programming languages, success for any new language hangs on
its ability to simplify some class of problems for which existing
languages cannot generate simple solutions. The concept of a
state machine is a powerful tool that programmers have used for ages to address
all manner of challenges, but until now implementing a state machine has
typically required either the use of involved external libraries or the design
of relatively opaque algorithms and data structures. Julia changes all of that
by embracing multiple dispatch and an extensible type system that, together,
provide everything a programmer requires to design and develop even the most
complex of state machines.
This presentation will begin with a basic introduction to the concepts behind
state machines and their implementation and then dive into an example of how
one might use the concept of a state machine to handle the parsing of a regular
expression into a Julia data structure. Continuing from there, we will look at
how matching a regular expression to a string can, itself, be implemented using
a state machine. All the while, we will only require Julia, its type system,
and a handful of multiply dispatched methods.
Finally, we will review various ways in which state machines implemented in
Julia can be extended and iterated upon without requiring any new libraries or
tools beyond those used in their initial implementation. We will also consider
how even more complex tasks, including even an entire HTTP request/response
handler, can be tackled using the exact same approach. By the end, it should be
clear that the killer feature that Julia brings to the world of programming
languages is the ability to develop state machines in a way that is clear,
concise, performant, and infinitely flexible.
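As a minimal sketch of the pattern (a hypothetical example, not the talk's code), each state can be a type and each transition a method, so the whole machine is just the method table:

```julia
# A dispatch-driven state machine: states are types, transitions are
# methods on (state, input) pairs. (Hypothetical example.)
abstract type State end
struct Idle    <: State end
struct Running <: State end

transition(::Idle,    input::Symbol) = input == :start ? Running() : Idle()
transition(::Running, input::Symbol) = input == :stop  ? Idle()    : Running()

# Fold a stream of inputs through the machine, starting from Idle.
drive(inputs) = foldl((s, i) -> transition(s, i), inputs; init = Idle())

drive([:start, :tick, :stop])  # Idle()
drive([:start, :tick])         # Running()
```

Adding a state or transition means adding a type or a method, with no central transition table to edit.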
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/JVUMQJ/
Room 349
Joshua Ballanco
PUBLISH
XH89CV@@pretalx.com
-XH89CV
Differential Programming Tensor Networks
en
en
20190725T114000
20190725T115000
0.01000
Differential Programming Tensor Networks
A tensor network is a contraction of tensors; it has wide applications in physics, big data, and machine learning. It can be used to represent a quantum wave function ansatz, compress data, and model probabilities. Supporting automatic differentiation brings hope for many difficult problems like training projected entangled pair states. We provide a unified, dispatchable interface for `einsum` as the middleware to port tensor contraction libraries and tensor network applications, and thorough support for back-propagation through linear algebra functions. Based on these supporting libraries, we show some tensor network algorithms as examples.
References:
* Differentiable Programming Tensor Networks, Hai-Jun Liao, Jin-Guo Liu, Lei Wang, and Tao Xiang [under preparation]
* Unsupervised Generative Modeling Using Matrix Product States, Zhao-Yu Han, Jun Wang, Heng Fan, Lei Wang, and Pan Zhang Phys. Rev. X 8, 031012
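The building block that `einsum` generalizes, a pairwise contraction over a shared index, can be sketched in a few lines (illustrative only, not the interface discussed in the talk):

```julia
# A pairwise tensor contraction: sum over the shared index j of
# A[i,j] and B[j,k]. `einsum` generalizes this to arbitrary index
# patterns over arbitrarily many tensors. (Illustrative sketch only.)
function contract(A::Matrix, B::Matrix)
    size(A, 2) == size(B, 1) || error("contracted dimensions must match")
    return [sum(A[i, j] * B[j, k] for j in axes(A, 2))
            for i in axes(A, 1), k in axes(B, 2)]
end

A = [1.0 2.0; 3.0 4.0]
B = [1.0 0.0; 0.0 1.0]
contract(A, B) == A * B  # true: matrix product is the simplest contraction
```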
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/XH89CV/
Room 349
JinGuo Liu
PUBLISH
D7NVW8@@pretalx.com
-D7NVW8
JuliaCN: A community driven localization group for Julia in China
en
en
20190725T115000
20190725T120000
0.01000
JuliaCN: A community driven localization group for Julia in China
## Introduction
[JuliaCN](https://github.com/JuliaCN) was founded when Julia was still newly born. Its early members included several developers in the community, e.g. Jiahao Chen and Yichao Yu. We provided the first translation of Julia's documentation in v0.3, known as [julia_zh_cn](https://github.com/JuliaCN/julia_zh_cn). Later, we set up [the Chinese discourse](https://discourse.juliacn.com/) to let non-English speakers communicate freely and get support when they run into trouble.
## Activities
- Annual Julia User Meetup in China: This is more like a small workshop held each year.
- Holding and providing resource for Julia User Meetups
- Community driven translation project: we built a transifex based translation project [JuliaZH.jl](https://github.com/JuliaCN/JuliaZH.jl)
### Project Emerged in JuliaCN Community
- [Jetbrain IDE for Julia](https://plugins.jetbrains.com/plugin/10413-julia)
- [Py2Jl.jl](https://github.com/JuliaCN/Py2Jl.jl)
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/D7NVW8/
Room 349
Roger Luo
PUBLISH
J779CB@@pretalx.com
-J779CB
Writing maintainable Julia code
en
en
20190725T143000
20190725T150000
0.03000
Writing maintainable Julia code
Have you ever updated an algorithm's code only to find that the algorithm has changed in unexpected ways? Have you ever started coding an algorithm only to realize that you had already done something extremely similar for a different algorithm? These problems often arise because code is not broken down into clear conceptual components that are independent from one another and easy to reuse.
To illustrate how this can be accomplished we will walk through the implementation of three highly similar iterative eigenvalue/eigenvector algorithms. It will be shown that each of the three iterative algorithms consists of the same main conceptual components which will be extracted into a single method used by all three iterative algorithms.
The result will be code that is easy to follow at a high level, because breaking it into high-level conceptual components makes it easier to read. Changes will also be easier to make, because finding the correct area of the code to change will be simpler, and there will be less risk of unexpected side effects because the other components remain independent. Finally, we will discuss the performance implications of introducing more abstractions, and why, if the abstractions are kept at the higher levels of the algorithm's code, it is highly unlikely that performance will be significantly impacted.
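The extraction idea can be sketched as follows (a hypothetical example, not the talk's actual eigenvalue algorithms): several algorithms share one generic iteration driver and differ only in the update step they pass in.

```julia
# One generic iteration driver shared by several algorithms.
# Each algorithm supplies only its own update step; the convergence
# logic lives in exactly one place. (Hypothetical example.)
function iterate_until(update, x0; tol = 1e-10, maxiter = 10_000)
    x = x0
    for _ in 1:maxiter
        xnew = update(x)
        abs(xnew - x) < tol && return xnew
        x = xnew
    end
    return x
end

# Two concrete "algorithms" reusing the same driver:
sqrt_newton(a) = iterate_until(x -> (x + a / x) / 2, a / 2 + 1)  # Newton's method
fixed_cosine() = iterate_until(cos, 1.0)                          # solves cos(x) = x

sqrt_newton(2.0)  # ≈ 1.41421356...
```

A change to the convergence criterion now happens once, in the driver, instead of being re-implemented (and possibly diverging) in each algorithm.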
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/J779CB/
Room 349
Scott Haney
PUBLISH
WFVWES@@pretalx.com
-WFVWES
How We Wrote a Textbook using Julia
en
en
20190725T150000
20190725T153000
0.03000
How We Wrote a Textbook using Julia
My coauthor (Prof. Mykel Kochenderfer) and I wrote a full-length, fully-featured academic textbook. It is full of beautiful Julia-generated figures and typeset Julia code snippets. The source consists entirely of version-controlled .tex files, and everything else is generated on compile. This talk covers how we did it and why you might want to do it too.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/WFVWES/
Room 349
Tim Wheeler
PUBLISH
MA9N8R@@pretalx.com
-MA9N8R
Turing: Probabilistic Programming in Julia
en
en
20190725T154500
20190725T161500
0.03000
Turing: Probabilistic Programming in Julia
During the course of the talk, I will introduce Turing's modeling syntax, some typical workflows, and examples of how Turing integrates with notable Julia packages such as Flux.jl or DifferentialEquations.jl. Additionally, I hope to present the status on some of the Julia Summer of Code projects, which may include Variational Inference or greater Gaussian processes integration with Stheno.jl, among others.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/MA9N8R/
Room 349
Cameron Pfiffer
PUBLISH
GEJT9H@@pretalx.com
-GEJT9H
Gaussian Process Probabilistic Programming with Stheno.jl
en
en
20190725T161500
20190725T164500
0.03000
Gaussian Process Probabilistic Programming with Stheno.jl
Gaussian processes (GPs) are probabilistic models for nonlinear functions that are flexible, easy to interpret, and enable the modeller to straightforwardly encode high-level assumptions about the properties of the function in question. In short, they're a really useful component of the probabilistic modelling tool box.
Implementations to date have not made it possible to fully exploit the interpretability of GPs, making it harder than necessary to encode prior knowledge and interpret results. Based on the ideas in our recently proposed GP probabilistic programming framework, we have developed Stheno.jl to provide an implementation that is straightforward to use for anyone interested in applying GPs to their problem, while remaining hackable for experts and researchers.
This talk will provide an intuitive introduction to GPs using Stheno.jl. We'll then show how Stheno.jl can be used to solve extensions of a classical non-linear regression problem and explore structure in the solution, how it can be used in conjunction with Turing.jl to embed GPs as a component in a larger non-Gaussian probabilistic programme, and how it can be combined with Flux.jl to harness the complementary strengths of deep learning and probabilistic modelling.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/GEJT9H/
Room 349
Will Tebbutt
PUBLISH
GBWC9F@@pretalx.com
-GBWC9F
Soss.jl: Probabilistic Metaprogramming in Julia
en
en
20190725T164500
20190725T171500
0.03000
Soss.jl: Probabilistic Metaprogramming in Julia
Probabilistic programming is sometimes referred to as “modeling for hackers”, and has recently been picking up steam with a flurry of releases including Stan, PyMC3, Edward, Pyro, and TensorFlow Probability.
As these and similar systems have improved in performance and usability, they have unfortunately also become more complex and difficult to contribute to. This is related to a more general phenomenon of the “two language problem”, in which performance-critical domains like scientific computing involve both a high-level language for users and a high-performance language for developers to implement algorithms. This establishes a kind of wall between the two groups, and has a harmful effect on performance, productivity, and pedagogy.
In probabilistic programming, this effect is even stronger, and it’s increasingly common to see three languages: one for writing models, a second for data manipulation, model assessment, etc., and a third for implementing inference algorithms.
Solving this “three-language problem” usually means accepting either lower performance or a restricted class of available models and inference algorithms.
It doesn’t have to be this way. The Julia language supports Python-level coding with C-level performance. In Julia, code is “first-class”: it can be pulled apart and manipulated as a data structure. This leads to an approach for high-level representation of models, with transformations and optimizations specific to a given model or inference family.
This is the approach taken in Soss, a small and extensible Julia library that provides a way to represent and manipulate probabilistic models. In this talk, we’ll discuss the need for Soss, some of its concepts at a high level, and finally some recent advancements and upcoming opportunities.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/GBWC9F/
Room 349
Chad Scherrer
PUBLISH
GVYCEL@@pretalx.com
-GVYCEL
Gen: a general-purpose probabilistic programming system with programmable inference built on Julia
en
en
20190725T171500
20190725T172500
0.01000
Gen: a general-purpose probabilistic programming system with programmable inference built on Julia
Probabilistic modeling and inference are central to many fields. Probabilistic programming systems aim to make probabilistic modeling and inference techniques accessible to a broader audience, and to make it easier for experts in these techniques to develop more complex applications. A key challenge for wider adoption of probabilistic programming languages is designing systems that are both flexible and performant. This talk introduces Gen, a new probabilistic programming system, built on top of Julia, that includes novel language constructs for modeling and for end-user customization and optimization of inference. Gen makes it practical to write probabilistic programs that solve problems from multiple fields. Gen programs can combine generative models written in Julia, neural networks written in TensorFlow, and custom inference algorithms based on an extensible library of Monte Carlo and numerical optimization techniques. The talk will also present techniques that enable Gen’s combination of flexibility and performance: (i) the generative function interface, an abstraction for encapsulating probabilistic and/or differentiable computations; (ii) domain-specific languages with custom compilers that strike different flexibility/performance tradeoffs; (iii) combinators that encode common patterns of conditional independence and repeated computation, enabling speedups from caching; and (iv) a standard inference library that supports custom proposal distributions also written as programs in Gen. This talk shows that Gen outperforms state-of-the-art probabilistic programming systems, sometimes by multiple orders of magnitude, on problems such as nonlinear state-space modeling, structure learning for real-world time series data, robust regression, and 3D body pose estimation from depth images.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/GVYCEL/
Room 349
Marco Cusumano-Towner
PUBLISH
BD9EBF@@pretalx.com
-BD9EBF
A probabilistic programming language for switching Kalman filters
en
en
20190725T172500
20190725T173500
0.01000
A probabilistic programming language for switching Kalman filters
At R2 Inc., we monitor in real-time tens of thousands of sensors in industrial plants, looking for anomalous behaviour. Industrial processes are usually very well understood and predictable, making them ideal candidates for classical Bayesian analysis. Off-the-shelf modelling approaches like Stan offer great flexibility in formulating a model, but the Markov Chain Monte Carlo algorithms they use are slow on large datasets, which hinders model iteration. Kalman filters can provide much faster, exact posterior probabilities, by assuming that the uncertainties are Gaussian and that the hidden state evolution is linear.
We have built a probabilistic programming language around the switching Kalman filter algorithm. It was implemented as a Julia macro, and supports Unitful time series with missing data. I will present its design, implementation and some of its applications.
PUBLIC
CONFIRMED
Lightning Talk
https://pretalx.com/juliacon2019/talk/BD9EBF/
Room 349
Cédric St-Jean-Leblanc
PUBLISH
JUZUDM@@pretalx.com
-JUZUDM
Keynote: Professor Heather Miller
en
en
20190725T084000
20190725T092500
0.04500
Keynote: Professor Heather Miller
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/JUZUDM/
NS Room 130
Professor Heather Miller
PUBLISH
ZFQHDS@@pretalx.com
-ZFQHDS
What's Bad About Julia
en
en
20190725T093000
20190725T100000
0.03000
What's Bad About Julia
As everyone knows, the problem with Julia is that it uses 1-based indexing! Well, of course that's not it, but are there any *real* problems? While there are any number of readily apparent issues involving performance, missing functionality, and so on, those are being fixed every day. Subtler issues lurk beneath the surface. In this talk I will present some of the trickier, more hidden, and more fundamental problems involving the type system, object model, and core language generally --- things that keep me up at night. For example, did you know that Julia's types are not closed under intersection, but that it would be nice if they were? The good news is that these are all things I believe we can solve --- in time --- to make Julia a meaningfully better language.
PUBLIC
CONFIRMED
Talk
https://pretalx.com/juliacon2019/talk/ZFQHDS/
NS Room 130
Jeff Bezanson
PUBLISH
PK738L@@pretalx.com
-PK738L
Sponsor Address: University of Maryland
en
en
20190725T100000
20190725T100500
0.00500
Sponsor Address: University of Maryland
PUBLIC
CONFIRMED
Sponsor's Address
https://pretalx.com/juliacon2019/talk/PK738L/
NS Room 130
Vijay Ivaturi
PUBLISH
SSNXAP@@pretalx.com
-SSNXAP
Keynote: Dr Steven Lee
en
en
20190725T133000
20190725T141500
0.04500
Keynote: Dr Steven Lee
PUBLIC
CONFIRMED
Keynote
https://pretalx.com/juliacon2019/talk/SSNXAP/
NS Room 130
Dr Steven Lee
PUBLISH
37VY3Q@@pretalx.com
-37VY3Q
Performant parallelism with productivity and portability.
en
en
20190725T110000
20190725T120000
1.00000
Performant parallelism with productivity and portability.
This BoF is motivated in part by interesting new application development efforts that combine high-performance, scientific-computing-style parallelism with large-scale machine learning, optimization, and statistical methods. The BoF will focus on current and future directions for language- and library-level abstractions for large distributed parallel applications in Julia. Some projects such as [Celeste](https://juliacomputing.com/case-studies/celeste.html) and [CLiMA](https://clima.caltech.edu) are employing Julia for work that scales to tens of thousands or more parallel cores. The BoF will be a forum for starting conversations and brainstorming how Julia tools for distributed parallelism might evolve in the coming years to support both portable, high-performance parallelism and productive interactivity in a relatively unified way. The BoF will include short presentations from current participants in large-scale parallel scientific computing efforts using Julia; it will also include short presentations from relevant tool developers, as well as open discussion time for questions and answers. We anticipate covering a few interconnected themes and goals.
**Blending in new ideas**. In the 30 years since the emergence of the Message Passing Interface (MPI) parallel library standard for scientific computing, many projects have explored alternative parallelism paradigms. These alternatives, which include UPC, X10, Coarray Fortran, Chapel, Hadoop, Spark, OCCA, and many others, have introduced new ideas, but in scientific computing none have gained the broad practical traction of MPI. One topic of interest for this BoF is where the Julia language, library, and metaprogramming ecosystem can usefully blend in some of the parallel programming ideas that have emerged from academic projects over the last 30 years.
**Enabling interactivity front and center in large-scale, high-performance parallelism**. Are there ways Julia might help bring some flavor of productive interactivity to the sorts of applications that have typically leveraged MPI for portable performance? Some debugging tools from other languages, such as [ddt](https://developer.nvidia.com/allinea-ddt) and [totalview](https://www.roguewave.com/products-services/totalview), may provide models for how to interact explicitly with multiple concurrent streams of computation in a large parallel application with something of a REPL feel. Julia's distributed arrays, Channels, and cluster manager abstractions provide more implicit ways of interacting with parallel computation streams. Using existing components it is already possible to create elaborate applications that mix high-performance parallel simulation and distributed machine learning, for example, in a single workflow. This sort of workflow is emerging in many fields, from plasma science to economics. However, current programming approaches can result in a somewhat complex mix of abstractions that are not always amenable to flexible and agile exploration in an interactive environment of the sort that is Julia's hallmark.
**Catalyzing activities to discover the right APIs**. The BoF will aim to catalyze conversations and energize projects looking at the next generation of Julia ecosystem tools to support high-performance, parallel scientific computing. In this realm, many of the key abstractions/APIs are likely to be discovered somewhat organically as much as designed. The BoF seeks to be inclusive of all Julia projects that may have an interest in these areas. The organizers welcome anyone interested in presenting a slide at the BoF to [contact us](mailto:christophernhill+juliacon2019@gmail.com). We plan to gather the BoF material into a post-meeting document that will be openly available.
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/37VY3Q/
BoF: Room 353
Chris Hill
Alan Edelman
Lucas Wilcox
Andreas Noack
Valentin Churavy
PUBLISH
7RPM7G@@pretalx.com
-7RPM7G
Julia in Healthcare
en
en
20190725T143000
20190725T153000
1.00000
Julia in Healthcare
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/7RPM7G/
BoF: Room 353
Vijay Ivaturi
PUBLISH
GMRBWB@@pretalx.com
-GMRBWB
Package Management BoF
en
en
20190725T154500
20190725T164500
1.00000
Package Management BoF
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/GMRBWB/
BoF: Room 353
Stefan Karpinski
PUBLISH
MC8TPZ@@pretalx.com
-MC8TPZ
Julia in Astronomy
en
en
20190725T164500
20190725T173500
0.05000
Julia in Astronomy
PUBLIC
CONFIRMED
Birds of Feather
https://pretalx.com/juliacon2019/talk/MC8TPZ/
BoF: Room 353
Mosè Giordano
PUBLISH
YX9NNS@@pretalx.com
-YX9NNS
Breakfast
en
en
20190725T073000
20190725T083000
1.00000
Breakfast
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/YX9NNS/
Other
PUBLISH
GYFPM7@@pretalx.com
-GYFPM7
Poster Session
en
en
20190725T101000
20190725T110000
0.05000
Poster Session
Posters:
- "AMLET meets Julia" by Bastin, Fabian
- "JuliaDB: solving the two language problem in analytical databases" by Shashi Gowda
- "Mapping the Logic Item Domain with Julia" by Francis Smart
- "OrthogonalPolynomials.jl - Optimal evaluation of orthogonal polynomials in ~ 100 lines of Julia" by Miguel Raz Guzmán Macedo
- "Oceananigans.jl: fast, friendly, architecture-agnostic, high-performance ocean modeling" by Ali Ramadhan
- "Parallel Scenario Decomposition of Stochastic Programming" by Kibaek Kim
- "ARCH Models in Julia" by Simon Broda
- "Myths, Legends, and Other Amazing Adventures in CSV Parsing" by Jacob Quinn
- "Occasionally Binding Constraints in DSGE Models in Julia" by Vivaldo Mendes
- "Time Series Analysis and Forecasting with Julia: Nonlinear Autoregressive vs Machine Learning Models" by Diana A. Mendes
- "Julia applied in the Factory of the Future" by Thijs Van Hauwermeiren
- "NLPeterman.jl: Language processing from the ground up in Julia" by Mihir Paradkar
- "Materials.jl for crystal plasticity" by Ivan Yashchuk
- "Markov Chain-Monte Carlo Methods for Linear Algebra using Julia v.1.0" by Oscar A. Esquivel-Flores
- "Modelling environment for JuliaOpt" by Manuel Marin
- "BuyLibre - Cooperative Libre Software Cost-Sharing" by Clark C. Evans
- "StateSpaceModels.jl -- A Julia package for time-series analysis in a state-space framework" by Raphael Saavedra
- "TileDB: a data management solution tailored for data scientists" by Jake Bolewski
- "Growing Machine Learning Solutions on Kubernetes with Julia" by Patrick Barker
- "Econometrics.jl for econometric analysis in Julia" by José Bayoán Santiago Calderón
- "Bioequivalence Analysis in Julia using Bioequivalence.jl" by José Bayoán Santiago Calderón
- "Tackling Stochastic Delay Problems in Julia" by Henrik Sykora
- "Julia Web Apps with Mux.jl + Heroku" by Josh Day
- "A New Approach of Genre-Based Similarity for User-UserCollaborative Filtering Recommender System" by Mahamudul Hasan
- "Real Time Mapping of Epidemic Spread, predict with SIR, Neural Network in Julia" by Rahul Kulkarni
### Info for presenters
(This matches the email you've all been sent)
We will have two poster sessions during the morning coffee breaks on Wednesday,
July 24 and Thursday, July 25 from 10:10AM to 11AM in room 349.
Please hang your poster in room 349 by 8:30AM on Wednesday and plan to present during both sessions.
If you are unable to present on both days, please let us know.
The poster boards provided will be landscape (3 feet by 4 feet).
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/GYFPM7/
Other
PUBLISH
XPVUEX@@pretalx.com
-XPVUEX
Lunch
en
en
20190725T120000
20190725T131500
1.01500
Lunch
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/XPVUEX/
Other
PUBLISH
NS3ZQK@@pretalx.com
-NS3ZQK
Short break
en
en
20190725T153000
20190725T154500
0.01500
Short break
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/NS3ZQK/
Other
PUBLISH
3JFX8K@@pretalx.com
-3JFX8K
Hackathon
en
en
20190726T083000
20190726T163000
8.00000
Hackathon
PUBLIC
CONFIRMED
Workshop (full day)
https://pretalx.com/juliacon2019/talk/3JFX8K/
Elm A
PUBLISH
KLDTNJ@@pretalx.com
-KLDTNJ
Lunch
en
en
20190726T120000
20190726T131500
1.01500
Lunch
PUBLIC
CONFIRMED
Break
https://pretalx.com/juliacon2019/talk/KLDTNJ/
Other