Breakfast will be provided in the PH Galley
This tutorial targets both new and moderately experienced Julia users. After covering the basics and tools for data science, we will delve into topics such as memory management, type stability, and profiling.
This is a workshop aimed at people who already know basic Julia usage and wish to explore some more advanced topics that make Julia special, namely defining and using types, and metaprogramming.
Demystify machine learning buzzwords by learning how to train and use your own neural network in this interactive workshop. We'll cover the foundational principles that underpin modern machine learning and demonstrate how Julia makes it easy and fast.
This workshop is for both experienced DifferentialEquations.jl users and newcomers. The first hour of the workshop will introduce the user to DifferentialEquations.jl, describing the basic workflow and the special features which are designed to make solving hard equations (automatic sparsity detection, Jacobian coloring, polyalgorithms, etc.) easy. After the introduction, the workshop will break out into groups to work on exercises, where the developers of the library's components will be available for any questions. Some of the exercises are designed for beginners to learn how to solve differential equations and fit models to data, while others are for experienced users to learn the newest performance-enhancement features and upgrade to GPU-accelerated workflows.
Lunch will be held in the SMC 2nd Floor Lobby
A case-study based tutorial on working with tabular data using the DataFrames.jl package.
Parallel computing is hard. Julia can make it much easier. In this workshop, we discuss modern trends in high performance computing, how they’ve converged towards multiple types of parallelism, and how to most effectively use these different types in Julia.
Pharmacometrics is commonly used to optimize drug doses and pre-screen drugs before clinical trials. In this workshop, users familiar with Julia will learn about pharmacometrics and how to perform the model simulations, while pharmacometricians will learn how to use Julia to build the models they know from their field. The focus will be on simulating bioequivalence studies with Bioequivalence.jl, performing nonlinear mixed-effects modeling (NLME) simulation and estimation with Pumas.jl, and non-compartmental analysis (NCA) with the PumasNCA submodule.
In this workshop, we will go through all the necessary steps to create a Julia package. The goal is for attendees to leave well prepared to get started with package writing in Julia.
Breakfast will be held in the SMC 2nd Floor Lobby
Welcome to JuliaCon!
This opening session will let you know all the details of what is going on, and will cover important information, including what to do in case of emergencies.
Madeleine Udell is Assistant Professor of Operations Research and Information Engineering
and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University.
She studies optimization and machine learning for large scale data analysis and control,
with applications in marketing, demographic modeling, medical informatics,
engineering system design, and automated machine learning.
Her research in optimization centers on detecting and exploiting novel structures
in optimization problems, with a particular focus on convex and low rank problems.
These structures lead the way to automatic proofs of optimality, better complexity guarantees, and faster,
more memory-efficient algorithms. She has developed a number of open source libraries for
modeling and solving optimization problems, including Convex.jl,
one of the top tools in the Julia language for technical computing.
We present a Julia debugger and demonstrate a variety of interfaces for accessing it. We also describe the infrastructure that provides intriguing new capabilities to the Julia ecosystem.
An address from one of our sponsors.
A presentation on the results of the 2019 Julia Survey
A talk from one of our gracious sponsors.
A break for coffee
A lot of people are building tooling for differential equation based models in Julia for various domains: DifferentialEquations.jl, DynamicalSystems.jl, PuMaS.jl, Modia.jl, QuantumOptics.jl, and the list goes on. The purpose of this BoF is to gather the developers who are interested in this topic to learn about the priorities and gripes within the community and plan for the next developments.
We present ITensors.jl, a ground-up rewrite of the C++ ITensor package for tensor network simulations in Julia. We will motivate the use of tensor networks in physics and give some examples for how ITensors.jl can help make the use and development of tensor network algorithms easier for researchers, users, and developers.
One of the major features of Julia's new package manager is package environments. This presentation will explain how environments work, what they are useful for and how to use them effectively.
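As a taste of the workflow, here is a minimal sketch of creating and activating a project-specific environment with the Pkg API (the temporary directory here is just for illustration):

```julia
using Pkg

# Create and activate a fresh environment in a temporary directory;
# its Project.toml and Manifest.toml record the exact dependency set.
env = mktempdir()
Pkg.activate(env)

# Packages added now are recorded in this environment only,
# leaving the default (global) environment untouched, e.g.:
# Pkg.add("Example")

# The active project file lives inside the environment directory.
println(Base.active_project())
```

Anything added with `Pkg.add` afterwards is written to this environment's Project.toml and Manifest.toml, making the dependency set reproducible and isolated.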
If you like using serious scientific tools to do silly things, then this talk is for you. Join me as I explore the intersection of computational linguistics, algorithm design, and machine learning in an effort to seriously overthink cryptic crossword clues.
Pairwise learning is a machine learning paradigm where the goal is to predict properties of pairs of objects. Applications include recommender systems, such as used by Amazon, molecular network inference and ecological interaction prediction. Kronecker-based learning systems provide a simple, yet elegant method to learn from such pairs. Using tricks from linear algebra, these models can be trained, tuned and validated on large datasets. The Julia package Kronecker.jl aggregates these tricks, such that it is easy to build such learning systems.
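The core linear-algebra trick behind such Kronecker-based systems is the identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which avoids ever materializing the Kronecker product. A small base-Julia illustration of the idea (not the Kronecker.jl API itself):

```julia
using LinearAlgebra

# The "vec trick": (A ⊗ B) * vec(X) == vec(B * X * transpose(A)).
# The right-hand side never forms the large Kronecker product.
A = [1.0 2.0; 3.0 4.0]
B = [0.0 1.0; 1.0 0.0]
X = [2.0 0.0; 0.0 5.0]

dense   = kron(A, B) * vec(X)        # naive: builds the full Kronecker matrix
tricked = vec(B * X * transpose(A))  # trick: two small matrix multiplications

println(dense ≈ tricked)  # true
```

For an m×n ⊗ p×q system, the trick turns an O((mn)·(pq)) product into two small multiplications, which is what makes training on large pairwise datasets feasible.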
DoubleFloats.jl offers performant types, Double64 and Double32, with twice the precision of Float64 and Float32. Attendees will gain a working knowledge of how to apply the package in support of more reliably accurate results.
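The principle behind double-word types like Double64 is to carry a value as an unevaluated sum of two floats, built from error-free transformations. A minimal base-Julia sketch of Knuth's two-sum (illustrative only; this is not DoubleFloats.jl's implementation):

```julia
# Error-free transformation: a + b == hi + lo exactly, with hi = fl(a + b).
# This "two-sum" is the building block behind double-word types like Double64.
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

hi, lo = two_sum(1.0, 1e-17)   # 1e-17 is lost in ordinary Float64 addition
println((hi, lo))              # hi rounds to 1.0; lo recovers the lost 1e-17
```

Chaining such transformations through every arithmetic operation is what gives Double64 roughly twice the significand precision of Float64.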
Have you ever found yourself writing code that special cases different local and remote filesystems?
FilePath types are a great way to encapsulate filesystem specific logic and provide a common abstraction for interacting with various types of paths (e.g., posix, windows, S3, FTP).
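The pattern can be sketched in a few lines: a common abstract type with filesystem-specific subtypes, so generic code dispatches on the abstraction. The names below are hypothetical, not the actual FilePathsBase API:

```julia
# Hypothetical sketch of the FilePath pattern: one abstract type,
# with filesystem-specific logic isolated in concrete subtypes.
abstract type AbstractPath end

struct PosixPath <: AbstractPath
    segments::Vector{String}
end

struct WindowsPath <: AbstractPath
    drive::String
    segments::Vector{String}
end

# Generic code works against the abstraction...
filename(p::AbstractPath) = last(p.segments)

# ...while rendering stays filesystem-specific.
Base.string(p::PosixPath)   = "/" * join(p.segments, "/")
Base.string(p::WindowsPath) = p.drive * "\\" * join(p.segments, "\\")

p = PosixPath(["home", "user", "data.csv"])
w = WindowsPath("C:", ["Users", "me", "data.csv"])
println(string(p))    # /home/user/data.csv
println(filename(w))  # data.csv
```

The same shape extends naturally to remote backends (S3, FTP): code written against `AbstractPath` never needs to special-case them.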
A hypergraph is a generalization of a graph where a single edge can connect more than two vertices. Typical applications are related to social data analysis and include situations such as sending a single email to several recipients, a customer giving reviews to several restaurants, or analyzing security vulnerabilities of information networks. In many situations, using a hypergraph rather than a classical graph allows us to better capture and analyze dependencies within the network.
We will start by presenting the library and its functionality. As an example, a use case analyzing Yelp reviews will be shown. The presentation will be based on a Jupyter notebook and should be illustrative for researchers planning to do social network modelling in Julia.
In the second part of the presentation, we will show how we made use of typical Julia programming patterns to build the library. This includes overloading Array operators to give users Array-like access to the hypergraph data, using object composition as a standard inheritance mechanism for generating various representations (views) of a hypergraph, and finally, making the hypergraph data structures compatible with LightGraphs.jl by providing new method implementations. This should give participants an overview of typical patterns used when extending the package ecosystem of the Julia language.
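For illustration, the Array-operator-overloading pattern mentioned above might look like the following hypothetical sketch (not the package's actual implementation):

```julia
# Hypothetical sketch of Array-like access to a hypergraph: rows are
# vertices, columns are hyperedges, and a non-nothing entry means
# "vertex belongs to this edge" (with a weight).
struct Hypergraph{T}
    incidence::Matrix{Union{T, Nothing}}
end

Hypergraph{T}(nv::Int, ne::Int) where {T} =
    Hypergraph{T}(Matrix{Union{T, Nothing}}(nothing, nv, ne))

# Overloading Array operators gives users familiar h[v, e] access.
Base.getindex(h::Hypergraph, v::Int, e::Int) = h.incidence[v, e]
Base.setindex!(h::Hypergraph, w, v::Int, e::Int) = (h.incidence[v, e] = w)

h = Hypergraph{Float64}(3, 2)
h[1, 1] = 1.0   # vertex 1 participates in hyperedge 1 with weight 1.0
h[2, 1] = 0.5
h[3, 2] = 2.0

# Vertices belonging to hyperedge 1:
members(h::Hypergraph, e::Int) = findall(!isnothing, h.incidence[:, e])
println(members(h, 1))   # [1, 2]
```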
Acknowledgement: The project is financed by the Polish National Agency for Academic Exchange.
Hear about what is new in Julia 1.2 and 1.3 with thread-based parallelism.
(note: due to a race condition, Thread Based Parallelism part 2 occurs before Thread Based Parallelism part 1)
Ultimate datetime is a datetime data type, which eliminates many of the limitations and inaccuracies of the datetime datatypes generally employed in computer languages. Ultimate datetime enables representation of datetimes from the Big Bang through to the year 100,000,000,000 with attosecond precision, while properly handling leap seconds, the full range of time zones, and accounting for precision and uncertainty.
This talk demonstrates Recommendation.jl, a Julia package for building recommender systems. We will cover (1) a brief overview of common recommendation techniques, (2) advantages and use cases of their Julia implementation, and (3) the design principles behind this easy-to-use, extensible package.
A hardware and software scale model of a smart house built around a Raspberry Pi. It has several functions that could be transferred to a full-scale model using the same hardware.
Hear about what is new in Julia 1.2 and 1.3 with thread-based parallelism.
(note: due to a race condition, Thread Based Parallelism part 1 occurs after Thread Based Parallelism part 2)
Lunch will be held in the SMC 2nd Floor Lobby
Ted Rieger received his PhD in Chemical Engineering from Northwestern, where he developed models of protein aggregation in Huntington's disease. After graduate school, he joined Entelos, Inc. in the Bay Area, where he spent six years developing and utilizing Quantitative Systems Pharmacology (QSP) models to understand drug development questions, primarily in the area of cardiometabolic diseases. In 2011, Ted transitioned to Pfizer's Systems Biology Group in its CVMET Research Unit. He has been at Pfizer since then and is now a Senior Principal Scientist in the QSP Group in Early Clinical Development. He presently supports programs in the cardiometabolic space from early discovery through proof-of-concept.
When you design an aircraft or spacecraft, it generally has to work the first time or the consequences are fiery destruction. You simulate a lot. Julia enables not merely a flexible and fast way to write a custom simulation, but in fact an entirely new and powerful breed of simulation architecture.
Documenter compiles docstrings, code snippets, and Markdown pages into HTML or PDF documents and can automatically deploy them as websites, making it easy to create manuals for Julia packages that are immediately available to users. This talk explores what goes into making all of that happen.
JuliaDB is an analytical data framework that offers typed dataframes, parallel processing, and limited out-of-core support. This session gives JuliaDB users and contributors the opportunity to discuss how JuliaDB works for them, tackle issues, and discuss the future of JuliaDB.
We present MLJ, Machine Learning in Julia, a new toolbox for combining and systematically tuning machine learning models.
Literate programming is the practice of interspersing an explanation of the program logic, written in a natural language, with traditional source code. This presentation will describe how the Literate.jl
package can be used for literate programming, and show how to generate multiple outputs, such as Jupyter notebooks or Markdown pages, from the same source file.
Building on our previous contributions for JuliaCon 2018 (see GlobalSearchRegression.jl, GlobalSearchRegressionGUI.jl, and [our JuliaCon 2018 Lightning Talk](https://bit.ly/2UC7dr1)), we developed a new GlobalSearchRegression.jl version merging LASSO and QR-OLS algorithms and including new outcome capabilities. Combining machine learning (ML) and econometric (EC) procedures allows us to deal with a much larger set of potential covariates (e.g., from 30 to hundreds) while preserving most of the original advantages of all-subset regression approaches (in-sample and out-of-sample optimality, model averaging results, and residual tests for coefficient robustness). Additionally, the new version of GlobalSearchRegression.jl allows users to obtain LaTeX and PDF outputs with best-model results, model averaging estimations, and key statistics distributions.
Model and simulate mechanical 3D-systems with hierarchical components, kinematic loops, and collision handling of convex bodies.
Ever wish your code were automatically beautiful? Tired of spacing out commas, wrangling parentheses, and indenting? Julia's formatter can do all this and more! Come find out how to use it in your everyday workflow.
Games have been testbeds for Artificial Intelligence research for a long time. Here I will demonstrate how to play the fantastic Hanabi card game interactively in Julia REPL. Furthermore, I will introduce how to implement some state-of-the-art learning algorithms in pure Julia.
Trajectory optimization is a fundamental tool for controlling robots with complex, nonlinear dynamics. TrajectoryOptimization.jl is devoted to providing a unified testbed for developing, comparing, and deploying algorithms for trajectory optimization.
Navigation and mapping for robots require data fusion from various sensors, each producing uncertain and opportunistic measurement data.
We are continuing a multi-year effort on a native Julia, factor-graph-based simultaneous localization and mapping (SLAM) inference system that grew out of research work on non-Gaussian state estimation and is the primary implementation of the "multimodal-iSAM" algorithm from the robotics literature.
TSML is a package for time series data processing, classification, and prediction. It provides a common API for ML libraries from Python's ScikitLearn, R's caret, and native Julia MLs for seamless integration of heterogeneous libraries to create complex ensembles for robust time-series preprocessing, prediction, clustering, and classification.
A short break between sessions
Julia is home to a growing ecosystem of probabilistic programming languages—but how can we put them to use for practical, everyday tasks? In this talk, we'll discuss our ongoing effort to automate common-sense data cleaning by building a declarative modeling language for messy datasets on top of Gen.
Discussion on how Julia as a community handles money, sponsorship and grants.
We showcase the port to Julia of a massively parallel Multi-GPU solver for spontaneous nonlinear multi-physics flow localization in 3-D. Our contribution is a real-world example of Julia solving "the two language problem".
Delay differential equations (DDEs) are used to model dynamics with inherent time delays in different scientific areas; however, solving them numerically in an efficient way is hard. This talk demonstrates how the DifferentialEquations ecosystem makes it possible to solve even complicated DDEs with a variety of different numerical algorithms.
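To see why DDEs are harder than ODEs, consider the method of steps: the solver must look up delayed values from stored history. A self-contained fixed-step Euler sketch (illustrative only; DifferentialEquations.jl's MethodOfSteps wraps full ODE integrators with interpolated history):

```julia
# Method-of-steps sketch for u'(t) = -u(t - τ) with history u(t) = 1 for t ≤ 0.
function solve_dde(τ, tmax, dt)
    n   = round(Int, tmax / dt)
    lag = round(Int, τ / dt)          # delay measured in steps
    u   = Vector{Float64}(undef, n + 1)
    u[1] = 1.0                        # u at t = 0
    for i in 1:n
        # The delayed value comes from stored history, or from the
        # initial history function when t - τ falls before t = 0.
        u_delayed = i - lag >= 1 ? u[i - lag] : 1.0
        u[i + 1] = u[i] - dt * u_delayed   # Euler step
    end
    return u
end

u = solve_dde(1.0, 2.0, 0.01)
println(u[end])   # ≈ -0.5, the exact value of u(2) for this problem
```

On [0, 1] the exact solution is u(t) = 1 − t, and integrating the delayed term over [1, 2] gives u(2) = −1/2, which the sketch approaches as dt shrinks.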
Introducing LightQuery.jl, a new querying package which combines performance with flexibility.
Operating a power system on a day to day basis involves optimizing the operation of the given energy system. Modeling these operations requires solving a Mixed Integer Linear Programming problem. In this talk, we will present methods for solving a production cost model in Julia and JuMP using PowerSimulations.jl.
The intersection of Machine Learning and High Performance Computing: Running Julia code on Google Cloud Tensor Processing Units.
Chat about Cassette, Vinyl, IRTools, and Arborist: things that rewrite code at compile time, based on context.
Modeling practice seems to be partitioned into scientific models defined by mechanistic differential equations and machine learning models defined by parameterizations of neural networks. While the ability for interpretable mechanistic models to extrapolate from little information is seemingly at odds with the big data "model-free" approach of neural networks, the next step in scientific progress is to utilize these methodologies together in order to emphasize their strengths while mitigating weaknesses. In this talk we will describe four separate ways that we are merging differential equations and deep learning through the power of the DifferentialEquations.jl and Flux.jl libraries. Data-driven hypothesis generation of model structure, automated real-time control of dynamical systems, accelerated PDE solving, and memory-efficient deep learning workflows will all be shown to derive from this common computational structure of differential equations mixed with neural networks. The audience will leave with a new appreciation of how these two disciplines can benefit from one another, and how neural networks can be used for more than just data analysis.
With the release of Julia 1.0, packages have raced to update and stabilize APIs. Come learn about all things current and planned for JuliaData packages, including:
* DataFrames.jl
* CSV.jl
* Tables.jl
* CategoricalArrays.jl
* and others
MLIR is a flexible compiler infrastructure with an open ecosystem of dialects, built for a world of increasingly heterogeneous hardware. With its support for metaprogramming and extensible JIT compiler, Julia is well-positioned as a frontend language for the MLIR stack.
The internet is a powerful medium for storytelling in data science, but creating compelling, interactive graphics can be difficult. This talk will show how Vega (VegaLite.jl) and Julia can be used to prototype interactive visualizations, and then how those visualizations can be deployed to the web.
ChipSort.jl is a sorting package that exploits instruction-level parallelism and cache memory seeking the best performance in any system.
Makie is a new plotting library written 100% in Julia.
It offers a GPU-accelerated drawing backend that can draw huge amounts of data at interactive speeds.
Other backends for SVG, PDF, and the Web are available as well, so Makie can be used in many different scenarios.
This talk will give an overview of how Makie works and will present the most outstanding plotting examples from the areas of Interactivity, Data Science, Geology and Simulations.
Sparse linear operators that arise from structured grids often tend to have rich structure. We present a feature rich yet simple DIAgonal format (DIA), which also supports blocked and GPU arrays, as well as Algebraic Multigrid (AMG) preconditioners. We present this rich framework of tools to solve large oil reservoir simulations.
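The idea behind a diagonal format can be sketched in a few lines of base Julia (hypothetical, not the presented package's API): store each nonzero diagonal densely together with an offset, so a matrix-vector product becomes a few shifted vector operations.

```julia
# Sketch of the DIA idea: store each nonzero diagonal of a banded
# operator as a dense vector plus an offset from the main diagonal.
struct DiaMatrix
    n::Int
    offsets::Vector{Int}              # 0 = main diagonal, ±1 = off-diagonals, ...
    diags::Vector{Vector{Float64}}    # diags[k] has length n - abs(offsets[k])
end

function matvec(A::DiaMatrix, x::Vector{Float64})
    y = zeros(A.n)
    for (off, d) in zip(A.offsets, A.diags)
        for i in eachindex(d)
            row = off >= 0 ? i : i - off   # position of d[i] in the matrix
            col = off >= 0 ? i + off : i
            y[row] += d[i] * x[col]
        end
    end
    return y
end

# 1-D Laplacian stencil [-1, 2, -1] stored as three diagonals:
n = 5
A = DiaMatrix(n, [-1, 0, 1],
              [fill(-1.0, n - 1), fill(2.0, n), fill(-1.0, n - 1)])
println(matvec(A, ones(n)))   # [1.0, 0.0, 0.0, 0.0, 1.0]
```

Because each diagonal is contiguous in memory, this layout vectorizes well and maps naturally onto blocked and GPU arrays.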
We introduce the ArrayChannels.jl library, which allows communication between distributed nodes to occur between fixed buffers in memory. We explore the effects of in-place serialisation on cache usage and communication performance, and consider its suitability for high performance scientific computing.
HydroPowerModels.jl is a Julia/JuMP package for Hydrothermal Multistage Steady-State Power Network Optimization solved by Stochastic Dual Dynamic Programming (SDDP).
The objective of this work is to build an open-source tool for hydro-thermal dispatch that is flexible enough for the electrical sector to test new ideas in an agile and high-level way, while at the same time using state-of-the-art implementations of both the SDDP algorithm and the dispatch model formulations. For this, we take advantage of the Julia language and the open-source packages that implement the power flow of the electrical dispatch and Stochastic Dual Dynamic Programming, namely PowerModels.jl and SDDP.jl.
We will talk about how a risk management use case got sped up ~150x using multi-core parallel computing techniques in a Docker environment.
The ExaSGD (Optimizing Stochastic Grid Dynamics at Exascale) application is part of the Department of Energy's Exascale Computing Project (ECP). The dawn of renewable energies poses a great challenge to long-term planning with higher uncertainties, not only in the grid load, but also in the energy generation. The goal of this project is to provide policy planners and grid operators with cost effective long term planning solutions that are protected against uncertainties in the grid operation. This talk gives an overview of our implementation and where we leverage Julia's unique capabilities to make efficient use of the upcoming exascale hardware, while giving engineers a flexible modeling language.
A conference dinner and cruise are planned on Tuesday evening for all ticketed attendees. Boarding is at 7:00 PM in front of the Baltimore Visitor Center located on the promenade facing Light Street. All participants are expected to bring their conference badges for identification. Boarding will be complete by 7:25 PM and the cruise begins sharp at 7:30 PM. Dinner will be served at 8:00 PM, so feel free to grab a light snack beforehand if that's too late for you. There will be a DJ onboard in case anyone wants to put on their dancing shoes. Further details will be announced on-site.
Breakfast will be held in the SMC 2nd Floor Lobby
Steven G. Johnson is a Professor of Applied Mathematics and Physics at MIT,
where he joined the faculty in 2004 and previously received a PhD in physics (2001)
and BS degrees in physics, mathematics, and computer science (1995).
He has a long history of contributions to scientific computation and software,
including the FFTW fast Fourier transform library (for which he co-received
the 1999 J. H. Wilkinson Prize) and many other software packages.
He has been using, contributing to, and teaching with Julia since 2012.
He created and maintains blockbuster Julia packages that you may have heard of:
PyCall and IJulia
(and Julia’s FFTW bindings, of course).
Professor Johnson's professional research concerns wave-matter interactions
and electromagnetism in media structured on the wavelength scale (“nanophotonics”),
especially in the infrared and optical regimes. He works on many aspects of the theory,
design, and computational modeling of nanophotonic devices, both classical and quantum.
He is also a coauthor on over 200 papers and over 30 patents in this area,
including the textbook Photonic Crystals: Molding the Flow of Light.
An address from our sponsor.
An address from one of our sponsors.
As a product of the academic community, Julia has been developed with certain assumptions relating to source code availability and access. In secure environments, however, access to public (and even private) package repositories can be deliberately limited. It is still possible to use Julia in these environments: this talk will provide an overview of the challenges in deploying Julia in secure/controlled environments and discuss lessons learned from a real-world deployment on a secure system.
The poster session will be held in room 349
Physician scientists conducting clinical trials are typically not statisticians or computer scientists. Perhaps, in a perfect world, they would be, or more realistically could have statisticians and computer scientists on their research team, but that is often not the case. This leads to what we refer to as the “two-field problem.” Physician-researchers require sophisticated and powerful statistical tools to address complex inferential problems, yet these tools must be intuitive and user-friendly enough not to require advanced statistical knowledge and programming skills. Using Julia, we illustrate the application of Bayesian probabilistic biostatistics to meta-analyses of treatment effects and clinical trials. This combination of Julia and Bayesian methods provides a solution to the “two-field problem.”
As the Julia community grows and becomes core tooling for many scientists and businesses, sustainably keeping members of the community working as free software developers is vital to the health of the ecosystem. In this discussion we will talk about the various ways we ourselves are funding, or have been funded for, Julia-based open source software development, and consider alternative methods such as crowdfunding.
Julia allows interfacing with shared libraries using ccall. This allows calling into compiled binaries that could be written in any language that exposes the C ABI. In this talk, I'll describe best practices to follow for interfacing with C libraries.
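For example, calling the C standard library's strlen looks like this:

```julia
# ccall takes the function name, the return type, a tuple of argument
# types, and then the arguments. Cstring handles conversion from a
# Julia String to a NUL-terminated C string.
len = ccall(:strlen, Csize_t, (Cstring,), "JuliaCon")
println(len)   # 8
```

Choosing the right C-compatible types (Cstring, Cint, Csize_t, and friends) rather than raw pointers is one of the best practices the talk covers.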
Quantum computation is the future of computing. However, writing quantum programs can be hard for developers living in a classical world. We developed Yao.jl to help scientists test and explore their quantum ideas in a simple way.
Julia command literals are one of the most compelling abstractions for dealing with processes in any programming language. This talk will show what these command literals offer that similar constructs in other languages do not and how they can be used to write safer, more robust shell scripts.
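A key safety property: command literals construct Cmd objects without invoking a shell, so interpolated values become single arguments rather than being re-parsed. A quick illustration:

```julia
# Command literals build Cmd objects; no shell is involved, so an
# interpolated value with spaces stays one argument (no quoting bugs).
name = "hello world"
cmd = `echo $name`
println(cmd.exec)           # ["echo", "hello world"]
out = read(cmd, String)
println(out)                # hello world
```

Contrast this with shell-based string interpolation in most scripting languages, where `$name` would be split into two arguments (or worse, interpreted).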
Set computations with interval arithmetic allow us to write surprisingly efficient software for guaranteed unconstrained and constrained global optimisation in pure Julia.
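The underlying idea is simple to sketch in plain Julia (illustrative only; IntervalArithmetic.jl additionally uses directed rounding to make the bounds true guarantees):

```julia
# Minimal interval arithmetic sketch: track guaranteed lower/upper bounds
# through each operation.
struct Interval
    lo::Float64
    hi::Float64
end

Base.:+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)

function Base.:*(a::Interval, b::Interval)
    products = (a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi)
    return Interval(minimum(products), maximum(products))
end

# Range enclosure of f(x) = x² + x on [-1, 1]:
x = Interval(-1.0, 1.0)
enclosure = x * x + x
println((enclosure.lo, enclosure.hi))   # (-2.0, 2.0)
```

Note the enclosure is valid but loose: the true range of x² + x on [−1, 1] is [−1/4, 2], since naive `x * x` ignores the dependency between the two factors. Handling exactly this kind of overestimation is part of what makes rigorous global optimisation interesting.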
At MIT’s preclinical setting (Preclinical Modeling, Imaging and Testing, PMIT), the available shared biomedical imaging instrumentation, such as magnetic resonance imaging (MRI) or x-ray micro-computed tomography (microCT) scanners, produces diverse and large data sets on a daily basis. The acquisition of an image can be fast or slow depending on the acquisition protocols and whether we are interested in a 2D slice, a 3D volume or a 4D dataset over time. The time from acquisition to visualization of the image heavily depends on the size of the dataset, the image reconstruction algorithm and the computing power available. Although image acquisition and visualization are typically tied to the manufacturer of each specific platform, image quantification is more user dependent and can suffer a significant computational burden when performing non-linear mathematical operations on a pixel-by-pixel basis over millions of high-resolution images. The quantification of an image, namely the extraction of precise numerical information from the image that is representative of a biological process tied to disease and therapy, can take days to derive for users that choose high-level, easy-to-use numerical analysis software. We will present a case study of vast improvements in quantitative image processing of large preclinical MRI datasets using Julia libraries and expand on PMIT’s efforts to develop a Julia-based platform for intelligent preclinical evaluation of therapeutics from their development at bench to their visualization in a living subject.
I will be talking about my work on brain tumour classification using gene expression data, and how Julia as a tool aided this process.
Pyodide is a project from Mozilla to build a performant scientific Python stack running entirely in the web browser using WebAssembly.
With Julia v1.0 released, it is time to reflect on what a Julian Julia package is, and why some popular packages, such as Optim.jl, are not necessarily as Julian as they could be! Based on requests from the community and my own experience, I explain some guiding principles behind a complete rewrite of the packages in the JuliaNLSolvers organization.
High-fidelity battery modeling requires the estimation of numerous physical parameters in order to properly capture the physics of the electrochemical, thermodynamic, and chemical processes that underlie the system. Using Julia, we sped up the code enough that the model parameters could be estimated with a Markov chain Monte Carlo approach (Hamiltonian Monte Carlo), combined with a high-performance computing cluster, to sample the vast search domain and reach the global error minimum.
Machine learning for data mining applications in imbalanced big data classification is a very challenging task. In this talk, we propose a new cluster-based under-sampling approach with ensemble learning for mining real-life imbalanced big data in Julia.
Julia is increasingly being recognized as one of the big three data science programming languages alongside R and Python. However, Julia’s data ecosystem has had less time to mature when compared to R’s or Python’s. Hence it’s not surprising that some data operations in Julia are slower than their counterparts in R and Python, e.g. group-by.
This talk discusses how under-utilized fast sorting methods, such as radix sort, can be used to speed up group-by operations in Julia so that Julia’s group-by operations can match (or even surpass) the speed of optimized C-based group-by implementations in R and Python.
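The essence of the trick: once group keys are mapped to small integers (exactly what radix sort exploits), a group-by aggregation collapses to a single cache-friendly pass with no comparison sort at all. A hypothetical sketch:

```julia
# Sketch of the idea behind radix/counting-sort group-by: for small
# integer group keys, one O(n) pass replaces a generic sort entirely.
function groupsum(ks::Vector{Int}, vs::Vector{Float64}, ngroups::Int)
    sums = zeros(ngroups)
    @inbounds for i in eachindex(ks)
        sums[ks[i]] += vs[i]    # single linear, cache-friendly pass
    end
    return sums
end

ks = [1, 2, 1, 3, 2, 1]
vs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
println(groupsum(ks, vs, 3))   # [10.0, 7.0, 4.0]
```

Real implementations add a key-recoding step (hashing or radix passes) to map arbitrary keys to this dense integer form, which is where most of the engineering effort in fast group-by lives.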
Lunch will be held in the SMC 2nd Floor Lobby
Arch D. Robison is a Principal Systems Software Engineer at NVIDIA, where he works
on TensorRT, NVIDIA's platform for high-performance
deep-learning inference. He was the lead developer for KAI C++, the original architect of Intel
Threading Building Blocks, and one of the authors of the book Structured Parallel Programming:
Patterns for Efficient Computation. Arch contributed type-based alias analysis and vectorization
support to Julia, including the original implementation of SIMD in Julia 0.3. He's used Julia to generate x86 assembly language for a Go
implementation of his video game Frequon Invaders. He also took 2nd place in Al Zimmermann's contest
"Delacorte Numbers" using Julia exclusively. He has 21 patents and an Erdős number of 3.
DataKnots is a Julia library for querying data with an extensible, practical and coherent algebra of query combinators. DataKnots is designed to let data analysts and other accidental programmers query and analyze complex structured data.
We’ll have a birds of a feather session to discuss and brainstorm diversity and inclusion in the Julia community. All are welcome!
This talk will provide an overview of the Federal Reserve Bank of New York's heterogeneous agent dynamic stochastic general equilibrium (DSGE) model development process in Julia, walking through our navigation of Julia-specific functionality in the process. Comparisons of performance relative to MATLAB and FORTRAN will be provided.
SemanticModels.jl is a library for analyzing scientific and mathematical models written in Julia. We apply techniques from program analysis to understand and manipulate scientific modeling code. This allows you to write programs that write novel models.
OmniSci (formerly MapD) is an open-source relational database built from the ground-up to run on GPUs, providing millisecond query speed on multi-billion row datasets. This talk presents OmniSci.jl, the database client for OmniSci written completely in Julia and a basic demonstration of using OmniSci and Julia together, with the aim of encouraging community collaboration on GPU accelerated analytics.
This talk will give a brief overview of the Queryverse functionality and some new features that were added over the last year, and then dive deep into the internal design of Query.jl, TableTraits.jl and many other packages from the Queryverse.
Medium-large Dynamic Stochastic General Equilibrium models such as those used for forecasting and policy analysis by central banks take a substantial amount of time to estimate using standard approaches such as Random Walk Metropolis Hastings. Our new Sequential Monte Carlo sampler in DSGE.jl makes it possible to estimate DSGE models in parallel, reducing computational time, and “online,” that is efficiently including new data in the estimation as they become available.
A short break between sessions
Explore Flux's brand-new compiler integration, and how this lets us turn anything in the Julia ecosystem into a machine learning model.
A casual chat about the virtues and concerns relating to running Julia in a production environment.
Polynomial and moment optimization problems are infinite dimensional optimization problems that can model a wide range of problems in engineering and statistics. In this minisymposium we show how the Julia and JuMP ecosystems are particularly well suited for the effortless construction of these problems and the development of state-of-the-art solvers for them.
This session aims at discussing/showcasing our experience promoting diversity and inclusion in the US, Brazil, Chile and online, with the help of the Julia Computing Diversity & Inclusion Award, funded by the Sloan Foundation.
RayTracer.jl is a package designed for differentiable rendering. In this talk, I shall discuss the inverse graphics problem and how differentiable rendering can help solve it. Apart from this we will see how differentiable rendering can be used in differentiable programming pipelines along with neural networks to solve classical deep learning problems.
This talk will demonstrate the models described in Neural Ordinary Differential Equations implemented in DiffEqFlux.jl, using DifferentialEquations.jl to solve ODEs with dynamics specified and trained with Flux.jl.
Neural Ordinary Differential Equations (neural ODEs) are a brand-new and exciting method for modeling nonlinear transformations, as they combine the two fields of machine learning and differential equations. In this talk we discuss DiffEqFlux.jl, a package for designing and training neural ODEs, and we introduce new methodologies to improve the efficiency and robustness of neural ODE fitting.
A discussion of the Julia GPU ecosystem
Randomized sketching algorithms are a powerful tool for on-the-fly compression of matrices. In this talk we show how sketching can be used for approximate gradient and Hessian-times-vector computations that are storage-optimal. This approach yields cutting-edge low-memory algorithms that address the challenge of expensive storage in optimization problems with PDE constraints. We also discuss implications for efficient adjoint computation/back-propagation.
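As a hedged illustration of the general idea (this is a generic Gaussian sketch, not the speakers' actual algorithm), a tall matrix can be compressed by a random projection so that Gram products, which appear in gradient and Hessian-times-vector computations, are approximated without storing the full matrix:

```julia
using LinearAlgebra, Random

# Illustrative sketch: compress a tall matrix A with a Gaussian sketch S,
# so products involving A'A can be approximated from the much smaller S*A.
Random.seed!(1)
m, n, k = 1000, 20, 200          # keep k ≪ m rows after sketching
A = randn(m, n)
S = randn(k, m) ./ sqrt(k)       # scaled so that E[S'S] = I
B = S * A                        # the sketch: k × n instead of m × n

# Hessian-times-vector-style terms like A'*(A*v) are then approximated
# without access to A itself:
v = randn(n)
approx   = B' * (B * v)
exact    = A' * (A * v)
rel_err  = norm(approx - exact) / norm(exact)   # small for k large enough
```

The storage saving is the point: only the k-by-n sketch `B` needs to be kept, which is what makes such methods attractive for PDE-constrained problems where storing full states is prohibitive.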
Using Julia and Flux.jl, we want to show how we have applied modern neural architectures like Mask RCNN and Inception to identify diseases and slums in metropolitan cities.
Breakfast will be held in the SMC 2nd Floor Lobby
Heather Miller is an Assistant Professor in the School of Computer Science at Carnegie Mellon, where she is affiliated with the Institute for Software Research. Prior to joining the faculty at CMU, Professor Miller not only worked as a research scientist at EPFL, but also co-founded and served as the Executive Director for the Scala Center, a nonprofit focused on software development, education, and research surrounding the open source Scala programming language. She continues to work on and around Scala, while pursuing research on various flavors of distributed and concurrent computation. Some of her projects underway include programming models and type systems to facilitate the design of new, functional distributed systems.
I'll describe some of the more fundamental issues in Julia today, as I see them, and how we can potentially solve them to get a better language.
An address from one of our sponsors.
The poster session will be held in room 349
We will show how interval constraint propagation can give a guaranteed description of feasible sets defined by nonlinear inequalities, via contractors. This technology can be applied to speed up guaranteed global optimization and root finding.
Julia and JavaScript come together like peanut butter and chocolate
This talk is an overview of the JuliaGizmos ecosystem. It starts with the basics of creating a simple page and showing it in various forms, then moves on to Interact.jl and beyond. I will present work done by many people that has been aggregated in this GitHub niche, mainly that of Mike Innes, Pietro Vertechi, Joel Mason, Travis DePrato, Sebastian Pfitzner, and myself.
This BoF will be a forum to discuss the state of affairs around performant parallelism for distributed memory programming in Julia. Performance, parallelism, productivity, and portability are four P's of distributed memory parallelism that over the last 30 years have proved hard to satisfy simultaneously in a general solution. The goal of this BoF is discussion and exploration of approaches for providing performant distributed memory parallelism in Julia in ways that are portable and that reflect the productivity vision of Julia. The format will consist of a series of presentations and a discussion/Q&A section. It will look both within Julia and across other languages at the last 30 years of efforts in this space. The motivation for the BoF is that meeting the four P's well remains an unsolved problem. For now, projects that seek all of performance, parallelism at scale, portability, and productivity typically have to make compromises in one or more of these areas. The hoped-for outcome is some shared momentum and sharing of ideas for developing Julian approaches that lessen (or eliminate) the need to compromise on any of the four P's in the future.
If you're familiar with Julia and its ecosystem, you may have noticed something lovely but a bit puzzling: there seems to be an unusually large amount of code reuse between packages compared to other seemingly similar languages. This sharing of code comes in two forms:
- Sharing basic types among a wide variety of packages providing disparate functionality;
- Sharing generic algorithms that work on various implementations of common abstractions.
Why does generic code in Julia "just work"? Why do Julia packages seem to share types with so little friction? Both kinds of reuse are supposed to be natural benefits of class-based object-oriented languages. After all, inheritance and encapsulation are two of the four pillars of OOP. Even more puzzling is that Julia has no encapsulation and doesn't allow inheriting from concrete types at all. Yet both kinds of code reuse are rampant. What is going on? In this talk, I make the case that both kinds of sharing stem directly from Julia's multiple dispatch programming paradigm.
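A minimal sketch of the kind of reuse described above (the function and type names here are illustrative, not from the talk): a generic algorithm written once works for any type that implements a small interface, with no inheritance from concrete types involved.

```julia
# Generic algorithm: works for anything supporting iteration and `+`.
total(xs) = foldl(+, xs)

# A user-defined type that opts into Base's iteration interface.
struct Squares
    n::Int
end
Base.iterate(s::Squares, i=1) = i > s.n ? nothing : (i^2, i + 1)
Base.length(s::Squares) = s.n
Base.eltype(::Type{Squares}) = Int

# The generic code "just works" on the new type:
total(Squares(4))  # 1 + 4 + 9 + 16 = 30
total(1:4)         # the same algorithm on a built-in range: 10
```

The key design point is that `Squares` shares no supertype with `UnitRange`; it only implements the same informal iteration protocol, and dispatch does the rest.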
In this talk, an efficient root-finding algorithm is presented through engineering applications that are formulated as implicit nonlinear equation systems.
We present our experience in deploying Julia web servers in production systems. We developed a custom buildpack that facilitates deploying web servers on Heroku. It is built so that any application requires almost no special code to be deployed.
Julia's embrace of multiple dispatch as a key organizing concept provides developers with all the tools they need to simply implement state-machine-based solutions to a wide range of problems. This presentation will explore a series of increasingly complex tasks that can all be addressed using a clever combination of types and multiple dispatch.
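One common form of this pattern (a hypothetical sketch, not code from the talk) represents each state as a singleton type and lets dispatch select the transition logic:

```julia
# Dispatch-driven state machine: each state is a type, and `step`
# dispatches on the current state to choose the transition.
abstract type State end
struct Idle    <: State end
struct Running <: State end
struct Done    <: State end

step(::Idle,    input) = input == :start ? Running() : Idle()
step(::Running, input) = input == :stop  ? Done()    : Running()
step(::Done,    _)     = Done()

# Drive the machine over a sequence of inputs with a fold:
final = foldl(step, [:start, :tick, :stop]; init=Idle())
final isa Done  # true
```

Adding a new state or transition is just another method definition, which is what makes the approach scale to the "increasingly complex tasks" the talk covers.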
Timelineapp.co is an online platform for financial planners. Recently its core compute engine was migrated to the Julia language. In this talk we discuss the reasons for and benefits of this decision.
This talk introduces computational topology algorithms to generate the 2D/3D space partition induced by a collection of 1D/2D/3D geometric objects. Methods and language are those of basic geometric and algebraic topology. Only sparse arrays are used to compute spaces and maps (the chain complex) from dimension zero to three.
A talk about an einsum package, as well as the differentiable tensor network algorithms built on top of it: why we need automatic differentiation of tensor networks, and how to achieve this goal.
The Grassmann.jl package provides tools for doing computations based on multi-linear algebra, differential geometry, and spin groups, using the extended tensor algebra known as Grassmann-Clifford-Hestenes-Taylor geometric algebra. The primary operations are ∧, ∨, ⋅, *, ×, ⋆, ', ~ (the outer, regressive, inner, geometric, and cross products, along with the Hodge star, adjoint, and multivector reversal operations). These operations are truly extensible, with high-dimensional support for up to 62 indices and staged caching/precompilation, where code generation enables the fairly automated task of adding more definitions. The DirectSum.jl multivector parametric type polymorphism is based on tangent bundle vector spaces and conformal projective geometry to make the dispatch highly extensible for many applications. Additionally, interoperability between different sub-algebras is enabled by AbstractTensors.jl, on which the type system is built.
JuliaCN was founded by early Chinese Julia developers to localize Julia for Chinese users. We started by providing a Chinese translation of the Julia documentation, known as JuliaZH.jl/julia_zh_cn.
Lunch will be held in the SMC 2nd Floor Lobby
Steven Lee is an Applied Mathematics Program Manager for Advanced Scientific Computing Research (ASCR) within the Department of Energy (DOE), Office of Science. Most recently, Steven and an organizing committee issued a brochure and workshop report on Scientific Machine Learning: Core Technologies for Artificial Intelligence. He has also been an ASCR Program Manager within the Scientific Discovery through Advanced Computing program (SciDAC-3 Institutes) for the projects FASTMATH (Frameworks, Algorithms and Scalable Technologies for Mathematics) and QUEST (Quantification of Uncertainty for Extreme-Scale Computations). Before joining the DOE, Steven was a computational scientist at Lawrence Livermore National Laboratory and Oak Ridge National Laboratory. He has also been a visiting Assistant Professor in the Department of Mathematics at MIT. He has a Ph.D. in Computer Science (UIUC) and a B.S. in Applied Mathematics (Yale).
"This block will compile away," the comments say. But will it? In this talk we'll see some scenarios where controlling compile-time vs runtime execution is crucial for performance, and we'll discuss some ideas that might make this control easier in Julia.
This session is for gathering the various groups interested in Julia for healthcare purposes. Pharmacometrics, healthcare-focused biological research, and the translation of software to practice will be discussed.
We will present Mimi.jl, a next generation platform for Integrated Assessment Modelling widely used in climate economics research. The talk will outline technical aspects of the platform, as well as its adoption and impact both on research at universities and in the US federal climate regulation process.
How to use abstractions to write code that is easy to follow and change, while also not significantly impacting performance.
The speaker's experience writing a full-length optimization textbook with Julia-generated figures and typeset Julia code, and how it all works.
We are using Julia to develop the first Earth System Model that automatically learns from diverse data sources.
Transducers are composable algorithms that operate on collections of inputs. The concept was first introduced in the Clojure language by Rich Hickey as fully reusable code for mapping, filtering, concatenation, and similar operations that can be modeled as a succession of steps. By this nature, transducers superficially look like the iterators that the majority of programming languages use for a similar purpose. However, the protocol used by transducers is quite different from that of iterators and results in different characteristics: (1) Transducers are driven by a "generalized" foldl function, which can implement a specialized looping strategy that is most friendly to the way the data is laid out in memory for a given collection (e.g., two nested loops for a vector-of-vectors). (2) Some transducers, like Map, Filter, Cat, and Scan, can support parallel execution. Importantly, this is done without rewriting any of the code for those transducers. (3) The code composed by transducers is close to the way code is written manually using raw loops, which seems to result in good machine-code generation. This also means that enabling SIMD with the @simd macro is straightforward. In this talk, I explain the formalism of transducers and discuss the pros and cons for the Julia ecosystem, based on my experience implementing Transducers.jl.
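The core idea can be sketched in plain Julia (this is an illustrative reconstruction of the concept, not the Transducers.jl API, which uses types like `Map` and `Filter` rather than closures): a transducer is a function that transforms one reducing function into another, and composed transducers plug into any fold.

```julia
# A transducer maps a reducing function rf(acc, x) to a new reducing function.
mapping(f)   = rf -> (acc, x) -> rf(acc, f(x))
filtering(p) = rf -> (acc, x) -> p(x) ? rf(acc, x) : acc

# Composition reads left to right: double each element, then keep the evens.
xf = mapping(x -> 2x) ∘ filtering(iseven)

# The composed step function works with any fold, over any collection —
# the collection's own foldl chooses the looping strategy:
foldl(xf(+), 1:5; init = 0)   # 2 + 4 + 6 + 8 + 10 = 30
```

Note that no iterator state is created here: the driving `foldl` owns the loop, which is exactly the inversion of control the abstract describes.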
A short break between sessions.
QSP is a sophisticated and effective way to predict the interaction between drugs and the human body; however, simulating QSP models can take a long time because of the intrinsic stiffness of transient chemical reactions. Here we take a deep look at the efficiency of various stiff ordinary differential equation solvers in the JuliaDiffEq ecosystem applied to QSP models, and use benchmarks to summarize how the ecosystem is progressing and what kinds of advances we can expect in the near future.
Everything Pkg: discussion of package management, version resolution, binary artifacts, registries, manifests, configuration, etc.
Symbolic terms are fundamental to a variety of fields in computer science, including computer algebra, automated reasoning, and scientific modeling. In this talk, we discuss a family of Julia packages for symbolic computation, including Rewrite.jl for term rewriting and ModelingToolkit.jl for symbolic differential equations.
Turing is a probabilistic programming language written in Julia. This talk will introduce Turing and its tooling ecosystem, as well as go over some introductory tutorials.
As the saying goes: "You can solve that with Cassette." This is a tutorial on how to use Cassette to build a debugger. It explains the core of MagneticReadHead.jl, and how you can build similar tools to instrument Julia code for your own purposes.
Stheno.jl is a probabilistic programming framework specifically designed for constructing probabilistic models based around Gaussian processes. Come to this talk to find out what that means, why you should care, and how you can use it with Flux.jl and Turing.jl to do cool things.
The talk will introduce the use of PuMaS.jl for simulation and estimation of Nonlinear Mixed Effects Models used in systems pharmacology.
Electrodialysis, a prominent technology in the production of drinking water from seawater, is modelled using the Julia ecosystem. A framework of partial differential equations and neural networks is solved to model the fouling of this process and to optimise its design and operation.
Casual chats about the uses of Julia in Astronomy, JuliaAstro and related packages and studies.
This talk will explore the basic ideas in Soss, a new probabilistic programming library for Julia. Soss allows a high-level representation of the kinds of models often written in PyMC3 or Stan, and offers a way to programmatically specify and apply model transformations like approximations or reparameterizations.
Efficient performance engineering for Julia programs relies heavily on understanding the results of type inference on your program. This talk will introduce a tool for having a conversation with type inference.
Concolic testing is a technique that uses concrete execution to create a symbolic representation of a program, which can be used to prove properties of programs or to perform provably exhaustive fuzzing.
IVIVC.jl is a state-of-the-art package for predictive mathematical modelling that correlates an in vitro property (rate of drug dissolution) with an in vivo response (plasma drug concentration profile). An IVIVC is meant to serve as a surrogate for in vivo bioavailability. This relationship can guide product development and support biowaivers. IVIVC.jl pipelines input bio-data into an IVIVC model with validations, involving mathematical modelling, optimization, and data visualisation, all accelerated with Julia.
Revise.jl allows you to modify code in your running Julia session. Revise was recently rewritten around JuliaInterpreter.jl and a new query interface, CodeTracking.jl, resulting in many improvements and easier access to Revise’s internal data.
Flow cytometry clustering for several hundred million cells has long been hampered by software implementations. Julia allows us to go beyond these limits. Through the high-performance GigaSOM.jl package, we gear up for huge-scale flow cytometry analysis.
This talk introduces a new flexible and extensible probabilistic programming system called Gen, that is built on top of Julia. Gen's extensible set of modeling DSLs can express probabilistic models that combine Bayesian networks, black box simulators, deep learning, structure learning, and Bayesian nonparametrics; and Gen's inference library supports custom algorithms that combine Markov chain Monte Carlo, particle filtering, variational inference, and numerical optimization.
GWAS data are extremely high dimensional, large (>100GB), dense, and typically contain rare and correlated predictors. In this talk we discuss their unique data structures, how to efficiently represent them with Julia, how MendelIHT.jl in conjunction with Distributions.jl and GLM.jl fits generalized linear models for GWAS data, and the role of parallel computing.
TimerOutputs.jl is a tool that lets you annotate sections of your code so that, after execution, a nicely formatted table can be shown with information about how much time and how many allocations were spent in each section.
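A brief usage sketch (assuming the TimerOutputs.jl package is installed; section names here are made up for illustration):

```julia
using TimerOutputs
using LinearAlgebra

const to = TimerOutput()

# Annotate sections; nesting is supported and reflected in the report.
@timeit to "build" A = rand(500, 500)
@timeit to "solve" begin
    @timeit to "factorize" F = lu(A)
end

print_timer(to)   # prints the table of time and allocations per section
```

Because the annotations are just macros around existing expressions, they can be sprinkled through code and removed without otherwise changing its structure.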
I will present a probabilistic programming language that implements switching Kalman filters, and its applications to industrial time series processing.
In this talk, we will discuss implementation of various models relevant for electrochemical energy systems in order to more rapidly optimize component design and use-case-specific optimization. We will show massive performance improvements gained through Julia for a variety of popular pseudo-2D porous electrode models describing Li-ion batteries. In addition, we will illustrate an example of an integrated design workflow of an aircraft power dynamics model along with a battery model, implemented within Julia.
With PackageCompiler, one can ahead-of-time compile binaries for Julia packages, including creating an executable for a Julia script. In this talk, I will give a short overview of how PackageCompiler works and how it can be used to ship your Julia package or eliminate JIT overhead.
The Julia Language 1.0 Ephemeris and Physical Constants Reader for Solar System Bodies is an ephemeris reader, written in Julia, intended for use in astrodynamics applications.