JuliaCon 2026

Deep Adaptive Experimental Design for SciML
2026-08-14, Room 1

Real-time adaptive experimental design for ODE models is hard: each step requires costly posterior inference and optimization. We train a neural network policy offline to amortize this cost. The Julia SciML stack makes this practical: Enzyme.jl differentiates through ODEs, Lux.jl defines the policy network, and Reactant.jl compiles everything to a single GPU program. On a bioreactor benchmark, the learned adaptive policy beats Bayesian D-optimal static designs with a 99.5% win rate.


Model-based design of experiments (MbDoE) is a fundamental methodology in engineering and the sciences: given a mechanistic model with unknown parameters, choose experimental conditions that yield the most informative data [1]. Adaptive designs use the information from measurements already gathered to guide the remainder of the experiment. However, conventional adaptive MbDoE requires solving a computationally expensive optimization problem between every measurement, which is infeasible when experiments run in real-time [2].
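To make the classical (static) side of MbDoE concrete, here is a minimal, self-contained sketch of a D-optimal design: choose measurement times that maximize the determinant of the Fisher information matrix. The toy model, noise level, and candidate grid are purely illustrative and are not the bioreactor model from the talk.

```julia
using LinearAlgebra

# Toy model: y(t) = θ1 * (1 - exp(-θ2 * t)) observed with Gaussian noise.
# Sensitivities of y with respect to the parameters (θ1, θ2):
sens(t, θ1, θ2) = [1 - exp(-θ2 * t), θ1 * t * exp(-θ2 * t)]

# Fisher information matrix of a design (a collection of measurement times).
function fim(times, θ1, θ2; σ = 0.1)
    M = zeros(2, 2)
    for t in times
        s = sens(t, θ1, θ2)
        M += s * s' / σ^2
    end
    return M
end

# D-optimal static design: the pair of times maximizing det(FIM),
# found here by brute-force enumeration over a coarse candidate grid.
candidates = 0.5:0.5:10.0
θ1, θ2 = 1.0, 0.4
best = argmax(d -> det(fim(d, θ1, θ2)),
              [(a, b) for a in candidates for b in candidates if a < b])
```

A static design like `best` is fixed before the experiment starts; the adaptive approaches discussed next instead update the design as data arrives.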

Deep Adaptive Design (DAD) addresses this by training a neural network policy offline to map experimental histories to optimal designs [3]. Once trained, the policy requires only a forward pass at deployment, enabling real-time adaptive decisions. We apply DAD to differentiable mechanistic models, i.e., dynamical systems described by ODEs with known structure but uncertain parameters.
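The deployment-time picture can be sketched in a few lines of plain Julia. The hand-rolled two-layer network below stands in for the trained Lux.jl policy (its weights are random here, not trained), and the simple sum-pooling of raw (design, observation) pairs crudely mirrors the permutation-invariant history encoders used in DAD; all names and sizes are illustrative.

```julia
# Amortized deployment: choosing the next design is just a forward pass
# over the experiment history — no inference or optimization in the loop.
struct Policy
    W1::Matrix{Float64}; b1::Vector{Float64}
    W2::Matrix{Float64}; b2::Vector{Float64}
end

# Encode a history of (design, observation) pairs into a fixed-size summary
# via sum-pooling, so the encoding is permutation-invariant.
encode(history) = isempty(history) ? zeros(2) :
    sum([d, y] for (d, y) in history)

function next_design(p::Policy, history)
    h = tanh.(p.W1 * encode(history) .+ p.b1)
    only(p.W2 * h .+ p.b2)            # scalar design, e.g. a feed rate
end

p = Policy(randn(16, 2), zeros(16), randn(1, 16), zeros(1))
history = [(0.5, 1.2), (0.8, 1.9)]    # (feed rate, measured concentration)
d_next = next_design(p, history)      # real-time: one forward pass
```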

The SciML stack: Lux + Enzyme + Reactant

The core contribution of this talk is showing how three pillars of the Julia ecosystem compose to solve a problem that would be difficult in any other framework.

Enzyme.jl -- differentiating through ODE solvers.
Lux.jl -- defining the policy network.
Reactant.jl -- GPU compilation of the full training loop.

The key insight is that none of these packages needed special adaptation to work together. Writing the ODE solver, the neural network, and the loss function in plain Julia was sufficient for Enzyme to differentiate through all of it and for Reactant to compile the result to a single GPU program.
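The following sketch illustrates what "plain Julia" means here: a fixed-step integrator with the policy in the loop, written with no framework-specific code. This is the shape of code that Enzyme.jl can differentiate end-to-end and Reactant.jl can compile; the scalar dynamics and the linear stand-in policy below are illustrative, not the talk's actual implementation, and neither package is invoked in this snippet.

```julia
# One classical RK4 step of ẋ = f(x, u, θ) under control u.
function rk4_step(f, x, u, θ, dt)
    k1 = f(x, u, θ)
    k2 = f(x + dt/2 * k1, u, θ)
    k3 = f(x + dt/2 * k2, u, θ)
    k4 = f(x + dt * k3, u, θ)
    x + dt/6 * (k1 + 2k2 + 2k3 + k4)
end

# Roll out T steps, choosing each control from the current state via `policy`.
# Ordinary Julia control flow — nothing here is special-cased for AD or GPU.
function rollout(f, policy, x0, θ, dt, T)
    x = x0
    for _ in 1:T
        u = policy(x)                 # design decision: one forward pass
        x = rk4_step(f, x, u, θ, dt)
    end
    return x
end

# Illustrative scalar dynamics with unknown rate θ and additive control.
f(x, u, θ) = -θ * x + u
policy(x) = 0.1 * x                   # linear stand-in for a trained network
xT = rollout(f, policy, 1.0, 0.5, 0.1, 140)
```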

Application and results

We primarily demonstrate the approach on a fed-batch bioreactor with Monod growth kinetics. The goal is to estimate the maximum growth rate and substrate affinity constant by adaptively choosing feed rates over a 14-hour experiment based on noisy substrate concentration measurements.
The trained policy is compared against a Bayesian D-optimal static design. The adaptive policy achieves a 99.5% win rate over this optimized static baseline.
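For readers unfamiliar with the benchmark model, a minimal fed-batch bioreactor with Monod growth kinetics can be simulated in a few lines of base Julia. The parameter values, initial state, and constant feed profile below are illustrative placeholders (the talk's policy chooses the feed rate adaptively); the unknown parameters targeted by the design are the maximum growth rate μmax and the affinity constant Ks.

```julia
# Fed-batch bioreactor state: biomass X, substrate S, volume V.
function bioreactor_rhs((X, S, V), F, (μmax, Ks); Y = 0.5, S_in = 20.0)
    μ = μmax * S / (Ks + S)                   # Monod specific growth rate
    dX = μ * X - (F / V) * X                  # growth minus dilution
    dS = -μ * X / Y + (F / V) * (S_in - S)    # consumption plus feed
    dV = F                                    # volume grows with the feed
    (dX, dS, dV)
end

# Forward-Euler simulation over a 14-hour horizon with a constant feed rate.
function simulate(θ; F = 0.05, dt = 0.01, T = 14.0)
    x = (1.0, 5.0, 1.0)                       # initial biomass, substrate, volume
    for _ in 1:round(Int, T / dt)
        dx = bioreactor_rhs(x, F, θ)
        x = x .+ dt .* dx
    end
    x
end

X, S, V = simulate((0.3, 1.0))  # μmax = 0.3 h⁻¹, Ks = 1.0 g/L (illustrative)
```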

Beyond the bioreactor with Monod kinetics, the talk also surveys several other applications of the approach.

Who should attend

This talk is relevant to anyone interested in differentiable programming in Julia, scientific machine learning, Bayesian experimental design, or composing the SciML ecosystem for non-standard workloads. No prior knowledge of experimental design is assumed.

References

[1] Franceschini, G. & Macchietto, S. (2008). Model-based design of experiments for parameter precision: State of the art. Chemical Engineering Science, 63(19), 4846-4872.
[2] Ryan, E.G., Drovandi, C.C., McGree, J.M., & Pettitt, A.N. (2016). A review of modern computational algorithms for Bayesian optimal design. International Statistical Review, 84(1), 128-154.
[3] Foster, A., Ivanova, D.R., Malik, I., & Rainforth, T. (2021). Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design. International Conference on Machine Learning, 3384-3395.

Arno Strouwen is a statistician specializing in optimal experimental design for dynamical systems. He holds a PhD from KU Leuven and teaches experimental design there. He works at PumasAI on noncompartmental analysis and in vitro-in vivo correlation, and previously worked at JuliaHub on quantitative systems pharmacology and consulting for SciML applications. His industry experience includes designing experiments for vaccines and pharmaceuticals at Johnson & Johnson.