2026-08-14, Room 6
Functional Mock-up Units (FMUs) are widely used in industry for exchanging dynamical system models, but their black-box binary nature makes them inaccessible to traditional AD tools. Built-in derivative support in the FMI standard is limited in scope and often relies on slow finite differences. We present a novel approach: by embedding LLVM bitcode into FMU binaries during compilation, we make them accessible to Enzyme.jl, enabling fast, automatic differentiation of virtually any FMU function.
The Functional Mock-up Interface (FMI) standard is widely used for the exchange of dynamical system models, particularly in industry. Computing derivatives of these models is relevant for a variety of use cases, including optimization, control, and building hybrid models that combine physics-based simulations with machine learning.
The Julia ecosystem is already uniquely positioned in this space. Packages such as FMISensitivity.jl and FMIFlux.jl enable the computation of various derivatives, such as sensitivities of solutions, even across discontinuities. This makes Julia the only currently viable platform for working with FMUs in a differentiable programming context.
However, FMUs are generally distributed as black-box binaries, which makes them inaccessible to traditional automatic differentiation tools. The FMI standard does include some built-in mechanisms for providing derivatives, but these are limited: they do not cover all the kinds of derivatives one might want to compute (such as derivatives at discontinuities or with respect to time), and where present at all, they are often realized internally through finite differences. There are active efforts to address this by enhancing the FMI specification [1], but that path requires tool vendors to implement additional functionality in their FMI-supporting tools.
This talk presents a different approach that leverages the LLVM ecosystem and Enzyme.jl. By embedding the LLVM bitcode generated during compilation of the FMU into the binary itself, we make the compiled code accessible to Enzyme, which can then generate fast, exact derivatives for virtually any function the FMU provides. These derivatives also integrate neatly with other code from the Julia ecosystem. In practice, this yields speedups of multiple orders of magnitude over finite differencing in some cases, while in the best case requiring only a few additional compiler flags when compiling the FMU from source.
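As a rough illustration of the compilation side only: Clang's `-fembed-bitcode` flag places LLVM bitcode in a section of the object file alongside the native machine code. The file names, directory layout, and flag selection below are illustrative assumptions, not the actual build setup used in this work.

```shell
# Hypothetical sketch: compile an FMU's C sources so that the LLVM
# bitcode is embedded in the resulting shared library (in a bitcode
# section next to the machine code), where a tool like Enzyme can
# later retrieve it. "model.c" and the output path are placeholders.
clang -c -O2 -fPIC -fembed-bitcode model.c -o model.o
clang -shared model.o -o binaries/x86_64-linux/model.so
```

The appeal of this route is that the FMU remains a standard-conforming binary for every other consumer; the embedded bitcode is simply extra payload that differentiation tooling can exploit.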
The talk will cover the approach, discuss some challenges we encountered, and demonstrate performance gains.
[1] T. Thummerer, H. Olsson, C. Song, J. Gundermann, T. Blochwitz, and L. Mikelsons, “LS-SA: Developing an FMI layered standard for holistic & efficient sensitivity analysis of FMUs,” Linköping Electronic Conference Proceedings, vol. 218. Linköping University Electronic Press, Oct. 24, 2025. doi: 10.3384/ecp218681.
Research scientist and PhD student at the University of Augsburg, Chair of Mechatronics
Lars Mikelsons holds a diploma in Mathematics and a Ph.D. in Mechatronics. He began his professional career at Bosch Corporate Research before transitioning to academia. Currently, he is the Head of the Chair for Mechatronics at the University of Augsburg. His research focuses on Scientific Machine Learning and Mechatronic Systems Engineering, contributing to the advancement of intelligent, data-driven approaches in engineering applications.