Bernhard Bachmann
- Professor of Mathematics and Technical Applications, Bielefeld University of Applied Sciences (HSBI, since 1999)
- Research focus: numerical mathematics, nonlinear optimization, symbolic and numerical methods for large hybrid differential-algebraic systems
- Founding member of the Modelica Association (1996) and Open Source Modelica Consortium (2004)
- Key contributions to the BackEnd and C-Runtime of the OpenModelica Compiler
- Co-Author of the Modelica Petri Net Library and Modelica Neural Network Library
- Research stays at ABB Research Center (Switzerland, USA, Sweden), Linköping University (Sweden), and Politecnico di Milano (Italy)
- Member, Promotionskolleg NRW (since 2022)
- Founding board member, Institute for Data Science Solutions (IDaS), HSBI (since 2022)
Sessions
This session is chaired by Bernhard Bachmann.
We propose a novel approach for training Physics-enhanced Neural ODEs (PeN-ODEs) by expressing the training process as a dynamic optimization problem. The full model, including neural components, is discretized using a high-order implicit Runge-Kutta method with flipped Legendre-Gauss-Radau points, resulting in a large-scale nonlinear program (NLP) that is efficiently solved by state-of-the-art NLP solvers such as Ipopt. This formulation enables simultaneous optimization of network parameters and state trajectories, addressing key limitations of ODE solver-based training in terms of stability, runtime, and accuracy. Extending a recent direct collocation-based method for Neural ODEs, we generalize to PeN-ODEs, incorporate physical constraints, and present a custom, parallelized, open-source implementation. Benchmarks on a Quarter Vehicle Model and a Van der Pol oscillator demonstrate superior accuracy, speed, and generalization with smaller networks compared to other training techniques. We also outline a planned integration into OpenModelica to enable accessible training of Neural DAEs.
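The transcription idea can be sketched with a fixed-step implicit Runge-Kutta integration of the Van der Pol oscillator mentioned above. This is an illustrative toy only: it uses the classical 2-stage Radau IIA tableau and plain fixed-point iteration, whereas the paper discretizes with flipped Legendre-Gauss-Radau points and hands the resulting NLP to Ipopt.

```python
# Illustrative sketch (not the paper's implementation): one implicit
# Runge-Kutta discretization step of the kind used in collocation-based
# training, applied to the Van der Pol oscillator.

def van_der_pol(t, y, mu=1.0):
    x, v = y
    return [v, mu * (1.0 - x * x) * v - x]

# 2-stage Radau IIA Butcher tableau (order 3)
A = [[5/12, -1/12], [3/4, 1/4]]
b = [3/4, 1/4]
c = [1/3, 1.0]

def radau_step(f, t, y, h, iters=50):
    """Solve the stage equations k_i = f(t + c_i h, y + h * sum_j A_ij k_j)
    by fixed-point iteration (adequate for small h), then advance y."""
    n = len(y)
    k = [f(t, y), f(t, y)]                      # initial stage guesses
    for _ in range(iters):
        k = [f(t + c[i] * h,
               [y[d] + h * sum(A[i][j] * k[j][d] for j in range(2))
                for d in range(n)])
             for i in range(2)]
    return [y[d] + h * sum(b[j] * k[j][d] for j in range(2)) for d in range(n)]

t, y, h = 0.0, [2.0, 0.0], 0.01
for _ in range(100):
    y = radau_step(van_der_pol, t, y, h)
    t += h
print(t, y)
```

In the collocation formulation of the paper, the stage equations at all mesh points become equality constraints of one large NLP instead of being solved step by step, which is what allows states and network parameters to be optimized simultaneously.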
The convergence failure of iterative Newton solvers during the initialization of Modelica models is a serious show-stopper, particularly for inexperienced users. This paper presents the implementation in the OpenModelica tool of methods presented by two of the authors in a previous paper, to help diagnose and resolve these convergence failures by providing ranked lists of potentially critical start attributes that might need to be fixed in order to successfully achieve convergence. The method also provides library developers with useful information about critical nonlinear equations that could be replaced by equivalent, less nonlinear ones, or approximated by homotopy for more robust initialization.
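A toy illustration of the underlying idea, not the authors' actual algorithm: rank start attributes by how strongly the initial residuals of the nonlinear initialization system react to each of them, so that the likeliest culprits surface first when Newton fails. The residual system below is hypothetical.

```python
# Hypothetical example: ranking start attributes by residual sensitivity.
import math

def residuals(x):
    # Toy initialization system with one strongly nonlinear equation.
    return [math.exp(x[0]) - 5.0 * x[1],
            x[0] * x[0] + x[1] - 2.0]

def rank_start_attributes(start, h=1e-6):
    """Finite-difference sensitivity of each residual w.r.t. each start
    value; attributes with the largest influence are listed first."""
    base = residuals(start)
    scores = []
    for i in range(len(start)):
        pert = list(start)
        pert[i] += h
        r = residuals(pert)
        sens = max(abs(r[j] - base[j]) / h for j in range(len(base)))
        scores.append((sens, i))
    return sorted(scores, reverse=True)

start = [3.0, 0.1]   # poor start for x[0] lets exp() dominate the residual
ranking = rank_start_attributes(start)
print(ranking)       # most critical start attribute first
```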
Direct collocation-based dynamic optimization plays an important role in the optimization of equation-based models. With this approach, continuous problems are transcribed into sparse nonlinear programs (NLPs) that can be solved efficiently. The open-source Modelica environment OpenModelica provides an implementation using Radau IIA collocation, but it has major limitations, such as the lack of parameter optimization, adaptive mesh refinement, and support for higher-order integration schemes. This paper presents (1) a comprehensive reimplementation that addresses these limitations and (2) a novel $h$-method mesh refinement algorithm. Implemented in the custom Python / C++ optimization framework GDOPT, the approach demonstrates significant performance improvements, solving typical problems 2 to 3 times faster than OpenModelica under equivalent conditions. Using the proposed mesh refinement algorithm, the framework correctly identifies non-smooth regions and increases resolution accordingly, requiring only a small increase in computation time. The implementation lays the foundation for a future integration into the OpenModelica toolchain.
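The effect of $h$-type mesh refinement can be sketched as follows (illustrative only; GDOPT's actual error estimator and refinement criterion are not shown): bisect mesh intervals where a discrete smoothness indicator exceeds a tolerance, so that grid points concentrate around kinks in the solution.

```python
# Illustrative h-refinement pass: add points where the jump in slope
# (a crude non-smoothness indicator) is large.

def refine(mesh, u, tol):
    """mesh: sorted grid points; u: solution values at those points.
    Returns a refined mesh with extra points near non-smooth regions."""
    new_mesh = [mesh[0]]
    for i in range(1, len(mesh) - 1):
        hl, hr = mesh[i] - mesh[i - 1], mesh[i + 1] - mesh[i]
        slope_l = (u[i] - u[i - 1]) / hl
        slope_r = (u[i + 1] - u[i]) / hr
        if abs(slope_r - slope_l) > tol:               # kink detected
            new_mesh.append(0.5 * (mesh[i - 1] + mesh[i]))  # bisect interval
        new_mesh.append(mesh[i])
    new_mesh.append(mesh[-1])
    return new_mesh

mesh = [i / 10 for i in range(11)]
u = [abs(x - 0.5) for x in mesh]        # solution with a kink at x = 0.5
refined = refine(mesh, u, tol=0.5)
print(len(mesh), len(refined))          # one extra point near the kink
```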
Equation-based modeling that utilizes reusable components to represent real-world systems can result in excessively large models. This, in turn, significantly increases compilation time and code size, even when employing state-of-the-art scalarization and causalization techniques. This paper presents an algorithm that leverages repeating patterns and uniform causalization to enable array-size-independent, constant-time processing. Allowing structural parameters that govern array sizes to remain resizable during and after the causalization process enables the formulation of an integer-valued nonlinear optimization problem. This approach identifies the minimal model configuration that preserves the required structural integrity, which can subsequently be resized as needed for simulation. The proposed method has been implemented in OpenModelica and builds upon preliminary work aimed at preserving array structures during causalization, while still resolving the underlying problem in a scalarized manner.
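The notion of a "minimal model configuration" can be illustrated with a toy search, not the paper's integer NLP formulation: for a chain of $n$ connected components with tridiagonal incidence structure, find the smallest $n$ at which every distinct row pattern of the repeating structure already occurs, so that any larger instance only repeats patterns that are already present.

```python
# Toy illustration: smallest array size that exhibits the full set of
# structural (sparsity) patterns of a repeating component chain.

def row_patterns(n):
    """Relative sparsity pattern of each equation in a chain of n
    components (tridiagonal incidence), as offsets from the diagonal."""
    pats = set()
    for i in range(n):
        pats.add(tuple(j - i for j in (i - 1, i, i + 1) if 0 <= j < n))
    return pats

def minimal_size(upper=50):
    """Smallest n whose pattern set equals that of a large instance,
    i.e. the repeating structure is fully represented."""
    target = row_patterns(upper)
    for n in range(1, upper + 1):
        if row_patterns(n) == target:
            return n
    return upper

n_min = minimal_size()
print(n_min)   # minimal configuration; resize to any n >= n_min later
```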