JuliaCon 2022 (Times are UTC)

Universal Differential Equation models with wrong assumptions
07-28, 11:30–11:40 (UTC), Green

Julia's SciML ecosystem introduces an effective way to model natural phenomena with Universal Differential Equations (UDEs). UDEs enrich differential equations by combining an explicitly known term with a term learned from data via a Neural Network. Here, we explore what happens when our assumptions about the known term are wrong, making use of the rich interoperability of Julia. The insights we offer will help the Julia community better understand the strengths and possible shortcomings of UDEs.


Introduction

Julia’s SciML ecosystem introduced an effective way to model natural phenomena as dynamical systems with Universal Differential Equations (UDEs). The UDE framework enriches both classic and Neural Network differential equation modelling by combining an explicitly “known” term (that is, a term whose functional expression is known) with an “unknown” term (that is, a term whose functional expression is not known). Within a UDE, the unknown term, and therefore the overall functional form of the dynamical system, is learned from observational data by fitting a Neural Network. The task of the Neural Network is eased by the domain knowledge embodied in the known term; moreover, the interpolation and, importantly, extrapolation performance of the fitted model is greatly improved by that knowledge (and by a simplification step, such as SINDy).

All of this relies on the tacit assumption that what we believe about the natural phenomenon is correctly expressed in the known term. Most of the research so far has focused on the robust identification of the unknown term and on the properties of the Neural Network. We focus instead on the impact of possible pathologies in the design of a UDE system, and in particular on errors we may introduce in the expression of the known term. That is, we ask what happens if our domain knowledge is not correctly expressed. In the spirit of the famous quote attributed to Mark Twain, “It ain’t what you don’t know that gets you in trouble. It’s what you know for sure that just ain’t so”, we explore the magnitude of the trouble you get into.

Details

In more detail, for a set of variables X, we consider a dynamical system of the form
dX(t)/dt = F(X,t) = K(t) + U(X,t),
where K(t) is the part of the dynamical equation assumed to be “known”, and U(X,t) is the part assumed to be “unknown”.
In this scenario, the observational data are samples from X(t) at various points in time.
Let K*(t) be a perturbed version of K(t) (say, for a certain ω, K*(t) = sin(t + ω) when K(t) = sin(t)).
Our aim is to recover F(X,t) from the observed data by training a UDE of the form
dX(t)/dt = K*(t) + NN(X,t).
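
For concreteness, here is a minimal Julia sketch of this setup; the particular choices of K, U, the perturbation ω, and the network size are illustrative assumptions, not the exact functions used in our experiments.

```julia
using OrdinaryDiffEq, Lux, Random, ComponentArrays

# Illustrative "true" system: known term K(t) = sin(t), unknown term U(x) = -0.1x^3
K(t) = sin(t)
U(x) = -0.1 * x^3
true_rhs(u, p, t) = [K(t) + U(u[1])]

u0, tspan = [1.0], (0.0, 10.0)
# Synthetic observations of X(t), sampled at regular time points
data = solve(ODEProblem(true_rhs, u0, tspan), Tsit5(), saveat = 0.1)

# The three "known term" scenarios we compare
K_correct(t)   = sin(t)      # (a) correctly specified
K_absent(t)    = 0.0         # (b) no known term
ω = 0.3                      # illustrative perturbation
K_perturbed(t) = sin(t + ω)  # (c) perturbed known term

# A small neural network stands in for the unknown term U(X, t)
rng = Random.default_rng()
nn = Lux.Chain(Lux.Dense(1 => 16, tanh), Lux.Dense(16 => 1))
p0, st = Lux.setup(rng, nn)
p0 = ComponentArray(p0)

# UDE right-hand side dX(t)/dt = K*(t) + NN(X, t), here with the perturbed K*;
# swapping in K_correct or K_absent gives scenarios (a) and (b)
ude_rhs(u, p, t) = [K_perturbed(t) + first(first(nn(u, p, st)))]
prob_ude = ODEProblem(ude_rhs, u0, tspan, p0)
# p0 is then trained (e.g., with Optimization.jl and SciMLSensitivity.jl) so that
# solutions of prob_ude reproduce `data`.
```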

Under the perturbed scenario, we ask some simple questions, whose answers are far from trivial:
- Can we recover the functional form of F(X,t)?
- Can we at least approximate it accurately?
- How does the perturbation we impose on K(t) to obtain K*(t) impact the accuracy of our model?

In order to explore the discrepancy between expected and obtained results, we needed three things: synthetic data from the original dynamical system, that is, a family of functions for K(t); a family of perturbed versions K*(t); and a way to assess how far off we are from recovering the true F(X,t). All three tasks were facilitated by the interoperability of Julia, and in the presentation we will show how that plays out.

  1. We considered trigonometric, exponential, and polynomial functions, as well as linear combinations of these, to create the original dynamical system and generate synthetic observational data. This was made efficient by Julia’s symbolic computation capabilities, e.g., Symbolics.jl.
  2. We fitted a family of UDEs to the data we generated under three scenarios: (a) a correctly specified known term, i.e., K*(t) = K(t); (b) the lack of a known term, i.e., K*(t) = 0; and (c) a perturbation of the known term. Each UDE was subsequently simplified to recover a sparse representation of the dynamical system in terms of simple functions. This step was done within Julia’s SciML framework.
  3. Finally, we evaluated the goodness of fit between the recovered dynamical system (simplified and not) and the original dynamical system. For this we developed a package, FunctionalDistances.jl, to automate as much as possible the estimation of the distance between two functions (the package will shortly be available in a GitHub repository); an illustrative sketch of such a comparison follows this list.
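
To give a flavour of the kind of comparison FunctionalDistances.jl is meant to automate, here is a hand-rolled sketch of an L2 distance between two functions computed with QuadGK.jl; the functions below are purely illustrative, and this is not the package’s actual API.

```julia
using QuadGK  # adaptive numerical quadrature

# Hypothetical "true" and "recovered" right-hand sides, reduced to functions of t only
f_true(t)      = sin(t) - 0.10
f_recovered(t) = sin(t + 0.05) - 0.12

# L2 distance between two functions on [a, b]
function l2_distance(f, g, a, b)
    val, _ = quadgk(t -> (f(t) - g(t))^2, a, b)
    return sqrt(val)
end

l2_distance(f_true, f_recovered, 0.0, 10.0)
```

The same idea extends to comparing the recovered F(X,t), simplified or not, with the original one over the domain of interest.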

Future Development

Our preliminary results suggest that no UDE with a strongly perturbed known term provided a better model than its counterpart with a correctly specified term. Yet, a few perturbed models gave better fits than their counterparts with no known term at all, raising the question of whether errors in the specification of a UDE are indeed detectable.

Our talk will interest both people who study UDEs, thanks to our cautionary and surprising results, and the wider audience interested in the use of Julia for mathematical modelling, thanks to the encouraging examples of interoperability we present.
The talk will show how Julia helped us in this experimental mathematical exercise and will point to many opportunities for further investigation.

The presentation will be as light as possible on the mathematical side, will present ample examples of how the interoperability of Julia helped our analysis, and assumes little or no prior knowledge of UDEs. Graphs and examples will be used to aid understanding of the topic.

Currently a student in mathematics at the University of Canterbury in Christchurch, New Zealand.