JuliaCon 2025

PEM-UDE for Neural Mass Models
2025-07-23, Main Room 2

Scientific machine learning has proven effective at deriving equations for complex dynamical systems, but it struggles with chaotic systems, particularly biological ones, where incomplete theory and noisy data make standard fitting intractable. We present a new approach that combines universal differential equations with the prediction-error method from optimization to successfully learn neural system dynamics from simulated and real spiking neural networks.


Scientific machine learning (SciML) approaches are useful for learning the physical equations governing dynamical systems, particularly when deriving such equations manually would be difficult or impossible with traditional techniques [1, 2]. Chaotic systems, however, are difficult to fit with universal differential equations (UDEs) in many scenarios, due both to numerical issues (e.g., simulating chaotic trajectories with sufficient accuracy) and to experimental issues (e.g., collecting high-SNR data that allow accurate modeling of the system) [1, 3]. Biological systems, in particular, suffer from both of these issues: incomplete theory and very noisy observations make fitting UDEs to real data intractable [3]. To overcome these issues, we propose an extension of UDE fitting that uses the prediction-error method (PEM) [4] from classical optimization to assist in training UDEs, and we demonstrate its utility on classical chaotic systems and in applications to electrical circuits and spiking neural networks.
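The core idea behind PEM can be sketched in a few lines: instead of integrating the candidate model open-loop, the mismatch between the measured output and the model's prediction is fed back into the dynamics through an observer gain, so errors cannot compound along a long trajectory. Below is a minimal stand-alone illustration in Python/SciPy (the presentation itself targets Julia's SciML stack); the toy oscillator, the gain value, and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ground-truth system: a weakly damped oscillator standing in for unknown dynamics.
def truth(t, u):
    x, v = u
    return [v, -x - 0.1 * v]

# Imperfect candidate model: the damping term is missing (the part a UDE must learn).
def model(t, u):
    x, v = u
    return [v, -x]

# PEM-style correction: feed the output error y - x back with gain k, so the
# simulated trajectory is continually pulled toward the measurements.
def pem_model(t, u, k, y_of_t):
    x, v = u
    err = y_of_t(t) - x
    return [v + k * err, -x]

t_eval = np.linspace(0.0, 10.0, 500)
ref = solve_ivp(truth, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
y = ref.y[0]                                # "measured" output
y_of_t = lambda t: np.interp(t, t_eval, y)  # measurement interpolant

plain = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
pem = solve_ivp(pem_model, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval,
                rtol=1e-8, args=(2.0, y_of_t))

err_plain = np.mean((plain.y[0] - y) ** 2)
err_pem = np.mean((pem.y[0] - y) ** 2)
print(err_plain, err_pem)  # the PEM-corrected trajectory tracks the data far better
```

During UDE training, the same correction keeps the simulated states close to the data while the network's parameters are optimized, which is what smooths the loss landscape.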

As an initial validation, we explore this combined PEM-UDE approach on classical chaotic systems. We demonstrate that, for the Rössler system [5], PEM smooths the harsh parameter landscape that makes fitting a UDE in the standard way impossible, while also providing a means of tuning the degree of smoothing. We show that the approach makes the system learnable by a UDE without relying on exact knowledge of the unobserved parameters (i.e., it is robust to noisy estimates of parameters in the simulated states as well). We also demonstrate that the learned system accurately simulates the true dynamics and can be generalized by applying symbolic regression to recover the missing state directly from the fitted UDE. We then extend this analysis to the system describing Chua’s circuit [6], showing that even when a state (one node of the circuit) is entirely unobserved and cannot be used in training, the PEM-UDE approach still learns an appropriately close form of the dynamics.
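For context, the Rössler system is dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c), chaotic at the classic parameters a = b = 0.2, c = 5.7. The sketch below (an illustrative Python/SciPy reproduction of a well-known property, not the authors' code) shows why open-loop trajectory matching is so hard here: a 1e-6 perturbation of the initial state is amplified by many orders of magnitude, so a long-horizon loss is wildly sensitive to small changes, which is exactly the roughness PEM is introduced to smooth:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rössler system; a = b = 0.2, c = 5.7 gives the classic chaotic attractor.
def rossler(t, u, a=0.2, b=0.2, c=5.7):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

t_eval = np.linspace(0.0, 200.0, 8000)
u0 = [1.0, 1.0, 1.0]
u0_perturbed = [1.0 + 1e-6, 1.0, 1.0]  # tiny change to one state

sol_a = solve_ivp(rossler, (0.0, 200.0), u0,
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(rossler, (0.0, 200.0), u0_perturbed,
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

# The perturbation grows roughly exponentially (positive Lyapunov exponent),
# so two nearly identical models produce completely different long trajectories.
gap = np.abs(sol_a.y[0] - sol_b.y[0])
print(gap[0], gap.max())
```

The same exponential divergence applies to perturbations of model parameters, which is why a naive trajectory-matching loss over a chaotic system looks nearly random in parameter space.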

Having validated the approach on these well-described chaotic systems, we apply it to spiking neural networks, where the analytical forms are less well defined. Next-generation neural mass models (NGNMMs) provide an analytical approach, built on the Ott-Antonsen (OA) ansatz [7], for deriving mean-field representations of neuron populations that preserve the microscale parameters of individual neurons [8, 9]. This derivation, however, assumes that the neurons form a fully connected network, which is not the case in many areas of the brain (especially cortex). We show that we can train a UDE to learn the terms of a mean-field representation that violates the original OA assumptions, enabling it to simulate a wider range of spiking-population activity. Together, these results validate the use of PEM for training UDEs on chaotic systems that they could not otherwise learn and illustrate its utility in computational and experimental neuroscience, setting the stage for future applications to other dynamical systems.

References
[1] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, Feb. 2019.
[2] C. Rackauckas et al., “Universal Differential Equations for Scientific Machine Learning,” Nov. 02, 2021, arXiv: arXiv:2001.04385.
[3] W. Ji, W. Qiu, Z. Shi, S. Pan, and S. Deng, “Stiff-PINN: Physics-Informed Neural Network for Stiff Chemical Kinetics,” The Journal of Physical Chemistry A, vol. 125, no. 36, pp. 8098–8106, Sep. 2021.
[4] R. Larsson, Z. Sjanic, M. Enqvist, and L. Ljung, “Direct prediction-error identification of unstable nonlinear systems applied to flight test data,” IFAC Proceedings Volumes, vol. 42, no. 10, pp. 144–149, Jan. 2009.
[5] O. E. Rössler, “An equation for continuous chaos,” Physics Letters A, vol. 57, no. 5, pp. 397–398, Jul. 1976.
[6] I. Gomes, W. Korneta, S. G. Stavrinides, R. Picos, and L. O. Chua, “Experimental observation of chaotic hysteresis in Chua’s circuit driven by slow voltage forcing,” Chaos, Solitons & Fractals, vol. 166, p. 112927, Jan. 2023.
[7] E. Ott and T. M. Antonsen, “Low dimensional behavior of large systems of globally coupled oscillators,” Chaos, vol. 18, no. 3, p. 037113, Sep. 2008.
[8] E. Montbrió, D. Pazó, and A. Roxin, “Macroscopic Description for Networks of Spiking Neurons,” Phys. Rev. X, vol. 5, no. 2, p. 021028, Jun. 2015.
[9] L. Chen and S. A. Campbell, “Exact mean-field models for spiking neural networks with adaptation,” Mar. 15, 2022, arXiv: arXiv:2203.08341.