JuliaCon 2020 (times are in UTC)

SciML: Automatic Discovery of Droplet Fragmentation Physics
07-29, 18:30–19:00 (UTC), Red Track

We consider a classical droplet fragmentation problem in fluid mechanics and augment the system model with neural architectures using DiffEqFlux.jl. This augmentation speeds up experimental inquiry by training physically interpretable neural architectures that recover the governing equations for the spatial and temporal variation of dynamic quantities. We showcase how Julia's unique differentiable programming ecosystem can be the basis for next-generation physical science.


In this study, we consider a canonical fragmentation problem in fluid mechanics: the splash of a drop on a liquid layer. Although this phenomenon occurs in the twinkling of an eye (20-30 ms), it is an exquisitely regulated process. The splash is typically accompanied by a thin cylindrical liquid sheet rising upwards, which resembles a crown. The dynamics of the crown sheet are typically characterized by a coupled set of mass and momentum balance equations. The thickness of the sheet is of the order of microns; moreover, it varies in both space and time, making it a dynamically changing quantity that is difficult to probe experimentally. The sheet thickness has only recently been measured experimentally and validated theoretically in prior studies.
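
To fix ideas, the balance equations referenced above take the general form of thin-sheet conservation laws. The following is a schematic illustration only: the symbols h (sheet thickness), u (sheet velocity), and r (radial coordinate), and the specific form shown, are assumptions for exposition, not the actual equations used in this study.

    \partial_t h + \frac{1}{r}\,\partial_r\!\left(r\,h\,u\right) = 0   % mass balance (schematic)
    \partial_t u + u\,\partial_r u = 0                                 % momentum balance (schematic)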

In the present work, we use a neural architecture to approximate the sheet thickness profile and couple it with the crown's mass and momentum balance equations. We show that the thickness profile predicted by the trained neural architecture matches the experimentally measured profile well. The trained network also recovers the spatial and temporal dependence of the thickness profile, in good agreement with the theoretically derived dependencies.
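
A minimal sketch of how such a neural-network-augmented model can be set up with DiffEqFlux.jl is shown below. The state variables, the placeholder dynamics inside crown!, and the synthetic "measured" data are illustrative assumptions; they are not the actual crown-sheet equations or measurements from this work.

    using DiffEqFlux, DifferentialEquations, Flux

    # Neural network standing in for the unknown sheet-thickness profile h(r, t)
    hNN = FastChain(FastDense(2, 16, tanh), FastDense(16, 1))
    θ0  = Float64.(initial_params(hNN))

    # Toy crown dynamics: u = [sheet radius, sheet velocity]. The right-hand
    # sides are placeholders; the real mass/momentum balances are not shown here.
    function crown!(du, u, θ, t)
        r, v = u
        h = first(hNN([r, t], θ))        # NN-predicted thickness at (r, t)
        du[1] = v                        # radius grows with the sheet velocity
        du[2] = -v^2 / max(h, 1e-6)      # placeholder momentum balance using h
    end

    u0    = [1.0, 0.5]
    tspan = (0.0, 0.03)                  # ~30 ms, the time scale of the splash
    tsave = range(tspan..., length = 20)
    prob  = ODEProblem(crown!, u0, tspan, θ0)

    # Synthetic stand-in for experimentally measured crown radii
    data = 1.0 .+ 0.5 .* tsave

    # Fit the network parameters so the solved radii match the measurements
    loss(θ) = sum(abs2, Array(solve(prob, Tsit5(), p = θ, saveat = tsave))[1, :] .- data)

    res = DiffEqFlux.sciml_train(loss, θ0, ADAM(0.05), maxiters = 200)

After training, evaluating hNN at a grid of (r, t) points gives the learned thickness profile, which can then be compared against measurements and theory.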

Augmenting scientific models with neural networks is thus shown to play a major role in speeding up experimentally driven inquiry, especially when studying dynamically varying quantities that are difficult to measure, such as the sheet thickness in this system. In addition, such augmentation paves the way to more interpretable neural architectures, even when working with small datasets.

PhD student, Massachusetts Institute of Technology, USA.