07-28, 19:10–19:20 (UTC), Green
In the next decade, forthcoming galaxy surveys will provide the astrophysical community with an unprecedented wealth of data. The standard analysis pipelines usually employed to analyze these surveys are quite expensive from a computational point of view.
In this presentation I will show how, using Flux.jl and DifferentialEquations.jl, it is possible to accelerate these standard analyses by several orders of magnitude.
We are living in the Golden Age of Cosmology: over the 20th century, our comprehension of the Universe evolved rapidly, eventually leading to the establishment of a concordance model, the so-called ΛCDM model. Despite the remarkable success of this model, which explains a great wealth of observations with just a few parameters, several questions remain unanswered.
What is the origin of the primordial fluctuations in the Universe? Are they due to some form of inflationary scenario?
What is Dark Matter? Is it a new particle, absent from the Standard Model of Particle Physics? Is it composed of Primordial Black Holes?
What is the nature of Dark Energy? Can the Cosmological Constant really explain its effects, or is it a sign of the breakdown of Einstein's theory of General Relativity?
In the next decade several galaxy surveys will start taking data, which will be used to study the Universe through different observational probes, such as weak lensing, galaxy clustering, and their cross-correlation: analyzing these probes jointly will enhance the scientific outcome of galaxy surveys.
However, this improvement does not come for free.
The analysis of a galaxy survey requires the evaluation of a complicated theoretical model with about a hundred parameters. Computing this theoretical prediction takes about 1–10 seconds; although this is not an expensive step per se, the computation is repeated 10^5–10^7 times, so a complete analysis requires either a very long time or dedicated hardware.
To overcome this issue, I am developing several surrogate models based on DifferentialEquations.jl and Flux.jl. The combination of these two packages is particularly useful for this case: while previous works on this topic have usually relied solely on neural networks to build emulators, solving some of the differential equations involved in the model evaluation reduces the dimensionality of the emulated parameter space, yielding a more precise surrogate model. The result of this work is a set of surrogate models with a precision of ~0.1% (matching the requirement for the scientific analysis) and a speed-up of about 100–1000×. The developed models will be released after the publication of the related papers.
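To make the "solve instead of emulate" idea concrete, here is a minimal sketch of one such equation: the linear growth factor D(a) in a flat ΛCDM cosmology obeys a simple ODE in the scale factor a, so it can be solved directly and factored out of the emulated quantity, removing its parameter dependence from the neural network's input space. This is an illustrative Python/SciPy sketch, not the talk's actual code (which uses DifferentialEquations.jl and Flux.jl); the function name and setup are my own.

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_factor(Om, a_grid):
    """Solve the linear growth ODE for flat LCDM:

        D'' + (3/a + E'/E) D' - 3 Om / (2 a^5 E(a)^2) D = 0,

    with E(a)^2 = Om a^-3 + (1 - Om), normalized so D(a=1) = 1.
    """
    OL = 1.0 - Om

    def E2(a):
        return Om * a**-3 + OL

    def dlnE_da(a):
        return -1.5 * Om * a**-4 / E2(a)

    def rhs(a, y):
        D, dD = y
        ddD = -(3.0 / a + dlnE_da(a)) * dD + 1.5 * Om / (a**5 * E2(a)) * D
        return [dD, ddD]

    # Matter-dominated initial condition: D ~ a at early times.
    a0 = a_grid[0]
    sol = solve_ivp(rhs, (a0, 1.0), [a0, 1.0], t_eval=a_grid,
                    rtol=1e-8, atol=1e-10)
    D = sol.y[0]
    return D / D[-1]  # normalize to D(a=1) = 1

a_grid = np.linspace(1e-3, 1.0, 200)
D = growth_factor(0.31, a_grid)
```

Each call costs milliseconds, so the growth history never needs to enter the emulated parameter space; the neural network only has to learn the quantities that lack such a cheap differential-equation description.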
I am a Postdoctoral Researcher at INAF-IASF in Milano. My research lies in the field of cosmology; specifically, I am involved in Euclid, a mission of the European Space Agency. I take part in several scientific working groups within Euclid, with a particular focus on the analysis of the mission's final data.