Fabio Hernandez
Mr. Fabio Hernandez has been working in computing for high energy physics research for more than 25 years. Affiliated with IN2P3/CNRS, the French national institute for nuclear physics and particle physics, he is currently the technical leader of the French component of the data processing infrastructure for the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST).
Prior to that position, he spent several years as a visiting senior international scientist at the Institute of High Energy Physics in Beijing (China) and served as an expert in the Office for Science and Technology of the Embassy of France in China. As deputy director of France's IN2P3 computing center and technical leader of the French contribution to the Worldwide LHC Computing Grid (WLCG), he was deeply involved in planning, prototyping, deploying and operating the global computing platform designed for processing the data produced by CERN's Large Hadron Collider (LHC).
Throughout his career, he has been active in several projects sponsored by the European Commission in the field of distributed computing for science, in Europe and in Latin America. He has held positions leading engineering teams in charge of software development for massive data storage, data center operations and distributed computing for international science projects.
Mr. Hernandez received his BSc in computer science from the University of Los Andes (Bogotá, Colombia) and his MSc in computer science from the University Lyon 1 (Lyon, France).
Session
Located in Lyon, France, the IN2P3/CNRS Computing Centre (CC-IN2P3) has been preparing its contribution to the production of the Legacy Survey of Space and Time in its role as the Rubin Observatory's France Data Facility.
A complete copy of the raw images will be imported and stored there for the duration of the 10-year survey, and annual reprocessing campaigns covering 40% of the raw images recorded since the beginning of the survey will be performed on its premises. The data products of those campaigns will be sent back to the Observatory's archive center in the USA.
CC-IN2P3 is a scientific data processing facility shared by several dozen international projects in high energy physics, nuclear physics and astroparticle physics. In recent years we have observed a significant increase both in the demand for computing and storage capacity and in the complexity of the services required to support astroparticle physics projects, and we expect their needs to continue growing for the foreseeable future: major international projects such as Rubin, Euclid, KM3NeT and Virgo/LIGO already represent a sizeable fraction of the resources CC-IN2P3 provides to the science projects it supports, even if not yet at the level of the high energy physics projects.
In this contribution we will address how we have been preparing to perform bulk image processing for the Rubin Observatory's annual data release processing campaigns over the duration of the survey. We will present the architecture of the system we deployed, with a focus on the storage, compute and data transfer components, and describe how we have been testing the system at significant scale. We will highlight and motivate some of the solutions we adopted, which have proven effective in our contributions to other large science projects such as CERN's Large Hadron Collider. We will also cover our initial experience with components deployed for the specific needs of scientific exploitation of Rubin data, such as the astronomical catalog database and the Rubin Science Platform.