2023-11-08 – Talks
The increasing data volumes from current observations with the Very Large Array (VLA), and the prospect of an orders-of-magnitude increase with the next generation VLA (ngVLA), have motivated the development of a high-performance, high-throughput data processing model so that data processing rates keep pace with data acquisition rates. The high-performance component is achieved through GPU-enabled implementations of compute-intensive operations. To scale data processing rates further, high throughput is achieved by distributing data partitions across multiple GPUs for independent processing, enabling access to computing resources at national scale. We present the current state of development of a high-throughput image processing model for VLA data, as well as run-time scaling results from our test campaign on the PATh (Partnership to Advance Throughput Computing) facility, which provides access to multiple GPUs on supercomputing infrastructure across the USA.
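To make the partition-and-distribute model concrete, the sketch below illustrates one plausible reading of it: the data set is split into independent partitions (here, per spectral window), each assigned to a GPU and imaged as a separate job. This is not the authors' pipeline; the names (`Partition`, `partition_by_spw`, `image_partition`) and the use of a local process pool as a stand-in for a high-throughput scheduler such as HTCondor on PATh resources are assumptions for illustration only.

```python
# Illustrative sketch of distributing independent data partitions across GPUs.
# Not the actual VLA/ngVLA pipeline; all names here are hypothetical.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass


@dataclass
class Partition:
    ms_path: str   # path to the measurement set
    spw: int       # spectral window selected for this independent job
    gpu_id: int    # GPU assigned to the job


def partition_by_spw(ms_path: str, n_spw: int, n_gpus: int) -> list[Partition]:
    """Split the data set into independent per-spectral-window partitions."""
    return [Partition(ms_path, spw, spw % n_gpus) for spw in range(n_spw)]


def image_partition(part: Partition) -> str:
    """Image one data partition on its assigned GPU.

    In a real pipeline this step would invoke a GPU-enabled gridder/imager
    on the selected spectral window; here it only returns a label so the
    sketch stays self-contained and runnable.
    """
    return f"{part.ms_path}:spw{part.spw} imaged on GPU {part.gpu_id}"


if __name__ == "__main__":
    parts = partition_by_spw("vla_observation.ms", n_spw=16, n_gpus=4)
    # Locally, a process pool mimics the many independent jobs that a
    # high-throughput scheduler would distribute across GPUs at different sites.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(image_partition, parts):
            print(result)
```

Because the partitions share no state, the same decomposition scales from a single multi-GPU node to nationally distributed resources simply by submitting each partition as its own job.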