Binaural stellar spectra sonification based on variational autoencoders
The current development of Virtual Observatory (VO) technology allows easy access to astronomical data from ground-based and space-based observatories, not only for astronomers working over remote computer networks but also for anyone interested in the field. This infrastructure is one of the largest examples of global collaboration and Open Science development, and it enables the use of real case studies in Citizen Science projects, Science Education, and Outreach.
Within this global, inclusive paradigm, Sonification can play a key role in generating comprehensive multimodal representations of datasets: complementing graphical representations of scientific information, expanding the possibilities of virtual science exploration, and improving inclusion and accessibility for blind and visually impaired (BVI) users.
Focused on unsupervised Deep Learning analysis of stellar spectra catalogs, this work describes a method to generate binaural multimodal representations of spectra using variational autoencoders. It includes the implementation and evaluation of an experimental prototype built around the case study of the STELIB stellar library from the Spanish Virtual Observatory (SVO), demonstrating the potential of the proposed pipeline to incorporate AI techniques and Sound Spatialization into astronomical data sonifications.
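The pipeline described above can be illustrated with a minimal, hypothetical sketch: a VAE-style encoder (here a randomly initialized linear layer with the reparameterization trick, standing in for the trained network) compresses a spectrum into a low-dimensional latent vector, whose coordinates are then mapped to binaural sound parameters (pitch and left/right pan). All function names, dimensions, and parameter mappings below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(spectrum, n_latent=2):
    """VAE-style encoding: linear projection to (mu, log-variance),
    then the reparameterization trick z = mu + sigma * eps.
    Weights are random here; a real pipeline would use trained ones."""
    n = spectrum.size
    w_mu = rng.normal(scale=1.0 / np.sqrt(n), size=(n_latent, n))
    w_logvar = rng.normal(scale=1.0 / np.sqrt(n), size=(n_latent, n))
    mu = w_mu @ spectrum
    logvar = w_logvar @ spectrum
    eps = rng.standard_normal(n_latent)
    return mu + np.exp(0.5 * logvar) * eps

def binaural_tone(z, sr=22050, dur=0.5):
    """Illustrative mapping of latent coordinates to a stereo tone:
    z[0] controls pitch (octaves around A4), z[1] controls pan."""
    t = np.arange(int(sr * dur)) / sr
    freq = 440.0 * 2.0 ** np.clip(z[0], -2, 2)
    pan = (np.tanh(z[1]) + 1.0) / 2.0      # 0 = full left, 1 = full right
    tone = np.sin(2 * np.pi * freq * t)
    return np.stack([(1.0 - pan) * tone, pan * tone], axis=0)

# Toy "spectrum": a Gaussian emission feature near the H-alpha wavelength.
wavelengths = np.linspace(3500, 7500, 1024)
spectrum = np.exp(-0.5 * ((wavelengths - 6563.0) / 50.0) ** 2)

z = encode(spectrum)
audio = binaural_tone(z)
print(z.shape, audio.shape)
```

In a full implementation, the encoder would be trained on the whole STELIB catalog so that the latent space organizes spectra by physical similarity, making the resulting binaural cues meaningful across the library rather than arbitrary.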