2025-10-02 – Coffee room
Language: English
Graph Neural Networks (GNNs) are powerful models for learning from graph-structured data, but their performance can depend critically on how message passing is implemented. In this talk, I will present my Google Summer of Code project with the Julia organization, aimed at accelerating GNNs by improving support for sparse computations on GPUs. By leveraging sparse-dense matrix multiplications—rather than the traditional gather-scatter paradigm—for layers such as graph convolutions, we can reduce memory overhead and improve performance. These enhancements are integrated into the GraphNeuralNetworks.jl package, helping bring Julia’s GNN ecosystem closer to state-of-the-art frameworks in other languages.
Graph Neural Networks (GNNs) are increasingly used in domains such as recommendation systems, bioinformatics, and social network analysis. In Julia, the GraphNeuralNetworks.jl package provides a composable and elegant framework for experimenting with GNN architectures. However, performance limitations—particularly on GPUs—can hinder scalability in real-world applications.
This talk presents my Google Summer of Code project, which focuses on improving the performance of key GNN layers by using sparse-dense matrix multiplication instead of the classical gather-scatter message-passing scheme. The gather-scatter approach materializes one message per edge and therefore often creates large intermediate tensors, whereas sparse matrix operations can fuse computation and aggregation more efficiently. These optimizations are especially relevant for GPU acceleration, where memory overhead is a significant concern.
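To make the contrast concrete, here is a minimal CPU sketch of the two schemes on a toy graph, using only Julia's standard SparseArrays library (the actual GraphNeuralNetworks.jl and CUDA.jl implementations differ; the edge list and feature matrix below are illustrative):

```julia
using SparseArrays

# Toy directed graph: 4 nodes, edges given as (src, dst) pairs.
src = [1, 2, 3, 4, 2]
dst = [2, 3, 4, 1, 4]
n = 4
X = rand(n, 3)                      # node features, one row per node

# Gather-scatter message passing: gather one message per edge,
# then scatter-add each message into its destination row.
msgs = X[src, :]                    # gather: an |E| × d intermediate tensor
out_gs = zeros(n, 3)
for (e, d) in enumerate(dst)
    out_gs[d, :] .+= msgs[e, :]     # scatter: sum-aggregate by destination
end

# Sparse-dense formulation: the same sum-aggregation is A * X, where
# A[d, s] = 1 for each edge s → d. No |E| × d intermediate is allocated.
A = sparse(dst, src, ones(length(src)), n, n)
out_sp = A * X

out_gs ≈ out_sp                     # both schemes agree
```

On a GPU, the sparse-dense product maps onto a single sparse matrix multiplication kernel, which is where the memory and performance gains discussed in the talk come from.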
The project involves integrating these performance improvements into GraphNeuralNetworks.jl, benchmarking them, and making them accessible to the broader Julia community. While the project builds on existing CUDA.jl capabilities, it also explores how best to utilize them in practice for GNN workloads.
This talk will cover the following topics:
- GNNs and efficient message-passing strategies
- Integration of sparse GPU operations into GraphNeuralNetworks.jl
- Benchmarking insights and future directions
I'm a PhD student in Computer Science at Université Côte d’Azur, working within the COATI team, a joint group between Inria and CNRS. My research focuses on machine learning and graph theory, and I enjoy working at the intersection of theory and practice. As a 2025 Google Summer of Code contributor with the Julia organization, I'm working on integrating GPU-accelerated sparse operations into the GraphNeuralNetworks.jl package to support more efficient implementations of graph neural network layers.