Teaching Neural Networks a Sense of Geometry
04-19, 10:50–11:20 (Europe/Berlin), B05-B06

By taking neural networks back to the school bench and teaching them some elements of geometry and topology, we can build algorithms that can reason about the shape of data. Surprisingly, these methods are useful not only in computer vision, where they model input data such as images or point clouds through global, robust properties, but in a wide range of applications, such as evaluating and improving learned embeddings or the distribution of samples produced by generative models. This is the promise of the emerging field of Topological Data Analysis (TDA), which we will introduce before reviewing recent work at its intersection with machine learning. TDA can be seen as part of the increasingly popular movement of Geometric Deep Learning, which encourages us to go beyond seeing data only as vectors in Euclidean spaces and instead consider machine learning algorithms that encode other geometric priors. In the past couple of years, TDA has started to step out of the academic bubble, to a large extent thanks to powerful Python libraries written as extensions to scikit-learn or PyTorch.
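As a taste of what such library integrations can look like, here is a minimal, hedged sketch of a scikit-learn-style TDA pipeline using giotto-tda; the data is a hypothetical toy example, and transformer names and signatures may differ between library versions. The pipeline turns a collection of point clouds into persistence diagrams and then into feature vectors for a standard classifier.

```python
# Sketch only: a scikit-learn-compatible TDA pipeline with giotto-tda.
import numpy as np
from gtda.homology import VietorisRipsPersistence
from gtda.diagrams import PersistenceEntropy
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Toy data: 100 point clouds of 50 points in 2D, each with a binary label.
X = np.random.rand(100, 50, 2)
y = np.random.randint(0, 2, size=100)

pipeline = make_pipeline(
    VietorisRipsPersistence(homology_dimensions=(0, 1)),  # point clouds -> persistence diagrams
    PersistenceEntropy(),                                  # diagrams -> feature vectors
    RandomForestClassifier(),                              # any scikit-learn estimator
)
pipeline.fit(X, y)
```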


Researchers have hypothesised that a sense of geometry is something that sets the intelligence of humans apart from that of other animals. This intriguing hypothesis suggests that geometric reasoning is an interesting direction for AI.
How can we incorporate geometric concepts into deep learning? We can tap into the mathematical fields of geometry and topology and adapt their methods to data analysis and machine learning. This is the aim of Topological Data Analysis.
Starting from hierarchical clustering, which many data scientists are familiar with, we gently introduce a basic TDA construction: we cluster a data set at a range of thresholds and record when clusters (an example of a topological feature) are created and destroyed, building a topological summary of these events.
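To make the connection concrete, here is a minimal sketch, using only NumPy and SciPy on a hypothetical toy point cloud, of how this 0-dimensional topological summary can be read off from single-linkage hierarchical clustering: every connected component is born at threshold 0 and dies at the distance at which it merges into another component.

```python
# Sketch only: 0-dimensional topological summary from single-linkage clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
points = rng.normal(size=(30, 2))       # toy point cloud

# Each row of Z records a merge of two clusters and the distance
# threshold at which the merge happens.
Z = linkage(points, method="single")

# Every component is "born" at threshold 0 and "dies" at its merge distance.
deaths = Z[:, 2]
persistence_pairs = [(0.0, d) for d in deaths]
print(persistence_pairs[:5])
```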
We then look at a few examples where these methods can be useful:
- In neuroscience, we can use these methods to model neuronal or glial trees, capturing the properties of their important branching structures and respecting the invariances these objects have.
- In image segmentation, we would like to teach a neural network to take the shape of the segmentation masks into consideration, since classical loss functions cannot account for such global properties.
- For dimensionality reduction, minimising a reconstruction loss is not enough; instead, we would like to make sure that the shape of the original data set and that of its dimensionality-reduced version are similar (see the sketch after this list).
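For the dimensionality-reduction example, the following is a hedged sketch of a shape-aware regulariser in the spirit of, but not identical to, published topological autoencoder methods: the edge lengths of a minimum spanning tree summarise the connectivity of a batch, and the latent representation is penalised when it distorts those edge lengths. The function name and the way it is combined with a reconstruction loss are illustrative assumptions.

```python
# Sketch only: a shape-aware regulariser comparing MST edge lengths
# between the input batch and its latent representation.
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def topological_loss(x, z):
    """x: input batch (n, d_in); z: latent batch (n, d_latent)."""
    dist_x = torch.cdist(x, x)
    dist_z = torch.cdist(z, z)

    # MST of the input point cloud, computed outside the autograd graph.
    mst = minimum_spanning_tree(dist_x.detach().cpu().numpy()).tocoo()
    rows = torch.as_tensor(mst.row, dtype=torch.long)
    cols = torch.as_tensor(mst.col, dtype=torch.long)

    # Penalise disagreement of the selected edge lengths in both spaces.
    return ((dist_x[rows, cols] - dist_z[rows, cols]) ** 2).mean()

# Hypothetical usage inside a training loop:
# loss = reconstruction_loss + lam * topological_loss(x, encoder(x))
```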


Expected audience expertise: Domain

Intermediate

Expected audience expertise: Python

Novice

Abstract as a tweet

By taking neural networks back to the school bench and teaching them some elements of geometry and topology, we can build algorithms that can reason about the shape of data. This is the promise of the emerging field of Topological Data Analysis (TDA), which we will introduce!

Jens is pursuing a PhD in Machine Learning and Topological Data Analysis at KTH Royal Institute of Technology in Stockholm, Sweden, while also working as a data scientist at Ericsson.
He believes that an important property setting humans apart from other animals is our sense of geometry and topology. Teaching computers geometric recognition and reasoning is thus a promising direction if we want to develop more powerful AI.