

Convolutional Persistence as a Remedy to Neural Model Analysis

Ekaterina Khramtsova · Guido Zuccon · Xi Wang · Mahsa Baktashmotlagh

Auditorium 1 Foyer 42


While deep neural networks are proven to be effective learning systems, their analysis is complex due to the high dimensionality of their weight space. To remedy this, persistent topological properties can be used as an additional descriptor, providing insight into how the network weights evolve during training. In this paper, we focus on convolutional neural networks and define the topology of the space populated by their convolutional filters (i.e., kernels). We perform an extensive analysis of the topological properties of these filters. Specifically, we define a metric based on persistent homology, namely the Convolutional Topology Representation, to estimate an important factor in neural network training: the generalizability of the model to the test set. We further analyse how various training methods affect the topology of the convolutional layers.
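The abstract's core ingredient is persistent homology over the point cloud formed by a layer's (flattened) convolutional filters. As a minimal, stdlib-only illustration of that idea (not the authors' Convolutional Topology Representation), the sketch below computes zero-dimensional persistence of a Vietoris–Rips filtration: each filter is born at scale 0, and a connected component dies at the distance where it merges with another, so the finite death times are exactly the edge weights of a minimum spanning tree over pairwise distances.

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Zero-dimensional persistent homology of a Vietoris-Rips filtration.

    Returns the finite death times of connected components, found by
    processing edges in order of length with union-find (a Kruskal-style
    MST pass). One component never dies and is omitted.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # edge merges two components:
            parent[ri] = rj   # one of them dies at scale w
            deaths.append(w)
    return deaths

# Hypothetical example: three flattened 2x2 "filters" as points in R^4.
filters = [
    [1.0, 0.0, 0.0, 0.0],
    [1.1, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
]
print(h0_persistence(filters))  # two close filters merge early, the third late
```

In practice one would compute such diagrams per layer with a dedicated TDA library and track how they change over training; this sketch only shows the filtration logic on toy data.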
