

Poster

Learning Bijective Feature Maps for Linear ICA

Alexander Camuto · Matthew Willetts · Chris Holmes · Brooks Paige · Stephen Roberts

Keywords: [ Deep Learning ] [ Neuroscience and Cognitive Science ] [ Problem Solving ] [ Algorithms -> Classification; Deep Learning -> Predictive Models; Neuroscience and Cognitive Science ] [ Human or Animal Learning ] [ Generative Models and Autoencoders ]


Abstract:

Separating high-dimensional data like images into independent latent factors, i.e. independent component analysis (ICA), remains an open research problem. As we show, existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks. To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data. Given the complexities of jointly training such a hybrid model, we introduce novel theory that constrains linear ICA to lie close to the manifold of orthogonal rectangular matrices, the Stiefel manifold. By doing so we create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
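To make the Stiefel-manifold idea concrete, below is a minimal sketch (not the authors' implementation) of one common way to keep a rectangular unmixing matrix W close to the manifold of orthogonal rectangular matrices: add a soft orthogonality penalty ||W W^T - I||_F^2 to the training loss. The function name, the surrounding model wiring (bijective feature map plus linear ICA head), and the weight `lam` are illustrative assumptions, not details taken from the paper.

import torch

def stiefel_penalty(W: torch.Tensor) -> torch.Tensor:
    """Frobenius-norm distance of W W^T from the identity.

    W has shape (k, d) with k <= d; the penalty is zero exactly when the
    rows of W are orthonormal, i.e. W lies on the Stiefel manifold.
    """
    k = W.shape[0]
    gram = W @ W.T  # (k, k) Gram matrix of the rows of W
    return ((gram - torch.eye(k, device=W.device)) ** 2).sum()

# Hypothetical usage inside a training step:
#   loss = reconstruction_loss + ica_prior_term + lam * stiefel_penalty(W)
# where `lam` trades off closeness to the Stiefel manifold against the
# other terms of the objective.

A soft penalty of this kind lets standard gradient-based optimizers keep W near the manifold without requiring an explicit Riemannian update; the paper's own constraint may differ in form.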
