

Poster

Learning Sparse Codes with Entropy-Based ELBOs

Dmytro Velychko · Simon Damm · Asja Fischer · Jörg Lücke

Multipurpose Room 2 - Number 172

Abstract:

Standard probabilistic sparse coding assumes a Laplace prior, a linear mapping from latents to observables, and Gaussian observable distributions. We here derive a solely entropy-based learning objective for the parameters of standard sparse coding. The novel variational objective has the following features: (A) unlike MAP approximations, it uses non-trivial posterior approximations for probabilistic inference; (B) the novel objective is fully analytic; and (C) the objective allows for a novel principled form of annealing. The objective is derived by first showing that the standard ELBO objective converges to a sum of entropies, which matches similar recent results for generative models with Gaussian priors. The conditions under which the ELBO becomes equal to entropies are then shown to have analytic solutions, which leads to the fully analytic objective. Numerical experiments are used to demonstrate the feasibility of learning with such entropy-based ELBOs. We investigate different posterior approximations, including Gaussians with correlated latents and deep amortized approximations. Furthermore, we numerically investigate entropy-based annealing, which results in improved learning. Our main contributions are theoretical, however, and they are twofold: (1) we provide the first demonstration of how a recently shown convergence of the ELBO to entropy sums can be used for learning; and (2) using the entropy objective, we derive a fully analytic ELBO objective for the standard sparse coding generative model.
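The standard sparse coding model described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: all dimensions and parameter values (`H`, `D`, `N`, `W`, `b`, `sigma2`) are hypothetical, and the two closed-form entropies shown at the end are only examples of the kind of analytic terms an entropy-based objective is composed of.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and parameters (illustrative, not from the paper)
H, D, N = 8, 16, 500             # latent dim, observed dim, number of samples
W = rng.normal(size=(D, H))      # linear mapping from latents to observables
b = 1.0                          # scale of the Laplace prior
sigma2 = 0.1                     # variance of the Gaussian observation noise

# Generative model: z ~ Laplace(0, b), x | z ~ N(W z, sigma2 * I)
Z = rng.laplace(loc=0.0, scale=b, size=(N, H))
X = Z @ W.T + rng.normal(scale=np.sqrt(sigma2), size=(N, D))

# Closed-form differential entropies of the model distributions,
# illustrating the analytic terms an entropy-based ELBO is built from:
h_prior = H * (1.0 + np.log(2.0 * b))                    # Laplace prior entropy
h_noise = 0.5 * D * np.log(2.0 * np.pi * np.e * sigma2)  # Gaussian noise entropy
```

Because both entropies have closed forms in the model parameters, an objective expressed as a sum of such entropies is fully analytic, which is the property (B) the abstract highlights.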
