Poster
Bounding Evidence and Estimating Log-Likelihood in VAE
Łukasz Struski · Marcin Mazur · Paweł Batorski · Przemysław Spurek · Jacek Tabor
Auditorium 1 Foyer 129
Many crucial problems in deep learning and statistical inference are caused by the variational gap, i.e., the difference between the model evidence (log-likelihood) and the evidence lower bound (ELBO). In particular, in the classical VAE setting, where models are trained via an ELBO cost function, it is difficult to compare the effects of training across models robustly, since we know only a lower bound on the log-likelihood of the data, not the log-likelihood itself. In this paper, to deal with this problem, we introduce a general and effective upper bound that allows us to efficiently approximate the evidence of the data. We provide extensive theoretical and experimental studies of our approach, including a comparison with other state-of-the-art upper bounds, as well as its application as a tool for evaluating models trained on various lower bounds.
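To make the variational gap concrete, here is a minimal sketch in a toy linear-Gaussian model where the evidence log p(x) is tractable in closed form, so the gap between it and a Monte Carlo ELBO can be computed exactly. This is a hypothetical illustration of the gap itself, not the upper bound proposed in the paper; all model choices (prior, likelihood, and the deliberately mismatched variational posterior) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption, not the paper's method):
# prior p(z) = N(0, 1), likelihood p(x|z) = N(z, sigma2).
sigma2 = 0.5
x = 1.3  # a single observed data point

# Exact evidence: marginally, p(x) = N(0, 1 + sigma2).
log_evidence = -0.5 * (np.log(2 * np.pi * (1 + sigma2)) + x**2 / (1 + sigma2))

# A deliberately mismatched Gaussian variational posterior q(z|x) = N(mu_q, s2),
# so that the variational gap KL(q(z|x) || p(z|x)) is strictly positive.
mu_q, s2 = 0.5 * x, 0.8

# Monte Carlo ELBO: E_q[log p(x, z) - log q(z|x)].
z = rng.normal(mu_q, np.sqrt(s2), size=100_000)
log_p_xz = (-0.5 * (np.log(2 * np.pi * sigma2) + (x - z) ** 2 / sigma2)  # log p(x|z)
            - 0.5 * (np.log(2 * np.pi) + z**2))                          # log p(z)
log_q = -0.5 * (np.log(2 * np.pi * s2) + (z - mu_q) ** 2 / s2)
elbo = np.mean(log_p_xz - log_q)

# The variational gap equals KL(q(z|x) || p(z|x)) and is always >= 0,
# which is exactly why the ELBO alone cannot rank models by log-likelihood.
gap = log_evidence - elbo
print(f"log p(x) = {log_evidence:.4f}, ELBO = {elbo:.4f}, gap = {gap:.4f}")
```

Because the gap depends on how well q matches the true posterior, two models with the same ELBO can have different evidences, which is the motivation for pairing the ELBO with an upper bound on log p(x).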