

On Feynman–Kac training of partial Bayesian neural networks

Zheng Zhao · Sebastian Mair · Thomas Schön · Jens Sjölund

MR1 & MR2 - Number 151
Poster: Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


Recently, partial Bayesian neural networks (pBNNs), which treat only a subset of the parameters as stochastic, were shown to perform competitively with full Bayesian neural networks. However, pBNNs are often multi-modal in the latent-variable space and thus challenging to approximate with parametric models. To address this problem, we propose an efficient sampling-based training strategy in which the training of a pBNN is formulated as simulating a Feynman–Kac model. We then describe variations of sequential Monte Carlo samplers that allow us to simultaneously estimate the parameters and the latent posterior distribution of this model at a tractable computational cost. Using various synthetic and real-world datasets, we show that our proposed training scheme outperforms the state of the art in terms of predictive performance.
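To make the abstract's idea concrete, the following is a minimal toy sketch of the general scheme it describes: maintain a particle approximation of the posterior over the stochastic (latent) parameters, reweight the particles by the data likelihood (the Feynman–Kac potential), resample when the weights degenerate, and interleave gradient steps on the deterministic parameters using the particle estimate of the expectation. The model here (a one-dimensional regression with a stochastic bias `b` and a deterministic slope `a`), the jitter scale, and the learning rate are all illustrative assumptions, not the paper's actual algorithm or network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: y = a*x + b + noise, where the bias `b` is the
# stochastic (latent) parameter and the slope `a` is deterministic.
x = rng.normal(size=50)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=50)

n_particles = 200
noise_std = 0.1
particles = rng.normal(size=n_particles)      # samples of the latent b
w = np.full(n_particles, 1.0 / n_particles)   # normalized particle weights
a, lr = 0.0, 0.05                             # deterministic parameter, step size

for step in range(100):
    # 1) Reweight particles by the data likelihood (Feynman-Kac potential).
    resid = y[:, None] - (a * x[:, None] + particles[None, :])
    log_w = -0.5 * np.sum(resid**2, axis=0) / noise_std**2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # 2) Resample (with a small jitter move) when the effective sample
    #    size degenerates below half the particle count.
    if 1.0 / np.sum(w**2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + 0.01 * rng.normal(size=n_particles)
        w = np.full(n_particles, 1.0 / n_particles)

    # 3) Gradient ascent on the deterministic parameter `a`, with the
    #    expectation over the latent b approximated by the particles.
    resid = y[:, None] - (a * x[:, None] + particles[None, :])
    a += lr * np.sum(w * np.mean(resid * x[:, None], axis=0))

b_est = float(np.sum(w * particles))
print(f"a ~ {a:.2f}, E[b] ~ {b_est:.2f}")  # should approach a=2.0, b=1.0
```

The division of labor is the point: the latent posterior is represented non-parametrically by weighted particles (so multi-modality is not a problem), while the remaining parameters are fitted by ordinary stochastic-gradient steps against the particle estimate of the marginal likelihood.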
