

Poster

Incorporating functional summary information in Bayesian neural networks using a Dirichlet process likelihood approach

Vishnu Raj · Tianyu Cui · Markus Heinonen · Pekka Marttinen

Auditorium 1 Foyer 126

Abstract:

Bayesian neural networks (BNNs) can account for both aleatoric and epistemic uncertainty. However, in BNNs the priors are usually specified over the weights, which rarely reflects true prior knowledge in large and complex neural network architectures. We present a simple approach to incorporating prior knowledge in BNNs, based on external summary information about the predicted classification probabilities for a given dataset. The available summary information is incorporated as augmented data and modeled with a Dirichlet process, and we derive the corresponding Summary Evidence Lower BOund. The approach is fully Bayesian, without heuristic tuning parameters, and all hyperparameters have a proper probabilistic interpretation. We show how the method can inform the model about task difficulty and class imbalance. Extensive experiments show that, with negligible computational overhead, our method matches, and in many cases outperforms, popular alternatives in accuracy, uncertainty calibration, and robustness against corruptions, on both balanced and imbalanced data.
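To make the augmented-data idea concrete, below is a minimal, hypothetical PyTorch sketch. It is not the paper's method: the paper models the summary information with a Dirichlet process and derives a Summary Evidence Lower BOund, whereas this sketch collapses that to a single Dirichlet log-likelihood over the network's predicted class probabilities, and uses MC dropout as a crude stand-in for a proper BNN posterior. All names here (TinyBNN, summary_term, the alpha values) are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Dirichlet

class TinyBNN(torch.nn.Module):
    """Hypothetical classifier; dropout stands in for weight uncertainty."""
    def __init__(self, d_in, n_classes):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d_in, 64), torch.nn.ReLU(),
            torch.nn.Dropout(0.5),
            torch.nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def summary_term(logits, alpha):
    """Dirichlet log-likelihood of predicted class probabilities.

    `alpha` encodes assumed external summary knowledge (e.g. class
    imbalance); the predictions are treated as augmented observations.
    """
    probs = F.softmax(logits, dim=-1).clamp_min(1e-6)
    probs = probs / probs.sum(-1, keepdim=True)  # renormalise after clamping
    return Dirichlet(alpha).log_prob(probs).mean()

# Toy 3-class problem; alpha asserts class 0 dominates (an assumption).
x = torch.randn(128, 10)
y = torch.randint(0, 3, (128,))
alpha = torch.tensor([6.0, 3.0, 1.0])

model = TinyBNN(10, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    logits = model(x)
    nll = F.cross_entropy(logits, y)        # data term of the objective
    loss = nll - summary_term(logits, alpha)  # augmented-data summary term
    loss.backward()
    opt.step()
```

In the paper the relative weight of the summary term falls out of the Dirichlet-process formulation rather than being hand-tuned; here the two terms are simply added, which is a deliberate simplification of that construction.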
