

Neural Additive Models for Location Scale and Shape: A Framework for Interpretable Neural Regression Beyond the Mean

Anton Thielmann · René-Marcel Kruse · Thomas Kneib · Benjamin Säfken

MR1 & MR2 - Number 163
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks, making them the go-to method for problems requiring high-level predictive power. Despite this success, the inner workings of DNNs are often not transparent, making them difficult to interpret or understand. This lack of interpretability has led to increased research on inherently interpretable neural networks in recent years. Models such as Neural Additive Models (NAMs) achieve visual interpretability through the combination of classical statistical methods with DNNs. However, these approaches concentrate only on mean response predictions, leaving out other properties of the response distribution of the underlying data. We propose Neural Additive Models for Location Scale and Shape (NAMLSS), a modelling framework that combines the predictive power of classical deep learning models with the inherent advantages of distributional regression, while maintaining the interpretability of additive models. The code is available at the following link: \url{}
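The core idea sketched in the abstract can be illustrated with a small NumPy example: one subnetwork per input feature, whose outputs are summed additively to produce not just a mean but several distribution parameters (here location and scale of a Gaussian), trained against the negative log-likelihood. This is a minimal sketch under assumed parameter shapes, not the authors' implementation; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_net(x, W1, b1, W2, b2):
    # Tiny per-feature subnetwork: scalar input -> hidden layer -> 2 outputs,
    # one contribution to the location parameter and one to the (log) scale.
    h = np.tanh(x[:, None] * W1 + b1)  # (n, hidden)
    return h @ W2 + b2                 # (n, 2)

n, p, hidden = 100, 3, 8
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# One independent subnetwork per feature (hypothetical initialisation).
params = [
    (rng.normal(size=hidden), rng.normal(size=hidden),
     rng.normal(size=(hidden, 2)) * 0.1, np.zeros(2))
    for _ in range(p)
]

# Additive structure: each distribution parameter is a sum of per-feature
# shape functions, which is what makes the model visually interpretable.
contrib = np.stack([feature_net(X[:, j], *params[j]) for j in range(p)])  # (p, n, 2)
theta = contrib.sum(axis=0)                  # (n, 2)
mu = theta[:, 0]                             # location
sigma = np.exp(theta[:, 1])                  # exp link keeps the scale positive

# Gaussian negative log-likelihood: the objective optimised instead of a
# plain squared error on the mean, so the scale is modelled as well.
nll = 0.5 * np.mean(np.log(2 * np.pi * sigma**2) + (y - mu)**2 / sigma**2)
```

Plotting each feature's summed contribution `contrib[j, :, 0]` against `X[:, j]` recovers the per-feature shape functions that give additive models their interpretability.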
