

Poster

Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory

Maximilian Muschalik · Eyke Hüllermeier · Fabian Fumagalli · Danqi Liao


Abstract:

Feature-based explanations, using perturbations or gradients, are a prevalent tool to understand decisions of black box machine learning models. Yet, the differences between these methods remain largely unknown, which limits their applicability for practitioners. In this work, we introduce a unified framework for local and global feature-based explanations using two well-established concepts: functional ANOVA (fANOVA) from statistics, and the notion of value and interaction from cooperative game theory. We propose three fANOVA decompositions that determine the influence of feature distributions, and use game-theoretic measures, such as the Shapley value and interactions, to specify the influence of higher-order interactions. Our framework combines these two dimensions to uncover similarities and differences between a wide range of explanation techniques for features and groups of features. We then empirically showcase the usefulness of our framework on synthetic and real-world datasets.
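
For reference, the two standard constructions the abstract builds on can be sketched as follows; the notation here is generic and not taken from the paper, and the paper's three specific decompositions and interaction indices are not reproduced. The functional ANOVA decomposition writes a model f as a sum of components indexed by feature subsets,

    f(x) = \sum_{S \subseteq \{1, \dots, d\}} f_S(x_S),

where each component f_S depends only on the features in S and is made unique by orthogonality conditions with respect to a chosen feature distribution. The Shapley value of a cooperative game \nu over players N = \{1, \dots, n\} is

    \phi_i(\nu) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( \nu(S \cup \{i\}) - \nu(S) \bigr),

which attributes to feature i its average marginal contribution over all orderings of the features.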
