

Tackling the XAI Disagreement Problem with Regional Explanations

Gabriel Laberge · Yann Batiste Pequignot · Mario Marchand · Foutse Khomh

MR1 & MR2 - Number 94
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


The XAI Disagreement Problem concerns the fact that various explainability methods yield different local/global insights on model behavior. Given the lack of ground truth in explainability, practitioners are thus left wondering "Which explanation should I believe?". In this work, we approach the Disagreement Problem from the point of view of Functional Decomposition (FD). First, we demonstrate that many XAI techniques disagree because they handle feature interactions differently. Second, we reduce interactions locally by fitting a so-called FD-Tree, which partitions the input space into regions where the model is approximately additive. Thus, instead of providing global explanations aggregated over the whole dataset, we advocate reporting the FD-Tree structure as well as the regional explanations extracted from its leaves. The beneficial effects of FD-Trees on the Disagreement Problem are demonstrated on toy and real datasets.
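To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of a one-level FD-Tree-style split: we estimate how far a model is from additive on a region using a first-order ANOVA-style decomposition (valid under approximately independent features), then greedily pick the axis-aligned split that most reduces the residual interaction in the two child regions. All function names and the threshold grid are illustrative assumptions.

```python
import numpy as np

def additive_residual(f, X):
    # Estimate the non-additive part of f on sample X via a first-order
    # ANOVA-style decomposition (assumes roughly independent features):
    #   f_add(x) = f0 + sum_j (E[f | x_j] - f0)
    # The mean squared residual f - f_add measures interaction strength.
    n, d = X.shape
    preds = f(X)
    f0 = preds.mean()
    add = np.full(n, f0)
    for j in range(d):
        cond = np.empty(n)
        for i in range(n):
            Xj = X.copy()
            Xj[:, j] = X[i, j]          # clamp feature j to its i-th value
            cond[i] = f(Xj).mean()      # Monte-Carlo estimate of E[f | x_j]
        add += cond - f0
    return np.mean((preds - add) ** 2)

def fd_tree_split(f, X, n_thresholds=5):
    # Greedy one-level FD-Tree: choose the axis-aligned split that most
    # reduces the sample-weighted additive residual of the child regions.
    # Returns (best_score, (feature, threshold)) or (score, None) if no
    # split beats the unsplit region.
    n, d = X.shape
    best_score, best_split = additive_residual(f, X), None
    for j in range(d):
        for t in np.quantile(X[:, j], np.linspace(0.2, 0.8, n_thresholds)):
            left, right = X[X[:, j] <= t], X[X[:, j] > t]
            if len(left) < 10 or len(right) < 10:
                continue
            score = (len(left) * additive_residual(f, left)
                     + len(right) * additive_residual(f, right)) / n
            if score < best_score:
                best_score, best_split = score, (j, t)
    return best_score, best_split

# Toy model with a strong interaction: f(x) = x1 * 1[x2 > 0].
# Splitting on x2 near 0 makes each region purely additive
# (f = x1 on one side, f = 0 on the other).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
f = lambda A: A[:, 0] * (A[:, 1] > 0)
base = additive_residual(f, X)
score, split = fd_tree_split(f, X)
```

After the split, explanations (e.g., feature attributions) computed separately in each leaf no longer need to distribute the interaction between x1 and x2, which is precisely the source of disagreement between methods that the abstract describes.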
