

Self-Compatibility: Evaluating Causal Discovery without Ground Truth

Philipp M. Faller · Leena C Vankadara · Atalanti Mastakouri · Francesco Locatello · Dominik Janzing

MR1 & MR2 - Number 41
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT


As causal ground truth is incredibly rare, causal discovery algorithms are commonly evaluated only on simulated data. This is concerning, given that simulations reflect preconceptions about the generating process, such as noise distributions and model classes. In this work, we propose a novel method for falsifying the output of a causal discovery algorithm in the absence of ground truth. Our key insight is that while statistical learning seeks stability across subsets of data points, causal learning should seek stability across subsets of variables. Motivated by this insight, our method relies on a notion of compatibility between causal graphs learned on different subsets of variables. We prove that detecting incompatibilities can falsify wrongly inferred causal relations due to violated assumptions or finite-sample errors. Although passing such compatibility tests is only a necessary criterion for good performance, we argue that it provides strong evidence for the causal models whenever compatibility entails strong implications for the joint distribution. We also demonstrate experimentally that detection of incompatibilities can aid in causal model selection.
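To make the idea of stability across variable subsets concrete, here is a minimal, self-contained sketch. It is not the paper's actual compatibility notion: the discovery routine is a crude PC-style skeleton search with conditioning sets of size at most one, and "compatibility" is proxied by agreement between the skeleton learned on a subset and the full-graph skeleton marginalized onto that subset (where marginalization here simply connects two subset variables if a path through hidden variables links them). All function names and thresholds are illustrative assumptions.

```python
import itertools
import random


def corr(x, y):
    """Pearson correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / ((vx * vy) ** 0.5 + 1e-12)


def discover_skeleton(data, variables, threshold=0.2):
    """Toy stand-in for a causal discovery algorithm (PC-style skeleton,
    conditioning sets of size <= 1): keep edge u-v unless the marginal
    correlation, or a partial correlation given some third variable in
    the set, falls below the threshold."""
    edges = set()
    for u, v in itertools.combinations(variables, 2):
        r_uv = corr(data[u], data[v])
        if abs(r_uv) <= threshold:
            continue
        separated = False
        for w in variables:
            if w in (u, v):
                continue
            r_uw, r_vw = corr(data[u], data[w]), corr(data[v], data[w])
            partial = (r_uv - r_uw * r_vw) / (
                ((1 - r_uw ** 2) * (1 - r_vw ** 2)) ** 0.5 + 1e-12)
            if abs(partial) <= threshold:
                separated = True
                break
        if not separated:
            edges.add(frozenset((u, v)))
    return edges


def marginalize(edges, variables, subset):
    """Simplified marginal skeleton on `subset`: u and v are adjacent if
    the full skeleton connects them by a path whose interior nodes all
    lie outside the subset."""
    adj = {v: set() for v in variables}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    out = set()
    for u, v in itertools.combinations(subset, 2):
        frontier, seen, found = [u], {u}, False
        while frontier and not found:
            node = frontier.pop()
            for nb in adj[node]:
                if nb == v:
                    found = True
                    break
                if nb not in seen and nb not in subset:
                    seen.add(nb)
                    frontier.append(nb)
        if found:
            out.add(frozenset((u, v)))
    return out


def incompatibility_score(data, variables, subsets):
    """Fraction of variable pairs on which the graph learned on a subset
    disagrees with the marginalized full graph."""
    full = discover_skeleton(data, variables)
    disagree = total = 0
    for S in subsets:
        sub = discover_skeleton(data, S)
        marg = marginalize(full, variables, S)
        for u, v in itertools.combinations(S, 2):
            total += 1
            e = frozenset((u, v))
            disagree += (e in sub) != (e in marg)
    return disagree / total


# Toy data from a chain X -> Y -> Z. Here no incompatibility should be
# detected: dropping Y correctly induces an X-Z adjacency in the marginal.
random.seed(0)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [a + random.gauss(0, 1) for a in x]
z = [b + random.gauss(0, 1) for b in y]
data = {"X": x, "Y": y, "Z": z}
score = incompatibility_score(data, ["X", "Y", "Z"],
                              [["X", "Y"], ["Y", "Z"], ["X", "Z"]])
print(score)
```

A nonzero score would flag that the algorithm's outputs on different variable subsets cannot stem from a single consistent causal model, which is the falsification signal the abstract describes; a zero score is only a necessary, not sufficient, indication of correctness.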
