

Efficient interventional distribution learning in the PAC framework

Arnab Bhattacharyya · Sutanu Gayen · Saravanan Kandasamy · Vedant Raval · Vinodchandran N. Variyam


We consider the problem of efficiently inferring interventional distributions in a causal Bayesian network from a finite number of observations. Let P be a causal model on a set V of observable variables with a given causal graph G. For sets X, Y ⊆ V and an assignment x to the variables in X, let P_x(Y) denote the interventional distribution on Y with respect to the intervention setting X to x. Shpitser and Pearl (AAAI 2006), building on the work of Tian and Pearl (AAAI 2001), proved that the ID algorithm is sound and complete for recovering P_x(Y) from observations.

We give the first provably efficient version of the ID algorithm. In particular, under natural assumptions, we give a polynomial-time algorithm that, on input a causal graph G on observable variables V and an assignment x to a set X ⊆ V of bounded size, outputs succinct descriptions of both an evaluator and a generator for a distribution P̂ that is ε-close (in total variation distance) to P_x(Y), where Y = V \ X, whenever P_x(Y) is identifiable. We also show that when Y is an arbitrary subset of V \ X, there is no efficient algorithm that outputs an evaluator of a distribution that is ε-close to P_x(Y), unless all problems that have statistical zero-knowledge proofs, including the Graph Isomorphism problem, have efficient randomized algorithms.
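To make the notion of an interventional distribution concrete, here is a minimal sketch of computing P_x(Y) by truncated factorization in a tiny, fully observed causal Bayesian network. This toy example (the graph Z → X → Y with Z → Y, all variables binary, and made-up CPT values) is purely illustrative and is not the paper's ID algorithm, which handles the harder case of hidden confounders.

```python
# Toy causal Bayesian network Z -> X -> Y, Z -> Y (all binary).
# All CPT values below are illustrative assumptions, not from the paper.

# CPTs: P(Z), P(X | Z), P(Y | X, Z)
p_z = {0: 0.6, 1: 0.4}
p_y_given_xz = {  # key: (y, x, z)
    (0, 0, 0): 0.9, (1, 0, 0): 0.1,
    (0, 1, 0): 0.4, (1, 1, 0): 0.6,
    (0, 0, 1): 0.5, (1, 0, 1): 0.5,
    (0, 1, 1): 0.2, (1, 1, 1): 0.8,
}

def interventional_y(x):
    """Evaluator for P_x(Y): under do(X = x), drop the factor P(X | Z)
    for the intervened variable and marginalize out Z:
        P_x(y) = sum_z P(z) * P(y | x, z).
    """
    return {y: sum(p_z[z] * p_y_given_xz[(y, x, z)] for z in (0, 1))
            for y in (0, 1)}

dist = interventional_y(1)
print(dist)  # {0: 0.32, 1: 0.68} -- a valid distribution, sums to 1
```

Note that P_x(Y) generally differs from the conditional P(Y | X = x) because Z confounds X and Y; the truncated factorization severs the edge Z → X. In a network with unobserved variables, this simple product formula no longer applies, which is where the ID algorithm (and the paper's efficient, finite-sample version of it) comes in.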
