

Poster

Learning Policies for Localized Interventions from Observational Data

Myrl Marmarelis · Fred Morstatter · Aram Galstyan · Greg Ver Steeg

MR1 & MR2 - Number 46
Sat 4 May 6 a.m. PDT — 8:30 a.m. PDT
 
Oral presentation: Oral: Bandit & Causality
Fri 3 May 1:30 a.m. PDT — 2:30 a.m. PDT

Abstract:

A largely unaddressed problem in causal inference is that of learning reliable policies over continuous, high-dimensional treatment variables from observational data. Especially in the presence of strong confounding, it can be infeasible to learn the entire heterogeneous response surface from treatment to outcome. Nor is it particularly useful when practical constraints limit the size of the interventions altering the observational treatments. Since it tends to be easier to learn the outcome for treatments near existing observations, we propose a new framework for evaluating and optimizing the effect of small, tailored, and localized interventions that nudge the observed treatment assignments. Our doubly robust effect estimator plugs into a policy learner that stays within the interventional scope via optimal transport. Consequently, the error of the total policy effect is restricted to prediction errors near the observational distribution, rather than over the whole response surface.
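To make the structure of such an estimator concrete, here is a minimal sketch of a doubly-robust-style value estimate for a policy that nudges each observed continuous treatment by a small shift. This is an illustration of the general "plug-in plus weighted residual correction" form on synthetic data, not the paper's exact estimator; the linear outcome model, Gaussian propensity, and the constant shift `delta` are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: confounder x drives both treatment t and outcome y.
n = 2000
x = rng.normal(size=n)
t = 0.8 * x + rng.normal(scale=0.5, size=n)            # confounded treatment
y = 1.5 * t + 2.0 * x + rng.normal(scale=0.3, size=n)  # outcome

# Fit a simple outcome model mu(x, t) by least squares (stand-in for any regressor).
X = np.column_stack([np.ones(n), x, t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mu = lambda xv, tv: beta[0] + beta[1] * xv + beta[2] * tv

# Localized intervention: nudge every observed treatment by a small delta.
delta = 0.1
t_new = t + delta

# Importance weight for the shifted policy under a fitted Gaussian propensity
# t | x ~ N(alpha * x, sigma^2): w_i = f(t_i - delta | x_i) / f(t_i | x_i).
alpha = np.sum(x * t) / np.sum(x * x)
sigma = (t - alpha * x).std()
log_w = ((t - alpha * x) ** 2 - (t - delta - alpha * x) ** 2) / (2 * sigma**2)
w = np.exp(log_w)

# Doubly-robust-style value: plug-in term under the nudged treatments,
# plus a weighted residual correction from the observed outcomes.
value_dr = np.mean(mu(x, t_new) + w * (y - mu(x, t)))
value_plugin = np.mean(mu(x, t_new))
print(value_dr, value_plugin)
```

Because the shift is small, the outcome model is only ever queried near observed treatments, which is the sense in which the estimation error stays local to the observational distribution.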
