

Offline Policy Evaluation and Optimization Under Confounding

Chinmaya Kausik · Yangyi Lu · Kevin Tan · Maggie Makar · Yixin Wang · Ambuj Tewari

MR1 & MR2 - Number 42
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


Evaluating and optimizing policies in the presence of unobserved confounders is a problem of growing interest in offline reinforcement learning. Using conventional methods for offline RL in the presence of confounding can not only lead to poor decisions and poor policies, but also have disastrous effects in critical applications such as healthcare and education. We map out the landscape of offline policy evaluation for confounded MDPs, distinguishing assumptions on confounding based on whether they are memoryless and on their effect on the data-collection policies. We characterize settings where consistent value estimates are provably not achievable, and provide algorithms with guarantees to instead estimate lower bounds on the value. When consistent estimates are achievable, we provide algorithms for value estimation with sample complexity guarantees. We also present new algorithms for offline policy improvement and prove local convergence guarantees. Finally, we experimentally evaluate our algorithms on both a gridworld environment and a simulated healthcare setting of managing sepsis patients. In gridworld, our model-based method provides tighter lower bounds than existing methods, while in the sepsis simulator, we demonstrate the effectiveness of our method and investigate the importance of a clustering sub-routine.
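The abstract's central idea, falling back to a lower bound on a policy's value when confounding makes consistent estimation impossible, can be illustrated with a simple sensitivity-model argument. The sketch below is not the paper's algorithm; it is a minimal Python illustration assuming a multiplicative sensitivity parameter `gamma_conf` that bounds how far the true, confounder-dependent behavior propensities can deviate from the nominal ones estimated from data, with hypothetical callables `target_policy` and `nominal_behavior` returning per-action probabilities.

```python
import numpy as np

def value_lower_bound(trajectories, target_policy, nominal_behavior,
                      gamma_conf, discount=0.99):
    """Conservative importance-sampling lower bound on a target policy's
    value from confounded offline data (illustrative sketch only).

    Sensitivity assumption: the true behavior propensity b(a|s,u), which
    may depend on an unobserved confounder u, satisfies
        b(a|s) / gamma_conf <= b(a|s,u) <= gamma_conf * b(a|s),
    where b(a|s) is the nominal propensity. Rewards are assumed
    nonnegative, so shrinking the importance weight as much as the
    model allows yields a valid lower bound on the value.
    """
    returns = []
    for traj in trajectories:  # traj: list of (state, action, reward)
        log_w_lo = 0.0  # log of the smallest admissible importance weight
        G = 0.0         # discounted return of the trajectory
        for t, (s, a, r) in enumerate(traj):
            pi = target_policy(s)[a]    # target propensity pi(a|s)
            b = nominal_behavior(s)[a]  # nominal behavior propensity b(a|s)
            # Worst case: the true behavior propensity is as LARGE as the
            # sensitivity model allows, minimizing the importance weight.
            log_w_lo += np.log(pi) - np.log(gamma_conf * b)
            G += (discount ** t) * r
        returns.append(np.exp(log_w_lo) * G)
    return float(np.mean(returns))
```

Accumulating the weights in log space avoids numerical underflow over long horizons, and the nonnegative-reward assumption is what makes "shrink the weight" the worst case; handling signed rewards, memoryful confounding, or model-based bounds of the kind the paper describes would require a more careful construction.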
