Poster

Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles

Aldo Carranza · Sanath Kumar Krishnamurthy · Susan Athey

Auditorium 1 Foyer 77

Abstract:

Contextual bandit algorithms often estimate reward models to inform decision-making. However, true rewards can contain action-independent redundancies that are irrelevant for decision-making. We show it is more data-efficient to estimate any function that explains the reward differences between actions, that is, the treatment effects. Motivated by this observation, and building on recent work on oracle-based bandit algorithms, we provide the first reduction of contextual bandits to general-purpose heterogeneous treatment effect estimation, and we design a simple and computationally efficient algorithm based on this reduction. Our theoretical and experimental results demonstrate that heterogeneous treatment effect estimation in contextual bandits offers practical advantages over reward estimation, including more efficient model estimation and greater robustness to model misspecification.
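The abstract's core observation can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it is a minimal, hypothetical two-action example in which the reward contains a complex action-independent component `f(x)` plus a simple treatment effect `tau(x)`. Under uniform exploration, a standard outcome transformation lets us fit `tau(x)` directly with a linear model, without ever modeling the redundant `f(x)`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-action contextual bandit (assumed setup, not the paper's).
# True reward: r(x, a) = f(x) + a * tau(x) + noise, where f(x) is an
# action-independent redundancy and tau(x) is the heterogeneous treatment effect.
def f(x):
    # Complex baseline component; irrelevant for choosing between actions.
    return np.sin(3 * x) + x ** 2

def tau(x):
    # Simple treatment effect: linear in the context.
    return 2.0 * x - 1.0

n = 2000
x = rng.uniform(0.0, 1.0, n)
a = rng.integers(0, 2, n)                       # uniform exploration policy
r = f(x) + a * tau(x) + rng.normal(0.0, 0.1, n)

# Transformed-outcome trick: with uniform exploration over {0, 1},
# E[2 * (2a - 1) * r | x] = tau(x), so regressing the transformed outcome
# on x recovers the treatment effect while f(x) only contributes mean-zero noise.
y = 2.0 * (2 * a - 1) * r
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]     # fit tau_hat(x) = b0 + b1 * x

def tau_hat(xx):
    return beta[0] + beta[1] * xx

def policy(xx):
    # Act (choose action 1) exactly when the estimated effect is positive.
    return (np.asarray(tau_hat(xx)) > 0).astype(int)
```

Note that only the one-dimensional effect `tau(x)` needs to be learned well enough to sign-rank the actions; the data requirement scales with the complexity of the treatment effect, not of the full reward surface.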