Debiasing Samples from Online Learning Using Bootstrap

Ningyuan Chen · Xuefeng Gao · Yi Xiong

Wed 30 Mar 3:30 a.m. PDT — 5 a.m. PDT
Oral presentation: Oral 10: Gaussian processes / Optimization / Online ML
Wed 30 Mar 6 a.m. PDT — 7 a.m. PDT

Abstract: It has recently been shown in the literature (Nie et al., 2018; Shin et al., 2019a,b) that sample averages from online learning experiments are biased when used to estimate the mean reward. To correct the bias, off-policy evaluation methods, including importance sampling and doubly robust estimators, typically calculate the conditional propensity score, which is ill-defined for non-randomized policies such as UCB. This paper provides a procedure to debias the samples using bootstrap, which does not require knowledge of the reward distribution and can be applied to any adaptive policy. Numerical experiments demonstrate effective bias reduction for samples generated by popular multi-armed bandit algorithms such as Explore-Then-Commit (ETC), UCB, Thompson sampling (TS), and $\epsilon$-greedy (EG). We analyze and provide theoretical justifications for the procedure under the ETC algorithm, including the asymptotic convergence of the bias decay rate in the real and bootstrap worlds.
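Below is a minimal sketch of the bootstrap-debiasing idea described in the abstract, assuming a simple two-armed Explore-Then-Commit policy with hypothetical Gaussian rewards. The functions, parameters, and resampling scheme (`run_etc`, `boot_draw`, `n_explore`, `horizon`, the number of bootstrap replications) are illustrative assumptions, not the authors' exact procedure; the paper should be consulted for the precise construction of the bootstrap world and the estimator.

```python
# Illustrative sketch: bias of an adaptively collected sample mean under ETC,
# and a bootstrap correction obtained by replaying the same policy on resampled data.
import numpy as np

rng = np.random.default_rng(0)

def run_etc(reward_draw, n_explore=10, horizon=100):
    """Run a 2-armed Explore-Then-Commit policy.
    reward_draw(arm, size) returns `size` reward samples for `arm`.
    Returns the list of rewards collected from each arm."""
    rewards = [[], []]
    for arm in (0, 1):                                   # exploration phase
        rewards[arm].extend(reward_draw(arm, n_explore))
    best = int(np.mean(rewards[1]) > np.mean(rewards[0]))
    rewards[best].extend(reward_draw(best, horizon - 2 * n_explore))  # commit phase
    return rewards

# Hypothetical setting: both arms have true mean 0, so any systematic gap in the
# sample mean of an arm after running ETC is adaptive-sampling bias.
def real_draw(arm, size):
    return rng.normal(0.0, 1.0, size)

data = run_etc(real_draw)
naive = np.mean(data[0])                                 # naive (biased) estimate for arm 0

# Bootstrap world: rerun ETC with rewards resampled (with replacement) from the
# observed data, and average the resulting bias of the same estimator.
obs0, obs1 = np.array(data[0]), np.array(data[1])
def boot_draw(arm, size):
    return rng.choice(obs0 if arm == 0 else obs1, size=size, replace=True)

boot_bias = np.mean([np.mean(run_etc(boot_draw)[0]) - naive for _ in range(200)])
debiased = naive - boot_bias                             # bias-corrected estimate
print(f"naive={naive:.3f}  bootstrap bias={boot_bias:.3f}  debiased={debiased:.3f}")
```

The key design point this sketch tries to convey is that the bootstrap world replays the same adaptive policy on resampled rewards, so the estimator's bias in the bootstrap world can serve as an estimate of its bias in the real world and be subtracted off, without requiring propensity scores or knowledge of the reward distribution.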
