

Poster

Minimum Empirical Divergence for Sub-Gaussian Linear Bandits

Ngo Nguyen · Kwang-Sung Jun


Abstract: We propose a novel linear bandit algorithm called LinMED (Linear Minimum Empirical Divergence), which is a linear extension of the MED algorithm that was originally designed for multi-armed bandits. LinMED is a randomized algorithm that admits a closed-form computation of the arm sampling probabilities, unlike the popular randomized algorithm called linear Thompson sampling. Such a feature proves useful for off-policy evaluation, where unbiased evaluation requires accurately computing the sampling probability. We prove that LinMED enjoys a near-optimal regret bound of $d\sqrt{n}$ up to logarithmic factors, where $d$ is the dimension and $n$ is the time horizon. We further show that LinMED enjoys a $\frac{d^2}{\Delta}(\log^2(n))\log(\log(n))$ problem-dependent regret, where $\Delta$ is the smallest suboptimality gap. Our empirical study shows that LinMED has a competitive performance with the state-of-the-art algorithms.
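To illustrate why a closed-form sampling probability matters, below is a minimal sketch of a standard inverse-propensity-scoring (IPS) off-policy estimator, which is unbiased only when the logging policy's sampling probabilities are known exactly. The function name and arguments are illustrative, not from the paper; LinMED supplies the `logging_probs` term in closed form, whereas linear Thompson sampling generally does not.

```python
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Generic IPS estimate of a target policy's value from logged bandit data.

    rewards[t]       : reward observed at round t
    target_probs[t]  : probability the target policy assigns to the logged action
    logging_probs[t] : probability the logging policy assigned to the logged action
                       (available in closed form for a randomized algorithm like LinMED)
    """
    # Importance weights correct for the mismatch between target and logging policies.
    weights = np.asarray(target_probs) / np.asarray(logging_probs)
    return float(np.mean(weights * np.asarray(rewards)))
```

The estimator is unbiased because each logged reward is reweighted by the exact ratio of target to logging probabilities; if the logging probabilities can only be approximated, the reweighting introduces bias.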
