

Poster

Tight Regret and Complexity Bounds for Thompson Sampling via Langevin Monte Carlo

Tom Huix · Shunshi Zhang · Alain Durmus

Auditorium 1 Foyer 82

Abstract: In this paper, we consider high-dimensional contextual bandit problems. To address the exploration–exploitation trade-off in this setting, Thompson Sampling and its variants have been proposed and successfully applied to multiple machine learning problems. Existing theory on Thompson Sampling shows that it has suboptimal dimension dependency compared to upper confidence bound (UCB) algorithms. To circumvent this issue, Zhang et al. (2021) recently proposed a modification of Thompson Sampling that enforces more exploration and thereby attains optimal regret bounds. Nonetheless, this analysis does not yield a tractable implementation in high dimensions. The main challenge is the simulation of posterior samples at each step given the available observations. To overcome this, we propose and analyze the use of Markov Chain Monte Carlo methods. As a corollary, we show that for contextual linear bandits, using Langevin Monte Carlo (LMC) or the Metropolis Adjusted Langevin Algorithm (MALA), our algorithm attains the optimal regret bound of $\tilde{\mathcal{O}}(d\sqrt{T})$. Furthermore, we show that this is obtained with $\tilde{\mathcal{O}}(dT^4)$ and $\tilde{\mathcal{O}}(dT^2)$ data evaluations for LMC and MALA, respectively. Finally, we validate our findings through numerical simulations and show that we outperform vanilla Thompson Sampling in high dimensions.
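To make the approach concrete, below is a minimal sketch (not the authors' implementation) of Thompson Sampling for a linear contextual bandit where the posterior sample at each round is drawn with unadjusted Langevin Monte Carlo rather than an exact Gaussian sample. The Gaussian prior and noise model, the step size `eta`, the inner iteration count `n_lmc`, and the regularization `lam` are illustrative assumptions, not values from the paper. Note that each LMC gradient pass touches all past observations, which is the kind of per-step data-evaluation cost the complexity bounds above measure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, K = 10, 200, 20                  # dimension, horizon, arms per round
theta_star = rng.normal(size=d) / np.sqrt(d)
lam, eta, n_lmc = 1.0, 1e-3, 50        # illustrative hyperparameters

X_hist, r_hist = [], []                # observed contexts and rewards

def grad_neg_log_post(theta):
    # Gradient of U(theta) = 0.5 * sum_s (r_s - x_s.theta)^2 + 0.5 * lam * ||theta||^2,
    # i.e. a Gaussian likelihood with a Gaussian prior (an assumption for this sketch).
    g = lam * theta
    if X_hist:
        X, r = np.asarray(X_hist), np.asarray(r_hist)
        g += X.T @ (X @ theta - r)     # one pass over all data so far
    return g

theta = np.zeros(d)                    # warm-start the chain across rounds
for t in range(T):
    # Inner loop: unadjusted Langevin dynamics targeting the posterior
    for _ in range(n_lmc):
        theta = (theta - eta * grad_neg_log_post(theta)
                 + np.sqrt(2 * eta) * rng.normal(size=d))
    # Thompson step: act greedily with respect to the posterior sample
    arms = rng.normal(size=(K, d)) / np.sqrt(d)
    a = int(np.argmax(arms @ theta))
    reward = arms[a] @ theta_star + 0.1 * rng.normal()
    X_hist.append(arms[a])
    r_hist.append(reward)
```

A MALA variant would add a Metropolis accept/reject correction after each Langevin proposal, which is what allows the chain to target the posterior exactly and, per the bounds above, reduces the required data evaluations from $\tilde{\mathcal{O}}(dT^4)$ to $\tilde{\mathcal{O}}(dT^2)$.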
