

Poster

One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees for Sleeping Bandits

Pierre Gaillard · Aadirupa Saha · Soham Dan

Auditorium 1 Foyer 78

Abstract:

We address the problem of Internal Regret in adversarial Sleeping Bandits and the relationship between different notions of sleeping regret in multi-armed bandits. We propose a new concept called Internal Regret for sleeping multi-armed bandits (MAB) and present an algorithm that achieves sublinear Internal Regret even when both losses and availabilities are adversarial. We demonstrate that low internal regret implies both low external regret and low policy regret for i.i.d. losses. Our main contribution is to unify existing notions of regret in sleeping bandits and to clarify how they relate to one another. In addition, we extend our results to Dueling Bandits (DB), a preference-feedback version of multi-armed bandits, and design a low-regret algorithm for sleeping dueling bandits with stochastic preferences and adversarial availabilities. We validate the effectiveness of our algorithms through empirical evaluations.
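To make the sleeping-bandit setting concrete, here is a minimal toy sketch (not the paper's algorithm): an EXP3-style exponential-weights learner that, each round, restricts its play to the currently available ("awake") arms and updates with an importance-weighted loss estimate. The per-arm loss means, the availability probability, and the learning rate are all hypothetical choices for illustration; real adversarial availabilities and losses need not be random like this.

```python
import math
import random

def sleeping_exp3(K=4, T=2000, eta=0.1, seed=0):
    """Toy sleeping MAB: EXP3-style learner restricted to awake arms.

    Returns (avg_alg_loss, avg_best_arm_loss), where the second value is the
    best per-arm average loss over the rounds that arm was awake. This is a
    simple per-arm diagnostic, not the paper's internal-regret notion.
    """
    rng = random.Random(seed)
    means = [0.2 + 0.6 * k / (K - 1) for k in range(K)]  # hypothetical loss means
    log_w = [0.0] * K            # log-weights, for numerical stability
    loss_alg, played = 0.0, 0
    loss_arm, cnt_arm = [0.0] * K, [0] * K

    for _ in range(T):
        awake = [k for k in range(K) if rng.random() < 0.7]  # stand-in availability
        if not awake:
            continue
        played += 1
        # exponential-weights distribution restricted to awake arms
        m = max(log_w[k] for k in awake)
        w = [math.exp(log_w[k] - m) for k in awake]
        z = sum(w)
        p = [wi / z for wi in w]
        # sample one awake arm from p (bandit feedback: only its loss is seen)
        r, i, acc = rng.random(), 0, p[0]
        while acc < r and i < len(awake) - 1:
            i += 1
            acc += p[i]
        arm = awake[i]
        losses = {k: float(rng.random() < means[k]) for k in awake}
        loss_alg += losses[arm]
        # importance-weighted multiplicative update on the played arm
        log_w[arm] -= eta * losses[arm] / p[i]
        # bookkeeping for the per-arm comparator (uses full losses, toy only)
        for k in awake:
            loss_arm[k] += losses[k]
            cnt_arm[k] += 1

    avg_alg = loss_alg / played
    avg_best = min(loss_arm[k] / cnt_arm[k] for k in range(K) if cnt_arm[k])
    return avg_alg, avg_best
```

In this toy run the learner's average loss should drift toward the best arm's average loss; sublinear-regret guarantees of the kind the paper proves say the gap between these two quantities vanishes as T grows, under the appropriate regret notion.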
