

Poster

Competing against Adaptive Strategies in Online Learning via Hints

Aditya Bhaskara · Kamesh Munagala

Auditorium 1 Foyer 46

Abstract: For many classic online learning settings, it is known that having a "hint" about the loss function before making a prediction leads to significantly better regret guarantees. In this work we study whether hints allow us to go beyond the standard notion of regret (which competes against the best fixed strategy) and compete against adaptive or dynamic strategies. After all, if hints were perfect, we could clearly compete against a fully dynamic strategy. For several common online learning settings, we provide upper and lower bounds on the switching regret, i.e., the difference between the loss incurred by the algorithm and that of the optimal strategy in hindsight that switches state at most $L$ times, where $L$ is a parameter. We show positive results for online linear optimization and the classic experts problem. Interestingly, such results turn out to be impossible in the multi-armed bandit setting.
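To make the abstract's notion of switching regret concrete, here is a minimal sketch of the standard formalization (the symbols $\ell_t$, $x_t$, and $u_t$ are illustrative notation, not taken from the paper): with loss function $\ell_t$ and algorithm decision $x_t$ at round $t$, the comparator is the best sequence in hindsight that changes its state at most $L$ times,
\[
  \mathrm{Regret}_L(T) \;=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \min_{\substack{u_1,\dots,u_T \\ \sum_{t=2}^{T} \mathbf{1}[u_t \neq u_{t-1}] \,\le\, L}} \;\sum_{t=1}^{T} \ell_t(u_t).
\]
Note that setting $L = 0$ forces a fixed comparator and recovers the standard (static) notion of regret.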
