

Poster

Self-Concordant Analysis of Generalized Linear Bandits with Forgetting

Yoan Russac · Louis Faury · Olivier Cappé · Aurélien Garivier

Keywords: [ Algorithms ] [ Online Learning ] [ Algorithms -> Bandit Algorithms; Theory ] [ Frequentist Statistics ] [ Learning Theory and Statistics ] [ Decision Processes and Bandits ]


Abstract:

Contextual sequential decision problems with categorical or numerical observations are ubiquitous, and Generalized Linear Bandits (GLB) offer a solid theoretical framework to address them. In contrast to the case of linear bandits, existing algorithms for GLB have two drawbacks undermining their applicability. First, they rely on excessively pessimistic concentration bounds due to the non-linear nature of the model. Second, they require either non-convex projection steps or burn-in phases to enforce boundedness of the estimators. Both of these issues are worsened when considering non-stationary models, in which the GLB parameter may vary with time. In this work, we focus on self-concordant GLB (which include logistic and Poisson regression) with forgetting achieved either by the use of a sliding window or exponential weights. We propose a novel confidence-based algorithm for the maximum-likelihood estimator with forgetting and analyze its performance in abruptly changing environments. These results, as well as the accompanying numerical simulations, highlight the potential of the proposed approach to address non-stationarity in GLB.
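To make the setting concrete, here is a minimal sketch (not the authors' implementation) of a generalized linear bandit with forgetting via exponential weights: the parameter is estimated by a discounted, L2-regularized logistic maximum-likelihood fit, and arms are chosen optimistically from a confidence ellipsoid built on the weighted design matrix. The class name, the discount factor `gamma`, the regularization `reg`, and the confidence width `beta` are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical sketch of a logistic GLB with exponential forgetting.
# Not the paper's algorithm; parameters and structure are assumptions.
import numpy as np
from scipy.optimize import minimize


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class WeightedLogisticGLB:
    def __init__(self, dim, gamma=0.99, reg=1.0, beta=1.0):
        self.dim, self.gamma, self.reg, self.beta = dim, gamma, reg, beta
        self.contexts, self.rewards, self.weights = [], [], []
        self.theta = np.zeros(dim)

    def _neg_discounted_loglik(self, theta):
        X = np.array(self.contexts)   # (t, d) past arm features
        r = np.array(self.rewards)    # (t,) binary rewards
        w = np.array(self.weights)    # (t,) exponential weights
        z = X @ theta
        # weighted logistic log-likelihood with an L2 penalty
        ll = w * (r * z - np.logaddexp(0.0, z))
        return -ll.sum() + 0.5 * self.reg * theta @ theta

    def update(self, x, reward):
        # older observations are geometrically down-weighted (forgetting)
        self.weights = [w * self.gamma for w in self.weights] + [1.0]
        self.contexts.append(x)
        self.rewards.append(reward)
        self.theta = minimize(self._neg_discounted_loglik, self.theta).x

    def select(self, arms):
        # optimistic (UCB-style) arm choice using the weighted design matrix
        V = self.reg * np.eye(self.dim)
        for w, x in zip(self.weights, self.contexts):
            V += w * np.outer(x, x)
        V_inv = np.linalg.inv(V)
        scores = [sigmoid(x @ self.theta) + self.beta * np.sqrt(x @ V_inv @ x)
                  for x in arms]
        return int(np.argmax(scores))


if __name__ == "__main__":
    # Toy run on a fixed arm set with an assumed true parameter theta_star.
    rng = np.random.default_rng(0)
    arms = [rng.normal(size=3) for _ in range(5)]
    theta_star = np.array([1.0, -0.5, 0.3])
    bandit = WeightedLogisticGLB(dim=3)
    for t in range(200):
        a = bandit.select(arms)
        reward = rng.binomial(1, sigmoid(arms[a] @ theta_star))
        bandit.update(arms[a], reward)
```

A sliding-window variant would instead keep only the most recent observations with unit weights; either choice lets the estimator track abrupt changes of the underlying parameter.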
