Revisiting Weighted Strategy for Non-stationary Parametric Bandits

Jing Wang · Peng Zhao · Zhi-Hua Zhou

Auditorium 1 Foyer 49

Abstract: Non-stationary parametric bandits have attracted much attention recently. There are three principled strategies for dealing with non-stationarity: sliding-window, weighted, and restart. As many non-stationary environments exhibit gradual drifting patterns, the weighted strategy is commonly adopted in real-world applications. However, previous theoretical studies show that its analysis is more involved and the resulting algorithms are either computationally less efficient or statistically suboptimal. This paper revisits the weighted strategy for non-stationary parametric bandits. For linear bandits (LB), we discover that this undesirable feature stems from an inadequate regret analysis, which results in an overly complex algorithm design. We propose a \emph{refined analysis framework}, which simplifies the derivation and, importantly, yields a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies. Furthermore, the new framework can be used to improve regret bounds of other parametric bandits, including Generalized Linear Bandits (GLB) and Self-Concordant Bandits (SCB). For example, we develop a simple weighted GLB algorithm with an $\tilde{O}(k_\mu^{5/4} c_\mu^{-3/4} d^{3/4} P_T^{1/4} T^{3/4})$ regret, improving the $\tilde{O}(k_\mu^{2} c_\mu^{-1} d^{9/10} P_T^{1/5} T^{4/5})$ bound in prior work, where $k_\mu$ and $c_\mu$ characterize the nonlinearity of the reward model, $P_T$ measures the non-stationarity, and $d$ and $T$ denote the dimension and time horizon, respectively.
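To give a concrete sense of the weighted strategy in the linear-bandit setting: the learner estimates the reward parameter by ridge regression in which past observations are geometrically discounted, so recent rounds dominate under gradual drift. A minimal sketch is below; the function name, discount factor, and data are illustrative assumptions, not the paper's actual algorithm or constants.

```python
import numpy as np

def weighted_ridge_estimate(X, r, gamma=0.95, lam=1.0):
    """Discounted (weighted) ridge regression for a linear reward model.

    X     : (t, d) array of chosen action features, oldest row first
    r     : (t,) array of observed rewards
    gamma : discount in (0, 1]; round s gets weight gamma**(t - s),
            so the most recent observation has weight 1
    lam   : ridge regularization strength

    Returns the weighted least-squares estimate
        theta_hat = (sum_s w_s x_s x_s^T + lam * I)^{-1} sum_s w_s x_s r_s
    """
    t, d = X.shape
    w = gamma ** np.arange(t - 1, -1, -1)      # decaying weights, newest ~ 1
    Xw = X * w[:, None]                        # row-wise weighting
    V = Xw.T @ X + lam * np.eye(d)             # weighted Gram matrix + ridge
    b = Xw.T @ r                               # weighted feature-reward sum
    return np.linalg.solve(V, b)
```

With gamma = 1 this reduces to ordinary ridge regression; smaller gamma shortens the effective memory (roughly 1/(1 - gamma) rounds), trading estimation variance against tracking of a drifting parameter.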
