Nonstochastic Bandits and Experts with Arm-Dependent Delays

Dirk van der Hoeven · Nicolò Cesa-Bianchi

Wed 30 Mar 3:30 a.m. PDT — 5 a.m. PDT
Oral presentation: Oral 4: Bandits / Reinforcement learning
Mon 28 Mar 7 a.m. PDT — 8 a.m. PDT


We study nonstochastic bandits and experts in a delayed setting where delays depend on both time and arms. While the setting in which delays only depend on time has been extensively studied, the arm-dependent delay setting better captures real-world applications at the cost of introducing new technical challenges. In the full information (experts) setting, we design an algorithm with a first-order regret bound that reveals an interesting trade-off between delays and losses. We prove a similar first-order regret bound for the bandit setting when the learner is allowed to observe how many losses are missing. Our bounds are the first in the delayed setting that only depend on the losses and delays of the best arm. In the bandit setting, when no information other than the losses is observed, we still manage to prove a regret bound through a modification to the algorithm of Zimmert and Seldin (2020). Our analyses hinge on a novel bound on the drift, measuring how much better an algorithm can perform when given a look-ahead of one round.
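To make the feedback protocol concrete, the sketch below simulates a nonstochastic bandit with arm-dependent delays: the loss of the arm pulled at round t is observed only d_{t,i} rounds later, and the learner applies importance-weighted exponential-weights updates when feedback arrives. This is a minimal illustration of the setting, not the paper's algorithm; the function name, learning rate, and update rule are assumptions for the example.

```python
import math
import random

def delayed_bandit_exp3(losses, delays, eta=0.1, seed=0):
    """Illustrative EXP3-style learner under arm-dependent delays.

    losses[t][i] : loss of arm i at round t, in [0, 1]
    delays[t][i] : delay d_{t,i}; the loss of the arm pulled at round t
                   is observed only at the end of round t + d_{t,i}

    NOTE: a hypothetical sketch of the feedback protocol, not the
    algorithm analyzed in the paper.
    """
    rng = random.Random(seed)
    T, K = len(losses), len(losses[0])
    weights = [0.0] * K   # negated cumulative importance-weighted losses
    pending = {}          # arrival round -> list of (arm, loss estimate)
    total_loss = 0.0

    for t in range(T):
        # exponential-weights distribution over arms
        m = max(weights)
        expw = [math.exp(w - m) for w in weights]
        z = sum(expw)
        probs = [w / z for w in expw]

        # sample an arm and incur its (not yet observed) loss
        r, acc, arm = rng.random(), 0.0, K - 1
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                arm = i
                break
        total_loss += losses[t][arm]

        # feedback for this pull arrives after an arm-dependent delay
        arrival = t + delays[t][arm]
        estimate = losses[t][arm] / probs[arm]  # importance weighting
        pending.setdefault(arrival, []).append((arm, estimate))

        # apply all feedback that arrives at the end of this round
        for a, est in pending.pop(t, []):
            weights[a] -= eta * est

    return total_loss
```

Running this with one clearly best arm (e.g., arm 0 always suffering loss 0 while arm 1 suffers loss 1, under constant delay 1) shows the learner concentrating on the best arm despite delayed feedback, so its cumulative loss stays far below that of always playing the worse arm.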
