Multi-armed bandits with guaranteed revenue per arm

Dorian Baudry · Nadav Merlis · Mathieu Molina · Hugo Richard · Vianney Perchet

MR1 & MR2 - Number 73
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT


We consider a Multi-Armed Bandit problem with covering constraints, where the primary goal is to ensure that each arm receives a minimum expected reward while maximizing the total cumulative reward. The optimal policy therefore belongs to an unknown feasible set. Unlike much of the existing literature, we do not assume the presence of a safe policy or a feasibility margin, which rules out purely conservative approaches. Consequently, we propose and analyze an algorithm that switches between pessimism and optimism in the face of uncertainty. We prove both precise problem-dependent and problem-independent bounds, demonstrating that, in terms of constraint-violation guarantees, our algorithm achieves the best of the two approaches, depending on the presence or absence of a feasibility margin. Furthermore, our results indicate that playing greedily on the constraints actually outperforms pessimism when considering long-term violations rather than violations on a per-round basis.
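To make the pessimism/optimism switching idea concrete, here is a minimal, hypothetical sketch. It is not the authors' algorithm: the function name, the per-arm reward targets `lambdas`, the confidence radius, and the deficit-based switching rule are all illustrative assumptions. The idea sketched: while any arm's pessimistic (lower-confidence-bound) cumulative reward lags its target, play the most-violated arm; once all constraints look satisfied, play the UCB-optimal arm.

```python
import numpy as np

def constrained_ucb(means, lambdas, horizon, rng=None):
    """Illustrative sketch (not the paper's algorithm): alternate between
    covering per-arm reward targets pessimistically and UCB maximization.

    means   : true Bernoulli reward means (for simulation only)
    lambdas : hypothetical per-round reward target for each arm
    """
    rng = np.random.default_rng(rng)
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1  # play each arm once to initialize estimates
        else:
            mu_hat = sums / counts
            rad = np.sqrt(2.0 * np.log(horizon) / counts)  # confidence radius
            lcb, ucb = mu_hat - rad, mu_hat + rad
            # Pessimistic estimate of how far each arm is from its target:
            deficit = t * np.asarray(lambdas) - counts * lcb
            if np.any(deficit > 0):
                # Pessimism: cover the most-violated constraint first.
                arm = int(np.argmax(deficit))
            else:
                # Optimism: all constraints plausibly met, maximize reward.
                arm = int(np.argmax(ucb))
        r = rng.binomial(1, means[arm])  # Bernoulli reward draw
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts
```

Under a feasibility margin (targets summing to well under the achievable reward), the constraint phase is short and the run behaves like plain UCB; when the margin shrinks, more rounds go to covering the targets.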
