Poster
Risk-sensitive Bandits: Arm Mixture Optimality and Regret-efficient Algorithms
Meltem Tatlı · Fabio Feser · Prashanth A. · Danqi Liao · Ali Tajer
Abstract:
This paper introduces a general framework for risk-sensitive bandits that integrates risk-sensitive objectives by adopting a rich class of {\em distortion riskmetrics}. The introduced framework subsumes the existing risk-sensitive models. An important and hitherto unknown observation is that, for a wide range of riskmetrics, the optimal bandit policy involves selecting a \emph{mixture} of arms. This is in sharp contrast to the convention in multi-armed bandit algorithms that there is generally a \emph{solitary} arm that maximizes the utility, whether purely reward-centric or risk-sensitive. This creates a major departure from the principles of designing bandit algorithms, since there are uncountably many mixture possibilities. The contributions of the paper are as follows: (i) it formalizes a general framework for risk-sensitive bandits, (ii) it identifies standard risk-sensitive bandit models for which solitary arm selection is not optimal, and (iii) it designs regret-efficient algorithms whose sampling strategies can accurately track the optimal arm mixtures (when a mixture is optimal) or the solitary arms (when a solitary arm is optimal). The algorithms are shown to achieve a regret that scales according to $\mathcal{O}\big((\log T/T)^{\nu}\big)$, where $T$ is the horizon and $\nu$ is a riskmetric-specific constant.
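To make the mixture-optimality phenomenon concrete, here is a minimal numerical sketch (not the paper's algorithm). It uses two hypothetical Gaussian arms and, as the objective, the Gini deviation $\tfrac{1}{2}\mathbb{E}|X - X'|$, a distortion riskmetric with the non-monotone distortion $h(u) = u^2 - u$. The arm distributions, sample sizes, and the choice of this particular riskmetric are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def gini_deviation(samples: np.ndarray) -> float:
    """Monte Carlo estimate of the Gini deviation E|X - X'| / 2.

    The Gini deviation is a distortion riskmetric whose (non-monotone)
    distortion function is h(u) = u^2 - u.
    """
    half = len(samples) // 2
    return 0.5 * float(np.mean(np.abs(samples[:half] - samples[half:2 * half])))


def sample_mixture(alpha: float, n: int) -> np.ndarray:
    """Draw n rewards from the mixture: arm 1 w.p. alpha, arm 2 otherwise.

    The arm reward distributions are illustrative assumptions:
    arm 1 ~ N(0, 0.05^2), arm 2 ~ N(1, 0.05^2).
    """
    pick_arm1 = rng.random(n) < alpha
    return np.where(pick_arm1,
                    rng.normal(0.0, 0.05, n),
                    rng.normal(1.0, 0.05, n))


# Sweep the mixture weight: the objective is near zero at the solitary arms
# (alpha = 0 or 1) and peaks near alpha = 0.5, so no single arm is optimal
# under this riskmetric.
for alpha in np.linspace(0.0, 1.0, 11):
    rho = gini_deviation(sample_mixture(alpha, 200_000))
    print(f"alpha = {alpha:.1f}  ->  Gini deviation ~ {rho:.3f}")
```

Sweeping the mixture weight shows the riskmetric is close to zero for either solitary arm and is maximized by an interior mixture (near equal weights in this toy setting), illustrating why a regret-efficient policy must be able to track a mixture rather than a single arm.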