

Poster

Multi-task Representation Learning with Stochastic Linear Bandits

Leonardo Cella · Grégoire Pacreau · Massimiliano Pontil

Auditorium 1 Foyer 17

Abstract: We study the problem of transfer learning in the setting of stochastic linear contextual bandit tasks. We assume that a low-dimensional linear representation is shared across the tasks, and study the benefit of learning the tasks jointly. Following recent results on the design of Lasso stochastic bandit policies, we propose an efficient greedy policy based on trace norm regularization. It implicitly learns a low-dimensional representation by encouraging the matrix formed by the task regression vectors to be of low rank. Unlike previous work in the literature, our policy does not need to know the rank of the underlying matrix, nor does it require the covariance of the arms distribution to be invertible. We derive an upper bound on the multi-task regret of our policy, which is, up to logarithmic factors, of order $T\sqrt{rN}+\sqrt{rNTd}$, where $T$ is the number of tasks, $r$ the rank, $d$ the number of variables, and $N$ the number of rounds per task. We show the benefit of our strategy over a baseline that learns each task independently, which has a worse regret of order $T\sqrt{dN}$. We also argue that our policy is minimax optimal and, when $T \geq d$, has a multi-task regret comparable to that of an oracle policy which knows the true underlying representation.
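The estimation step described in the abstract, fitting the matrix of task regression vectors under a trace norm penalty and then acting greedily, can be made concrete. Below is a minimal sketch, not the paper's actual algorithm: it assumes a per-task squared loss solved by proximal gradient descent, where singular value thresholding serves as the proximal operator of the trace (nuclear) norm. The environment interface (`sample_arms`, `pull`), the penalty weight `lam`, and the iteration budget are hypothetical placeholders.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_* (trace norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_estimate(X_hist, y_hist, d, lam, n_iters=200):
    """Jointly estimate the d x T matrix B of task regression vectors by
    proximal gradient descent on
        sum_t 0.5 * ||X_t @ B[:, t] - y_t||^2 + lam * ||B||_*,
    which implicitly encourages B to be low rank."""
    T = len(X_hist)
    B = np.zeros((d, T))
    # Conservative step size from the largest per-task design curvature.
    L = max((np.linalg.norm(X, 2) ** 2 for X in X_hist if len(X)), default=1.0)
    step = 1.0 / max(L, 1.0)
    for _ in range(n_iters):
        G = np.zeros_like(B)
        for t, (X, y) in enumerate(zip(X_hist, y_hist)):
            if len(X):
                G[:, t] = X.T @ (X @ B[:, t] - y)  # per-task squared-loss gradient
        B = svt(B - step * G, step * lam)          # proximal (SVT) step
    return B

def run_greedy(envs, d, N, lam):
    """Greedy multi-task policy: refit B each round, then in every task play
    the arm maximizing the estimated reward under the current estimate."""
    T = len(envs)
    X_hist = [np.empty((0, d)) for _ in range(T)]
    y_hist = [np.empty(0) for _ in range(T)]
    for _ in range(N):
        B = trace_norm_estimate(X_hist, y_hist, d, lam)
        for t, env in enumerate(envs):
            arms = env.sample_arms()            # context matrix, one row per arm
            a = int(np.argmax(arms @ B[:, t]))  # greedy arm for task t
            reward = env.pull(arms[a])
            X_hist[t] = np.vstack([X_hist[t], arms[a][None, :]])
            y_hist[t] = np.append(y_hist[t], reward)
    return X_hist, y_hist
```

Acting greedily on the penalized estimate, rather than adding an explicit exploration bonus, mirrors the Lasso-style bandit policies the abstract cites; the trace norm plays the role the $\ell_1$ penalty plays there, with low rank replacing sparsity.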
