Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits

Chuanhao Li · Hongning Wang

[ Abstract ]
Mon 28 Mar 10:15 a.m. PDT — 11:45 a.m. PDT


The linear contextual bandit is a popular online learning problem. It has mostly been studied in centralized learning settings. With the surging demand for large-scale decentralized model learning, e.g., federated learning, how to retain regret minimization while reducing communication cost becomes an open challenge. In this paper, we study the linear contextual bandit in a federated learning setting. We propose a general framework with asynchronous model update and communication for collections of homogeneous clients and heterogeneous clients, respectively. We provide rigorous theoretical analysis of the regret and communication cost under this distributed learning framework, and extensive empirical evaluations demonstrate the effectiveness of our solution.
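For readers unfamiliar with the underlying single-learner setting, the sketch below illustrates a standard linear UCB rule of the kind each client would run locally: maintain a ridge-regularized Gram matrix and reward vector, estimate the unknown parameter by ridge regression, and pick the arm maximizing the estimated reward plus an exploration bonus. This is a generic illustration, not the paper's federated algorithm; the class name, hyperparameters, and the synthetic problem instance at the bottom are all assumptions for demonstration.

```python
import numpy as np

class LinUCB:
    """Minimal single-learner linear UCB sketch (illustrative, not the
    paper's asynchronous federated algorithm)."""

    def __init__(self, d, alpha=0.5, lam=1.0):
        self.A = lam * np.eye(d)   # ridge-regularized Gram matrix
        self.b = np.zeros(d)       # accumulated reward-weighted contexts
        self.alpha = alpha         # exploration weight (assumed value)

    def select(self, arms):
        """Pick the arm with the highest upper confidence bound."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b     # ridge-regression estimate of the parameter
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(scores))

    def update(self, x, r):
        """Incorporate the observed context-reward pair."""
        self.A += np.outer(x, x)
        self.b += r * x

# Tiny synthetic check on a hypothetical problem instance: rewards are
# linear in the context with small Gaussian noise.
rng = np.random.default_rng(0)
d, K, T = 3, 5, 1500
theta_star = np.array([0.5, -0.3, 0.2])   # unknown true parameter (assumed)
agent = LinUCB(d)
for _ in range(T):
    arms = rng.normal(size=(K, d))        # K candidate contexts per round
    k = agent.select(arms)
    r = arms[k] @ theta_star + 0.01 * rng.normal()
    agent.update(arms[k], r)
theta_hat = np.linalg.inv(agent.A) @ agent.b
```

In the federated setting studied by the paper, many such clients learn concurrently and the design question is when and what to communicate so the clients benefit from each other's observations without excessive communication cost.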
