

Poster

Federated Reinforcement Learning with Environment Heterogeneity

Hao Jin · Yang Peng · Wenhao Yang · Shusen Wang · Zhihua Zhang

Virtual

Abstract: We study the Federated Reinforcement Learning (FedRL) problem, in which n agents collaboratively learn a single policy without sharing the trajectories they collect during agent-environment interaction. In this paper, we stress the constraint of environment heterogeneity: the n environments corresponding to these n agents have different state transitions. To obtain a value function or a policy function that optimizes the overall performance across all environments, we propose two federated RL algorithms, QAvg and PAvg. We theoretically prove that these algorithms converge to suboptimal solutions, where the suboptimality depends on how heterogeneous the n environments are. Moreover, we propose a heuristic that achieves personalization by embedding the n environments into n vectors. The personalization heuristic not only improves training but also allows for better generalization to new environments.
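To make the averaging scheme behind QAvg concrete, here is a minimal tabular sketch, not the authors' implementation: it assumes each agent has tabular access to its own MDP through hypothetical attributes env.P (transition tensor) and env.R (expected rewards), runs a few local Bellman updates, and a server averages the Q-tables each round, so no trajectories ever leave an agent.

```python
import numpy as np

def qavg(envs, n_states, n_actions, gamma=0.99, rounds=100, local_steps=10):
    """Sketch of QAvg-style federated Q-learning over heterogeneous MDPs.

    Assumptions (hypothetical interface, for illustration only):
      env.P : array of shape (S, A, S), transition probabilities
      env.R : array of shape (S, A), expected rewards
    """
    n = len(envs)
    # One Q-table per agent; all start from the same initialization.
    qs = [np.zeros((n_states, n_actions)) for _ in range(n)]
    for _ in range(rounds):
        # Local phase: each agent runs Bellman updates on its own MDP.
        for k, env in enumerate(envs):
            for _ in range(local_steps):
                v = qs[k].max(axis=1)              # greedy state values, shape (S,)
                qs[k] = env.R + gamma * env.P @ v  # Bellman update, shape (S, A)
        # Aggregation phase: the server averages Q-tables across agents
        # and broadcasts the average back. Trajectories are never shared.
        q_avg = sum(qs) / n
        qs = [q_avg.copy() for _ in range(n)]
    return q_avg
```

PAvg follows the same communication pattern with policy parameters in place of Q-tables; because the n environments have different transitions, the averaged solution is suboptimal for each individual environment, which is the suboptimality the abstract's convergence results quantify.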
