

Poster

Federated Asymptotics: a model to compare federated learning algorithms

Gary Cheng · Karan Chadha · John Duchi

Auditorium 1 Foyer 93

Abstract:

We develop an asymptotic framework for comparing the test performance of (personalized) federated learning algorithms; its purpose is to move beyond algorithmic convergence arguments. To that end, we study a high-dimensional linear regression model to elucidate the statistical properties (per-client test error) of loss minimizers. Our techniques and model allow precise predictions about the benefits of personalization and information sharing in federated scenarios. In particular, Federated Averaging with simple client fine-tuning achieves the same asymptotic risk as more intricate meta-learning approaches, and outperforms naive Federated Averaging. We evaluate and corroborate these theoretical predictions on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
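The central comparison in the abstract, naive Federated Averaging versus Federated Averaging followed by local fine-tuning, judged by per-client test error in a linear regression model, can be illustrated with a small simulation. The sketch below is a hypothetical toy, not the authors' model or code: the heterogeneity structure (a shared parameter plus a client-specific shift), the sample sizes, the FedAvg surrogate (an average of per-client least-squares solutions), and the fine-tuning schedule are all assumptions made purely for illustration.

```python
# Toy simulation sketch (illustrative assumptions, not the paper's setup):
# compare naive FedAvg with FedAvg + local fine-tuning on per-client
# linear regression, measuring per-client excess test risk.
import numpy as np

rng = np.random.default_rng(0)

num_clients, n_per_client, dim = 50, 40, 20
noise_std, heterogeneity = 0.5, 1.0

# Client i's true parameter: a shared component plus a client-specific shift.
theta_shared = rng.normal(size=dim)
theta_true = theta_shared + heterogeneity * rng.normal(size=(num_clients, dim))

# Local datasets: X_i with i.i.d. standard normal rows, noisy linear responses.
X = rng.normal(size=(num_clients, n_per_client, dim))
y = (np.einsum("cnd,cd->cn", X, theta_true)
     + noise_std * rng.normal(size=(num_clients, n_per_client)))

# FedAvg surrogate: average the per-client least-squares solutions.
local_sols = np.stack(
    [np.linalg.lstsq(X[i], y[i], rcond=None)[0] for i in range(num_clients)]
)
theta_fedavg = local_sols.mean(axis=0)

def finetune(theta0, Xi, yi, steps=50, lr=0.1):
    """A few gradient steps on client i's local squared loss."""
    theta = theta0.copy()
    for _ in range(steps):
        grad = Xi.T @ (Xi @ theta - yi) / len(yi)
        theta -= lr * grad
    return theta

def excess_risk(theta_hat, i):
    # For test points x ~ N(0, I), the excess risk over the Bayes predictor
    # is E[(x^T (theta_hat - theta_i))^2] = ||theta_hat - theta_i||^2.
    return np.sum((theta_hat - theta_true[i]) ** 2)

risk_naive = np.mean([excess_risk(theta_fedavg, i) for i in range(num_clients)])
risk_ft = np.mean(
    [excess_risk(finetune(theta_fedavg, X[i], y[i]), i) for i in range(num_clients)]
)
print(f"naive FedAvg mean per-client risk:       {risk_naive:.3f}")
print(f"FedAvg + fine-tuning mean per-client risk: {risk_ft:.3f}")
```

Under this toy heterogeneous setup, the fine-tuned model should show a markedly lower mean per-client risk than the single shared FedAvg model, loosely mirroring the paper's prediction that simple client fine-tuning captures the benefit of personalization.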
