

Byzantine-Robust Federated Learning with Optimal Statistical Rates

Banghua Zhu · Lun Wang · Qi Pang · Shuai Wang · Jiantao Jiao · Dawn Song · Michael Jordan

Auditorium 1 Foyer 159


We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates based on recent progress in high-dimensional robust statistics. In contrast to prior work, our proposed protocols improve the dimension dependence and achieve a tight statistical rate in terms of all the parameters for strongly convex losses. We also provide a matching statistical lower bound for the problem. In experiments, we benchmark against competing protocols and demonstrate the empirical superiority of the proposed protocols.
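To illustrate the setting, here is a minimal sketch of Byzantine-robust aggregation using a coordinate-wise trimmed mean, one of the standard baselines the paper benchmarks against (the paper's own protocols use more refined robust mean estimators; the function name, trimming fraction, and simulated data below are illustrative assumptions, not the authors' code):

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_frac):
    """Coordinate-wise trimmed mean: for each coordinate, sort the values
    reported by the clients, drop a trim_frac fraction from each tail,
    and average the remainder. This bounds the influence of any single
    Byzantine client on each coordinate."""
    updates = np.asarray(updates)           # shape: (n_clients, dim)
    n = updates.shape[0]
    k = int(np.floor(trim_frac * n))        # clients trimmed from each tail
    sorted_updates = np.sort(updates, axis=0)
    if k > 0:
        sorted_updates = sorted_updates[k:n - k]
    return sorted_updates.mean(axis=0)

# Simulated round: honest clients report gradients near the true mean (1.0),
# while Byzantine clients send large outliers to bias a naive average.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
byzantine = np.full((2, 5), 100.0)
all_updates = np.vstack([honest, byzantine])

naive = all_updates.mean(axis=0)                   # heavily biased by outliers
robust = trimmed_mean_aggregate(all_updates, 0.2)  # close to the honest mean
```

With 2 of 10 clients Byzantine, trimming 20% from each tail removes the outliers before averaging, so the robust estimate stays near the honest clients' mean while the naive average is pulled far away. The dimension dependence of such coordinate-wise schemes is one of the aspects the proposed protocols improve on.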
