

BOBA: Byzantine-Robust Federated Learning with Label Skewness

Wenxuan Bao · Jun Wu · Jingrui He

MR1 & MR2 - Number 25
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


In federated learning, most existing robust aggregation rules (AGRs) combat Byzantine attacks in the IID setting, where client data is assumed to be independent and identically distributed. In this paper, we address label skewness, a more realistic and challenging non-IID setting, where each client only has access to a few classes of data. In this setting, state-of-the-art AGRs suffer from selection bias, leading to a significant performance drop for particular classes; they are also more vulnerable to Byzantine attacks due to the increased variation among the gradients of honest clients. To address these limitations, we propose an efficient two-stage method named BOBA. Theoretically, we prove the convergence of BOBA with an error of the optimal order. Our empirical evaluations demonstrate BOBA's superior unbiasedness and robustness across diverse models and datasets when compared to various baselines.
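To make the role of a robust AGR concrete, the sketch below contrasts plain averaging with a classic baseline robust rule, the coordinate-wise median. This is an illustrative toy (the aggregation rule, client gradients, and Byzantine update are all invented for the example) and is not the BOBA algorithm described in the paper:

```python
# Toy illustration of robust aggregation in federated learning.
# Compares the plain mean with a baseline robust AGR (coordinate-wise
# median). This is NOT BOBA; it only shows why a robust rule is needed
# when one client submits a Byzantine (adversarial) gradient.

def mean_agr(grads):
    """Plain averaging: sensitive to a single Byzantine gradient."""
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]

def median_agr(grads):
    """Coordinate-wise median: a simple robust aggregation rule."""
    d = len(grads[0])
    out = []
    for i in range(d):
        vals = sorted(g[i] for g in grads)
        m = len(vals)
        mid = m // 2
        out.append(vals[mid] if m % 2 else (vals[mid - 1] + vals[mid]) / 2)
    return out

# Hypothetical gradients: three honest clients near (1, 2), one attacker.
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
byzantine = [[100.0, -100.0]]
grads = honest + byzantine

print(mean_agr(grads))    # dragged far away from the honest average
print(median_agr(grads))  # stays close to the honest gradients
```

Under label skewness, the honest gradients themselves spread out (each client sees only a few classes), which shrinks the gap between honest and Byzantine updates that such median-style rules rely on; this is the vulnerability the abstract refers to.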
