

Poster

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

Youssef Allouah · Sadegh Farhadkhani · Rachid Guerraoui · Nirupam Gupta · Rafael Pinot · John Stephan

Auditorium 1 Foyer 42

Abstract:

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or \emph{Byzantine}) machines. Although this problem has received significant attention, prior works often assume the data held by the machines to be \emph{homogeneous}, which is seldom true in practical settings. Data \emph{heterogeneity} makes Byzantine ML considerably more challenging, since a Byzantine machine can hardly be distinguished from a non-Byzantine outlier. A few solutions have been proposed to tackle this issue, but they provide suboptimal probabilistic guarantees and fare poorly in practice. This paper closes the theoretical gap, achieving optimality and inducing good empirical results. In fact, we show how to automatically adapt existing solutions for (homogeneous) Byzantine ML to the heterogeneous setting through a powerful mechanism we call \emph{nearest neighbor mixing} (NNM), which boosts any standard robust distributed gradient descent variant to yield optimal Byzantine resilience under heterogeneity. We obtain similar guarantees (in expectation) by plugging NNM into the distributed \emph{stochastic} heavy ball method, a practical substitute for distributed gradient descent. Our empirical results significantly outperform state-of-the-art Byzantine ML solutions.
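To make the NNM idea concrete, below is a minimal sketch, assuming the mechanism replaces each worker's gradient by the average of its n - f nearest neighbors (in Euclidean distance, including itself) before applying any standard robust aggregation rule. The function names, the use of NumPy, and the choice of coordinate-wise median as the downstream aggregator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nearest_neighbor_mixing(gradients: np.ndarray, f: int) -> np.ndarray:
    """Replace each gradient by the mean of its n - f nearest neighbors (itself included).

    gradients: array of shape (n, d), one gradient vector per worker.
    f: number of workers that may be Byzantine.
    """
    n = gradients.shape[0]
    # Pairwise Euclidean distances between all gradient vectors, shape (n, n).
    dists = np.linalg.norm(gradients[:, None, :] - gradients[None, :, :], axis=-1)
    mixed = np.empty_like(gradients)
    for i in range(n):
        # Indices of the n - f gradients closest to gradient i (distance to itself is 0).
        neighbors = np.argsort(dists[i])[: n - f]
        mixed[i] = gradients[neighbors].mean(axis=0)
    return mixed

def robust_aggregate(gradients: np.ndarray, f: int) -> np.ndarray:
    """NNM followed by a placeholder robust aggregator (coordinate-wise median here)."""
    mixed = nearest_neighbor_mixing(gradients, f)
    return np.median(mixed, axis=0)

# Usage sketch: n = 10 workers, f = 2 possibly Byzantine, d = 5 dimensional gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 5))
aggregated = robust_aggregate(grads, f=2)
print(aggregated)
```

The design point the abstract emphasizes is that NNM is a pre-aggregation step: any existing robust rule from the homogeneous setting can be plugged in after the mixing, in place of the median used here.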
