Robustness and scalability under heavy tails, without strong convexity

Matthew Holland

Keywords: [ Learning Theory and Statistics ] [ Statistical Learning Theory ]

[ Abstract ]
Tue 13 Apr 6:30 p.m. PDT — 8:30 p.m. PDT


Real-world data is laden with outlying values. The challenge for machine learning is that the learner typically has no prior knowledge of whether the feedback it receives (losses, gradients, etc.) will be heavy-tailed or not. In this work, we study a simple, cost-efficient algorithmic strategy that can be leveraged when both losses and gradients may be heavy-tailed. The core technique introduces a simple robust validation sub-routine, which is used to boost the confidence of inexpensive gradient-based sub-processes. Compared with recent robust gradient descent methods from the literature, the dependence on dimension (in both risk bounds and computational cost) is substantially improved, without relying upon strong convexity or expensive per-step robustification. We also show empirically that the proposed procedure cannot simply be replaced with naive cross-validation.
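The strategy described above can be illustrated with a minimal sketch: run several inexpensive gradient-based sub-processes on disjoint data folds, then select among the resulting candidates using a robust validation score rather than a plain empirical average. All function names, the least-squares loss, and the median-of-means validation estimator below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_of_means(x, k=5):
    # Robust mean estimate (assumed validation estimator): split the
    # sample into k blocks and take the median of the block means.
    x = rng.permutation(x)
    blocks = np.array_split(x, k)
    return np.median([b.mean() for b in blocks])

def sgd_candidate(X, y, lr=0.01, epochs=5):
    # Cheap gradient-based sub-process; squared loss is an
    # illustrative choice, not the paper's setting.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

def robust_validated_learner(X, y, n_candidates=5):
    # Disjoint training folds per candidate, one shared validation set.
    n = len(y)
    n_val = n // 3
    Xv, yv = X[:n_val], y[:n_val]
    folds = np.array_split(np.arange(n_val, n), n_candidates)
    candidates = [sgd_candidate(X[f], y[f]) for f in folds]
    # "Boost the confidence": score each candidate with a robust
    # (median-of-means) estimate of its validation loss, keep the best.
    scores = [median_of_means((Xv @ w - yv) ** 2) for w in candidates]
    return candidates[int(np.argmin(scores))]

# Toy heavy-tailed regression problem (Student-t noise).
n, d = 600, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.standard_t(df=2.1, size=n)
w_hat = robust_validated_learner(X, y)
```

The point of the final selection step is that a median-of-means validation score is far less sensitive to a few outlying validation losses than the ordinary sample mean used by naive cross-validation.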
