

PAC Learning of Halfspaces with Malicious Noise in Nearly Linear Time

Jie Shen

Auditorium 1 Foyer 80

Abstract: We study the problem of efficient PAC learning of halfspaces in $\mathbb{R}^d$ in the presence of malicious noise, where a fraction of the training samples are adversarially corrupted. A series of recent works have developed polynomial-time algorithms that enjoy near-optimal sample complexity and noise tolerance, yet they leave open whether a {\em linear-time} algorithm exists that matches these appealing statistical guarantees. In this work, we give an affirmative answer by developing an algorithm that runs in time $\tilde{O}(md)$, where $m = \tilde{O}(\frac{d}{\epsilon})$ is the sample size and $\epsilon \in (0, 1)$ is the target error rate. Notably, the computational complexity of every prior algorithm either suffers a high-order dependence on the problem size or is implicitly proportional to $\frac{1}{\epsilon^2}$ through the sample size. Our key idea is to combine localization with an approximate version of the matrix multiplicative weights update method to progressively downweight the contribution of the corrupted samples while refining the learned halfspace.
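To give a feel for the downweighting idea described in the abstract, here is a minimal, illustrative sketch (not the paper's actual algorithm): samples are scored by their squared projection onto the top eigenvector of the weighted empirical covariance, and high-scoring, outlier-like samples are multiplicatively downweighted. The function name `downweight_outliers` and the parameters `eta` and `steps` are assumptions made for this example.

```python
import numpy as np

def downweight_outliers(X, w, eta=0.5, steps=5):
    """Multiplicative-weights-style soft outlier removal (illustrative sketch only).

    X : (n, d) array of samples; w : (n,) array of nonnegative sample weights.
    Each step scores samples by their squared projection onto the top
    eigenvector of the weighted covariance, then multiplicatively shrinks
    the weights of high-scoring (outlier-like) samples.
    """
    for _ in range(steps):
        # Weighted mean and covariance of the current sample weighting.
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        centered = X - mu
        cov = (w[:, None] * centered).T @ centered / w.sum()
        # Top eigenvector: the direction of largest weighted variance.
        _, vecs = np.linalg.eigh(cov)
        v = vecs[:, -1]
        # Outlier score: squared projection onto that direction, normalized.
        scores = (centered @ v) ** 2
        scores = scores / (scores.max() + 1e-12)
        # Multiplicative downweighting of suspicious samples.
        w = w * (1.0 - eta * scores)
        w = w / w.sum() * len(w)  # renormalize to average weight 1
    return w
```

A quick sanity check: seed clean Gaussian samples with a handful of points shifted far along one coordinate axis; after a few rounds, the shifted points carry much smaller weight than the clean ones. The paper's actual method additionally interleaves this kind of reweighting with localization around the current halfspace, which this toy sketch omits.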
