

Poster

NoisyMix: Boosting Model Robustness to Common Corruptions

N. Benjamin Erichson · Soon Hoe Lim · Winnie Xu · Francisco Utrera · Ziang Cao · Michael Mahoney

MR1 & MR2 - Number 80

Abstract:

The robustness of neural networks has become increasingly important in real-world applications, where stable and reliable performance is valued over simply achieving high predictive accuracy. To this end, data augmentation techniques have been shown to improve robustness against input perturbations and domain shifts. In this paper, we propose a new training scheme called NoisyMix that leverages noisy augmentations in both input and feature space to improve model robustness and in-domain accuracy. We demonstrate the effectiveness of NoisyMix on several benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Additionally, we provide a theoretical analysis to better understand the implicit regularization and robustness properties of NoisyMix.
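To make the core idea concrete, here is a minimal PyTorch sketch of a noisy mixup step: a mixup-style convex combination of a batch with a shuffled copy of itself, with additive and multiplicative Gaussian noise injected into the mixture. This is an illustrative reconstruction based only on the abstract, not the authors' implementation; the function name noisy_mixup and the hyperparameters alpha, add_std, and mult_std are assumptions.

import torch

def noisy_mixup(x, y, alpha=1.0, add_std=0.1, mult_std=0.1):
    # Sample a mixing coefficient from a Beta distribution, as in mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Pair each example with a randomly chosen partner from the same batch.
    perm = torch.randperm(x.size(0))
    # Convexly combine the batch with its shuffled copy.
    x_mix = lam * x + (1 - lam) * x[perm]
    # Inject multiplicative and additive noise into the mixture
    # (the "noisy" component of the augmentation).
    x_mix = x_mix * (1 + mult_std * torch.randn_like(x_mix)) \
            + add_std * torch.randn_like(x_mix)
    return x_mix, y, y[perm], lam

# Usage: mix a batch, then interpolate the loss between the two label
# sets with the same coefficient, as in standard mixup training.
x, y = torch.randn(32, 3, 32, 32), torch.randint(0, 10, (32,))
x_mix, y_a, y_b, lam = noisy_mixup(x, y)
# loss = lam * criterion(model(x_mix), y_a) + (1 - lam) * criterion(model(x_mix), y_b)

Per the abstract, NoisyMix applies such noisy mixing not only to the inputs but also in feature space, i.e., to intermediate representations of the network.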
