Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations

Hanjiang Hu · Zuxin Liu · Linyi Li · Jiacheng Zhu · Ding Zhao

MR1 & MR2 - Number 72
Thu 2 May 8 a.m. PDT — 8:30 a.m. PDT


Deep learning-based visual perception models lack robustness when faced with camera motion perturbations in practice. The current certification process for assessing robustness is costly and time-consuming due to the extensive number of image projections required for Monte Carlo sampling in the 3D camera motion space. To address these challenges, we present a novel, efficient, and practical framework for certifying the robustness of 3D-2D projective transformations against camera motion perturbations. Our approach applies a smoothing distribution over the 2D pixel space instead of the 3D physical space, eliminating the need for costly camera motion sampling and significantly improving the efficiency of robustness certification. With the pixel-wise smoothed classifier, we can fully upper bound the projection errors using a technique of uniform partitioning in camera motion space. Additionally, we extend our certification framework to a more general scenario where only a single-frame point cloud is required in the projection oracle. Through extensive experimentation, we validate the trade-off between effectiveness and efficiency enabled by our proposed method. Remarkably, our approach achieves approximately 80% certified accuracy while utilizing only 30% of the projected image frames.
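The pixel-wise smoothed classifier at the core of the abstract can be illustrated with a short sketch: noise is drawn per pixel in 2D image space and the base classifier's predictions are aggregated by majority vote, so no 3D camera-motion sampling or re-projection is needed at prediction time. The function and toy classifier below are hypothetical stand-ins for illustration, not the paper's implementation.

```python
import numpy as np

def smoothed_predict(classifier, image, sigma=0.25, n_samples=100, seed=None):
    """Monte Carlo estimate of a pixel-wise smoothed classifier.

    Adds i.i.d. Gaussian noise in the 2D pixel space (rather than sampling
    camera motions in 3D) and returns the majority-vote class label.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = image + rng.normal(0.0, sigma, size=image.shape)
        label = classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical base classifier: class 1 if the mean pixel value is >= 0.5.
toy_classifier = lambda img: int(img.mean() >= 0.5)

image = np.full((8, 8), 0.9)  # bright toy image, far from the decision boundary
print(smoothed_predict(toy_classifier, image, sigma=0.1, n_samples=200, seed=0))
```

Certification would additionally require bounding how far a camera motion perturbation can move each pixel (the paper's uniform partitioning of the camera motion space), which this prediction-only sketch omits.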
