

Poster

Unsupervised Novelty Detection in Pretrained Representation Space with Locally Adapted Likelihood Ratio

Amirhossein Ahmadian · Yifan Ding · Gabriel Eilertsen · Fredrik Lindsten

Multipurpose Room 2 - Number 138

Abstract:

Detecting novelties given unlabeled examples of normal data is a challenging task in machine learning, particularly when the novel and normal categories are semantically close. Large deep models pretrained on massive datasets can provide a rich representation space in which the simple k-nearest neighbor distance works as a novelty measure. However, as we show in this paper, the basic k-NN method can be insufficient in this context because it ignores the 'local geometry' of the distribution over representations as well as the impact of irrelevant 'background features'. To address this, we propose a fully unsupervised novelty detection approach that integrates the flexibility of k-NN with a locally adapted scaling of dimensions, based on the 'neighbors of nearest neighbor', and computes a 'likelihood ratio' in pretrained (self-supervised) representation spaces. Our experiments with image data show the advantage of this method when off-the-shelf vision transformers (e.g., pretrained by DINO) are used as the feature extractor without any fine-tuning.
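To make the setup concrete, below is a minimal sketch of k-NN novelty scoring over pretrained features, together with a crude locally scaled variant in the spirit of the 'neighbors of nearest neighbor' idea. This is an illustration under stated assumptions, not the authors' algorithm: the paper's locally adapted likelihood ratio is more elaborate, and the function names and the default `k` here are hypothetical.

```python
# Sketch only: plain k-NN novelty scoring plus a crude local-scaling variant.
# NOT the authors' exact method; names and defaults are illustrative.
# Features could come from an off-the-shelf backbone (e.g., DINO ViT), frozen.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_novelty_score(train_feats, test_feats, k=5):
    """Baseline: distance to the k-th nearest normal example (larger = more novel)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(test_feats)
    return dists[:, -1]


def locally_scaled_score(train_feats, test_feats, k=5):
    """Illustrative local adaptation: divide each test point's k-NN distance by
    the k-NN radius measured around its nearest normal neighbor, so regions
    where normal data is naturally sparse are not over-flagged as novel."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    test_d, test_idx = nn.kneighbors(test_feats)
    # k-NN radii within the normal data itself (k+1 neighbors to skip self)
    train_d, _ = nn.kneighbors(train_feats, n_neighbors=k + 1)
    local_radius = train_d[:, -1][test_idx[:, 0]]
    return test_d[:, -1] / (local_radius + 1e-12)
```

In this sketch, `train_feats` would hold representations of unlabeled normal images extracted by a frozen pretrained backbone, and test points with large scores are flagged as novel; the paper's method additionally adapts the scaling per dimension and casts the score as a likelihood ratio rather than a plain distance ratio.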
