DMR: Disentangling Marginal Representations for Out-of-Distribution Detection

Abstract

Out-of-distribution (OOD) detection is crucial for the reliable deployment of deep-learning applications. When a given input image does not belong to any of the categories of the deployed classification model, the model is expected to alert the user that its predictions may be unreliable. Recent studies have shown that utilizing a large amount of explicit OOD training data helps improve OOD detection performance. However, collecting explicit real-world OOD data is burdensome, and pre-defining all out-of-distribution labels is fundamentally difficult. In this work, we present a novel method, Disentangling Marginal Representations (DMR), that generates artificial OOD training data by extracting marginal features from images of an In-Distribution (ID) training dataset and manipulating these extracted marginal representations. DMR is intuitive and serves as a realistic solution that does not require any extra real-world OOD data. Moreover, our method can be applied directly to pre-trained classifier networks without affecting the original classification performance. We demonstrate that a shallow rejection network, trained on a small subset of the synthesized OOD training data generated by our method and attachable to the classifier network, achieves superior OOD detection performance. With extensive experiments, we show that our proposed method significantly outperforms state-of-the-art OOD detection methods on the widely used CIFAR-10 and CIFAR-100 detection benchmarks. We also demonstrate that our proposed method can be further improved when combined with existing methods.
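
The abstract describes attaching a shallow rejection network to a frozen, pre-trained classifier and training it only to separate ID images from the synthesized OOD images. The sketch below is a minimal PyTorch illustration of that setup, not the authors' implementation: the head architecture, the feature dimension, and the assumption that the backbone exposes penultimate-layer features are placeholders, and the DMR feature-extraction and manipulation step itself is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): attach a shallow rejection
# head to a frozen, pre-trained classifier and train it to separate ID images
# from synthesized OOD images. The backbone stays frozen, so the original
# classification performance is unchanged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RejectionHead(nn.Module):
    """Hypothetical shallow binary head: ID (label 0) vs. OOD (label 1)."""
    def __init__(self, feat_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

def train_step(backbone, head, optimizer, x_id, x_ood, device="cuda"):
    """One step on a batch of ID images and synthesized OOD images."""
    backbone.eval()                       # classifier stays frozen
    x = torch.cat([x_id, x_ood]).to(device)
    y = torch.cat([torch.zeros(len(x_id)), torch.ones(len(x_ood))]).long().to(device)
    with torch.no_grad():
        feats = backbone(x)               # penultimate-layer features (assumed API)
    loss = F.cross_entropy(head(feats), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```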

Dataset Configuration

  • We use all 50,000 $\mathcal{X}^{ID}$ training images from CIFAR-10 and CIFAR-100 as the ID datasets.
  • The 50,000 synthesized $\mathcal{X}^{OOD}_{train}$ images are used as the OOD training dataset for KIRBY and DMR.
  • OOD datasets for evaluation: (1) SVHN, (2) Textures, (3) LSUN-Crop, (4) Tiny-ImageNet, (5) Places-365, (6) Gaussian Noise.
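
A hedged sketch of how the ID and synthesized OOD training sets listed above might be loaded with torchvision. The directory name `dmr_generated/` and the ImageFolder layout are assumptions for illustration, not the repository's actual paths.

```python
# Illustrative only: CIFAR-10 as the ID training set and a local folder of
# synthesized images as the OOD training set. "./dmr_generated" is a
# placeholder path, not part of this repository.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])

id_train = datasets.CIFAR10(root="./data", train=True, download=True,
                            transform=transform)          # 50,000 ID images
ood_train = datasets.ImageFolder(root="./dmr_generated",
                                 transform=transform)      # 50,000 synthesized OOD images

id_loader = DataLoader(id_train, batch_size=128, shuffle=True, num_workers=4)
ood_loader = DataLoader(ood_train, batch_size=128, shuffle=True, num_workers=4)
```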

OOD Detection Methods

Post-hoc methods
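
The specific post-hoc baselines are not listed in this README. For orientation only, the sketch below shows two widely used post-hoc scores computed from a classifier's logits, maximum softmax probability (MSP) and the energy score; whether these particular baselines are the ones compared in the paper is an assumption.

```python
# Representative post-hoc OOD scores computed from classifier logits (shown
# only as examples of this method family; the exact baselines used in the
# paper may differ). Higher score = more ID-like.
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (Hendrycks & Gimpel, 2017)."""
    return F.softmax(logits, dim=1).max(dim=1).values

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Negative free energy (Liu et al., 2020); higher for ID inputs."""
    return temperature * torch.logsumexp(logits / temperature, dim=1)
```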

Training methods that utilize the OOD training data

  • OE (Outlier Exposure) (ICLR 2019) [Paper] (see the loss sketch after this list)
  • KIRBY (AAAI 2023) [Paper]
  • Ours
    • OOD training image generation code:
    • Generated OOD training images (.zip):
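
As referenced in the OE item above, the sketch below illustrates the Outlier Exposure objective: standard cross-entropy on ID samples plus a term that pushes the softmax output on OOD training images toward the uniform distribution. The weight `lam = 0.5` follows the original OE paper; how this plugs into this repository's training loop is assumed.

```python
# Sketch of the Outlier Exposure (OE) objective (Hendrycks et al., ICLR 2019):
# cross-entropy on ID data plus a uniform-distribution term on OOD data.
import torch.nn.functional as F

def oe_loss(logits_id, targets_id, logits_ood, lam: float = 0.5):
    id_loss = F.cross_entropy(logits_id, targets_id)
    # Cross-entropy between the uniform distribution and the model's softmax
    # on outliers: -(1/K) * sum_k log p_k, averaged over the OOD batch.
    ood_loss = -F.log_softmax(logits_ood, dim=1).mean(dim=1).mean()
    return id_loss + lam * ood_loss
```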

Evaluations
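
This README does not spell out the evaluation protocol. As an assumption, OOD detection benchmarks commonly report AUROC and the false-positive rate at 95% true-positive rate (FPR95); the sketch below computes both from per-sample ID/OOD scores using scikit-learn.

```python
# Hedged sketch of standard OOD detection metrics (AUROC, FPR at 95% TPR);
# the metrics actually reported by the paper are assumed, not confirmed here.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(scores_id: np.ndarray, scores_ood: np.ndarray):
    """Scores are 'ID-ness' scores: higher means more in-distribution."""
    labels = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    scores = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    # FPR at the first threshold where the ID true-positive rate reaches 95%.
    idx = min(np.searchsorted(tpr, 0.95), len(fpr) - 1)
    return auroc, fpr[idx]
```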

Citation

@inproceedings{choi2024dmr,
  title={DMR: Disentangling Marginal Representations for Out-of-Distribution Detection},
  author={Choi, Dasol and Na, Dongbin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4032--4041},
  year={2024}
}
