
hanoonaR/OWOD


Accepted to CVPR 2021 as an ORAL paper

The figure shows how our newly formulated Open World Object Detection setting relates to existing settings.

Abstract

Humans have a natural instinct to identify unknown object instances in their environments. The intrinsic curiosity about these unknown instances aids in learning about them, when the corresponding knowledge is eventually available. This motivates us to propose a novel computer vision problem called: Open World Object Detection, where a model is tasked to:

  1. Identify objects that have not been introduced to it as 'unknown', without explicit supervision to do so, and
  2. Incrementally learn these identified unknown categories without forgetting previously learned classes, when the corresponding labels are progressively received.

We formulate the problem, introduce a strong evaluation protocol and provide a novel solution, which we call ORE: Open World Object Detector, based on contrastive clustering and energy based unknown identification. Our experimental evaluation and ablation studies analyse the efficacy of ORE in achieving Open World objectives. As an interesting by-product, we find that identifying and characterising unknown instances helps to reduce confusion in an incremental object detection setting, where we achieve state-of-the-art performance, with no extra methodological effort. We hope that our work will attract further research into this newly identified, yet crucial research direction.

A sample qualitative result

Sub-figure (a) shows the result produced by our method after learning a set of classes that does not include classes like apple and orange. We are able to identify these instances and correctly label them as unknown. Later, when the model is eventually taught to detect apple and orange, these instances are labelled correctly, as seen in sub-figure (b), without forgetting how to detect person. An unidentified class instance still remains, and is successfully detected as an unknown.

Installation

See INSTALL.md.

Quick Start

Some bookkeeping, such as removing hard-coded local paths, still needs to be done in the code. We will update it shortly.

Data split and trained models: [Google Drive Link 1] [Google Drive Link 2]

All config files can be found in: configs/OWOD

Sample command on a 4 GPU machine:

python tools/train_net.py --num-gpus 4 --config-file <Change to the appropriate config file> SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.005
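A minimal sketch of how the training command above might be driven across the four incremental tasks. The config file names below are assumptions for illustration; check configs/OWOD for the actual names in your checkout. By default the script only prints the commands (dry run); set DRY_RUN=0 to actually launch training.

```shell
# Sketch: launch training for each OWOD task config in sequence.
# Config paths are hypothetical -- verify against configs/OWOD.
NUM_GPUS=4
DRY_RUN=${DRY_RUN:-1}   # 1 = just print commands, 0 = run them

for cfg in t1/t1_train.yaml t2/t2_train.yaml t3/t3_train.yaml t4/t4_train.yaml; do
  cmd="python tools/train_net.py --num-gpus ${NUM_GPUS} --config-file configs/OWOD/${cfg} SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.005"
  if [ "${DRY_RUN}" = "1" ]; then
    echo "${cmd}"       # dry run: show what would be executed
  else
    eval "${cmd}"       # actually start training
  fi
done
```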

Run replicate.sh to replicate results from the models shared on the Google Drive.

Check the run.sh file for a task workflow.

Replicating Results

The Dockerfile provides a Detectron2 environment in which to run the scripts. Before running the scripts,

  1. Download the dataset (Annotations and JPEGImages) from the Google Drive link above and extract it into the datasets/VOC2007 folder.

  2. To replicate results with the provided pretrained models, download them from owod_backup at the provided [Google Drive Link], create a 'workspace/output' directory, and place the corresponding pretrained models for task 1 to task 4 in the t1_ft to t4_ft folders.

  3. Build the Docker image using the command docker build -t your_directory:owod .

  4. To replicate results with the provided pretrained models, run the modified replicate script: ./scripts/run_docker.sh scripts/test.sh.

  5. To retrain the model, run the modified run script: ./scripts/run_docker.sh scripts/train.sh.
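Steps 1 and 2 above can be sketched as the following directory preparation. All paths are assumptions inferred from the instructions (a temporary directory stands in for the repository root); adjust to match your checkout before building the Docker image.

```shell
# Sketch of the directory layout the replication scripts appear to expect.
ROOT=$(mktemp -d)   # stand-in for the repository root

# Step 1: the extracted dataset goes under datasets/VOC2007.
mkdir -p "${ROOT}/datasets/VOC2007/Annotations"
mkdir -p "${ROOT}/datasets/VOC2007/JPEGImages"

# Step 2: pretrained models go under workspace/output, one folder per task.
# The t1_ft .. t4_ft names come from the instructions above.
for t in t1_ft t2_ft t3_ft t4_ft; do
  mkdir -p "${ROOT}/workspace/output/${t}"
done

ls "${ROOT}/workspace/output"
```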

Acknowledgement

Our code base is built on top of the Detectron2 library.

Citation

If you use our work in your research, please cite us:

@inproceedings{joseph2021open,
  title={Towards Open World Object Detection},
  author={K J Joseph and Salman Khan and Fahad Shahbaz Khan and Vineeth N Balasubramanian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)},
  eprint={2103.02603},
  archivePrefix={arXiv},
  year={2021}
}
