ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
Competition
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) evaluates algorithms for object detection and image classification at large scale. One high-level motivation is to allow researchers to compare progress in detection across a wider variety of objects, taking advantage of the expensive labeling effort. Another motivation is to measure the progress of computer vision for large-scale image indexing for retrieval and annotation. For details about each challenge, please refer to the corresponding page.
Workshop
Every year of the challenge there is a corresponding workshop at one of the premier computer vision conferences. The purpose of the workshop is to present the methods and results of the challenge. Challenge participants with the most successful and innovative entries are invited to present. Please visit the corresponding challenge page for the workshop schedule and information.
Download
The most popular challenge is the ILSVRC 2012-2017 image classification and localization task, which is available on Kaggle. For all other data, please log in or request access.
Evaluation Server
The evaluation server can be used to evaluate image classification results on the test set of ILSVRC 2012-2017. Please see here for our submission policy. Importantly, you should not make more than 2 submissions per week.
Updates
- October 10, 2019: The ILSVRC 2012 classification and localization test set has been updated. The Kaggle challenge and our download page both now contain the updated data.
- June 2, 2015: Follow-up update regarding status of the server
- May 19, 2015: Announcement regarding the submission server
Citation
When reporting results of the challenges or using the datasets, please cite:
Olga Russakovsky*, Jia Deng*, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei. (* = equal contribution) ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
paper | bibtex | paper content on arxiv | attribute annotations
Additional references
These are some additional publications directly related to collecting the challenge dataset and evaluating the results. These papers are all discussed in the main paper above. Please refer to the individual challenge webpages for information about the most successful entries, and to the ImageNet publications page for a complete list of publications.
- J. Deng, O. Russakovsky, J. Krause, M. Bernstein, A. Berg, L. Fei-Fei. Scalable multi-label annotation. ACM conference on human factors in computing (CHI), 2014. pdf | bibtex | slides
- O. Russakovsky, J. Deng, Z. Huang, A. Berg and L. Fei-Fei, Detecting avocados to zucchinis: what have we done, and where are we going?, Proceedings of the International Conference of Computer Vision (ICCV). 2013. pdf | supplement | website | bibtex | slides | video
- H. Su, J. Deng, L. Fei-Fei. Crowdsourcing Annotations for Visual Object Detection. AAAI Human Computation Workshop, 2012. pdf
- J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009. pdf | bibtex