History and Background of the VOC Challenge

The main challenges have run each year since 2005. For more background on VOC, the following journal paper discusses some of the choices we made and our experience in running the challenge, and gives a more in-depth discussion of the 2007 methods and results:

Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J. and Zisserman, A.
The PASCAL Visual Object Classes (VOC) Challenge
International Journal of Computer Vision, 88(2), 303-338, 2010

The table below gives a brief summary of the main stages of the VOC development.
2005
  Statistics: Only 4 classes: bicycles, cars, motorbikes, people. Train/validation/test: 1,578 images containing 2,209 annotated objects.
  New developments: Two competitions: classification and detection.
  Notes: Images were largely taken from existing public datasets, and were not as challenging as the flickr images subsequently used. This dataset is obsolete.

2006
  Statistics: 10 classes: bicycle, bus, car, cat, cow, dog, horse, motorbike, person, sheep. Train/validation/test: 2,618 images containing 4,754 annotated objects.
  New developments: Images from flickr and from the Microsoft Research Cambridge (MSRC) dataset.
  Notes: The MSRC images were easier than flickr, as the photos often concentrated on the object of interest. This dataset is obsolete.

2007
  Statistics: 20 classes:
    • Person: person
    • Animal: bird, cat, cow, dog, horse, sheep
    • Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
    • Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor
  Train/validation/test: 9,963 images containing 24,640 annotated objects.
  New developments:
    • Number of classes increased from 10 to 20.
    • Segmentation taster introduced.
    • Person layout taster introduced.
    • Truncation flag added to annotations.
    • Evaluation measure for the classification challenge changed to Average Precision (AP); previously it had been ROC-AUC.
  Notes: This year established the 20 classes, which have been fixed since then. This was the final year that annotation was released for the test data.

2008
  Statistics: 20 classes. The data is split (as usual) around 50% train/val and 50% test. The train/val data has 4,340 images containing 10,363 annotated objects.
  New developments:
    • Occlusion flag added to annotations.
    • Test data annotation no longer made public.

2009
  Statistics: 20 classes. The train/val data has 7,054 images containing 17,218 ROI annotated objects and 3,211 segmentations.
  New developments:
    • From now on the data consists of the previous years' images augmented with new images; in earlier years an entirely new dataset was released each year. Augmenting allows the number of images to grow each year, and means that test results can be compared on the previous years' images.
    • Segmentation becomes a standard challenge (promoted from a taster).
    • No difficult flags were provided for the additional images (an omission).
    • Test data annotation not made public.

2010
  Statistics: 20 classes. The train/val data has 10,103 images containing 23,374 ROI annotated objects and 4,203 segmentations.
  New developments:
    • Action Classification taster introduced.
    • Associated challenge on large-scale classification introduced, based on ImageNet.
    • Amazon Mechanical Turk used for early stages of the annotation.
    • Method of computing AP changed: now uses all data points rather than TREC-style sampling.
    • Test data annotation not made public.

Best Practice

To train and develop algorithms for the challenge, all development, e.g. feature selection and parameter tuning, must use the "trainval" (training + validation) set alone. One way is to divide the set into training and validation sets (as suggested in the development kit). Other schemes, e.g. n-fold cross-validation, are equally valid. The tuned algorithms should then be run only once on the test data. In VOC2007 we made all annotations available (i.e. for training, validation and test data), but since then we have not made the test annotations available. Instead, results on the test data are submitted to an evaluation server.
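The n-fold cross-validation scheme mentioned above can be sketched as follows. This is a minimal illustration using only NumPy; the fold count, seed, and the commented `fit`/`evaluate` helpers are hypothetical placeholders, not part of the VOC development kit.

```python
import numpy as np

def kfold_indices(n_items, n_folds=5, seed=0):
    """Split trainval indices into n_folds disjoint folds; each fold
    serves once as the validation set, the remainder as training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_items)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, val

# All tuning happens on trainval; the test set is touched exactly once
# at the end. fit() and evaluate() stand in for your own pipeline:
# scores = [evaluate(fit(tr), va) for tr, va in kfold_indices(n_trainval)]
```

The key property is that every trainval image appears in exactly one validation fold, so no test image ever influences parameter choices.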

Since algorithms should only be run once on the test data, we strongly discourage multiple submissions to the server (and indeed the number of submissions for the same algorithm is strictly controlled), as the evaluation server should not be used for parameter tuning.

We encourage you always to publish test results on the latest release of the challenge, using the output of the evaluation server. If you wish to compare methods or design choices, e.g. subsets of features, there are two options: (i) we suggest you use the entire VOC2007 data, where all annotations are available; (ii) you may report cross-validation results using the latest "trainval" set alone.