Visual Recognition Challenge
(Caltech 256 and PASCAL VOC2007)
ICCV'07 Workshop, Monday 15th October 2007

Note: This workshop will start after the 4th International Workshop on Object Categorization -- they will not clash.
4.30-6.30pm PASCAL VOC2007
Chair: Andrew Zisserman
4.30-4.55: Overview and results of classification challenge [PDF] -- Mark Everingham
4.55-5.10: Classification method [PDF] -- Marcin Marszałek (INRIA, Grenoble)
5.10-5.30: Overview and results of detection challenge [PDF] -- Mark Everingham
5.30-5.45: Detection method [PDF] -- Ondrej Chum (University of Oxford)
5.45-6.00: Detection method [PDF] -- Deva Ramanan (Toyota Technological Institute, Chicago). NB: This is as yet unpublished work; please contact the authors if you plan to make use of any of the ideas presented.
6.00-6.20: Overview of segmentation and layout taster challenges [PDF] -- Mark Everingham
6.20-6.30: Discussion

6.45-8pm Caltech 256
Chair: Pietro Perona
There are two datasets that are becoming standard for measuring visual recognition performance in vision papers: the Caltech dataset and the PASCAL Visual Object Classes Challenge datasets. For 2007, both have released new, more challenging versions, for example with more classes. The objective of this workshop is to compare the best recognition methods on both datasets.
The workshop will be divided into two parts, one devoted to the PASCAL VOC2007 challenge and the other to the new Caltech 256 classes dataset. In each part there will be overview talks summarizing the competition and results, and announcing the winners. Participants who have performed well will give talks on their methods.
The PASCAL VOC2007 challenge workshop
Organizers: Mark Everingham (Leeds), Luc van Gool (Zurich), Chris Williams (Edinburgh), John Winn (Microsoft, Cambridge), Andrew Zisserman (Oxford).
A new database has been prepared, consisting of 20 classes with about 25,000 annotated instances in total. The images are obtained from Flickr. The classes include people, cats, dogs, cars, motorbikes, bottles and sofas. Each annotated instance has a rectangular bounding box and flags indicating pose and level of difficulty.
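For concreteness, the snippet below sketches how such an annotation file could be read, assuming the usual VOC-style XML layout (one object element per instance with name, difficult and bndbox children). The function name and exact tag handling are illustrative assumptions, not taken from the official development kit.

    import xml.etree.ElementTree as ET

    def read_voc_annotation(xml_path):
        """Return a list of (class_name, bbox, difficult) tuples for one image.

        bbox is (xmin, ymin, xmax, ymax) in pixel coordinates.
        Assumes a VOC-style XML layout; adjust the tag names if the
        released development kit differs.
        """
        tree = ET.parse(xml_path)
        objects = []
        for obj in tree.findall("object"):
            name = obj.find("name").text                 # e.g. "person", "car"
            difficult = int(obj.find("difficult").text)  # 1 = flagged as difficult
            box = obj.find("bndbox")
            bbox = tuple(int(float(box.find(tag).text))
                         for tag in ("xmin", "ymin", "xmax", "ymax"))
            objects.append((name, bbox, difficult))
        return objects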
As in previous challenges, there are two main competitions: one testing image classification ("does the image contain an instance of this class?") and one testing object detection ("provide a bounding box for each instance of the class, if any"). In addition, two 'taster' competitions have been introduced this year: the first evaluates object layout in more detail ("detect the hands, feet, etc. for a person"), and the second evaluates object segmentation at the pixel level.
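As a rough illustration of how detection entries are scored, the sketch below computes the intersection-over-union overlap between a predicted and a ground-truth bounding box; in the VOC challenges a detection is typically counted as correct when this overlap exceeds 50%. The box format and threshold here are assumptions for illustration, not a substitute for the official evaluation code.

    def iou(box_a, box_b):
        """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
        ix_min = max(box_a[0], box_b[0])
        iy_min = max(box_a[1], box_b[1])
        ix_max = min(box_a[2], box_b[2])
        iy_max = min(box_a[3], box_b[3])
        inter = max(ix_max - ix_min, 0.0) * max(iy_max - iy_min, 0.0)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A detection overlapping a ground-truth box by more than 0.5 would count as correct.
    print(iou((10, 10, 110, 110), (50, 50, 150, 150)))  # ~0.22, i.e. not a match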
Full details of the database and how to enter the competition are available on the challenge webpage.
The Caltech 256 workshop
Organizers: Pietro Perona, Gregory Griffin and Merrielle Spain (Caltech).
The Caltech 256 database has more images per class than Caltech 101 and includes a background category. Details of the dataset and how to enter are available on the Caltech 256 webpage.