On Feature Combination for Multiclass Object Detection

Introduction

This webpage collects additional information, results, and source code for the paper On Feature Combination for Multiclass Object Detection, published at ICCV 2009. For any comments, please contact the authors.

Results

The following tables contain the numerical results of the various methods on the Caltech101 and Caltech256 datasets. The first table lists the latest results using all kernel matrices I have available; I will update these numbers as new image features become available. Please note that these numbers are higher than those reported in the ICCV paper.

Results on Caltech101
Results on Caltech101 (10/11/09): classification accuracy for a varying number of training images per category. These numbers were obtained using all kernel matrices described in the paper plus nine additional ones from Vedaldi et al. (geometric blur, PHOW gray/color, self-similarity).

# training images    5            10           15           20           25           30
LP-beta          59.5 ± 0.2   69.2 ± 0.4   74.6 ± 1.0   77.6 ± 0.3   79.6 ± 0.4   82.1 ± 0.3
MKL              46.5 ± 1.1   59.2 ± 0.5   66.0 ± 0.9   70.8 ± 0.9   74.3 ± 0.8   77.7 ± 0.5
average          47.1 ± 0.6   58.6 ± 0.7   65.3 ± 1.1   69.0 ± 0.8   71.9 ± 1.0   74.8 ± 0.8
product          46.2 ± 0.7   57.7 ± 0.7   64.2 ± 1.0   68.2 ± 0.8   71.1 ± 0.6   74.1 ± 1.0
best feature     45.9 ± 0.9   55.3 ± 0.7   61.0 ± 0.2   64.1 ± 0.6   66.9 ± 0.6   69.4 ± 1.0
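The "average" and "product" rows are the two parameter-free baseline combinations discussed in the paper: the unweighted mean of the per-feature kernel matrices and their elementwise product. As a rough sketch (not the paper's own code; the toy kernels here are randomly generated stand-ins for the precomputed Caltech gram matrices), the two baselines amount to:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_kernel(n, d):
    # Linear kernel on random features: symmetric and positive semidefinite.
    X = rng.standard_normal((n, d))
    return X @ X.T

# Two stand-in gram matrices, one per image feature.
K1 = toy_kernel(6, 3)
K2 = toy_kernel(6, 3)

# "average" baseline: unweighted mean of the kernel matrices.
K_avg = (K1 + K2) / 2.0

# "product" baseline: elementwise (Schur) product, which is again
# a valid kernel by the Schur product theorem.
K_prod = K1 * K2

print(K_avg.shape, K_prod.shape)
```

The combined matrices can then be fed to any kernel classifier; MKL and LP-beta replace the fixed weights with learned ones.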

Numerical results from the paper: the following results correspond to those published in the ICCV paper. For the raw numbers used to create all other plots therein, please see the additional material.

Caltech101: classification accuracy for a varying number of training images per category. These numbers are those used for Figure 2d) in the ICCV paper.

Method / # training images                5            10           15           20           25           30
LP-beta                               54.2 ± 0.6   65.0 ± 0.9   70.4 ± 0.7   73.6 ± 0.6   75.7 ± 0.6   77.8 ± 0.4
LP-B                                  46.5 ± 0.9   59.7 ± 0.7   66.7 ± 0.6   71.1 ± 1.2   73.8 ± 0.3   77.2 ± 0.9
MKL                                   42.1 ± 1.2   55.1 ± 0.7   62.3 ± 0.8   67.1 ± 0.9   70.5 ± 0.8   73.7 ± 0.7
average                               44.4 ± 0.6   55.7 ± 0.5   62.2 ± 1.1   66.1 ± 1.0   68.9 ± 1.0   71.6 ± 1.5
product                               43.6 ± 0.7   54.7 ± 0.5   61.3 ± 0.9   65.4 ± 0.8   68.3 ± 0.7   71.3 ± 1.4
best feature                          46.1 ± 0.9   55.6 ± 0.5   61.0 ± 0.2   64.3 ± 0.9   66.9 ± 0.8   69.4 ± 0.4
Boiman, Shechtman and Irani (CVPR08)  56.9         -            72.8         -            -            79.1
Zhang, Berg, Maire and Malik (CVPR06) 46           54.8         59.1         61.6         -            66.2
Lazebnik, Schmid and Ponce (CVPR06)   -            -            56.4         -            -            64.6
Wang, Zhang and Fei-Fei (CVPR06)      19.5         -            44.5         50           56           63
Grauman and Darrell (ICCV05)          34.8         44           50           53.5         55.5         58.2
Mutch and Lowe (CVPR06)               -            -            51           -            -            56
Pinto, Cox and DiCarlo (PLOS08)       47.9         56.8         61.4         -            -            67.4
Griffin, Holub and Perona (TR06)      44.2         54.2         59.4         63.3         65.8         67.3

Caltech256 results:

Caltech256: classification accuracy for a varying number of training images per category. These numbers are those used for Figure 2f) in the ICCV paper.

Method / # training images                5      10     15     20     25     30     40     50
LP-beta                               16.7   30.4   34.2   40.6   42.8   45.8   48.9   50.8
MKL                                   17.7   25.7   30.6   33.7   34.8   35.6   34.1   -
average                               20.6   28.7   33.4   36.2   39.1   41.5   44.4   47.0
product                               19.7   28.0   32.6   35.6   38.4   40.7   43.4   45.3
Boiman, Shechtman and Irani (CVPR08)  22.8   31.2   -      38.3   -      42.7   -      -
Griffin, Holub and Perona (TR06)      18.7   25.0   28.4   31.3   33.2   34.2   -      39.0
Pinto, Cox and DiCarlo (PLOS08)       -      -      24     -      -      -      -      -

Download

The software for both SimpleMKL and our Infinite Kernel Learning algorithm can be downloaded below. The package includes the source code and pre-compiled binaries for the Linux/x86 and Linux/x86-64 architectures. For each method there is an example file demonstrating its usage.

  • Kernel matrices: directory of gram matrices. Please do read the Readme file.
  • Multiclass LP code: source code, pre-compiled binaries and demo files.
  • Experiments code: Matlab scripts to steer the experimental setup.
  • Caltech 101/256 features: the Caltech features used in the experiments. Please read the README.features for details on what is provided.

License: The software is licensed under the BSD License. A copy of the license documents is included in the distribution.
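The downloadable gram matrices can be used with any classifier that accepts precomputed kernels, not only the distributed Matlab code. As an illustrative sketch (scikit-learn and the toy data below are my assumptions, not part of the distribution; in practice you would load one of the downloaded gram matrices instead):

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for a precomputed gram matrix: two well-separated classes.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 4))
X[:4] += 3.0                      # shift class 0 away from class 1
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

K = X @ X.T                       # (n_train, n_train) gram matrix

clf = SVC(kernel="precomputed", C=10.0)
clf.fit(K, y)

# At test time, rows index test points and columns index training points;
# here we simply evaluate on the training set itself.
pred = clf.predict(K)
print(pred)
```

The same pattern applies to averaged or multiplied kernel combinations: build the combined gram matrix first, then pass it to the classifier.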

Publications

Contact

If you have comments or questions, please feel free to contact us. Thanks!