Multiple view feature descriptors from image sequences via kernel principal component analysis

Jason Meltzer, Ming Hsuan Yang, Rakesh Gupta, Stefano Soatto

Research output: Contribution to journal › Article › peer-review



We present a method for learning feature descriptors using multiple images, motivated by the problems of mobile robot navigation and localization. The technique uses the relative simplicity of small baseline tracking in image sequences to develop descriptors suitable for the more challenging task of wide baseline matching across significant viewpoint changes. The variations in the appearance of each feature are learned using kernel principal component analysis (KPCA) over the course of image sequences. An approximate version of KPCA is applied to reduce the computational complexity of the algorithms and yield a compact representation. Our experiments demonstrate robustness to wide appearance variations on non-planar surfaces, including changes in illumination, viewpoint, scale, and geometry of the scene.
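The core idea, learning a compact subspace of each feature's appearance across tracked views with kernel PCA, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the RBF kernel, the patch dimensionality, and the toy data are assumptions, and the test-kernel centering step is omitted for brevity.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def kpca_fit(patches, n_components=8, gamma=0.1):
    """Fit kernel PCA on the tracked appearances of one feature
    (one row per small-baseline view of its image patch)."""
    K = rbf_kernel(patches, patches, gamma)
    n = K.shape[0]
    # Center the kernel matrix in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so projections are properly normalized
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return patches, alphas, gamma

def kpca_project(model, query):
    """Project new patches onto the learned subspace to obtain
    a compact multi-view descriptor."""
    train, alphas, gamma = model
    K = rbf_kernel(query, train, gamma)
    # Note: centering of the test kernel is skipped here for brevity
    return K @ alphas

# Toy usage: 20 hypothetical views of a feature as 64-dim patch vectors
rng = np.random.default_rng(0)
views = rng.normal(size=(20, 64))
model = kpca_fit(views, n_components=5)
desc = kpca_project(model, views[:3])
print(desc.shape)  # (3, 5): one 5-dim descriptor per query patch
```

Matching across wide baselines would then compare descriptors of candidate features in this learned subspace; the approximate KPCA mentioned in the abstract would further reduce the cost of the kernel evaluations.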

Original language: English
Pages (from-to): 215-227
Number of pages: 13
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publication status: Published - 2004

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science(all)
