Incremental learning for robust visual tracking

David A. Ross, Jongwoo Lim, Ruei-Sung Lin, Ming-Hsuan Yang

Research output: Contribution to journal › Article

2474 Citations (Scopus)

Abstract

Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.
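
As a rough illustration of the update described above, the sketch below implements an incremental eigenbasis (PCA) update with a weighted sample-mean correction and a forgetting factor that down-weights older observations. It is a minimal NumPy sketch written for this summary, not the authors' reference implementation; the function name, the block-update formulation, and the default parameters (16 basis vectors, forgetting factor 0.95) are assumptions chosen for illustration.

import numpy as np

def incremental_pca_update(U, S, mean, n, X_new, forgetting=0.95, k=16):
    """Update an eigenbasis (U, S) and sample mean with a block of new data.

    U          : (d, k) current eigenvectors (columns)
    S          : (k,)   current singular values
    mean       : (d,)   current sample mean
    n          : effective number of previously seen samples
    X_new      : (d, m) new observations (columns)
    forgetting : factor in (0, 1] that down-weights older observations
    k          : number of basis vectors to retain
    """
    m = X_new.shape[1]
    mean_new = X_new.mean(axis=1)

    # Weighted mean update: the old sample count is discounted by the
    # forgetting factor before combining old and new means.
    n_eff = forgetting * n
    mean_upd = (n_eff * mean + m * mean_new) / (n_eff + m)

    # Augment the new data with a correction term for the shift in the mean,
    # so the updated basis models variation about the updated mean.
    mean_shift = np.sqrt(n_eff * m / (n_eff + m)) * (mean_new - mean)
    B = np.hstack([X_new - mean_new[:, None], mean_shift[:, None]])

    # Split B into the part explained by the current basis and its residual.
    B_proj = U.T @ B
    B_res = B - U @ B_proj
    Q, _ = np.linalg.qr(B_res)

    # Small matrix whose SVD yields the updated basis; previous singular
    # values are scaled by the forgetting factor so older observations
    # claim less modeling power.
    R = np.block([
        [np.diag(forgetting * S), B_proj],
        [np.zeros((Q.shape[1], S.size)), Q.T @ B_res],
    ])
    U_r, S_r, _ = np.linalg.svd(R, full_matrices=False)

    U_upd = np.hstack([U, Q]) @ U_r
    return U_upd[:, :k], S_r[:k], mean_upd, n_eff + m

# Hypothetical usage: initialize from a first batch of target patches, then
# fold in new patches as tracking proceeds (random data stands in for images).
rng = np.random.default_rng(0)
d = 1024                                   # e.g. 32x32 patches, flattened
X0 = rng.standard_normal((d, 20))
mean0 = X0.mean(axis=1)
U0, S0, _ = np.linalg.svd(X0 - mean0[:, None], full_matrices=False)
U, S, mean, n = U0[:, :16], S0[:16], mean0, 20
U, S, mean, n = incremental_pca_update(U, S, mean, n, rng.standard_normal((d, 5)))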

Original language: English
Pages (from-to): 125-141
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 77
Issue number: 1-3
DOI: 10.1007/s11263-007-0075-7
Publication status: Published - 1 May 2008

Fingerprint

Lighting
Principal component analysis
Experiments

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Ross, David A.; Lim, Jongwoo; Lin, Ruei-Sung; Yang, Ming-Hsuan. / Incremental learning for robust visual tracking. In: International Journal of Computer Vision. 2008; Vol. 77, No. 1-3. pp. 125-141.
@article{43172e7183f24cb0b28c2f200ea45836,
title = "Incremental learning for robust visual tracking",
abstract = "Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.",
author = "Ross, {David A.} and Jongwoo Lim and Lin, {Ruei-Sung} and Yang, {Ming-Hsuan}",
year = "2008",
month = "5",
day = "1",
doi = "10.1007/s11263-007-0075-7",
language = "English",
volume = "77",
pages = "125--141",
journal = "International Journal of Computer Vision",
issn = "0920-5691",
publisher = "Springer Netherlands",
number = "1-3",

}


TY - JOUR

T1 - Incremental learning for robust visual tracking

AU - Ross, David A.

AU - Lim, Jongwoo

AU - Lin, Ruei-Sung

AU - Yang, Ming-Hsuan

PY - 2008/5/1

Y1 - 2008/5/1

AB - Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object's appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.

UR - http://www.scopus.com/inward/record.url?scp=39749173057&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=39749173057&partnerID=8YFLogxK

U2 - 10.1007/s11263-007-0075-7

DO - 10.1007/s11263-007-0075-7

M3 - Article

AN - SCOPUS:39749173057

VL - 77

SP - 125

EP - 141

JO - International Journal of Computer Vision

JF - International Journal of Computer Vision

SN - 0920-5691

IS - 1-3

ER -