Co-bootstrapping saliency

Huchuan Lu, Xiaoning Zhang, Jinqing Qi, Na Tong, Xiang Ruan, Ming Hsuan Yang

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

In this paper, we propose a visual saliency detection algorithm that explores the fusion of various saliency models in a bootstrap learning manner. First, an original bootstrapping model, which combines both weak and strong saliency models, is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. Then, a strong classifier is learned from the samples extracted from the weak model. We use this classifier to classify all the salient and non-salient superpixels in an input image. To further improve the detection performance, the multi-scale saliency maps of the weak and strong models are integrated, respectively. The final result is the combination of the weak and strong saliency maps. The original model indicates that the overall performance of the proposed algorithm is largely affected by the quality of the weak saliency model. Therefore, we propose a co-bootstrapping mechanism, which integrates the advantages of different saliency methods to construct the weak saliency model, thus addressing this problem and achieving better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms the state-of-the-art methods.
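The bootstrapping pipeline the abstract describes (weak saliency model → confidently labeled superpixel samples → strong classifier → fused map) can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: a simple center prior stands in for the weak model built from image priors, plain logistic regression stands in for the strong classifier, and superpixel features are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """Rescale scores to [0, 1]."""
    x = x - x.min()
    return x / (x.max() + 1e-12)

def weak_saliency(centers):
    """Hypothetical weak model: a center prior (salient objects
    tend to lie near the image center)."""
    d = np.linalg.norm(centers - 0.5, axis=1)
    return normalize(-d)

def train_strong(features, weak, hi=0.7, lo=0.3, steps=500, lr=0.5):
    """Bootstrap step: train a logistic-regression 'strong' classifier
    only on superpixels the weak map labels confidently."""
    pos = weak >= np.quantile(weak, hi)   # confident salient samples
    neg = weak <= np.quantile(weak, lo)   # confident background samples
    X = np.vstack([features[pos], features[neg]])
    y = np.concatenate([np.ones(pos.sum()), np.zeros(neg.sum())])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return lambda F: 1.0 / (1.0 + np.exp(-(F @ w + b)))

# Toy "image": 200 superpixels with 2-D centers and 3-D appearance features.
centers = rng.random((200, 2))
features = rng.random((200, 3))
weak = weak_saliency(centers)
strong = train_strong(features, weak)(features)
# Final result: combination of the weak and strong saliency maps.
final = normalize(0.5 * weak + 0.5 * normalize(strong))
```

The co-bootstrapping idea would replace the single center prior here with a weak map built by combining several existing saliency methods before the sampling step.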

Original language: English
Article number: 7742419
Pages (from-to): 414-425
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 26
Issue number: 1
DOI: https://doi.org/10.1109/TIP.2016.2627804
Publication status: Published - 2017 Jan


All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Lu, H., Zhang, X., Qi, J., Tong, N., Ruan, X., & Yang, M. H. (2017). Co-bootstrapping saliency. IEEE Transactions on Image Processing, 26(1), 414-425. [7742419]. https://doi.org/10.1109/TIP.2016.2627804
Lu, Huchuan ; Zhang, Xiaoning ; Qi, Jinqing ; Tong, Na ; Ruan, Xiang ; Yang, Ming Hsuan. / Co-bootstrapping saliency. In: IEEE Transactions on Image Processing. 2017 ; Vol. 26, No. 1. pp. 414-425.
@article{37f42539c1a544b78ee386d1ab7dcdef,
title = "Co-bootstrapping saliency",
abstract = "In this paper, we propose a visual saliency detection algorithm to explore the fusion of various saliency models in a manner of bootstrap learning. First, an original bootstrapping model, which combines both weak and strong saliency models, is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. Then, a strong classifier is learned based on the samples extracted from the weak model. We use this classifier to classify all the salient and non-salient superpixels in an input image. To further improve the detection performance, multi-scale saliency maps of weak and strong model are integrated, respectively. The final result is the combination of the weak and strong saliency maps. The original model indicates that the overall performance of the proposed algorithm is largely affected by the quality of weak saliency model. Therefore, we propose a co-bootstrapping mechanism, which integrates the advantages of different saliency methods to construct the weak saliency model thus addresses the problem and achieves a better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms the state-of-the-art methods.",
author = "Huchuan Lu and Xiaoning Zhang and Jinqing Qi and Na Tong and Xiang Ruan and Yang, {Ming Hsuan}",
year = "2017",
month = "1",
doi = "10.1109/TIP.2016.2627804",
language = "English",
volume = "26",
pages = "414--425",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",

}

Lu, H, Zhang, X, Qi, J, Tong, N, Ruan, X & Yang, MH 2017, 'Co-bootstrapping saliency', IEEE Transactions on Image Processing, vol. 26, no. 1, 7742419, pp. 414-425. https://doi.org/10.1109/TIP.2016.2627804

Co-bootstrapping saliency. / Lu, Huchuan; Zhang, Xiaoning; Qi, Jinqing; Tong, Na; Ruan, Xiang; Yang, Ming Hsuan.

In: IEEE Transactions on Image Processing, Vol. 26, No. 1, 7742419, 01.2017, p. 414-425.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Co-bootstrapping saliency

AU - Lu, Huchuan

AU - Zhang, Xiaoning

AU - Qi, Jinqing

AU - Tong, Na

AU - Ruan, Xiang

AU - Yang, Ming Hsuan

PY - 2017/1

Y1 - 2017/1

N2 - In this paper, we propose a visual saliency detection algorithm to explore the fusion of various saliency models in a manner of bootstrap learning. First, an original bootstrapping model, which combines both weak and strong saliency models, is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. Then, a strong classifier is learned based on the samples extracted from the weak model. We use this classifier to classify all the salient and non-salient superpixels in an input image. To further improve the detection performance, multi-scale saliency maps of weak and strong model are integrated, respectively. The final result is the combination of the weak and strong saliency maps. The original model indicates that the overall performance of the proposed algorithm is largely affected by the quality of weak saliency model. Therefore, we propose a co-bootstrapping mechanism, which integrates the advantages of different saliency methods to construct the weak saliency model thus addresses the problem and achieves a better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms the state-of-the-art methods.

AB - In this paper, we propose a visual saliency detection algorithm to explore the fusion of various saliency models in a manner of bootstrap learning. First, an original bootstrapping model, which combines both weak and strong saliency models, is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. Then, a strong classifier is learned based on the samples extracted from the weak model. We use this classifier to classify all the salient and non-salient superpixels in an input image. To further improve the detection performance, multi-scale saliency maps of weak and strong model are integrated, respectively. The final result is the combination of the weak and strong saliency maps. The original model indicates that the overall performance of the proposed algorithm is largely affected by the quality of weak saliency model. Therefore, we propose a co-bootstrapping mechanism, which integrates the advantages of different saliency methods to construct the weak saliency model thus addresses the problem and achieves a better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms the state-of-the-art methods.

UR - http://www.scopus.com/inward/record.url?scp=85013389441&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85013389441&partnerID=8YFLogxK

U2 - 10.1109/TIP.2016.2627804

DO - 10.1109/TIP.2016.2627804

M3 - Article

C2 - 28113932

AN - SCOPUS:85013389441

VL - 26

SP - 414

EP - 425

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 1

M1 - 7742419

ER -

Lu H, Zhang X, Qi J, Tong N, Ruan X, Yang MH. Co-bootstrapping saliency. IEEE Transactions on Image Processing. 2017 Jan;26(1):414-425. 7742419. https://doi.org/10.1109/TIP.2016.2627804