Unseen Object Segmentation in Videos via Transferable Representations

Yi-Wen Chen, Yi-Hsuan Tsai, Chu-Ya Yang, Yen-Yu Lin, Ming-Hsuan Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In order to learn object segmentation models in videos, conventional methods require a large amount of pixel-wise ground truth annotations. However, collecting such supervised data is time-consuming and labor-intensive. In this paper, we exploit existing annotations in source images and transfer such visual information to segment videos with unseen object categories. Without using any annotations in the target video, we propose a method to jointly mine useful segments and learn feature representations that better adapt to the target frames. The entire process is decomposed into two tasks: (1) solving a submodular function for selecting object-like segments, and (2) learning a CNN model with a transferable module for adapting seen categories in the source domain to the unseen target video. We present an iterative update scheme between two tasks to self-learn the final solution for object segmentation. Experimental results on numerous benchmark datasets show that the proposed method performs favorably against the state-of-the-art algorithms.

Original language: English
Title of host publication: Computer Vision – ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers
Editors: Hongdong Li, C.V. Jawahar, Greg Mori, Konrad Schindler
Publisher: Springer Verlag
Pages: 615-631
Number of pages: 17
ISBN (Print): 9783030208691
DOI: 10.1007/978-3-030-20870-7_38
Publication status: Published - 1 Jan 2019
Event: 14th Asian Conference on Computer Vision, ACCV 2018 - Perth, Australia
Duration: 2 Dec 2018 - 6 Dec 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11364 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 14th Asian Conference on Computer Vision, ACCV 2018
Country: Australia
City: Perth
Period: 2 Dec 2018 - 6 Dec 2018


All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Chen, Y. W., Tsai, Y. H., Yang, C. Y., Lin, Y. Y., & Yang, M. H. (2019). Unseen Object Segmentation in Videos via Transferable Representations. In H. Li, C. V. Jawahar, G. Mori, & K. Schindler (Eds.), Computer Vision – ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers (pp. 615-631). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11364 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-20870-7_38