Show, Match and Segment: Joint Weakly Supervised Learning of Semantic Matching and Object Co-Segmentation

Yun-Chun Chen, Yen-Yu Lin, Ming-Hsuan Yang, Jia-Bin Huang

Research output: Contribution to journal › Article › peer-review


Abstract

We present an approach for jointly matching and segmenting object instances of the same category within a collection of images. In contrast to existing algorithms that tackle the tasks of semantic matching and object co-segmentation in isolation, our method exploits the complementary nature of the two tasks. The key insights of our method are two-fold. First, the estimated dense correspondence fields from semantic matching provide supervision for object co-segmentation by enforcing consistency between the predicted masks from a pair of images. Second, the predicted object masks from object co-segmentation in turn allow us to reduce the adverse effects of background clutter for improving semantic matching. Our model is end-to-end trainable and does not require supervision from manually annotated correspondences and object masks. We validate the efficacy of our approach on five benchmark datasets: TSS, Internet, PF-PASCAL, PF-WILLOW, and SPair-71k, and show that our algorithm performs favorably against the state-of-the-art methods on both semantic matching and object co-segmentation tasks.
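
To make the two complementary signals described in the abstract concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation). It shows (a) a cross-image mask-consistency term, where the mask of one image warped by the estimated correspondence field should agree with the mask of the other image, and (b) a foreground-weighted correlation volume, where predicted masks suppress background features before matching. The function names, the normalized-flow convention, and the L1 consistency term are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of the cross-task consistency idea; names and conventions are
# assumptions for illustration, not the authors' released code.
import torch
import torch.nn.functional as F


def warp_with_flow(mask, flow):
    """Warp a soft mask (B, 1, H, W) with a dense flow field (B, 2, H, W).

    The flow is assumed to hold offsets in normalized [-1, 1] coordinates,
    matching what F.grid_sample expects.
    """
    b, _, h, w = mask.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=flow.device),
        torch.linspace(-1, 1, w, device=flow.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base_grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(mask, grid, align_corners=True)


def mask_consistency_loss(mask_a, mask_b, flow_b_to_a):
    """Matching supervises co-segmentation: mask of image A, warped by the
    estimated correspondence field, should agree with the mask of image B."""
    warped_a = warp_with_flow(mask_a, flow_b_to_a)
    return F.l1_loss(warped_a, mask_b)


def foreground_weighted_correlation(feat_a, feat_b, mask_a, mask_b):
    """Co-segmentation helps matching: down-weight background features before
    building the correlation volume used for semantic matching."""
    fa = (feat_a * mask_a).flatten(2)          # (B, C, HW), background suppressed
    fb = (feat_b * mask_b).flatten(2)          # (B, C, HW)
    return torch.bmm(fa.transpose(1, 2), fb)   # (B, HW, HW) correlation volume


if __name__ == "__main__":
    b, c, h, w = 2, 8, 16, 16
    feat_a, feat_b = torch.randn(b, c, h, w), torch.randn(b, c, h, w)
    mask_a, mask_b = torch.rand(b, 1, h, w), torch.rand(b, 1, h, w)
    flow = torch.zeros(b, 2, h, w)  # identity flow for a quick sanity check
    print(mask_consistency_loss(mask_a, mask_b, flow).item())
    print(foreground_weighted_correlation(feat_a, feat_b, mask_a, mask_b).shape)
```

In a joint weakly supervised setup, a term like `mask_consistency_loss` can be minimized alongside the matching objective so that neither task needs ground-truth masks or correspondences.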

Original language: English
Article number: 9057736
Pages (from-to): 3632-3647
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 10
DOIs
Publication status: Published - 1 October 2021

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

