Large-scale Unsupervised Semantic Segmentation

Shanghua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Empowered by large datasets, e.g., ImageNet and MS COCO, unsupervised learning on large-scale data has enabled significant advances for classification tasks. However, whether large-scale unsupervised semantic segmentation can be achieved remains unknown. There are two major challenges: i) we need a large-scale benchmark for assessing algorithms; ii) we need to develop methods to simultaneously learn category and shape representation in an unsupervised manner. In this work, we propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to help drive research progress. Building on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 50k high-quality semantic segmentation annotations for evaluation. Our benchmark has high data diversity and a clear task objective. We also present a simple yet effective method that works surprisingly well for LUSS. In addition, we benchmark related un/weakly/fully supervised methods accordingly, identifying the challenges and possible directions of LUSS. The benchmark and source code are publicly available at https://github.com/LUSSeg.
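As a rough illustration of the evaluation setting described in the abstract, the sketch below shows one generic way unsupervised segmentation predictions can be scored against ground-truth masks: predicted cluster IDs are matched to category IDs via Hungarian matching before computing mean IoU. This is not the authors' official ImageNet-S protocol; the function and variable names are hypothetical.

```python
# A minimal, hypothetical sketch (not the paper's official evaluation code):
# match unsupervised cluster IDs to ground-truth classes, then compute mIoU.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_miou(pred, gt, num_classes):
    """pred, gt: integer label maps of identical shape, values in [0, num_classes)."""
    # Confusion matrix between predicted cluster IDs and ground-truth classes.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (pred.ravel(), gt.ravel()), 1)

    # Hungarian matching: assign each cluster to the class it overlaps most.
    rows, cols = linear_sum_assignment(conf, maximize=True)
    mapping = dict(zip(rows, cols))
    remapped = np.vectorize(lambda c: mapping.get(c, c))(pred)

    # Per-class IoU, averaged over classes present in the ground truth.
    ious = []
    for c in np.unique(gt):
        inter = np.logical_and(remapped == c, gt == c).sum()
        union = np.logical_or(remapped == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

A call such as `match_and_miou(pred_map, gt_map, num_classes)` returns a single mIoU score for one label map; the exact number of classes and any ignore-label handling would follow the benchmark's own specification.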

Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
DOIs
Publication status: Accepted/In press - 2022

Bibliographical note

Publisher Copyright:
IEEE

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
