Adversarial learning for semi-supervised semantic segmentation

Wei-Chih Hung, Yi-Hsuan Tsai, Yan-Ting Liou, Yen-Yu Lin, Ming-Hsuan Yang

Research output: Contribution to conference › Paper › peer-review

96 Citations (Scopus)


We propose a method for semi-supervised semantic segmentation using an adversarial network. While most existing discriminators are trained to classify input images as real or fake at the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution while taking spatial resolution into account. We show that the proposed discriminator can improve semantic segmentation accuracy by coupling the adversarial loss with the standard cross-entropy loss of the proposed model. In addition, the fully convolutional discriminator enables semi-supervised learning by discovering trustworthy regions in the predicted results of unlabeled images, thereby providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images to enhance the segmentation model. Experimental results on the PASCAL VOC 2012 and Cityscapes datasets demonstrate the effectiveness of the proposed algorithm.
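The semi-supervised mechanism described above can be sketched as follows: the fully convolutional discriminator outputs a per-pixel confidence map over an unlabeled image, pixels whose confidence exceeds a threshold are treated as trustworthy, and the segmentation network's own argmax predictions at those pixels serve as pseudo-labels for an additional cross-entropy term. A minimal NumPy sketch (the function name, array layout, and threshold value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def semi_supervised_loss(pred_probs, confidence_map, threshold=0.2):
    """Masked cross-entropy on an unlabeled image (illustrative sketch).

    pred_probs:      (C, H, W) softmax output of the segmentation network
    confidence_map:  (H, W) per-pixel discriminator output, interpreted as
                     the probability that the prediction looks like ground truth
    threshold:       pixels with confidence above this value are trusted
    """
    pseudo_labels = pred_probs.argmax(axis=0)     # (H, W) hard pseudo-labels
    mask = confidence_map > threshold             # trustworthy regions
    if not mask.any():
        return 0.0                                # no trusted pixels: no signal
    h, w = pseudo_labels.shape
    # probability each pixel assigns to its own pseudo-label
    picked = pred_probs[pseudo_labels, np.arange(h)[:, None], np.arange(w)]
    # cross-entropy evaluated only on the trusted pixels
    return float(-np.log(picked[mask] + 1e-8).mean())
```

In training, a term like this would be weighted and added to the supervised cross-entropy and adversarial losses; untrusted pixels contribute nothing, so noisy predictions are filtered out rather than reinforced.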

Original language: English
Publication status: Published - 2019
Event: 29th British Machine Vision Conference, BMVC 2018 - Newcastle, United Kingdom
Duration: 2018 Sep 3 – 2018 Sep 6


Conference: 29th British Machine Vision Conference, BMVC 2018
Country/Territory: United Kingdom

Bibliographical note

Funding Information:
Acknowledgments. W.-C. Hung is supported in part by the NSF CAREER Grant #1149783, gifts from Adobe and NVIDIA.

Publisher Copyright:
© 2018. The copyright of this document resides with its authors.

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition


