We present a new domain generalized semantic segmentation network named WildNet, which learns domain-generalized features by leveraging a variety of contents and styles from the wild. In domain generalization, the low generalization ability for unseen target domains is clearly due to overfitting to the source domain. To address this problem, previous works have focused on generalizing the domain by removing or diversifying the styles of the source domain. These approaches alleviated overfitting to the source style but overlooked overfitting to the source content. In this paper, we propose to diversify both the content and style of the source domain with the help of the wild. Our main idea is for networks to naturally learn domain-generalized semantic information from the wild. To this end, we diversify styles by augmenting source features to resemble wild styles and enable networks to adapt to a variety of styles. Furthermore, we encourage networks to learn class-discriminative features by providing semantic variations borrowed from the wild to source contents in the feature space. Finally, we regularize networks to capture consistent semantic information even when both the content and style of the source domain are extended to the wild. Extensive experiments on five different datasets validate the effectiveness of our WildNet, and we significantly outperform state-of-the-art methods. The source code and model are available online: https://github.com/suhyeonlee/WildNet.
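The style-diversification step described above, augmenting source features so that they resemble wild styles, can be sketched as a feature-level statistics swap in the spirit of adaptive instance normalization. The sketch below is illustrative only: the function name, the interpolation parameter `alpha`, and the use of channel-wise mean/std as the "style" descriptor are assumptions, not the authors' exact implementation.

```python
import numpy as np

def stylize_features(source_feat, wild_feat, alpha=0.5, eps=1e-5):
    """Re-style source features toward wild-style statistics.

    source_feat, wild_feat: arrays of shape (B, C, H, W).
    alpha: 0.0 keeps the source style, 1.0 fully adopts the wild style.
    """
    # Channel-wise mean/std over spatial dims serve as a simple style descriptor.
    s_mean = source_feat.mean(axis=(2, 3), keepdims=True)
    s_std = source_feat.std(axis=(2, 3), keepdims=True) + eps
    w_mean = wild_feat.mean(axis=(2, 3), keepdims=True)
    w_std = wild_feat.std(axis=(2, 3), keepdims=True) + eps

    # Interpolate between source and wild style statistics.
    mix_mean = (1 - alpha) * s_mean + alpha * w_mean
    mix_std = (1 - alpha) * s_std + alpha * w_std

    # Normalize away the source style, then re-apply the mixed style,
    # leaving the spatial (content) structure of the source intact.
    normalized = (source_feat - s_mean) / s_std
    return normalized * mix_std + mix_mean
```

Because only per-channel statistics are altered, the spatial layout of the source feature map (its content) is preserved while its style drifts toward the wild sample, which is the property the abstract's style-augmentation step relies on.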
|Title of host publication||Proceedings - 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022|
|Publisher||IEEE Computer Society|
|Number of pages||11|
|Publication status||Published - 2022|
|Event||2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 - New Orleans, United States|
Duration: 2022 Jun 19 → 2022 Jun 24
|Name||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Conference||2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022|
|Period||22/6/19 → 22/6/24|
|Bibliographical note||Funding Information:|
We presented WildNet, which exploits unlabeled wild images for domain-generalized semantic segmentation. Our approach effectively extends both style and content from the source to the wild, resulting in drastic performance improvements even when we leverage only 10 wild images. In contrast to previous studies that exploit generalization cues only from style, we additionally exploit content as a cue for domain generalization. We conducted thorough ablation studies to demonstrate the efficacy of our WildNet and achieved superior segmentation performance under several domain generalization scenarios. We believe that our approach provides an opportunity to utilize huge amounts of unlabeled data for domain generalization. Acknowledgement. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A2C1007153).
© 2022 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition