Domain generalization aims to enhance model robustness against domain shift without access to the target domain. Since the source domains available for training are limited, recent approaches focus on generating samples from novel domains. Nevertheless, they either struggle with the optimization problem when synthesizing abundant domains or distort class semantics. To address these issues, we propose a novel domain generalization framework in which feature statistics are utilized to stylize original features into ones with novel domain properties. To preserve class information during stylization, we first decompose features into high- and low-frequency components. We then stylize the low-frequency components with novel domain styles sampled from the manipulated statistics, while preserving the shape cues in the high-frequency ones. As the final step, we re-merge both components to synthesize novel domain features. To enhance domain robustness, we use the stylized features to enforce model consistency in terms of both features and outputs. We achieve feature consistency with the proposed domain-aware supervised contrastive loss, which ensures domain invariance while increasing class discriminability. Experimental results demonstrate the effectiveness of the proposed feature stylization and the domain-aware contrastive loss. Through quantitative comparisons, we verify that our method outperforms existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
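The stylization pipeline described above (frequency split, restyling the low-frequency component with sampled statistics, re-merging with the high-frequency residual) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `stylize_features`, the FFT-based low-pass split, the radial `cutoff`, and the AdaIN-style normalization with externally supplied `style_mean`/`style_std` are all assumptions made for the example.

```python
import numpy as np

def stylize_features(feat, style_mean, style_std, cutoff=0.25):
    """Hypothetical sketch of frequency-split feature stylization.

    feat: (C, H, W) feature map.
    style_mean, style_std: (C,) novel-domain statistics, e.g. sampled
        around (manipulated from) the batch statistics.
    cutoff: radial frequency threshold for the low-pass mask (assumed).
    """
    C, H, W = feat.shape
    # Build a low-pass mask in the frequency domain (DC term included).
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    mask = (np.sqrt(fy**2 + fx**2) <= cutoff).astype(feat.dtype)

    # Split features: low-frequency carries style, high keeps shape cues.
    spectrum = np.fft.fft2(feat, axes=(-2, -1))
    low = np.real(np.fft.ifft2(spectrum * mask, axes=(-2, -1)))
    high = feat - low  # high-frequency residual, left untouched

    # AdaIN-style restyling of the low-frequency component only:
    # normalize per channel, then apply the novel-domain statistics.
    mu = low.mean(axis=(-2, -1), keepdims=True)
    sigma = low.std(axis=(-2, -1), keepdims=True) + 1e-6
    low_styled = (style_std[:, None, None] * (low - mu) / sigma
                  + style_mean[:, None, None])

    # Re-merge to obtain the novel-domain feature.
    return low_styled + high
```

Because the low-pass mask retains the DC term, the high-frequency residual has zero spatial mean per channel, so the re-merged feature inherits the sampled style statistics while the shape-bearing residual passes through unchanged.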
|Title of host publication||MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia|
|Publisher||Association for Computing Machinery, Inc|
|Number of pages||10|
|Publication status||Published - 2021 Oct 17|
|Event||29th ACM International Conference on Multimedia, MM 2021 - Virtual, Online, China|
Duration: 2021 Oct 20 → 2021 Oct 24
|Name||MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia|
|Conference||29th ACM International Conference on Multimedia, MM 2021|
|Period||21/10/20 → 21/10/24|
Bibliographical note
Funding Information:
This research was partly supported by the MSIT (Ministry of Science, ICT), Korea, under the High-Potential Individuals Global Training Program (No. 2021-0-01696) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C2003760), and the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01361: Artificial Intelligence Graduate School Program (YONSEI UNIVERSITY)). This project was also supported by Microsoft Research Asia.
© 2021 ACM.
All Science Journal Classification (ASJC) codes
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design