Despite the success of generative adversarial networks (GANs) for image generation, the trade-off between visual quality and image diversity remains a significant issue. This paper achieves both aims simultaneously by improving the stability of GAN training. The key idea of the proposed approach is to implicitly regularize the discriminator using representative features. Building on the fact that the standard GAN minimizes reverse Kullback-Leibler (KL) divergence, we transfer the representative feature, extracted from the data distribution by a pre-trained autoencoder (AE), to the discriminator of the standard GAN. Because the AE learns to minimize forward KL divergence, our GAN training with representative features is influenced by both reverse and forward KL divergence. Consequently, extensive evaluations verify that the proposed approach improves both the visual quality and the diversity of state-of-the-art GANs.
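The mechanism the abstract describes can be illustrated with a minimal, self-contained sketch: first pre-train an autoencoder on the data, then freeze its encoder and concatenate the frozen "representative feature" with the discriminator's own learned feature before the final layer. The data, network sizes, and all weight names below are illustrative assumptions (a toy linear AE on synthetic 2-D points, plain NumPy gradient descent), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: 2-D points near a line (stand-in for the image distribution).
X = rng.normal(size=(256, 1)) @ np.array([[1.0, 2.0]]) + 0.05 * rng.normal(size=(256, 2))

# --- Step 1: pre-train a tiny linear autoencoder on the data -----------------
W_enc = rng.normal(scale=0.1, size=(2, 1))   # encoder: 2-D input -> 1-D code
W_dec = rng.normal(scale=0.1, size=(1, 2))   # decoder: 1-D code -> 2-D output
lr = 0.01
loss0 = np.mean((X @ W_enc @ W_dec - X) ** 2)  # reconstruction loss before training
for _ in range(500):
    H = X @ W_enc                  # codes
    R = H @ W_dec                  # reconstructions
    G = 2.0 * (R - X) / len(X)     # gradient of the squared error w.r.t. R
    W_dec -= lr * (H.T @ G)
    W_enc -= lr * (X.T @ (G @ W_dec.T))
loss1 = np.mean((X @ W_enc @ W_dec - X) ** 2)  # loss after training

# --- Step 2: freeze the encoder; its output becomes the representative -------
# feature that is concatenated with the discriminator's own feature.
W_disc = rng.normal(scale=0.1, size=(2, 3))  # discriminator's trainable feature map
w_out = rng.normal(scale=0.1, size=(4,))     # final layer over the concatenation

def discriminator_logit(x):
    """Logit from [adversarial feature ; frozen AE feature] for one sample x."""
    h_d = np.tanh(x @ W_disc)          # adversarially trained feature (3-D)
    h_ae = x @ W_enc                   # frozen representative feature (1-D)
    h = np.concatenate([h_d, h_ae])    # implicit regularization via the AE path
    return h @ w_out
```

In a real implementation only `W_disc` and `w_out` would receive adversarial gradients, while `W_enc` stays fixed, so the discriminator cannot discard the forward-KL-trained feature; this is the sense in which the AE feature "implicitly regularizes" the discriminator.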
|Title of host publication||35th International Conference on Machine Learning, ICML 2018|
|Editors||Andreas Krause, Jennifer Dy|
|Publisher||International Machine Learning Society (IMLS)|
|Number of pages||10|
|Publication status||Published - 2018|
|Event||35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden|
Duration: 2018 Jul 10 → 2018 Jul 15
|Name||35th International Conference on Machine Learning, ICML 2018|
|Conference||35th International Conference on Machine Learning, ICML 2018|
|Period||18/7/10 → 18/7/15|
|Bibliographical note||Publisher Copyright: © 2018 by the Authors. All rights reserved.|
All Science Journal Classification (ASJC) codes
- Computational Theory and Mathematics
- Human-Computer Interaction