Image super-resolution has been widely employed in various applications, with performance boosted by deep learning techniques. However, many deep learning-based models are highly vulnerable to adversarial attacks, and recent studies have shown that this vulnerability extends to super-resolution models. In this paper, we propose a defense method formulated as an entropy regularization loss for model training, which can be added to the original training loss of super-resolution models. We show that various state-of-the-art super-resolution models trained with our defense method are more robust against adversarial attacks than their original versions. To the best of our knowledge, this is the first attempt at adversarial defense for deep super-resolution models.
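The abstract describes the defense only at a high level: an entropy regularization term augmenting the standard super-resolution training loss. The paper's exact formulation is not reproduced here; the following is a purely illustrative sketch in NumPy, assuming (hypothetically) an L1 reconstruction loss, an entropy term computed over a softmax of some model activations, and a weighting factor `lam` — all of which are assumptions for illustration, not the authors' actual definitions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a flat array of activations
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(p):
    # Shannon entropy of a probability vector; epsilon avoids log(0)
    return -np.sum(p * np.log(p + 1e-12))

def combined_loss(sr, hr, feats, lam=0.1):
    """Hypothetical training objective: L1 reconstruction loss on the
    super-resolved output plus a weighted entropy regularizer computed
    over (assumed) model activations. The sign and target of the entropy
    term here are illustrative assumptions, not the paper's formulation."""
    recon = np.mean(np.abs(sr - hr))          # standard L1 SR loss
    reg = entropy(softmax(feats.ravel()))     # hypothetical entropy term
    return recon + lam * reg
```

With a perfect reconstruction (`sr == hr`) and uniform activations, the loss reduces to `lam` times the maximum entropy `log(n)`, which shows how the regularizer contributes independently of reconstruction quality.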
|Title of host publication||Computer Vision – ACCV 2020 - 15th Asian Conference on Computer Vision, 2020, Revised Selected Papers|
|Editors||Hiroshi Ishikawa, Cheng-Lin Liu, Tomas Pajdla, Jianbo Shi|
|Publisher||Springer Science and Business Media Deutschland GmbH|
|Number of pages||17|
|Publication status||Published - 2021|
|Event||15th Asian Conference on Computer Vision, ACCV 2020 - Virtual, Online|
Duration: 2020 Nov 30 → 2020 Dec 4
|Name||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Bibliographical note||Funding Information: This work was supported by the NRF grant funded by the Korea government (MSIT) (NRF-2020R1F1A1070631), and the Artificial Intelligence Graduate School Program (Yonsei University, 2020-0-01361).|
© 2021, Springer Nature Switzerland AG.
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Computer Science (all)