Convolutional neural network (CNN)-based denoisers have been successful in low-dose CT (LDCT) denoising tasks. However, image blurring in the denoised images remains a problem, and it is caused mainly by the pixel-level losses used during training. To reduce blur, perceptual loss with an ImageNet-pretrained VGG network is widely used; it improves image quality by preserving the original structural details of CT images. However, the statistics of the natural RGB images in ImageNet differ from those of CT images, so the features learned by the ImageNet-pretrained model do not generalize well to represent the features of CT images. In this work, we propose a CT-specific perceptual loss scheme and apply it to train an LDCT denoiser. As the feature extractor for CT images, we develop a CT image classification network that classifies CT images as lesion-present or lesion-absent. To improve the representation power of the proposed feature extractor, we adopt network parameters learned from RGB images through transfer learning. We empirically demonstrate that 1) transfer learning helps improve the representation power of the CT classifier, and 2) using the CT classifier trained via transfer learning as the feature extractor of the perceptual loss for denoising resolves the CT-number bias caused by the VGG loss and helps retain the small features and image textures of normal-dose CT (NDCT) images.
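The combined objective described above (a pixel-level loss plus a feature-space perceptual term) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature extractor `phi` here is a toy average-pooling stand-in for the intermediate activations of the pretrained CT classifier, and the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def phi(img, pool=4):
    """Toy feature map: non-overlapping average pooling.
    Stands in for intermediate activations of a pretrained classifier."""
    h, w = img.shape
    img = img[: h - h % pool, : w - w % pool]
    return img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

def denoising_loss(denoised, target, lam=0.1):
    """Pixel-level MSE plus a weighted feature-space (perceptual) MSE."""
    pixel_term = np.mean((denoised - target) ** 2)
    perceptual_term = np.mean((phi(denoised) - phi(target)) ** 2)
    return pixel_term + lam * perceptual_term
```

In practice the perceptual term would compare activations from one or more layers of the trained CT classifier (or VGG, in the baseline), and the loss would be minimized over the denoiser's parameters by backpropagation.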
Number of pages: 11
Publication status: Published - 2022
Bibliographical note: Funding Information:
This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government through the Ministry of Science and ICT (MSIT) under Grant RS-2022-00144336, Grant 2019R1A2C2084936, and Grant 2020R1A4A1016619.
© 2013 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science (all)
- Materials Science (all)
- Electrical and Electronic Engineering