Image synthesis is a novel solution in precision medicine for scenarios where important medical imaging is not otherwise available. The convolutional neural network (CNN) is an ideal model for this task because of the powerful learning capability afforded by its many layers and trainable parameters. In this research, we propose a new architecture, the residual inception encoder-decoder neural network (RIED-Net), to learn the nonlinear mapping between input images and target output images. To evaluate the validity of the proposed approach, we compare it with two models from the literature, the synthetic CT deep convolutional neural network (sCT-DCNN) and a shallow CNN, using both an institutional mammogram dataset from Mayo Clinic Arizona and a public neuroimaging dataset from the Alzheimer's Disease Neuroimaging Initiative. Experimental results show that the proposed RIED-Net significantly outperforms both models on both datasets in terms of structural similarity index, mean absolute percent error, and peak signal-to-noise ratio.
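Two of the evaluation metrics named above, mean absolute percent error (MAPE) and peak signal-to-noise ratio (PSNR), have standard pixel-wise definitions; the sketch below shows them in NumPy. This is a minimal illustration of the standard formulas, not the authors' exact evaluation code, and the image arrays and `data_range` value are hypothetical (the structural similarity index is more involved and is typically computed with a library such as scikit-image's `structural_similarity`).

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percent error over all pixels
    # (assumes the target image has no zero-valued pixels)
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100.0

def psnr(y_true, y_pred, data_range=1.0):
    # Peak signal-to-noise ratio in dB; data_range is the
    # maximum possible pixel intensity (assumed here to be 1.0)
    mse = np.mean((y_true - y_pred) ** 2)
    return 20.0 * np.log10(data_range / np.sqrt(mse))

# Hypothetical example: a synthesized image uniformly off by 0.1
target = np.ones((64, 64))
synthesized = np.full((64, 64), 0.9)
print(mape(target, synthesized))   # 10.0 (percent)
print(psnr(target, synthesized))   # 20.0 (dB)
```

Higher PSNR and SSIM and lower MAPE indicate that the synthesized image is closer to the ground-truth image.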
Bibliographical note
Funding Information:
Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from many other sources. Detailed ADNI acknowledgement information is available at http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Manuscript_Citations.pdf.
© 2013 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Electrical and Electronic Engineering
- Health Information Management