Abstract
Quantitative tissue characteristics, which provide valuable diagnostic information, can be represented by magnetic resonance (MR) parameter maps using magnetic resonance imaging (MRI); however, a long scan time is necessary to acquire them, which prevents the application of quantitative MR parameter mapping in real clinical protocols. For fast MR parameter mapping, we propose a deep model-based MR parameter mapping network called DOPAMINE that combines a deep learning network with a model-based method to reconstruct MR parameter maps from undersampled multi-channel k-space data. DOPAMINE consists of two networks: 1) an MR parameter mapping network that uses a deep convolutional neural network (CNN) to estimate initial parameter maps from undersampled k-space data (CNN-based mapping), and 2) a reconstruction network that removes aliasing artifacts in the parameter maps with a deep CNN (CNN-based reconstruction) and an interleaved data consistency layer implemented via an embedded MR model-based optimization procedure. We demonstrated the performance of DOPAMINE in brain T1 map reconstruction with a variable flip angle (VFA) model. To evaluate the performance of DOPAMINE, we compared it with conventional parallel imaging, low-rank-based reconstruction, model-based reconstruction, and state-of-the-art deep-learning-based mapping methods for three different reduction factors (R = 3, 5, and 7) and two different sampling patterns (1D Cartesian and 2D Poisson-disk). Quantitative metrics indicated that DOPAMINE outperformed the other methods in reconstructing T1 maps for all sampling patterns and reduction factors. DOPAMINE exhibited quantitatively and qualitatively superior performance to conventional methods in reconstructing MR parameter maps from undersampled multi-channel k-space data. The proposed method can thus reduce the scan time of quantitative MR parameter mapping that uses a VFA model.
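To make the VFA model referenced above concrete, the following is a minimal sketch of T1 estimation from variable-flip-angle spoiled gradient-echo signals, using the standard linearized (DESPOT1-style) fit. All numerical values (TR, flip angles, simulated T1 and M0) are illustrative assumptions, not parameters from the paper, and this is not the DOPAMINE network itself, only the underlying signal model it embeds.

```python
import numpy as np

# VFA spoiled gradient-echo signal model:
#   S(a) = M0 * sin(a) * (1 - E1) / (1 - E1 * cos(a)),  E1 = exp(-TR / T1)
TR = 0.015        # repetition time in seconds (assumed value)
T1_true = 1.0     # simulated ground-truth T1 in seconds (assumed value)
M0_true = 100.0   # simulated proton-density scaling (assumed value)
alphas = np.deg2rad([2.0, 5.0, 10.0, 15.0])  # assumed flip-angle protocol

# Simulate noiseless VFA signals from the model
E1 = np.exp(-TR / T1_true)
S = M0_true * np.sin(alphas) * (1 - E1) / (1 - E1 * np.cos(alphas))

# Linearized fit: S/sin(a) = E1 * (S/tan(a)) + M0 * (1 - E1),
# so a straight-line fit of y = S/sin(a) against x = S/tan(a)
# gives slope = E1 and intercept = M0 * (1 - E1).
y = S / np.sin(alphas)
x = S / np.tan(alphas)
slope, intercept = np.polyfit(x, y, 1)

T1_est = -TR / np.log(slope)       # invert E1 = exp(-TR/T1)
M0_est = intercept / (1 - slope)

print(T1_est, M0_est)  # recovers T1 ≈ 1.0 s and M0 ≈ 100 for noiseless data
```

In the paper's setting this per-voxel model relates the parameter maps to the undersampled multi-channel k-space data inside the data consistency layer; the sketch above only shows the forward model and its closed-form linearized inversion for a single voxel.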
| Original language | English |
|---|---|
| Article number | 102017 |
| Journal | Medical Image Analysis |
| Volume | 70 |
| DOIs | |
| Publication status | Published - 2021 May |
Bibliographical note
Funding Information: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2019R1A2B5B01070488); the Bio & Medical Technology Development Program of the NRF funded by the Ministry of Science and ICT (NRF-2018M3A9H6081483); the Brain Research Program through the NRF funded by the Ministry of Science, ICT & Future Planning (2018M3C7A1024734); and the Y-BASE R&E Institute (a Brain Korea 21 program), Yonsei University; and was partially supported by the Graduate School of Yonsei University Research Scholarship Grants in 2017.
Publisher Copyright:
© 2021 Elsevier B.V.
All Science Journal Classification (ASJC) codes
- Radiological and Ultrasound Technology
- Radiology, Nuclear Medicine and Imaging
- Computer Vision and Pattern Recognition
- Health Informatics
- Computer Graphics and Computer-Aided Design