Deep neural network for automatic volumetric segmentation of whole-body CT images for body composition assessment

Yoon Seong Lee, Namki Hong, Joseph Nathanael Witanto, Ye Ra Choi, Junghoan Park, Pierre Decazes, Florian Eude, Chang Oh Kim, Hyeon Chang Kim, Jin Mo Goo, Yumie Rhee, Soon Ho Yoon

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Background & aims: Body composition analysis on CT images is a valuable tool for sarcopenia assessment. We aimed to develop and validate a deep neural network applicable to the whole-body CT images of PET-CT scans for the automatic volumetric segmentation of body composition.

Methods: For model development, 100 whole-body or torso 18F-fluorodeoxyglucose PET-CT scans of 100 patients were retrospectively included. Two radiologists semi-automatically labeled the following seven body components in every CT image slice, providing a total of 46,967 image slices from the 100 scans for training the 3D U-Net (training, 39,268 slices; tuning, 3116 slices; internal validation, 4583 slices): skin, bone, muscle, abdominal visceral fat, subcutaneous fat, internal organs with vessels, and central nervous system. Segmentation accuracy was assessed using reference masks from three external datasets: two Korean centers (4668 and 4796 image slices from 20 CT scans each) and a French public dataset (3763 image slices from 24 CT scans). The 3D U-Net-derived values were clinically validated against bioelectrical impedance analysis (BIA) and by assessing the model's diagnostic performance for sarcopenia in a community-based elderly cohort (n = 522).

Results: The 3D U-Net achieved accurate body composition segmentation, with an average Dice similarity coefficient of 96.5%–98.9% for all masks and 92.3%–99.3% for muscle, abdominal visceral fat, and subcutaneous fat in the validation datasets. The 3D U-Net-derived torso volumes of skeletal muscle and fat tissue, and the average areas of those tissues at the waist, were correlated with BIA-derived appendicular lean mass (correlation coefficients: 0.71 and 0.72, respectively) and fat mass (correlation coefficients: 0.95 and 0.93, respectively). The 3D U-Net-derived average areas of skeletal muscle and fat tissue at the waist were independently associated with sarcopenia (P < .001 each) after adjustment for age and sex, providing an area under the curve of 0.858 (95% CI, 0.815 to 0.901).

Conclusions: This deep neural network model enabled the automatic volumetric segmentation of body composition on whole-body CT images, potentially expanding adjunctive sarcopenia assessment on PET-CT scans and the volumetric assessment of metabolism in whole-body muscle and fat tissues.
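The segmentation accuracy reported above is the Dice similarity coefficient (DSC), defined for a predicted mask A and a reference mask B as DSC = 2|A ∩ B| / (|A| + |B|). Below is a minimal NumPy sketch of how a per-class DSC could be computed from volumetric label masks; the function and variable names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denominator = pred.sum() + ref.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denominator

# Illustrative usage: average DSC over seven body-composition classes of one
# scan, assuming integer label volumes `pred_labels` and `ref_labels` where
# classes 1..7 correspond to the seven labeled body components.
# per_class_dsc = [dice_coefficient(pred_labels == c, ref_labels == c)
#                  for c in range(1, 8)]
# mean_dsc = float(np.mean(per_class_dsc))
```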

Original language: English
Pages (from-to): 5038-5046
Number of pages: 9
Journal: Clinical Nutrition
Volume: 40
Issue number: 8
DOIs
Publication status: Published - 2021 Aug

Bibliographical note

Funding Information:
This study was funded by the Seoul R&BD Program (CY200053). The authors would like to acknowledge Andrew Dombrowski, PhD (Compecs, Inc.) for his assistance in improving the use of English in this manuscript. We thank the KURE team and study participants. The KURE cohort was supported by the Research of Korea Centers for Disease Control and Prevention (2016-ER6302-00; 2016-ER6302-01; 2016-ER6302-02; 2019-ER6302-00; 2019-ER6302-01; 2019-ER6302-02).

Publisher Copyright:
© 2021 The Author(s)

All Science Journal Classification (ASJC) codes

  • Nutrition and Dietetics
  • Critical Care and Intensive Care Medicine
