Abstract
Transfer learning with models pre-trained on the ImageNet database is frequently used in medical imaging, where obtaining large datasets is challenging. We estimated the value of deep learning for facial ultrasound (US) images by assessing the classification performance of current representative deep learning models trained through transfer learning and by analyzing their classification criteria. For this clinical study, we recruited 86 individuals and acquired US images of nine facial regions from each. To classify these regions, 15 deep learning models were trained on augmented and non-augmented datasets, and their performance was evaluated. The average F-measure score across all models was approximately 93% regardless of dataset augmentation, and the best-performing models were the classic VGG variants. The models treated the contours of skin and bone, rather than muscles and blood vessels, as the distinctive features for distinguishing regions in facial US images. These results can serve as reference data for future deep learning research on facial US images and for related content development.
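As a rough illustration of the transfer-learning setup described in the abstract, the sketch below fine-tunes an ImageNet-pretrained VGG16 for a nine-class facial-region classifier. The paper does not specify the framework, backbone variant, or training details, so PyTorch, the augmentation choices, and all hyperparameters here are illustrative assumptions rather than the authors' actual pipeline.

```python
# Minimal transfer-learning sketch (assumed PyTorch; details are illustrative).
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_REGIONS = 9  # nine facial US regions classified in the study

# Load a VGG backbone pre-trained on ImageNet, in line with the study's
# best-performing classic VGG models.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Replace the final classifier layer so it outputs one logit per facial region.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_REGIONS)

# Example augmentation pipeline for the augmented dataset (choices assumed,
# not taken from the paper). US images are typically grayscale, so they are
# replicated to three channels to match the ImageNet-pretrained input.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),      # VGG expects 224x224 inputs
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

Fine-tuning all layers, as above, is one common choice; freezing the convolutional backbone and training only the new head is another, and the abstract does not say which the authors used.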
Original language | English |
---|---|
Article number | 16480 |
Journal | Scientific Reports |
Volume | 12 |
Issue number | 1 |
Publication status | Published - 2022 Dec |
Bibliographical note
Funding Information:
We thank Shihyun Kim from Boston University and Soowan Kim from Johns Hopkins University for their revision of the English translation of the manuscript. This work was supported by the Korea Medical Device Development Fund grant, funded by the Korean government (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health & Welfare; and the Ministry of Food and Drug Safety) (Project Number: 1711138194, KMDF_PR_20200901_0109-01). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2019R1C1C1008813).
Publisher Copyright:
© 2022, The Author(s).
All Science Journal Classification (ASJC) codes
- General