TY - JOUR
T1 - Learning-based local-to-global landmark annotation for automatic 3D cephalometry
AU - Yun, Hye Sun
AU - Jang, Tae Jun
AU - Lee, Sung Min
AU - Lee, Sang Hwy
AU - Seo, Jin Keun
N1 - Publisher Copyright:
© 2020 Institute of Physics and Engineering in Medicine.
PY - 2020/4/21
Y1 - 2020/4/21
N2 - The annotation of three-dimensional (3D) cephalometric landmarks in 3D computerized tomography (CT) has become an essential part of cephalometric analysis, which is used for diagnosis, surgical planning, and treatment evaluation. The automation of 3D landmarking with high precision remains challenging due to the limited availability of training data and the high computational burden. This paper addresses these challenges by proposing a hierarchical deep-learning method consisting of four stages: 1) a basic landmark annotator for 3D skull pose normalization, 2) a deep-learning-based coarse-to-fine landmark annotator on the midsagittal plane, 3) a low-dimensional representation of the full set of landmarks using a variational autoencoder (VAE), and 4) a local-to-global landmark annotator. The implementation of the VAE allows two-dimensional-image-based 3D morphological feature learning and similarity/dissimilarity representation learning of the concatenated vectors of cephalometric landmarks. The proposed method achieves an average 3D point-to-point error of 3.63 mm for 93 cephalometric landmarks using a small number of training CT datasets. Notably, the VAE captures variations of craniofacial structural characteristics.
UR - http://www.scopus.com/inward/record.url?scp=85084589978&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084589978&partnerID=8YFLogxK
U2 - 10.1088/1361-6560/ab7a71
DO - 10.1088/1361-6560/ab7a71
M3 - Article
AN - SCOPUS:85084589978
VL - 65
JO - Physics in Medicine and Biology
JF - Physics in Medicine and Biology
SN - 0031-9155
IS - 8
M1 - 085018
ER -