Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning

Sung Min Lee, Hwa Pyung Kim, Kiwan Jeon, Sang Hwy Lee, Jin Keun Seo

Research output: Contribution to journal › Article

Abstract

This paper presents a new approach to automatic three-dimensional (3D) cephalometric annotation for diagnosis, surgical planning, and treatment evaluation. There has long been considerable demand for automated cephalometric landmarking, since manual landmarking is time-consuming, requires experience, and demands objectivity and scrupulous error avoidance. Due to the inherent limitations of two-dimensional (2D) cephalometry and the 3D nature of surgical simulation, there is a trend away from 2D toward 3D cephalometry. Deep learning approaches to cephalometric landmarking seem highly promising, but serious difficulties remain in handling high-dimensional 3D CT data, where dimension refers to the number of voxels. To address this issue of dimensionality, this paper proposes a shadowed 2D image-based machine learning method that uses multiple shadowed 2D images with various lighting and view directions to capture 3D geometric cues. The proposed method, using VGG-net, was trained and tested on 2700 shadowed 2D images and the corresponding manual landmark annotations. Evaluation on the test data shows that the method achieved an average point-to-point error of 1.5 mm for the seven major landmarks.
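The core idea the abstract describes — rendering a 3D CT volume into shadowed 2D projections whose shading preserves 3D geometric cues — can be sketched as follows. This is a minimal illustration assuming a simple depth-map projection along one axis and a Lambertian shading model; the function `shadowed_projection`, the surface threshold, and the light model are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def shadowed_projection(volume, threshold, light_dir):
    """Render a shadowed 2D image from a 3D volume (illustrative sketch).

    Projects the first surface above `threshold` along the z axis into a
    depth map, estimates surface normals from the depth gradients, and
    shades them with a directional light (Lambertian model), so the 2D
    image retains 3D geometric cues. The threshold and shading model are
    assumptions for illustration, not the paper's exact pipeline.
    """
    mask = volume > threshold
    hit = mask.any(axis=2)                      # pixels where a surface exists
    depth = np.argmax(mask, axis=2).astype(float)
    depth = np.where(hit, depth, volume.shape[2] - 1.0)  # background: far plane

    # Surface normals from finite-difference depth gradients.
    gy, gx = np.gradient(depth)
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Lambertian shading: brightness = max(0, n . l) with unit light vector l.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shade = np.clip(normals @ l, 0.0, None)
    shade[~hit] = 0.0                           # no surface, no shading
    return shade

# Toy volume: a bright cube in an empty 32^3 grid stands in for bone in a CT scan.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
img = shadowed_projection(vol, 0.5, light_dir=(0.3, 0.3, 1.0))  # one view/light combo
```

In the paper's setting, multiple such renderings with varying light and view directions would then be fed to a VGG-style CNN that regresses the landmark positions.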

Original language: English
Article number: 055002
Journal: Physics in Medicine and Biology
Volume: 64
Issue number: 5
DOI: 10.1088/1361-6560/ab00c9
Publication status: Published - 2019 Jan 1


All Science Journal Classification (ASJC) codes

  • Radiological and Ultrasound Technology
  • Radiology, Nuclear Medicine and Imaging

Cite this

Lee, Sung Min; Kim, Hwa Pyung; Jeon, Kiwan; Lee, Sang Hwy; Seo, Jin Keun. Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning. In: Physics in Medicine and Biology. 2019; Vol. 64, No. 5, 055002.

TY  - JOUR
T1  - Automatic 3D cephalometric annotation system using shadowed 2D image-based machine learning
AU  - Lee, Sung Min
AU  - Kim, Hwa Pyung
AU  - Jeon, Kiwan
AU  - Lee, Sang Hwy
AU  - Seo, Jin Keun
PY  - 2019/1/1
Y1  - 2019/1/1
UR  - http://www.scopus.com/inward/record.url?scp=85061970247&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85061970247&partnerID=8YFLogxK
U2  - 10.1088/1361-6560/ab00c9
DO  - 10.1088/1361-6560/ab00c9
M3  - Article
C2  - 30669128
AN  - SCOPUS:85061970247
VL  - 64
JO  - Physics in Medicine and Biology
JF  - Physics in Medicine and Biology
SN  - 0031-9155
IS  - 5
M1  - 055002
ER  -