Radiologists examine lateral view radiographs of the cervical spine to determine the presence of cervical spinal injury. In this paper, we demonstrate that an artificial neural network can learn the steps employed by a radiologist when examining these radiographs for possible injury. We deconstructed the decision-making strategy into three steps: line drawing, prevertebral soft tissue thickness (PSTT) measurement, and swelling detection. After training neural networks guided by the radiologist's intermediate labels, the networks produced line drawings comparable to those of the radiologists, and the subsequent PSTT measurement and swelling detection were successful. Quantitative comparison of PSTT measurements between our proposed method and radiologists showed a high correlation (r = 0.8663, p < 0.05, and intraclass correlation coefficient = 0.9283 at the C2 level; r = 0.7720, p < 0.05, and intraclass correlation coefficient = 0.8667 at the C6 level). Using the radiologist's diagnosis as the reference standard, the sensitivity, specificity, and accuracy of swelling detection by our proposed method were 100%, 98.37%, and 98.48%, respectively. We conclude that our neural networks successfully learned the sequence of skills used by radiologists when interpreting radiographs for injury of the cervical spine.
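The reported diagnostic metrics follow the standard confusion-matrix definitions. A minimal sketch below computes sensitivity, specificity, and accuracy; the counts used are illustrative assumptions, not the study's actual data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts chosen for illustration only (not the study's data):
sens, spec, acc = diagnostic_metrics(tp=8, fp=2, tn=121, fn=0)
```

With zero false negatives, sensitivity is exactly 1.0 regardless of the other counts, which is how a 100% sensitivity with sub-100% specificity can arise.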
Bibliographical note
Funding Information:
This work was supported in part by the National Research Foundation (NRF) Grant through the Korean Government, Ministry of Science, ICT & Future Planning (MSIP), under Grant 2015R1A2A1A05001887 and Grant 2018R1A2B6009076, and in part by the NRF Grant through the Korean Government (MSIP) under Grant 2016R1A2B4015016.
© 2013 IEEE.