Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods that first segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which optimal weights are assigned to the different imaging modalities through a weighted-skip connection method that emphasizes each modality's importance. We design a multimodal fusion framework (MFF) that classifies benign and malignant lesions from cropped B-mode and SE-mode ultrasound (US) lesion images. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on real-world clinical data show that the proposed method outperforms existing single- and multimodal methods. The proposed method correctly predicted seven benign patients as benign in three of five trials and six malignant patients as malignant in five of five trials. The proposed method could potentially enhance radiologists' classification accuracy for breast cancer detection in US images.
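The weighted-skip connection described above can be illustrated with a minimal sketch: encoder feature maps from the two modalities are combined with normalized modality weights before being passed on. All names, shapes, and the softmax normalization here are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a weighted-skip connection combining two modality
# feature maps (B-mode and SE-mode) with softmax-normalized scalar weights.
# Shapes and names are assumptions for illustration only.
import numpy as np

def weighted_skip(feat_b, feat_se, logits):
    """Fuse two modality feature maps using normalized weights."""
    w = np.exp(logits) / np.sum(np.exp(logits))  # softmax over 2 modalities
    return w[0] * feat_b + w[1] * feat_se

feat_b = np.ones((1, 64, 32, 32))        # B-mode encoder features
feat_se = np.full((1, 64, 32, 32), 3.0)  # SE-mode encoder features
fused = weighted_skip(feat_b, feat_se, np.array([0.0, 0.0]))
# equal logits -> equal weights (0.5, 0.5), so fused = (1 + 3) / 2 = 2
```

In a trainable model, the logits would be learned parameters so the network can emphasize whichever modality is more informative at each skip level.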
Journal: Bioengineering and Translational Medicine
Publication status: Accepted/In press - 2022
Bibliographical note

Funding information:
Development of ICT Convergence Technology for Daegu‐GyeongBuk Regional Industry, Grant/Award Number: 22ZD1100; Institute of Information & communications Technology Planning & Evaluation (IITP), Grant/Award Number: 2019‐0‐01906; Ministry of Science and ICT, South Korea, Grant/Award Number: S2640139; National Research Foundation of Korea, Grant/Award Numbers: 2020M3H2A1078045, 2020R1A6A1A03047902, 2021M3C1C3097624, NRF‐2019R1A2C2006269; Ministry of Small and Medium‐sized Enterprises and Startups (SMEs); Ministry of Education, Republic of Korea; Electronics and Telecommunications Research Institute (ETRI); Korean Government.
This work was supported by the National Research Foundation (NRF) grants (NRF‐2019R1A2C2006269 and 2020M3H2A1078045) funded by the Ministry of Science and ICT (MSIT); the Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No. 2019‐0‐01906, Artificial Intelligence Graduate School Program) funded by MSIT; the Tech Incubator Program for Startup (TIPS) program (S2640139) funded by the Ministry of Small and Medium‐sized Enterprises and Startups (SMEs); the Basic Science Research Program through the NRF grant (2020R1A6A1A03047902) funded by the Ministry of Education, Republic of Korea; the Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean Government (Development of ICT Convergence Technology for Daegu‐GyeongBuk Regional Industry) under Grant 22ZD1100; the National R&D Program through the NRF grant (2021M3C1C3097624) funded by the Ministry of Science and ICT; and the BK21 Four program.
© 2022 The Authors. Bioengineering & Translational Medicine published by Wiley Periodicals LLC on behalf of American Institute of Chemical Engineers.
All Science Journal Classification (ASJC) codes
- Biomedical Engineering
- Pharmaceutical Science