RadImageNet and ImageNet as Datasets for Transfer Learning in the Assessment of Dental Radiographs: A Comparative Study.
Keywords: Cephalometry, Deep Learning, Machine Learning, Panoramic Radiography
Journal
Journal of Imaging Informatics in Medicine
ISSN: 2948-2933
Abbreviated title: J Imaging Inform Med
Country: Switzerland
NLM ID: 9918663679206676
Publication information
Publication date: 24 Jul 2024
History:
received: 2024-01-29
accepted: 2024-07-15
revised: 2024-07-14
medline: 2024-07-26
pubmed: 2024-07-26
entrez: 2024-07-24
Status: ahead of print
Abstract
Transfer learning (TL) is an alternative to training deep learning (DL) models from scratch: knowledge gained from large-scale data is transferred to solve a different problem. ImageNet, a publicly available large-scale dataset, is commonly used for TL-based image analysis; many studies have applied models pre-trained on ImageNet to clinical prediction tasks and reported promising results. However, some have questioned the effectiveness of ImageNet, which consists solely of natural images, for medical image analysis. The aim of this study was to evaluate whether models pre-trained on RadImageNet, a large-scale medical image dataset, could outperform ImageNet pre-trained models on classification tasks in dental imaging modalities. Two dental imaging datasets were used to compare the classification performance of RadImageNet and ImageNet pre-trained models for TL. The tasks were (1) classifying the presence or absence of supernumerary teeth from a dataset of panoramic radiographs and (2) classifying sex from a dataset of lateral cephalometric radiographs. Performance was evaluated by comparing the area under the curve (AUC). On the panoramic radiograph dataset, the RadImageNet models achieved a mean AUC of 0.68 ± 0.15, significantly lower than the ImageNet models' 0.74 ± 0.19 (p < 0.01). In contrast, on the lateral cephalometric dataset, the RadImageNet models achieved a mean AUC of 0.76 ± 0.09, and the ImageNet models 0.75 ± 0.17. The relative performance of RadImageNet and ImageNet models in TL thus depends on the dental image dataset used.
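The evaluation paradigm the abstract describes can be sketched in miniature: a frozen pre-trained backbone acts as a fixed feature extractor, a small classification head is trained on those features, and performance is compared via AUC. The sketch below is hypothetical and uses synthetic features in place of real RadImageNet/ImageNet CNN backbone outputs on dental radiographs; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 400, 64

# Binary label, e.g. supernumerary tooth present/absent.
y = rng.integers(0, 2, size=n)

# Synthetic stand-in for frozen backbone features: a small per-class
# mean shift makes the task learnable but not trivially separable.
feats = rng.normal(size=(n, d)) + 0.2 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    feats, y, test_size=0.25, random_state=0, stratify=y
)

# Trainable classification head on top of the frozen features
# (the "linear probe" style of transfer learning).
head = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# AUC on held-out data, the comparison metric used in the study.
auc = roc_auc_score(y_te, head.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```

In the study itself, the same head-training and AUC comparison would be repeated with features from RadImageNet-initialized and ImageNet-initialized backbones, and the two AUC distributions compared per dataset.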
Identifiers
pubmed: 39048809
doi: 10.1007/s10278-024-01204-9
pii: 10.1007/s10278-024-01204-9
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Copyright information
© 2024. The Author(s).