Development of a Unified Deep-Learning Model for Dental Abnormality Examination Using Digital Photographs

International Journal of Electronics and Communication Engineering
© 2024 by SSRG - IJECE Journal
Volume 11 Issue 5
Year of Publication: 2024
Authors : Ramya Mohan, A. Rama, Deepak Nallaswamy

Ramya Mohan, A. Rama, Deepak Nallaswamy, "Development of a Unified Deep-Learning Model for Dental Abnormality Examination Using Digital Photographs," SSRG International Journal of Electronics and Communication Engineering, vol. 11, no. 5, pp. 276-284, 2024. Crossref, https://doi.org/10.14445/23488549/IJECE-V11I5P126

Abstract:

Maintaining good Oral Health (OH) is important for an individual's general health. Clinical-level OH examination using the recommended protocol is time-consuming, so computerized, algorithm-supported methods have been widely adopted in recent years. Owing to its high accuracy, Deep Learning (DL) based OH assessment has become a popular procedure. This study proposes a novel Unified DL (UDL) technique that assesses the condition of the teeth from digital photographs of actual patients. The proposed UDL model applies the DL scheme for instance segmentation, tooth recognition, and classification with a clinically meaningful level of accuracy. The tool proceeds in four phases: oral image collection, assessment with the novel UDL model, DL segmentation to extract the tooth and caries regions, and performance verification on the selected image database. MIDNet18 serves as the foundation for the three distinct UDL schemes proposed in this study; the three UDL models are obtained by combining MIDNet18 with a Region Proposal Network (RPN) and ResNet101. Metrics such as F1-score and error are used to validate the effectiveness of the proposed tool, and the experimental results show that the UDL-II model outperforms the other models considered in this investigation.
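The abstract names F1-score and error as the validation metrics for the UDL models. As a minimal sketch of how those two figures are computed from binary tooth/caries classification labels (the label lists here are illustrative, not taken from the paper's dataset):

```python
def f1_and_error(y_true, y_pred, positive=1):
    """Return (F1-score, classification error) for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    # Error is the fraction of misclassified samples (1 - accuracy).
    error = sum(1 for t, p in zip(y_true, y_pred) if t != p) / len(y_true)
    return f1, error
```

In practice such metrics would be computed per class (tooth vs. caries) and averaged; the paper's exact averaging scheme is not stated in the abstract.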

Keywords:

Digital photograph, Instance segmentation, MIDNet18, Oral Health, ResNet101.
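The instance-segmentation stage extracts tooth and caries regions as pixel masks. Segmentation quality of that kind is commonly scored by mask overlap (Intersection-over-Union); the following is a plain-Python sketch of IoU on binary masks, included only as an illustration of the metric, since the paper's abstract does not specify its segmentation score:

```python
def mask_iou(mask_a, mask_b):
    """Intersection-over-Union of two binary masks given as nested lists of 0/1."""
    inter = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    union = sum(a | b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    # Empty masks on both sides give IoU 0 by convention here.
    return inter / union if union else 0.0
```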
