Automatic Segmentation for Inferior Alveolar Canal Localization: History
Artificial intelligence could bring global uniformity to dental reporting and assist dentists in their work, saving time while preserving quality for better outcomes.

  • artificial intelligence
  • algorithm
  • CBCT
  • AI
  • segmentation
  • automated
  • semi-automated
  • inferior alveolar nerve
  • dental radiology
  • oral and maxillofacial radiology

1. Introduction

Artificial intelligence (AI) is a broad domain combining the science and engineering of developing intelligent systems and machines [1][2] that can accomplish complex human cognitive functions such as problem-solving, structure and word recognition, and decision making [3]. AI has become integrated into our daily lives, directly and indirectly, through digital assistants (Apple’s Siri, Google Now, Amazon’s Alexa, Microsoft’s Cortana, etc.), online recommendations (music, products, movies, map navigation, etc.), advertisements, email filtering, smart replies, automatic detection, and other essential fields such as medicine, where it is in continuous development [4][5][6]. Machine learning, a subdivision of AI, enables algorithms to learn from data patterns and make predictions, whereas deep learning performs this process on larger volumes of raw data [7][8].
Making the most accurate knowledge-based decisions requires extensive experience and data analysis [9]. Based on this concept, AI is being implemented extensively in medicine, particularly in diagnosis and decision-making [8][9]. Two forms of AI exist in the medical field: virtual (electronic health records, diagnostic and treatment planning software, and others) and physical (robotic surgery assistance, smart prostheses, etc.) [1][10]. Moreover, AI applications in dentistry are growing rapidly [11]. They are used for caries detection and diagnosis [12], oral cancer screening [13][14], improvement of brushing technique [15], management of dental fear [16], automatic cleaning, shaping, and filling of the root canal [17], differential diagnosis, treatment planning, and detection of anatomical structures on dental radiographic data [18].
Despite the popularity of cone-beam computed tomography (CBCT) in dentistry, dentists’ knowledge of the basics of dental tomography and the use of CBCT remains questionable [19], owing to the lack of uniformity of the dental curriculum across dental schools worldwide. In particular, the exclusion of CBCT from undergraduate studies in some countries and the shortage of oral and maxillofacial radiology specialists in most European countries [19] have raised the question of whether, despite the growing number of CBCT machines, dentists are prepared for the diagnostic process [20]. Consequently, dentists seek additional training and are becoming interested in available tools that could assist them in reporting. Researchers have proposed AI as a fast assisting tool for dentists in reading and reporting two-dimensional (2D) and three-dimensional (3D) radiographic scans [21][22].
The inferior alveolar nerve (IAN) is an essential nerve that resides in the mandibular canal (MC), also known as the inferior alveolar canal (IAC), along with an artery and veins [23]. Both the IAN and the MC exhibit different path variations [24][25]. To avoid IAN injuries, which may range from temporary nerve numbness with or without paresthesia to permanent paresthesia (with or without trigeminal neuralgia) [26], proper tracing on the radiographic image can be helpful [27]. In particular, CBCT, which delivers 3D images [28], gives the operator the choice to evaluate the scanned structures from different views, allowing proper assessment of the IAC and tracing of the IAN [29].

2. Current Insights

The major weaknesses of most of the selected and analyzed studies were the variation in the indexes used to present results [30][31][32][33][34][35][36], the absence of clear exclusion criteria [30][31][32][35][36], and poor explanation of the reference test [30][32][35][36]. These weaknesses mainly affect the reproducibility of the studies, which is essential according to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines [37].
The samples used came from the same setting or location [30][32][33], and the accuracy of the training sets was not described extensively [30][32][36]. It is worth noting that more accurate results are expected with more extensive training sets, because an insufficient number of training samples may lead to over-fitting and reduce the algorithm’s ability to generalize to unseen data [38]. Inter-observer reliability was reported only in the study by Liu et al. [31], using weighted kappa (κ = 0.783). It should be emphasized that reporting both inter-rater and intra-rater reliability would help assess the reproducibility of each observer and the overall agreement between observers [39][40].
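The weighted kappa statistic Liu et al. reported can be sketched in a few lines of pure Python. This is a minimal illustration of Cohen’s weighted kappa for two raters over ordinal categories; the function name and the example ratings below are hypothetical and not taken from any of the reviewed studies:

```python
def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Cohen's weighted kappa for two raters over ordinal categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # observed joint distribution of the two raters' labels
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal distributions give the agreement expected by chance
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # agreement weight: 1 on the diagonal, shrinking with category distance
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return 1 - d if weights == "linear" else 1 - d * d

    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

# Hypothetical ordinal ratings from two observers (e.g., grades 0-2)
a = [0, 1, 2, 2, 1, 0, 1, 2]
b = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(weighted_kappa(a, b, categories=[0, 1, 2]), 3))
```

Unlike plain kappa, the weighted form gives partial credit to near-misses between adjacent ordinal categories, which is why it suits graded radiographic assessments.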
Analyzing the design, methodology, and reported results of the seven studies [30][31][32][33][34][35][36], we noted that the authors did not follow any defined guidelines. The reported accuracy of the diagnostic test in three studies [31][33][34] was given without presenting the diagnostic counts, yet the diagnostic values (true positives, false negatives, true negatives, false positives) are mandatory to ensure a complete evaluation of test accuracy [41].
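How the standard accuracy measures all derive from those four counts can be shown with a short sketch; the function name and the example counts are illustrative only:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard accuracy measures derived from the four diagnostic counts."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / total,
        # diagnostic odds ratio: odds of a positive test among the diseased
        # divided by the odds of a positive test among the healthy
        "dor": (tp * tn) / (fp * fn),
    }

# Illustrative counts: reporting only "accuracy" would hide the error
# profile that the four underlying counts make explicit.
m = diagnostic_measures(tp=90, fp=10, fn=10, tn=90)
print(m["sensitivity"], m["specificity"], m["dor"])  # 0.9 0.9 81.0
```

Because every measure above is a simple function of (TP, FP, FN, TN), publishing the four counts lets readers recompute any index a study omitted, which is exactly why they are mandatory for a complete evaluation.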
Considering the frequent CBCT artifacts (noise, extinction artifacts, beam hardening, scattering, motion artifacts, etc.) and their impact on diagnosis [42], testing the accuracy of the algorithm on a set of CBCT scans containing these artifacts is essential for future clinical application. In our review, none of the included studies considered this category in their samples, while Liu et al. [31] excluded blurred CBCT images caused by artifacts.
The principal research guidelines did not include an AI section, as they were established before the development of AI. This explains the high frequency of “unclear” and “not applicable” answers to the QUADAS-2 tool questions in our review. For example, the index test section yielded 50% “not applicable” and 7.14% “unclear” answers, as the QUADAS-2 tool was not designed to evaluate the risk of bias in AI diagnostic accuracy studies [43].
The number of studies testing the accuracy of AI in dentistry, especially in oral and maxillofacial radiology, is increasing alongside the addition of AI sections to research guidelines. Recently, Sounderajah et al. [44] began developing AI-specific extensions for the STARD, EQUATOR (Enhancing Quality and Transparency of Health Research), and TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) guidelines. Furthermore, the AI extensions for SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) [45] and CONSORT (Consolidated Standards of Reporting Trials) [46] have been developed and published, and now need to be endorsed by journals aiming to improve the quality of dental AI research [47]. A recent checklist by Schwendicke et al. [48] has been published to guide researchers, reviewers, and readers.

This entry is adapted from the peer-reviewed paper 10.3390/ijerph19010560

References

  1. Amisha; Malik, P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Fam. Med. Prim. Care 2019, 8, 2328–2331.
  2. Panch, T.; Szolovits, P.; Atun, R. Artificial intelligence, machine learning and health systems. J. Glob. Health 2018, 8, 020303.
  3. Helm, J.M.; Swiergosz, A.M.; Haeberle, H.S.; Karnutaet, J.M.; Schaffer, J.L.; Krebs, V.E.; Spitzer, A.I.; Ramkumar, P.N. Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions. Curr. Rev. Musculoskelet. Med. 2020, 13, 69–76.
  4. Lee, R.S.T. Artificial Intelligence in Daily Life; Springer: Singapore, 2020.
  5. Lee, D.; Yoon, S.N. Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges. Int. J. Environ. Res. Public Health 2021, 18, 271.
  6. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Academic Press: Cambridge, MA, USA, 2020; pp. 25–60.
  7. Benke, K.; Benke, G. Artificial Intelligence and Big Data in Public Health. Int. J. Environ. Res. Public Health 2018, 15, 2796.
  8. Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R. Artificial Intelligence in Surgery: Promises and Perils. Ann. Surg. 2018, 268, 70–76.
  9. Mintz, Y.; Brodie, R. Introduction to artificial intelligence in medicine. Minim. Invasive Ther. Allied Technol. 2019, 28, 73–81.
  10. Ramesh, A.N.; Kambhampati, C.; Monson, J.R.; Drew, P.J. Artificial intelligence in medicine. Ann. R. Coll. Surg. Engl. 2004, 86, 334–338.
  11. Hassani, H.; Andi, P.A.; Ghodsi, A.; Norouzi, K.; Komendantova, N.; Unger, S. Shaping the Future of Smart Dentistry: From Artificial Intelligence (AI) to Intelligence Augmentation (IA). IoT 2021, 2, 510–523.
  12. Samiuddin Ahmed, M.; Chaturya, K.; Vinay Chandra Tiwari, R.; Virk, I.; Kumar Gulia, S.; Rajkumar Pandey, P.; Tiwari, H. Digital Dentistry-New Era in Dentistry. J. Adv. Med. Dent. Sci. Res. 2020, 8, 67–70.
  13. Krishna, A.B.; Tanveer, A.; Bhagirath, P.V.; Gannepalli, A. Role of artificial intelligence in diagnostic oral pathology—A modern approach. J. Oral Maxillofac. Pathol. 2020, 24, 152–156.
  14. Kar, A.; Wreesmann, V.B.; Shwetha, V.; Thakur, S.; Rao, V.U.; Arakeri, G.; Brennan, P.A. Improvement of oral cancer screening quality and reach: The promise of artificial intelligence. J. Oral Pathol. Med. 2020, 49, 727–730.
  15. Alkilzy, M.; Midani, R.; Höfer, M.; Splieth, C. Improving Toothbrushing with a Smartphone App: Results of a Randomized Controlled Trial. Caries Res. 2019, 53, 628–635.
  16. Klingberg, G.; Sillén, R.; Norén, J.G. Machine learning methods applied on dental fear and behavior management problems in children. Acta Odontol. Scand. 1999, 57, 207–215.
  17. Aminoshariae, A.; Kulild, J.; Nagendrababu, V. Artificial Intelligence in Endodontics: Current Applications and Future Directions. J. Endod. 2021, 47, 1352–1357.
  18. Putra, R.H.; Doi, C.; Yoda, N.; Astuti, E.R.; Sasaki, K. Current applications and development of artificial intelligence for digital dental radiography. Dentomaxillofac. Radiol. 2021, 50, 20210197.
  19. Brown, J.; Jacobs, R.; Levring Jäghagen, E.; Lindh, C.; Baksi, G.; Schulze, D.; Schulze, R.; European Academy of DentoMaxilloFacial Radiology. Basic training requirements for the use of dental CBCT by dentists: A position paper prepared by the European Academy of DentoMaxilloFacial Radiology. Dentomaxillofac. Radiol. 2014, 43, 20130291.
  20. Macleod, I.; Heath, N. Cone-beam computed tomography (CBCT) in dental practice. Dent Update 2008, 35, 590–598.
  21. Hung, K.; Yeung, A.W.K.; Tanaka, R.; Bornstein, M.M. Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice. Int. J. Environ. Res. Public Health 2020, 17, 4424.
  22. Nagi, R.; Aravinda, K.; Rakesh, N.; Gupta, R.; Pal, A.; Mann, A.K. Clinical applications and performance of intelligent systems in dental and maxillofacial radiology: A review. Imaging Sci. Dent 2020, 50, 81–92.
  23. Nguyen, J.D.; Duong, H. Anatomy, Head and Neck, Alveolar Nerve. StatPearls. Available online: https://www.ncbi.nlm.nih.gov/books/NBK546712/ (accessed on 1 November 2021).
  24. Wolf, K.T.; Brokaw, E.J.; Bell, A.; Joy, A. Variant Inferior Alveolar Nerves and Implications for Local Anesthesia. Anesth. Prog. 2016, 63, 84–90.
  25. Ozturk, A.; Potluri, A.; Vieira, A.R. Position and course of the mandibular canal in skulls. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2012, 113, 453–458.
  26. Shavit, I.; Juodzbalys, G. Inferior alveolar nerve injuries following implant placement—Importance of early diagnosis and treatment: A systematic review. J. Oral Maxillofac. Res. 2014, 5, e2.
  27. Rood, J.P.; Shehab, B.A. The radiological prediction of inferior alveolar nerve injury during third molar surgery. Br. J. Oral Maxillofac. Surg. 1990, 28, 20–25.
  28. Kaasalainen, T.; Ekholm, M.; Siiskonen, T.; Kortesniemi, M. Dental cone beam CT: An updated review. Phys. Med. 2021, 88, 193–217.
  29. Weckx, A.; Agbaje, J.O.; Sun, Y.; Jacobs, R.; Politis, C. Visualization techniques of the inferior alveolar nerve (IAN): A narrative review. Surg. Radiol. Anat. 2016, 38, 55–63.
  30. Orhan, K.; Bilgir, E.; Bayrakdar, I.S.; Ezhov, M.; Gusarev, M.; Shumilov, E. Evaluation of artificial intelligence for detecting impacted third molars on cone-beam computed tomography scans. J. Stomatol. Oral Maxillofac. Surg. 2021, 122, 333–337.
  31. Liu, M.Q.; Xu, Z.N.; Mao, W.Y.; Li, Y.; Zhang, X.H.; Bai, H.L.; Ding, P.; Fu, K.Y. Deep learning-based evaluation of the relationship between mandibular third molar and mandibular canal on CBCT. Clin. Oral Investig. 2021.
  32. Bayrakdar, S.K.; Orhan, K.; Bayrakdar, I.S.; Bilgir, E.; Ezhov, M.; Gusarev, M.; Shumilov, E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med. Imaging 2021, 21, 86.
  33. Kwak, G.H.; Kwak, E.J.; Song, J.M.; Park, H.R.; Jung, Y.H.; Cho, B.H.; Hui, P.; Hwang, J.J. Automatic mandibular canal detection using a deep convolutional neural network. Sci. Rep. 2020, 10, 5711.
  34. Jaskari, J.; Sahlsten, J.; Järnstedt, J.; Mehtonen, H.; Karhu, K.; Sundqvist, O.; Hietanen, A.; Varjonen, V.; Mattila, V.; Kaski, K. Deep Learning Method for Mandibular Canal Segmentation in Dental Cone Beam Computed Tomography Volumes. Sci. Rep. 2020, 10, 5842.
  35. Abdolali, F.; Zoroofi, R.A.; Abdolali, M.; Yokota, F.; Otake, Y.; Sato, Y. Automatic segmentation of mandibular canal in cone beam CT images using conditional statistical shape model and fast marching. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 581–593.
  36. Bahrampour, E.; Zamani, A.; Kashkouli, S.; Soltanimehr, E.; Jahromi, M.G.; Pourshirazi, Z.S. Accuracy of software designed for automated localization of the inferior alveolar nerve canal on cone beam CT images. Dento Maxillo Facial Radiol. 2016, 45, 20150298.
  37. Cohen, J.F.; Korevaar, D.A.; Altman, D.G.; Bruns, D.E.; Gatsonis, C.A.; Hooft, L.; Irwig, L.; Levine, D.; Reitsma, J.B.; Bossuyt, P.M.; et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: Explanation and elaboration. BMJ Open 2016, 6, e012799.
  38. Ying, X. An Overview of Overfitting and its Solutions. J. Phys. Conf. Ser. 2019, 1168, 022022.
  39. McHugh, M.L. Interrater reliability: The kappa statistic. Biochem. Med. 2012, 22, 276.
  40. Innes, E.; Straker, L. Reliability of work-related assessments. Work 1999, 13, 107–124.
  41. Eusebi, P. Diagnostic Accuracy Measures. Cerebrovasc. Dis. 2013, 36, 267–272.
  42. Schulze, R.; Heil, U.; Groß, D.; Bruellmann, D.D.; Dranischnikow, E.; Schwanecke, U.; Schoemer, E. Artefacts in CBCT: A review. Dentomaxillofacial Radiol. 2011, 40, 265.
  43. Sounderajah, V.; Ashrafian, H.; Rose, S.; Shah, N.H.; Ghassemi, M.; Golub, R.; Kahn, C.E.; Esteva, A.; Karthikesalingam, A.; Mateen, B.; et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat. Med. 2021, 27, 1663–1665.
  44. Sounderajah, V.; Ashrafian, H.; Aggarwal, R.; de Fauw, J.; Denniston, A.K.; Greaves, F.; Karthikesalingam, A.; King, D.; Liu, X.; Markar, S.R.; et al. Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: The STARD-AI Steering Group. Nat. Med. 2020, 26, 807–808.
  45. Rivera, S.C.; Liu, X.; Chan, A.W.; Denniston, A.K.; Calvert, M.J. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Nat. Med. 2020, 26, 1351–1363.
  46. Liu, X.; Rivera, S.C.; Moher, D.; Calvert, M.J.; Denniston, A.K. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nat. Med. 2020, 26, 1364–1374.
  47. Clinical-Trials.ai|Home n.d. Available online: https://www.clinical-trials.ai/ (accessed on 1 November 2021).
  48. Schwendicke, F.; Singh, T.; Lee, J.H.; Gaudin, R.; Chaurasia, A.; Wiegand, T.; Uribe, S.; Krois, J. Artificial intelligence in dental research: Checklist for authors, reviewers, readers. J. Dent. 2021, 107, 103610.