Automatic Segmentation for Inferior Alveolar Canal Localization: History

Artificial intelligence could bring global uniformity to dental reporting and assist dentists in their work, saving time while maintaining quality for better outcomes.

  • artificial intelligence
  • algorithm
  • CBCT
  • AI
  • segmentation
  • automated
  • semi-automated
  • inferior alveolar nerve
  • dental radiology
  • oral and maxillofacial radiology

1. Introduction

Artificial intelligence (AI) is a broad domain combining the science and engineering of developing intelligent systems and machines [1,2] that can accomplish complex human cognitive functions such as problem-solving, structure and word recognition, and decision-making [3]. AI has become integrated into our daily lives, directly and indirectly, through digital assistants (Apple’s Siri, Google Now, Amazon’s Alexa, Microsoft’s Cortana, etc.), online recommendations (music, products, movies, map navigation, etc.), advertisements, email filtering, smart replies, and automatic detection, as well as through essential fields such as medicine, where it is under continuous development [4,5,6]. Machine learning, a subdivision of AI, enables algorithms to learn from data patterns and make predictions, whereas deep learning extends this process to larger volumes of raw data [7,8].
Making the most accurate knowledge-based decisions requires extensive experience and data analysis [9]. Based on this concept, AI is being implemented extensively in medicine, particularly in diagnosis and decision-making [8,9]. Two forms of AI exist in the medical field: virtual (electronic health records, diagnostic and treatment-planning software, and others) and physical (robotic surgery assistance, smart prostheses, etc.) [1,10]. Moreover, AI applications in dentistry are growing rapidly [11]. They are used for caries detection and diagnosis [12], oral cancer screening [13,14], improvement of brushing technique [15], management of dental fear [16], automatic cleaning, shaping, and filling of the root canal [17], differential diagnosis, treatment planning, and detection of anatomical structures in dental radiographic data [18].
Despite the popularity of cone-beam computed tomography (CBCT) in dentistry, dentists’ knowledge of the basics of dental tomography and the use of CBCT remains questionable [19], owing to the lack of uniformity in dental curricula across dental schools worldwide. In particular, the exclusion of CBCT from undergraduate studies in some countries and the scarcity of oral and maxillofacial radiology specialists in most European countries [19] raise the question of whether, despite the growing number of CBCT machines, dentists are prepared for the diagnostic process [20]. Consequently, dentists seek additional training and are becoming interested in available tools that could assist them in reporting. Researchers have proposed artificial intelligence (AI) as a fast assisting tool for dentists in reading and reporting two-dimensional (2D) and three-dimensional (3D) radiographic scans [21,22].
The inferior alveolar nerve (IAN) is an essential nerve that resides in the mandibular canal (MC), also known as the inferior alveolar canal (IAC), alongside the artery and veins [23]. Both the IAN and the MC exhibit different path variations [24,25]. To avoid IAN injuries, which may range from temporary numbness with or without paresthesia to permanent paresthesia (with or without trigeminal neuralgia) [26], proper tracing on the radiographic image can be helpful [27]. In particular, CBCT delivers 3D images [28] and gives the operator the choice to evaluate the scanned structures from different views, allowing proper assessment of the IAC and tracing of the IAN [29].

2. Current Insights

The major weaknesses of most of the selected and analyzed studies were the variation in the indexes used to present results [37,38,39,40,41,42,43], the absence of clear exclusion criteria [37,38,39,42,43], and the poor explanation of the reference test [37,39,42,43]. These weaknesses mainly hinder the replication of the studies, which is essential according to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines [44].
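One factor behind this heterogeneity is that segmentation studies may report overlap, distance, or classification indexes that are not directly comparable. As a point of reference, below is a minimal sketch of the Dice similarity coefficient, one of the most common overlap indexes for segmentation accuracy; it assumes the predicted and reference IAC segmentations are available as binary NumPy voxel masks, and the array shapes and values are purely illustrative:

    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice similarity coefficient between two binary segmentation masks."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        total = pred.sum() + truth.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    # Hypothetical 3D voxel masks standing in for an IAC segmentation and its reference
    pred = np.zeros((64, 64, 64), dtype=bool)
    truth = np.zeros((64, 64, 64), dtype=bool)
    pred[30:34, 20:40, 20:40] = True
    truth[31:35, 20:40, 20:40] = True
    print(f"Dice: {dice_coefficient(pred, truth):.3f}")  # 0.750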
The samples used came from the same setting or location [37,39,40], and the accuracy of the training sets has not been described extensively [37,39,43]. It is worth noting that more accurate results are expected with more extensive training sets, because insufficient training samples may lead to over-fitting and reduce the algorithm’s ability to generalize to unseen data [45]. Inter-observer reliability was reported only in the study by Liu et al. [38], using a weighted kappa (k = 0.783). It should be emphasized that reporting both inter-rater and intra-rater reliability would be beneficial for assessing the reproducibility of each observer and the overall agreement between observers [46,47].
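For illustration, a weighted kappa of the kind Liu et al. reported can be computed with scikit-learn’s cohen_kappa_score. In the minimal sketch below, the ordinal rating scale and the two observers’ scores are invented, and linear weighting is assumed since the weighting scheme is not specified here:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical ordinal ratings (e.g., canal visibility scored 1-3) from two observers
    rater_a = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]
    rater_b = [3, 2, 2, 1, 2, 3, 3, 1, 3, 1]

    # Weighted kappa penalizes larger disagreements more heavily than plain kappa
    kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
    print(f"Weighted kappa: {kappa:.3f}")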
Analyzing the design, methodology, and reported results of the seven studies [37,38,39,40,41,42,43], we noted that the authors did not follow any defined guidelines. Three studies [38,40,41] reported the accuracy of the diagnostic test without presenting the diagnostic odds, yet the diagnostic values (true positives, false negatives, true negatives, false positives) are mandatory to ensure a complete evaluation of test accuracy [48].
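To make this concrete, the sketch below derives sensitivity, specificity, accuracy, and the diagnostic odds ratio from the four diagnostic values; the function name and the counts are hypothetical:

    def diagnostic_summary(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
            # Diagnostic odds ratio: odds of a positive test in diseased vs. non-diseased cases
            "diagnostic_odds_ratio": (tp * tn) / (fp * fn),
        }

    # Hypothetical counts for an IAC-detection test on 200 scans
    print(diagnostic_summary(tp=88, fp=7, fn=12, tn=93))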
Considering the frequent CBCT artifacts (noise, extinction artifacts, beam hardening, scattering, motion artifacts, etc.) and their impact on diagnosis [49], testing the accuracy of an algorithm on a set of CBCT scans that includes such artifacts is essential for future clinical application. In our review, none of the included studies considered this category in their samples, while Liu et al. [38] excluded blurred CBCT images caused by artifacts.
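One way such robustness testing could look in practice is sketched below: degraded copies of a CBCT volume are generated (Gaussian noise as a crude low-dose proxy, one-axis smoothing as a crude motion proxy) and would then be fed to the segmentation model under evaluation. The helper names are hypothetical, the stand-in volume is random data, and faithful simulation of beam hardening or extinction artifacts would require a dedicated physics-based tool:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_noise(volume: np.ndarray, sigma: float = 25.0, seed: int = 0) -> np.ndarray:
        """Add Gaussian noise, roughly approximating a noisy/low-dose acquisition."""
        rng = np.random.default_rng(seed)
        return volume + rng.normal(0.0, sigma, volume.shape)

    def simulate_motion_blur(volume: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        """Crude motion-artifact proxy: blur along a single axis only."""
        return gaussian_filter(volume, sigma=(0.0, 0.0, sigma))

    # Stand-in CBCT volume (random data); a real test would load actual scans
    volume = np.random.default_rng(1).normal(size=(64, 64, 64)) * 100
    for name, degraded in [("noise", add_noise(volume)), ("motion", simulate_motion_blur(volume))]:
        # In practice: compare model(degraded) against the reference segmentation
        print(name, degraded.shape)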
The principal research guidelines did not include an AI section because they were established before the rise of AI. This explains the high frequency of “unclear” and “not applicable” answers to the QUADAS-2 tool questions in our review. For example, the index test section yielded 50% “not applicable” and 7.14% “unclear” answers, as the QUADAS-2 tool was not designed to evaluate the risk of bias in AI diagnostic accuracy studies [50].
The number of studies testing the accuracy of AI in dentistry, especially in oral and maxillofacial radiology, is increasing alongside the addition of AI sections to research guidelines. Recently, Sounderajah et al. [51] started developing AI-specific extensions for the STARD, EQUATOR (Enhancing the Quality and Transparency of Health Research), and TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) guidelines. Furthermore, AI extensions for SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) [52] and CONSORT (Consolidated Standards of Reporting Trials) [53] have been developed and published, and they need to be endorsed by journals aiming to improve the quality of dental AI research [54]. A recent checklist by Schwendicke et al. [55] has been published to guide researchers, reviewers, and readers.

This entry is adapted from the peer-reviewed paper 10.3390/ijerph19010560
