The exceptional capabilities of artificial intelligence (AI), including pattern recognition, predictive analytics, and decision-making, enable the development of systems that can analyze complex medical data at a scale and precision beyond human capacity. AI has significantly impacted thyroid cancer diagnosis, offering advanced tools and methodologies that promise to improve patient outcomes.
1. AI in Thyroid Cancer Diagnosis
The adoption of AI in healthcare has become a pivotal development, profoundly reshaping the landscape of medical diagnosis, treatment, and patient care. AI’s exceptional capabilities, including pattern recognition, predictive analytics, and decision-making skills, enable the development of systems that can analyze complex medical data at a scale and precision beyond human capacity
[1]. This, in turn, augments early disease detection, facilitates accurate diagnoses, and aids personalized treatment planning. Moreover, AI-driven predictive models can forecast disease outbreaks, enhance the efficiency of hospital operations, and significantly improve patient outcomes
[2]. Additionally, AI has the potential to democratize healthcare by bridging the gap between rural and urban health services and making high-quality care more accessible. Hence, the importance of AI in healthcare is profound and will continue to grow as technology advances, leading to more sophisticated applications and better health outcomes for patients worldwide
[3,4]. However, user acceptance of these technologies hinges on trust, which serves as a mediator, influencing the impact of AI-specific factors on acceptance. Researchers have investigated how security, risk, and trust affect the adoption of AI-powered assistance
[5]. They have conducted empirical tests on their proposed research framework and found that trust plays a pivotal role in determining user acceptance.
Cancer, a leading cause of death, affects various body parts, as depicted in
Figure 1a. Among different types, thyroid carcinoma is one of the most common endocrine cancers globally
[6,7]. Concerns are mounting over the escalating incidence of thyroid cancer and its associated mortality. Research indicates that thyroid cancer incidence is higher in women aged 15–49 years (where it ranks fifth globally) than in men aged 50–69 years [8,9,10].
Figure 1. (a) Some of the common types of cancer and (b) thyroid cancer detection methods.
According to existing global epidemiological data, the rapid growth of abnormal thyroid nodules is driven by heightened genetic and cellular activity, in which the normal functioning of cells in an organism becomes intensified. Thyroid cancer can be categorized into four primary subtypes: papillary thyroid carcinoma (PTC)
[11], follicular thyroid carcinoma (FTC)
[12], anaplastic thyroid carcinoma (ATC)
[13], and medullary thyroid carcinoma (MTC)
[14]. Factors such as high radiation exposure, Hashimoto’s thyroiditis, psychological and genetic predispositions, and advances in detection technology contribute to the onset and rising detection of these cancer types. These conditions may subsequently lead to chronic health issues, including diabetes, irregular heart rhythms, and blood pressure fluctuations [15,16,17]. Although the quantity of cancer cells is a significant indicator of both invasiveness and poor prognosis in thyroid carcinoma, obtaining results is often time-consuming because cell appearance must be observed. Therefore, the detection and quantification of cell nuclei are considered alternative biomarkers for assessing cancer cell proliferation.
The utilization of computer-aided diagnosis (CAD) systems for analyzing thyroid cancer images has increased significantly in recent years. These systems, known for enhancing diagnostic accuracy and reducing interpretation time, have become an invaluable tool in the field. Among these technologies, radiomics, used in conjunction with ultrasonography imaging, has become widely accepted as a cost-effective, safe, simple, and practical diagnostic method in clinical practice. Endocrinologists frequently conduct ultrasound (US) scans in the 7–15 MHz range to identify thyroid cancer and evaluate its anatomical characteristics. The American College of Radiology has formulated the Thyroid Imaging, Reporting, and Data System (TI-RADS), which classifies thyroid nodules into six categories based on attributes such as composition, echogenicity, shape, size, margins, and echogenic foci; these classifications range from normal (TI-RADS 1) to malignant (TI-RADS 6) [18,19,20]. Several open-source applications are available for assessing these thyroid cancer features [21,22]. However, the identification and differentiation of nodules remain challenging, relying largely on radiologists’ personal experience and cognitive abilities. This is due to the subjective nature of human visual image interpretation, the poor quality of captured images, and the similarities among US images of benign thyroid nodules, malignant thyroid nodules, and lymph nodes.
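As a rough illustration of how such category assignment could be automated, the sketch below maps per-feature suspicion points to a TI-RADS-style category from 1 to 6. The point values and thresholds are hypothetical placeholders for illustration only, not the official ACR TI-RADS scoring rules.

```python
# Hypothetical rule-based sketch of a TI-RADS-style categorization.
# The point values and thresholds are illustrative placeholders, NOT the
# official ACR TI-RADS lexicon; a real system must follow the published rules.

def tirads_category(composition: int, echogenicity: int, shape: int,
                    margins: int, echogenic_foci: int) -> int:
    """Map per-feature suspicion points (0 = benign-looking, higher = more
    suspicious) to a TI-RADS-like category from 1 (normal) to 6 (malignant)."""
    total = composition + echogenicity + shape + margins + echogenic_foci
    if total == 0:
        return 1   # no suspicious features
    if total <= 2:
        return 2
    if total <= 4:
        return 3
    if total <= 6:
        return 4
    if total <= 8:
        return 5
    return 6       # highest suspicion

# Example: a solid, hypoechoic, taller-than-wide nodule with irregular margins
print(tirads_category(composition=2, echogenicity=2, shape=3,
                      margins=2, echogenic_foci=1))
```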
Moreover, ultrasonography imaging is often a time-intensive and stressful procedure, which can result in inaccurate diagnoses. Misclassifications among normal, benign, malignant, and indeterminate cases are common [23,24,25,26,27,28]. A fine-needle aspiration biopsy (FNAB) is typically conducted for a more precise diagnosis. However, FNAB can be an uncomfortable experience for patients, an inexperienced specialist may misclassify benign nodules as malignant, and the procedure imposes an additional financial burden [29,30] (refer to Figure 1b). Selecting discriminative nodule characteristics is the primary challenge in distinguishing between benign and malignant nodules. Numerous studies have explored the characterization of conventional US imaging for various cancers, including retinal [31,32], breast [33,34,35,36,37], blood [38,39], and thyroid cancers [40,41]. However, these methods remain insufficiently accurate for the reliable classification of thyroid nodules.
The incorporation of AI technology plays a pivotal role in reducing subjectivity and enhancing the accuracy of pathological diagnoses for various intractable diseases, including those affecting the thyroid gland
[42,43]. This enhancement is achieved through improved interpretation of ultrasonography images and faster processing times. Machine learning (ML) and deep learning (DL) have emerged as potential solutions for automating the classification of thyroid nodules in applications such as US, fine-needle aspiration (FNA), and thyroid surgery [44,45]. This potential has been underscored in numerous studies, such as [43,46,47,48,49,50]. Furthermore, ongoing studies are examining the use of this technology for cancer detection, where its effectiveness hinges on the volume of available data and the precision of the classification process.
2. Example of Thyroid Cancer Detection Using AI
To illustrate how thyroid cancer has been addressed in the literature and how AI can be used to detect different types of cancer, a simple example of thyroid disease classification is presented below. Pattern recognition is the process of training a neural network to assign the correct target classes to a set of input patterns; once trained, the network can be used to classify new patterns. Here, an example of thyroid classification into benign, malignant, and normal categories, based on a set of features specified according to TI-RADS, is considered. In this example, the dataset (7200 samples) was selected from the UCI Machine Learning Repository
[51]. This dataset can be used to create a neural network that classifies patients referred to a clinic as normal, hyperfunctioning, or subnormal functioning. The thyroid inputs and thyroid targets are defined as follows: (i) TI: a 21 × 7200 matrix representing 7200 patients characterized by 15 binary and 6 continuous attributes; (ii) TT: a 3 × 7200 matrix of associated class vectors defining which of the three classes each input is assigned to, with the class indicated by a one in row 1, 2, or 3: (1) normal, not hyperthyroid; (2) hyperfunctioning; (3) subnormal functioning.
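For concreteness, the input/target layout described above can be sketched in Python with NumPy as follows; the randomly generated arrays are placeholders standing in for the actual UCI data, and the variable names are illustrative.

```python
import numpy as np

# Placeholder data standing in for the UCI thyroid dataset described above.
rng = np.random.default_rng(0)
features = rng.random((21, 7200))        # TI: 21 attributes x 7200 patients
labels = rng.integers(1, 4, size=7200)   # class per patient: 1, 2, or 3

# Build the 3 x 7200 one-hot target matrix (TT): a one in row 1, 2, or 3
# marks the class of the corresponding column (patient).
targets = np.zeros((3, 7200))
targets[labels - 1, np.arange(7200)] = 1.0

print(features.shape, targets.shape)     # (21, 7200) (3, 7200)
```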
In this network, the data were divided into 5040, 1080, and 1080 samples for training, validation, and testing, respectively. The network was trained to reduce the error between its outputs and the thyroid targets, or until it reached the target goal. Training was halted early when the error on the validation set stopped decreasing. The testing dataset was then used to estimate how well the trained network generalizes. For this example, 10 neurons were used in the hidden layer, with 21 inputs and 3 outputs. After simulation of the model, the percent error was 5.337%, 7.407%, and 5.092% for training, validation, and testing, respectively. In total, the network correctly recognized 94.4% of the samples, and the overall error rate (ER) was 5.6%. The confusion matrix and the ROC metric are illustrated in
Figure 2.
Figure 2. An example of the confusion matrix and ROC metric for thyroid cancer classification.
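The workflow above follows a classical pattern-recognition-network setup. A roughly equivalent sketch in Python with scikit-learn is given below as an assumption of how it could be reproduced, not the original implementation; the random arrays are placeholders for the UCI thyroid data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder data standing in for the UCI thyroid dataset
# (7200 samples, 21 attributes, 3 classes); replace with the real data.
rng = np.random.default_rng(0)
X = rng.random((7200, 21))
y = rng.integers(1, 4, size=7200)

# 70% / 15% / 15% split for training, validation, and testing,
# mirroring the 5040 / 1080 / 1080 sample counts in the example.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=1, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=1, stratify=y_tmp)

# One hidden layer with 10 neurons; early stopping halts training when the
# score on an internal validation split stops improving.
clf = MLPClassifier(hidden_layer_sizes=(10,), early_stopping=True,
                    validation_fraction=0.15, max_iter=500, random_state=1)
clf.fit(X_train, y_train)

# Error rate, confusion matrix, and one-vs-rest ROC AUC on the test set.
y_pred = clf.predict(X_test)
print("Test error rate: %.3f%%" % (100.0 * np.mean(y_pred != y_test)))
print(confusion_matrix(y_test, y_pred))
print(roc_auc_score(y_test, clf.predict_proba(X_test), multi_class="ovr"))
```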
Figure 3 illustrates an example of thyroid segmentation in ultrasound images using K-means clustering (three clusters were chosen for this example), one of the most commonly used clustering techniques.
Figure 3. Example of thyroid segmentation based on the K-means method.
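A minimal sketch of the intensity-based K-means segmentation shown in Figure 3, assuming scikit-learn and a grayscale ultrasound image already loaded as a 2-D NumPy array (a random array is used here as a placeholder):

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for a grayscale ultrasound image loaded as a 2-D array
# (e.g., via imageio or OpenCV in a real pipeline).
rng = np.random.default_rng(0)
image = rng.random((256, 256))

k = 3                                     # three clusters, as in Figure 3
pixels = image.reshape(-1, 1)             # one intensity feature per pixel
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Reshape the cluster labels back into the image grid; each pixel now
# carries the index (0..k-1) of the intensity cluster it belongs to.
segmentation = kmeans.labels_.reshape(image.shape)
print(segmentation.shape, np.unique(segmentation))
```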