Artificial Intelligence Advancements in Cervical Cancer

Artificial intelligence has yielded remarkably promising results in several medical fields, particularly those with a strong imaging component. Gynecology relies heavily on imaging, since it offers useful visual data on the female reproductive system and supports a deeper understanding of pathophysiological concepts. The applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields. However, owing to growing interest in this area, some studies have been performed, with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience in women’s reproductive health.

Keywords: artificial intelligence; gynecology; deep learning; machine learning; cervical cancer

2.1. Artificial Intelligence Advancements in Cervical Cancer

Cervical cancer is highly prevalent, with a worldwide incidence of 13.3 cases per 100,000 women-years, a burden that is even higher in low-income countries [24]. It is also associated with a mortality rate of 7.2 deaths per 100,000 women-years [24]. Importantly, cervical cancer can be treated effectively if detected at an early stage [25]. In daily practice, cervical cancer screening is based on human papillomavirus (HPV) testing and cytological examination. The latter depends heavily on the pathologist’s experience and is limited by modest accuracy and high interobserver variability. Colposcopy is also a critical component of cervical cancer detection. However, given the high workload, visual screening is prone to misdiagnosis and low diagnostic accuracy [26]. Several authors have advocated the potential of AI-powered cytological examination and colposcopy image analysis to identify abnormal cells or lesions, thus strengthening cervical cancer screening and diagnostics [27]. Such a see-and-treat approach allows earlier, effective treatment of lesions with minimally invasive procedures, such as thermocoagulation, reducing progression to malignancy and the associated mortality [26], while avoiding unnecessary biopsies. Table 1 summarizes the most recent evidence on AI models in colposcopy.
Mehlhorn and colleagues were the first to study the implementation of an AI model in cervical cancer diagnosis, specifically during colposcopy. In 2012, the group developed a computer-assisted diagnostic (CAD) system based on image-processing methods to automatically analyze colposcopy images. The CAD system achieved a diagnostic accuracy of 80%, with a sensitivity of 85% and a specificity of 75%, in differentiating normal findings or cervical intraepithelial neoplasia grade 1 (CIN1) from high-grade squamous intraepithelial lesions (HSILs) (CIN2 or CIN3) in colposcopy exams [28]. A second study by the same group confirmed the benefit of CAD support during colposcopy evaluation, demonstrating an increase in diagnostic accuracy when the exam was interpreted by a less experienced gynecologist [29]. A Greek group developed and trained a clinical decision support system (CDSS) based on an artificial neural network to triage 740 women before referral to colposcopy, using the cytological diagnosis and the expression of various biomarkers [30]. Women flagged as having cervical intraepithelial neoplasia grade 2 or worse (CIN2+) were referred for colposcopy. The CDSS presented a sensitivity of 89.4%, a specificity of 97.1%, a positive predictive value of 89.4%, and a negative predictive value of 97.1%. This system has the potential to reduce the referral rate for colposcopy when applied in clinical practice.
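For readers less familiar with these triage metrics, the sketch below shows how sensitivity, specificity, positive predictive value, and negative predictive value are derived from a 2×2 confusion matrix; the counts are hypothetical and are not taken from the study in [30].

```python
# Illustration only: deriving the triage metrics reported for the CDSS from a
# 2x2 confusion matrix. The counts below are hypothetical, not study data.

def triage_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic metrics for a binary triage decision
    (e.g., CIN2+ vs. <CIN2)."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical counts for a triaged cohort (for illustration only).
    print(triage_metrics(tp=152, fp=18, tn=552, fn=18))
```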
Sato et al. were the first to develop a preliminary DL model, based on a Keras neural network, using 485 images from 158 individuals who underwent colposcopy [31]. The CNN was designed to classify colposcopy images and predict the post-procedure diagnosis. Patients were classified into three groups: severe dysplasia, carcinoma in situ (CIS), and invasive cancer (IC). Rather than evaluating the performance of a given AI-based model itself, the authors wanted to establish its feasibility and usefulness in clinical practice as a quick and efficient way to obtain an accurate preoperative diagnosis that could support clinical decision-making. The model reached 50% accuracy in this dataset.
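As an illustration of this kind of preliminary model, the following is a minimal Keras sketch of a small CNN for three-class colposcopy image classification; the architecture, image size, and training settings are assumptions rather than the authors' published configuration.

```python
# A minimal sketch of a small Keras CNN for three-class colposcopy image
# classification (severe dysplasia vs. CIS vs. invasive cancer). All settings
# here are illustrative assumptions, not the configuration used in [31].
import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_shape=(224, 224, 3), n_classes=3) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                 # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then follow the usual Keras pattern, e.g.:
# model = build_model()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```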
Asiedu et al. extracted color- and texture-based features from visual inspection with acetic acid (VIA) and Lugol’s iodine (VILI) images, and then used the data to train a support vector machine (SVM) model to distinguish cervical intraepithelial neoplasia (CIN) from normal and benign tissue [32]. The proposed framework achieved a sensitivity, specificity, and accuracy of 81.3%, 78.6%, and 80.0%, respectively, outperforming expert physicians on the same dataset. In the same year, Miyagi et al. developed a CNN for the classification of cervical squamous intraepithelial lesions from colposcopy images of 330 patients, 97 with low-grade squamous intraepithelial lesions (LSILs) and 213 with HSILs, who underwent colposcopy and lesion biopsy [33]. The CNN differentiated HSILs from LSILs with higher accuracy (82.3% vs. 79.7%) and specificity (88.2% vs. 77.3%) than the physicians evaluating the same images, although with slightly lower sensitivity (80.0% vs. 83.1%). A study by the same group in 2020 added the results of HPV testing [34]. The trained CNN revealed an accuracy of 94.1%, higher than the gynecologists’ global accuracy of 84.3%. This study was one of the first to include additional variables in order to increase the diagnostic accuracy of the CNN.
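The general recipe of handcrafted features feeding a classical classifier can be sketched as follows; the simple color statistics, the gradient-based texture proxy, and the scikit-learn pipeline are illustrative stand-ins for the richer descriptors used in [32].

```python
# A hedged sketch of handcrafted color/texture features feeding an SVM, in the
# spirit of [32]. The feature set is deliberately simple and is an assumption,
# not the study's published descriptor set.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def color_texture_features(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 uint8 array. Returns a small feature vector."""
    img = img.astype(np.float32)
    means = img.mean(axis=(0, 1))                     # per-channel color means
    stds = img.std(axis=(0, 1))                       # per-channel color spread
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = np.array([np.mean(gx ** 2 + gy ** 2)])  # crude texture energy
    return np.concatenate([means, stds, texture])

def train_svm(images, labels):
    X = np.stack([color_texture_features(im) for im in images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    scores = cross_val_score(clf, X, labels, cv=5)    # 5-fold CV, as in the study
    return clf.fit(X, labels), scores
```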
In 2020, Yuan and colleagues worked on a database composed of 22,330 cases, including 10,365 normal cases, 6357 LSIL cases, and 5608 HSIL cases [35]. Based on a dataset of three frames per case, they developed a ResNet CNN for differentiating between normal images and dysplastic lesions (LSILs or HSILs). The CNN revealed 85% sensitivity, 82% specificity, and 93% accuracy. They also created a U-Net model capable of delineating squamous lesions (LSILs or HSILs) in acetic acid and Lugol’s iodine images. The model had 84.7% sensitivity in acetic acid images and 61.6% in Lugol’s iodine images. These lesion delineation models are of utmost importance for guiding colposcopy-based biopsies. Finally, the group developed a Mask R-CNN model to detect HSILs. The model detected HSILs with 84.7% sensitivity in both acetic acid and iodine images, accurately identifying lesions that would benefit from treatment.
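A typical transfer-learning setup for such colposcopy classifiers, loosely mirroring the ResNet branch of this work, might look like the sketch below; the frozen ImageNet backbone, input size, and classification head are assumptions rather than the published pipeline.

```python
# A minimal transfer-learning sketch in the spirit of the ResNet classifier in
# [35] (normal vs. LSIL vs. HSIL). Backbone freezing, input size, and optimizer
# are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
base.trainable = False                              # start with a frozen backbone

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(3, activation="softmax")(x)  # normal / LSIL / HSIL
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```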
A Chinese group carried out a study to develop and validate a Colposcopic Artificial Intelligence Auxiliary Diagnostic System (CAIADS) using digital records of 19,435 patients, including colposcopy images and pathological results, the latter considered the gold standard [36]. Agreement between CAIADS-graded colposcopy and pathology findings was higher than that of expert-interpreted colposcopy (82.2% vs. 65.9%). The CAIADS model was able to increase its diagnostic accuracy after considering patient-related factors (such as previous cytology results). The model also revealed a superior ability to predict biopsy sites, with a median mean intersection-over-union (mIoU) of 0.758.
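The intersection-over-union metric used to score predicted biopsy sites against pathologist annotations can be illustrated with a few lines of code; the binary masks here are hypothetical placeholders rather than data from the CAIADS study.

```python
# Illustrative computation of IoU between a predicted lesion/biopsy-site mask
# and an annotated reference mask, plus the median across cases. The masks are
# hypothetical numpy arrays, not data from [36].
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                                  # both masks empty: perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

def median_iou(pred_masks, target_masks) -> float:
    return float(np.median([iou(p, t) for p, t in zip(pred_masks, target_masks)]))
```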
In 2021, Fu et al. set out to create a model incorporating the results of HPV typing, cytological examination, and colposcopy analysis [37]. First, they acquired colposcopy images and created a multiple-image-based DL model using multivariable logistic regression (MLR), which presented an area under the curve (AUC) of 0.845. Then, the results of the cytology and HPV tests were used to build an ML model, with an AUC of 0.837. Finally, they built a cross-modal integrated model by combining the multiple-image-based DL model with the cytology–HPV joint diagnostic model. The authors demonstrated the synergistic benefit of the ensemble model, which reached a higher AUC of 0.921. A ShuffleNet-based cervical precancerous lesion classification method using colposcopy images was developed by Fang and colleagues [38]. The image dataset was classified into five categories, namely normal, cervical cancer, LSILs (CIN1), HSILs (CIN2/CIN3), and cervical neoplasm. In this dataset, the colposcopy images were augmented to reduce the impact of the uneven distribution across lesion categories. Additionally, the ShuffleNet model was compared with other CNNs (such as ResNet and DenseNet). The new CNN model presented a global accuracy of 81.23%, with an AUC of 0.99. A recent study by Chen et al. collected images from 6002 colposcopy examinations of normal cervixes and those with LSILs and HSILs [39]. A new model based on EfficientNet-B0 with a Gated Recurrent Unit (GRU) was developed to accurately identify HSILs. The CNN revealed a sensitivity of 93.6%, specificity of 87.6%, and accuracy of 90.6% in distinguishing between HSILs, LSILs, and normal-cervix images.
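One simple way to picture the cross-modal integration step described for Fu et al. is a logistic-regression "stacker" over the image-model probability and the cytology/HPV results, as in the sketch below; the feature coding and the in-sample AUC are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of cross-modal integration along the lines of [37]: the
# probability output of an image-based DL model is combined with cytology and
# HPV results by a simple logistic-regression stacker, scored by AUC.
# Feature names and coding are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_cross_modal(p_image: np.ndarray, cytology_grade: np.ndarray,
                    hpv_positive: np.ndarray, y: np.ndarray):
    """p_image: per-patient probability from the image model; cytology_grade:
    ordinal cytology code; hpv_positive: 0/1; y: ground-truth label (e.g., CIN2+)."""
    X = np.column_stack([p_image, cytology_grade, hpv_positive])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])  # in-sample AUC, for illustration
    return model, auc
```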
The diagnosis of cervical cancer can also be supported by magnetic resonance imaging (MRI). Urushibara et al. designed a study including 418 patients (177 with pathologically confirmed cervical cancer and 241 without cancer) who underwent MRI between 2013 and 2020 [40]. They compared the performance of a DL architecture, called Xception, with that of experienced radiologists in the diagnosis of cervical cancer on sagittal T2-weighted images. The CNN presented higher sensitivity (88.3% vs. 78.3–86.7%) and accuracy (90.8% vs. 86.7–89.2%), with similar specificity.
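For orientation, a binary Xception-based classifier for sagittal T2-weighted slices could be set up as in the sketch below; the input size, single-channel replication, and frozen backbone are assumptions rather than the configuration used in [40].

```python
# A minimal sketch of a binary Xception classifier for sagittal T2-weighted MRI
# slices (cancer vs. no cancer), echoing the architecture named in [40].
# Input size, channel handling, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                      input_shape=(299, 299, 3))
base.trainable = False

inputs = tf.keras.Input(shape=(299, 299, 1))        # single-channel MRI slice
x = layers.Concatenate()([inputs, inputs, inputs])  # replicate to 3 channels
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # probability of cancer
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```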
AI models for cervical cancer diagnosis have also been developed at the cytological level. In 2019, Sompawong and colleagues applied a Mask Regional Convolutional Neural Network (Mask R-CNN) to analyze cervical cells from liquid-based cytology slides, screening for abnormal nuclear features [41]. The proposed algorithm achieved an accuracy of 91.7%, sensitivity of 91.7%, and specificity of 91.7%. In the same year, a group of Indian pathologists trained a CNN to identify abnormal features in liquid-based cytology (LBC) smears, using 2816 images—816 presenting abnormal features, indicating LSILs or HSILs, and 2000 normal images, containing benign epithelial cells and reactive changes [42]. The model yielded a sensitivity of 95.6% and a specificity of 79.8%. In addition, its high negative predictive value of 99.1% makes it a potentially valuable tool for cervical cancer screening. This technological development was accompanied by a multicenter observational study that evaluated the performance of AI-assisted cytology for the detection of CIN or cancer [43]. The group used 188,542 digital cytological images to train a supervised DL algorithm. The DL model detected 92.6% of CIN2 and 96.1% of CIN3 cases, showing equivalent sensitivity but higher specificity compared to skilled senior cytologists.
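How cell-level detections become a slide-level screening call can be sketched simply; the threshold and aggregation rule below are illustrative assumptions, not the decision logic of the cited systems.

```python
# A hedged sketch of slide-level screening on top of cell-level detections, in
# the spirit of the cytology systems in [41,42]: a detector scores individual
# cells/nuclei, and the slide is flagged if any score exceeds a threshold.
import numpy as np

def slide_level_call(cell_scores: np.ndarray, threshold: float = 0.5) -> dict:
    """cell_scores: per-cell abnormality probabilities from a detector
    (e.g., a Mask R-CNN or patch-based CNN). Returns a slide-level decision."""
    n_abnormal = int(np.sum(cell_scores >= threshold))
    return {
        "abnormal_cells": n_abnormal,
        "max_score": float(cell_scores.max()) if cell_scores.size else 0.0,
        "slide_positive": n_abnormal > 0,           # flag slide for expert review
    }

# Example with made-up per-cell scores from one slide:
print(slide_level_call(np.array([0.05, 0.12, 0.81, 0.33])))
```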
A validated AI-assisted cytology system, called Landing CytoScanner®, was evaluated in a cohort study including 0.7 million women [44]. Women with abnormal results in both the AI-assisted and manual readings were diagnosed using colposcopy and biopsy. The outcome was histologically confirmed CIN grade 2 or worse (CIN2+). The agreement rate between the AI and manual readings was 94.7%, with a kappa value of 0.92. The large number of images analyzed contributed to the robustness of this study. Given its ability to exclude most normal cytology, with increased sensitivity compared with manual cytology readings, the results support the AI-based cytology system for primary screening of cervical cancer in large-scale populations. More recently, a Chinese group studied the diagnostic performance of artificial intelligence-enabled liquid-based cytology (AI-LBC) in triaging HPV-positive women [45]. AI-LBC achieved a sensitivity for the detection of CIN2+ comparable to that of experienced cytologists (86.49% vs. 83.78%), but with significantly higher specificity (51.33% vs. 40.93%). Similar results were observed for CIN3+. Moreover, AI-LBC reduced colposcopy referrals by 10% compared with cytologists, making the process more efficient by reducing the number of false positives in the cytological evaluation. Despite these positive results, prospective studies are needed to confirm the triaging performance of the model.
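The agreement statistics reported here (percent agreement and Cohen's kappa) can be reproduced with a few lines of code; the reading arrays below are made-up placeholders rather than study data.

```python
# Illustration of the agreement statistics in [44]: overall percent agreement
# and Cohen's kappa between AI-assisted and manual cytology readings.
# The label arrays are hypothetical placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def agreement_stats(ai_calls: np.ndarray, manual_calls: np.ndarray):
    """Both arrays hold categorical reading results for the same women."""
    percent_agreement = float(np.mean(ai_calls == manual_calls))
    kappa = cohen_kappa_score(ai_calls, manual_calls)   # chance-corrected agreement
    return percent_agreement, kappa

# Example with made-up binary readings (0 = normal, 1 = abnormal):
ai = np.array([0, 0, 1, 1, 0, 1, 0, 0])
manual = np.array([0, 0, 1, 0, 0, 1, 0, 0])
print(agreement_stats(ai, manual))
```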
To increase the diagnostic accuracy for cervical lesions, new imaging methods have also been evaluated. High-resolution endomicroscopy (HRME) is a fiber-optic fluorescence microscopy technique capable of acquiring nuclear images in vivo. In 2022, Brenes et al. used a dataset of images from over 1600 patients to train, validate, and test a CNN algorithm to diagnose CIN2+ cases from HRME images [46]. The proposed method consistently outperformed the current gold-standard methods, achieving an accuracy of 87%, with a sensitivity of 94% and a specificity of 58%. By incorporating HPV status, specificity increased to 71%.
Finally, AI models can also provide prognostic information, guiding therapeutic decisions. In 2019, Matsuo et al. compared the performance of a DL model with four survival-analysis models, including the Cox proportional hazards regression model, the mainstay of survival analysis in oncologic research, in predicting survival in women with cervical cancer [47]. The study included 768 women, with a median follow-up time of 40.2 months. The new model exhibited superior performance, outperforming the classical prediction models for overall survival, but with similar results in predicting progression-free survival. The prognostic value of DL algorithms was also assessed in a retrospective study of 157 women who developed recurrent cervical cancer among 431 women diagnosed with cervical cancer between January 2008 and December 2014 [48]. Predictions of 3- and 6-month survival after recurrence were compared between the current approach (a linear regression model) and the experimental approach (a DL neural network model). The DL model used clinical and laboratory parameters as inputs and achieved significantly better prediction of 3-month (AUC 0.747 vs. 0.652) and 6-month (AUC 0.724 vs. 0.685) survival. Better prediction of limited life expectancy in women with recurrent cervical cancer paves the way for even more personalized clinical decisions, helping clinicians to individually adjust the level of care provided.
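As a reference point for the classical baseline discussed for [47], a Cox proportional-hazards model can be fitted with the lifelines package as sketched below; the covariates and the small synthetic data frame are illustrative assumptions, not study variables.

```python
# A hedged sketch of the classical survival baseline in [47]: a Cox
# proportional-hazards model for overall survival fitted with lifelines.
# The covariates and synthetic data are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followup": [40, 12, 55, 8, 30, 62, 26, 18],  # time to event or censoring
    "death": [0, 1, 0, 1, 1, 0, 0, 1],                   # 1 = event observed
    "age": [45, 60, 38, 71, 52, 49, 57, 64],
    "figo_stage": [1, 3, 1, 4, 2, 2, 3, 2],              # coded stage (hypothetical)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="death")
cph.print_summary()                                      # hazard ratios per covariate
```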
Table 1. Summary of studies on AI implementation in colposcopy. Sn, sensitivity; Sp, specificity; AUC, area under the curve; CIN, cervical intraepithelial neoplasia; HSIL, high-grade squamous intraepithelial lesion; LSIL, low-grade squamous intraepithelial lesion; N, normal; AF, acid-free; VIA, visual inspection with acetic acid; VILI, visual inspection with Lugol’s iodine; NK, not known.
| Author, Year, Country [Ref] | Study Aim | Patients (n) | Frames (n) | Pathologic Confirmation | AI Method | Dataset Method | Analysis Method | Categories | Sn (%) | Sp (%) | AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Mehlhorn, 2012, Germany [28] | Detection of CIN 2/3 lesions | 198 | 375 frames (VIA): normal 39; CIN 1: 41; CIN 2: 99; CIN 3: 19 | Yes | Color texture analysis | Frame annotation in VIA (normal vs. CIN 1 vs. CIN 2–3) | n-fold cross-validation | HSIL (CIN 2 or CIN 3) | 85 | 75 | 80 |
| Asiedu, 2019, USA [32] | Differentiating normal vs. abnormal (CIN+) | 134 | Not known (only the number of patients per category) | Yes | SVM | Frame annotation in VIA and VILI (VIA/VILI positive vs. negative) | 5-fold cross-validation (80–20%) | Abnormal (LSIL or HSIL) | 81 | 79 | 80 |
| Miyagi, 2019, Japan [33] | Differentiating LSIL vs. HSIL | 330 | 1 frame per colposcopy (VIA): LSIL 97; HSIL 213 | Yes | ResNet | Frame labeling in acid-free images (LSIL vs. HSIL) | 5-fold cross-validation | LSIL vs. HSIL | 80 | 88 | 83 |
| Yuan, 2020, China [35] | Differentiating normal vs. abnormal (LSIL+) | 22,330 | 3 frames per colposcopy (AF, VIA and VILI): normal 10,365 × 3; LSIL 6357 × 3; HSIL 5608 × 3 | Yes | ResNet | Frame annotation in AF, VIA and VILI (normal vs. LSIL vs. HSIL) | Train–test validation (80–10–10%) | Abnormal (LSIL or HSIL) | 85 | 82 | 93 |
| | Predicting the area of the lesion (LSIL+) | 11,198 | 11,198 VIA frames + 11,198 VILI frames: normal NK; LSIL NK; HSIL NK | Yes | U-Net | | | VIA | 85 | NK | NK |
| | | | | | | | | VILI | 62 | NK | NK |
| | Detection of HSIL | 11,198 | | Yes | Mask R-CNN | | | VIA | 85 | NK | NK |
| | | | | | | | | VILI | 85 | NK | NK |
| Xue, 2020, China [36] | Differentiating normal vs. LSIL vs. HSIL vs. cancer | 19,435 | 101,7267 acid-free frames: normal NK; LSIL NK; HSIL NK; cancer NK | Yes | U-Net + YOLO | Frame annotation in acid-free images (normal vs. LSIL vs. HSIL vs. cancer) | Train–test validation (70–10–20%) | LSIL+ | 87 | 49 | 69 |
| | | | | | | | | HSIL+ | 66 | 90 | 78 |
| Chen, 2022, China [39] | Differentiating LSIL vs. HSIL | 6002 | 18,006 frames (AF, VIA and VILI) | Yes | EfficientNet-B0 with GRU | Frame labeling in AF, VIA and VILI (LSIL vs. HSIL) | Train–test validation (60–20–20%) | LSIL vs. HSIL | 88 | 94 | 91 |
| Fang, 2022, China [38] | Differentiating normal vs. cervical cancer vs. LSIL vs. HSIL vs. cervical neoplasm | 1189 | 6996 acid-free frames: normal 2352; LSIL 780; HSIL 2532; cervical cancer 408; cervical neoplasm 924 | Not mentioned | ShuffleNet | Frame labeling in acid-free images (normal vs. LSIL vs. HSIL vs. cervical cancer vs. cervical neoplasm) + data augmentation | Train–test (90–10%) | N vs. all | 90 | NK | NK |
| | | | | | | | | LSIL vs. all | 86 | NK | NK |
| | | | | | | | | HSIL vs. all | 82 | NK | NK |
| | | | | | | | | Cervical neoplasm vs. all | — | NK | NK |
| | | | | | | | | Cervical cancer vs. all | — | NK | NK |

References

  1. Dhombres, F.; Bonnard, J.; Bailly, K.; Maurice, P.; Papageorghiou, A.T.; Jouannic, J.M. Contributions of Artificial Intelligence Reported in Obstetrics and Gynecology Journals: Systematic Review. J. Med. Internet Res. 2022, 24, e35465.
  2. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
  3. Gore, J.C. Artificial intelligence in medical imaging. Magn. Reson. Imaging 2020, 68, A1–A4.
  4. Malani, S.N.t.; Shrivastava, D.; Raka, M.S. A Comprehensive Review of the Role of Artificial Intelligence in Obstetrics and Gynecology. Cureus 2023, 15, e34891.
  5. Amisha; Malik, P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Fam. Med. Prim. Care 2019, 8, 2328–2331.
  6. Ahuja, A.S. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ 2019, 7, e7702.
  7. Liu, P.R.; Lu, L.; Zhang, J.Y.; Huo, T.T.; Liu, S.X.; Ye, Z.W. Application of Artificial Intelligence in Medicine: An Overview. Curr. Med. Sci. 2021, 41, 1105–1115.
  8. Xu, J.; Xue, K.; Zhang, K. Current status and future trends of clinical diagnoses via image-based deep learning. Theranostics 2019, 9, 7556–7565.
  9. Ashrafian, H.; Darzi, A.; Athanasiou, T. A novel modification of the Turing test for artificial intelligence and robotics in healthcare. Int. J. Med. Robot. 2015, 11, 38–43.
  10. Le Berre, C.; Sandborn, W.J.; Aridhi, S.; Devignes, M.D.; Fournier, L.; Smail-Tabbone, M.; Danese, S.; Peyrin-Biroulet, L. Application of Artificial Intelligence to Gastroenterology and Hepatology. Gastroenterology 2020, 158, 76–94.e2.
  11. Yang, Y.J.; Bang, C.S. Application of artificial intelligence in gastroenterology. World J. Gastroenterol. 2019, 25, 1666–1683.
  12. Motwani, M.; Dey, D.; Berman, D.S.; Germano, G.; Achenbach, S.; Al-Mallah, M.H.; Andreini, D.; Budoff, M.J.; Cademartiri, F.; Callister, T.Q.; et al. Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: A 5-year multicentre prospective registry analysis. Eur. Heart J. 2017, 38, 500–507.
  13. Waljee, A.K.; Higgins, P.D. Machine learning in medicine: A primer for physicians. Am. J. Gastroenterol. 2010, 105, 1224–1226.
  14. Mascarenhas, M.; Afonso, J.; Andrade, P.; Cardoso, H.; Macedo, G. Artificial intelligence and capsule endoscopy: Unravelling the future. Ann. Gastroenterol. 2021, 34, 300–309.
  15. Rashidi, H.H.; Tran, N.; Albahra, S.; Dang, L.T. Machine learning in health care and laboratory medicine: General overview of supervised learning and Auto-ML. Int. J. Lab. Hematol. 2021, 43 (Suppl. S1), 15–22.
  16. Handelman, G.S.; Kok, H.K.; Chandra, R.V.; Razavi, A.H.; Lee, M.J.; Asadi, H. eDoctor: Machine learning and the future of medicine. J. Intern. Med. 2018, 284, 603–619.
  17. Cleret de Langavant, L.; Bayen, E.; Yaffe, K. Unsupervised Machine Learning to Identify High Likelihood of Dementia in Population-Based Surveys: Development and Validation Study. J. Med. Internet Res. 2018, 20, e10493.
  18. Albahra, S.; Gorbett, T.; Robertson, S.; D’Aleo, G.; Kumar, S.V.S.; Ockunzzi, S.; Lallo, D.; Hu, B.; Rashidi, H.H. Artificial intelligence and machine learning overview in pathology & laboratory medicine: A general review of data preprocessing and basic supervised concepts. Semin. Diagn. Pathol. 2023, 40, 71–87.
  19. Li, N.; Zhao, X.; Yang, Y.; Zou, X. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network. Comput. Intell. Neurosci. 2016, 2016, 7942501.
  20. Shrestha, P.; Poudyal, B.; Yadollahi, S.; Wright, D.E.; Gregory, A.V.; Warner, J.D.; Korfiatis, P.; Green, I.C.; Rassier, S.L.; Mariani, A.; et al. A systematic review on the use of artificial intelligence in gynecologic imaging—Background, state of the art, and future directions. Gynecol. Oncol. 2022, 166, 596–605.
  21. Drukker, L.; Noble, J.A.; Papageorghiou, A.T. Introduction to artificial intelligence in ultrasound imaging in obstetrics and gynecology. Ultrasound Obstet. Gynecol. 2020, 56, 498–505.
  22. Iftikhar, P.; Kuijpers, M.V.; Khayyat, A.; Iftikhar, A.; DeGouvia De Sa, M. Artificial Intelligence: A New Paradigm in Obstetrics and Gynecology Research and Clinical Practice. Cureus 2020, 12, e7124.
  23. Jost, E.; Kosian, P.; Jimenez Cruz, J.; Albarqouni, S.; Gembruch, U.; Strizek, B.; Recker, F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J. Clin. Med. 2023, 12, 6833.
  24. Singh, D.; Vignat, J.; Lorenzoni, V.; Eslahi, M.; Ginsburg, O.; Lauby-Secretan, B.; Arbyn, M.; Basu, P.; Bray, F.; Vaccarella, S. Global estimates of incidence and mortality of cervical cancer in 2020: A baseline analysis of the WHO Global Cervical Cancer Elimination Initiative. Lancet Glob. Health 2023, 11, e197–e206.
  25. Pimple, S.A.; Mishra, G.A. Global strategies for cervical cancer prevention and screening. Minerva Ginecol. 2019, 71, 313–320.
  26. Bedell, S.L.; Goldstein, L.S.; Goldstein, A.R.; Goldstein, A.T. Cervical Cancer Screening: Past, Present, and Future. Sex. Med. Rev. 2020, 8, 28–37.
  27. Xue, P.; Ng, M.T.A.; Qiao, Y. The challenges of colposcopy for cervical cancer screening in LMICs and solutions by artificial intelligence. BMC Med. 2020, 18, 169.
  28. Mehlhorn, G.; Munzenmayer, C.; Benz, M.; Kage, A.; Beckmann, M.W.; Wittenberg, T. Computer-assisted diagnosis in colposcopy: Results of a preliminary experiment? Acta Cytol. 2012, 56, 554–559.
  29. Mehlhorn, G.; Kage, A.; Munzenmayer, C.; Benz, M.; Koch, M.C.; Beckmann, M.W.; Wittenberg, T. Computer-assisted diagnosis (CAD) in colposcopy: Evaluation of a pilot study. Anticancer Res. 2012, 32, 5221–5226.
  30. Bountris, P.; Haritou, M.; Pouliakis, A.; Margari, N.; Kyrgiou, M.; Spathis, A.; Pappas, A.; Panayiotides, I.; Paraskevaidis, E.A.; Karakitsos, P.; et al. An intelligent clinical decision support system for patient-specific predictions to improve cervical intraepithelial neoplasia detection. Biomed. Res. Int. 2014, 2014, 341483.
  31. Sato, M.; Horie, K.; Hara, A.; Miyamoto, Y.; Kurihara, K.; Tomio, K.; Yokota, H. Application of deep learning to the classification of images from colposcopy. Oncol. Lett. 2018, 15, 3518–3523.
  32. Asiedu, M.N.; Simhal, A.; Chaudhary, U.; Mueller, J.L.; Lam, C.T.; Schmitt, J.W.; Venegas, G.; Sapiro, G.; Ramanujam, N. Development of Algorithms for Automated Detection of Cervical Pre-Cancers With a Low-Cost, Point-of-Care, Pocket Colposcope. IEEE Trans. Biomed. Eng. 2019, 66, 2306–2318.
  33. Miyagi, Y.; Takehara, K.; Miyake, T. Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images. Mol. Clin. Oncol. 2019, 11, 583–589.
  34. Miyagi, Y.; Takehara, K.; Nagayasu, Y.; Miyake, T. Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images combined with HPV types. Oncol. Lett. 2020, 19, 1602–1610.
  35. Yuan, C.; Yao, Y.; Cheng, B.; Cheng, Y.; Li, Y.; Li, Y.; Liu, X.; Cheng, X.; Xie, X.; Wu, J.; et al. The application of deep learning based diagnostic system to cervical squamous intraepithelial lesions recognition in colposcopy images. Sci. Rep. 2020, 10, 11639.
  36. Xue, P.; Tang, C.; Li, Q.; Li, Y.; Shen, Y.; Zhao, Y.; Chen, J.; Wu, J.; Li, L.; Wang, W.; et al. Development and validation of an artificial intelligence system for grading colposcopic impressions and guiding biopsies. BMC Med. 2020, 18, 406.
  37. Fu, L.; Xia, W.; Shi, W.; Cao, G.X.; Ruan, Y.T.; Zhao, X.Y.; Liu, M.; Niu, S.M.; Li, F.; Gao, X. Deep learning based cervical screening by the cross-modal integration of colposcopy, cytology, and HPV test. Int. J. Med. Inform. 2022, 159, 104675.
  38. Fang, S.; Yang, J.; Wang, M.; Liu, C.; Liu, S. An Improved Image Classification Method for Cervical Precancerous Lesions Based on ShuffleNet. Comput. Intell. Neurosci. 2022, 2022, 9675628.
  39. Chen, X.; Pu, X.; Chen, Z.; Li, L.; Zhao, K.N.; Liu, H.; Zhu, H. Application of EfficientNet-B0 and GRU-based deep learning on classifying the colposcopy diagnosis of precancerous cervical lesions. Cancer Med. 2023, 12, 8690–8699.
  40. Urushibara, A.; Saida, T.; Mori, K.; Ishiguro, T.; Sakai, M.; Masuoka, S.; Satoh, T.; Masumoto, T. Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning versus radiologists. Eur. J. Radiol. 2021, 135, 109471.
  41. Sompawong, N.; Mopan, J.; Pooprasert, P.; Himakhun, W.; Suwannarurk, K.; Ngamvirojcharoen, J.; Vachiramon, T.; Tantibundhit, C. Automated Pap Smear Cervical Cancer Screening Using Deep Learning. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2019, 2019, 7044–7048.
  42. Sanyal, P.; Barui, S.; Deb, P.; Sharma, H.C. Performance of A Convolutional Neural Network in Screening Liquid Based Cervical Cytology Smears. J. Cytol. 2019, 36, 146–151.
  43. Bao, H.; Bi, H.; Zhang, X.; Zhao, Y.; Dong, Y.; Luo, X.; Zhou, D.; You, Z.; Wu, Y.; Liu, Z.; et al. Artificial intelligence-assisted cytology for detection of cervical intraepithelial neoplasia or invasive cancer: A multicenter, clinical-based, observational study. Gynecol. Oncol. 2020, 159, 171–178.
  44. Bao, H.; Sun, X.; Zhang, Y.; Pang, B.; Li, H.; Zhou, L.; Wu, F.; Cao, D.; Wang, J.; Turic, B.; et al. The artificial intelligence-assisted cytology diagnostic system in large-scale cervical cancer screening: A population-based cohort study of 0.7 million women. Cancer Med. 2020, 9, 6896–6906.
  45. Xue, P.; Xu, H.M.; Tang, H.P.; Wu, W.Q.; Seery, S.; Han, X.; Ye, H.; Jiang, Y.; Qiao, Y.L. Assessing artificial intelligence enabled liquid-based cytology for triaging HPV-positive women: A population-based cross-sectional study. Acta Obstet. Gynecol. Scand. 2023, 102, 1026–1033.
  46. Brenes, D.; Barberan, C.J.; Hunt, B.; Parra, S.G.; Salcedo, M.P.; Possati-Resende, J.C.; Cremer, M.L.; Castle, P.E.; Fregnani, J.; Maza, M.; et al. Multi-task network for automated analysis of high-resolution endomicroscopy images to detect cervical precancer and cancer. Comput. Med. Imaging Graph. 2022, 97, 102052.
  47. Matsuo, K.; Purushotham, S.; Jiang, B.; Mandelbaum, R.S.; Takiuchi, T.; Liu, Y.; Roman, L.D. Survival outcome prediction in cervical cancer: Cox models vs deep-learning model. Am. J. Obstet. Gynecol. 2019, 220, 381.e1–381.e14.
  48. Matsuo, K.; Purushotham, S.; Moeini, A.; Li, G.; Machida, H.; Liu, Y.; Roman, L.D. A pilot study in using deep learning to predict limited life expectancy in women with recurrent cervical cancer. Am. J. Obstet. Gynecol. 2017, 217, 703–705.