Gastrointestinal Tract Disorders: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 1 by Muhammad Nouman Noor.

Globally, gastrointestinal (GI) tract diseases are on the rise. If left untreated, people may die from these diseases. Early discovery and categorization of these diseases can reduce the severity of the disease and save lives. Automated procedures are necessary, since manual detection and categorization are laborious, time-consuming, and prone to mistakes.

  • gastrointestinal diseases
  • deep learning
  • endoscopic images

1. Introduction

GI tract diseases are disorders related to the digestive system. Diagnoses of these diseases are highly dependent on medical imaging. The processing of large visual data is difficult for medical professionals and radiologists; this renders it subject to incorrect medical evaluation [1]. The most common diseases that occur in the digestive system are ulcerative colitis, ulcers, esophagitis, and polyps, which can transform into colorectal cancer. These diseases are the key causes of mortality around the globe [2].
According to a survey on colorectal cancer for the year 2019, 26% of men and 11% of women worldwide were diagnosed with this cancer [3]. In 2021, more than 0.3 million cases of colorectal cancer were diagnosed in the US, and the mortality rate rose to 44% [4]. Roughly 0.7 million new cases are reported each year worldwide [5]. Alongside GI malignancies [6][7], ulcer development in the GI tract is also a significant illness. The authors of [8] reported that the highest annual prevalence of ulcers was 141 per 1000 people in Spain, and the lowest was around 57 per 1000 in Sweden.
During a routine endoscopic checkup, many lesions are missed due to factors such as the presence of stool and the organ’s multifaceted topology. Although the bowel is cleansed to improve the detection of cancer or its precursor lesions, the polyp miss rate remains high, at 21.4–26.8% [9]. Moreover, interclass similarity between lesions makes them challenging to identify. A recent procedure called wireless capsule endoscopy (WCE) [10] enables specialists to see inside the intestinal tract, a region that is very difficult to reach with conventional endoscopy. In WCE, the patient swallows a camera-containing capsule that captures many images as it travels through the GI tract. These images are stitched together to form a video, which is then examined by expert gastroenterologists to find abnormalities. This manual strategy requires 2–3 h overall, so researchers are currently developing various computerized techniques [11][12]. Therefore, an automated system is required that not only classifies the diseases but also highlights the diseased area.
Several techniques for identifying colorectal cancer and other diseases from endoscopic images have been proposed in the literature by computer vision (CV) and machine learning (ML) researchers. In [13], the authors built a feature matrix and then classified these features using a support vector machine (SVM) and decision trees; the maximum accuracy achieved with this method was 93.64%. Similarly, in another paper [7], the authors exploited both local and global features and fused them. Deep discriminant features were obtained using an adaptive aggregation feature module, and the best accuracy achieved with these techniques was 96.37%. The challenge has persisted up to this point because many lesions share similar color, shape, and texture. Moreover, many researchers have focused on single-disease detection and binary classification [14][15][16]. Furthermore, the localization of diseases remains a major challenge that requires addressing.
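The local/global feature fusion used in [7], and the feature matrix fed to classifiers in [13], can be pictured as simple vector concatenation. The helper name and the toy feature values below are illustrative assumptions, not taken from the cited papers:

```python
# Hypothetical sketch of feature fusion: each image yields a local and a
# global descriptor; fusion concatenates them into one row of the feature
# matrix that a classifier (SVM, decision tree, ...) would then consume.

def fuse_features(local_vec, global_vec):
    """Concatenate local and global descriptors into one fused vector."""
    return list(local_vec) + list(global_vec)

# Illustrative values only (not from the cited papers).
local_features = [0.12, 0.85, 0.33]   # e.g., texture statistics of a patch
global_features = [0.47, 0.91]        # e.g., color histogram summary
row = fuse_features(local_features, global_features)
print(row)       # fused feature row: [0.12, 0.85, 0.33, 0.47, 0.91]
print(len(row))  # 5
```

Stacking one such row per image produces the feature matrix on which the cited classifiers were trained.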

2. Gastrointestinal Tract Disorders

In the past few years, the detection of diseases using medical imaging has been a hot area of research, especially in the domain of the gastrointestinal tract. The segmentation of polyps, in particular, has been the major focus because of the availability of ground truths. Furthermore, the classification of gastrointestinal diseases has also been an active area of research. The performance of machine learning algorithms reported in the literature has been quite impressive [17][18], but deep learning algorithms surpass the ML approaches and achieve better results [19]. Numerous studies in the literature use ML methods for the detection of GI tract diseases. For example, in [17], the authors developed an ML model based on a longitudinal training cohort of over 20 thousand patients undergoing treatment for peptic ulcers between 2007 and 2016. Their best accuracies were 82.6% and 83.3% using logistic regression and ridge regression, respectively. Sen Wang et al. [18] established an ML architecture for ulcer diagnosis and experimented on a privately developed dataset of 1504 WCE videos. The effectiveness of this technique was evaluated using the ROC curve and the AUC, which reached a peak value of 0.9235. In a different work [13], Jinn-Yi Yeh et al. used color characteristics of a WCE image collection to identify bleeding and ulcers. They used texture information in addition to combining all the image attributes into a single matrix. Several classifiers, including SVMs, neural networks, and decision trees, were applied to this feature matrix. Various performance metrics were examined, and the accuracy ranged from 92.86% to 93.64%. It has been observed that deep learning (DL) models generally perform better in detecting GI tract diseases.
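The AUC values quoted above (e.g., the 0.9235 peak in [18]) measure how well a classifier's scores rank diseased frames above normal ones. A minimal rank-based sketch, with illustrative scores and labels rather than data from the cited study:

```python
# Rank-based AUC: the probability that a randomly chosen positive (ulcer)
# frame receives a higher score than a randomly chosen negative one,
# counting ties as half a win.

def auc(scores, labels):
    """Fraction of positive/negative pairs ranked correctly (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # classifier confidence per frame
labels = [1,   1,   0,   1,   0,   0]    # 1 = ulcer, 0 = normal
print(auc(scores, labels))  # 8 of 9 pairs correctly ordered, ≈ 0.889
```

An AUC of 1.0 means every diseased frame outranks every normal frame; 0.5 corresponds to random ordering.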
The authors of [20] developed a VGGNet model based on a CNN to detect GI ulcers, with a dataset of 854 images, and achieved 86.6% accuracy; however, these tests were performed on conventional endoscopy images. In [21], the authors developed a CNN-based DL model on a dataset of 5360 images containing ulcers and erosions but only 450 normal-class images; the method achieved 90.8% detection accuracy. Sekuboyina and co-authors [22] proposed CNN-based models to detect different forms of disease, such as ulcers, in WCE images. They divided the images into multiple subsections and applied the DL model, attaining 71% sensitivity and 72% specificity. Apart from classification techniques, researchers have also proposed segmentation techniques for detecting the precursor disease of colorectal cancer. A fully convolutional network (FCN) was proposed in [23], which is trained end to end and pixel by pixel, and yields the segmentation of polyps. No extra postprocessing procedures are needed for the proposed model, which is the major contribution of that research. In another paper [24], the authors enhanced the FCN and named the result the U-Net architecture; the U-Net model achieved good localization results. Furthermore, many researchers have tried to modify and enhance the U-Net architecture to achieve better segmentation and localization results [25][26][27], but on medical images these variants are either not evaluated or do not provide better results. By optimizing the features gleaned from two pre-trained models, the authors of [28] established a framework for gastrointestinal illness categorization and achieved 96.43% accuracy. In another framework [29], MobileNet-V2 was used for multiclass classification of gastrointestinal illnesses, and a contrast enhancement approach was suggested.
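Segmentation models such as the FCN [23] and U-Net [24] output a per-pixel mask, and their quality is commonly scored against the ground-truth mask with overlap metrics such as Dice and IoU. A minimal sketch on flattened binary masks; the mask values are illustrative, not from the cited datasets:

```python
# Overlap metrics for binary segmentation masks (1 = polyp pixel).

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    """Intersection over union: |A ∩ B| / |A ∪ B|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 1, 0, 0, 0]  # predicted polyp mask (flattened)
truth = [0, 1, 1, 1, 0, 0]  # ground-truth mask
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
print(iou(pred, truth))   # 2 / 4 = 0.5
```

Both metrics reach 1.0 for a perfect mask; Dice weights the overlap more generously than IoU, which is why papers often report both.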
Based on the literature, it can be said that substantial related work has been performed in the field of GI tract disease detection and classification, and the presented results show reasonable performance in terms of accuracy. However, performance can still be improved. Accuracy is an important performance metric, but for multiclass classification problems it is less significant than other metrics, especially when the dataset is imbalanced. In particular, precision and recall are important performance measures for life-critical applications. Most of the presented works have reasonable accuracy but suffer from lower precision and recall, and require improvement.
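The point about accuracy on imbalanced data can be made concrete: a classifier that misses most rare-class cases can still score high accuracy, while per-class recall exposes the failure. The class names and counts below are illustrative, not taken from any cited dataset:

```python
# Why accuracy misleads on imbalanced data: high overall accuracy can
# coexist with very low recall on the rare, clinically critical class.

def per_class_metrics(y_true, y_pred, cls):
    """Precision and recall for a single class of a multiclass problem."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 90 "normal" frames, 10 "ulcer" frames; the classifier finds only 2 ulcers.
y_true = ["normal"] * 90 + ["ulcer"] * 10
y_pred = ["normal"] * 90 + ["ulcer"] * 2 + ["normal"] * 8

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                                    # 0.92 despite missing 8 of 10 ulcers
print(per_class_metrics(y_true, y_pred, "ulcer"))  # precision 1.0, recall 0.2
```

Here 92% accuracy hides the fact that the classifier detects only 20% of the diseased frames, which is exactly why precision and recall matter for life-critical applications.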

References

  1. Ling, T.; Wu, L.; Fu, Y.; Xu, Q.; An, P.; Zhang, J.; Hu, S.; Chen, Y.; He, X.; Wang, J.; et al. A deep learning-based system for identifying differentiation status and delineating the margins of early gastric cancer in magnifying narrow-band imaging endoscopy. Endoscopy 2021, 53, 469–477.
  2. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: Globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249.
  3. Noor, M.N.; Nazir, M.; Ashraf, I.; Almujally, N.A.; Aslam, M.; Fizzah Jilani, S. GastroNet: A robust attention-based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images. CAAI Trans. Intell. Technol. 2023, 1–14.
  4. Available online: https://www.cancer.net/cancer-types/colorectal-cancer/statistics (accessed on 20 April 2023).
  5. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2015. CA Cancer J. Clin. 2015, 65, 5–29.
  6. Korkmaz, M.F. Artificial Neural Network by Using HOG Features HOG_LDA_ANN. In Proceedings of the 15th IEEE International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 327–332.
  7. Li, S.; Cao, J.; Yao, J.; Zhu, J.; He, X.; Jiang, Q. Adaptive aggregation with self-attention network for gastrointestinal image classification. IET Image Process 2022, 16, 2384–2397.
  8. Azhari, H.; King, J.; Underwood, F.; Coward, S.; Shah, S.; Ho, G.; Chan, C.; Ng, S.; Kaplan, G. The global incidence of peptic ulcer disease at the turn of the 21st century: A study of the organization for economic co-operation and development (oecd). Am. J. Gastroenterol. 2018, 113, S682–S684.
  9. Kim, N.H.; Jung, Y.S.; Jeong, W.S.; Yang, H.J.; Park, S.K.; Choi, K.; Park, D.I. Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies. Intest. Res. 2017, 15, 411.
  10. Iddan, G.; Meron, G.; Glukhovsky, A.; Swain, P. Wireless capsule endoscopy. Nature 2000, 405, 417.
  11. Khan, M.A.; Sarfraz, M.S.; Alhaisoni, M.; Albesher, A.A.; Wang, S. StomachNet: Optimal deep learning features fusion for stomach abnormalities classification. IEEE Access 2020, 8, 197969–197981.
  12. Khan, M.A.; Sharif, M.; Akram, T.; Yasmin, M.; Nayak, R.S. Stomach deformities recognition using rank-based deep features selection. J. Med. Syst. 2019, 43, 329.
  13. Yeh, J.Y.; Wu, T.H.; Tsai, W.J. Bleeding and ulcer detection using wireless capsule endoscopy images. J. Softw. Eng. Appl. 2014, 7, 422.
  14. Dewi, A.K.; Novianty, A.; Purboyo, T.W. Stomach disorder detection through the Iris Image using Backpropagation Neural Network. In Proceedings of the 2016 International Conference on Informatics and Computing (ICIC), Mataram, Indonesia, 28–29 October 2016; pp. 192–197.
  15. Korkmaz, S.A.; Akcicek, A.; Binol, H.; Korkmaz, M.F. Recognition of the stomach cancer images with probabilistic HOG feature vector histograms by using HOG features. In Proceedings of the 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 339–342.
  16. De Groen, P.C. Using artificial intelligence to improve adequacy of inspection in gastrointestinal endoscopy. Tech. Innov. Gastrointest. Endosc. 2020, 22, 71–79.
  17. Wong, G.L.-H.; Ma, A.J.; Deng, H.; Ching, J.Y.-L.; Wong, V.W.-S.; Tse, Y.-K.; Yip, T.C.-F.; Lau, L.H.-S.; Liu, H.H.-W.; Leung, C.-M.; et al. Machine learning model to predict recurrent ulcer bleeding in patients with history of idiopathic gastroduodenal ulcer bleeding. Aliment. Pharmacol. Ther. 2019, 49, 912–918.
  18. Wang, S.; Xing, Y.; Zhang, L.; Gao, H.; Zhang, H. Second glance framework (secG): Enhanced ulcer detection with deep learning on a large wireless capsule endoscopy dataset. In Proceedings of the Fourth International Workshop on Pattern Recognition, Nanjing, China, 31 July 2019; pp. 170–176.
  19. Majid, A.; Khan, M.A.; Yasmin, M.; Rehman, A.; Yousafzai, A.; Tariq, U. Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection. Microsc. Res. Tech. 2020, 83, 562–576.
  20. Sun, J.Y.; Lee, S.W.; Kang, M.C.; Kim, S.W.; Kim, S.Y.; Ko, S.J. A novel gastric ulcer differentiation system using convolutional neural networks. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), Karlstad, Sweden, 18–21 June 2018; pp. 351–356.
  21. Aoki, T.; Yamada, A.; Aoyama, K.; Saito, H.; Tsuboi, A.; Nakada, A.; Niikura, R.; Fujishiro, M.; Oka, S.; Ishihara, S.; et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest. Endosc. 2019, 89, 357–363.
  22. Sekuboyina, A.K.; Devarakonda, S.T.; Seelamantula, C.S. A convolutional neural network approach for abnormality detection in wireless capsule endoscopy. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 1057–1060.
  23. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
  25. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
  26. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual unet. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753.
  27. Guo, Y.B.; Matuszewski, B. Giana polyp segmentation with fully convolutional dilation neural networks. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 632–641.
  28. Alhajlah, M.; Noor, M.N.; Nazir, M.; Mahmood, A.; Ashraf, I.; Karamat, T. Gastrointestinal Diseases Classification Using Deep Transfer Learning and Features Optimization. Comput. Mater. Contin. 2023, 75, 2227–2245.
  29. Nouman, N.M.; Nazir, M.; Khan, S.A.; Song, O.-Y.; Ashraf, I. Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network. Electronics 2023, 12, 1557.