Deep Learning in Medical Imaging Segmentation

Deep learning algorithms were applied as a diagnostic tool to Computed Tomography (CT) scan images of the ovarian region. The images went through a series of pre-processing techniques, and the tumour was then segmented using the UNet model. The instances were then classified into two categories: benign and malignant tumours.

ovarian tumours; UNet; convolutional neural networks

1. Introduction

Ovarian cancer is one of the most commonly diagnosed cancers worldwide. Because it usually goes unrecognised until it reaches an advanced stage, ovarian cancer is a leading cause of mortality among women as a gynaecological illness. It ranks fifth among cancer deaths in women, and the risk of being diagnosed with ovarian cancer peaks between the ages of 55 and 64 [1]. Silent symptoms and undetermined causes are major factors behind late diagnosis and ineffective screening methods.

The American Cancer Society estimates that around 19,710 women will be diagnosed with ovarian cancer in the United States in 2023, and that around 13,270 deaths will occur from the disease [2]. In the past few years, significant developments in biomedical imaging have contributed to cancer detection. As interdisciplinary approaches to such problems become more common, medical imaging can be combined with machine learning and deep learning to effectively detect and categorize tumours. Ultrasound and CT scan images contain large amounts of information, making them well suited to deep learning algorithms.
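The overall approach described in this entry (pre-processing, U-Net segmentation, then benign/malignant classification) can be outlined in code. The following is a minimal sketch only: the Hounsfield-unit window, the probability threshold, and the `unet` and `classifier` callables are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of the pipeline described above; the HU window, the 0.5
# threshold, and the `unet`/`classifier` callables are illustrative
# assumptions, not the authors' published implementation.
import numpy as np

def preprocess(ct_slice: np.ndarray) -> np.ndarray:
    """Clip to a soft-tissue window and rescale intensities to [0, 1]."""
    windowed = np.clip(ct_slice, -100, 300)            # assumed HU window
    return (windowed - windowed.min()) / (np.ptp(windowed) + 1e-8)

def segment_tumour(image: np.ndarray, unet) -> np.ndarray:
    """Threshold a U-Net's sigmoid probability map into a binary mask."""
    prob = unet(image)                                  # (H, W) values in [0, 1]
    return prob > 0.5

def classify_tumour(image: np.ndarray, mask: np.ndarray, classifier) -> int:
    """Classify the segmented region: 0 = benign, 1 = malignant."""
    roi = image * mask                                  # keep tumour pixels only
    return classifier(roi)

# Usage: label = classify_tumour(img, segment_tumour(img, unet), classifier)
```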

2. Medical Imaging Classification Using Convolutional Neural Network (CNN)

Jung et al. [3] used ultrasound images of the lower body region of female patients to remove unwanted information from the frame and classify the ovaries into five classes: normal, cystadenoma, mature cystic teratoma, endometrioma, and malignant tumour. They used texture-based analysis for tumour detection and trained a convolutional autoencoder (CNN-CAE). The images before and after the autoencoder were both fed into CNNs such as Inception, ResNet and different variants of DenseNet. Gradient-weighted class activation mapping (Grad-CAM) was used to visualise the results. Notably, the model classified better when unnecessary data were removed by the CNN-CAE. DenseNet121 and DenseNet161 were the best performers among all the algorithms used when accuracy, sensitivity, specificity, and the area under the curve (AUC) were considered as performance metrics.
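As an illustration of this two-stage idea (an autoencoder that cleans the frame, followed by a CNN classifier), a minimal sketch follows; the layer sizes and the DenseNet121 backbone configuration are assumptions for illustration, not the published CNN-CAE architecture.

```python
# Sketch of an autoencoder-then-classifier pipeline in the spirit of [3].
# Layer sizes are illustrative, not the published CNN-CAE architecture.
import torch
import torch.nn as nn
from torchvision import models

class ConvAutoencoder(nn.Module):
    """Reconstructs the frame, learning to suppress irrelevant content."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

cae = ConvAutoencoder()
cnn = models.densenet121(num_classes=5)    # five ovarian classes, as in [3]
x = torch.randn(1, 1, 224, 224)            # dummy grayscale ultrasound frame
cleaned = cae(x)                           # "unnecessary data" suppressed
logits = cnn(cleaned.repeat(1, 3, 1, 1))   # DenseNet expects 3 channels
```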
Wang et al. [4] used pelvic CT scan images to detect and segment ovarian cancer tumours simultaneously, i.e., in a multi-task deep learning model. They proposed YOLO-OCv2, an enhancement of their previously proposed algorithm; mosaic augmentation was also used to enrich the background information of the object. The multitask model YOLO-OCv2 outperformed other algorithms such as Faster-RCNN, SSD and RetinaNet, which were trained on the COCO dataset. Mahmood et al. [5] created a nuclei segmentation model that can segment nuclei at multiple sites in the body. The authors used conditional generative adversarial networks (cGANs), as these can control the GAN training output depending on a class (sketched below). The model was trained on synthetically generated data along with real data to ensure that sufficient input was available; it was trained with data from nine organs and tested on four organs, where it outperformed peers such as FCN, U-Net and Mask R-CNN. Guan et al. [6] used mammographic images to detect breast cancer using CNN models, focusing on affine transformations and synthetic data generation using GANs.
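The generic cGAN pattern referenced above can be sketched minimally: concatenating a label embedding with the noise vector steers what the generator produces. The network shapes and the nine-class embedding below are illustrative assumptions, not the model of [5].

```python
# Generic cGAN generator pattern: a label embedding concatenated with the
# noise vector conditions the output. Shapes and the organ count are
# illustrative assumptions, not the model of [5].
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, n_classes=9, img_pixels=64 * 64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, n_classes)   # label -> dense code
        self.net = nn.Sequential(
            nn.Linear(noise_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = self.embed(labels)                   # condition on the class
        return self.net(torch.cat([z, cond], dim=1))

g = ConditionalGenerator()
z = torch.randn(4, 100)                             # noise batch
labels = torch.randint(0, 9, (4,))                  # e.g. one label per organ
fake = g(z, labels)                                 # four synthetic samples
```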

3. Medical Imaging Classification Using Ensemble Deep Learning

Karimi et al. [7] proposed a Vision Transformer (ViT) algorithm that divides images into patches (see the patch-embedding sketch below) and uses no convolution operations to segment the brain cortical plate and the hippocampus in MRI images of the brain. The results were compared with FCN architectures such as 3D UNet++, Attention UNet, SE-FCN and DSRNet; the proposed network performed segmentation accurately compared with the others, and with a significantly smaller number of labelled training images. Xu et al. [8] worked on histopathological whole-slide images (WSIs) to detect ovarian cancer using CNNs trained on images of multiple resolutions. The authors proposed a modified version of ResNet50, called Heatmap ResNet50, for CNN-based patch selection, and used ResNet18 along with MR-ViT for ViT-based slide classification. Li et al. [9] introduced a variation of U-Net known as CR-UNet to simultaneously segment ovaries and follicles from transvaginal ultrasound (TVUS) images; compared against DeepLabV3+, PSPNet-1, PSPNet-2 and U-Net, the proposed model outperformed them all.

In the work by Goodfellow et al. [10], an adversarial net framework was proposed that loosely resembles a minimax two-player game. Nagarajan et al. [11] and Zhao et al. [12] provided three approaches to classifying ovarian cancer types from CT images. The first approach used a deep convolutional neural network (DCNN) based on AlexNet, which did not provide satisfactory results; the second approach suffered from overfitting. To overcome this, the third approach used a GAN to augment the image samples alongside the DCNN, and it provided the best results of the three in metrics such as precision, recall, F-measure and accuracy.

The research work of Saha et al. [13] included a novel 2D segmentation network called MU-net, a combination of MobileNetV2 and U-Net, used to segment follicles in ovarian ultrasound images on the USOVA3D Training Set 1 dataset. The proposed model was evaluated against several models from previous works in the literature and was shown to be more accurate, with an accuracy of 98.4%. Jin et al. [14] used four U-Net variants (U-Net, U-Net++, U-Net with ResNet, and CE-Net) to perform automatic segmentation. Thangamma et al. [15] applied the k-means and fuzzy c-means algorithms to ultrasound images of ovaries and concluded that the fuzzy c-means algorithm provided better results than k-means. The work by Hema et al. [16] involved FaRe-ConvNN, which applied annotations to an image dataset with three categories: epithelial, germ and stroma cells. To avoid overfitting and other issues due to the small dataset size, the images were augmented using enhancement and transformation techniques such as resizing, masking, segmentation, normalization, vertical and horizontal flips, and rotation. FaRe-ConvNN was used to compensate for manual annotation; after region-based training, a combination of SVC and Gaussian NB classifiers was used to classify the images, which resulted in impressive precision and recall values [17]. In the works carried out by Ashwini et al. [18][19][20], various deep learning models were used to segment CT scan images and classify them using variants of CNN.
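The convolution-free patch-embedding step that ViT-style models such as [7] build on can be sketched briefly: the image is cut into fixed-size patches, each flattened and linearly projected into a token. Patch and embedding sizes here are assumptions for illustration.

```python
# Convolution-free patch embedding, the first step of ViT-style models
# such as [7]: cut the image into fixed-size patches, flatten them, and
# project each to a token. Patch and embedding sizes are assumptions.
import torch

def to_patches(img: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """(C, H, W) image -> (num_patches, C * patch * patch) matrix."""
    c, _, _ = img.shape
    tiles = img.unfold(1, patch, patch).unfold(2, patch, patch)
    return tiles.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)

tokens = to_patches(torch.randn(1, 224, 224))   # 196 patches of length 256
embed = torch.nn.Linear(256, 128)               # linear projection, no conv
embedded = embed(tokens)                        # transformer-ready tokens
```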
In [18][19], Otsu's method was used to segment the tumour, obtaining a Dice score of 0.82 and a Jaccard score of 0.8356. A cGAN was subsequently used for segmentation [20]; in that research, the segmentation and classification of tumours were carried out in a single pipeline, which obtained a Dice score of 0.91 and a Jaccard score of 0.89 (both overlap metrics are defined in the sketch below). Similarly, in the works carried out by Fernandes et al. [21][22], the authors of [21] proposed the segmentation of brain MRI images using entropy-based techniques, while in [22] the detection and classification of brain tumours by parallel processing was carried out using big data tools such as Kafka and PySpark.
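The Dice and Jaccard scores quoted above are standard overlap metrics between a predicted mask and the ground truth; a minimal NumPy version:

```python
# Standard overlap metrics behind the scores quoted above.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A & B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard (IoU) = |A & B| / |A or B| for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / (union + 1e-8)
```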

4. Deep Learning in Medical Imaging Segmentation

Koonce [23] shed light on EfficientNet, which combines the inverted residual blocks of MobileNetV2 with the MnasNet architecture to form a robust model for image recognition. Rehman et al. [24] used a Residual Extended Skip (RES) block and a Wide Context (WC) block in a U-Net architecture to implement their proposed model, BU-Net, for segmenting brain tumour cells in MRI scans. In later work, Rehman et al. [25] proposed a model named BrainSeg-Net to achieve tumour segmentation; it includes a Feature Enhancer (FE) block at every encoder stage to protect critical information that could otherwise be lost during the convolution and transformation processes. Jalali et al. [26] proposed ResBCDU-Net for lung segmentation in CT images, used in applications such as lung cancer detection; to form ResBCDU-Net, a pre-trained ResNet-34 network replaces the encoder of a typical U-Net. The proposed method performed better than models like U-Net, RU-Net, ResNet34-UNet and BCDU-Net on several evaluation metrics.

van Eijnatten et al. [27] and Neelima et al. [28] carried out an extensive review of bone image segmentation, considering the methods used in medical additive manufacturing. According to this review, global thresholding is the most commonly used segmentation method and achieved an accuracy of under 0.6 mm (a minimal thresholding sketch is given below); the authors proposed that other, more advanced thresholding methods may improve the accuracy to 0.38 mm. In the work carried out by Minnema et al. [29], a CNN-based STL method was applied for bone segmentation in CT scan images; it accurately segmented the skull, obtaining a mean Dice value of 0.92 ± 0.4. As per [30], a residual spatial pyramid pooling (RASPP) module was proposed to minimize the loss of location information across modules. Along similar lines, the work proposed in [31] optimized the CNN U-Net model by applying it to a CT dataset generated from MRI images; the results showed that the model performed well on the CT images compared with the MRI images.
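Global thresholding, reported in [27] as the most common bone-segmentation method, reduces in its simplest form to a single intensity cutoff; the 300 HU value below is an assumption for illustration, not a value from the cited works.

```python
# Simplest form of global thresholding for bone in CT: keep every voxel
# above one Hounsfield-unit cutoff. The 300 HU value is an illustrative
# assumption, not a value from the cited works.
import numpy as np

def global_threshold(ct_volume: np.ndarray, hu_cutoff: float = 300.0) -> np.ndarray:
    """Return a boolean bone mask: True where intensity exceeds the cutoff."""
    return ct_volume > hu_cutoff

volume = np.random.normal(0.0, 400.0, size=(64, 64, 64))  # dummy CT volume (HU)
bone_mask = global_threshold(volume)
```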

References

  1. Labidi-Galy, S.I.; Treilleux, I.; Goddard-Leon, S.; Combes, J.-D.; Blay, J.-Y.; Ray-Coquard, I.; Caux, C.; Bendriss-Vermare, N. Plasmacytoid dendritic cells infiltrating ovarian cancer are associated with poor prognosis. Oncoimmunology 2012, 1, 380–382.
  2. Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer statistics, 2023. CA Cancer J. Clin. 2023, 73, 17–48.
  3. Jung, Y.; Kim, T.; Han, M.R.; Kim, S.; Kim, G.; Lee, S.; Choi, Y.J. Ovarian tumour diagnosis using deep convolutional neural networks and a denoising convolutional autoencoder. Sci. Rep. 2022, 12, 17024.
  4. Wang, X.; Li, H.; Zheng, P. Automatic Detection and Segmentation of Ovarian Cancer Using a Multitask Model in Pelvic CT Images. Oxidative Med. Cell. Longev. 2022, 2022, 6009107.
  5. Mahmood, F.; Borders, D.; Chen, R.J.; Mckay, G.N.; Salimian, K.J.; Baras, A.; Durr, N.J. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE Trans. Med. Imaging 2019, 39, 3257–3267.
  6. Guan, S.; Loew, M. Breast cancer detection using synthetic mammograms from generative adversarial networks in convolutional neural networks. J. Med. Imaging 2019, 6, 031411.
  7. Karimi, D.; Dou, H.; Gholipour, A. Medical Image Segmentation Using Transformer Networks. IEEE Access 2022, 10, 29322–29332.
  8. Xu, T.; Farahani, H.; Bashashati, A. Multi-Resolution Vision Transformer for Subtype Classification in Ovarian Cancer Whole-Slide Histopathology Images. 2022. Available online: http://hdl.handle.net/2429/81390 (accessed on 2 February 2023).
  9. Li, H.; Fang, J.; Liu, S.; Liang, X.; Yang, X.; Mai, Z.; Van, M.T.; Wang, T.; Chen, Z.; Ni, D. CR-Unet: A Composite Network for Ovary and Follicle Segmentation in Ultrasound Images. IEEE J. Biomed. Health Inform. 2019, 24, 974–983.
  10. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661v1.
  11. Nagarajan, P.H.; Tajunisha, N. Automatic Classification of Ovarian Cancer Types from CT Images Using Deep Semi-Supervised Generative Learning and Convolutional Neural Network. Rev. D’intelligence Artif. 2021, 35, 273–280.
  12. Zhao, Q.; Lyu, S.; Bai, W.; Cai, L.; Liu, B.; Wu, M.; Sang, X.; Yang, M.; Chen, L. A Multi-Modality Ovarian Tumour Ultrasound Image Dataset for Unsupervised Cross-Domain Semantic Segmentation. arXiv 2022, arXiv:2207.06799.
  13. Saha, D.; Mandal, A.; Ghosh, R. MU Net: Ovarian Follicle Segmentation Using Modified U-Net Architecture. Int. J. Eng. Adv. Technol. 2022, 11, 30–35.
  14. Jin, J.; Zhu, H.; Zhang, J.; Ai, Y.; Zhang, J.; Teng, Y.; Xie, C.; Jin, X. Multiple U-Net-Based Automatic Segmentations and Radiomics Feature Stability on Ultrasound Images for Patients With Ovarian Cancer. Front. Oncol. 2021, 10, 614201.
  15. Thangamma, N.G.; Prasanna, D.S. Analyzing ovarian tumour and cancer cells using image processing algorithms K means & fuzzy C-means. Int. J. Eng. Technol. 2018, 7, 510–512.
  16. Hema, L.K.; Manikandan, R.; Alhomrani, M.; Pradeep, N.; Alamri, A.S.; Sharma, S.; Alhassan, M. Region-Based Segmentation and Classification for Ovarian Cancer Detection Using Convolution Neural Network. Contrast Media Mol. Imaging 2022, 2022, 5968939.
  17. Ahamad, M.; Aktar, S.; Uddin, J.; Rahman, T.; Alyami, S.A.; Al-Ashhab, S.; Akhdar, H.F.; Azad, A.; Moni, M.A. Early-Stage Detection of Ovarian Cancer Based on Clinical Data Using Machine Learning Approaches. J. Pers. Med. 2022, 12, 1211.
  18. Kodipalli, A.; Guha, S.; Dasar, S.; Ismail, T. An inception-ResNet deep learning approach to classify tumours in the ovary as benign and malignant. Expert Syst. 2022, e13215.
  19. Kodipalli, A.; Devi, S.; Dasar, S.; Ismail, T. Segmentation and classification of ovarian cancer based on conditional adversarial image to image translation approach. Expert Syst. 2022, e13193.
  20. Ruchitha, P.J.; Sai, R.Y.; Kodipalli, A.; Martis, R.J.; Dasar, S.; Ismail, T. Comparative analysis of active contour random walker and watershed algorithms in segmentation of ovarian cancer. In Proceedings of the 2022 International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Shivamogga, India, 14–15 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 234–238.
  21. Liaqat, A.; Khan, M.A.; Shah, J.H.; Sharif, M.; Yasmin, M.; Fernandes, S.L. Automated ulcer and bleeding classification from WCE images using multiple features fusion and selection. J. Mech. Med. Biol. 2018, 18, 1850038.
  22. Fernandes, S.L.; Tanik, U.J.; Rajinikanth, V.; Karthik, K.A. A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Comput. Appl. 2020, 32, 15897–15908.
  23. Koonce, B. EfficientNet. In Convolutional Neural Networks with Swift for Tensorflow; Apress: Berkeley, CA, USA, 2021.
  24. Rehman, M.U.; Cho, S.; Kim, J.H.; Chong, K.T. Bu-net: Brain tumour segmentation using modified u-net architecture. Electronics 2020, 9, 2203.
  25. Rehman, M.U.; Cho, S.; Kim, J.; Chong, K.T. Brainseg-net: Brain tumour mr image segmentation via enhanced encoder–decoder network. Diagnostics 2021, 11, 169.
  26. Jalali, Y.; Fateh, M.; Rezvani, M.; Abolghasemi, V.; Anisi, M.H. ResBCDU-Net: A deep learning framework for lung CT image segmentation. Sensors 2021, 21, 268.
  27. van Eijnatten, M.; van Dijk, R.; Dobbe, J.; Streekstra, G.; Koivisto, J.; Wolff, J. CT image segmentation methods for bone used in medical additive manufacturing. Med. Eng. Phys. 2018, 51, 6–16.
  28. Neelima, G.; Chigurukota, D.R.; Maram, B.; Girirajan, B. Optimal DeepMRSeg based tumour segmentation with GAN for brain tumour classification. Biomed. Signal Process. Control 2022, 74, 103537.
  29. Minnema, J.; van Eijnatten, M.; Kouw, W.; Diblen, F.; Mendrik, A.; Wolff, J. CT image segmentation of bone for medical additive manufacturing using a convolutional neural network. Comput. Biol. Med. 2018, 103, 130–139.
  30. Rehman, M.U.; Ryu, J.; Nizami, I.F.; Chong, K.T. RAAGR2-Net: A brain tumour segmentation network using parallel processing of multiple spatial frames. Comput. Biol. Med. 2023, 152, 106426.
  31. Islam, K.T.; Wijewickrema, S.; O’leary, S. A deep learning framework for segmenting brain tumours using MRI and synthetically generated CT images. Sensors 2022, 22, 523.