Huang, S.;  Hsu, W.;  Hsu, R.;  Liu, D. Semantic Segmentation of Medical Images. Encyclopedia. Available online: https://encyclopedia.pub/entry/37586 (accessed on 18 June 2024).
Semantic Segmentation of Medical Images

There have been major developments in deep learning for computer vision since the 2010s. Deep learning has contributed a wealth of advances to medical image processing, and semantic segmentation is a salient technique in this field. Lesion detection is one of the primary objectives of medical imaging, as the size and location of lesions are often directly associated with a patient’s diagnosis, treatment, and prognosis. Since the development of computer vision algorithms, researchers have begun to utilize these algorithms in the field of medical imaging.

Keywords: semantic segmentation; medical image processing; deep learning; fully-convolutional network

1. Introduction

Medical imaging has long functioned as an assistive means for diagnosis and treatment. Advancements in technology have increased the types and qualities of medical images. Lesion detection is one of the primary objectives of medical imaging, as the size and location of lesions are often directly associated with a patient’s diagnosis, treatment, and prognosis. Previously, the size and location of lesions were determined by radiologists through medical image examination. At best, the instruments and software used were only able to enhance the image quality by adjusting the brightness and contrast features to facilitate better observation. Since the development of computer vision algorithms, however, researchers have begun to utilize these algorithms in the field of medical imaging [1].
As the core of deep learning, convolutional neural networks (CNNs) (Figure 1) have a considerably long development history [2]. However, due to hardware limitations, breakthroughs in the effectiveness of CNNs were only achieved in the 2010s [3]. Since then, deep learning models for specific targets have gradually been proposed, from classification, to object detection, to object segmentation. Consequently, advancements such as the detection of lung diseases through X-ray [4], the detection of lesion locations [5], and segmentation have been applied in medical imaging. Owing to improvements in model performance, deep learning models have exhibited diagnostic capabilities approximating those of clinical physicians when inferring on specific datasets. However, traditional CNN models face limitations in segmentation. One concerns the features the models extract: with a smaller kernel, the extracted features become more local to the original image, and global information, such as location, may be lost; with a larger kernel, the context of the features may decrease [6]. Another limitation is data availability in biomedical segmentation tasks: because of privacy constraints in the medical profession, medical datasets are often small compared with those in other fields [6]. New model architectures have been developed to solve the first problem; data augmentation is a common solution to the second.
Figure 1. CNN kernel. Before input, the image is transformed into a numeric signal, as shown on the left (0/1 values, for example). The arrays in the middle are the convolutional kernels; a kernel must not be larger than the input. The neural network applies several kernels with different weight compositions to the input to obtain feature maps, which are computed as dot products, as shown on the right. From these kernels, the network extracts features that allow it to accomplish its task.
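The dot-product operation described in the caption can be sketched in a few lines of NumPy. The input and kernel values below are purely illustrative (in a real network, the kernel weights are learned during training):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image and take the dot product at each
    position ("valid" mode: no padding, so the feature map shrinks)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)  # dot product with the kernel
    return out

# A 0/1 input "signal" and a 3x3 vertical-edge-like kernel (illustrative).
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
])
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (2, 2): the feature map is smaller than the input
```

This also makes the kernel-size trade-off discussed above concrete: a larger kernel sees a wider patch per output value but yields a smaller, coarser feature map.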

2. Clinical Datasets and Relevant Studies

2.1. Lung Lesions

Several chest lesions have been studied, including lung lesions such as pneumothorax, lung nodules, and pneumonia; cardiac lesions such as ventricular hypertrophy; and bony lesions such as rib fractures. Except for rib fractures, these lesions present obvious targets for segmentation, and open datasets [7][8][9] are available for constructing pretrained models, which makes training for these tasks more likely to succeed.
Table 1 shows research on lung lesion segmentation. Singadkar et al. [10] applied an FCN-based neural network, combined with residual blocks in the decoder section and long skip connections between the encoder and decoder, for lung nodule segmentation in CT scan images, successfully reaching an average Dice score of 0.95 and a Jaccard index of 0.887. Abedalla et al. [11] trained multiple U-Net models with different backbones and used a method similar to ensemble learning, in which the outputs of four models are first summed according to fixed weights and then thresholded to accomplish pneumothorax segmentation during the inference phase; the weights and thresholds are adjusted manually. The network achieved a DSC of 0.86 on the 2019 Pneumothorax Challenge dataset.
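The Dice score and Jaccard index (IoU) reported throughout these studies both measure the overlap between a predicted mask and the ground-truth mask. A minimal sketch, assuming binary masks and using a small epsilon to avoid division by zero on empty masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (IoU) for binary masks: |A∩B| / |A∪B|."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return (inter + eps) / (union + eps)

# Toy 3x3 masks: prediction misses one target pixel.
pred   = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
target = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
print(round(dice_score(pred, target), 3))     # 0.857 (= 2*3 / (3+4))
print(round(jaccard_index(pred, target), 3))  # 0.75  (= 3 / 4)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is worth remembering when comparing numbers across the studies in Table 1.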
Table 1. Research on lung lesion segmentation.
Authors (Year) Method Medical Image Performance Notes
Wang et al. [12] (2017) CF-CNN Computed tomography DC: 0.82 Central-focused CNN: extract features from 3D and 2D simultaneously
Wang et al. [13] (2017) MV-CNN Computed tomography DC: 0.77 Multi-scaled CNN
Maqsood et al. [14] (2021) DA-Net Computed tomography DC: 0.81; IoU: 0.76 U-Net-based, with atrous convolution and dense connection
Meraj et al. [15] (2020) CNN Computed tomography Accuracy: 0.83 For nodule detection using PCA and other machine learning techniques
Singadkar et al. [10] (2020) DDRN Computed tomography DSC: 0.95 ResNet-based, with deep deconvolution (residual block at the decoder)
Zhao et al. [16] (2018) 3D U-Net Computed tomography   3D U-Net combined with GAN for segmentation; another CNN for classifying nodules
Usman et al. [17] (2020) 3D U-Net Computed tomography DSC: 0.88 (consensus) 3D voxel feature, ResUNet, with semi-automated ROI selection
Keetha et al. [18] (2020) U-Det Computed tomography DSC: 0.83 U-Net cooperates with a bidirectional feature network (Bi-FPN)
Ozdemir et al. [19] (2020) 3D Vnet Computed tomography Sensitivity: 0.97 Combined segmentation and classification for lung nodule diagnosis
Hesamian et al. [20] (2019) FCN Computed tomography DSC: 0.81 Atrous convolution and residual block in FCN combined with conditioned random field (CRF)

2.2. Brain Lesions

Brain lesion detection includes brain tumors, strokes, traumatic brain injuries, and brain metastases. BraTS [21] (Figure 2a) is a brain tumor dataset whose labels cover not only the location and size but also the cell type of tumors, primarily low-grade and high-grade gliomas. Magnetic resonance imaging (MRI) scans are divided into pretreatment and posttreatment images. In addition, each patient is scanned with instruments of varying magnetic field intensities (1.5 and 3 T) and protocols (2D and 3D). There are four major types of MR images: T1, T1c, T2, and FLAIR. The tumor edge is difficult to identify in segmentation tasks because of infiltration, particularly for high-grade gliomas, and because of the varying degrees of contrast enhancement across different MRI scans.
Figure 2. Clinical datasets for training segmentation models. (a) BraTS is a dataset for glioblastoma multiforme. The figures show part of a case series with different MRI weights (from the left: T1, T1ce, T2, FLAIR) and annotations (rightmost: white—perifocal edema; yellow—tumor; red—necrosis). (b) LiTS is a dataset for liver and liver tumor segmentation. The figures show part of a case series with annotations (upper: red—liver; white—tumor) and CT images (lower). Reference: (a) from BraTS (Menze et al. [21][22][23]); (b) from LiTS (Bilic et al. [24]). The figures were generated with Python 3.6 from the datasets.
Table 2 shows research on segmenting brain lesions. Isensee et al. [25] modified the U-Net architecture by using batch normalization and short skip connections, such as the residual block in ResNet, instead of a traditional convolutional block. Finally, they summed the outputs of each layer in the ascending part before entering the output part. The resulting Dice coefficient was superior to that of the traditional U-Net architecture. In summary, most of the leading models on the BraTS dataset over the years have been based on the U-Net architecture; some modify the convolutional blocks, while others adjust the ascending part.
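The multi-level summation used by Isensee et al. can be illustrated with a toy sketch. This is not their implementation: it assumes 2D feature maps, nearest-neighbour upsampling, and no learned convolutions, purely to show how coarse decoder outputs are brought to the finest resolution and summed before the output part:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a 2D feature map."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def fuse_decoder_outputs(levels):
    """Upsample each decoder level to the next-finer resolution and add it
    to that level's output, accumulating a single map at full resolution.
    `levels` is ordered coarse -> fine; each map is half the size of the next."""
    fused = levels[0]
    for level in levels[1:]:
        fused = upsample2x(fused) + level
    return fused

# Toy decoder outputs at 2x2, 4x4, and 8x8 resolution (random stand-ins
# for the real multi-channel feature maps of a U-Net decoder).
rng = np.random.default_rng(0)
levels = [rng.standard_normal((s, s)) for s in (2, 4, 8)]
fused = fuse_decoder_outputs(levels)
print(fused.shape)  # (8, 8): one map at the finest resolution
```

The design intuition is that coarse levels contribute global context while fine levels contribute edge detail, so the summed map benefits from both; this is also the idea behind deep supervision in later U-Net variants.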
The intensity of stroke lesions in CT images can change over time after examination, especially for infarction strokes [26]. In addition to CT, MRI datasets such as ISLES [27] have been established in recent years. Models trained on these datasets not only determine the location and region of stroke lesions but also help physicians assess the severity of brain damage and may predict the prognosis and potential for recovery. Zhang et al. [28] developed a multi-plane neural network structure to segment stroke lesions from diffusion-weighted magnetic resonance images. In contrast to the direct use of 3D neural networks, they applied three neural networks corresponding to images on three different planes—axial, coronal, and sagittal—and then integrated them into a multi-angle neural network, called a multi-plane fusion network. This network offers both segmentation and detection functions and can retain the original information from the input. Based on images from three different planes, the edges of lesions can be identified more accurately. The authors achieved a Dice coefficient of 62.2% and a sensitivity of 71.7% on the ISLES dataset.
Table 2. Research on brain lesion segmentation.
Authors (Year) Method Medical Image Performance Notes
Brain tumor        
Havaei et al. [29] (2016) Deep CNN Magnetic resonance images DC 1: 0.88 Cascade architecture using pre-output concatenation
Pereira et al. [30] (2016) CNN-based Magnetic resonance images DC: 0.88 Patch extraction from an image before entering the CNN
Isensee et al. [25] (2018) 3D U-Net Magnetic resonance images DC: 0.85 Modified from U-Net; summation for multi-level features
Xu et al. [31] (2020) U-Net Magnetic resonance images DC: 0.87 Attention-U-Net
McKinley et al. [32] (2018) deepSCAN Magnetic resonance images Mean DC: ET 2: 0.7; WT 3: 0.86; TC 4: 0.71 Bottleneck CNN design; dilated convolution
Stroke        
Wang et al. [33] (2016) Deep Lesion Symmetry ConvNet Magnetic resonance images Mean DSC 5: 0.63 Combined unilateral (local) and bilateral (global) voxel descriptor
Monteiro et al. [34] (2020) DeepMedic Computed tomography Differs according to size Three parallel 3D CNNs for different resolutions
Zhang et al. [28] (2020) U-Net Magnetic resonance images DSC: 0.62; IoU 6: 0.45 FPN for extraction first
1 Dice coefficient, 2 enhanced tumor, 3 whole tumor, 4 tumor core, 5 Dice similarity coefficient, 6 intersection over union.

2.3. Abdomen

Abdominal Organ Segmentation

The solid organs in the abdomen, such as the liver, kidneys, spleen, and pancreas, as well as lower abdominal organs such as the prostate, have more prominent edges and more distinct intensity values than the background, which is usually fat or peritoneum. They are therefore obvious targets for segmentation, and convincing results could already be achieved with traditional computer vision techniques [35][36]. Although the urinary bladder is a hollow organ, its prominent edges mean that segmentation can still be accomplished with trained models, particularly in the case of a distended bladder [37]. There is a wealth of datasets focusing on abdominal organ segmentation [38][39][40], and in recent years, the application of deep learning to these segmentation tasks has been considerably robust [41][42].

References

  1. Cootes, T.F.; Taylor, C.J. Statistical Models of Appearance for Medical Image Analysis and Computer Vision. Med. Imaging 2001, 4322, 236–249.
  2. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324.
  3. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
  4. Hwang, S.; Kim, H.-E.; Jeong, J.; Kim, H.-J. A novel approach for tuberculosis screening based on deep convolutional neural networks. Med. Imaging 2016, 9785, 750–757.
  5. Dou, Q.; Chen, H.; Yu, L.; Qin, J.; Heng, P.A. Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection. IEEE Trans. Biomed. Eng. 2017, 64, 1558–1567.
  6. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
  7. LUNA16 Dataset|Papers With Code. Available online: https://paperswithcode.com/dataset/luna16 (accessed on 26 December 2021).
  8. Pedrosa, J.; Aresta, G.; Ferreira, C.; Rodrigues, M.; Leitão, P.; Carvalho, A.S.; Rebelo, J.; Negrão, E.; Ramos, I.; Cunha, A.; et al. LNDb: A Lung Nodule Database on Computed Tomography. arXiv 2019.
  9. SIIM-ACR Pneumothorax Segmentation|Kaggle. Available online: https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation (accessed on 1 January 2022).
  10. Singadkar, G.; Mahajan, A.; Thakur, M.; Talbar, S. Deep Deconvolutional Residual Network Based Automatic Lung Nodule Segmentation. J. Digit. Imaging 2020, 33, 678–684.
  11. Abedalla, A.; Abdullah, M.; Al-Ayyoub, M.; Benkhelifa, E. Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures. PeerJ Comput. Sci. 2021, 7, e607.
  12. Wang, S.; Zhou, M.; Liu, Z.; Liu, Z.; Gu, D.; Zang, Y.; Dong, D.; Gevaert, O.; Tian, J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med. Image Anal. 2017, 40, 172–183.
  13. Wang, S.; Zhou, M.; Gevaert, O.; Tang, Z.; Dong, D.; Liu, Z.; Jie, T. A multi-view deep convolutional neural networks for lung nodule segmentation. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 1752–1755.
  14. Maqsood, M.; Yasmin, S.; Mehmood, I.; Bukhari, M.; Kim, M. An Efficient DA-Net Architecture for Lung Nodule Segmentation. Mathematics 2021, 9, 1457.
  15. Meraj, T.; Rauf, H.T.; Zahoor, S.; Hassan, A.; Lali, M.I.; Ali, L.; Bukhari, S.A.C.; Shoaib, U. Lung nodules detection using semantic segmentation and classification with optimal features. Neural Comput. Appl. 2021, 33, 10737–10750.
  16. Zhao, C.; Han, J.; Jia, Y.; Gou, F. Lung nodule detection via 3D U-net and contextual convolutional neural network. In Proceedings of the 2018 International Conference on Networking and Network Applications (NaNA), Xi’an, China, 12–15 October 2018; pp. 356–361.
  17. Usman, M.; Lee, B.-D.; Byon, S.S.; Kim, S.-H.; Lee, B.-i.; Sho, I.G. Volumetric lung nodule segmentation using adaptive ROI with multi-view residual learning. Sci. Rep. 2020, 10, 12839.
  18. Keetha, N.V.; Babu P, S.A.; Annavarapu, C.S.R. U-Det: A Modified U-Net architecture with bidirectional feature network for lung nodule segmentation. arXiv 2020.
  19. Ozdemir, O.; Russell, R.L.; Berlin, A.A. A 3D Probabilistic Deep Learning System for Detection and Diagnosis of Lung Cancer Using Low-Dose CT Scans. arXiv 2020.
  20. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P.J. Atrous Convolution for Binary Semantic Segmentation of Lung Nodule. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1015–1019.
  21. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
  22. Baid, U.; Ghodasara, S.; Mohan, S.; Bilello, M.; Calabrese, E.; Colak, E.; Farahani, K.; Kalpathy-Cramer, J.; Kitamura, F.C.; Pati, S.; et al. The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification. arXiv 2021, arXiv:2107.02314.
  23. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Nat. Sci. Data 2017, 4, 170117.
  24. Bilic, P.; Christ, P.F.; Vorontsov, E.; Chlebus, G.; Chen, H.; Dou, Q.; Fu, C.-W.; Han, X.; Heng, P.-A.; Hesser, J.; et al. The Liver Tumor Segmentation Benchmark (LiTS). arXiv 2019.
  25. Isensee, F.; Kickingereder, P.; Wick, W.; Bendszus, M.; Maier-Hein, K.H. Brain tumor segmentation and radiomics survival prediction: Contribution to the BRATS 2017 challenge. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10670, pp. 287–297.
  26. Norton, G.A.; Kishore, P.R.; Lin, J. CT contrast enhancement in cerebral infarction. AJR Am. J. Roentgenol. 1978, 131, 881–885.
  27. Liew, S.-L.; Anglin, J.M.; Banks, N.W.; Sondag, M.; Ito, K.L.; Kim, H.; Chan, J.; Ito, J.; Jung, C.; Khoshab, N.; et al. A large, source dataset of stroke anatomical brain images and manual lesion segmentations. Sci. Data 2018, 5, 180011.
  28. Zhang, L.; Song, R.; Wang, Y.; Zhu, C.; Liu, J.; Yang, J.; Liu, L. Ischemic Stroke Lesion Segmentation Using Multi-Plane Information Fusion. IEEE Access 2020, 8, 45715–45725.
  29. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain Tumor Segmentation with Deep Neural Networks. arXiv 2015.
  30. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251.
  31. Xu, J.; Li, M.; Zhu, Z. Automatic data augmentation for 3d medical image segmentation. In Medical Image Computing and Computer-Assisted Intervention 2020; Lecture Notes in Computer Science; Springer Science and Business Media Deutschland GmbH: Berlin/Heidelberg, Germany, 2020; Volume 12261, pp. 378–387.
  32. McKinley, R.; Jungo, A.; Wiest, R.; Reyes, M. Pooling-Free Fully Convolutional Networks with Dense Skip Connections for Semantic Segmentation, with Application to Brain Tumor Segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2017; Crimi, A., Bakas, S., Kuijf, H., Menze, B., Reyes, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10670.
  33. Wang, Y.; Katsaggelos, A.K.; Wang, X.; Parrish, T.B. A deep symmetry convnet for stroke lesion segmentation. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 111–115.
  34. Monteiro, M.; Newcombe, V.F.J.; Mathieu, F.; Adatia, K.; Kamnitsas, K.; Ferrante, E.; Das, T.; Whitehouse, D.; Rueckert, D.; Menon, D.K.; et al. Multiclass semantic segmentation and quantification of traumatic brain injury lesions on head CT using deep learning: An algorithm development and multicentre validation study. Lancet Digit. Health 2020, 2, e314–e322.
  35. Okada, T.; Linguraru, M.G.; Hori, M.; Suzuki, Y.; Summers, R.M.; Tomiyama, N.; Sato, Y. Multi-Organ Segmentation in Abdominal CT Images. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 3986–3989.
  36. Jawarneh, M.; Rajeswari, M.; Ramachandram, D.; Lutfi, I. Segmentation of abdominal volume dataset slices guided by single annotated image. In Proceedings of the 2009 2nd International Conference on Biomedical Engineering and Informatics, Tianjin, China, 17–19 October 2009; pp. 4–8.
  37. Cha, K.H.; Hadjiiski, L.; Samala, R.K.; Chan, H.P.; Caoili, E.M.; Cohan, R.H. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets. Med. Phys. 2016, 43, 1882–1896.
  38. MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge. Available online: https://paperswithcode.com/dataset/miccai-2015-multi-atlas-abdomen-labeling (accessed on 4 November 2022).
  39. Rister, B.; Yi, D.; Shivakumar, K.; Nobashi, T.; Rubin, D.L. Ct-ORG, a new dataset for multiple organ segmentation in computed tomography. Sci. Data 2020, 7, 381.
  40. Ma, J.; Zhang, Y.; Gu, S.; Zhu, C.; Ge, C.; Zhang, Y.; An, X.; Wang, C.; Wang, Q.; Liu, X.; et al. AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem. IEEE Trans. Pattern Anal. Mach. Intell. 2021.
  41. Kim, H.; Jung, J.; Kim, J.; Cho, B.; Kwak, J.; Jang, J.Y.; Lee, S.-W.; Lee, J.-G.; Yoon, S.M. Abdominal multi-organ auto-segmentation using 3D-patch-based deep convolutional neural network. Sci. Rep. 2020, 10, 6204.
  42. Kart, T.; Fischer, M.; Küstner, T.; Hepp, T.; Bamberg, F.; Winzeck, S.; Glocker, B.; Rueckert, D.; Gatidis, S. Deep Learning-Based Automated Abdominal Organ Segmentation in the UK Biobank and German National Cohort Magnetic Resonance Imaging Studies. Investig. Radiol. 2021, 56, 401–408.
Update Date: 02 Dec 2022