Brain Tumor Segmentation from MRI Images: History

Brain tumor segmentation from magnetic resonance imaging (MRI) scans is critical for the diagnosis, treatment planning, and monitoring of therapeutic outcomes.

Keywords:
  • optimization methods
  • computational approaches
  • brain tumor

1. Introduction

Brain tumor segmentation is an essential task in medical image analysis that aims to pinpoint the regions of the brain affected by a tumor [1]. Precise and efficient segmentation is crucial for diagnosis, therapy planning, and monitoring of disease progression [2]. The complex nature of brain tumors and the variability between patients make manual identification a difficult and time-consuming job [3]. Brain tumors represent a heterogeneous group of intracranial neoplasms that affect both adults and children, posing significant challenges for diagnosis and treatment [4]. Magnetic resonance imaging (MRI) is the modality of choice for non-invasive brain tumor detection and assessment because of its high resolution and excellent soft-tissue contrast [5]. Accurate segmentation of brain tumors from MRI data underpins many clinical tasks, including diagnosis, treatment planning, and patient monitoring [6]. Traditional segmentation methods rely mainly on handcrafted features designed from domain knowledge [7]. Such features are generally sensitive to variations in image intensity and require extensive manual tuning, which limits their robustness and precision [8]. Brain tumor segmentation techniques are crucial for accurate diagnosis, monitoring of tumor progression, and treatment planning [9], and can be broadly divided into three categories: manual, semi-automatic, and fully automatic methods [10]. Manual segmentation is performed by radiologists or other experts, who delineate tumor regions on medical images using graphical tools [11]. This approach can be accurate, but it is slow and labor-intensive [12]. Furthermore, the growing demand for medical imaging and the limited availability of expert radiologists make manual segmentation difficult to scale.
Semi-automatic methods require minimal user intervention, often an initial contour or seed point to guide the segmentation process. These methods rely on algorithms such as region growing, which iteratively absorbs neighboring pixels with similar intensity values [13]; level-set methods, which evolve a contour according to geometric and image-based properties [14]; and active contours or snakes, which deform a curve or surface to minimize an energy function derived from image features [15]. Semi-automatic methods are more efficient than manual delineation; however, they still require user interaction, which can be time-consuming and may introduce variability. Fully automatic methods, such as machine learning (ML) and deep learning (DL) approaches, segment tumors without user interaction and seek to improve the effectiveness, consistency, and scalability of the segmentation process [16]. Handcrafted feature-based methods extract engineered features from images and train ML classifiers for tumor segmentation, while DL techniques such as convolutional neural networks (CNNs) automatically learn hierarchical representations of the data [17]. Fully automatic methods have demonstrated the potential for high precision and accuracy; however, they may require large, annotated datasets for training and can be computationally expensive. Recently, CNNs have emerged as a powerful tool for computer vision tasks such as segmentation [9]. Compared with traditional methods, CNNs have shown superior performance, especially in medical image segmentation, by learning features directly from the data [10].
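The region-growing step described above can be sketched in a few lines of NumPy. The toy image, seed location, and intensity tolerance below are illustrative assumptions, not clinical settings:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    # Grow a region from `seed`, absorbing 4-connected neighbours whose
    # intensity lies within `tol` of the seed intensity.
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Toy 7x7 "slice" with a bright 3x3 blob standing in for a lesion.
img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0
seg = region_grow(img, (3, 3), tol=0.2)
print(seg.sum())  # 9 (the blob is recovered exactly)
```

In practice the seed is placed by the user inside the tumor, which is precisely the interaction that makes these methods semi-automatic.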

2. Handcrafted Features-Based Methods

Handcrafted feature-based techniques have been used extensively in medical image analysis, including for brain tumor segmentation. These techniques extract engineered features that describe image properties and feed them to ML algorithms for segmentation [18]. Handcrafted features fall into three categories: intensity-based, texture-based, and shape-based. Intensity-based features capture the local intensity distribution within the image and include statistical measurements such as the mean, median, standard deviation, and histogram-based metrics [19]. They are useful for differentiating between normal and abnormal tissue regions, which have distinct intensity profiles. Texture-based features describe the spatial arrangement of intensities and reflect local patterns in the image. Common examples include the gray-level co-occurrence matrix (GLCM), which captures the frequency of specific pixel-value combinations at certain spatial relationships [20]; local binary patterns (LBP), which encode the relationship between a pixel and its neighbors [21]; and Gabor filters, which analyze frequency and orientation information [22]. Texture features can be valuable for characterizing the heterogeneity and complexity of tumor regions. Shape-based features capture the geometric properties of the tumor region, providing information about its size, shape, and boundary irregularities; examples include area, perimeter, compactness, and various moments [23]. These features can help differentiate tumors from surrounding tissues based on their distinct morphological characteristics.
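A minimal sketch of intensity- and shape-based feature extraction, assuming a 2D slice and a binary tumor mask (the boundary-pixel perimeter estimate is one of several common conventions):

```python
import numpy as np

def intensity_features(image, mask):
    # Statistical intensity features over the masked region.
    vals = image[mask]
    return {"mean": float(vals.mean()),
            "median": float(np.median(vals)),
            "std": float(vals.std())}

def shape_features(mask):
    # Area, perimeter (count of boundary pixels), and compactness
    # 4*pi*area / perimeter^2, which is 1.0 for an ideal disc.
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # Interior pixels have all four 4-neighbours inside the mask.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    compactness = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "compactness": compactness}

img = np.zeros((7, 7)); img[2:5, 2:5] = 1.0  # bright 3x3 "lesion"
mask = img > 0.5
print(intensity_features(img, mask)["mean"], shape_features(mask)["area"])
```

Texture descriptors such as GLCM and LBP follow the same pattern but summarize pixel pairs or neighborhoods rather than single pixels.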
ML algorithms such as random forests (RF), support vector machines (SVM), and k-nearest neighbors (k-NN) are then trained on the extracted handcrafted features to perform segmentation [24]. Despite their success in various medical image segmentation tasks, these methods often require extensive manual tuning and are sensitive to variations in image intensity, limiting their robustness and precision [25]. Additionally, the reliance on manually engineered features can lead to a lack of adaptability to diverse imaging conditions and tumor appearances. There is therefore a need for more robust and versatile approaches to tumor segmentation.
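As a stand-in for the RF/SVM/k-NN classifiers above, a minimal k-NN voxel classifier over handcrafted feature vectors can be written in pure NumPy (the two-dimensional features and labels below are illustrative assumptions):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    # Label each query vector by majority vote among its k nearest
    # training vectors under Euclidean distance.
    preds = []
    for q in query_feats:
        d = np.linalg.norm(train_feats - q, axis=1)
        nearest = train_labels[np.argsort(d)[:k]]
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

# Toy features per voxel: [local mean intensity, local std];
# label 1 = tumor, 0 = normal tissue.
X = np.array([[0.9, 0.30], [0.8, 0.25], [0.2, 0.05], [0.1, 0.10]])
y = np.array([1, 1, 0, 0])
print(knn_predict(X, y, np.array([[0.85, 0.28]]), k=3))  # [1]
```

Real pipelines classify every voxel (or patch) this way and then post-process the label map, which is exactly where the sensitivity to intensity variation noted above enters.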

3. Convolutional Neural Network-Based Methods

CNNs have revolutionized image recognition and segmentation by automatically learning features from the data, making them more robust to variations and alleviating the need for manual feature engineering [17]. CNNs are composed of multiple layers, including convolutional, pooling, and fully connected layers, that apply nonlinear transformations to learn hierarchical representations of the input data [26]. This allows CNNs to capture complex patterns and structures within images, leading to improved performance in various image analysis tasks.
Several CNN architectures have been proposed for brain tumor segmentation to address the challenges posed by the heterogeneity and complexity of brain tumors. Among the most prominent are U-Net, V-Net, and DeepMedic [27][28][29]. U-Net is a symmetric encoder-decoder architecture whose skip connections merge low-level and high-level features, enabling accurate localization of tumor boundaries. V-Net extends the U-Net architecture to 3D medical images and incorporates a volumetric loss function for improved segmentation performance. DeepMedic employs a multi-scale approach, processing image patches at different resolutions in parallel to capture both global and local contextual information. The study in [30] segmented brain tumors from MRI scans using a 3D nnU-Net model enhanced with domain knowledge from a senior radiologist; the approach improved the model's performance, achieved high Dice scores on the validation and test sets, and was validated on hold-out testing data, including pediatric and sub-Saharan African patient populations, demonstrating strong generalization.
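The Dice score reported in these studies is the standard overlap metric for segmentation evaluation. A minimal sketch (the mask shapes and offsets below are illustrative):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice similarity coefficient between binary masks:
    # 2*|P intersect T| / (|P| + |T|); `eps` guards against empty masks.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

p = np.zeros((8, 8), dtype=bool); p[2:6, 2:6] = True  # predicted 4x4 mask
t = np.zeros((8, 8), dtype=bool); t[3:7, 3:7] = True  # ground truth, offset
print(dice_score(p, t))  # overlap is 3x3 = 9, so 2*9/(16+16) = 0.5625
```

BraTS-style evaluation typically reports this per tumor subregion (whole tumor, tumor core, enhancing tumor), averaged over cases.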
These CNN architectures have demonstrated superior performance in brain tumor segmentation compared to traditional methods by learning context-aware features that capture both local and global information [31]. Additionally, CNN-based methods are more robust to intensity variations and can adapt to diverse imaging conditions and tumor appearances, making them a promising approach for this task.
Despite the success, CNNs usually need large, annotated datasets for training, which can be challenging to obtain in the medical domain due to the limited availability of expert annotations and the time-consuming nature of manual segmentation [32]. Furthermore, CNNs can be computationally expensive, particularly for large 3D medical images, and may lack interpretability due to their black-box nature.
To overcome these limitations, researchers have explored various strategies, such as transfer learning, data augmentation, and incorporating domain knowledge through the integration of handcrafted features. These approaches aim to leverage the strengths of both handcrafted feature-based methods and CNNs to improve the robustness, precision, and interpretability of brain tumor segmentation techniques.

4. Hybrid Approaches in Medical Imaging

Hybrid approaches aim to combine the strengths of handcrafted features and DL techniques to improve performance on medical image segmentation tasks, taking advantage of both domain knowledge and automated feature learning [33]. Several hybrid approaches have been proposed for various medical imaging applications, including lung nodule detection, breast cancer segmentation, and retinal vessel segmentation [34][35][36].
These hybrid approaches often involve integrating handcrafted features at different levels of the CNN architecture, such as input channels, feature maps, or decision levels [33]. Several strategies have been proposed for incorporating handcrafted features into DL models. One approach is to concatenate handcrafted features with deep features before the classification layer, which allows the model to leverage both feature types during the decision-making process [36]. Another approach involves injecting handcrafted features into intermediate layers of the CNN, enabling the network to learn more complex, higher-level representations that integrate domain knowledge [37]. Multi-stream architectures, which process handcrafted and deep features in parallel, have also been proposed to encourage complementary learning and robust feature representations [38].
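The first strategy, concatenating handcrafted and deep features before the classification layer, can be sketched as follows; the per-block z-scoring and the feature dimensions (128 deep, 10 handcrafted) are illustrative assumptions, not values from the cited works:

```python
import numpy as np

def fuse_features(deep_feats, handcrafted_feats):
    # Late fusion: z-score each feature block so neither scale dominates,
    # then concatenate along the feature axis before the final classifier.
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(deep_feats), zscore(handcrafted_feats)],
                          axis=1)

deep = np.random.rand(4, 128)  # e.g. pooled CNN embeddings per patch
hand = np.random.rand(4, 10)   # e.g. GLCM/shape descriptors per patch
fused = fuse_features(deep, hand)
print(fused.shape)  # (4, 138)
```

Injecting features into intermediate layers or running multi-stream branches follows the same idea but moves the concatenation earlier in the network.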
These hybrid approaches have demonstrated improved performance compared to handcrafted feature-based or CNN-based methods alone in various medical imaging tasks. By combining the advantages of both techniques, hybrid models can capitalize on the domain knowledge provided by handcrafted features while benefiting from the automatic feature learning capabilities of CNNs.
Table 1 summarizes a comparison of brain tumor segmentation techniques, including handcrafted feature methods, CNN-based, and hybrid approaches. The evaluation of the relevant literature emphasizes the limitations of handcrafted feature-based methods and the potential of CNN-based methods for tumor segmentation. However, the integration of handcrafted features and CNNs has not been thoroughly investigated for brain tumor segmentation. A hybrid method that combines the strengths of each approach could lead to improved performance in this task, offering a promising avenue for future research and development in the field of medical image analysis.
Table 1. Comparison of brain tumor segmentation techniques.

5. Data Augmentation Techniques

The literature includes an extensive body of work on data augmentation for brain MRI. Techniques such as translation, noise addition, rotation, and shearing have been applied to increase dataset size and improve tumor segmentation performance. Khan et al. [39] applied noise addition and shearing to enlarge the dataset and improved both classification accuracy and tumor segmentation. Similarly, Dufumier et al. [40] applied rotation, random cropping, noise addition, translation, and blurring to increase dataset size and performance in age prediction and sex classification. Other studies used elastic deformation, rotation, and scaling to improve tumor segmentation accuracy [41]. These techniques are popular because of their simplicity and effectiveness. Beyond geometric and intensity transforms, researchers have also generated synthetic images for specific tasks; the most common generation method is mix-up, in which patches from two random images are combined to form a new image. Because these studies used different datasets, numbers of images, and network architectures, each reported performance only for its own selection of techniques. The techniques that recur most often across the literature are summarized in Table 2. Nalepa et al. [42] provide a comprehensive survey of data augmentation for further details.
Table 2. Data augmentation techniques.
Technique                Description
Rotation                 Randomly rotate the MRI scans by ±15 degrees
Scaling                  Randomly scale the MRI scans by a factor between 0.8 and 1.2
Horizontal Flip          Randomly flip the MRI scans horizontally with a probability of 0.5
Elastic Deformation      Apply random elastic deformation to the MRI scans with a Gaussian filter of σ = 4.0
Intensity Shift          Randomly shift the intensity of the MRI scans by a factor between −0.1 and 0.1
Contrast Normalization   Normalize the contrast of the MRI scans by histogram equalization
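A few of the Table 2 transforms can be sketched directly in NumPy. The flip probability and intensity-shift range mirror the table; rotation, scaling, and elastic deformation are omitted here because they require an interpolator (e.g., scipy.ndimage.rotate/zoom), and the slice values are assumed normalized to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(scan):
    # Random horizontal flip plus intensity shift on a 2D slice.
    out = scan.copy()
    if rng.random() < 0.5:               # horizontal flip, probability 0.5
        out = out[:, ::-1]
    out = out + rng.uniform(-0.1, 0.1)   # intensity shift in [-0.1, 0.1]
    return np.clip(out, 0.0, 1.0)

def hist_equalize(scan, bins=256):
    # Contrast normalization by histogram equalization of a [0, 1] slice:
    # map each intensity to its cumulative histogram value.
    hist, edges = np.histogram(scan.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(scan.ravel(), edges[:-1], cdf).reshape(scan.shape)

slice_ = rng.random((8, 8))
print(augment(slice_).shape)  # (8, 8)
```

Applying such transforms on the fly during training effectively multiplies the dataset size without storing extra images.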

This entry is adapted from the peer-reviewed paper 10.3390/diagnostics13162650

References

  1. Hossain, T.; Shishir, F.S.; Ashraf, M.; Al Nasim, M.A.; Shah, F.M. Brain tumor detection using convolutional neural network. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–6.
  2. Fernandes, S.L.; Tanik, U.J.; Rajinikanth, V.; Karthik, K.A. A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Comput. Appl. 2020, 32, 15897–15908.
  3. Magadza, T.; Viriri, S. Deep learning for brain tumor segmentation: A survey of state-of-the-art. J. Imaging 2021, 7, 19.
  4. Ostrom, Q.T.; Cioffi, G.; Waite, K.; Kruchko, C.; Barnholtz-Sloan, J.S. CBTRUS statistical report: Primary brain and other central nervous system tumors diagnosed in the United States in 2014–2018. Neuro-Oncol. 2021, 23, 1–105.
  5. Augustine, R.; Al Mamun, A.; Hasan, A.; Salam, S.A.; Chandrasekaran, R.; Ahmed, R.; Thakor, A.S. Imaging cancer cells with nanostructures: Prospects of nanotechnology driven non-invasive cancer diagnosis. Adv. Colloid Interface Sci. 2021, 294, 102457.
  6. Ullah, F.; Salam, A.; Abrar, M.; Amin, F. Brain Tumor Segmentation Using a Patch-Based Convolutional Neural Network: A Big Data Analysis Approach. Mathematics 2023, 11, 1635.
  7. Xie, X.; Niu, J.; Liu, X.; Chen, Z.; Tang, S.; Yu, S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med. Image Anal. 2021, 69, 101985.
  8. Ayadi, W.; Charfi, I.; Elhamzi, W.; Atri, M. Brain tumor classification based on hybrid approach. Vis. Comput. 2022, 38, 107–117.
  9. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, K.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer: Cham, Switzerland, 2020; Volume 1, pp. 128–144.
  10. Du, G.; Cao, X.; Liang, J.; Chen, X.; Zhan, Y. Medical image segmentation based on u-net: A review. J. Imaging Sci. Technol. 2020, 64, 020508–020512.
  11. Li, W.; Li, Y.; Qin, W.; Liang, X.; Xu, J.; Xiong, J.; Xie, Y. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant. Imaging Med. Surg. 2020, 10, 12–23.
  12. Gordillo, N.; Montseny, E.; Sobrevilla, P. State of the art survey on MRI brain tumor segmentation. Magn. Reson. Imaging 2013, 31, 1426–1438.
  13. Mesanovic, N.; Grgic, M.; Huseinagic, H.; Males, M.; Skejic, E.; Smajlovic, M. Automatic CT image segmentation of the lungs with region growing algorithm. In Proceedings of the 18th International Conference on Systems, Signals and Image Processing—IWSSIP, Sarajevo, Bosnia and Herzegovina, 16–18 June 2011; pp. 395–400.
  14. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49.
  15. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
  16. Bakas, S.; Reyes, M.; Jakab, A.; Bauer, S.; Rempfler, M.; Crimi, A.; Shinohara, R.T.; Berger, C.; Ha, S.M.; Rozycki, M.; et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv 2018, arXiv:1811.02629.
  17. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  18. Zacharaki, E.I.; Wang, S.; Chawla, S.; Soo Yoo, D.; Wolf, R.; Melhem, E.R.; Davatzikos, C. Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme. Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 2009, 62, 1609–1618.
  19. Chaddad, A. Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models. J. Biomed. Imaging 2015, 5, 868031.
  20. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 6, 610–621.
  21. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
  22. Daugman, J.G. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Am. A 1985, 2, 1160–1169.
  23. Asodekar, B.H.; Gore, S.A.; Thakare, A. Brain tumor analysis based on shape features of MRI using machine learning. In Proceedings of the 2019 5th International Conference on Computing, Communication, Control and Automation (ICCUBEA), Pune, India, 19–21 September 2019; pp. 1–5.
  24. Tandel, G.S.; Biswas, M.; Kakde, O.G.; Tiwari, A.; Suri, H.S.; Turk, M.; Laird, J.R.; Asare, C.K.; Ankrah, A.A.; Khanna, N.N.; et al. A Review on a Deep Learning Perspective in Brain Cancer Classification. Cancers 2019, 11, 111.
  25. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024.
  26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  27. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Proceedings, Part III; Springer: Cham, Switzerland, 2015; pp. 234–241.
  28. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
  29. Kamnitsas, K.; Ledig, C.; Newcombe, V.F.; Simpson, J.P.; Kane, A.D.; Menon, D.K.; Rueckert, D.; Glocker, B. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 2017, 36, 61–78.
  30. Kotowski, K.; Adamski, S.; Machura, B.; Zarudzki, L.; Nalepa, J. Infusing Domain Knowledge into nnU-Nets for Segmenting Brain Tumors in MRI. In Proceedings of the International MICCAI Brainlesion Workshop, Singapore, 18–22 September 2022; pp. 186–194.
  31. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  32. Shen, H.; Wang, R.; Zhang, J.; McKenna, S.J. Boundary-aware fully convolutional network for brain tumor segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, 11–13 September 2017, Proceedings, Part II; Springer: Cham, Switzerland, 2017; pp. 433–441.
  33. Hu, H.; Mu, Q.; Bao, Z.; Chen, Y.; Liu, Y.; Chen, J.; Wang, K.; Wang, Z.; Nam, Y.; Jiang, B.; et al. Mutational landscape of secondary glioblastoma guides MET-targeted trial in brain tumor. Cell 2018, 175, 1665–1678.
  34. Raza, A.; Ayub, H.; Khan, J.A.; Ahmad, I.; Salama, S.A.; Daradkeh, Y.I.; Javeed, D.; Ur Rehman, A.; Hamam, H. A Hybrid Deep Learning-Based Approach for Brain Tumor Classification. Electronics 2022, 11, 1146.
  35. Shah, P.M.; Ullah, F.; Shah, D.; Gani, A.; Maple, C.; Wang, Y.; Abrar, M.; Islam, S.U. Deep GRU-CNN model for COVID-19 detection from chest X-rays data. IEEE Access 2021, 10, 35094–35105.
  36. Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605.
  37. Song, B.; Wen, P.; Ahfock, T.; Li, Y. Numeric investigation of brain tumor influence on the current distributions during transcranial direct current stimulation. IEEE Trans. Biomed. Eng. 2015, 63, 176–187.
  38. Saba, T.; Mohamed, A.S.; El-Affendi, M.; Amin, J.; Sharif, M. Brain tumor detection using fusion of hand crafted and deep learning features. Cogn. Syst. Res. 2020, 59, 221–230.
  39. Khan, A.R.; Khan, S.; Harouni, M.; Abbasi, R.; Iqbal, S.; Mehmood, Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc. Res. Tech. 2021, 7, 1389–1399.
  40. Dufumier, B.; Gori, P.; Battaglia, L.; Victor, J.; Grigis, A.; Duchesnay, E. Benchmarking CNN on 3D anatomical brain MRI: Architectures, data augmentation and deep ensemble learning. arXiv 2021, arXiv:2106.01132.
  41. Isensee, F.; Jäger, P.F.; Full, P.M.; Vollmuth, V.; Maier, K.H. nnU-Net for brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4 October 2020; Springer: Cham, Switzerland, 2021; pp. 118–132.
  42. Nalepa, J.; Marcinkiewicz, M.; Kawulok, M. Data augmentation for brain-tumor segmentation: A review. Front. Comput. Neurosci. 2019, 13, 83–102.