Brain tumors can cause serious health complications and lead to death if not detected accurately. Therefore, early-stage detection of brain tumors and accurate classification of tumor types play a major role in diagnosis. Timely detection, diagnosis, and classification of brain tumors have been instrumental in effective treatment planning for the recovery and life extension of the patient. Brain tumor detection is a procedure to differentiate abnormal tissues (for example, active tumor tissue and edema) from normal tissues (for example, gray matter and white matter).
1. Introduction
Recently, computer vision-based medical imaging techniques have helped medical experts achieve better diagnosis and treatment
[1]. A number of medical imaging modalities, for example X-ray, computed tomography (CT), MRI, and ultrasound, have shown remarkable achievements in the health care system
[2]. These medical imaging techniques have been utilized for brain imaging analysis, diagnosis, and treatment. The detection and classification of brain tumors have emerged as a hot research topic for researchers, radiologists, and medical experts
[3].
Brain tumors arise from unusual growth and unrestrained cell division in the brain. They can deteriorate a patient's health and lead to death if not detected precisely
[4]. Generally, brain tumors are grouped into two varieties: malignant and benign. A malignant tumor is regarded as cancerous, whereas a benign tumor is considered noncancerous. The objective of tumor detection is to identify the position and extent of the tumor area. This detection task can be accomplished by comparing abnormal areas with normal tissue
[5]. Accurate analysis of brain tumor images can determine a patient's condition. MRI is an extensively used imaging technique for the study of brain tumors. Brain MR images provide a clear representation of the brain structure and abnormalities
[6]. For brain tumor detection, two important imaging modalities, CT and MRI, are commonly used. However, compared to CT scans, MRI is preferred because of its non-invasive nature and its high-resolution images of brain tumors. Usually, brain MRI is acquired in four modes: T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-weighted FLAIR. Each mode illustrates different features of a brain tumor
[7]. In the literature, various automated approaches have been introduced for brain tumor classification utilizing brain MRI. Over the years, support vector machine (SVM) and neural network (NN) based approaches have been extensively utilized for brain tumor classification
[8][9][10][11]. Konur et al.
[12] proposed an SVM-based approach in which the SVM model is first trained with known samples, and the trained model is then used to process other brain tumor images. Xiao et al.
[13] developed a segmentation technique by merging the Fuzzy C-Means and SVM algorithms.
Earlier, machine learning (ML) based tumor detection approaches were considered state-of-the-art techniques. More recently, these ML-based approaches have been unable to provide highly accurate results because of inefficient prediction models and the intricate features of medical data. Therefore, most researchers have tried to find alternative learning-based approaches to improve classification accuracy
[14][15]. Alternatively, deep learning (DL) represents a remarkable development in the machine learning domain, since DL architectures can efficiently learn predictive models from large datasets. Unlike SVM and KNN, deep learning models are able to represent complex relationships without using a large number of nodes. Therefore, these approaches have obtained excellent performance in medical imaging applications
[16]. Recently, many researchers have developed computer-aided frameworks for medical image classification tasks that produce outstanding results. Yu et al.
[17] introduced a computer-aided electroencephalogram (EEG) classification framework named CABLES that classifies six different EEG domains under a unified sequential framework. The authors conducted comprehensive experiments on seven different datasets using a 10-fold cross-validation scheme. The proposed EEG signal classification framework showed significant improvements over domain-specific approaches in terms of classification accuracy. Sadiq et al.
[18] developed an innovative pre-trained CNN-based automated brain-computer interface (BCI) framework for EEG signal identification. The framework investigates the consequences of various limiting factors. The proposed approach was assessed using three public datasets, and the experimental results demonstrated the robustness of the proposed BCI framework in identifying EEG signals. Huang et al.
[19] further developed a deep learning-based EEG segregation pipeline to overcome the limitations of previous BCI frameworks. In this approach, the authors merged multiscale principal component analysis, a Hilbert transform-based signal resolution approach, and pre-trained CNNs for automatic feature estimation and segregation. The proposed BCI framework was evaluated using three binary-class datasets. The approach was found to be reliable in identifying EEG signals and showed outstanding performance in terms of classification accuracy.
Traditional diagnostic approaches such as histopathology detect the disease through microscopic investigation of a biopsy mounted on a glass slide. These traditional approaches are performed manually on tissue samples by pathologists, and they are time consuming and difficult. On the other hand, a transfer learning-based DCNN framework reduces the workload of pathologists and supports them in concentrating on vulnerable cases. Moreover, the use of transfer learning can help to process brain MRI images faster and more accurately. Further, automatic detection and classification lead to a quicker and less labor-intensive diagnosis procedure.
The DCNN architectures have shown outstanding performance in detecting and classifying brain tumors because of their ability to generalize different levels of features. Also, pre-processing steps such as data augmentation and stain normalization used with DCNNs are beneficial for obtaining robust and accurate performance. Therefore, researchers are motivated to use DCNN architectures to detect and classify brain tumors. However, the accuracy of DCNN architectures depends on the data samples and the training process, since these architectures require large, precise datasets for better output. To overcome this limitation, transfer learning can be employed for improved performance. Transfer learning has two main aspects: fine-tuning the convolutional network and freezing the layers of the convolutional network, as illustrated in the sketch below. Instead of building a CNN model from scratch, fine-tuning a pre-trained model is sufficient for the classification task.
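To make these two aspects concrete, the following minimal sketch (assuming TensorFlow/Keras; the backbone choice, number of unfrozen layers, and learning rate are illustrative assumptions rather than values reported in this entry) shows how a pre-trained convolutional base can be frozen entirely or partially unfrozen for fine-tuning.

```python
# Minimal sketch of the two transfer-learning aspects: freezing vs. fine-tuning.
# Assumes TensorFlow/Keras; VGG16 is used only as an example backbone.
import tensorflow as tf

# Load an ImageNet pre-trained convolutional base without its dense top.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)

# Aspect 1: freeze the convolutional layers so only a new classifier head is trained.
base.trainable = False

# Aspect 2: fine-tune - unfreeze the last few layers and retrain them together
# with the head, typically with a small learning rate so the transferred
# weights are adjusted gently rather than overwritten.
def enable_fine_tuning(conv_base, num_trainable_layers=4):
    conv_base.trainable = True
    for layer in conv_base.layers[:-num_trainable_layers]:
        layer.trainable = False

enable_fine_tuning(base)
fine_tune_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
```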
Therefore, researchers use a pre-trained DCNN architecture called VGGNet, based on transfer learning, to classify brain tumors such as meningioma, glioma, and pituitary tumors. The pre-trained architecture has already been trained on a large dataset and transfers its learned parameters to the target dataset. Therefore, the pre-trained model consumes less time, since it does not require a large dataset to obtain results. The early layers of the VGGNet model extract low-level features, for example colors and edges, whereas the deeper layers extract high-level features, for example objects and contours. The objective is to transfer the knowledge learned by VGGNet to the target task of classifying brain tumor MRI images. The main reason for using VGGNet over other pre-trained networks is its use of small receptive fields rather than massive ones. Due to its smaller convolutional filters, VGGNet contains a significant number of weight layers, which in turn provides better performance. Further, this approach uses a Global Average Pooling (GAP) layer at the output to avoid overfitting and vanishing gradient problems. The GAP layer transforms the multidimensional feature map into a one-dimensional feature vector
[20][21][22]. Since the GAP layer does not require parameter optimization, overfitting can be avoided at this layer.
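As a concrete illustration of this design, the sketch below (again assuming TensorFlow/Keras; the input size, class ordering, and optimizer settings are illustrative assumptions) attaches a GAP layer and a softmax output for the three tumor classes on top of a frozen VGG16 base.

```python
# Sketch: VGG16 transfer learning with a Global Average Pooling (GAP) head
# for three-class brain tumor MRI classification (meningioma, glioma, pituitary).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3              # meningioma, glioma, pituitary (illustrative order)
INPUT_SHAPE = (224, 224, 3)  # VGG16's default ImageNet input size

# Convolutional base pre-trained on ImageNet; the original dense top is discarded.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE
)
base.trainable = False       # keep the transferred weights frozen initially

model = models.Sequential([
    base,
    # GAP collapses each 7x7 feature map to a single value, producing a
    # 512-dimensional vector with no trainable parameters.
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```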
2. Classification of Tumors in Brain Magnetic Resonance Images
The brain tumor detection and classification problem has evolved into a hot research topic over the past two decades because of its high medical relevance. Timely detection, diagnosis, and classification of brain tumors have been instrumental in effective treatment planning for the recovery and life extension of the patient. Brain tumor detection is a procedure to differentiate abnormal tissues (for example, active tumor tissue and edema) from normal tissues (for example, gray matter and white matter). Generally, the brain tumor detection process is grouped into three types: manual detection, semi-automatic detection, and fully automatic detection. Currently, medical experts give more importance to fully automatic detection methods, in which the tumor location and area are detected automatically, without human intervention, by setting appropriate parameters.
Deep learning models extend conventional neural networks by adding more hidden layers between the input layer and the output layer of the network in order to establish more complex and nonlinear relations. A number of deep learning models, for instance the convolutional neural network (CNN), deep neural network (DNN), and recurrent neural network (RNN), are extensively employed for medical imaging applications. Here, existing deep learning-based work on the brain tumor classification task is summarized.
Havaei et al.
[23] introduced an automated DNN-based brain tumor segmentation technique. This method exploits both local and global contextual features of the brain at the same time. The fully connected (FC) layer used as the last layer of the network improves the network speed by a factor of 40. The proposed model is applied specifically to the segmentation of glioblastoma tumors pictured in brain MRI. Rehman et al.
[24] studied three different CNN-based architectures, namely AlexNet, GoogLeNet, and VGGNet, for the classification of brain tumors. This framework attained an accuracy of 98.69% utilizing the VGG16 network.
Instead of extracting features only from the final layers of the pre-trained network, Noreen et al.
[25] introduced an efficient framework in which features are extracted from multiple levels and then concatenated to diagnose the brain tumor. Initially, the features are extracted from DenseNet201 and then concatenated. Finally, the concatenated features are provided as input to a softmax classifier for classification. Similar steps are applied to the pre-trained Inception-v3 model. The performance of both models is assessed and validated using a three-class brain tumor dataset. For brain tumor classification, the proposed framework achieved an accuracy of 99.34% with the Inception-v3 model and a similarly high accuracy with the DenseNet201 model.
Li et al.
[26] developed a multi-CNN structure by combining multimodal information fusion and CNNs to detect brain tumors. The authors extended 2D-CNNs to multimodal 3D-CNNs in order to capture complementary information among the modalities. Also, an improved weighted loss function is introduced to minimize the interference of the non-focal area, which in turn increases the detection accuracy. Sajjad et al.
[27] developed a CNN-based multi-grade classification system that enables clear segmentation of the tumor region. In this system, a deep learning technique is first utilized to segment tumor areas from brain MRI. Subsequently, the proposed model is trained effectively to avoid the data-deficiency problem when dealing with MR images. Finally, the trained network is fine-tuned using augmented data to classify brain tumors. This method achieves an accuracy of 90.67% for classifying tumors into different grades.
Anaraki et al.
[1] proposed a tumor classification approach that takes advantage of both CNNs and the genetic algorithm (GA). Instead of adopting a pre-defined deep neural network model, the proposed approach uses a GA to evolve the CNN architecture. The proposed approach attained an accuracy of 90.9% for classifying three glioma grades. Zhou et al.
[28] presented a universal methodology based on DenseNet and an RNN to detect numerous types of brain tumors in brain MRI. The proposed methodology can successfully handle variations in tumor location, shape, and size. In this method, DenseNet is first applied to extract features from the 2D slices. Then, the RNN is used to classify the obtained sequential features. The effectiveness of this approach was evaluated on public and proprietary datasets, attaining an accuracy of 92.13%.
Afshar et al.
[29] adopted a learning-based architecture called capsule networks (CapsNets) for the detection of brain tumors. It is well known that CNNs need enormous amounts of data for training. The introduction of CapsNets can overcome this training complexity, as they require less data for training. This approach incorporates CapsNets for brain tumor classification and achieves better performance than CNNs. Deepak et al.
[30] recommended a transfer learning-based tumor classification system utilizing a pre-trained GoogLeNet to categorize three prominent brain tumors: glioma, meningioma, and pituitary tumors. This method effectively classifies the tumors into these categories with a classification accuracy of 98%. Frid-Adar et al.
[31] introduced a generative adversarial network for the task of medical image classification to address the challenge of the limited availability of medical image datasets. The proposed approach generates synthetic medical images, which are utilized to improve classification performance. The approach was validated on a limited liver-lesion CT image dataset and showed superior classification performance, with 85.7% sensitivity and 92.4% specificity. Abdullah et al.
[32] introduced a robust, transfer learning-enabled lightweight network for brain tumor segmentation based on the pre-trained VGG network. The efficacy of the proposed approach was evaluated on the BRATS2015 dataset. The framework attained a global accuracy of 98.11%.
This entry is adapted from the peer-reviewed paper 10.3390/pr11030679