Deep Learning in Neuro-Oncology Data Analysis: History

Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. 

  • machine learning
  • neuro-oncology
  • radiology
  • deep learning

1. Introduction

Although Machine Learning (ML) is entering a phase of maturity, its applications in the medical domain at the point of care remain few and tentative at best. This paradox has been attributed to several factors. One of them is the lack of experimental reproducibility, a requirement on which ML models in health have been reported to fare badly in comparison to other application areas [1]. One main reason for this is the mismatch between a data-centered (and often data-hungry) approach and the scarcity of publicly available and properly curated medical databases, combined with a nascent but still insufficient data culture at the clinical level [2]. Another factor has to do with regulatory issues of ML (and Artificial Intelligence in general) in terms of both lack of maturity and geographical heterogeneity [3].
The field of oncological radiology (and neuro-oncology in particular) is arguably at the forefront of the practical use of ML in medicine [4], now boosted by the success of Deep-Learning (DL) methods for the analysis of medical images [5][6]. Unfortunately, though, imaging does not escape the challenges and limitations summarized in the previous paragraph. Central among them is what has been called the “long-tail effect” [7]: pathologies for which only small and scattered datasets exist, because clinical data management strategies (technically complex and expensive) are scarce at levels beyond the local one (regional, national, international).

2. ML-Based Analytical Pipelines and Their Use in Neuro-Oncology

Ultimately, the whole point of using ML methods for data-based problems in the area of neuro-oncology is to provide radiologists with evidence-based medical tools at the point of care that can assist them in decision-making processes, especially with ambiguous or borderline cases. This is why it makes sense to embed these methods in Clinical Decision Support Systems (CDSS). A thorough and systematic review of intelligent systems-based CDSS for brain tumor analysis based on magnetic resonance data (spectra or images) is presented in this same Special Issue of Cancers [8]. It reports their increasing use over the last decade, addressing problems that include general ones such as tumor detection, type classification, and grading, but also more specific ones such as alerting physicians to changes in treatment plans.
At the core of ML-based CDSS, one needs not just ML methods, models, and techniques but, more formally, ML pipelines. An ML pipeline goes beyond the use of a collection of methods to encompass all stages of the data mining process, including data pre-processing (data cleaning and data transformations, potentially including feature selection and extraction, but also other aspects of data curation such as data extraction and standardization, missing-data imputation, and clinical validation of the data [9]) and model post-processing, potentially including evaluation, implementation, and the definition of interpretability and explainability processes [10]. Pipelines can also accommodate specific needs, such as those related to the analysis of “big data”, with their corresponding challenges of standardization and scalability.
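To make these stages concrete, the following is a minimal, hypothetical sketch of such a pipeline in Python with scikit-learn; the data, the number of features, and the choice of imputer, selector, and classifier are all illustrative placeholders rather than a prescription.

```python
# A minimal, hypothetical ML pipeline sketch: imputation, scaling, feature
# selection, and classification chained together; data are synthetic placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # placeholder: 100 cases, 50 image-derived features
X[rng.random(X.shape) < 0.05] = np.nan  # simulate missing values needing imputation
y = rng.integers(0, 2, size=100)        # placeholder binary labels (e.g., tumor type)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # missing-data imputation
    ("scale", StandardScaler()),                    # feature standardization
    ("select", SelectKBest(f_classif, k=10)),       # feature selection
    ("clf", RandomForestClassifier(random_state=0)),
])

# Cross-validation evaluates the *whole* pipeline, so imputation, scaling, and
# selection are re-fit inside each training fold, avoiding data leakage.
scores = cross_val_score(pipeline, X, y, cv=5)
print(scores.mean())
```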
An example of an ML pipeline for the specific problem of differentiating glioblastomas from single brain metastases based on MR spectroscopy (MRS) data can be found in [11]. In this same issue of Cancers, Pitarch and co-workers [12] describe an ML pipeline for glioma grading from MRI data with a focus on the trustworthiness of the predictions generated by the ML models. This entails robustly quantifying the uncertainty of the models’ predictions, as well as implementing procedures to avoid data leakage and thus the risk of unreliable conclusions.
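As one illustration of uncertainty quantification (not necessarily the procedure used in [12]), the following sketch applies Monte Carlo dropout, a common technique in which dropout is kept active at test time and the spread over repeated stochastic forward passes is read as predictive uncertainty; the model and input are toy placeholders.

```python
# A minimal Monte Carlo dropout sketch for predictive uncertainty; this is
# illustrative only and not necessarily the procedure of the cited study.
import torch
import torch.nn as nn

model = nn.Sequential(  # toy classifier over 50 placeholder features
    nn.Linear(50, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keeps dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    mean = probs.mean(dim=0)  # averaged class probabilities
    std = probs.std(dim=0)    # spread across passes ~ model uncertainty
    return mean, std

x = torch.randn(1, 50)
mean, std = mc_dropout_predict(model, x)
print(mean, std)  # a high std flags predictions that deserve expert review
```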
Radiomics is an image transformation approach that aims to extract either hard-coded statistical or textural features based on expert domain knowledge, or feature representations learned from data, often using DL methods. The hard-coded features may include first-order statistics, size- and shape-based features, image intensity histogram descriptors, and image textural information. The use of this method for the pre-processing of brain tumor images prior to the use of ML has recently been exhaustively reviewed in [13]. From that review, it is clear that the predominant problem under analysis is diagnosis, with only a limited number of studies addressing prognosis, survival, and progression. The types of brain tumors under investigation are dominated by the most frequent classes.
The use of radiomics as a data transformation strategy in pre-processing is facilitated by the existence of off-the-shelf software such as the open-source PyRadiomics package [14].
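As a brief illustration, a feature extraction with PyRadiomics might look as follows; the image and mask paths are hypothetical placeholders for an MRI volume and its tumor segmentation, and the enabled feature classes simply mirror the categories mentioned above.

```python
# A short sketch of radiomics feature extraction with the open-source
# PyRadiomics package [14]; 'image.nii.gz' and 'mask.nii.gz' are placeholder
# paths for a brain MRI volume and its tumor segmentation mask.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # first-order statistics
extractor.enableFeatureClassByName("shape")       # size- and shape-based features
extractor.enableFeatureClassByName("glcm")        # textural features

features = extractor.execute("image.nii.gz", "mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics"):  # skip metadata entries
        print(name, value)
```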
Source extraction methods follow a very different analytical rationale for data dimensionality reduction as a pre-processing step. They do not achieve it through plain feature transformation, as in radiomics. Instead, they aim to find the underlying and unobserved sources of the observed radiological data. In doing so, they achieve dimensionality reduction as a byproduct of a process that may provide insight into the generation of the images themselves.
Independent Component Analysis (ICA) [15] has a long history in medical applications, most notably for the analysis of electroencephalographic (EEG) signals. Source extraction is natural in this context as a tool for spatially locating the sources of the EEG from electric potentials measured on the scalp surface. ICA assumes that the observed data can be expressed as a linear combination of sources that are estimated to be statistically independent, or as independent as possible. This technique has mostly been applied to brain tumor segmentation, but some recent studies have extended its possibilities to dynamic settings, such as that in [16], where dynamic contrast-enhanced MRI is analyzed using temporal ICA (tICA), and in [17], where probabilistic ICA is used for the analysis of dynamic susceptibility contrast (DSC) perfusion MRI.
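A minimal sketch of this linear source-extraction model, using scikit-learn's FastICA on synthetic signals (placeholders for real radiological data), is shown below.

```python
# A minimal sketch of linear source extraction with FastICA: observed signals
# X are modeled as linear mixtures of statistically independent sources S,
# X = S @ A.T; the signals here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # hypothetical source 1
s2 = np.sign(np.sin(3 * t))             # hypothetical source 2
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.3, 2.0]])  # mixing matrix
X = S @ A.T                             # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # estimated independent sources
A_est = ica.mixing_           # estimated mixing matrix
print(S_est.shape, A_est.shape)
```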
Non-negative Matrix Factorization (NMF) [18], on the other hand, was originally devised for the extraction of sources from images; it assumes data non-negativity but not statistical independence. Data are still approximated by linear combinations of factors. Although NMF and variants of this family of methods have been used extensively for the pre-processing and analysis of MRS and MRS imaging (MRSI) signals [19][20], they have only scarcely been used for the pre-processing of MRI.
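For comparison, here is an analogous sketch of NMF with scikit-learn on synthetic non-negative data standing in for magnitude spectra; the number of components and the initialization are illustrative choices.

```python
# A minimal NMF sketch: X ≈ W @ H with all factors non-negative; the "spectra"
# are synthetic placeholders, not real MRS data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((50, 200))  # placeholder: 50 spectra x 200 spectral points

nmf = NMF(n_components=3, init="nndsvd", max_iter=500, random_state=0)
W = nmf.fit_transform(X)   # per-spectrum source contributions (50 x 3)
H = nmf.components_        # extracted source "spectra" (3 x 200)
print(np.linalg.norm(X - W @ H))  # reconstruction error
```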

3. Deep Learning in Neuro-Oncology Data Analysis

3.1. Overview of the Main DL Methods of Interest

Recent advances in the DL field have brought about new possibilities in medical imaging analysis and diagnosis. One of its arguably most successful models is the Convolutional Neural Network (CNN), a widely used type of DL algorithm well known for its ability to capture spatial correlations within image pixel data hierarchically. CNNs have shown promise in medical imaging tasks [21][22][23], enabling improved tumor detection, classification, and prognosis assessment. The input data of a CNN are represented as a tensor with dimensions in the format (channels, depth, height, width). Notably, the “depth” dimension is specific to 3D images and not applicable to 2D data, while “height” and “width” correspond to the image’s spatial dimensions. In practical terms, color images have three channels, representing the Red, Green, and Blue (RGB) components, while gray-scale images consist of a single channel. The most characteristic operation in a CNN is the convolution, which gives its name to the convolutional layers. These layers capture spatial correlations by sliding a set of filters or kernels across all areas of the input image data and computing weighted sums, generating a feature map as output. This feature map contains the essential characteristics extracted by the current layer and serves as the input for subsequent layers of processing.
CNNs often consist of multiple layers that work together to learn hierarchical, high-level image features. These layers progressively extract more abstract and complex information from the input image data. In the final step, the last feature map is passed through a fully connected layer, resulting in a one-dimensional vector. To obtain class probabilities, a Sigmoid or SoftMax function is applied to this vector.
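The following is a minimal PyTorch sketch of the structure just described, with convolutional layers producing feature maps, a fully connected layer mapping the flattened features to class scores, and SoftMax yielding class probabilities; the input shape and class count are illustrative placeholders.

```python
# A minimal CNN sketch mirroring the description above; shapes are placeholders.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, in_channels=1, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # filters slide over the image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),           # deeper layers: more abstract features
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)       # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)    # feature map -> one-dimensional vector
        return self.classifier(x)  # logits; apply SoftMax for probabilities

x = torch.randn(4, 1, 224, 224)       # (batch, channels, height, width): gray-scale slices
logits = TinyCNN()(x)
probs = torch.softmax(logits, dim=1)  # class probabilities
print(probs.shape)                    # torch.Size([4, 3])
```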
Several networks have made significant contributions to the world of DL. AlexNet [24], GoogLeNet [25], InceptionNet, VGGNet [26], ResNet [27], DenseNet [28], and EfficientNet [29] are among the most widely used CNNs to extract patterns from medical imaging.
DL models are considered data-hungry, since they require substantial amounts of data for effective training. In the realm of medical data analysis, the primary challenges, as previously mentioned, are inherent data scarcity and class imbalance. Common solutions include the application of data-augmentation (DA) methods and transfer-learning (TL) techniques.
Data augmentation techniques are a crucial strategy to mitigate the challenge of limited annotated data in medical image analysis. These methods encompass a range of transformations applied to existing images, effectively expanding the dataset in terms of both size and diversity. Classical approaches involve geometric and appearance modifications such as rotation, scaling, flipping, cropping, zooming, or color changes. Beyond these traditional augmentations, advanced methods like Generative Adversarial Networks (GANs) [30] are used to generate new synthetic yet realistic examples.
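As a short illustration, classical geometric augmentations can be composed with torchvision as in the sketch below; the specific transforms and parameters are placeholders, and in medical imaging each augmentation should be checked for anatomical plausibility.

```python
# A brief sketch of classical geometric data augmentation with torchvision;
# transform choices and parameters are illustrative only.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(size=224, scale=(0.9, 1.0)),  # mild zoom/crop
])

slice_tensor = torch.rand(1, 224, 224)  # placeholder gray-scale MRI slice
augmented = augment(slice_tensor)
print(augmented.shape)
```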
The idea behind TL is to leverage pre-trained models, typically trained on large and diverse datasets, and adapt them to the specific task at hand, for which such a representative sample might not be available. Widely used pre-trained CNNs have typically been developed on large-scale 2D datasets such as ImageNet [31] or MS-COCO [32]. However, a notable challenge when dealing with medical image data is the limited availability of large and diverse 3D datasets for universal pre-training [33]. Transferring the knowledge acquired in the 2D domain to the 3D domain proves to be a non-trivial task, primarily due to the fundamental differences in data structure and representation between these two contexts. To tackle this challenge and address the limitation of scarce data, a broadly used strategy is to decompose 3D volumes into individual 2D slices along a given anatomical plane. However, this decomposition introduces a potential data leakage concern: 2D slices from the same individual may inadvertently end up in both the training and testing datasets of an analytical pipeline. Such data leakage can lead to overestimations of model performance and affect the validity of experimental results. In addition, this approach comes with the trade-off of losing the 3D context present in the original data.
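One simple safeguard against this type of leakage is a patient-wise split, sketched below with scikit-learn's GroupShuffleSplit on synthetic data: grouping slices by patient ID guarantees that no individual contributes slices to both the training and the testing set.

```python
# A minimal patient-wise split sketch: grouping 2D slices by patient ID keeps
# all slices from one individual on the same side of the train/test divide.
# The slice and patient arrays are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_patients, slices_per_patient = 20, 30
patient_ids = np.repeat(np.arange(n_patients), slices_per_patient)  # one ID per slice
X = np.arange(len(patient_ids)).reshape(-1, 1)  # stand-in for the slice data
y = np.repeat(np.random.default_rng(0).integers(0, 2, n_patients), slices_per_patient)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient contributes slices to both sets:
assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])
print(len(train_idx), len(test_idx))
```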
Recent efforts have aimed at overcoming these challenges. Banerjee et al. [34] classified low-grade glioma (LGG) and high-grade glioma (HGG) multi-sequence brain MRIs from TCGA and BraTS2017 data using multiple slice-based approaches. In their work, they compared the performance of CNNs trained from scratch on 2D image patches (PatchNet), entire 2D slices (SliceNet), and multi-planar slices through a final ensemble that averages the classifications obtained from each anatomical view (VolumeNet). The classification obtained with these models was also compared with VGGNet and ResNet pre-trained on ImageNet. The multi-planar method outperformed the rest of the approaches with an accuracy of 94.74%, while the lowest accuracy (68.07%) was obtained with the pre-trained VGGNet. Unfortunately, TCGA and BraTS share some patient data, which could entail an overlap between training and testing samples and hence be prone to data leakage. Ding et al. [35] combined radiomics and DL features using 2D pre-trained CNNs on single-plane images with a subsequent multi-planar fusion. VGG16, in combination with radiomics and a Random Forest (RF), achieved the highest accuracy of 80% when combining the information obtained from the three views. Even though the multi-planar approach processes information gathered from the axial, coronal, and sagittal views, it is still essentially a 2.5D approach, weak at fully capturing 3D context. Zhuge et al. [36] presented a properly native 3D CNN for tumor segmentation and subsequent binary glioma grade classification and compared it with a 2D ResNet50 pre-trained on ImageNet with prior tumor detection using a Mask R-CNN. The results of the 3D approach were slightly better than those of the 2D one, with 97.10% versus 96.30% accuracy, respectively. Chatterjee et al. [37] explored the role of (2+1)D, mixed 2D–3D, and native 3D convolutions based on ResNet. Their study highlights the effectiveness of mixed 2D–3D convolutions, which achieved an accuracy of 96.98%, surpassing both the (2+1)D and the pure 3D approaches. Furthermore, the use of pre-trained networks enhanced performance in the spatial models, yet, intriguingly, the pure 3D model performed better when trained from scratch. A study by Yang et al. [33] introduced ACS convolutions, a novel approach that facilitates TL from models pre-trained on large, publicly accessible 2D datasets. In this method, 2D convolutions are split by channel into three parts and applied separately to the three anatomical views (axial, coronal, and sagittal). The effectiveness of this approach was demonstrated on a publicly available nodule dataset. Subsequently, Baheti et al. [38] further advanced the application of ACS convolutions by showcasing their enhanced performance on 3D MRI brain tumor data, providing evidence of notable improvements in both segmentation and radiogenomic classification tasks.

3.2. Publicly Available Datasets

Access to large and high-quality datasets plays a crucial role in the development and evaluation of robust DL classification algorithms. These datasets encompass diverse tumor types, imaging modalities, and annotated labels, facilitating the advancement of computational methods for accurate tumor classification (Table 1).
Table 1. An overview of publicly available MRI datasets for brain tumor classification benchmarking.
| Dataset | Categories | Dim. | Sample Size | MRI Modalities |
| --- | --- | --- | --- | --- |
| BraTS 2020 [39] | Low-Grade Glioma (LGG), High-Grade Glioma (HGG) | 3D | 369 (LGG: 76, HGG: 293) | T1, T1c, T2, FLAIR |
| BraTS 2019 | LGG, HGG | 3D | 335 (LGG: 76, HGG: 259) | T1, T1c, T2, FLAIR |
| BraTS 2018 | LGG, HGG | 3D | 284 (LGG: 75, HGG: 209) | T1, T1c, T2, FLAIR |
| BraTS 2017 | LGG, HGG | 3D | 285 (LGG: 75, HGG: 210) | T1, T1c, T2, FLAIR |
| BraTS 2015 | LGG, HGG | 3D | 274 (LGG: 54, HGG: 220) | T1, T1c, T2, FLAIR |
| BraTS 2013 | LGG, HGG | 3D | 30 (LGG: 10, HGG: 20) | T1, T1c, T2, FLAIR |
| BraTS 2012 | LGG, HGG | 3D | 30 (LGG: 10, HGG: 20) | T1, T1c, T2, FLAIR |
| CPM-RadPath [40] | Astrocytoma (AS), IDH-mutant; Oligodendroglioma (OG), IDH-mutant, 1p/19q-codeleted; Glioblastoma (GB), IDH-wildtype | 3D | Training: 221 (AS: 54, OG: 34, GB: 133); unseen sets: Val: 35, Test: 73 | T1, T1c, T2, FLAIR |
| Figshare [41] | Meningioma (MN), Glioma (GL), Pituitary (PT) | 2D | 233 (MN: 82, GL: 89, PT: 62) | T1c |
| IXI [42] | Healthy | 3D | 600 | T1, T2, PD, DW |
| Kaggle-I [43] | Healthy (H), Tumor (T) | 2D | 3000 (H: 1500, T: 1500) | - |
| Kaggle-II [44] | Healthy (H), Meningioma (MN), Glioma (GL), Pituitary (PT) | 2D | 3264 (H: 500, MN: 937, GL: 926, PT: 901) | - |
| Kaggle-III [45] | Healthy (H), Tumor (T) | 2D | 253 (H: 98, T: 155) | - |
| Radiopaedia [46] | - | - | - | - |
| REMBRANDT [47] | Oligodendroglioma (OG), Astrocytoma (AS), Glioblastoma (GB) | 3D | 111 (OG: 21, AS: 47, GB: 44) | T1, T1c, T2, FLAIR |
| REMBRANDT [47] (by grade) | Grade II (G.II), Grade III (G.III), Grade IV (G.IV) | 3D | 109 (G.II: 44, G.III: 24, G.IV: 44) | T1, T1c, T2, FLAIR |
| TCGA-GBM [48] | Glioblastoma | 3D | 262 | T1, T1c, T2, FLAIR |
| TCGA-LGG [49] | Grade II (G.II), Grade III (G.III) | 3D | 197 (G.II: 100, G.III: 96, discrepancy: 1) | T1, T1c, T2, FLAIR |
| TCGA-LGG [49] (by type) | Astrocytoma (AS), Oligodendroglioma (OG), Oligoastrocytoma (OAS) | 3D | 197 (AS: 64, OG: 86, OAS: 47) | T1, T1c, T2, FLAIR |

DW: Diffusion-Weighted; FLAIR: Fluid-Attenuated Inversion Recovery; PD: Proton Density; T1c: contrast-enhanced T1-weighted.

4. Machine Learning Applications to Ultra-Low Field Imaging

A completely different area of application of ML to neuroradiology has recently emerged with the availability of ultra-low-field magnetic resonance imaging devices for point-of-care applications, typically built around <0.1 T permanent magnets [50][51][52]. In the 0.055 T implementation described by Liu et al. [53], DL was used to improve acquisition quality by detecting and canceling external electromagnetic interference (EMI) signals, eliminating the need for radio-frequency-shielded rooms. They compared the results of the DL EMI cancelation in 13 patients with brain tumors, on both the 0.055 T scanner and a 3 T machine in same-day acquisitions, finding that it was possible to identify the different tumor types.
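To illustrate the underlying idea only (the cited work uses a deep model for this mapping), the following deliberately simplified sketch predicts the interference reaching the receive coil from external reference sensors by ordinary least squares and subtracts it; all signals are synthetic placeholders.

```python
# A simplified, hypothetical EMI-cancelation sketch: predict the interference
# seen by the MR receive coil from external reference sensors, then subtract
# the prediction. An ordinary least-squares fit stands in for the deep model
# used in the cited work; all signals are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
refs = rng.normal(size=(n, 3))             # 3 external EMI sensing channels
coupling = np.array([0.8, -0.5, 0.3])      # unknown sensor-to-coil coupling
emi = refs @ coupling                      # interference reaching the MR coil
mr_signal = np.sin(np.linspace(0, 50, n))  # placeholder for the true MR signal
measured = mr_signal + emi                 # what the receive coil records

# Least-squares estimate of the coupling, then subtraction of the predicted EMI.
w, *_ = np.linalg.lstsq(refs, measured, rcond=None)
cleaned = measured - refs @ w
print(np.std(measured - mr_signal), np.std(cleaned - mr_signal))  # residual EMI drops
```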
Another example is the Hyperfine system, which received FDA clearance in 2020 for brain imaging and in 2021 (K212456) for DL-based image reconstruction to enhance the quality of the generated images. In particular, DL is used as part of the reconstruction pipeline of T1, T2, and FLAIR images, in two steps: first, a so-called DL gridding, in which the undersampled k-space data are transformed into images not by Fourier transformation but with DL; the transformed images are then combined, and a final DL post-processing step is applied to eliminate noise. However, no details about the specific algorithms are provided.

This entry is adapted from the peer-reviewed paper DOI: 10.3390/cancers16020300

References

  1. Sohn, E. The reproducibility issues that haunt health-care AI. Nature 2023, 613, 402–403.
  2. McDermott, M.; Wang, S.; Marinsek, N.; Ranganath, R.; Foschini, L.; Ghassemi, M. Reproducibility in machine learning for health research: Still a ways to go. Sci. Transl. Med. 2021, 13, eabb1655.
  3. Muehlematter, U.; Daniore, P.; Vokinger, K. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): A comparative analysis. Lancet Digit. Health 2021, 3, e195–e203.
  4. Di Nunno, V.; Fordellone, M.; Minniti, G.; Asioli, S.; Conti, A.; Mazzatenta, D.; Balestrini, D.; Chiodini, P.; Agati, R.; Tonon, C.; et al. Machine learning in neuro-oncology: Toward novel development fields. J. Neuro-Oncol. 2022, 159, 333–346.
  5. Bacciu, D.; Lisboa, P.; Vellido, A. Deep Learning in Biology and Medicine; World Scientific: London, UK, 2022.
  6. Bernal, J.; Kushibar, K.; Clèrigues, A.; Oliver, A.; Lladó, X. Deep learning for medical imaging. In Deep Learning in Biology and Medicine; World Scientific: London, UK, 2022; pp. 11–54.
  7. Xue, H.; Hu, G.; Hong, N.; Dunnick, N.; Jin, Z. How to keep artificial intelligence evolving in the medical imaging world? Challenges and opportunities. Sci. Bull. 2023, 68, 648–652.
  8. Mukherjee, T.; Pournik, O.; Lim Choi Keung, S.; Arvanitis, T. Clinical decision support systems for brain tumour diagnosis and prognosis: A systematic review. Cancers 2023, 15, 3523.
  9. Bertsimas, D.; Wiberg, H. Machine Learning in Oncology: Methods, applications, and challenges. JCO Clin. Cancer Inform. 2020, 4, 885–894.
  10. Lisboa, P.; Saralajew, S.; Vellido, A.; Fernández-Domenech, R.; Villmann, T. The Coming of Age of Interpretable and Explainable Machine Learning Models. Neurocomputing 2023, 535, 25–39.
  11. Mocioiu, V.; Pedrosa de Barros, N.; Ortega-Martorell, S.; Slotboom, J.; Knecht, U.; Arús, C.; Vellido, A.; Julià-Sapé, M. A Machine Learning pipeline for supporting differentiation of glioblastomas from single brain metastases. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2016), Bruges, Belgium, 5–7 October 2016; pp. 247–252.
  12. Pitarch, C.; Ribas, V.; Vellido, A. AI-Based Glioma Grading for a Trustworthy Diagnosis: An Analytical Pipeline for Improved Reliability. Cancers 2023, 15, 3369.
  13. Tabassum, M.; Suman, A.; Suero Molina, E.; Pan, E.; Di Ieva, A.; Liu, S. Radiomics and Machine Learning in Brain Tumors and Their Habitat: A Systematic Review. Cancers 2023, 15, 3845.
  14. van Griethuysen, J.J.M.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.H.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J.W.L. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017, 77, e104–e107.
  15. Hyvärinen, A.; Oja, E. Independent component analysis: Algorithms and applications. Neural Netw. 2000, 13, 411–430.
  16. Lee, J.; Zhao, Q.; Kent, M.; Platt, S. Tumor Segmentation using temporal Independent Component Analysis for DCE-MRI. BioRxiv 2022.
  17. Chakhoyan, A.; Raymond, C.; Chen, J.; Goldman, J.; Yao, J.; Kaprealian, T.; Pouratian, N.; Ellingson, B. Probabilistic independent component analysis of dynamic susceptibility contrast perfusion MRI in metastatic brain tumors. Cancer Imaging 2019, 19, 14.
  18. Lee, D.; Seung, H. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
  19. Ortega-Martorell, S.; Lisboa, P.; Vellido, A.; Julià-Sapé, M.; Arús, C. Non-negative matrix factorisation methods for the spectral decomposition of MRS data from human brain tumours. BMC Bioinform. 2012, 13, 38.
  20. Ungan, G.; Arús, C.; Vellido, A.; Julià-Sapé, M. A Comparison of Non-Negative Matrix Underapproximation Methods for the Decomposition of Magnetic Resonance Spectroscopy Data from Human Brain Tumors. NMR Biomed. 2023, 36, e5020.
  21. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127.
  22. Cai, L.; Gao, J.; Zhao, D. A review of the application of deep learning in medical image classification and segmentation. Ann. Transl. Med. 2020, 8, 713.
  23. Chen, X.; Wang, X.; Zhang, K.; Fung, K.M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444.
  24. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25.
  25. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  26. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015.
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  28. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  29. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 6105–6114.
  30. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
  31. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  32. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; pp. 740–755.
  33. Yang, J.; Huang, X.; He, Y.; Xu, J.; Yang, C.; Xu, G.; Ni, B. Reinventing 2D Convolutions for 3D Images. IEEE J. Biomed. Health Inform. 2021, 25, 3009–3018.
  34. Banerjee, S.; Mitra, S.; Masulli, F.; Rovetta, S. Glioma classification using deep radiomics. SN Comput. Sci. 2020, 1, 209.
  35. Ding, J.; Zhao, R.; Qiu, Q.; Chen, J.; Duan, J.; Cao, X.; Yin, Y. Developing and validating a deep learning and radiomic model for glioma grading using multiplanar reconstructed magnetic resonance contrast-enhanced T1-weighted imaging: A robust, multi-institutional study. Quant. Imaging Med. Surg. 2022, 12, 1517.
  36. Zhuge, Y.; Ning, H.; Mathen, P.; Cheng, J.Y.; Krauze, A.V.; Camphausen, K.; Miller, R.W. Automated glioma grading on conventional MRI images using deep convolutional neural networks. Med. Phys. 2020, 47, 3044–3053.
  37. Chatterjee, S.; Nizamani, F.A.; Nürnberger, A.; Speck, O. Classification of brain tumours in MR images using deep spatiospatial models. Sci. Rep. 2022, 12, 1505.
  38. Baheti, B.; Pati, S.; Menze, B.; Bakas, S. Leveraging 2D Deep Learning ImageNet-trained Models for Native 3D Medical Image Analysis. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Proceedings of the BrainLes 2022, Singapore, 18 September 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13769, pp. 68–79.
  39. Brain Tumor Segmentation (BraTS) Challenge. Available online: http://www.braintumorsegmentation.org/ (accessed on 10 June 2023).
  40. Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification 2019 (CPM-RadPath). Available online: https://www.med.upenn.edu/cbica/cpm-rad-path-2019/ (accessed on 30 August 2023).
  41. Figshare Brain Tumor Dataset. Available online: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427 (accessed on 1 June 2023).
  42. IXI Dataset. Available online: https://brain-development.org/ixi-dataset/ (accessed on 10 June 2023).
  43. Hamada, A. Br35H Brain Tumor Detection 2020 Dataset. Available online: https://www.kaggle.com/datasets/ahmedhamada0/brain-tumor-detection (accessed on 1 June 2023).
  44. Bhuvaji, S.; Kadam, A.; Bhumkar, P.; Dedge, S. Brain Tumor Classification (MRI). Available online: https://www.kaggle.com/datasets/sartajbhuvaji/brain-tumor-classification-mri (accessed on 1 June 2023).
  45. Chakrabarty, N. Brain MRI Images Dataset for Brain Tumor Detection, Kaggle. 2019. Available online: https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection (accessed on 1 June 2023).
  46. Radiopaedia. Available online: https://radiopaedia.org/cases/system/central-nervous-system (accessed on 1 June 2023).
  47. Scarpace, L.; Flanders, A.E.; Jain, R.; Mikkelsen, T.; Andrews, D.W. Data from REMBRANDT. The Cancer Imaging Archive, 2019. Available online: https://www.cancerimagingarchive.net/collection/rembrandt/ (accessed on 20 April 2023).
  48. Scarpace, L.; Mikkelsen, T.; Cha, S.; Rao, S.; Tekchandani, S.; Gutman, D.; Saltz, J.H.; Erickson, B.J.; Pedano, N.; Flanders, A.E.; et al. The Cancer Genome Atlas Glioblastoma Multiforme Collection (TCGA-GBM) (Version 4). The Cancer Imaging Archive, 2016. Available online: https://www.cancerimagingarchive.net/collection/tcga-gbm/ (accessed on 4 March 2023).
  49. Pedano, N.; Flanders, A.E.; Scarpace, L.; Mikkelsen, T.; Eschbacher, J.M.; Hermes, B.; Sisneros, V.; Barnholtz-Sloan, J.; Ostrom, Q. The Cancer Genome Atlas Low Grade Glioma Collection (TCGA-LGG) (Version 3). The Cancer Imaging Archive, 2016. Available online: https://www.cancerimagingarchive.net/collection/tcga-lgg/ (accessed on 5 March 2023).
  50. O’Reilly, T.; Teeuwisse, W.M.; de Gans, D.; Koolstra, K.; Webb, A.G. In vivo 3D brain and extremity MRI at 50 mT using a permanent magnet Halbach array. Magn. Reson. Med. 2021, 85, 495–505.
  51. Cooley, C.Z.; McDaniel, P.C.; Stockmann, J.P.; Srinivas, S.A.; Cauley, S.F.; Śliwiak, M.; Sappo, C.R.; Vaughn, C.F.; Guerin, B.; Rosen, M.S.; et al. A portable scanner for magnetic resonance imaging of the brain. Nat. Biomed. Eng. 2020, 5, 229–239.
  52. Man, C.; Lau, V.; Su, S.; Zhao, Y.; Xiao, L.; Ding, Y.; Leung, G.K.; Leong, A.T.; Wu, E.X. Deep learning enabled fast 3D brain MRI at 0.055 tesla. Sci. Adv. 2023, 9, eadi9327.
  53. Liu, Y.; Leong, A.; Zhao, Y.; Xiao, L.; Mak, H.; Tsang, A.; Lau, G.; Leung, G.; Wu, E. A low-cost and shielding-free ultra-low-field brain MRI scanner. Nat. Commun. 2021, 12, 7238.