Deep Learning Aided Neuroimaging for Brain Monitoring: History

Deep learning has shown tremendous potential in the field of neuroimaging and brain regulation. Neuroimaging techniques such as MRI, CT, PET/CT, EEG/MEG, optical imaging, and other modalities generate large amounts of comprehensive and complex data that can be challenging to analyze and interpret. Deep learning techniques such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) have proven effective in extracting meaningful information from these data and in transforming neuroimaging from a qualitative to a quantitative modality. This information can be merged with additional patient data and processed using advanced bioinformatics software to build models that could enhance the accuracy of diagnosis, prognosis, and prediction for brain monitoring and regulation.

  • artificial intelligence
  • deep learning
  • medical imaging
  • neuroimaging

1. Deep Learning Assisted MRI

MRI scans of the brain are considered the most effective approach for identifying chronic neurological disorders such as brain tumors, dementia, stroke, and multiple sclerosis. Owing to their high sensitivity, they are also the preferred method for detecting conditions affecting the pituitary gland, brain vessels, the inner ear, and the eyes. In recent years, several deep learning-based medical image analysis methods have been introduced to facilitate health monitoring and diagnosis using brain MRI scans [1][2][3]. One of the primary applications of deep learning in MRI-based neuroimaging is the identification and classification of neurological disorders. For example, CNNs have been used to accurately diagnose Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis from MRI scans. Deep learning has also shown potential in identifying different stages of brain development, detecting early signs of neurological disorders, and predicting the progression of these disorders.
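As an illustration of the kind of model used in these studies, the following is a minimal PyTorch sketch of a 3D CNN that classifies fixed-size brain MRI volumes. The architecture, input size, and three-class setup are illustrative assumptions rather than any published network.

```python
# Minimal sketch of a 3D CNN for brain-MRI classification, assuming
# preprocessed scans resampled to a fixed 64x64x64 grid. Architecture,
# channel sizes, and class count are illustrative, not from a cited study.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 64 -> 32
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32 -> 16
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (B, 1, 64, 64, 64)
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = Small3DCNN(n_classes=3)
scan = torch.randn(2, 1, 64, 64, 64)               # two dummy T1w volumes
logits = model(scan)                               # (2, 3) class scores
```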
Recent advancements in the classification of gliomas by biological genotype, coupled with computational deep learning models built on multi-modal MRI biomarkers, offer a promising avenue for customized and effective treatment plans. In this regard, the deep learning-based assessment of gliomas using hand-crafted or automatically extracted MRI features has emerged as a critical tool, as genomic alterations can be correlated with MRI-based phenotypes [4]. Deep learning algorithms have also been extensively explored for classifying neurodegenerative diseases from medical imaging. Applying CNNs to MRI data has proven promising for achieving exceptional precision in predicting the progression of neurological conditions such as brain tumors, Alzheimer’s disease, multiple sclerosis, and stroke by capturing image features that are not detectable with traditional methods.

However, little attention has been given to post-mortem immunofluorescence imaging studies of patients’ brains for this purpose, even though such studies could be a valuable tool for detecting abnormal chemical changes or pathological post-translational modifications of the Tau polypeptide. L. Diaz-Gomez et al. therefore proposed a CNN pipeline that applied transfer learning to post-mortem immunofluorescence images with different Tau biomarkers to classify Tau pathology in Alzheimer’s disease and progressive supranuclear palsy. The ResNet-IFT architecture was used to generate models, and interpretability algorithms such as Guided Grad-CAM and occlusion analysis were employed to interpret the model outputs. The authors tested four different architectures to determine the best classifier; the resulting design classified the diseases with an average accuracy of 98.41% and also provided an interpretation of the classification, revealing different structural patterns in the immunoreactivity of the Tau protein in the neurofibrillary tangles (NFTs) of patients with progressive supranuclear palsy and Alzheimer’s disease [5].
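The following minimal sketch shows the general transfer-learning pattern used in pipelines of this kind: an ImageNet-pretrained ResNet backbone is frozen and only a new classification head is trained for a two-class problem (here, hypothetically, Alzheimer's disease vs. progressive supranuclear palsy). It is not the published ResNet-IFT model; the backbone choice, weights, and training setup are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and freeze its feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new trainable two-class head
# (hypothetically: Alzheimer's disease vs. progressive supranuclear palsy).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training then proceeds as usual over batches of (image, label) pairs.
```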
O. Ozkaraca et al. created a new modular deep learning model to enhance the classification accuracy of brain MRI images while addressing the drawbacks of prevalent transfer learning approaches such as DenseNet, VGG16, and basic CNN architectures. They trained and tested the model on brain tumor images from the Kaggle database using two data-splitting schemes: an 80% training/20% testing split and 10-fold cross-validation. Although the proposed model demonstrated better classification performance than the other transfer learning methods, it required more processing time [1]. In another study, T. Chattopadhyay et al. used a 3D CNN to predict amyloid positivity (Abeta+) from the 3D brain MRI scans of 762 elderly participants (mean age: 75.1 ± 7.6 years; 394 F/368 M; 459 healthy controls, 67 with mild cognitive impairment, and 236 with dementia) scanned as part of the Alzheimer’s Disease Neuroimaging Initiative; the 3D CNN predicted Abeta+ from T1-weighted scans with a balanced accuracy of 76% [6].

Exploring CNN-generated attention maps, which identify the anatomical features that contribute most to a CNN’s decisions, holds the potential to unveil crucial disease mechanisms behind the accumulation of disability. L. Coll et al. used a 3D-CNN model with whole-brain MRI scans as input to predict multiple sclerosis disability class, achieving a mean accuracy of 79% and outperforming an equivalent logistic regression model (77%); the model was also successfully validated in an independent external cohort without any re-training (accuracy = 71%) [7]. Perinatal arterial ischemic stroke has been linked to unfavorable neurological outcomes, but evaluating ischemic lesions and subsequent brain development in newborns requires time-consuming manual inspection of brain tissues and lesions. R. Zoetmulder et al. therefore proposed an automatic method that uses CNNs to segment brain tissues and ischemic lesions in the MRI scans of infants with perinatal arterial ischemic stroke, eliminating the need for labor-intensive manual assessment [8]. This study indicates that the automatic segmentation of brain tissue and ischemic lesions in patient MRI scans is feasible and may allow brain development and treatment efficacy to be evaluated in large datasets.
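A minimal encoder-decoder sketch of the kind of CNN used for voxel-wise tissue and lesion segmentation is shown below. Practical pipelines typically use deeper U-Net variants with skip connections; the channel sizes and the three-label output (background/tissue/lesion) here are illustrative assumptions.

```python
# A minimal encoder-decoder for per-pixel segmentation of an MRI slice.
# Real pipelines (e.g., U-Net variants) are deeper and add skip connections.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_labels: int = 3):   # background / tissue / lesion
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, n_labels, 1),       # per-pixel label scores
        )

    def forward(self, x):                     # x: (B, 1, H, W)
        return self.decode(self.encode(x))

net = TinySegNet()
slice_ = torch.randn(1, 1, 128, 128)           # one dummy MRI slice
label_map = net(slice_).argmax(dim=1)          # (1, 128, 128) predicted labels
```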

2. Deep Learning Assisted PET/CT

PET/CT provides powerful diagnostic methods for neurodegenerative disorders by identifying disease-specific pathologies. Deep learning techniques have shown great promise in enhancing PET and CT imaging for neuroimaging and brain monitoring/regulation: they can improve the accuracy, speed, and efficiency of image processing, enabling more effective analysis and interpretation of neuroimaging data. Three-dimensional CNNs can be trained to denoise PET images for individual neurodegenerative disease cohorts [9], to predict the diagnosis of dementia with Lewy bodies, Alzheimer’s disease, and mild cognitive impairment [10], and to estimate the amyloid standardized uptake value ratio from PET for Alzheimer’s prognosis [11]. One example of deep learning-assisted neuroimaging is the use of CNNs to improve the accuracy of PET image segmentation. In one study, researchers developed a CNN-based segmentation method for diagnosing Alzheimer’s disease that achieved higher accuracy (96%), sensitivity (96%), and specificity (94%) than traditional methods; it was evaluated on the 18F-FDG PET images of 855 subjects from the ADNI database (635 normal controls and 220 Alzheimer’s disease patients) and could reliably discriminate normal controls from Alzheimer’s disease patients [12].
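For reference, figures such as the sensitivity and specificity above are derived from a binary confusion matrix. The sketch below computes them from illustrative dummy counts, not the study's actual confusion matrix.

```python
# Sensitivity, specificity, accuracy, and balanced accuracy from binary
# confusion-matrix counts. The counts below are illustrative assumptions.
def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn)            # true-positive rate (recall)
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    balanced = (sensitivity + specificity) / 2
    return sensitivity, specificity, accuracy, balanced

sens, spec, acc, bal = binary_metrics(tp=211, fp=38, tn=597, fn=9)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f} balanced={bal:.2f}")
```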
Another example is the use of deep learning to enhance the quality and resolution of CT imaging in neuroimaging. In a study published in the journal Radiology, researchers constructed and trained a deep learning-based stenosis and plaque classification algorithm for head and neck CT angiography that achieved 85.6% consistency between the radiologists and the DL-assisted algorithm and reduced the radiologists’ time for diagnosis and report writing from 28.8 ± 5.6 min to 12.4 ± 2.0 min (p < 0.001) [13]. Beyond image processing, deep learning techniques have also been used to analyze neuroimaging data, for example for the semantic segmentation and quantification of intracerebral hemorrhage (ICH), perihematomal edema, and intraventricular hemorrhage on the non-contrast CT scans of patients with spontaneous ICH [14]. Overall, deep learning holds great promise for improving the accuracy, speed, and efficiency of PET and CT imaging for neuroimaging and brain monitoring/regulation. With further development and refinement, these techniques could revolutionize the field of neuroimaging and contribute to a better understanding of brain function and dysfunction.
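A typical quantification step after segmentation converts the predicted mask into a volume using the voxel spacing from the CT header. The sketch below illustrates this with assumed spacing values and a dummy mask.

```python
# Hemorrhage volume = voxel count of the predicted mask x voxel volume.
# The spacing values and mask below are illustrative assumptions.
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm=(0.45, 0.45, 5.0)) -> float:
    """mask: binary 3D array (1 = hemorrhage); returns volume in mL."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0     # mm^3 -> mL

mask = np.zeros((512, 512, 30), dtype=np.uint8)
mask[200:260, 220:280, 10:16] = 1              # dummy hemorrhage region
print(f"estimated ICH volume: {lesion_volume_ml(mask):.1f} mL")
```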

3. Deep Learning Assisted EEG/MEG

Deep learning has become an increasingly popular tool in EEG/MEG neuroimaging and brain monitoring/regulation. With the ability to analyze large datasets and detect subtle patterns in neural activity, it has shown great potential for enhancing our understanding of brain function and informing clinical applications [15][16]. Deep learning has also been used to improve brain regulation techniques such as EEG neurofeedback, a non-invasive technique that aims to regulate brain activity by providing real-time feedback to the patient. Deep learning algorithms can analyze EEG data in real time, detect patterns, and provide targeted feedback to help patients regulate their brain activity. One of the main applications of deep learning in EEG/MEG is the classification of brain states or disorders. For example, Chambon et al. used a deep CNN architecture to classify sleep stages from multivariate and multimodal EEG time series, achieving accuracies of up to 90% [17]. In another study, Lawhern et al. introduced EEGNet, a compact convolutional neural network that accurately classifies EEG data across several brain–computer interface paradigms, demonstrating the potential of deep learning to aid clinical applications [18].
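The sketch below is a simplified, EEGNet-inspired compact CNN for EEG epoch classification: a temporal convolution learns frequency filters, a depthwise convolution across electrodes learns spatial filters, and a small linear head classifies. The channel counts, window length, and four-class setup are illustrative assumptions, not the published EEGNet.

```python
# Simplified EEGNet-style compact CNN for EEG classification (sketch).
import torch
import torch.nn as nn

class CompactEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=256, n_classes=4):
        super().__init__()
        # Temporal convolution: learns frequency filters along time.
        self.temporal = nn.Conv2d(1, 8, (1, 64), padding=(0, 32), bias=False)
        # Depthwise convolution across electrodes: learns spatial filters.
        self.spatial = nn.Conv2d(8, 16, (n_channels, 1), groups=8, bias=False)
        self.bn = nn.BatchNorm2d(16)
        self.pool = nn.AvgPool2d((1, 8))
        out_len = (n_samples + 1) // 8            # width after conv + pooling
        self.head = nn.Linear(16 * out_len, n_classes)

    def forward(self, x):                          # x: (B, 1, channels, samples)
        h = self.pool(torch.relu(self.bn(self.spatial(self.temporal(x)))))
        return self.head(h.flatten(1))

model = CompactEEGNet()
eeg = torch.randn(4, 1, 22, 256)                   # four dummy EEG epochs
print(model(eeg).shape)                            # torch.Size([4, 4])
```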
Moreover, deep learning techniques have been used in the development of brain–computer interfaces (BCIs) that allow patients to control external devices, such as prosthetic limbs or computer interfaces, using their brain activity; deep learning algorithms can extract meaningful information from EEG or fMRI data and translate it into commands for external devices. Deep learning has also been used for brain activity prediction and regulation. For instance, M. Dinov and R. Leech used deep reinforcement learning in closed-loop behavioral- and neuro-feedback to track and optimize human performance [19]: a deep learning model was trained to predict individualized EEG signals and applied in a closed-loop system for real-time neurofeedback, achieving the successful regulation of brain activity. Recent technological advances such as wireless recording, deep learning analysis, and real-time temporal resolution have increased interest in EEG-based brain–computer interface approaches [20][21]. A deep learning model has also been developed for the real-time decoding of MEG signals and applied in a brain–computer interface system for regulating motor imagery tasks [22]. Epilepsy is a chronic brain disorder in which functional changes may precede structural ones and may be detectable using existing modalities [23]; functional connectivity analysis using EEG and resting-state functional magnetic resonance imaging (rs-fMRI) can localize epileptogenicity [24][25]. Finally, deep learning has been used for feature extraction and representation learning in EEG/MEG data. For example, R. Hussein et al. used a deep learning model to learn features from raw EEG data, which were then used to classify epileptic seizures [26]. Similarly, K. Singh and J. Malhotra used a deep learning smart health monitoring model with the spectral analysis of scalp EEG to automatically predict epileptic seizures [27].
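A real-time BCI decoder typically buffers the incoming EEG stream and classifies the most recent window on each update. The following sketch shows that loop, reusing the CompactEEGNet sketch above; the acquisition function, window length, and command mapping are hypothetical placeholders for whatever a real amplifier driver and decoding model provide.

```python
# Sliding-window real-time decoding loop for a closed-loop BCI (sketch).
import collections
import torch

WINDOW, CHANNELS = 256, 22
buffer = collections.deque(maxlen=WINDOW)      # rolling window of samples

def read_samples(n=16):
    """Stand-in for an amplifier driver; returns n dummy multichannel samples."""
    return torch.randn(n, CHANNELS)

model = CompactEEGNet()                        # reuse the sketch above
model.eval()

for _ in range(50):                            # acquisition loop
    for sample in read_samples():
        buffer.append(sample)
    if len(buffer) == WINDOW:
        window = torch.stack(list(buffer)).T   # (channels, samples)
        with torch.no_grad():
            command = model(window[None, None]).argmax(dim=1).item()
        # send `command` to the effector (cursor, prosthesis, feedback cue)
```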
In summary, deep learning has shown great potential in enhancing EEG/MEG neuroimaging and brain monitoring/regulation. By enabling the accurate classification of brain states and disorders, the prediction and regulation of brain activity, and the learning of meaningful representations from EEG/MEG data, deep learning could revolutionize our understanding of brain function and inform clinical applications.

4. Deep Learning Assisted Optical Neuroimaging and Others

Optical imaging is a non-invasive technique that uses light to visualize tissue structure and function, and optical neuroimaging holds great promise for imaging-guided brain regulation. For example, a recent study used deep learning to predict behavior from functional imaging data in mice, demonstrating the potential of deep learning for real-time behavioral prediction and manipulation [28]. Non-invasive image guidance of therapeutic strategies, using modalities such as bioluminescence imaging (BLI), fluorescence imaging (FI), Cerenkov luminescence imaging (CLI), and photoacoustic imaging (PAI), would enable the removal of cancerous tissue while avoiding side effects and systemic toxicity, preventing damage to healthy tissue, and decreasing the risk of postoperative problems. BLI is widely used in small-animal imaging for the in vivo tracking of therapeutic gene expression and cell-based therapy, whereas FI is highly promising for clinical translation, with applications that include image-guided surgery, radiotherapy, gene therapy, drug delivery, and sentinel lymph node fluorescence mapping. CLI is a novel radioactive-optical hybrid imaging strategy for animal studies and clinical translation. Deep learning has shown significant promise in optical imaging modalities for neuroimaging and brain regulation, including photoacoustic imaging and photoacoustic tomography. Photoacoustic imaging is a hybrid technique that combines the advantages of optical and ultrasound imaging: laser light is used to generate acoustic signals, which are then used to create high-resolution images of tissue structure and function. Deep learning has been applied to photoacoustic imaging for neuroimaging in several studies. Using deep learning to reconstruct high-resolution images of the cerebral vasculature from photoacoustic tomography data can significantly improve image quality and reduce imaging artifacts [29]. In 2020, S. Guan et al. proposed a new deep learning method called Pixel-DL, which performs pixel-wise interpolation governed by the physics of photoacoustic wave propagation and then uses a CNN to reconstruct the image. Synthetic data and data from phantoms of the mouse brain, lung, and fundus vasculature were used to train and test the model. Pixel-DL performed as well as or better than the iterative methods and consistently outperformed other CNN-based approaches in correcting artifacts; it is also computationally efficient, enabling real-time photoacoustic tomography rendering and improved image reconstruction for limited-view and sparse data [30].
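A minimal sketch of the pixel-wise interpolation idea behind Pixel-DL follows: for every image pixel, each transducer channel is sampled at the acoustic time of flight from that pixel, producing a per-channel image stack that a CNN would then refine into the final reconstruction. The ring-array geometry, speed of sound, sampling rate, and array sizes are simplified assumptions, not the published implementation.

```python
# Time-of-flight pixel-wise interpolation for photoacoustic data (sketch).
import numpy as np

C_MM_US = 1.5                                   # speed of sound (mm/us), assumed
FS_MHZ = 20.0                                   # sampling rate (MHz), assumed

def pixel_interpolate(sinogram, sensor_xy, grid_xy):
    """sinogram: (n_sensors, n_times); returns (n_sensors, n_pixels)."""
    n_sensors, n_times = sinogram.shape
    out = np.zeros((n_sensors, grid_xy.shape[0]))
    for s in range(n_sensors):
        dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)   # mm
        t_idx = np.round(dist / C_MM_US * FS_MHZ).astype(int)   # sample index
        valid = t_idx < n_times
        out[s, valid] = sinogram[s, t_idx[valid]]
    return out              # reshape to (n_sensors, H, W) as the CNN input

# Dummy ring-array geometry: 64 sensors around a 20 mm x 20 mm pixel grid.
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 15.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
xs = np.linspace(-10, 10, 64)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
features = pixel_interpolate(np.random.randn(64, 512), sensors, grid)
print(features.shape)                           # (64, 4096) -> (64, 64, 64)
```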
Recent advancements in deep learning assisted optical neuroimaging have gone a long way toward solving the challenges of biopsy, an invasive, unrepeatable technique that usually ignores heterogeneity within the brain. From a resolution perspective, current optical imaging for brain disease diagnosis falls into two branches: cytological and histopathological. Cytological examination is generally inexpensive, minimally invasive, and easily repeatable compared to histopathological examination, but it is also more labor-intensive and less sensitive. Image interpretation is a highly subjective task, and deep learning has demonstrated its ability to deliver a more objective and straightforward diagnosis; most deep learning research to date has focused on optical images at the histopathological scale [31]. Deep learning has gained increasing attention in the healthcare sector owing to its promising results in recent years. It uses data-characterization algorithms to convert conventional imaging information, via modern linear algebra and statistics, into data matrices from which information revealing certain patterns can be extracted. These approaches have been applied to radiological diagnosis, bioinformatics, genome sequencing, drug development, and histopathological image analysis; for histopathological diagnosis in particular, deep learning has surpassed the performance of clinical experts. The core advantage of deep learning is its complex neural network architecture, which can detect and extract important features automatically. The prediction targets and assessments of deep learning algorithms have developed into several varieties, including prognosis, PD-L1 status, microsatellite instability, histological subtyping, microenvironment analysis, and segmentation. Moreover, deep learning can address the problem that some neuroimaging data are difficult to quantify in three dimensions. However, the use of deep learning in neuroimaging and brain regulation also presents challenges: the interpretation of deep learning models is often opaque, making it difficult to understand the reasoning behind a model’s decisions, and deep learning algorithms require large amounts of training data, which can be challenging to acquire in neuroimaging. In conclusion, deep learning has shown significant promise across imaging modalities for neuroimaging and brain regulation, including fluorescence imaging, photoacoustic imaging, and photoacoustic tomography. The studies above demonstrate its potential to improve image quality, reduce imaging artifacts, and support predictive models for the diagnosis and treatment of neurological disorders.

This entry is adapted from the peer-reviewed paper 10.3390/s23114993

References

  1. Ozkaraca, O.; Bagriacik, O.I.; Guruler, H.; Khan, F.; Hussain, J.; Khan, J.; Laila, U.E. Multiple Brain Tumor Classification with Dense CNN Architecture Using Brain MRI Images. Life 2023, 13, 349.
  2. Li, Z.; Fan, Q.; Bilgic, B.; Wang, G.; Wu, W.; Polimeni, J.R.; Miller, K.L.; Huang, S.Y.; Tian, Q. Diffusion MRI data analysis assisted by deep learning synthesized anatomical images (DeepAnat). Med. Image Anal. 2023, 86, 102744.
  3. Chen, Z.; Pawar, K.; Ekanayake, M.; Pain, C.; Zhong, S.; Egan, G.F. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J. Digit Imaging 2023, 36, 204–230.
  4. Gore, S.; Chougule, T.; Jagtap, J.; Saini, J.; Ingalhalikar, M. A Review of Radiomics and Deep Predictive Modeling in Glioma Characterization. Acad. Radiol. 2021, 28, 1599–1621.
  5. Diaz-Gomez, L.; Gutierrez-Rodriguez, A.E.; Martinez-Maldonado, A.; Luna-Munoz, J.; Cantoral-Ceballos, J.A.; Ontiveros-Torres, M.A. Interpretable Classification of Tauopathies with a Convolutional Neural Network Pipeline Using Transfer Learning and Validation against Post-Mortem Clinical Cases of Alzheimer’s Disease and Progressive Supranuclear Palsy. Curr. Issues Mol. Biol. 2022, 44, 5963–5985.
  6. Chattopadhyay, T.; Ozarkar, S.S.; Buwa, K.; Thomopoulos, S.I.; Thompson, P.M.; for the Alzheimer’s Disease Neuroimaging Initiative. Predicting Brain Amyloid Positivity from T1 weighted brain MRI and MRI-derived Gray Matter, White Matter and CSF maps using Transfer Learning on 3D CNNs. bioRxiv 2023.
  7. Coll, L.; Pareto, D.; Carbonell-Mirabent, P.; Cobo-Calvo, A.; Arrambide, G.; Vidal-Jordana, A.; Comabella, M.; Castillo, J.; Rodriguez-Acevedo, B.; Zabalza, A.; et al. Deciphering multiple sclerosis disability with deep learning attention maps on clinical MRI. Neuroimage Clin. 2023, 38, 103376.
  8. Zoetmulder, R.; Baak, L.; Khalili, N.; Marquering, H.A.; Wagenaar, N.; Benders, M.; van der Aa, N.E.; Isgum, I. Brain segmentation in patients with perinatal arterial ischemic stroke. Neuroimage Clin. 2023, 38, 103381.
  9. Daveau, R.S.; Law, I.; Henriksen, O.M.; Hasselbalch, S.G.; Andersen, U.B.; Anderberg, L.; Hojgaard, L.; Andersen, F.L.; Ladefoged, C.N. Deep learning based low-activity PET reconstruction of PiB and FE-PE2I in neurodegenerative disorders. Neuroimage 2022, 259, 119412.
  10. Etminani, K.; Soliman, A.; Davidsson, A.; Chang, J.R.; Martinez-Sanchis, B.; Byttner, S.; Camacho, V.; Bauckneht, M.; Stegeran, R.; Ressner, M.; et al. A 3D deep learning model to predict the diagnosis of dementia with Lewy bodies, Alzheimer’s disease, and mild cognitive impairment using brain 18F-FDG PET. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 563–584.
  11. Maddury, S.; Desai, K. DeepAD: A deep learning application for predicting amyloid standardized uptake value ratio through PET for Alzheimer’s prognosis. Front. Artif. Intell. 2023, 6, 1091506.
  12. Hamdi, M.; Bourouis, S.; Rastislav, K.; Mohmed, F. Evaluation of Neuro Images for the Diagnosis of Alzheimer’s Disease Using Deep Learning Neural Network. Front. Public Health 2022, 10, 834032.
  13. Fu, F.; Shan, Y.; Yang, G.; Zheng, C.; Zhang, M.; Rong, D.; Wang, X.; Lu, J. Deep Learning for Head and Neck CT Angiography: Stenosis and Plaque Classification. Radiology 2023, 307, 220996.
  14. Kok, Y.E.; Pszczolkowski, S.; Law, Z.K.; Ali, A.; Krishnan, K.; Bath, P.M.; Sprigg, N.; Dineen, R.A.; French, A.P. Semantic Segmentation of Spontaneous Intracerebral Hemorrhage, Intraventricular Hemorrhage, and Associated Edema on CT Images Using Deep Learning. Radiol. Artif. Intell. 2022, 4, e220096.
  15. Moghadam, S.M.; Airaksinen, M.; Nevalainen, P.; Marchi, V.; Hellstrom-Westas, L.; Stevenson, N.J.; Vanhatalo, S. An automated bedside measure for monitoring neonatal cortical activity: A supervised deep learning-based electroencephalogram classifier with external cohort validation. Lancet Digit. Health 2022, 4, e884–e892.
  16. Mughal, N.E.; Khan, M.J.; Khalil, K.; Javed, K.; Sajid, H.; Naseer, N.; Ghafoor, U.; Hong, K.S. EEG-fNIRS-based hybrid image construction and classification using CNN-LSTM. Front. Neurorobot 2022, 16, 873239.
  17. Chambon, S.; Galtier, M.N.; Arnal, P.J.; Wainrib, G.; Gramfort, A. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 758–769.
  18. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 2018, 15, 056013.
  19. Dinov, M.; Leech, R. Tracking and optimizing human performance using deep reinforcement learning in closed-loop behavioral-and neuro-feedback: A proof of concept. bioRxiv 2017.
  20. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A comprehensive review of EEG-based brain-computer interface paradigms. J. Neural Eng. 2019, 16, 011001.
  21. Petrosyan, A.; Sinkin, M.; Lebedev, M.; Ossadtchi, A. Decoding and interpreting cortical signals with a compact convolutional neural network. J. Neural Eng. 2021, 18, 026019.
  22. Zubarev, I.; Vranou, G.; Parkkonen, L. MNEflow: Neural networks for EEG/MEG decoding and interpretation. SoftwareX 2022, 17, 100951.
  23. Shen, D.; Deng, Y.; Lin, C.; Li, J.; Lin, X.; Zou, C. Clinical Characteristics and Gene Mutation Analysis of Poststroke Epilepsy. Contrast Media Mol. Imaging 2022, 2022, 4801037.
  24. Hosseini, M.P.; Tran, T.X.; Pompili, D.; Elisevich, K.; Soltanian-Zadeh, H. Multimodal data analysis of epileptic EEG and rs-fMRI via deep learning and edge computing. Artif. Intell. Med. 2020, 104, 101813.
  25. Hosseini, M.P.; Tran, T.X.; Pompili, D.; Elisevich, K.; Soltanian-Zadeh, H. Deep Learning with Edge Computing for Localization of Epileptogenicity using Multimodal rs-fMRI and EEG Big Data. In Proceedings of the 2017 IEEE International Conference on Automatic Computing (ICAC), Columbus, OH, USA, 17–21 July 2017; pp. 83–92.
  26. Hussein, R.; Palangi, H.; Ward, R.; Wang, Z.J. Epileptic seizure detection: A deep learning approach. arXiv 2018, arXiv:1803.09848.
  27. Singh, K.; Malhotra, J. Deep learning based smart health monitoring for automated prediction of epileptic seizures using spectral analysis of scalp EEG. Phys. Eng. Sci. Med. 2021, 44, 1161–1173.
  28. Markowitz, J.E.; Gillis, W.F.; Jay, M.; Wood, J.; Harris, R.W.; Cieszkowski, R.; Scott, R.; Brann, D.; Koveal, D.; Kula, T.; et al. Spontaneous behaviour is structured by reinforcement without explicit reward. Nature 2023, 614, 108–117.
  29. Gröhl, J.; Schellenberg, M.; Dreher, K.; Maier-Hein, L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021, 22, 100241.
  30. Guan, S.; Khan, A.A.; Sikdar, S.; Chitnis, P.V. Limited-View and Sparse Photoacoustic Tomography for Neuroimaging with Deep Learning. Sci. Rep. 2020, 10, 8510.
  31. Cao, R.; Nelson, S.D.; Davis, S.; Liang, Y.; Luo, Y.; Zhang, Y.; Crawford, B.; Wang, L.V. Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy. Nat. Biomed. Eng. 2023, 7, 124–134.