The application of AI-based techniques to research on AD and other diseases requires extensive data sets, composed of hundreds to thousands of entries describing subjects across many clinical and biological variables, which can be employed to develop novel algorithms that analyze the features of the disease. In the last 20 years, many open data-sharing initiatives have grown in the field of neurodegenerative disease research
[32][30] and, in particular, in AD research. Some important data-sharing resources are the Alzheimer’s Disease Genetics Consortium (ADGC,
www.adgenetics.org, accessed on 30 May 2021), Alzheimer’s Disease Sequencing Project (ADSP,
www.niagads.org/adsp/content/home accessed on 30 May 2021), Alzheimer’s Disease Neuroimaging Initiative (ADNI,
http://adni.loni.usc.edu/ accessed on 30 May 2021), AlzGene (
www.alzgene.org accessed on 30 May 2021), Dementias Platform UK (DPUK,
https://portal.dementiasplatform.uk/ accessed on 30 May 2021), Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS,
www.niagads.org/ accessed on 30 May 2021), Global Alzheimer’s Association Interactive Network (GAAIN,
www.gaain.org/ accessed on 30 May 2021), and National Centralized Repository for Alzheimer’s Disease and Related Dementias (NCRAD,
https://ncrad.iu.edu accessed on 30 May 2021)
[33,34][31][32]. Such public databases and repositories collect biological specimens and data from clinical and cognitive tests, lifestyle assessments, neuroimaging, genetics, and CSF and blood biomarkers from normal, cognitively impaired, or demented individuals, which can be combined to apply cutting-edge ML algorithms. Moreover, the National Alzheimer’s Coordinating Center (NACC) has constructed a large relational database for both exploratory and explanatory AD research, using standardized clinical and neuropathological research data
[35][33]; DementiaBank, the component of TalkBank dedicated to data on language in dementia, provides data sets from verbal tasks, such as the Pitt corpus, which contains audio files and text transcriptions from AD subjects and controls
[36,37][34][35].
In the AD field, public and private databases represent the substrate and the source for AI to facilitate a more comprehensive understanding of disease heterogeneity, as well as personalized medicine and drug development.
2. AI for AD Diagnosis
AI technology, mainly ML algorithms, can handle high-dimensional complex systems that exceed the human capacity for data analysis. ML has been used in the CAD of many pathologies, including AD, by combining electronic medical records, NPS tests, brain imaging, and biological markers, together with data obtained from newly developed tools (e.g., wearable sensors) for the assessment of executive functions (
Figure 2). Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), 18F-fluorodeoxyglucose-Positron Emission Tomography (FDG-PET), and Diffusion Tensor Imaging (DTI) provide detailed information about brain structure and function, allowing for the identification of features supporting the diagnosis, such as atrophy, amyloid deposition, or microstructural damage
[40,41][38][39]. Moreover, neuroimaging can identify pathological processes other than AD that can lead to cognitive decline (e.g., brain tumors or cerebrovascular disease). Several studies have demonstrated that markers of primary AD pathology (CSF Aβ1-42, total tau and p-tau181, amyloid-PET), neurodegeneration (structural MRI, FDG-PET), or biomarker combinations can be integrated into complex tools for diagnostic or predictive purposes
[6,42,43,44][40][41][42][43]. Of note, polymorphism in the apolipoprotein E (APOE) gene is the strongest genetic risk factor for the sporadic form of AD and has added predictive value: the APOEε4 allele confers an increased risk and an earlier age of onset, while the APOEε2 allele confers a decreased risk, relative to the common APOEε3 allele
[45][44].
Figure 2. Schematic representation of how CAD tools function. After collection, data are processed to make them ready for analysis using AI-based techniques. The resulting output is the assignment of a class, with potential value for diagnostic evaluation.
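To make the workflow in Figure 2 concrete, the following minimal sketch (in Python with scikit-learn, on synthetic data; the feature set, sample size, and classifier choice are illustrative assumptions, not those of any cited study) walks through the same three stages: data preparation, AI-based analysis, and assignment of a class.

```python
# Minimal CAD-style pipeline sketch: preprocess collected features,
# train a classifier, and assign a diagnostic class (AD vs. NC).
# Synthetic data stand in for real clinical/imaging variables.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # 200 subjects, 20 illustrative features
y = rng.integers(0, 2, size=200)  # 0 = NC, 1 = AD (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

cad = Pipeline([
    ("scale", StandardScaler()),                   # data "elaboration" step
    ("clf", SVC(kernel="rbf", probability=True)),  # AI-based analysis
])
cad.fit(X_train, y_train)
predicted_class = cad.predict(X_test)  # class assignment for new subjects
print("Test accuracy:", cad.score(X_test, y_test))
```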
The first CAD tools for AD were constructed using AI methods for the analysis of brain imaging
In one study analyzing MRI data from the OASIS database
[48][47], a feature extraction and selection method called “eigenbrain”, based on PCA (see glossary), was used to capture the characteristic anatomical changes between AD and NC; namely, severe atrophy of the cerebral cortex, enlargement of the ventricles, and shrinkage of the hippocampus. The SVM algorithm applied to these features achieved a mean accuracy of 92.36% in an automated classification system for AD diagnosis based on MRI data
[49][48]. In another example, a DL algorithm based on brain FDG-PET was developed for the early prediction of AD, achieving 82% specificity and 100% sensitivity an average of 75.8 months prior to the final diagnosis
[50][49].
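As a hedged illustration of the eigenbrain-style approach described above, a PCA-plus-SVM pipeline can be sketched as follows (synthetic "voxel" data; the number of components, kernel, and sample size are illustrative assumptions rather than the settings of the original study).

```python
# Eigenbrain-style sketch: PCA on flattened MRI-like vectors followed by an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4096))  # 150 subjects, flattened 64x64 MRI slices (synthetic)
y = rng.integers(0, 2, size=150)  # 0 = NC, 1 = AD (synthetic labels)

eigenbrain_svm = Pipeline([
    ("pca", PCA(n_components=30)),  # "eigenbrains": principal components of the images
    ("svm", SVC()),                 # kernel choice is illustrative
])
scores = cross_val_score(eigenbrain_svm, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```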
The majority of ML models for distinguishing AD from NC are trained on neuroimaging data, which offer high accuracy
[51][50] but are limited by their high cost and lack of availability in non-specialized centers. A large group of studies has focused on the identification of fluid marker panels as potential screening tests. In addition to Aβ- and tau-related biomarkers, novel candidate markers reflecting other mechanisms of AD pathology have been investigated in experimental and meta-analysis studies, in order to optimize predictive modeling.
A recent study applied ML algorithms to evaluate data on novel biomarkers retrieved from the PubMed, Cochrane Systematic Reviews, and Cochrane Collaboration Central Register of Controlled Clinical Trials databases. The experimental and review studies considered had investigated biomarkers for dementia or AD using ML algorithms, including SVM, logistic regression, random forest, and naïve Bayes. The panel included indices of synaptic dysfunction and loss, neuroinflammation, and neuronal injury (e.g., neurofilament light; NFL). An algorithm developed by integrating all the data from such fluid biomarkers has been shown to accurately predict AD, achieving state-of-the-art results
[52][51].
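The classifier families named above can be compared on a fluid-biomarker panel with cross-validated AUC, as in the sketch below (synthetic values; the marker names are illustrative and do not reproduce the panel of the cited work).

```python
# Cross-validated comparison of the classifier families mentioned above
# on a synthetic fluid-biomarker panel (marker names are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

markers = ["NFL", "neurogranin", "YKL-40", "p-tau181", "Abeta42"]
rng = np.random.default_rng(2)
X = rng.normal(size=(300, len(markers)))  # 300 subjects (synthetic values)
y = rng.integers(0, 2, size=300)          # 0 = control, 1 = AD (synthetic)

models = {
    "SVM": SVC(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean CV AUC = {auc:.2f}")
```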
With the aim of designing a blood-based test for identifying AD, a study in the European Medical Information Framework for Alzheimer’s Disease biomarker discovery cohort used both ML and DL models. The data used for modeling included 883 plasma metabolites assessed in 242 cognitively normal individuals and 115 patients with AD-type dementia, and the analysis demonstrated that the panel of plasma markers had good discriminatory power, with an AUC in the range of 0.85–0.88, and the potential to match the AUC of well-established AD CSF biomarkers. The authors concluded that such a panel could be included in clinical research as part of the diagnostic work-up
[53][52].
Moreover, AI has the potential to integrate data obtained through the use of new technologies, such as devices designed for the evaluation of language and verbal fluency or executive functions in healthy or mildly impaired individuals. A system for acoustic feature extraction over speech segments in AD patients was developed by analyzing data from DementiaBank’s Pitt corpus data set
The acoustic features of patients’ speech have the advantage of being cost-effective and non-invasive, compared to imaging or blood biomarkers, and can be integrated to develop screening tools for MCI and AD. Another system exploited a digital pen to record drawing dynamics, which can detect slight signs of mild impairment in asymptomatic individuals
[54][53]. This pen was evaluated in patients performing the Clock Drawing Test (CDT) and allowed for the identification of subtle to mild cognitive impairment, providing an inexpensive and efficient tool with promising clinical and pre-clinical applications.
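As a purely illustrative sketch of the acoustic-feature extraction described above (not the pipeline actually applied to the Pitt corpus), summary statistics of MFCCs and a rough pause measure can be computed from a recording and later fed to any of the classifiers discussed in this section; the file name and threshold are placeholders.

```python
# Sketch of acoustic feature extraction from a speech recording
# (file path, threshold, and feature choices are illustrative placeholders).
import numpy as np
import librosa

def acoustic_features(wav_path: str) -> np.ndarray:
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Rough pause measure: fraction of low-energy frames.
    rms = librosa.feature.rms(y=signal)[0]
    pause_ratio = float(np.mean(rms < 0.1 * rms.max()))
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), [pause_ratio]])

# features = acoustic_features("participant_001.wav")  # placeholder file name
# A feature matrix built this way can then be passed to an SVM or random forest.
```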
3. Prediction of MCI-to-AD Conversion
Diagnosing probable AD in a subject with moderate-to-severe cognitive decline or evidence of cortical atrophy is usually not difficult for a skilled neurologist when appropriate data are available. Therefore, it is not surprising that an AI model can solve the task of AD vs. NC subject classification with high accuracy when taking into account NPS test results or neuroimaging data
[55][54]. To date, several predictive models have been developed
[55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72][54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71], yielding peak accuracy values of 100% in AD vs. NC classification
In contrast, a much more challenging task for AI is to identify individuals with subjective or mild impairment who will develop AD dementia, as opposed to those with stable MCI or MCI not due to AD, given the subtle differences and overlapping symptoms in the clinical and biological variables defining these groups in the early phases
[73][72].
Algorithms designed to predict MCI-to-AD conversion aim to classify MCI patients into two groups: those who will convert to AD (MCI-c) within a certain time frame (usually 3 years) and those who will not convert (MCI-nc). Yearly, about 15% of MCI patients convert to AD
[60,74][59][73] and, thus, early and timely identification is crucial in order to improve outcomes and slow the progression of the disease.
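Operationally, constructing the MCI-c/MCI-nc labels from longitudinal visit records amounts to checking whether an AD diagnosis appears within the chosen window (36 months in this sketch); the column names below are illustrative assumptions, not the schema of any specific cohort.

```python
# Label MCI patients as converters (MCI-c) or non-converters (MCI-nc)
# within a 36-month window. Column names are illustrative assumptions.
import pandas as pd

visits = pd.DataFrame({
    "subject_id": [1, 1, 2, 2, 3, 3],
    "month":      [0, 24, 0, 36, 0, 12],
    "diagnosis":  ["MCI", "AD", "MCI", "MCI", "MCI", "AD"],
})

WINDOW = 36  # months

def label_conversion(group: pd.DataFrame) -> str:
    converted = ((group["diagnosis"] == "AD") & (group["month"] <= WINDOW)).any()
    return "MCI-c" if converted else "MCI-nc"

labels = visits.groupby("subject_id").apply(label_conversion)
print(labels)  # subjects 1 and 3 -> MCI-c, subject 2 -> MCI-nc
```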
Several AI-based models test the accuracy of combinations of non-invasive predictors, as well as socio-demographic and clinical data, in order to develop effective screening or predictive tools.
Using the ADNI data set, socio-demographic characteristics, clinical scale ratings, and NPS test scores have been used to train different supervised ML algorithms, which were then combined into an ensemble model (see glossary). This ensemble learning application demonstrated high predictive performance, with an AUC of 0.88 in predicting MCI-to-AD conversion
[62][61], and has the advantage of using only non-invasive and easily collectable predictors, rather than neuroimaging or CSF biomarkers, thus enhancing its potential use and diffusion in clinical practice.
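A soft-voting ensemble over such non-invasive tabular predictors can be sketched as follows (synthetic data; the base learners and hyperparameters are illustrative and do not reproduce those of the cited study).

```python
# Ensemble sketch over non-invasive tabular predictors (synthetic data):
# several base learners combined by soft voting, evaluated with cross-validated AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 12))    # e.g., age, education, NPS scores (synthetic)
y = rng.integers(0, 2, size=400)  # 0 = MCI-nc, 1 = MCI-c (synthetic labels)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across base learners
)
auc = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean()
print(f"Ensemble mean CV AUC: {auc:.2f}")
```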
As for CAD systems, both MRI and PET data can be independently modeled by ML algorithms, yielding good predictive accuracy; however, integrating neuroimaging data with other variables, such as cognitive measures, genetic factors, or biochemical changes, can significantly enhance model performance, as is generally expected when integrating multi-modal data (
Figure 3)
[32,64,66,71,72][30][63][65][70][71]. For example, the integration of MRI with other modalities, such as PET, CSF biomarkers, and genomic data, reached 84.7% accuracy in an MCI-c vs. MCI-nc classification task. When only single-modality data were used, the accuracy of the model was lower than that of the all-modalities implementation
[64][63].
Figure 3. Predictive ML ensemble method for the conversion of MCI to AD based on multi-modal data (i.e., socio-demographic, clinical, NPS, fluid biomarker, and imaging data). The system uses a feature transformation and selection phase followed by data integration, allowing for more efficient use of variables. The final ensemble of several different ML models provides accurate final predictions of AD or AD conversion.
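At its simplest, the multi-modal integration summarized in Figure 3 concatenates per-modality feature blocks into a single design matrix; the sketch below, with synthetic blocks standing in for MRI, PET, CSF, and genetic features, compares single-modality models against an all-modality model.

```python
# Compare single-modality vs. concatenated multi-modal features
# (all blocks are synthetic stand-ins for MRI, PET, CSF, and genetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 250
modalities = {
    "MRI":     rng.normal(size=(n, 90)),  # e.g., regional volumes (synthetic)
    "PET":     rng.normal(size=(n, 90)),  # e.g., regional uptake values (synthetic)
    "CSF":     rng.normal(size=(n, 3)),   # Abeta42, tau, p-tau (synthetic)
    "Genetic": rng.normal(size=(n, 5)),   # e.g., APOE allele counts (synthetic)
}
y = rng.integers(0, 2, size=n)            # 0 = MCI-nc, 1 = MCI-c (synthetic)

model = make_pipeline(StandardScaler(), SVC())
for name, X_mod in modalities.items():
    acc = cross_val_score(model, X_mod, y, cv=5).mean()
    print(f"{name} only: accuracy = {acc:.2f}")

X_all = np.hstack(list(modalities.values()))  # simple feature concatenation
acc_all = cross_val_score(model, X_all, y, cv=5).mean()
print(f"All modalities: accuracy = {acc_all:.2f}")
```

In practice, simple concatenation lets the block with the most features dominate, which motivates the representation-learning approaches discussed next.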
Different data modalities reflect complementary aspects of AD-related pathology and can be concatenated into multi-modal feature vectors used as input to an ML model for classification
[75,76,77][74][75][76]. Nevertheless, the modality with the largest number of features may weigh more than the others when training the algorithm, introducing bias into the interpretation. To overcome this limitation and extract multi-modal feature representations, DL architectures can be used; they do not require manual feature engineering, owing to their ability to non-linearly transform input variables
[78][77].
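A common way to prevent a feature-rich modality from dominating is to give each modality its own encoder that maps it to an embedding of fixed size before fusion; the PyTorch sketch below illustrates this pattern (layer sizes, embedding dimension, and fusion scheme are illustrative assumptions, not the architecture of any cited model).

```python
# Multi-branch DL sketch: each modality gets its own encoder mapping it to a
# fixed-size embedding before fusion, so no modality dominates by raw feature count.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, modality_dims, embed_dim=32, n_classes=2):
        super().__init__()
        # One small encoder per modality (dimensions are illustrative).
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, embed_dim))
            for d in modality_dims
        ])
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim * len(modality_dims), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each of shape (batch, dim_i)
        embeddings = [enc(x) for enc, x in zip(self.encoders, inputs)]
        fused = torch.cat(embeddings, dim=1)  # fusion of equal-size embeddings
        return self.classifier(fused)

# Example with synthetic sizes: MRI (90 features), PET (90), CSF (3), genetics (5).
net = MultiModalNet([90, 90, 3, 5])
batch = [torch.randn(8, d) for d in (90, 90, 3, 5)]
logits = net(batch)
print(logits.shape)  # torch.Size([8, 2])
```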
A DL model for both MCI-to-AD prediction and AD vs. NC classification was trained on data from the ADNI database, including demographic, NPS, and genetic data, APOE polymorphism, and MRI. The model processed all the data in a multi-modal feature extraction phase, aiming to combine the data and obtain a classification output. The AD vs. NC classification task achieved performance close to 100%, as expected, whereas, for the MCI-to-AD prediction task, the AUC and accuracy were 0.925 and 86%, respectively
[55][54].
Some models have transferred the knowledge acquired in AD vs. NC classification to a prediction task, namely MCI-to-AD conversion, using transfer learning methodology (see glossary). A system with a high capacity for discriminating AD from NC by analyzing three-dimensional MRI data was recently tested for MCI-to-AD prediction, reaching high accuracy (82.4%) and AUC (0.83). This finding demonstrated that information from related domains can help AI to solve tasks targeted at the identification of patients at risk of developing AD-related dementia
[56][55].
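Transfer learning of this kind typically reuses the representation learned on the source task (AD vs. NC) and retrains only a small head on the target task (MCI-c vs. MCI-nc); a hedged sketch of that pattern is given below, with a toy encoder standing in for the cited three-dimensional MRI model.

```python
# Transfer-learning sketch: reuse an encoder trained on AD vs. NC, freeze it,
# and fine-tune only a new head for MCI-c vs. MCI-nc. The network is a toy stand-in.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
# ... assume `encoder` was already trained on the AD vs. NC source task ...

for param in encoder.parameters():
    param.requires_grad = False            # freeze the transferred representation

head = nn.Linear(32, 2)                    # new classifier for MCI-c vs. MCI-nc
model = nn.Sequential(encoder, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head is updated
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 128)                   # synthetic MCI feature batch
y = torch.randint(0, 2, (16,))             # synthetic conversion labels
for _ in range(5):                         # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print("final loss:", loss.item())
```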
An interpretation system was embedded within a classification model for both the early diagnosis of AD and MCI-to-AD prediction, in order to increase its impact in clinical practice. This model integrates 11 data modalities from ADNI, including NPS tests, neuroimaging, demographics, and electronic health record data (e.g., laboratory blood tests, neurological exams, and clinical symptoms). The model outputs a sentence in natural language explaining how the various attributes contributed to the classification output. The model achieved good performance while balancing the accuracy–interpretability trade-off in both the AD classification and MCI-to-AD prediction tasks, allowing for actionable decisions that can enhance physician confidence and contributing to the realization of explainable AI (XAI) in healthcare
[79][78].
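One generic way to approximate such an interpretation step (not the method of the cited work) is to rank feature contributions, for example with permutation importance, and render the top attributes into a sentence, as in the sketch below (feature names and data are illustrative).

```python
# Hedged XAI sketch: rank features by permutation importance and emit a
# natural-language explanation. Feature names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["MMSE", "hippocampal_volume", "age", "APOE_e4_count", "p_tau181"]
rng = np.random.default_rng(5)
X = rng.normal(size=(300, len(feature_names)))
y = rng.integers(0, 2, size=300)  # 0 = NC/stable, 1 = AD/converter (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

top = np.argsort(importances.importances_mean)[::-1][:3]
explanation = ("The prediction was driven mainly by "
               + ", ".join(feature_names[i] for i in top) + ".")
print(explanation)
```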
Most AI models for AD are built on biomarkers such as brain imaging, often with the use of Aβ and tau ligands (Aβ- or tau-PET), as well as biomarkers in CSF, which have high accuracy and predictive value; however, their invasive nature, high cost, and limited availability restrict their use to highly specialized centers
[80,81,82,83][79][80][81][82]. A possible turning point has emerged with the recent development of ultra-sensitive methods for the detection of brain-derived proteins in blood, making it possible to measure NFL
[84][83], Aβ42, and Aβ40
[85,86][84][85], and tau and P-tau in plasma
[87,88][86][87]. The accuracy of plasma P-tau combined with other non-invasive biomarkers for predicting future AD dementia was recently evaluated in patients with mild cognitive symptoms from ADNI and the Swedish BioFINDER cohort, including patients with repeated examinations and clinical assessments over a period of 4 years to ensure a clinical diagnosis (
https://biofinder.se/ accessed on 30 May 2021). The prediction involved discriminating progression to AD dementia not only from stable cognitive symptoms, but also from progression to other forms of dementia. The accuracy of the plasma biomarkers was compared with that of the corresponding markers in CSF, and with the diagnostic prediction of expert physicians in memory clinics based on extensive clinical assessment, cognitive testing, and structural brain imaging at baseline
[88][87]. Plasma P-tau in combination with the other non-invasive markers showed higher accuracy in predicting AD dementia within 4 years than clinician-based prediction (AUC of 0.89–0.92 vs. 0.72). In addition, the biomarker combination showed similarly high predictive accuracy in plasma and CSF, making plasma an effective alternative to CSF and providing a tool to improve diagnostic potential in clinical practice
[88][87].
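Combinations of this kind are often modeled as a regularized logistic regression over plasma markers plus basic cognitive and genetic variables; the sketch below shows that pattern on synthetic data (the variable set and modeling details of the cited studies are not reproduced).

```python
# Sketch: combine plasma P-tau with other non-invasive predictors in a
# logistic model and estimate 4-year conversion AUC (synthetic data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 350
data = pd.DataFrame({
    "plasma_p_tau":    rng.normal(size=n),       # synthetic, standardized units
    "plasma_nfl":      rng.normal(size=n),
    "memory_score":    rng.normal(size=n),
    "executive_score": rng.normal(size=n),
    "apoe_e4_count":   rng.integers(0, 3, size=n),
})
y = rng.integers(0, 2, size=n)  # 1 = progressed to AD dementia within 4 years (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, data, y, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC: {auc:.2f}")
```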