Arterial hypertension (AH) is a progressive problem whose importance grows as the average age of the world population increases. The potential role of artificial intelligence (AI) in its prevention and treatment is now firmly recognized. Indeed, applying AI enables personalized medicine and treatment tailored to each patient.
1. The Principles of AI
AI is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence and of increasing their knowledge through an automatic learning process
[1][2][3]. Therefore, AI is an interdisciplinary science with multiple approaches that incorporate reasoning (making inferences using data), natural language processing (ability to read and understand human languages), planning (ability to act autonomously and flexibly to create a sequence of actions to achieve a final goal), and machine learning (ML) (algorithms that develop automatically through experience)
[1]. Specifically, AI based on ML techniques
[4] is used to perform predictive analyses by examining mechanisms and associations among given variables from training datasets, which may consist of a variety of data inputs, including wearable device data, multi-omics, and standardized electronic health records (EHRs)
[5][6]. Essentially, in ML, the rules would be learned by algorithms directly from a set of data rather than being encoded by hand
[7]; consequently, by using specific algorithms, ML can establish complex relationships among data, rules governing a system, behavioral patterns, and classification schemes
[4]. The classic ML process begins with data acquisition, continues with feature extraction, algorithm selection, and model development, and leads to model evaluation and application
[8] (
Figure 1). Supervised and unsupervised learning are the most popular approaches employed in ML. Supervised learning is used to predict unknown outputs from a known labeled dataset, hypotheses, and appropriate algorithms, such as an artificial neural network (ANN), support vector machine (SVM), and K-nearest neighbor. The choice of the technique depends on the dataset’s features, number of variables, learning curve, training, and computation time
[9][10]. Specifically, supervised learning provides predictions from big data analytics, but it requires manually labeled datasets, and biases can arise from the dataset itself or from the algorithms
[6].
Figure 1. The typical ML workflow in healthcare research.
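As an illustration of this supervised workflow, the following is a minimal sketch in Python using scikit-learn and a purely synthetic dataset; the features, labels, and model choice (the K-nearest neighbor algorithm mentioned above) are invented for demonstration and carry no clinical meaning:

```python
# Minimal sketch of the supervised ML workflow: data acquisition ->
# feature extraction -> algorithm selection -> model development ->
# evaluation. Synthetic data only, not clinical data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Acquired" data: 200 subjects, 4 invented numeric features
X = rng.normal(size=(200, 4))
# Known labels: positive class when a linear combination of features is high
y = (X @ np.array([0.2, 0.3, 1.0, 0.8]) > 0).astype(int)

# Model development on a training split, evaluation on a held-out split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The held-out test split stands in for the "model evaluation" stage of the workflow: performance is always measured on data the model did not see during training.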
On the other hand, in unsupervised learning techniques, there is no information on the features to be predicted; consequently, these techniques must learn the relationships among the elements of a dataset and classify them without relying on predefined categories or labels
[2]. Therefore, they look for structures, patterns, or characteristics in the source data that can be reproduced in new datasets
[4]. A major branch of ML mimics the nervous system’s structure by creating ANNs, which are networks of units called artificial neurons organized into layers
[11]. The system learns to generate patterns from data entered in the training session
A specific ANN, consisting of multiple layers that allow for improved predictions from data, is known as a deep neural network (DNN). Its performance can improve as the size of the training dataset grows
[7]. Still, it largely depends on the distribution gap between training and test datasets: a highly divergent test dataset would test an ML prediction model on a feature space that it was not trained on, resulting in poor testing and results; additionally, a highly overlapping test dataset would not test the model for its generalization ability
[12]. Specifically, deep learning (DL) employs algorithms such as DNNs and convolutional neural networks (CNNs)
[4]. Nevertheless, despite its ability to use unlabeled datasets, unsupervised learning still has some limitations, such as the limited generalizability of cluster patterns identified from a single cohort of patients, which can lead to overfitting to the training dataset, and the need for validation in different large datasets
[6]. In the real world, AI can provide tools to improve and extend the effectiveness of clinical interventions
[4]. For example, incorporating AI into hypertension management could improve every stage of patient care, from diagnosis to therapy; consequently, the clinical practice could become more efficient and effective.
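The unsupervised approach described above can be sketched in the same way; here, assuming scikit-learn is available, k-means clustering discovers two synthetic patient groups without being given any labels (the feature space and group locations are invented for illustration):

```python
# Sketch of unsupervised learning: no labels are passed to the algorithm;
# the groups are discovered purely from the structure of the data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two well-separated synthetic groups in a 2-D feature space
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

# fit() receives only X: clusters emerge without categories or labels
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
print("cluster sizes:", np.bincount(labels))
```

The caveat noted above applies even to this toy case: clusters found in one cohort must be validated on independent data before any clinical interpretation.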
2. AI in the Measurement of Blood Pressure
The commonly used methods for blood pressure (BP) monitoring are either non-invasive cuff-based oscillometric measurement or invasive arterial line manometric measurement. The former provides only intermittent readings, because a pause of at least 1–2 min between two BP measurements is necessary to avoid measurement errors
[13][14]; moreover, the inflation of the cuff may disturb the patient, and these disturbances can themselves alter BP
[15]. On the other hand, invasive arterial line manometric measurement has an elevated risk of complications; consequently, these unsolved issues drive the search for new non-invasive BP monitoring techniques.
In this scenario, AI algorithms could help improve precision, accuracy, and reproducibility in diagnosing and managing AH using emerging wearable technologies. Alternatives for monitoring BP are cuff-based devices (such as volume-clamp devices or wrist-worn inflatable cuffs) and cuffless devices that use mechanical and optical sensors to determine features of the blood pulse waveform shape (for example, tonometry
[16], photoplethysmography
[17], and capacitance
[14]). In particular, cuffless blood pressure monitoring has been evaluated using a two-step algorithm with a single-channel photoplethysmograph (PPG); this system achieved accuracy meeting the AAMI/ISO standard in all blood pressure categories except systolic hypotension
[18]. Independently of the acquisition method, the received signals are preprocessed and sent for feature extraction and selection. Subsequently, the signals and the gathered data can be used to feed ML to obtain systolic BP (SBP) and diastolic BP (DBP) estimations from the raw signals
[17] (
Figure 2).
Figure 2. Block diagram of the blood pressure estimation process using ML techniques. In detail, the raw signals are prepared through normalization, the correction of baseline wandering due to respiration, and finally, signal filtration. Specifically, to construct a dataset for BP estimation models, it is necessary to accurately extract the features of the original waveform (and underlying demographic and statistical data) and select effective features, improving the generalization and reducing the risk of overfitting the algorithms. PPG: photoplethysmograph; ML: machine learning.
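The preprocessing stage summarized in Figure 2 can be sketched as follows; the synthetic signal, sampling rate, and filter cut-offs are illustrative assumptions, not values taken from the cited studies:

```python
# Sketch of PPG preprocessing: a synthetic pulse-like signal contaminated
# by respiratory baseline wander is band-pass filtered and normalized.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 125.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)          # ~72 bpm cardiac component
wander = 0.5 * np.sin(2 * np.pi * 0.25 * t)  # respiratory baseline drift
raw = pulse + wander

# Band-pass 0.5-8 Hz (illustrative cut-offs): removes baseline wandering
# while preserving the cardiac pulse; filtfilt gives zero phase distortion
b, a = butter(3, [0.5 / (fs / 2), 8.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, raw)

# Normalization to zero mean and unit variance
normalized = (filtered - filtered.mean()) / filtered.std()
print("correlation with cardiac component:",
      round(float(np.corrcoef(normalized, pulse)[0, 1]), 3))
```

After this stage, the cleaned waveform is ready for the feature extraction and selection steps described in the caption.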
Since the volume and distension of arteries can be related to the pressure in the arteries, the PPG signal produces pulse waveforms that are similar to pressure waveforms created by tonometry. PPG offers the added advantage that it can be measured continuously using miniature, inexpensive, and wearable optical electronics
[19]. However, PPG signal measurements are not without technical challenges; indeed, they require noise elimination, multisite measurement, multi-photodetector development, event detection, event visualization, different models, the accurate positioning of sensors, and the calculation of propagation distances, without neglecting the impact of the variable pre-ejection period (PEP) on pulse wave velocity timing
[19]. Moreover, there are several PPG-based methods for estimating BP: the PPG signal alone and its derivatives, electrocardiogram (ECG) and PPG signals, ballistocardiogram (BCG) and PPG signals, and phonocardiogram (PCG) and PPG signals; each has advantages and limitations
[20][21][22][23][24], which, however, are beyond this discussion.
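As a minimal sketch of the feature-extraction step applied to a PPG-like waveform (the synthetic signal and detection thresholds are assumptions for illustration only):

```python
# Sketch of feature extraction: systolic peaks are detected in a synthetic
# PPG-like signal and simple per-beat features are derived from them.
import numpy as np
from scipy.signal import find_peaks

fs = 125.0                               # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm synthetic pulse wave

# Detect systolic peaks at least 0.4 s apart (illustrative threshold)
peaks, _ = find_peaks(ppg, height=0.5, distance=int(0.4 * fs))

beat_intervals = np.diff(peaks) / fs     # seconds between successive beats
heart_rate = 60.0 / np.mean(beat_intervals)
print(f"detected {len(peaks)} beats, ~{heart_rate:.0f} bpm")
```

Real PPG waveforms would yield richer features (amplitudes, rise times, dicrotic notch positions), but the principle is the same: reduce the raw waveform to a compact feature vector for the ML model.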
ML Algorithms in BP Estimation
To adapt to the nonlinearity of the dataset and to model the relationship between the extracted features and the estimated BP, several ML approaches are available [25]:

- Gaussian process regression: a Bayesian regression approach that yields a probability distribution over all possible output values [17][26].
- Ensemble trees: a set of weak learners is combined to create a single strong learner [27].
- Multivariate linear regression: a method to analyze the direction and strength of the correlation between multiple independent variables and the dependent variable [15][28][29].
- Support vector regression: a non-parametric algorithm that uses a kernel function (kernel methods are a class of algorithms for pattern analysis whose general task is to find and study relations in datasets) [30][31][32][33].
- Random forest, gradient boosting, and adaptive boosting regression [30][31].

After hyper-parameter optimization, it is necessary to evaluate the performance of the ML algorithms through the agreement between the predicted data and the ground-truth data. The difference between the reference and the estimated BP can be assessed using the following criteria: the mean absolute error, the mean squared error, and the correlation coefficient [17]. The role of hyper-parameter optimization is to lower the prediction error, while the mean absolute error and its standard deviation serve as the model’s predictive performance indicators.
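The evaluation criteria named above can be computed directly; the reference and estimated BP values below are invented solely to illustrate the formulas:

```python
# Sketch of the evaluation criteria: mean absolute error, mean squared
# error, and correlation coefficient between reference and estimated SBP.
# The values are synthetic, for illustration only.
import numpy as np

reference = np.array([118.0, 126.0, 135.0, 142.0, 155.0])   # mmHg
estimated = np.array([121.0, 124.0, 138.0, 140.0, 151.0])   # mmHg

errors = estimated - reference
mae = np.mean(np.abs(errors))                # mean absolute error
mse = np.mean(errors ** 2)                   # mean squared error
r = np.corrcoef(reference, estimated)[0, 1]  # Pearson correlation
print(f"MAE={mae:.1f} mmHg  MSE={mse:.1f}  r={r:.3f}")
```

A lower MAE/MSE and a correlation coefficient close to 1 indicate better agreement between the model's estimates and the ground-truth measurements.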
Specifically, these AI-based systems could help continuously monitor BP using wearable technologies and improve AH management and outcomes
Starting from the input (raw signals), researchers can reach the output (estimated SBP and DBP) through ML algorithms
[6]. In particular, BP can be estimated from a PPG signal obtained from a smartphone or a smartwatch by using DL
[36][37].
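A minimal sketch of this input-to-output path, assuming scikit-learn and invented PPG-derived features (this is not a validated clinical model, and the feature-to-BP relationships are fabricated for demonstration):

```python
# Sketch mapping waveform-derived features to SBP/DBP estimates with a
# random forest regressor (one of the approaches listed earlier).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
# Invented PPG-derived features (e.g., pulse width, rise time, amplitude)
X = rng.normal(size=(300, 3))
# Synthetic SBP/DBP targets with an assumed noisy dependence on features
sbp = 120 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=300)
dbp = 80 + 4 * X[:, 2] + rng.normal(scale=2, size=300)
y = np.column_stack([sbp, dbp])

# Train on one split, evaluate estimation error on a held-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"mean absolute error: {mae:.1f} mmHg")
```

Any real system would additionally need calibration against cuff or arterial-line references and validation against the AAMI/ISO accuracy standard mentioned earlier.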
Moreover, future studies on AI and wearable devices need to confirm the above results and provide conclusive clinical data supporting the combined use of AI and wearable-device data to perform accurate BP measurements, which may offer an alternative to current oscillometric methods
[6].
This entry is adapted from the peer-reviewed paper 10.3390/jcdd10020074