Continuous advancements in computing technology and artificial intelligence have led to improvements in driver monitoring systems. Numerous experimental studies have collected real driver drowsiness data and applied various artificial intelligence algorithms and feature combinations with the goal of significantly enhancing the real-time performance of these systems.
| Nomenclature | |
|---|---|
| NHTSA | National Highway Traffic Safety Administration |
| DDD | Driver drowsiness detection |
| IoT | Internet of things |
| ML | Machine learning |
| PERCLOS | Percentage of eyelid closure |
| EAR | Eye aspect ratio |
| SVM | Support vector machine |
| KNN | K-nearest neighbor |
| RF | Random forest |
| ANN | Artificial neural network |
| CNN | Convolutional neural network |
| EEG | Electroencephalography |
| ECG | Electrocardiography |
| PPG | Photoplethysmography |
| HRV | Heart rate variability |
| EOG | Electrooculography |
| EMG | Electromyography |
| HF | High frequency |
| LF | Low frequency |
| LF/HF | Low-to-high frequency ratio |
| SWA | Steering wheel angle |
| ANFIS | Adaptive neuro-fuzzy inference system |
| MOL | Multilevel ordered logit |
| BPNN | Back-propagation neural network |
| NIRS | Near-infrared spectroscopy |

| Features | Description |
|---|---|
| Blink frequency [18] | The number of times an eye closes over a specific period of time. |
| Maximum closure duration of the eyes [18] | The maximum time the eye was closed. However, it can be risky to delay detection of an extended eye closure that indicates a drowsy driver. |
| Percentage of eyelid closure (PERCLOS) [19] | The percentage of time (per minute) in which the eye is 80% closed or more. |
| Eye aspect ratio (EAR) [20] | EAR reflects the eye's degree of openness. The EAR value drops toward zero when the eyes are closed, whereas it remains approximately constant while the eye is open; the EAR therefore detects the moment of eye closure. |
| Yawning frequency [21] | The number of times the mouth opens over a specific period of time. |
| Head pose [22] | A figure that describes the driver's head movements. It is determined by counting the video segments that show a large deviation of the three Euler angles of the head pose (nodding, shaking, and tilting) from their regular positions. |

| Biological Signals | Description |
|---|---|
| Electroencephalography (EEG) [23] | EEG is a monitoring method that records the brain's electrical activity from the scalp. It represents the microscopic activity of the brain's surface layer underneath the scalp. Based on their frequency ranges (0.1 Hz–100 Hz), these signals are categorized as delta, theta, alpha, beta, and gamma. |
| Electrocardiography (ECG) [24] | ECG signals represent the electrical activity of the heart and are acquired using electrodes placed on the skin. ECG monitors heart functionality, including heart rhythm and rate. |
| Photoplethysmography (PPG) [25] | PPG signals are used to detect blood volume changes. These signals are measured at the skin's surface using a pulse oximeter and are often used for heart rate monitoring. |
| Heart rate variability (HRV) [26] | HRV signals are used to monitor the changes in the cardiac cycle, including the heartbeats. |
| Electrooculography (EOG) [27] | EOG signals are used to measure the corneo-retinal standing potential between the front and back of the human eye and record the eye movements. |
| Electromyography (EMG) [28] | EMG signals are the collective electric signals produced from muscle movements. |
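Two of the image-based features above have simple, widely used formulations. A minimal sketch, assuming the common six-landmark EAR definition and an 80% closure threshold for PERCLOS (the cited systems may use different landmark sets or windows):

```python
# Sketch of two image-based drowsiness features: EAR from six eye
# landmarks, and PERCLOS from a per-frame eyelid-closure series.
# The landmark ordering and thresholds are common conventions, not
# necessarily those of the cited studies.
import math


def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) points around one eye, ordered so that
    p1/p4 are the horizontal corners and (p2, p6), (p3, p5) are the
    vertical pairs."""
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # EAR stays roughly constant while the eye is open and drops
    # toward zero as the eyelids close.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def perclos(closure_fractions, threshold=0.8):
    """Percentage of frames in the observation window (e.g. one minute)
    in which the eyelid is at least 80% closed."""
    closed = sum(1 for c in closure_fractions if c >= threshold)
    return 100.0 * closed / len(closure_fractions)
```

For example, an open eye with corners at (0, 0) and (3, 0) and vertical pairs one unit apart gives an EAR of about 0.67, while a half-closed-frame series such as `[0.9, 0.1, 0.85, 0.2]` yields a PERCLOS of 50%.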
| Ref. | Vehicle Parameters | Extracted Features | Classification Method | Description | Quality Metric | Dataset |
|---|---|---|---|---|---|---|
| [30] | Steering wheel | SWA | RF | Used SWA as input data and compared it with PERCLOS. The RF algorithm was trained as a series of decision trees, each with randomly selected features. | Accuracy: RF steering model: 79%; PERCLOS: 55% | Prepared their own dataset |
| [31] | Lateral distance | Statistical features, derived from the time and wavelet domains, relevant to the lateral distance and lane trajectory | SVM and neural network | Detection was based on lateral distance. Additionally, data on the driver's facial and head movements were collected to serve as ground truth for the vehicle data. | Accuracy: over 90% | Prepared their own dataset |
| [32] | Steering wheel | SWA | Specially designed binary decision classifier | Used SWA data for online fatigue detection. The alertness state is determined using a specially designed classifier. | Accuracy: drowsy: 84.85%; awake: 78.01% | Prepared their own dataset |
| [33] | Steering wheel | SWA, steering wheel velocity | ANFIS for feature selection, PSO for optimizing the ANFIS parameters, and SVM for classification | Detection was based on steering wheel data. The system used a feature selection method that utilized ANFIS. | Accuracy: 98.12% | Prepared their own dataset |
| [34] | Steering wheel | SW_Range_2, Amp_D2_Theta, PNS, and NMRHOLD | MOL, SVM, and BPNN | Used steering wheel status data. Using variance analysis, four parameters were selected based on their level of correlation with the driver's status. The MOL model performed best. | Accuracy: MOL: 72.92%; SVM: 63.86%; BPNN: 62.10% | Prepared their own dataset |

| Ref. | Sensors | Hybrid Parameters | Extracted Features | Classification Method | Description | Quality Metric | Dataset |
|---|---|---|---|---|---|---|---|
| [29] | Automatic gearbox, image-generating computers, and control-loaded steering system | Image- and vehicle-based features | Lateral position, yaw angle, speed, steering angle, driver's input torque, eyelid opening degree, etc. | A series of mathematical operations, with specified schemes from the study hypothesis | A system that assists the driver when drowsiness is detected, in order to prevent lane departure. It gives the driver a specific duration of time to take control of the car; if the driver does not, the system takes control of the vehicle and parks it. | Accuracies up to 100% in taking control of the car when the specified driving conditions were met | Prepared their own dataset |
| [16] | PPG sensor, accelerometer, and gyroscope | Biological- and vehicle-based features | Heart rate, stress level, respiratory rate, adjustment counter, pulse rate variability, steering wheel's linear acceleration, and radian speed | SVM | Data were collected from the sensors; the features were then extracted and fed to the SVM algorithm. If determined drowsy, the driver is alerted via the watch's alarm. | Accuracy: 98.3% | Prepared their own dataset |
| [35] | Smartphone camera | Biological- and image-based features | Blood volume pulse, blinking duration and frequency, HRV, and yawning frequency | Alert triggered if any of the detected parameters showed a specific change/value | Used multichannel second-order blind identification, based on the extended-PPG in a smartphone, to extract blood volume pulse, yawning, and blinking signals. | Sensitivity: up to 94% | Prepared their own dataset |
| [17] | Headband equipped with EEG electrodes, accelerometer, and gyroscope | Biological- and behavioral-based features | Eyeblink pattern analysis, head movement angle and magnitude, and spectral power analysis | Backward feature selection method followed by various classifiers | Used a non-invasive, wearable headband containing three sensors. The system combines the features extracted from head movement analysis, eye blinking, and spectral signals, which are fed to a feature selection block followed by various classification methods. Linear SVM performed best. | Accuracy, sensitivity, and precision: linear SVM: 86.5%, 88%, and 84.6%; linear SVM after feature selection: 92%, 88%, and 95.6% | Prepared their own dataset |
| [36] | SCANeR Studio, faceLAB, electrocardiogram, PPG sensor, electro-dermal activity, Biopac MP150 system, and AcqKnowledge software | Biological-, image-, and vehicle-based features | Heart rate and variability, respiration rate, blink duration and frequency, PERCLOS, head and eyelid movements, time-to-lane-crossing, position on the lane, speed, and SWA | ANN | Included two ANN models: one for detecting the drowsiness degree and the other for predicting the time needed to reach a specific drowsiness level. Different combinations of the features were tested. | Overall mean square error of 0.22 for predicting various drowsiness levels; overall mean square error of 4.18 min for predicting when a specific drowsiness level will be reached | Prepared their own dataset |
| [37] | EEG, EOG, and ECG electrodes and channels | Biological-based features and NIRS | Heart rate, alpha and beta band power, blinking rate, and eye closure duration | Fisher's linear discriminant analysis | A new approach that combined EEG and NIRS to detect driver drowsiness. The most informative parameters were the frontal beta band and the oxygenation. Fisher's linear discriminant analysis was used for classification, and time series analysis was employed to predict drowsiness. | Accuracy: 79.2% | MIT/BIH polysomnographic database [38] |
| [39] | Multi-channel amplifier with active electrodes, projection screen, and touch screen | Biological-based features and contextual information | EEG signal: power spectra and five frequency characteristics, along with four power ratios; EOG signal: blinking duration and PERCLOS; contextual information: the driving conditions (lighting condition and driving environment) and a sleep/wake predictor value | KNN, SVM, case-based reasoning, and RF | Used EOG, EEG, and contextual information. The scheme contained five sub-modules. Overall, the SVM classifier showed the best performance. | Accuracy: SVM multiclass classification: 79%; SVM binary classification: 93%. Sensitivity: SVM multiclass classification: 74%; SVM binary classification: 94% | Prepared their own dataset |
| [40] | Smartphone | Image-based features, as well as voice and touch information | PERCLOS, vocal data, and touch response data | Linear SVM | Utilized a smartphone for DDD. The system used three verification stages in the detection process. If drowsiness is verified, an alarm is initiated. | Accuracy: 93.33% | Prepared their own dataset, called 'Invedrifac' [41] |
| [42] | Driving simulator and monitoring system | Biological-, image-, and vehicle-based features | 80 features were extracted: PERCLOS, SWA, LF/HF, etc. | RF and majority-voting (logistic regression, SVM, KNN) classifiers | Vehicle-based, physiological, and behavioral signs were used in this system. Two ways of labeling the driver's drowsiness state were used: slightly drowsy and moderately drowsy. | Accuracy, sensitivity, and precision: RF classifier with slightly-drowsy labeling: 82.4%, 84.1%, and 81.6%; majority voting with moderately-drowsy labeling: 95.4%, 92.9%, and 97.1% | Prepared their own dataset |
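Several of the hybrid systems above fuse heterogeneous cues by letting per-modality classifiers vote on the driver's state (e.g., the majority-voting ensemble in [42]). A minimal sketch of that fusion pattern, using hypothetical threshold rules in place of the trained base classifiers from the cited studies (the feature names and cutoff values here are illustrative assumptions):

```python
# Sketch of decision-level fusion by majority vote across one image-based,
# one biological, and one vehicle-based cue. The threshold rules stand in
# for trained base classifiers (e.g., logistic regression, SVM, KNN in
# [42]); the cutoffs below are illustrative, not from the cited studies.
from collections import Counter


def perclos_rule(features):
    # Image-based cue: high PERCLOS suggests drowsiness (1), else alert (0).
    return 1 if features["perclos"] > 40.0 else 0


def lf_hf_rule(features):
    # Biological cue: a low HRV LF/HF ratio is taken here as a drowsiness sign.
    return 1 if features["lf_hf"] < 1.0 else 0


def swa_rule(features):
    # Vehicle cue: erratic steering (high SWA variance) is taken as drowsiness.
    return 1 if features["swa_var"] > 25.0 else 0


def majority_vote(features, rules=(perclos_rule, lf_hf_rule, swa_rule)):
    """Each base rule votes 1 (drowsy) or 0 (alert); return the majority."""
    votes = Counter(rule(features) for rule in rules)
    return votes.most_common(1)[0][0]
```

With an odd number of voters there is never a tie, which is one reason decision-level fusion schemes commonly use three or five base classifiers; for example, `{"perclos": 55.0, "lf_hf": 0.6, "swa_var": 10.0}` produces two drowsy votes against one and is classified as drowsy.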