Real-World Driver Stress Recognition and Diagnosis

Mental stress is known to be a prime factor in road crashes. These crashes often cause severe damage to people, vehicles, and infrastructure. Likewise, persistent mental stress can lead to the development of mental, cardiovascular, and abdominal disorders. Previous research in this domain has mostly focused on feature engineering and conventional machine learning approaches.

Keywords: driver stress recognition; multimodal data; deep learning

1. Introduction

Successful driving always requires both mental and physical skills [1][2][3]. Acute stress reduces the driver’s ability to respond to hazardous situations, which causes significant damage to people and vehicles every year [4][5][6][7][8]. Dangerous driving situations are triggered by human errors, individual factors, and ambient conditions [9]. According to the National Motor Vehicle Crash Causation Survey (NMVCCS) in the United States (US), human errors alone caused 94% of crashes, while vehicle defects, ambient conditions, and other factors collectively caused 6% of crashes during 2005–2007 [10]. Human errors are linked to the driver’s perceptual conditions, so a complete understanding of these conditions is crucial for preventing traffic accidents.
To detect and diagnose drivers’ different stress levels, physiological, physical, and contextual information is widely utilized [11]. Moreover, different traditional machine learning models based on handcrafted feature extraction methods are used for the classification of stress. Extracting the best features with these approaches is always a challenging task, as the quality of the extracted features has a significant effect on the classification performance [12]. These approaches are laborious, ad hoc, less robust to noise, and require considerable expertise [13]. To overcome these challenges, deep learning models have been utilized to automatically produce complex nonlinear features reliably [14][15][16]. In addition to automatic feature extraction from raw data, these models offer noise robustness and better classification accuracy [17][18][19]. Different deep learning algorithms are used in recent research, e.g., CNN, RNN, DNN, and LSTM.
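For context, the kind of feature-engineering pipeline described above typically extracts statistical descriptors from windowed physiological signals and feeds them to a conventional classifier. The sketch below is illustrative only: the window length, the chosen statistics, and the SVM classifier are assumptions, not the configuration used in any of the cited studies.

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def handcrafted_features(window):
    """Simple statistical descriptors of one signal window (illustrative choice)."""
    return np.array([
        np.mean(window), np.std(window),
        np.min(window), np.max(window),
        stats.skew(window), stats.kurtosis(window),
    ])

def windows_to_features(signal, win_len=1000, step=500):
    """Slide a fixed-length window over a 1D physiological signal and extract features."""
    return np.array([
        handcrafted_features(signal[s:s + win_len])
        for s in range(0, len(signal) - win_len + 1, step)
    ])

# A conventional classifier on top of the handcrafted features (hypothetical data):
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```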
The models proposed in the current work are based on 1D CNN and hybrid 1D CNN-LSTM networks. The proposed models are trained separately on multiple physiological signals (SRAD) and on multimodal data (AffectiveROAD), which includes physiological signals along with other information about the vehicle, driver, and driving environment. Multimodal fusion of data based on deep learning approaches can be used to develop a precise driver stress level recognition model with improved performance and reliability.
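A minimal sketch of such a hybrid 1D CNN-LSTM classifier, written with Keras, is given below. The layer sizes, window length, number of fused input channels, and three output classes are placeholders chosen for illustration, not the architecture actually proposed in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(n_timesteps=1000, n_channels=6, n_classes=3):
    """Hybrid 1D CNN-LSTM: convolutions learn local patterns across the fused
    multimodal channels; the LSTM models their temporal dynamics."""
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_channels)),
        layers.Conv1D(64, kernel_size=7, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(128, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=4),
        layers.LSTM(64),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn_lstm()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)
```

The convolutional front end extracts local patterns jointly across the fused channels, while the LSTM layer captures longer-range temporal dependencies that a purely convolutional model would have to approximate with much deeper stacks.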

2. Real-World Driver Stress Recognition and Diagnosis

Several machine learning approaches have been proposed for real-world driver mental stress recognition based on different physiological signals. Dalmeida and Masala [20], Vargas-Lopez et al. [21], Khowaja et al. [8], Lopez-Martinez et al. [22], Haouij et al. [23], Chen et al. [4], Ghaderi et al. [24], Zhang et al. [25], and Healey and Picard [26] proposed conventional machine learning models based on physiological signals obtained from the PhysioNet SRAD public database [27]. Unlike the previous studies, Rigas et al. [28] presented a real-world binary stress recognition model based on multimodal data, including physical and contextual data in addition to physiological signals. On the other hand, Zontone et al. [29], Bianco et al. [30], Lee et al. [31], Lanatà et al. [32], and Gao et al. [33] proposed conventional machine learning models for driver stress recognition based on simulated driving situations. Lanatà et al. [32] and Lee et al. [31] presented driver stress recognition models based on multimodal data. Contrary to previous studies, Šalkevicius et al. [34], Rodríguez-Arce et al. [35], Can et al. [36], Al abdi et al. [37], Betti et al. [38], Sriramprakash et al. [39], de Vries et al. [40], and Sun et al. [41] proposed stress recognition models for controlled, lab, semi-lab, and physical-activity (such as sitting, standing, and walking) environments. Recent developments in deep learning and machine learning have shown good results in various applied domains and can be applied to driver stress detection [42][43].
All the mentioned studies are based on feature engineering techniques, and various conventional machine learning algorithms were employed to classify levels of stress. However, handcrafted features are less robust to noise and subject-specific variation, and require a considerable amount of time and effort [8][13][19][34][35][44]. Moreover, capturing the sequential nature of the data is difficult due to the absence of explicit features and high dimensionality, despite the use of complex feature selection methods. Likewise, the dependence of such models on past observations makes it impractical to process all the information because of the growing complexity. The feature-level multimodal fusion models proposed by Chen et al. [4], Healey and Picard [26], Haouij et al. [23], Lee et al. [31], Bianco et al. [30], Sun et al. [41], and Can et al. [36] mainly concentrate on learning patterns in individual signals rather than in multiple simultaneous signals [18]. Thus, these models are ill-suited to capturing the nonlinear correlations across multiple signals appearing simultaneously. The various linear and nonlinear methods employed in these conventional machine learning models have not been able to rigorously analyze such multivariate time series signals [19].
To address the issues faced by conventional machine learning models, deep learning methods have been introduced. Deep learning models are developed through signal preprocessing (noise filtering), design of a deep neural network suited to the area of interest, network training, and model testing. Deep learning models learn and classify raw data using multilayer deep neural networks [45]. The last fully connected (FC) layers are used to obtain the final output. Contrary to the feature engineering techniques used in conventional machine learning approaches, deep learning models automatically produce stable features [14][15]. Moreover, deep learning models are more robust to noise and achieve improved classification accuracy [19]. Different deep learning algorithms are used in recent research, e.g., the recurrent neural network (RNN), deep neural network (DNN), LSTM, and CNN. Rastgoo et al. [11], Zhang et al. [46], Kanjo et al. [17], Lim and Yang [47], Yan et al. [48], Hajinoroozi et al. [49], and Lee et al. [50] presented different deep learning models to identify different driver states. Rastgoo et al. [11], Kanjo et al. [17], Lim and Yang [47], and Yan et al. [48] proposed deep learning models based on multimodal data. On the other hand, the models proposed by Hajinoroozi et al. [49] and Lee et al. [50] are based on physiological signals only. The stress recognition model proposed by Zhang et al. [46] is based on facial images only. Apart from driving scenarios, Masood and Alghamdi [51], Cho et al. [52], Seo et al. [53], Hwang et al. [54], and He et al. [55] proposed stress recognition models based on deep learning techniques and physiological signals in academic, workplace, and lab settings. Most of these studies, including [46][49][50][52][53][54][55][56], address only two levels of stress. Moreover, the schemes presented in [46][50][52][55][56] are based on images. Likewise, the schemes proposed in [49][52][53][54][55][56] are based either on physiological signals or on a single modality. On the other hand, the model proposed in [11] is based on multimodal data collected during simulated driving.
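As an illustration of the signal preprocessing (noise filtering) step mentioned at the start of this paragraph, a common choice is a zero-phase band-pass filter applied to each raw physiological channel before it reaches the network. The sketch below uses SciPy; the sampling rate and cut-off frequencies are placeholder assumptions rather than values taken from the cited works.

```python
from scipy.signal import butter, filtfilt

def bandpass_filter(signal, fs=500.0, low=0.5, high=40.0, order=4):
    """Zero-phase Butterworth band-pass filter for one raw physiological channel.
    The sampling rate fs (Hz) and the pass band are placeholder values."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # forward-backward filtering avoids phase distortion

# ecg_clean = bandpass_filter(ecg_raw)  # ecg_raw: hypothetical 1D array of ECG samples
```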
The models proposed in this study are based on the fusion of multimodal data collected during real-world driving (the SRAD and AffectiveROAD datasets). Moreover, these models use 1D CNN and 1D CNN-LSTM networks to detect two (stressed and relaxed) and three (low, medium, and high) levels of driver stress. The fuzzy EDAS (evaluation based on distance from average solution) approach is also used to rank the performance of the proposed models based on different classification metrics.
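EDAS ranks alternatives by their distance from the average solution on each criterion. The crisp (non-fuzzy) version is sketched below to convey the idea; the example metric values and weights are hypothetical, and the fuzzy extension (fuzzy ratings and weights) is omitted for brevity.

```python
import numpy as np

def edas_rank(scores, weights):
    """Crisp EDAS appraisal scores. scores: (models x metrics), all benefit-type
    criteria (higher is better); weights sum to 1. Higher score = better rank."""
    avg = scores.mean(axis=0)                # average solution per criterion
    pda = np.maximum(scores - avg, 0) / avg  # positive distance from average
    nda = np.maximum(avg - scores, 0) / avg  # negative distance from average
    sp = (pda * weights).sum(axis=1)         # weighted sum of PDA per model
    sn = (nda * weights).sum(axis=1)         # weighted sum of NDA per model
    nsp = sp / sp.max()                      # normalized SP
    nsn = 1 - sn / sn.max()                  # normalized SN
    return (nsp + nsn) / 2                   # appraisal score

# Hypothetical example: three models scored on accuracy, F1-score, and recall
scores = np.array([[0.95, 0.94, 0.93],
                   [0.92, 0.91, 0.90],
                   [0.97, 0.96, 0.95]])
weights = np.array([0.4, 0.3, 0.3])
print(np.argsort(-edas_rank(scores, weights)))  # model indices, best first
```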

References

  1. Evans, G.W.; Carrère, S. Traffic Congestion, Perceived Control, and Psychophysiological Stress among Urban Bus Drivers. J. Appl. Psychol. 1991, 76, 658–663.
  2. Hanzlíková, I. Professional drivers: The sources of occupational stress. In Proceedings of the Young Researchers Seminar, The Hague, The Netherlands, 2005.
  3. de Naurois, C.J.; Bourdin, C.; Stratulat, A.; Diaz, E.; Vercher, J.L. Detection and prediction of driver drowsiness using artificial neural network models. Accid. Anal. Prev. 2019, 126, 95–104.
  4. Chen, L.L.; Zhao, Y.; Ye, P.F.; Zhang, J.; Zou, J.Z. Detecting driving stress in physiological signals based on multimodal feature analysis and kernel classifiers. Expert Syst. Appl. 2017, 85, 279–291.
  5. Manseer, M.; Riener, A. Evaluation of driver stress while transiting road tunnels. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seattle, WA, USA, 17–19 September 2014; Association for Computing Machinery (ACM): New York, NY, USA, 2014; pp. 1–6.
  6. Benlagha, N.; Charfeddine, L. Risk factors of road accident severity and the development of a new system for prevention: New insights from China. Accid. Anal. Prev. 2020, 136, 105411.
  7. American Psychological Association. Stress in America: The State of our Nation. Stress in America Survey; American Psychological Association: Washington, DC, USA, 2017.
  8. Khowaja, S.A.; Prabono, A.G.; Setiawan, F.; Yahya, B.N.; Lee, S.L. Toward soft real-time stress detection using wrist-worn devices for human workspaces. Soft Comput. 2021, 25, 2793–2820.
  9. Shinar, D.; Compton, R. Aggressive driving: An observational study of driver, vehicle, and situational variables. Accid. Anal. Prev. 2004, 36, 429–437.
  10. Singh, S. Critical reasons for crashes investigated in the National Motor Vehicle Crash Causation Survey. Natl. Highw. Traffic Saf. Adm. 2015, 1–2.
  11. Rastgoo, M.N.; Nakisa, B.; Maire, F.; Rakotonirainy, A.; Chandran, V. Automatic driver stress level classification using multimodal deep learning. Expert Syst. Appl. 2019, 138, 112793.
  12. Nakisa, B.; Rastgoo, M.N.; Tjondronegoro, D.; Chandran, V. Evolutionary computation algorithms for feature selection of EEG-based emotion recognition using mobile sensors. Expert Syst. Appl. 2018, 93, 143–155.
  13. Pourbabaee, B.; Roshtkhari, M.J.; Khorasani, K. Deep Convolutional Neural Networks and Learning ECG Features for Screening Paroxysmal Atrial Fibrillation Patients. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 2095–2104.
  14. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207.
  15. Zheng, Y.; Liu, Q.; Chen, E.; Ge, Y.; Zhao, J.L. Exploiting multi-channels deep convolutional neural networks for multivariate time series classification. Front. Comput. Sci. 2016, 10, 96–112.
  16. Zheng, Y.; Liu, Q.; Chen, E.; Ge, Y.; Zhao, J.L. Time series classification using multi-channels deep convolutional neural networks. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Macau, China, 16–18 June 2014; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8485, pp. 298–310.
  17. Kanjo, E.; Younis, E.M.G.; Ang, C.S. Deep learning analysis of mobile physiological, environmental and location sensor data for emotion detection. Inf. Fusion 2019, 49, 46–56.
  18. Ngiam, J.; Khosla, A.; Kim, M.; Nam, J.; Lee, H.; Ng, A.Y. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, WA, USA, 28 June–2 July 2011; pp. 689–696.
  19. Deng, L.; Yu, D. Deep learning: Methods and applications. Found. Trends Signal Process. 2013, 7, 197–387.
  20. Dalmeida, K.M.; Masala, G.L. Hrv features as viable physiological markers for stress detection using wearable devices. Sensors 2021, 21, 2873.
  21. Vargas-Lopez, O.; Perez-Ramirez, C.A.; Valtierra-Rodriguez, M.; Yanez-Borjas, J.J.; Amezquita-Sanchez, J.P. An explainable machine learning approach based on statistical indexes and svm for stress detection in automobile drivers using electromyographic signals. Sensors 2021, 21, 3155.
  22. Lopez-Martinez, D.; El-Haouij, N.; Picard, R. Detection of Real-World Driving-Induced Affective State Using Physiological Signals and Multi-View Multi-Task Machine Learning. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2019, Cambridge, UK, 3–6 September 2019; pp. 356–361.
  23. El Haouij, N.; Poggi, J.M.; Ghozi, R.; Sevestre-Ghalila, S.; Jaïdane, M. Random forest-based approach for physiological functional variable selection for driver’s stress level classification. Stat. Methods Appl. 2019, 28, 157–185.
  24. Ghaderi, A.; Frounchi, J.; Farnam, A. Machine learning-based signal processing using physiological signals for stress detection. In Proceedings of the 2015 22nd Iranian Conference on Biomedical Engineering, ICBME 2015, Tehran, Iran, 25–27 November 2016; pp. 93–98.
  25. Zhang, L.; Tamminedi, T.; Ganguli, A.; Yosiphon, G.; Yadegar, J. Hierarchical multiple sensor fusion using structurally learned Bayesian network. In Proceedings of Wireless Health 2010, WH’10, San Diego, CA, USA, 5–7 October 2010; pp. 174–183.
  26. Healey, J.A.; Picard, R.W. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. Syst. 2005, 6, 156–166.
  27. PhysioNet. Stress Recognition in Automobile Drivers, Version: 1.0.0; PhysioNet (MIT Laboratory for Computational Physiology US): Cambridge, MA, USA, 2008.
  28. Rigas, G.; Goletsis, Y.; Bougia, P.; Fotiadis, D.I. Towards driver’s state recognition on real driving conditions. Int. J. Veh. Technol. 2011, 2011, 617210.
  29. Zontone, P.; Affanni, A.; Bernardini, R.; Piras, A.; Rinaldo, R.; Formaggia, F.; Minen, D.; Minen, M.; Savorgnan, C. Car Driver’s Sympathetic Reaction Detection through Electrodermal Activity and Electrocardiogram Measurements. IEEE Trans. Biomed. Eng. 2020, 67, 3413–3424.
  30. Bianco, S.; Napoletano, P.; Schettini, R. Multimodal car driver stress recognition. In Proceedings of the PervasiveHealth: Pervasive Computing Technologies for Healthcare, Trento, Italy, 20–23 May 2019; pp. 302–307.
  31. Lee, D.S.; Chong, T.W.; Lee, B.G. Stress Events Detection of Driver by Wearable Glove System. IEEE Sens. J. 2017, 17, 194–204.
  32. Lanatà, A.; Valenza, G.; Greco, A.; Gentili, C.; Bartolozzi, R.; Bucchi, F.; Frendo, F.; Scilingo, E.P. How the Autonomic nervous system and driving style change with incremental stressing conditions during simulated driving. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1505–1517.
  33. Gao, H.; Yuce, A.; Thiran, J.P. Detecting emotional stress from facial expressions for driving safety. In Proceedings of the 2014 IEEE International Conference on Image Processing, ICIP 2014, Paris, France, 27–30 October 2014; pp. 5961–5965.
  34. Šalkevicius, J.; Damaševičius, R.; Maskeliunas, R.; Laukienė, I. Anxiety level recognition for virtual reality therapy system using physiological signals. Electronics 2019, 8, 1039.
  35. Rodríguez-Arce, J.; Lara-Flores, L.; Portillo-Rodríguez, O.; Martínez-Méndez, R. Towards an anxiety and stress recognition system for academic environments based on physiological features. Comput. Methods Programs Biomed. 2020, 190, 105408.
  36. Can, Y.S.; Chalabianloo, N.; Ekiz, D.; Ersoy, C. Continuous stress detection using wearable sensors in real life: Algorithmic programming contest case study. Sensors 2019, 19, 1849.
  37. Al abdi, R.M.; Alhitary, A.E.; Abdul Hay, E.W.; Al-bashir, A.K. Objective detection of chronic stress using physiological parameters. Med. Biol. Eng. Comput. 2018, 56, 2273–2286.
  38. Betti, S.; Lova, R.M.; Rovini, E.; Acerbi, G.; Santarelli, L.; Cabiati, M.; Del Ry, S.; Cavallo, F. Evaluation of an integrated system of wearable physiological sensors for stress monitoring in working environments by using biological markers. IEEE Trans. Biomed. Eng. 2018, 65, 1748–1758.
  39. Sriramprakash, S.; Prasanna, V.D.; Murthy, O.V.R. Stress Detection in Working People. In Proceedings of the Procedia Computer Science, Cochin, India, 22–24 August 2017; Volume 115, pp. 359–366.
  40. de Vries, G.J.J.; Pauws, S.C.; Biehl, M. Insightful stress detection from physiology modalities using Learning Vector Quantization. Neurocomputing 2015, 151, 873–882.
  41. Sun, F.T.; Kuo, C.; Cheng, H.T.; Buthpitiya, S.; Collins, P.; Griss, M. Activity-aware mental stress detection using physiological sensors. In Proceedings of the Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, Santa Clara, CA, USA, 25–28 October 2012; Volume 76, pp. 282–301.
  42. Anceschi, E.; Bonifazi, G.; De Donato, M.C.; Corradini, E.; Ursino, D.; Virgili, L. Savemenow.ai: A machine learning based wearable device for fall detection in a workplace. Stud. Comput. Intell. 2021, 911, 493–514.
  43. Bonifazi, G.; Corradini, E.; Ursino, D.; Virgili, L. Defining user spectra to classify Ethereum users based on their behavior. J. Big Data 2022, 9, 37.
  44. Zalabarria, U.; Irigoyen, E.; Martinez, R.; Larrea, M.; Salazar-Ramirez, A. A Low-Cost, Portable Solution for Stress and Relaxation Estimation Based on a Real-Time Fuzzy Algorithm. IEEE Access 2020, 8, 74118–74128.
  45. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  46. Zhang, J.; Mei, X.; Liu, H.; Yuan, S.; Qian, T. Detecting negative emotional stress based on facial expression in real time. In Proceedings of the 2019 IEEE 4th International Conference on Signal and Image Processing, ICSIP 2019, Wuxi, China, 19–21 July 2019; pp. 430–434.
  47. Lim, S.; Yang, J.H. Driver state estimation by convolutional neural network using multimodal sensor data. Electron. Lett. 2016, 52, 1495–1497.
  48. Yan, S.; Teng, Y.; Smith, J.S.; Zhang, B. Driver behavior recognition based on deep convolutional neural networks. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery, ICNC-FSKD 2016, Changsha, China, 13–15 August 2016; pp. 636–641.
  49. Hajinoroozi, M.; Mao, Z.; Jung, T.P.; Lin, C.T.; Huang, Y. EEG-based prediction of driver’s cognitive performance by deep convolutional neural network. Signal Process. Image Commun. 2016, 47, 549–555.
  50. Lee, J.; Lee, H.; Shin, M. Driving stress detection using multimodal convolutional neural networks with nonlinear representation of short-term physiological signals. Sensors 2021, 21, 2381.
  51. Masood, K.; Alghamdi, M.A. Modeling Mental Stress Using a Deep Learning Framework. IEEE Access 2019, 7, 68446–68454.
  52. Cho, Y.; Bianchi-Berthouze, N.; Julier, S.J. DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings. In Proceedings of the 2017 7th International Conference on Affective Computing and Intelligent Interaction, ACII 2017, San Antonio, TX, USA, 23–26 October 2017; Volume 2018, pp. 456–463.
  53. Seo, W.; Kim, N.; Kim, S.; Lee, C.; Park, S.M. Deep ECG-respiration network (DeepER net) for recognizing mental stress. Sensors 2019, 19, 3021.
  54. Hwang, B.; You, J.; Vaessen, T.; Myin-Germeys, I.; Park, C.; Zhang, B.T. Deep ECGNet: An optimal deep learning framework for monitoring mental stress using ultra short-term ECG signals. Telemed. E-Health 2018, 24, 753–772.
  55. He, J.; Li, K.; Liao, X.; Zhang, P.; Jiang, N. Real-Time Detection of Acute Cognitive Stress Using a Convolutional Neural Network from Electrocardiographic Signal. IEEE Access 2019, 7, 42710–42717.
  56. Martínez-Rodrigo, A.; García-Martínez, B.; Huerta, Á.; Alcaraz, R. Detection of negative stress through spectral features of electroencephalographic recordings and a convolutional neural network. Sensors 2021, 21, 3050.