Electroencephalogram-Based Emotion Classification: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 3 by Rita Xu.

Rapid advancements in the medical field have drawn much attention to automatic emotion classification from electroencephalogram (EEG) data. People’s emotional states are crucial factors in how they behave and interact physiologically; one potential medical application is the diagnosis of patients’ mental disorders. People work and communicate more effectively when they feel well, whereas negative emotions can be detrimental to both physical and mental health. Driven by the rapidly developing field of machine learning, many earlier studies of EEG-based emotion classification have focused on collecting data from the whole brain.

  • deep learning
  • emotion classification
  • EEG data
  • stacking ensemble classifier

1. Introduction

Emotion is a complicated physiological and behavioral response of human beings to internal and external stimuli [1]. The goal of human emotion recognition is to identify human emotions from a variety of modalities, including body language, physiological signals, and audiovisual manifestations. Emotion is crucial in human-to-human communication and contact. Emotion results from the mental processes people engage in and can be expressed as a reflection of their psychophysiological states [2].
Over the past few years, numerous investigations on engineering strategies for automatic emotion identification have been conducted. They fall into three broad categories. The first category examines speech, body language, and facial expressions [3][4][5][6]. These audio–visual methods enable non-contact emotion detection. The second group largely focuses on peripheral physiological signals; studies have shown that different emotional states alter these signals. The third group of methods focuses primarily on brain signals obtained from the central nervous system using a recording technique such as electroencephalography (EEG) or electrocorticography (ECoG). Among these brain signals, EEG has been demonstrated to carry informative characteristics in response to emotional states [7][8]. According to Davidson et al. [9], frontal brain electrical activity correlated with the experience of two emotions, positive and negative. Following these studies, the link between EEG asymmetry and emotion has been widely discussed.
The EEG signal offers useful information for identifying emotional stress in people. Emotional stress has been researched for many years, mostly in the psychological realm, and is a major contributor to mental illnesses such as depression, anxiety, and bipolar disorder. Emotions fall into two major categories: positive emotions (such as happiness and surprise) and negative emotions (such as sadness, anger, fear, and disgust). EEG provides good temporal resolution when measuring the brain’s electrical activity. Researchers commonly classify human emotion into three states: positive, negative, and neutral. Brain activity is subject-specific, and for a given subject, emotional brain activity varies across brain areas; understanding this is crucial to recognizing emotions from brain activity. This research discusses a better way of categorizing human emotions from EEG data.
Determining how fleeting mental experiences are translated into a specific pattern of brain activity is a significant difficulty in brain–machine interface applications. One critical problem in classifying EEG signals, which are complex, non-linear, and unpredictable, is the quantity of information required to accurately represent their many states. This research proposes a new stacking ensemble model built from a random forest, a light gradient boosting machine, and a gradient boosting classifier (RLGB-SE) to categorize various emotional states. Feature selection reduced the 2549 source attributes to a smaller dataset: the selection methods scored the attributes according to their classification performance, and a manual cutoff was tuned where the score started to decline, keeping only the strongest features. A stacking ensemble classifier combines different classification models to improve the model’s accuracy. To build the model, we employed three classifiers as base models: random forest, light gradient boosting machine, and gradient boosting classifier. The outputs of these three classifiers serve as the meta-model’s input. The paradigm separates mental states into three categories: positive, negative, and neutral. The proposed RLGB-SE performs better than cutting-edge techniques for intelligent emotion identification in EEG signals.
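The stacking idea described above can be sketched with scikit-learn's `StackingClassifier`. This is a minimal illustration, not the authors' exact configuration: the synthetic data stands in for the reduced EEG feature set, hyperparameters are defaults, and the LightGBM base learner (available as `lightgbm.LGBMClassifier`) is omitted to keep the sketch dependency-free.

```python
# Illustrative sketch of a stacking ensemble in the spirit of RLGB-SE.
# Synthetic data stands in for the selected EEG features (3 emotion classes).
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners; their predictions become the meta-learner's input.
# (The paper also uses LightGBM as a third base model.)
base = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```

By default, `StackingClassifier` generates the meta-features with internal cross-validation, which limits the leakage that would occur if the base models predicted on their own training data.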

2. Electroencephalogram-Based Emotion Classification

The recognition of human emotions via EEG signal data has been the subject of extensive research in recent years. In earlier investigations, various feature extraction techniques, channel selection strategies, and classification techniques have been used to identify emotions. Machine learning methods are frequently combined with statistical features extracted from EEG data to categorize mental states [10][11]. These mental states can then serve as a brain–computer interface with a finite set of control points. The Muse headband has gained the respect of neuroscientists for its efficiency, affordability, and classification accuracy. Alhagry et al. [12] classified the DEAP dataset into high/low arousal, high/low valence, and high/low liking classes with average accuracies of 85.65%, 85.45%, and 87.99% using a long short-term memory neural network and all of the EEG data. Using multi-channel EEG-based emotion recognition with deep forest, Cheng et al. [13] classified emotions in the EEG data of the DEAP and DREAMER datasets: they mapped the data of all channels to 2D frame sequences and then used deep forest to classify them into high/low valence, high/low arousal, and high/low dominance. Another study [14] classified emotions using the whole DEAP dataset and a transfer learning strategy. EEG-based emotion identification models for the three emotions of positive, neutral, and negative were built using deep belief networks (DBNs); the DBN model achieved the best outcome compared with SVM, LR, and KNN machine learning models, with an accuracy of 86.5% [15]. Another method for EEG-based emotion recognition focuses on how subjects’ EEGs behave differently while watching films intended to evoke positive or negative emotions [16]. SVM and KNN models produced the best classification accuracy.
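The statistical-feature approaches cited above can be illustrated with a short sketch that computes simple per-window statistics from a single EEG channel. The window length, overlap, feature set, and synthetic signal here are illustrative assumptions, not a specific method from the cited studies.

```python
# Hedged sketch: basic statistical features over sliding windows of one
# EEG channel, in the spirit of the statistical-feature studies above.
import numpy as np

def window_features(signal, fs=256, win_sec=1.0, step_sec=0.5):
    """Return per-window [mean, std, min, max] features for one channel."""
    win = int(fs * win_sec)
    step = int(fs * step_sec)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(5 * 256)  # 5 s of synthetic single-channel "EEG"
F = window_features(eeg)            # one feature row per window
print(F.shape)
```

Rows of `F` (one per window, possibly concatenated across channels) would then form the feature matrix fed to a classifier.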
The accuracy and effectiveness of EEG-based emotion classifiers are also improved by the feature-smoothing approach known as the linear dynamical system (LDS) and the feature selection algorithm known as minimal-redundancy-maximal-relevance (MRMR). To take the first step toward a potential EEG-based brain–computer interface (BCI) for supporting autism intervention, Fan et al. [17] investigated the viability of detecting engagement level, emotional states, and mental effort during VR-based driving using EEG. The Bayes network, naive Bayes, support vector machine (SVM), multilayer perceptron, K-nearest neighbors (KNN), random forest, and J48 classification techniques were used and compared. The classification results were encouraging, with over 80% accuracy in assessing engagement and mental workload and over 75% in classifying emotional states. In addition to using EEG asymmetry to examine emotion, researchers have also looked at the relationship between EEG and emotion using event-related potentials, which index a small percentage of mean EEG activity [18][19][20]. These methods still have two drawbacks, though. The first is that current methods must average EEG characteristics, so they require longer periods to identify the emotional state from EEG signals. The other is that they can record only a tiny amount of EEG activity. Due to these drawbacks, the current approaches to evaluating emotional states are either inappropriate or insufficient for practical use. Since EEG reflects different types of cognitive activity in the brain as well as emotional states, and given the variability of EEG and electrode locations, it is not always clear which independent variables to use to differentiate between moods. Thus, in recent years, scientists have attempted to employ more sophisticated techniques to discover the relationship between emotional shifts and EEG data.
Chanel et al. [21] suggested an emotion detection system that classifies two emotional states using EEG. Their study achieved a 72% naïve Bayes classification accuracy for the arousal component of emotion and a 70% Fisher discriminant analysis classification accuracy. Li et al. [22] classified the feelings of happiness and sadness using EEG data, applying a linear SVM and common spatial patterns (CSP) in their experimental setup. Their analysis found a favorable recognition rate of 93.5% for the two emotional states. Using EEG features, Zhang et al. [23] classified the subject’s status into two emotional states, positive and negative, with an average accuracy of 73.0%. The study examined and categorized emotional states elicited by a natural setting using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). The Laplacian filtering method was used to pre-process the raw EEG data, and KNN and linear discriminant analysis (LDA) were used to classify the emotional states. The discrete wavelet transform was then used to split the raw EEG signals into three distinct frequency bands. An emotion recognition method using several EEG channels was demonstrated by Murugappan et al. [24], who achieved an accuracy of 83.26% for five emotional states. A system for user-independent emotion identification was presented by Petrantonakis et al. [25]; an SVM classifier had an 83.33% recognition rate for six different emotion categories. The suggested classifier achieved 87% accuracy on the Muse headband EEG dataset with a combination of cross validation and several feature selection techniques [26]. In later work by Bird et al. [27], three-state emotion classification was studied using an ensemble approach, and the proposed model obtained 97.89% accuracy. An ensemble strategy including feature selection and a sample-dependent classifier was presented [28] to classify Parkinson’s disease (PD).
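The MRMR idea mentioned above can be sketched as a greedy trade-off between relevance and redundancy. This is a simplified stand-in, not a faithful reimplementation of the MRMR algorithm: relevance is approximated by mutual information with the label, redundancy by mean absolute correlation with already-selected features, and the dataset and target feature count are illustrative.

```python
# Simplified greedy mRMR-style selection: pick features that are relevant
# to the label but not redundant with features already chosen.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=15, n_informative=5,
                           random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)

selected = [int(np.argmax(relevance))]     # start with the most relevant
while len(selected) < 5:
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        # Redundancy: mean absolute correlation with the chosen features.
        red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                       for s in selected])
        score = relevance[j] - red         # relevance minus redundancy
        if score > best_score:
            best, best_score = j, score
    selected.append(best)
print(selected)
```

In a pipeline like those reviewed here, the retained columns would feed the downstream classifier, with the cutoff (five features above) tuned where classification performance starts to decline.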
The way the brain works seems to change from person to person and from one emotional state to another. Using EEG data associated with PTSD, a hybrid deep learning model combining CNN-LSTM and ResNet-152 models was created to categorize emotion [29]. Ensemble learning improved the prediction performance of the classification model. A semisupervised multiple choice learning (SemiMCL) strategy was used to improve the assignment of labeled data among the constituent networks and to exploit unlabeled data to gather domain-specific information [30].

References

  1. Mauss, I.B.; Robinson, M.D. Measures of emotion: A review. Cogn. Emot. 2009, 23, 209–237.
  2. Zhong, P.; Wang, D.; Miao, C. EEG-based emotion recognition using regularized graph neural networks. IEEE Trans. Affect. Comput. 2020, 13, 1290–1301.
  3. Ranganathan, H.; Chakraborty, S.; Panchanathan, S. Multimodal emotion recognition using deep learning architectures. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–9.
  4. Petrushin, V. Emotion in speech: Recognition and application to call centers. In Proceedings of the Artificial Neural Networks in Engineering; Springer: Berlin/Heidelberg, Germany, 1999; Volume 710, p. 22.
  5. Black, M.J.; Yacoob, Y. Recognizing facial expressions in image sequences using local parameterized models of image motion. Int. J. Comput. Vis. 1997, 25, 23–48.
  6. Anderson, K.; McOwan, P.W. A real-time automated system for the recognition of human facial expressions. IEEE Trans. Syst. Man Cybern. Part Cybern. 2006, 36, 96–105.
  7. Coan, J.A.; Allen, J.J. Frontal EEG asymmetry as a moderator and mediator of emotion. Biol. Psychol. 2004, 67, 7–50.
  8. Li, X.; Hu, B.; Zhu, T.; Yan, J.; Zheng, F. Towards affective learning with an EEG feedback approach. In Proceedings of the First ACM International Workshop on Multimedia Technologies for Distance Learning, Beijing, China, 23 October 2009; pp. 33–38.
  9. Davidson, R.J.; Fox, N.A. Asymmetrical brain activity discriminates between positive and negative affective stimuli in human infants. Science 1982, 218, 1235–1237.
  10. Yuen, C.T.; San San, W.; Seong, T.C.; Rizon, M. Classification of human emotions from EEG signals using statistical features and neural network. Int. J. Integr. Eng. 2009, 1, 3.
  11. Tanaka, H.; Hayashi, M.; Hori, T. Statistical features of hypnagogic EEG measured by a new scoring system. Sleep 1996, 19, 731–738.
  12. Alhagry, S.; Fahmy, A.A.; El-Khoribi, R.A. Emotion recognition based on EEG using LSTM recurrent neural network. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 10.
  13. Cheng, J.; Chen, M.; Li, C.; Liu, Y.; Song, R.; Liu, A.; Chen, X. Emotion recognition from multi-channel EEG via deep forest. IEEE J. Biomed. Health Inform. 2020, 25, 453–464.
  14. Lin, W.; Li, C.; Sun, S. Deep convolutional neural network for emotion recognition using EEG and peripheral physiological signal. In Proceedings of the International Conference on Image and Graphics; Springer: Berlin, Germany, 2017; pp. 385–394.
  15. Zheng, W.L.; Lu, B.L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
  16. Duan, R.N.; Zhu, J.Y.; Lu, B.L. Differential entropy feature for EEG-based emotion classification. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; pp. 81–84.
  17. Fan, J.; Wade, J.W.; Bian, D.; Key, A.P.; Warren, Z.E.; Mion, L.C.; Sarkar, N. A Step towards EEG-based brain computer interface for autism intervention. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 3767–3770.
  18. Schupp, H.T.; Cuthbert, B.N.; Bradley, M.M.; Cacioppo, J.T.; Ito, T.; Lang, P.J. Affective picture processing: The late positive potential is modulated by motivational relevance. Psychophysiology 2000, 37, 257–261.
  19. Pizzagalli, D.; Regard, M.; Lehmann, D. Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. Neuroreport 1999, 10, 2691–2698.
  20. Eimer, M.; Holmes, A. An ERP study on the time course of emotional face processing. Neuroreport 2002, 13, 427–431.
  21. Chanel, G.; Kronegg, J.; Grandjean, D.; Pun, T. Emotion assessment: Arousal evaluation using EEG’s and peripheral physiological signals. In Proceedings of the International Workshop on Multimedia Content Representation, Classification and Security, Istanbul, Turkey, 11 September 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 530–537.
  22. Li, M.; Lu, B.L. Emotion classification based on gamma-band EEG. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1223–1226.
  23. Zhang, Q.; Lee, M. Analysis of positive and negative emotions in natural scene using brain activity and GIST. Neurocomputing 2009, 72, 1302–1306.
  24. Murugappan, M.; Ramachandran, N.; Sazali, Y. Classification of human emotion from EEG using discrete wavelet transform. J. Biomed. Sci. Eng. 2010, 3, 390.
  25. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 2009, 14, 186–197.
  26. Bird, J.J.; Manso, L.J.; Ribeiro, E.P.; Ekárt, A.; Faria, D.R. A study on mental state classification using EEG-based brain-machine interface. In Proceedings of the 2018 International Conference on Intelligent Systems (IS), Funchal, Portugal, 25–27 September 2018; pp. 795–800.
  27. Bird, J.J.; Ekart, A.; Buckingham, C.D.; Faria, D.R. Mental emotional sentiment classification with an EEG-based brain-machine interface. In Proceedings of the International Conference on Digital Image and Signal Processing (DISP’19), Oxford, UK, 29–30 April 2019.
  28. Ali, L.; Chakraborty, C.; He, Z.; Cao, W.; Imrana, Y.; Rodrigues, J.J. A novel sample and feature dependent ensemble approach for Parkinson’s disease detection. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–14.
  29. Chakravarthi, B.; Ng, S.C.; Ezilarasan, M.; Leung, M.F. EEG-based Emotion Recognition Using Hybrid CNN and LSTM Classification. Front. Comput. Neurosci. 2022. epub ahead of print.
  30. Zhong, J.; Zeng, X.; Cao, W.; Wu, S.; Liu, C.; Yu, Z.; Wong, H.S. Semisupervised Multiple Choice Learning for Ensemble Classification. IEEE Trans. Cybern. 2020, 52, 3658–3668.