Electroencephalogram Control Strategies: Comparison
Please note this is a comparison between Version 1 by Mostafa Orban and Version 2 by Jason Zhu.

Electroencephalography (EEG) is the most often-used brain signal in brain-machine interface applications. EEG measures the brain's electrical activity, recording signals generated by the currents of neurons firing within the brain. Several factors explain its popularity compared with other brain-wave measurement methods: EEG is non-invasive, low cost, compatible, portable, and has a high temporal resolution (on the order of 1 ms). This explains why EEG is the most widely used tool to measure brain activity.

  • brain-computer interfaces (BCI)
  • electroencephalogram (EEG)
  • exogenous EEG signals

1. Introduction

Different control mechanisms for human-robot interaction have been established. The first approach employs a control scheme that predicts or follows the subject's intention based on data acquired from the exoskeleton; only the information acquired from the exoskeleton is used to predict the user's movement intention. Two closed loops are needed in this scheme: the first controller reflects the effect of the user and the actuator on the exoskeleton [1][51]. The second control strategy utilizes a scheme based on the interaction force, which can be measured from the deformation of an elastic transmission element or structure coupled to an exoskeleton robot link. Low-level (direct) control techniques were used to a large extent in previously demonstrated electroencephalography (EEG)-based brain-computer interface (BCI)-controlled robotic arm systems. However, under a low-level control method users must issue control commands frequently, which can fatigue the user [2][76]. EEG is a popular non-invasive technology for capturing brain activity; EEG signals are analyzed and translated into control commands [3][77]. The interface between the human and the wearable robot is crucial for an efficient and successful control scheme that predicts the user's intention to move. Consequently, control schemes can be categorized according to the type of human-robot interaction: exoskeleton information is obtained from the interaction force measured between the exoskeleton and the human, while physiological signals measured from the human body reflect the user's movement intention. Electroencephalography (EEG) signals significantly impact the development of assistive rehabilitation devices, and they have recently been employed as a common way to explore human motion behavior and functions [1][51].
Human motion intention (HMI) decoded from EEG can control different kinds of robots to assist paralyzed persons with neuromuscular diseases, such as amyotrophic lateral sclerosis and stroke, in rehabilitation training. Compared with the traditional approach of repeated motion, a large body of research suggests that EEG-based assistive robots enhance patients' recovery, essentially by helping to reestablish the neural circuit between the brain and the muscles [4][78]. Brain potentials captured by scalp electrodes are converted into commands for controlling robot arms, exoskeletons, wheelchairs, or other robots through brain-computer interface algorithms. Slow cortical potentials, the event-related P300, and steady-state visual evoked potentials are several EEG processes that distinguish EEG-based brain-computer interfaces [3][77]. In terms of reliability, BCIs can be divided into dependent and independent BCIs. A dependent brain-computer interface requires the user to retain some form of motor control, such as gaze. The motor imagery-based brain-computer interface is one of the most commonly used BCI paradigms [5][79]. Independent BCIs such as P300 evoked potentials, steady-state visual evoked potentials (SSVEPs), sensorimotor rhythms, motion-onset visual evoked potentials, and slow cortical potentials can be utilized to extract control signals. SSVEPs are periodic evoked potentials (PEPs) generated by rapidly repeating visual stimulation, particularly at frequencies greater than 6 Hz; stimulation frequencies of 5–20 Hz produce the most significant response to visual inputs. SSVEPs are most prominent in the occipital and parietal lobes, and their frequency corresponds to the fundamental frequency and harmonics of the frequency-coded inputs. By extracting frequency information from this signal, an SSVEP-BCI system can identify the user's intended command, such as moving a cursor on a computer screen or operating a robot arm [6][80].
SSVEP-based BCIs have a high information transfer rate (ITR) and need little user training. They are easier to encode with more instructions without much training and show good promise for high-speed communication [2][76], though they are limited by a small number of controls. In other words, SSVEP-BCIs of various classes can be realized using flickering lights with different frequencies. These flashing stimuli, delivered using light-emitting diodes (LEDs) or a computer display, modulate EEG signals at the stimulating frequency and its harmonics. The frequency components of the SSVEP can be calculated using a lock-in analyzer system (LAS), power spectral density analysis, or canonical correlation analysis (CCA) [7][81].
BCIs based on SSVEP and the P300 component can be set up with little or no training, but they require external stimuli. In contrast, BCIs based on sensorimotor rhythms (SMR) and slow cortical potentials (SCP) do not require external input but do require significant user training [7][81]. An assistive robot can also be controlled via event-related potentials (ERPs), brain voltage fluctuations that react to certain stimuli such as sights or sounds. A lower-limb prosthesis based on the P300, the peak detected around 300 ms (250–500 ms) after a given event, has been developed to assist persons in walking. Motor imagery (MI) has also been addressed to tightly connect brain commands and bodily movement responses: a method for post-stroke rehabilitation activities that uses MI to control a robot driving the arm, by allowing individuals to visualize moving their hands, has been demonstrated [3][77]. Because of their effectiveness over traditional BCIs, hybrid BCIs (hBCIs), which "detect at least two brain modalities in a simultaneous or sequential pattern," have been emphasized for control applications [8][82]. Researchers have drawn on multiple regions of the brain to boost the number of commands, improve classification accuracy, reduce signal detection time, and shorten brain-command detection time. For example, SSVEPs and event-related potentials (ERPs) have been mixed to generate a hybrid EEG paradigm; the combination of SSVEP and P300 signals for BCI is a good example. SSVEP has also been paired with motor imagery (MI) [9][83]. EEG is also hybridized with electrooculography (EOG), functional near-infrared spectroscopy (fNIRS), electromyography (EMG), and eye trackers [10][84].

2. EEG Signal Preparation Overview

To operate external devices such as an upper- or lower-limb exoskeleton using an EEG signal, the individual must generate various brain activity patterns (motor imagery or motor execution), which are identified and translated into control commands [10][84]. The detected brain signal is preprocessed to remove artifacts and prepare it for machine learning, turning EEG signals into control commands that operate terminal devices. Feature extraction then begins, and the extracted features are submitted to a feature reduction procedure if necessary. Finally, the projected feature vectors are divided into classes according to the task. A BCI system thus has four major components: signal acquisition, signal preprocessing, feature extraction, and classification.
The user conducts MI of the limb, which is encoded in EEG readings; features describing the task are deciphered, processed, and transformed into commands to control the assistive robot equipment.
The brain signals are captured in the signal acquisition stage, which may also include noise reduction and artifact processing. Skin impedance fluctuations, electrooculographic activity, eye blinks, electrocardiographic activity, facial/body muscular EMG activity, and respiration can all cause EEG artifacts. Because the frequency ranges of these physiological signals are typically known, a bandpass filter can be an effective preprocessing tool.
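The bandpass preprocessing described above can be sketched with a zero-phase Butterworth filter. The band limits, filter order, and synthetic test signal below are illustrative assumptions (an 8–30 Hz band is a common choice for sensorimotor rhythms, but the source does not prescribe specific values).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, lo=8.0, hi=30.0, order=4):
    """Zero-phase Butterworth band-pass filter; lo/hi are band edges in Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    # filtfilt runs the filter forward and backward, avoiding phase distortion
    return filtfilt(b, a, data)

fs = 250
t = np.arange(fs * 4) / fs
# A 12 Hz oscillation plus 50 Hz line noise and a slow 0.3 Hz drift
x = (np.sin(2 * np.pi * 12 * t)
     + 0.8 * np.sin(2 * np.pi * 50 * t)
     + 2.0 * np.sin(2 * np.pi * 0.3 * t))
y = bandpass(x, fs)
```

After filtering, the 12 Hz component dominates while line noise and drift are strongly attenuated, which is exactly the artifact suppression role described above.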

3. Feature Extraction

The feature extraction stage identifies distinguishable information in the recorded brain signals. The EEG signals can then be mapped to processing vectors that contain the actual features and the discriminative features of the measured observation signals. Some methods divide the signals into short segments from which the parameters can be calculated; the segment length impacts the accuracy of the estimated features. Wavelet transforms or adaptive autoregressive components are preferred to highlight non-stationary time changes in brain signals [11][14].
Several distinct feature extraction techniques, including the autoregressive model, discrete wavelet transform, wavelet packet transform, and sample entropy, have been utilized. Redundant and irrelevant information is managed by feature selection methods, which benefits classification. To improve the performance of feature selection, a global optimization strategy based on binary particle swarm optimization (BPSO) has been presented [12][13][85,86]. To evaluate the efficacy of feature extraction, class separability experiments were conducted: using a 14-channel EEG machine, scalp EEG data were recorded from 21 healthy subjects aged 12 to 14 years who viewed images containing one of four distinct emotional stimuli (happy, calm, sad, or scared).
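The BPSO idea mentioned above can be sketched as follows: each particle is a bit mask over the feature set, velocities are mapped through a sigmoid to bit-flip probabilities, and fitness rewards masks whose selected features separate the classes. This is a minimal generic BPSO, not the cited authors' implementation; the fitness function, penalty weight, and PSO coefficients are all illustrative assumptions.

```python
import numpy as np

def bpso_select(X, y, n_particles=20, n_iter=30, seed=0):
    """Minimal binary PSO for feature selection on a two-class problem."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, n_feat))
    vel = np.zeros((n_particles, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return -np.inf
        sel = X[:, mask.astype(bool)]
        # Class-mean separation over std, minus a penalty per selected feature
        d = np.abs(sel[y == 0].mean(0) - sel[y == 1].mean(0)) / (sel.std(0) + 1e-9)
        return d.sum() - 0.2 * mask.sum()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))   # sigmoid -> probability of bit = 1
        pos = (rng.random((n_particles, n_feat)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)

# Two informative features (indices 2 and 7) among eight noise features
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 10))
X[:, 2] += 3 * y
X[:, 7] -= 3 * y
mask = bpso_select(X, y)
```

In a real BCI pipeline the fitness would typically be cross-validated classifier accuracy rather than this cheap separability score; the sigmoid-velocity update is the defining element of the binary PSO variant.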
A balanced one-way ANOVA was then used to determine the most useful EEG characteristics; statistics-based feature selection outperformed manual or multiple-variable selection. Support vector machine, k-nearest neighbor, linear discriminant analysis, naive Bayes, random forest, deep learning, and four ensemble classifiers were used to classify emotions using the most effective features [14][15][87,88]. In addition, a Markov model has been employed to process simulated EEG signals based on actual EEG signals; simulated and experimental results demonstrate that the performance of the proposed method is superior to that of widely used methods [16][17][89,90]. The proposed method can prevent the mixing of components of EEG signals with complex structures and can extract brain rhythms from EEG signals with low SNR.
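The ANOVA-based selection described above amounts to ranking each feature by its F-statistic across the class groups. A minimal sketch with synthetic four-class data (mirroring the four emotion labels) is shown below; the function name and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_rank_features(X, y):
    """Rank features by one-way ANOVA F-value across the class groups;
    higher F means the feature's mean differs more between classes."""
    groups = [X[y == c] for c in np.unique(y)]
    f_vals = np.array([f_oneway(*[g[:, j] for g in groups]).statistic
                       for j in range(X.shape[1])])
    return np.argsort(f_vals)[::-1], f_vals

rng = np.random.default_rng(0)
y = np.repeat(np.arange(4), 30)      # four classes, 30 trials each (balanced)
X = rng.standard_normal((120, 6))
X[:, 1] += y * 1.5                   # feature 1 varies systematically with class
order, f_vals = anova_rank_features(X, y)
```

The top-ranked features (the head of `order`) would then be fed to the downstream classifiers.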
The most common features in EEG-based BCIs include spatial filters, band power, time points, etc. [18][91]. In addition, stationary subspace analysis (SSA), which decomposes multivariate time series into stationary and non-stationary components, has recently been presented to cope with the non-stationarity of EEG data [11][14]; the retrieved feature vector is then used to train a classifier [19][92].
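Band power, one of the common features named above, is typically estimated from the power spectral density. The sketch below computes average Welch PSD within a band; the band definitions and the synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, band):
    """Average Welch power spectral density within a frequency band (Hz)."""
    f, psd = welch(sig, fs=fs, nperseg=fs)   # 1 Hz frequency resolution
    lo, hi = band
    return psd[(f >= lo) & (f <= hi)].mean()

fs = 250
t = np.arange(fs * 4) / fs
sig = np.sin(2 * np.pi * 10 * t)             # a strong 10 Hz (alpha) oscillation
alpha = band_power(sig, fs, (8, 13))
beta = band_power(sig, fs, (14, 30))
```

A feature vector for classification would stack such band powers across channels and bands.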

4. Classification

The complex structure of EEG signals reflecting human cognition is easily affected by small changes. As a result, a highly efficient and robust classifier is required. In a BCI system, the objective of the classification step is to recognize the user's intent from a feature vector that characterizes the brain activity provided by the feature extraction step. This goal can be achieved using regression or classification methods, though classification techniques are currently the preferred option. Regression methods use features retrieved from EEG signals as independent variables to predict user intentions; classification algorithms, on the other hand, use the extracted features as independent variables to create boundaries between the various targets in the feature space [11][14].
Classification algorithms turn the extracted data into distinct motor activities, such as hand gestures, foot movements, and word production, in motor imagery brain-computer interfaces [5][79]. Combining several signal characteristics from different modalities/devices for the same brain activity can increase classification accuracy; for example, finger tapping and hand/arm movement have been detected using a combination of EEG and fNIRS [10][84]. Machine learning (ML) and deep learning (DL) techniques have been used to classify EEG-based BCI signals. With each successive session, machine learning techniques allow the brain-computer interface to learn from the subject's brain, modifying the generated classification rules and thereby increasing the effectiveness of the system [5][79]. Machine learning algorithms are divided into three groups based on how they learn: supervised, unsupervised, and reinforcement learning [5][79]. Moreover, deep learning approaches have been shown to improve classification accuracy, and deep networks can detect latent structures or patterns in raw data [19][92]; combined with MLAs, robots can study innate movement patterns and estimate human intentions [20][93].
Various classification algorithms have been implemented, such as k-nearest neighbors (k-NN), multilayer perceptron (MLP), decision trees [19][92], convolutional neural networks (CNN) [9][83], linear discriminant analysis (LDA), and support vector machines (SVM) [5][79], with the SVM classifier outperforming classifiers such as LDA and k-NN [5][79]. When the classification accuracies of LDA, SVM, and a backpropagation neural network (BPNN) were compared, the former two classifiers produced similarly high accuracies, both greater than that of the BPNN [21][59]. Compared with PCA, a recurrent neural network (RNN) obtained a control accuracy of 94.5 percent at a time cost of 0.61 ms, whereas the PCA algorithm achieved a control accuracy of 93.1 percent at a time cost of 0.48 ms [22][94].
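To make the classifier comparison concrete, the sketch below cross-validates an SVM and LDA on synthetic two-class feature vectors (standing in for, e.g., band-power features of left- vs. right-hand motor imagery). The data and separation level are illustrative assumptions; accuracies on real EEG depend heavily on the subject and paradigm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic two-class feature vectors: class 1 is shifted in every dimension
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.standard_normal((200, 4)) + 1.2 * y[:, None]

# 5-fold cross-validated accuracy for each classifier
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
```

On such a linearly separable-ish problem both classifiers perform similarly, consistent with the observation above that SVM and LDA often reach comparably high accuracies; differences tend to emerge on noisier, higher-dimensional EEG features.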
A convolutional neural network (CNN)-based deep learning framework has been employed for inter-subject continuous decoding of MI-related electroencephalographic (EEG) signals. Results obtained on the publicly available BCI Competition IV-2b dataset show that the two training methods, adaptive moment estimation and stochastic gradient descent, yield average continuous decoding accuracies of 71.49 percent (σ = 0.42) and 70.84 percent (σ = 0.42), respectively [23][24][95,96].
The pattern recognition step follows feature classification: once the EEG signal has been classified into different shapes, a subsequent step is required to recognize the pattern they form. Statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics, and machine learning are just some of the fields that benefit from pattern recognition. The field first found its roots in statistics and engineering, and some contemporary approaches to pattern recognition apply machine learning.