Survey on Physiological Computing in Human–Robot Collaboration: History

Human–robot collaboration has emerged as a prominent research topic. To enhance collaboration and ensure safety between humans and robots, researchers employ a variety of methods. One such method is physiological computing, which aims to estimate a human's psycho-physiological state by measuring physiological signals such as galvanic skin response (GSR), the electrocardiogram (ECG), heart rate variability (HRV), and the electroencephalogram (EEG). This information is then used to provide feedback to the robot.

  • physiological computing
  • human–robot collaboration
  • data collection methods

1. Introduction

The proliferation of robots is rapidly transforming the way we live, work, and interact with technology. The International Federation of Robotics (IFR) reports that the number of robots worldwide increased by 11% between 2019 and 2020, with collaborative robots being a significant contributor to this growth [1]. This expansion can be attributed to two primary factors: cost and capability. The price of robots per unit has decreased by 50% over the last five years, while their abilities have been significantly enhanced through advances in machine learning, enabling them to perform more sophisticated tasks [2]. Consequently, robots have become more intelligent and capable, and companies are increasingly deploying them in production environments.
As robots have become more integrated into the workforce, safety has become a top priority. The International Organization for Standardization (ISO) recognizes the need for safety guidelines, as outlined in ISO/TS 15066, which specifies that human–robot collisions should not cause pain or injury [3]. Consequently, safety protocols have become a central focus in industrial applications, with physical and electronic safeguards being implemented to ensure worker safety. Despite these precautions, new strategies and approaches are necessary for human–robot collaboration, where fewer standards exist to implement complex protection schemes. To address this, a new category of robots known as "collaborative robots" or "cobots" has emerged in the market. These robots, such as those from Universal Robots, the KUKA LBR iiwa, and the Rethink Robotics Sawyer, are intentionally designed to work in direct cooperation with humans in a defined workspace, reducing the severity and risk of injury due to collisions. In-depth research by Kumar et al. [4] provides insight into human–robot collaboration in terms of the awareness, intelligence, and compliance of such systems, highlighting the need for a more comprehensive understanding of the safety protocols required for human–robot interaction. As such, it is vital to continue developing safety standards and guidelines that enable humans and robots to work together seamlessly and safely.

2. Physiological Computing

Physiological computing is a multi-disciplinary field that aims to use human physiological signals to estimate and understand the psycho-physiological state of individuals. This involves recognizing, interpreting, and processing physiological signals so that a system can dynamically adjust and adapt to the user's psycho-physiological state. Areas of study within physiological computing include human–computer interaction, brain–computer interaction, and affective computing, as noted by Fairclough [5]. The ultimate goal of physiological computing is to enable programs and robots to modify their behavior in response to a user's psycho-physiological state, allowing them to interact in a socially intelligent and acceptable manner.
Physiological computing has influenced many fields, including human–computer interaction, e-learning, automotive systems, healthcare, neuroscience, marketing, and robotics [5]. In e-learning, for example, physiological computing can help a tutoring system modify its presentation style based on students' affective states such as interest, boredom, and frustration. In the automotive field, it can drive an alert system that warns surrounding vehicles when the driver is not paying attention to the road. In social robotics, physiological computing can enhance the realism of robotic pets.
The NSF Research Statement for Cyber Human Systems (2018–2019) calls for research to "improve the intelligence of increasingly autonomous systems that require varying levels of supervisory control by the human; this includes a more symbiotic relationship between human and machine through the development of systems that can sense and learn the human's cognitive and physical states while possessing the ability to sense, learn, and adapt in their environments" [6]. Thus, to provide a safe environment, a robot should sense the human's cognitive and physical state, which helps build trust between humans and robots.
In a human–robot interaction setup, a change in a robot's motion can affect human behavior; experiments such as those in [7][8] revealed similar results. The literature review in [9] highlights the use of psycho-physiological methods to evaluate human response and behavior during human–robot interactions. Continuous monitoring of physiological signals during human–robot tasks is the first step in quantifying human trust in automation. Drawing inferences from these signals and incorporating them in real time to adapt robot motion can enhance human–robot interactions. A system capable of such 'physiological computing' results in a closed human-in-the-loop (also known as a 'biocybernetics loop' [10]) system in which both the human and the robot in an HRC setup are monitored and information is shared. This approach could result in better communication, which would improve trust in automation and increase productivity.
According to Fairclough, physiological computing can be divided into two categories. The first category covers systems of sensory-motor function, which relate to extending the body schema [10]. In this category, the subject is aware that they are in control. For example, an electromyography (EMG) sensor placed on the forearm can be used as an alternative method for typing [11], or it can control a prosthetic arm. Similarly, brain–computer interaction (BCI) provides an alternative way to type via an electroencephalogram (EEG) headset.
The second category concerns creating a representation of the user's physiological state by monitoring and responding to data originating from psycho-physiological activity in the central nervous system [10]. This category is also known as biocybernetics adaptation. Biocybernetics adaptation requires detecting spontaneous changes in the user's physiological state so that the system can respond to those changes. It has many applications, such as emotion detection, anxiety detection, and mental workload estimation. For example, based on mental workload, the amount of data displayed in a flight simulator can be filtered to reduce the workload, and a computer game can change its difficulty level based on the player's anxiety level.

3. Physiological Signals

The representation of a human's psycho-physiological state requires complex analysis of physiological signals. Hence, to estimate the psycho-physiological state, a variety of physiological signals are used, such as the electrocardiogram (ECG), photoplethysmography (PPG), galvanic skin response (GSR) (also known as electrodermal activity (EDA)), electroencephalography (EEG), electromyography (EMG), respiration rate (RSP), and pupil dilation.
In addition to the physiological signals commonly used in human–robot collaboration (HRC), several other signals have potential uses in HRC research. These include arterial blood pressure (ABP); blood volume pulse (BVP); phonocardiography (PCG) signals; the electrooculogram (EOG); functional near-infrared spectroscopy (fNIRS); and biomechanical/biokinetic signals such as acceleration, angular velocity, joint angles, and force, which are generated by human movements. However, these signals are either uncommon or difficult to collect.

3.1. Electroencephalogram (EEG)

The EEG is a method for measuring the electrical activity of neurons in the brain. The EEG signal is complex; thus, extensive research on it is conducted in neuroscience and psychology. EEG signals can be collected using invasive or non-invasive methods. The non-invasive method is the most widely used way to record brain activity, while the invasive method has started to become more available and is promising [12].
Researchers categorize EEG signals by frequency band: the delta band (1–4 Hz), theta band (4–8 Hz), alpha band (8–12 Hz), beta band (13–25 Hz), and gamma band (>25 Hz). The delta band has been used in several contexts, such as the study of sleep [13]. The theta band is related to brain processes, mostly mental workload [14][15]. Alpha waves are associated with relaxed wakefulness [16], and beta waves are associated with focused attention or anxious thinking [17]. Finally, it remains unclear what gamma-band oscillations reflect.
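Since each band is a fixed frequency range, band activity is typically quantified as signal power within that range. The following minimal sketch, assuming a single-channel recording and using the band edges above (with an illustrative 45 Hz cap on gamma), estimates per-band power via Welch's method in SciPy:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 25), "gamma": (25, 45)}  # gamma capped at 45 Hz for illustration

def band_powers(eeg, fs=256.0):
    """Estimate absolute power per EEG band for a single-channel signal."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4 s windows -> 0.25 Hz resolution
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = simpson(psd[mask], x=freqs[mask])  # integrate PSD over the band
    return powers

# Synthetic check: 10 s of noise plus a 10 Hz tone should show dominant alpha power
fs = 256.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(sig, fs))
```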
It can be argued that wearing an EEG cap while working is uncomfortable. However, it must be noted that in industry, workers are already required to wear helmets or hats. Moreover, with the advent of IoT systems and wireless communication, EEG sensors are shrinking; hence, they can be embedded into headphones [18].

3.2. Electrocardiogram (ECG)

The ECG is a widely used non-invasive method for recording the electrical activity of the heart, first developed by Dr. Willem Einthoven in 1902 [19]. By measuring the electrical signals generated by the heart, the ECG can provide valuable information about the heart’s function and detect diseases such as atrial fibrillation, ischemia, and arrhythmia.
The ECG signal is characterized by a repeating pattern of heartbeats, with the QRS complex being the most prominent and recognizable feature. Typically lasting between 0.06 and 0.10 s in adults [20], the QRS complex is used to determine heart rate (HR), which is the number of R peaks observed in one minute. While other methods exist to measure HR, the ECG is the most accurate and reliable as it directly reflects the heart’s electrical activity.
Another valuable metric extracted from the ECG is heart rate variability (HRV), which quantifies the variation in the time elapsed between consecutive R peaks. HRV has been shown to be useful in detecting heart disease, such as atrial fibrillation (AF), and is also affected by an individual's state, such as exercise or rest. Sudden changes in HRV may indicate a change in emotional state or heart disease. Recent research has shown a positive correlation between HRV and emotion, indicating potential applications in emotion detection [21].
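As a concrete illustration, the sketch below detects R peaks in a single-lead ECG with SciPy and derives HR together with two common HRV statistics, SDNN and RMSSD. The amplitude threshold and refractory period are illustrative assumptions; real pipelines use dedicated detectors such as Pan–Tompkins:

```python
import numpy as np
from scipy.signal import find_peaks

def hr_and_hrv(ecg, fs=500.0):
    """Detect R peaks and compute HR (bpm), SDNN (ms), and RMSSD (ms)."""
    # R peaks: high-amplitude samples at least 0.4 s apart (~150 bpm ceiling)
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95), distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs                              # RR intervals in seconds
    hr = 60.0 / rr.mean()                                 # mean heart rate
    sdnn = 1000.0 * rr.std(ddof=1)                        # overall RR variability
    rmssd = 1000.0 * np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    return hr, sdnn, rmssd
```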

3.3. Photoplethysmography (PPG)

The photoplethysmogram (PPG) is a low-cost and convenient alternative to the traditional ECG approach for measuring heart rate and heart rate variability. Using a light source and a photodetector placed on the skin, PPG measures the amount of reflected light, which corresponds to volumetric variations in blood circulation. Unlike the ECG, which uses the QRS complex, the PPG signal relies on the inter-beat interval (IBI) for heart rate and HRV calculations.
The PPG method offers several advantages over the ECG approach: ECG electrode placement is complicated, and the signal is prone to motion noise, whereas PPG can be measured easily and non-invasively on the skin. Lu et al. demonstrated the feasibility of using PPG for heart rate and heart rate variability measurements, indicating its potential as an alternative to the ECG method [22].
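Analogously to the R-peak approach for the ECG, HRV can be derived from the PPG by detecting systolic peaks and taking the intervals between them as the IBI. A minimal sketch, with illustrative peak-detection parameters for a wrist- or finger-worn sensor:

```python
import numpy as np
from scipy.signal import find_peaks

def ibi_from_ppg(ppg, fs=64.0):
    """Extract inter-beat intervals (s) and mean HR (bpm) from a PPG waveform.
    Systolic peaks play the role of the ECG's R peaks; thresholds are illustrative."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.3)
    ibi = np.diff(peaks) / fs
    return ibi, 60.0 / ibi.mean()
```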

3.4. Galvanic Skin Response/Electrodermal Activity

Galvanic skin response (GSR), or electrodermal activity (EDA), is a physiological signal obtained by measuring skin conductivity. The conductivity of the skin changes whenever the sweat glands are triggered. This is an unconscious process controlled by the sympathetic division of the autonomic nervous system. The sympathetic division is activated by emotional moments (fear, happiness, or joy) or undesirable situations. It then triggers the sweat glands, heart, lungs, and other organs; as a result, the hands become sweaty, the heart rate increases, and breathing quickens.
The GSR signal is used in various fields such as physiological research, consumer neuroscience, marketing, media, and usability testing. GSR is measured non-invasively using two electrodes placed on the palms, fingers, or foot soles, the locations commonly used for assessing emotional arousal. The GSR signal has two components: the tonic level, known as the skin conductance level (SCL), and the phasic response, known as the skin conductance response (SCR). The tonic level varies slowly; it also differs between individuals and depends on skin dryness and hydration, so it does not provide much information about sympathetic activity. Unlike the tonic level, the phasic response changes quickly, and these fast deviations are directly related to reactions of the sympathetic division of the autonomic nervous system. The phasic response is sensitive to emotional arousal and mental load and thus provides essential information about the physiological state.
The GSR signal provides valuable information about the strength of arousal and whether it is increasing or decreasing. However, positive and negative events may produce similar GSR outputs. Therefore, GSR should be used together with another modality such as EEG, ECG, EMG, or pupil dilation.
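Since NeuroKit2 appears later as one of the feature-extraction libraries [30], a minimal sketch of the SCL/SCR decomposition using its documented EDA functions, run here on simulated data, could look like this:

```python
import neurokit2 as nk

# Simulate 3 minutes of EDA at 100 Hz containing a handful of skin conductance responses
eda = nk.eda_simulate(duration=180, sampling_rate=100, scr_number=5)

# Split into the slowly varying tonic level (SCL) and the fast phasic responses (SCR)
components = nk.eda_phasic(eda, sampling_rate=100)
print(components[["EDA_Tonic", "EDA_Phasic"]].head())
```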

3.5. Pupil Dilation/Gaze Tracking

Human visual attention can be inferred from eye movement, and this information is critical for neuromarketing and psychological studies [23]. Gaze tracking provides information about where the subject is looking, which can also be used in other fields such as robotics. For example, if a robot knows that a co-worker is not paying attention during a critical operation, it can act to notify the co-worker.
The eye not only provides information about where the subject is looking but also about pupil dilation, i.e., the change in pupil diameter. Although the pupil dilates in response to ambient light or other light-intensity changes in the environment, it has been shown to dilate due to emotional change as well [24].

3.6. Electromyography (EMG)

The EMG is a non-invasive method that measures the electrical activity generated by a muscle. EMG has been used in biocybernetics loop applications as a control input for a system or robot [25]. Another example is using facial-muscle EMG to provide information about sudden emotional changes or reactions [26][27].

3.7. Physiological Signal Features

Deep learning algorithms can learn from raw data, but they require a large dataset, which can be difficult to obtain. Compared to deep learning models, classical machine learning (ML) algorithms usually require features for training; hence, features need to be extracted from the signals. There are different methods of extracting features from a signal, such as time-domain and frequency-domain analysis. Open-source libraries simplify feature extraction tasks, such as the time series feature extraction library (TSFEL), tsfresh, and NeuroKit2. These libraries offer a range of automated feature extraction capabilities, with TSFEL extracting up to 60 different features [28][29][30].
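As an illustration of how little code such libraries require, the sketch below uses TSFEL's documented entry points to extract all configured features from one signal window; the stand-in random signal and 100 Hz sampling rate are assumptions for the example:

```python
import numpy as np
import tsfel

fs = 100                              # assumed sampling rate (Hz)
window = np.random.randn(10 * fs)     # stand-in for a 10 s physiological signal window

cfg = tsfel.get_features_by_domain()  # temporal, statistical, and spectral features
features = tsfel.time_series_features_extractor(cfg, window, fs=fs)
print(features.shape)                 # one row: the feature vector for this window
```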
In addition to deep learning and classical ML, there are other methods that rely on subsequence search and similarity measurement and are more suitable for real-time applications. For example, time series subsequence search (TSSEARCH) is a Python library that focuses on query search and time series segmentation [31]. Similarly, Rodrigues et al. proposed a practical and manageable way to automatically segment and label single-channel or multimodal biosignal data using a self-similarity matrix (SSM) computed from a feature-based representation of the signals [32].

4. Data Collection Methods

4.1. Baseline

The baseline method defines what is considered normal or typical for a subject, which is then used as a reference point during an experiment or study. This approach is often used in biocybernetic adaptation applications, such as estimating anxiety levels during computer games [33][34]. To apply the baseline method in this context, researchers typically record a subject's physiological signals before the game and mark this recording as the baseline. They then use this information to create thresholds and make decisions during the game. For instance, the game's difficulty level may be adjusted automatically based on the player's anxiety level, lowering the difficulty to improve the player's experience. Overall, the baseline method provides a useful framework for measuring and responding to physiological signals in real time, which can enhance the effectiveness of interventions in various domains.
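In practice, the baseline often reduces to summary statistics against which live readings are compared. A minimal sketch with hypothetical numbers, using a z-score threshold as a stand-in for whatever decision rule a given study adopts:

```python
import numpy as np

def make_baseline(rest_samples):
    """Summarize a pre-task rest recording as (mean, std)."""
    return np.mean(rest_samples), np.std(rest_samples)

def arousal_zscore(sample, baseline):
    """Express a live reading in standard deviations above/below baseline."""
    mu, sigma = baseline
    return (sample - mu) / sigma

# Hypothetical rest-period skin conductance levels (microsiemens)
baseline = make_baseline(np.array([2.1, 2.3, 2.2, 2.4, 2.2]))
if arousal_zscore(3.0, baseline) > 2.0:   # sustained high arousal detected
    difficulty = "easier"                 # e.g., lower the game's difficulty
```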

4.2. Pre-Trial

Compared to the baseline data collection method, the pre-trial data collection method involves collecting physiological data before each trial. These data describe the participant’s physiological state before the trial. For instance, in a study conducted by Dobbins et al. [35], participants were asked to complete a questionnaire before and after their commute for five working days. The questionnaire was used to measure the participants’ stress levels while driving. This approach enables researchers to identify changes in participants’ physiological state before and after the trial, providing valuable information about their daily commute.
However, this approach has its limitations. It requires participants to answer the same questions multiple times, which can be overwhelming and may affect the quality of the data collected. Therefore, researchers need to find ways to minimize the burden on participants while collecting accurate and reliable data.

4.3. Post-Trial

Post-trial data collection is a commonly used technique in which a visual stimulus is presented to the subject, who then evaluates it by answering a questionnaire after the trial. For instance, in a study by Kumar et al. [36], participants worked with a UR-10 robot to perform an assembly task and then completed a post-questionnaire to provide feedback on their experience.
Although this approach is widely used and provides valuable insight into participants' perceptions, it has some limitations. The subjective nature of post-questionnaires may lead to biased responses, and participants may have difficulty recalling their experience accurately. Therefore, researchers need to design their post-questionnaires carefully and ensure that they are appropriate for the study's objectives to obtain reliable and valid data. Additionally, researchers may consider using complementary data collection techniques, such as physiological measurements, to validate the results obtained through post-questionnaires.

4.4. During Trial

The during-trial data collection method involves asking participants the same question repeatedly while the trial is ongoing. This approach is valuable for monitoring trial progress, as evidenced by Sahin et al. [37], who collected perceived safety data during the trial and demonstrated that during-trial collection provides more information than the post-trial method.
To ensure the integrity of the experiment, two critical aspects of during-trial data collection must be considered. Firstly, it is essential to limit the number of questions asked since the trial has not yet concluded. Secondly, data entry should be effortless. Instead of using pen and paper to collect participant data, it would be advantageous to provide an app that enables participants to enter their responses using taps on a tablet’s screen. Alternatively, recording participant audio feedback during the trial may improve during-trial data collection.
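For instance, a tablet app could reduce each prompt to a single tap that is timestamped for later alignment with the physiological recordings. A minimal sketch, with a hypothetical file name and 1–5 rating scale:

```python
import csv
import time

def log_rating(path, rating):
    """Append one timestamped subjective rating (e.g., perceived safety, 1-5).
    The Unix timestamp allows the response to be synchronized with signal streams."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([time.time(), rating])

# Called from the tap handler of a hypothetical tablet app:
log_rating("during_trial_ratings.csv", 4)
```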
In conclusion, during-trial data collection methods provide additional information, but the questionnaire should have a limited number of questions to maintain the experiment’s integrity.

5. Data Labeling

After data collection, physiological signals need to be labeled. In some cases, the labeling can be cumbersome, especially in biocybernetics adaptation. This section will discuss commonly used data labeling techniques.

5.1. Action/Content-Related Labeling

Action/content-related labeling is commonly used in experiments with visual stimuli [38][39][40]. In a visual experiment, the exact time at which each image or video is shown is known; thus, the physiological signal can easily be annotated with the corresponding label. Similarly, in an action-related experiment, the time during which the subject repeats a gesture/action is known, so a window that captures the gesture can be labeled accordingly [11]. Savur et al. discuss the critical aspects of data collection and labeling in HRC settings and provide case studies of a human–robot collaboration experiment, covering signal synchronization and automatic event generation [11].
Action/content labeling is the simplest form of labeling, and it can be carried out during the data collection process. As a result, this method is widely adopted in fields such as physiological research, marketing, and emotion detection.
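Because stimulus and action timings are known exactly, labeling reduces to mapping event intervals onto sample indices. A minimal sketch with a hypothetical event log:

```python
import numpy as np

def label_samples(n_samples, fs, events):
    """Assign a label to every sample given (start_s, end_s, label) events;
    samples outside any event default to "rest"."""
    labels = np.full(n_samples, "rest", dtype=object)
    for start_s, end_s, label in events:
        labels[int(start_s * fs):int(end_s * fs)] = label
    return labels

# Hypothetical session: an image shown from 5-10 s, a gesture repeated from 12-15 s
fs = 100
labels = label_samples(20 * fs, fs, [(5, 10, "image_A"), (12, 15, "gesture")])
```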

5.2. Subjective Labeling

The questionnaire is a widely used tool in quantitative research, including in HRC studies. In human–robot collaboration research, questionnaires are essential for evaluating the effectiveness of various methodologies. For instance, Kumar et al. [36] used subjective responses obtained through questionnaires to compare their speed and separation monitoring methods with state-of-the-art techniques. Similarly, in emotion detection research, questionnaires are used to evaluate subjective responses to different scenes that may elicit different emotions [41]. Dobbins et al. [35] employed pre- and post-surveys to evaluate the impact of their experiment on the subjects. The survey results were quantitatively analyzed to determine if the experiment had a positive, negative, or neutral effect.
Questionnaires are useful in quantifying the subject’s preferences and evaluating the proposed methodology. Although it is common to use questionnaires, there is no standardized set of questions that researchers follow [42]. Generally, researchers create their own set of questions or modify an existing questionnaire to suit their research hypothesis. Below are some commonly used questionnaires in HRC research.
  • Godspeed was designed by Bartneck et al. [43] to standardize measurement tools for HRI. It focuses on five measurements: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Godspeed is commonly used and has been translated into several languages.
  • NASA TLX was designed for subjective workload assessment and is widely used in cognitive experiments. It measures six metrics: mental demand, physical demand, temporal demand, performance, effort, and frustration [44].
  • BEHAVE-II was developed for the assessment of robot behavior [45]. It measures the following metrics: anthropomorphism, attitude towards technology, attractiveness, likability, and trust.
  • Multidimensional Robot Attitude Scale (MRAS) is a 12-dimensional questionnaire developed by Ninomiya et al. [46]. The MRAS measures a variety of metrics such as familiarity, ease of use, interest, appearance, and social support.
  • Self-Assessment Manikin Instrument (SAM) consists of 18 questions that measure three metrics: pleasure, arousal, and dominance [47]. Unlike most surveys, the SAM asks respondents to choose between pairs of opposite emotions: calm vs. excited, unhappy vs. happy, etc.
  • Negative Attitude toward Robots Scale (NARS) was developed to measure negative attitudes toward robots in terms of negative interactions with robots, social influence, and emotions in interaction with robots. The NARS also measures discomfort, anxiety, trust, etc. [48].
  • Robot Social Attributes Scale (RoSAS) is a survey that seeks to extract metrics of social perception of a robot such as warmth, competence, and discomfort [49].
  • STAXI-2 consists of 44 questions that measure state anger, trait anger, and anger expression [50].

This entry is adapted from the peer-reviewed paper 10.3390/machines11050536

References

  1. IFR. World Robotics Report 2020; Technical Report. Available online: https://ifr.org/ifr-press-releases/news/record-2.7-million-robots-work-in-factories-around-the-globe (accessed on 27 April 2023).
  2. Korus, S. Industrial Robot Cost Declines Should Trigger Tipping Points in Demand; Technical Report. Available online: https://ark-invest.com/articles/analyst-research/industrial-robot-cost-declines/ (accessed on 27 April 2023).
  3. ISO/TS 15066:2016; Robots And Robotic Devices-Collaborative Robots. ISO: Geneva, Switzerland, 2016.
  4. Kumar, S.; Savur, C.; Sahin, F. Survey of Human–Robot Collaboration in Industrial Settings: Awareness, Intelligence, and Compliance. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 280–297.
  5. Fairclough, S.H. Fundamentals of physiological computing. Interact. Comput. 2009, 21, 133–145.
  6. NSF. Information and Intelligent Systems (IIS): Core Programs. Available online: https://www.nsf.gov/pubs/2018/nsf18570/nsf18570.htm (accessed on 27 April 2023).
  7. Kulic, D.; Croft, E.A. Anxiety detection during human–robot interaction. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 616–621.
  8. Kulić, D.; Croft, E. Physiological and subjective responses to articulated robot motion. Robotica 2007, 25, 13–27.
  9. Tiberio, L.; Cesta, A.; Belardinelli, M. Psychophysiological Methods to Evaluate User’s Response in Human Robot Interaction: A Review and Feasibility Study. Robotics 2013, 2, 92–121.
  10. Fairclough, S.H. Physiological Computing and Intelligent Adaptation. In Emotions and Affect in Human Factors and Human–Computer Interaction; Elsevier: Amsterdam, The Netherlands, 2017; pp. 539–556.
  11. Savur, C.; Sahin, F. Real-Time American Sign Language Recognition System Using Surface EMG Signal. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 497–502.
  12. Musk, E. An integrated brain-machine interface platform with thousands of channels. J. Med. Internet Res. 2019, 21, 1–14.
  13. Hughes, J.R. Electroencephalography. Basic principles, clinical applications and related fields. Electroencephalogr. Clin. Neurophysiol. 1982, 54, 473–474.
  14. Klimesch, W. Memory processes, brain oscillations and EEG synchronization. Int. J. Psychophysiol. 1996, 24, 61–100.
  15. O’Keefe, J.; Burgess, N. Theta activity, virtual navigation and the human hippocampus. Trends Cogn. Sci. 1999, 3, 403–406.
  16. Yılmaz, B.; Korkmaz, S.; Arslan, D.B.; Güngör, E.; Asyalı, M.H. Like/dislike analysis using EEG: Determination of most discriminative channels and frequencies. Comput. Methods Programs Biomed. 2014, 113, 705–713.
  17. Zhang, Y.; Chen, Y.; Bressler, S.; Ding, M. Response preparation and inhibition: The role of the cortical sensorimotor beta rhythm. Neuroscience 2008, 156, 238–246.
  18. Alcaide, R.; Agarwal, N.; Candassamy, J.; Cavanagh, S.; Lim, M.; Meschede-Krasa, B.; McIntyre, J.; Ruiz-Blondet, M.V.; Siebert, B.; Stanley, D.; et al. EEG-Based Focus Estimation Using Neurable’s Enten Headphones and Analytics Platform. bioRxiv 2021.
  19. Hurst, J.W. Naming of the Waves in the ECG, With a Brief Account of Their Genesis. Circulation 1998, 98, 1937–1942.
  20. Ali, M.; Machot, F.; Mosa, A.; Jdeed, M.; Machot, E.; Kyamakya, K. A Globally Generalized Emotion Recognition System Involving Different Physiological Signals. Sensors 2018, 18, 1905.
  21. Choi, K.H.; Kim, J.; Kwon, O.S.; Kim, M.J.; Ryu, Y.H.; Park, J.E. Is heart rate variability (HRV) an adequate tool for evaluating human emotions?—A focus on the use of the International Affective Picture System (IAPS). Psychiatry Res. 2017, 251, 192–196.
  22. Lu, G.; Yang, F.; Taylor, J.A.; Stein, J.F. A comparison of photoplethysmography and ECG recording to analyse heart rate variability in healthy subjects. J. Med. Eng. Technol. 2009, 33, 634–641.
  23. Tobii. Available online: https://www.tobii.com/ (accessed on 27 April 2023).
  24. Bonifacci, P.; Desideri, L.; Ottaviani, C. Familiarity of Faces: Sense or Feeling? J. Psychophysiol. 2015, 29, 20–25.
  25. Savur, C.; Sahin, F. American Sign Language Recognition system by using surface EMG signal. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002872–002877.
  26. Kulic, D.; Croft, E.A. Affective State Estimation for Human–Robot Interaction. IEEE Trans. Robot. 2007, 23, 991–1000.
  27. Gouizi, K.; Bereksi Reguig, F.; Maaoui, C. Emotion recognition from physiological signals. J. Med. Eng. Technol. 2011, 35, 300–307.
  28. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time Series Feature Extraction Library. SoftwareX 2020, 11, 100456.
  29. Christ, M.; Braun, N.; Neuffer, J.; Kempa-Liehr, A.W. Time Series FeatuRe Extraction on basis of Scalable Hypothesis tests (tsfresh–A Python package). Neurocomputing 2018, 307, 72–77.
  30. Makowski, D.; Pham, T.; Lau, Z.J.; Brammer, J.C.; Lespinasse, F.; Pham, H.; Schölzel, C.; Chen, S.H.A. NeuroKit2: A Python toolbox for neurophysiological signal processing. Behav. Res. Methods 2021, 53, 1689–1696.
  31. Folgado, D.; Barandas, M.; Antunes, M.; Nunes, M.L.; Liu, H.; Hartmann, Y.; Schultz, T.; Gamboa, H. TSSEARCH: Time Series Subsequence Search Library. SoftwareX 2022, 18, 101049.
  32. Rodrigues, J.; Liu, H.; Folgado, D.; Belo, D.; Schultz, T.; Gamboa, H. Feature-Based Information Retrieval of Multimodal Biosignals with a Self-Similarity Matrix: Focus on Automatic Segmentation. Biosensors 2022, 12, 1182.
  33. Rani, P.; Sarkar, N.; Liu, C. Maintaining Optimal Challenge in Computer Games through Real-Time Physiological Feedback Mechanical Engineering. In Task-Specific Information Processing in Operational and Virtual Environments: Foundations of Augmented Cognition; Taylor & Francis: Boca Raton, FL, USA, 2006; pp. 184–192.
  34. Villani, V.; Sabattini, L.; Secchi, C.; Fantuzzi, C. A Framework for Affect-Based Natural Human–robot Interaction. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1038–1044.
  35. Dobbins, C.; Fairclough, S.; Lisboa, P.; Navarro, F.F.G. A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, Greece, 19–23 March 2018; pp. 306–311.
  36. Kumar, S.; Savur, C.; Sahin, F. Dynamic Awareness of an Industrial Robotic Arm Using Time-of-Flight Laser-Ranging Sensors. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2850–2857.
  37. Sahin, M.; Savur, C. Evaluation of Human Perceived Safety during HRC Task using Multiple Data Collection Methods. In Proceedings of the 2022 17th Annual Conference System of Systems Engineering, SoSE 2022, Rochester, NY, USA, 7–11 June 2022; pp. 465–470.
  38. Kumar, S.; Sahin, F. A framework for a real time intelligent and interactive Brain Computer Interface. Comput. Electr. Eng. 2015, 43, 193–214.
  39. Artal-Sevil, J.S.; Acon, A.; Montanes, J.L.; Dominguez, J.A. Design of a Low-Cost Robotic Arm controlled by Surface EMG Sensors. In Proceedings of the 2018 XIII Technologies Applied to Electronics Teaching Conference (TAEE), Canary Island, Spain, 20–22 June 2018; pp. 1–8.
  40. Mangukiya, Y.; Purohit, B.; George, K. Electromyography(EMG) sensor controlled assistive orthotic robotic arm for forearm movement. In Proceedings of the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–4.
  41. Shu, L.; Xie, J.; Yang, M.; Li, Z.; Li, Z.; Liao, D.; Xu, X.; Yang, X. A review of emotion recognition using physiological signals. Sensors 2018, 18, 2074.
  42. Zoghbi, S.; Kulić, D.; Croft, E.; Van Der Loos, M. Evaluation of affective state estimations using an on-line reporting device during human–robot interactions. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, St. Louis, MO, USA, 11–15 October 2009; pp. 3742–3749.
  43. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int. J. Soc. Robot. 2009, 1, 71–81.
  44. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Human Mental Workload; North-Holland: Amsterdam, The Netherlands, 1988; pp. 139–183.
  45. Joosse, M.; Sardar, A.; Lohse, M.; Evers, V. BEHAVE-II: The Revised Set of Measures to Assess Users’ Attitudinal and Behavioral Responses to a Social Robot. Int. J. Soc. Robot. 2013, 5, 379–388.
  46. Ninomiya, T.; Fujita, A.; Suzuki, D.; Umemuro, H. Development of the Multi-dimensional Robot Attitude Scale: Constructs of People’s Attitudes Towards Domestic Robots. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2015; Volume 9388 LNCS, pp. 482–491.
  47. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59.
  48. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 437–454.
  49. Carpinella, C.M.; Wyman, A.B.; Perez, M.A.; Stroessner, S.J. The Robotic Social Attributes Scale (RoSAS). In Proceedings of the 2017 ACM/IEEE International Conference on Human–robot Interaction, Vienna, Austria, 6–9 March 2017; ACM: New York, NY, USA, 2017; Volume Part F1271, pp. 254–262.
  50. Spielberger, C.D. State-Trait Anger Expression Inventory–2. Available online: https://www.parinc.com/Products/Pkey/429 (accessed on 27 April 2023).