Action Recognition for Human–Robot Teaming: History
Subjects: Robotics

Human–robot teaming (HrT) is being adopted in an increasing range of industries and work environments. Effective HrT relies on the success of complex and dynamic human–robot interaction. Although it may be optimal for robots to possess all the social and emotional skills to function as productive team members, certain cognitive capabilities can enable them to develop attitude-based competencies for optimizing teams. Despite the extensive research into the human–human team structure, the domain of HrT research remains relatively limited. In this sense, incorporating established human–human teaming (HhT) elements may prove practical.

  • human–robot interaction
  • human–robot collaboration
  • teaming AI
  • human–robot teaming
  • ML
  • human factors

1. Human–Robot Teaming: A System Modeling Perspective

The concept of HrT depends on merging the capabilities of humans and robots. From a system modeling perspective, a human–robot team can be regarded as a system comprising two subsystems: a human subsystem and a robot subsystem. Each subsystem has its own characteristics and capabilities, and the interaction between these two subsystems determines the overall performance of the system. In order to model the human–robot team as a system, it is necessary to understand the capabilities and limitations of each subsystem, as well as how they can interact and cooperate to achieve a common goal.
Humans possess intrinsic flexibility, cognition, and problem-solving skills [1], whereas robots offer high accuracy, speed, and repeatability [2]. As robots become capable of acting more intelligently and of operating without safety cages, manufacturers have developed guidelines and design criteria covering aspects such as autonomy and mechanical design [3,4,5,6]. The purpose of these guidelines and design criteria is to ensure the safety and reliability of the system. However, HrT poses unique challenges to these guidelines. Owing to the cognitive/computational and physiological differences between robots and humans [7], robots need to be programmed with the ability to predict and comprehend the tasks and intentions of their human teammates. Humans acquire the ability to predict the behavior of other humans over time [8], but robots must be explicitly trained to do this. In addition, when confronted with the volatility of real-world applications, robots cannot rely on a human teammate to stick to the defined task [9], nor can they consistently predict how their human partner will act when something goes wrong. A potential solution to this problem is to equip robots with explicit models of their human teammates. Ideally, these models should learn generalized features of human behavior without requiring individuals to act in a prescribed manner. To this end, machine learning (ML) offers emerging techniques for constructing cognitive models and behavioral blocks [10], providing a vital perspective for human–robot collaborative activities. By modeling the human–robot team from a system perspective, it is possible to gain a better understanding of how the team functions as a whole and to identify potential improvements or enhancements to the system.
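To make the idea of an explicit human model concrete, the sketch below shows one of the simplest possible forms such a model could take: a frequency-based predictor of a teammate's next action learned from observed action sequences. The action labels and demonstration data are invented for illustration; a deployed system would use a far richer ML model, as the text notes.

```python
from collections import Counter, defaultdict

class NextActionPredictor:
    """Frequency-based model of a human teammate's action sequences.

    Learns P(next action | current action) from observed demonstrations,
    a minimal stand-in for the richer ML models discussed in the text.
    """

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, sequences):
        # Count observed transitions between consecutive actions.
        for seq in sequences:
            for current, nxt in zip(seq, seq[1:]):
                self.transitions[current][nxt] += 1

    def predict(self, current_action):
        counts = self.transitions.get(current_action)
        if not counts:
            return None  # unseen action: no prediction possible
        return counts.most_common(1)[0][0]

# Hypothetical assembly-task demonstrations
demos = [
    ["pick_part", "align_part", "fasten", "inspect"],
    ["pick_part", "align_part", "fasten", "inspect"],
    ["pick_part", "fasten", "inspect"],
]
model = NextActionPredictor()
model.fit(demos)
print(model.predict("pick_part"))  # → align_part (seen in 2 of 3 demos)
```

Even this trivial model captures the key requirement stated above: it generalizes from observed behavior rather than forcing the human to act in a prescribed manner.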
The human–robot interaction (HRI) field has advanced, and the need for applications requiring humans and robots to form a team and collaborate to solve complicated issues has increased [11,12]. Therefore, addressing the teaming configurations and elements and showing the path to incorporate them using advanced algorithms is necessary. However, determining the optimal teaming configuration for HrT can be complicated, particularly due to the unpredictable nature of HRI, as well as the diverse range of tasks and environments that human–robot teams may encounter.

2. Trust in Human–Robot Teaming

Trust plays a vital part in collaborative efforts, particularly in the context of HrT. In teams where tasks are related and mutually dependent, the efficacy relies primarily on the trust between the human and robot team members. This mutual trust is a cornerstone of teamwork and is important for the success of HrT [13].
A human’s trust in a robot involves a belief in the robot’s proficiency [14], as demonstrated by the robot’s capacity to understand and conform to human preferences and its ability to make a valuable contribution to a common objective. In exploring the factors that influence trust in HRI, Hancock et al. [15] highlighted that various robot characteristics and performance-based factors are particularly crucial. Their research suggests that trust in HRI is most significantly influenced by aspects related to the robot’s ability and performance. Therefore, manipulating and improving these performance aspects can have a substantial impact on the level of trust established between humans and robots.
In order for robots to establish trust and work effectively alongside human teammates, we argue that gaining an understanding of human behavior would indeed enhance the robot’s adaptability and behavior, which are key performance factors. This would involve analyzing contextual patterns and predicting human task performance based on this analysis. By doing so, robots can be better equipped to anticipate the needs and actions of their human counterparts, leading to trust building and, ultimately, effective teaming. The study by Hancock et al. [15] stated that higher trust is associated with higher reliability. Therefore, the implication is that if a capacity for performance monitoring in a robot is to be developed, it also needs to be highly reliable; otherwise, it could potentially have a detrimental effect on the overall HRI trust dynamic. Hancock et al. included team collaboration characteristics and tasking factors as relevant factors of the HRI trust dynamic. However, further specification of the team- and task-related effects could not be provided because of an insufficient number of empirically codable studies. To gain more insights, additional research experiments are required in this field.
Overall, trust in HrT is a dynamic element that should facilitate smooth interactions, effective task execution, and adaptive coordination between human and robot team members. The dataset we used for this preliminary analysis did not contain data regarding trust and performance from the perspective of the human operator, as it was collected solely for providing action recognition capabilities. However, as stated in our study, it is considered a stepping stone for future research.

3. Translating Human–Human Teaming Characteristics into a Human–Robot Teaming Setting

Teamwork is a collection of each team member’s interlinked thoughts, behaviors, and emotions required for the team to function [16]. Teammates offer emotional support; individuals usually feel more assured when communicating with others who share a similar experience. Moreover, humans enjoy the sense of belonging that comes with being a team member, especially in a well-functioning team [17].
The main elements of teamwork include coordinating teammates’ tasks, anticipating needs based on knowledge of assigned roles, adaptability, team orientation, backup behaviors, closed-loop communication, and mutual performance monitoring. These teaming elements are drawn from the Big Five teamwork model, a theoretical foundation for implementing team learning theory in HhT structures [18].
Although many researchers have discussed teaming features in the context of HhT, relatively few studies [9,19,20,21,22,23,24,25] have underlined their importance in HrT. Incorporating HhT features into an HrT framework may increase the team’s efficacy and efficiency. By applying these elements, team members may be better able to understand each other’s strengths and weaknesses, communicate more effectively, and work together toward shared goals and objectives. Moreover, the concept may improve overall productivity and reduce the risk of errors or miscommunication. These teaming components are seen as universally applicable to collaborative processes [18]. It is imperative to adapt these elements to optimize the efficacy of HrT. To this end, we have reinterpreted these HhT elements in the context of HrT and identified current methods that hold promise for prospective implementation while also highlighting the distinct challenges associated with these approaches:
Team leadership roles can be dynamically allocated to robots or humans depending on the task. For instance, a human can lead a task requiring creativity and decision making, and a robot with advanced data processing capabilities can lead a task requiring large data analysis. In this regard, multi-agent systems [26] can be employed to assign roles. However, the challenge lies in human safety, and humans may resist being led by a robot.
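A minimal sketch of such task-dependent leadership allocation is shown below: each agent carries a capability profile, and the leader for a task is the agent whose profile best matches the task's requirements. The agent names, capability labels, and scores are all illustrative assumptions, not part of any cited system.

```python
def allocate_leader(task, agents):
    """Pick the team member whose capability profile best matches the task.

    `agents` maps agent name -> {capability: score in [0, 1]};
    `task["required"]` lists the capabilities the task demands.
    A toy stand-in for the multi-agent role-assignment methods cited above.
    """
    def task_fit(capabilities):
        return sum(capabilities.get(c, 0.0) for c in task["required"])
    return max(agents, key=lambda name: task_fit(agents[name]))

# Illustrative capability profiles for one human and one robot
agents = {
    "human_operator": {"creativity": 0.9, "decision_making": 0.8, "data_throughput": 0.2},
    "robot_arm":      {"creativity": 0.1, "decision_making": 0.3, "data_throughput": 0.95},
}
print(allocate_leader({"required": ["creativity", "decision_making"]}, agents))  # → human_operator
print(allocate_leader({"required": ["data_throughput"]}, agents))                # → robot_arm
```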
Team orientation is important to align the objectives of both human and robot team members. The field of social robotics can be instrumental in this regard. Utilizing Natural Language Processing (NLP) models and social signal processing can equip robots with social intelligence [27]. However, the implementation of NLP models requires substantial data and model training.
Backup behaviors involve team members supporting each other when required. In HrT, robots can be programmed to support human team members in tasks that are hazardous or difficult for humans. Scheduling algorithms [28] can be used to optimize backup behaviors in HrT. Additionally, multi-agent systems [26] can be used to develop a backup behavior framework. Developing algorithms that can assess the needs of human team members and switch tasks accordingly requires detailed design and testing to ensure functional team collaboration.
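The scheduling idea can be sketched as a simple greedy rebalancer: the human keeps tasks up to a workload capacity, and overflow tasks that the robot can perform are reassigned to it as backup behavior. Task names, loads, and the capacity threshold are hypothetical; real schedulers would reason over deadlines, precedence, and safety constraints.

```python
def rebalance(tasks, human_capacity):
    """Greedy backup scheduling over (name, load, robot_capable) tuples.

    The human keeps the heaviest tasks until capacity is reached; overflow
    tasks a robot can perform are reassigned to it, and the rest are flagged
    for replanning. Illustrative only.
    """
    human, robot, unassigned = [], [], []
    load = 0.0
    for name, cost, robot_capable in sorted(tasks, key=lambda t: -t[1]):
        if load + cost <= human_capacity:
            human.append(name)
            load += cost
        elif robot_capable:
            robot.append(name)       # robot backs up the overloaded human
        else:
            unassigned.append(name)  # neither teammate can absorb it now
    return human, robot, unassigned

# Hypothetical task list: (task name, workload, can a robot do it?)
tasks = [("inspect_weld", 0.5, False), ("fetch_parts", 0.3, True),
         ("log_results", 0.2, True), ("approve_batch", 0.4, False)]
print(rebalance(tasks, human_capacity=0.9))
```

Here the robot absorbs the fetching and logging work once the human's inspection and approval duties fill their capacity, which is the essence of backup behavior.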
Adaptability is a key aspect that entails the ability of a team to adjust to environmental or task-related transformations. To expedite adaptation, team members rely on shared mental models to accommodate modified task requirements. Advanced ML methods, such as reinforcement learning [29] and online optimization algorithms [30], can enable robots to adapt to environmental changes. However, the challenge lies in ensuring that the adaptation is timely, contextually appropriate, and aligned with the preferences of the human team members.
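As a concrete illustration of the reinforcement-learning route to adaptability, the sketch below runs tabular Q-learning on a toy corridor environment: the agent learns from reward alone that moving right reaches the goal. The environment, states, and hyperparameters are invented for illustration and bear no relation to any specific robotic task in the cited work.

```python
import random

def q_learning(n_states=4, n_actions=2, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy corridor: action 1 moves right toward a
    rewarding goal state, action 0 moves left. A minimal sketch of how a
    robot could adapt its policy online from experience.
    """
    random.seed(0)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        return s2, reward, s2 == n_states - 1

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection balances exploration/exploitation.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Standard Q-learning temporal-difference update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
print(policy)  # the learned policy prefers moving right in non-terminal states
```

If the environment changed (for example, the goal moved), continued updates would let the policy re-adapt, which is the adaptability property the text describes.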
Mutual performance monitoring involves understanding the performance of team members to ensure that the team is working toward its goals. This requires a comprehensive understanding of each teammate’s actions or tasks. Sensors and real-time data analytics can be employed to monitor teammates’ actions, and robots can be equipped to interpret these actions by integrating cognitive abilities using ML algorithms [31]. The development of reliable algorithms demands a significant amount of training data and involves feature selection, model selection, and hyperparameter tuning to optimize performance.
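The monitoring pipeline above can be sketched end to end with a deliberately simple classifier: sensor readings are reduced to feature vectors, and a nearest-centroid model maps each vector to an action label. The sensor features ("mean acceleration", "grip force") and action labels are assumptions made for the example; practical action recognition uses far larger feature sets and models.

```python
import math

class ActionMonitor:
    """Nearest-centroid classifier over sensor feature vectors, a minimal
    stand-in for the ML-based action recognition discussed in the text.
    """

    def fit(self, samples):
        # samples: list of (feature_vector, action_label) pairs
        sums, counts = {}, {}
        for x, y in samples:
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        # One centroid (mean feature vector) per action label.
        self.centroids = {y: [v / counts[y] for v in acc] for y, acc in sums.items()}
        return self

    def predict(self, x):
        def dist(c):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, c)))
        return min(self.centroids, key=lambda y: dist(self.centroids[y]))

# Hypothetical wrist-sensor features: (mean acceleration, grip force)
train = [([0.9, 0.1], "reach"), ([1.1, 0.2], "reach"),
         ([0.1, 0.8], "grasp"), ([0.2, 0.9], "grasp")]
monitor = ActionMonitor().fit(train)
print(monitor.predict([1.0, 0.15]))  # → reach
```

Feeding such predictions back into the robot's task planner is what turns raw sensing into the mutual performance monitoring described here.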

4. The Need for Mutual Performance Monitoring in Human–Robot Teaming

Fundamental to the development of HrT is the question of how well robots can engage in implicit collaboration, the process of synchronizing with team members’ actions based on understanding what each teammate is most likely to do or is doing. It has always been challenging for team researchers to specify the cognitive skills or attributes that enable teams to engage, adapt, and synchronize task-relevant information [32]. However, researchers have found that mutual performance monitoring (MPM) is a practical teaming component that contributes to successful teaming [17]. MPM is the capacity to monitor the tasks of other teammates while executing one’s own [18]. It is an important aspect of teaming that can be characterized in the context of HrT as the reciprocal understanding and monitoring of team actions, progress, and outcomes. Here, we consider the innate human ability to understand and interpret teammates’ actions and apply this concept to the team relations between humans and robots. It is important to note that although robots may lack the natural capabilities humans possess, they have the potential to acquire them through artificial intelligence (AI) and programming. Through MPM, team members can gain valuable insights into each other’s intentions and task-related challenges, enabling effective communication and shared decision making for successful task accomplishment. Although humans possess the innate ability to interpret their teammates’ actions or intentions [11], not all social skills may be necessary for robots. However, certain task-related cognitive abilities can empower robots to function as effective team members in collaborative settings. Furthermore, teaming abilities involve robots’ responsiveness to adaptable team dynamics and individual preferences. Robots should be equipped with the ability to adjust their behavior, communication style, and task allocation to their human teammates’ preferences and work styles [9].
In this regard, the modeling of MPM can be a step forward in achieving adaptability, allowing for enhanced teamwork and collaboration.

This entry is adapted from the peer-reviewed paper 10.3390/machines12010045
