Modelling and Measuring Trust in Human–Robot Collaboration

Human–Robot Collaboration (HRC) has emerged as a critical area in the engineering and social sciences domain. In any kind of collaboration, including HRC, trust has been identified as a significant factor that can either motivate or hinder cooperation, especially in scenarios characterized by incomplete or uncertain information.

  • Human–Robot Collaboration (HRC)
  • trust dimensions
  • trust dynamics

1. Introduction

Human–Robot Collaboration (HRC) has emerged as a critical area in the engineering and social sciences domain. In any kind of collaboration, including HRC, trust has been identified as a significant factor that can either motivate or hinder cooperation, especially in scenarios characterized by incomplete or uncertain information. Despite the ubiquitous understanding of the concept of “trust”, its definition has proven to be complex due to the range of fields it applies to and the individual context in which it is studied. Several perspectives contribute to the understanding of trust, depending on the theoretical focus and the specific field of study it is applied to. While interpersonal trust is the most studied, there is also an increasing focus on the trust between humans and technology, which is essential in HRC.

2. Theoretical Foundations of Trust

Trust is a crucial determinant of effective collaboration, in both human-to-human and human-to-machine interactions. Consequently, studies on trust modelling and measurement span a variety of disciplines, including psychology, sociology, biology, neuroscience, economics, management, and computer science [1,2,3,4,5,6,7,8,9,10]. The two approaches share common knowledge but serve different purposes. Trust modelling aims to depict human trust behaviour, extrapolating individual responses to a universal level, whereas trust measurement encompasses various methods. These include subjective ratings, in which individuals personally assess trustworthiness based on their perceptions and experiences, and objective approaches that quantify involuntary bodily responses to trust-related stimuli. While subjective ratings provide insight into individuals' perceptions of trust, objective measurements offer quantifiable data that can be analysed and compared across contexts.

The multidisciplinary nature of human trust research means that defining a single modelling approach is complex. Each discipline offers a distinct perspective that, when integrated with the others, yields a holistic definition of trust:
  • Sociology views trust as a subjective probability that another party will act in a manner that does not harm one's interests amid uncertainty and ignorance [1].
  • Philosophy frames trust as a risky action deriving from personal and moral relationships between entities [3].
  • Economics defines trust as the expectation of a beneficial outcome of a risky action taken under uncertain circumstances, based on informed and calculated incentives [4].
  • Psychology characterizes trust as a cognitive learning process shaped by social experiences and the consequences of trusting behaviours [5].
  • Organizational management perceives trust as the willingness to take risks and be vulnerable in a relationship, predicated on factors such as ability, integrity, and benevolence [6].
  • International relations views trust as the belief that the other party is trustworthy, together with a willingness to reciprocate cooperation [7].
  • Automation research conceptualizes trust as the attitude that one agent will achieve another agent's goal in situations characterized by imperfect knowledge, uncertainty, and vulnerability [8].
  • Computing and networking understand trust as the estimated subjective probability that an entity will exhibit reliable behaviour for specific operations under conditions of potential risk [9].
In general terms, trust is a relationship in which a subject (the trustor) interacts with an actor (the trustee) under uncertain conditions to attain an anticipated goal. Trust then manifests as the willingness of the trustor to take risks, based on a subjective belief and a cognitive assessment of past experiences, that the trustee will behave reliably so as to optimize the trustor's interest under uncertainty [2]. This definition emphasizes several aspects of the nature of trust (a minimal computational sketch of this reading follows the list below):
  • A subjective belief: trust perception relies heavily on individual interactions and preconceived notions of the other party's behaviour.
  • Optimization of the trustor's interest: the profit or loss that trustor and trustee incur through their interactions reveals the influence of trust/distrust dynamics.
  • Interaction under uncertain conditions: the trustor acts on the expected behaviour of the trustee to optimize the anticipated outcome, but the interaction may yield suboptimal or even harmful results.
  • Cognitive assessment of past experiences: trust is dynamic in nature, initially shaped by preconceived subjective beliefs but evolving with ongoing interactions.
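To make the "estimated subjective probability" reading concrete, the minimal Python sketch below (an illustration, not a model taken from the cited works) treats trust as a Beta-distributed belief that the trustee will behave reliably, updated after each interaction outcome. The prior pseudo-counts stand in for the dispositional starting point; all names and values are assumptions for demonstration.

```python
# Minimal sketch: trust as a subjective probability updated by a
# cognitive assessment of past experiences (Beta-Bernoulli belief).
from dataclasses import dataclass

@dataclass
class BetaTrust:
    alpha: float = 1.0  # prior pseudo-count of reliable outcomes (dispositional prior)
    beta: float = 1.0   # prior pseudo-count of unreliable outcomes

    def update(self, reliable: bool) -> None:
        """Learned trust: each interaction outcome shifts the belief."""
        if reliable:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        """Point estimate of the subjective probability that the
        trustee will behave reliably on the next interaction."""
        return self.alpha / (self.alpha + self.beta)

# Example: an initially neutral trustor observes four reliable
# interactions and one failure.
t = BetaTrust()
for outcome in [True, True, False, True, True]:
    t.update(outcome)
print(f"estimated trust: {t.trust:.2f}")  # 5/7 ≈ 0.71
```

The sketch captures the dynamic nature of trust noted above: the estimate starts from a preconceived prior and is progressively dominated by accumulated experience.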
Authors have proposed varying dimensions of trust to explain different elements of trust development. Some differentiate between moralistic trust, based on previous beliefs about behaviour, and strategic trust, based on individual experiences [10]. Other perspectives delineate dispositional, situational, and learned trust as distinct categories: dispositional traits (such as age, race, education, and cultural background) influence an individual's predisposition to trust; situational factors acknowledge the conditions influencing trust formation (e.g., variations in risk); and learned trust reflects iterative development shaped by past interactions [11]. Another approach categorizes the nature of trust into three facets: Referents of Trust, the phenomena to which trust pertains, such as individual attitudes and social influences; Components of Trust, the sentiments inherent in trust, including consequential and motivational aspects; and Dimensions of Trust, the evaluative aspects of trust, including its incremental and symmetrical nature. To avoid ambiguity between the terms "trust dimensions" and "Dimensions of Trust", the denominations "Phenomenon-based", "Sentiment-based", and "Judgement-based" are used here [12]. Trust has also been described as a combination of individual trust (derived from one's own personal characteristics and comprising logical trust and emotional trust) and relational trust, the dimensions of trust that arise from the relationship with other entities [2]. Even though different authors use different classifications, it is possible to map similarities between the approaches. Setting aside minor particularities, Table 1 shows the convergence of these classifications.
Table 1. A rough similarity map between the trust dimensions proposed by different authors.
In essence, trust relies on multiple complex factors, encompassing both individual and relational aspects. Notwithstanding the diverse disciplinary perspectives and methodologies in trust research, there is considerable consensus on the fundamental concept and dimensions of trust. However, understanding, modelling, and measuring trust, particularly in human-to-machine contexts, continue to pose considerable challenges, and tackling them requires an in-depth understanding of multifaceted trust dynamics. The primary challenge lies in encapsulating the complexities of human trust in a computational model, given its subjectivity and dynamic nature. Quantifying trust is another significant obstacle, as trust is an internal and deeply personal emotion. Because trust research in the Human–Robot Collaboration domain is still young, there is little historical data and there are few testing methodologies on which to build trust models. Furthermore, implementing these models in real-world scenarios is challenging due to constraints on resources, variability in responses, and the need for instantaneous adaptation. Despite these hurdles, the potential rewards of successfully modelling and measuring trust in Human–Robot Collaboration, including enhanced efficiency, increased user acceptance, and improved safety, are immense.

3. Trust in Human–Robot Collaboration

Within the field of Human–Robot Collaboration, trust plays a crucial role and is considered a significant determining factor. Various studies, including [11,13], have investigated and identified the factors that influence trust in this collaborative context. These factors have not only been structured within a single matrix but also classified according to their origins and dimensions of influence, which is instrumental in facilitating trust and designing experimental protocols. The authors of [14] identify a series of controllable factors that correlate with trust:
  • Robot behaviour: This factor relates to the necessity for robot companions to possess social skills and be capable of real-time adaptability, taking into account individual human preferences [15,16]. In the manufacturing domain, trust variation has been studied in relation to changes in robot performance based on the human operator's muscular fatigue [17].
  • Reliability: An empirical correlation between subjective and objective trust measures was demonstrated through a series of system-failure diagnostic trials [18].
  • Level of automation: Consistent with task difficulty and complexity and corresponding automation levels, alterations in operator trust levels were noted [19].
  • Proximity: The physical or virtual presence of a robot significantly influences human perceptions and task execution efficiency [20].
  • Adaptability: A robot teammate capable of emulating the behaviours and teamwork strategies observed in human teams has a positive influence on trustworthiness and performance [21].
  • Anthropomorphism: Research in [22] showed that anthropomorphic interfaces are prone to greater trust resilience. However, the uncanny valley phenomenon described in [23] holds that a person's reaction to a humanlike robot shifts abruptly from empathy to repulsion as the robot approaches, but fails to attain, a lifelike appearance.
  • Communication: Trust levels fluctuated based on the transparency and detail encapsulated within robot-to-human communication [24,25].
  • Task type: The task variability was recorded to influence interaction performance, preference, engagement, and satisfaction [26].
Less controllable dimensions of trust include dispositional trust, which is influenced exclusively by human traits, and the organizational factors linked to the human–robot team [11,13]. These factors exhibit limited flexibility, as they depend directly on the individual or the organizational culture. Situational trust, on the other hand, is controllable and depends heavily on factors such as the characteristics of the task at hand, making it possible to manipulate it according to the experiment's objective [11,13]. Moreover, trust manifests through brainwave patterns and physiological signals, which makes these measures valuable for assessing it [27,28]. Because these elements are biologically driven, they foster a more symbiotic interaction, allowing machines to adapt to human trust levels (a hypothetical closed-loop sketch is given below). Notably, trust in HRC is dynamic and influenced by a myriad of factors. Understanding the various dimensions of trust, and the controllable and uncontrollable factors they encompass, allows for the creation of experimental protocols and strategies to enhance trust, a hypothesis evident in studies examining the triad of operator, robot, and environment [13]. The importance of fostering and maintaining trust in the HRC domain is clear, especially considering the complexity of trust in the ever-evolving landscape of human–robot interaction.
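As an illustration of how such dynamics could drive adaptation, the hypothetical Python sketch below lets a robot adjust its level of autonomy to a trust estimate updated from observed task performance. The exponential-smoothing update, the thresholds, and the mode names are assumptions for demonstration, not mechanisms reported in the cited studies.

```python
# Hypothetical closed-loop sketch: the robot adapts its autonomy to an
# estimated, dynamically evolving trust level.

def update_trust(trust: float, performance: float, rate: float = 0.2) -> float:
    """Move the trust estimate toward the latest observed robot
    performance (both in [0, 1]); `rate` sets how quickly learned
    trust overrides the dispositional starting point."""
    return (1 - rate) * trust + rate * performance

def choose_autonomy(trust: float) -> str:
    """Situational adaptation: lower estimated trust leads to more
    conservative robot behaviour (illustrative thresholds)."""
    if trust < 0.4:
        return "manual-confirmation"  # human approves each robot action
    elif trust < 0.7:
        return "shared-control"
    return "full-autonomy"

trust = 0.5  # neutral dispositional starting point
for perf in [0.9, 0.8, 0.2, 0.3, 0.9]:  # observed task outcomes
    trust = update_trust(trust, perf)
    print(f"trust={trust:.2f} -> mode={choose_autonomy(trust)}")
```

In a real system, the performance signal would be replaced or complemented by trust estimates derived from the physiological measures discussed in the next section.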

4. Measuring Trust Using Different and Combined Psychophysiological Signals

Studies on trust have traditionally been situated within the context of interpersonal relationships, primarily utilizing various questionnaires to evaluate levels of trust [29][30][31][29,30,31]. However, due to the arrival of automatic systems and the decreased cost of acquiring and analysing psychophysiological signals, focus has shifted towards examining these types of signals in response to specific stimuli in a bid to lower the subjectivity and potential biases associated with questionnaire-based approaches. Recent studies, like [9][32][33][34][9,32,33,34], have been centred on the usage of psychophysiological measurements in the study of human trust. The choice of these psychophysiological sensors can differ, depending on which human biological systems (central and peripheral nervous systems) they are applied to. A common pattern has emerged from studies in which EEG is the most used signal to measure central nervous system activity, with fMRI closely behind it—the latter being more extensively used in the context of interpersonal trust [35]. EEG analysis is increasingly being utilized in human–robot interaction evaluation and brain–computer interfaces [36], as it provides the means to create real-time non-interruptive evaluation systems enabling the assessment of human mental states such as attention, workload, and fatigue during interaction [37][38][39][37,38,39]. Additionally, attempts have been made to study trust through EEG measurements which only look at event-related potentials (ERPs), but ERP has proven to be unsuitable for real-time trust level sensing during human–machine interaction due to the difficulty in identifying triggers [33][39][33,39]. Similarly, sensors measuring signals from the peripheral nervous system, notably ECG (electrocardiography) and GSR (galvanic skin response), have been frequently used in assessing trust [35]. GSR, a classic psychophysiological signal that captures arousal based on the conductivity of the skin’s surface, not under conscious control but instead modulated by the sympathetic nervous system, has seen use in measuring stress, anxiety, and cognitive load [40]. Some research revealed that the net phasic component, as well as the maximum value of phasic activity in GSR, might play a critical role in trust detection [33]. In contrast, the use of single signals most commonly involves only EEG, succeeded by fMRI [35]. However, some studies have proposed that combining different psychophysiological signals (like GSR, ECG, EEG, etc.) improves the depth, breadth, and temporal resolution of results [41][42][41,42]. Interestingly, pupillometry has been recently highlighted as a viable method for detecting human trust, revealing that trust levels may be influenced by changes in partners’ pupil dilation [43][44][45][43,44,45].
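As a concrete illustration of the GSR features mentioned above, the Python sketch below separates a skin conductance trace into tonic and phasic components using a simple moving-average baseline (a rough stand-in for dedicated decomposition methods, not the procedure used in [33]) and computes the net phasic component and the maximum phasic value. The window length and the synthetic signal are assumptions for demonstration only.

```python
# Illustrative GSR feature extraction: tonic/phasic decomposition via a
# moving-average baseline, then net and peak phasic activity.
import numpy as np

def phasic_features(gsr: np.ndarray, fs: float, window_s: float = 4.0):
    """Return (net phasic component, max phasic value) of a GSR trace.

    gsr      : skin conductance samples (microsiemens)
    fs       : sampling rate in Hz
    window_s : moving-average window length used as the tonic estimate
    """
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    tonic = np.convolve(gsr, kernel, mode="same")  # slow baseline (tonic level)
    phasic = gsr - tonic                           # fast, event-driven responses
    # Net phasic component: approximate area of positive deflections (µS·s).
    net_phasic = np.clip(phasic, 0, None).sum() / fs
    return net_phasic, phasic.max()

# Usage with a synthetic 30 s trace at 32 Hz: slow drift plus one response peak.
fs = 32.0
t = np.arange(0, 30, 1 / fs)
signal = 2.0 + 0.01 * t + 0.5 * np.exp(-((t - 12) ** 2) / 2)  # µS
net, peak = phasic_features(signal, fs)
print(f"net phasic: {net:.3f} µS·s, max phasic: {peak:.3f} µS")
```

Features of this kind could then be fed, alongside EEG- or ECG-derived measures, into a classifier to exploit the signal-combination benefits the studies above report.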