Modelling and Measuring Trust in Human–Robot Collaboration

Human–Robot Collaboration (HRC) has emerged as a critical area in the engineering and social sciences domain. In any kind of collaboration, including HRC, trust has been identified as a significant factor that can either motivate or hinder cooperation, especially in scenarios characterized by incomplete or uncertain information.

  • Human–Robot Collaboration (HRC)
  • trust dimensions
  • trust dynamics

1. Introduction

Human–Robot Collaboration (HRC) has emerged as a critical area in the engineering and social sciences domain. In any kind of collaboration, including HRC, trust has been identified as a significant factor that can either motivate or hinder cooperation, especially in scenarios characterized by incomplete or uncertain information. Despite the ubiquitous understanding of the concept of “trust”, its definition has proven to be complex due to the range of fields it applies to and the individual context in which it is studied. Several perspectives contribute to the understanding of trust, depending on the theoretical focus and the specific field of study it is applied to. While interpersonal trust is the most studied, there is also an increasing focus on the trust between humans and technology, which is essential in HRC.

2. Theoretical Foundations of Trust

Trust is a crucial determinant of effective collaboration, in both human-to-human and human-to-machine interactions. Consequently, studies on trust modelling and measurement span a variety of disciplines, including psychology, sociology, biology, neuroscience, economics, management, and computer science [1,2,3,4,5,6,7,8,9,10]. These two approaches (modelling and measuring) share common knowledge but serve different purposes. Trust modelling aims to depict human trust behaviour, extrapolating individual responses to a universal level, whereas trust measurement encompasses various methods. These methods include subjective ratings, in which individuals assess trustworthiness based on their perceptions and experiences, and objective approaches that quantify involuntary bodily responses to trust-related stimuli. While subjective ratings provide insight into individuals' perceptions of trust, objective measurements offer quantifiable data that can be analysed and compared across different contexts.

The multidisciplinary nature of human trust research means that defining a single modelling approach is complex. Each discipline offers a distinct perspective on trust that, when integrated with the others, yields a holistic definition. Sociology views trust as a subjective probability that another party will act in a manner that does not harm one's interests amid uncertainty and ignorance [1], while philosophy frames trust as a risky action deriving from personal and moral relationships between entities [3]. Economics defines trust as the expectation of a beneficial outcome of a risky action taken under uncertain circumstances and based on informed and calculated incentives [4], and psychology characterizes trust as a cognitive learning process shaped by social experiences and the consequences of trusting behaviours [5]. Organizational management perceives trust as the willingness to take risks and be vulnerable in a relationship, predicated on factors such as ability, integrity, and benevolence [6], and trust in international relations is viewed as the belief that the other party is trustworthy, with a willingness to reciprocate cooperation [7]. Within the realm of automation, trust is conceptualized as the attitude that one agent will achieve another agent's goal in situations characterized by imperfect knowledge, uncertainty, and vulnerability [8], while in computing and networking contexts, trust is understood as the estimated subjective probability that an entity will exhibit reliable behaviour for specific operations under conditions of potential risk [9].

In general terms, trust is perceived as a relationship in which a subject (trustor) interacts with an actor (trustee) under uncertain conditions to attain an anticipated goal. Trust is then manifested as the willingness of the trustor to take risks based on a subjective belief and a cognitive assessment of past experiences that the trustee will demonstrate reliable behaviour to optimize the trustor's interest under uncertain situations [2]. This definition emphasizes several aspects of the nature of trust (a simple computational sketch follows the list):
  • A subjective belief: Trust perception heavily relies on individual interactions and the preconceived notion of the other’s behaviour.
  • To optimize the trustor’s interest: Profit or loss implications for both the trustor and trustee through their interactions reveal the influence of trust/distrust dynamics.
  • Interaction under uncertain conditions: The trustor’s actions rely on the expected behaviours of the trustee to optimize the anticipated outcome, but the interaction may yield suboptimal or even detrimental results.
  • Cognitive assessment of past experiences: Trust is dynamic in nature, initially influenced by preconceived subjective beliefs but evolving with ongoing interactions.
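Since trust is defined here as a subjective probability that is revised through a cognitive assessment of past experiences, these points can be made concrete with a minimal Python sketch of a Beta-reputation-style update. The class name, prior, and update rule are illustrative assumptions, not a model proposed in the cited works.

```python
from dataclasses import dataclass


@dataclass
class BetaTrustModel:
    """Toy trust model: trust = expected probability of reliable behaviour.

    `alpha` counts positive experiences and `beta` negative ones; the prior
    (alpha, beta) encodes the trustor's preconceived, subjective belief.
    """
    alpha: float = 1.0  # prior pseudo-count of reliable outcomes
    beta: float = 1.0   # prior pseudo-count of unreliable outcomes

    def update(self, outcome_reliable: bool, weight: float = 1.0) -> None:
        """Cognitive assessment of one past experience: shift the belief."""
        if outcome_reliable:
            self.alpha += weight
        else:
            self.beta += weight

    @property
    def trust(self) -> float:
        """Subjective probability that the trustee will behave reliably."""
        return self.alpha / (self.alpha + self.beta)


# Example: an initially neutral trustor revises trust over four interactions.
model = BetaTrustModel()
for reliable in [True, True, False, True]:
    model.update(reliable)
print(f"estimated trust: {model.trust:.2f}")  # prints 0.67
```

The sketch captures the dynamic character noted in the last bullet: the prior dominates early on, while accumulated interactions gradually outweigh the preconceived belief.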
Authors have proposed varying dimensions of trust to explain different elements of trust development. Some differentiate between moralistic trust, based on previous beliefs about behaviour, and strategic trust, based on individual experiences [10]. Other perspectives delineate dispositional, situational, and learned trust as distinct categories: dispositional traits, such as age, race, education, and cultural background, influence an individual's predisposition to trust; situational factors acknowledge the conditions influencing trust formation (e.g., variations in risk); and learned trust reflects iterative development shaped by past interactions [11]. Another approach categorizes the nature of trust into three facets: Referents of Trust, covering the phenomena to which trust pertains, such as individual attitudes and social influences; Components of Trust, covering the sentiments inherent in trust, including consequential and motivational aspects; and Dimensions of Trust, outlining the evaluative aspects of trust, including its incremental and symmetrical nature. To avoid ambiguity between the terms "Trust dimensions" and "Dimensions of Trust", the denominations "Phenomenon-based", "Sentiment-based", and "Judgement-based" are used here [12]. Trust has also been described as a combination of individual trust (derived from one's own personal characteristics and comprising logical trust and emotional trust) and relational trust, which refers to the dimensions of trust that arise from the relationship with other entities [2]. Although different authors use different classifications, it is possible to map similarities between the approaches. Omitting minor particularities, Table 1 shows the convergence of these classifications.
Table 1. A rough similarity map between trust dimensions among different authors.
In essence, trust relies on multiple complex factors, encompassing both individual and relational aspects. Notwithstanding the diverse disciplinary perspectives and methodologies in researching trust, there is considerable consensus on the fundamental concept and dimensions of trust. However, understanding, modelling, and measuring trust, particularly in human-to-machine contexts, continue to pose considerable challenges. Tackling these challenges requires an in-depth understanding of multifaceted trust dynamics. The primary challenge lies in encapsulating the complexities of human trust in a computational model, given its subjectivity and dynamic nature. Quantifying trust is another significant obstacle, as trust is an internal and deeply personal emotion. The novelty of trust in the Human–Robot Collaboration domain implies a lack of historical data and testing methodologies to build the trust models upon. Furthermore, implementing these models in real-world scenarios is another challenge due to constraints related to resources, variability in responses and the need for instantaneous adaptation. Despite these hurdles, the potential rewards of successfully modelling and measuring trust in Human–Robot Collaborations—including enhanced efficiency, increased user acceptance and improved safety—are immense.

3. Trust in Human–Robot Collaboration

Within the field of Human–Robot Collaboration, trust plays a crucial role and is considered a significant determining factor. Various studies, including [11,13], have dedicated efforts to investigating and identifying the factors that influence trust in this collaborative context. These factors have not only been structured within a single matrix but also classified based on their origins and dimensions of influence, which is instrumental in facilitating trust and designing experimental protocols. The authors of [14] identify a series of controllable factors correlated with trust (a hypothetical scoring sketch follows the list):
  • Robot behaviour: This factor relates to the necessity for robot companions to possess social skills and be capable of real-time adaptability, taking into account individual human preferences [15,16]. In the manufacturing domain, trust variation has been studied in correlation to changes in robot performance based on the human operator’s muscular fatigue [17].
  • Reliability: An experiential correlation between subjective and objective trust measures was demonstrated through a series of system–failure diagnostic trials [18].
  • Level of automation: Consistent with task difficulty and complexity and corresponding automation levels, alterations in operator trust levels were noted [19].
  • Proximity: The physical or virtual presence of a robot significantly influences human perceptions and task execution efficiency [20].
  • Adaptability: A robot teammate capable of emulating the behaviours and teamwork strategies observed in human teams has a positive influence on trustworthiness and performance [21].
  • Anthropomorphism: Research in [22] showed that anthropomorphic interfaces exhibit greater trust resilience. However, the uncanny valley phenomenon described in [23] states that a person’s reaction to a humanlike robot can abruptly shift from empathy to repulsion as the robot approaches, but fails to attain, a lifelike appearance.
  • Communication: Trust levels fluctuated based on the transparency and detail encapsulated within robot-to-human communication [24,25].
  • Task type: The task variability was recorded to influence interaction performance, preference, engagement, and satisfaction [26].
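For the purpose of designing an experimental protocol, the relative influence of these controllable factors can be sketched as a simple weighted score. The weights and the linear form below are purely illustrative assumptions, not values reported in [14]; the sketch only shows how manipulating a single factor (here, reliability) would shift a predicted trust score between two experimental conditions.

```python
# Hypothetical linear scoring of the controllable trust-related factors
# listed above. Weights are illustrative assumptions, not values from the
# cited meta-analysis.
FACTOR_WEIGHTS = {
    "robot_behaviour": 0.20,
    "reliability": 0.25,
    "level_of_automation": 0.10,
    "proximity": 0.05,
    "adaptability": 0.15,
    "anthropomorphism": 0.05,
    "communication": 0.15,
    "task_type": 0.05,
}


def predicted_trust(scores: dict[str, float]) -> float:
    """Combine per-factor scores in [0, 1] into a single trust estimate."""
    return sum(weight * scores.get(name, 0.0)
               for name, weight in FACTOR_WEIGHTS.items())


condition_a = {name: 0.8 for name in FACTOR_WEIGHTS}  # high on every factor
condition_b = dict(condition_a, reliability=0.3)      # degrade reliability only
print(predicted_trust(condition_a))  # approximately 0.80
print(predicted_trust(condition_b))  # approximately 0.68, driven by the reliability drop
```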
Less controllable dimensions of trust include dispositional trust, which is influenced exclusively by human traits, and the organizational factors linked to the human–robot team [11,13]. These factors exhibit limited flexibility, as they depend directly on the individual or the organizational culture. Situational trust, on the other hand, is controllable and heavily dependent on factors such as the characteristics of the task being performed, making it possible to manipulate it according to the experiment’s objective [11,13]. Moreover, trust manifests through brainwave patterns and physiological signals, which makes these signals valuable for assessing trust [27,28]. Being biologically driven, they foster a more symbiotic interaction, allowing machines to adapt to human trust levels (a simple adaptation sketch is given below). Notably, trust in HRC is dynamic and influenced by a myriad of factors. Understanding the various dimensions of trust, and the controllable and uncontrollable factors they encompass, allows for the creation of experimental protocols and strategies to enhance trust, a hypothesis evident in studies examining the triad of operator, robot, and environment [13]. The importance of fostering and maintaining trust in the HRC domain is clear, especially considering the complexity of trust in the ever-evolving landscape of human–robot interaction.
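To illustrate what adapting to a measured trust level could look like in practice, the following Python sketch maps a trust estimate to robot behaviour settings. The thresholds, setting names, and the threshold-based policy itself are hypothetical and are not taken from the cited studies.

```python
def adapt_robot_behaviour(estimated_trust: float) -> dict:
    """Map an estimated trust level in [0, 1] to illustrative robot settings.

    The thresholds and returned settings are assumptions made for the sake of
    the example; a deployed system would tune them per task and per operator.
    """
    if estimated_trust < 0.3:    # low trust: slow down and explain more
        return {"speed": "slow", "transparency": "verbose", "proximity": "far"}
    if estimated_trust < 0.7:    # moderate trust: default behaviour
        return {"speed": "normal", "transparency": "standard", "proximity": "medium"}
    return {"speed": "fast", "transparency": "minimal", "proximity": "close"}


# Example: a low trust estimate triggers the conservative configuration.
print(adapt_robot_behaviour(0.25))
```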

4. Trust Measuring Using Different and Combined Psychophysiological Signals

Studies on trust have traditionally been situated within the context of interpersonal relationships, primarily using questionnaires to evaluate levels of trust [29,30,31]. However, with the advent of automated systems and the decreasing cost of acquiring and analysing psychophysiological signals, the focus has shifted towards examining such signals in response to specific stimuli, in a bid to lower the subjectivity and potential biases associated with questionnaire-based approaches. Recent studies, such as [9,32,33,34], have centred on the use of psychophysiological measurements in the study of human trust. The choice of psychophysiological sensors differs depending on which human biological system (central or peripheral nervous system) they target.

A common pattern has emerged in which EEG is the most used signal for measuring central nervous system activity, followed closely by fMRI; the latter has been used more extensively in the context of interpersonal trust [35]. EEG analysis is increasingly utilized in human–robot interaction evaluation and brain–computer interfaces [36], as it provides the means to create real-time, non-interruptive evaluation systems that assess human mental states such as attention, workload, and fatigue during interaction [37,38,39]. Attempts have also been made to study trust through EEG measurements that consider only event-related potentials (ERPs), but ERPs have proven unsuitable for real-time trust sensing during human–machine interaction due to the difficulty of identifying triggers [33,39].

Similarly, sensors measuring signals from the peripheral nervous system, notably ECG (electrocardiography) and GSR (galvanic skin response), have frequently been used in assessing trust [35]. GSR is a classic psychophysiological signal that captures arousal based on the conductivity of the skin’s surface; it is not under conscious control but is modulated by the sympathetic nervous system, and it has been used to measure stress, anxiety, and cognitive load [40]. Some research has revealed that the net phasic component, as well as the maximum value of phasic activity in GSR, may play a critical role in trust detection [33] (a feature-extraction sketch is given below). When single signals are used, EEG is the most common choice, followed by fMRI [35]. However, some studies have proposed that combining different psychophysiological signals (such as GSR, ECG, and EEG) improves the depth, breadth, and temporal resolution of results [41,42]. Interestingly, pupillometry has recently been highlighted as a viable method for detecting human trust, revealing that trust levels may be influenced by changes in partners’ pupil dilation [43,44,45].
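As a rough illustration of the GSR features mentioned above (net phasic component and maximum phasic activity), the following Python sketch separates a raw GSR trace into tonic and phasic parts with a simple low-pass filter. The 0.05 Hz cut-off and the filter-based decomposition are simplifying assumptions standing in for dedicated EDA decomposition methods, and the synthetic signal exists only for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def gsr_trust_features(gsr: np.ndarray, fs: float) -> dict:
    """Extract the two GSR features highlighted above for trust sensing.

    The tonic (slow) component is approximated with a 0.05 Hz low-pass
    filter; subtracting it leaves the phasic (fast) component. This is a
    simplified stand-in for dedicated EDA decomposition methods.
    """
    b, a = butter(2, 0.05, btype="low", fs=fs)  # 2nd-order low-pass at 0.05 Hz
    tonic = filtfilt(b, a, gsr)                 # zero-phase filtering
    phasic = gsr - tonic
    return {
        "net_phasic": float(np.sum(phasic)),    # net phasic component
        "max_phasic": float(np.max(phasic)),    # maximum phasic activity
    }


# Example with a synthetic 60 s recording at 4 Hz: a slow drift plus one
# arousal-like burst around t = 30 s.
fs = 4.0
t = np.arange(0, 60, 1 / fs)
gsr = 5 + 0.01 * t + 0.5 * np.exp(-((t - 30) ** 2) / 8)
print(gsr_trust_features(gsr, fs))
```

Features of this kind, possibly combined with EEG- or ECG-derived measures as discussed above, would then feed a classifier that labels the operator's current trust level.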

References

  1. Gambetta, D. Can we trust trust? In Trust: Making and Breaking Cooperative Relations. Br. J. Sociol. 2000, 13, 213–237.
  2. Cho, J.-H.; Chan, K.; Adali, S. A Survey on Trust Modeling. ACM Comput. Surv. 2015, 48, 28:1–28:40.
  3. Lahno, B.; Lagerspetz, O. Trust. The tacit demand. Ethical Theory Moral Pract. 1999, 2, 433–435.
  4. James, H.S. The trust paradox: A survey of economic inquiries into the nature of trust and trustworthiness. J. Econ. Behav. Organ. 2002, 47, 291–307.
  5. Rotter, J.B. Interpersonal trust, trustworthiness, and gullibility. Am. Psychol. 1980, 35, 1.
  6. Schoorman, F.D.; Mayer, R.C.; Davis, J.H. An Integrative Model of Organizational Trust: Past, Present, and Future. Acad. Manag. Rev. 2007, 32, 344–354.
  7. Kydd, A.H. Trust and Mistrust in International Relations; Princeton University Press: Princeton, NJ, USA, 2007.
  8. Lee, J.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80.
  9. Cho, J.-H.; Swami, A.; Chen, I.-R. A Survey on Trust Management for Mobile Ad Hoc Networks. IEEE Commun. Surv. Tutor. 2010, 13, 562–583.
  10. Uslaner, E. The Moral Foundations of Trust; Cambridge University Press: Cambridge, UK, 2002.
  11. Hoff, K.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434.
  12. Romano, D.M. The Nature of Trust: Conceptual and Operational Clarification; Louisiana State University and Agricultural & Mechanical College: Baton Rouge, LA, USA, 2003.
  13. Schaefer, K. The Perception and Measurement of Human-Robot Trust. Ph.D. Thesis, University of Central Florida, Orlando, FL, USA, 2013.
  14. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; de Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527.
  15. Dautenhahn, K. Socially intelligent robots: Dimensions of human—Robot interaction. Philos. Trans. R. Soc. B Biol. Sci. 2007, 362, 679–704.
  16. Dautenhahn, K. Robots we like to live with? A developmental perspective on a personalized, life-long robot companion. In Proceedings of the International Workshop on Robot and Human Interactive Communication, Kurashiki, Japan, 20–22 September 2004.
  17. Sadrfaridpour, B.; Saeidi, H.; Burke, J.; Madathil, K.; Wang, Y. Modeling and Control of Trust in Human-Robot Collaborative Manufacturing. In Robust Intelligence and Trust in Autonomous Systems; Springer: Berlin/Heidelberg, Germany, 2016; pp. 115–141.
  18. Wiegmann, D.A.; Rich, A.; Zhang, H. Automated diagnostic aids: The effects of aid reliability on users’ trust and reliance. Theor. Issues Ergon. Sci. 2001, 2, 352–367.
  19. Moray, N.; Inagaki, T.; Itoh, M. Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. J. Exp. Psychol. Appl. 2000, 6, 44.
  20. Bainbridge, W.A.; Hart, J.; Kim, E.; Scassellati, B. The effect of presence on human-robot interaction. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 701–706.
  21. Shah, J.; Wiken, J.; Williams, B.; Breazeal, C. Improved human-robot team performance using Chaski, A human-inspired plan execution system. In Proceedings of the International Conference on Human-Robot Interaction, Lausanne, Switzerland, 6–9 March 2011; pp. 29–36.
  22. de Visser, E.J.; Monfort, S.; McKendrick, R.; Smith, M.; McKnight, P.; Krueger, F.; Parasuraman, R. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 2016, 22, 331.
  23. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley. IEEE Robot. Autom. Mag. 2012, 19, 98–100.
  24. Akash, K.; Reid, T.; Jain, N. Improving Human-Machine Collaboration Through Transparency-based Feedback—Part II: Control Design and Synthesis. IFAC-PapersOnLine 2019, 51, 322–328.
  25. Fischer, K.; Weigelin, H.M.; Bodenhagen, L. Increasing trust in human—Robot medical interactions: Effects of transparency and adaptability. Paladyn J. Behav. Robot. 2018, 9, 95–109.
  26. Li, D.; Rau, P.L.P.; Li, Y. A Cross-cultural Study: Effect of Robot Appearance and Task. Int. J. Soc. Robot. 2010, 2, 175–186.
  27. Winston, J.; Strange, B.; O’Doherty, J.; Dolan, R. Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nat. Neurosci. 2002, 5, 277–283.
  28. Xu, J.; Montague, E. Working with an invisible active user: Understanding Trust in Technology and Co-User from the Perspective of a Passive User. Interact. Comput. 2013, 25, 375–385.
  29. Leichtenstern, K.; Bee, N.; André, E.; Berkmüller, U.; Wagner, J. Physiological measurement of trust-related behavior in trust-neutral and trust-critical situations. In Proceedings of the International Conference on Trust Management, Copenhagen, Denmark, 29 June–1 July 2011; pp. 165–172.
  30. Nomura, T.; Takagi, S. Exploring effects of educational backgrounds and gender in human-robot interaction. In Proceedings of the International Conference on User Science and Engineering, Selangor, Malaysia, 29 November–1 December 2011; pp. 24–29.
  31. Soroka, S.; Helliwell, J.; Johnston, R. Measuring and Modelling Trust. In Diversity, Social Capital and the Welfare State; University of British Columbia Press: Vancouver, BC, Canada, 2003; pp. 279–303.
  32. Ajenaghughrure, I.; Sousa, S.; Kosunen, I.; Lamas, D. Predictive model to assess user trust: A psycho-physiological approach. In Proceedings of the 10th Indian Conference on Human-Computer Interaction, Hyderabad, India, 1–3 November 2019.
  33. Akash, K.; Hu, W.-L.; Jain, N.; Reid, T. A Classification Model for Sensing Human Trust in Machines Using EEG and GSR. ACM Trans. Interact. Intell. Syst. 2018, 8, 1–20.
  34. Hu, W.; Akash, K.; Jain, N.; Reid, T. Real-time sensing of trust in human-machine interactions. IFAC-PapersOnLine 2016, 49, 48–53.
  35. Ajenaghughrure, I.B.; Sousa, S.D.C.; Lamas, D. Measuring Trust with Psychophysiological Signals: A Systematic Mapping Study of Approaches Used. Multimodal Technol. Interact. 2020, 4, 63.
  36. Liu, H.; Wang, F.; Zhang, D. Inspiring Real-Time Evaluation and Optimization of Human—Robot Interaction with Psychological Findings from Human—Human Interaction. Appl. Sci. 2023, 13, 676.
  37. Cano, S.; Soto, J.; Acosta, L.; Peñeñory, V.M.; Moreira, F. Using Brain—Computer Interface to evaluate the User Experience in interactive systems. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2023, 11, 378–386.
  38. Frey, J.; Mühl, C.; Lotte, F.; Hachet, M. Review of the use of electroencephalography as an evaluation method for human-computer interaction. In Proceedings of the PhyCS 2014—International Conference on Physiological Computing Systems, Lisbon, Portugal, 7–9 January 2014.
  39. Kivikangas, M.J.; Chanel, G.; Cowley, B.; Ekman, I.; Salminen, M.; Järvelä, S.; Ravaja, N. A review of the use of psychophysiological methods in game research. J. Gaming Virtual Worlds 2011, 3, 181–199.
  40. Boudreau, C.; McCubbins, M.D.; Coulson, S. Knowing when to trust others: An ERP study of decision making after receiving information from unknown people. Soc. Cogn. Affect. Neurosci. 2008, 4, 23–34.
  41. Jacobs, S.C.; Friedman, R.; Parker, J.D.; Tofler, G.H.; Jimenez, A.H.; Muller, J.E.; Benson, H.; Stone, P.H. Use of skin conductance changes during mental stress testing as an index of autonomic arousal in cardiovascular research. Am. Heart J. 1994, 128, 1170–1177.
  42. Montague, E.; Xu, J. Understanding active and passive users: The effects of an active user using normal, hard and unreliable technologies on user assessment of trust in technology and co-user. Appl. Ergon. 2012, 43, 702–712.
  43. Montague, E.; Xu, J.; Chiou, E. Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance between Active and Passive Users in Technology-Mediated Collaborative Encounters. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 614–624.
  44. Kret, M.E.; Fischer, A.H.; De Dreu, C.K.W. Pupil Mimicry Correlates with Trust in In-Group Partners with Dilating Pupils. Psychol. Sci. 2015, 26, 1401–1410.
  45. Minadakis, G.; Lohan, K. Using Pupil Diameter to Measure Cognitive Load. arXiv 2018, arXiv:1812.07653.