Importance of Contextual Effects in Models of Anthropomorphism

The increasing presence of robots in our society raises questions about how these objects are perceived by users. Individuals seem inclined to attribute human capabilities to robots, a phenomenon called anthropomorphism. Contrary to what intuition might suggest, these attributions vary according to different factors, not only robotic factors (related to the robot itself), but also situational factors (related to the interaction setting), and human factors (related to the user). 

  • human–robot interaction
  • anthropomorphism
  • social robots
  • adults
  • children

1. Introduction

A previous study ([1], see also [2][3] for a review) showed that some psychological processes, typically reported in the literature as emerging only at a specific age, could in fact be observed much earlier when a robot was used as the experimenter. This is especially true when the robot is introduced to the child as being ignorant and slow. This paradigm, the mentor–child paradigm, only works because children attribute intentions (trying to learn) and states of mind (having a piece of information or a concept, or not) to the robot. This is what researchers call anthropomorphism. The notion is not new and has been heavily discussed in the literature [4][5][6]. Much work has been conducted on robotic factors (the design of the robot itself). Researchers review this work and emphasize that other contextual elements, not directly related to the robot itself, also contribute to anthropomorphism.

2. Contextual Effects in Models of Anthropomorphism

How can we explain our tendency to attribute human characteristics to non-human agents, even when we know that they do not actually possess these attributes? Although several explanations have been proposed, this phenomenon is generally considered a mistake [7][8][9]. Anthropomorphism could be caused by the activation of a default schema that would also apply to non-social objects whose behavior cannot be explained otherwise, such as computers and robots [10]. However, the invariance of this process is debated: Some non-human agents are more anthropomorphized than others, and some individuals show an increased tendency toward anthropomorphism [5]. Moreover, the same agent can be anthropomorphized or considered as an object depending on the situation: Anthropomorphism is, therefore, situational [11].

In this research, researchers will focus on models of anthropomorphism based on experimental studies with robots. Researchers will see that some of these models only take into account the appearance of the robot [12], while others also take into account the interaction situation in which the robot is involved.

2.1. A Contextless Model: The Mere Appearance Hypothesis

One model does not take into account the effects of the context in which the interaction takes place: the mere appearance hypothesis [12]. For the proponents of this approach, it is the appearance of the agent itself that activates processes usually used in human–human interactions. In human–human interactions, socio-cognitive processes are triggered automatically, given that humans are social agents. The misattribution of this social agent status, based only on the appearance of the agent, would trigger these processes similarly and, thus, be the cause of anthropomorphism [13][14]. Therefore, an individual confronted with a robot relies on their socio-cognitive capacities acquired from humans, due to the absence of a behavior explanation mechanism specifically tailored to robots [15]. According to the mere appearance hypothesis, humans would generalize these mechanisms to stimuli (agents or objects), depending on their superficial resemblance to humans (in appearance or behavior). This argument is founded on work showing that organisms can generalize their responses to new stimuli when they resemble the original [16]. This phenomenon explains social reactions expressed by individuals in the presence of inanimate objects, such as eye-like geometric shapes. Therefore, robots similarly prompt a generalization of stimuli, which explains the occurrence of perspective-taking toward robots [12].
According to this theory, a human takes a robot’s perspective only if the robot’s appearance resembles a human’s (rather than because a mind is perceived in the robot). If the robot does not have a human appearance, perspective-taking is weak, whereas it is strong when the robot closely resembles a human, and it persists even if the robot is perceived as strange or witless. The integration of the robot’s perspective is, thus, related to the robot’s level of anthropomorphism. Participants adopt the perspective of an iconic robot (NAO or BAXTER) more than that of an abstract robot (THYMIO) or a biologically similar non-human agent, namely a cat. When the robot (BAXTER) has a face or a head and looks at the target, participants adopt its perspective more. Similarly, the perspective-taking rate of a humanoid (i.e., strongly human-like) robot (ERICA) is higher than that observed for the iconic robot but lower than that of a human agent [12].
Furthermore, in human interactions, observing another individual performing goal-directed actions (such as gaze orientation or grasping an object) increases a person’s tendency to adopt another individual’s physical perspective [17]. The authors speculate that, similarly, observing a robot’s actions may reinforce the tendency of individuals to adopt its perspective. Indeed, in a human-like robot, when comparing a directed gaze or gesture with a blank stare, goal-directed actions seem to elicit more perspective-taking [12]. In the same study, if the robot (BAXTER) had a face but looked to the side, the participants adopted its perspective as much as when it had no face or head. This result led the authors to argue that “the impact of human-like appearance on perspective taking lies more in the goal-directed behaviors that it enables—such as gaze and reaching—than in the mere possession of the physical features per se” ([12], p. 16).
Despite these observations, others have shown that a human-like appearance is neither sufficient nor necessary to trigger anthropomorphism, since some non-human-like agents are also anthropomorphized, such as zoomorphic [18] and abstract robots [19], geometric figures [20], computers, or cars [21]. Conversely, some agents with a strong human resemblance are less anthropomorphized than agents with a moderate human resemblance [22]. In another study, children aged 4–11 attributed mental states to a moderately human-like NAO robot to the same extent as to a non-human-like vocal assistant such as Alexa [23]. Finally, the same agent can be anthropomorphized or not depending on the interaction situation (which includes the way the agent is presented to the user, but also the characteristics of the users themselves) [11]. The mere appearance hypothesis, therefore, seems insufficient to explain anthropomorphism toward robots. Thus, theoretical frameworks should also account for factors related to the interaction situation.
Given the importance of this interaction context, it is necessary to understand the psychological elements at play in such a situation to fully grasp the process of anthropomorphization [5]. This would also explain the individual variability observed.

2.2. Why We Anthropomorphize: The Three Psychological Determinants of Anthropomorphism

According to the Sociality, Effectance, and Elicited agent Knowledge (SEEK) theory, the process of anthropomorphism, which essentially applies a default model of human interaction to artificial agents, is modulated by three psychological determinants: (1) the accessibility and applicability of anthropocentric knowledge; (2) the motivation to explain and understand the behavior of other agents; (3) the desire for social contact and affiliation [5]. In this approach, it is the need to interact with and explain one’s environment that prompts individuals to anthropomorphize an object.
The human observer uses their knowledge to explain the behavior of a robot: They automatically rely on the representation elaborated from their experiences with humans, as it is more accessible and more economical. Thus, it becomes the default model used when interacting with robots in the absence of a more specific model targeted to them. It would result in the attribution of human characteristics to non-humans, to “complete a partial representation” [24]. A study supports this aspect of the SEEK theory: participants with more experience interacting with robots show a decreased propensity to anthropomorphize [25]. The more a person interacts with a robot, the more relevant a specific representation of an interaction with a robot becomes, and the less anthropomorphism is needed. Thus, children are more likely to anthropomorphize than adults [26].
The motivation to explain and understand artificial agents results from epistemophilic behavior, which reduces the uncertainty inherent in the interaction situation, all the more so when the situation is new [24][27]. Anthropomorphization thus answers individuals’ need to explain the robot’s behavior [28][29][30]. This phenomenon is all the more important when non-human entities are perceived as intentional agents with unpredictable behavior (for instance, when the robot Asimo answers questions in a random fashion) [29]. The need for individuals to understand as well as predict their environment increases the tendency toward anthropomorphism, and in turn, anthropomorphism fills this need to explain the world [29][31]. This is particularly true for anxious people, as anthropomorphism increases their sense of control [32].
Finally, anthropomorphism satisfies the desire for social contact and affiliation by providing a framework to manage interactions with non-human agents at the lowest cognitive cost. Human individuals would need to establish social links with other humans. Anthropomorphism could satisfy this need by providing a social connection with a non-human agent similar to the one that can be created with a human. The more a person feels a strong need for social contact, the greater the tendency to anthropomorphize. Hence, a high level of social isolation will lead to an increased tendency to anthropomorphize robots (for example, see [33] with AIBOs and [34] with NAO), pets [5], and objects, such as alarm clocks [32] or smartphones [35].

3. Anthropomorphizing Factors: Robotic Factors Are Not Enough

According to the theories researchers have just seen, anthropomorphism is determined by the interaction situation or the appearance of the robot. In fact, these different factors jointly modulate the tendency to anthropomorphize a robot.
Anthropomorphism is a particular process of inductive inference. It can be influenced in two different ways: top-down induction and bottom-up induction [26][36]. Acting on bottom-up inferences requires modifying the design of the robot, i.e., its appearance and shape, voice, behavior, and the quality of its movements. Activating top-down inferences requires promoting anthropomorphic beliefs, for example by attributing human socio-cognitive abilities to the robot (e.g., suggesting to participants that the robot feels pain if it falls from the table). The latter is heavily context-dependent: the situation itself can promote such beliefs, and the user’s own dispositions can also have an influence. Researchers explore all of these elements. Thus, three broad categories of factors impact anthropomorphism [37]: robot design (bottom-up induction [36]), the interaction situation, which includes situational factors (top-down induction [36]), and human factors.

4. Conclusions

Human individuals generally tend to attribute human characteristics to robots. Despite the wide methodological differences observed across studies, researchers can argue that several factors influence anthropomorphism: robotic, situational, and human factors.

Researchers have seen that many studies focus on the appearance of robots to explain the tendency of individuals to attribute human characteristics to them. However, other characteristics of the robot can also influence its perception. Among the robotic factors, in addition to the robot’s appearance, its voice, its behavior, and the quality of its movements also modulate the way it is perceived. Moreover, the context of the interaction plays a crucial role. Taking into account situational factors related to the interaction setting (anthropomorphic framing, the role held by the robot, its autonomy, the frequency of interaction) and human factors related to the user (age, gender, personality, culture, and others) seems, therefore, essential in the study of anthropomorphism.

Several theoretical frameworks have attempted to explain the nature of anthropomorphism, such as the mere appearance hypothesis [12] or the SEEK theory [5]. On the one hand, for the mere appearance hypothesis, which is a context-free model, the robot’s appearance would activate processes similar to those involved in human–human interactions, through a stimulus generalization mechanism. This theory is based solely on the robot’s appearance, but researchers have seen that other robotic factors have an impact on anthropomorphism, such as the robot’s voice, its behavior, and the quality of its movements. The SEEK theory, on the other hand, takes into account the context of interaction. According to it, anthropomorphism would be a way for individuals to explain a robot’s behavior in the most accessible and economical way possible, in order to satisfy their need to predict the environment while satisfying their desire for social contact. Thus, the mere appearance hypothesis is mostly based on robotic factors related to the robot’s design, whereas the SEEK theory focuses on situational and human factors, including some factors researchers described in this research (the frequency of interaction, the participant’s social isolation, and personality). Nevertheless, this research highlighted other situational factors (anthropomorphic framing, the robot’s role, and its autonomy) and human factors (age, gender, culture, education level, prior experience with technology, and developmental type) that impact the acceptance of robots and anthropomorphism toward them, which are not directly mentioned in the SEEK theory. Researchers will see below that the impact of these factors could still potentially be explained by this theory.

In conclusion, there are several theories to explain anthropomorphism, but they should take greater account of the other factors highlighted in this research to enable the most exhaustive possible conception of anthropomorphism. The SEEK theory seems to be the most consistent with the results observed in the literature since it includes the majority of the factors cited. Indeed, the psychological determinants involved in this theory (i.e., the accessibility of anthropocentric knowledge, the motivation of individuals to understand the behavior of other agents, and to create social links) can be impacted by all the factors researchers have listed in this research.

Although researchers have observed that many contextual factors have been explained by the SEEK theory, certain questions remain unanswered and, thus, require further research. These shortcomings emphasize the need to revise the theories explaining anthropomorphism, and the means used to measure it, in order to better understand the phenomenon. It is also important to bear in mind the methodological limitations identified in this field of research because (1) they limit the interpretation of the results obtained, and (2) they prevent the results from being generalized. Researchers recommend a precise description of the samples (age, gender, nationality) and of the robot used as these characteristics can have an impact on the interaction with the robots. The context in which the interaction takes place (the way the robot is presented, the role and autonomy assigned to it, and the duration and frequency of interaction) must also be taken into account when analyzing results.

References

  1. Jamet, F.; Masson, O.; Jacquet, B.; Stilgenbauer, J.L.; Baratgin, J. Learning by teaching with humanoid robot: A new powerful experimental tool to improve children’s learning ability. J. Robot. 2018, 2018, 4578762.
  2. Dubois-Sage, M.; Jacquet, B.; Jamet, F.; Baratgin, J. The mentor-child paradigm for individuals with autism spectrum disorders. In Proceedings of the Workshop Social Robots Personalisation at the Crossroads between Engineering and Humanities (Concatenate) at the 18th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Stockholm, Sweden, 13–16 March 2023.
  3. Baratgin, J.; Dubois-Sage, M.; Jacquet, B.; Stilgenbauer, J.L.; Jamet, F. Pragmatics in the false-belief task: Let the robot ask the question! Front. Psychol. 2020, 11, 593807.
  4. Duffy, B. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190.
  5. Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864–886.
  6. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81.
  7. Guthrie, S.E. Faces in the Clouds: A New Theory of Religion; Oxford University Press: New York, NY, USA, 1995.
  8. Dacey, M. Anthropomorphism as Cognitive Bias. Philos. Sci. 2017, 84, 1152–1164.
  9. Dacey, M.; Coane, J.H. Implicit measures of anthropomorphism: Affective priming and recognition of apparent animal emotions. Front. Psychol. 2023, 14, 1149444.
  10. Caporael, L.R. Anthropomorphism and mechanomorphism: Two faces of the human machine. Comput. Hum. Behav. 1986, 2, 215–234.
  11. Airenti, G. The Development of Anthropomorphism in Interaction: Intersubjectivity, Imagination, and Theory of Mind. Front. Psychol. 2018, 9, 2136.
  12. Zhao, X.; Malle, B.F. Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition 2022, 224, 105076.
  13. Chaminade, T.; Franklin, D.; Oztop, E.; Cheng, G. Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion. In Proceedings of the 4th International Conference on Development and Learning, Hong Kong, China, 31 July–3 August 2005; Volume 2005, pp. 96–101.
  14. Chaminade, T.; Zecca, M.; Blakemore, S.J.; Takanishi, A.; Frith, C.; Micera, S.; Dario, P.; Rizzolatti, G.; Gallese, V.; Umiltà, M. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE 2010, 5, e11577.
  15. Heyes, C.M.; Frith, C.D. The cultural evolution of mind reading. Science 2014, 344, 1243091.
  16. Shepard, R.N. Toward a universal law of generalization for psychological science. Science 1987, 237, 1317–1323.
  17. Zhao, X.; Cusimano, C.; Malle, B. Do people spontaneously take a robot’s visual perspective? In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; ACM: New York, NY, USA, 2016; pp. 335–342.
  18. Paepcke, S.; Takayama, L. Judging a bot by its cover: An experiment on expectation setting for personal robots. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; ACM: New York, NY, USA, 2010; pp. 45–52.
  19. Banks, J. Theory of mind in social robots: Replication of five established human tests. Int. J. Soc. Robot. 2020, 12, 403–414.
  20. Heider, F.; Simmel, M. An Experimental Study of Apparent Behavior. Am. J. Psychol. 1944, 57, 243–259.
  21. Reeves, B.; Nass, C.I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Center for the Study of Language and Information, Ed.; Cambridge University Press: New York, NY, USA, 1996.
  22. Zlotowski, J.; Sumioka, H.; Nishio, S.; Glas, D.; Bartneck, C.; Ishiguro, H. Persistence of the uncanny valley: The influence of repeated interactions and a robot’s attitude on its perception. Front. Psychol. 2015, 6, 883.
  23. Flanagan, T.; Wong, G.; Kushnir, T. The minds of machines: Children’s beliefs about the experiences, thoughts, and morals of familiar interactive technologies. Dev. Psychol. 2023, 59, 1017–1031.
  24. Spatola, N. L’interaction homme-robot, de l’anthropomorphisme à l’humanisation. L’Année Psychol. 2019, 119, 515–563.
  25. Kim, M.J.; Kohn, S.; Shaw, T. Does Long-Term Exposure to Robots Affect Mind Perception? An Exploratory Study. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1820–1824.
  26. Nijssen, S.R.R.; Müller, B.C.N.; Bosse, T.; Paulus, M. You, robot? The role of anthropomorphic emotion attributions in children’s sharing with a robot. Int. J. Child-Comput. Interact. 2021, 30, 332–336.
  27. Barsante, L.S.; Paixão, K.S.; Laass, K.H.; Cardoso, R.T.N.; Eiras, Ã.E.; Acebal, J.L. A model to predict the population size of the dengue fever vector based on rainfall data. arXiv 2014, arXiv:1409.7942.
  28. Waytz, A.; Gray, K.; Epley, N.; Wegner, D.M. Causes and consequences of mind perception. Trends Cogn. Sci. 2010, 14, 383–388.
  29. Waytz, A.; Morewedge, C.K.; Epley, N.; Monteleone, G.; Gao, J.H.; Cacioppo, J.T. Making sense by making sentient: Effectance motivation increases anthropomorphism. J. Personal. Soc. Psychol. 2010, 99, 410–435.
  30. Waytz, A.; Cacioppo, J.; Epley, N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 2010, 5, 219–232.
  31. Spatola, N.; Wykowska, A. The personality of anthropomorphism: How the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots. Comput. Hum. Behav. 2021, 122, 106841.
  32. Bartz, J.A.; Tchalova, K.; Fenerci, C. Reminders of Social Connection Can Attenuate Anthropomorphism: A Replication and Extension of Epley, Akalis, Waytz, and Cacioppo (2008). Psychol. Sci. 2016, 27, 1644–1650.
  33. Lee, K.M.; Jung, Y.; Kim, J.; Kim, S.R. Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human-robot interaction. Int. J. Hum.-Comput. Stud. 2006, 64, 962–973.
  34. Jung, Y.; Hahn, S. Social Robots as Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearances. arXiv 2023, arXiv:2306.02694.
  35. Wang, W. Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Comput. Hum. Behav. 2017, 68, 334–344.
  36. Nijssen, S.R.R.; Müller, B.C.N.; Baaren, R.B.v.; Paulus, M. Saving the robot or the human? Robots who feel deserve moral care. Soc. Cogn. 2019, 37, 41–56.
  37. Mubin, O.; Stevens, C.; Shahid, S.; Mahmud, A.; Dong, J.J. A review of the applicability of robots in education. Technol. Educ. Learn. 2013, 1, 13.