Artificial Intelligence Decision-Making Transparency and Employees' Trust

Artificial Intelligence (AI) is a new generation of technology that can interact with the environment and aims to simulate human intelligence. In recent years, more and more enterprises have introduced AI, and how to encourage employees to accept AI, use AI, and trust AI has become a hot research topic. Whether AI can be successfully integrated into enterprises and serve as a decision maker depends crucially on employees’ trust in AI. Humans’ trust in AI refers to the degree to which humans consider AI to be trustworthy. 

Keywords: AI decision-making transparency; trust; effectiveness; discomfort

1. Introduction

Artificial Intelligence (AI) is a new generation of technology that can interact with the environment and aims to simulate human intelligence [1]. In recent years, more and more enterprises have introduced AI, and how to encourage employees to accept AI, use AI, and trust AI has become a hot research topic. Whether AI can be successfully integrated into enterprises and serve as a decision maker depends crucially on employees’ trust in AI [1][2]. Humans’ trust in AI refers to the degree to which humans consider AI to be trustworthy [2]. Transparency, which reflects the degree to which humans understand the inner workings or logic of a technology, is essential for building trust in new technologies [3]. Transparency is more problematic for AI than for other technologies [1]: the operation process of AI (usually based on deep learning methods) is complex and multi-layered, and the logic behind it is difficult to understand [1]. As a result, AI’s decision-making process is considered non-transparent [1]. However, both the relationship between AI transparency and trust and the mechanism by which transparency affects trust remain unclear [4].
Many previous studies have explored the relationship between AI system transparency and humans’ trust in AI, with inconsistent conclusions. First, some studies have found a positive correlation. For example, the transparency of music recommendation systems promotes user trust [5][6]. Providing explanations for automated collaborative filtering systems can increase users’ acceptance of the systems [7], and providing explanations for recommendation systems can increase users’ trust in the systems [8].
However, some studies found no correlation. For example, Cramer et al.’s [9] study on recommendation systems in the field of cultural heritage did not find a positive effect of transparency on trust in the systems, although transparency did increase acceptance of recommendations. Meanwhile, Kim and Hinds [10] investigated the effect of robot transparency on trust and blame attribution and found no significant effects.
Finally, some studies have found an inverted U-shaped relationship. For example, advertisers develop algorithms to select the most relevant advertisements for users, but an appropriate level of transparency in advertising algorithms is needed to enhance trust and satisfaction [11]. Explanations in advertisements that are too vague or too specific produce feelings of anxiety and distrust, whereas moderate explanations enhance trust and satisfaction [11]. Kizilcec [12] found that providing students with either a high or a low level of transparency is detrimental, as both extremes confuse students and reduce their trust in a system. In other words, providing some transparent information helps promote trust, whereas providing too much or too little information may counteract this effect. Taken together, these findings on the relationship between AI transparency and trust are inconsistent, and more research is needed to explore it.
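To make the inverted U-shaped pattern concrete, the sketch below fits a quadratic curve to transparency–trust data; a negative quadratic coefficient indicates an inverted U, and the vertex estimates the transparency level at which trust peaks. The data and the quadratic specification are illustrative assumptions, not results from the studies cited above.

```python
import numpy as np

# Hypothetical data: a transparency manipulation (0-10) and mean reported
# trust (1-7 scale). Values are invented to illustrate the pattern only.
transparency = np.arange(11, dtype=float)
trust = np.array([2.1, 2.9, 3.8, 4.6, 5.2, 5.5, 5.3, 4.8, 4.1, 3.2, 2.5])

# Fit trust = b2*T^2 + b1*T + b0; an inverted U requires b2 < 0.
b2, b1, b0 = np.polyfit(transparency, trust, deg=2)
print(f"quadratic term b2 = {b2:.3f} (negative => inverted U)")

# Vertex of the parabola: the transparency level at which predicted trust peaks.
print(f"predicted trust peaks near T = {-b1 / (2 * b2):.2f}")
```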
In terms of how AI transparency affects human trust in AI, research on the mediating mechanism between the two is relatively lacking. Zhao et al. [13] investigated whether providing information about how online-shopping advice-giving systems (AGSs) work can enhance users’ trust, and they found that users’ perceived understanding of AGSs mediates the relationship between subjective AGS transparency and users’ trust in AGSs. Cramer et al. [9] studied the influence of the transparency of recommendation systems on users’ trust, taking perceived competence and perceived understanding as mediating variables; they found that the transparent versions of recommendation systems were easier to understand but were not considered more competent.
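The mediating mechanism described above can be examined with a standard indirect-effect (mediation) analysis. The sketch below simulates data consistent with a "transparency → perceived understanding → trust" chain and estimates the paths by ordinary least squares; the variable names, effect sizes, and simulated data are assumptions for illustration, not the analysis reported by Zhao et al. [13].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data consistent with a mediation chain (all values hypothetical):
# transparency (X) -> perceived understanding (M) -> trust (Y)
x = rng.integers(0, 2, n).astype(float)      # 0 = non-transparent, 1 = transparent
m = 0.8 * x + rng.normal(0, 1, n)            # path a: X -> M
y = 0.6 * m + 0.1 * x + rng.normal(0, 1, n)  # paths b (M -> Y) and c' (X -> Y)

def ols(dep, *preds):
    """Least-squares regression coefficients, intercept first."""
    design = np.column_stack([np.ones(len(dep)), *preds])
    return np.linalg.lstsq(design, dep, rcond=None)[0]

a = ols(m, x)[1]             # effect of transparency on understanding
b = ols(y, x, m)[2]          # effect of understanding on trust, holding X fixed
c_prime = ols(y, x, m)[1]    # direct effect of transparency on trust
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```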

2. AI in Enterprises Requires Trust

The application of AI to enterprises can generate a great deal of value and greatly improve the efficiency and effectiveness of enterprises [14]. For example, AI can improve the accuracy of recommendation systems and increase the confidence of users [15]. AI is beneficial to performance management, employee measurement, and evaluation in enterprises [16]. AI can enhance human capabilities by making decisions in the enterprise [17]. AI in the enterprise reduces potential conflict by standardizing decision-making procedures, thereby reducing pressure on supervisors and team leaders [18].
However, whether AI can be successfully integrated into enterprises and become the main decision maker depends critically on employees’ trust in AI [1]. First, AI as a decision maker has the power to make decisions that are highly relevant to employees and that influence them [19][20]. Therefore, trust in the context of AI decision-making is necessary, as it influences employees’ willingness to accept and follow AI decisions; trust may also promote further behavioral outcomes and attitudes related to the validity of AI decisions [2]. In addition, when AI is the primary decision maker, a lack of trust negatively affects human–AI collaboration in multiple ways. One reason is that a lack of trust can lead to brittleness in the design and use of decision support systems; if the brittleness of a system leads to poor recommendations, it is likely to strongly influence people to make bad decisions [21]. Another reason is that high-trust teams generate less uncertainty, and their problems are solved more efficiently [22]. Further, if employees do not trust AI, enterprises or organizations may be unable to apply it; a lack of trust, for example, is an important factor in the failure of sharing economy platforms [23]. Therefore, trust can enhance human–AI collaboration in an enterprise, and for AI to serve as an effective decision maker, it requires employees’ trust.

3. SOR Model

Mehrabian and Russell [24] first proposed the stimulus–organism–response (SOR) theory, which holds that when an individual encounters external stimuli (S), certain internal states (O) are generated, which in turn trigger an individual response (R). External stimuli trigger an individual’s internal state, which can be either a cognitive state or an emotional state, and the individual then decides what action to take [25]. SOR models have been used in AI scenarios. Xu et al. [26] studied the influence of a specific design of a recommendation agent interface on decision making, taking trade-off transparency as an external stimulus in the SOR model and measuring trade-off transparency at different levels. Saßmannshausen et al. [27] used the SOR model to study humans’ trust in AI, where external characteristics were the stimuli, the perception of AI characteristics was the individual internal state, and trust in AI was the individual response.
In sum, the SOR model has been used in AI scenarios in previous studies, with transparency as the external stimulus and trust in AI as the individual response. This entry argues that AI decision-making transparency is an external stimulus that conveys decision-making information to employees. AI decision-making transparency can induce not only cognitive states but also emotional states in employees. The cognitive states caused by transparency include perceived competence [9] and perceived understanding [13], among others. There are few studies on the emotional states caused by transparency; Eslami et al. [11] suggested that overly specific or overly general explanations make people feel “creepy”. An employee’s perceived transparency is the employee’s cognitive state in relation to an external transparency stimulus [13]; effectiveness and discomfort are an employee’s internal cognitive and emotional states, respectively [28]; and trust is an employee’s response.
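As a purely conceptual illustration, the SOR mapping used in this entry can be written as a minimal data model. The class and field names below are expository assumptions: transparency is the stimulus; perceived transparency, effectiveness, and discomfort are the organism's internal states; and trust is the response.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    # External stimulus (S): the AI decision-making transparency condition.
    transparency: str  # e.g., "non-transparent" or "transparent"

@dataclass
class Organism:
    # Internal states (O) evoked by the stimulus.
    perceived_transparency: float  # cognitive state
    effectiveness: float           # cognitive state
    discomfort: float              # emotional state

@dataclass
class Response:
    # Individual response (R) produced by the internal states.
    trust: float
```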

4. Algorithmic Reductionism

According to algorithmic reductionism, the quantitative characteristics of algorithmic decision making cause individuals to perceive the decision-making process as reductionist and decontextualized [29]. For example, Noble et al. [30] found that candidates believed AI could not “read between the lines”. Although current algorithms are considered highly efficient [31], algorithmic reductionism refers to how people affected by an algorithm’s decisions subjectively perceive the decision-making process, independent of the algorithm’s objective validity [29]. Existing studies have found that individuals believe AI decision-making results are obtained by statistical fitting based on limited data [32] and therefore think that AI decision-making ignores background and environmental knowledge, thereby simplifying information processing [32]. Algorithmic reductionism is thus mainly used to explain an individual’s perception of and feelings about the AI decision-making process: employees are likely to regard the AI decision-making process as reductionistic, especially when it is non-transparent.

5. Social Identity Theory

Social identity theory holds that individuals identify with their own groups through social categorization and develop in-group preferences and out-group biases [33]. In addition, people like to believe that their in-group is unique, and when an out-group begins to challenge this uniqueness, the out-group is judged negatively [34]. Negative emotions toward AI occur when employees realize that AI is becoming more and more human-like and beginning to challenge the uniqueness of human work.

6. AI Decision-Making Transparency and Employees’ Perceived Transparency

In an organizational context, transparency refers to the availability of information about how and why an organization or other entity makes decisions [35]. Transparency in decision making is divided into three levels [35]: (1) non-transparency (the final decision is simply announced to the participants); (2) transparency in rationale (the final decision and the reasons for it are announced to the participants); and (3) transparency in process (the final decision and reasons are announced, and the participants have an opportunity to observe and discuss the decision-making process) [35]. In the AI context, de Fine Licht and de Fine Licht [36] stated that a transparent AI decision-making process includes goal-setting, coding, and implementation stages. Referencing earlier studies on transparency and AI decision-making transparency, this entry defines AI decision-making non-transparency as informing employees only of the AI decision-making results, whereas AI decision-making transparency is defined as informing employees of the AI decision-making result, rationale, and process [35][36].
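The sketch below shows one hypothetical way to operationalize these three disclosure levels; the message composition is an illustrative assumption, not the stimulus material used in the cited studies.

```python
from enum import Enum, auto

class TransparencyLevel(Enum):
    # Three levels adapted from the classification in [35][36].
    NON_TRANSPARENT = auto()  # announce only the final decision
    RATIONALE = auto()        # also announce the reasons for the decision
    PROCESS = auto()          # also expose the decision-making process

def disclose(decision: str, rationale: str, process: str,
             level: TransparencyLevel) -> str:
    """Compose what an employee is shown under a given transparency level."""
    message = decision
    if level in (TransparencyLevel.RATIONALE, TransparencyLevel.PROCESS):
        message += f" Rationale: {rationale}"
    if level is TransparencyLevel.PROCESS:
        message += f" Process: {process}"
    return message

# Example: the "transparency in rationale" condition (hypothetical content).
print(disclose("Team A is assigned the night shift.",
               "Team A has accumulated the fewest night hours.",
               "Hours are tallied weekly and teams ranked by the total.",
               TransparencyLevel.RATIONALE))
```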
AI decision-making transparency is thus the degree to which an AI system releases objective information about its working mode [13], whereas employees’ perceived transparency refers to the availability of employees’ subjectively perceived information [13]. Thus, AI decision-making transparency (i.e., objective transparency) and employees’ perceived transparency (i.e., subjective transparency) are different. Zhao et al. [13] proved that objective transparency has a positive effect on subjective transparency. If an AI system provides more information (objective transparency), employees receive more information (subjective transparency); that is, more AI decision-making transparency will lead to an increase in employees’ perceived transparency [13].
Moreover, people prefer AI decision-making transparency to non-transparency for several reasons: (1) limited transparency is a common technique for hiding the interest-related information of the real stakeholders, which full transparency can prevent [37]; (2) transparency increases the public’s understanding of decision making and the decision-making process, thereby making the public more confident in decision makers [36]; (3) transparency has positive results, including increasing legitimacy, promoting accountability, supporting autonomy, and increasing the principal’s control over the agent [35][38][39][40]; and (4) transparency is a means of overcoming information asymmetry [41] and of making the public believe that the decision-making process is fair [36]. Therefore, people subjectively prefer that more information be disclosed: the more information AI provides, the more useful information people are likely to receive from it, and thus the greater their subjective transparency [13]. Hence, this entry argues that, in a human–AI collaborative work scenario where AI is the primary decision maker, AI decision-making transparency leads to greater perceived transparency than AI decision-making non-transparency.

References

  1. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660.
  2. Höddinghaus, M.; Sondern, D.; Hertel, G. The automation of leadership functions: Would people trust decision algorithms? Comput. Hum. Behav. 2021, 116, 106635.
  3. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434.
  4. Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamò-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc. 2019, 6, 2053951719860542.
  5. Sinha, R.; Swearingen, K. The role of transparency in recommender systems. In CHI’02 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2002; pp. 830–831.
  6. Kulesza, T.; Stumpf, S.; Burnett, M.; Yang, S.; Kwan, I.; Wong, W.K. Too much, too little, or just right? Ways explanations impact end users’ mental models. In 2013 IEEE Symposium on Visual Languages and Human Centric Computing; IEEE: Piscataway, NJ, USA, 2013; pp. 3–10.
  7. Herlocker, J.L.; Konstan, J.A.; Riedl, J. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM conference on Computer Supported Cooperative Work, Philadelphia, PA, USA, 2–6 December 2000; pp. 241–250.
  8. Pu, P.; Chen, L. Trust-inspiring explanation interfaces for recommender systems. Knowl. Based Syst. 2007, 20, 542–556.
  9. Cramer, H.; Evers, V.; Ramlal, S.; Van Someren, M.; Rutledge, L.; Stash, N.; Aroyo, L.; Wielinga, B. The effects of transparency on trust in and acceptance of a content-based art recommender. User Model. User-Adapt. Interact. 2008, 18, 455.
  10. Kim, T.; Hinds, P. Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication; IEEE: Piscataway, NJ, USA, 2006; pp. 80–85.
  11. Eslami, M.; Krishna Kumaran, S.R.; Sandvig, C.; Karahalios, K. Communicating algorithmic process in online behavioural advertising. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, QC, Canada, 21–27 April 2018; pp. 1–13.
  12. Kizilcec, R.F. How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 2390–2395.
  13. Zhao, R.; Benbasat, I.; Cavusoglu, H. Do users always want to know more? Investigating the relationship between system transparency and users’ trust in advice-giving systems. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, 8–14 June 2019.
  14. Kolbjørnsrud, V.; Amico, R.; Thomas, R.J. Partnering with AI: How organizations can win over skeptical managers. Strategy Leadersh. 2017, 45, 37–43.
  15. Rrmoku, K.; Selimi, B.; Ahmedi, L. Application of Trust in Recommender Systems—Utilizing Naive Bayes Classifier. Computation 2022, 10, 6.
  16. Lin, S.; Döngül, E.S.; Uygun, S.V.; Öztürk, M.B.; Huy, D.T.N.; Tuan, P.V. Exploring the Relationship between Abusive Management, Self-Efficacy and Organizational Performance in the Context of Human–Machine Interaction Technology and Artificial Intelligence with the Effect of Ergonomics. Sustainability 2022, 14, 1949.
  17. Rossi, F. Building trust in artificial intelligence. J. Int. Aff. 2018, 72, 127–134.
  18. Ötting, S.K.; Maier, G.W. The importance of procedural justice in human–machine interactions: Intelligent systems as new decision agents in organizations. Comput. Hum. Behav. 2018, 89, 27–39.
  19. Dirks, K.T.; Ferrin, D.L. Trust in leadership: Meta-analytic findings and implications for research and practice. J. Appl. Psychol. 2002, 87, 611–628.
  20. Chugunova, M.; Sele, D. We and It: An Interdisciplinary Review of the Experimental Evidence on Human-Machine Interaction; Research Paper No. 20-15; Max Planck Institute for Innovation & Competition: Munich, Germany, 2020.
  21. Smith, P.J.; McCoy, C.E.; Layton, C. Brittleness in the design of cooperative problem-solving systems: The effects on user performance. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 1997, 27, 360–371.
  22. Zand, D.E. Trust and managerial problem solving. Adm. Sci. Q. 1972, 17, 229–239.
  23. Räisänen, J.; Ojala, A.; Tuovinen, T. Building trust in the sharing economy: Current approaches and future considerations. J. Clean. Prod. 2021, 279, 123724.
  24. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; The MIT Press: Cambridge, MA, USA, 1974.
  25. Lee, S.; Ha, S.; Widdows, R. Consumer responses to high-technology products: Product attributes, cognition, and emotions. J. Bus. Res. 2011, 64, 1195–1200.
  26. Xu, J.; Benbasat, I.; Cenfetelli, R.T. The nature and consequences of trade-off transparency in the context of recommendation agents. MIS Q. 2014, 38, 379–406.
  27. Saßmannshausen, T.; Burggräf, P.; Wagner, J.; Hassenzahl, M.; Heupel, T.; Steinberg, F. Trust in artificial intelligence within production management–an exploration of antecedents. Ergonomics 2021, 64, 1333–1350.
  28. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825.
  29. Newman, D.T.; Fast, N.J.; Harmon, D.J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Dec. 2020, 160, 149–167.
  30. Noble, S.M.; Foster, L.L.; Craig, S.B. The procedural and interpersonal justice of automated application and resume screening. Int. J. Select. Assess. 2021, 29, 139–153.
  31. Wilson, H.J.; Alter, A.; Shukla, P. Companies are reimagining business processes with algorithms. Harv. Bus. Rev. 2016, 8.
  32. Balasubramanian, N.; Ye, Y.; Xu, M. Substituting human decision-making with machine learning: Implications for organizational learning. Acad. Manag. Ann. 2020, in press.
  33. Tajfel, H. Social psychology of intergroup relations. Annu. Rev. Psychol. 1982, 33, 1–39.
  34. Ferrari, F.; Paladino, M.P.; Jetten, J. Blurring human–machine distinctions: Anthropomorphic appearance in social robots as a threat to human distinctiveness. Int. J. Soc. Robot. 2016, 8, 287–302.
  35. De Fine Licht, J.; Naurin, D.; Esaiasson, P.; Gilljam, M. When does transparency generate legitimacy? Experimenting on a context-bound relationship. Gov. Int. J. Policy Adm. I. 2014, 27, 111–134.
  36. De Fine Licht, K.; de Fine Licht, J. Artificial intelligence, transparency, and public decision-making. AI Soc. 2020, 35, 917–926.
  37. Elia, J. Transparency rights, technology, and trust. Ethics Inf. Technol. 2009, 11, 145–153.
  38. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamò-Larrieux, A. Towards transparency by design for artificial intelligence. Sci. Eng. Ethics. 2020, 26, 3333–3361.
  39. Wieringa, M. What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; ACM: New York, NY, USA, 2020; pp. 1–18.
  40. De Fine Licht, J.; Naurin, D.; Esaiasson, P.; Gilljam, M. Does transparency generate legitimacy? An experimental study of procedure acceptance of open-and closed-door decision-making. QoG Work. Pap. Ser. 2011, 8, 1–32.
  41. Rawlins, B. Give the emperor a mirror: Toward developing a stakeholder measurement of organizational transparency. J. Public. Relat. Res. 2008, 21, 71–99.