Artificial Intelligence Decision-Making Transparency and Employees' Trust: Comparison
Please note this is a comparison between Version 1 by Yi Li and Version 2 by Vivi Li.

Artificial Intelligence (AI) is a new generation of technology that can interact with the environment and aims to simulate human intelligence. In recent years, more and more enterprises have introduced AI, and how to encourage employees to accept AI, use AI, and trust AI has become a hot research topic. Whether AI can be successfully integrated into enterprises and serve as a decision maker depends crucially on employees’ trust in AI. Humans’ trust in AI refers to the degree to which humans consider AI to be trustworthy. 

  • AI decision-making transparency
  • trust
  • effectiveness
  • discomfort

1. Introduction

Artificial Intelligence (AI) is a new generation of technology that can interact with the environment and aims to simulate human intelligence [1]. In recent years, more and more enterprises have introduced AI, and how to encourage employees to accept AI, use AI, and trust AI has become a hot research topic. Whether AI can be successfully integrated into enterprises and serve as a decision maker depends crucially on employees’ trust in AI [1,2]. Humans’ trust in AI refers to the degree to which humans consider AI to be trustworthy [2]. Transparency, which reflects the degree to which humans understand the inner workings or logic of a technology, is essential for building trust in new technologies [3]. Transparency is more problematic for AI than for other technologies [1]. The operation process of AI (usually based on deep learning methods) is complex and multi-layered, and the logic behind it is difficult to understand [1]. As a result, AI’s decision-making process is considered non-transparent [1]. Both the relationship between AI transparency and trust and the mechanism through which AI transparency affects trust remain unclear [4].
Many previous studies have explored the relationship between AI system transparency and humans’ trust in AI, with inconsistent conclusions. First, some studies have found a positive correlation. For example, the transparency of music recommendation systems promotes user trust [5,6]. Providing explanations for automated collaborative filtering systems can increase users’ acceptance of the systems [7], and providing explanations for recommendation systems can increase users’ trust in the systems [8].
However, some studies found no correlation. For example, Cramer et al.’s [9] study on recommendation systems in the field of cultural heritage did not find a positive effect of transparency on trust in the systems, although transparency did increase acceptance of recommendations. Meanwhile, Kim and Hinds [10] investigated the effect of robot transparency on trust and blame attribution and found no significant effects.
Finally, some studies have found an inverted U-shaped relationship. For example, advertisers develop algorithms to select the most relevant advertisements for users, but an appropriate level of transparency of advertising algorithms is needed to enhance trust and satisfaction [11]. Explanations in advertisements that are too vague or too specific produce feelings of anxiety and distrust, whereas moderate explanations enhance trust and satisfaction [11]. Kizilcec [12] found that providing students with either a high or a low level of transparency is detrimental, as both extremes confuse students and reduce their trust in a system. In other words, providing some transparent information helps promote trust, whereas providing too much or too little information may counteract this effect. The research above shows that the relationship between AI transparency and trust is inconsistent, and more research is needed to explore it.
In terms of how AI transparency affects human trust in AI, research on the mediating mechanism between the two is relatively scarce. Zhao et al. [13] investigated whether providing information about how online-shopping advice-giving systems (AGSs) work can enhance users’ trust, and they found that users’ perceived understanding of AGSs plays a mediating role between subjective AGS transparency and users’ trust in AGSs. Cramer et al. [9] studied the influence of recommendation system transparency on users’ trust, taking perceived competence and perceived understanding as mediating variables; they found that the transparent versions of the recommendation system were easier to understand but were not considered more competent.

2. AI in Enterprises Requires Trust

The application of AI to enterprises can generate a great deal of value and greatly improve the efficiency and effectiveness of enterprises [17]. For example, AI can improve the accuracy of recommendation systems and increase the confidence of users [18]. AI is beneficial to performance management, employee measurement, and evaluation in enterprises [19]. AI can enhance human capabilities by making decisions in the enterprise [20]. AI in the enterprise reduces potential conflict by standardizing decision-making procedures, thereby reducing pressure on supervisors and team leaders [21]. However, whether AI can be successfully integrated into enterprises and become the main decision maker depends critically on employees’ trust in AI [1]. First, AI as a decision maker has the power to make decisions that are highly relevant to employees and that influence them [22,23]. Therefore, trust in the context of AI decision-making is necessary and influences employees’ willingness to accept and follow AI decisions; trust may also promote further behavioral outcomes and attitudes related to the validity of AI decisions [2]. In addition, when AI is the primary decision maker, a lack of trust negatively affects human–AI collaboration in multiple ways. One reason is that a lack of trust can lead to brittleness in the design and use of decision support systems; if such brittleness produces poor recommendations, it is likely to lead people to make bad decisions [24]. Another reason is that high-trust teams generate less uncertainty, and problems are solved more efficiently [25]. Further, if employees do not believe in AI, enterprises or organizations may not be able to apply AI because of trust issues. For example, a lack of trust is an important factor in the failure of sharing economy platforms [26]. Therefore, trust can enhance human–AI collaboration in an enterprise.
In sum, in order for AI to make better decisions in the enterprise, it requires employees’ trust.

3. SOR Model

Mehrabian and Russell [27] first proposed the stimulus–organism–response (SOR) theory, which holds that when an individual is incited by external stimuli (S), certain internal and physical states (O) are generated, and an individual response (R) is then triggered. External stimuli trigger an individual’s internal state, which can be either cognitive or emotional, and the individual then decides what action to take [28]. SOR models have been used in AI scenarios. Xu et al. [29] studied the influence of a specific design of a recommendation agent interface on decision making, taking trade-off transparency as the external stimulus in the SOR model and measuring it at different levels. Saßmannshausen et al. [30] used the SOR model to study humans’ trust in AI, where external characteristics were the stimuli, the perception of AI characteristics was the individual internal state, and trust in AI was the individual response. In sum, the SOR model has been used in AI scenarios in previous studies, with transparency as the external stimulus and trust in AI as the individual response. This entry argues that AI decision-making transparency is an external stimulus that conveys decision-making information to employees. AI decision-making transparency can lead not only to cognitive states but also to emotional states in employees. The cognitive states caused by transparency include perceived competence [9] and perceived understanding [13], among others. There are few studies on the emotional states caused by transparency. Eslami et al. [11] found that overly specific and overly general explanations make people feel “creepy”. An employee’s perceived transparency is the employee’s cognitive state in relation to an external transparency stimulus [13]; effectiveness and discomfort are an employee’s internal cognitive and emotional states, respectively [15]; and trust is an employee’s response.

4. Algorithmic Reductionism

According to algorithmic reductionism, the quantitative characteristics of algorithmic decision making cause individuals to perceive the decision-making process as reductionist and decontextualized [31]. For example, Nobel et al. [32] found that candidates believed AI could not “read between the lines”. Although current algorithms are considered highly efficient [14], algorithmic reductionism concerns how people affected by an algorithm’s decisions subjectively perceive the decision-making process, independent of the algorithm’s objective validity [31]. Existing studies have found that individuals believe AI decision-making results are obtained by statistical fitting based on limited data [33]. Individuals therefore think that AI decision-making ignores background and environmental knowledge [33], thereby simplifying information processing. Algorithmic reductionism is thus mainly used to explain an individual’s perception of and feelings about the AI decision-making process. Employees will perceive the AI decision-making process as reductionistic, especially when it is non-transparent.

5. Social Identity Theory

Social identity theory holds that individuals identify with their own groups through social classification and generate in-group preferences and out-group biases [34]. In addition, people like to believe that their in-group is unique, and when an out-group begins to challenge this uniqueness, the out-group is judged negatively [35]. Negative emotions toward AI occur when employees realize that AI is becoming more and more human-like and is beginning to challenge the uniqueness of human work.

6. AI Decision-Making Transparency and Employees’ Perceived Transparency

In an organizational context, transparency refers to the availability of information about how and why an organization or other entity makes decisions [36]. Decision making is divided into three levels [36]: (1) non-transparency (the final decision is simply announced to the participants); (2) transparency in rationale (the final decision and the reasons for it are announced to the participants); and (3) transparency in process (the final decision and reasons are announced, and the participants have an opportunity to observe and discuss the decision-making process) [36]. In the AI context, de Fine Licht et al. [37] stated that a transparent AI decision-making process includes goal-setting, coding, and implementation stages. Referencing earlier studies on transparency and AI decision-making transparency, this entry defines AI decision-making non-transparency as informing employees only of the AI decision-making results, whereas AI decision-making transparency is defined as informing employees of the AI decision-making result, rationale, and process [36,37]. AI decision-making transparency is thus the degree to which an AI system releases objective information about its working mode [13], whereas employees’ perceived transparency refers to the availability of the information employees subjectively perceive [13]. Thus, AI decision-making transparency (i.e., objective transparency) and employees’ perceived transparency (i.e., subjective transparency) are different. Zhao et al. [13] proved that objective transparency has a positive effect on subjective transparency. If an AI system provides more information (objective transparency), employees receive more information (subjective transparency); that is, greater AI decision-making transparency leads to an increase in employees’ perceived transparency [13].
Moreover, people prefer AI decision-making transparency to non-transparency for several reasons: (1) limited transparency is a common technique for hiding the interest-related information of the real stakeholders, which full transparency can prevent [38]; (2) transparency increases the public’s understanding of decision making and the decision-making process, thereby making the public more confident in decision makers [37]; (3) transparency has positive results, including increasing legitimacy, promoting accountability, supporting autonomy, and increasing the principal’s control over the agent [36,39,40,41]; and (4) transparency is a means to overcome information asymmetry [42] and to make the public believe that the decision-making process is fair [37]. Therefore, people subjectively prefer that more information be disclosed, and the more transparent AI decision-making is, the better they feel about it. The more information AI provides, the more useful information people are likely to receive from it; that is, subjective transparency is improved [13]. Hence, this entry argues that AI decision-making transparency leads to greater perceived transparency, compared with AI decision-making non-transparency, in the human–AI collaborative work scenario where AI is the primary decision maker.