AI-Informed Decision Making

AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their effects on user trust and perceived fairness together would therefore benefit the responsible use of socio-technical AI systems, but this currently receives little attention.

  • AI explanation
  • AI fairness
  • trust
  • perception of fairness
  • AI ethics

1. Introduction

Artificial Intelligence (AI)-informed decision-making is claimed to lead to faster and better decision outcomes. It is increasingly used in our society, from everyday decisions such as recommending movies and books to more critical decisions such as medical diagnosis, credit risk prediction, and shortlisting candidates in recruitment. In 2020, the EU proposed a European approach to excellence and trust in its White Paper on AI [1]. It stated that AI will change lives by not only improving healthcare but also increasing the efficiency of farming and contributing to climate change mitigation; the approach therefore aims to improve lives while respecting rights. In such AI-informed decision-making tasks, trust and the perception of fairness have been found to be critical factors driving human behaviour in human–machine interactions [2,3]. The black-box nature of AI models makes it hard for users to understand why a decision is made or how the data are processed for the decision-making [4,5,6]. Trustworthy AI has therefore experienced a significant surge in interest from the research community in various application domains, especially high-stakes domains, which usually require testing and verification of reasonableness by domain experts not only for safety but also for legal reasons [7,8,9,10,11].

1.1. AI Explanation

Explanation and trust are common partners in everyday life, and extensive research has investigated the relations between AI explanations and trust from different perspectives, ranging from philosophical to qualitative and quantitative dimensions [12]. For instance, Zhou et al. [13] showed that explaining the influence of training data points on predictions significantly increased user trust in predictions. Alam and Mueller [14] investigated the roles of explanations in AI-informed decision-making in medical diagnosis scenarios. The results show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations or text-based rationales alone. Previous studies that empirically tested the importance of explanations to users in various fields consistently showed that explanations significantly increase user trust. Furthermore, with the advancement of AI explanation research, different explanation approaches have been proposed, such as local and global explanations, as well as feature importance-based and example-based explanations [6]. As a result, besides explanation presentation styles such as visualisation and text [14,15], it is also critical to understand how different explanation approaches affect user trust in AI-informed decision-making. In addition, Edwards [16] stated that the main challenge for AI-informed decision-making is to know whether an explanation that seems valid is accurate; this information is also needed to ensure the transparency and accountability of the decision.
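To make the distinction between these explanation approaches concrete, the sketch below (illustrative only, not taken from any of the cited studies) uses a small scikit-learn model to produce a feature-importance-based local explanation and an example-based explanation for a single prediction; the data and model are hypothetical stand-ins.

```python
# Illustrative sketch: a feature-importance-based local explanation versus an
# example-based explanation for one prediction. Data and model are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]                                  # instance whose prediction we explain
pred = model.predict(x.reshape(1, -1))[0]

# Local feature-importance explanation: for a linear model, the signed
# contribution of each feature to the decision score is coefficient * value.
contributions = model.coef_[0] * x
for i, c in enumerate(contributions):
    print(f"feature {i}: contribution {c:+.3f}")

# Example-based explanation: show the most similar training instance and its label.
dists = np.linalg.norm(X - x, axis=1)
dists[0] = np.inf                         # exclude the instance itself
nearest = int(np.argmin(dists))
print(f"prediction: {pred}; most similar training case has label {y[nearest]}")
```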

1.2. AI Fairness

The data used to train machine learning models are often historical records or samples of events. They are usually not a precise description of those events and can conceal discrimination in sparse details that are very difficult to identify. AI models are also imperfect abstractions of reality because of their statistical nature. All of this leads to inherent imprecision and discrimination (bias) associated with AI. As a result, the investigation of fairness in AI has become an indispensable component of responsible socio-technical AI systems in various decision-making tasks [17,18]. In addition, extensive research focuses on fairness definitions and unfairness quantification. Furthermore, humans' perceived fairness (perception of fairness) plays an important role in AI-informed decision-making, since AI is often used by humans and/or for human-related decision-making [19].
Duan et al. [20] argue that AI-informed decision-making can help users make better decisions. Furthermore, the authors propose that AI-informed decisions will mostly be accepted by humans when AI is used as a support tool. Thus, it is crucial to consider the human perception of AI in general and the extent to which users would be willing to use such systems [21]. Considerable research on perceived fairness has evidenced its links to trust, for example in management and organizations [22,23].

2. Fairness and Explanation in AI-Informed Decision Making

2.1. Perception of Fairness

Current machine learning outlines fairness in terms of different protected attributes (race, sex, culture, etc.) receiving equal treatment from algorithms [25,26,27]. Definitions of fairness have been formalised, ranging from statistical bias, group fairness, and individual fairness to process fairness and others. Various metrics have been proposed to quantify the unfairness (bias) of algorithms [28,29,30].
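As an illustration of how such metrics quantify unfairness, the sketch below computes two widely used group fairness measures, the demographic parity difference and the equal opportunity difference, for hypothetical predictions and a binary protected attribute. The data are invented for illustration, and the metric definitions follow their standard formulations rather than any specific cited work.

```python
# Illustrative sketch of two common group fairness metrics on invented data:
# demographic parity difference and equal opportunity difference.
import numpy as np

# Hypothetical binary predictions, true labels, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # two demographic groups

def selection_rate(y_pred, mask):
    """Fraction of instances in the group that receive the positive prediction."""
    return y_pred[mask].mean()

def true_positive_rate(y_true, y_pred, mask):
    """True positive rate (recall) within the group."""
    positives = mask & (y_true == 1)
    return y_pred[positives].mean()

# Demographic parity difference: gap in positive-prediction rates between groups.
dpd = abs(selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1))

# Equal opportunity difference: gap in true positive rates between groups.
eod = abs(true_positive_rate(y_true, y_pred, group == 0)
          - true_positive_rate(y_true, y_pred, group == 1))

print(f"demographic parity difference: {dpd:.2f}")
print(f"equal opportunity difference:  {eod:.2f}")
```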
Research on the perception of fairness can be categorised into the following dimensions [19]. First, algorithmic factors study how the technical design of an AI system affects people's fairness perceptions. For example, Lee et al. [31,32] investigated people's perception of fairness regarding the allocation of resources based on equality, equity, or efficiency, and found that people's preferences among these three fairness notions varied considerably depending on the decision. Dodge et al. [24] found that people evaluate fairness primarily based on the features that are used and not used in the model, algorithm errors, and errors or flaws in the input data. Secondly, human factors investigate how human-related information affects the perception of fairness. For example, Helberger et al. [33] found that education and age affected both perceptions of algorithmic fairness and people's reasons for those perceptions. Thirdly, comparative effects investigate how individuals' fairness judgements differ between human and algorithmic decision-makers. For example, Helberger et al. [33] found that people believe AI makes fairer decisions than human decision-makers, while some studies found the opposite in the criminal justice system [34]. Fourthly, the consequences of the perception of fairness concern its impact on AI-informed decision-making. For example, Shin and Park [35] investigated the effects of the perception of fairness on satisfaction and found that people's perception of fairness has a positive impact on satisfaction with algorithms. Moreover, Shin et al. [36] argued that the algorithmic experience is inherently related to the perception of fairness, transparency, and the underlying trust. Zhou et al. [3] investigated the relationship between induced algorithmic fairness and its perception by humans, and found that introduced fairness is positively related to the perception of fairness, i.e., a higher level of introduced fairness resulted in a higher level of perceived fairness.
People's perception of fairness is closely related to AI explanations. Shin [37] regarded explanations for an algorithmic decision as a critical factor in perceived fairness and found that such explanations significantly increased people's perception of fairness in an AI-based news recommender system. Dodge et al. [24] found that case-based and sensitivity-based explanations effectively exposed fairness discrepancies between different cases, while demographic explanations (offering information about the classification of individuals in the same demographic categories) and input influence explanations (presenting all input features and their impact on the classification) enhanced fairness perception by increasing people's confidence in understanding the model. Binns et al. [38] examined people's perception of fairness in AI-informed decision-making under four explanation types (input influence, sensitivity, case-based, and demographic). It was found that people did consider fairness in AI-informed decision-making. However, depending on when and how explanations were presented, explanations had different effects on people's perception of fairness: (1) when multiple explanation types were presented, case-based explanations (presenting the case from the model's training data most similar to the decision being explained) had a negative influence on the perception of fairness; (2) when only one explanation type was presented, the explanation showed no effect on people's perception of fairness.
Besides explanation types, the mathematical fairness inherently introduced by AI models and/or data (also referred to as introduced fairness) can affect people's perceived fairness [3]. However, little work has examined whether different explanation types and introduced fairness together affect people's perception of fairness.

2.2. AI Fairness and Trust

User trust in algorithmic decision-making has been investigated from different perspectives. Zhou et al. [39,40] argued that communicating user trust benefits the evaluation of the effectiveness of machine learning approaches. Kizilcec [41] found that appropriate transparency of algorithms through explanation benefited user trust. Other empirical studies found effects of confidence scores, model accuracy, and users' experience of system performance on user trust [8,42,43].
Understanding the relations between fairness and trust is nontrivial in social interaction contexts such as marketing and services. Roy et al. [23] showed that perceptions of fair treatment of customers play a positive role in engendering trust in the banking context. Earle and Siegrist [44] found that the importance of the issue affected the relations between fairness and trust: procedural fairness did not affect trust when issue importance was high, while it had moderate effects on trust when issue importance was low. Nikbin et al. [45] showed that perceived service fairness had a significant effect on trust and confirmed the mediating role of satisfaction and trust in the relationship between perceived service fairness and behavioural intention.
Kasinidou et al. [46] investigated the perception of fairness in algorithmic decision-making and found that perceiving a system's decision as 'not fair' affects participants' trust in the system. Shin's investigations [27,37] showed that the perception of fairness had a positive effect on trust in algorithmic decision-making systems such as recommenders. Zhou et al. [3] reached a similar conclusion: introduced fairness is positively related to user trust in AI-informed decision-making.
These previous works motivate us to further investigate how multiple factors such as AI fairness and AI explanation together affect user trust in AI-informed decision-making.

2.3. AI Explanation and Trust

Explainability is indispensable for fostering user trust in AI systems, particularly in sensitive application domains. Holzinger et al. [47] introduced the concept of causability and demonstrated its importance for AI explanations [48,49]. Shin [37] used causability as an antecedent of explainability to examine their relations to trust, where causability provides the justification for what and how AI results should be explained and thus determines the relative importance of the properties of explainability. Shin argued that the inclusion of causability and explanations would help to increase trust and help users assess the quality of explanations, e.g., with the System Causability Scale [50].
The influence of training data points on predictions is one of the typical AI explanation approaches [51]. Zhou et al. [13] investigated the effects of influence on user trust and found that presenting the influence of training data points significantly increased user trust in predictions, but only for training data points with higher influence values under the high model performance condition. Papenmeier et al. [52] investigated the effects of model accuracy and explanation fidelity and found that model accuracy is more important for user trust than explainability; adding nonsensical explanations can even harm trust. Larasati et al. [53] investigated the effects of different styles of textual explanations on user trust in an AI medical support scenario. Four textual styles of explanation were investigated: contrastive, general, truthful, and thorough. It was found that contrastive and thorough explanations produced higher user trust scores than the general explanation style, and truthful explanations showed no difference compared to the rest. Wang et al. [54] compared different explanation types, such as feature importance, feature contribution, nearest neighbour, and counterfactual explanations, from three perspectives: improving people's understanding of the AI model, helping people recognize model uncertainty, and supporting people's calibrated trust in the model. They highlighted the importance of selecting appropriate AI explanation types when designing the most suitable AI methods for a specific decision-making task.
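For intuition about influence-based explanations, the sketch below estimates the influence of each training point on a single prediction by leave-one-out retraining, a brute-force stand-in for the more efficient influence approximations discussed in the literature [51]; the data and model are hypothetical.

```python
# Illustrative sketch: leave-one-out influence of training points on one prediction.
# This brute-force retraining approach is a stand-in for faster influence estimators.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, y_train, x_test = X[:150], y[:150], X[150]

base = LogisticRegression().fit(X_train, y_train)
base_prob = base.predict_proba(x_test.reshape(1, -1))[0, 1]

# Influence of training point i ~= change in the test prediction when i is removed.
influences = []
for i in range(len(X_train)):
    keep = np.arange(len(X_train)) != i
    m = LogisticRegression().fit(X_train[keep], y_train[keep])
    influences.append(base_prob - m.predict_proba(x_test.reshape(1, -1))[0, 1])

# Report the training points whose removal changes the prediction the most.
top = np.argsort(np.abs(influences))[::-1][:5]
print("most influential training points:", top)
```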

This entry is adapted from the peer-reviewed paper 10.3390/make4020026
