It is important to consider the user perspective when designing computer systems. In the section "Problems, Paradoxes and Overlooked Social Realities", Kling and Star [4] discuss how SAP is not a human-centered application but an organization-centered one. SAP is adaptable, yet many organizations change the way their employees work to accommodate such applications. Kling and Star also state that a large system of computers remains workable only with the support of a strong socio-technical infrastructure. The bottom line is that a good human-centered application is a three-way partnership among designers, users, and social scientists.
To get researchers thinking about whether their systems are human-centered, Kling and Star [5] recommend that they understand the goals of the system. If the end users' needs are met, then it is a human-centered system. A design process that does not freeze at a single development stage and that accounts for the complexity of human decision-making is also human-centered. The social relationship between the system and its human users must be considered, and the relationship between stakeholders and the design process, as in the SAP example, should be understood before work begins on a new system.
Shneiderman takes a different approach to HCAI, proposing a two-dimensional framework that treats human control and computer automation as independent axes. This yields four quadrants: high human control with high computer automation, low human control with high computer automation, high human control with low computer automation, and low human control with low computer automation. Shneiderman argues that the best HCAI systems combine high human control with high computer automation; such systems are reliable and trustworthy. Examples include elevators and cameras. At the other extreme, systems with dangerously excessive human control or excessive computer automation are unreliable. A recent example is the Boeing 737 MAX MCAS system, whose excessive automation depended on faulty sensor input. [6]
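To make the quadrants concrete, the framework can be expressed as a simple lookup. The sketch below is only illustrative; the function name, quadrant labels, and example placements are assumptions based on the description above, not Shneiderman's own formulation.

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

def hcai_quadrant(human_control: Level, computer_automation: Level) -> str:
    """Place a system in a two-dimensional human-control / automation grid."""
    if human_control is Level.HIGH and computer_automation is Level.HIGH:
        return "high control, high automation: reliable and trustworthy (the HCAI goal)"
    if human_control is Level.HIGH and computer_automation is Level.LOW:
        return "high control, low automation: the human does most of the work"
    if human_control is Level.LOW and computer_automation is Level.HIGH:
        return "low control, high automation: risk of excessive automation"
    return "low control, low automation: neither human nor computer is in charge"

# Illustrative placements of the examples mentioned above.
print(hcai_quadrant(Level.HIGH, Level.HIGH))  # e.g. an elevator or a camera
print(hcai_quadrant(Level.LOW, Level.HIGH))   # e.g. the Boeing 737 MAX MCAS
```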
Datasets carry immense power over the behavior of machine learning models. Since most machine learning models are trained and evaluated on static datasets, they can be subject to the inherent societal biases present in the training data.[7] These biases may be amplified when the context of the dataset's creation and collection differs from the model's deployment context. Dataset consumers may not have insight into the dataset's background and intended usage.
Dataset creators, on the other hand, have better knowledge of the context of the data and its underlying assumptions. Therefore, to mitigate unintended behaviors in machine learning models, the creation of datasheets for datasets has been suggested as a possible solution.[7] These datasheets would detail the creators' motivation, the data collection process, and suggested uses for the data. Although the content of datasheets may vary with factors such as the domain and organizational workflows, overall, a datasheet would present a list of questions that elicit information from dataset creators and thereby increase transparency about the data's characteristics.
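As a hypothetical illustration, a datasheet can travel with a dataset as a small structured document. The schema below is a minimal sketch; its field names are illustrative only and are not the actual question set of the datasheets proposal.

```python
# Minimal, hypothetical sketch of a machine-readable datasheet. The actual
# proposal poses free-text questions; the fields here are illustrative only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    motivation: str           # why and by whom the dataset was created
    collection_process: str   # how, when, and from whom the data was gathered
    composition: str          # what the instances represent, known gaps or biases
    recommended_uses: list = field(default_factory=list)
    uses_to_avoid: list = field(default_factory=list)

sheet = Datasheet(
    motivation="Benchmark for image classification research.",
    collection_process="Images scraped from public websites in 2015.",
    composition="Photographs of faces; demographic coverage is uneven.",
    recommended_uses=["algorithm benchmarking"],
    uses_to_avoid=["deployment in identification or surveillance systems"],
)
print(json.dumps(asdict(sheet), indent=2))  # published alongside the data
```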
Counterexamples show that applying AI or ML technology without attention to human factors can cause serious issues. The algorithms may introduce unfairness and bias affecting the stakeholders involved. Below are a few examples of AI technologies that exhibit gender or ethnic bias or that neglect human factors.
An AI recruiting tool showed gender bias against women in resume screening. Every year, large technology companies receive thousands of resumes for the jobs they are hiring for, and some decided to use AI/ML recruiting tools to help screen them. The algorithms were trained on resumes that human recruiters had already screened, learning how the features of each selected resume related to the likelihood of a candidate getting an interview. During training, the algorithm therefore learned the pattern of the ideal candidate that those recruiters were looking for when deciding whom to interview. The bias emerged when the screening tool began to evaluate new resumes that no human had screened before: the system selected preferred candidates based on what it had learned during training. The results showed that even though gender was never explicitly provided as a training feature, the tool favored candidates who described themselves using masculine language such as "executed" and "captured" and preferred candidates who had graduated from male-dominated colleges. In other words, the AI system picked up traits that act as proxies for gender. [8]
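The mechanism can be illustrated with a toy text classifier. The snippet below uses synthetic resumes and made-up labels, not the actual recruiting system or its data; it is only a sketch of how word-choice proxies can be learned even when gender itself is never an input feature.

```python
# Toy illustration: word-choice proxies correlated with past screening
# decisions are learned and then reproduced on new, unseen resumes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed product launch and captured new market share",
    "executed migration plan for the platform team",
    "captured requirements and executed the delivery roadmap",
    "organized community outreach and coordinated volunteers",
    "coordinated curriculum planning for an outreach program",
    "organized fundraising events and coordinated partners",
]
# Synthetic historical outcomes (1 = invited to interview). The positive
# labels happen to co-occur with "executed"/"captured" style wording.
interviewed = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, interviewed)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("executed", "captured", "organized", "coordinated"):
    print(f"{word:12s} weight = {weights[word]:+.2f}")
# "executed"/"captured" receive positive weights, so new resumes using that
# wording score higher even though gender was never an input feature.
```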
AI also shows bias in the health care industry, where it could put women's lives at risk. For decades, cardiovascular diseases were mainly considered men's conditions, so data points were collected primarily from male patients. Self-diagnosis apps that incorporate such AI algorithms may therefore suggest different levels of urgency for the same symptoms. When female patients describe their pain symptoms in such an app, it may attribute the pain to a non-urgent condition and recommend scheduling a routine visit, whereas male users reporting the same symptoms are told to contact their doctors immediately because of a potential heart attack. Since women can also suffer heart attacks, this gender bias could lead to fatal outcomes. In addition, the Berlin Institute of Health has stated that many medical algorithms are based on data from U.S. military personnel, a population in which women represent only about 6% in some areas. [9]
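A subgroup audit makes this kind of disparity measurable. The following sketch uses synthetic data and a generic classifier (none of it is real clinical data or a real triage app); the presentation gap and the 6% training share of women are modeling assumptions chosen to mirror the situation described above.

```python
# Toy synthetic sketch: training data dominated by male patients, whose heart
# attacks present with higher "classic symptom" scores, yields a model that
# misses far more heart attacks in women. Per-group false-negative rates
# make the gap visible.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_patients(n, frac_female):
    female = rng.random(n) < frac_female
    heart_attack = rng.random(n) < 0.5
    # Assumed presentation gap: classic-symptom score is high for men's heart
    # attacks, lower (more atypical) for women's, and near zero otherwise.
    score = rng.normal(0.0, 1.0, n)
    score += np.where(heart_attack & ~female, 3.0, 0.0)
    score += np.where(heart_attack & female, 1.2, 0.0)
    return score.reshape(-1, 1), heart_attack.astype(int), female

X_train, y_train, _ = make_patients(10_000, frac_female=0.06)  # skewed data
X_test, y_test, female_test = make_patients(10_000, frac_female=0.5)

model = LogisticRegression().fit(X_train, y_train)
missed = (model.predict(X_test) == 0) & (y_test == 1)  # urgent cases missed

for name, group in (("women", female_test), ("men", ~female_test)):
    fnr = missed[group].sum() / (y_test[group] == 1).sum()
    print(f"false-negative rate for {name}: {fnr:.0%}")
```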
A series of workshops on HCAI topics was conducted by the U.S. National Institute of Standards and Technology.[10] Established conferences such as CHI ran workshops on Human-Centered Machine Learning in 2016[11] and 2019,[12] NeurIPS ran a workshop on Human-Centered AI,[13] and the Human-Computer Interaction International conference[14] held a day-long set of Special Thematic Sessions on Human-Centered AI in 2021.[15]
Academic research groups at leading universities have emerged to cover human-centered topics such as ethics, trustworthiness, autonomy, policy, and responsibility. The international participation and diversity of approaches are represented by key labs such as the Berkman Klein Center for Internet & Society at Harvard University, U.S.; the Centre for AI Technology for Humankind at the National University of Singapore, Singapore;[16] the Human-Centered AI (HAI) Institute at Stanford University, U.S.;[17] the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, U.S.; and the Institute for Ethics in AI at the University of Oxford, U.K.[18]
Another indicator of the strength of the Human-Centered AI movement is the commitment of major technology companies, as shown by groups such as the Human-Centered AI team at IBM Research,[19] People and AI Research at Google,[20] and the Responsible AI resources at Microsoft.[21]
Since Human-Centered AI has profound impacts on society, non-governmental organizations and civil society groups have arisen to shape policy responses by governmental and regulatory bodies. Leading examples are the International Outreach for a Human-Centric Artificial Intelligence initiative in Europe (InTouchAI.eu),[22] the Center for AI and Digital Policy,[23] the AI Now Institute, and ForHumanity.[24]
This entry is adapted from the peer-reviewed paper 10.3390/su14137804