Albaroudi, E.; Mansouri, T.; Alameer, A. AI for Addressing Algorithmic Bias in Job Hiring. Encyclopedia. Available online: (accessed on 23 April 2024).
AI for Addressing Algorithmic Bias in Job Hiring

More businesses are using artificial intelligence (AI) in curriculum vitae (CV) screening. While the move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This entry recommends collaboration between machines and humans to enhance the fairness of the hiring process. The findings can help AI developers make the algorithmic changes needed to enhance fairness in AI-driven tools, enabling the development of ethical hiring tools and contributing to fairness in society.

algorithmic bias; deep learning; curriculum vitae screening; natural language processing; artificial intelligence

1. Introduction

In the era of globalization, businesses navigate a landscape of intensified international competition [1]. Technological advances have made foreign markets more accessible to businesses. Although this has expanded market reach, it has also exposed them to heightened levels of competition. Consequently, enterprises are compelled to maintain a competitive advantage to ensure their survival [2]. To maintain competitiveness, businesses must prioritize hiring the right workforce. Acknowledged as the most critical asset within any organizational framework [3], employees possess the knowledge, skills, and expertise essential for an entity’s functioning. Research establishes a correlation between employees’ productivity and organizational performance [4]. Given the pivotal role of employees, organizations are adopting more stringent hiring practices, aiming to hire workers with the requisite skills and qualifications to perform effectively [5]. Organizations using strict hiring criteria increase the probability of acquiring high-performing employees, and research indicates that employees suitably matched to their functions are more productive and motivated [6]. Hiring unsuitable candidates, by contrast, leads to heightened turnover rates [7], increasing costs and disruptions for businesses [8]. Legislative measures have been established to address workplace discrimination. Notably, Title VII of the Civil Rights Act of 1964 in the United States (US) prohibits intentional exclusion based on identity and the unintentional disadvantage of a protected class through a facially neutral procedure [9]. While identity-based forms of discrimination have diminished over time, unintentional biases persist [10].
The integration of AI in hiring is not a recent development. In the late 20th century, the initial adoption of AI in hiring focused on automating routine tasks like resume screening and application sorting. The early systems were designed to enhance efficiency by reducing the time and effort invested in such tasks. However, the sophistication inherent in contemporary AI hiring technologies was absent in the early iterations. For instance, Resumix, introduced in 1988 and later acquired by HotJobs in 2000, is one of the earliest examples of resume parsing tools [11]. The tool deployed AI to read resumes and extract specific keywords, work experiences, and educational qualifications. Advancements in the 1990s saw the integration of job posting sites like CareerBuilder with applicant tracking systems (ATS) to improve the sourcing of applications [12]. The early 2000s saw the emergence of talent assessment tools like eSkill and SkillSurvey, leveraging AI for automating pre-employment testing and reference checks. The 2010s marked the emergence of AI-powered video interviewing software. For instance, HireVue gained prominence in the mid-2010s for utilizing machine learning (ML) algorithms to assess candidates based on analysis of facial expressions, speech patterns, and body language. In a notable recent example, Amazon attempted to automate its recruitment process using AI in 2018. The algorithm, trained on decades of resumes, aimed to optimize the selection of the most appropriate candidates from a large pool of applicants. However, the tool faced significant challenges and was ultimately abandoned due to allegations of biases [13]. The algorithm, trained on resumes predominantly submitted by male applicants over a ten-year period, exhibited a preference for male-centric language patterns, discriminating against female-oriented applicants [13].
Despite Amazon’s setback, commercially available online hiring tools like Talenture, Fetcher, TurboHire, and Findem remain widely used.
The availability of data has facilitated the evolution of AI-powered tools, enabling the extraction of new insights through computational analysis [14]. However, the development has given rise to unintended consequences. Algorithmic screening tools that appear evidence-based have emerged as purported alternatives to subjective human evaluations [15]. Divergent opinions exist on the objectivity of algorithmic techniques in mitigating biases. Some scholars assert that algorithmic techniques, especially those utilizing deep learning, are inherently bias-proof, affording businesses an objective way of selecting candidates [16].
Algorithmic biases emerge in various forms. Firstly, measurement bias emerges from the identification and measurement of specific features [17]. This type of bias occurs when the training data for an AI algorithm inadequately represent the construct they are intended to measure [18]. In hiring, measurement bias can manifest when training data do not accurately capture the skills and other traits relevant to the job. If an organization continues to hire more white than Black applicants, it may associate good performance with being white, given the greater availability of data about the performance of white employees. Measurement bias can be addressed by auditing and updating training data regularly to ensure they reflect the evolving requirements of a job. The second type, representation bias, results from the way researchers sample populations during data collection, producing samples that do not represent the entire population [18]. In hiring, this type of bias manifests in the under-representation and over-representation of particular demographic groups.
Moreover, omitted variable bias occurs when hiring algorithms have one or more important variables missing from a model [19], affecting how systems make predictions. For instance, an algorithm over-relying on technical skills without factoring in interpersonal skills may overlook candidates excelling in critical components like communication. The omitted variable bias can be addressed by undertaking a thorough analysis to include all the relevant variables in a model. Additionally, linking biases occur when biases and attributes from user connections misrepresent the behavior of users [20]. While analyzing users’ connections, hiring algorithms may draw conclusions that do not align with the actual behavior of individuals. The last type is aggregation bias, arising from false conclusions made regarding individuals based on an analysis of the entire population [20]. Aggregation biases cause hiring models to ignore individual differences, making them unsuitable where diversity is involved [16].
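To make the representation problem concrete, the following minimal sketch (with hypothetical group labels and population shares) quantifies how far each group's share in a training sample drifts from its share in the applicant population; large gaps signal representation bias:

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """For each group, return (share in sample) - (share in population).

    Values far from zero indicate over- or under-representation
    of that group in the training data.
    """
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {g: counts[g] / total - share
            for g, share in population_shares.items()}

# Hypothetical training sample drawn mostly from one group,
# against an applicant pool that is split evenly.
sample = ["A"] * 80 + ["B"] * 20
gaps = representation_gap(sample, {"A": 0.5, "B": 0.5})
# Group A is over-represented by 0.3; group B under-represented by 0.3.
```

In practice the same check would run over real demographic fields in the training set, but the arithmetic is the same.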

2. Understanding Bias in Hiring

2.1. HR Bias in Ranking CVs

The CV ranking process, as an initial stage in hiring, involves the assessment of applicants’ CVs by recruiters or HR professionals to identify candidates possessing the required qualifications and experiences for specific positions [21]. This crucial step establishes whether an applicant advances to subsequent stages like interviews and assessments. Recruiters use CV ranking to sift through a large pool of applicants and identify potentially suitable candidates for a job. Given the volume of applications HR professionals receive, specific criteria or qualifications are established to rank CVs [21]. CV ranking is crucial in streamlining the recruitment process, enabling organizations to focus on evaluating the most promising candidates. It optimizes time and resource efficiency in hiring, allowing recruiters to assess applicants’ qualifications and skills swiftly and make informed decisions about which candidates should proceed to the next stage. Through CV ranking, HR professionals eliminate unsuitable candidates, saving time. Simultaneously, the CV ranking process influences an applicant’s perception of the hiring organization. From their initial interactions with an organization, applicants form impressions regarding the fairness and transparency of the entire recruitment process. Perception of unfairness in the CV ranking process may dissuade potential candidates from applying, depriving a business of a qualified labor force.
The CV ranking process, while integral to hiring, introduces the risk of bias and discrimination. HR professionals may harbor unconscious biases that influence their CV evaluation. Implicit bias, manifested in unconscious attitudes and stereotypes that people may hold towards particular groups [22], is a major contributor to hiring discrimination. Researchers use the Implicit Association Test (IAT) to measure the unconscious association of specific traits with certain demographic groups [23]. The results confirm that people tend to associate negative traits with racial minorities [24].
Stereotypes influence perceptions and evaluations unconsciously, significantly impacting employment decisions. Studies indicate that when evaluating people from stereotyped groups, individuals tend to concentrate on information aligning with stereotypes and interpret data to affirm the stereotypes [25]. Countering stereotypes poses challenges in the CV ranking process, especially for stigmatized groups expected to conform to established stereotypes [26]. Deviation from stereotypes may result in a good performance being dismissed as mere luck. Consequently, the anticipation of biased treatment adversely impacts the performance of stereotyped groups.
In summary, CV ranking is an important tool for filtering numerous applications; however, it introduces biases that compromise the fairness of the recruitment process. Unconscious attitudes and stereotypes among HR professionals contribute to decisions that favor or discriminate against certain groups. Reliance on existing networks for sourcing exacerbates biases, causing candidates from certain groups to rank higher than the rest of the applicants.

2.2. Non-Technical Solutions in Ranking CVs

The prevalence of HR bias in CV ranking is a critical organizational challenge. HR professionals and other recruiters should be aware of biases [27] and of the following non-technical solutions available to mitigate them. Firstly, organizations can implement unconscious bias training programs, since inadequate training often limits recruiters’ understanding of hiring biases [28]. Such training raises awareness among hiring professionals of the implicit biases that may influence decision making and facilitates their recognition, enhancing the decision-making capabilities of HR professionals [28].
Additionally, recruiters can adopt blind hiring techniques in CV ranking [29]. HR professionals are supposed to consider applicants’ basic qualifications, background, and educational aspects to determine suitability for a position. However, personal details may introduce biases related to religious, cultural, and background factors. Blind hiring conceals these identifying details, allowing CVs to be evaluated on qualifications alone.
Moreover, diversifying CV ranking teams is instrumental in mitigating biases. The absence of diversity in hiring teams prompts biased hiring, as recruiters favor specific groups. Including recruiters from diverse backgrounds, experiences, and perspectives effectively promotes multiple viewpoints in the evaluation process, challenging biased assumptions [30]. Businesses must develop a culture that fosters diversity. This will encourage individuals from different backgrounds, religions, economic statuses, and genders to apply.
After implementing the non-technical measures, entities should adopt continuous monitoring and evaluation. An analysis of data generated from CV ranking will assist recruiters in identifying areas that need improvement. Where discrepancies emerge, investigations should be conducted to address the problem. Constant scrutiny of the CV ranking process will foster more inclusive and equitable CV ranking, providing a level playing field for all candidates. Calibration of the CV ranking process should be performed regularly to align with organizational goals.
In summary, non-technical solutions exist to address biases in the CV ranking process. These include the implementation of unconscious bias training programs to equip HR professionals with the necessary skills to minimize biases, having clear and objective evaluation criteria, blind hiring techniques in CV ranking, and the diversification of their CV ranking teams. The implementation of these measures must be accompanied by continuous monitoring and evaluation of the hiring outcomes to establish areas requiring improvements.

3. Applications of AI in Hiring

AI techniques have revolutionized various industries and processes, including hiring. Recognizing the need to avoid biases in hiring, NLP techniques emerged as innovative solutions poised to streamline the hiring process. NLP, a subset of AI, allows computers to understand human language and thereby derive meaning from a vast array of linguistic inputs [31]. Traditionally, businesses processed information manually. However, advances in NLP technology and the deployment of neural networks allow organizations to leverage data to develop systems that address common issues. The adoption of NLP systems facilitates efficiency and cost reduction in organizations. Manual HR processes in large organizations are difficult and time consuming, and often frustrate candidates. With the global shift towards post-COVID-19 pandemic operations, businesses are optimizing their operations, intensifying the demand for workers. NLP techniques are increasingly adopted to automate the hiring process. Similarly, deep learning, a subset of ML, is a useful tool with potential applications in the measurement of human behavior [32]. Deep learning techniques have automated routine tasks, for example, in healthcare, where they have proven superior to medical professionals in the detection of cancer in mammograms [33].
The conventional approaches utilizing psychometric principles have proven less effective, particularly in recruiting candidates with the required skills and qualifications [34]. At the same time, traditional sourcing methods like printed job applications have become less popular, paving the way for more advanced internet-based sources and e-recruitment processes. The HR function enhances an organization’s growth, competitive advantage, and innovation. Businesses are engaged in fierce competition to attract and retain candidates with the required skill sets. Organizations have turned to technologically driven hiring processes, as demonstrated by the increased adoption of AI from 2018, when businesses started sourcing candidates using information derived from social media profiles [34]. Data from social media enabled recruiters to evaluate candidates’ values, beliefs, and attitudes, providing information that could not be obtained from traditional CVs.
AI is instrumental in the CV screening process, especially in the identification of the most suitable candidates. As the first step in the hiring process, CV screening entails the identification of CVs for a particular position based on the job description. A manual approach proves laborious and time consuming, especially when dealing with large volumes of applications. AI techniques allow the automation of the CV screening process. They can autonomously extract important information from resumes, including education, work experience, and skills. Automation saves recruiters time and allows them to focus on establishing the suitability of the candidates. Bhalgat indicates that the recruitment industry will save significant time with AI-enabled tools [35]. Unlike manual CV screening, which is time-consuming, AI-based tools analyze extensive data sets, providing comprehensive results promptly for HR professionals [35].
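The extraction step can be illustrated with a deliberately simple sketch. The skill vocabulary, field names, and sample CV below are hypothetical; production screeners rely on trained NLP models and large curated taxonomies rather than regular expressions:

```python
import re

# Hypothetical skill vocabulary; real screeners use curated skill taxonomies.
SKILLS = {"python", "sql", "excel", "project management"}

def extract_fields(resume_text):
    """Pull an email address, years of experience, and known skills
    out of raw CV text using simple pattern matching."""
    email = re.search(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", resume_text)
    years = re.search(r"(\d+)\+?\s+years", resume_text, re.IGNORECASE)
    found = {s for s in SKILLS if s in resume_text.lower()}
    return {
        "email": email.group(0) if email else None,
        "years_experience": int(years.group(1)) if years else None,
        "skills": sorted(found),
    }

cv = "Jane Doe, jane@example.com. 7 years of experience in Python and SQL."
fields = extract_fields(cv)
```

A screener would then rank candidates on the structured fields rather than on the raw text, which is where the efficiency gain over manual review comes from.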
Beyond CV screening, AI techniques can assess applicants’ personalities and behaviors. Deep learning models can analyze data from applicants’ social media profiles and other online forums, since research shows that individuals are using their social media accounts to share aspects about themselves [36]. Social media profiles serve as a repository of information that can offer more insights into a candidate’s persona. Recruiters can use the information to understand the personality traits that align with the job requirements and how individuals fit into the organizational culture. Cover letters can establish candidates’ enthusiasm and worldview, allowing businesses to filter, from a large pool of applicants, less motivated candidates and those whose worldview is contrary to the values of the business.
Language barriers are a major challenge in hiring, especially for multinational corporations. Such organizations operate across diverse linguistic landscapes, exposing them to language barriers. For instance, a UK-based multinational corporation operating in Germany and China would be forced to hire workers who speak these languages. These global entities often encounter the necessity to recruit local talent to align with the unique needs of each geographical location. Interviewing candidates from different cultures forces a business to employ HR professionals who understand local languages. However, employing such a large number of HR professionals is expensive. AI facilitates the hiring of employees for businesses with operations overseas despite the language barriers [37]. AI techniques can recognize different languages, allowing HR professionals to interview candidates without necessarily understanding the local dialect. The evolution of NLP further amplifies this capability, with the promise of incorporating an even broader spectrum of languages into AI-driven hiring systems [38]. AI allows businesses, particularly multinational corporations, to recruit employees from different linguistic backgrounds without necessarily hiring translators.
Engaging candidates through the hiring process is critical in the contemporary hiring landscape, with AI-powered solutions emerging as enablers. Organizations are unable to maintain constant communication with candidates due to time constraints. A solution to this challenge comes in the form of AI-driven chatbots that can offer applicants more information about the organization’s mission, vision, values, and any other information they need.
In summary, AI has widespread applications in the hiring process. Firstly, AI acts as a catalyst in automatic CV screening, saving the time otherwise consumed by a laborious manual process. Secondly, AI facilitates the selection of quality candidates from a pool of applicants. The analytical capabilities of AI contribute significantly to the identification of candidates best suited for specific positions. Thirdly, AI tools assume a major role in assessing the personality and behavior of candidates. Their ability to extract data from social media profiles assists in establishing a candidate’s suitability for a given post. Lastly, AI systems assist in overcoming language barriers between applicants and hiring professionals, facilitating global recruitment efforts.

4. Applications of AI in Eliminating Bias

4.1. AI Approaches to Bias Mitigation

Hiring algorithms exhibit biases across various dimensions, even where designers work towards eliminating them. It is important to consider ways of enhancing hiring algorithms to detect and mitigate some of the biases. The application of AI techniques in hiring is a promising solution to algorithmic biases [39]. It commences with the examination of how hiring algorithms introduce biases. In ML evaluations, bias and discrimination can be examined by considering the confusion matrices for various protected categories. Language models estimate the probability of a sequence of words, allowing them to predict the most probable next word or phrase. Since biases are prevalent in any human language, language models are vulnerable to the same biases. Unfairness emanates from skewed behavior that wrongly uses biases to create a certain outcome that discriminates against a certain group. When dealing with words describing gender, e.g., men and women, certain attributes can be ascribed to each category, significantly reinforcing stereotypes.
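The per-group confusion-matrix check mentioned above can be sketched in a few lines. The group labels and hire/no-hire outcomes below are made up for illustration; the point is that comparing error rates (here, true-positive rates) across protected groups exposes disparate treatment that an aggregate accuracy figure would hide:

```python
from collections import Counter

def group_confusion(y_true, y_pred, groups):
    """Tally a confusion matrix per protected group for a binary
    hire (1) / no-hire (0) decision. Keys are (group, outcome)."""
    counts = Counter()
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1 and p == 1:
            counts[(g, "tp")] += 1
        elif t == 0 and p == 1:
            counts[(g, "fp")] += 1
        elif t == 1 and p == 0:
            counts[(g, "fn")] += 1
        else:
            counts[(g, "tn")] += 1
    return counts

def true_positive_rate(counts, group):
    """Recall for one group: tp / (tp + fn)."""
    tp, fn = counts[(group, "tp")], counts[(group, "fn")]
    return tp / (tp + fn) if tp + fn else 0.0

# Hypothetical screening outcomes for two demographic groups, A and B.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
c = group_confusion(y_true, y_pred, groups)
gap = abs(true_positive_rate(c, "A") - true_positive_rate(c, "B"))
# A large gap means qualified candidates from one group are
# rejected far more often than equally qualified candidates from the other.
```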
There are two major AI techniques for addressing algorithmic biases: correction of the vector space and data augmentation. For the correction of the vector space of the model, bias often emerges in the vector space where word embeddings are learned. Words associated with gender, ethnicity, or other sensitive attributes may become vectors that perpetuate biases [40]. Correcting the vector space follows a structured procedure. Developers identify the vector space dimensions that harbor the bias and try to equalize the distance between the protected attribute (such as Black and white people) and the biased concept (like qualifications and skills). Where the vector model leans towards associating white people with better skills and qualifications, developers can rectify the problem by associating the same skills and qualifications with Black people, neutralizing biased embeddings by moving them closer to a neutral point in the vector space [41]. This entails adjusting gender-related word pairs like “he” and “she” to have similar embeddings, ensuring equal representation in the vector space.
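A minimal sketch of this neutralization step follows, using toy 4-dimensional vectors rather than real trained embeddings. The bias direction is estimated from a definitional word pair ("he" minus "she"), and a profession vector's component along that direction is projected out:

```python
import numpy as np

def neutralize(vec, bias_dir):
    """Remove the component of `vec` that lies along the bias direction,
    so the word no longer leans toward either end of the pair."""
    unit = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, unit) * unit

# Toy 4-d embeddings (illustrative values only, not from a trained model).
he = np.array([1.0, 0.2, 0.0, 0.1])
she = np.array([-1.0, 0.2, 0.0, 0.1])
engineer = np.array([0.4, 0.9, 0.3, 0.0])  # leans toward "he" on axis 0

gender_dir = he - she                 # estimate the bias direction from a pair
engineer_fixed = neutralize(engineer, gender_dir)
# After neutralization, "engineer" is equally similar to "he" and "she".
```

Real debiasing pipelines estimate the bias subspace from many such pairs and also equalize the pair vectors themselves [41], but the projection above is the core operation.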
Data augmentation is a major AI technique in mitigating bias [42]. It works by generating data using information derived from the training set. The technique fine-tunes the model by changing its source data [43]. It seeks to diversify the available examples, exposing the model to a wider range of scenarios without necessarily introducing new data. The process begins with identifying underrepresented groups in the existing training data. Next, modifications of the existing data are performed to create new instances through synonym replacement, paraphrasing, or introducing slight variations to numerical features. In this way, developers can balance the number of times protected attributes appear [42].
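In the counterfactual flavor of this technique [42], each training sentence is duplicated with the protected attribute swapped, balancing how often each gendered term appears. The swap list below is a deliberately small illustration (the his/her/him mapping is simplified and can miscast possessive vs. object pronouns); real systems use curated pair lexicons and name substitution:

```python
import re

# Illustrative swap list only; production systems use curated pair lexicons.
PAIRS = {"he": "she", "she": "he", "his": "her", "her": "him", "him": "her"}

def counterfactual(text):
    """Return a copy of a sentence with gendered terms swapped,
    preserving the capitalization of each replaced word."""
    def swap(match):
        word = match.group(0)
        repl = PAIRS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PAIRS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

def augment(corpus):
    """Balance protected attributes by appending counterfactual copies."""
    return corpus + [counterfactual(s) for s in corpus]
```

Training on `augment(corpus)` instead of `corpus` exposes the model to both variants of every sentence, so it cannot learn to associate the original attribute with the outcome.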

4.2. Step-by-Step Approach to Algorithm Bias

Garrido-Muñoz et al. provide a systematic approach to algorithm bias, offering a comprehensive guide that software engineers can follow to address biases in deep model generation and application [44]. While the authors do not specifically focus on algorithms in hiring, the steps are universal for anyone who wants to deal with bias, including in designing algorithms for hiring. The first step is defining stereotype knowledge by identifying the protected properties and the related stereotyped aspects. Algorithm designers are encouraged to develop an ontology for each protected category [44], enabling them to populate their stereotype knowledge and identify potential biases that may harm the system. The second step is for software engineers to evaluate the model to establish how it behaves with stereotyped and protected expressions. The third step is for developers to analyze the results of the evaluation [44], pinpointing the expressions or categories resulting in higher bias [44]. Next, software engineers must reevaluate the model and loop the last steps until they receive an acceptable response. Lastly, the procedure results should be reported by attaching model cards to attain transparent model reporting [44]. The procedure can be adapted according to the requirements of the particular AI project, which makes it applicable in addressing biases in hiring algorithms.
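The evaluate–analyze–mitigate loop in the middle of that procedure can be sketched as follows. The probe format (paired sentences differing only in the protected attribute), the tolerance threshold, and the mitigation hook are illustrative assumptions, not part of the original procedure:

```python
def audit_model(model, probes, threshold=0.05, max_rounds=5, mitigate=None):
    """Loop: evaluate -> analyze -> mitigate until bias is acceptable.

    `probes` maps a protected category to a pair of template sentences
    that differ only in the protected attribute; `model` scores a
    sentence; `mitigate` returns a corrected model (standing in for,
    e.g., data augmentation or embedding correction).
    """
    report = []
    for round_no in range(max_rounds):
        # Step 2: evaluate behavior on stereotyped/protected expressions.
        gaps = {cat: abs(model(a) - model(b)) for cat, (a, b) in probes.items()}
        # Step 3: analyze which categories exhibit unacceptable bias.
        flagged = [cat for cat, gap in gaps.items() if gap > threshold]
        report.append((round_no, gaps))
        if not flagged or mitigate is None:
            break                          # acceptable response reached
        model = mitigate(model, flagged)   # Step 4: correct and re-evaluate
    return report                          # Step 5: attach to a model card

# Toy demo with hypothetical scores: the gap closes after one mitigation round.
toy_scores = {"he is skilled": 0.9, "she is skilled": 0.6}
toy_model = toy_scores.get
toy_fix = lambda model, flagged: (lambda s: 0.7)   # stand-in for retraining
history = audit_model(toy_model,
                      {"gender": ("he is skilled", "she is skilled")},
                      mitigate=toy_fix)
```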

4.3. Case Studies of AI Eliminating Biases

AI techniques have been used to mitigate biases in different industries. For instance, IBM’s AI Fairness 360 Toolkit is open-source software that assists in detecting and removing bias in ML. It allows developers to utilize state-of-the-art algorithms to identify and remove unwanted biases from their ML pipelines [45]. By adopting the toolkit, businesses can improve the fairness of their candidate selection process. Moreover, Textio’s augmented writing platform employs AI to mitigate biased language in job descriptions. The platform uses NLP to analyze job postings and suggest alternative language that is more inclusive and appealing to a diverse audience. Organizations using Textio, like Cisco, have indicated the platform has helped them create gender-neutral job adverts, allowing the business to resonate with a diverse pool of people [46]. Lastly, Accenture’s Fairness Tool for AI is designed to evaluate and address bias in AI models, including those in recruitment. The tool assesses models for fairness across demographic groups and offers recommendations to organizations on how to mitigate biases. The tool has been employed to enhance fairness in hiring algorithms, ensuring that AI systems do not discriminate against particular groups [47].
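As an illustration of the kind of fairness metric such toolkits automate, the following self-contained sketch computes the disparate-impact ratio (the basis of the "four-fifths rule" used in US employment-discrimination analysis) on made-up shortlisting data; the group labels and decisions are hypothetical:

```python
def selection_rate(decisions, groups, group):
    """Fraction of applicants in `group` who were selected (decision == 1)."""
    picks = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picks) / len(picks) if picks else 0.0

def disparate_impact(decisions, groups, unprivileged, privileged):
    """Ratio of the unprivileged group's selection rate to the
    privileged group's; values below 0.8 fail the four-fifths rule."""
    return (selection_rate(decisions, groups, unprivileged)
            / selection_rate(decisions, groups, privileged))

# Hypothetical shortlisting outcomes (1 = advanced to interview).
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]
di = disparate_impact(decisions, groups, "F", "M")
# Here the ratio is well below 0.8, flagging the screener for review.
```

Toolkits like AI Fairness 360 compute this and many related metrics over real datasets and additionally offer mitigation algorithms; the sketch shows only the core measurement.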


  1. Hameed, K.; Arshed, N.; Yazdani, N.; Munir, M. On globalization and business competitiveness: A panel data country classification. Stud. Appl. Econ. 2021, 39, 1–27.
  2. Farida, I.; Setiawan, D. Business strategies and competitive advantage: The role of performance and innovation. J. Open Innov. Technol. Mark. Complex. 2022, 8, 163.
  3. Dupret, K.; Pultz, S. People as our most important asset: A critical exploration of agility and employee commitment. Proj. Manag. J. 2022, 53, 219–235.
  4. Charles, J.; Francis, F.; Zirra, C. Effect of employee involvement in decision making and organization productivity. Arch. Bus. Res. ABR 2021, 9, 28–34.
  5. Hamadamin, H.H.; Atan, T. The impact of strategic human resource management practices on competitive advantage sustainability: The mediation of human capital development and employee commitment. Sustainability 2019, 11, 5782.
  6. Sukmana, P.; Hakim, A. The Influence of Work Quality and Employee Competence on Human Resources Professionalism at the Ministry of Defense Planning and Finance Bureau. Int. J. Soc. Sci. Bus. 2023, 7, 233–242.
  7. Li, Q.; Lourie, B.; Nekrasov, A.; Shevlin, T. Employee turnover and firm performance: Large-sample archival evidence. Manag. Sci. 2022, 68, 5667–5683.
  8. Lyons, P.; Bandura, R. Employee turnover: Features and perspectives. Dev. Learn. Organ. Int. J. 2020, 34, 1–4.
  9. Bishop, J.; D’arpino, E.; Garcia-Bou, G.; Henderson, K.; Rebeil, S.; Renda, E.; Urias, G.; Wind, N. Sex Discrimination Claims Under Title VII of the Civil Rights Act of 1964. Georget. J. Gender Law 2021, 22, 369–373.
  10. Fry, R.; Kennedy, B.; Funk, C. STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity; Pew Research Center: Washington, DC, USA, 2021; pp. 1–28.
  11. HireAbility. The Evolution of Resume Parsing: A Journey Through Time. 2023. Available online: (accessed on 5 January 2024).
  12. Ajunwa, I. Automated video interviewing as the new phrenology. Berkeley Technol. Law J. 2021, 36, 1173.
  13. Dastin, J. Insight—Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women. 11 October 2018. Available online: (accessed on 5 January 2024).
  14. Wang, J.; Yang, Y.; Wang, T.; Sherratt, R.S.; Zhang, J. Big data service architecture: A survey. J. Internet Technol. 2020, 21, 393–405.
  15. De Cremer, D.; De Schutter, L. How to use algorithmic decision-making to promote inclusiveness in organizations. AI Ethics 2021, 1, 563–567.
  16. Kordzadeh, N.; Ghasemaghaei, M. Algorithmic bias: Review, synthesis, and future research directions. Eur. J. Inf. Syst. 2022, 31, 388–409.
  17. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. CSUR 2021, 54, 115.
  18. Shahbazi, N.; Lin, Y.; Asudeh, A.; Jagadish, H. Representation Bias in Data: A Survey on Identification and Resolution Techniques. ACM Comput. Surv. 2023, 55, 293.
  19. Wilms, R.; Mäthner, E.; Winnen, L.; Lanwehr, R. Omitted variable bias: A threat to estimating causal relationships. Methods Psychol. 2021, 5, 100075.
  20. Sun, W.; Nasraoui, O.; Shafto, P. Evolution and impact of bias in human and machine learning algorithm interaction. PLoS ONE 2020, 15, e0235502.
  21. Fisher, E.; Thomas, R.S.; Higgins, M.K.; Williams, C.J.; Choi, I.; McCauley, L.A. Finding the right candidate: Developing hiring guidelines for screening applicants for clinical research coordinator positions. J. Clin. Transl. Sci. 2022, 6, e20.
  22. FitzGerald, C.; Martin, A.; Berner, D.; Hurst, S. Interventions designed to reduce implicit prejudices and implicit stereotypes in real world contexts: A systematic review. BMC Psychol. 2019, 7, 29.
  23. Marvel, J.D.; Resh, W.D. An unconscious drive to help others? Using the implicit association test to measure prosocial motivation. Int. Public Manag. J. 2019, 22, 29–70.
  24. Banaji, M.R.; Fiske, S.T.; Massey, D.S. Systemic racism: Individuals and interactions, institutions and society. Cogn. Res. Princ. Implic. 2021, 6, 82.
  25. Tabassum, N.; Nayak, B.S. Gender stereotypes and their impact on women’s career progressions from a managerial perspective. IIM Kozhikode Soc. Manag. Rev. 2021, 10, 192–208.
  26. Zingora, T.; Vezzali, L.; Graf, S. Stereotypes in the face of reality: Intergroup contact inconsistent with group stereotypes changes attitudes more than stereotype-consistent contact. Group Process. Intergroup Relat. 2021, 24, 1284–1305.
  27. Marcelin, J.R.; Siraj, D.S.; Victor, R.; Kotadia, S.; Maldonado, Y.A. The impact of unconscious bias in healthcare: How to recognize and mitigate it. J. Infect. Dis. 2019, 220, S62–S73.
  28. Kim, J.Y.; Roberson, L. I’m biased and so are you. What should organizations do? A review of organizational implicit-bias training programs. Consult. Psychol. J. 2022, 74, 19.
  29. Yarger, L.; Cobb Payton, F.; Neupane, B. Algorithmic equity in the hiring of underrepresented IT job candidates. Online Inf. Rev. 2020, 44, 383–395.
  30. Ashikali, T.; Groeneveld, S.; Kuipers, B. The role of inclusive leadership in supporting an inclusive climate in diverse public sector teams. Rev. Public Pers. Adm. 2021, 41, 497–519.
  31. de la Fuente Garcia, S.; Ritchie, C.W.; Luz, S. Artificial intelligence, speech, and language processing approaches to monitoring Alzheimer’s disease: A systematic review. J. Alzheimer’s Dis. 2020, 78, 1547–1574.
  32. Thompson, I.; Koenig, N.; Mracek, D.L.; Tonidandel, S. Deep Learning in Employee Selection: Evaluation of Algorithms to Automate the Scoring of Open-Ended Assessments. J. Bus. Psychol. 2023, 38, 509–527.
  33. McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94.
  34. Brishti, J.K.; Javed, A. The Viability of AI-Based Recruitment Process: A Systematic Literature Review. Master’s Thesis, Umeå University, Umeå, Sweden, 2020.
  35. Bhalgat, K.H. An Exploration of How Artificial Intelligence Is Impacting Recruitment and Selection Process. Ph.D. Thesis, Dublin Business School, Dublin, Ireland, 2019.
  36. Adegboyega, L.O. Influence of Social Media on the Social Behavior of Students as Viewed by Primary School Teachers in Kwara State, Nigeria. Elem. Sch. Forum (Mimbar Sekol. Dasar) 2020, 7, 43–53.
  37. Sridevi, G.; Suganthi, S.K. AI based suitability measurement and prediction between job description and job seeker profiles. Int. J. Inf. Manag. Data Insights 2022, 2, 100109.
  38. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 1–74.
  39. Pandey, A.; Caliskan, A. Disparate impact of artificial intelligence bias in ridehailing economy’s price discrimination algorithms. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual, 19–21 May 2021; pp. 822–833.
  40. Nemani, P.; Joel, Y.D.; Vijay, P.; Liza, F.F. Gender bias in transformers: A comprehensive review of detection and mitigation strategies. Nat. Lang. Process. J. 2023, 6, 100047.
  41. Shin, S.; Song, K.; Jang, J.; Kim, H.; Joo, W.; Moon, I.C. Neutralizing gender bias in word embedding with latent disentanglement and counterfactual generation. arXiv 2020, arXiv:2004.03133.
  42. Maudslay, R.H.; Gonen, H.; Cotterell, R.; Teufel, S. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. arXiv 2019, arXiv:1909.00871.
  43. Sinha, R.S.; Lee, S.M.; Rim, M.; Hwang, S.H. Data augmentation schemes for deep learning in an indoor positioning application. Electronics 2019, 8, 554.
  44. Garrido-Muñoz, I.; Montejo-Ráez, A.; Martínez-Santiago, F.; Ureña-López, L.A. A survey on bias in deep NLP. Appl. Sci. 2021, 11, 3184.
  45. IBM. AI Fairness 360. 14 February 2018. Available online: (accessed on 24 October 2023).
  46. Novet, J. Cisco Is Hiring More Women and Non-White Employees than Ever, and They Credit This Start-Up for Helping. 9 October 2019. Available online: (accessed on 4 January 2024).
  47. Alameer, A.; Degenaar, P.; Nazarpour, K. Processing occlusions using elastic-net hierarchical max model of the visual cortex. In Proceedings of the 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3–5 July 2017; IEEE: New York, NY, USA, 2017; pp. 163–167.