Ethnicity Classification: History

Ethnic conflicts frequently lead to violations of human rights, such as genocide and crimes against humanity, as well as economic collapse, governmental failure, environmental degradation, and massive influxes of refugees. Many innocent people suffer as a result of violent ethnic conflict, and a person's ethnicity can itself pose a threat to their safety. Against this background, many studies have examined how to categorize people by ethnicity.

  • face biometrics
  • soft biometrics
  • convolutional neural networks (CNNs)
  • ethnicity classification

1. Introduction

Soft biometrics, such as gender, ethnicity, age, and expression, have recently gained attention from the pattern recognition community because of their wide range of retail and video-surveillance applications and the difficulty of designing effective and reliable algorithms for challenging real-world scenarios [1]. The face is the part of the human body that carries the most semantic information about an individual. Convolutional neural networks (CNNs) are increasingly used to solve problems such as face recognition [2] and verification [3], gender recognition [4], age prediction [5], and facial emotion recognition [6]. Ethnicity recognition, the ability of a system to discern which ethnic group an individual belongs to from facial appearance, has not received the same attention from the scientific community. Interest has nevertheless been growing, driven by new methods and datasets [7][8][9][10] proposed either to mitigate ethnicity-biased results in real-world applications or to give a definitive push to forensic applications (e.g., ethnicity-based subject identification for public safety). In deep learning, a substantial amount of data is crucial for effectively training CNNs, yet for specific face soft biometrics such as ethnicity, large datasets are still lacking, as recent comprehensive assessments confirm [11][12][13]. Existing studies [14][15] have shown that CNNs trained on the currently available ethnicity datasets have limited generalization capabilities. The scarcity of ethnicity data can be attributed, in part, to the challenges involved in collecting and annotating such data.
Beyond the difficulty of identifying universal distinguishing characteristics, ethnicity cannot be quantitatively measured, unlike other biometric factors such as gender. In the absence of genetic traits that can categorize individuals into commonly recognized “ethnicities”, the term “ethnicity” lacks biological meaning [2][3]; instead, human-perceived differences in somatic facial features are used for categorization. An automatic annotation technique cannot be built on, for example, a person's place of birth: human annotators must manually establish the ground truths of ethnicity groups, and reliability depends greatly on the annotator's competence. Facial soft biometrics such as age, gender, and expression have seen a boom in research using deep neural networks, but ethnicity has not received the same focus. Moreover, gathering new ethnicity datasets is not an easy task; annotation must be carried out manually by people trained to recognize the basic ethnicity groups from somatic facial features [6][8]. To cover this gap in facial soft-biometrics analysis, the VGGFace2 Mivia Ethnicity Recognition (VMER) dataset [6] annotates more than 3,000,000 face images with four ethnicity groups, namely African American, East Asian, Caucasian Latin, and Asian Indian. With the help of three annotators from diverse ethnic backgrounds, the final annotations were derived free from the well-known other-race effect [1][2].

2. Background

A person's identity is the primary focus of face biometrics, but other soft-biometric information, such as age, gender, ethnicity, or emotional state, is also significant [3]. Ethnicity categorization is a growing field of study with a wide range of applications. Convolutional neural networks (CNNs) have been used extensively for ethnicity categorization in recent years; this entry therefore provides an overview of the most recent developments in the area. Belcar et al. [3] examined the differences between CNN results with and without plotted facial landmarks. The proposed model was evaluated on the UTKFace and FairFace datasets using the holdout testing approach. Accuracy was 80.34% for 5-class classification and 61.74% for 7-class classification, slightly better than the current state of the art, even though the results were obtained using only a portion of the face, reducing both time and resources.

3. Existing Methods

The challenge of determining a person's ethnicity based solely on visual traits was analyzed in [4]. Three primary ethnic groups were represented in this work: Mongolian, Caucasian, and black. The authors used 447 photos from the FERET database, 357 for training and 90 for testing. To solve the classification challenge, they extracted several geometric features and color attributes from each image. A CNN yielded a model with a precision of 98.6%, while an artificial neural network achieved a precision of just 82.4%. A new indexing strategy based on a hash table, using a hierarchy of classifiers to predict attributes such as age, gender, and ethnicity, was presented in [16]. The matching procedure selected only a tiny fraction of the database from the indexed hash table, lowering retrieval time while retaining low computational complexity. The hierarchical classifiers were trained by transfer learning from a pre-trained CNN. To reduce classification error, a probabilistic back-tracking approach was proposed that corrects misclassifications using conditional probabilities, along with a dynamic-thresholding method that adapts the threshold for matching computations to the predicted attributes. Extensive testing compared the classifiers with the most recent approaches. On a large-scale database, the suggested indexing strategy showed a significant reduction in search time and a significant boost in accuracy over current face-image retrieval techniques, and statistical tests confirmed the importance of probabilistic back-tracking and dynamic thresholding.
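The attribute-based indexing idea in [16] can be illustrated with a minimal, hypothetical sketch. The bucket keys, class names, and image IDs below are invented for illustration and do not reproduce the authors' implementation; the point is only that predicted soft-biometric attributes select a small bucket of candidates instead of the whole gallery:

```python
from collections import defaultdict

def attribute_key(age_group: str, gender: str, ethnicity: str) -> tuple:
    # Predicted attributes form the hash-table key.
    return (age_group, gender, ethnicity)

class AttributeIndex:
    def __init__(self):
        self.buckets = defaultdict(list)  # key -> list of gallery image ids

    def add(self, image_id, age_group, gender, ethnicity):
        self.buckets[attribute_key(age_group, gender, ethnicity)].append(image_id)

    def candidates(self, age_group, gender, ethnicity):
        # Only a small fraction of the database is returned for matching.
        return self.buckets.get(attribute_key(age_group, gender, ethnicity), [])

index = AttributeIndex()
index.add("img_001", "young", "female", "asian")
index.add("img_002", "old", "male", "caucasian")
print(index.candidates("young", "female", "asian"))  # ['img_001']
```

A probe image's attributes would themselves come from the hierarchical classifiers; the back-tracking step in [16] would then widen the search to neighboring buckets when a prediction is likely wrong.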
Known as “soft biometrics”, the features and information extracted from a person have been applied in various fields, including law enforcement, video surveillance, advertising, and social-media profiling, to improve recognition performance and facial-image search engines. The authors of [6] found no mention of the Arab world in relevant papers and no Arab datasets. To identify such labels with deep-learning methodologies, they set out to create an Arab dataset and properly label Arab sub-ethnic groups. The dataset comprised images from the Gulf Cooperation Council (GCC) countries, the Levant, and Egypt. The challenge was tackled by combining two learning paradigms. First, a pre-trained CNN was used in a supervised setting, as pre-trained CNNs achieve state-of-the-art results in computer-vision classification problems. Second, unsupervised learning (deep clustering) was applied; to the best of the authors' knowledge, that was the first time deep clustering had been applied to ethnicity classification, and three clustering approaches were considered. With the Arab dataset labels adjusted, the best results were 56.97% and 52.12% when the pre-trained CNNs were evaluated on different datasets. The deep-clustering algorithms, applied to several datasets, reached accuracies (ACC) between 32% and 59%, with Normalized Mutual Information (NMI) up to 0.2714 and Adjusted Rand Index (ARI) up to 0.2543.
The assessment of age, gender, and ethnicity from face photographs is a crucial step in a variety of fields, including access control, forensics, and surveillance, and demographic estimates can improve the understanding of face recognition and facial aging. In such a study, demographic estimation and face recognition/retrieval are two separate components. The work in [17] first extracted demographically informative features based on facial asymmetry to predict an image's age group, gender, and race; face photos were then recognized and retrieved using these demographic features. The demographic estimates were compared with those of a state-of-the-art algorithm. Experiments on the MORPH and FERET face datasets showed that the suggested strategy competes with existing approaches in recognition accuracy across aging differences. Many deep-learning (DL) algorithms have recently been developed for diverse applications, and those for face recognition (FR) in particular have taken a huge leap. Deep FR systems benefit from the hierarchical architecture of DL approaches to develop discriminative face representations, and DL approaches considerably enhance state-of-the-art FR systems while stimulating a wide range of efficient and diversified real-world applications. FR systems using several types of DL approaches were examined, and 171 recent contributions were summarized, in [18]. The authors addressed DL-based FR systems from various algorithmic and architectural aspects as well as present and future trends in the field. They then discussed activation and loss functions for various DL approaches, summarized the datasets used in FR tasks, and treated in detail problems relating to illumination, expression, pose variation, and occlusion.
Their final discussion examined ways to improve FR tasks and future developments. Despite recent advances, ethnicity recognition with deep neural networks has received less attention from the scientific community than other facial soft biometrics such as gender and age. Training CNNs for ethnicity recognition requires a large and representative dataset, which was previously unavailable, and gathering new ethnicity datasets is difficult: annotation must be carried out manually by people trained to recognize the basic ethnicity groups from somatic facial features. The VGGFace2 Mivia Ethnicity Recognition (VMER) dataset, which contains more than 3,000,000 face pictures annotated with four ethnicity categories (African American, East Asian, Caucasian Latin, and Asian Indian), fills this gap in facial soft-biometrics research. To prevent the bias produced by the well-known other-race effect, the final annotations were obtained using a methodology that requires the judgment of three people belonging to different ethnicities. Prominent deep network architectures, including VGG-16, VGG-Face, ResNet-50, and MobileNet v2, were analyzed in [10]. Finally, those authors conducted a cross-dataset evaluation showing that deep network architectures trained on VMER generalize better across diverse testing datasets than similar models trained on the largest previously available ethnicity dataset. The ethnicity labels for the VMER dataset and the code used in the studies are available at https://mivia.unisa.it (accessed on 6 June 2022) upon request. Face recognition is one of the most intriguing study areas in computer vision, and deep-learning techniques such as the CNN have made great strides in recent years, proving highly successful at identifying people's faces in pictures. Several researchers have studied facial recognition.
The report in [19] summarized researchers' work on CNN-based facial recognition, covering studies published during the last five years, including those cited therein, and examined whether CNN-based face recognition had seen renewed progress in that period. Its theoretical foundations cover CNNs, facial recognition, and a description of the databases used in numerous studies. The survey aimed to yield new insights into facial recognition based on CNNs.
Face recognition and ethnicity recognition are closely related as they both involve analyzing and identifying characteristics from a person’s face. While face recognition focuses on recognizing and verifying the identity of a person from their facial features, ethnicity recognition aims to classify individuals based on their ethnic background or race. CNN models have been utilized to extract features from facial images, which were then used to classify individuals into different ethnicity categories. Therefore, the techniques and methodologies used in face recognition, such as CNNs, can be applied to ethnicity recognition tasks.
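As a toy illustration of this transfer of techniques, the sketch below treats fixed "CNN embeddings" as input to a simple classifier. The data is synthetic (random clusters standing in for face embeddings), and the nearest-centroid rule stands in for a trained classification head; none of this reproduces any published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for CNN face embeddings: each class clusters
# around a different random direction in a 128-D feature space.
centers = rng.normal(size=(3, 128))            # one center per ethnicity label
X = np.vstack([c + 0.1 * rng.normal(size=(20, 128)) for c in centers])
y = np.repeat(np.arange(3), 20)

# Nearest-centroid classifier on the embeddings (stand-in for a trained head).
centroids = np.stack([X[y == k].mean(axis=0) for k in range(3)])

def predict(feats):
    # Assign each embedding to the closest class centroid.
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

print((predict(X) == y).mean())  # near-perfect on this well-separated toy data
```

In practice, the embeddings would come from a network pre-trained for face recognition, and the classification head (linear layer, SVM, or centroids as here) would be fitted on labeled ethnicity data.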
In the recent decade, interest in face recognition studies has increased significantly, and identifying a person's ethnicity is one of the most difficult challenges in face recognition. In [20], new CNNs were used to construct a model that identifies people's ethnicity from their facial traits, using a new dataset of 3141 photos representing three nationalities. As far as those authors know, this was the first time an image dataset for ethnicity had been collected and made publicly available. The CNN was compared with two models then considered the best in the field, Inception V3 and VGG; the authors' model performed best, with a verification accuracy of 96.9% on the evaluated photos. Age and gender prediction of unfiltered faces classifies unconstrained real-world facial photos into predefined age and gender classes; standard methods, however, fail on unfiltered benchmarks because of the wide range of variation in unconstrained photos. Due to their superior performance in facial analysis, CNN-based approaches have recently been widely used for such classification. A novel end-to-end CNN approach was proposed in [21] to accomplish robust age-group and gender classification for unfiltered real-world faces. In the two-level CNN architecture, features were first extracted and then classified: feature extraction and classification worked together to classify photos by age and gender from the extracted features. A robust image-preprocessing approach handled the huge variance of unfiltered real-world faces before they were input to the CNN model. The network was trained on the IMDb-WIKI dataset with its noisy labels, then fine-tuned on MORPH-II and finally on the original OIU-Adience dataset.
On the OIU-Adience benchmark, the experimental results reveal that their model performs best in age and gender classification, improving on the best-reported results by 16.6% (exact accuracy) and 3.2% (one-off accuracy) for age-group and gender classification, respectively. In computer vision, human facial-image analysis is a hot topic. A methodology for facial-image analysis addressing the three tough challenges of race, age, and gender detection through face parsing was proposed in [22]. The authors trained an end-to-end face-parsing model with deep CNNs on manually labeled face photos. A deep-learning-based segmentation technique segmented a facial image into seven dense classes, and probabilistic classification produced a probability map for each face class. The probability maps were employed as feature descriptors: the authors created a CNN model for each demographic task (race, age, and gender) by extracting features from the probability maps. Extensive studies on state-of-the-art datasets yielded significantly better results than previously obtained.
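The probability-map idea can be sketched in a few lines. The toy logits below stand in for the output of a trained segmentation network, and mean-pooling each map is just one plausible way to summarize it into a feature; the grid size and class count are illustrative:

```python
import numpy as np

def softmax(z, axis=0):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy logits from a segmentation net: 7 face classes over an 8x8 image grid.
rng = np.random.default_rng(1)
logits = rng.normal(size=(7, 8, 8))

prob_maps = softmax(logits, axis=0)      # one probability map per class
assert np.allclose(prob_maps.sum(axis=0), 1.0)  # per-pixel distribution

# Each map can then be summarized into a feature vector, e.g. by pooling.
features = prob_maps.mean(axis=(1, 2))   # 7-D descriptor of class coverage
print(features.shape)  # (7,)
```

In the cited work, features derived from the per-class maps feed the downstream demographic classifiers rather than a simple pooled vector, but the construction of the maps themselves follows this pattern.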
With the growth of the digital age and the rise of human–computer interaction, demand for gender-classification systems has increased. Automated gender classification could serve various purposes, including indexing facial photos by gender, monitoring gender-restricted areas, gender-adaptive targeted marketing, and passive collection of gender demographics [23]. To test the accuracy of face gender-classification algorithms, the National Institute of Standards and Technology (NIST) enlisted five commercial companies and one institution, using a combined corpus of close to one million facial pictures from visas and mugshots [24]. NIST's testing approach simulated operational reality, where software is shipped and used “as-is” without further algorithmic training. On a large dataset of photographs taken under controlled lighting, pose, and facial-expression settings, core gender-classification accuracy was evaluated by gender, age group, and ethnicity, and an important part of the research compared these results with those from commonly benchmarked “in the wild” (i.e., unconstrained) datasets. Sketch-classification performance and gender-verification accuracy were assessed as a function of how many images were taken of each individual. Demographic variables such as age, ethnicity, and gender have a significant impact on the appearance of the human face, with each category further subdivided into classes such as black and white, male and female, young (18–30), middle-aged (30–50), and old (50–70); most subjects look more like their peers in their own age group than like those in other age groups. Subjects from a wide range of ages, ethnicities, and genders were analyzed to see how accurate facial verification was [5].
To that end, the authors employed a CNN for feature extraction and demonstrated that their approach outperformed a commercial face-recognition engine for specific demographics. Women, young people between the ages of 18 and 30, and African Americans all showed inferior biometric-verification performance compared with other demographic groups. Using this strategy, the authors then tested the accuracy of face verification across multiple demographic groups and, based on their findings, offered recommendations on how to improve face verification for people of different ethnicities [24].
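Per-demographic verification accuracy of the kind reported above is typically summarized by the false match rate (FMR) and false non-match rate (FNMR) at a decision threshold. The sketch below computes both on synthetic comparison scores (the score distributions and threshold are invented for illustration); applying the same function to each demographic group's comparisons separately exposes gaps like those the authors describe:

```python
import numpy as np

def verification_rates(scores, same_id, threshold):
    """FMR and FNMR at a given threshold, from comparison scores and
    flags saying whether each pair shares an identity."""
    scores = np.asarray(scores, dtype=float)
    same_id = np.asarray(same_id, dtype=bool)
    fmr = (scores[~same_id] >= threshold).mean()   # impostor pairs accepted
    fnmr = (scores[same_id] < threshold).mean()    # genuine pairs rejected
    return fmr, fnmr

rng = np.random.default_rng(4)
genuine = rng.normal(0.7, 0.1, 1000)    # toy genuine-pair scores
impostor = rng.normal(0.3, 0.1, 1000)   # toy impostor-pair scores
scores = np.concatenate([genuine, impostor])
same_id = np.array([True] * 1000 + [False] * 1000)
fmr, fnmr = verification_rates(scores, same_id, threshold=0.5)
print(round(fmr, 3), round(fnmr, 3))
```

A group with systematically lower genuine scores will show a higher FNMR at the same global threshold, which is one concrete way the demographic disparities above manifest.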
Unsupervised fair-score normalization was proposed to lessen the impact of bias on face recognition and to yield a large overall performance gain. The authors' approach was based on treating “similar” individuals similarly by applying a normalization strategy [25][26]. Experiments under controlled and natural conditions were conducted on three publicly available datasets. When gender was considered, the method reduced demographic biases by 82.7%, and it consistently reduced bias compared with previous efforts. Overall performance improved by up to 53.2% at a false match rate of 10⁻³ and by up to 82.9% at a false match rate of 10⁻⁵, in contrast with earlier works. Furthermore, the method is not restricted to face biometrics and can be easily integrated into existing recognition systems.
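A much-simplified illustration of group-wise score normalization follows (this is not the authors' exact unsupervised fairness-driven method): standardize comparison scores within each demographic group so that a single global threshold treats the groups more comparably. The groups, score distributions, and bias below are synthetic:

```python
import numpy as np

def groupwise_normalize(scores, groups):
    """Simplified per-group score normalization: map each demographic
    group's comparison scores to zero mean / unit variance."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(scores)
    for g in np.unique(groups):
        m = groups == g
        out[m] = (scores[m] - scores[m].mean()) / scores[m].std()
    return out

rng = np.random.default_rng(2)
groups = np.array(["a"] * 500 + ["b"] * 500)
# Group "b" systematically receives lower raw scores (a simulated bias).
raw = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.4, 0.1, 500)])
norm = groupwise_normalize(raw, groups)
print(norm[groups == "a"].mean(), norm[groups == "b"].mean())
```

After normalization, both groups' score distributions are centered at zero, so the systematic offset no longer pushes one group disproportionately past (or below) a shared threshold.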
Many real-world applications, such as human–computer interaction (HCI), demography-based classification, biometric identification, security, and defense, use ethnicity as an important demographic trait of human beings. A new method to deduce a person's ethnicity from face photographs was presented in [2]. The proposed method employed an SVM with a linear kernel as the classifier, fed by features from a pre-trained CNN. In contrast to prior research using handcrafted features such as Local Binary Patterns (LBP) and Gabor filters, this technique leveraged translationally invariant hierarchical features learned by the network. To support the claim that their method handles a wide range of expressions and lighting conditions, the authors conducted extensive trials on ten facial databases with three ethnicity classes: Asian, African American, and Caucasian. The average classification accuracies were 98.28%, 99.66%, and 99.05%, respectively, across all datasets. Other races (e.g., Latinos) are considerably underrepresented in public face-image collections, and models built on such datasets yield inconsistent classification accuracies for non-white race groups [24][25][26][27]. These datasets have a race-bias problem; thus, the authors created a new face-image dataset containing photographs of 108,501 people across seven racial categories: white, black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled by race, gender, and age. Tests gauged generalizability on pre-existing face-attribute datasets and on brand-new image datasets; when tested on new datasets, the model trained on this data performed significantly better than other models for both male and female participants.
In addition, the authors assessed the accuracy of various commercial computer-vision APIs across a range of demographics, including gender, race, and age. Race recognition (RR), which has numerous applications in surveillance systems and in image and video interpretation and analysis, is a challenging problem, and the use of deep-learning models to address it has been analyzed in [28][29]. A race-recognition framework (RRF) was proposed comprising an information collector (IC), face detection and preprocessing (FD&P), and RR modules. Two independent models were presented for the RR module: an RR-specific CNN (the RR-CNN model) and a fine-tuned model based on the VGG object-recognition network (RR-VGG). The dataset, entitled VNFaces, is made up of photographs taken directly from Vietnamese Facebook pages. Experiments compared the accuracy of the RR-CNN and RR-VGG models within the suggested framework: on the VNFaces dataset, the RR-VGG model with augmented input photos achieved the highest accuracy (88.87%), while the independent and lightweight RR-CNN model achieved 88.64%. Extension experiments showed that the models can be applied to other race-dataset problems, such as Japanese, Chinese, or Brazilian, with over 90% accuracy; the fine-tuned RR-VGG model attained the best accuracy and was suggested for most situations. Ethnicity plays a fundamental and significant role in biometric recognition because it is an inherent characteristic of human beings. A new approach to ethnicity classification was presented in [8]. Common methods identify ethnicity by extracting characteristics from facial photos and building a classifier on those features; here, deep CNNs were instead used to extract and classify features simultaneously.
The proposed method was tested on three classification tasks (black versus white, Chinese versus non-Chinese, and Han versus Uyghur versus other non-Chinese) on both public and self-collected databases and found to be effective. Race classification in facial-image analysis has been a long-standing problem [28]. Rather than analyzing every area of the face, it is critical to focus on the most discriminative ones, and face segmentation, an important part of many face-analysis tasks, can greatly aid the classification of ethnicity and race. A race-classification technique based on a face-segmentation framework was proposed in [13]. A face-segmentation model was built using a deep convolutional neural network (DCNN), trained on facial images labeled with seven classes, including the nose, skin, hair, eyebrows, eyes, mouth, and background. In the first phase, it generated segmentation results, and probability maps (PMs) were constructed for each semantic class using the probabilistic-classification approach. The five facial classes most important for determining a person's race were examined in depth, and a new model was trained with the DCNN on features retrieved from the PMs of these five classes. The authors tested the suggested race-classification method on four typical face datasets and found it more accurate than earlier methods.
Because there is no universally accepted definition of what constitutes “race”, and because the world's population is so diverse, determining a person's race is a difficult undertaking. The identification of four basic racial groups (Caucasian, African, Asian, and Indian) is the focus of this research. To train their deep convolutional network (R-Net), the authors used the recently developed BUPT Equalized Face dataset, which contains around 1.3 million photos captured in uncontrolled environments. The studies in [30][31][32][33][34][35][36][37] were conducted on other datasets, such as UTK and CFD, to verify their validity. Additionally, R-Net was compared with a VGG16-based race-estimation model, and experiments demonstrated the model's robustness across a wide range of settings. Finally, Grad-CAM (Gradient-weighted Class Activation Mapping) was used to visualize the deep-learning model. The creation of a deep-learning-based approach to intelligent face recognition for smart homes was one of the main objectives of [38].
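The Grad-CAM visualization mentioned above reduces to a small computation once a convolutional layer's activations and the class-score gradients are available: channel weights are the global-average-pooled gradients, and the heatmap is the ReLU of the weighted sum of activation maps. The numpy sketch below uses random arrays in place of real network tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from conv activations A_k (channels, H, W) and the
    gradients of the class score w.r.t. those activations:
    alpha_k = global-average-pooled gradients, L = ReLU(sum_k alpha_k * A_k)."""
    alphas = gradients.mean(axis=(1, 2))                  # one weight per channel
    cam = np.tensordot(alphas, activations, axes=(0, 0))  # weighted sum of maps
    return np.maximum(cam, 0.0)                           # ReLU keeps positive evidence

rng = np.random.default_rng(3)
A = rng.normal(size=(16, 7, 7))   # toy activations: 16 channels, 7x7 maps
dY = rng.normal(size=(16, 7, 7))  # toy gradients of a class score
heatmap = grad_cam(A, dY)
print(heatmap.shape)  # (7, 7)
```

In a real pipeline, the heatmap would be upsampled to the input resolution and overlaid on the face image to show which regions drove the race prediction.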

4. Existing Methods and Novelty

In one report, three experimental studies investigated how a candidate's skin tone, race, and ethnicity intersected with voters' voting preferences and interpersonal evaluations (e.g., warmth, trustworthiness, and expertise) [9]. Study 1 focused on a light-skinned (as opposed to dark-skinned) African American candidate. The second study examined the voting preferences of white and non-white participants and the influence of race, ethnicity, and skin tone (lighter vs. darker) on voting choices. The third study examined how race and ethnicity influence voters' preferences as well as the accuracy and significance of skin-tone memories. The authors found that white people were less inclined than non-white people to vote for underrepresented candidates of color because they held more negative views of them (e.g., perceiving less warmth and lower trustworthiness). The extent of this bias was observed in predictions of a candidate's perceived warmth, trustworthiness, and competence [36][37]. When race and ethnicity were associated with certain skin tones, they significantly impacted voting choices and attitudes.

This entry is adapted from the peer-reviewed paper 10.3390/app13127288

References

  1. Akbar, M.; Furqan, K.M.Y.; Yaseen, H. Evaluation of Ethnicity and Issues of Political Development in Punjab, Pakistan. Glob. Polit. Rev. 2020, V, 57–64.
  2. Anwar, I.; Islam, N.U. Learned features are better for ethnicity classification. Cybern. Inf. Technol. 2017, 17, 152–164.
  3. Belcar, D.; Grd, P.; Tomičić, I. Automatic Ethnicity Classification from Middle Part of the Face Using Convolutional Neural Networks. Informatics 2022, 9, 18.
  4. Masood, S.; Gupta, S.; Wajid, A.; Gupta, S.; Ahmed, M. Prediction of human ethnicity from facial images using neural networks. Adv. Intell. Syst. Comput. 2018, 542, 217–226.
  5. El Khiyari, H.; Wechsler, H. Face Verification Subject to Varying (Age, Ethnicity, and Gender) Demographics Using Deep Learning. J. Biom. Biostat. 2016, 7, 11.
  6. Sulaiman, M.A.; Kocher, I.S. A systematic review on Evaluation of Driver Fatigue Monitoring Systems based on Existing Face/Eyes Detection Algorithms. Acad. J. Nawroz Univ. (AJNU) 2022, 11, 57–72.
  7. Ghani, M.U.; Alam, T.M.; Jaskani, F.H. Comparison of Classification Models for Early Prediction of Breast Cancer. In Proceedings of the 2019 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 1–2 November 2019; pp. 1–6.
  8. Wang, W.; He, F.; Zhao, Q. Facial ethnicity classification with deep convolutional neural networks. In Proceedings of the 11th Chinese Conference, CCBR 2016, Chengdu, China, 14–16 October 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 176–185.
  9. Chirco, P.; Buchanan, T.M. Dark faces in white spaces: The effects of skin tone, race, ethnicity, and intergroup preferences on interpersonal judgments and voting behavior. Anal. Soc. Issues Public Policy 2022, 22, 427–447.
  10. Greco, A.; Percannella, G.; Vento, M.; Vigilante, V. Benchmarking deep network architectures for ethnicity recognition using a new large face dataset. Mach. Vis. Appl. 2020, 31, 67.
  11. SteelFisher, G.K.; Findling, M.G.; Bleich, S.N.; Casey, L.S.; Blendon, R.J.; Benson, J.M.; Sayde, J.M.; Miller, C. Gender discrimination in the United States: Experiences of women. Health Serv. Res. 2019, 54, 1442–1453.
  12. Deshpande, K.V.; Pan, S.; Foulds, J.R. Mitigating Demographic Bias in AI-based Resume Filtering. In Proceedings of the Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, Genoa, Italy, 14–17 July 2020; pp. 268–275.
  13. Khan, K.; Khan, R.U.; Ali, J.; Uddin, I.; Khan, S.; Roh, B.H. Race classification using deep learning. Comput. Mater. Contin. 2021, 68, 3483–3498.
  14. Vicente-Samper, J.M.; Vila-Navarro, E.; Sabater-Navarro, J.M. Data acquisition devices towards a system for monitoring sensory processing disorders. IEEE Access 2020, 8, 183596–183605.
  15. Rawan, B.; Bibi, N. Construction of Advertisements in Pakistan: How far Television Commercials Conform to Social Values and Professional Code of Conduct? Glob. Reg. Rev. 2019, IV, 22–31.
  16. Chitale, V.S.; Sciences, M. A Novel Indexing Method Using Hierarchical Classification for Face-Image. Doctoral Dissertation, Auckland University of Technology, Auckland, New Zealand, 2020.
  17. Sajid, M.; Shafique, T.; Manzoor, S.; Iqbal, F.; Talal, H.; Samad Qureshi, U.; Riaz, I. Demographic-assisted age-invariant face recognition and retrieval. Symmetry 2018, 10, 148.
  18. Fuad, M.T.H.; Fime, A.A.; Sikder, D.; Iftee, M.A.R.; Rabbi, J.; Al-Rakhami, M.S.; Gumaei, A.; Sen, O.; Fuad, M.; Islam, M.N. Recent advances in deep learning techniques for face recognition. IEEE Access 2021, 9, 99112–99142.
  19. Saragih, R.E.; To, Q.H. A Survey of Face Recognition based on Convolutional Neural Network. Indones. J. Inf. Syst. 2022, 4, 122–139.
  20. Albdairi, A.J.A.; Xiao, Z.; Alghaili, M.; Huang, C. Identifying Ethnics of People through Face Recognition: A Deep CNN Approach. Sci. Program. 2020, 2020, 6385281.
  21. Agbo-Ajala, O.; Viriri, S. Deeply Learned Classifiers for Age and Gender Predictions of Unfiltered Faces. Sci. World J. 2020, 2020, 1289408.
  22. Khan, K.; Attique, M.; Khan, R.U.; Syed, I.; Chung, T.S. A multi-task framework for facial attributes classification through end-to-end face parsing and deep convolutional neural networks. Sensors 2020, 20, 328.
  23. Angulu, R.; Tapamo, J.R.; Adewumi, A.O. Age estimation via face images: A survey. EURASIP J. Image Video Process. 2018, 2018, 42.
  24. Ngan, M.; Grother, P. Face Recognition Vendor Test (FRVT)—Performance of Automated Gender Classification Algorithms; US Department of Commerce, National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015.
  25. Atallah, R.R.; Kamsin, A.; Ismail, M.A.; Abdelrahman, S.A.; Zerdoumi, S. Face Recognition and Age Estimation Implications of Changes in Facial Features: A Critical Review Study. IEEE Access 2018, 6, 28290–28304.
  26. Terhörst, P.; Kolf, J.N.; Damer, N.; Kirchbuchner, F.; Kuijper, A. Post-comparison mitigation of demographic bias in face recognition using fair score normalization. Pattern Recognit. Lett. 2020, 140, 332–338.
  27. Kärkkäinen, K.; Joo, J. FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age. arXiv 2019, arXiv:1908.04913.
  28. Vo, T.; Nguyen, T.; Le, C.T. Race recognition using deep convolutional neural networks. Symmetry 2018, 10, 564.
  29. Mustapha, M.F.; Mohamad, N.M.; Osman, G.; Hamid, S.H.A. Age group classification using Convolutional Neural Network (CNN). J. Phys. Conf. Ser. 2021, 2084, 012028.
  30. Ahmed, M.A.; Choudhury, R.D.; Kashyap, K. Race estimation with deep networks. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 4579–4591.
  31. Badrulhisham, N.A.S.; Mangshor, N.N.A. Emotion Recognition Using Convolutional Neural Network (CNN). J. Phys. Conf. Ser. 2021, 1962, 1748–1765.
  32. Meenakshi, S.; Jothi, M.S.; Murugan, D. Face recognition using deep neural network across variationsin pose and illumination. Int. J. Recent Technol. Eng. 2019, 8, 289–292.
  33. Boussaad, L.; Boucetta, A. An effective component-based age-invariant face recognition using Discriminant Correlation Analysis. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 1739–1747.
  34. Sharmila; Sharma, R.; Kumar, D.; Puranik, V.; Gautham, K. Performance Analysis of Human Face Recognition Techniques. In Proceedings of the 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Ghaziabad, India, 18–19 April 2019; pp. 1–4.
  35. Rubeena; Kavitha, E. Sketch face Recognition using Deep Learning. In Proceedings of the 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2–4 December 2021; Volume 1, pp. 928–930.
  36. Sun, H.; Grishman, R. Lexicalized dependency paths based supervised learning for relation extraction. Comput. Syst. Sci. Eng. 2022, 43, 861–870.
  37. Sun, H.; Grishman, R. Employing lexicalized dependency paths for active learning of relation extraction. Intell. Autom. Soft Comput. 2022, 34, 1415–1423.
  38. Rahim, A.; Zhong, Y.; Ahmad, T. A Deep Learning-Based Intelligent Face Recognition Method in the Internet of Home Things for Security Applications. J. Hunan Univ. (Nat. Sci.) 2022, 49, 10.