
Artificial Intelligence for Facial Emotion Detection in Children

The knowledge of the emotions of children with Down Syndrome (DS) obtained through the analysis of their facial expressions during dolphin-assisted therapy, using Artificial Vision and Deep Convolutional Neural Networks, can significantly contribute to the effectiveness of the therapy. Understanding the emotional responses of DS children during therapy sessions can provide valuable insights into their level of engagement, comfort, and overall well-being. This information can help therapists and caregivers tailor the therapy sessions to meet the specific emotional needs of each child, enhancing their overall experience and potentially improving therapeutic outcomes.

Keywords: Down Syndrome; deep convolutional neural network; dolphin-assisted therapy; facial emotion detection

1. Introduction

The worldwide incidence of Trisomy 21 is estimated at between 1 in 1000 and 1 in 1100 newborns. In Mexico, according to data from the General Directorate of Health Information (2018), there is an annual incidence of 689 newborns with Trisomy 21 [1]. Zhao et al. in [2] mention that Trisomy 21, known in society as Down Syndrome, is a disease of genetic origin characterized by the presence of a third chromosome in what should be the 21st pair. The purpose of the current study is to use artificial intelligence in support of certain medical conditions through therapies that can be considered regular. It is important to mention that patients with various pathologies resort not only to regular therapies but also to alternative therapies, either herbal or assisted by animals such as horses, dogs, and even dolphins. One of these alternative therapies used in children with Down Syndrome is dolphin-assisted therapy, which focuses on reducing anxiety and stress levels, as well as on physical improvement. Dolphin-assisted therapy has been developed as an alternative treatment for people with various physical disabilities and psychological disorders, such as Autism, Attention Deficit, Trisomy 21, Spastic Cerebral Palsy, and Obsessive-Compulsive Disorder. This therapy is aimed at complementing and reinforcing existing therapies but does not replace them [3]. Because it is aimed at people who generally have difficulty communicating, it is difficult to determine its level of effectiveness.
Emotions are part of human life, and although people sometimes try to hide them, they occur involuntarily and are reflected in microexpressions and macroexpressions. These facial expressions are very useful for helping a person with Trisomy 21 communicate better, since they mostly use nonverbal communication [4]. Facial emotions (FEs) are classified as either voluntary or involuntary. Voluntary FEs can last between 0.5 and 4 s and cover a large facial area, perceptible across the entire image with very notable gestures, while involuntary FEs are characterized by small muscular movements of very short duration, between 0.065 and 0.5 s, occurring naturally in a small facial area and revealing the emotions one is trying to hide [5]. A duration-based rule of thumb separating the two is sketched below.
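As a concrete illustration of these timing thresholds, the following minimal Python sketch labels an expression event by its duration; the function and its boundary handling are illustrative assumptions, not taken from the cited works.

```python
# Hypothetical rule of thumb: label an expression event as micro or macro
# from its duration, using the ranges reported in [5]. The handling of the
# boundary at exactly 0.5 s is an arbitrary illustrative choice.
def classify_expression(duration_s: float) -> str:
    if 0.065 <= duration_s < 0.5:
        return "microexpression"   # involuntary, small facial area
    if 0.5 <= duration_s <= 4.0:
        return "macroexpression"   # voluntary, large facial area
    return "outside reported ranges"

print(classify_expression(0.2))  # -> microexpression
print(classify_expression(1.5))  # -> macroexpression
```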
In addition, artificial vision has become a great tool for developing recognition algorithms, since it combines five elements of computer science: (i) computer science concepts, (ii) digital image processing techniques and ideas, (iii) pattern recognition, (iv) artificial intelligence, and (v) computer graphics. Most machine vision tasks are related to the process of obtaining information about events or descriptions from input scenes based on four main features (a minimal pipeline sketch follows the list below):
  • Human vision;
  • Pattern recognition;
  • Computer science;
  • Signal processing.
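To make these ideas concrete, here is a minimal sketch of such a recognition pipeline in Python, assuming OpenCV's bundled Haar cascade for face detection and a hypothetical pretrained Keras classifier (emotion_cnn.h5) with an assumed seven-emotion label set; it illustrates the general approach rather than the method of any cited work.

```python
# Minimal artificial-vision pipeline: detect faces, crop and normalize
# them, then classify each crop with a CNN. The model file and the label
# order below are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Frontal-face Haar cascade shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_cnn.h5")  # hypothetical pretrained classifier

def predict_emotions(image_path: str):
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        yield LABELS[int(np.argmax(probs))], float(probs.max())

for label, confidence in predict_emotions("therapy_frame.jpg"):
    print(f"{label}: {confidence:.2f}")
```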
As stated above, knowledge of the emotions of children with DS obtained by analyzing their facial expressions during dolphin-assisted therapy with Artificial Vision and Deep Convolutional Neural Networks can significantly contribute to the effectiveness of the therapy. Additionally, monitoring and tracking emotions throughout therapy can aid in identifying patterns or triggers that may impact the child's progress, allowing for timely adjustments to the treatment plan. Ultimately, a deeper understanding of the emotions of DS children during dolphin-assisted therapy can foster a more supportive and empathetic therapeutic environment, promoting their emotional development and overall growth.

2. Artificial Intelligence for Facial Emotion Detection in Children

Nowadays, there is a wide variety of therapies that favor, or promise to contribute to, a fuller development of children with Down Syndrome (DS). These range from the most conventional, such as physiotherapy, occupational and behavioral therapy, or speech therapy, which seeks to correct oral and written communication disorders, as described by Esther Martín in [6], to alternative therapies that are more complementary in nature to the conventional ones. The latter include therapies with nutritional supplements, drugs, cells, or animals; there are even therapies that seek healing through rites of faith and prayer, all of which are described by Roizen in [7]. This work focuses in depth on dolphin-assisted therapies. All of these therapies aim to address various developmental challenges and enhance the overall well-being of children with DS. Regarding the integration of artificial intelligence (AI) into these therapies, recent advancements have shown promising potential. AI technologies, such as machine learning algorithms and natural language processing, are being used to develop personalized therapy programs tailored to the specific needs and abilities of each child with DS. AI-based tools can assist in speech and language assessment, provide real-time feedback during therapy sessions, and facilitate interactive learning experiences. These innovative approaches have the potential to improve the effectiveness, efficiency, and accessibility of therapies for children with DS, ultimately promoting their development and quality of life. By discussing the different types of therapies used for children with DS and highlighting the role of AI in these therapies, this work aims to provide a comprehensive understanding of the evolving landscape of therapeutic interventions for this population.
Sampathila et al. in [8] propose an intelligent deep learning algorithm for identifying the white blood cells whose overproduction of lymphocytes in the human body's bone marrow leads to the development of acute lymphoblastic leukemia. They aim to detect this rare type of blood cancer in children at a very early stage, since patients can be cured when this deadly disease is detected in time. The proposed algorithm takes microscopic blood smear images as input data and is implemented as a convolutional neural network that predicts which blood cells are leukemia cells. The customized ALLNET model was trained and tested on microscopic images available as open-source data, achieving 96% accuracy, 96% specificity, 96% sensitivity, and a 95% F1-score for this classifier (the sketch below shows how such metrics follow from a confusion matrix). This intelligent method can be applied during prescreening to detect leukemia cells during complete blood counts and peripheral blood analysis.
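For readers unfamiliar with how these four figures relate, the following sketch derives them from a binary confusion matrix; the counts are illustrative and are not the actual ALLNET results.

```python
# Accuracy, specificity, sensitivity, and F1-score computed from a binary
# confusion matrix (the tp/tn/fp/fn counts below are illustrative).
def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)   # recall on positive (leukemia) cells
    specificity = tn / (tn + fp)   # recall on negative (healthy) cells
    precision = tp / (tp + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

print(binary_metrics(tp=96, tn=96, fp=4, fn=4))
```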
Krishnadas et al. in [9] applied scaled YOLOv5 and YOLOv4 models for the automated detection and classification of malaria parasite types and their progression stages, in order to produce a faster and more accurate diagnosis for patients; the authors use a set of microscopic images of parasitized red blood cells, as well as another set of images, to train the models. The YOLOv4 model reached 83% accuracy and the YOLOv5 model achieved 79% accuracy. Both models can support doctors in diagnosing malaria more accurately and in predicting its stage (a minimal YOLOv5 inference sketch follows this paragraph). Chadaga et al. in [10] conduct a systematic literature review of artificial intelligence (AI) models for the accurate and early diagnosis of monkeypox (Mpox), selecting 34 studies in the following thematic categories: Mpox diagnostic tests, epidemiological modeling of Mpox infection spread, drug and vaccine discovery, and media risk management. The authors explain the detection of Mpox using AI and categorize machine learning and deep learning applications for mitigating Mpox. Chadaga et al. in [11] developed a decision support system using machine learning and deep learning techniques, such as deep neural networks and one-dimensional convolutional networks, to predict a patient's COVID-19 diagnosis from clinical, demographic, and blood markers, supporting the results of the standard RT-PCR test. Additionally, the authors applied explainable artificial intelligence techniques such as Shapley additive values, ELI5, the interpretable local model explainer, and QLattice, resulting in a stacked multilevel model with 94% accuracy, 95% recall, a 94% F1-score, and a 98% AUC. The models proposed by these authors can be used as a decision support system for the initial detection of coronavirus patients, streamlining medical infrastructure.
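As an illustration of the kind of object detector used in [9], here is a minimal YOLOv5 inference sketch via torch.hub; the weights file malaria_best.pt and the image name are hypothetical stand-ins for the authors' custom-trained model and data.

```python
# Minimal YOLOv5 inference on a blood-smear image. Loading custom weights
# through torch.hub requires the ultralytics/yolov5 repo dependencies.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="malaria_best.pt")  # hypothetical weights

results = model("blood_smear.jpg")  # detect parasites in one image
results.print()                     # summary of detections
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"class={model.names[int(cls)]} conf={conf:.2f} box={box}")
```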
Fraiwan et al. in [12] evaluate human emotions, with significant benefits in numerous domains including medicine. They develop a machine learning model to estimate the level of enjoyment and visual interest experienced by individuals while engaging with museum content, for instance. The model takes input from 8-channel electroencephalogram (EEG) signals, which are processed with multiscale entropy analysis to extract three features: the mean, the slope of the curve, and the complexity index (i.e., the area under the curve). In addition, this scheme reduces the number of features using principal component analysis without a noticeable loss of precision. This research demonstrates the potential of leveraging EEG signals and machine learning techniques to accurately estimate human enjoyment and visual interest, thereby contributing to the advancement of emotion evaluation in various fields. Shehu et al. in [13] employ a residual neural network (ResNet) model to propose an adversarial-attack-resistant approach for analyzing emotion in facial images using landmarks, specifically addressing the detection of small changes in the input image. Their findings demonstrate a decrease of up to 22% in vulnerability to attacks and a significantly reduced execution time compared with the ResNet model. Ngoc et al. in [14] propose a graph convolutional neural network that utilizes landmark features for facial emotion recognition, called a directed graph neural network. Nodes in the graph structure are defined by landmarks, and the edges of the directed graph are built to capture emotional information through the inherent properties of faces, such as geometric and temporal information. To address the vanishing gradient problem, the authors employ a stable form of temporal block in the graph framework, and their approach proves effective when fused with a conventional video-based method. The proposed method achieves advanced performance on the CK+ and AFEW datasets, with accuracies of 98.47% and 50.65%, respectively. Chowdary et al. in [15] use pretrained networks, including ResNet50, VGG19, Inception V3, and MobileNet, to recognize emotions from facial expressions. To achieve this, the authors remove the fully connected layers of the pretrained ConvNets and add their own fully connected layers, suited to the number of target classes; only the newly added layers are trained to update their weights, resulting in an accuracy of 96% for the VGG19 model, 97.7% for the ResNet50 model, 98.5% for the Inception V3 model, and 94.2% for the MobileNet model (a sketch of this transfer-learning scheme follows this paragraph). Kansizoglou et al. in [16] generate two Deep Convolutional Neural Network models to classify emotions online, using audio and video modalities for responsive prediction when the system has sufficient confidence. The authors cascade a long short-term memory layer and a reinforcement learning agent, which monitors the speaker, to carry out the final prediction. However, each learning architecture is susceptible to highly unbalanced training samples, so the authors suggest using more data for highly misclassified emotions, such as fear. Finally, Kansizoglou et al. in [17] develop a human–robot interaction system in which a robot gradually maps and learns the personality of a human by conceiving and tracking individual emotional variations throughout their interaction: facial landmarks of the subject are extracted, and a suitably designed deep recurrent neural network architecture is used to train the model.
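The following is a minimal sketch of the transfer-learning scheme described for [15]: the pretrained ConvNet's fully connected head is removed, the convolutional base is frozen, and only the newly added dense layers are trained. The head sizes and the seven-class output are illustrative assumptions rather than the authors' exact configuration.

```python
# Transfer learning with a frozen pretrained base: only the new
# fully connected head learns the emotion classes.
import tensorflow as tf

NUM_EMOTIONS = 7  # assumed number of target emotion classes

base = tf.keras.applications.ResNet50(
    weights="imagenet",
    include_top=False,        # drop the original fully connected layers
    input_shape=(224, 224, 3))
base.trainable = False        # freeze the convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),   # new trainable head
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Freezing the base prevents large gradients from the randomly initialized head from destroying the pretrained features; the same pattern applies to VGG19, Inception V3, and MobileNet by swapping the base constructor.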

References

  1. Día Mundial del Síndrome de Down. Available online: https://www.un.org/es/observances/down-syndrome-day (accessed on 9 April 2023).
  2. Zhao, Q.; Rosenbaum, K.; Okada, K.; Zand, D.J.; Sze, R.; Summar, M.; Linguraru, M.G. Automated Down Syndrome detection using facial photographs. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013.
  3. Escobar, J.J.M.; Matamoros, O.M.; del Villar, E.Y.A.; Padilla, R.T.; Reyes, I.L.; Zambrano, B.E.; Gómez, B.D.L.; Morfín, V.H.C. Non-Parametric Evaluation Methods of the Brain Activity of a Bottlenose Dolphin during an Assisted Therapy. Animals 2021, 11, 417.
  4. Síndrome de Down: Habla, Lenguaje y Comunicación. 2021. Available online: https://psikids.es/2021/09/27/sindrome-de-down-habla-lenguaje-y-comunicacion-2/ (accessed on 9 April 2023).
  5. Ben, X.; Ren, Y.; Zhang, J.; Wang, S.J.; Kpalma, K.; Meng, W.; Liu, Y.J. Video-based Facial Micro-Expression Analysis: A Survey of Datasets, Features and Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5826–5846.
  6. Martín, E. Tratamiento y Pronóstico del Síndrome de Down. Web Consultas, Revista de Salud y Bienestar. 2010. Available online: https://www.webconsultas.com/sindrome-de-down/tratamiento-y-pronostico-del-sindrome-de-down-2243 (accessed on 9 April 2023).
  7. Roizen, N.J. Terapias Complementarias y Alternativas para el Síndrome de Down; Síndrome de Down 22; Fundación Síndrome de Down de Cantabria: Cantabria, Spain, 2005.
  8. Sampathila, N.; Chadaga, K.; Goswami, N.; Chadaga, R.P.; Pandya, M.; Prabhu, S.; Bairy, M.G.; Katta, S.S.; Bhat, D.; Upadya, S.P. Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images. Healthcare 2022, 10, 1812.
  9. Krishnadas, P.; Chadaga, K.; Sampathila, N.; Rao, S.; S., S.K.; Prabhu, S. Classification of Malaria Using Object Detection Models. Informatics 2022, 9, 76.
  10. Chadaga, K.; Prabhu, S.; Sampathila, N.; Nireshwalya, S.; Katta, S.S.; Tan, R.S.; Acharya, U.R. Application of Artificial Intelligence Techniques for Monkeypox: A Systematic Review. Diagnostics 2023, 13, 824.
  11. Chadaga, K.; Prabhu, S.; Bhat, V.; Sampathila, N.; Umakanth, S.; Chadaga, R. A Decision Support System for Diagnosis of COVID-19 from Non-COVID-19 Influenza-like Illness Using Explainable Artificial Intelligence. Bioengineering 2023, 10, 439.
  12. Fraiwan, M.; Alafeef, M.; Almomani, F. Gauging human visual interest using multiscale entropy analysis of EEG signals. J. Ambient Intell. Humaniz. Comput. 2021, 12, 2435–2447.
  13. Shehu, H.A.; Browne, W.; Eisenbarth, H. An Adversarial Attacks Resistance-based Approach to Emotion Recognition from Images using Facial Landmarks. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 1307–1314.
  14. Ngoc, Q.T.; Lee, S.; Song, B.C. Facial Landmark-Based Emotion Recognition via Directed Graph Neural Network. Electronics 2020, 9, 764.
  15. Chowdary, M.K.; Nguyen, T.N.; Hemanth, D.J. Deep learning-based facial emotion recognition for human–computer interaction applications. Neural Comput. Appl. 2021.
  16. Kansizoglou, I.; Bampis, L.; Gasteratos, A. An Active Learning Paradigm for Online Audio-Visual Emotion Recognition. IEEE Trans. Affect. Comput. 2022, 13, 756–768.
  17. Kansizoglou, I.; Misirlis, E.; Tsintotas, K.; Gasteratos, A. Continuous Emotion Recognition for Long-Term Behavior Modeling through Recurrent Neural Networks. Technologies 2022, 10, 59.