Machine Learning-Based Facial Palsy Detection and Evaluation

Automated solutions for medical diagnosis based on computer vision form an emerging field of science aiming to enhance diagnosis and early disease detection. The detection and quantification of facial asymmetries enable facial palsy evaluation.  Deep learning methods allow the automatic learning of discriminative deep facial features, leading to comparatively higher performance accuracies.

mathematical modeling; facial palsy; facial landmarks; asymmetry index; deep learning

1. Introduction

Facial palsy is a common neuromuscular disorder causing facial weakness and impaired facial expressions [1]. Palsy patients lose control of the affected side of their face, experiencing drooping or stiffness of the muscles and disturbances of taste. Statistics on facial palsy report 25 incidents annually per 100,000 people, or approximately one person in 60 affected during their lifetime, while an average of 40,000 palsy patients are reported in the United States every year [2]. Even though palsy does not cause physical pain, patients experience psychological stress, discomfort, and depression, since palsy affects their appearance, facial movements, feeding functions, and, thus, their daily lives [3]. Therefore, accurate diagnosis and precise evaluation of the degree of palsy are essential for the objective assessment of facial nerve function and for monitoring the progress or resolution of palsy. This, in turn, helps in evaluating therapeutic processes and designing effective treatment plans.
Traditionally, facial palsy is diagnosed clinically by specialized neurologists who ask patients to perform specific facial expressions in order to evaluate the condition of certain facial muscles. The level of palsy is assessed by evaluating the symmetry between the right and left sides of the face according to various scoring standards and by measuring distances between facial landmarks on both sides with a simple ruler [4]. Such manual, empirical evaluation of palsy is therefore both labor intensive and subjective. Assessment based on visual inspection makes it hard to precisely quantify the severity of palsy and to track improvements between subsequent rehabilitation interventions. Moreover, assessment relies on the degree of human expertise; thus, the clinical quantification of palsy may differ between neurologists [5].
Automatic inspection approaches can alleviate these disadvantages and provide more consistent and objective facial palsy diagnosis and evaluation methods, offering neurologists an efficient decision-support tool [6]. The automatic quantitative evaluation of facial palsy has been a subject of research for many years. Several approaches use optical markers attached to the face to determine the degree of palsy [7][8], as well as full-face laser scanning [9][10] or electroneurography (ENoG) and electromyography (EMG) signals. These approaches, although very accurate, require specialized high-cost equipment and a constrained clinical environment and involve physical interventions that are obtrusive and uncomfortable. Moreover, patients cannot apply these approaches on their own to monitor their progress at home.
Recent advancements in image analysis algorithms, combined with the increasingly affordable cost of high-resolution capturing devices, have resulted in the development of efficient, simple, and cost-effective vision-based techniques for medical applications, reporting impressive state-of-the-art performances [11][12][13]. The diagnosis of various diseases is greatly assisted by the recognition of facial abnormalities using computer vision [14][15], dynamically incorporating facial recognition into artificial intelligence (AI)-based medicine [16][17]. Automatic image-based facial palsy detection could accelerate the diagnosis and progress evaluation of the disease, offering a non-invasive, simple, time- and cost-saving method that could be used by palsy patients themselves without the presence of a human expert.

2. Machine Learning-Based Facial Palsy Detection and Evaluation

Traditional machine learning methods are based on encoding facial palsy with asymmetry-related mathematical features. A portable automatic diagnosis system based on a smartphone application for classifying subjects as healthy or palsy patients was presented by Kim et al. [18]. Facial landmarks were extracted, and an asymmetry index was computed. Classification was implemented using Linear Discriminant Analysis (LDA) combined with Support Vector Machines (SVMs), resulting in 88.9% classification accuracy. Wang et al. [19] used Active Shape Models (ASMs) to locate facial landmarks, dividing the face into eight regions, and Local Binary Patterns (LBPs) to extract descriptors for recognizing patterns of facial movements in these regions, reaching a recognition rate of up to 93.33%. In [20], He et al. extracted LBP-based features in the spatial–temporal domain for both sides of the face and validated their method on biomedical videos, reporting an overall accuracy of up to 94% for House–Brackmann (HB) grading. In [21], the authors automatically measured the ability of palsy patients to smile using Active Appearance Models (AAMs) for feature extraction and facial expression synthesis, achieving an average accuracy of 87%. McGrenary et al. [22] quantified facial asymmetry in videos using an artificial neural network (ANN).
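The core idea behind such landmark-based approaches can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative example and not the exact formulation of [18]: a handful of left-side landmarks are mirrored across an estimated facial midline, their distances to the corresponding right-side landmarks are pooled into a normalized asymmetry feature vector, and a linear SVM separates healthy from palsy subjects. The landmark index pairs and the toy data are hypothetical assumptions.

```python
# Illustrative sketch of a landmark-based asymmetry index + SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical left/right landmark index pairs (subset of a 68-point layout).
LR_PAIRS = [(36, 45), (39, 42), (48, 54), (31, 35), (17, 26)]
MIDLINE_IDX = [27, 28, 29, 30, 8]  # nose bridge + chin, used to estimate the facial midline

def asymmetry_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (68, 2) array of (x, y) points for one face."""
    mid_x = landmarks[MIDLINE_IDX, 0].mean()  # vertical midline estimate
    feats = []
    for left, right in LR_PAIRS:
        # Reflect the left landmark across the midline and measure its offset
        # from the corresponding right landmark.
        mirrored_left = np.array([2 * mid_x - landmarks[left, 0], landmarks[left, 1]])
        feats.append(np.linalg.norm(mirrored_left - landmarks[right]))
    # Normalize by the inter-ocular distance for scale invariance.
    scale = np.linalg.norm(landmarks[36] - landmarks[45]) + 1e-8
    return np.array(feats) / scale

# Toy usage: random stand-ins for detected landmarks and a small training set.
rng = np.random.default_rng(0)
lm = rng.random((68, 2)) * 100
print(asymmetry_features(lm))

X = rng.normal(size=(40, len(LR_PAIRS)))   # asymmetry features per subject
y = np.array([0, 1] * 20)                  # 0: healthy, 1: palsy
clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, y)
print(clf.predict(X[:3]))
```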
Facial asymmetry analysis was also studied early on by Quan et al. [23], who presented a method for automatically detecting and quantifying facial dysfunctions based on 3D face scans. The authors extracted a set of feature points that enabled the segmentation of faces into local regions, allowing asymmetry to be evaluated for specific regions of interest rather than the entire face. Gaber et al. [24] proposed an evaluation system for seven palsy categories based on an ensemble learning SVM classifier, reporting an accuracy of 96.8%; they showed that their classifier was robust and stable, even for different training and testing samples. Zhuang et al. [25] compared various feature extraction techniques and concluded that 2D static images with Histogram of Oriented Gradients (HOG) features tend to be more accurate. The authors proposed a framework in which landmark and HOG features were extracted, Principal Component Analysis (PCA) was applied separately to each feature set, and the results were used as inputs to an SVM classifier for classification into three classes, demonstrating performance of up to 92.2% for the entire face. The same research group demonstrated in [26] a video classification tool, the Facial Deficit Identification Tool for Videos (F-DIT-V), exploiting HOG features to reach a 92.9% classification accuracy. Arora et al. [27] tested an SVM and a logistic regression classifier on facial landmark features, achieving a 76.87% average accuracy with the SVM. In [28], laser speckle contrast imaging was employed by Jiang et al. to monitor the facial blood flow of palsy patients. Faces were then segmented into regions based on blood distribution features, and three HB score classifiers were tested for their classification performance: a neural network (NN), an SVM, and a k-NN, achieving an accuracy of up to 97.14%. A set of four classifiers (multi-layer perceptron (MLP), SVM, k-NN, multinomial logistic regression (MNLR)) was also comparatively tested in [29]. The authors explored regional information, extracting handcrafted features only in certain facial areas of interest. Experimental results reported up to 95.61% correct facial palsy detection and 95.58% correct facial palsy assessment in three categories (healthy, slight palsy, and strong palsy).
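The HOG-based pipelines of [25][26] follow a generic pattern that can be sketched as follows; the HOG parameters, image size, and toy data below are illustrative assumptions rather than the authors' exact settings.

```python
# Generic HOG descriptor + PCA + SVM pipeline for facial weakness classification.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hog_descriptor(gray_face: np.ndarray) -> np.ndarray:
    """gray_face: 2D array, e.g., a 128x128 aligned face crop (values in [0, 1])."""
    return hog(gray_face, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Toy data: 30 random "face crops" with labels for
# {0: left weakness, 1: right weakness, 2: normal}.
rng = np.random.default_rng(1)
faces = rng.random((30, 128, 128))
labels = np.tile([0, 1, 2], 10)

X = np.stack([hog_descriptor(f) for f in faces])
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data
```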
All previous methods are based on hand-crafted features. Deep learning methods can automatically learn discriminative features from the data, without the need to compute them in advance, and have accomplished state-of-the-art performances in the field of medical imaging [30]. Accordingly, most recent works in vision-based facial palsy detection and evaluation employ deep features. Storey and Jiang [31] presented a unified multitask convolutional neural network (CNN) for simultaneous face proposal, detection, and asymmetry analysis. Sajid et al. [32] introduced a CNN to classify palsy into five grades, resulting in a 92.6% average classification accuracy. Xia et al. [5] proposed a deep neural network (DNN) to detect facial landmarks in palsy faces. Hsu et al. [33] proposed a deep hierarchical network (DHN) to quantify facial palsy, including a YOLO2 detector for face detection, a fused neural architecture (line segment network, LSN) to detect facial landmarks, and an object detector, similar to Darknet, to locate palsy regions. Preliminary results of the same method were published in [34]. Guo et al. [35] investigated unilateral peripheral facial paralysis classification using GoogLeNet, reaching a classification accuracy of up to 91.25% for predicting the HB grade.
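A common thread in these works is transfer learning: a CNN pretrained on a large generic image dataset is fine-tuned with a new classification head for the target palsy grades. The sketch below shows this pattern with a ResNet-18 backbone as a stand-in; it is not a reproduction of the GoogLeNet or VGG-16 setups in [32][35], and the batch shown is synthetic.

```python
# Transfer-learning sketch: fine-tune a pretrained backbone for N palsy grades.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 5  # e.g., five palsy grades as in [32]

# Pretrained backbone (downloads ImageNet weights on first use).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, grades: torch.Tensor) -> float:
    """images: (B, 3, 224, 224) normalized face crops; grades: (B,) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), grades)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch to show the call signature.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_GRADES, (4,))))
```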
Storey et al. [36] implemented a facial grading system for video sequences based on a 3D CNN model using ResNet as the backbone, reporting a palsy classification accuracy of up to 82%. Barrios Dell’Olio and Sra [37] proposed a CNN for detecting muscle activation and intensity in the users of their mobile augmented reality mirror therapy system. In [38], Tan et al. introduced a facial palsy assessment method comprising a facial landmark detector, a feature extractor based on an EfficientNet backbone, and a semi-supervised extreme learning machine to classify the features, reporting an 85.5% accuracy. Abayomi-Alli et al. [39] trained a SqueezeNet network with augmented images and used the activations from the final convolutional layer as features to train a multiclass error-correcting output code SVM (ECOC-SVM) classifier, reporting a mean classification accuracy of up to 99.34%. In [40], computed tomography (CT) images were used to train two geometric deep learning models, PointNet++ and PointCNN, for facial part segmentation of healthy subjects and palsy patients, toward facial monitoring and rehabilitation. Umirzakova et al. [41] suggested a lightweight deep learning model for analyzing facial symmetry, using a foreground attention block for enhanced local feature extraction and a depth-map estimator to provide more accurate segmentation results. Table 1 summarizes basic information from all the aforementioned studies, including the methodology followed, the dataset used, and the reported performance. Details regarding the mathematical modeling of machine learning and deep learning classification models can be found in [42][43][44][45][46].
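The "deep features plus classical classifier" pattern used in [39] can be sketched as follows: pooled activations from a pretrained SqueezeNet serve as feature vectors for an SVM. In this sketch, scikit-learn's standard multiclass SVC stands in for the ECOC-SVM of the paper, and the images and labels are random placeholders.

```python
# Sketch of deep feature extraction (SqueezeNet) + classical SVM classification.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT)
backbone.eval()
feature_extractor = nn.Sequential(backbone.features,        # final conv activations
                                  nn.AdaptiveAvgPool2d(1),   # global average pooling
                                  nn.Flatten())              # -> (B, 512) feature vectors

@torch.no_grad()
def deep_features(images: torch.Tensor) -> np.ndarray:
    """images: (B, 3, 224, 224) normalized face crops."""
    return feature_extractor(images).numpy()

# Toy pipeline: random "images" and placeholder labels for three classes.
X = deep_features(torch.randn(12, 3, 224, 224))
y = np.repeat([0, 1, 2], 4)
svm = SVC(kernel="linear").fit(X, y)       # stands in for the ECOC-SVM of [39]
print(svm.predict(X[:2]))
```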
Table 1. Methodologies for facial palsy (FP) detection.
| Ref. | Objective | Methodology | Dataset | Performance | Conclusions/Limitations |
|------|-----------|-------------|---------|-------------|-------------------------|
| [18] | Smartphone-based FP diagnostic system (five FP grades) | Linear regression model for facial landmark detection and SVM with linear kernel for classification | Private dataset of 36 subjects (23 normal, 13 palsy patients) performing 3 motions | 88.9% classification accuracy | Reproducibility under different experimental conditions and repeatability of measurements over time were not evaluated |
| [19] | Facial movement pattern recognition for FP (2 classes: normal and asymmetric) | Active Shape Models plus Local Binary Patterns (ASMLBP) for feature extraction and SVM for classification | Private dataset of 570 images of 57 subjects with 5 facial movements | Up to 93.33% recognition rate | High robustness and accuracy |
| [20] | Quantitative evaluation of FP (HB scale) | Multiresolution extension of uniform LBP and SVM for FP evaluation | Private dataset of 197 subject videos with 5 facial movements | ~94% classification accuracy | Sensitive to out-of-plane facial movements and significant natural bilateral asymmetry |
| [21] | Facial landmark tracking and feedback for FP assessment (HB scale) | Active Appearance Models (AAMs) for facial expression synthesis | Private dataset of frontal images of neutral and smile expressions from 5 healthy subjects | 87% accuracy | Preliminary results to demonstrate a proof of concept |
| [22] | FP assessment | ANN | Private dataset of 43 videos from 14 subjects | 1.6% average MSE | Pilot study; general results follow the opinions of experts |
| [23] | Facial asymmetry measurement | Measurement of a 3D asymmetry index | Three-dimensional dynamic scans from the Hi4D-ADSIP database (stroke) | - | Extraction of 3D feature points; potential for detecting facial dysfunctions |
| [24] | FP classification of real-time facial animation units (seven FP grades) | Ensemble learning SVM classifier | Private dataset of 375 records from 13 patients and 1650 records from 50 control subjects | 96.8% accuracy, 88.9% sensitivity, 99% specificity | Data augmentation for imbalanced dataset issues |
| [25] | FP quantification | Combination of landmark and intensity HOG-based features and a CNN model for classification | Private dataset of 125 images of left facial weakness, 126 images of right facial weakness, and 186 images of normal subjects | Up to 94.5% accuracy | The combination of landmark and HOG intensity features performed best compared to either landmark or intensity features separately |
| [26] | FP classification (three classes) | HOG features and a voting classifier | Private dataset of 37 videos of left weakness, 38 of right weakness, and 60 of normal subjects | 92.9% accuracy, 93.6% precision, 92.8% recall, 94.2% specificity | Comparison with other methods revealed the reliability of HOG features |
| [27] | Facial metric calculation of face-side symmetry | Facial landmark features with cascade regression and SVM | Stroke faces dataset of 1024 images and 1081 images of healthy faces | 76.87% accuracy | Machine learning problem-specific models can lead to improved performance |
| [28] | FP assessment (HB scale) | Laser speckle contrast imaging and NN classifiers | Private dataset of 80 FP patients | 97.14% accuracy | Outperforms state-of-the-art systems and other classifiers |
| [29] | FP classification (three classes) | Regional handcrafted features and four classifiers (MLP, SVM, k-NN, MNLR) | YouTube Facial Palsy (YFP) database | Up to 95.58% correct classification | Severity is rated higher in the eye and mouth regions |
| [31] | Face symmetry analysis (symmetrical/asymmetrical) | Unified multi-task CNN | AFLW database to fine-tune the model and extended Cohn–Kanade (CK+) to learn face symmetry (18,786 images in total) | - | Lack of a fully annotated training set; need for labeling or a synthesized training set |
| [32] | FP classification (five grades) | CNN (VGG-16) | Dataset from online sources augmented to 2000 images | 92.6% accuracy, 92.91% precision, 93.14% sensitivity, 93% F1 score | Deep features combined with data augmentation can lead to robust classification |
| [5] | FP classification | FCN | AFLFP dataset | Normalized mean error (NME): 11.5% mean, 2.3% standard deviation | Comparative results indicate that deep learning methods are, overall, better than machine learning methods |
| [33] | Quantitative analysis of FP | Deep Hierarchical Network | YouTube Facial Palsy (YFP) database | 5.83% NME | Line segment learning contributes important deep features that improve the accuracy of facial landmark and palsy region detection |
| [34] | Quantitative analysis of FP | Hierarchical Detection Network | YouTube Facial Palsy (YFP) database | Up to 93% precision and 88% recall | Efficient for video-to-description diagnosis |
| [35] | Unilateral peripheral FP assessment (HB scale) | Deep CNN | Private dataset of 720 labeled images of four facial expressions | 91.25% classification accuracy | Fine-tuned deep CNNs can learn specific representations from biomedical images |
| [36] | FP grading | Fully 3D CNN | Private FP dataset of 696 sequences with 17 subjects | 82% classification accuracy | Very competent at learning spatio-temporal features |
| [37] | AR system for FP estimation | Light-Weight Facial Activation Unit model (LW-FAU) | Private dataset from 20 subjects | - | Lack of FP benchmark models and datasets |
| [38] | FP assessment (six classes) | FNPARCELM-CCNN method | YouTube Facial Palsy (YFP) database | 85.5% accuracy | Semi-supervised methods can distinguish different degrees of FP, even with little labeled data |
| [39] | FP detection and classification | Deep feature extraction with SqueezeNet and ECOC-SVM classifier | YouTube Facial Palsy (YFP) database | 99.34% accuracy | Improvement in FP detection from a small dataset |
| [40] | Part segmentation | PointNet++ and PointCNN | CT images of 33 subjects | 99.19% accuracy, 89.09% IoU | Geometric deep learning can be efficient |
| [41] | FP asymmetry analysis | Proposed deep architecture | YouTube Facial Palsy (YFP) database | 93.8% IoU | Poor with bearded faces due to a lack of such training images |

Useful conclusions can be drawn from the information in Table 1. The lack of publicly available datasets designated for palsy detection and evaluation is evident: most research teams develop their own private sets to test their algorithms. The most widely used public dataset among the referenced works is the YFP dataset; however, it is a limited video dataset. The videos are converted into image sequences, yet mild dysfunctions are not easily visible in a single image, so a sequence of frames needs to be examined to draw conclusions. Moreover, the dataset is labeled, but facial landmark points are not annotated. From Table 1, it can also be observed that deep learning methods lead to better performance than machine learning methods relying on hand-crafted features.

References

  1. Van Veen, M.M.; ten Hoope, B.W.T.; Bruins, T.E.; Stewart, R.E.; Werker, P.M.N.; Dijkstra, P.U. Therapists’ Perceptions and Attitudes in Facial Palsy Rehabilitation Therapy: A Mixed Methods Study. Physiother. Theory Pract. 2022, 38, 2062–2072.
  2. Banita, B.; Tanwar, P. A Tour Toward the Development of Various Techniques for Paralysis Detection Using Image Processing. In Lecture Notes in Computational Vision and Biomechanics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 187–214.
  3. Hotton, M.; Huggons, E.; Hamlet, C.; Shore, D.; Johnson, D.; Norris, J.H.; Kilcoyne, S.; Dalton, L. The Psychosocial Impact of Facial Palsy: A Systematic Review. Br. J. Health Psychol. 2020, 25, 695–727.
  4. McKernon, S.; House, A.D.; Balmer, C. Facial Palsy: Aetiology, Diagnosis and Management. Dent. Update 2019, 46, 565–572.
  5. Xia, Y.; Nduka, C.; Yap Kannan, R.; Pescarini, E.; Enrique Berner, J.; Yu, H. AFLFP: A Database With Annotated Facial Landmarks for Facial Palsy. IEEE Trans. Comput. Soc. Syst. 2023, 10, 1975–1985.
  6. Guo, Z.; Dan, G.; Xiang, J.; Wang, J.; Yang, W.; Ding, H.; Deussen, O.; Zhou, Y. An Unobtrusive Computerized Assessment Framework for Unilateral Peripheral Facial Paralysis. IEEE J. Biomed. Health Inform. 2018, 22, 835–841.
  7. Demeco, A.; Marotta, N.; Moggio, L.; Pino, I.; Marinaro, C.; Barletta, M.; Petraroli, A.; Palumbo, A.; Ammendolia, A. Quantitative Analysis of Movements in Facial Nerve Palsy with Surface Electromyography and Kinematic Analysis. J. Electromyogr. Kinesiol. 2021, 56, 102485.
  8. Baude, M.; Hutin, E.; Gracies, J.-M. A Bidimensional System of Facial Movement Analysis Conception and Reliability in Adults. Biomed. Res. Int. 2015, 2015, 1–8.
  9. Petrides, G.; Clark, J.R.; Low, H.; Lovell, N.; Eviston, T.J. Three-Dimensional Scanners for Soft-Tissue Facial Assessment in Clinical Practice. J. Plast. Reconstr. Aesthetic Surg. 2021, 74, 605–614.
  10. Azuma, T.; Fuchigami, T.; Nakamura, K.; Kondo, E.; Sato, G.; Kitamura, Y.; Takeda, N. New Method to Evaluate Sequelae of Static Facial Asymmetry in Patients with Facial Palsy Using Three-Dimensional Scanning Analysis. Auris Nasus Larynx 2022, 49, 755–761.
  11. Amsalam, A.S.; Al-Naji, A.; Yahya Daeef, A.; Chahl, J. Computer Vision System for Facial Palsy Detection. J. Tech. 2023, 5, 44–51.
  12. Lou, J.; Yu, H.; Wang, F.-Y. A Review on Automated Facial Nerve Function Assessment from Visual Face Capture. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 488–497.
  13. Boochoon, K.; Mottaghi, A.; Aziz, A.; Pepper, J.-P. Deep Learning for the Assessment of Facial Nerve Palsy: Opportunities and Challenges. Facial Plast. Surg. 2023, 39, 508–511.
  14. Meintjes, E.M.; Douglas, T.S.; Martinez, F.; Vaughan, C.L.; Adams, L.P.; Stekhoven, A.; Viljoen, D. A Stereo-Photogrammetric Method to Measure the Facial Dysmorphology of Children in the Diagnosis of Fetal Alcohol Syndrome. Med. Eng. Phys. 2002, 24, 683–689.
  15. Wachtman, G.S.; Cohn, J.F.; VanSwearingen, J.M.; Manders, E.K. Automated Tracking of Facial Features in Patients with Facial Neuromuscular Dysfunction. Plast. Reconstr. Surg. 2001, 107, 1124–1133.
  16. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in Health and Medicine. Nat. Med. 2022, 28, 31–38.
  17. Wen, Z.; Huang, H. The Potential for Artificial Intelligence in Healthcare. J. Commer. Biotechnol. 2023, 27, 217–224.
  18. Kim, H.; Kim, S.; Kim, Y.; Park, K. A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy. Sensors 2015, 15, 26756–26768.
  19. Wang, T.; Dong, J.; Sun, X.; Zhang, S.; Wang, S. Automatic Recognition of Facial Movement for Paralyzed Face. Biomed. Mater. Eng. 2014, 24, 2751–2760.
  20. He, S.; Soraghan, J.J.; O’Reilly, B.F.; Xing, D. Quantitative Analysis of Facial Paralysis Using Local Binary Patterns in Biomedical Videos. IEEE Trans. Biomed. Eng. 2009, 56, 1864–1870.
  21. Delannoy, J.R.; Ward, T.E. A Preliminary Investigation into the Use of Machine Vision Techniques for Automating Facial Paralysis Rehabilitation Therapy. In Proceedings of the IET Irish Signals and Systems Conference (ISSC 2010), Cork, Ireland, 23–24 June 2010; pp. 228–232.
  22. McGrenary, S.; O’Reilly, B.F.; Soraghan, J.J. Objective Grading of Facial Paralysis Using Artificial Intelligence Analysis of Video Data. In Proceedings of the 18th IEEE Symposium on Computer-Based Medical Systems (CBMS’05), Dublin, Ireland, 23–24 June 2005; pp. 587–592.
  23. Quan, W.; Matuszewski, B.J.; Shark, L.-K. Facial Asymmetry Analysis Based on 3-D Dynamic Scans. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 2676–2681.
  24. Gaber, A.; Taher, M.F.; Wahed, M.A.; Shalaby, N.M.; Gaber, S. Classification of Facial Paralysis Based on Machine Learning Techniques. Biomed. Eng. Online 2022, 21, 65.
  25. Zhuang, Y.; McDonald, M.; Uribe, O.; Yin, X.; Parikh, D.; Southerland, A.M.; Rohde, G.K. Facial Weakness Analysis and Quantification of Static Images. IEEE J. Biomed. Health Inform. 2020, 24, 2260–2267.
  26. Zhuang, Y.; Uribe, O.; McDonald, M.; Yin, X.; Parikh, D.; Southerland, A.; Rohde, G. F-DIT-V: An Automated Video Classification Tool for Facial Weakness Detection. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4.
  27. Arora, A.; Sinha, A.; Bhansali, K.; Goel, R.; Sharma, I.; Jayal, A. SVM and Logistic Regression for Facial Palsy Detection Utilizing Facial Landmark Features. In Proceedings of the 2022 Fourteenth International Conference on Contemporary Computing, Noida, India, 4–6 August 2022; ACM: New York, NY, USA; pp. 43–48.
  28. Jiang, C.; Wu, J.; Zhong, W.; Wei, M.; Tong, J.; Yu, H.; Wang, L. Automatic Facial Paralysis Assessment via Computational Image Analysis. J. Healthc. Eng. 2020, 2020, 1–10.
  29. Parra-Dominguez, G.S.; Garcia-Capulin, C.H.; Sanchez-Yanez, R.E. Automatic Facial Palsy Diagnosis as a Classification Problem Using Regional Information Extracted from a Photograph. Diagnostics 2022, 12, 1528.
  30. Zhang, Y.; Gorriz, J.M.; Dong, Z. Deep Learning in Medical Image Analysis. J. Imaging 2021, 7, 74.
  31. Storey, G.; Jiang, R. Face Symmetry Analysis Using a Unified Multi-Task CNN for Medical Applications. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 451–463.
  32. Sajid, M.; Shafique, T.; Baig, M.; Riaz, I.; Amin, S.; Manzoor, S. Automatic Grading of Palsy Using Asymmetrical Facial Features: A Study Complemented by New Solutions. Symmetry 2018, 10, 242.
  33. Hsu, G.-S.J.; Kang, J.-H.; Huang, W.-F. Deep Hierarchical Network With Line Segment Learning for Quantitative Analysis of Facial Palsy. IEEE Access 2019, 7, 4833–4842.
  34. Hsu, G.-S.J.; Huang, W.-F.; Kang, J.-H. Hierarchical Network for Facial Palsy Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 693–6936.
  35. Guo, Z.; Shen, M.; Duan, L.; Zhou, Y.; Xiang, J.; Ding, H.; Chen, S.; Deussen, O.; Dan, G. Deep Assessment Process: Objective Assessment Process for Unilateral Peripheral Facial Paralysis via Deep Convolutional Neural Network. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 135–138.
  36. Storey, G.; Jiang, R.; Keogh, S.; Bouridane, A.; Li, C.-T. 3DPalsyNet: A Facial Palsy Grading and Motion Recognition Framework Using Fully 3D Convolutional Neural Networks. IEEE Access 2019, 7, 121655–121664.
  37. Barrios Dell’Olio, G.; Sra, M. FaraPy: An Augmented Reality Feedback System for Facial Paralysis Using Action Unit Intensity Estimation. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology, Online, 10–14 October 2021; ACM: New York, NY, USA, 2021; pp. 1027–1038.
  38. Tan, X.; Yang, J.; Cao, J. Facial Nerve Paralysis Assessment Based on Regularized Correntropy Criterion SSELM vc and Cascade CNN. In Proceedings of the 2021 55th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 31 October–3 November 2021; pp. 1043–1047.
  39. Abayomi-Alli, O.O.; Damaševičius, R.; Maskeliūnas, R.; Misra, S. Few-Shot Learning with a Novel Voronoi Tessellation-Based Image Augmentation Method for Facial Palsy Detection. Electronics 2021, 10, 978.
  40. Nguyen, D.-P.; Berg, P.; Debbabi, B.; Nguyen, T.-N.; Tran, V.-D.; Nguyen, H.-Q.; Dakpé, S.; Dao, T.-T. Automatic Part Segmentation of Facial Anatomies Using Geometric Deep Learning toward a Computer-Aided Facial Rehabilitation. Eng. Appl. Artif. Intell. 2023, 119, 105832.
  41. Umirzakova, S.; Ahmad, S.; Mardieva, S.; Muksimova, S.; Whangbo, T.K. Deep Learning-Driven Diagnosis: A Multi-Task Approach for Segmenting Stroke and Bell’s Palsy. Pattern Recognit. 2023, 144, 109866.
  42. Bensoussan, A.; Li, Y.; Nguyen, D.P.C.; Tran, M.-B.; Yam, S.C.P.; Zhou, X. Machine Learning and Control Theory. In Handbook of Numerical Analysis; Elsevier: Amsterdam, The Netherlands, 2022; pp. 531–558. ISBN 9780323850599.
  43. Sukumaran, A.; Abraham, A. Automated Detection and Classification of Meningioma Tumor from MR Images Using Sea Lion Optimization and Deep Learning Models. Axioms 2021, 11, 15.
  44. Berner, J.; Grohs, P.; Kutyniok, G.; Petersen, P. The Modern Mathematics of Deep Learning. In Mathematical Aspects of Deep Learning; Cambridge University Press: Cambridge, UK, 2022; pp. 1–111.
  45. Dutta, N.; Subramaniam, U.; Padmanaban, S. Mathematical Models of Classification Algorithm of Machine Learning. In Proceedings of the International Meeting on Advanced Technologies in Energy and Electrical Engineering, Tunis, Tunisia, 28–29 November 2019; Hamad bin Khalifa University Press (HBKU Press): Doha, Qatar, 2020.
  46. Pedrammehr, S.; Hejazian, M.; Chalak Qazani, M.R.; Parvaz, H.; Pakzad, S.; Ettefagh, M.M.; Suhail, A.H. Machine Learning-Based Modelling and Meta-Heuristic-Based Optimization of Specific Tool Wear and Surface Roughness in the Milling Process. Axioms 2022, 11, 430.