Arabic Sign Language Recognition

Sign language recognition, an essential interface between the hearing and deaf-mute communities, faces challenges with high false positive rates and computational costs, even with the use of advanced deep learning techniques.

Keywords: Arabic sign language recognition; convolutional neural network; computer vision

1. Introduction

About 70 million people worldwide use sign language (SL), and a machine translation system could significantly improve communication between people who use SL and those who do not. SL is a form of nonverbal communication that conveys information through facial expressions and lip, hand, and eye gestures. It accounts for a significant portion of daily communication for those who are hard of hearing or deaf [1]. According to the World Health Organization, 5% of the world's population has a hearing impairment. Although this percentage may seem small, it corresponds to over 460 million people affected by hearing loss, 34 million of whom are children. More than 900 million people are predicted to have hearing loss by 2050 [2], and 1.1 billion young people are at risk of becoming deaf due to noise exposure and other causes. Worldwide, hearing loss costs an estimated USD 750 billion [2]. Depending on its degree, hearing loss is classified into four types: mild, moderate, severe, and profound. People with severe or profound hearing loss find it challenging to communicate because they cannot follow what others are saying. Poor communication can significantly affect a deaf person's mental health, leaving them feeling lonely, isolated, and unhappy. The SL used by the deaf community is gesture-based: deaf people communicate by producing SL gestures. Interaction between a hearing person and a deaf person is complicated by the fact that most hearing people do not understand these signs. Just as spoken languages differ from one another, there are about 200 SLs around the world.
SL is the form of communication the deaf use to exchange information. It conveys messages through gestures or signs, distinctive physical motions that are not part of spoken languages, using finger and hand movements, head nods, shoulder movements, and facial expressions. Research in this area would therefore allow hearing and deaf people to talk to each other. When a hard-of-hearing or deaf person wants to communicate something, they use gestures to do so. Every sign indicates a distinct word, letter, or feeling, and just as a sequence of words forms a sentence in spoken languages, a combination of signs forms a sentence in SL. SL thus has a syntax and sentence structure like a fully developed natural language. In SL, facial features and lip, eye, and hand gestures deliver meaning, and SL is an important part of daily interaction for deaf people [3]. Nevertheless, it is extremely challenging for computers to interpret hand signs because the size, shape, and posture of the hands or fingers vary from image to image. SL recognition can be tackled from two angles: sensor-based and image-based. The main benefit of image-based systems is that users do not need to wear sophisticated devices; however, considerable work is required during the preprocessing step. The value of language for development cannot be overstated: it not only serves as a channel for interpersonal communication but also helps people internalize social norms and gain control over their communication. Unlike hearing children, deaf children do not acquire the same terms to describe themselves from the language spoken around them.
Recent SL research falls into two categories: vision-based strategies and contact-based approaches. Contact-based techniques rely on interaction between users and sensing equipment: an instrumented glove is typically used to collect data on finger movement, bending, motion, and the angle of the produced sign via EMG signals, inertial measurements, or electromagnetic measurements. The vision-based approach instead uses information from video streams captured with a camera as input to the platform, and it is further split into appearance-based and 3D-model-based categories [4]. Most 3D-model-based methods start by creating a 2D projection from the position and joint angles of the hand in 3D space.
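To make that projection step concrete, here is a toy sketch assuming a simple pinhole camera model; the joint coordinates, focal length, and principal point below are hypothetical placeholders, not values from any of the cited systems.

```python
# Toy sketch: mapping 3D hand joint positions to 2D image coordinates
# with a pinhole camera model (all numbers are hypothetical).
import numpy as np

def project_points(joints_3d, focal=500.0, cx=320.0, cy=240.0):
    # Perspective projection: u = f * X / Z + cx, v = f * Y / Z + cy.
    X, Y, Z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    return np.stack([focal * X / Z + cx, focal * Y / Z + cy], axis=1)

# Five hypothetical fingertip positions (metres) in camera coordinates.
joints = np.array([[0.02,  0.01, 0.40],
                   [0.04,  0.00, 0.42],
                   [0.06, -0.01, 0.43],
                   [0.05, -0.03, 0.41],
                   [0.03, -0.04, 0.40]])
print(project_points(joints))   # 2D pixel coordinates, one row per joint
```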
Appearance-based methods identify a gesture using attributes extracted directly from the image, whereas recognition relies on matching those attributes [5]. Few hearing people can understand or use SL, even though many hearing-impaired people have mastered it. This hampers the communication of people with hearing impairments and fosters a feeling of alienation between them and hearing society. Technology that continuously translates SL into written language, and vice versa, can close this gap. Numerous paradigm shifts across scientific and technological domains have now helped academics propose and implement SL recognition systems. Instead of written or spoken language, signers communicate with one another using hand signals, a gesture-based method. Arabic is the official language of about 25 nations, although in some of them only a small portion of the populace speaks it [6]; estimates of the overall number of countries range from 22 to 26. Arabic gestures are not standardized, although the language itself is. Jordanians, Libyans, Moroccans, Egyptians, Palestinians, and Iraqis, among others, speak Arabic, but every nation has a distinctive dialect. In other words, Arabic has two registers: formal and informal. Arabic SL is nonetheless consistent across these countries because they all use the same alphabet, a feature that is quite helpful for research projects. Arabs who are deaf form a close-knit community. Interaction between the deaf and hearing populations is low, with most interactions occurring within deaf communities, among deaf relatives, and occasionally with playmates and professionals. Arabic SL has been recognized with a continuous recognition program based on the K-nearest neighbor (KNN) classifier and an Arabic SL feature-extraction method. However, Tubaiz's method has the fundamental flaw of requiring users to wear instrumented gloves to record data on their movements, which can be very distracting [7]. An instrumented glove was developed to aid in building a system for recognizing Arabic SL, and Arabic SL can be recognized continuously using hidden Markov models (HMMs) and temporal features [8]. One study aimed to transcribe Arabic SL for use on portable devices. Previous work has covered a wide range of SLs, but few studies have focused on Arabic SL. Using an HMM classifier, researchers achieved 93% accuracy on a sample of 300 words; KNN and Bayesian classifiers [9] gave results similar to HMMs. Other research introduced a network-matching technique for continuous Arabic SL sentence recognition: the model uses decision trees, breaks actions down into stationary postures, and translates multi-word sentences with at least 63% accuracy using a polynomial-time method. However, the above approaches, mostly built on conventional weight initialization, suffer from vanishing gradients and high computational complexity, and achieved only limited accuracy for Arabic SL recognition.
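As a minimal sketch of the KNN classification step mentioned above (not Tubaiz's actual pipeline), the following assumes gesture feature vectors have already been extracted; the feature dimensionality, class count, and synthetic data are placeholders.

```python
# Minimal sketch: KNN classification of pre-extracted gesture features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical dataset: one 64-dimensional feature vector per gesture sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))      # extracted gesture features (placeholder)
y = rng.integers(0, 28, size=500)   # 28 hypothetical Arabic sign classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Each test sample is assigned the majority label of its 5 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```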

2. Arabic Sign Language Recognition

Arabic is the fourth most spoken language in the world. In 2001, the Arab Federation of the Deaf officially declared Arabic SL the main language for people with speech and hearing disabilities in Arab countries. Arabic SL is still in its infancy, even though Arabic is one of the most widely spoken languages in the world. The most general issue that Arabic SL users face is "diglossia": each country has regional dialects that are spoken rather than written, and the different spoken dialects have given rise to different Arabic SLs. These are as numerous as the Arab states, but all share the same alphabet and a small number of vocabulary words. Arabic is one of the more sophisticated and appealing languages and is spoken by over 380 million people around the world as a first official language; the intellectual and semantic homogeneity of Arabic is tenable [8]. The ability of neural networks (NNs) to facilitate the recognition of Arabic SL hand gestures was the main concern of the authors of [10], whose main aim was to illustrate the recognition of different types of static and dynamic signs by detecting actual human movements. First, they showed how different architectures, including fully and partially recurrent systems, can combine a feed-forward neural network with a recurrent neural network [10]. Their experimental evaluations show a 95% precision rate for the detection of static gestures, which inspired them to further explore the proposed structure. The automated detection of Arabic SL alphabets using an image-based approach was highlighted in [11]. In particular, several visual descriptors were investigated to create an accurate recognizer for the Arabic SL alphabet; the extracted visual features were fed into a One-Versus-All SVM. The results demonstrated that the Histogram of Oriented Gradients (HOG) obtained promising performance when combined with the One-Versus-All SVM. The Kinect sensor was used in [12] to develop a real-time automatic Arabic SL recognition system based on the Dynamic Time Warping matching approach; the system uses neither power nor data gloves. A few other studies covered different aspects of human-computer interaction [13]. Studies from 2011 that identify Arabic SL with an accuracy of up to 82.22% [14][15] show that hidden Markov models are at the center of alternative methods for SL recognition; further work using HMMs can be found in [16]. A five-stage approach for an Arabic SL translator with an efficiency of 91.3% was published at the same time in [16], focusing on background subtraction and translation, scale, or partial invariance. Almasre and Al-Nuaim recognized 28 Arabic SL gestures using specialized sensors such as the Microsoft Kinect or Leap Motion controllers. More recent studies have focused on understanding Arabic SL [17]. An imaging method that included the height, width, and intensity of the elements was used to create several CNNs and provide feedback; the frame rate of the depth footage determines how much data the CNN interprets and how large the system is, with faster refresh rates producing more detail and lower frame rates producing less depth. Furthermore, a new method for Arabic SL recognition was proposed in 2019 using a CNN to identify the 28 letters of the Arabic alphabet and the digits from 0 to 10 [18].
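A hedged sketch of the HOG-plus-one-versus-all-SVM pipeline summarized above follows; the HOG parameters, image size, and synthetic stand-in data are illustrative assumptions, not the configuration reported in [11].

```python
# Sketch: HOG descriptors fed into a one-vs-rest linear SVM.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def hog_features(gray_image):
    # One HOG descriptor per 64x64 grayscale hand image.
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Hypothetical stand-in data: 280 synthetic 64x64 grayscale images,
# 10 per class for 28 alphabet classes.
rng = np.random.default_rng(1)
images = rng.random((280, 64, 64))
labels = np.repeat(np.arange(28), 10)

X = np.array([hog_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=1, stratify=labels)

# LinearSVC trains one binary classifier per class (one-vs-rest by default).
clf = LinearSVC(max_iter=5000)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```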
The proposed seven-layer architecture was trained in numerous training and testing permutations, with the highest reported accuracy being 90.02 percent when 80 percent of the images were used for training. Finally, the researchers showed why the proposed paradigm was better than alternative strategies. Among deep neural networks, CNNs have primarily been utilized in computer-vision-based methods that take captured images of a motion and extract its important features to identify it. Multimedia systems, emotion recognition, image segmentation and semantic decomposition, super resolution, and other problems have all been addressed with this technology [19][20][21]. Oyedotun et al. employed a CNN and a Stacked Denoising Autoencoder to identify 24 American SL gestures [22]. Pigou et al. [23], on the other hand, recommended the use of a CNN for Italian SL recognition [24]. Another study [25] presents a remarkable CNN model that uses hand gestures to automatically recognize numbers and communicates the precise results in Bangla; this model is used in the current investigation [25]. In related work [24][25], a CRNN module is used to estimate hand posture. Moreover, [26] recommends using a deep learning model to recognize the distinguishing features in large datasets and applying transfer learning to data collected from different individuals. In [27], a Bernoulli heat map based on a deep CNN was constructed to estimate head pose. Another study used separable 3D convolutional networks to recognize dynamic hand gestures for identifying hand signals. The most recent study on this topic, [28], addresses wearable hand gesture recognition using flexible strain sensors, and the authors of [29] presented recent work on hand gesture recognition using a deformable CNN. Another recent effort proposed for HCI uses fingertip detection for hand gesture recognition [30], a lightweight neural network has been used to recognize hand gestures [31], and learning geometric features [32] is another way to interpret hand gestures. In [33], the K-nearest neighbor method provides a reliable recognition system for Arabic SL based on statistical feature extraction and classification; recognizing the Arabic character language is another option. Tubaiz's method has a number of weaknesses, the biggest being that users have to wear instrumented gloves to capture the subtleties of a particular gesture, which is often very uncomfortable for the user. In [34], the researcher proposed using an instrumented glove to create a system for recognizing Arabic SL, utilizing hidden Markov models and spatiotemporal features for continuous recognition. The authors of [35] advocated a multiscale network for hand pose estimation. Similarly, ref. [36] investigated text translation from Arabic SL for use on portable devices, and [37] reports that Arabic SL can be automatically identified using both sensor and image approaches. In [38], the authors provide a programmable framework for Arabic SL hand gesture recognition using two depth cameras and two Microsoft Kinect-based machine learning algorithms. The CNN approach now being used to study Arabic SL is likewise unmatched [39].
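For illustration, here is a minimal Keras sketch of a small seven-layer CNN of the kind described in [18], trained with an 80/20 split; the input size, layer widths, class count (28 letters plus 11 digits), and stand-in data are assumptions, not the exact architecture of the cited work.

```python
# Sketch: a seven-layer CNN classifier trained on an 80/20 split.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 39                                      # 28 letters + digits 0-10
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),          # 1: convolution
    layers.MaxPooling2D(),                            # 2: pooling
    layers.Conv2D(64, 3, activation="relu"),          # 3: convolution
    layers.MaxPooling2D(),                            # 4: pooling
    layers.Flatten(),                                 # 5: flatten
    layers.Dense(128, activation="relu"),             # 6: fully connected
    layers.Dense(num_classes, activation="softmax"),  # 7: output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical stand-in data; a real run would load labeled sign images.
rng = np.random.default_rng(2)
X = rng.random((390, 64, 64, 1)).astype("float32")
y = rng.integers(0, num_classes, size=390)

# 80% of the images for training, 20% held out, mirroring the split above.
model.fit(X, y, validation_split=0.2, epochs=3, batch_size=32)
```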
In addition to the above approaches, a region-based CNN (R-CNN) has also been explored for sign language recognition. For instance, various pre-trained backbone models were evaluated with an R-CNN that works intelligently across numerous background scenes [40]. Next, for low-resolution images, the authors of [41] used a CNN to extract more prominent features, followed by an SVM machine learning classifier with a triplet loss. Similarly, to overcome the issue of computational complexity, ref. [42] proposed a lightweight model for real-time sign language recognition, which obtained strong performance on testing data. However, these models show good classification accuracy on small datasets but limited performance on large-scale datasets. To tackle such issues, a deep CNN was developed, trained on massive numbers of samples, which improved recognition scores [43]. This work is further enhanced in [44], where a novel deep CNN architecture is designed that obtained a high semantic recognition score. In addition, to address the class-balancing problem, the authors of [45] developed a DL model followed by a synthetic minority oversampling technique, which yielded better performance at the cost of a large number of parameters and a large model size. It is therefore highly desirable to develop an image-based intelligent system for Arabic hand sign recognition using a novel CNN architecture.
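To illustrate the lightweight transfer-learning idea surveyed above, the following is a hedged sketch of a pre-trained backbone with a new classification head; the choice of MobileNetV2, the input size, and the class count are assumptions for illustration, not the specific configuration of [42].

```python
# Sketch: transfer learning with a frozen ImageNet backbone and a new head.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 32                     # hypothetical number of Arabic sign classes

# Frozen lightweight backbone; only the new head is trained.
backbone = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
backbone.trainable = False

model = keras.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # the frozen backbone keeps the trainable parameter count small
```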

References

  1. Shukla, P.; Garg, A.; Sharma, K.; Mittal, A. A DTW and fourier descriptor based approach for Indian sign language recognition. In Proceedings of the 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 21–24 December 2015; pp. 113–118.
  2. Kushalnagar, R. Deafness and hearing loss. In Web Accessibility; Springer: Berlin/Heidelberg, Germany, 2019; pp. 35–47.
  3. Almasre, M.A.; Al-Nuaim, H. A comparison of Arabic sign language dynamic gesture recognition models. Heliyon 2020, 6, e03554.
  4. Elons, A.S.; Abull-Ela, M.; Tolba, M.F. A proposed PCNN features quality optimization technique for pose-invariant 3D Arabic sign language recognition. Appl. Soft Comput. 2013, 13, 1646–1660.
  5. Tharwat, A.; Gaber, T.; Hassanien, A.E.; Shahin, M.K.; Refaat, B. SIFT-based Arabic sign language recognition system. In Proceedings of the Afro-European Conference for Industrial Advancement; Springer: Berlin/Heidelberg, Germany, 2015; pp. 359–370.
  6. Shahin, A.; Almotairi, S. Automated Arabic sign language recognition system based on deep transfer learning. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2019, 19, 144–152.
  7. Bencherif, M.A.; Algabri, M.; Mekhtiche, M.A.; Faisal, M.; Alsulaiman, M.; Mathkour, H.; Al-Hammadi, M.; Ghaleb, H. Arabic sign language recognition system using 2D hands and body skeleton data. IEEE Access 2021, 9, 59612–59627.
  8. Mustafa, M. A study on Arabic sign language recognition for differently abled using advanced machine learning classifiers. J. Ambient Intell. Humaniz. Comput. 2021, 12, 4101–4115.
  9. Hisham, B.; Hamouda, A. Supervised learning classifiers for Arabic gestures recognition using Kinect V2. SN Appl. Sci. 2019, 1, 1–21.
  10. Maraqa, M.; Al-Zboun, F.; Dhyabat, M.; Zitar, R.A. Recognition of Arabic sign language (ArSL) using recurrent neural networks. J. Intell. Learn. Syst. Appl. 2012, 4, 41–52.
  11. Alzohairi, R.; Alghonaim, R.; Alshehri, W.; Aloqeely, S. Image based Arabic sign language recognition system. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 185–194.
  12. Duwairi, R.M.; Halloush, Z.A. Automatic recognition of Arabic alphabets sign language using deep learning. Int. J. Electr. Comput. Eng. 2022, 12, 2996–3004.
  13. Hu, Z.; Zhang, Y.; Xing, Y.; Zhao, Y.; Cao, D.; Lv, C. Toward human-centered automated driving: A novel spatial-temporal vision transformer-enabled head tracker. IEEE Veh. Technol. Mag. 2022, 17, 57–64.
  14. Youssif, A.A.; Aboutabl, A.E.; Ali, H.H. Arabic sign language (ArSL) recognition system using HMM. Int. J. Adv. Comput. Sci. Appl. 2011, 2, 45–51.
  15. Abdo, M.; Hamdy, A.; Salem, S.; Saad, E.M. Arabic alphabet and numbers sign language recognition. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 209–214.
  16. El-Bendary, N.; Zawbaa, H.M.; Daoud, M.S.; Hassanien, A.E.; Nakamatsu, K. Arslat: Arabic sign language alphabets translator. In Proceedings of the 2010 International Conference on Computer Information Systems and Industrial Management Applications (CISIM), Krakow, Poland, 8–10 October 2010; pp. 590–595.
  17. ElBadawy, M.; Elons, A.; Shedeed, H.A.; Tolba, M. Arabic sign language recognition with 3d convolutional neural networks. In Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017; pp. 66–71.
  18. Hayani, S.; Benaddy, M.; El Meslouhi, O.; Kardouchi, M. Arab sign language recognition with convolutional neural networks. In Proceedings of the 2019 International Conference of Computer Science and Renewable Energies (ICCSRE), Agadir, Morocco, 22–24 July 2019; pp. 1–4.
  19. Kayalibay, B.; Jensen, G.; van der Smagt, P. CNN-based segmentation of medical imaging data. arXiv 2017, arXiv:1701.03056.
  20. Hossain, M.S.; Muhammad, G. Emotion recognition using secure edge and cloud computing. Inf. Sci. 2019, 504, 589–601.
  21. Kamruzzaman, M. E-crime management system for future smart city. In Data Processing Techniques and Applications for Cyber-Physical Systems (DPTA 2019); Springer: Berlin/Heidelberg, Germany, 2020; pp. 261–271.
  22. Oyedotun, O.K.; Khashman, A. Deep learning in vision-based static hand gesture recognition. Neural Comput. Appl. 2017, 28, 3941–3951.
  23. Pigou, L.; Dieleman, S.; Kindermans, P.-J.; Schrauwen, B. Sign Language Recognition Using Convolutional Neural Networks; Springer International Publishing: Berlin/Heidelberg, Germany, 2015.
  24. Hu, Z.; Hu, Y.; Liu, J.; Wu, B.; Han, D.; Kurfess, T. A CRNN module for hand pose estimation. Neurocomputing 2019, 333, 157–168.
  25. Ahmed, S.; Islam, M.; Hassan, J.; Ahmed, M.U.; Ferdosi, B.J.; Saha, S.; Shopon, M. Hand sign to Bangla speech: A deep learning in vision based system for recognizing hand sign digits and generating Bangla speech. arXiv 2019, arXiv:1901.05613.
  26. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771.
  27. Hu, Z.; Xing, Y.; Lv, C.; Hang, P.; Liu, J. Deep convolutional neural network-based Bernoulli heatmap for head pose estimation. Neurocomputing 2021, 436, 198–209.
  28. Si, Y.; Chen, S.; Li, M.; Li, S.; Pei, Y.; Guo, X. Flexible strain sensors for wearable hand gesture recognition: From devices to systems. Adv. Intell. Syst. 2022, 4, 2100046.
  29. Wang, H.; Zhang, Y.; Liu, C.; Liu, H. sEMG based hand gesture recognition with deformable convolutional network. Int. J. Mach. Learn. Cybern. 2022, 13, 1729–1738.
  30. Alam, M.M.; Islam, M.T.; Rahman, S.M. Unified learning approach for egocentric hand gesture recognition and fingertip detection. Pattern Recognit. 2022, 121, 108200.
  31. Chenyi, Y.; Yuqing, H.; Junyuan, Z.; Guorong, L. Lightweight neural network hand gesture recognition method for embedded platforms. High Power Laser Particle Beams 2022, 34, 031023.
  32. Joudaki, S.; Rehman, A. Dynamic hand gesture recognition of sign language using geometric features learning. Int. J. Comput. Vis. Robot. 2022, 12, 1–16.
  33. Tubaiz, N.; Shanableh, T.; Assaleh, K. Glove-based continuous Arabic sign language recognition in user-dependent mode. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 526–533.
  34. Al-Buraiky, S.M. Arabic Sign Language Recognition Using an Instrumented Glove; King Fahd University of Petroleum and Minerals: Dhahran, Saudi Arabia, 2004.
  35. Hu, Z.; Hu, Y.; Wu, B.; Liu, J.; Han, D.; Kurfess, T. Hand pose estimation with multi-scale network. Appl. Intell. 2018, 48, 2501–2515.
  36. Halawani, S.M. Arabic sign language translation system on mobile devices. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2008, 8, 251–256.
  37. Mohandes, M.; Deriche, M.; Liu, J. Image-based and sensor-based approaches to Arabic sign language recognition. IEEE Trans. Hum.-Mach. Syst. 2014, 44, 551–557.
  38. Almasre, M.A.; Al-Nuaim, H. Comparison of four SVM classifiers used with depth sensors to recognize Arabic sign language words. Computers 2017, 6, 20.
  39. Hu, Z.; Lv, C.; Hang, P.; Huang, C.; Xing, Y. Data-driven estimation of driver attention using calibration-free eye gaze and scene features. IEEE Trans. Ind. Electron. 2021, 69, 1800–1808.
  40. Alawwad, R.A.; Bchir, O.; Ismail, M.M.B. Arabic Sign Language Recognition using Faster R-CNN. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 692–700.
  41. Althagafi, A.; Alsubait, G.T.; Alqurash, T. ASLR: Arabic sign language recognition using convolutional neural networks. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2020, 20, 124–129.
  42. Zakariah, M.; Alotaibi, Y.A.; Koundal, D.; Guo, Y.; Mamun Elahi, M. Sign Language Recognition for Arabic Alphabets Using Transfer Learning Technique. Comput. Intell. Neurosci. 2022, 2022, 4567989.
  43. Latif, G.; Mohammad, N.; AlKhalaf, R.; AlKhalaf, R.; Alghazo, J.; Khan, M. An automatic Arabic sign language recognition system based on deep CNN: An assistive system for the deaf and hard of hearing. Int. J. Comput. Digit. Syst. 2020, 9, 715–724.
  44. Elsayed, E.K.; Fathy, D.R. Sign language semantic translation system using ontology and deep learning. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 141–147.
  45. Alani, A.A.; Cosma, G. ArSL-CNN: A convolutional neural network for Arabic sign language gesture recognition. Indones. J. Electr. Eng. Comput. Sci. 2021, 22, 1096–1107.