Sign Language Recognition Models

A hybrid sign language recognition system combines vision-sensor-based models with models based on combinations of different sensors.

Keywords: sign language; sign language review; American Sign Language; sensor recognition; vision recognition models

1. Sensor-Based Models

An American Sign Language recognition system with flex and motion sensors was implemented in ref. [1]. This system produced better results than a cyber glove embedded with a Kinect camera. The authors of ref. [2] proposed a model that recognizes signs more accurately using different machine learning algorithms; response time and accuracy were improved through better sensors and an algorithm efficient for the specified task. The traditional approach to sign language recognition was changed by embedding sensor networks for recognition purposes: a model combining an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for sign language recognition produced better results than the Hidden Markov Model (HMM) [3]. A smart data glove named “E-Voice” was developed for alphabetical gesture recognition of ASL. The prototype was designed using flex and accelerometer sensors, and the data glove recognized sign gestures with improved accuracy and efficiency [4]. Because signing varies between subjects, a new recognition method was developed using surface electromyography (sEMG): sensors attached to the subject’s right forearm collected data for training and testing, and a Support Vector Machine (SVM) classifier produced good results for real-time gesture recognition [5]. Another model combined three types of sensors, namely flux, motion, and pressure sensors, to determine the impact of SVM on sign language recognition [6]. Daily activities were also recognized using a smart data glove, with two basic techniques for gesture interpretation applied through data glove interaction [7].
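To make the glove-plus-SVM pipeline described above concrete, the following minimal sketch (not taken from any cited system) trains an SVM on synthetic five-value flex-sensor readings; the feature layout, class count, and data are illustrative assumptions only.

```python
# Minimal sketch: SVM classification of static sign gestures from five
# flex-sensor readings, with synthetic data standing in for glove samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class = 5, 40                      # five hypothetical letters
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, 5))
               for c in range(n_classes)])          # 5 flex values per sample
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```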
A more advanced approach incorporating a deep learning model was developed in ref. [8]. Static gestures were converted into American Sign Language alphabets using the Hough Transform; this technique was applied on 15 samples per alphabet and obtained 92% accuracy. A combination of a motion tracker and a smart sensor was used for sign language recognition, and an Artificial Neural Network approach was implemented to obtain the desired results [9]. The Artificial Neural Network translates American Sign Language into alphabets. A sensory glove called a smart hand glove with a motion detection mechanism was used for data collection, and the transmitter-receiver network processed the input data to control home appliances and generate recognition results [10]. Hand-body language-based data analysis was performed using a machine learning approach: a sensor glove embedded with 10 sensors was used to capture 22 different kinds of postures, and KNN, SVM, and PNN algorithms were applied to perform sign language posture recognition [11]. The authors in ref. [12] presented a device named the “Electronic Speaking Glove”. This device was developed using a combination of flex sensors whose data were fed into a low-power, high-performance, and fast 8-bit ATmega32L AVR microcontroller. This reduced instruction set computer (RISC)-based AVR microcontroller used a “template matching” algorithm for sign language recognition.
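The “template matching” used by several of these microcontroller prototypes amounts to comparing the live sensor vector with stored per-gesture templates and picking the closest one. The sketch below illustrates this idea; the template values, labels, and distance threshold are hypothetical.

```python
# Minimal sketch of nearest-template gesture matching over flex-sensor readings.
import numpy as np

templates = {                       # hypothetical calibrated flex-sensor templates
    "A": np.array([820, 810, 805, 800, 790]),
    "B": np.array([300, 310, 305, 295, 300]),
    "C": np.array([560, 555, 565, 570, 560]),
}

def match_gesture(reading, templates, max_distance=150.0):
    """Return the label of the nearest template, or None if nothing is close."""
    best_label, best_dist = None, float("inf")
    for label, tpl in templates.items():
        dist = np.linalg.norm(reading - tpl)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None

print(match_gesture(np.array([815, 805, 800, 802, 788]), templates))  # -> "A"
```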
Another sign language recognition system was developed in ref. [13]. This virtual image interaction-based sensor system succeeded in recognizing six letters, i.e., “A, B, C, D, F, and K,” and a digit, “8”. The prototype was developed using two flex sensors attached to the index and middle fingers of the right hand, and the sensor data were transmitted to an Arduino Uno microcontroller. In this experimental setup, MATLAB-based fuzzy logic was implemented. A sign gesture recognition prototype was developed by the authors in ref. [14]. This prototype consisted of a smart glove embedded with five flex sensors. The acquired data were sent to an Arduino Nano microcontroller, and a template matching algorithm was used for gesture recognition. This experimental setup succeeded in recognizing four sign language gestures. A Liquid Crystal Display (LCD) and a speaker were used to display and speak the recognized gestures, respectively. The authors in ref. [15] developed an experimental model for standard American Sign Language (ASL) alphabet recognition. A programmable intelligent computer (PIC) was used to store the predefined ASL alphabet data. This experimental setup was also based on template matching. For data acquisition, a smart prototype based on three flex sensors along with an analog-to-digital converter (ADC), an LCD, and an INA126 instrumentation amplifier was utilized. In this setup, a PIC16F877A microcontroller was used, which succeeded in recognizing 70% of ASL alphabet gestures.
A more advanced, intelligent, and smart system was implemented by the authors in ref. [16]. Their experimental setup included eight contact sensors and nine flex sensors, placed on the inner and outer sides of the fingers. The outer five sensors were deployed to detect bending changes, and the inner four sensors were attached to measure hand orientation. This system was also based on a template matching algorithm in which a unique standard ASL dataset of 36 gestures was matched with the input data. The ATmega328P microcontroller was used for matching and succeeded in producing 83.1% and 94.5% accuracy for alphabets and digits, respectively.
In ref. [17], the authors made a standard sign language recognition prototype. This prototype consisted of five flex sensors embedded with an ATmega328-based microcontroller. The sensor-acquired data were compared with an already stored ASL dataset, and this experimental setup succeeded in producing 80% overall accuracy. To facilitate the deaf-mute community, another important contribution was presented in ref. [18]. The authors developed a prototype that translates sign language into the corresponding alphabet or digit. Eleven resistive sensors were used to measure the bending of each finger, and two separate sensors were utilized to detect wrist bending. The developed smart device worked well on static gestures and produced good results in alphabet recognition with an overall accuracy of 90%. A Vietnamese Sign Language recognition system was developed using six accelerometer sensors [19]. This prototype was designed for 23 local Vietnamese gestures, including two extra postures of “space” and “punctuation”. Gesture classification was performed using fuzzy logic-based algorithms. This device, named “AcceleGlove”, succeeded in producing 92% overall accuracy in Vietnamese Sign Language recognition. A posture recognition system was developed in ref. [20]. This innovative glove-based system was assembled using flex sensors, force sensors, and an MPU6050 accelerometer. Five flex and five force sensors were attached to the fingers, and the accelerometer was attached to the wrist. The experimental setup comprised data from flex, force, gyroscope, accelerometer, and IMU sensors, all connected to an Arduino Mega for data acquisition. Based on the data classification, the output was displayed on an LCD, and the system achieved 96% accuracy on average. A real-time sign-to-speech translator was developed to convert static signs into speech by using the “Sign Language Trainer & Voice Converter” software [21]. Data were acquired using five flex sensors and a 3-axis accelerometer connected to an Arduino-based microcontroller.
A hand-sign recognition system was developed with the help of LabVIEW software using an Arduino board. The user interacted with the environment through the LabVIEW-provided Graphical User Interface (GUI), and recognition was performed with the help of the Arduino [22]. Another smart-sensor-embedded glove was developed in ref. [23]. A combination of flex sensors, contact sensors, and a 3-axis ADXL335 accelerometer was used for recognition. Flex sensors were attached to each finger of the hand, and contact sensors were placed between consecutive fingers. Sign language gestures were captured using the described smart glove, and these analog sign data were transferred to an Arduino Mega for recognition. Classified sign gestures were displayed on a 16 × 2 Liquid Crystal Display (LCD) and converted into speech through a speaker. A smart glove based on five flex sensors and an accelerometer was designed for sign language recognition [24]. This data glove transferred analog signal data to a microcontroller for recognition, and the output was presented as a pre-recorded voice matched to the recognized sign. A sign language recognition system based on numeric data was developed in ref. [25]. The authors used a combination of Hall sensors and a 3-axis accelerometer. The smart data glove was composed of four Hall sensors attached to the fingers only. Hand orientation was measured with the accelerometer, and finger bending was detected using the Hall sensors. The analog sensor data were passed to MATLAB code to recognize the signs made with the smart glove. This experimental setup was tested only on the digits 0 to 9 and succeeded in producing an accuracy of 96% in digit recognition.
Beyond traditional sensor-based smart data gloves, another advanced approach was utilized in ref. [26]. A smart glove for gesture recognition was created using LTE-4602 light-emitting diodes (LEDs), photodiodes, and polymeric fibers; this combination was used only to detect finger bending. Hand motion was also captured using a 3-axis accelerometer and a gyroscope. This portable smart glove succeeded in recognizing hand gestures for sign language translation. Regional sign language systems from different origins were also developed. An Urdu Sign Language-based system was developed in ref. [27]. The smart data glove was composed of five flex sensors, one attached to each finger, and a 3-axis accelerometer placed at the palm. A 16 × 2 liquid crystal display (LCD) was used to display the output. The authors created a dataset of 300 × 8 dimensions, and the Principal Component Analysis (PCA) technique was applied in MATLAB to detect static sign gestures, achieving 90% accuracy. Another regional sign language recognition system was presented in ref. [28]. The authors made a prototype to convert Malaysian Sign Language. A smart data glove made of tilt sensors and a 3-axis accelerometer was developed for recognition. A microcontroller and a Bluetooth module were also included in this prototype to classify detected signs and transmit them to a smartphone. The microcontroller operated on template matching and succeeded in recognizing a few Malaysian Sign Language gestures, with overall system accuracy ranging from 78.33% to 95%. Flex sensor and accelerometer-based smart gloves can also perform alphanumeric classification: using such a prototype, 26 alphabets and ten digits were recognized with a template-matching algorithm [29]. Five flex sensors, one attached to each finger, produced an analog signal of a performed gesture, which was transferred to an Arduino Uno microcontroller; with an accelerometer added for hand motion detection, the authors obtained an eight-valued data vector for each sign gesture. In ref. [30], the authors developed a two-glove model containing ten flex sensors, one attached to each finger of both hands, and a 9-degree-of-freedom (DoF) accelerometer for motion detection. The two-glove system was tested on phonetic letters, including a, b, c, ch, and zh. With the help of a matching algorithm, the authors performed static sign recognition with approximately 88% accuracy.
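As a hedged sketch of the PCA step reported for the Urdu Sign Language glove of ref. [27], the snippet below projects 8-dimensional glove samples onto a few principal components before a simple nearest-neighbour match; the 300 × 8 data matrix, the assumed 5-flex-plus-3-accelerometer layout, and the labels are synthetic stand-ins, not the published data.

```python
# Sketch: dimensionality reduction of glove samples with PCA, followed by KNN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))             # stand-in for the 300 x 8 glove dataset
y = rng.integers(0, 10, size=300)         # hypothetical static-sign labels

pca = PCA(n_components=3).fit(X)
X_reduced = pca.transform(X)              # lower-dimensional gesture features
knn = KNeighborsClassifier(n_neighbors=3).fit(X_reduced, y)
print("explained variance ratio:", pca.explained_variance_ratio_)
```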
An American Sign Language classification and recognition system based on probabilistic segmentation was presented in ref. [31]. This system was divided into two main modules. The first module performed segmentation based on a Bayesian Network (BN), and the data obtained during this stage were used for training. The second module performed classification using a combination of a Support Vector Machine (SVM) classifier with a multilayer Conditional Random Field (CRF). This system succeeded in producing 89% accuracy on average. The authors in ref. [32] brought some innovation to existing sign language recognition systems by combining data obtained from sensor gloves with data obtained using hand-tracking systems. The well-known Dempster-Shafer theory was applied to the fused data for evidence combination. This fused system achieved 96.2% recognition accuracy on 100 two-handed ArSL signs. Hand motion and tilt sensor-based sign data were collected using a CyberGlove [33]. The classification of 27 hand shapes based on Signing Exact English (SEE) was performed using Fisher’s linear discriminant embedded in a linear decision tree. Vector Quantization Principal Component Analysis (VQPCA) was used as a classification tool for sign language recognition. This system obtained 96.1% overall accuracy.
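For readers unfamiliar with the evidence-fusion step mentioned for ref. [32], the sketch below implements Dempster’s rule of combination for two mass functions; the two sources (“glove” and “tracker”) and their mass assignments are purely illustrative assumptions.

```python
# Sketch of Dempster's rule of combination over frozenset focal elements.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; keys are frozensets of candidate signs."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # conflicting evidence mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from a sensor glove and a hand tracker about a sign.
glove   = {frozenset({"A"}): 0.7, frozenset({"A", "B"}): 0.3}
tracker = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.2, frozenset({"A", "B"}): 0.2}
print(combine(glove, tracker))
```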
An Arabic Sign Language recognition deep learning framework focusing on a signer-independent isolated model was discussed in ref. [34]. The main focus of this research was on regional sign gestures: among a wide variety of regional domains, the authors focused only on Arabic sign gestures and implemented deep learning-based approaches to achieve the desired results. Hand gesture recognition for posture classification was implemented in ref. [35]. The prototype was purely based on real-time hand gesture recognition; an IMU-based data glove embedded with different sensors was used to achieve the desired results. Another advancement in the field of sensor-based gesture recognition was implemented in ref. [36]. A dual leap motion controller (LMC)-based prototype was designed to capture and identify data, and Gaussian Mixture Model and Linear Discriminant-based approaches were implemented to obtain results. A case study based on regional data was implemented in ref. [37]: the authors focused on Pakistani Sign Language models using Multiple Kernel Learning-based approaches. Work on signal-based sensor values for the classification of real-time gestures was presented in ref. [38]. The authors worked on wrist-worn, real-time hand and surface gesture recognition, with sEMG and IMU sensors embedded to obtain the desired posture values. An armband EMG sensor-based approach was implemented by the authors in ref. [39]. The main focus was to classify finger language by utilizing ensemble-based artificial neural network learning; the sensor values helped the ANN classify gestures accurately. A sign language interpretation smart glove was designed by the authors in ref. [40], where a sensor-fused data glove was used to recognize and classify SL postures. Another novel approach to capturing sign gestures was discussed in ref. [41]. The developers of the smart data glove named it SkinGest, as it grips the skin completely without detaching; filmy stretchable strain sensors were used for capturing gesture and posture data. Leap motion-based identification of sign gestures was implemented with the help of a modified LSTM model in ref. [42], where continuous sign gestures were classified using the LSTM model to obtain the desired results. Another novel approach, working with keyframe sampling, was implemented in ref. [43]; the authors also used skeletal features in an attention-based sign language recognition network. A Turkish Sign Language dataset was processed using baseline methods in ref. [44], where large-scale multimodal data were classified based on regional postures to achieve good recognition results. Similarly, the authors of ref. [45] used Multimodal Spatiotemporal Networks to classify sign language postures. The development of a low-cost model for translating sign gestures was targeted in ref. [46], with the main focus on a smart wearable device at a very reasonable price.
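As a rough sketch of the sequence-classification idea behind the modified LSTM model of ref. [42], the PyTorch snippet below classifies fixed-length clips of hand-joint features using the final LSTM hidden state; the feature size, clip length, class count, and architecture are assumptions, not the authors’ model.

```python
# Sketch: LSTM-based classification of gesture clips (batch, time, features).
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, n_features=30, hidden=64, n_classes=20):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)        # use the final hidden state
        return self.head(h_n[-1])

model = GestureLSTM()
frames = torch.randn(8, 40, 30)           # 8 clips, 40 frames, 30 joint features
print(model(frames).shape)                # -> torch.Size([8, 20])
```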

2. Vision-Based Models

The authors in ref. [47] developed a model using vision-sensor-based techniques to extract temporal and spatial features from video sequences. A CNN was then applied to the extracted features to identify the performed activity, and an American Sign Language dataset was used for feature extraction and activity recognition. An Intel RealSense camera was used to translate American Sign Language (ASL) gestures into text: the proposed system included an Intel RealSense camera-based setup and applied SVM and Neural Network (NN) algorithms to recognize sign language [48]. Due to the large set of classes, inter-class complexity increased to a large extent. This issue was resolved using a Convolutional Neural Network (CNN)-based approach: depth images were captured using a high-definition Kinect camera and processed with a CNN to obtain alphabets [49]. Real-time sign language was interpreted using a CNN for real-time sign detection; rather than the traditional approach of using predefined or outdated image datasets, the authors of ref. [50] implemented a real-time data analysis mechanism. In vision-sensor-based recognition, 20 alphabets along with numbers were recognized using a Neural Network-based Hough Transform [51]. For this image dataset, a specific threshold value of 0.25 was used in the Canny edge detector to achieve efficiency, and the system succeeded in achieving 92.3% accuracy. Fifty samples of alphabets and numbers were recognized by an Indian Sign Language system using a vision-sensor-based technique [52]. A Support Vector Machine (SVM)-based classifier with B-spline approximation was used, which achieved 91% accuracy on average. A hybrid pulse-coupled neural network (PCNN) embedded with a nondeterministic finite automaton (NFA) algorithm was used to identify image-based gesture data [53]. This prototype achieved 96% accuracy based on a best-match criterion.
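A minimal sketch of the kind of CNN used on depth images of sign alphabets in works such as refs. [47][49] is given below; the architecture, 64 × 64 input size, and 26-class output are assumptions for illustration, not any published network.

```python
# Sketch: small CNN for classifying single-channel depth images of sign alphabets.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, n_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                        # x: (batch, 1, 64, 64) depth images
        return self.classifier(self.features(x).flatten(1))

model = SignCNN()
print(model(torch.randn(4, 1, 64, 64)).shape)    # -> torch.Size([4, 26])
```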
In ref. [54], principal component analysis (PCA) together with local binary patterns (LBP) was used to extract features for a Hidden Markov Model (HMM), reaching 99.97% accuracy. In ref. [55], hand segmentation based on skin color detection was used: a skin blob tracking system performed hand identification and tracking, and the system achieved 97% accuracy on 30 recognized words. In ref. [56], Arabic Sign Language recognition was performed using various transformation techniques such as the Log-Gabor, Fourier, and Hartley transforms. The Hartley transform with Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classifiers produced 98% accuracy. Combined orientation histogram and statistical (COHST) features along with wavelet features were used in ref. [57]. These techniques succeeded in recognizing static signs for the numbers zero to nine in ASL; the neural network produced efficient results based on the COHST, wavelet, and histogram feature values, with 98.17% accuracy. Static gesture recognition of alphabets was performed using a neural network-based wavelet transform, achieving 94.06% accuracy in recognizing Persian Sign Language [58]. Manual signs were detected using the fingers, palm, and place of articulation: the equipment arranged for manual signing extracted data from a video sequence and matched it with 2D images of standard American Sign Language alphabets, and the proposed setup resulted in accurate sign detection of alphabets [59].
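The skin-colour hand segmentation step described for ref. [55] can be sketched with OpenCV as below; the HSV thresholds are rough assumptions that would need tuning to real lighting and skin tones.

```python
# Sketch: skin-colour segmentation of a BGR frame in HSV space with OpenCV.
import cv2
import numpy as np

def segment_hand(bgr_frame):
    """Return a binary mask of likely skin pixels in a BGR frame."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 40, 60]), np.array([25, 180, 255])  # assumed bounds
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

frame = np.zeros((120, 160, 3), dtype=np.uint8)   # placeholder frame
print(segment_hand(frame).shape)                   # -> (120, 160)
```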
Deep learning-based SLR models also build on vision-based approaches. The authors in ref. [60] surveyed current deep learning-based techniques, trends, and open issues in deep models for SLR. Keeping standard American Sign Language models in mind, the authors in ref. [61] focused on the development of a depth-image-based, user-independent approach; their main work was based on PCANet features extracted from depth images. Another edge computing-based thermal image detection system was presented by the authors in ref. [62], who worked on a digit-based sign recognition model using deep learning approaches. Different computer vision-based techniques have been applied to SLR tasks. A camera sensor-based prototype was used by the authors in ref. [63] to correctly identify sign postures. A convolutional neural network-based approach using video sequences was implemented in ref. [64]: a three-dimensional attention-based model was designed for a very large vocabulary to acquire data from video sequences and classify them using a 3D-CNN model. Similarly, the same authors implemented a boundary-adaptive encoder with an attention-based method on a regional Chinese Sign Language dataset in ref. [65]. A novel keyframe-centered clip-based approach was implemented on the same Chinese Sign Language dataset in ref. [66]: the regional Chinese sign dataset was classified using video sequences in the form of images, and this novel vision-based approach produced competitive results in CSL recognition. Another fingerspelling-based smart model was developed by the authors in ref. [67]. They developed an Indian Sign Language quiz tool based on camera-oriented posture classification, with identification built on automatic sign language recognition models using a vision-based approach. A vision sensor-based three-dimensional approach was implemented by the authors in ref. [68]: three-dimensional sign language representations were classified efficiently with the help of spatial 3-D relational geometric features (S3DRGF). Another vision-based technique focusing on color-mapping-based classification and recognition was developed by the authors in ref. [69]. A CNN-based deep learning model was trained on three-dimensional sign data; color-texture-coded joint angular displacement maps were classified efficiently with the help of a 3-D deep CNN model. Another advanced approach based on three-dimensional data manipulation for sign gestures was implemented in ref. [70]: the authors focused on the classification and recognition of angular velocity maps with a deep ResNet model, deploying a Connived Feature ResNet specifically to classify and recognize 3-D sign data. Another novel video sequence-based approach to classify sign gestures was implemented in ref. [71], where a BiLSTM-based three-dimensional residual neural network was used to capture video sequences and identify the posture data. A novel deep learning-based hand gesture recognition approach was implemented by the authors in ref. [72]: image-based fine postures were captured and recognized using a deep learning-based architecture. A virtual sign channel for visual communication was developed in ref. [73], where the authors’ main focus was to create a virtual communication channel between deaf-mute and hearing individuals. Another three-dimensional data representation for Indian Sign Language was developed in ref. [74]: the authors used an adaptive kernel-based motionlet-matching technique to classify gesture data. A video sequence and text embedding-based continuous sign language model was implemented in ref. [75], where joint latent space data were processed using cross-modal alignment within a continuous sign language recognition model.

3. Non-Commercial Data Glove-Based Models

In non-commercial systems, most authors work on detecting finger bending for any sign made, so a large variety of single sensors or combinations of different sensors have been used to detect finger bending. The authors in ref. [76] developed a non-commercial prototype for sign language recognition that was completely based on the finger bending method. To detect finger bending, ten flex sensors were used, with a pair of sensors attached to two joints of each finger. To deal with the analog flex data, an MPU-506A multiplexer was used, and the selected data coming from the multiplexer were sent to an MSP430G2231 microcontroller. A Bluetooth module was used to transmit the data to a smartphone, where the captured data were compared with a sign language database and the matched result was converted into speech using a text-to-speech converter. The authors in ref. [77] also developed a non-commercial sign language recognition prototype. This prototype included five ADXL335 accelerometer sensors connected to an ATmega2560 microcontroller system. Based on axis orientation, the sign was identified and transmitted via a Bluetooth module to a mobile application for text-to-speech conversion. In ref. [78], a prototype was developed to help handicapped people by converting finger orientation into actions. For this purpose, five optical fiber sensors were used to collect finger bending data. These 8-bit analog data were used to train multilayered neural networks (NN) in MATLAB, and six hand gesture-based operations were performed using the backpropagation training algorithm; for validation, a tenfold validation method was applied to 800 sample records. Similarly, for sign language recognition, the authors of ref. [79] made a non-commercial prototype based on five flex sensors. An MSP430F149 microcontroller was used to classify the incoming analog data, which were compared with standard American Sign Language (ASL) data, and the output was displayed on a Liquid Crystal Display (LCD). Using text-to-speech methodology, the recognized letter was converted into speech through a good-quality speaker. The authors in ref. [80] developed the Sign-to-Letter (S2L) system. This system contained six flex sensors together with discrete-valued components and a microcontroller: five flex sensors were attached to the five fingers of the hand, and one sensor was attached to the wrist of the same hand. This combination of bending sensors succeeded in converting signs into letters, with the output determined via programmed “IF-ELSE” conditions. A combination of Light Emitting Diode-Light Dependent Resistor (LED-LDR) sensors was used in ref. [81]. An MSP430G2553 microcontroller was used to detect signs made by finger bending: the analog data were converted into digital values and ASCII codes corresponding to 10 sign language alphabets. The converted data were transmitted using a ZigBee wireless module, and the recognized ASCII code was displayed on a computer screen and also converted into speech.
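The simple “IF-ELSE” logic described for the Sign-to-Letter system of ref. [80] can be pictured as a few threshold rules over the flex readings, as in the sketch below; the sensor order, thresholds, and the two example letters are illustrative assumptions rather than the published rules.

```python
# Sketch: threshold-based mapping of five flex readings to a letter.
def flex_to_letter(flex):
    """flex: five readings (thumb..little); higher value = more bend (assumed)."""
    bent = [v > 600 for v in flex]          # hypothetical bend threshold
    if not any(bent):
        return "B"                          # open hand, all fingers straight
    if all(bent[1:]) and not bent[0]:
        return "A"                          # fist with thumb extended
    return "?"                              # gesture not in this tiny rule set

print(flex_to_letter([300, 700, 720, 690, 710]))   # -> "A"
print(flex_to_letter([250, 280, 260, 270, 240]))   # -> "B"
```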
Another fingerspelling system was developed in ref. [82]. This prototype included four flex sensors and an accelerometer sensor, and the main idea in its design was to translate hand-made signs into their corresponding American Sign Language (ASL) alphabets. For data acquisition, four deaf-mute individuals participated, and the system succeeded in recognizing 21 gestures out of 26. A hand gesture recognition system was developed by measuring inertial and attitude values [83]. For data acquisition, six Inertial Measurement Units (IMUs) were used in this prototype: one IMU was attached to each finger and one to the wrist. This experimental setup collected hand gesture data from the accelerometer, gyroscope, and magnetometer readings. These values were refined using a Kalman filter and processed through the Linear Discriminant Analysis (LDA) algorithm. Overall, 85% accuracy was achieved with this prototype in hand gesture recognition.
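A hedged sketch of the LDA classification stage described for the IMU glove of ref. [83] is shown below; the synthetic 18-dimensional feature vectors stand in for the Kalman-filtered accelerometer, gyroscope, and magnetometer values, and the four gesture classes are arbitrary.

```python
# Sketch: Linear Discriminant Analysis on fused IMU features (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 18)) for c in range(4)])
y = np.repeat(np.arange(4), 50)            # four hypothetical hand gestures

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```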

4. Commercial Data Glove-Based Models

Besides following the traditional way of making cheap data gloves, some authors have used a commercial data glove named “CyberGlove”. This commercial glove was specifically designed for deaf-mute people, and many affected communities and research centers have used it for communication and research purposes. The CyberGlove is manufactured with 22 sensors embedded in the glove: the basic structure contains four sensors attached between the fingers and three sensors attached to each finger, and palm and wrist bending measurement sensors are also included. This smart, thin-layer, elastic fiber-based sensor glove had an approximate cost of $40,000 per pair. Using this CyberGlove, the authors in ref. [84] applied a combination of neural network-based algorithms to measure the accuracy and efficiency of the system. Finger orientation and hand motion projection were captured with the CyberGlove embedded with a 3D motion-tracker sensor. The analog signal data were transferred to a pair of algorithms, a word recognition network and a velocity network, which worked on 60 American Sign Language (ASL) combinations and obtained accuracies of 92% and 95%, respectively. A posture recognition system based on a 3D hand posture model was developed in ref. [85]. A Java 3D-based model helped in the classification and segmentation of real-time input posture data, which were compared with pre-recorded CyberGlove data with the help of an index tree algorithm. Another CyberGlove embedded with a 3D motion tracker named Flock of Birds was used for sign language recognition. CyberGlove data containing bend, axis, motion, and hand orientation were fed into a multilayered neural network, and the Levenberg-Marquardt backpropagation algorithm was used for segmentation and sign classification. This prototype succeeded in producing 90% accuracy in American Sign Language (ASL) recognition [86].
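As a loose illustration of the multilayer-perceptron recognizers built on CyberGlove data in refs. [84][86], the sketch below trains scikit-learn’s MLPClassifier on synthetic 22-value sensor vectors; it stands in for, and is not, the Levenberg-Marquardt-trained networks of those works.

```python
# Sketch: MLP classification of synthetic 22-sensor glove vectors into ASL words.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(25, 22)) for c in range(6)])
y = np.repeat(np.arange(6), 25)            # six hypothetical ASL words

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```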
In the sensor-based sign language recognition domain, another advancement was made by introducing a commercial data glove from Fifth Dimension Technologies, commonly known as the 5DT data glove. This 5DT commercial glove is made in two variants, one with five fiber-optic sensors and the other with fourteen. The manufacturer markets this fiber-optic smart data device as the “Ultra” series, and internationally the glove cost approximately $995. In the five-sensor data glove, one optical sensor is attached to each finger, and an additional sensor is included for hand orientation detection. In the fourteen-sensor version, two sensors are in contact with each finger, and sensors are also placed between the fingers to measure finger abduction; two-axis measurement sensors are also included to determine orientation, including the pitch and roll of the hand. These 5DT smart data gloves were used for Japanese Sign Language recognition [87]. The main idea of developing this system was to automate the learning process: a 3D model based on the 14-sensor 5DT smart data glove was built for simulating signs. This system highlights motion errors for beginners and helps them understand hand motion completely via the 3D model. To facilitate communication for deaf and mute people, another advancement was applied using the 5DT data glove with five embedded sensors. Data obtained using the Ultra glove were used for training in MATLAB. A multilayered neural network with five inputs and 26 outputs was utilized for training the sign language recognition model, and a series of NN training algorithms, including resilient, back, quick, and Manhattan propagation as well as scaled conjugate gradient, was used [88].
Another advancement in sign language recognition was seen in ref. [89]. The authors used a DG5-VHand data glove for data acquisition. The internal structure of the DG5-VHand data glove contains five flex (bending) sensors, one three-axis accelerometer, and three contact sensors. This data glove can transmit the acquired data wirelessly, and the overall system was made remotely usable by running on a battery. The DG5-VHand commercial data glove has been used for American and Arabic Sign Language recognition systems; the authors focused on Arabic Sign Language, whereas this data glove had already been used previously for American Sign Language. A single left-hand glove cost $750. A pair of DG5-VHand data gloves was used for Arabic Sign Language recognition: the two-glove model acquired data for 40 sentences, and this dataset was classified using a modified K-Nearest Neighbor (MKNN) algorithm, with the overall system producing 98.9% accuracy. A hand gesture cannot be fully recognized without knowing the hand orientation and posture; therefore, an advancement over the traditional system was achieved by fusing electromyography and inertial sensors within the system [90]. Using a combination of an accelerometer (ACC) sensor with electromyography (EMG), the authors captured multiple degrees of freedom of hand movement. This setup was used for Chinese Sign Language recognition: the EMG sensors were attached at five muscle points on the forearm, and an MMA7361 3-axis accelerometer was attached to the wrist. Multilayered Hidden Markov Model and decision tree algorithms were used for recognition, producing 72.5% accuracy.
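The nearest-neighbour idea behind the modified KNN (MKNN) classifier used with the DG5-VHand gloves in ref. [89] can be sketched as plain KNN on synthetic two-glove feature vectors, as below; the specific modification of the published MKNN is not reproduced.

```python
# Sketch: plain KNN over synthetic two-glove feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 16)) for c in range(10)])
y = np.repeat(np.arange(10), 30)           # ten hypothetical signed sentences

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```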
The same setup of an accelerometer and electromyography was used for German Sign Language. The authors used a single EMG sensor with a single ACC sensor to recognize a small database of German vocabulary. Training was performed on seven words with seventy samples per word, and K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) classifiers were used. The system achieved average accuracies of 88.75% and 99.82% in the subject-dependent case [91]. A similar hybrid approach of an accelerometer and electromyography was used for a Greek Sign Language recognition system. The experimental setup consisted of five-channel electromyography and an accelerometer sensor. The experiment was conducted on signers using intrinsic mode entropy features: experiments repeated ten times on three native signers produced the training data, and the system was trained using intrinsic mode entropy in MATLAB. The system’s overall accuracy was 93% (independent of the personal effect of the native signers involved in data collection) [92].

5. Hybrid Recognition Models

A vision-sensor-based approach was also adopted for sign language recognition: the previously used combination of electromyography with an accelerometer was replaced with a vision-sensor-based hybrid approach. In the hybrid approach, the authors used a variety of accelerometers together with vision-sensor cameras; the purpose of a hybrid system is to enhance data acquisition and accuracy. The vision-sensor-based hybrid prototype contained red, green, and blue (RGB) color cameras, depth sensors, and accelerometer-based axis and orientation sensors, and this smart hybrid combination was used for gesture identification. The experimental setup included seven IMU accelerometer sensors attached to the arm, wrist, and fingers. For data acquisition, five sign language speakers from different age groups performed forty gestures, each repeated ten times. A Parallel Hidden Markov Model (PaHMM) succeeded in producing 99.75% accuracy [93]. Another combination of an accelerometer-based glove and a camera sensor was used for American Sign Language recognition. The experimental setup contained a camera attached to a hat for detecting correctly made signs. Nine accelerometer sensors were used for gesture capture: five attached to the fingers (one per finger), two on the shoulder and arm to detect arm and shoulder movement, and two attached to the back of the palm for hand orientation measurement. This setup was tested on 665 gestures using the Hidden Markov Model (HMM) and produced a per-sign accuracy of 94% [94].
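Both hybrid systems above rely on hidden Markov models; the sketch below implements the HMM forward algorithm, the core likelihood computation such recognizers use, on a tiny hypothetical two-state model with discrete observations.

```python
# Sketch: HMM forward algorithm (log-likelihood of an observation sequence).
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """obs: sequence of discrete symbols; start/trans/emit: HMM parameters."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # propagate and weight by emission
    return np.log(alpha.sum())

start = np.array([0.6, 0.4])                           # two hidden states
trans = np.array([[0.7, 0.3], [0.4, 0.6]])             # state transitions
emit  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])   # three observation symbols
print(forward_log_likelihood([0, 1, 2, 1], start, trans, emit))
```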

6. Framework-Based Recognition Models

Most of the articles [95][96][97][98] followed a predefined framework for sign language recognition. The main objective of using the same framework was to enhance data accuracy and dataset efficiency. The authors in ref. [99] developed a sign language system and implemented it using different classification and recognition algorithms. The authors in ref. [100] created a Vietnamese Sign Language framework that worked wirelessly. A two-handed wireless smart data glove was designed and developed using bend and orientation measurements. The experimental setup included MEMS accelerometer sensors attached as in the AcceleGlove, with one additional sensor attached to the palm of each hand for orientation measurement. Wireless communication was made feasible by using a Bluetooth module connected to a cellphone. The user-generated sign was compared with a standard sign language database, and the matched result was displayed on the cellphone screen. Finally, Google text-to-speech translation was utilized to convert the recognized sign alphabet into speech. This sign language framework succeeded in producing reasonable accuracy. Similarly, the authors in ref. [101] developed an Arabic Sign Language recognition system. The main purpose of developing another framework for static sign analysis was to minimize the number of sensors on the data gloves. This experiment was simulated in the Proteus software. The two-handed glove system contained six flex sensors, four contact sensors, one gyroscope, and one accelerometer sensor on each hand.
Another algorithm-based sign language recognition framework was designed in ref. [102]. Stream segmentation-based sign descriptors and a text auto-correction algorithm were utilized, and the system also provided a software architecture of descriptors for hand gesture recognition. A sign language interpreter that converted text into speech was also designed in ref. [103]. The overall system framework contained four basic modules: the smart data glove, training algorithms for the input sign dataset, a wirelessly accessible sign application, and a sign language database for matching the input sign with the standard repository. A very simple resistor-based framework was developed and implemented in ref. [104]. The authors used ten resistors and detected finger movement only. This was a medical application used solely for finger flexion and extension, and it formed a very simple, low-cost, efficient, reliable, and low-power trigger. The resistor-based data glove was directly connected to a microcontroller, which transmitted the captured data to a computer for finger movement analysis. Another simple gesture recognition framework was developed in ref. [105]. The smart spelling data glove consisted of three bending sensors attached to three fingers, and the authors worked on only five gestures, including thumbs-up and rest. Input gesture data were fed into the microcontroller for recognition, and the analyzed gestures were combined in a row to form meaningful data before being transmitted to the receiver. A detailed review of frameworks based on Chinese Sign Language was provided in ref. [106], discussing in detail the technical approaches related specifically to regional Chinese Sign Language recognition and classification mechanisms. Another detailed review of wearable frameworks and prototypes related to sign gesture classification was given in ref. [107]. The authors covered most of the frameworks previously used in the field, providing good depth on SLR technologies and frameworks.

References

  1. Bhatnagar, V.; Magon, R.; Srivastava, R.; Thakur, M. A Cost Effective Sign Language to voice emulation system. In Proceedings of the 2015 Eighth International Conference on Contemporary Computing (IC3), Noida, India, 20–22 August 2015.
  2. Masieh, M.A. Smart Communication System for Deaf-Dumb People. In Proceedings of the International Conference on Embedded Systems, Cyber-physical Systems, and Applications (ESCS), Athens, Greece, 2017; Available online: https://www.proquest.com/openview/2747505eab9eb43cb1717f9654ca7d16/1?pq-origsite=gscholar&cbl=1976354 (accessed on 21 December 2021).
  3. Kashyap, A.S. Digital Text and Speech Synthesizer Using Smart Glove for Deaf and Dumb. Int. J. Adv. Res. Electron. Commun. Eng. (IJARECE) 2017, 6, 4.
  4. Amin, M.S.; Amin, M.T.; Latif, M.Y.; Jathol, A.A.; Ahmed, N.; Tarar, M.I.N. Alphabetical Gesture Recognition of American Sign Language using E-Voice Smart Glove. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6.
  5. Lokhande, P.; Prajapati, R.; Pansare, S. Data Gloves for Sign Language Recognition System. Int. J. Comput. Appl. 2015, 975, 8887.
  6. Iwasako, K.; Soga, M.; Taki, H. Development of finger motion skill learning support system based on data glove. Procedia Comput. Sci. Appl. 2014, 35, 1307–1314.
  7. Aditya, T.S. Meri Awaaz—Smart Glove Learning Assistant for Mute Students and teachers. Int. J. Innov. Res. Comput. Commun. Eng. 2017, 5, 6.
  8. Padmanabhan, M. Hand gesture recognition and voice conversion system for dumb people. Int. J. Sci. Eng. Res. 2014, 5, 427.
  9. Kalyani, B.S. Hand Talk gloves for Gesture Recognizing. Int. J. Eng. Sci. Manag. Res. 2015, 2, 5.
  10. Patel, H.S. Smart Hand Gloves for Disable People. Int. Res. J. Eng. Technol. (IRJET) 2018, 5, 1423–1426.
  11. Pławiak, P.; Sósnicki, T.; Niedzwiecki, M.; Tabor, Z.; Rzecki, K. Hand body language gesture recognition based on signals from specialized glove and machine learning algorithms. IEEE Trans. 2016, 12, 1104–1113.
  12. Ahmed, S.M. Electronic speaking glove for speechless patients, a tongue to a dumb. In Proceedings of the 2010 IEEE Conference on Sustainable Utilization and Development in Engineering and Technology, Petaling Jaya, Malaysia, 20–21 November 2010.
  13. Bedregal, B.; Dimuro, G. Interval fuzzy rule-based hand gesture recognition. In Proceedings of the 12th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, Duisburg, Germany, 26–29 September 2006.
  14. Tanyawiwat, N.; Thiemjarus, S. Design of an assistive communication glove using combined sensory channels. In Proceedings of the 2012 Ninth International Conference on Wearable and Implantable Body Sensor Networks (BSN), London, UK, 9–12 May 2012.
  15. Arif, A.; Rizvi, S.; Jawaid, I.; Waleed, M.; Shakeel, M. Techno-Talk: An American Sign Language (ASL) Translator. In Proceedings of the 2016 International Conference on Control, Decision and InformationTechnologies (CoDIT), Saint Julian, Malta, 6–8 April 2016.
  16. Chouhan, T.; Panse, A.; Voona, A.; Sameer, S. Smart glove with gesture recognition ability for the hearing and speech impaired. In Proceedings of the 2014 IEEE Global Humanitarian Technology Conference-South Asia Satellite (GHTC-SAS), Rivandrum, India, 26–27 September 2014.
  17. Vijayalakshmi, P.; Aarthi, M. Sign language to speech conversion. In Proceedings of the 2016 International Conference on Recent Trends in Information Technology, Chennai, India, 8–9 April 2016.
  18. Praveen, N.; Karanth, N.; Megha, M. Sign language interpreter using a smart glove. In Proceedings of the 2014 International Conference on Advances in Electronics, Computers and Communications (ICAECC), Bangalore, India, 10–11 October 2014.
  19. Phi, L.; Nguyen, H.; Bui, T.; Vu, T. A glove-based gesture recognition system for Vietnamese sign Languages. In Proceedings of the 2015 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2015.
  20. Preetham, C.; Ramakrishnan, G.; Kumar, S.; Tamse, A.; Krishnapura, N. Hand talk-implementation of a gesture recognizing glove. In Proceedings of the 2013 Texas Instruments India Educators’ Conference (TIIEC), Bangalore, India, 4–6 April 2013.
  21. Mehta, A.; Solanki, K.; Rathod, T. Automatic Translate Real-Time Voice to Sign Language Conversion for Deaf and Dumb People. Int. J. Eng. Res. Technol. (IJERT) 2021, 9, 174–177.
  22. Sharma, D.; Verma, D.; Khetarpal, P. LabVIEW based Sign Language Trainer cum portable display unit for the speech impaired. In Proceedings of the 2015 Annual IEEE India Conference (INDICON), New Delhi, India, 17–20 December 2015.
  23. Lavanya, M.S. Hand Gesture Recognition and Voice Conversion System Using Sign Language Transcription System. IJECT 2014, 5, 4.
  24. Vutinuntakasame, S.; Jaijongrak, V.; Thiemjarus, S. An assistive body sensor network glove for speech-and hearing-impaired disabilities. In Proceedings of the 2011 International Conference on Body Sensor Networks (BSN), Dallas, TX, USA, 23–25 May 2011.
  25. Borghetti, M.; Sardini, E.; Serpelloni, M. Sensorized glove for measuring hand finger flexion for rehabilitation purposes. IEEE Trans. 2013, 62, 3308–3314.
  26. Adnan, N.; Wan, K.; Shahriman, A.; Zaaba, S.; Basah, S.; Razlan, Z.; Hazry, D.; Ayob, M.; Rudzuan, M.; Aziz, A. Measurement of the flexible bending force of the index and middle fingers for virtual interaction. Procedia Eng. 2012, 41, 388–394.
  27. Alvi, A.; Azhar, M.; Usman, M.; Mumtaz, S.; Rafiq, S.; Rehman, R.; Ahmed, I. Pakistan sign language recognition using statistical template matching. Int. J. Inf. Technol. 2004, 1, 1–12.
  28. Shukor, A.; Miskon, M.; Jamaluddin, M.; bin Ali, F.; Asyraf, M.; bin Bahar, M. A new data glove approach for Malaysian sign language detection. Procedia Comput. Sci. Appl. 2015, 76, 60–67.
  29. Elmahgiubi, M.; Ennajar, M.; Drawil, N.; Elbuni, M. Sign language translator and gesture recognition. In Proceedings of the 2015 Global Summit on Computer & Information Technology (GSCIT), Sousse, Tunisia, 11–13 June 2015.
  30. Sekar, H.; Rajashekar, R.; Srinivasan, G.; Suresh, P.; Vijayaraghavan, V. Low-cost intelligent static gesture recognition system. In Proceedings of the 2016 Annual IEEE Systems Conference, Orlando, FL, USA, 18–21 April 2016.
  31. Mehdi, S.; Khan, Y. Sign language recognition using sensor gloves. In Proceedings of the 9th International Conference on Neural Information Processing, Singapore, 8–22 November 2002.
  32. Ibrahim, N.; Selim, M.; Zayed, H. An Automatic Arabic Sign Language Recognition System (ArSLRS). J. King Saud Univ. Comput. Inf. Sci. 2018, 30, 470–477.
  33. López-Noriega, J.; Emiliano, J.; Fernández-Valladares, M.I.; Uc-cetina, V. Glove-based sign language recognition solution to assist communication for deaf users. In Proceedings of the 2014 11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Ciudad del Carmen, Mexico, 29 September–3 October 2014.
  34. Aly, S.; Aly, W. DeepArSLR: A Novel Signer-Independent Deep Learning Framework for Isolated Arabic Sign Language Gestures Recognition. IEEE Access 2020, 8, 83199–83212.
  35. Chaithanya Kumar, M.; Leo, F.P.P.; Verma, K.D.; Kasireddy, S.; Scholl, P.M.; Kempfle, J.; Van Laerhoven, K. Real-Time and Embedded Detection of Hand Gestures with an IMU-Based Glove. Informatics 2018, 5, 28.
  36. Deriche, M.; Aliyu, S.O.; Mohandes, M. An Intelligent Arabic Sign Language Recognition System Using a Pair of LMCs with GMM Based Classification. IEEE Sens. J. 2019, 19, 8067–8078.
  37. Farman Shah, M.S. Sign Language Recognition Using Multiple Kernel Learning: A Case Study of Pakistan Sign Language. IEEE Access 2021, 9, 67548–67558.
  38. Jiang, S.; Lv, B.; Guo, W.; Zhang, C.; Wang, H.; Sheng, X.; Shull, P.B. Feasibility of Wrist-Worn, Real-Time Hand, and Surface Gesture Recognition via sEMG and IMU Sensing. IEEE Trans. Ind. Inform. 2018, 14, 3376–3385.
  39. Kim, S.; Kim, J.; Ahn, S.; Kim, Y. Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors. Technol Health Care 2018, 26, 249–258.
  40. Lee, B.G.; Lee, S.M. Smart Wearable Hand Device for Sign Language Interpretation System with Sensors Fusion. IEEE Sens. J. 2018, 18, 1224–1232.
  41. Li, L.; Jiang, S.; Shull, P.B.; Gu, G. SkinGest: Artificial skin for gesture recognition via filmy stretchable strain sensors. Adv. Robot. 2018, 32, 1112–1121.
  42. Mittal, A.; Kumar, P.; Roy, P.P.; Balasubramanian, R.; Chaudhuri, B.B. A Modified LSTM Model for Continuous Sign Language Recognition Using Leap Motion. IEEE Sens. J. 2019, 19, 7056–7063.
  43. Pan, W.; Zhang, X.; Ye, Z. Attention-Based Sign Language Recognition Network Utilizing Keyframe Sampling and Skeletal Features. IEEE Access 2020, 8, 215592–215602.
  44. Sincan, O.M.; Keles, H.Y. AUTSL: A Large Scale Multi-Modal Turkish Sign Language Dataset and Baseline Methods. IEEE Access 2020, 8, 181340–181355.
  45. Zhang, S.; Meng, W.; Li, H.; Cui, X. Multimodal Spatiotemporal Networks for Sign Language Recognition. IEEE Access 2019, 7, 180270–180280.
  46. Zhao, T.; Liu, J.; Wang, Y.; Liu, H.; Chen, Y. Towards Low-Cost Sign Language Gesture Recognition Leveraging Wearables. IEEE Trans. Mob. Comput. 2021, 20, 1685–1701.
  47. Sharma, V.; Kumar, V.; Masaguppi, S.; Suma, M.; Ambika, D. Virtual Talk for Deaf, Mute, Blind and Normal Humans. In Proceedings of the 2013 Texas Instruments India Educators’ Conference (TIIEC), Bangalore, India, 4–6 April 2013.
  48. Abdulla, D.; Abdulla, S.; Manaf, R.; Jarndal, A. Design and implementation of a sign-to-speech/text system for deaf and dumb people. In Proceedings of the 2016 5th International Conference on Electronic Devices, Systems and Applications, Ras AL Khaimah, United Arab Emirates, 6–8 December 2016.
  49. Kadam, K.; Ganu, R.; Bhosekar, A.; Joshi, S. American sign language interpreter. In Proceedings of the 2012 IEEE Fourth International Conference on Technology for Education (T4E), Hyderabad, India, 18–20 July 2012.
  50. Ibarguren, A.; Maurtua, I.; Sierra, B. Layered architecture for real-time sign recognition. Int. J. Comput. 2009, 53, 1169–1183.
  51. Munib, Q.; Habeeb, M.; Takruri, B.; Al-Malik, H. American sign language (ASL) recognition based on Hough transform and neural networks. Exp. Syst. Appl. 2007, 32, 24–37.
  52. Geetha, M.; Manjusha, U. A vision-based recognition of indian sign language alphabets and numerals using B-spline approximation. Int. J. Comput. Sci. Eng. 2012, 4, 406.
  53. Elons, A.; Abull-Ela, M.; Tolba, M. A proposed PCNN features quality optimization technique for pose-invariant 3D Arabic sign language recognition. Softw. Comput. Appl. 2013, 13, 1646–1660.
  54. Mohandes, S.M. Arabic sign language recognition an image-based approach. In Proceedings of the 21st International Conference on Advanced Information Networking and Applications Workshops, Niagara Falls, ON, Canada, 21–23 May 2007.
  55. Erol, A.; Bebis, G.; Nicolescu, M.; Boyle, R.; Twombly, X. Vision-based hand pose estimation: A review. Comput. Vis. Imag. 2007, 108, 52–73.
  56. Ahmed, A.; Aly, S. Appearance-based arabic sign language recognition using hidden markov models. In Proceedings of the 2014 International Conference on Engineering and Technology (ICET), Cairo, Egypt, 19–20 April 2014.
  57. Thalange, A.; Dixit, S.C. OHST and wavelet features based Static ASL numbers recognition. Procedia Comput. Sci. 2016, 92, 455–460.
  58. Kau, L.; Su, W.; Yu, P.; Wei, S. A real-time portable sign language translation system. In Proceedings of the 2015 IEEE 58th International Midwest Symposium on Circuits and Systems (MWSCAS), Fort Collins, CO, USA, 2–5 August 2015.
  59. Kong, W.; Ranganath, S.T. Towards subject independent continuous sign language recognition: A segment and merge approach. Pattern Recogn. 2014, 47, 1294–1308.
  60. Al-Qurishi, M.; Khalid, T.; Souissi, R. Deep Learning for Sign Language Recognition: Current Techniques, Benchmarks, and Open Issues. IEEE Access 2021, 9, 126917–126951.
  61. Aly, W.; Aly, S.; Almotairi, S. User-Independent American Sign Language Alphabet Recognition Based on Depth Image and PCANet Features. IEEE Access 2019, 7, 123138–123150.
  62. Breland, D.S.; Skriubakken, S.B.; Dayal, A.; Jha, A.; Yalavarthy, P.K.; Cenkeramaddi, L.R. Deep Learning-Based Sign Language Digits Recognition From Thermal Images with Edge Computing System. IEEE Sens. J. 2021, 21, 10445–10453.
  63. Casam Njagi, N.; Wario, R.D. Sign Language Gesture Recognition through Computer Vision. In Proceedings of the 2018 IST-Africa Week Conference, Botswana, Africa, 9–11 May 2018; Cunningham, P., Cunningham, M., Eds.; IIMC International Information Management Corporation: Botswana, Africa, 2018; p. 8.
  64. Huang, J.; Zhou, W.; Li, H.; Li, W. Attention-Based 3D-CNNs for Large-Vocabulary Sign Language Recognition. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2822–2832.
  65. Huang, S.; Ye, Z. Boundary-Adaptive Encoder with Attention Method for Chinese Sign Language Recognition. IEEE Access 2021, 9, 70948–70960.
  66. Huang, S.; Mao, C.; Tao, J.; Ye, Z. A Novel Chinese Sign Language Recognition Method Based on Keyframe-Centered Clips. IEEE Signal Proc. Lett. 2018, 25, 442–446.
  67. Joy, J.; Balakrishnan, K.; Sreeraj, M. SignQuiz: A Quiz Based Tool for Learning Fingerspelled Signs in Indian Sign Language Using ASLR. IEEE Access. 2019, 7, 28363–28371.
  68. Kumar, D.A.; Sastry, A.S.; Kishore, P.V.; Kumar, E.K.; Kumar, M.T. S3DRGF: Spatial 3-D Relational Geometric Features for 3-D Sign Language Representation and Recognition. IEEE Signal Proc. Lett. 2019, 26, 169–173.
  69. Kumar, E.K.; Kishore, P.V.; Sastry, A.S.; Kumar, M.T.; Kumar, D.A. Training CNNs for 3-D Sign Language Recognition with Color Texture Coded Joint Angular Displacement Maps. IEEE Signal Proc. Lett. 2018, 25, 645–649.
  70. Kumar, E.K.; Kishore, P.; Kumar, M.T.; Kumar, D.A.; Sastry, A. Three-Dimensional Sign Language Recognition with Angular Velocity Maps and Connived Feature ResNet. IEEE Signal Proc. Lett. 2018, 25, 1860–1864.
  71. Liao, Y.; Xiong, P.; Min, W.; Min, W.; Lu, J. Dynamic Sign Language Recognition Based on Video Sequence with BLSTM-3D Residual Networks. IEEE Access 2019, 7, 38044–38054.
  72. Muneer, A.-H.; Ghulam, M.; Wadood, A.; Mansour, A.; Mohammed, B.; Tareq, A.; Mathkour, H.; Amine, M.M. Deep Learning-Based Approach for Sign Language Gesture Recognition with Efficient Hand Gesture Representation. IEEE Access 2020, 8, 192527–192542.
  73. Oliveira, T.; Escudeiro, N.; Escudeiro, P.; Emanuel, R. The VirtualSign Channel for the Communication between Deaf and Hearing Users. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje 2019, 14, 188–195.
  74. Kishore, P.; Kumar, D.A.; Sastry, A.C.; Kumar, E.K. Motionlets Matching with Adaptive Kernels for 3-D Indian Sign Language Recognition. IEEE Sens. J. 2018, 18, 8.
  75. Papastratis, I.; Dimitropoulos, K.; Konstantinidis, D.; Daras, P. Continuous Sign Language Recognition Through Cross-Modal Alignment of Video and Text Embeddings in a Joint-Latent Space. IEEE Access 2020, 8, 91170–91180.
  76. Kanwal, K.; Abdullah, S.; Ahmed, Y.; Saher, Y.; Jafri, A. Assistive Glove for Pakistani Sign Language Translation. In Proceedings of the 2014 IEEE 17th International Multi-Topic Conference (INMIC), Karachi, Pakistan, 8–10 December 2014.
  77. Sriram, N.; Nithiyanandham, M. A hand gesture recognition based communication system for silent speakers. In Proceedings of the 2013 International Conference on Human Computer Interactions (ICHCI), Warsaw, Poland, 14–17 August 2013.
  78. Fu, Y.; Ho, C. Static finger language recognition for handicapped aphasiacs. In Proceedings of the Second International Conference on Innovative Computing, Information and Control, Kumamoto, Japan, 5–7 September 2007.
  79. Kumar, P.; Gauba, H.; Roy, P.; Dogra, D. A multimodal framework for sensor based sign language recognition. Neurocomputing 2017, 259, 21–38.
  80. El Hayek, H.; Nacouzi, J.; Kassem, A.; Hamad, M.; El-Murr, S. Sign to letter translator system using a hand glove. In Proceedings of the 2014 Third International Conference on e-Technologies and Networks for Development (ICeND), Beirut, Lebanon, 29 April–1 May 2014.
  81. Matételki, P.; Pataki, M.; Turbucz, S.; Kovács, L. An assistive interpreter tool using glove-based hand gesture recognition. In Proceedings of the 2014 IEEE Canada International Humanitarian Technology Conference-(IHTC), Montreal, QC, Canada, 1–4 June 2014.
  82. McGuire, R.; Hernandez-Rebollar, J.; Starner, T.; Henderson, V.; Brashear, H.; Ross, D. Towards a one-way American sign language translator. In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, The Netherlands, 17–19 September 2004.
  83. Gałka, J.; Masior, M.M.; Zaborski, M.; Barczewska, K. Inertial motion sensing glove for sign language gesture acquisition and recognition. IEEE Sens. J. 2016, 16, 6310–6316.
  84. Oz, C.; Leu, M. American Sign Language word recognition with a sensory glove using artificial neural networks. Eng. Appl. Artif. Intell. 2011, 24, 1204–1213.
  85. Bavunoglu, H.; Bavunoglu, E. System of Converting Hand and Finger Movements into Text and Audio. Google Patents 15,034,875, 29 September 2016.
  86. Barranco, J.Á.Á. System and Method of Sign Language Interpretation. Spanish Patent 201,130,193, 14 February 2011.
  87. Sagawa, H.; Takeuchi, M. A method for recognizing a sequence of sign language words represented in a Japanese sign language sentence. In Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France, 28–30 March 2000.
  88. Ibarguren, A.; Maurtua, I.; Sierra, B. Layered architecture for real time sign recognition: Hand gesture and movement. Eng. Appl. Artif. Intell. 2010, 23, 1216–1228.
  89. Bajpai, D.; Porov, U.; Srivastav, G.; Sachan, N. Two Way Wireless Data Communication and American Sign Language Translator Glove for Images Text and Speech Display on Mobile Phone. In Proceedings of the 2015 Fifth International Conference on Communication Systems and Network Technologies (CSNT), Gwalior, India, 4–6 April 2015.
  90. Lei, L.; Dashun, Q. Design of data-glove and Chinese sign language recognition system based on ARM9. In Proceedings of the 2015 12th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), Qingdao, China, 16–18 July 2015.
  91. Cocchia, A. Smart and Digital City: A Systematic Literature Review. In Smart City; Springer: Berlin/Heidelberg, Germany, 2014.
  92. Vijay, P.; Suhas, N.; Chandrashekhar, C.; Dhananjay, D. Recent developments in sign language recognition: A review. Int. J. Adv. Comput. Eng. Commun. Technol. 2012, 1, 21–26.
  93. Dipietro, L.; Sabatini, A.; Dario, P. A survey of glove-based systems and their applications. IEEE Trans. Syst. Man Cybern. 2008, 38, 461–482.
  94. Khan, S.; Gupta, G.; Bailey, D.; Demidenko, S.; Messom, C. Sign language analysis and recognition: A preliminary investigation. In Proceedings of the 24th International Conference Image and Vision Computing New Zealand, Wellington, New Zealand, 23–25 November 2009.
  95. Verma, S.K. HANDTALK: Interpreter for the Differently Abled: A Review. IJIRCT 2016, 1, 4.
  96. Shriwas, N.V. A Preview Paper on Hand Talk Glove. Int. J. Res. Appl. Sci. Eng. Technol. 2015. Available online: https://www.ijraset.com/fileserve.php?FID=2179 (accessed on 21 December 2021).
  97. Al-Ahdal, M.; Nooritawati, M. Review in sign language recognition systems. In Proceedings of the 2012 IEEE Symposium on Computers & Informatics (ISCI), Penang, Malaysia, 18–20 March 2012.
  98. Anderson, R.; Wiryana, F.; Ariesta, M.; Kusuma, G. Sign Language Recognition Application Systems for Deaf-Mute People: A Review Based on Input-Process-Output. Procedia Comput. Sci. 2017, 116, 441–448.
  99. Pradhan, G.; Prabhakaran, B.; Li, C. Hand-gesture computing for the hearing and speech impaired. IEEE MultiMedia 2008, 15, 20–27.
  100. Bui, T.; Nguyen, L. Recognizing postures in Vietnamese sign language with MEMS accelerometers. IEEE Sens. J. 2007, 7, 707–712.
  101. Sadek, M.; Mikhael, M.; Mansour, H. A new approach for designing a smart glove for Arabic Sign Language Recognition system based on the statistical analysis of the Sign Language. In Proceedings of the 2017 34th National Radio Science Conference (NRSC), Alexandria, Egypt, 13–16 March 2017.
  102. Khambaty, Y.; Quintana, R.; Shadaram, M.; Nehal, S.; Virk, M.; Ahmed, W.; Ahmedani, G. Cost effective portable system for sign language gesture recognition. In Proceedings of the 2008 IEEE International Conference on System of Systems Engineering, Monterey, CA, USA, 2–4 June 2008.
  103. Ahmed, S.; Islam, R.; Zishan, M.; Hasan, M.; Islam, M. Electronic speaking system for speech impaired people: Speak up. In Proceedings of the 2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 21–23 May 2015.
  104. Aguiar, S.; Erazo, A.; Romero, S.; Garcés, E.; Atiencia, V.; Figueroa, J. Development of a smart glove as a communication tool for people with hearing impairment and speech disorders. In Proceedings of the 2016 IEEE Ecuador Technical Chapters Meeting (ETCM), Guayaquil, Ecuador, 12–14 October 2016.
  105. Ani, A.; Rosli, A.; Baharudin, R.; Abbas, M.; Abdullah, M. Preliminary study of recognizing alphabet letter via hand gesture. In Proceedings of the 2014 International Conference on Computational Science and Technology (ICCST), Kota Kinabalu, Malaysia, 27–28 August 2014.
  106. Kamal, S.M.; Chen, Y.; Li, S.; Shi, X.; Zheng, J. Technical Approaches to Chinese Sign Language Processing: A Review. IEEE Access 2019, 7, 96926–96935.
  107. Kudrinko, K.; Flavin, E.; Zhu, X.; Li, Q. Wearable Sensor-Based Sign Language Recognition: A Comprehensive Review. IEEE Rev. Biomed. Eng. 2021, 14, 82–97.