Sign Language Recognition Models: History

Sign language recognition is challenging due to the communication barrier between normal and affected people. A speaking or hearing disability creates many social and psychological impacts. Many techniques of different kinds have been proposed previously to overcome this gap. A sensor-based smart glove for sign language recognition (SLR) proved helpful in generating data from the various hand movements associated with specific signs. A detailed comparative review of all available techniques and sensors used for sign language recognition was presented in this article. The focus of this paper was to explore emerging trends and strategies for sign language recognition and to point out deficiencies in existing systems. This paper will act as a guide for other researchers to understand the materials and techniques used for sign language recognition until now, such as flex-resistive-sensor-based, vision-sensor-based, and hybrid system-based technologies.

  • sign language
  • sign language review
  • American Sign Language
  • sensor recognition
  • vision recognition models

1. Introduction

A speaking or hearing disability can affect people either naturally or accidentally. Of the world’s whole population, approximately 72 million people are deaf-mute. A lack of communication exists between ordinary and deaf-mute people, and this communication gap affects their whole lives. A unique language based on hand gestures and facial expressions lets these affected people interact with their environment and society; this language is known as sign language. Sign language varies according to region and native language. However, when we speak of standards, American Sign Language is considered the standard for number and alphabet recognition. This standard is a communication tool for affected people only: an average healthy person with full abilities to speak and hear is not required to know it and is usually entirely unfamiliar with these signs. There are two ways to make communication feasible between a healthy and an affected person: first, convince the healthy person to learn all sign language gestures for communication with the deaf-mute person; or second, enable the deaf-mute person to translate gestures into an ordinary speech format so that everyone can understand sign language easily. Considering the first option, it looks almost impossible to convince every healthy person to learn sign language for communication; this is also the main drawback of sign language. Therefore, technologists and researchers have focused on the second option, making deaf-mute people capable of converting their gestures into meaningful voice or text information. For sign language recognition, a smart glove embedded with sensors was introduced that can convert hand gestures into meaningful information easily understandable by ordinary people.

Smart technology-based sign language interpreters that remove the communication gap between normal and affected people use different techniques. These techniques are based on image processing or vision sensors, sensor-fusion-based smart data gloves, or hybrid combinations of both. Each approach has its own constraints: in a vision-sensor-based recognition system, extracting the required features from an image usually creates problems due to foreground and background environmental conditions, whereas a sensor-based smart data glove imposes no such foreground or background limitation and, being mobile, lightweight, and flexible, poses no limitation in being carried. Research has shown that many applications based on vision sensors, flex-based sensors, or hybrid techniques with different combinations of sensors are currently used as communication tools. These applications also act as learning tools that help normal people communicate comfortably with deaf or mute people. The latest technologies, such as robotics, virtual reality, visual gaming, intelligent computer interfaces, and health-monitoring components, use sign language-based applications. The goal of this article was to understand in depth the current developments and emerging techniques in sign language recognition systems. The article reflected on the evolution of gesture recognition-based systems and their performance, keeping in mind the limitations and the pros and cons of each module. The aim of this study was to identify technological gaps and provide an analysis that lets researchers work on the highlighted limitations in the future.
These aims and objectives were fulfilled by considering published articles in terms of the specified domain, the technology used, the gestures and hand movements recognized, and the sensor types and languages targeted for recognition. This paper also reflected on the methods of performance evaluation and the effectiveness achieved by the sign language techniques used previously.

People with speech and hearing disabilities use sign language based on hand gestures, communicating through specific finger motions that represent the language. A smart glove is an electronic device designed to create a communication link for individuals with speech disorders by translating sign language into text, making communication feasible between mute people and the public. Designing such a system requires close analysis of its engineering and scientific aspects, with the social inclusion of these individuals as the fundamental consideration. Sign language recognition techniques fall into three main streams, namely vision-sensor-based systems, flex-sensor-based systems, and hybrid systems that fuse both, as listed below.

1. Vision-sensor based SLR system

2. Sensor-based SLR system

3. Hybrid SLR system

A comprehensive study of work related to sign language recognition was also conducted. Review-based articles played a vital role in understanding sign language fundamentals. General review papers offered application-based discussion with several pros and cons; their focus was on emerging trends, developing technologies, and the characteristics of sign-detection devices. Sign language recognition-specific review articles covered most of the techniques achieved for gesture recognition, focusing on the technology, the choice of the right sensory material, and the limitations of existing systems. These articles therefore provide a deep understanding of the recognition materials and methods used to obtain an efficient sign dataset with maximum accuracy. Essentially, any machine can be made intelligent with the help of a machine learning approach; machine learning techniques fall under the tree of artificial intelligence. The data from which the algorithm learns to perform its operations were provided by the sensor readings collected in a file and used for training purposes. In this way, a gesture is recognized, and the algorithm’s efficiency can be computed through testing operations. With the help of this technique, the communication barrier between mute people and society can be reduced to a great extent. Since many people cannot understand sign gestures, this communication hindrance between the public and mute people is the main problem to be addressed.
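The train-then-test workflow described above (collect labelled sensor readings, let an algorithm learn from them, then recognize unseen gestures) can be sketched in a few lines. The sensor values, gesture labels, and nearest-neighbour rule below are illustrative assumptions for a minimal example, not the pipeline of any particular system reviewed here.

```python
# Minimal sketch: recognize hand gestures from flex-sensor readings with a
# nearest-neighbour rule. All sensor values and gesture labels are synthetic.
import math

# "Training data": each sample pairs five finger-bend readings
# (thumb..pinky, arbitrary ADC-like units) with the gesture they encode.
TRAINING = [
    ([900, 880, 870, 860, 850], "fist"),       # all fingers bent
    ([200, 210, 205, 215, 200], "open palm"),  # no fingers bent
    ([880, 200, 210, 870, 860], "V sign"),     # index and middle extended
]

def recognize(reading):
    """Return the label of the stored gesture closest to the reading
    (Euclidean distance), i.e. the 'testing operation' on unseen data."""
    def dist(sample):
        return math.dist(sample[0], reading)
    return min(TRAINING, key=dist)[1]

print(recognize([895, 885, 865, 855, 845]))  # → fist
```

Real systems in the reviewed literature use far larger datasets and machine learning or neural network models rather than a three-sample lookup, but the data flow (stored training readings, a distance or learned decision rule, and a predicted sign label) is the same.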

2. Different Recognition Models

Many image processing and sensor-based techniques have been applied to sign language recognition, and recent studies have presented ever newer frameworks for sign recognition. Detailed literature analysis and a deep understanding of sign language recognition allowed this process to be categorized into different sub-sections. The division was completely application-based, depending on the type of sensors used in the construction of data gloves: non-commercial prototypes for data acquisition, commercial data acquisition prototypes, bi-channel-based systems, and hybrid systems. Non-commercial systems are self-made prototypes that use sensors to gather data; the sensor values are transmitted to another device, usually a processor or controller, which interprets the transmitted sensor data and converts them into the corresponding sign language output. Most of the sensor-based systems discussed in the literature review are non-commercial prototypes. In non-commercial systems, most authors worked on detecting the finger bends involved in making a sign, using a large variety of single sensors or sensor combinations. SLR models can thus be further divided as follows.

1. Sensor-based models

2. Vision-based models

3. Non-commercial models for data glove

4. Commercial data glove models

5. Hybrid recognition models

6. Framework-based recognition models

All details of recognition models are discussed in the paper referenced below.
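The data flow of a non-commercial prototype described above (glove sensors → controller → sign output) can be illustrated with a small sketch of the controller-side logic. The bend threshold and the two-sign lookup table are assumptions for illustration only, not values taken from any reviewed prototype.

```python
# Sketch of a non-commercial prototype's controller logic: raw flex-sensor
# values arrive from the glove, are reduced to a bent/straight pattern per
# finger, and the pattern is looked up in a table of known signs.
# BEND_THRESHOLD and SIGN_TABLE are illustrative assumptions.
BEND_THRESHOLD = 500  # reading above which a finger counts as bent

# Bend patterns (thumb, index, middle, ring, pinky) for two example signs.
SIGN_TABLE = {
    (True, True, True, True, True): "fist",
    (True, False, False, True, True): "V",
}

def decode(raw_readings):
    """Convert five raw sensor values into text for the matched sign."""
    pattern = tuple(r > BEND_THRESHOLD for r in raw_readings)
    return SIGN_TABLE.get(pattern, "unknown sign")

print(decode([880, 120, 140, 860, 900]))  # → V
```

A fixed threshold table like this only distinguishes coarse static postures; the reviewed systems replace the lookup with trained classifiers and add more sensor channels (e.g., accelerometers) to handle dynamic gestures.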

3. Analysis

Sign language recognition is one of the emerging trends of the modern era. Much research has already been conducted, and many researchers are currently working in this domain. The focus of this article is to provide a brief analysis of all related work done so far; for this purpose, a complete breakdown of research activities was developed. Some authors worked on a general discussion of sign language; most of their work was based on introductions and hypotheses for dealing with sign language scenarios, with no practical implementation of the proposed hypotheses, so these authors fall into the general article category. Another group of authors developed systems able to recognize sign gestures and is categorized in the developer domain; a good combination of sensors was used to develop these sign language recognition systems, with most authors using sensor-based gloves to recognize sign gestures. A further group of authors worked on existing sensor-based models and improved the accuracy and efficiency of the system, focusing on a good combination of machine learning and neural network-based algorithms to achieve high accuracy. Regarding the authors’ choices, machine learning-based algorithms were used by authors working on sensor-based models such as sensory gloves, while neural network-based models were used by authors working on vision-sensor-based models for sign gesture recognition.

Considering the literature, some trends were also kept under consideration. Most authors in the sign language domain preferred to develop their own sensor-based models; their focus was to build a cheap and efficient model of their own that could detect and recognize gestures easily. These models were not made for commercial use. Other authors followed the trend of developing commercial gloves, which contained a large number of sensors, e.g., 18–20, to detect sign gestures; cost and efficiency were the main problems with commercial gloves. Analyzing these research articles, the advantages and disadvantages of vision-sensor-based, sensor-based, and hybrid recognition models were listed. Finally, the last trend of focus in sign language articles, including this one, was the group of authors who produced surveys and review articles on sign recognition. These authors provided a deep understanding of the research work done previously and detailed knowledge of hardware modules, sensor performance, efficiency analysis, and accuracy comparisons. The advantage of review and survey articles over general and development research articles was the filtered knowledge gathered into a single article; survey-based articles proved to be a good help for learners and newcomers to the topic, and they also presented researchers with upcoming challenges, trends, motivations, and future recommendations. A detailed comparative study helps determine the uses, limitations, benefits, and advancements in the sign language domain.

4. Conclusion

Developing an automatic machine-based SL translation system that transforms SL into speech and text, or vice versa, is particularly helpful in improving intercommunication. Progress in pattern recognition promises automated translation systems, but many complex problems need to be solved before they become a reality. Several aspects of SLR technology, particularly SLR using the glove-sensor approach, have been previously explored and investigated by researchers. In this paper, an in-depth comparative analysis of different sensors was presented, addressing and describing the challenges, benefits, and recommendations related to SLR. The paper discussed the literature of other researchers, mainly targeting the available glove types, the sensors used for capturing data, the techniques adopted for recognition, the dataset identified in each article, and the specification of the processing unit and output devices of the recognition systems. This comparative analysis should help in exploring and developing a translation system capable of interpreting different sign languages. Finally, the datasets generated from these sensors can be used for classification and segmentation tasks to assist in continuous gesture recognition.

This entry is adapted from the peer-reviewed paper 10.3390/jimaging8040098
