Sensor-Based Gesture Recognition and Algorithm

Although traditional vision-based gesture recognition technology has matured, it has significant limitations in underwater environments. Underwater cameras are expensive, the underwater shooting environment is complex, and the line of sight is easily obstructed by water flow, bubbles, and other disturbances, which makes shooting difficult. Sensor-based gesture recognition technology has therefore become popular for underwater gesture recognition because of its lower cost and higher stability (it is not easily affected by the underwater environment).

Keywords: gesture recognition technology; sensor; algorithms

1. Sensor-Based Gesture Recognition

Sensor-based gesture recognition can be roughly divided into the following four types: surface electromyography (sEMG) signal-based gesture recognition, IMU-based gesture recognition, stretch-sensor-based gesture recognition, and multi-sensor-based gesture recognition.
sEMG usually records, on the skin's surface, the combined effect of the electromyographic signals of the surface muscles and the electrical activity of the nerve trunks. sEMG-based gesture recognition usually relies on surface electrodes deployed on the human arm or forearm to collect sensor signals [1][2][3][4]. However, sEMG-based gesture recognition also has some drawbacks. First, the signals correlate strongly with the user's physical state, leading to unstable recognition results. Second, collecting sEMG signals requires the electrodes to be tightly attached to the user's skin; prolonged use is susceptible to interference from the oils and sweat produced by the skin and can make users uncomfortable.
IMU-based gesture recognition mainly uses one or more combinations of accelerometers, gyroscopes, and magnetometers to collect hand movement information in space [5]. Siddiqui and Chan [6] used the minimum-redundancy maximum-relevance algorithm to study the optimal deployment area of the sensor, deployed the sensor on the user's wrist, and proposed a multimodal framework to address the bottleneck of IMU sensing during gesture movement. Galka et al. [7] placed seven inertial sensors on the experimenter's upper arm, wrist, and finger joints and proposed a parallel HMM model that reached a recognition accuracy of 99.75%. However, inertial sensors still have limitations: they focus on spatial information and are mainly suited to coarse-grained recognition of large gesture movements. Finer-grained segmentation and recognition, such as recognizing the degree of bending of finger joints, remain challenging.
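As an illustration of how raw IMU channels are typically combined before classification, the following is a minimal sketch of a complementary filter that fuses accelerometer tilt with integrated gyroscope rate into a pitch estimate. This is a generic, widely used technique, not the method of [6] or [7], and the array shapes and axis conventions here are assumptions for illustration only.

```python
import numpy as np

def complementary_filter(accel, gyro_pitch_rate, dt=0.01, alpha=0.98):
    """Fuse accelerometer tilt with integrated gyro rate into a pitch estimate.

    accel:           (N, 3) accelerometer samples in g (assumed axis layout)
    gyro_pitch_rate: (N,) pitch-rate samples in rad/s
    """
    pitch = 0.0
    estimates = []
    for a, omega in zip(accel, gyro_pitch_rate):
        # Tilt angle implied by the gravity direction (noisy but drift-free).
        accel_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Blend integrated gyro (smooth but drifting) with accel tilt (noisy but stable).
        pitch = alpha * (pitch + omega * dt) + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return np.array(estimates)
```

The blending weight `alpha` trades gyroscope smoothness against accelerometer stability; estimates like these are commonly fed to the downstream gesture classifier instead of raw sensor channels.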
Flexible stretch-sensor-based gesture recognition is usually used to record changes in the finger joints during gesturing. Stretch sensors are often more flexible, thinner, and more portable than other sensors [8][9]. Research on gesture recognition based on stretch sensors has therefore received extensive attention in recent years. However, the limitations of flexible stretch sensors are also evident. First, they can capture hand-joint information but not the spatial motion characteristics of gestures. Second, stretch sensors are usually delicate, so they are more prone to damage, and the data they generate are more prone to bias than those from other sensors.
Although the above three sensor-based gesture recognition methods can achieve remarkable accuracy, they all have limitations because each uses a single type of sensor. Multisensor gesture recognition can mitigate these problems by fusing multisensor data, thereby improving recognition accuracy and recognizing more types of gestures. Plawiak et al. [8] used a DG5 VHand glove device, which consists of five finger-flexion sensors and an IMU, to identify 22 dynamic gestures with a recognition accuracy of 98.32%. Lu et al. [10] fused acceleration and surface electromyography signals, proposed an algorithm based on Bayesian methods and dynamic time warping (DTW), and realized a gesture recognition system that recognizes 19 predefined gestures with an accuracy of 95.0%. Gesture recognition with multisensor fusion can avoid the limitations of a single sensor, draw on the strengths of multiple approaches, capture the characteristics of each dimension of a gesture from multiple angles, and improve recognition accuracy.
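To make the template-matching idea behind DTW concrete, here is a minimal sketch of the classic DTW recurrence on one-dimensional sequences. It shows only the core dynamic program; the Bayesian fusion and multichannel handling of [10] are not reproduced, and the example signals are synthetic stand-ins.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])            # local distance
            cost[i, j] = d + min(cost[i - 1, j],    # insertion
                                 cost[i, j - 1],    # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Two gestures of different speed but similar shape align cheaply.
slow = np.sin(np.linspace(0, np.pi, 80))
fast = np.sin(np.linspace(0, np.pi, 50))
print(dtw_distance(slow, fast))
```

Because DTW warps the time axis, a template gesture still matches a user who performs the same motion faster or slower, which is why it is a common choice for small predefined gesture vocabularies.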

2. Sensor-Based Gesture Recognition Algorithm

Sensor-based gesture recognition algorithms are generally divided into the following two types: traditional machine learning and deep learning.
Gesture recognition algorithms based on machine learning (ML) include DTW, support vector machines (SVMs), random forests (RFs), K-means, and K-nearest neighbors (KNN) [8][11][12][13]. These methods are widely applicable and adaptable to various types of complex gesture data, and many researchers have worked on improving them for sensor-based gesture recognition. ML-based methods are relatively simple to implement, generate fewer parameters than neural networks, and place relatively low demands on computing equipment. However, as the number of gesture types and the length of gesture data sequences grow, the training data required also grows, and the accuracy and response time of the recognition algorithm are affected to a certain extent.
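As a concrete illustration of this class of methods, the following minimal sketch extracts simple per-channel statistics from fixed-length sensor windows and trains an SVM with scikit-learn. The data here are random stand-ins, and the window size, channel count, and class count are assumptions; this is a generic ML baseline, not the exact setup of any cited work.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(window: np.ndarray) -> np.ndarray:
    """Simple statistical features per sensor channel: mean, std, min, max."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

# Hypothetical data: 200 gesture windows, each 50 samples x 6 IMU channels.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(50, 6))) for _ in range(200)])
y = rng.integers(0, 5, size=200)  # 5 hypothetical gesture classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

The whole pipeline has only a handful of parameters, which is exactly the lightweight profile that makes ML methods attractive for embedded or glove-side deployment.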
The basic models of deep-learning (DL)-based gesture recognition mainly include convolutional neural networks (CNNs) [14], deep neural networks (DNNs) [15], and recurrent neural networks (RNNs) [16]. DL models have become the mainstream classification method in gesture recognition due to their excellent performance, efficient feature extraction, and ability to process sequential data. Fang et al. [17] designed a CNN-based SLRNet network to recognize sign language; the method used an inertial-sensor-based data glove with 36 IMUs to collect the user's arm and hand motion data and reached an accuracy of 99.2%. Faisal et al. [18] developed a low-cost data glove equipped with flexible sensors and an IMU and introduced a spatial-projection method that improves on classic CNN models for gesture recognition; however, its accuracy for static gesture recognition is only 82.19%. Yu et al. [19] used a bidirectional gated recurrent unit (Bi-GRU) network to recognize dynamic gestures in real time on the end side (the data glove itself), reaching a recognition accuracy of 98.4%. The limitation of this approach is that the smart glove cannot be used alone: external IMUs must also be worn on the user's arm, which can cause discomfort.
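To show the general shape of such a sequence classifier, here is a minimal PyTorch sketch of a bidirectional GRU over multichannel sensor sequences. The layer sizes, channel count, and number of classes are illustrative assumptions, not the published architecture of [19].

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Bidirectional GRU over a sensor sequence, in the spirit of [19]."""
    def __init__(self, n_channels=6, hidden=64, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, h = self.gru(x)                 # h: (2, batch, hidden)
        h = torch.cat([h[0], h[1]], -1)    # final forward + backward states
        return self.head(h)                # (batch, n_classes) logits

model = BiGRUClassifier()
logits = model(torch.randn(8, 100, 6))     # 8 sequences, 100 timesteps each
print(logits.shape)                        # torch.Size([8, 10])
```

Reading the sequence in both directions lets the final hidden states summarize the whole gesture, which is what makes this family of models well suited to dynamic gestures of varying length.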
The model must be selected according to the task type, requirements, and other factors. The amphibious setting is complex: underwater and land environments differ, and they interfere with the sensors in entirely different ways. Bluetooth signals are difficult to transmit underwater, making it hard to send data wirelessly to a host. Choosing a gesture recognition model suited to the amphibious environment is therefore essential. One study addresses this gap by proposing a novel amphibious hierarchical gesture recognition (AHGR) model that adaptively switches classification algorithms according to environmental changes (underwater vs. land) to ensure recognition accuracy in amphibious scenarios. In addition, ensuring accuracy for cross-user and cross-device recognition with a pretrained DL model remains challenging. Although some studies on gesture recognition across users and environments have made progress [4], they mainly focused on EMG-based recognition, and research on cross-user gesture recognition with data gloves based on stretch sensors and IMUs is still lacking.
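The hierarchical idea can be pictured as a two-stage dispatcher: detect the environment first, then route the sample to the classifier trained for that environment. The following Python sketch is purely illustrative; the detector and per-environment classifiers are hypothetical placeholders, not the published AHGR implementation.

```python
class HierarchicalRecognizer:
    """Two-stage recognizer: environment detection, then per-environment classification."""

    def __init__(self, env_detector, classifiers):
        self.env_detector = env_detector  # callable returning e.g. "underwater" or "land"
        self.classifiers = classifiers    # dict: environment name -> fitted classifier

    def predict(self, sample):
        env = self.env_detector(sample)              # stage 1: which environment?
        return self.classifiers[env].predict(sample)  # stage 2: which gesture?
```

Keeping the two stages separate means each gesture classifier only ever sees data from the sensor conditions it was trained on, which is the core of the adaptivity described above.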

References

  1. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture Recognition by Instantaneous Surface EMG Images. Sci. Rep. 2016, 6, 36571.
  2. Hu, Y.; Wong, Y.; Wei, W.; Du, Y.; Kankanhalli, M.; Geng, W. A Novel Attention-Based Hybrid CNN-RNN Architecture for sEMG-Based Gesture Recognition. PLoS ONE 2018, 13, e0206049.
  3. Milosevic, B.; Farella, E.; Benatti, S. Exploring Arm Posture and Temporal Variability in Myoelectric Hand Gesture Recognition. In Proceedings of the 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), Enschede, The Netherlands, 26–29 August 2018; IEEE: Enschede, The Netherlands, 2018; pp. 1032–1037.
  4. Duan, D.; Yang, H.; Lan, G.; Li, T.; Jia, X.; Xu, W. EMGSense: A Low-Effort Self-Supervised Domain Adaptation Framework for EMG Sensing. In Proceedings of the 2023 IEEE International Conference on Pervasive Computing and Communications (PerCom), Atlanta, GA, USA, 13–17 March 2023; IEEE: Atlanta, GA, USA, 2023; pp. 160–170.
  5. Kim, M.; Cho, J.; Lee, S.; Jung, Y. IMU Sensor-Based Hand Gesture Recognition for Human-Machine Interfaces. Sensors 2019, 19, 3827.
  6. Siddiqui, N.; Chan, R.H.M. Multimodal Hand Gesture Recognition Using Single IMU and Acoustic Measurements at Wrist. PLoS ONE 2020, 15, e0227039.
  7. Galka, J.; Masior, M.; Zaborski, M.; Barczewska, K. Inertial Motion Sensing Glove for Sign Language Gesture Acquisition and Recognition. IEEE Sens. J. 2016, 16, 6310–6316.
  8. Plawiak, P.; Sosnicki, T.; Niedzwiecki, M.; Tabor, Z.; Rzecki, K. Hand Body Language Gesture Recognition Based on Signals From Specialized Glove and Machine Learning Algorithms. IEEE Trans. Ind. Inf. 2016, 12, 1104–1113.
  9. Preetham, C.; Ramakrishnan, G.; Kumar, S.; Tamse, A.; Krishnapura, N. Hand Talk-Implementation of a Gesture Recognizing Glove. In Proceedings of the 2013 Texas Instruments India Educators’ Conference, Bangalore, India, 4–6 April 2013; IEEE: Bangalore, India, 2013; pp. 328–331.
  10. Lu, Z.; Chen, X.; Li, Q.; Zhang, X.; Zhou, P. A Hand Gesture Recognition Framework and Wearable Gesture-Based Interaction Prototype for Mobile Devices. IEEE Trans. Human-Mach. Syst. 2014, 44, 293–299.
  11. Agab, S.E.; Chelali, F.Z. New Combined DT-CWT and HOG Descriptor for Static and Dynamic Hand Gesture Recognition. Multimed Tools Appl. 2023, 82, 26379–26409.
  12. Khan, A.M.; Tufail, A.; Khattak, A.M.; Laine, T.H. Activity Recognition on Smartphones via Sensor-Fusion and KDA-Based SVMs. Int. J. Distrib. Sens. Netw. 2014, 10, 503291.
  13. Trigueiros, P.; Ribeiro, F.; de Azurém, C.; Reis, L.P. A Comparison of Machine Learning Algorithms Applied to Hand Gesture Recognition. In Proceedings of the 7th Iberian Conference on Information Systems and Technologies (CISTI 2012), Madrid, Spain, 20–23 June 2012.
  14. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
  15. Murad, A.; Pyun, J.-Y. Deep Recurrent Neural Networks for Human Activity Recognition. Sensors 2017, 17, 2556.
  16. Hammerla, N.Y.; Halloran, S.; Ploetz, T. Deep, Convolutional, and Recurrent Models for Human Activity Recognition Using Wearables. arXiv 2016, arXiv:1604.08880.
  17. Fang, B.; Lv, Q.; Shan, J.; Sun, F.; Liu, H.; Guo, D.; Zhao, Y. Dynamic Gesture Recognition Using Inertial Sensors-Based Data Gloves. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Toyonaka, Japan, 3–5 July 2019; IEEE: Toyonaka, Japan, 2019; pp. 390–395.
  18. Faisal, M.A.A.; Abir, F.F.; Ahmed, M.U.; Ahad, A.R. Exploiting Domain Transformation and Deep Learning for Hand Gesture Recognition Using a Low-Cost Dataglove. Sci. Rep. 2022, 12, 21446.
  19. Yu, C.; Fan, S.; Liu, Y.; Shu, Y. End-Side Gesture Recognition Method for UAV Control. IEEE Sens. J. 2022, 22, 24526–24540.