Human Gait Activity Based on Multimodal Sensors: History

Remote health monitoring plays a significant role in research areas related to medicine, neurology, rehabilitation, and robotic systems. These applications include Human Activity Recognition (HAR) using wearable sensors, signal processing, mathematical methods, and machine learning to improve the accuracy of remote health monitoring systems.

  • multimodal sensor
  • motion classification
  • computational intelligence

1. Introduction

Human gait is a natural activity that people perform every day to move from one point to another, involving muscle, nerve, and brain activity. Human joints are a fundamental part of human movement; therefore, gait analysis is needed to study kinetics and kinematics [1,2], which are examined by physiotherapists, orthopedists, and neurologists to analyze and assess the status, treatment, and rehabilitation of patients [3]. Extrinsic and intrinsic factors (both psychological and physical) influence daily human activities; hence, determining normal gait parameters is very difficult [4]. In addition, gait analysis has a wide range of applications in different fields, such as neurology for monitoring neurological symptoms [5], or rehabilitation and physical therapy for the detection of gait disorders [6,7].
Physical activity monitoring via body-worn devices has recently expanded thanks to advances in sensor technology (multimodal fusion sensors). Such devices help vulnerable people maintain or improve the quality of their individual and social lives through activity tracking [8]. Developing automatic information systems and improved methods for analyzing biosignals with AI in this area is one way to contribute to more efficient health care.
The devices used to acquire body signals fall into three categories: non-wearable sensors (NWS), wearable sensors (WS), and hybrid systems [2]. Nevertheless, WS are the most commonly used owing to their low cost, small dimensions, and high precision. These sensors are mounted on the body to acquire gait biosignal information during daily activities. WS include force sensors, accelerometers, gyroscopes, extensometers, inclinometers, goniometers, active markers, electromyography, etc. To optimize the functionality of accelerometers, gyroscopes, and magnetometers, they are fused into a single unit, called an Inertial Measurement Unit (IMU), using multimodal sensor-fusion technologies.
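The entry does not specify how the fused signals are combined inside an IMU; a classic approach is a complementary filter, which blends the gyroscope's short-term rate integration with the accelerometer's gravity reference to estimate orientation. The following is a minimal illustrative sketch (function name, `alpha`, and sample values are assumptions, not taken from the entry):

```python
import math

def complementary_filter(pitch, accel, gyro_rate, dt, alpha=0.98):
    """Fuse one accelerometer reading and one gyroscope rate into a pitch
    estimate: the gyroscope tracks fast motion, while the accelerometer's
    gravity direction corrects long-term drift."""
    ax, ay, az = accel
    # Pitch implied by gravity (valid when total acceleration ~ gravity).
    accel_pitch = math.atan2(-ax, math.sqrt(ay ** 2 + az ** 2))
    # Weighted blend of integrated gyro rate and accelerometer reference.
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Example: device at rest, lying flat; accelerometer reads ~1 g on z.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, (0.0, 0.0, 9.81), 0.0, dt=0.01)
print(round(pitch, 4))  # stays at 0 for a level, stationary device
```

The weight `alpha` trades responsiveness (gyroscope) against drift correction (accelerometer); production IMUs typically use more elaborate schemes such as Kalman filtering.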
Extensive research has used body-worn inertial sensors and fostered the development of original Human Activity Recognition (HAR) applications. These applications include health rehabilitation, well-being assistance [9], smart homes and biofeedback systems [8], gait analysis [10,11,12,13], motion symmetry studies [14], and the monitoring of human activities [15,16]. Each of these applications requires continuous monitoring and tracking [17,18,19,20,21].
Feature extraction and selection algorithms are meant to sort pertinent features or suppress redundant information so that activities are recognized more accurately and efficiently. The relevant features are commonly based on time-domain, wavelet, and statistical analysis of signals from several IMU sensors mounted on the body [22]. The frequency spectrum of accelerometer signals has helped researchers predict vibrations in building structures [23] or turbines [24] and recognize the running path of dogs [25]. In addition, selection techniques that identify the most relevant features in a dataset are needed to simplify the learned models, decrease computational complexity, and improve the model's efficiency for recognition tasks. Using expensive industrial-grade IMU sensors and extracting more complex features, such as entropy [26,27] or frequency measures [20,28], brings promising results.
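To make the time- and frequency-domain features mentioned above concrete, a minimal sketch of window-based feature extraction from a single accelerometer axis might look as follows (the feature set, window length, and sampling rate are illustrative choices, not those of any cited study):

```python
import numpy as np

def extract_features(window, fs):
    """Compute a few common time- and frequency-domain features from one
    window of a single accelerometer axis (1-D array, sampling rate fs)."""
    feats = {
        "mean": np.mean(window),
        "std": np.std(window),
        "rms": np.sqrt(np.mean(window ** 2)),
    }
    # Magnitude spectrum via the real FFT; the dominant frequency is a
    # common descriptor of periodic activities such as walking.
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    feats["dominant_freq"] = freqs[np.argmax(spectrum)]
    return feats

# Synthetic 2 Hz "gait" oscillation sampled at 50 Hz for 4 s.
fs = 50
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(len(t))
print(extract_features(signal, fs)["dominant_freq"])  # close to 2.0 Hz
```

In practice such features are computed per axis and per sensor over sliding windows, and a selection step then prunes redundant dimensions before classification.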

2. Human Gait Activity Based on Multimodal Sensors

Wearable sensors (WS) dominate recent advances because researchers have successfully used body-worn devices to monitor personal locomotion behavior and recognize human activity. The most common WS combines an accelerometer and a gyroscope in a single wearable inertial measurement unit (IMU). Another type of WS, based on the electrical currents associated with muscular actions, is also used in combination with IMUs for HAR [22]. These WS, known as electromyography (EMG) sensors, measure the myoelectric signals produced by muscular actions, hence their importance in activity recognition. A study on the fusion of EMG and IMU sensors for HAR is presented in [22], showing the potential of incorporating EMG signals in activity recognition.
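The entry does not describe how [22] combines the two modalities; one simple and widely used option is feature-level fusion, where per-modality descriptors are concatenated into a single vector for the classifier. A minimal sketch under that assumption (feature choices and sample values are illustrative):

```python
import numpy as np

def fuse_features(imu_window, emg_window):
    """Feature-level fusion: compute simple descriptors per modality and
    concatenate them into one vector for a downstream classifier."""
    imu_feats = [np.mean(imu_window), np.std(imu_window)]
    # Mean absolute value and RMS are standard EMG amplitude features.
    emg_feats = [np.mean(np.abs(emg_window)),
                 np.sqrt(np.mean(emg_window ** 2))]
    return np.concatenate([imu_feats, emg_feats])

imu = np.array([0.10, 0.20, 0.15, 0.05])   # toy accelerometer window
emg = np.array([-0.3, 0.4, -0.2, 0.5])     # toy myoelectric window
vec = fuse_features(imu, emg)
print(vec.shape)  # (4,)
```

Alternatives include decision-level fusion, where separate classifiers per modality vote on the final activity label.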
A real-time personal locomotion monitoring system is introduced in [8], using three inertial sensors at different body locations (wrist, thigh, and chest). Data were processed through Gaussian and zero-phase filters. A hierarchical feature-based technique extracts features using stochastic gradient descent optimization, achieving an accuracy of 92.50% in experiments on the HuGaDB dataset. In [5], a feature extraction technique based on the Discrete Fourier Transform is proposed to estimate the mean power in selected frequency bands for ataxic gait assessment and recognition. The accelerometric data were acquired by 31 time-synchronized sensors (Perception Neuron system) located at different body parts. Several classifiers were evaluated, including support vector machines, Bayesian methods, nearest neighbors, and neural networks, with the highest accuracy reaching 98.5%. The data comprised 13 normal and 12 ataxic individuals, and the entire study was conducted in a clinical environment. Deep learning techniques were applied to predict falls in older adults [9]. Data on fall risk factors in the elderly were collected using WS (accelerometers), questionnaires, and physical tests. The dataset consisted of 296 older adults who wore a triaxial accelerometer on their lower back for a week, followed by six months during which fall incidences and descriptions were recorded. The researchers used the raw accelerometer data (without a preprocessing step) as input to an LSTM classifier, reducing processing time and obtaining an AUC (Area Under the Curve) of 0.75.
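The mean power in selected frequency bands, the DFT-based feature used for ataxic gait assessment in [5], can be sketched as follows (the band edges, sampling rate, and test signal here are illustrative, not the study's actual parameters):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of a signal within a frequency band [lo, hi] Hz,
    estimated from the squared magnitude of the real DFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

fs = 100
t = np.arange(0, 5, 1 / fs)
sig = np.sin(2 * np.pi * 3.0 * t)       # pure 3 Hz component
low = band_power(sig, fs, (1, 5))       # band containing the tone
high = band_power(sig, fs, (10, 20))    # band without it
print(low > high)  # True
```

Computing this per sensor and per band yields a compact feature vector that classifiers such as SVMs or nearest neighbors can operate on directly.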
In summary, most of these works conclude that the more sensors and extracted features, the better the accuracy of the classification algorithm. Nevertheless, sensors mounted on the human body reduce patient comfort (ergonomics), make it more challenging to perform everyday activities, increase noise, and require more time to (pre/post)process data and analyze activities in real time [29].

This entry is adapted from the peer-reviewed paper 10.3390/math11061538
