Locomotion Mode Recognition and Prediction for Active Orthoses

Understanding how to seamlessly adapt the assistance of lower-limb wearable assistive devices (active orthoses (AOs) and exoskeletons) to human locomotion modes (LMs) is challenging. Humans can usually adjust their LM according to the variety of conditions and terrains that they typically face. Challenges in personalized robotics-based assistance are related to recognizing and predicting different LMs with a non-intrusive sensor setup to timely trigger the assistance delivered by the wearable assistive devices.

  • gait rehabilitation
  • locomotion mode recognition and prediction
  • wearable assistive devices

1. Introduction

Humans can usually adjust their locomotion mode (LM) according to a variety of conditions and terrains that they typically face. LMs are composed of static and dynamic tasks. The static tasks correspond to the sitting (SIT) and standing (ST) tasks, while the dynamic tasks are further divided into two more categories: continuous and discrete. The continuous tasks correspond to the level-ground walking (LW), stair ascending (SA) and descending (SD), and ramp ascending (RA) and descending (RD) tasks. On the other hand, the discrete tasks consist of transitions between tasks. All these tasks can be recognized after their occurrence or predicted before it. However, patients with lower-limb impairments face challenges while performing these daily tasks [1].
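As a rough illustration of this taxonomy (not taken from the reviewed studies), the steady-state LMs could be encoded as a label set such as the one below, with discrete tasks represented as ordered pairs of steady-state modes; the exact label set of any given study will differ.

```python
from enum import Enum

class LocomotionMode(Enum):
    # Static tasks
    SIT = "sitting"
    ST = "standing"
    # Continuous dynamic tasks
    LW = "level-ground walking"
    SA = "stair ascent"
    SD = "stair descent"
    RA = "ramp ascent"
    RD = "ramp descent"

# Discrete tasks are transitions between the steady-state modes above,
# usually labeled as ordered pairs (previous mode, next mode).
TRANSITIONS = [
    (LocomotionMode.SIT, LocomotionMode.ST),  # SIT-ST
    (LocomotionMode.ST, LocomotionMode.SIT),  # ST-SIT
    (LocomotionMode.LW, LocomotionMode.SA),   # LW-SA
    (LocomotionMode.SA, LocomotionMode.LW),   # SA-LW
    (LocomotionMode.LW, LocomotionMode.SD),   # LW-SD
    (LocomotionMode.SD, LocomotionMode.LW),   # SD-LW
]
```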
Current challenges in personalized robotics-based assistance are related to recognizing and predicting different LMs with a non-intrusive sensor setup to timely trigger the assistance delivered by the wearable assistive devices. Despite recent advancements, most of the current LM decoding tools integrated into wearable assistive devices (i) address a limited number of daily LMs (are non-generic tools) [2,3]; (ii) present high recognition delays [4,5,6,7] and low prediction times [6,8,9,10]; (iii) do not consider the effect of walking speed variation [6,8,11,12]; (iv) do not present clinical evidence [3,6,8,13]; and (v) do not adapt the assistance to perform the intended motion [3,6,7,8,13]. It is of the utmost importance that wearable assistive devices, such as AOs and exoskeletons, tackle these limitations by embedding algorithms capable of accurate and timely decoding of the user’s LMs to provide personalized assistance [7,14].
Reviews already published provide an overview of the state of the art of wearable assistive devices [1,14,15]. Yan et al. [15] focused on assistive control strategies for lower-limb AOs and exoskeletons, stating that the available devices are able to assist users with lower-limb impairments and describing their control strategies. Despite providing a valuable contribution on the existing assistive devices, topics related to (i) control strategies oriented to the user’s needs and (ii) which sensors and machine learning classifiers should be employed to decode LMs were not deeply explored. On the other hand, Novak et al. [14] provided the strengths and weaknesses of using different sensors (electromyography (EMG), electroencephalography (EEG), and mechanical sensors) and their fusion in several applications, such as LM decoding. However, this review [14] focuses on which sensors can be applied to certain applications, including AOs and exoskeletons, and it does not address how to use the output of the LM decoding tools to control these wearable assistive devices. Labarrière et al.’s [1] review is centered on the sensors and machine learning classifiers used to recognize and predict LMs when using AOs, exoskeletons, and prostheses. Nonetheless, most of the studies presented in [1] were designed for assisted conditions when using prostheses [16,17,18,19,20]. According to [11], the LM recognition performed when using orthotic/exoskeleton systems is different from that performed with prostheses or in conditions without a lower-limb assistive device (non-assisted conditions [21,22,23,24,25,26,27]) due to the distinct positions that the sensors can take in these situations. Additionally, the human–robot interaction also differs between orthotic/exoskeleton systems and prostheses, mainly when the motion activity of the lower limb is limited [28].

2. Which Are the Typical LMs and the Target Population Addressed?

This review shows that LW, SA, and SD are the typical LMs addressed [3,4,5,6,7,8,9,10,11,12,13,30,31,32,34,35] (N = 16), while the LW, SA, SD, RA, and RD tasks correspond to the second most addressed tasks [4,8,11,12,13,30,31,32,35] (N = 9). These results do not follow the findings of [1], in which the most representative decoded tasks were LW, SA, SD, RA, and RD, whereas LW, SA, and SD were the second most explored tasks. This phenomenon may be associated with the fact that most of the studies reported in [1] use prostheses, which may reveal a different tendency compared to AOs and exoskeletons. The high prevalence of decoding dynamic tasks is associated with using these robotic devices to increase the motor independence of injured subjects in their daily lives. However, only three studies decoded the LW, SA, SD, RA, and RD tasks together with the transitions between them [8,30,32]. The walking initiation and termination movements were only explored in [6,34]. Moreover, no studies decoded the ST-RA, RA-ST, ST-RD, and RD-ST transitions. Consequently, no study decoded all commonly performed daily LMs, including the LW, SA, SD, RA, RD, ST, and SIT tasks and the transitions between them under robotic assistance. These facts are in accordance with findings from previous studies [21,23,24], in which the identification of transitions between several tasks when using a wearable assistive device is discussed as one of the main limitations of the current LM decoding tools.
Twelve studies [3,5,6,7,8,11,12,13,32,33,34,35] provided information about the gait speed: in eight studies [3,5,6,7,8,32,34,35], the participants walked at self-selected (uncontrolled) speeds, and in four studies [11,12,13,33], the speed was fixed and controlled. These results follow the ones reported in [1], in which most studies also used self-selected gait speeds. Furthermore, according to [49], the average self-selected speed of neurologically impaired patients is about 0.46 m/s (1.66 km/h). Considering this information and the used gait speeds, only one study [33] seems to address the typical walking speeds of these patients. There is evidence that walking speed affects the lower-limb biomechanics [50,51]. Consequently, the LM decoding tools’ performance may be jeopardized if walking speeds different from those used during the algorithms’ training process (commonly involving healthy subjects walking at higher self-selected speeds than patients) are employed [52,53,54]. Thus, the applicability of the available solutions trained with healthy gait patterns to pathological individuals may be compromised. Additionally, only one of the reviewed studies involved a stroke patient for tool development, which may be insufficient to validate the application of the developed algorithms in this pathological population. This assumption is in accordance with [7,14], which state that LM decoding algorithms developed for healthy participants may not work properly in pathological subjects since the biomechanics of pathological patients differ from those of healthy subjects. Current recommendations suggest including the target population during the algorithms’ development [14].

3. Which Type of Wearable Sensors and Features Are Commonly Used for LM Recognition and Prediction?

Kinematic sensors are the most used for LM decoding. Among potentiometers, encoders, AHRS, and IMU sensors, the latter are the most used type of sensor. Based on the considered literature, physiological sensors are the least used sensor type for LM decoding. These results support the ones reported in [1,14,15]. Although EMG signals may allow recognizing the user’s motion intention faster due to their anticipatory ability (about 100 ms before the muscle contraction [14]), EMG-based approaches have been left behind since EMG sensing is prone to degrade during long-term use as a result of (i) movements between the skin and the electrodes; (ii) temperature variations; and (iii) sweating [7,12,13,32,33,55,56]. These phenomena can cause an incorrect identification of the user’s LM. Moreover, according to [14,15], the use of EMG signals to decode the LMs of pathological users (such as stroke patients) is prone to provide low accuracies since the muscular activity of these users may vary across time and during the execution of the LMs as a result of fatigue. For this reason, the target population should be included during the algorithms’ training [14].
Considering the findings of [1,14], the classification accuracy of LM decoding algorithms may increase when data from kinematic, kinetic, and/or EMG systems are fused. The review [14] found that fusing data from EMG and IMU sensors is beneficial since the effect of the sensor position on the user’s limb (which affects the EMG signals recorded) may be compensated by the position information provided by IMU sensors. Additionally, as reported in [1], the accuracy of classification algorithms fed by data from IMU and kinetic sensors is higher than that of algorithms fed exclusively by IMUs. On the other hand, based on the reviewed studies, classifiers that only used IMU sensors [5,9,10,11,12,32,34] achieve, in general, similar classification accuracy to the ones that fused data from IMU sensors and kinetic sensors (load cells [13,35], FSRs [4,31,57], and GRF sensors [30] embedded into the assistive device). In some cases [5,10,32], the exclusive use of IMU sensors seems to provide higher average accuracies when compared to the combination of IMUs and other sensors, which does not support the findings of [1].
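For clarity, the sketch below illustrates what feature-level fusion amounts to in practice: feature vectors computed from the same analysis window of each modality are concatenated into a single classifier input. The array sizes and contents are illustrative placeholders, not taken from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature vectors computed from the same analysis window, one per modality
# (dimensions and contents are placeholders, not real data).
imu_features = rng.standard_normal(30)      # e.g. statistics of angular velocities/accelerations
kinetic_features = rng.standard_normal(5)   # e.g. statistics of load-cell or pressure-insole forces

# Feature-level fusion: concatenate into a single input vector for the classifier.
fused_input = np.concatenate([imu_features, kinetic_features])
print(fused_input.shape)  # (35,)
```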
Although combining IMUs with other sensors does not seem to provide higher accuracy, this combination seems to provide meaningful advances in terms of decoding time. In [8], lower recognition delays were obtained by fusing kinematic (AHRS sensors placed on the shank and foot segments) and kinetic (pressure insoles) sensors, adding the ability to predict the LW task when preceded by the SA task and vice-versa. Moreover, the fusion of pressure insoles with encoders typically embedded in the wearable assistive device to measure the joint angles appears to contribute to the decoding of other tasks and transitions, namely: SIT, ST, SIT-ST, ST-SIT, LW-ST, ST-LW, SA-ST, SD-ST, ST-SA, and ST-SD. In addition to being an essential contribution to recognizing the referred tasks, the fusion of pressure insoles with encoders in [6] allowed predicting the SIT task preceded by the ST task, and vice-versa, before their occurrence.
Further, raw temporal data directly measured by the sensors were the most common input data for LM decoding tools. Eight studies [2,3,4,5,6,13,30,31] used raw temporal data extracted from sensors, namely: (i) hip, knee, and ankle joint angles, depending on the assisted joint [2,3,6,32]; (ii) inclination angles of the thigh, shank, and foot [4,13,31]; and (iii) vertical position and velocity of the striking foot [3,13,31]. This may be related to the time consumption associated with feature determination. The remaining studies used an analysis window to compute time-domain [5,7,8,9,10,11,12,34,35] (N = 9) and frequency-domain [8] (N = 1) features. The dominance of time-domain over frequency-domain features was also reported in [1]. It may be associated with the fact that time-domain features are easier and faster to compute. The most used time-domain features were (i) the hip joint angles and CoP at the heel-strike event [7]; (ii) maximum and minimum thigh, shank, and knee angles [5]; and (iii) maximum, minimum, mean, standard deviation, and root mean square of the thigh and shank inclination angles and triaxial angular velocities and accelerations [9,10,11,12], while the reported frequency-domain features were the (i) GRF during the swing phase [8] and (ii) thigh and foot inclination angles [8].
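As a minimal sketch of how such time-domain features can be computed from one analysis window (the channel layout, window size, and data are assumptions for illustration only):

```python
import numpy as np

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Compute per-channel time-domain features over one analysis window.

    `window` has shape (n_samples, n_channels), e.g. thigh and shank
    inclination angles plus triaxial angular velocities and accelerations.
    """
    return np.concatenate([
        window.max(axis=0),                     # maximum
        window.min(axis=0),                     # minimum
        window.mean(axis=0),                    # mean
        window.std(axis=0),                     # standard deviation
        np.sqrt(np.mean(window ** 2, axis=0)),  # root mean square
    ])

# Example: a 250 ms window sampled at 100 Hz (25 samples) from 8 channels.
rng = np.random.default_rng(0)
window = rng.standard_normal((25, 8))
features = time_domain_features(window)  # 5 features x 8 channels = 40 values
```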
According to the findings reported in [1], it is preferable to use windows with a length varying between 100 and 250 ms when mechanical sensors (such as kinematic or kinetic) are used. Based on the seven studies that presented information regarding the analysis windows, various combinations were used: (i) window length of 100 ms with an increment of 10 ms [34,35]; (ii) window length of 100 ms with an increment of 50 ms [32]; (iii) window length of 150 ms with an increment of 10 ms [10]; (iv) window length of 200 ms with an increment of 10 ms [8]; and (v) window length of 250 ms and an increment of 10 ms [11,12]. These findings align with those reported in [1] since the analysis windows match the range from 100 to 250 ms.
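A simple sketch of this sliding-window segmentation, assuming a generic sampling frequency and using a 250 ms window with a 10 ms increment as the default (all values are configurable and only illustrative):

```python
import numpy as np

def sliding_windows(signal: np.ndarray, fs_hz: float,
                    window_ms: float = 250.0, increment_ms: float = 10.0):
    """Yield successive analysis windows from a (n_samples, n_channels) signal.

    Window length and increment are given in milliseconds and converted to
    samples using the sampling frequency `fs_hz`.
    """
    win = int(round(window_ms * fs_hz / 1000.0))
    step = int(round(increment_ms * fs_hz / 1000.0))
    for start in range(0, signal.shape[0] - win + 1, step):
        yield signal[start:start + win]

# Example: 5 s of data at 100 Hz, 250 ms windows sliding by 10 ms (1 sample).
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 8))
n_windows = sum(1 for _ in sliding_windows(data, fs_hz=100.0))
print(n_windows)  # 476 windows of 25 samples each
```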

4. Which Set of Algorithms Should Be Employed to Recognize/Predict Different LMs Attending to Accuracy and Time-Effectiveness?

Generally, the algorithm’s performance for LM decoding is evaluated based on two metrics: (i) the accuracy of the classification process and (ii) the recognition delay, which represents the period between the instant at which the locomotion mode starts and the instant at which that locomotion mode is recognized. Typically, this recognition delay is expressed as a percentage of a gait cycle, and it should be as low as possible [23]. Ideally, the recognition delay should be negative, indicating that the motion task was predicted, i.e., recognized before its occurrence.
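A minimal sketch of this metric, assuming the onset of the new LM, the instant of its recognition, and the gait-cycle duration are known (all timestamps below are illustrative); a negative value means the LM was predicted before it started:

```python
def recognition_delay_percent(lm_start_s: float, recognized_at_s: float,
                              gait_cycle_s: float) -> float:
    """Recognition delay expressed as a percentage of the gait cycle.

    Positive: the new LM was recognized after its onset.
    Negative: the new LM was predicted before its onset.
    """
    return 100.0 * (recognized_at_s - lm_start_s) / gait_cycle_s

# Examples with illustrative numbers, assuming a 1.2 s gait cycle.
print(recognition_delay_percent(10.0, 10.3, 1.2))   # 25.0  -> recognized 25% into the cycle
print(recognition_delay_percent(10.0, 9.95, 1.2))   # about -4.2 -> predicted 50 ms in advance
```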
The choice of classifier depends on the purpose of each study. Considering the accuracy values, the FSM, MFNN, and CNN may be the most appropriate classifiers to decode static LMs, namely the ST and SIT tasks, since until now they were the only algorithms applied for this purpose [6,7,9,11,12,34,35]. In addition, if the goal is to distinguish static tasks and transitions between static tasks (ST, SIT, SIT-ST, and ST-SIT), then it becomes more feasible to choose the FSM [6,7,9]. On the other hand, if the goal is to distinguish continuous dynamic tasks, MFNN, FSM, and DT classifiers appear to provide a higher capability [4,13,30,31]. Additionally, to decode continuous dynamic tasks and static tasks together (ST, LW, SA, SD, RA, and RD), FSM or MFNN may be preferable [3,11,12]. Otherwise, if the goal is to distinguish dynamic tasks and transitions between dynamic tasks, the SVM, CNN, kNN, stacked autoencoder-based deep neural network, or FSM may be employed [2,8,9,10,30,32]. At last, to decode static and continuous tasks and transitions between them, the FSM combined with the SVM classifier may be the best option [9]. This choice of classifier for a given LM depends not only on the classifier’s performance but also on the input information used. For example, the CNN can distinguish dynamic tasks and transitions between them with high accuracy only when fed with IMU data; if the CNN is fed with IMU, encoder, and GRF information together, the classification accuracy seems to drop [30,32]. For this reason, the data used to feed the algorithm play an important role in the algorithm’s performance.
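As a hedged illustration of how one of these classifiers could be trained on window features, a minimal SVM pipeline is sketched below using scikit-learn; the data, labels, and hyperparameters are placeholders and not those of the reviewed studies.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder dataset: one feature vector per analysis window and one LM label
# (0 = LW, 1 = SA, 2 = SD); a real study would use features such as those above.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 40))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))  # near chance here, since the data are random
```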
Most of the LMs were recognized between 3.96% and 100% of the gait cycle of the new LM, which means that the new LM is identified between 3.96% (shortly after its beginning) and 100% (the end of the first gait cycle) of the new LM. This phenomenon may cause perturbations during the assistance due to the delay in identifying the new LM. Thus, timely assistance according to the user’s needs may not be achieved. Nonetheless, four studies were able to predict some tasks before their occurrence, among which the SVM classifier stands out [6,8,9,10], achieving a prediction from 5.4 to 78.5 ms before the new LM [9,10]. Moreover, different decoding times were achieved when the leading leg was with or without the wearable assistive device. In [10], the SA and SD tasks preceded by the LW task were predicted 5.4 ± 74.6 ms and 78.5 ± 25.0 ms before their occurrence when the leading leg was the leg without the wearable assistive device. In the same study [10], the SD task was predicted 1.8 ± 39.2 ms in advance, while the LW task was predicted 46.7 ± 46.6 ms before its occurrence. These results suggest that the leading leg may affect the temporal performance of the classifier.

5. How to Adapt the Exoskeleton/Orthosis Assistance According to the Decoded User’s LM

Considering the findings of this review, five studies adapted the assistance of the wearable assistive device according to the user’s decoded LM [4,7,9,32,33].
Based on the collected information, the provision of assistance mostly depends on both the ongoing LM and gait events, namely stance and swing phases or specific gait events, such as heel-strike, heel-off, and toe-off. Two designs can be employed to adapt the wearable assistive device’s assistance according to the decoded LM, namely a two-level [4,7,34] and a three-level [9,32,33] hierarchical control. While in [4] a constant torque of 10 N·m was provided by an ankle orthosis when the SD task was recognized, study [7] adapted the joint torque trajectories from [42,43,44,45] according to the recognized tasks (SIT, ST, LW, SA, and SD tasks and transitions between them). Apart from the difference in the torque trajectories, the main distinction between the control of [4] and [7] is that, while in [4] a gait-phase estimation algorithm ran before the LM recognition tool to provide features for LM decoding, in [7] the gait-phase estimation algorithm ran after the LM recognition to set the correct time to adapt the assisted torque. Thus, the development of LM decoding tools dependent on preliminary gait algorithms may affect the recognition delay and, consequently, the time compliance of the wearable assistive device’s control. This phenomenon may be the reason for the one-step delay (100% of the gait cycle) exhibited in [4], which is higher than the recognition delay presented in [7] (from 15.2 to 63.8% of the gait cycle). Moreover, despite being a promising implementation, the feasibility of the assistance provided in [7] may be compromised because adopting trajectories from the literature should be done carefully, since these trajectories depend on the walking speed and the user’s anthropometry [50]. According to [4], the ideal would be to embed in the control scheme a library of trajectory profiles covering various motion tasks, walking speeds, and user anthropometries. This would be a valuable contribution to selecting the required trajectory to assist the user according to their motion intentions.
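In a very simplified form, such a library could be a lookup keyed by task and walking-speed band, interpolated at the current gait-cycle percentage; every name and value below is hypothetical and only meant to convey the idea.

```python
import numpy as np

# Hypothetical torque-trajectory library keyed by (task, speed band).
# Each entry maps a normalized gait-cycle percentage (0-100) to a joint torque (N·m).
TRAJECTORY_LIBRARY = {
    ("LW", "slow"):   {"percent": np.linspace(0, 100, 5), "torque": np.array([0.0, 8.0, 4.0, -2.0, 0.0])},
    ("LW", "normal"): {"percent": np.linspace(0, 100, 5), "torque": np.array([0.0, 12.0, 6.0, -3.0, 0.0])},
    ("SD", "slow"):   {"percent": np.linspace(0, 100, 5), "torque": np.array([0.0, 10.0, 10.0, 5.0, 0.0])},
}

def reference_torque(task: str, speed_band: str, gait_percent: float) -> float:
    """Interpolate the stored profile at the current gait-cycle percentage."""
    profile = TRAJECTORY_LIBRARY[(task, speed_band)]
    return float(np.interp(gait_percent, profile["percent"], profile["torque"]))

print(reference_torque("LW", "normal", 30.0))  # torque reference at 30% of the gait cycle
```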
In [9,32,33], a three-level hierarchical control architecture was followed. While in [32] the high level presented a gait-phase estimation algorithm followed by the LM recognition tool, in [9] the LM recognition tool was not dependent on the gait-phase detection algorithm. As with [4,7], this non-dependence on preliminary gait analysis tools may explain the ability of [9] to predict some LMs (−78 ms to 38 ms), whereas no prediction was reported in [10] (3.96 to 24% of the gait cycle). Regarding the mid-level, different choices were made in each study, namely: (i) a POILC method for assisting the LW, SA, and SD tasks [32]; (ii) a hybrid torque method for assisting the ST, LW, SA, and SD tasks [9]; and (iii) an EMG-based torque method for assisting the LW task [33]. It is not possible to suggest a control scheme to assist a specific motion since there is no benchmarking analysis of the controllers’ performance across different LMs. However, according to [52], a hybrid control approach should be adopted to assist the user or reduce the constraints associated with the actuator’s friction only when needed.
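To make the three-level idea concrete, the skeleton below (not taken from any of the cited controllers) chains a high-level LM decoder, a mid-level torque strategy selected per LM, and a low-level torque-tracking command; all functions, strategies, and gains are stubs for illustration.

```python
from typing import Callable, Dict

def high_level_decode(window_features) -> str:
    """High level: LM recognition/prediction from the latest analysis window.
    A trained classifier would be called here; a fixed label is returned as a stub."""
    return "LW"

def mid_level_torque(lm: str, gait_percent: float,
                     strategies: Dict[str, Callable[[float], float]]) -> float:
    """Mid level: select and evaluate the torque strategy associated with the decoded LM."""
    return strategies[lm](gait_percent)

def low_level_command(torque_ref: float, measured_torque: float, kp: float = 2.0) -> float:
    """Low level: simple proportional torque tracking toward the reference."""
    return kp * (torque_ref - measured_torque)

# Hypothetical per-LM torque strategies (e.g. profile-based for LW, constant for SD).
strategies = {"LW": lambda p: 10.0 if p < 60.0 else 0.0,
              "SD": lambda p: 10.0}

lm = high_level_decode(window_features=None)
torque_ref = mid_level_torque(lm, gait_percent=35.0, strategies=strategies)
command = low_level_command(torque_ref, measured_torque=8.5)
print(lm, torque_ref, command)
```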
Providing efficient assistance according to the decoded user’s LM implies an accurate and timely identification of the user’s intentions. This is of utmost importance since the higher the anticipation time in identifying the LM, the more time remains to switch the control and assist the users according to their needs in a timely manner. This means that, ideally, the decoded LM should be recognized before its occurrence, i.e., predicted, to adapt the assistance according to the LM identified. Considering the five studies that adapt the assistance according to the decoded LM, only one study [9] enabled the prediction of the LW-SA, SA-LW, LW-SD, and SD-LW transitions before their occurrence. This may be related to the non-dependence of the LM recognition algorithm on gait event detection algorithms.
Current directions recommend the use of co-adaptive control assistance, known as the “Assist-as-Needed” (AAN) approach, in which the patients are encouraged to participate in the rehabilitation tasks and the wearable assistive device only assists when and as much as required, helping the users to accomplish a specific motion [58,59,60]. However, among the collected studies, even those that adapt the assistance according to the user’s motion intention [4,7,9,32,33], none followed an AAN approach. Thus, the adoption of the current LM-driven control strategies during the rehabilitation of neurologically impaired patients may be compromised.
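None of the reviewed studies implemented it, but one common way to realize an AAN behaviour is to scale the assistive torque with the user’s tracking error, so that the device stays passive while the user follows the reference motion and intervenes only when the deviation grows; the gain, dead band, and torque limit below are illustrative assumptions, not values from the cited works.

```python
def assist_as_needed_torque(ref_angle_deg: float, measured_angle_deg: float,
                            k_assist: float = 1.5, dead_band_deg: float = 3.0,
                            max_torque_nm: float = 15.0) -> float:
    """Assistive torque proportional to the tracking error, with a dead band.

    Inside the dead band the user is considered able to follow the reference
    and the device stays passive; outside it, assistance grows with the error
    up to a saturation limit.
    """
    error = ref_angle_deg - measured_angle_deg
    if abs(error) <= dead_band_deg:
        return 0.0
    shifted = error - dead_band_deg if error > 0 else error + dead_band_deg
    torque = k_assist * shifted
    return max(-max_torque_nm, min(max_torque_nm, torque))

print(assist_as_needed_torque(20.0, 18.5))  # 0.0  -> user tracks well, no assistance
print(assist_as_needed_torque(20.0, 10.0))  # 10.5 -> device assists toward the reference
```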

This entry is adapted from the peer-reviewed paper 10.3390/s22197109
