Locomotion Mode Recognition and Prediction for Active Orthoses

Understanding how to seamlessly adapt the assistance of lower-limb wearable assistive devices (active orthoses (AOs) and exoskeletons) to human locomotion modes (LMs) is challenging. Humans can usually adjust their locomotion mode (LM) according to a variety of conditions and terrains that they typically face. Challenges in personalized robotics-based assistance are related to recognizing and predicting different LMs with a non-intrusive sensor setup to timely trigger the assistance delivered by the wearable assistive devices.

Keywords: gait rehabilitation; locomotion mode recognition and prediction; wearable assistive devices

1. Introduction

Humans can usually adjust their locomotion mode (LM) according to a variety of conditions and terrains that they typically face. LMs comprise static and dynamic tasks. The static tasks correspond to the sitting (SIT) and standing (ST) tasks, while the dynamic tasks are further divided into two categories: continuous and discrete. The continuous tasks correspond to the level-ground walking (LW), stair ascending (SA) and descending (SD), and ramp ascending (RA) and descending (RD) tasks, whereas the discrete tasks consist of transitions between tasks. All these tasks can be recognized after their occurrence or predicted before it. However, patients with lower-limb impairments face challenges while performing these daily tasks [1].
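As a purely illustrative sketch (the labels below follow the abbreviations introduced in this section but are not prescribed by the cited studies), this taxonomy can be encoded with static and continuous modes as an enumeration and discrete tasks as ordered pairs of modes:

```python
from enum import Enum

class LocomotionMode(Enum):
    # Static tasks
    SIT = "sitting"
    ST = "standing"
    # Continuous dynamic tasks
    LW = "level-ground walking"
    SA = "stair ascending"
    SD = "stair descending"
    RA = "ramp ascending"
    RD = "ramp descending"

# Discrete tasks are transitions between two modes, e.g. ST-LW or LW-SA;
# a transition can be represented as an ordered pair of modes.
st_to_lw = (LocomotionMode.ST, LocomotionMode.LW)
```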
Current challenges in personalized robotics-based assistance are related to recognizing and predicting different LMs with a non-intrusive sensor setup to timely trigger the assistance delivered by the wearable assistive devices. Despite recent advancements, most of the current LM decoding tools integrated into wearable assistive devices (i) address a limited number of daily LMs (i.e., are non-generic tools) [2][3]; (ii) present high recognition delays [4][5][6][7] and low prediction times [6][8][9][10]; (iii) do not consider the effect of walking speed variation [6][8][11][12]; (iv) do not present clinical evidence [3][6][8][13]; and (v) do not adapt the assistance to perform the intended motion [3][6][7][8][13]. It is of the utmost importance that wearable assistive devices, such as AOs and exoskeletons, tackle these limitations by embedding algorithms capable of accurate and timely decoding of the user’s LMs to provide personalized assistance [7][14].
Studies already published provide an overview of the state of the art of wearable assistive devices [1][14][15]. Yan et al. [15] focused on assistive control strategies for lower-limb AOs and exoskeletons, stating that the available devices are able to assist users with lower-limb impairments and specifying their control strategies. Despite providing a valuable contribution on the existing assistive devices, topics related to (i) control strategies oriented to the user’s needs and (ii) which sensors and machine learning classifiers should be employed to decode LMs were not deeply explored. On the other hand, Novak et al. [14] present the strengths and weaknesses of using different sensors (electromyography (EMG), electroencephalography (EEG), and mechanical sensors) and their fusion in several applications, such as LM decoding. However, that work [14] focuses on which sensors can be applied to particular applications, including AOs and exoskeletons, and it does not address how to use the output of the LM decoding tools to control these wearable assistive devices. Labarrière et al.’s [1] research is centered on the sensors and machine learning classifiers used to recognize and predict LMs when using AOs, exoskeletons, and prostheses. Nonetheless, most of the studies presented in [1] were designed for assisted conditions when using prostheses [16][17][18][19][20]. According to [11], LM recognition performed with orthotic/exoskeleton systems differs from that with prostheses or conditions without a lower-limb assistive device (non-assisted conditions [21][22][23][24][25][26][27]) due to the distinct positions that the sensors can take in these situations. Additionally, the human–robot interaction also differs between orthotic/exoskeleton systems and prostheses, mainly when the motion activity of the lower limb is limited [28].

2. Which LMs and Target Populations Are Typically Addressed?

The reviewed studies show that LW, SA, and SD are the most commonly addressed LMs [3][4][5][6][7][8][9][10][11][12][13][29][30][31][32][33] (N = 16), while the LW, SA, SD, RA, and RD tasks correspond to the second most addressed set [4][8][11][12][13][29][30][31][33] (N = 9). These results do not follow the findings of [1], where the most representative decoded tasks were LW, SA, SD, RA, and RD, whereas LW, SA, and SD were the second most explored tasks. This difference may be associated with the fact that most of the studies reported in [1] use prostheses, which may reveal a different tendency compared to AOs and exoskeletons. The high prevalence of decoding dynamic tasks is associated with using these robotic devices to increase the motor independence of injured subjects in their daily lives. However, only three studies decoded the LW, SA, SD, RA, and RD tasks together with the transitions between them [8][29][31]. The walking initiation and termination movements were only explored in [6][32]. Moreover, no studies decoded the ST-RA, RA-ST, ST-RD, and RD-ST transitions. Consequently, no study decoded all commonly performed daily LMs, including the LW, SA, SD, RA, RD, ST, and SIT tasks and the transitions between them, under robotic assistance. These facts are in accordance with findings from previous studies [21][23][24], in which the identification of transitions between several tasks when using a wearable assistive device is discussed as one of the main limitations of the current LM decoding tools.
Twelve studies [3][5][6][7][8][11][12][13][31][32][33][34] provided information about gait speed: in eight studies [3][5][6][7][8][31][32][33], the participants walked at self-selected speeds (i.e., speed was not controlled), and in four studies [11][12][13][34], the speed was fixed and controlled. These results follow those reported in [1], where studies with self-selected gait speeds were also more common. Furthermore, according to [35], the average self-selected speed of neurologically impaired patients is about 0.46 m/s (1.66 km/h). Considering this information and the gait speeds used, only the study [34] seems to address the typical walking speeds of these patients. There is evidence that walking speed affects lower-limb biomechanics [36][37]. Consequently, the LM decoding tools’ performance may be jeopardized if walking speeds different from those used during the algorithms’ training process (commonly involving healthy subjects walking at higher self-selected speeds than patients) are employed [38][39][40]. Thus, the applicability of the available solutions trained with healthy gait patterns to pathological individuals may be compromised. Additionally, only one of the studies involved a stroke patient for tool development, which may be insufficient to validate the application of the developed algorithms in this pathological population. This assumption is in accordance with [7][14], which state that LM decoding algorithms developed for healthy participants may not work properly in pathological subjects since the biomechanics of pathological patients differ from those of healthy subjects. The recommendations suggest including the target population during the algorithms’ development [14].

3. Which Type of Wearable Sensors and Features Are Commonly Used for LM Recognition and Prediction?

Kinematic sensors are the most used for LM decoding. Among potentiometers, encoders, AHRS, and IMU sensors, the latter are the most widely used. Based on the considered literature, physiological sensors are the least used sensor type for LM decoding. These results support those reported in [1][14][15]. Although EMG signals may allow recognizing the user’s motion intention faster due to their anticipatory ability (about 100 ms before the muscle contraction [14]), EMG-based approaches have been left behind since EMG sensing is prone to degrade during long-term use as a result of (i) movements between the skin and the electrodes; (ii) temperature variations; and (iii) sweating [7][12][13][31][34][41][42]. These phenomena can cause an incorrect identification of the user’s LM. Moreover, according to [14][15], the use of EMG signals to decode the LMs of pathological users (such as stroke patients) is prone to provide low accuracies because the muscular activity of pathological users may vary across time and during the execution of the LMs as a result of fatigue. For this reason, the target population should be included during the algorithms’ training [14].
Considering the findings of [1][14], the classification accuracy of LM decoding algorithms may increase when data from kinematic, kinetic, and/or EMG systems are fused. The work in [14] found that fusing data from EMG and IMU sensors is beneficial since the effect of the sensor position on the user’s limb (which affects the recorded EMG signals) may be compensated by the position information provided by IMU sensors. Additionally, as reported in [1], the accuracy of classification algorithms fed by data from IMU and kinetic sensors is higher than that of algorithms fed exclusively with IMUs. On the other hand, based on the reviewed studies, classifiers that only used IMU sensors [5][9][10][11][12][31][32] achieve, in general, classification accuracy similar to those that fused data from IMU sensors and kinetic sensors (load cells [13][33], FSRs [4][30][43], and GRF sensors [29] embedded into the assistive device). In some cases [5][10][31], the exclusive use of IMU sensors seems to provide higher average accuracies than the combination of IMUs and other sensors, which does not support the findings of [1].
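A minimal sketch of feature-level fusion, assuming per-window feature vectors have already been computed for each sensor modality (the array sizes are arbitrary examples, not values taken from the cited studies):

```python
import numpy as np

def fuse_features(imu_features: np.ndarray, kinetic_features: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate per-window feature vectors from IMU
    and kinetic sensors (e.g., load cells, FSRs, GRF) into a single vector
    that feeds one classifier."""
    return np.concatenate([imu_features, kinetic_features])

imu_window_features = np.random.rand(12)      # e.g., IMU-derived features for one window
kinetic_window_features = np.random.rand(4)   # e.g., GRF/FSR-derived features for the same window
fused = fuse_features(imu_window_features, kinetic_window_features)  # shape (16,)
```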
Although combining IMUs with other sensors does not seem to provide higher accuracy, this combination seems to provide meaningful advances in terms of decoding time. In [8], lower recognition delays were obtained by fusing kinematic (AHRS sensors placed on the shank and foot segments) and kinetic (pressure insoles) sensors, adding the ability to predict the LW task when preceded by the SA task and vice-versa. Moreover, the fusion of pressure insoles with encoders typically embedded in the wearable assistive device to measure the joint angles appears to contribute to the decoding of other tasks and transitions, namely SIT, ST, SIT-ST, ST-SIT, LW-ST, ST-LW, SA-ST, SD-ST, ST-SA, and ST-SD. In addition to being an essential contribution to recognizing these tasks, the fusion of pressure insoles with encoders in [6] enabled the prediction of the SIT task preceded by the ST task, and vice-versa, before their occurrence.
Further, raw temporal data directly measured by the sensors were the most common input data for LM decoding tools. Eight studies [2][3][4][5][6][13][29][30] used raw temporal data extracted from sensors, namely (i) hip, knee, and ankle joint angles, depending on the assisted joint [2][3][6][31]; (ii) inclination angles of the thigh, shank, and foot [4][13][30]; and (iii) vertical position and velocity of the striking foot [3][13][30]. This may be related to the time consumption associated with feature computation. The remaining studies used an analysis window to compute time-domain [5][7][8][9][10][11][12][32][33] (N = 9) and frequency-domain [8] (N = 1) features. The dominance of time-domain over frequency-domain features was also reported in [1] and may be associated with the fact that time-domain features are easier and faster to compute. The most used time-domain features were (i) the hip joint angles and CoP at the heel-strike event [7]; (ii) the maximum and minimum thigh, shank, and knee angles [5]; and (iii) the maximum, minimum, mean, standard deviation, and root mean square of the thigh and shank inclination angles and triaxial angular velocities and accelerations [9][10][11][12], while the reported frequency-domain features were the (i) GRF during the swing phase [8] and (ii) thigh and foot inclination angles [8].
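A sketch of how the time-domain features named above (maximum, minimum, mean, standard deviation, and root mean square) could be computed for one analysis window; the (samples × channels) layout is an assumption made here for illustration:

```python
import numpy as np

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Compute max, min, mean, standard deviation, and RMS per channel for one
    analysis window of shape (n_samples, n_channels), e.g., thigh and shank
    inclination angles plus triaxial angular velocities and accelerations."""
    return np.concatenate([
        window.max(axis=0),
        window.min(axis=0),
        window.mean(axis=0),
        window.std(axis=0),
        np.sqrt(np.mean(window ** 2, axis=0)),  # root mean square
    ])

features = time_domain_features(np.random.rand(25, 6))  # 5 features x 6 channels = 30 values
```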
According to the findings reported in [1], it is preferable to use windows with a length varying between 100 and 250 ms when mechanical sensors (such as kinematic or kinetic) are used. Based on the seven studies that presented information regarding the analysis windows, various combinations were used: (i) window length of 100 ms with an increment of 10 ms [32][33]; (ii) window length of 100 ms with an increment of 50 ms [31]; (iii) window length of 150 ms with an increment of 10 ms [10]; (iv) window length of 200 ms with an increment of 10 ms [8]; and (v) window length of 250 ms and an increment of 10 ms [11][12]. These findings align with those reported in [1] since the analysis windows match the range from 100 to 250 ms.
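A sliding-window segmentation consistent with the window lengths and increments reported above can be sketched as follows (the 100 Hz sampling rate in the example is an assumption, not a value from the cited studies):

```python
import numpy as np

def sliding_windows(signal: np.ndarray, fs: float, win_ms: float, step_ms: float):
    """Yield overlapping analysis windows of length `win_ms`, advanced by
    `step_ms`, from a (n_samples, n_channels) signal sampled at `fs` Hz."""
    win = int(round(win_ms * fs / 1000.0))
    step = int(round(step_ms * fs / 1000.0))
    for start in range(0, signal.shape[0] - win + 1, step):
        yield signal[start:start + win]

stream = np.random.rand(1000, 6)  # 10 s of 6-channel data at 100 Hz
windows = list(sliding_windows(stream, fs=100.0, win_ms=250.0, step_ms=10.0))
```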

4. Which Set of Algorithms Should Be Employed to Recognize/Predict Different LMs Considering Accuracy and Time-Effectiveness?

Generally, the algorithm’s performance for LM decoding is evaluated based on two metrics: (i) the accuracy of the classification process and (ii) the recognition delay, which represents the period between the instant in which the locomotion mode starts and the instant in which that locomotion mode is recognized. Typically, this recognition delay is evaluated as a percentage of a gait cycle, and it should be as low as possible [23]. Ideally, the recognition delay would need to be negative, indicating that the motion task was predicted, i.e., recognized before its occurrence.
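The recognition delay metric described above can be written as a simple function (the sample indices and gait-cycle length below are hypothetical inputs for illustration):

```python
def recognition_delay_pct(transition_sample: int, recognized_sample: int,
                          cycle_samples: int) -> float:
    """Recognition delay as a percentage of a gait cycle: the interval between
    the instant a new LM starts and the instant it is recognized. A negative
    value means the LM was predicted before its occurrence."""
    return 100.0 * (recognized_sample - transition_sample) / cycle_samples

# e.g., a transition recognized 30 samples after it starts, with 120 samples per gait cycle
delay = recognition_delay_pct(transition_sample=500, recognized_sample=530, cycle_samples=120)  # 25.0
```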
The choice between classifiers depends on the purpose of each study. Considering the accuracy values, the FSM, MFNN, and CNN may be the most appropriate classifiers to decode static LMs, namely the ST and SIT tasks, since, until now, they were the only algorithms applied for this purpose [6][7][9][11][12][32][33]. In addition, if the goal is to distinguish static tasks and transitions between static tasks (ST, SIT, SIT-ST, and ST-SIT), then it becomes more feasible to choose the FSM [6][7][9]. On the other hand, if the goal is to distinguish continuous dynamic tasks, the MFNN, FSM, and DT classifiers appear to provide a higher capability [4][13][29][30]. Additionally, to decode continuous dynamic tasks and static tasks together (ST, LW, SA, SD, RA, and RD), the FSM or MFNN may be preferable [3][11][12]. Otherwise, if the goal is to distinguish dynamic tasks and transitions between dynamic tasks, the SVM, CNN, kNN, stacked autoencoder-based deep neural network, or FSM may be employed [2][8][9][10][29][31]. Lastly, to decode static and continuous tasks and the transitions between them, the FSM combined with the SVM classifier may be the best option [9]. This choice between classifiers according to the LM to decode depends not only on the classifier performance but also on the input information used. For example, the CNN has the capacity to distinguish dynamic tasks and transitions between them with high accuracy only when fed with IMU data. If the CNN is fed with IMU, encoder, and GRF data together, the classification accuracy seems to drop [29][31]. For this reason, the data used to feed the algorithm play an important role in the algorithm’s performance.
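For illustration only, a windowed-feature classifier such as the SVM mentioned above could be trained along these lines (random data stands in for real sensor features; scikit-learn is assumed to be available):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training set: one feature vector per analysis window,
# labelled with the LM or transition active in that window.
X = np.random.rand(400, 16)
y = np.random.choice(["LW", "SA", "SD", "LW-SA", "SA-LW"], size=400)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
decoded_lm = clf.predict(X[:1])[0]  # decoded LM for a new analysis window
```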
Most of the LMs were recognized between 3.96% and 100% of the gait cycle in the new LM, which means that the new LM is identified between 3.96% (after its beginning) and 100% (the end of the first gait cycle) of the new LM. This delay in identifying the new LM may cause perturbations during the assistance, and timely assistance according to the user’s needs may not be achieved. Nonetheless, four studies were able to predict some tasks before their occurrence, among which the SVM classifier stands out [6][8][9][10], achieving a prediction from 5.4 to 78.5 ms before the new LM [9][10]. Moreover, different decoding times were achieved depending on whether the leading leg was the one with or without the wearable assistive device. In [10], the SA and SD tasks preceded by the LW task were predicted 5.4 ± 74.6 ms and 78.5 ± 25.0 ms before their occurrence when the leading leg was the leg without the wearable assistive device. In the same study [10], the SD task was predicted 1.8 ± 39.2 ms in advance, while the LW task was predicted 46.7 ± 46.6 ms before its occurrence. These results support that the leading leg may affect the temporal performance of the classifier.

5. How to Adapt the Exoskeleton/Orthosis Assistance According to the Decoded User’s LM

Considering the reviewed findings, five studies adapted the assistance of the wearable assistive device according to the user’s decoded LM [4][7][9][31][34].
Based on the collected information, the provision of assistance mostly depends on both the ongoing LM and gait events, namely the stance and swing phases or specific gait events, such as heel-strike, heel-off, and toe-off. Two designs can be employed to adapt the wearable assistive device’s assistance according to the decoded LM, namely a two-level [4][7][32] and a three-level [9][31][34] hierarchical control. While in [4] a constant torque of 10 N·m was provided by an ankle orthosis when the SD task was recognized, study [7] adapted the joint torque trajectories from [44][45][46][47] according to the recognized tasks (SIT, ST, LW, SA, and SD tasks and transitions between them). Apart from the difference in the torque trajectories, the main distinction between the control of [4] and [7] is that, in [4], a gait-phase estimation algorithm ran before the LM recognition tool to provide features for LM decoding, whereas in [7], the gait-phase estimation algorithm ran after the LM recognition to set the correct time to adapt the assisted torque. Thus, the development of LM decoding tools dependent on preliminary gait algorithms may affect the recognition delay and, consequently, the time compliance of the wearable assistive device’s control. This phenomenon may be the reason for the one-step delay (100% of the gait cycle) exhibited in [4], which is higher than the recognition delay presented in [7] (from 15.2 to 63.8% of the gait cycle). Moreover, despite being a promising implementation, the feasibility of the assistance provided in [7] may be compromised because adopting trajectories from the literature should be done carefully, since these trajectories depend on walking speed and the user’s anthropometry [36]. According to [4], the ideal scenario would be to have a library of trajectory profiles for various motion tasks, walking speeds, and user anthropometries embedded in the control scheme. This would be a valuable contribution to selecting the required trajectory to assist the user according to their motion intentions.
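A minimal sketch of the lower level of such a two-level scheme, mapping the decoded LM and the current gait phase to a joint torque command (the values and labels are illustrative, not those of the cited controllers):

```python
def assistive_torque(decoded_lm: str, gait_phase: str) -> float:
    """Return a joint torque command (N·m) given the decoded LM (high level)
    and the current gait phase; unknown combinations default to zero torque."""
    torque_table = {
        ("SD", "stance"): 10.0,  # e.g., constant support once stair descent is recognized
        ("SD", "swing"): 0.0,
        ("LW", "stance"): 5.0,
        ("LW", "swing"): 0.0,
    }
    return torque_table.get((decoded_lm, gait_phase), 0.0)

command = assistive_torque("SD", "stance")  # 10.0 N·m
```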
In [9][31][34], a three-level hierarchical control architecture was followed. While in [31] the high level comprised a gait-phase estimation algorithm followed by the LM recognition tool, in [9] the LM recognition tool was not dependent on the gait-phase detection algorithm. As with studies [4][7], the non-dependence on preliminary gait analysis tools may explain the ability of study [9] to predict some LMs (−78 ms to 38 ms), whereas no prediction was reported in [10] (3.96% to 24% of the gait cycle). Regarding the mid-level, different choices were made in each study, namely: (i) a POILC method for assisting the LW, SA, and SD tasks [31]; (ii) a hybrid torque method for assisting the ST, LW, SA, and SD tasks [9]; and (iii) an EMG-based torque method for assisting the LW task [34]. It is not possible to suggest a control scheme for a specific motion since there is no benchmarking analysis of controllers’ performance when considering different LMs. However, according to [38], a hybrid control approach should be adopted to assist the user or reduce the constraints associated with the actuator’s friction only when needed.
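The three-level architecture can be sketched with hypothetical interfaces (none of the classes or callables below correspond to the cited implementations):

```python
class ThreeLevelController:
    """High level: LM decoding; mid level: LM-specific reference torque
    generation; low level: torque tracking that produces the motor command."""

    def __init__(self, lm_classifier, torque_generators, torque_loop):
        self.lm_classifier = lm_classifier          # high level: sensor window -> LM label
        self.torque_generators = torque_generators  # mid level: {LM label: callable}
        self.torque_loop = torque_loop              # low level: reference + state -> command

    def step(self, sensor_window, joint_state):
        lm = self.lm_classifier(sensor_window)                 # e.g., "LW", "SA", "SD"
        torque_ref = self.torque_generators[lm](joint_state)   # LM-specific mid level
        return self.torque_loop(torque_ref, joint_state)       # actuator command
```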
Providing efficient assistance according to the decoded user’s LM implies an accurate and timely identification of the user’s intentions. This is of utmost importance since the higher the anticipation time in identifying the LM, the more time remains to switch the control and assist the users according to their needs in a timely manner. This means that, ideally, the decoded LM should be recognized before its occurrence, i.e., predicted, to adapt the assistance according to the identified LM. Considering the five studies that adapt the assistance according to the decoded LM, only one study [9] enables the prediction of the LW-SA, SA-LW, LW-SD, and SD-LW transitions before their occurrence. This may be related to the non-dependence of the LM recognition algorithm on gait event detection algorithms.
Current directions recommend the use of co-adaptive control assistance, known as the “Assist-as-Needed” (AAN) approach, in which the patients are encouraged to participate in the rehabilitation tasks and the wearable assistive device only assists when and as much as required, helping the users to accomplish a specific motion [48][49][50]. However, based on the collected studies, even among those that adapt the assistance according to the user’s motion intention [4][7][9][31][34], no study followed an AAN approach. Thus, the adoption of the current LM-driven control strategies during the rehabilitation of neurologically impaired patients may be compromised.
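As a toy illustration of the AAN idea (not drawn from any of the cited controllers), assistance can be withheld while the user’s tracking error stays small and scaled proportionally otherwise:

```python
def aan_torque(tracking_error: float, gain: float = 20.0, deadband: float = 0.05) -> float:
    """Assist-as-needed rule: zero torque while the tracking error (rad) stays
    inside a deadband (the user performs the motion unassisted), proportional
    support beyond it. Gain and deadband are illustrative values only."""
    if abs(tracking_error) <= deadband:
        return 0.0
    sign = 1.0 if tracking_error > 0 else -1.0
    return gain * (tracking_error - sign * deadband)

torque = aan_torque(0.12)  # assistance kicks in only once the error exceeds the deadband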

References

  1. Labarrière, F.; Thomas, E.; Calistri, L.; Optasanu, V.; Gueugnon, M.; Ornetti, P.; Laroche, D. Machine Learning Approaches for Activity Recognition and/or Activity Prediction in Locomotion Assistive Devices—A Systematic Review. Sensors 2020, 20, 6345.
  2. Kimura, M.; Pham, H.; Kawanishi, M.; Narikiyo, T. EMG-force-sensorless power assist system control based on Multi-Class Support Vector Machine. In Proceedings of the 11th IEEE International Conference on Control & Automation (ICCA), Taichung, Taiwan, 18–20 June 2014; pp. 284–289.
  3. Jang, J.; Kim, K.; Lee, J.; Lim, B.; Shim, Y. Online gait task recognition algorithm for hip exoskeleton. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5327–5332.
  4. Li, Y.D.; Hsiao-Wecksler, E.T. Gait mode recognition and control for a portable-powered ankle-foot orthosis. In Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR), Seattle, WA, USA, 24–26 June 2013; pp. 1–8.
  5. Wang, W.; Zhang, L.; Liu, J.; Zhang, B.; Huang, Q. A real-time walking pattern recognition method for soft knee power assist wear. Int. J. Adv. Robot. Syst. 2020, 17, 1–14.
  6. Yuan, K.; Parri, A.; Yan, T.; Wang, L.; Munih, M.; Wang, Q.; Vitiello, N. A realtime locomotion mode recognition method for an active pelvis orthosis. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 6196–6201.
  7. Parri, A.; Yuan, K.; Marconi, D.; Yan, T.; Crea, S.; Munih, M.; Lova, R.M.; Vitiello, N.; Wang, Q. Real-Time Hybrid Locomotion Mode Recognition for Lower Limb Wearable Robots. IEEE/ASME Trans. Mechatron. 2017, 22, 2480–2491.
  8. Long, Y.; Du, Z.-J.; Wang, W.-D.; Zhao, G.-Y.; Xu, G.-Q.; He, L.; Mao, X.-W.; Dong, W. PSO-SVM-Based Online Locomotion Mode Identification for Rehabilitation Robotic Exoskeletons. Sensors 2016, 16, 1408.
  9. Liu, X.; Wang, Q. Real-Time Locomotion Mode Recognition and Assistive Torque Control for Unilateral Knee Exoskeleton on Different Terrains. IEEE/ASME Trans. Mechatron. 2020, 25, 2722–2732.
  10. Zhou, Z.; Liu, X.; Jiang, Y.; Mai, J.; Wang, Q. Real-time onboard SVM-based human locomotion recognition for a bionic knee exoskeleton on different terrains. In Proceedings of the 2019 Wearable Robotics Association Conference (WearRAcon), Scottsdale, AZ, USA, 25–27 March 2019; pp. 34–39.
  11. Gong, C.; Xu, D.; Zhou, Z.; Vitiello, N.; Wang, Q. BPNN-Based Real-Time Recognition of Locomotion Modes for an Active Pelvis Orthosis with Different Assistive Strategies. Int. J. Humanoid Robot. 2020, 17, 2050004.
  12. Gong, C.; Xu, D.; Zhou, Z.; Vitiello, N.; Wang, Q. Real-Time On-Board Recognition of Locomotion Modes for an Active Pelvis Orthosis. In Proceedings of the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2018; pp. 346–350.
  13. Kim, H.; Shin, Y.J.; Kim, J. Kinematic-based locomotion mode recognition for power augmentation exoskeleton. Int. J. Adv. Robot. Syst. 2017, 14, 172988141773032.
  14. Novak, D.; Riener, R. A survey of sensor fusion methods in wearable robotics. Robot. Auton. Syst. 2015, 73, 155–170.
  15. Yan, T.; Cempini, M.; Oddo, C.M.; Vitiello, N. Review of assistive strategies in powered lower-limb orthoses and exoskeletons. Robot. Auton. Syst. 2015, 64, 120–136.
  16. Xu, D.; Wang, Q. On-board Training Strategy for IMU-Based Real-Time Locomotion Recognition of Transtibial Amputees with Robotic Prostheses. Front. Neurorobot. 2020, 14, 47.
  17. Zheng, J.; Cao, H.; Chen, D.; Ansari, R.; Chu, K.; Huang, M. Designing Deep Reinforcement Learning Systems for Musculoskeletal Modeling and Locomotion Analysis Using Wearable Sensor Feedback. IEEE Sens. J. 2020, 20, 9274–9282.
  18. Sahoo, S.; Maheshwari, M.; Pratihar, D.K.; Mukhopadhyay, S. A Geometry Recognition-Based Strategy for Locomotion Transitions Early Prediction of Prosthetic Devices. IEEE Trans. Instrum. Meas. 2019, 69, 1259–1267.
  19. Xu, D.; Wang, Q. BP Neural Network Based On-board Training for Real-time Locomotion Mode Recognition in Robotic Transtibial Prostheses. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 8158–8163.
  20. Billah, Q.M.; Rahman, L.; Adan, J.; Kamal, A.M.; Islam, K.; Shahnaz, C.; Subhana, A. Design of Intent Recognition System in a Prosthetic Leg for Automatic Switching of Locomotion Modes. In Proceedings of the TENCON 2019—2019 IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 1638–1642.
  21. Laschowski, B.; McNally, W.; Wong, A.; McPhee, J. Preliminary Design of an Environment Recognition System for Controlling Robotic Lower-Limb Prostheses and Exoskeletons. In Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), Toronto, ON, Canada, 24–28 June 2019; pp. 868–873.
  22. Khademi, G.; Simon, D. Convolutional Neural Networks for Environmentally Aware Locomotion Mode Recognition of Lower-Limb Amputees. In Proceedings of the ASME 2019 Dynamic Systems and Control Conference, Park City, UT, USA, 8–11 October 2019.
  23. Carvalho, S.; Figueiredo, J.; Santos, C.P. Environment-Aware Locomotion Mode Transition Prediction System. In Proceedings of the 2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Porto, Portugal, 24–26 April 2019; pp. 1–6.
  24. Figueiredo, J.; Carvalho, S.P.; Gonçalves, D.; Moreno, J.C.; Santos, C.P. Daily Locomotion Recognition and Prediction: A Kinematic Data-Based Machine Learning Approach. IEEE Access 2020, 8, 33250–33262.
  25. Ryu, J.; Lee, B.-H.; Kim, D.-H. sEMG Signal-Based Lower Limb Human Motion Detection Using a Top and Slope Feature Extraction Algorithm. IEEE Signal Process. Lett. 2016, 24, 929–932.
  26. Martinez-Hernandez, U.; Mahmood, I.; Dehghani-Sanij, A.A. Simultaneous Bayesian Recognition of Locomotion and Gait Phases with Wearable Sensors. IEEE Sens. J. 2017, 18, 1282–1290.
  27. Kim, D.-H.; Cho, C.-Y.; Ryu, J. Real-Time Locomotion Mode Recognition Employing Correlation Feature Analysis Using EMG Pattern. ETRI J. 2014, 36, 99–105.
  28. Mohebbi, A. Human-Robot Interaction in Rehabilitation and Assistance: A Review. Curr. Robot. Rep. 2020, 1, 131–144.
  29. Hua, Y.; Fan, J.; Liu, G.; Zhang, X.; Lai, M.; Li, M.; Zheng, T.; Zhang, G.; Zhao, J.; Zhu, Y. A Novel Weight-Bearing Lower Limb Exoskeleton Based on Motion Intention Prediction and Locomotion State Identification. IEEE Access 2019, 7, 37620–37638.
  30. Islam, M.; Hsiao-Wecksler, E.T. Detection of Gait Modes Using an Artificial Neural Network during Walking with a Powered Ankle-Foot Orthosis. J. Biophys. 2016, 2016, 7984157.
  31. Zhu, L.; Wang, Z.; Ning, Z.; Zhang, Y.; Liu, Y.; Cao, W.; Wu, X.; Chen, C. A Novel Motion Intention Recognition Approach for Soft Exoskeleton via IMU. Electronics 2020, 9, 2176.
  32. Du, G.; Zeng, J.; Gong, C.; Zheng, E. Locomotion Mode Recognition with Inertial Signals for Hip Joint Exoskeleton. Appl. Bionics Biomech. 2021, 2021, 6673018.
  33. Wang, J.; Wu, D.; Gao, Y.; Wang, X.; Li, X.; Xu, G.; Dong, W. Integral Real-time Locomotion Mode Recognition Based on GA-CNN for Lower Limb Exoskeleton. J. Bionic Eng. 2022, 19, 1359–1373.
  34. Fernandes, P.N.; Figueiredo, J.; Moreira, L.; Felix, P.; Correia, A.; Moreno, J.C.; Santos, C.P. EMG-based Motion Intention Recognition for Controlling a Powered Knee Orthosis. In Proceedings of the 2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Porto, Portugal, 24–26 April 2019; pp. 1–6.
  35. Beaman, C.; Peterson, C.; Neptune, R.; Kautz, S. Differences in self-selected and fastest-comfortable walking in post-stroke hemiparetic persons. Gait Posture 2010, 31, 311–316.
  36. Moreira, L.; Figueiredo, J.; Fonseca, P.; Vilas-Boas, J.P.; Santos, C.P. Lower limb kinematic, kinetic, and EMG data from young healthy humans during walking at controlled speeds. Sci. Data 2021, 8, 103.
  37. Moreira, L.; Figueiredo, J.; Vilas-Boas, J.; Santos, C. Kinematics, Speed, and Anthropometry-Based Ankle Joint Torque Estimation: A Deep Learning Regression Approach. Machines 2021, 9, 154.
  38. Koopman, B.; van Asseldonk, E.; van der Kooij, H. Speed-dependent reference joint trajectory generation for robotic gait support. J. Biomech. 2014, 47, 1447–1458.
  39. Stoquart, G.; Detrembleur, C.; Lejeune, T. Effect of speed on kinematic, kinetic, electromyographic and energetic reference values during treadmill walking. Neurophysiol. Clin. Neurophysiol. 2008, 38, 105–116.
  40. Schwartz, M.H.; Rozumalski, A.; Trost, J.P. The effect of walking speed on the gait of typically developing children. J. Biomech. 2008, 41, 1639–1650.
  41. Hassani, W.; Mohammed, S.; Rifai, H.; Amirat, Y. EMG based approach for wearer-centered control of a knee joint actuated orthosis. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 990–995.
  42. Hassani, W.; Mohammed, S.; Amirat, Y. Real-Time EMG Driven Lower Limb Actuated Orthosis for Assistance as Needed Movement Strategy. In Proceedings of the Robotics: Science and Systems IX, Berlin, Germany, 24–28 June 2013.
  43. Yu, C.-J.; Chen, J.-S.; Li, Y.-J. Motion recognition for paraplegic patients wearing a powered lower limb orthosis in ascending and descending. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–5.
  44. Endo, K.; Herr, H. Human walking model predicts joint mechanics, electromyography and mechanical economy. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 4663–4668.
  45. Protopapadaki, A.; Drechsler, W.I.; Cramp, M.C.; Coutts, F.J.; Scott, O.M. Hip, knee, ankle kinematics and kinetics during stair ascent and descent in healthy young individuals. Clin. Biomech. 2007, 22, 203–210.
  46. Riener, R.; Rabuffetti, M.; Frigo, C. Stair ascent and descent at different inclinations. Gait Posture 2002, 15, 32–44.
  47. Winter, D.A. Chapter 9: Kinesiological Electromyography. In Biomechanics and Motor Control of Human Movement, 3rd ed.; Winter, D.A., Ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2005.
  48. Chen, G.; Ye, J.; Liu, Q.; Duan, L.; Li, W.; Wu, Z.; Wang, C. Adaptive Control Strategy for Gait Rehabilitation Robot to Assist-When-Needed. In Proceedings of the 2018 IEEE International Conference on Real-time Computing and Robotics (RCAR), Malé, Maldives, 1–5 August 2018; pp. 538–543.
  49. Eguren, D.; Cestari, M.; Luu, T.P.; Kilicarslan, A.; Steele, A.; Contreras-Vidal, J.L. Design of a customizable, modular pediatric exoskeleton for rehabilitation and mobility. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019.
  50. Lyu, M.; Chen, W.-H.; Ding, X.; Wang, J. Knee exoskeleton enhanced with artificial intelligence to provide assistance-as-needed. Rev. Sci. Instrum. 2019, 90, 094101.