Cite as: Labarrière, F.; Thomas, E.; Calistri, L.; Optasanu, V.; Gueugnon, M.; Ornetti, P.; Laroche, D. Locomotion Assistive Devices. Encyclopedia. Available online: https://encyclopedia.pub/entry/3159 (accessed on 26 December 2024).
Locomotion Assistive Devices

Locomotion assistive devices equipped with a microprocessor can potentially automatically adapt their behavior when the user is transitioning from one locomotion mode to another. Many developments in the field have come from machine learning driven controllers on locomotion assistive devices that recognize/predict the current locomotion mode or the upcoming one.

Keywords: locomotion; amputees; assistive devices; lower-limb prosthesis; multiple environments

1. Introduction

Healthy humans easily adjust their locomotor patterns to deal with the multiple environments encountered in daily living, such as stair ascent/descent, slope ascent/descent, obstacle clearance, and walking on uneven floors, cross-slopes or different surfaces. In contrast, with lower limb impairments such as unilateral lower limb amputation, most of these environmental changes become challenging to negotiate [1].

To address this issue, intelligent devices such as the C-Leg™ (OTTOBOCK, Berlin, Germany) or the Rheo Knee (ÖSSUR, Reykjavík, Iceland) have been developed. Compared to mechanically passive prostheses, these variable-damping prostheses improved the smoothness of gait and decreased hip work during level-ground walking [2]. Additional improvement came from a powered prosthesis, which was reported to decrease the metabolic cost of transport compared to a conventional passive prosthesis under similar conditions [3]. Prosthetic devices which passively or actively mimicked human actions were found to be helpful. One historic example of such innovation was the energy return foot, which reproduced foot behavior and improved the gait of patients with amputation. Other attempts to create intelligent devices can be seen in microprocessor-controlled prostheses able to recognize the terrain being traversed (e.g., Genium, OTTOBOCK, Berlin, Germany; Linx, BLATCHFORD, Basingstoke, UK). The logical next step in this progression is the development of devices able to make predictions for automatic gait adjustments across multiple terrains.

Developments in these efforts have come in the form of intelligent controllers on locomotion assistive devices. In such devices, gait is regulated by a hierarchical three-level controller [4]. The high-level controller is responsible for detecting user intent. The mid-level controller automatically switches the control law of the device (e.g., in the powered active transfemoral prosthesis developed by Vanderbilt University [5]) in accordance with the high-level controller output. The low-level controller compares the desired state of the device to the sensed state and corrects it when needed. User intent is detected either by the user directly communicating their intentions to the device through a controller, or by automatic interpretation by an algorithm. Examples of the former are the control buttons of the ReWalk™ exoskeleton (ARGO MEDICAL TECHNOLOGIES Ltd., Yokneam, Israel) and the predefined body movements that allow the wearer of the Power Knee™ (ÖSSUR, Reykjavík, Iceland) to switch between locomotion modes; in the latter device, switching requires the user to stop or to perform unnatural body movements. As opposed to these explicit methods, algorithm-based implicit methods interpret user intent and allow smoother transitions by automatically switching between the control laws of the device. A particularly promising approach is based on machine learning algorithms, which automatically detect user intent by mapping sensor data to an associated locomotion task.
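The three-level hierarchy described above can be sketched in a few lines of Python. This is purely illustrative: the intent rule, mode names, and impedance numbers are invented stand-ins, not taken from any cited device.

```python
# Hypothetical sketch of a hierarchical three-level controller.
# All names, thresholds and gains are illustrative assumptions.

def high_level_intent(sensor_window):
    # High level: user-intent recognition. A stand-in rule replaces a
    # trained classifier: high mean vertical acceleration => stair ascent.
    mean_acc = sum(s["acc_z"] for s in sensor_window) / len(sensor_window)
    return "stair_ascent" if mean_acc > 0.5 else "level_walking"

CONTROL_LAWS = {
    # Mid level: one impedance parameter set (stiffness, damping) per mode;
    # the high-level output selects which set is active.
    "level_walking": {"stiffness": 120.0, "damping": 4.0},
    "stair_ascent": {"stiffness": 180.0, "damping": 6.0},
}

def low_level_torque(law, desired_angle, sensed_angle, sensed_velocity):
    # Low level: impedance control correcting the sensed joint state
    # toward the desired state under the active control law.
    return (law["stiffness"] * (desired_angle - sensed_angle)
            - law["damping"] * sensed_velocity)

window = [{"acc_z": 0.8}, {"acc_z": 0.9}]
mode = high_level_intent(window)
law = CONTROL_LAWS[mode]
torque = low_level_torque(law, desired_angle=0.6, sensed_angle=0.5,
                          sensed_velocity=0.1)
```

The separation of concerns is the point: replacing the stand-in rule in `high_level_intent` with a machine learning classifier leaves the mid- and low-level layers unchanged.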

There are numerous studies in which machine learning has been used to adapt the behavior of orthotic/prosthetic devices to the user's locomotion mode. We performed a systematic review that identifies and summarizes such studies. Reports were selected if (1) body-worn sensors or sensors embedded in the devices were used, and (2) machine learning classifiers were able to identify the investigated locomotion modes of human volunteers. The review covers essential technical details such as the pre-processing methods used, the specific machine learning algorithms employed, and the corresponding accuracies obtained.

2. Influence of Sensor Choice and Machine Learning Algorithm

Accuracy and robustness (e.g., stable performance in the face of long-term use) of the algorithm were the variables most often used to report results in studies investigating locomotion on different terrains.

2.1. Influence of Sensor Choice

Several sensors have been used to build locomotion mode classifiers. The choice of sensors may influence the accuracy and robustness of the classifiers. More details are provided in the sections below.

Among the included studies, the three most used sensors were inertial measurement units (IMUs) (N = 36), load cells (N = 31), and electromyographs (EMG) (N = 21).

Firstly, IMUs measure acceleration and rotational speed along three orthogonal axes. For example, Stolyarov et al. [6] classified level-ground walking (LW), stair ascent (SA), stair descent (SD), ramp ascent (RA) and ramp descent (RD) with linear discriminant analysis (LDA). They showed that including trajectory information of the prosthesis increased the average accuracy compared to using only accelerations and rotational speeds (from 80.9% to 94.1%). They suggested using filtering techniques (e.g., Kalman filters, particle filters) to reduce drift, and noted that the performance of the classification algorithms might be reduced at slow walking speeds. Zhou et al. [7] also demonstrated the capacity of IMUs for locomotion mode detection: using a support vector machine (SVM), they classified three locomotion modes (LW, SA, SD) with IMU data alone, achieving above 90% accuracy when orientation information was included. The signals combining acceleration, rotational speed and orientation were extracted directly from the IMUs (MPU 9250, InvenSense®; the filtering technique was not reported in the sensor's data sheet).
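To make the LDA classification step concrete, the following is a minimal numpy sketch on synthetic two-class "IMU feature" data. The feature values, class layout and separation are invented for illustration; the cited studies used far richer feature sets.

```python
import numpy as np

# Synthetic features: two modes (0 = level walking, 1 = stair ascent),
# two invented IMU-derived features per window. Values are illustrative.
rng = np.random.default_rng(0)
X_lw = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
X_sa = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
X = np.vstack([X_lw, X_sa])
y = np.array([0] * 50 + [1] * 50)

# LDA assumes a shared (pooled within-class) covariance matrix.
means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
centered = X - means[y]
cov = centered.T @ centered / (len(X) - 2)
cov_inv = np.linalg.inv(cov)

def lda_predict(x):
    # Linear discriminant score per class, assuming equal priors:
    # x^T S^-1 mu_k - 0.5 mu_k^T S^-1 mu_k
    scores = [x @ cov_inv @ m - 0.5 * m @ cov_inv @ m for m in means]
    return int(np.argmax(scores))
```

Because the covariance is shared across classes, the resulting decision boundary is linear, which is what makes LDA cheap enough for real-time embedded use.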

However, these studies suggested that algorithm performance could increase when fusing IMU signals with signals from other sensors. Thus, in most studies using IMUs, information from this sensor was fused with measurements from other sensors (see below).

Secondly, load cells measure the interaction force between the device and the user. For example, Huang et al. [8] classified five locomotion modes (LW, SA, SD, RA, RD) with LDA and SVM using only a six-degree-of-freedom (DOF) load cell mounted on the prosthetic pylon of an above-knee prosthesis. The phase-dependent strategy achieved 85–95% accuracy during the stance phase (Initial Double Limb Stance (DS1), Single Limb Stance (SS) and Terminal Double Limb Stance (DS2)), but accuracy dropped to 50–60% during the swing (SW) phase for both LDA and SVM classifiers. Similar drops in accuracy were reported when using only plantar pressure measurements [9][10]. According to the authors [8], the low classification accuracies in the swing phase were almost certainly due to the low forces/moments generated during that phase.
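The phase-dependent strategy amounts to training one classifier per gait phase and dispatching each incoming sample to the classifier for the current phase. A minimal sketch, with invented threshold rules standing in for the trained LDA/SVM classifiers:

```python
# Phase-dependent classification: one classifier per gait phase.
# Thresholds and the load-based rule are illustrative stand-ins for
# trained classifiers, not values from the cited studies.
PHASES = ("DS1", "SS", "DS2", "SW")

def make_threshold_classifier(threshold):
    # Stand-in for a trained phase-specific classifier: decide the
    # locomotion mode from a single vertical-load value.
    return lambda load: ("stair_ascent" if load > threshold
                         else "level_walking")

phase_classifiers = {
    phase: make_threshold_classifier(t)
    for phase, t in zip(PHASES, (600.0, 650.0, 620.0, 30.0))
}

def classify(phase, load):
    # Dispatch to the classifier trained for the current gait phase.
    return phase_classifiers[phase](load)
```

Splitting by phase lets each classifier specialize; the flip side, as the accuracy drop above shows, is that phases with weak signal content (swing, for a load cell) remain hard no matter how the per-phase classifier is built.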

Thirdly, EMG signals measured from the residual limb were reported in early studies to contain useful information for locomotion mode prediction. For example, Huang et al. [8] and Miller et al. [11] classified five locomotion modes (LW, SA, SD, RA, RD) using EMG signals measured from the residual limb of patients with transfemoral and transtibial unilateral amputation, respectively. LDA and SVM classifiers were used in both studies. For volunteers with transfemoral amputation [8], the SVM achieved above 90% accuracy for all phases; the LDA algorithm achieved similar accuracies in the stance phase but a slightly lower accuracy of 85% in the swing phase. For volunteers with transtibial amputation [11], both LDA and SVM achieved around 98% accuracy. Many researchers have pointed out that EMG signals suffer from disturbances, especially shifts in electrode position when donning and doffing a prosthesis. Miller et al. [11] reported a mean loss in accuracy of 15.8% and 23.1% for LDA and SVM classifiers, respectively, when the medial gastrocnemius electrode was shifted. Both studies concluded that EMG signals can be helpful for classifying locomotion modes as long as the signals are not disturbed, and several studies have provided suggestions for reducing these problems.
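EMG-based classifiers typically operate on compact time-domain features computed per analysis window rather than on the raw signal. As an illustration, here are three classic Hudgins-style features; the exact feature sets in the cited studies vary, so treat this as a representative choice rather than their method:

```python
import numpy as np

def emg_time_domain_features(window):
    # Classic time-domain EMG features often fed to LDA/SVM classifiers.
    w = np.asarray(window, dtype=float)
    mav = float(np.mean(np.abs(w)))          # mean absolute value
    wl = float(np.sum(np.abs(np.diff(w))))   # waveform length
    zc = int(np.sum(w[:-1] * w[1:] < 0))     # zero crossings (sign flips)
    return mav, wl, zc

mav, wl, zc = emg_time_domain_features([1.0, -1.0, 1.0, -1.0])
```

Features like these are also where electrode-shift disturbances bite: a shifted electrode changes the amplitude statistics the classifier was trained on, which is consistent with the accuracy losses reported above.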

Finally, sensor fusion has been shown to significantly increase the accuracy of locomotion mode classifiers [8][12]. For example, Huang et al. [8] observed an increase in accuracy when combining EMG and load cell data instead of using either alone (an increase of up to 5.9% for an SVM classifier). Since then, data from different sensors have been fused to reach higher accuracies. In another example, Young et al. [12] used 13 mechanical sensors (IMU, load cell, and position, velocity and torque at the knee and ankle joints) and recorded EMG signals from 9 muscles of the residual limb of volunteers with transfemoral amputation. A dynamic Bayesian network (DBN) predicting upcoming locomotion modes reached 99% accuracy for steady-state steps and 88% for transitional steps.

2.2. Influence of Machine Learning Algorithm

A variety of machine learning algorithms were used in the included studies. The most frequently used were LDA (N = 29), SVM (N = 19) and DBN (N = 10). Convolutional neural networks (CNNs) were also used, to avoid manual feature selection (N = 4).

LDA is easy to implement since no hyperparameters need to be tuned [13][14]. The algorithm is fast (1.29 ms [13], 0.078 ms with parallelization [15]) and not prone to overfitting [16]. For these reasons, it is often used as a baseline for performance comparisons between algorithms [15][17]. More importantly, in some studies LDA obtained accuracies similar to those of neural networks [13] and SVM [11].

Although hyperparameters such as the kernel parameter and the penalty factor need to be tuned for SVM [18], optimization techniques (e.g., grid search [16], particle swarm optimization [19]) have allowed SVM to reach slightly better performance than LDA [16][8] or quadratic discriminant analysis (QDA) [20] in some studies.
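Grid search, the simpler of the two tuning techniques mentioned, just evaluates every hyperparameter combination and keeps the best. A minimal sketch, with a toy scoring function standing in for "train an SVM with (C, gamma) and return its validation accuracy":

```python
from itertools import product

def grid_search(evaluate, Cs, gammas):
    # Exhaustive search over the SVM penalty factor C and kernel
    # parameter gamma; keep the pair with the best validation score.
    return max(product(Cs, gammas), key=lambda cg: evaluate(*cg))

def toy_evaluate(C, gamma):
    # Illustrative stand-in for an SVM train/validate cycle: a smooth
    # score peaking at C = 10, gamma = 0.1 (values are invented).
    return 0.9 - abs(C - 10.0) * 0.001 - abs(gamma - 0.1)

best_C, best_gamma = grid_search(toy_evaluate,
                                 [1.0, 10.0, 100.0],
                                 [0.01, 0.1, 1.0])
```

The cost is the product of the grid sizes times one training run each, which is why particle swarm optimization and similar methods are attractive when the grid grows.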

Among the first researchers to use DBNs were Young et al. in 2013 [21][22]. By combining past information with that of the current state, the DBN obtained higher classification accuracies than LDA [12] (88% vs. 85% transitional accuracy for DBN and LDA, respectively). Unlike LDA with uniform priors, the DBN takes transitional probabilities into account (e.g., in stair ascent mode, the next mode is more likely to be stair ascent or level-ground walking).
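The core idea, that the previous mode constrains the next one, can be sketched as a single forward step of a belief update: prior from a transition matrix, multiplied by the current frame's likelihood, then renormalized. The transition probabilities and likelihoods below are invented for illustration, not taken from the cited studies.

```python
import numpy as np

MODES = ("LW", "SA", "SD")
# Illustrative transition matrix: staying in the current mode or
# returning to level walking is likelier than jumping between stairs.
T = np.array([
    [0.80, 0.10, 0.10],   # from LW
    [0.45, 0.50, 0.05],   # from SA
    [0.45, 0.05, 0.50],   # from SD
])

def dbn_step(prev_belief, likelihood):
    # One forward step: transition prior times frame likelihood,
    # renormalized to a probability distribution over modes.
    belief = (prev_belief @ T) * likelihood
    return belief / belief.sum()

prev = np.array([0.0, 1.0, 0.0])           # currently in stair ascent
likelihood = np.array([0.2, 0.4, 0.4])     # frame ambiguous between SA/SD
belief = dbn_step(prev, likelihood)
```

The frame evidence alone cannot separate SA from SD here, but the transition prior from "currently ascending stairs" resolves it, which is exactly the advantage over uniform-prior LDA described above.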

Finally, CNNs were recently used in a few studies [18][23][3][4]. For example, Zhang et al. [3][4] used depth images from a depth camera coupled with an IMU mounted on the prosthetic pylon of an above-knee prosthesis. CNNs, known to perform well on image datasets, are often used to avoid manual feature selection; they were also applied to non-image data, e.g., IMU data [23] or load cell data [18]. All four studies using CNNs reported accuracies above 89%, but none implemented the designed CNN online.
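The reason CNNs can skip manual feature selection is that a convolutional layer learns its own feature detectors from the raw window. The building block can be shown in numpy without any deep learning framework; the toy signal and hand-set kernel below are illustrative (a real CNN learns the kernel weights during training):

```python
import numpy as np

def conv1d_valid(signal, kernel):
    # 'valid' 1-D convolution (no padding): slide the kernel over the
    # signal and take the dot product at each position.
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

# One conv -> ReLU -> global max-pool "feature", the unit a CNN stacks
# to learn features directly from raw sensor windows.
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])  # toy IMU channel
kernel = np.array([1.0, -1.0])                          # edge detector
feat = float(np.maximum(conv1d_valid(signal, kernel), 0).max())
```

Here the hand-set kernel fires on the rising edge of the toy signal; training replaces such hand design with learned kernels, which is the sense in which CNNs "avoid manual feature selection".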

The most common mistake was misclassification between the ramp ascent and level-ground walking modes [21]. Grouping the ramp ascent and level walking classes was reported to improve the performance of locomotion mode classifiers [21]. Such a technique is relevant when the control laws (impedance in [24][21]) are similar for both modes. Zhang et al. [4] evaluated the influence of such errors (misclassifications between level walking and incline walking) on the stability of the user of an above-knee prosthesis using angular momentum and a subjective questionnaire. The effect of an error was observed to depend on its type, its duration, and the gait phase in which it occurred. Errors were considered critical if they disturbed the stability of prosthesis users, which appears to be a good criterion for evaluating the importance of errors when designing a locomotion mode classifier.
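Grouping confusable classes is a label-mapping operation: errors between the merged classes stop counting as errors. A small sketch with invented toy predictions (the merged label name and the example sequences are illustrative):

```python
# Merge the confusable ramp-ascent and level-walking classes into one
# combined class, as reported to help in [21]. Toy data is illustrative.
MERGE = {"LW": "LW/RA", "RA": "LW/RA", "SA": "SA", "SD": "SD", "RD": "RD"}

def accuracy(true, pred):
    # Fraction of predictions matching the ground-truth labels.
    return sum(t == p for t, p in zip(true, pred)) / len(true)

true = ["LW", "LW", "RA", "SA", "RD"]
pred = ["LW", "RA", "LW", "SA", "RD"]   # two LW/RA confusions

acc_before = accuracy(true, pred)
acc_after = accuracy([MERGE[t] for t in true], [MERGE[p] for p in pred])
```

The merge only makes sense when, as noted above, the control laws for the merged modes are similar enough that confusing them has no behavioral cost on the device.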

References

  1. Goujon-Pillet, H.; Drevelle, X.; Bonnet, X.; Villa, C.; Martinet, N.; Sauret, C.; Bascou, J.; Loiret, I.; Djian, F.; Rapin, N.; et al. APSIC: Training and fitting amputees during situations of daily living. IRBM 2014, 35, 60–65.
  2. Johansson, J.L.; Sherrill, D.M.; Riley, P.O.; Bonato, P.; Herr, H. A Clinical Comparison of Variable-Damping and Mechanically Passive Prosthetic Knee Devices. Am. J. Phys. Med. Rehabil. 2005, 84, 563–575.
  3. Au, S.K.; Weber, J.A.; Herr, H.M. Powered Ankle—Foot Prosthesis Improves Walking Metabolic Economy. IEEE Trans. Robot. 2009, 25, 51–66.
  4. Tucker, M.R.; Olivier, L.; Pagel, A.; Bleuler, H.; Bouri, M.; Lambercy, O.; Millán, J.D.R.; Riener, R.; Vallery, H.; Gassert, R. Control strategies for active lower extremity prosthetics and orthotics: A review. J. Neuroeng. Rehabil. 2015, 12, 1.
  5. Johansson, J.L.; Sherrill, D.M.; Riley, P.O.; Bonato, P.; Herr, H. A Clinical Comparison of Variable-Damping and Mechanically Passive Prosthetic Knee Devices. Am. J. Phys. Med. Rehabil. 2005, 84, 563–575.
  6. Stolyarov, R.; Burnett, G.; Herr, H.M. Translational Motion Tracking of Leg Joints for Enhanced Prediction of Walking Tasks. IEEE Trans. Biomed. Eng. 2018, 65, 763–769.
  7. Zhou, Z.; Liu, X.; Jiang, Y.; Mai, J.; Wang, Q. Real-time onboard SVM-based human locomotion recognition for a bionic knee exoskeleton on different terrains. In Proceedings of the 2019 Wearable Robotics Association Conference (WearRAcon), Scottsdale, AZ, USA, 25–27 March 2019; pp. 34–39.
  8. Zhang, K.; Zhang, W.; Xiao, W.; Liu, H.; De Silva, C.W.; Fu, C. Sequential Decision Fusion for Environmental Classification in Assistive Walking. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1780–1790.
  9. Young, A.J.; Simon, A.M.; Fey, N.P.; Hargrove, L.J. Intent Recognition in a Powered Lower Limb Prosthesis Using Time History Information. Ann. Biomed. Eng. 2013, 42, 631–641.
  10. Xu, D.; Feng, Y.; Mai, J.; Wang, Q. Real-Time On-Board Recognition of Continuous Locomotion Modes for Amputees with Robotic Transtibial Prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 2015–2025.
  11. Shell, C.E.; Klute, G.K.; Neptune, R.R. Identifying classifier input signals to predict a cross-slope during transtibial amputee walking. PLoS ONE 2018, 13, e0192950.
  12. Zhang, F.; Huang, H. Source Selection for Real-Time User Intent Recognition toward Volitional Control of Artificial Legs. IEEE J. Biomed. Health Inform. 2012, 17, 907–914.
  13. Young, A.J.; Simon, A.M.; Fey, N.P.; Hargrove, L.J. Intent Recognition in a Powered Lower Limb Prosthesis Using Time History Information. Ann. Biomed. Eng. 2013, 42, 631–641.
  14. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
  15. Pew, C.; Klute, G.K. Turn Intent Detection for Control of a Lower Limb Prosthesis. IEEE Trans. Biomed. Eng. 2017, 65, 789–796.
  16. Feng, Y.; Chen, W.; Wang, Q. A strain gauge based locomotion mode recognition method using convolutional neural network. Adv. Robot. 2019, 33, 254–263.
  17. Tkach, D.C.; Hargrove, L.J. Neuromechanical sensor fusion yields highest accuracies in predicting ambulation mode transitions for trans-tibial amputees. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; Institute of Electrical and Electronics Engineers (IEEE): Osaka, Japan; Volume 2013, pp. 3074–3077.
  18. Chen, B.; Zheng, E.; Wang, Q.; Wang, L. A new strategy for parameter optimization to improve phase-dependent locomotion mode recognition. Neurocomputing 2015, 149, 585–593.
  19. Mai, J.; Chen, W.; Zhang, S.; Xu, D.; Wang, Q. Performance analysis of hardware acceleration for locomotion mode recognition in robotic prosthetic control. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; pp. 607–611.
  20. Zheng, E.; Wang, L.; Wei, K.; Wang, Q. A Noncontact Capacitive Sensing System for Recognizing Locomotion Modes of Transtibial Amputees. IEEE Trans. Biomed. Eng. 2014, 61, 2911–2920.
  21. Young, A.J.; Simon, A.M.; Hargrove, L.J. A Training Method for Locomotion Mode Prediction Using Powered Lower Limb Prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 22, 671–677.
  22. Young, A.J.; Kuiken, T.A.; Hargrove, L.J. Analysis of using EMG and mechanical sensors to enhance intent recognition in powered lower limb prostheses. J. Neural Eng. 2014, 11, 056021.
  23. Wang, C.; Wu, X.; Ma, Y.; Wu, G.; Luo, Y. A Flexible Lower Extremity Exoskeleton Robot with Deep Locomotion Mode Identification. Complexity 2018, 2018, 5712108.
  24. Wang, X.; Wang, Q.; Zheng, E.; Wei, K.; Wang, L. A Wearable Plantar Pressure Measurement System: Design Specifications and First Experiments with an Amputee. In Intelligent Autonomous Systems 12; Lee, S., Cho, H., Yoon, K.-J., Lee, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 194, pp. 273–281.