Robot Arm Reaching Based on Inner Rehearsal

Robot arm motion control is a fundamental aspect of robot capabilities, with arm reaching ability serving as the foundation for complex arm manipulation tasks. The researchers propose a robot arm motion control method based on inner rehearsal. Inspired by the cognitive mechanism of inner rehearsal observed in humans, this approach allows the robot to predict or evaluate the outcomes of motion commands before execution. By enhancing the learning efficiency of models and reducing excessive physical executions, the method aims to improve robot arm reaching across different platforms.

Keywords: arm reaching; motion planning; inner rehearsal; internal model; human cognitive mechanism

1. Introduction

In recent years, robots, and humanoid robots in particular, have played important roles in many fields. As arm manipulation is one of the most basic human abilities [1], arm motion control is likewise an indispensable ability for humanoid robots [2]. In modern factories, automated manufacturing and many other production activities are inseparable from robot arms [3]. Among the various arm manipulation abilities, arm reaching is one of the most basic: it is the first step in many complex arm motions, such as grasping and placing, and it also lays the foundation for subsequent motion, perception, and cognition [4][5].
The major goal of robot arm reaching is to choose a set of appropriate arm joint angles so that the end-effector reaches the target position in Cartesian space with a certain posture, which is commonly addressed through internal model control (IMC) [6]. In 1998, Wolpert et al. [7] reviewed the necessity of such an internal model and the neural evidence for it. In the same year, Wolpert and Kawato [8] proposed an architecture based on multiple tightly coupled pairs of inverse (controller) and forward (predictor) models. Researchers usually implement robot motion control through kinematic or dynamic modeling [9]. However, since force and torque control are not involved in robot arm reaching tasks, only the kinematic method is considered here. The internal kinematics model can be separated into an inverse kinematics (IK) model and a forward kinematics (FK) model according to their inputs and outputs.
In the task of reaching, the major problem is how to build an accurate IK model that maps a certain posture to a set of joint angles. There are mainly two types of approaches to robot arm reaching control: conventional IK-based approaches and learning-based approaches. Table 1 shows some related research.
Table 1. Classification of robot arm reaching control approaches.

Classification          Approaches
Conventional IK-based   Numerical method [10]; analytical method [11][12]; geometric method [13]
Learning-based          Supervised learning: deep neural networks [14], spiking neural networks [15]; unsupervised learning: self-organizing maps [16], reinforcement learning [17][18]
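As a concrete illustration of the "Numerical method" row in Table 1 (and of the FK/IK distinction introduced above), the following is a minimal sketch of damped least-squares inverse kinematics for a planar 2-DoF arm. The link lengths, damping factor, step size, and tolerance are illustrative assumptions, not values from the researchers' platforms.

```python
import numpy as np

def fk_planar_2dof(q, l1=0.4, l2=0.3):
    """Forward kinematics of a planar 2-DoF arm: joint angles -> end-effector (x, y)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=0.4, l2=0.3):
    """Analytic Jacobian of the planar 2-DoF forward kinematics."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_damped_least_squares(target, q0, damping=0.01, step=0.5, iters=200, tol=1e-4):
    """Iteratively adjust the joints so the end-effector approaches `target`."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk_planar_2dof(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        # Damped pseudoinverse avoids instability near singular configurations.
        dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
        q += step * dq
    return q

q_sol = ik_damped_least_squares(target=np.array([0.5, 0.2]), q0=[0.1, 0.1])
print(q_sol, fk_planar_2dof(q_sol))
```

Such numerical solvers depend on an accurate geometric model of the arm, which is one motivation for the learning-based internal models used in this work.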
The main contributions are as follows.
  • The internal models are established based on a relative positioning method. The output of the inverse model is limited to a small-scale displacement toward the target to smooth the reaching trajectory, and the training loss of the inverse model is defined as the distance in Cartesian space computed by the forward model (a minimal training sketch is given after this list).
  • The models are pre-trained with a coarse FK model and then fine-tuned in the real environment. This not only increases the learning efficiency of the internal models but also decreases the mechanical wear and tear on the robots.
  • The motion planning approach based on inner rehearsal improves reaching performance by predicting the outcome of each motion command before execution. The planning procedure is divided into two stages: proprioception-based rough reaching planning and visual-feedback-based iterative adjustment planning.
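To make the first contribution more concrete, here is a hedged PyTorch-style sketch of how an inverse model could be trained with a loss defined as the Cartesian distance computed by a differentiable forward model. The network sizes, the 7-DoF assumption, the tanh-based displacement clipping, and the random training batch are illustrative assumptions, not the researchers' exact design.

```python
import torch
import torch.nn as nn

N_JOINTS = 7  # illustrative arm DoF

class InverseModel(nn.Module):
    """Maps (current joint angles, desired small Cartesian displacement) -> joint command."""
    def __init__(self, max_step=0.05):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_JOINTS + 3, 128), nn.ReLU(),
                                 nn.Linear(128, N_JOINTS), nn.Tanh())
        self.max_step = max_step  # keep each command a small-scale joint displacement

    def forward(self, q, delta_target):
        return self.max_step * self.net(torch.cat([q, delta_target], dim=-1))

class ForwardModel(nn.Module):
    """Predicts the Cartesian displacement produced by a joint command from state q."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * N_JOINTS, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, q, dq):
        return self.net(torch.cat([q, dq], dim=-1))

im, fm = InverseModel(), ForwardModel()
opt = torch.optim.Adam(im.parameters(), lr=1e-3)

# One illustrative training step on random data: the loss is the Cartesian
# distance between the displacement the forward model predicts for the
# generated command and the desired small displacement toward the target.
q = torch.randn(32, N_JOINTS)
delta_target = 0.05 * torch.randn(32, 3)
dq = im(q, delta_target)
loss = torch.norm(fm(q, dq) - delta_target, dim=-1).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

Presumably the forward model would be trained first (e.g., bootstrapped from a coarse FK model) so that its gradients provide a meaningful training signal for the inverse model.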

2. Previous Studies

This section reviews previous studies on visual servoing for reaching, internal model establishment, and inner rehearsal.
To avoid the drawbacks of the conventional IK-based approach, researchers implement the following.
  • Researchers use image-based visual servoing to construct a closed-loop control so that the reaching process can be more robust than that without visual information.
  • Researchers build refined internal models for robots using deep neural networks. After coarse IK-based models generate commands, researchers adjust the commands with learning-based models to eliminate the influence of potential measurement errors.
  • Inner rehearsal is applied before the commands are actually executed. The original commands are adjusted and then executed according to the result of inner rehearsal.

3. Overall Framework

This section describes the proposed robot arm reaching approach in detail, introducing the overall framework, the establishment of the internal models, and the inner-rehearsal-based motion planning approach.
The overall framework of the proposed approach is shown in Figure 1. The proposed method comprises four blocks: (1) visual information processing, (2) target-driven planning, (3) inner rehearsal, and (4) command execution.
Figure 1. The overall framework of the robot arm reaching approach based on inner rehearsal. Purple shading marks the inner-rehearsal blocks, distinguishing them from visual information processing. Cylinders represent models, while rectangles represent values.
  • The target position in Cartesian space is generated after the robot sees the target object through the visual perception module: the visual stimulus is converted into the required visual information, and intrinsic motivation then drives the generation of the reaching target [1][19].
  • The desired movement is derived from the relative position between the target and the end-effector. The inverse model generates the motion command based on the current arm state and the expected movement; each movement is kept to a small-scale displacement of the end-effector toward the target.
  • The forward model predicts the result of the motion command without actual execution. The prediction for the current movement is taken as the next state of the robot, so that the next motion command can be generated accordingly. In this way, a sequence of motion commands is produced, and the robot repeats (2) and (3) until the predicted end-effector position reaches the target (a minimal sketch of this rehearsal loop is given after this list).
  • The robot executes these commands and reaches the target.
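Reading steps (2) and (3) as a rehearsal loop, a minimal sketch could look like the following. The `inverse_model` and `forward_model` callables, the 0.05 m step bound, the tolerance, and the toy linear demo are assumptions standing in for the learned internal models, not the researchers' implementation.

```python
import numpy as np

def plan_by_inner_rehearsal(q0, x0, target, inverse_model, forward_model,
                            tol=0.01, max_steps=100):
    """Mentally rehearse a command sequence: the inverse model generates a command for a
    small displacement toward the target, the forward model predicts its outcome, and the
    prediction becomes the state for the next command. Nothing is executed on the robot."""
    q, x = np.array(q0, float), np.array(x0, float)
    commands = []
    for _ in range(max_steps):
        if np.linalg.norm(target - x) < tol:
            break
        delta = target - x
        delta = delta * min(1.0, 0.05 / (np.linalg.norm(delta) + 1e-9))  # small-scale step
        dq = inverse_model(q, delta)      # motion command for this small displacement
        x = x + forward_model(q, dq)      # predicted outcome replaces real execution
        q = q + dq
        commands.append(dq)
    return commands

# Toy demo: a fixed linear "arm" whose forward model is x_dot = J @ dq.
J = np.array([[0.3, 0.1, 0.0], [0.0, 0.25, 0.1], [0.1, 0.0, 0.3]])
cmds = plan_by_inner_rehearsal(q0=np.zeros(3), x0=np.zeros(3),
                               target=np.array([0.2, 0.1, 0.15]),
                               inverse_model=lambda q, d: np.linalg.pinv(J) @ d,
                               forward_model=lambda q, dq: J @ dq)
print(len(cmds), "rehearsed commands")
```

Only after the rehearsed sequence predicts arrival at the target would the commands be sent to the real arm, as in step (4) above.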

4. Experiments

To evaluate the effectiveness of the proposed reaching approach based on inner rehearsal, several experiments are conducted. This section introduces the experimental settings and analyses of the results. Since visual perception is not the focus of this work, the researchers simply process the image in HSV and depth space to extract the target information, which is encoded as a Cartesian (x, y, z) position.
The proposed approach is verified on the Baxter robot and the humanoid robot PKU-HR6.0 II. To track the position of the robot arm's end, the researchers attach a red marker to the end-effector, whose position the robot detects throughout the experiments.
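A rough sketch of this kind of HSV-plus-depth target extraction is given below, assuming a pinhole camera model. The HSV bounds, camera intrinsics, and the synthetic test frame are generic assumptions, not values used in the experiments.

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics; real values come from camera calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
# Assumed HSV range for red; real red may also need a second range near H = 180.
LOWER_RED, UPPER_RED = np.array([0, 120, 70]), np.array([10, 255, 255])

def target_position(bgr, depth_m):
    """Extract the red target's (x, y, z) in camera coordinates from a colour + depth frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                        # target not visible
    u, v = xs.mean(), ys.mean()            # centroid of the red blob in pixels
    z = float(np.median(depth_m[ys, xs]))  # median depth over the blob is robust to holes
    x = (u - CX) * z / FX                  # back-project with the pinhole model
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Synthetic example: a red square at a known depth of 0.8 m.
bgr = np.zeros((480, 640, 3), np.uint8); bgr[200:240, 300:340] = (0, 0, 255)
depth = np.full((480, 640), 0.8, np.float32)
print(target_position(bgr, depth))
```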

5. Conclusions

A robot arm reaching approach based on inner rehearsal is proposed. The internal models are pre-trained with a coarse FK model and fine-tuned in the real environment; this two-stage learning improves learning efficiency and reduces mechanical wear and tear. The motion planning approach based on inner rehearsal improves reaching performance by predicting the result of each motion command with the forward model. Based on the relative distance to the target, the whole planning process is divided into proprioception-based rough reaching planning and visual-feedback-based iterative adjustment planning, which further improves reaching performance. The experimental results show that the proposed method improves effectiveness in robot arm reaching tasks. For arm manipulation more broadly, this work adopts a relatively fixed two-stage framework; human cognitive mechanisms have great potential to let agents learn such strategic frameworks for grasping on their own, making robots better suited to unknown, complex scenes. Furthermore, delving deeper into human cognitive mechanisms and investigating how these insights can enhance robot learning and decision making may unlock new possibilities for robotic capabilities and their seamless integration into various applications.

Abbreviations

The following abbreviations are used in this manuscript:

FK Forward Kinematics
IK Inverse Kinematics
IMC Internal Model Control
DIM Divided Inverse Model
FM Forward Kinematics Model
IM Inverse Kinematics Model
DoF Degree of Freedom

References

  1. Rosenbaum, D.A. Human Motor Control; Academic Press: Cambridge, MA, USA, 2009.
  2. Cangelosi, A.; Schlesinger, M. Developmental Robotics: From Babies to Robots; MIT Press: Cambridge, MA, USA, 2015.
  3. Rüßmann, M.; Lorenz, M.; Gerbert, P.; Waldner, M.; Justus, J.; Engel, P.; Harnisch, M. Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries; Boston Consulting Group: Boston, MA, USA, 2015; Volume 9, pp. 54–89.
  4. Bushnell, E.W.; Boudreau, J.P. Motor development and the mind: The potential role of motor abilities as a determinant of aspects of perceptual development. Child Dev. 1993, 64, 1005–1021.
  5. Hofsten, C.V. Action, the foundation for cognitive development. Scand. J. Psychol. 2009, 50, 617–623.
  6. Garcia, C.E.; Morari, M. Internal model control. A unifying review and some new results. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 308–323.
  7. Wolpert, D.M.; Miall, R.; Kawato, M. Internal models in the cerebellum. Trends Cogn. Sci. 1998, 2, 338–347.
  8. Wolpert, D.; Kawato, M. Multiple paired forward and inverse models for motor control. Neural Netw. 1998, 11, 1317–1329.
  9. Kavraki, L.E.; LaValle, S.M. Motion Planning. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 139–162.
  10. Angeles, J. On the Numerical Solution of the Inverse Kinematic Problem. Int. J. Robot. Res. 1985, 4, 21–37.
  11. Shimizu, M. Analytical inverse kinematics for 5-DOF humanoid manipulator under arbitrarily specified unconstrained orientation of end-effector. Robotica 2014, 33, 747–767.
  12. Tong, Y.; Liu, J.; Liu, Y.; Yuan, Y. Analytical inverse kinematic computation for 7-DOF redundant sliding manipulators. Mech. Mach. Theory 2021, 155, 104006.
  13. Lee, C.; Ziegler, M. Geometric Approach in Solving Inverse Kinematics of PUMA Robots. IEEE Trans. Aerosp. Electron. Syst. 1984, AES-20, 695–706.
  14. Liu, T.; Nie, M.; Wu, X.; Luo, D. Developing Robot Reaching Skill via Look-ahead Planning. In Proceedings of the 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 21–22 August 2019; pp. 25–31.
  15. Bouganis, A.; Shanahan, M. Training a spiking neural network to control a 4-DoF robotic arm based on Spike Timing-Dependent Plasticity. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  16. Huang, D.W.; Gentili, R.; Reggia, J. A Self-Organizing Map Architecture for Arm Reaching Based on Limit Cycle Attractors. EAI Endorsed Trans. Self-Adapt. Syst. 2016, 16, e1.
  17. Lampe, T.; Riedmiller, M. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8.
  18. Gu, S.; Holly, E.; Lillicrap, T.; Levine, S. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3389–3396.
  19. Marr, D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information; W.H. Freeman and Company: San Francisco, CA, USA, 1982.