Gulletta, G.; Erlhagen, W.; Bicho, E. Human-like Arm Motion Generation. Encyclopedia. Available online: (accessed on 20 June 2024).
Human-like Arm Motion Generation

In the last decade, the objectives outlined by the needs of personal robotics have led to the rise of new biologically-inspired techniques for arm motion planning. This entry presents a literature review of the most recent research on the generation of human-like arm movements in humanoid and manipulation robotic systems. Search methods and inclusion criteria are described. The studies are analyzed taking into consideration the sources of publication, the experimental settings, the type of movements, the technical approach, and the human motor principles that have been used to inspire and assess human-likeness. 

motion planning; motion control; motion learning; human-like motion; arm motion; robotics; humanoid robots

1. Introduction

New questions inevitably arise during the design of novel Human-Robot Interaction (HRI) or Collaboration (HRC) paradigms and often concern the authority and the autonomy level of new intelligent devices[1]. Ethical and anthropological issues have recently been considered by the government of Japan with the initiative called Society 5.0[2]. As described by Gladden[2], human society has evolved through a Society 1.0 of hunter-gatherers, a Society 2.0 of farmers, a Society 3.0 born of the Industrial Revolution, and the current Society 4.0, which adds value to industry by digitally connecting information networks (an “information society”). Society 5.0 is expected to be a technologically post-humanized society in which humans and robots coexist in the same environment and work to improve the quality of life by offering customized services that cope with various needs. The new social paradigm differs from the current Society 4.0 in that robots will no longer be the passive tools they are today. On the contrary, they are expected to be active agents capable of proactively collecting data, making decisions, and behaving in a friendly manner, so as to keep human beings at the core of society, but not alone within it. Owing to this ongoing process of post-humanization, the anticipated society depends significantly on emerging transformative technologies that will reshape interactions among individuals to an unprecedented degree. For example, a humanoid robot can be considered a personal helper designed to respond to the needs of human beings. Such an “artificial man”[3] could carry and manipulate objects for people with disabilities, replace some cognitive functions, or take part in the education of children. A robot with these features is human-centered because it is meant to interact, collaborate, and work in unstructured environments with human beings[4].
For this reason, research in robotics and artificial intelligence must be enriched by other scientific disciplines, such as ethics, psychology, anthropology, and neuroscience, to simulate or mimic life-like activities and appearance[2][4][5][6]. These new abilities in robots significantly influence how they are perceived by interacting human partners.

Studies have revealed a very positive attitude towards robots and the idea of being surrounded by them in different personal and societal contexts[7][8]. People commonly expect from robots very pragmatic daily help in domestic, entertainment, and educational applications. Moreover, having intelligent devices that perform repetitive or dangerous tasks is desirable, because safety and monotony appear to be the most important issues in industrial settings. In particular, robotics in the upcoming Society 5.0 is expected to augment the capabilities of human workers instead of replacing them. Industry has so far been characterized by robots designed for the complete automation of workflows, while in the near future it will include robots intended to satisfy the needs of human co-workers, with a consequent increase in the productivity of the entire company[8]. Therefore, in this new human-centered robotics, augmentation will gradually replace automation through the introduction of novel artificially intelligent devices that enhance collaboration in shared workspaces. For instance, the fundamental principles of motor interpretation behind successful human-human joint actions can be exploited when adopting motion planning strategies[9][10][11][12]. Koppenborg et al.[13] showed that the predictability and velocity of the motion of robotic manipulators significantly influence the performance of collaboration with human partners. Specifically, high-speed movements increase the anxiety and risk perception of human co-workers, who consequently feel uncomfortable and act inefficiently. Time parametrization therefore also plays an essential role in any motion planning process intended for human-robot interaction. Body motion in robotics has also recently been considered a language for communicating the emotional states of social robots[14].
That study revealed that social robots for educational support made a stronger impression on learners when body language was used than when it was not. However, the movements of robots without a social role are also interpreted as social cues by human observers[15]. There is therefore an automatic and implicit cognitive process that attributes mental states to robots in order to anticipate their behaviors, independently of their assigned purposes. For these reasons, designers of human-centered robots and roboticists across research areas cannot forgo accurate motion planning strategies if interactions with humans are to be socially acceptable.

In the past decade, the action research area (of the so-called “New Robotics” introduced by Schaal[4]) has seen the proposal of a wide range of techniques for human-like arm motion generation. This review provides an overview of the primary studies on novel human-like arm motion planners that are meant to enhance human-robot interactions and collaborations. Through the analysis and assessment of the most recent studies, major limitations can be identified and future investigations directed.

2. Literature Review

The aim of this paper was to provide a comprehensive background on human-like arm motion generation for robots by reviewing the most recent corresponding literature. An extensive description of the different approaches and techniques is essential for identifying current issues and defining new lines of investigation. Special attention was paid to the criteria used for evaluating the human-likeness of the generated movements, to the tools and robotic devices used for experimentation, to the physical nature of the applied methodologies, and to obstacle avoidance. The reliability, performance, advantages, and disadvantages of the methods were analyzed and discussed to summarize the current state of the art in the generation of human-like arm motion.

The Association for Computing Machinery (ACM) Digital Library, IEEE Xplore, Scopus, and Web of Science were searched, as they are the sources of the vast majority of peer-reviewed publications on human-robot interaction and, more generally, robotics. The following advanced boolean query was used: (TITLE-ABS-KEY (arm) OR TITLE-ABS-KEY (upper-limb)) AND (TITLE-ABS-KEY (human-like) OR TITLE-ABS-KEY (legible) OR TITLE-ABS-KEY (predictable)) AND (TITLE-ABS-KEY (motion) OR TITLE-ABS-KEY (movement) OR TITLE-ABS-KEY (trajectory)) AND (humanoid OR robot) AND (planning OR generation) AND PUBYEAR > 2005 AND (LIMIT-TO (LANGUAGE, “English”)) AND (LIMIT-TO (SRCTYPE, “p”) OR LIMIT-TO (SRCTYPE, “j”)).

Sources published in peer-reviewed conferences or journals were included, while unpublished or non-peer-reviewed manuscripts, book chapters, posters, and abstracts were excluded. Anything published before 2006 was also excluded, because such work might not reflect the capabilities that robots have today. For the sake of completeness, papers whose methodologies were tested only in simulation were included alongside those tested on real robotic platforms. The focus is on methods and techniques capable of generating human-like arm movements on humanoid and generic robotic devices. The minimum inclusion requirement for the proposed planning techniques is the capability of producing arm and hand trajectories that show typical human-like characteristics, as widely analyzed and described in psychology and neurophysiology.

3. Relevant Studies in Human-like Arm Motion Generation 

A literature review of the most recent techniques for human-like arm motion generation has been presented. The analysis included 54 papers, which were first classified according to their sources. The vast majority of the papers were found in Scopus, immediately followed by IEEE Xplore, Web of Science, and, with a minor contribution, the ACM Digital Library. The equipment of the included studies was roughly equally divided among simulators, robotic manipulators, and anthropomorphic platforms. The latter devices have a particular impact on motion planning solutions because they provide a significant level of anthropomorphism that can positively influence interactions with humans[16][17]. The large majority of the reviewed papers proposed global methods that operate with kinematic variables and address the generation of trajectories in the operational space of robotic manipulators. This result might be due to the fact that a globally considered static environment can provide solutions in a low-dimensional space and a more accurate match with kinematic human-like features[18][19][20]. However, such methods might be reductive and risk ignoring human-like characteristics of the joint space, such as synchrony and bell-shaped angular velocity profiles[21]. Dynamic variables are often the focus of biologically-inspired solutions, which have been addressed only rarely because of the difficulty of transferring muscle activity to robotic manipulators. Local human-like arm motion generation methods have likewise been proposed only rarely, yet they can introduce the capability of simulating human reflexes and reacting to changing scenarios during the execution of a movement. However, these techniques only partially consider the workspace and may therefore fail to achieve human-likeness in complex tasks.

Roboticists have considered a variety of human motor control principles as inspiration and evaluation tools when proposing techniques for human-like arm motion generation. The included studies showed that researchers have mainly focused on biomimetic and human-robot mapping approaches, which can ensure a strong resemblance to human motion, acknowledged as an objective reference for an action. In particular, human-robot transfer techniques can guarantee real-time execution of movements, which is essential during tele-operation. However, most of these methods need expensive and complicated equipment for capturing human movements, databases to store them, and post-processing for feature extraction. Additionally, these techniques generally lack generalization because they usually work only for a limited range of goal-directed tasks. Similar issues affect learning-based methods that teach specific tasks to a robot by imitation or training. Although learning from observation ensures stable repeatability and a degree of adaptation to external perturbations, generalization is seldom achieved.

The kinematics of the obtained end-effector trajectories have often been compared with the hand path and hand velocity profile of human reaching to show similarities and divergences between robotic and human movements. Most of these solutions were developed for single-arm motion and based on human-like optimization principles that minimize energy under task-related constraints. Manipulation in the workspace of a robot is often neglected when kinematics is assessed, because of the complications that pick-and-place tasks would introduce into the analysis of human-likeness. The majority of the proposed methods were not tested with qualitative assessments to understand the perception of human observers. However, there is considerable scientific evidence demonstrating that, for pleasant human-robot interactions and collaborations, the movements of the robot have to be perceived as natural and predictable and must convey the intention of the underlying action without any verbal communication[12][22][23][24].
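A common reference for such kinematic comparisons is the classical minimum-jerk model of human reaching, whose hand speed is a single, symmetric, bell-shaped profile. The following minimal Python sketch illustrates it; the 0.3 m amplitude and 1 s duration are arbitrary illustrative values, not parameters taken from any reviewed study.

```python
import numpy as np

def min_jerk_speed(t, T, d):
    """Hand speed along a straight minimum-jerk reach of amplitude d and
    duration T: a single, symmetric, bell-shaped peak with zero speed at
    both endpoints and a maximum of 1.875 * d / T at mid-movement."""
    tau = np.asarray(t) / T
    return (30.0 * d / T) * tau**2 * (1.0 - tau)**2

# Reference profile for a 0.3 m reach lasting 1 s:
t = np.linspace(0.0, 1.0, 101)
v = min_jerk_speed(t, T=1.0, d=0.3)
```

A robot end-effector speed profile can then be compared against such a reference, e.g., by checking for a single symmetric peak at mid-movement.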

The smoothness of a trajectory has often been measured in relation to the third derivative of position, i.e., the jerk, which is a compact and intuitive index of human-likeness. However, smoothness is not exclusively connected with jerk; it must be assessed more comprehensively with other validation measures, such as the number of movement units[24]. The application of the two-thirds power law (2/3-PL) is relatively simple for the generation of biologically-inspired planar movements, such as writing or drawing. However, the 2/3-PL has been shown to introduce systematic errors in the complete shape of a trajectory and needs to be extended to explain the regularities of three-dimensional reaching movements[25]. Although path planarity in the operational space may significantly simplify a planning problem, this human motor principle should emerge from other independent human-like motor principles instead of being imposed as a constraint of the generation process; otherwise, there is a risk of complicating the planning problem and decreasing overall performance. Repeating motion by imitating observed human movements can certainly increase predictability and simplify learning, but it can seriously undermine generalization to different tasks. With two optimization processes in sequence, the Spatio-Temporal Correspondence method[26] can negatively affect real-time interactions with humans, but it is a versatile tool that can be used either to assess or to generate human-like body trajectories from a single sample. Human-like arm motion has also been achieved by the RULA-driven technique[27], which introduced an upper-limb assessment of the inverse kinematics and sampling of the search space to enhance ergonomics, but ignored human-like time parametrizations.
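As a rough illustration of these two indices, the sketch below computes a duration- and amplitude-invariant integrated-squared-jerk score for a uniformly sampled one-dimensional position signal, together with the 2/3-PL speed-curvature relation. The normalization chosen here is one common variant, not the specific metric of any reviewed study; for an ideal minimum-jerk reach the score evaluates to 720 analytically.

```python
import numpy as np

def dimensionless_jerk(x, dt):
    """Integrated squared jerk of a uniformly sampled 1-D position signal,
    scaled by duration**5 / amplitude**2 so the score is invariant to
    movement time and size (lower = smoother; an ideal minimum-jerk reach
    gives 720 analytically)."""
    velocity = np.gradient(x, dt)
    acceleration = np.gradient(velocity, dt)
    jerk = np.gradient(acceleration, dt)
    duration = dt * (len(x) - 1)
    amplitude = x.max() - x.min()
    return float(np.sum(jerk**2) * dt * duration**5 / amplitude**2)

def power_law_speed(curvature, gain=1.0):
    """Two-thirds power law: tangential speed v = gain * curvature**(-1/3)."""
    return gain * np.power(curvature, -1.0 / 3.0)
```

The numerically estimated score stays close to 720 for a sampled minimum-jerk profile and grows for jerkier trajectories, which is what makes it a convenient smoothness index.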

Most of the included studies addressed single-arm reaching movements, while human-like dual-arm applications are only emerging and pick-and-place tasks still deserve more attention. The simple nature of motion without manipulation, together with the vast literature on the psychology of human reaching, has probably encouraged roboticists in this direction. Replicating human writing, drawing, and rhythmic movements has proven to be a small research branch, which seems isolated because it is not easily integrated with more classical methods for generating picking, placing, and reaching motion.

The prevention of self-collisions and the avoidance of obstacles in the workspace of a robot were rarely considered in the reviewed papers, even though these are necessary features for interacting devices in human-centered environments. Additionally, many of the proposed solutions ignored human-like kinematic analysis during obstacle avoidance and addressed collision prevention under simplifying assumptions, which reduce their range of applicability.

Roboticists have usually described the genesis of human-like motion as a connected system of different functional modules[28] in which optimization, learning, and control are integrated. However, the included studies do not refer to a common framework of motor generation for optimizing human motor models, learning internal models, and controlling against external disturbances. Human motor minimization principles are applied at a local level, while learning techniques are used at a global level for clustering motion, for extracting regularities from captured human movements, and for implementing decision-making settings. Studies have shown that the synthesis of motor solutions happens in the Central Nervous System before execution[29]. However, the reviewed controllers act in the absence of prior optimized trajectories that could guide the process of on-line regulation. Moreover, these controllers are applied to a few degrees of freedom and remain to be tested on more complex redundant manipulators and interactive tasks.

A categorization of the included studies according to their most distinctive methodological features permits an analysis of their possible fields of application. With a significant presence in the current literature on human-like arm motion generation, biomimetic techniques for generating single-arm reaching movements certainly endow human operators with novel capabilities. For instance, efficient tele-operation of a robotic manipulator requires accurate human-robot mapping methods for real-time execution. Additionally, biomimetic methods privilege the mimicking of simple reaching motion because it is often sufficient for tele-operated tasks, and the activation of different types of end-effectors might not resemble typical human prehension. Due to the complexity of joint manipulation, dual-arm human-like mimicking is at an early stage of investigation and is expected to advance in the near future. While such techniques aim at directly augmenting human capabilities, the remaining methods aim at a more indirect augmentation, achieved through human-robot interactions that resemble human-human interactions. In these situations, arm movements are planned to achieve human-like kinematic characteristics and high levels of smoothness so as to meet the expectations of human observers and co-workers. Full autonomy of such human-like devices is achieved by, for instance, their capability to successfully accomplish pick-and-place tasks in office-like and industrial-like scenarios, which are often cluttered with generic obstacles. The analysis of the included studies also showed that an advanced level of motor independence is reached when internal human-like functionals are minimized, repetitive human-like behaviors are learned, and human-like reactions to external perturbations are controlled. It is also worth noting that, in a broader sense, the generation of human-like arm motion makes robots more human-centered.
Their ultimate services augment the human capacity for action because their goal is not to replace human operators but, instead, to interact and collaborate with them.


  1. Sheridan, T.B. Human-Robot Interaction: Status and Challenges. Hum. Factors J. Hum. Factors Ergon. Soc. 2016, 58, 525–532.
  2. Gladden, M.E. Who Will Be the Members of Society 5.0? Towards an Anthropology of Technologically Posthumanized Future Societies. Soc. Sci. 2019, 8, 148.
  3. Fukuda, T.; Michelini, R.; Potkonjak, V.; Tzafestas, S.; Valavanis, K.; Vukobratovic, M. How far away is “artificial man”. IEEE Robot. Autom. Mag. 2001, 8, 66–73.
  4. Schaal, S. The new robotics: Towards human-centered machines. HFSP J. 2007, 1, 115–126.
  5. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166.
  6. Wiese, E.; Metta, G.; Wykowska, A. Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Front. Psychol. 2017, 8, 1–19.
  7. Ray, C.; Mondada, F.; Siegwart, R. What do people expect from robots? In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3816–3821.
  8. Welfare, K.S.; Hallowell, M.R.; Shah, J.A.; Riek, L.D. Consider the Human Work Experience When Integrating Robotics in the Workplace. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 75–84.
  9. Sebanz, N.; Bekkering, H.; Knoblich, G. Joint action: Bodies and minds moving together. Trends Cogn. Sci. 2006, 10, 70–76.
  10. Bicho, E.; Louro, L.; Hipolito, N.; Erlhagen, W. A dynamic field approach to goal inference and error monitoring for human-robot interaction. In Proceedings of the International Symposium on New Frontiers in Human-Robot Interaction, Edinburgh, UK, 6–9 April 2009; pp. 31–37.
  11. Glasauer, S.; Huber, M.; Basili, P.; Knoll, A.; Brandt, T. Interacting in time and space: Investigating human-human and human-robot joint action. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 252–257.
  12. Bicho, E.; Erlhagen, W.; Louro, L.; Costa e Silva, E. Neuro-cognitive mechanisms of decision making in joint action: A human-robot interaction study. Hum. Mov. Sci. 2011, 30, 846–868.
  13. Koppenborg, M.; Nickel, P.; Naber, B.; Lungfiel, A.; Huelke, M. Effects of movement speed and predictability in human-robot collaboration. Hum. Factors Ergon. Manuf. Serv. Ind. 2017, 27, 197–209.
  14. Tanizaki, Y.; Jimenez, F.; Yoshikawa, T.; Furuhashi, T. Impression Investigation of Educational Support Robots using Sympathy Expression Method by Body Movement and Facial Expression. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018; pp. 1254–1258.
  15. Erel, H.; Shem Tov, T.; Kessler, Y.; Zuckerman, O. Robots are Always Social. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems—CHI EA ’19; ACM Press: New York, NY, USA, 2019; pp. 1–6.
  16. Duffy, B.R. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190.
  17. Strait, M.K.; Floerke, V.A.; Ju, W.; Maddox, K.; Remedios, J.D.; Jung, M.F.; Urry, H.L. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots. Front. Psychol. 2017, 8, 1–17.
  18. Rosenbaum, D.A.; Meulenbroek, R.J.; Vaughan, J.; Jansen, C. Posture-based motion planning: Applications to grasping. Psychol. Rev. 2001, 108, 709–734.
  19. Morasso, P. Spatial control of arm movements. Exp. Brain Res. 1981, 42, 223–227.
  20. Milner, T.E. A model for the generation of movements requiring endpoint precision. Neuroscience 1992, 49, 487–496.
  21. Breteler, M.D.K.; Meulenbroek, R.G.J. Modeling 3D object manipulation: Synchronous single-axis joint rotations? Exp. Brain Res. 2006, 168, 395–409.
  22. Dragan, A.D.; Lee, K.C.T.; Srinivasa, S.S. Legibility and predictability of robot motion. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 301–308.
  23. Bisio, A.; Sciutti, A.; Nori, F.; Metta, G.; Fadiga, L.; Sandini, G.; Pozzo, T. Motor Contagion during Human-Human and Human-Robot Interaction. PLoS ONE 2014, 9, e106172.
  24. Chang, J.J.; Yang, Y.S.; Guo, L.Y.; Wu, W.L.; Su, F.C. Differences in reaching performance between normal adults and patients post stroke a kinematic analysis. J. Med. Biol. Eng. 2008, 28, 53–58.
  25. Pollick, F.E.; Maoz, U.; Handzel, A.A.; Giblin, P.J.; Sapiro, G.; Flash, T. Three-dimensional arm movements at constant equi-affine speed. Cortex 2009, 45, 325–339.
  26. Gielniak, M.; Liu, K.; Thomaz, A. Generating human-like motion for robots. Int. J. Robot. Res. 2013, 32, 1275–1301.
  27. Zacharias, F.; Schlette, C.; Schmidt, F.; Borst, C.; Rossmann, J.; Hirzinger, G. Making planned paths look more human-like in humanoid robot manipulation planning. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1192–1198.
  28. Burdet, E.; Franklin, D.W.; Milner, T.E. Human Robotics; MIT Press: Cambridge, MA, USA, 2013.
  29. Schwartz, A.B. Movement: How the Brain Communicates with the World. Cell 2016, 164, 1122–1135.
Update Date: 28 Sep 2021