Expressive Qualifiers, Feature Learning and Expressive Movement Generation

Expressive body movements are defined by low- and high-level descriptors. Low-level descriptors capture kinematic or dynamic quantities such as velocity and acceleration, whereas high-level descriptors combine low-level features to best describe a movement's perceptual or semantic qualities. Notable high-level systems include Pelachaud's qualifiers, Wallbott's descriptors, and the Laban Movement Analysis (LMA) system, which is commonly used for dance performance evaluation.

Keywords: human–robot interaction; human-centered robotics; human-in-the-loop; human factors

1. Introduction

Bartra [1] asserts that symbolic elements, including speech, social interactions, music, art, and movement, shape human consciousness. This theory extends to interactions with society and other living beings [2], suggesting that robotic agents, as potential expressive and receptive collaborators [3], should also be integrated into this symbolic framework. However, current human–robot interactions, whether via generated voices, movement, or visual cues [4][5][6][7][8][9], are often anthropomorphized [10], leading to challenges due to unsolved problems in natural language processing [11][12] and the need for users to familiarize themselves with system-specific visual cues [13]. Moreover, these systems still struggle with context understanding, adaptability, and forethought [14][15]. The ideal generalized agent capable of formulating contextually appropriate responses remains unrealized [16]. Nonetheless, body movement offers a promising avenue for enhancing these interactions.
In the dance community, body movement is acknowledged for its linguistic properties [17], from minor gestures [18] to significant expressive movements conveying intent or state of mind [19]. This expressiveness can be employed in robots to create meaningful and reliable motion [20][21][22], leveraging elements such as legibility [23], language knowledge [24], and robust descriptors [25][26]. In this way, robots can build bonds and rapport with users, persuade, and facilitate collaborative tasks [27][28][29]. Currently, however, the selection of these expressive qualities often relies on user preference or expert design [20][30], limiting motion variability and affecting the human perception of the robot's expression [31].
In [32], the authors demonstrated the need for explainable interaction between embodied agents and humans and suggested that expressivity could provide the terms a robot needs to communicate its internal state effectively. Ref. [33] points out that this representation will be required for the realization of sounds and complex interactions with humans. Movement could then be the medium for realizing such a system (this is further visualized in the following dance video from Boston Dynamics: https://www.youtube.com/watch?v=fn3KWM1kuAw, accessed on 20 November 2023). As discussed in [34], modeling these human factors can be accomplished using machine-learning techniques. However, direct human expressivity is often set aside in the literature in favor of definitions that can serve as design guidelines for specific embodied agents or interactive technologies [35]. This raises the question of whether human expressivity and expressive movement can be relied upon to communicate this sense effectively. Moreover, can the robot recognize this intent and replicate the same expressive behavior for the user? The robot should communicate its internal state, and it should do so in a manner understandable to humans. This research aims to answer these questions by exploring how human expressivity can be transmitted to any robot morphology. The resulting approach is intended to generalize across robots and to make it possible to ascertain whether the generated expressive behavior contains the necessary qualities. Addressing this challenge can enhance human–robot interaction and open scenarios where human users effectively modify and understand robot behavior by demonstrating their expressive intent.

2. Expressive Qualifiers

The LMA system explores the interaction between effort, space, body, and shape, serving as a link between movement and language [36]. It focuses on how the body moves (body and space), its form during motion (shape), and the qualitative aspects of dynamics, energy, and expressiveness (effort). Because it quantifies expressive intent, the Effort component of LMA has been widely used in animation and robotics [37], and is utilized in this research to describe movement expressiveness.
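As a rough illustration of how such Effort qualities can be estimated from raw kinematics, the following sketch derives simple proxies for the Time, Weight, Space, and Flow factors from a sampled trajectory. The particular mappings are illustrative assumptions in the spirit of quantifications such as [25][36], not the formulation used in this research.

```python
import numpy as np

def effort_proxies(positions: np.ndarray, dt: float) -> dict:
    """Estimate rough Laban Effort-style proxies from a sampled 3D trajectory.

    positions: array of shape (T, 3), Cartesian samples of a marker or end-effector.
    dt: sampling period in seconds.
    The mappings below are illustrative assumptions, not a canonical LMA formulation.
    """
    vel = np.gradient(positions, dt, axis=0)            # low-level descriptor: velocity
    acc = np.gradient(vel, dt, axis=0)                   # low-level descriptor: acceleration
    jerk = np.gradient(acc, dt, axis=0)

    speed = np.linalg.norm(vel, axis=1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    displacement = np.linalg.norm(positions[-1] - positions[0])

    return {
        # Time: sudden vs. sustained -> mean acceleration magnitude
        "time": float(np.mean(np.linalg.norm(acc, axis=1))),
        # Weight: strong vs. light -> kinetic-energy-like term (unit mass)
        "weight": float(np.mean(speed ** 2)),
        # Space: direct vs. indirect -> path straightness (1 = perfectly direct)
        "space": float(displacement / (path_length + 1e-9)),
        # Flow: bound vs. free -> mean jerk magnitude (smoothness proxy)
        "flow": float(np.mean(np.linalg.norm(jerk, axis=1))),
    }

# synthetic 2-second clip sampled at 100 Hz
traj = np.cumsum(np.random.randn(200, 3) * 0.01, axis=0)
print(effort_proxies(traj, dt=0.01))
```

In practice, hand-crafted proxies of this kind would be normalized per recording before being compared across movements or fed to a learning system.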
Movements are often associated with emotions, and numerous psychological descriptors have been used to categorize body movement [38]. Scales like Pleasure–Arousal–Dominance (PAD) and Valence–Arousal–Dominance (VAD) have been used in animation and robotics [24][39][40]. However, manually selecting these qualifiers can introduce bias [41]. While motion and behavioral qualifiers can improve user engagement with animated counterparts [42][43], no unified system effectively combines affective and expressive qualities.
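To make the use of such scales concrete, the minimal sketch below encodes an emotional target as a PAD vector and matches it to the nearest predefined expressive profile. The coordinate values are rough illustrative placements, not validated ratings, and the manual nature of this mapping is precisely the source of bias noted above.

```python
import numpy as np

# Illustrative (not validated) Pleasure-Arousal-Dominance coordinates in [-1, 1].
PAD_PROFILES = {
    "joy":     np.array([0.8, 0.5, 0.4]),
    "anger":   np.array([-0.6, 0.7, 0.6]),
    "sadness": np.array([-0.6, -0.4, -0.5]),
    "calm":    np.array([0.4, -0.6, 0.1]),
}

def closest_profile(target_pad):
    """Return the predefined expressive profile nearest to a target PAD vector."""
    target = np.asarray(target_pad, dtype=float)
    return min(PAD_PROFILES, key=lambda name: np.linalg.norm(PAD_PROFILES[name] - target))

print(closest_profile([0.7, 0.4, 0.3]))  # -> "joy"
```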

3. Feature Learning

The idea of feature extraction and exploitation has seen widespread use and advancement in classifying time series across diverse domains [44][45][46]. These techniques have also been applied in image processing and natural language processing to extract meaning and establish feature connections [47][48]. Such methods have been repurposed for cross-domain applications, like the co-attention mechanism that combines image and sentence representations as feature vectors to decipher their relationships [49]. These mechanisms can analyze and combine latent encodings to create new style variations, as seen in music performances [50]. The results demonstrate that these networks can reveal a task’s underlying qualities, context, meaning, and style.
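As an illustration of how such latent encodings can be obtained for motion time series, the sketch below defines a small sequence autoencoder in PyTorch; the architecture, dimensions, and training step are assumptions chosen for clarity rather than the networks used in the cited works.

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """Minimal sequence autoencoder: a GRU encoder compresses a motion clip
    (T x joint_dim) into a latent feature vector; a GRU decoder reconstructs it."""

    def __init__(self, joint_dim: int = 21, latent_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden)
        self.decoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, joint_dim)

    def encode(self, motion):                        # motion: (B, T, joint_dim)
        _, h = self.encoder(motion)                  # h: (1, B, hidden)
        return self.to_latent(h[-1])                 # (B, latent_dim)

    def decode(self, z, motion_shifted):
        h0 = self.from_latent(z).unsqueeze(0)        # seed the decoder with the latent code
        out, _ = self.decoder(motion_shifted, h0)
        return self.readout(out)

    def forward(self, motion):
        z = self.encode(motion)
        # teacher forcing with the input shifted by one frame
        shifted = torch.cat([torch.zeros_like(motion[:, :1]), motion[:, :-1]], dim=1)
        return self.decode(z, shifted), z

# reconstruction training step (sketch)
model = MotionAutoencoder()
clip = torch.randn(8, 120, 21)                       # batch of 8 clips, 120 frames, 21 DoF
recon, z = model(clip)
loss = nn.functional.mse_loss(recon, clip)
loss.backward()
```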
When applied to motion, the formation and generation of movement can be conducted directly in the feature or latent space, where the representation contains information about the task and any anomalies or variations [51]. Studies have shown that multi-modal signals can be similarly represented by leveraging these sub-spaces [52]. The resultant latent manifolds and topologies can be manipulated to generalize to new examples [53].
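Continuing the illustrative autoencoder above, a new style variation can be produced directly in the latent space, for example by blending the codes of a neutral and an expressive recording; again, this is a sketch of the general idea rather than a specific published method.

```python
import torch

# assumes `model` is the trained MotionAutoencoder from the previous sketch;
# the two random clips stand in for a neutral and an expressive recording of the same task
neutral_clip = torch.randn(1, 120, 21)
expressive_clip = torch.randn(1, 120, 21)

with torch.no_grad():
    z_neutral = model.encode(neutral_clip)               # (1, latent_dim)
    z_expressive = model.encode(expressive_clip)

    alpha = 0.5                                          # blend ratio between the two styles
    z_new = (1 - alpha) * z_neutral + alpha * z_expressive

    # decode the blended code, conditioning on the neutral clip's frames
    shifted = torch.cat([torch.zeros_like(neutral_clip[:, :1]), neutral_clip[:, :-1]], dim=1)
    new_motion = model.decode(z_new, shifted)            # (1, 120, 21): a new style variation
```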

4. Style Transfer and Expressive Movement Generation

Previous research on style transfer focused on pose generation systems that aim to generate human-like poses from human input, albeit with limitations in creating highly varied and realistic poses [54][55][56]. To address this, Generative Adversarial Networks (GANs), attention mechanisms, and transformers have been introduced; while they improve pose generation performance, they are usually confined to specific morphologies, compromising their generalizability [57][58][59].
Research suggests that a robot’s movement features can be adapted, with human input specifying the guiding features of the robot’s motion; this serves as a foundation for a divide-and-conquer strategy for learning user-preferred paths [60]. A system built on these features assists the robot’s pose generation, showing that human motion can shape the basis functions to align with the user’s task preferences.
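A minimal sketch of this idea, under the simplifying assumption of a linear feature-weighted trajectory cost (in the spirit of feature-based reward learning [26][60], not their actual implementations): candidate paths are scored with a weighted sum of hand-picked motion features, and the weights are nudged toward the features of the path the human prefers.

```python
import numpy as np

def features(traj: np.ndarray) -> np.ndarray:
    """Hand-picked guiding features of a path (T x 3); the choice is an illustrative assumption."""
    length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))       # total path length
    height = np.mean(traj[:, 2])                                         # average height
    smooth = np.mean(np.linalg.norm(np.diff(traj, 2, axis=0), axis=1))   # curvature proxy
    return np.array([length, height, smooth])

def score(traj, w):
    return w @ features(traj)            # lower is better (a cost)

def update_from_preference(w, preferred, rejected, lr=0.1):
    """Shift the weights so the preferred trajectory scores lower than the rejected one."""
    return w - lr * (features(preferred) - features(rejected))

# usage: two candidate paths, the user prefers the smoother, lower one
t = np.linspace(0, 1, 50)[:, None]
path_a = np.hstack([t, np.zeros_like(t), 0.2 + 0.0 * t])                 # low, straight
path_b = np.hstack([t, np.sin(6 * t) * 0.2, 0.5 + 0.0 * t])              # high, wavy
w = np.zeros(3)
w = update_from_preference(w, preferred=path_a, rejected=path_b)
print(score(path_a, w) < score(path_b, w))                               # True after the update
```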
Although it has been shown that expressive characteristics can be derived from human movement and integrated into a robot arm’s control loop, the generated motions often lack legibility and variability [61]. In addition, much of the essence of higher-order expressive descriptors and affective qualities is lost or unmeasured. Although re-targeting can be used to generate expressive motion, it often faces cross-morphology implementation issues [62][63][64]. Burton emphasized that “imitation does not penetrate the hidden recesses of inner human effort” [36]. However, modulating motion through expert descriptors and exploiting kinematic redundancy can feasibly portray emotional characterizations, provided the motion is within the robot’s limits and the interaction context is suitable [65]. Therefore, effective expressive generation should consider both the user’s expressive intents and the task or capabilities of the robot.
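As a concrete example of exploiting kinematic redundancy for expression, as in [39][65], an expressive posture objective can be projected into the null space of the primary task Jacobian so that it never disturbs the end-effector task; the sketch below shows this standard resolved-rate scheme for a generic redundant arm, with the Jacobian and the target posture as placeholders.

```python
import numpy as np

def redundant_velocity(J, x_dot, q, q_expressive, k=1.0):
    """Resolved-rate control with an expressive secondary task.

    J:            task Jacobian, shape (m, n) with n > m (redundant arm)
    x_dot:        desired end-effector velocity, shape (m,)
    q:            current joint configuration, shape (n,)
    q_expressive: expressive target posture (e.g., from an Effort profile), shape (n,)
    """
    J_pinv = np.linalg.pinv(J)
    primary = J_pinv @ x_dot                          # tracks the end-effector task
    q0_dot = k * (q_expressive - q)                   # secondary objective: move toward the posture
    null_proj = np.eye(J.shape[1]) - J_pinv @ J       # null-space projector of the task
    return primary + null_proj @ q0_dot               # expressive motion cannot perturb the task

# toy check with a 6-DoF task on a 7-DoF arm
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
q_dot = redundant_velocity(J, x_dot=np.zeros(6), q=np.zeros(7), q_expressive=rng.standard_normal(7))
print(np.allclose(J @ q_dot, np.zeros(6)))            # the secondary motion produces no task velocity
```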

References

  1. Bartra, R. Chamanes y Robots; Anagrama: Barcelona, Spain, 2019; Volume 535.
  2. Mancini, C. Animal-Computer Interaction (ACI): Changing perspective on HCI, participation and sustainability. In Proceedings of the 2013 Conference on Human Factors in Computing Systems CHI 2013, Paris, France, 27 April–2 May 2013; pp. 2227–2236.
  3. Yuan, L.; Gao, X.; Zheng, Z.; Edmonds, M.; Wu, Y.N.; Rossano, F.; Lu, H.; Zhu, Y.; Zhu, S.C. In situ bidirectional human-robot value alignment. Sci. Robot. 2022, 7, eabm4183.
  4. Whittaker, S.; Rogers, Y.; Petrovskaya, E.; Zhuang, H. Designing personas for expressive robots: Personality in the new breed of moving, speaking, and colorful social home robots. ACM Trans. Hum. Robot Interact. (THRI) 2021, 10, 8.
  5. Ceha, J.; Chhibber, N.; Goh, J.; McDonald, C.; Oudeyer, P.Y.; Kulić, D.; Law, E. Expression of Curiosity in Social Robots: Design, Perception, and Effects on Behaviour. In Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI’19), Glasgow, Scotland, 4–9 May 2019; pp. 1–12.
  6. Ostrowski, A.K.; Zygouras, V.; Park, H.W.; Breazeal, C. Small Group Interactions with Voice-User Interfaces: Exploring Social Embodiment, Rapport, and Engagement. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21), Boulder, CO, USA, 9–11 March 2021; pp. 322–331.
  7. Erel, H.; Cohen, Y.; Shafrir, K.; Levy, S.D.; Vidra, I.D.; Shem Tov, T.; Zuckerman, O. Excluded by robots: Can robot-robot-human interaction lead to ostracism? In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21), Boulder, CO, USA, 9–11 March 2021; pp. 312–321.
  8. Brock, H.; Šabanović, S.; Gomez, R. Remote You, Haru and Me: Exploring Social Interaction in Telepresence Gaming With a Robotic Agent. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21), Boulder, CO, USA, 9–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 283–287.
  9. Berg, J.; Lu, S. Review of interfaces for industrial human-robot interaction. Curr. Robot. Rep. 2020, 1, 27–34.
  10. Złotowski, J.; Proudfoot, D.; Yogeeswaran, K.; Bartneck, C. Anthropomorphism: Opportunities and challenges in human–robot interaction. Int. J. Soc. Robot. 2015, 7, 347–360.
  11. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  12. Zhang, C.; Chen, J.; Li, J.; Peng, Y.; Mao, Z. Large language models for human-robot interaction: A review. Biomim. Intell. Robot. 2023, 3, 100131.
  13. Capy, S.; Osorio, P.; Hagane, S.; Aznar, C.; Garcin, D.; Coronado, E.; Deuff, D.; Ocnarescu, I.; Milleville, I.; Venture, G. Yōkobo: A Robot to Strengthen Links Amongst Users with Non-Verbal Behaviours. Machines 2022, 10, 708.
  14. Szafir, D.; Mutlu, B.; Fong, T. Communication of intent in assistive free flyers. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot interaction (HRI’14), Bielefeld, Germany, 3–6 March 2014; pp. 358–365.
  15. Terzioğlu, Y.; Mutlu, B.; Şahin, E. Designing Social Cues for Collaborative Robots: The Role of Gaze and Breathing in Human-Robot Collaboration. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI’20), Cambridge, UK, 23–26 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 343–357.
  16. Reed, S.; Zolna, K.; Parisotto, E.; Colmenarejo, S.G.; Novikov, A.; Barth-maron, G.; Giménez, M.; Sulsky, Y.; Kay, J.; Springenberg, J.T.; et al. A Generalist Agent. arXiv 2022, arXiv:2205.06175.
  17. Bannerman, H. Is dance a language? Movement, meaning and communication. Danc. Res. 2014, 32, 65–80.
  18. Borghi, A.M.; Cimatti, F. Embodied cognition and beyond: Acting and sensing the body. Neuropsychologia 2010, 48, 763–773.
  19. Karg, M.; Samadani, A.A.; Gorbet, R.; Kühnlenz, K.; Hoey, J.; Kulić, D. Body movements for affective expression: A survey of automatic recognition and generation. IEEE Trans. Affect. Comput. 2013, 4, 341–359.
  20. Venture, G.; Kulić, D. Robot expressive motions: A survey of generation and evaluation methods. ACM Trans. Hum. Robot Interact. THRI 2019, 8, 20.
  21. Zhang, Y.; Sreedharan, S.; Kulkarni, A.; Chakraborti, T.; Zhuo, H.H.; Kambhampati, S. Plan explicability and predictability for robot task planning. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1313–1320.
  22. Wright, J.L.; Chen, J.Y.; Lakhmani, S.G. Agent transparency and reliability in human–robot interaction: The influence on user confidence and perceived reliability. IEEE Trans. Hum. Mach. Syst. 2019, 50, 254–263.
  23. Dragan, A.D.; Lee, K.C.; Srinivasa, S.S. Legibility and predictability of robot motion. In Proceedings of the 2013 ACM/IEEE International Conference on Human-Robot Interaction (HRI’13), Tokyo, Japan, 3–6 March 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 301–308.
  24. Sripathy, A.; Bobu, A.; Li, Z.; Sreenath, K.; Brown, D.S.; Dragan, A.D. Teaching robots to span the space of functional expressive motion. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 13406–13413.
  25. Knight, H.; Simmons, R. Expressive motion with x, y and theta: Laban effort features for mobile robots. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 267–273.
  26. Bobu, A.; Wiggert, M.; Tomlin, C.; Dragan, A.D. Feature Expansive Reward Learning: Rethinking Human Input. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21), Boulder, CO, USA, 9–11 March 2021; pp. 216–224.
  27. Chidambaram, V.; Chiang, Y.H.; Mutlu, B. Designing persuasive robots: How robots might persuade people using vocal and nonverbal cues. In Proceedings of the 2012 ACM/IEEE International Conference on Human-Robot Interaction (HRI’12), Boston, MA, USA, 5–8 March 2012; pp. 293–300.
  28. Saunderson, S.; Nejat, G. How robots influence humans: A survey of nonverbal communication in social human–robot interaction. Int. J. Soc. Robot. 2019, 11, 575–608.
  29. Cominelli, L.; Feri, F.; Garofalo, R.; Giannetti, C.; Meléndez-Jiménez, M.A.; Greco, A.; Nardelli, M.; Scilingo, E.P.; Kirchkamp, O. Promises and trust in human–robot interaction. Sci. Rep. 2021, 11, 9687.
  30. Desai, R.; Anderson, F.; Matejka, J.; Coros, S.; McCann, J.; Fitzmaurice, G.; Grossman, T. Geppetto: Enabling semantic design of expressive robot behaviors. In Proceedings of the 2019 Conference on Human Factors in Computing Systems (CHI’19), Glasgow, Scotland, 4–9 May 2019; pp. 1–14.
  31. Ciardo, F.; Tommaso, D.D.; Wykowska, A. Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test. Sci. Robot. 2022, 7, eabo1241.
  32. Wallkötter, S.; Tulli, S.; Castellano, G.; Paiva, A.; Chetouani, M. Explainable embodied agents through social cues: A review. ACM Trans. Hum. Robot Interact. (THRI) 2021, 10, 27.
  33. Herrera Perez, C.; Barakova, E.I. Expressivity comes first, movement follows: Embodied interaction as intrinsically expressive driver of robot behaviour. In Modelling Human Motion: From Human Perception to Robot Design; Springer International Publishing: Cham, Switzerland, 2020; pp. 299–313.
  34. Semeraro, F.; Griffiths, A.; Cangelosi, A. Human–robot collaboration and machine learning: A systematic review of recent research. Robot. Comput. Integr. Manuf. 2023, 79, 102432.
  35. Bruns, M.; Ossevoort, S.; Petersen, M.G. Expressivity in interaction: A framework for design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–13.
  36. Burton, S.J.; Samadani, A.A.; Gorbet, R.; Kulić, D. Laban movement analysis and affective movement generation for robots and other near-living creatures. In Dance Notations and Robot Motion; Springer International Publishing: Cham, Switzerland, 2016; pp. 25–48.
  37. Bacula, A.; LaViers, A. Character Synthesis of Ballet Archetypes on Robots Using Laban Movement Analysis: Comparison Between a Humanoid and an Aerial Robot Platform with Lay and Expert Observation. Int. J. Soc. Robot. 2021, 13, 1047–1062.
  38. Yan, F.; Iliyasu, A.M.; Hirota, K. Emotion space modelling for social robots. Eng. Appl. Artif. Intell. 2021, 100, 104178.
  39. Claret, J.A.; Venture, G.; Basañez, L. Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task. Int. J. Soc. Robot. 2017, 9, 277–292.
  40. Häring, M.; Bee, N.; André, E. Creation and evaluation of emotion expression with body movement, sound and eye color for humanoid robots. In Proceedings of the 2011 IEEE RO-MAN: International Symposium on Robot and Human Interactive Communication, Atlanta, GA, USA, 31 July–3 August 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 204–209.
  41. Embgen, S.; Luber, M.; Becker-Asano, C.; Ragni, M.; Evers, V.; Arras, K.O. Robot-specific social cues in emotional body language. In Proceedings of the 2012 IEEE RO-MAN: IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–12 September 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1019–1025.
  42. Beck, A.; Stevens, B.; Bard, K.A.; Cañamero, L. Emotional body language displayed by artificial agents. ACM Trans. Interact. Intell. Syst. (TiiS) 2012, 2, 2.
  43. Bretan, M.; Hoffman, G.; Weinberg, G. Emotionally expressive dynamic physical behaviors in robots. Int. J. Hum.-Comput. Stud. 2015, 78, 1–16.
  44. Dairi, A.; Harrou, F.; Sun, Y.; Khadraoui, S. Short-term forecasting of photovoltaic solar power production using variational auto-encoder driven deep learning approach. Appl. Sci. 2020, 10, 8400.
  45. Li, Z.; Zhao, Y.; Han, J.; Su, Y.; Jiao, R.; Wen, X.; Pei, D. Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 3220–3230.
  46. Memarzadeh, M.; Matthews, B.; Avrekh, I. Unsupervised anomaly detection in flight data using convolutional variational auto-encoder. Aerospace 2020, 7, 115.
  47. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017.
  48. Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 20–25 June 2021; pp. 12299–12310.
  49. Lu, J.; Yang, J.; Batra, D.; Parikh, D. Hierarchical question-image co-attention for visual question answering. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, 5–10 December 2016.
  50. Choi, K.; Hawthorne, C.; Simon, I.; Dinculescu, M.; Engel, J. Encoding musical style with transformer autoencoders. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1899–1908.
  51. Ichter, B.; Pavone, M. Robot motion planning in learned latent spaces. IEEE Robot. Autom. Lett. 2019, 4, 2407–2414.
  52. Park, D.; Hoshi, Y.; Kemp, C.C. A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder. IEEE Robot. Autom. Lett. 2018, 3, 1544–1551.
  53. Du, Y.; Collins, K.; Tenenbaum, J.; Sitzmann, V. Learning signal-agnostic manifolds of neural fields. Adv. Neural Inf. Process. Syst. 2021, 34, 8320–8331.
  54. Yoon, Y.; Cha, B.; Lee, J.H.; Jang, M.; Lee, J.; Kim, J.; Lee, G. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Trans. Graph. (TOG) 2020, 39, 222.
  55. Cudeiro, D.; Bolkart, T.; Laidlaw, C.; Ranjan, A.; Black, M.J. Capture, learning, and synthesis of 3D speaking styles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10101–10111.
  56. Ahuja, C.; Lee, D.W.; Morency, L.P. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 20566–20576.
  57. Ferstl, Y.; Neff, M.; McDonnell, R. Multi-objective adversarial gesture generation. In Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games, Newcastle upon Tyne, UK, 28–30 October 2019; pp. 1–10.
  58. Yoon, Y.; Ko, W.R.; Jang, M.; Lee, J.; Kim, J.; Lee, G. Robots learn social skills: End-to-end learning of co-speech gesture generation for humanoid robots. In Proceedings of the 2019 International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4303–4309.
  59. Bhattacharya, U.; Rewkowski, N.; Banerjee, A.; Guhan, P.; Bera, A.; Manocha, D. Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents. In Proceedings of the 2021 IEEE Virtual Reality and 3D User Interfaces Conference, Virtual, 27 March–2 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–10.
  60. Bobu, A.; Wiggert, M.; Tomlin, C.; Dragan, A.D. Inducing structure in reward learning by learning features. Int. J. Robot. Res. 2022, 41, 497–518.
  61. Osorio, P.; Venture, G. Control of a Robot Expressive Movements Using Non-Verbal Features. IFAC PapersOnLine 2022, 55, 92–97.
  62. Penco, L.; Clément, B.; Modugno, V.; Hoffman, E.M.; Nava, G.; Pucci, D.; Tsagarakis, N.G.; Mouret, J.B.; Ivaldi, S. Robust real-time whole-body motion retargeting from human to humanoid. In Proceedings of the 2018 IEEE-RAS International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 425–432.
  63. Kim, T.; Lee, J.H. C-3PO: Cyclic-three-phase optimization for human-robot motion retargeting based on reinforcement learning. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation, Virtual, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 8425–8432.
  64. Rakita, D.; Mutlu, B.; Gleicher, M. A motion retargeting method for effective mimicry-based teleoperation of robot arms. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI’17), Vienna, Austria, 6–9 March 2017; pp. 361–370.
  65. Hagane, S.; Venture, G. Robotic Manipulator’s Expressive Movements Control Using Kinematic Redundancy. Machines 2022, 10, 1118.