Human Operation Augmentation through Wearable Robotic Limb

The supernumerary robotic limb (SRL) is a new type of wearable robot that improves the human body’s ability to move, perceive, and operate through the integration, mutual assistance, and cooperation of mechanical and human limbs. Unlike traditional collaborative robots, SRLs have a closer human–robot interaction mode and a cooperative mode of moving with the human body.

  • human augmentation
  • wearable robotic limb
  • supernumerary robotic limb
  • mixed reality

1. Introduction

The supernumerary robotic limb (SRL) is a new type of wearable robot that differs from prosthetic and exoskeleton robots. It improves the human body’s ability to move, perceive, and operate through the integration, mutual assistance, and cooperation of mechanical and human limbs [1,2,3]. According to different application scenarios, scholars from various countries have designed SRLs that can be worn at different positions on the body. These include extra robotic limbs worn at the waist to complete auxiliary support tasks [4] or tool-delivery and remote-assistance tasks [5], extra robotic limbs worn on the shoulders to complete overhead support tasks [6], and extra robotic fingers that improve the independent living ability of disabled users [7] or extend the grasping capacity of able-bodied users [8]. In these application scenarios, SRLs play a vital role in expanding the operating ability of a single person and improving work efficiency.
It is also imperative to design appropriate interaction methods so that SRLs do not impair the wearer’s own ability to operate and make rational use of information from the human body. Ref. [9] divided the command interfaces of supernumerary effectors into three categories: body, muscle, and neural. Body interfaces map body motion, such as that of the fingers [10] or feet [11], or changes in hand and finger force [12], to robot commands. Surface electromyography (EMG) interfaces based on redundant human muscles have been proposed, using, for example, chest and abdominal EMG signals [13] or forehead EMG signals [14]. Electroencephalography/magnetoencephalography (EEG/MEG) interfaces [15,16] have also been proposed but have not yet been applied in real task scenarios. Although these interaction methods can realize the essential communication with SRLs, some problems remain unsolved, such as increased cognitive load, complex sensor data processing, and limited adaptability (to both the wearer and the application scene). Therefore, it is important to construct a simple, reliable, and natural human–robot interaction interface for SRLs and to design efficient interaction strategies.
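As a purely illustrative sketch of the kind of muscle-based interface cited above, the example below turns a redundant-muscle surface-EMG channel into a discrete SRL command via envelope extraction and thresholding. The sampling rate, window length, threshold, and command mapping are hypothetical assumptions, not parameters taken from the cited systems.

```python
import numpy as np

def moving_rms(emg: np.ndarray, window: int = 200) -> np.ndarray:
    """Remove the DC offset, rectify the raw EMG, and compute a moving RMS envelope."""
    rectified = np.abs(emg - np.mean(emg))
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))

def detect_activation(envelope: np.ndarray, threshold: float) -> bool:
    """Return True when the envelope indicates a deliberate contraction."""
    return bool(np.max(envelope) > threshold)

if __name__ == "__main__":
    fs = 1000                                   # assumed 1 kHz sampling rate
    t = np.arange(0, 1.0, 1 / fs)
    emg = 0.05 * np.random.randn(t.size)        # resting baseline noise
    emg[400:700] += 0.4 * np.random.randn(300)  # simulated contraction burst
    env = moving_rms(emg)
    if detect_activation(env, threshold=0.15):
        print("SRL command: trigger grasp")     # hypothetical mapping to one SRL action
    else:
        print("SRL command: idle")
```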
The use of collaborative robots in industry has grown over the past few years. Augmented Reality (AR), a prominent and promising tool for helping human operators understand and interact with robots, has been applied in many human–robot collaboration and cooperative industrial applications. Research on human–robot collaboration using head-mounted displays (HMDs) and projectors has attracted increasing attention [17]. Compared with industrial collaborative robots, SRLs are characterized by closer human–robot interaction, movement with the wearer, and no interference with the wearer’s own ability to operate. Mixed Reality (MR) is a broader term encompassing AR and Augmented Virtuality (AV); it is a general term for blending the virtual and physical worlds [18,19]. HoloLens2 is an MR device developed by Microsoft Corp.

2. Human–Robot Interaction Method of SRLs

Unlike traditional collaborative robots, SRLs have a closer human–robot interaction mode and a cooperative mode of moving with the human body. Therefore, current research on human–robot interaction for SRLs focuses on natural interaction methods that do not affect the wearer’s ability to operate. Refs. [12,13] proposed an interaction method based on task-redundant finger force to complete a door-opening task and an interaction method based on EMG signals. Ref. [20] proposed an interaction method based on eye-gaze information and quantified its manipulation accuracy. Ref. [21] proposed a foot-position prediction method for assisted walking that fuses continuous 3D gaze with the environmental background. Combining human gaze with environmental point-cloud information is significant for studying the control of wearable robotic limbs for assisted walking. Ref. [22] proposed a new gaze-based natural human–robot interaction method that suppresses the noise of the gaze signal to extract the human grasping intent. This study differs from previous research on gaze intention and is a useful reference for natural human–robot interaction with wearable robotic limbs. Ref. [15] studied the manipulation of extra limbs by EEG signals and evaluated how various factors of SRLs influence human–robot interaction. Ref. [8] analyzed the ability to control extra robotic fingers with the toes and the associated effects on the human nervous system. Ref. [23] proposed a task model for overhead tasks that realizes human–robot collaboration of an SRL according to the operator’s actions; the model adapts to the task and is a useful reference for constructing interaction strategies for similar tasks.

The interaction between SRLs and the wearer mainly converts the wearer’s intentions into robot task execution information. This can be understood as a “decomposition” and “synthesis” process: people decompose their intentions into various external data, and the SRL collects this external information to synthesize the task execution information the robot requires. The essentials lie in collecting multimodal human information, synthesizing and transforming multi-source information, reducing the wearer’s cognitive load, and designing the SRL cooperation strategy. Existing work has no precedent for applying MR to the human–robot interaction of SRLs. Moreover, most current approaches rely on a single interaction modality, and the user experience is not comfortable enough.
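The “decomposition” and “synthesis” process described above can be made concrete with a minimal, hypothetical sketch: two decomposed channels (a gaze fixation and a muscle-activation flag) are synthesized into one discrete SRL command. The channel set, dwell threshold, and command vocabulary are illustrative assumptions and do not reproduce any published SRL interface.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class SRLCommand(Enum):
    IDLE = auto()
    REACH_TO_TARGET = auto()
    GRASP = auto()

@dataclass
class MultimodalObservation:
    gaze_target: Optional[Tuple[float, float, float]]  # 3D point the wearer fixates, if any
    muscle_active: bool                                 # e.g., output of an EMG activation detector
    gaze_dwell_s: float                                 # duration of the current fixation

def synthesize_command(obs: MultimodalObservation,
                       dwell_threshold_s: float = 0.8) -> SRLCommand:
    """Fuse the decomposed channels into a single SRL task command."""
    if obs.gaze_target is None:
        return SRLCommand.IDLE
    if obs.gaze_dwell_s < dwell_threshold_s:
        return SRLCommand.IDLE                 # ignore brief glances
    if obs.muscle_active:
        return SRLCommand.GRASP                # gaze selects the target, muscle confirms
    return SRLCommand.REACH_TO_TARGET          # gaze alone pre-positions the limb

# Example: a sustained fixation plus a confirming contraction yields GRASP.
print(synthesize_command(MultimodalObservation((0.3, 0.1, 0.9), True, 1.2)))
```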

3. AR-Based Operator Support Systems

Advances in display and vision technologies create new interaction methods that enable information-rich, real-time communication in shared workspaces. The visualization methods applied by AR in human–robot collaboration mainly include HMDs, spatial augmented reality projectors, fixed screens, and hand-held displays (HHDs) [17]. Current research mainly implements Safety, Guidance, Feedback, Programming, and Quality Control through HMDs or projectors. Ref. [24] communicates the motion intent of a robotic arm through a Mixed Reality head-mounted display. Ref. [25] applied AR in the human–robot collaborative manufacturing process to solve human–robot safety issues and improve operational efficiency. Ref. [26] presents the design and implementation of an AR tool that provides production- and process-related feedback information and enhances the operator’s immersion in safety mechanisms. Refs. [27,28] also studied the application of AR to realize industrial robot programming and collaboration models. These results have demonstrated the advantages of AR devices in human–robot interaction through qualitative or quantitative methods. The related works are shown in Table 1.
Table 1. Summary and classification table of related work.
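As a purely illustrative complement to the HMD-based systems surveyed above, the sketch below shows one way a controller could communicate an SRL’s motion intent to an MR headset by serializing planned waypoints into a simple network message. The message schema, reference frame, host address, and port are hypothetical assumptions; they are not part of any cited system or of the HoloLens API.

```python
import json
import socket

def publish_motion_intent(waypoints, host: str = "192.168.1.50", port: int = 9090) -> None:
    """Serialize planned end-effector waypoints and send them to an HMD client
    that could render them as a holographic trajectory preview."""
    message = {
        "type": "planned_trajectory",
        "frame": "wearer_torso",  # assumed common reference frame shared with the HMD
        "waypoints": [{"x": x, "y": y, "z": z} for (x, y, z) in waypoints],
    }
    payload = json.dumps(message).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    # Example: preview a short reach of the wearable limb before it moves.
    publish_motion_intent([(0.20, -0.10, 0.30), (0.25, -0.05, 0.35), (0.30, 0.00, 0.40)])
```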