Human–Robot Collaboration: Comparison
Please note this is a comparison between Version 1 by Nikolaos Anatoliotakis and Version 2 by Dean Liu.

The automation of manufacturing applications where humans and robots operate in a shared environment imposes new challenges for preserving the operator's safety and the robot's efficiency. Common solutions that rely on isolating the robots' workspace from human access during operation are not applicable to HRI.

  • robotics
  • human–robot interaction
  • VR applications

1. Introduction

Recent advancements in human–robot collaboration have enabled humans and robots to work together in a shared manufacturing environment [1][2]. Since their first use at the industrial level, collaborative robots (or cobots) have been employed in many tasks that combine human perception with robotic efficiency. Repetitive tasks that require precision and often involve moving heavy payloads are being automated by robots. For example, in the automotive industry, large robot manipulators have been used to build car engines and to weld and assemble car parts at high speeds. Inside the robot workspace and the neighboring working environment, humans are engaged in high-level tasks that require advanced thinking and possible interactions with the robots. For human–robot applications, a context-aware safety system that prioritizes human protection is needed.
There are two main actions that make any collaborative task safer. The first is to change the motion of the robot according to the human presence, and the second is to inform the worker about the robot's potential danger so that they can adjust their actions. For the first, knowing the common workspace of both the human and the robot is mandatory for successfully controlling the robot to ensure safety. N. Vahrenkamp et al. [3] use both workspaces to find the robot's best possible poses in a task requiring human–robot interaction. Other approaches seek to accurately sense potential collisions between the human and the robot based on image feedback [4] or to identify collisions using force/torque sensors [5][6][7][8][9]. However, in many industries, it is very difficult to make online changes to the motion of the robot due to the continuity of the production line and the time needed to restart it. Therefore, the second action, wherein the worker is appropriately informed about the movement of the robot and its potential danger, appears more feasible and efficient.
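The first action above, adapting the robot's motion to the human presence, is commonly implemented as a distance-threshold monitor. The following minimal sketch (an assumed illustration, not taken from any of the cited systems; the point sets and threshold values are hypothetical) maps the current human–robot separation to a safety action:

```python
import numpy as np

def min_separation(human_pts, robot_pts):
    """Minimum pairwise Euclidean distance (metres) between two point sets,
    e.g. tracked human keypoints and sampled robot link positions."""
    diff = human_pts[:, None, :] - robot_pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=-1)).min())

def safety_action(human_pts, robot_pts, stop_dist=0.3, slow_dist=0.8):
    """Map separation to one of three actions (illustrative thresholds)."""
    d = min_separation(human_pts, robot_pts)
    if d < stop_dist:
        return "stop"          # halt the robot immediately
    if d < slow_dist:
        return "reduce_speed"  # scale down velocity and warn the operator
    return "nominal"           # continue the programmed motion
```

In a real cell the point sets would come from a depth sensor or motion-capture system, and the thresholds from a risk assessment such as ISO/TS 15066 speed-and-separation monitoring.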
Onsite, hands-on safety training is very important for significantly decreasing the risk of injury. On the other hand, virtual training using state-of-the-art technologies, such as virtual, augmented or mixed reality (VR/AR/MR) [10], offers many possibilities for training workers to work safely in shared or collaborative working environments. VR technology provides a complete virtual environment/test-bed, while AR technology combines the best of both worlds by enhancing real-world training in a shared manufacturing environment with visuals, hints, sounds and other additional information.
Another important aspect is the data structure used for representing the virtual objects and rendering information for situational awareness. The variability of workspace configurations in industry creates the requirement for a flexible framework and data structure that can be adapted to specific workflows with minimal effort. Point clouds are recognized as a very promising means of representing 3D content in AR/VR; yet to obtain a high level of realism and faithfulness, a very large amount of data is needed, demanding efficient coding solutions. Octrees have been widely used for point cloud geometry coding and compression [11] since they are characterized by adaptive partitioning capabilities.

2. AR- and MR-Based Applications

This augmentation technique is already an integral part of multiple studies on human–robot collaboration. In one such study [12], a novel integrated mixed reality system for safety-aware HRC using deep learning and digital twin generation is proposed. Their approach can accurately measure the minimum safe distance in real time and provide MR-based task assistance to the human operator. This technique integrates MR with safety-related monitoring by tracking the shared workplace and providing user-centric visualization through smart MR glasses for safe and effective HRC. A virtual robot, the digital twin of the real robot, is registered accurately using deep-learning-based instance segmentation techniques. The main reason for using digital twins is to calculate safety-related data more effectively and accurately and to provide the user with relevant safety and task-assistance information.
Another study [13] augments the real environment by rendering semi-transparent green and red cubes signifying the safe area and the constrained robot workspace, respectively. Through visual alerts and warning messages about potential hazards on the shop floor, the system increases the awareness of the user interacting with the robot.
Hietanen et al. [14] proposed the use of a depth-sensor-based model for workspace monitoring and an interactive augmented reality user interface (UI). The AR UI was implemented on two different hardware systems: a projector–mirror setup and wearable AR equipment (HoloLens). It is important to highlight that HoloLens-based AR is not yet suitable for industrial manufacturing, while the projector–mirror setup shows clear improvements in safety and work ergonomics according to the user experience assessment that took place.
In another example [15], the authors created virtual boundaries around the area considered safe by utilizing AR technology. When the worker approaches a border, the border elements closest to the user grow and change color, turning from green to yellow to red with decreasing distance.
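The distance-to-color mapping described for these border elements can be sketched as a simple linear interpolation (a hypothetical illustration; the actual thresholds and color ramp used in [15] are not specified here):

```python
def border_colour(distance, red_at=0.5, green_at=2.0):
    """Map worker distance (metres) to an (r, g, b) tuple in [0, 1]:
    green when far, yellow at mid-range, red when close.
    Threshold values are illustrative assumptions."""
    # Normalise: t = 0 at/beyond green_at (far), t = 1 at/within red_at (near).
    t = (green_at - distance) / (green_at - red_at)
    t = max(0.0, min(1.0, t))
    if t < 0.5:                       # green -> yellow: ramp the red channel up
        return (2 * t, 1.0, 0.0)
    return (1.0, 2 * (1 - t), 0.0)    # yellow -> red: ramp the green channel down
```

A renderer would evaluate this per border element each frame, together with a size scale driven by the same normalised distance.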

3. VR-Based Applications

Virtual reality, in comparison with augmented reality, eliminates possible hazards that may occur in a real-world scenario during training routines. However, safety zone awareness during training remains an integral part of every HRC application. A recent VR application [16] was designed to simulate safety strategies in human–robot collaborative manufacturing. The work reported two alternative safe collaboration strategies, 'reduce speed' and 'move back', in the task of handling fabric when building aerospace composite material parts. Both methods are triggered when the distance between the human and the robot falls below a certain threshold.
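The 'move back' strategy can be sketched as a retreat along the human-to-robot direction once the trigger threshold is crossed (a simplified, assumed illustration; the controller and parameters in [16] are not reproduced here):

```python
import numpy as np

def move_back_target(robot_pos, human_pos, threshold=0.8, retreat=0.3):
    """'Move back' sketch: if the human is closer than `threshold` metres,
    return a target position `retreat` metres further away along the
    human -> robot direction; otherwise keep the current position.
    All parameter values are illustrative assumptions."""
    robot_pos = np.asarray(robot_pos, float)
    human_pos = np.asarray(human_pos, float)
    offset = robot_pos - human_pos
    d = np.linalg.norm(offset)
    if d >= threshold:
        return robot_pos                  # far enough: no retreat needed
    return robot_pos + retreat * (offset / d)  # step away from the human
```

The companion 'reduce speed' strategy would instead scale the commanded velocity by a factor derived from the same distance, leaving the path unchanged.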
VR is also being employed in studies concerning cost-effective solutions rather than purely safety-oriented ones [17]. In that work, safety zone boundaries were created around the robot which, when violated by the trainee, led the program to automatically prompt a warning message explaining the operator's unsafe behavior. Another important safety training framework is introduced in [18] to raise safety awareness and encourage the construction of clear, distinct instructions regarding machine operation.
However, the visualization of safety zones is not restricted to HRC applications. The authors in [19] focused on the detection of potential collisions of a 3D moving vehicle model in an industrial environment and visualized not only the safety zones around the desired entity but the vehicle path as well. Additionally, visual feedback was developed to illustrate the part of the moving object that collides with the environment.