Anatoliotakis, N.; Paraskevopoulos, G.; Michalakis, G.; Michalellis, I.; Zacharaki, E.I.; Koustoumpardis, P.; Moustakas, K. Human–Robot Collaboration. Encyclopedia. Available online: (accessed on 20 June 2024).
Human–Robot Collaboration

The automation of manufacturing applications where humans and robots operate in a shared environment poses new challenges for preserving the operator’s safety and the robot’s efficiency. Common solutions that rely on isolating the robot’s workspace from human access during operation are not applicable to human–robot interaction (HRI).

Keywords: robotics; human–robot interaction; VR applications

1. Introduction

Recent advancements in human–robot collaboration have enabled humans and robots to work together in a shared manufacturing environment [1][2]. Since their first use at the industrial level, collaborative robots (or cobots) have been employed in many tasks, combining the perceptual and reasoning abilities of humans with the efficiency of robots. Repetitive tasks that demand precision and often involve moving heavy payloads are being automated by robots. For example, in the automotive industry, large robot manipulators have been used to build car engines and to weld and assemble car parts at high speed. Inside the robot workspace and the neighboring working environment, humans are engaged in high-level tasks that require advanced thinking and possible interactions with the robots. For such human–robot applications, a context-aware safety system that prioritizes human protection is needed.
There are two main actions that make any collaborative task safer. The first is to change the motion of the robot according to the human presence, and the second is to inform the worker about the robot’s potential danger so that they can adjust their actions. For the first, knowledge of the common workspace of both the human and the robot is mandatory for successfully controlling the robot to ensure safety. Vahrenkamp et al. [3] use both workspaces to find the robot’s best possible poses in a task requiring human–robot interaction. Other approaches seek to sense potential collisions between the human and the robot from image feedback [4] or to identify collisions using force/torque sensors [5][6][7][8][9]. However, in many industries, it is very difficult to make online changes to the motion of the robot due to the continuity of the production line and the time needed to restart it. Therefore, the second action, wherein the worker is appropriately informed about the movement of the robot and its potential danger, appears more feasible and efficient.
Onsite, hands-on safety training is very important for significantly decreasing the risk of injury. Virtual training using state-of-the-art technologies such as virtual, augmented or mixed reality (VR/AR/MR) [10], on the other hand, offers many possibilities to train workers so that they can work safely in shared or collaborative working environments. VR technology provides a complete virtual environment/test-bed, while AR technology combines the best of both worlds by enhancing real-world training in a shared manufacturing environment with visuals, hints, sounds and other additional information.
Another important aspect is the data structure used for representing the virtual objects and rendering information for situational awareness. The variability of workspace configurations in industry calls for a flexible framework and data structure that can be adapted to specific workflows with minimal effort. Point clouds are recognized as a very promising means of representing 3D content in AR/VR; yet, to obtain a high level of realism and faithfulness, a very large amount of data is needed, demanding efficient coding solutions. Octrees have been widely used for point cloud geometry coding and compression [11], since they are characterized by adaptive partitioning capabilities.
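The adaptive partitioning that makes octrees attractive for point cloud geometry coding can be sketched as follows. This is a minimal illustration of occupancy-byte encoding, not the specific codec of [11]; the function name, recursion depth and point count are illustrative assumptions.

```python
import numpy as np

def octree_encode(points, center, half, depth, codes):
    """Recursively partition points into octants, emitting one
    occupancy byte per internal node (1 bit per occupied child).
    Illustrative sketch only, not a production codec."""
    if depth == 0 or len(points) <= 1:
        return
    # Octant index: 3 bits, one per coordinate, from the side of the center.
    octant = ((points[:, 0] > center[0]).astype(int)
              | ((points[:, 1] > center[1]).astype(int) << 1)
              | ((points[:, 2] > center[2]).astype(int) << 2))
    occupancy = 0
    children = []
    for i in range(8):
        mask = octant == i
        if mask.any():
            occupancy |= 1 << i
            # Child cube center: shift by +/- half/2 along each axis.
            offset = np.array([i & 1, (i >> 1) & 1, (i >> 2) & 1]) * half - half / 2
            children.append((points[mask], center + offset, half / 2))
    codes.append(occupancy)
    for pts, c, h in children:
        octree_encode(pts, c, h, depth - 1, codes)

pts = np.random.default_rng(0).random((1000, 3))
codes = []
octree_encode(pts, np.array([0.5, 0.5, 0.5]), 0.5, depth=4, codes=codes)
print(len(codes), "occupancy bytes for", len(pts), "points")
```

The stream of occupancy bytes is what a real codec would entropy-code; decoding replays the same subdivision, so only one byte per occupied node needs to be stored.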

2. AR- and MR-Based Applications

This augmentation technique is already an integral part of multiple studies on human–robot collaboration. In one such study [12], a novel integrated mixed reality system for safety-aware HRC using deep learning and digital twin generation is proposed. The approach can accurately measure the minimum safe distance in real time and provide MR-based task assistance to the human operator. It integrates MR with safety-related monitoring by tracking the shared workplace and providing user-centric visualization through smart MR glasses for safe and effective HRC. A virtual robot, the digital twin of the real robot, is registered accurately using deep-learning-based instance segmentation. The main reason for using digital twins is to calculate safety-related data more effectively and accurately and to provide the user with relevant safety and task-assistance information.
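The minimum-distance computation at the core of such safety monitoring can be illustrated with a simplified sketch. In [12] the distances are derived from the registered digital twin rather than raw point sets; the brute-force version below, with made-up coordinates, only shows the geometric idea.

```python
import numpy as np

def min_separation(human_pts, robot_pts):
    """Minimum Euclidean distance between two 3D point sets.
    Brute force for clarity; real-time systems use spatial indices
    or the digital twin's geometry instead."""
    diff = human_pts[:, None, :] - robot_pts[None, :, :]
    return float(np.sqrt((diff ** 2).sum(-1)).min())

# Hypothetical sample points (meters) for operator and robot surfaces.
human = np.array([[1.0, 0.0, 1.5], [1.2, 0.1, 1.0]])
robot = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.2]])
print(f"minimum separation: {min_separation(human, robot):.2f} m")
```

Comparing this value against a safety threshold each frame is what drives the MR warnings shown to the operator.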
Another study [13] augments the real environment by rendering semi-transparent green and red cubes signifying the safe area and the constrained robot’s work-space, respectively. Through visual alerts and warning messages about potential hazards on the shop floor, the system increases the awareness of the user interacting with the robot.
Hietanen et al. [14] proposed a depth-sensor-based model for workspace monitoring together with an interactive augmented reality user interface (UI). The AR UI was implemented on two different hardware systems: a projector–mirror setup and wearable AR equipment (HoloLens). Notably, the authors found that HoloLens-based AR is not yet suitable for industrial manufacturing, while the projector–mirror setup showed clear improvements in safety and work ergonomics according to the user-experience assessment they conducted.
In another example [15], the authors used AR technology to create virtual boundaries around the area considered safe. When the worker approaches a border, the border elements closest to the user grow and change color, turning from green to yellow to red as the distance decreases.
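A green-to-yellow-to-red transition of this kind can be realized with a simple distance-to-color mapping. The thresholds and the piecewise-linear blend below are illustrative assumptions, not values taken from [15].

```python
def boundary_color(distance, warn=1.5, danger=0.5):
    """Map user-to-border distance (m) to an RGB color: green when far,
    yellow at mid range, red when close. Thresholds are examples only."""
    if distance >= warn:
        return (0, 255, 0)                      # safe: green
    if distance <= danger:
        return (255, 0, 0)                      # danger: red
    # Normalized progress through the warning band: 0 at warn, 1 at danger.
    t = (warn - distance) / (warn - danger)
    if t < 0.5:
        return (int(510 * t), 255, 0)           # green -> yellow
    return (255, int(255 * (2 - 2 * t)), 0)     # yellow -> red

print(boundary_color(2.0))   # far from the border: green
print(boundary_color(1.0))   # mid range: yellow
print(boundary_color(0.3))   # close: red
```

Scaling the border elements' size with the same `t` value would reproduce the growing effect described in the study.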

3. VR-Based Applications

Virtual reality, in comparison with augmented reality, eliminates hazards that may occur in a real-world scenario during training routines. Nevertheless, safety-zone awareness during training remains an integral part of every HRC application. A recent VR application [16] was designed to simulate safety strategies in human–robot collaborative manufacturing. The work evaluated two alternative safe-collaboration strategies, ‘reduce speed’ and ‘move back’, in the task of handling fabric when building aerospace composite parts. Both strategies are triggered when the distance between the human and the robot falls below a certain threshold.
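The threshold-triggered switching between the two strategies can be sketched as below. The threshold value, speed-scaling rule and function names are hypothetical; [16] does not prescribe these specifics.

```python
def select_action(distance, strategy, threshold=1.0):
    """Trigger the configured safe-collaboration strategy when the
    human-robot distance (m) drops below the threshold.
    Values and scaling rule are illustrative assumptions."""
    if distance >= threshold:
        return "continue task"
    if strategy == "reduce speed":
        # Scale speed down in proportion to the remaining clearance,
        # never below 10% so the task can still progress.
        factor = max(distance / threshold, 0.1)
        return f"scale robot speed to {factor:.0%}"
    if strategy == "move back":
        return "retract robot away from operator"
    raise ValueError(f"unknown strategy: {strategy}")

print(select_action(1.4, "reduce speed"))
print(select_action(0.6, "reduce speed"))
print(select_action(0.6, "move back"))
```

Running the same trigger logic under both strategies is what allows such a simulation to compare their effects on task time and perceived safety.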
VR is also employed in studies concerned with cost-effective solutions rather than purely safety-oriented ones [17]. In that work, safety zone boundaries were created around the robot; when the trainee violates them, the program automatically displays a warning message explaining the operator’s unsafe behavior. Another important safety training framework is introduced in [18] to raise safety awareness and to encourage the construction of clear, distinct instructions regarding machine operation.
However, the visualization of safety zones is not restricted to HRC applications. The authors in [19] focused on detecting potential collisions of a 3D moving vehicle model in an industrial environment and visualized not only the safety zones around the entity of interest but also the vehicle path. Additionally, visual feedback was developed to illustrate the part of the moving object that collides with the environment.


  1. Sharkawy, A.N.; Koustoumpardis, P.N. Human–robot interaction: A review and analysis on variable admittance control, safety, and perspectives. Machines 2022, 10, 591.
  2. Liu, H.; Wang, Y.; Ji, W.; Wang, L. A context-aware safety system for human-robot collaboration. Procedia Manuf. 2018, 17, 238–245.
  3. Vahrenkamp, N.; Arnst, H.; Wächter, M.; Schiebener, D.; Sotiropoulos, P.; Kowalik, M.; Asfour, T. Workspace analysis for planning human-robot interaction tasks. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 1298–1303.
  4. Mohammed, A.; Schmidt, B.; Wang, L. Active collision avoidance for human–robot collaboration driven by vision sensors. Int. J. Comput. Integr. Manuf. 2017, 30, 970–980.
  5. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. Human–robot collisions detection for safe human–robot interaction using one multi-input–output neural network. Soft Comput. 2020, 24, 6687–6719.
  6. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. A recurrent neural network for variable admittance control in human–robot cooperation: Simultaneously and online adjustment of the virtual damping and Inertia parameters. Int. J. Intell. Robot. Appl. 2020, 4, 441–464.
  7. Sharkawy, A.N.; Koustoumpardis, P.N.; Aspragathos, N. A neural network-based approach for variable admittance control in human–robot cooperation: Online adjustment of the virtual inertia. Intell. Serv. Robot. 2020, 13, 495–519.
  8. Bimbo, J.; Pacchierotti, C.; Tsagarakis, N.G.; Prattichizzo, D. Collision detection and isolation on a robot using joint torque sensing. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7604–7609.
  9. Sharkawy, A.N.; Aspragathos, N. Human-robot collision detection based on neural networks. Int. J. Mech. Eng. Robot. Res. 2018, 7, 150–157.
  10. Pavlou, M.; Laskos, D.; Zacharaki, E.I.; Risvas, K.; Moustakas, K. XRSISE: An XR training System for Interactive Simulation and Ergonomics assessment. Front. Virtual Real. 2021, 2, 17.
  11. Schnabel, R.; Klein, R. Octree-based Point-Cloud Compression. In Proceedings of the 3rd Eurographics/IEEE VGTC Conference on Point-Based Graphics, Boston, MA, USA, 29–30 July 2006; pp. 111–120.
  12. Choi, S.H.; Park, K.B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation. Robot. Comput.-Integr. Manuf. 2022, 73, 102258.
  13. Michalos, G.; Karagiannis, P.; Makris, S.; Tokçalar, Ö.; Chryssolouris, G. Augmented reality (AR) applications for supporting human-robot interactive cooperation. Procedia CIRP 2016, 41, 370–375.
  14. Hietanen, A.; Pieters, R.; Lanz, M.; Latokartano, J.; Kämäräinen, J.K. AR-based interaction for human-robot collaborative manufacturing. Robot. Comput.-Integr. Manuf. 2020, 63, 101891.
  15. Krüger, M.; Weigel, M.; Gienger, M. Visuo-tactile AR for Enhanced Safety Awareness in Human-Robot Interaction. In Proceedings of the HRI 2020 Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction (VAM-HRI), Cambridge, UK, 23 March 2020.
  16. Vosniakos, G.C.; Ouillon, L.; Matsas, E. Exploration of two safety strategies in human-robot collaborative manufacturing using Virtual Reality. Procedia Manuf. 2019, 38, 524–531.
  17. Adami, P.; Rodrigues, P.B.; Woods, P.J.; Becerik-Gerber, B.; Soibelman, L.; Copur-Gencturk, Y.; Lucas, G. Effectiveness of VR-based training on improving construction workers’ knowledge, skills, and safety behavior in robotic teleoperation. Adv. Eng. Inform. 2021, 50, 101431.
  18. Dianatfar, M.; Latokartano, J.; Lanz, M. Concept for Virtual Safety Training System for Human-Robot Collaboration. Procedia Manuf. 2020, 51, 54–60.
  19. Giorgini, M.; Aleotti, J. Visualization of AGV in virtual reality and collision detection with large scale point clouds. In Proceedings of the 2018 IEEE 16th International Conference on Industrial Informatics (INDIN), Porto, Portugal, 18–20 July 2018; pp. 905–910.