The use of robot arms in various industrial settings has changed the way tasks are completed. However, safety concerns for both humans and robots in these collaborative environments remain a critical challenge. Traditional approaches to visualising safety zones, including physical barriers and warning signs, may not always be effective in dynamic environments or where multiple robots and humans are working simultaneously. Mixed reality technologies offer dynamic and intuitive visualisations of safety zones in real time, with the potential to overcome these limitations.
1. Introduction
Robotics technology has changed industrial operations, enabling the efficient and accurate completion of tasks. One of the most significant applications of robotics is the use of robot arms for repetitive, complex or hazardous tasks in various industrial settings, including manufacturing, construction, and healthcare. However, ensuring the safety of both humans and robots in these environments is a critical challenge that needs to be addressed. To ensure the safe operation of robot arms in industrial settings, it is essential to provide clear visual representations of safety zones—areas around the robot arm that are free from obstacles and where human workers should not enter to avoid collisions or accidents. Traditional approaches to visualising safety zones include physical barriers, warning signs, and colour-coded floor markings [1]. However, these methods may not always be effective in dynamic environments, where the layout of the workspace may change frequently, or in settings where multiple robots and humans are working simultaneously [2].
Virtual and augmented reality technologies have the potential to overcome these limitations by providing dynamic and intuitive visualisations of safety zones in real time. Virtual reality (VR) technologies can create fully immersive environments, where users can simulate the operation of robot arms without the need for physical equipment [3]. Augmented reality (AR) technologies, on the other hand, can overlay holographic-like images onto the real world, providing a more realistic and contextually relevant representation of safety zones [2,4,5,6,7]. Recent advances in mixed reality devices, such as the Microsoft HoloLens 2, have enabled users to interact with such holographic images overlaid onto the real world. The HoloLens 2 has been used in various industrial applications, including remote assistance [8], training [9], and manufacturing [10]. However, little research has been conducted to evaluate the effectiveness of safety zone visualisations in virtual and real robot arm environments using the HoloLens 2.
2. Safety Zone Visualisations for Virtual and Physical Robot Arms Using Augmented Reality
A range of AR display technologies have been used in studying the safety and efficiency of robot arm operations. For example, Hietanen et al. [4] used both projection-based AR and the Microsoft HoloLens 1 in a human–robot collaboration task involving diesel engine assembly, utilising depth sensor information for workspace monitoring in both cases. The results showed that AR interaction reduced task completion time by up to 21–24% and robot idle time by up to 57–64%. Moreover, the user survey indicated that the projector–mirror setup was deemed more comfortable and easier to use than the HoloLens 1, which users found bulky and uncomfortable to wear for extended periods. The limited field of view of the HoloLens 1 was also identified as an issue. Lotsaris et al. [2] also used a HoloLens 1 in a system that displayed robot sensor information and safety zones to a user. Feedback from the study revealed that, similar to previous research, the HoloLens 1 was considered uncomfortable to wear for prolonged periods, rendering it unfit for extended use in a work environment.
Hoang et al. [5] presented a virtual barrier system equipped with an AR interface to ensure safe human–robot collaboration (HRC). The system offers two distinct virtual barriers: a person barrier that surrounds and tracks the user, and a user-defined virtual barrier for objects or areas. The study evaluated the performance of a pick-and-place task that simulated an industrial manufacturing procedure, comparing the Microsoft HoloLens 2 to a standard 2D display interface. They concluded that the proposed configurable virtual barrier system, coupled with AR, enhances HRC safety compared to a conventional 2D display interface. Cogurcu and Maddock [7] considered user-configurable virtual geometric objects, such as cubes and cylinders, that enveloped a real Universal Robots 10 (UR10) robot arm. These objects were either a single shape that adapted dynamically to the robot arm's movements or individual shapes that encapsulated the robot arm's components. The sizes of the virtual safety zones were calculated based on a combination of ISO standards [11], hardware latency, and network latency—factors that had not been taken into account in prior studies.
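The idea of sizing a safety zone from latencies can be illustrated with an ISO 13855-style separation-distance calculation, in which the protective distance grows with the total detection-and-stopping time. The sketch below is a minimal illustration of that principle, not the implementation used in the cited studies; the function name, the example latency values, and the intrusion allowance are assumptions for demonstration.

```python
# Minimal sketch of an ISO 13855-style safety-zone padding calculation:
#   S = K * (T1 + T2) + C
# where K is the assumed human approach speed, T1 the detection/response
# time (sensing plus network latency here), T2 the robot stopping time,
# and C an intrusion-distance allowance. All example values below are
# illustrative only, not taken from the cited studies.

K_APPROACH_MM_S = 1600.0  # human approach speed assumed by ISO 13855 (mm/s)

def zone_padding_mm(sensor_latency_s: float,
                    network_latency_s: float,
                    robot_stop_time_s: float,
                    intrusion_allowance_mm: float = 850.0) -> float:
    """Extra distance to add around the robot arm's geometry (mm)."""
    t1 = sensor_latency_s + network_latency_s  # time to detect and react
    t2 = robot_stop_time_s                     # time for the arm to stop
    return K_APPROACH_MM_S * (t1 + t2) + intrusion_allowance_mm

# Example: 50 ms sensing, 30 ms network, 300 ms stopping time
padding = zone_padding_mm(0.050, 0.030, 0.300)
print(round(padding, 1))  # ≈ 1458.0 mm
```

The point of the calculation is that any latency in the sensing or communication chain directly enlarges the zone a visualisation must display, which is why ignoring such latencies leads to safety zones that are drawn too small.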
Other work has considered the use of virtual and real environments for testing and optimising HRC. For example, [12] explored the use of virtual environments for testing safety systems in HRC, and [13] investigated whether VR technology used in manufacturing could provide faster design options. While these studies have provided valuable insights into the potential benefits of virtual environments, the overall picture is less clear. Some research suggests that virtual environments are not advanced enough to accurately simulate real-world conditions, and that real-world testing is necessary for a comprehensive assessment of systems involving robots [9], while other research argues that virtual environments can provide a suitable approximation of real-world conditions, with some limitations [14]. For example, according to [15], the use of VR technology has been shown to provide users with a similar experience to physical interaction with a robot arm. In their study, the authors found that users were able to perform tasks with a virtual robot arm just as accurately and efficiently as they could with a physical robot arm.
A further aspect of the virtual robot arm experiment that could affect transferability is the sense of touch. In the virtual robot arm experiment, users manipulated virtual wooden blocks, and comments from some users suggested that they had difficulty grabbing them; this was not the case for the real wooden blocks. This could indicate a potential issue for the transferability of precision and efficiency skills in training situations. Overall, transferability remains an open research question [9,14,15].
This entry is adapted from the peer-reviewed paper 10.3390/computers12040075