1. Human–Machine Interaction
To date, manual human work has often been replaced by robotic systems in industry. However, within complex systems, human–machine interaction (HMI) still needs to occur. HMI is a research area concerned with the development of robotic systems based on understanding, evaluating, and analysing the various forms of cooperation and interaction between robots and humans. Interaction requires communication between robots and humans, and human communication and collaboration with a robot system can take many forms. These forms are greatly influenced by whether the human is close to the robot and by the context of use: (i) the human–computer context (keyboard, buttons, etc.); (ii) the real-procedures context (haptics, sensors); and (iii) close interaction. Human–robot communication or interaction can therefore be divided into two main categories: remote interaction and close interaction. Remote interaction takes place through remote operation or supervised control; close interaction takes place through operation with an assistant or companion and may include physical interaction. Because close interactions are the most difficult, a number of aspects must be considered to ensure successful collaboration: real-time algorithms, "touch" detection and analysis, autonomy, semantic understanding capabilities, and AI-aided anticipation skills. A summary of the relevant research focused on improving and developing HMI methods is provided in Table 1.
Table 1. Research focused on human–machine interaction.

| Objective | Technology | Approach | Improvement | Ref. |
| --- | --- | --- | --- | --- |
| To improve the flexibility, productivity, and quality of a multi-pass gas tungsten arc welding (GTAW) process performed by a collaborative robot. | 6-axis robotic arm (Mitsubishi MELFA RV-13FM-D) with a GTAW torch end effector; a monitoring camera (Xiris XVC-1000); a load cell (ATI Industrial Automation Mini45-E) to evaluate tool force interactions with workpieces; a haptic interface. | A haptic-based approach designed and tested in a manufacturing scenario, proposing light, low-cost real-time algorithms for "touch" detection. | Two criteria were analysed to assess performance: the 3-Sigma rule and the Hampel identifier. Experimental results showed better performance of the 3-Sigma rule in terms of precision (mean value of 99.9%) and miss rate (mean value of 10%) compared with the Hampel identifier, and confirmed the influence of the contamination level of the dataset. The algorithm is a significant step towards light, simple machine learning approaches in real-time applications. | [1,2] |
| To strike a balance in order to find a suitable level of autonomy for human operators. | Model of Remotely Instructed Robots (RIRs). | Modelling method. | Developed a model in which the robot is autonomous in task execution but also aids the operator's ultimate decision-making about what to do next. Presenting the robot's own model of the work scene enables corrections to be made by the robot and can enhance the operator's confidence in the robot's work. | [4,5] |
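The two outlier tests compared in Table 1 are simple enough to run in real time on a streaming force signal. Below is a minimal sketch of both detectors, assuming a one-dimensional force trace from a wrist-mounted load cell; the window size and thresholds are illustrative defaults, not the parameters used in the cited work.

```python
import numpy as np

def three_sigma_touch(signal: np.ndarray, window: int = 50) -> np.ndarray:
    """Flag samples deviating by more than three standard deviations
    from the mean of a trailing window (the 3-Sigma rule)."""
    touches = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        touches[i] = abs(signal[i] - ref.mean()) > 3.0 * ref.std()
    return touches

def hampel_touch(signal: np.ndarray, window: int = 50, t0: float = 3.0) -> np.ndarray:
    """Flag samples farther from the trailing-window median than t0
    scaled median absolute deviations (the Hampel identifier)."""
    touches = np.zeros(len(signal), dtype=bool)
    k = 1.4826  # relates the MAD to sigma for Gaussian noise
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        med = np.median(ref)
        mad = k * np.median(np.abs(ref - med))
        touches[i] = abs(signal[i] - med) > t0 * mad
    return touches

# A quiet force trace with a simulated contact spike at sample 400.
force = np.random.normal(0.0, 0.05, 600)
force[400:410] += 1.5
print(three_sigma_touch(force).nonzero()[0][:3])
print(hampel_touch(force).nonzero()[0][:3])
```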
The interaction between humans and robots or mechatronic systems encompasses many interdisciplinary fields, including the physical sciences, social sciences, psychology, artificial intelligence, computer science, robotics, and engineering. This field examines all possible situations in which a human and a robot can systematically collaborate or complement each other. The main goal is thus to provide robots with competencies that facilitate their interaction with humans. Implementing such competencies requires modelling real-life situations and making predictions, applying these models in interaction with robots, and making the interaction as efficient as possible, i.e., inherently intuitive, based on human experience and artificial intelligence algorithms.
The role of various interfering aspects (Table 2) in human–robot interaction may lead to different future perspectives.
Table 2. Interfering aspects in human–robot interaction.

| Objective | Interaction | Approach | Solution | Ref. |
| --- | --- | --- | --- | --- |
| Frustration. | Close cooperative work. | Controlled coordination. | Sense of control of frustration; affective computing. | [6] |
| To produce more advanced or complex forms of interaction by enabling cobots with semantic understanding capabilities or AI-aided anticipation skills. | Collaborative robots. | Artificial intelligence. | The overview provides hints of future cobot developments and identifies research frontiers related to economic, social, and technological dimensions. | [3] |
| Emotion recognition. | By collecting different kinds of data. | Discrete models describing emotions; facial expression analysis; camera positioning. | Affective computing: empowering robots to observe, interpret, and express emotions, endowing robots with emotional intelligence. | [7] |
| Elucidating the neural mechanisms of action observation and intention understanding. | Decoding of action observation. | Decoding the underlying neural processes. | The dynamic involvement of the mirror neuron system (MNS) and the theory of mind (ToM)/mentalising network during action observation. | [8] |
| Verbal and non-verbal communication. | Interactive communication. | Symbol grounding. | Composition of grounded semantics; online negotiation of meaning; affective interaction and closed-loop affective dialogue; mixed speech-motor planning; massive acquisition of data-driven models for human–robot communication through crowd-sourced online games; real-time exploitation of online information and services for enhanced human–robot communication. | [9] |
The main aspirations are an intuitive, human-friendly interface, faster and simpler programming methods, advanced communication features, and robot reactions to human movements, mood, and even psychological state. Methods to monitor human actions and emotions [10], fusion of sensor data, and machine learning are key technologies for further improvement in the HMI area.
2. Object Recognition
Object recognition is a typical task in industrial robotics applications such as sorting, packaging, grouping, pick and place, and assembly (Table 3). The appropriate recognition method and equipment selection depends mainly on the given task, the object type, and the number of recognisable parameters. If there are few parameters, simpler sensing technologies based on typical approaches (geometry measurement, weighing, material property evaluation) can be implemented. If there are many recognisable parameters, photo or video analysis is preferred. The required two- or three-dimensional information can be extracted from images or video using computer vision techniques such as object localisation and recognition. Various vision-based object recognition techniques have been developed, including appearance-, model-, template-, and region-based approaches. Most vision recognition methods are based on deep learning [11] and other machine learning methods.
In a previous study [12], a lightweight Franka Emika Panda cobot with seven degrees of freedom and a Realsense D435 RGB-D camera mounted on the end effector was used to extend the robot's default functions. Instead of using a machine learning technique based on a large dataset, the authors proposed a method to program the robot from a single demonstration. This robotic system can detect various objects, regardless of their position and orientation, achieving an average success rate of more than 90% in less than 5 min of training time, using an Ubuntu 16.04 server running on an Intel(R) Core(TM) i5-2400 CPU (3.10 GHz) and an NVIDIA Titan X GPU.
Another approach for grasping randomly placed objects was presented in [13]. The authors proposed a set of performance metrics and compared four robotic bin-picking systems from the Amazon Robotics Challenge 2017, including the system that took first place. The survey results show that the most promising solutions for such a task are RGB-D sensors with CNN-based algorithms for object recognition, and a combination of suction-based and conventional two-finger grippers for grasping different objects (vacuum grippers for stiff objects with large, smooth surface areas, and two-finger grippers for air-permeable items).
Similar localisation and sorting tasks appear in the food and automotive industries, and in almost every production unit. In [14], an experimental method using a pneumatic robot arm was proposed for separating objects from a set according to their colour. If the colour of the workpiece is recognised, the workpiece is selected with the help of the robotic arm; if the colour does not meet the requirements, the workpiece is rejected. The described sorting system works according to an image processing algorithm in MATLAB software. More advanced object recognition methods based on simultaneous colour and height detection are presented in [15]. A robotic arm with six degrees of freedom (DoF) and a camera with computer vision software ensure a sorting efficiency of about 99%.
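As an illustration of this kind of colour-based sorting decision, the following sketch uses OpenCV in Python rather than the MATLAB pipeline of the cited study; the HSV thresholds, minimum area, and file name are assumptions made for the example, and real thresholds must be calibrated to the camera and lighting.

```python
import cv2
import numpy as np

# Illustrative HSV ranges; not the values used in the cited work.
COLOUR_RANGES = {
    "red":   ((0, 120, 70), (10, 255, 255)),
    "green": ((36, 80, 70), (89, 255, 255)),
    "blue":  ((90, 80, 70), (128, 255, 255)),
}

def classify_workpiece(image_bgr: np.ndarray, min_area: int = 500) -> str:
    """Return the dominant colour class of a workpiece, or 'reject'."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    best, best_area = "reject", min_area
    for name, (lo, hi) in COLOUR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        area = int(cv2.countNonZero(mask))
        if area > best_area:
            best, best_area = name, area
    return best

frame = cv2.imread("workpiece.jpg")       # hypothetical input image
if frame is not None:
    print(classify_workpiece(frame))      # e.g. 'green' -> pick; 'reject' -> discard
```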
A five-DoF robot arm, the "OWI Robotic Arm Edge", was used by Pengchang Chen et al. to validate the practicality and feasibility of a faster region-based convolutional neural network (faster R-CNN) model, using a dataset containing images of symmetric objects [16]. Objects were divided into classes based on colour, and into defective and non-defective objects.
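For readers unfamiliar with this model family, the sketch below runs inference with torchvision's off-the-shelf faster R-CNN; the pretrained COCO weights and the image file name stand in for the custom symmetric-object dataset of the cited study, so this only illustrates the detection pipeline, not the authors' trained model.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO weights stand in for the cited study's custom dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("parts_on_tray.jpg").convert("RGB")  # hypothetical image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.8:  # confidence threshold is a tuning choice
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))
```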
Despite significant progress in existing technologies, randomly placed, unpredictable objects remain a challenge in robotics. The success of a sorting task often depends on the accuracy with which recognisable parameters can be defined. Yan Yu et al. [17] proposed an RGB-D-based method for solid waste object detection. The waste sorting system consists of a server, vision sensors, industrial robots, and a rotational speedometer. Experiments on solid waste image analysis resulted in a mean average precision of 49.1%.
Furthermore, Wen Xiao et al. designed an automatic sorting robot that uses height maps and near-infrared (NIR) hyperspectral images to locate the region of interest (ROI) of objects and perform online statistical pixel-based classification within contours [18]. This robot can automatically sort construction and demolition waste ranging in size from 0.05 to 0.5 m. The online recognition accuracy of the developed sorting system reaches almost 100%, with an operation speed of up to 2028 picks/h.
Another challenging issue in object recognition and manipulation is objects of undefined shape contaminated by dust or smaller particles, such as minerals or coal. Quite often, such a task requires not only recognising the object but also determining the position of its centre of mass. Man Li et al. [19] proposed an image processing-based coal and gangue sorting method. Particle analysis of coal and gangue samples is performed using morphological erosion and dilation to obtain a complete, clean target sample. The object's centre of mass is obtained using the centre-of-mass method, which consists of particle removal and filling, image binarisation, separation of overlapping samples, reconstruction, and particle analysis. The presented method achieved identification accuracies of 88.3% for coal and 90.0% for gangue samples, and the average mass-centre coordinate errors in the x and y directions were 2.73% and 2.72%, respectively [19].
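The centre-of-mass step can be reproduced generically with binarisation, morphological cleaning, and image moments. The sketch below is a minimal OpenCV version with an illustrative kernel size and a hypothetical input image, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def mass_centre(image_gray: np.ndarray):
    """Binarise, clean the mask with morphological opening (erosion
    followed by dilation), and return the centroid of the largest
    blob computed from image moments."""
    _, mask = cv2.threshold(image_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)  # illustrative structuring element
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels

sample = cv2.imread("gangue_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
if sample is not None:
    print(mass_centre(sample))
```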
Intelligent autonomous robots for picking different kinds of objects were studied as a possible means of overcoming the limitations of existing robotic solutions for picking objects in cluttered environments [20]. This autonomous robot, which can also be used for commercial purposes, has an integrated two-finger gripper and a soft robotic end effector to grab objects of various shapes. A special algorithm solves the 3D perception problems caused by cluttered environments and selects the right grasping point. When line features are used, the time required depends significantly on the configuration of the objects, ranging from 0.02 s when the objects have almost the same depth, to 0.06 s in the worst case, when the depth of the touched objects is greater than the lowest depth but not perceived [20].
In robotics, object recognition often includes not only recognition and the determination of coordinates, but also plays an essential role in the creation of the robot control program. Based on an ABB IRB 140 robot and a digital camera, a low-cost shape identification system was developed and implemented, which is particularly important due to the high variability of welded products [21]. The authors developed an algorithm that recognises the required toolpath from a captured image. The algorithm defines the path as a complex polynomial and then approximates it by simpler shapes with fewer coordinates (line, arc, spline) so that the tool movement can be realised using standard robot programming language features.
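A simplified version of this contour-to-toolpath idea can be sketched with OpenCV's polyline approximation. Unlike the cited system, this example fits straight segments only (no arcs or splines), and the input file name is hypothetical.

```python
import cv2

def toolpath_from_image(path: str, eps_px: float = 2.0):
    """Extract the largest contour from an image of the part and
    approximate it by a polyline with few vertices, standing in for
    the line/arc/spline fitting used in the cited system."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return []
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    seam = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(seam, eps_px, closed=False)
    return [tuple(pt[0]) for pt in approx]  # pixel-space waypoints

# Each waypoint would then be transformed to robot coordinates and emitted
# as move instructions in the controller's programming language.
waypoints = toolpath_from_image("seam_photo.png")  # hypothetical image file
```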
Moreover, object recognition can be used in robot machine learning to analyse human behaviour. Such an approach was presented by Hiroaki et al. [22], where the authors studied the behaviour of a human crowd and formulated a new forecasting task, called crowd density forecasting, using a fixed surveillance camera. The main goal of this experiment was to predict how the density of the crowd would change in unseen future frames. To address this issue, patch-based density forecasting networks (PDFNs) were developed. PDFNs capture the complex dynamics of crowd density throughout the scene based on a set of spatially or spatio-temporally overlapping patches, thus adapting the receptive fields of fully convolutional networks. Such a solution could be used to train robotic swarms, because they behave similarly to humans in crowded areas.
Table 3. Research focused on object recognition in robotics.

| Objective | Technology | Approach | Improvement | Ref. |
| --- | --- | --- | --- | --- |
| Extended default "program from demonstration" feature of collaborative robots to adapt them to environments with moving objects. | Franka Emika Panda cobot with 7 degrees of freedom, with a Realsense D435 RGB-D camera mounted on the end effector. | Grasping method fine-tuned using reinforcement learning techniques. | The system can grasp various objects from a demonstration, regardless of their position and orientation, in less than 5 min of training time. | [12,23] |
| Introduction of a set of metrics for primary comparison of robotic systems' detailed functionality and performance. | Robots with different grippers. | Recognition method and grasping method. | Original robot performance metrics were developed and tested on four robot systems used in the Amazon Robotics Challenge competition. The analysis showed the differences between the systems and promising solutions for further improvements. | [13,22,24] |
| To build a low-cost system for identifying shapes to program industrial robots for the 2D welding process. | ABB IRB 140 robot with a digital camera, which detects contours on a 2D surface. | A binarisation and contour recognition method. | A low-cost system based on industrial vision was developed and implemented for simple programming of the movement path. | [25,26] |
| Patch-based density forecasting networks (PDFNs) directly forecast crowd density maps of future frames instead of trajectories of each moving person in the crowd. | Fixed surveillance camera. | Density forecasting in image space; density forecasting in latent space; PDFNs; spatio-temporal patch-based Gaussian filter. | The proposed patch-based models, PDFN-S and PDFN-ST, outperformed baselines on all the datasets. PDFN-ST successfully forecasted the dynamics of individuals, small groups, and crowds. The approach cannot always forecast sudden changes in walking direction, especially when they happen in later frames. | [22] |
| To separate objects from a set according to their colour. | Pneumatic robot arm. | Force in response to applied pressure. | The proposed robotic arm may be considered for sorting. Servo motors and image processing cameras can be used to achieve higher repeatability and accuracy. | [14,27] |
| An image processing-based method for coal and gangue sorting; development of a positioning and identification system. | Coal and gangue sorting robot. | Threshold segmentation methods; clustering method; morphological erosion and dilation; centre-of-mass method. | Efficiency is evaluated using images of coal and gangue randomly picked from the production environment. The average coordinate errors in the x and y directions are 2.73% and 2.72%, the identification accuracies of coal and gangue samples are 88.3% and 90.0%, respectively, and the total time for identification, positioning, and opening the camera averages 0.130 s per sample. | [18,28,29] |
| A computer vision-based robotic sorter capable of simultaneously detecting and sorting objects by colour and height. The vision-based process encompasses identification, manipulation, selection, and sorting of objects depending on colour and geometry. | A 5- or 6-DoF robotic arm and a camera with computer vision software detecting various colours, heights, and geometries. | Computer vision methods with the Haar Cascade algorithm; the Canny edge detection algorithm for shape identification. | A robotic arm is used for picking and placing objects based on colour and height. In the proposed system, colour and height sorting efficiency is around 99%. The effectiveness, high accuracy, and low cost of computer vision with a robotic arm in sorting by colour and shape are demonstrated. | [15,30,31] |
| A novel multimodal convolutional neural network for RGB-D object detection. | A solid waste sorting system consisting of a server, vision sensors, an industrial robot, and a rotational speedometer. | Comparison with single-modal methods; evaluation on the Washington RGB-D object recognition benchmark. | Meets real-time requirements while ensuring high precision: 49.1% mean average precision, processing images in real time at 35.3 FPS on a single Nvidia GTX1080 GPU. Novel dataset. | [17,32] |
| Practicality and feasibility of a faster R-CNN model using a dataset containing images of symmetric objects. | Five-DoF robot arm "OWI Robotic Arm Edge". | CNN learning algorithm that processes images with multiple layers (filters) and classifies objects in images; Region Proposal Network (RPN). | The accuracy and precision rates are steadily enhanced. The accuracy of detecting defective and non-defective objects is successfully improved by increasing the training dataset to up to 400 images of defective and non-defective objects. | [16,33,34] |
| An automatic sorting robot that uses height maps and near-infrared (NIR) hyperspectral images to locate objects' ROI and conduct online statistical pixel-based classification in contours; 24/7 monitoring. | A robotic system with four modules: (1) the main conveyor, (2) a detection module, (3) a light source module, and (4) a manipulator; Mask R-CNN and YOLOv3 algorithms. | Method for an automatic sorting robot; identification includes pixel, sub-pixel, and object-based methods. | The prototype machine can automatically sort construction and demolition waste with a size range of 0.05–0.5 m. The sorting efficiency can reach 2028 picks/h, and the online recognition accuracy reaches nearly 100%. Can be applied in technology for land monitoring. | [18,35,36] |
| Overcoming the current limitations of existing robotic solutions for picking objects in cluttered environments. | Intelligent autonomous robots for picking different kinds of objects; universal jamming gripper. | A comparative study of the algorithmic performance of the proposed method. | When a corner is detected, it takes just 0.003 s to output the target point. With lines, the required time depends on the objects' configuration, ranging from 0.02 s, when objects have almost the same depth, to 0.06 s in the worst-case scenario. | [20,37,38,39] |
A few main trends can be highlighted from the research analysis related to object recognition in robotics. These can be defined as object recognition for localisation and further manipulation; object recognition for shape evaluation and automatic generation of the robot program code for the corresponding robot movement; and object recognition for behaviour analysis to use as initial data for machine learning algorithms. A large number of reliable solutions have been tested in the industrial environment for the first trend, in contrast to the second and third cases, which are currently being developed.
3. Medical Applications
The da Vinci Surgical System is the best-known robotic manipulator used in surgery. Florian Richter et al. [40] presented Patient Side Manipulator (PSM) arm technology to implement reinforcement learning algorithms for surgical da Vinci robots, introducing the first open-source reinforcement learning environment for surgical robots, called dVRL [40]. This environment allows fast training of da Vinci robots for autonomous assistance and for collaborative or repetitive tasks during surgery. During the experiments, the dVRL control policy was learned effectively and could be transferred to a real robot with minimal effort. Although the proposed environment results in the simple, primitive actions of reaching and picking, these proved useful for suction and debris removal in a real surgical setting.
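dVRL exposes its tasks through the OpenAI Gym interface, so interaction follows the standard reset/step loop. The sketch below shows that loop with a random agent; the environment id is a placeholder, and a real setup would substitute a DDPG + HER agent for the random actions.

```python
import gym  # dVRL environments follow the classic OpenAI Gym API

# 'dVRLReach-v0' is a placeholder id used for illustration; a real setup
# would import/register the dVRL environments and train DDPG + HER
# instead of sampling random actions.
env = gym.make("dVRLReach-v0")
obs = env.reset()
for step in range(200):
    action = env.action_space.sample()   # stand-in for policy(obs)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```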
Meanwhile, Yohannes Kassahun et al. reviewed the role of machine learning techniques in surgery, focusing on surgical robotics [41]. They found that the research community currently faces many challenges in applying machine learning to surgery and robotic surgery. The main issues are a lack of high-quality medical and surgical data, a lack of reliable metrics that adequately reflect learning characteristics, and a lack of a structured approach to the effective transfer of surgical skills for automated execution [41]. Nevertheless, the application of deep learning in robotics is a widely studied field. A 2017 article by Harry A. Pierson et al. provides a review emphasising the benefits and challenges vis-à-vis robotics [42]. Similarly to [41], they found that the main limitations preventing deep learning in medical robotics are the huge volume of training data required and the relatively long training time.
Surgery is not the only field of medicine in which robotic manipulators can be used. Another autonomous robotic grasping system, described by John E. Downey et al., introduces shared control of a robotic arm based on the interaction of a brain–machine interface (BMI) and a vision guidance system [43]. The BMI is used to infer the user's intent to grasp or transfer an object. Visual guidance is used for low-level control tasks: short-range movements, definition of the optimal grasping position, alignment of the robot end effector, and grasping. Experiments showed that shared-control movements were more accurate and efficient, and less difficult, than transfer tasks using the BMI alone.
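One common way to realise such shared control is to blend the user's decoded command with the autonomous controller's command as the hand approaches the object. The sketch below is a generic linear blending law with an illustrative handover radius, not the controller from the cited study.

```python
import numpy as np

def blended_command(user_vel: np.ndarray,
                    auto_vel: np.ndarray,
                    dist_to_object: float,
                    handover_radius: float = 0.15) -> np.ndarray:
    """Blend BMI-decoded and autonomous end-effector velocities.
    Autonomy takes over smoothly as the hand nears the object."""
    alpha = min(1.0, dist_to_object / handover_radius)  # 1 far, 0 at object
    return alpha * user_vel + (1.0 - alpha) * auto_vel

# Far from the object the user dominates; close in, vision guidance does.
print(blended_command(np.array([0.05, 0.0, 0.0]),
                      np.array([0.0, 0.02, -0.03]),
                      dist_to_object=0.05))
```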
Another case in medicine that requires fast robot programming methods is the assessment of functional abilities in functional capacity evaluations (FCEs) [44]. Currently, there is no single rational solution that simulates all or many standard work tasks to improve the assessment and rehabilitation of injured workers. The authors therefore proposed that, with a robotic system and machine learning algorithms, it is possible to simulate workplace tasks. Such a system can improve the assessment of functional abilities in FCEs and functional rehabilitation by performing reaching manoeuvres or more complex tasks learned from an experienced therapist. Although this type of research is still in its infancy, robotics with integrated machine learning algorithms can improve the assessment of functional abilities [44].
Although the main task of robotic manipulators in medicine is the direct manipulation of objects or tools, they can also be used for therapeutic purposes for people with mental or physical disorders. Such applications are often limited by the robot's ability to automatically perceive and respond as needed to maintain an engaging interaction. Ognjen Rudovic et al. presented a personalised deep learning framework that can adapt robot perception [45]. The researchers focused on robot perception, developing an individualised deep learning system that can automatically assess a patient's emotional state and level of engagement. This makes it easier to monitor treatment progress and to optimise the interaction between the patient and the robot.
Robotic technologies can also be applied in dentistry, although to date there has been a lack of implementation of fundamental ideas. In a comprehensive review of robotics and artificial intelligence, Jasmin Grischke et al. present numerous approaches to applying these technologies [46]. Robotic technologies in dentistry can be used for maxillofacial surgery [47], tooth preparation [48], testing of toothbrushes [49], root canal treatment and plaque removal [50], orthodontics and jaw movement [51], tooth arrangement for full dentures [52], X-ray imaging [53], swab sampling [54], etc.
A summary of research focused on robotics in medical applications is provided in Table 4. It can be seen that robots are still not very popular in this area, which can be explained by technological and psychological/ethical factors. From the technical point of view, more active implementation is limited by the lack of fast and reliable robot program preparation methods. Regarding psychological and ethical factors, robots still seem unreliable to a large portion of society and are therefore accepted only with significant hesitation.
Table 4. Robotic solutions in medical applications.

| Objective | Technology | Approach | Improvement | Ref. |
| --- | --- | --- | --- | --- |
| Create a bridge between the reinforcement learning and surgical robotics communities by presenting the first open-sourced reinforcement learning environments for surgical da Vinci robots. | Patient Side Manipulator (PSM) arm; da Vinci® Surgical Robot; Large Needle Driver (LND) with a jaw gripper to grab objects such as a suturing needle. | Reinforcement learning; OpenAI Gym; DDPG (Deep Deterministic Policy Gradients) and HER (Hindsight Experience Replay); V-REP physics simulator. | Developed a new reinforcement learning environment for fast and effective training of surgical da Vinci robots for autonomous operations. | [40] |
| A method of shared control where the user controls a prosthetic arm using a brain–machine interface and receives assistance with positioning the hand when it approaches an object. | Brain–machine interface system; robotic arm; RGB-D camera mounted above the arm base. | Shared control system; an autonomous robotic grasping system. | Shared control of a robotic manipulator, making control more accurate, more efficient, and less difficult than control through the BMI alone. | [43] |
| A personalised deep learning framework that can adapt robot perception of children's affective states and engagement to different cultures and individuals. | Unobtrusive audiovisual sensors and wearable sensors providing the child's heart rate, skin conductance (EDA), body temperature, and accelerometer data. | Feed-forward multilayer neural networks; GPA-net. | Achieved an average agreement of ~60% with human experts when estimating affect and engagement. | [45] |
| An overview of existing applications and concepts of robotic systems and artificial intelligence in dentistry, of functional capacity evaluations, of the role of ML in surgery using surgical robotics, and of deep learning vis-à-vis physical robotic systems, focused on contemporary research. | An overview. | An overview. | An overview. | [41,42,44,46] |
| Transoral robot towards COVID-19 swab sampling. | Flexible manipulator; an endoscope with a monitor; a master device. | Teleoperated configuration for swab sampling. | A flexible transoral robot with a teleoperated configuration is proposed to address surgeons' risks during face-to-face COVID-19 swab sampling. | [54] |
4. Path Planning, Path Optimisation
Robotic navigation aims to achieve accurate positioning while avoiding obstacles in the pathway. It is essential to satisfy constraints such as limited operating space, distance, energy, and time [55]. The path trajectory formation process consists of four separate modules: perception, when the robot receives the necessary information from its sensors; localisation, when the robot determines its position in the environment; path planning; and motion control [56]. The development of autonomous robot path planning and path optimisation algorithms is one of the most challenging current research areas. Nevertheless, any kind of path planning requires information about the initial robot position. In the case of stationary robots, such information is usually easily accessible, in contrast to industrial manipulators mounted on mobile platforms. For mobile robots and automated guided vehicles (AGVs), accurate self-localisation in various environments [57,58] is the basis for further trajectory planning and optimisation.
According to the amount of available information, robot path planning can be divided into two categories: local and global path planning. With a local path planning strategy, the robot has rather limited knowledge of the navigation environment. With global path planning, the robot has in-depth knowledge of the navigation environment and reaches its destination by following a predetermined path. Robotic path planning methods have been applied in many fields, such as reconstructive surgery, ocean and space exploration, and vehicle control. In the case of industrial robots, path planning refers to finding the best trajectory for transferring a tool or object to its destination within the robot workspace. It is essential to note that typical industrial robots are not designed for real-time path planning; usually, trajectories are prepared in advance using online or offline programming methods. One possible technique is the use of specialised commercial computer-aided manufacturing (CAM) software such as Mastercam/Robotmaster or Sprutcam. However, the functionality of such software is relatively constrained and does not go beyond the framework of classical tasks such as welding or milling. The use of CAM software also requires highly qualified professionals; as a result, applying it to individual installations is economically disadvantageous. As an alternative to CAM software, methods based on copying the movements of highly skilled specialists using commercially available equipment, such as MIMIC from Nordbo Robotics (Antvorskov, Denmark), may be used. This platform teaches robots smooth, complex paths from demonstrations by recording the required movements, which are then smoothed and optimised. To overcome the limitations caused by the lack of real-time path planning features in robot controllers, additional external controllers and real-time communication with the manipulator are required. In the area of path planning and optimisation, experiments have been conducted on automatic object and 3D position detection [59], quasi-static path optimisation [60], image analysis [61], path smoothing [62], BIM [63], and accurate self-localisation in harsh industrial environments [57,58].
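When a map of the environment is available, global path planning of this kind is commonly solved with grid search such as A*. The following self-contained sketch plans on a small occupancy grid; it is a textbook illustration, not a method from the cited works.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    open_set = [(h(start), 0, start, None)]  # (f-cost, g-cost, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path back to start
            path = []
            while node:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))  # e.g. [(0,0), (0,1), (0,2), (1,2), (2,2), (2,3)]
```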
7. Civil Engineering Industry
In general, the construction industry is relatively inefficient from the perspective of automation, and robotics is seldom applied [69]. The main challenges identified for wider adoption of robotics in the construction industry fall into four categories: contractor-side economic factors; client-side economic factors; technical and work-culture factors; and weak business case factors. Technical and work-culture factors include an untrained workforce, unproven effectiveness, immature technology, and the current work culture with its aversion to change [70].
The perspective of robotics in civil engineering is significantly better. Here, robotics provides considerable opportunities to increase productivity, efficiency, and flexibility, from automated modular house production to robotic welding, material handling on construction sites, and 3D printing of houses or certain structures. Robots make the industry safer and more economical, increase sustainability, and reduce environmental impact, while improving quality and reducing waste. The total global value of the construction industry is forecast to grow by 85% to USD 15.5 trillion by 2030 [71]. Robots can make construction safer by handling large and heavy loads, working in hazardous locations, and enabling new, safer construction methods. Transferring repetitive and dangerous tasks that humans are increasingly reluctant to perform to robots means that automation can help address the labour and skills crisis and make the construction industry more attractive [72,73]. Few classic robots are used in the construction process because of its dynamic and inaccurately described environment; however, work on 3D models of buildings and their environments is reducing this limitation.