Industrial Robotics

The industrial robotics sector is one of the fastest-growing industrial fields, providing standardised technologies suitable for a wide range of automation processes. In most applications, industrial robots are integrated into larger units such as robotic cells or automated/autonomous manufacturing lines.

Keywords: industrial robots; collaborative robots; human–robot interaction; robotic perspectives

1. Human–Machine Interaction

To date, manual human work has often been replaced by robotic systems in industry. Within complex systems, however, interaction between humans and machines or robots (HMI) still needs to occur. HMI is a research area concerned with the development of robotic systems based on understanding, evaluation, and analysis, combining various forms of cooperation or interaction with humans. Interaction requires communication between robots and humans, and human communication and collaboration with a robot system can take many forms. These forms are strongly influenced by how close the human is to the robot and by the context of use: (i) the human–computer context (keyboard, buttons, etc.); (ii) the real-procedures context (haptics, sensors); and (iii) close and exact interaction. Accordingly, human–robot communication and interaction can be divided into two main categories: remote interaction and close interaction. Remote interaction takes place through remote operation or supervised control; close interaction takes place through operation with an assistant or companion and may include physical interaction. Because close interactions are the most difficult, several aspects must be considered to ensure successful collaboration: real-time algorithms, "touch" detection and analysis, autonomy, semantic understanding capabilities, and AI-aided anticipation skills. A summary of the relevant research focused on improving and developing HMI methods is provided in Table 1.
Table 1. Research focused on human–machine interaction.
Objective: To improve the flexibility, productivity, and quality of a multi-pass gas tungsten arc welding (GTAW) process performed by a collaborative robot.
Technology: A haptic interface; a 6-axis robotic arm (Mitsubishi MELFA RV-13FM-D); an end effector with a GTAW torch; a monitoring camera (Xiris XVC-1000); a load cell (ATI Industrial Automation Mini45-E) to evaluate tool force interactions with workpieces.
Approach: A haptic-based approach designed and tested in a manufacturing scenario, proposing light, low-cost, real-time algorithms for "touch" detection.
Improvement: Two criteria were analysed to assess performance: the 3-Sigma rule and the Hampel identifier (see the sketch after this table). Experimental results showed better performance of the 3-Sigma rule in terms of precision (mean value of 99.9%) and miss rate (mean value of 10%) compared with the Hampel identifier, and confirmed the influence of the contamination level of the dataset. The algorithm is a significant step towards using light and simple machine learning approaches in real-time applications.
Ref.: [1][2]

Objective: To produce more advanced or complex forms of interaction by enabling cobots with semantic understanding capabilities or AI-aided anticipation skills.
Technology: Collaborative robots; artificial intelligence.
Approach/Improvement: An overview that provides hints of future cobot developments and identifies research frontiers related to economic, social, and technological dimensions.
Ref.: [3]

Objective: To find a suitable level of autonomy for human operators.
Technology: Model of remotely instructed robots (RIRs).
Approach: Modelling method.
Improvement: A model was developed in which the robot is autonomous in task execution but also aids the operator's decision-making about what to do next. Presenting the robot's own model of the work scene enables the robot to make corrections and can enhance the operator's confidence in the robot's work.
Ref.: [4][5]
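As a rough, self-contained illustration of the two outlier tests compared in Table 1 (not the authors' implementation), the following Python sketch applies a sliding-window 3-Sigma rule and a Hampel identifier to a synthetic force signal. The window length, the threshold k, and the simulated contact event are illustrative assumptions; the Hampel identifier replaces mean/standard deviation with median/MAD, which makes it less sensitive to contaminated reference windows, the effect studied in [1].

```python
import numpy as np

def three_sigma_touch(signal, window=50, k=3.0):
    """Flag samples deviating more than k standard deviations
    from the mean of the preceding window (3-Sigma rule for k=3)."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        flags[i] = abs(signal[i] - mu) > k * sigma
    return flags

def hampel_touch(signal, window=50, k=3.0):
    """Hampel identifier: median/MAD-based outlier test, more robust
    when the reference window itself contains outliers."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        med = np.median(ref)
        mad = 1.4826 * np.median(np.abs(ref - med))  # scaled MAD ~ sigma
        flags[i] = abs(signal[i] - med) > k * mad
    return flags

# Synthetic force signal: quiet baseline with a contact event at sample 300.
rng = np.random.default_rng(0)
force = rng.normal(0.0, 0.05, 800)
force[300:320] += 2.0  # simulated human "touch"
print(np.flatnonzero(three_sigma_touch(force))[:5])
print(np.flatnonzero(hampel_touch(force))[:5])
```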
The interaction between humans and robots or mechatronic systems spans many interdisciplinary fields, including the physical sciences, social sciences, psychology, artificial intelligence, computer science, robotics, and engineering. The field examines all situations in which a human and a robot can systematically collaborate or complement each other. The main goal is thus to provide robots with competencies that facilitate their interaction with humans. Implementing such competencies requires modelling real-life situations and predictions, applying these models in interaction with robots, and making the interaction as efficient as possible, i.e., inherently intuitive, based on human experience and artificial-intelligence algorithms.
The role of various interfering aspects (Table 2) in human–robot interaction may lead to different future perspectives.
Table 2. Interfering aspects in human–robot interaction.
Aspect: Frustration.
Interaction: Close cooperative work.
Approach: Controlled coordination.
Solution: Sense of control of frustration; affective computing.
Ref.: [6]

Aspect: Emotion recognition.
Interaction: Collecting different kinds of data.
Approach: Discrete models describing emotions; facial expression analysis; camera positioning.
Solution: Affective computing; empowering robots to observe, interpret, and express emotions; endowing robots with emotional intelligence.
Ref.: [7]

Aspect: Decoding of action observation.
Interaction: Elucidating the neural mechanisms of action observation and intention understanding.
Approach: Decoding the underlying neural processes.
Solution: The dynamic involvement of the mirror neuron system (MNS) and the theory-of-mind (ToM)/mentalising network during action observation.
Ref.: [8]

Aspect: Verbal and non-verbal communication.
Interaction: Interactive communication.
Approach: Symbol grounding.
Solution: Composition of grounded semantics; online negotiation of meaning; affective interaction and closed-loop affective dialogue; mixed speech-motor planning; massive acquisition of data-driven models for human–robot communication through crowd-sourced online games; real-time exploitation of online information and services for enhanced human–robot communication.
Ref.: [9]
The main aspirations are an intuitive, human-friendly interface; faster and simpler programming methods; advanced communication features; and robot reactions to human movements, mood, and even psychological state. Methods to monitor human actions and emotions [10], sensor data fusion, and machine learning are the key technologies for further improvement in the HMI area.

2. Object Recognition

Object recognition is a typical task in industrial robotics applications such as sorting, packaging, grouping, pick and place, and assembly (Table 3). The choice of recognition method and equipment depends mainly on the given task, the object type, and the number of recognisable parameters. If there are few parameters, simpler sensing technologies based on typical approaches (geometry measurement, weighing, evaluation of material properties) can be implemented. If there are many recognisable parameters, photo or video analysis is preferred. The required two- or three-dimensional information can be extracted from images or video using computer vision techniques such as object localisation and recognition. Various vision-based object recognition techniques have been developed, including appearance-, model-, template-, and region-based approaches. Most vision recognition methods are based on deep learning [11] and other machine learning methods, as in the generic detector sketch below.
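As a hedged illustration of such deep-learning-based recognition pipelines (a generic sketch, not a method from the cited works), the snippet below runs an off-the-shelf Faster R-CNN detector from torchvision on a single workspace image. The file name "workspace.jpg" and the 0.8 confidence threshold are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Hypothetical image of the robot workspace.
image = Image.open("workspace.jpg").convert("RGB")

# Off-the-shelf Faster R-CNN pretrained on COCO, standing in for the
# task-specific detectors discussed in the cited works.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep detections above an (assumed) confidence threshold; boxes are
# (x1, y1, x2, y2) in pixels and would still need a calibrated
# camera-to-robot transform before any grasp can be planned.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score > 0.8:
        print(int(label), [round(float(v), 1) for v in box])
```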
In a previous study [12], a lightweight Franka Emika Panda cobot with seven degrees of freedom and a RealSense D435 RGB-D camera mounted on the end effector was used to extend the robot's default functions. Instead of using a machine learning technique based on a large dataset, the authors proposed a method to program the robot from a single demonstration. The robotic system can detect various objects regardless of their position and orientation, achieving an average success rate above 90% with less than 5 min of training time on an Ubuntu 16.04 server with an Intel Core i5-2400 CPU (3.10 GHz) and an NVIDIA Titan X GPU.
Another approach for grasping randomly placed objects was presented in [13]. The authors proposed a set of performance metrics and compared four robotic bin-picking systems that competed in the Amazon Robotics Challenge 2017. The survey results show that the most promising solutions for this task are RGB-D sensors with CNN-based object recognition algorithms, combined with both suction-based and conventional two-finger grippers for grasping different objects (vacuum grippers for stiff objects with large, smooth surfaces, and two-finger grippers for air-permeable items).
Similar localisation and sorting tasks appear in the food and automotive industries, and in almost every production unit. In [14], an experimental method was proposed using a pneumatic robot arm for separation of objects from a set according to their colour. If the colour of the workpiece is recognisable, it is selected with the help of a robotic arm. If the workpiece colour does not meet the requirements, it is rejected. The described sorting system works according to an image processing algorithm in MATLAB software. More advanced object recognition methods based on simultaneous colour and height detection are presented in [15]. A robotic arm with six degrees of freedom (DoF) and a camera with computer vision software ensure a sorting efficiency of about 99%.
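A minimal OpenCV equivalent of the colour-based accept/reject logic described in [14] (which was implemented in MATLAB) might look as follows; the HSV thresholds for "red" and the 5% area criterion are illustrative assumptions, not values from the paper.

```python
import cv2

def classify_by_colour(bgr_frame):
    """Return 'accept' if enough red pixels are present, else 'reject'."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis in HSV, so two ranges are combined.
    mask = (cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
            | cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
    ratio = cv2.countNonZero(mask) / mask.size
    return "accept" if ratio > 0.05 else "reject"  # assumed 5% area criterion

frame = cv2.imread("workpiece.jpg")  # hypothetical camera frame
print(classify_by_colour(frame))
```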
A five-DoF robot arm, the "OWI Robotic Arm Edge", was used by Pengchang Chen et al. to validate the practicality and feasibility of a faster region-based convolutional neural network (faster R-CNN) model using a dataset containing images of symmetric objects [16]. Objects were divided into classes based on colour and on whether they were defective or non-defective.
Despite significant progress in existing technologies, randomly placed unpredictable objects remain a challenge in robotics. The success of a sorting task often depends on the accuracy with which recognisable parameters can be defined. Yan Yu et al. [17] proposed an RGB-D-based method for solid waste object detection. The waste sorting system consists of a server, vision sensors, industrial robots, and rotational speedometer. Experiments performed on solid waste image analysis resulted in a mean average precision value of 49.1%.
Furthermore, Wen Xiao et al. designed an automatic sorting robot that uses height maps and near-infrared (NIR) hyperspectral images to locate the regions of interest (ROIs) of objects and to perform online statistical pixel-based classification within contours [18]. The robot can automatically sort construction and demolition waste ranging in size from 0.05 to 0.5 m. The online recognition accuracy of the developed sorting system reaches almost 100%, and operating speeds of up to 2028 picks/h are achieved.
Another challenging issue in object recognition and manipulation concerns objects of undefined shape contaminated by dust or smaller particles, such as minerals or coal. Quite often, such a task requires not only recognising the object but also determining the position of its centre of mass. Man Li et al. [19] proposed an image-processing-based coal and gangue sorting method. Particle analysis of coal and gangue samples is performed using morphological erosion and dilation to obtain a complete, clean target sample. The object's mass centre is obtained using a centre-of-mass method consisting of particle removal and filling, image binarisation, separation of overlapping samples, reconstruction, and particle analysis. The presented method achieved identification accuracies of 88.3% for coal and 90.0% for gangue samples, and the average mass-centre coordinate errors in the x and y directions were 2.73% and 2.72%, respectively [19].
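The binarisation and centre-of-mass steps of such a pipeline can be sketched with OpenCV image moments as below; this covers only part of the method in [19] (particle filling and overlap separation are omitted), and the input file name is hypothetical.

```python
import cv2
import numpy as np

def mass_centre(grey):
    """Binarise (Otsu), remove small particles with morphological opening,
    then compute the centroid from image moments."""
    _, binary = cv2.threshold(grey, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    m = cv2.moments(cleaned, binaryImage=True)
    if m["m00"] == 0:
        return None  # nothing segmented
    return m["m10"] / m["m00"], m["m01"] / m["m00"]  # (x, y) in pixels

grey = cv2.imread("coal_sample.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
print(mass_centre(grey))
```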
Intelligent autonomous robots for picking different kinds of objects have been studied as a possible means to overcome the current limitations of existing robotic solutions for picking in cluttered environments [20]. This autonomous robot, which can also be used commercially, has an integrated two-finger gripper and a soft robotic end effector to grab objects of various shapes. A special algorithm solves the 3D perception problems caused by cluttered environments and selects the right grasping point. When line features are used, the required time depends significantly on the configuration of the objects, ranging from 0.02 s when the objects have almost the same depth to 0.06 s in the worst case [20].
In robotics, the task of object recognition often includes not only recognition and the determination of coordinates; it can also play an essential role in creating the robot control program. Based on an ABB IRB 140 robot and a digital camera, a low-cost shape identification system was developed and implemented, which is particularly important due to the high variability of welded products [21]. The authors developed an algorithm that recognises the required toolpath from a captured image. The algorithm defines the path as a complex polynomial and then approximates it by simpler shapes with fewer coordinates (line, arc, spline) so that the tool movement can be realised using standard robot programming language features.
Moreover, object recognition can be used in robot machine learning to analyse human behaviour. Such an approach was presented by Minoura et al. [22], who studied the behaviour of human crowds and formulated a new forecasting task, called crowd density forecasting, using a fixed surveillance camera. The main goal was to predict how the density of the crowd would change in unseen future frames. To address this task, patch-based density forecasting networks (PDFNs) were developed. PDFNs capture the complex dynamics of crowd density throughout the scene based on a set of spatially or spatio-temporally overlapping patches, thus adapting the receptive fields of fully convolutional networks. Such a solution could also be used to train robotic swarms, because they behave similarly to humans in crowded areas.
Table 3. Research focused on object recognition in robotics.
Objective: To extend the default "program from demonstration" feature of collaborative robots to adapt them to environments with moving objects.
Technology: Franka Emika Panda cobot with 7 degrees of freedom, with a RealSense D435 RGB-D camera mounted on the end effector.
Approach: Grasping method fine-tuned using reinforcement learning techniques.
Improvement: The system can grasp various objects from a demonstration, regardless of their position and orientation, in less than 5 min of training time.
Ref.: [12][23]

Objective: Introduction of a set of metrics for the primary comparison of robotic systems' detailed functionality and performance.
Technology: Robots with different grippers.
Approach: Recognition and grasping methods.
Improvement: Original robot performance metrics were developed and tested on four robot systems used in the Amazon Robotics Challenge competition. The analysis showed the differences between the systems and identified promising solutions for further improvement.
Ref.: [13][22][24]

Objective: To build a low-cost shape identification system for programming industrial robots for the 2D welding process.
Technology: ABB IRB 140 robot with a digital camera that detects contours on a 2D surface.
Approach: A binarisation and contour recognition method.
Improvement: A low-cost system based on industrial vision was developed and implemented for simple programming of the movement path.
Ref.: [25][26]

Objective: Patch-based density forecasting networks (PDFNs) that directly forecast crowd density maps of future frames instead of trajectories of each moving person in the crowd.
Technology: Fixed surveillance camera.
Approach: Density forecasting in image space; density forecasting in latent space; PDFNs; spatio-temporal patch-based Gaussian filter.
Improvement: The proposed patch-based models, PDFN-S and PDFN-ST, outperformed baselines on all datasets. PDFN-ST successfully forecasted the dynamics of individuals, small groups, and crowds. The approach cannot always forecast sudden changes in walking direction, especially in later frames.
Ref.: [22]

Objective: To separate objects from a set according to their colour.
Technology: Pneumatic robot arm.
Approach: Force in response to applied pressure.
Improvement: The proposed robotic arm may be considered for sorting. Servo motors and image-processing cameras can be used to achieve higher repeatability and accuracy.
Ref.: [14][27]

Objective: An image-processing-based method for coal and gangue sorting; development of a positioning and identification system.
Technology: Coal and gangue sorting robot.
Approach: Threshold segmentation methods; clustering; morphological erosion and dilation; the centre-of-mass method.
Improvement: Efficiency was evaluated using images of coal and gangue randomly picked from the production environment. The average coordinate errors in the x and y directions are 2.73% and 2.72%, the identification accuracies of coal and gangue samples are 88.3% and 90.0%, respectively, and the total time for identification, positioning, and opening the camera averaged 0.130 s per sample.
Ref.: [18][28][29]

Objective: A computer-vision-based robotic sorter capable of simultaneously detecting and sorting objects by colour and height.
Technology: A 5- or 6-DoF robotic arm and a camera with computer vision software detecting various colours, heights, and geometries.
Approach: Computer vision methods with the Haar cascade algorithm; the Canny edge detection algorithm for shape identification.
Improvement: A robotic arm picks and places objects based on colour and height. In the proposed system, colour and height sorting efficiency is around 99%, demonstrating the effectiveness, high accuracy, and low cost of computer vision with a robotic arm for sorting by colour and shape.
Ref.: [15][30][31]

Objective: A novel multimodal convolutional neural network for RGB-D object detection.
Technology: A solid waste sorting system consisting of a server, vision sensors, an industrial robot, and a rotational speedometer.
Approach: Comparison with single-modal methods; evaluation on the Washington RGB-D object recognition benchmark.
Improvement: Meets real-time requirements while ensuring high precision: 49.1% mean average precision, processing images in real time at 35.3 FPS on a single Nvidia GTX 1080 GPU. A novel dataset was also contributed.
Ref.: [17][32]

Objective: Practicality and feasibility of a faster R-CNN model using a dataset containing images of symmetric objects.
Technology: Five-DoF robot arm "OWI Robotic Arm Edge".
Approach: A CNN learning algorithm that processes images with multiple layers (filters) and classifies objects in images; a region proposal network (RPN).
Improvement: Accuracy and precision are steadily enhanced. The accuracy of detecting defective and non-defective objects is successfully improved by increasing the training dataset to up to 400 images of defective and non-defective objects.
Ref.: [16][33][34]

Objective: An automatic sorting robot that uses height maps and near-infrared (NIR) hyperspectral images to locate objects' ROIs and conduct online statistical pixel-based classification within contours; 24/7 monitoring.
Technology: A robotic system with four modules: (1) the main conveyor, (2) a detection module, (3) a light source module, and (4) a manipulator.
Approach: Mask R-CNN and YOLOv3 algorithms; identification methods including pixel-, sub-pixel-, and object-based approaches.
Improvement: The prototype machine can automatically sort construction and demolition waste with a size range of 0.05–0.5 m. The sorting efficiency can reach 2028 picks/h, and the online recognition accuracy nearly reaches 100%. The approach can also be applied in land-monitoring technology.
Ref.: [18][35][36]

Objective: Overcoming current limitations of existing robotic solutions for picking objects in cluttered environments.
Technology: Intelligent autonomous robots for picking different kinds of objects; a universal jamming gripper.
Approach: A comparative study of the algorithmic performance of the proposed method.
Improvement: When a corner is detected, it takes just 0.003 s to output the target point. With lines, the required time depends on the object configuration, ranging from 0.02 s, when objects have almost the same depth, to 0.06 s in the worst-case scenario.
Ref.: [20][37][38][39]
A few main trends can be highlighted from the research analysis related to object recognition in robotics. These can be defined as object recognition for localisation and further manipulation; object recognition for shape evaluation and automatic generation of the robot program code for the corresponding robot movement; and object recognition for behaviour analysis to use as initial data for machine learning algorithms. A large number of reliable solutions have been tested in the industrial environment for the first trend, in contrast to the second and third cases, which are currently being developed.

3. Medical Applications

The da Vinci Surgical System is the best-known robotic manipulator used in surgical applications. Florian Richter et al. [40] used the Patient Side Manipulator (PSM) arm to implement reinforcement learning algorithms for surgical da Vinci robots, presenting the first open-source reinforcement learning environment for surgical robots, called dVRL [40]. This environment allows fast training of da Vinci robots for autonomous assistance and for collaborative or repetitive tasks during surgery. In the experiments, a control policy was effectively learned in dVRL and transferred to a real robot with minimal effort. Although the proposed environment covers only the simple, primitive actions of reaching and picking, these proved useful for suction and debris removal in a real surgical setting.
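Because dVRL exposes its surgical tasks through the OpenAI Gym interface [40], interacting with it follows the standard Gym loop sketched below. The environment id and the random placeholder policy are assumptions for illustration, not the trained DDPG+HER agent used in the paper.

```python
import gym  # dVRL exposes its tasks through the (classic) OpenAI Gym API

# The environment id below is an assumption for illustration.
env = gym.make("dVRLReach-v0")

obs = env.reset()
episode_return = 0.0
for _ in range(200):
    action = env.action_space.sample()  # random placeholder policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
    if done:
        break
print("episode return:", episode_return)
```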
Meanwhile, Yohannes Kassahun et al. reviewed the role of machine learning techniques in surgery, focusing on surgical robotics [41]. They found that the research community currently faces many challenges in applying machine learning to surgery and robotic surgery. The main issues are a lack of high-quality medical and surgical data, a lack of reliable metrics that adequately reflect learning characteristics, and a lack of a structured approach to the effective transfer of surgical skills for automated execution [41]. Nevertheless, the application of deep learning in robotics is a widely studied field: a 2017 review by Harry A. Pierson et al. emphasises its benefits and challenges for robotics [42]. Similarly to [41], they found that the main limitations preventing deep learning in medical robotics are the huge volume of training data required and the relatively long training time.
Surgery is not the only field of medicine in which robotic manipulators can be used. Another autonomous robotic grasping system, described by John E. Downey et al., introduces shared control of a robotic arm based on the interaction of a brain–machine interface (BMI) and a vision-based guidance system [43]. The BMI is used to infer the user's intent to grasp or transfer an object, while visual guidance handles low-level control tasks: short-range movements, definition of the optimal grasping position, alignment of the robot end effector, and grasping. Experiments showed that shared-control movements were more accurate and efficient, and less difficult, than transfer tasks performed with the BMI alone.
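A common way to formalise such shared control is a linear blend of the user's decoded command and the autonomous controller's command. The sketch below shows this generic arbitration scheme; the velocity values and the weighting factor alpha are chosen purely for illustration, and the exact blending used in [43] may differ.

```python
import numpy as np

def shared_control(user_cmd, auto_cmd, alpha):
    """Blend a BMI-decoded command with the autonomous controller's
    command; alpha in [0, 1] sets how much authority the user keeps."""
    return alpha * np.asarray(user_cmd) + (1.0 - alpha) * np.asarray(auto_cmd)

# Hypothetical 3D end-effector velocity commands (m/s).
user = [0.10, 0.00, -0.02]   # decoded from the BMI
auto = [0.06, 0.01, -0.05]   # vision-guided approach toward the grasp pose
print(shared_control(user, auto, alpha=0.7))
```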
Another case that requires fast robot programming methods and is implemented in medicine is the assessment of functional abilities in functional capacity evaluations (FCEs) [44]. Currently, there is no single rational solution that simulates all or many of the standard work tasks that can be used to improve the assessment and rehabilitation of injured workers. Therefore, the authors proposed that, with the use of the robotic system and machine learning algorithms, it is possible to simulate workplace tasks. Such a system can improve the assessment of functional abilities in FCEs and functional rehabilitation by performing reaching manoeuvres or more complex tasks learned from an experienced therapist. Although this type of research is still in its infancy, robotics with integrated machine learning algorithms can improve the assessment of functional abilities [44].
Although the main task of robotic manipulators is the direct manipulation of objects or tools in medicine, these manipulators can also be used for therapeutic purposes for people with mental or physical disorders. Such applications are often limited by the ability to automatically perceive and respond as needed to maintain an engaging interaction. Ognjen Rudovic et al. presented a personalised deep learning framework that can adapt robot perception [45]. The researchers in the experiment focused on robot perception, for which they developed an individualised deep learning system that could automatically assess a patient’s emotional states and level of engagement. This makes it easier to monitor treatment progress and optimise the interaction between the patient and the robot.
Robotic technologies can also be applied in dentistry. To date, there has been a lack of implementation of fundamental ideas. In a comprehensive review of robotics and the application of artificial intelligence, Jasmin Grischke et al. present numerous approaches to apply these technologies [46]. Robotic technologies in dentistry can be used for maxillofacial surgery [47], tooth preparation [48], testing of toothbrushes [49], root canal treatment and plaque removal [50], orthodontics and jaw movement [51], tooth arrangement for full dentures [52], X-ray imaging radiography [53], swab sampling [54], etc.
A summary of research focused on robotics in medical applications is provided in Table 4. It can be seen that robots are still not very popular in this area, which can be explained by technological and psychological/ethical factors. From the technical point of view, wider implementation is limited by the lack of fast and reliable robot program preparation methods. Regarding psychological and ethical factors, a large portion of society still does not trust robots, so they are accepted only with significant hesitation.
Table 4. Robotic solutions in medical applications.
Objective: To create a bridge between the reinforcement learning and surgical robotics communities by presenting the first open-sourced reinforcement learning environments for surgical da Vinci robots.
Technology: Patient Side Manipulator (PSM) arm; da Vinci surgical robot; large needle driver (LND) with a jaw gripper to grab objects such as a suturing needle.
Approach: Reinforcement learning; OpenAI Gym; DDPG (deep deterministic policy gradients) and HER (hindsight experience replay); V-REP physics simulator.
Improvement: A new reinforcement learning environment for fast and effective training of surgical da Vinci robots for autonomous operations.
Ref.: [40]

Objective: A method of shared control in which the user controls a prosthetic arm through a brain–machine interface and receives assistance with positioning the hand as it approaches an object.
Technology: Brain–machine interface system; robotic arm; RGB-D camera mounted above the arm base.
Approach: Shared-control system; an autonomous robotic grasping system.
Improvement: A shared-control system for a robotic manipulator that makes control more accurate, more efficient, and less difficult than control through the BMI alone.
Ref.: [43]

Objective: A personalised deep learning framework that can adapt robot perception of children's affective states and engagement to different cultures and individuals.
Technology: Unobtrusive audiovisual sensors and wearable sensors providing heart rate, skin conductance (EDA), body temperature, and accelerometer data.
Approach: Feed-forward multilayer neural networks; GPA-net.
Improvement: Achieved an average agreement of ~60% with human experts when estimating affect and engagement.
Ref.: [45]

Objective: Overviews of existing applications and concepts of robotic systems and artificial intelligence in dentistry; of functional capacity evaluations; of the role of machine learning in surgery using surgical robotics; and of deep learning vis-à-vis physical robotic systems, focused on contemporary research.
Technology/Approach/Improvement: Overview.
Ref.: [41][42][44][46]

Objective: A transoral robot for COVID-19 swab sampling.
Technology: Flexible manipulator; an endoscope with a monitor; a master device.
Approach: Teleoperated configuration for swab sampling.
Improvement: A flexible transoral robot with a teleoperated configuration is proposed to address surgeons' risks during face-to-face COVID-19 swab sampling.
Ref.: [54]

4. Path Planning, Path Optimisation

Robotic navigation aims to achieve accurate positioning while avoiding obstacles in the pathway. It must satisfy constraints such as limited operating space, distance, energy, and time [55]. The trajectory formation process consists of four separate modules: perception, in which the robot receives the necessary information from its sensors; localisation, in which the robot determines its position in the environment; path planning; and motion control [56]. The development of autonomous robot path planning and path optimisation algorithms is one of the most challenging current research areas. Nevertheless, any kind of path planning requires information about the initial robot position. For stationary robots, such information is usually easily accessible, in contrast to industrial manipulators mounted on mobile platforms. For mobile robots and automatically guided vehicles (AGVs), accurate self-localisation in various environments [57][58] is the basis for further trajectory planning and optimisation.
According to the amount of available information, robot path planning can be divided into two categories: local and global. With a local path planning strategy, the robot has only limited knowledge of the navigation environment. In global path planning, the robot has in-depth knowledge of the environment and reaches its destination by following a predetermined path. Robotic path planning methods have been applied in many fields, such as reconstructive surgery, ocean and space exploration, and vehicle control. For industrial robots, path planning refers to finding the best trajectory to transfer a tool or object to its destination within the robot workspace. It is essential to note that typical industrial robot controllers do not support real-time path planning; trajectories are usually prepared in advance using online or offline programming methods. One possible technique is specialised commercial computer-aided manufacturing (CAM) software such as Mastercam/Robotmaster or SprutCAM. However, the functionality of such software is relatively constrained and does not go beyond classical tasks such as welding or milling; it also requires highly qualified professionals, so applying it to individual installations is economically disadvantageous. As an alternative to CAM software, methods based on copying the movements of highly skilled specialists may be used with commercially available equipment such as MIMIC from Nordbo Robotics (Antvorskov, Denmark); this platform teaches robots smooth, complex paths from demonstrations by recording the required movements, which are then smoothed and optimised. To overcome the lack of real-time path planning features in robot controllers, additional external controllers and real-time communication with the manipulator are required. In the area of path planning and optimisation, experiments have been conducted on automatic object and 3D position detection [59], quasi-static path optimisation [60], image analysis [61], path smoothing [62], BIM-based planning [63], and accurate self-localisation in harsh industrial environments [57][58].
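As a concrete example of global path planning on a fully known map (a textbook method, not one drawn from the cited works), the following sketch implements A* search over a small 4-connected occupancy grid; a real industrial setting would typically plan in the robot's configuration space instead.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle).
    Returns a list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                tentative = g[cur] + 1
                if tentative < g.get(nxt, float("inf")):
                    came_from[nxt], g[nxt] = cur, tentative
                    h = abs(nx - goal[0]) + abs(ny - goal[1])  # Manhattan
                    heapq.heappush(open_set, (tentative + h, nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```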

5. Food Industry

As the world’s population grows, the demand for food also continues to grow. Food suppliers are under pressure to work more efficiently, and consumers want more convenient and sustainable food. Robotics and automation are a key part of the solution to this goal. The food production sector has been relatively slowly robotised compared to other industries [64]. Robotics is applied in food manufacture, packaging, delivery, and cookery (cake decoration) [65]. Although the food industry is ranked fourth in terms of the most-automated sectors, robotic devices capable of processing nutrients of different shapes and materials are in high demand. In addition, these devices help to avoid consequences such as food-borne illness caused directly by the contamination of nutrients by nutrient handlers [66]. For this purpose, a dual-mode soft gripper was developed that can grasp and suck various objects having a weight of up to 1 kg. Soft grippers prevent damage to food [67].
Artificial intelligence-enabled robotic applications are entering the restaurant industry in food processing and guest-service operations. In a review assessing the potential for process innovation in the restaurant sector, an information process for the use of new technologies in process innovation was developed [68]. The past year, particularly due to the circumstances of COVID-19, has been a breakthrough year for robotisation in the food industry.

6. Agricultural Applications

Agricultural robots are a specialised type of technology capable of assisting farmers with a wide range of operations. Their primary role is to tackle labour-intensive, repetitive, and physically demanding tasks. Robots are used in planting, seedling identification, and sorting. Autonomous tractors perform weeding and harvesting. Drones and autonomous ground vehicles are used for crop monitoring and condition assessment. In animal husbandry, robots are used for feeding cattle, milking, collecting and sorting eggs, and autonomous cleaning of pens. Cobots are also used in agriculture: these robots possess mechanical arms and make harvesting much easier for farmers. The agricultural robot market is expected to reach USD 16,640.4 million by 2026; however, specialised robots, rather than conventional industrial robots, will occupy the majority of this market.

7. Civil Engineering Industry

In general, the construction industry is relatively inefficient from the perspective of automation, and robotics is seldom applied [69]. The main challenges identified for greater adoption of robotics in construction fall into four categories: contractor-side economic factors; client-side economic factors; technical and work-culture factors; and weak business case factors. Technical and work-culture factors include an untrained workforce, unproven effectiveness, immature technology, and the current work culture with its aversion to change [70].
The outlook for robotics in civil engineering is significantly better. Here, robotics provides considerable opportunities to increase productivity, efficiency, and flexibility, from automated modular house production to robotic welding, material handling on construction sites, and 3D printing of houses or individual structures. Robots make the industry safer and more economical, increase sustainability, and reduce its environmental impact while improving quality and reducing waste. The total global value of the construction industry is forecast to grow by 85% to USD 15.5 trillion by 2030 [71]. Robots can make construction safer by handling large and heavy loads, working in hazardous locations, and enabling new, safer construction methods. Transferring repetitive and dangerous tasks that humans are increasingly reluctant to perform to robots means that automation can help address the labour and skills crisis and make the construction industry more attractive [72][73]. Few classic robots are currently used in the construction process because of its dynamic and imprecisely described environment; however, work on 3D models of buildings and their environments is reducing this limitation.

References

  1. Tannous, M.; Miraglia, M.; Inglese, F.; Giorgini, L.; Ricciardi, F.; Pelliccia, R.; Milazzo, M.; Stefanini, C. Haptic-based touch detection for collaborative robots in welding applications. Robot. Comput. Integr. Manuf. 2020, 64, 101952.
  2. Tannous, M.; Bologna, F.; Stefanini, C. Load cell torques and force data collection during tele-operated robotic gas tungsten arc welding in presence of collisions. Data Br. 2020, 31, 105981.
  3. Knudsen, M.; Kaivo-oja, J. Collaborative Robots: Frontiers of Current Literature. J. Intell. Syst. Theory Appl. 2020, 3, 13–20.
  4. Ghosh, A.; Soto, D.A.P.; Veres, S.M.; Rossiter, A. Human robot interaction for future remote manipulations in industry 4.0. Proc. IFAC-Pap. 2020, 53, 10223–10228.
  5. Ghosh, A.; Veres, S.M.; Paredes-Soto, D.; Clarke, J.E.; Rossiter, J.A. Intuitive programming with remotely instructed robots inside future gloveboxes. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 209–211.
  6. Weidemann, A.; Rußwinkel, N. The Role of Frustration in Human–Robot Interaction—What Is Needed for a Successful Collaboration? Front. Psychol. 2021, 12, 707.
  7. Spezialetti, M.; Placidi, G.; Rossi, S. Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives. Front. Robot. AI 2020, 7, 532279.
  8. Ge, S.; Wang, P.; Liu, H.; Lin, P.; Gao, J.; Wang, R.; Iramina, K.; Zhang, Q.; Zheng, W. Neural Activity and Decoding of Action Observation Using Combined EEG and fNIRS Measurement. Front. Hum. Neurosci. 2019, 13, 357.
  9. Mavridis, N. A review of verbal and non-verbal human–robot interactive communication. Robot. Auton. Syst. 2015, 63, 22–35.
  10. Dzedzickis, A.; Kaklauskas, A.; Bucinskas, V. Human emotion recognition: Review of sensors and methods. Sensors 2020, 20, 592.
  11. Shubha, P. A review of multi-object recognition based on deep learning. Int. J. Eng. Technol. Res. Manag. 2020, 2, 27–33.
  12. De Coninck, E.; Verbelen, T.; Van Molle, P.; Simoens, P.; Dhoedt, B. Learning robots to grasp by demonstration. Robot. Auton. Syst. 2020, 127, 103474.
  13. Fujita, M.; Domae, Y.; Noda, A.; Garcia Ricardez, G.A.; Nagatani, T.; Zeng, A.; Song, S.; Rodriguez, A.; Causo, A.; Chen, I.M.; et al. What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics. Adv. Robot. 2020, 34, 560–574.
  14. Sughashini, K.R.; Sunanthini, V.; Johnsi, J.; Nagalakshmi, R.; Sudha, R. A pneumatic robot arm for sorting of objects with chromatic sensor module. Mater. Today Proc. 2021, 45, 6364–6368.
  15. Shaikat, A.S.; Akter, S.; Salma, U. Computer Vision Based Industrial Robotic Arm for Sorting Objects by Color and Height. J. Eng. Adv. 2020, 1, 116–122.
  16. Chen, P.; Elangovan, V. Object Sorting using Faster R-CNN. Int. J. Artif. Intell. Appl. 2020, 11, 27–36.
  17. Yu, Y.; Zou, S.; Yin, K. A novel detection fusion network for solid waste sorting. Int. J. Adv. Robot. Syst. 2020, 17, 172988142094177.
  18. Xiao, W.; Yang, J.; Fang, H.; Zhuang, J.; Ku, Y.; Zhang, X. Development of an automatic sorting robot for construction and demolition waste. Clean Technol. Environ. Policy 2020, 22, 1829–1841.
  19. Li, M.; Duan, Y.; He, X.; Yang, M. Image positioning and identification method and system for coal and gangue sorting robot. Int. J. Coal Prep. Util. 2020, 1–19.
  20. D’Avella, S.; Tripicchio, P.; Avizzano, C.A. A study on picking objects in cluttered environments: Exploiting depth features for a custom low-cost universal jamming gripper. Robot. Comput. Integr. Manuf. 2020, 63, 101888.
  21. Ciszak, O.; Juszkiewicz, J.; Suszyński, M. Programming of Industrial Robots Using the Recognition of Geometric Signs in Flexible Welding Process. Symmetry 2020, 12, 1429.
  22. Minoura, H.; Yonetani, R.; Nishimura, M.; Ushiku, Y. Crowd Density Forecasting by Modeling Patch-Based Dynamics. IEEE Robot. Autom. Lett. 2021, 6, 287–294.
  23. De Coninck, E.; Verbelen, T.; Van Molle, P.; Simoens, P.; Idlab, B.D. Learning to Grasp Arbitrary Household Objects from a Single Demonstration. IEEE Int. Conf. Intell. Robot. Syst. 2019, 2372–2377.
  24. Kaya, O.; Tağlıoğlu, G.B.; Ertuğrul, Ş. The Series Elastic Gripper Design, Object Detection, and Recognition by Touch. J. Mech. Robot. 2022, 14, 014501.
  25. Kulkarni, R.G. Robot Path Planning with Sensor Feedback for Industrial Applications; Wichita State University: Wichita, KS, USA, 2021.
  26. Abdalrahman, M.; Brice, A.; Hanson, L. New Era of Automation in Scania’ s Manufacturing Systems—A Method to Automate a Manual Assembly Process; Libraries at Lund University: Lund, Sweden, 2021.
  27. Thike, A.; Moe San, Z.Z.; Min Oo, D.Z. Design and Development of an Automatic Color Sorting Machine on Belt Conveyor. Int. J. Sci. Eng. Appl. 2019, 8, 176–179.
  28. Wang, Z.; Xie, S.; Chen, G.; Chi, W.; Ding, Z.; Wang, P. An Online Flexible Sorting Model for Coal and Gangue Based on Multi-Information Fusion. IEEE Access 2021, 9, 90816–90827.
  29. Sun, Z.; Huang, L.; Jia, R. Coal and gangue separating robot system based on computer vision. Sensors 2021, 21, 1349.
  30. Fadhil, A.T.; Abbar, K.A.; Qusay, A.M. Computer Vision-Based System for Classification and Sorting Color Objects. IOP Conf. Ser. Mater. Sci. Eng. 2020, 745, 012030.
  31. Peršak, T.; Viltužnik, B.; Hernavs, J.; Klancnik, S. Vision-Based Sorting Systems for Transparent Plastic Granulate. Appl. Sci. 2020, 10, 4269.
  32. Sun, L.; Zhao, C.; Yan, Z.; Liu, P.; Duckett, T.; Stolkin, R. A novel weakly-supervised approach for RGB-D-based nuclear waste object detection. IEEE Sens. J. 2019, 19, 3487–3500.
  33. Albinali, H.; Alzahrani, F.A. Faster R-CNN for detecting regions in human-annotated micrograph images. In Proceedings of the 2021 International Conference of Women in Data Science at Taif University (WiDSTaif), Taif, Saudi Arabia, 30–31 March 2021.
  34. Li, S.; Zhao, X.; Li, W. Analysis of Object Detection Performance Based on Faster R-CNN. J. Phys. Conf. Ser. 2021, 1827, 012085.
  35. Cipta Ramadhan Kete, S.; Darma Tarigan, S.; Effendi, H. Land use classification based on object and pixel using Landsat 8 OLI in Kendari City, Southeast Sulawesi Province, Indonesia. IOP Conf. Ser. Earth Environ. Sci. 2019, 284, 012019.
  36. Hespeler, S.C.; Nemati, H.; Dehghan-Niri, E. Non-destructive thermal imaging for object detection via advanced deep learning for robotic inspection and harvesting of chili peppers. Artif. Intell. Agric. 2021, 5, 102–117.
  37. Birglen, L.; Schlicht, T. A statistical review of industrial robotic grippers. Robot. Comput. Integr. Manuf. 2018, 49, 88–97.
  38. Shim, M.; Kim, J.H. Design and optimization of a robotic gripper for the FEM assembly process of vehicles. Mech. Mach. Theory 2018, 129, 1–16.
  39. Linghu, C.; Zhang, S.; Wang, C.; Yu, K.; Li, C.; Zeng, Y.; Zhu, H.; Jin, X.; You, Z.; Song, J. Universal SMP gripper with massive and selective capabilities for multiscaled, arbitrarily shaped objects. Sci. Adv. 2020, 6, eaay5120.
  40. Richter, F.; Orosco, R.K.; Yip, M.C. Open-Sourced Reinforcement Learning Environments for Surgical Robotics. arXiv 2019, arXiv:1903.02090.
  41. Kassahun, Y.; Yu, B.; Tibebu, A.T.; Stoyanov, D.; Giannarou, S.; Metzen, J.H.; Vander Poorten, E. Surgical robotics beyond enhanced dexterity instrumentation: A survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 553–568.
  42. Pierson, H.A.; Gashler, M.S. Deep learning in robotics: A review of recent research. Adv. Robot. 2017, 31, 821–835.
  43. Downey, J.E.; Weiss, J.M.; Muelling, K.; Venkatraman, A.; Valois, J.S.; Hebert, M.; Bagnell, J.A.; Schwartz, A.B.; Collinger, J.L. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping. J. Neuroeng. Rehabil. 2016, 13, 28.
  44. Fong, J.; Ocampo, R.; Gross, D.P.; Tavakoli, M. Intelligent Robotics Incorporating Machine Learning Algorithms for Improving Functional Capacity Evaluation and Occupational Rehabilitation. J. Occup. Rehabil. 2020, 30, 362–370.
  45. Rudovic, O.; Lee, J.; Dai, M.; Schuller, B.; Picard, R.W. Personalized machine learning for robot perception of affect and engagement in autism therapy. Sci. Robot. 2018, 3, eaao6760.
  46. Grischke, J.; Johannsmeier, L.; Eich, L.; Griga, L.; Haddadin, S. Dentronics: Towards robotics and artificial intelligence in dentistry. Dent. Mater. 2020, 36, 765–778.
  47. Ma, Q.; Kobayashi, E.; Wang, J.; Hara, K.; Suenaga, H.; Sakuma, I.; Masamune, K. Development and preliminary evaluation of an autonomous surgical system for oral and maxillofacial surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, e1997.
  48. Otani, T.; Raigrodski, A.J.; Mancl, L.; Kanuma, I.; Rosen, J. In vitro evaluation of accuracy and precision of automated robotic tooth preparation system for porcelain laminate veneers. J. Prosthet. Dent. 2015, 114, 229–235.
  49. Lang, T.; Staufer, S.; Jennes, B.; Gaengler, P. Clinical validation of robot simulation of toothbrushing—Comparative plaque removal efficacy. BMC Oral Health 2014, 14, 82.
  50. Nelson, C.A.; Hossain, S.G.M.; Al-Okaily, A.; Ong, J. A novel vending machine for supplying root canal tools during surgery. J. Med. Eng. Technol. 2012, 36, 102–116.
  51. Lepidi, L.; Chen, Z.; Ravida, A.; Lan, T.; Wang, H.L.; Li, J. A Full-Digital Technique to Mount a Maxillary Arch Scan on a Virtual Articulator. J. Prosthodont. 2019, 28, 335–338.
  52. Zhang, Y.; De Jiang, J.G.; Liang, T.; Hu, W.P. Kinematics modeling and experimentation of the multi-manipulator tooth-arrangement robot for full denture manufacturing. J. Med. Syst. 2011, 35, 1421–1429.
  53. Spin-Neto, R.; Mudrak, J.; Matzen, L.H.; Christensen, J.; Gotfredsen, E.; Wenzel, A. Cone beam CT image artefacts related to head motion simulated by a robot skull: Visual characteristics and impact on image quality. Dentomaxillofacial Radiol. 2013, 42, 32310645.
  54. Li, C.; Gu, X.; Xiao, X.; Lim, C.M.; Duan, X.; Ren, H. A Flexible Transoral Robot Towards COVID-19 Swab Sampling. Front. Robot. AI 2021, 8, 51.
  55. Jose, K.; Pratihar, D.K. Task allocation and collision-free path planning of centralized multi-robots system for industrial plant inspection using heuristic methods. Rob. Auton. Syst. 2016, 80, 34–42.
  56. Das, P.K.; Jena, P.K. Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. J. 2020, 92, 106312.
  57. Fascista, A.; Coluccia, A.; Ricci, G. A Pseudo Maximum likelihood approach to position estimation in dynamic multipath environments. Signal Processing 2021, 181, 107907.
  58. Karaagac, A.; Haxhibeqiri, J.; Ridolfi, M.; Joseph, W.; Moerman, I.; Hoebeke, J. Evaluation of accurate indoor localization systems in industrial environments. In Proceedings of the 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017; pp. 1–8.
  59. Makomo, T.J.; Erin, K.; Boru, B. Real Time Application for Automatic Object and 3D Position Detection and Sorting with Robotic Manipulator. Sak. Univ. J. Sci. 2020, 24, 703–711.
  60. Hermansson, T.; Carlson, J.S.; Linn, J.; Kressin, J. Quasi-static path optimization for industrial robots with dress packs. Robot. Comput. Integr. Manuf. 2021, 68, 102055.
  61. Nguyen, V.; Melkote, S. Hybrid statistical modelling of the frequency response function of industrial robots. Robot. Comput. Integr. Manuf. 2021, 70, 102134.
  62. Jiao, J.; Tian, W.; Zhang, L.; Li, B.; Hu, J.; Li, Y.; Li, D.; Zhang, J. Variable stiffness identification and configuration optimization of industrial robots for machining tasks. Res. Sq. 2020.
  63. Ding, L.; Jiang, W.; Zhou, Y.; Zhou, C.; Liu, S. BIM-based task-level planning for robotic brick assembly through image-based 3D modeling. Adv. Eng. Inform. 2020, 43, 100993.
  64. Bader, F.; Rahimifard, S. Challenges for industrial robot applications in food manufacturing. In Proceedings of the 2nd International Symposium on Computer Science and Intelligent Control, Stockholm, Sweden, 21–23 September 2018.
  65. Grobbelaar, W.; Verma, A.; Shukla, V.K. Analyzing human robotic interaction in the food industry. J. Phys. Conf. Ser. 2021, 1714, 012032.
  66. Sandey, K.K.; Qureshi, M.A.; Meshram, B.D.; Agrawal, A.; Uprit, S. Robotics—An Emerging Technology in Dairy Industry. Int. J. Eng. Trends Technol. 2017, 43, 58–62.
  67. Wang, Z.; Or, K.; Hirai, S. A dual-mode soft gripper for food packaging. Rob. Auton. Syst. 2020, 125, 103427.
  68. Blöcher, K.; Alt, R. AI and robotics in the European restaurant sector: Assessing potentials for process innovation in a high-contact service industry. Electron. Mark. 2020, 31, 529–551.
  69. Tankova, T.; da Silva, L.S. Robotics and Additive Manufacturing in the Construction Industry. Curr. Robot. Rep. 2020, 1, 13–18.
  70. Davila Delgado, J.M.; Oyedele, L.; Ajayi, A.; Akanbi, L.; Akinade, O.; Bilal, M.; Owolabi, H. Robotics and automated systems in construction: Understanding industry-specific challenges for adoption. J. Build. Eng. 2019, 26, 100868.
  71. Robinson, G. Global Construction Market to Grow $8 Trillion by 2030: Driven by China, US and India; Global Construction Perspectives and Oxford Economics: London, UK, 2016; Volume 44, pp. 1–3.
  72. Aparicio, C.C.; Balzan, A.; Trabucco, D. Robotics in construction: Framework and future directions. Int. J. High-Rise Build. 2020, 9, 105–111.
  73. Follini, C.; Magnago, V.; Freitag, K.; Terzer, M.; Marcher, C.; Riedl, M.; Giusti, A.; Matt, D.T. Bim-integrated collaborative robotics for application in building construction and maintenance. Robotics 2021, 10, 2.