Machine Learning Algorithm for Service Robots in Industry
Nizamettin Kulaç and Mustafa Engin

Gonzalez-Aguirre et al. define robots in three classes, with a focus on human–robot interaction. In the first class, the robot completely replaces the human worker in a dangerous or polluted environment that is unsuitable for human work.

Keywords: autonomous mobile robot navigation; dynamic environments; vision-based simultaneous localization and mapping

1. Introduction

The application areas of autonomous mobile robots have been expanding rapidly in recent years, and these robots now affect a large part of human life. Robots, which have mostly been used in industrial, agricultural, and production facilities, have started to take a place in the service sector as their technologies have become cheaper and more easily accessible. This situation has attracted the attention of companies and researchers, accelerating studies on the subject. In the future, autonomous robots will form a large part of our lives [1][2].
Technological developments that increase the level of digitalization in production facilities and warehouse automation have accelerated with the emergence of the concept of Industry 4.0 [3]. This process has made it necessary to manage industrial robots, mobile robotic systems, sensor networks, and all production tools in warehouses or factories, even the products in the warehouses themselves, from a central server [4]. Several problems in this area still await solutions, such as the localization of mobile robots; the handling of raw materials, intermediate products, and end products in the right places for the appropriate tasks; the coordination of the entire system; and the management of security and alarm events. It is impractical to fully automate production and distribution processes at all times, and interaction with people remains another open problem to be solved.

2. Developing a Machine Learning Algorithm for Service Robots in Industrial Applications

Gonzalez-Aguirre et al. define robots in three classes, with a focus on human–robot interaction [5]. In the first class, the robot completely replaces the human worker in a dangerous or polluted environment that is unsuitable for human work. In the second, the robot works closely with a human to increase comfort or minimize discomfort. In the third, the robot works as a part of the human body. For the first and third classes, using the robot is not an alternative; it is a necessity, and its use is quite common [6]. For the second class there is no such obligation, but the use of service robots has been increasing rapidly in recent years, especially after the pandemic. Service robots, in contrast to the robots of the other two classes, do not work in an area reserved for robots. For this reason, to work alongside people, they need to learn the environment and use this information to perform the given tasks [7][8]. Service robots are defined as a type of autonomous robot that assists humans by performing repeatable tasks. A service robot working in a storage area is a good example of repetitive work: the storage of products in factory warehouses, their transport from one place to another with forklifts and similar vehicles, and their placement on shelves are usually done by human operators [9]. This process is repeated many times for each order a company receives. These processes require large numbers of employees, take significant time, and their effectiveness cannot be guaranteed [10].
The selection of material handling elements is very important in the manufacturing process. Selecting unsuitable material handling equipment disrupts the production process time and increases material handling costs [11]. Five types of material handling equipment are commonly used in manufacturing: industrial forklifts, automated guided vehicles, conveyors, cranes, and hoists. Automated guided vehicles themselves come in five types: tow trucks, pallet trucks, forklifts, light-duty carriers, and assembly line vehicles [12]. Industrial automation is a relatively new but rapidly advancing technology, and there is no single solution for automation; there are diverse options. Within robotics, special attention is paid to mobile robots, as they can navigate their environment and are not fixed in a single physical location. The range of tasks they can accomplish, and the ways they can contribute to our lives, are vast [13].
In a study by Mulyana et al., Festo's Robotino robot was assigned the task of transporting materials to the right places as material handling equipment [14]. A gripper was fitted to Robotino for transportation. In the study, yellow, blue, and red materials arrive from three different conveyors, and the task of the service robot is to take each incoming material to one of three production stations according to its color. Inductive lines are used as a guide for the navigation system, so Robotino can transport materials along predetermined lines. Robotino determined the color of each material by means of a webcam, then activated the gripper and grasped it. After confirming, with its distance sensors, that it was in front of the correct production station, it released the material at that station. Robotino checked whether there was material on any of the conveyors; if there was, it repeated these steps according to the material's color.
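As a rough illustration of the color-based sorting step, the sketch below classifies a webcam frame as yellow, blue, or red using HSV thresholds and maps the result to a station number. It is not Festo's Robotino API; the color ranges, pixel-count cut-off, and station mapping are assumptions for illustration only.

```python
# Sketch of the color-sorting decision (hypothetical, not the Robotino API):
# classify a material as yellow, blue, or red from a webcam frame using HSV
# thresholds, then report the target production station.
import cv2
import numpy as np

HSV_RANGES = {                 # assumed (lower, upper) HSV bounds per color
    "yellow": ((20, 100, 100), (35, 255, 255)),
    "blue":   ((100, 100, 100), (130, 255, 255)),
    "red":    ((0, 100, 100), (10, 255, 255)),
}
STATION_FOR_COLOR = {"yellow": 1, "blue": 2, "red": 3}   # hypothetical mapping

def classify_material(frame_bgr):
    """Return the dominant material color in the frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    best_color, best_pixels = None, 0
    for color, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        pixels = cv2.countNonZero(mask)
        if pixels > best_pixels:
            best_color, best_pixels = color, pixels
    return best_color if best_pixels > 500 else None     # assumed minimum area

cap = cv2.VideoCapture(0)                                # webcam, as in the study
ok, frame = cap.read()
cap.release()
if ok and (color := classify_material(frame)) is not None:
    print(f"Material color: {color} -> production station {STATION_FOR_COLOR[color]}")
    # The real robot would now close the gripper and follow the inductive
    # guide line to the matching station, confirming arrival with its
    # distance sensors before releasing the material.
```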
Another study investigated robots providing home security. The project addresses the issue of anti-theft using modern methods, such as an autonomous mobile security robot. The main goal of the research is the development of autonomous navigation techniques that allow the robot to randomly follow a complex route and detect suspicious situations using onboard image processing. Robotino was used for this study. The mapping program is a reliable tool for showing navigation efficiency and the paths of previous navigation runs. The researchers used odometry for mapping; with this mechanism, all detections were motion based, which is what the robot needed as an indoor security robot [15].
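A minimal sketch of the motion-based detection idea is frame differencing between consecutive camera frames, as below. The threshold and pixel-count sensitivity are illustrative assumptions and are not taken from the cited study.

```python
# Sketch of motion-based detection via frame differencing (illustrative
# thresholds, not the settings of the cited study).
import cv2

cap = cv2.VideoCapture(0)                      # onboard camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

for _ in range(300):                           # patrol for a fixed number of frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)        # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 2000:          # assumed sensitivity threshold
        print("Suspicious motion detected")    # a real robot would log or raise an alert
    prev_gray = gray

cap.release()
```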
Anıl Akkız worked on the design and control of a mobile autonomous library robot [16]. There is a constant exchange of books in university libraries: students or teachers need to look up the code of a book on a website, find the shelf containing that code, and finally locate the book on the shelf to borrow it. The robot designed in the study is intended to help users access the book they are looking for comfortably and to save them time by facilitating this access. In addition, it aims to place books on shelves while eliminating human errors. The autonomous library robot design consists of two main parts: a mobile platform that enables the movement of the robot, and a robotic arm that performs the detection and retrieval of books. The study demonstrates the need for service robots designed for this and similar purposes, along with the analysis, project management, and production steps required to build robots for specific tasks. The robot produced as a result of the study was able to move itself to the target position, avoid dynamic objects, detect the desired book, and deliver it to the end user.
Rothomphiwat et al. developed a factory model controlled by robots [17]. An autonomous drone, an autonomous mobile manipulator robot, and an autonomous dual-armed robot were used, and the researchers set up a 6.5 m × 8.5 m factory environment for these collaborative robots to work in. The robots are designed to cooperate to move an object from one place to a shelf or storage area, under the assumption that part of the area is out of the mobile robot's reach; the autonomous drone was used to move the object to a place within the mobile robot's reach. The platform was demonstrated using message queuing telemetry transport (MQTT), a data communication protocol commonly used for the Internet of Things (IoT), together with the Robot Operating System (ROS). In the demonstration, the drone flew to deliver an object to the target location and then returned to its home position, sending a message via MQTT on the way back to activate the mobile manipulator to take the object onto a conveyor. After the mobile robot placed the object on the conveyor, it sent a message via MQTT to activate the conveyor, which moved the object to a position where the dual-armed robot could detect and grasp it. The dual-armed robot picked up the object and carried it to the human worker; finally, the human worker handed the object back to the dual-armed robot, which placed it on the shelf. The experiment was repeated 5 times with a 100% success rate. The platform also demonstrated the potential of a new manufacturing process that incorporates adaptive and collaborative intelligence to increase the efficiency of mass customization.
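The MQTT hand-off between robots can be sketched as a simple publish/subscribe exchange. The broker address, topic name, and message payload below are hypothetical, and the snippet uses the paho-mqtt client only as one possible implementation; it does not reproduce the cited platform.

```python
# Sketch of an MQTT hand-off between two robots (hypothetical broker, topic,
# and payload). Uses the paho-mqtt client; the 1.x constructor is shown, and
# paho-mqtt 2.x additionally requires a CallbackAPIVersion argument.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "localhost"                 # assumed broker on the factory network
TOPIC = "factory/handoff"            # hypothetical topic name

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    if event.get("event") == "object_delivered":
        print("Mobile manipulator activated: pick up object and load conveyor")

# Subscriber side (e.g. the mobile manipulator).
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER)
subscriber.subscribe(TOPIC)
subscriber.loop_start()

# Publisher side (e.g. the drone, after dropping off the object).
publisher = mqtt.Client()
publisher.connect(BROKER)
publisher.publish(TOPIC, json.dumps({"event": "object_delivered", "robot": "drone"}))

time.sleep(1)                        # give the subscriber time to receive the message
subscriber.loop_stop()
```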
Chan et al. proposed using a light detection and ranging (LIDAR) sensor to scan the interiors of commercial and industrial buildings in 3D in real time and, with the obtained data, running a simultaneous localization and mapping (SLAM) algorithm for mobile robots [18]. Hanagi et al. suggested using the D* algorithm for simultaneous mapping and navigation with cheaper ultrasonic or infrared rangefinder sensors instead of expensive LIDAR [19]. Using the D* algorithm together with absolute odometry, the simultaneous mapping and navigation problem in industrial environments can be approximately solved without relying on external sensors.
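A minimal sketch of the low-cost mapping idea, assuming a grid map updated from wheel-odometry poses and single ultrasonic range readings, is given below. Grid size, resolution, and the sensor model are illustrative assumptions; a D*-style planner would then run on top of such a map.

```python
# Sketch of a low-cost occupancy-grid update: mark the cell hit by a single
# ultrasonic echo as occupied, given the robot's pose from wheel odometry.
# Grid size, resolution, and the sensor model are illustrative assumptions.
import math
import numpy as np

RESOLUTION = 0.05                                 # metres per cell (assumed)
grid = np.zeros((200, 200), dtype=np.int8)        # 10 m x 10 m map, 0 = free/unknown

def world_to_cell(x, y):
    """Convert world coordinates (metres) to grid indices."""
    return int(round(x / RESOLUTION)), int(round(y / RESOLUTION))

def update_from_range(pose, rng):
    """pose = (x, y, heading) from odometry, heading in radians;
    rng = measured range in metres along the sensor axis."""
    x, y, theta = pose
    i, j = world_to_cell(x + rng * math.cos(theta), y + rng * math.sin(theta))
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1                            # mark the echo cell as occupied

# Example: robot at (2.0 m, 2.0 m) facing +x, obstacle echo at 1.2 m.
update_from_range((2.0, 2.0, 0.0), 1.2)
print("Occupied cells:", int(grid.sum()))         # a D*-style planner would consume this map
```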
Tsintotas et al. propose a fast and accurate indexing technique for place recognition in SLAM when active vision is used [20]. They reduce the time required for real-time loop-closure detection by using a Burkhard–Keller (BK) tree instead of the bag-of-tracked-words model. Their experimental results show that the technique achieves place-recognition accuracy comparable to that of more computationally costly approaches.
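The sketch below shows the BK-tree idea in its simplest form: binary visual words indexed by Hamming distance and queried within a radius. The descriptor length and search radius are assumptions for illustration, not the parameters of the cited method.

```python
# Minimal BK-tree over binary visual words, indexed and queried by Hamming
# distance (illustrative descriptor length and radius).
import random

def hamming(a, b):
    return bin(a ^ b).count("1")

class BKTree:
    def __init__(self):
        self.root = None                          # node = [word, {edge_distance: child}]

    def add(self, word):
        if self.root is None:
            self.root = [word, {}]
            return
        node = self.root
        while True:
            d = hamming(word, node[0])
            if d in node[1]:
                node = node[1][d]                 # descend along the existing edge
            else:
                node[1][d] = [word, {}]
                return

    def query(self, word, radius):
        """Return all indexed words within `radius` Hamming distance."""
        matches, stack = [], [self.root] if self.root else []
        while stack:
            node = stack.pop()
            d = hamming(word, node[0])
            if d <= radius:
                matches.append(node[0])
            # BK-tree property: only children whose edge distance lies in
            # [d - radius, d + radius] can contain matches.
            for edge, child in node[1].items():
                if d - radius <= edge <= d + radius:
                    stack.append(child)
        return matches

# Index 1000 random 64-bit binary descriptors and query with a near-duplicate.
random.seed(0)
tree = BKTree()
words = [random.getrandbits(64) for _ in range(1000)]
for w in words:
    tree.add(w)
probe = words[42] ^ 0b111                         # flip three bits of a stored word
print(tree.query(probe, radius=5))                # contains words[42]
```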
Bai et al. reviewed the development of vision-based navigation and guidance for agricultural autonomous vehicles and robots [21]. They stated that the success of vision-based SLAM (VSLAM) in agricultural robots depends heavily on selecting sensors suitable for the working environment. In the section on environmental data processing and navigation line extraction, after summarizing filtering-based, segmentation-based, and line-detection-based data calculation methods, the authors stated that combining these methods, depending on the crop and environment, yields more successful results. They noted that the problems that remain unresolved for VSLAM in agricultural robots, even with such combined methods, are lighting conditions, dust, the different growth stages of plants, shading, and leaf overlap.
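As one example of the line-detection-based family of methods, the sketch below thresholds green vegetation and extracts candidate crop-row lines with a Hough transform; the color range and thresholds are assumptions that would need tuning for real field images.

```python
# Sketch of line-detection-based crop-row extraction: threshold green
# vegetation, then fit lines with a Hough transform (illustrative thresholds).
import cv2
import numpy as np

def extract_rows(image_bgr):
    """Return (rho, theta) parameters of candidate crop-row lines."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array((35, 40, 40)), np.array((85, 255, 255)))  # assumed green range
    edges = cv2.Canny(green, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)     # rho, theta, vote threshold
    return [] if lines is None else [tuple(line[0]) for line in lines]

# Usage (hypothetical image file): rows = extract_rows(cv2.imread("field.jpg"))
```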
Narazaki et al. developed an approach for vision-based autonomous unmanned aerial vehicle (UAV) navigation planning for the rapid post-earthquake inspection of reinforced concrete railway viaducts [22]. The approach mimics the way human inspectors perform the task: the system does not require a complete 3D model of the environment and instead uses key features of the target structure as prior knowledge. The system parses sparse point-cloud data online using frame-by-frame semantic segmentation results and establishes the approximate scale using frame-by-frame depth estimation results. Rectangles are then placed at the points labeled "Column", and columns are determined by finding near-parallel or near-perpendicular pairs. In addition to the columns detected in this way, the approach attempts to find a grid pattern of column positions, based on which the detection results are refined and information about missing columns is extracted. With this approach, the UAV can initiate the inspection task before obtaining full information about the target structure, and the map can be developed progressively as the UAV acquires more images during inspection. The ability of the developed approach to detect columns robustly and reliably was evaluated using synthetic data; the results showed that all eight columns of the target viaducts were detected with centimeter-level accuracy across all 100 trials. Finally, the entire approach was demonstrated in a synthetic environment, showing significant potential for efficiently and rapidly collecting high-quality images for post-earthquake structural investigation.
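A rough sketch of the pairing step, under the assumption that plan-view column centres have already been extracted from the labeled point cloud, is shown below: pairs of centres are kept only if their connecting directions are near-parallel or near-perpendicular to a reference direction. The centres, tolerance, and pairing rule are illustrative and do not reproduce the cited algorithm.

```python
# Toy grid-consistency check: keep column-centre pairs whose directions are
# near-parallel or near-perpendicular to a reference pair (assumed data).
import itertools
import math

def direction(p, q):
    """Angle of the vector p -> q, folded into [0, pi)."""
    return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi

def consistent_pairs(centres, tol_deg=10.0):
    """Return index pairs whose directions align with the first pair,
    up to parallelism or perpendicularity."""
    pairs = list(itertools.combinations(range(len(centres)), 2))
    if not pairs:
        return []
    ref = direction(centres[pairs[0][0]], centres[pairs[0][1]])
    keep, tol = [], math.radians(tol_deg)
    for i, j in pairs:
        diff = abs(direction(centres[i], centres[j]) - ref) % (math.pi / 2)
        if min(diff, math.pi / 2 - diff) < tol:       # near 0 or near 90 degrees
            keep.append((i, j))
    return keep

# Hypothetical plan-view column centres on a 5 m grid, plus one outlier.
centres = [(0, 0), (5, 0), (10, 0), (0, 5), (5, 5), (7.3, 2.1)]
print(consistent_pairs(centres))
```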
Adamkiewicz et al. proposed a trajectory planner and an accompanying state-estimation filter that allow robots to take advantage of a neural radiance field (NeRF) representation for collision-free navigation [23]. The proposed method aims to integrate perception and control further in an active planning style, both by steering trajectories so that the camera points in directions with greater gradient information and by using uncertainty measures calculated by the state estimator to reduce the risk of collisions.
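The idea of trading path smoothness against a learned density field can be sketched with a toy gradient-descent trajectory optimizer, shown below. The Gaussian "obstacle" stands in for NeRF density queries, and all weights and step sizes are assumptions; the cited method also couples the planner with a state-estimation filter, which is omitted here.

```python
# Toy density-penalized trajectory optimization: a Gaussian "obstacle" stands
# in for NeRF density queries; weights and step size are illustrative.
import numpy as np

OBSTACLE = np.array([0.5, 0.4])                   # assumed obstacle centre

def density(p):
    """Stand-in for querying a trained NeRF's density at point p."""
    return np.exp(-np.sum((p - OBSTACLE) ** 2) / 0.05)

def cost(waypoints):
    smooth = np.sum(np.diff(waypoints, axis=0) ** 2)        # path-smoothness term
    collision = sum(density(p) for p in waypoints)          # density (collision) penalty
    return smooth + 0.5 * collision

def numeric_grad(waypoints, eps=1e-4):
    """Central-difference gradient of the cost w.r.t. every waypoint coordinate."""
    g = np.zeros_like(waypoints)
    for idx in np.ndindex(waypoints.shape):
        d = np.zeros_like(waypoints)
        d[idx] = eps
        g[idx] = (cost(waypoints + d) - cost(waypoints - d)) / (2 * eps)
    return g

# Straight-line initial guess from start (0, 0) to goal (1, 1) with 10 waypoints.
traj = np.linspace([0.0, 0.0], [1.0, 1.0], 10)
for _ in range(300):
    grad = numeric_grad(traj)
    grad[0] = grad[-1] = 0.0                      # keep start and goal fixed
    traj -= 0.02 * grad                           # plain gradient-descent step
print(traj.round(2))                              # the path bows away from the obstacle
```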

References

  1. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419839596.
  2. Khan, M.G.; Huda, N.U.; Zaman, U.K.U. Smart Warehouse Management System: Architecture, Real-Time Implementation and Prototype Design. Machines 2022, 10, 150.
  3. Tong, Q.; Ming, X.; Zhang, X. Construction of Sustainable Digital Factory for Automated Warehouse Based on Integration of ERP and WMS. Sustainability 2023, 15, 1022.
  4. Fatima, Z.; Tanveer, M.H.; Waseemullah; Zardari, S.; Naz, L.F.; Khadim, H.; Ahmed, N.; Tahir, M. Production Plant and Warehouse Automation with IoT and Industry 5.0. Appl. Sci. 2022, 12, 2053.
  5. Gonzalez-Aguirre, J.A.; Osorio-Oliveros, R.; Rodríguez-Hernández, K.L.; Lizárraga-Iturralde, J.; Menendez, R.M.; Ramírez-Mendoza, R.A.; Ramírez-Moreno, M.A.; Lozoya-Santos, J.D.J. Service Robots: Trends and Technology. Appl. Sci. 2021, 11, 10702.
  6. Holland, J.; Kingston, L.; McCarthy, C.; Armstrong, E.; O’Dwyer, P.; Merz, F.; McConnell, M. Service Robots in the Healthcare Sector. Robotics 2021, 10, 47.
  7. Wang, S.; Wang, K.; Tang, R.; Qiao, J.; Liu, H.; Hou, Z.-G. Design of a Low-Cost Miniature Robot to Assist the COVID-19 Nasopharyngeal Swab Sampling. IEEE Trans. Med. Robot. Bionics 2021, 3, 289–293.
  8. Lee, H.; Jeong, J. Mobile Robot Path Optimization Technique Based on Reinforcement Learning Algorithm in Warehouse Environment. Appl. Sci. 2021, 11, 1209.
  9. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Service robot implementation: A theoretical framework and research agenda. Serv. Ind. J. 2020, 40, 203–225.
  10. Tofangchi, S.; Hanelt, A.; Marz, D.; Kolbe, L.M. Handling the Efficiency–Personalization Trade-Off in Service Robotics: A Machine-Learning Approach. J. Manag. Inf. Syst. 2021, 38, 246–276.
  11. Groover, M.P. Automation, Production Systems, and Computer-Integrated Manufacturing, 5th ed.; Pearson: Chennai, India, 2016.
  12. Nivas, V.M.; Krishnan, P.G.; Fredrhic, A.C. Automated Guided Car (AGC) for industrial automation. In Proceedings of the 2016 International Conference on Emerging Trends in Engineering, Technology and Science (ICETETS), Pudukkottai, India, 24–26 February 2016; pp. 1–6.
  13. Hussain, R.A.A.; Mohd-Mokhtar, R. Development of a Logistics Autonomous Mobile Robot (EasyBot). In Lecture Notes in Electrical Engineering; Springer: Singapore, 2022; Volume 770, pp. 1009–1025.
  14. Mulyana, T.; Rachmat, H.; Yuliarso, P.P. An Automated Guided Vehicle Simulation through Robotino to Help Learning Course Industrial Automation. In Proceedings of the 9th International Seminar on Industrial Engineering and Management, Padang, Indonesia, 20–22 September 2016; pp. 38–44. Available online: https://isiem.net/wp-content/uploads/2016/10/9th_ISIEM_2016_paper_33_ps_Proceeding.pdf (accessed on 11 March 2023).
  15. Di Paola, D.; Milella, A.; Cicirelli, G.; Distante, A. An Autonomous Mobile Robotic System for Surveillance of Indoor Environments. Int. J. Adv. Robot. Syst. 2010, 7, 8.
  16. Akkız, A. Design and Control of a Mobile Autonomous Library Robot. Thesis, 2019. Available online via the Turkish National Thesis Center (Ulusal Tez Merkezi): https://tez.yok.gov.tr/UlusalTezMerkezi/tezDetay.jsp?id=RKNLg_j824Tn9ypepbNPyA&no=tXxm3pkChXRv2bAjYtyosw (accessed on 11 March 2023).
  17. Rothomphiwat, K.; Harnkhamen, A.; Tothong, T.; Suthisomboon, T.; Dilokthanakul, N.; Manoonpong, P. Advanced Collaborative Robots for the Factory of the Future. In Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII), Iwaki, Japan, 11–14 January 2021; pp. 578–579.
  18. Chan, T.H.; Hesse, H.; Ho, S.G. LiDAR-Based 3D SLAM for Indoor Mapping. In Proceedings of the 2021 7th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 23–26 April 2021; pp. 285–289.
  19. Hanagi, R.R.; Gurav, O.S.; Khandekar, S.A. SLAM using AD* Algorithm with Absolute Odometry. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India, 2–4 April 2021; pp. 1–4.
  20. Tsintotas, K.A.; Sevetlidis, V.; Papapetros, I.T.; Balaska, V.; Psomoulis, A.; Gasteratos, A. BK tree indexing for active vision-based loop-closure detection in autonomous navigation. In Proceedings of the 2022 30th Mediterranean Conference on Control and Automation (MED), Vouliagmeni, Greece, 28 June–1 July 2022; pp. 532–537.
  21. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Comput. Electron. Agric. 2023, 205, 107584.
  22. Narazaki, Y.; Hoskere, V.; Chowdhary, G.; Spencer, B.F. Vision-based navigation planning for autonomous post-earthquake inspection of reinforced concrete railway viaducts using unmanned aerial vehicles. Autom. Constr. 2022, 137, 104214.
  23. Adamkiewicz, M.; Chen, T.; Caccavale, A.; Gardner, R.; Culbertson, P.; Bohg, J.; Schwager, M. Vision-Only Robot Navigation in a Neural Radiance World. IEEE Robot. Autom. Lett. 2022, 7, 4606–4613.