Autonomous Navigation of Robots: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 1 by Marcelo V Garcia.

In the field of artificial intelligence, control systems for mobile robots have undergone significant advancements, particularly within the realm of autonomous learning.

  • autonomous navigation
  • DQN (Deep Q-Network) optimization
  • robotics

1. Introduction

Industries in the present era are confronted with the challenge of staying abreast of technological progress by fostering innovation and investing in the advancement of efficient equipment and work methods. Over recent years, a notable transformation has taken place, driven by the adoption of artificial intelligence (AI) and the implementation of the DQN algorithm. This paradigm shift has given rise to the emergence of intelligent, adaptable, and dynamic manufacturing processes. By integrating AI and DQN into manufacturing operations, several benefits have been realized, including the automation of manufacturing processes, the seamless exchange of real-time data leveraging cloud computing, and the establishment of interconnected cyber-physical systems. Consequently, this integration has paved the way for the development of control algorithms rooted in machine learning, thereby further augmenting the capabilities of manufacturing processes [1].
The implementation of AI- and DQN-powered autonomous systems, sensors, and actuators has revolutionized the manufacturing industry, enabling companies to achieve elevated quality standards, cost reductions, minimized downtime, and increased competitiveness. This integration has not only been successful, but also serves as a testament to the immense potential for further advancements in the creation of intelligent, efficient, and highly productive manufacturing systems in the future [2,3].
In this new era of the industrial revolution, we are witnessing the emergence of various transformative advancements. These include the deployment of autonomous production lines, the adoption of intelligent manufacturing practices, the implementation of efficient and secure data management systems, and the attainment of unparalleled process quality. These developments have given rise to a heightened sense of competitiveness among companies and have opened up new avenues for productivity across diverse sectors of the market. Notably, these advancements are fueled by innovative approaches and the integration of predictive tools that leverage data analysis to facilitate informed decision-making, thereby significantly reducing risks [4,5,6,7].
Manufacturing practices are also being transformed by the integration of the DQN algorithm. This has led to the emergence of autonomous production lines, intelligent manufacturing practices, effective and protected data handling, and high-quality processes [8,9]. The use of the DQN algorithm for predictive modeling substantially reduces risks by providing accurate and reliable insights into the manufacturing process. By leveraging the power of AI and the DQN algorithm, manufacturing companies can optimize their operations, reduce costs, improve efficiency, and ultimately gain a competitive edge in the market [10,11].
Recently, particularly in globalized economies, statistical methods have been developed and applied to the analysis of data models, because prediction is central to decision-making, which can be risky and can represent a positive or negative change within a manufacturing process [12,13]. The utilization of machine learning systems has emerged as an innovative approach that applies statistical techniques to analyze and optimize algorithms. These algorithms are developed from insights derived from prior outcomes, leading to the emergence of a regression-based learning methodology whose application focuses on systems that are reconfigured in real time so that they automatically find the most efficient way to perform their respective operations [14,15,16].
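The regression-based prediction described above can be sketched with a minimal example. The data, the single-feature model, and the cycle-time scenario below are all illustrative assumptions, not taken from the cited works:

```python
# Minimal sketch: fitting a least-squares line y = a*x + b to past process
# measurements, then predicting the next value. All numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var                 # slope
    b = mean_y - a * mean_x       # intercept
    return a, b

# Hypothetical past cycle times (x = cycle index, y = duration in seconds).
xs = [1, 2, 3, 4, 5]
ys = [10.2, 10.0, 9.7, 9.5, 9.3]
a, b = fit_line(xs, ys)
prediction = a * 6 + b            # forecast for the next cycle
```

A negative slope here would indicate a shortening cycle time, which is the kind of trend a process controller could act on.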

2. Autonomous Navigation of Robots

Reinforcement learning (RL) is a type of machine learning in which an agent interacts with an environment to learn the actions that maximize a reward signal. RL has been successfully applied in robotics to solve a variety of tasks, including path determination [17,18,19,20]. Path determination involves finding a safe and efficient path for a robot to follow in order to complete a task. This is a challenging problem, as robots must navigate unknown environments while avoiding obstacles and minimizing the risk of collisions [21,22,23,24,25,26,27,28]. RL can be used to train a robot to determine the best path toward its goals [29,30,31,32]. The robot is trained by interacting with its environment and receiving feedback in the form of rewards or penalties for the actions it takes. For example, the robot may receive a reward for successfully navigating to a particular location and a penalty for colliding with an obstacle [33,34,35,36,37]. Over time, the robot uses this feedback to learn the optimal path in different environments, allowing it to adapt to changes in its surroundings and make decisions in real time based on the current situation [38,39,40,41]. Driven by technological advances, research on robotic systems is in constant development, seeking to optimize self-control so that these systems operate autonomously and make intelligent decisions [42,43]. For movement in particular, different control methods have been designed that vary according to their field of application; the most widely used is the predictive model, which generates decisions from statistics computed over the large amounts of data available in industrial environments [44,45,46,47,48,49].
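The reward-and-penalty loop described above can be sketched with tabular Q-learning on a toy navigation problem. The corridor layout, reward magnitudes, and hyperparameters below are illustrative assumptions chosen for the sketch, not values from the cited works:

```python
import random

# Sketch: tabular Q-learning on a 1-D corridor of 5 states. The goal is
# state 4 (reward), state 2 holds an obstacle (penalty), and every other
# step has a small cost. All numbers here are illustrative choices.

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right
GOAL, OBSTACLE = 4, 2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == GOAL:
        return nxt, 10.0, True           # reward for reaching the target
    if nxt == OBSTACLE:
        return nxt, -5.0, False          # penalty for hitting the obstacle
    return nxt, -1.0, False              # small step cost

random.seed(0)
for _ in range(200):                     # training episodes
    s, done = 0, False
    while not done:
        if random.random() < EPSILON:    # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy from the start moves toward the goal.
best = max(ACTIONS, key=lambda act: Q[(0, act)])
```

After training, the learned Q-values encode the trade-off the text describes: the agent accepts the obstacle penalty on the way because the discounted goal reward dominates.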
Various investigations [50,51] indicate that mobile robots are extensively utilized in diverse industrial and commercial settings, often replacing human labor. Instances can be found in warehouses and hospitals, where these robots transport materials and assist workers in repetitive tasks that may have adverse effects on their wellbeing [52,53]. In many domains involving mobile robots, processing a substantial amount of information poses a significant challenge; within the context of machine learning, this means that the learning process takes place in an environment containing both fixed and mobile obstacles. The navigation tasks of mobile robots typically involve several optimization objectives, such as cost reduction, shorter trajectories, and minimized processing time. However, in complex and unpredictable industrial environments, adaptability to the surroundings becomes essential [54,55,56]. The Deep Q-Network (DQN) algorithm is a popular variant of reinforcement learning that has been successfully applied to a wide range of problems, including path determination in robotics [57,58,59]. In DQN, the agent uses a neural network to approximate the optimal action-value function, which maps states to the expected long-term reward of each possible action. The network is trained using a variant of the Q-learning algorithm, where the agent learns from the transitions between states and the rewards received for each action [60,61,62]. To apply DQN to path determination in robotics, the agent must first be trained on a set of sample environments. During training, the agent explores the environment and learns to predict the optimal action to take in each state; its performance is evaluated by its ability to navigate to a specified target location while avoiding obstacles [63,64,65,66].
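The DQN-style update, training an approximator toward the target r + gamma * max Q(s', a'), can be sketched as follows. As a hedged illustration, a linear model with one weight per (feature, action) pair stands in for the deep network, and the transitions are synthetic; the function name `dqn_update`, the feature layout, and all constants are assumptions for this sketch, not the cited implementations:

```python
import random

# Sketch of the semi-gradient Q-learning step at the core of DQN. A linear
# model stands in for the neural network; transitions are synthetic.

GAMMA, LR = 0.9, 0.05
N_FEATURES, N_ACTIONS = 3, 2
w = [[0.0] * N_FEATURES for _ in range(N_ACTIONS)]   # weights per action

def q_value(state, action):
    return sum(wi * si for wi, si in zip(w[action], state))

def dqn_update(state, action, reward, next_state, done):
    """One gradient step on the squared TD error toward the Q-learning target."""
    target = reward if done else reward + GAMMA * max(
        q_value(next_state, a) for a in range(N_ACTIONS))
    td_error = target - q_value(state, action)
    for i in range(N_FEATURES):
        w[action][i] += LR * td_error * state[i]

# Replay-style training on two synthetic transitions: from feature pattern
# [1, 0, 0], action 1 is always rewarded and action 0 is penalized.
random.seed(0)
transitions = [([1, 0, 0], 1, 1.0, [0, 1, 0], True),
               ([1, 0, 0], 0, -1.0, [0, 0, 1], True)]
for _ in range(100):
    dqn_update(*random.choice(transitions))
```

In full DQN the same target computation is combined with an experience-replay buffer and a periodically updated target network to stabilize training of the deep model.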
A novel work is presented in [54], where reinforcement learning is applied: the robot learns by training in its environment through a scoring system implemented in a deep Q-network, which later allows it to take the optimal action, or strategy, to move to its target position while avoiding obstacles. In [67], laser sensors are used for the autonomous navigation of mobile robots because of their wide detection range and high precision; that work also details that robots need sufficient information to correctly detect obstacles and map the environment before carrying out navigation. Owing to changes in computer systems and the incorporation of robotics into industry, programming interfaces have been developed for the design and control of open-source applications [68,69]. ROS is stable enough to program any type of robotic function, whether for a manipulator or a mobile robot; in [70], this framework controls the entire flow of information through the tools and libraries included in its API, which contains functional packages that facilitate the creation of, and communication between, nodes. As mentioned in [71], a successful application combining big data and machine learning can provide solutions to problems in manufacturing industries, especially across the product life cycle and the entire supply chain. That work also presents an energy-saving model that uses machine learning to determine the optimal trajectory for an industrial robot; the keys to this design are data collection and the control algorithm.
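Laser-based navigation of the kind discussed above typically requires compressing a dense scan into a compact state for the learning agent. The following sketch is an assumption for illustration, not the preprocessing of the cited works: it collapses a scan into per-sector minimum distances, a common way to build a small state vector for a navigation policy:

```python
# Illustrative preprocessing (hypothetical): summarize a laser scan as the
# minimum distance in each angular sector, yielding a compact state vector.

def scan_to_state(ranges, n_sectors=3):
    """Return the minimum range reading per angular sector."""
    size = len(ranges) // n_sectors
    return [min(ranges[i * size:(i + 1) * size]) for i in range(n_sectors)]

# A hypothetical 9-beam scan (meters): an obstacle is close in the last sector.
scan = [2.5, 2.7, 3.0, 4.0, 4.2, 4.1, 0.6, 0.8, 0.7]
state = scan_to_state(scan)
```

In a ROS setup, a node subscribed to the laser topic could apply this reduction before feeding the state to the policy.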
For mobile robots, autonomous navigation based on machine learning is the key to high-impact technological advancement; several works [17,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88] indicate that industrial robots equipped with appropriate sensors and a specific control algorithm can understand environments to which humans generally do not have access, such as war zones or nuclear plants. The research in [89] presents three control algorithms based on different techniques, along with a simulation environment for training the mobile robot; a function focused on Q-learning was also introduced to return the reward value according to the records of past executions and thus validate the performance of the robot. Because the Q-values are real-valued, the neural network performs a regression task, allowing for optimization using the squared-error loss function. Overall, the combination of reinforcement learning with the DQN algorithm has shown great promise in enabling robots to autonomously navigate complex environments and find safe and efficient paths to their goals.
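The regression view of Q-learning mentioned above comes down to minimizing the squared error between predicted Q-values and their temporal-difference targets. The values below are illustrative:

```python
# Sketch: with real-valued Q targets, training the network is a regression
# problem minimized with the mean squared-error loss.

def squared_error_loss(q_predicted, q_targets):
    """Mean squared error between predicted Q-values and TD targets."""
    n = len(q_predicted)
    return sum((p - t) ** 2 for p, t in zip(q_predicted, q_targets)) / n

# Hypothetical predictions vs. targets for three (state, action) pairs.
loss = squared_error_loss([0.5, 1.2, -0.3], [1.0, 1.0, 0.0])
```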

References

  1. Zong, L.; Yu, Y.; Wang, J.; Liu, P.; Feng, W.; Dai, X.; Chen, L.; Gunawan, C.; Jimmy Yun, S.; Amal, R.; et al. Oxygen-vacancy-rich molybdenum carbide MXene nanonetworks for ultrasound-triggered and capturing-enhanced sonocatalytic bacteria eradication. Biomaterials 2023, 296, 122074.
  2. Hamid, M.S.R.A.; Masrom, N.R.; Mazlan, N.A.B. The key factors of the industrial revolution 4.0 in the Malaysian smart manufacturing context. Int. J. Asian Bus. Inf. Manag. 2022, 13, 1–19.
  3. Zou, T.; Situ, W.; Yang, W.; Zeng, W.; Wang, Y. A Method for Long-Term Target Anti-Interference Tracking Combining Deep Learning and CKF for LARS Tracking and Capturing. Remote Sens. 2023, 15, 748.
  4. Ochoa-Zezzatti, A.; Oliva, D. Impact of Industry 4.0: Improving Hybrid Laser-Arc Welding with Big Data for Subsequent Functionality in UnderwaterWelding. In Studies in Systems, Decision and Control; Springer: Berlin/Heidelberg, Germany, 2022; Volume 347, pp. 87–94.
  5. de Castro, G.; Pinto, M.; Biundini, I.; Melo, A.; Marcato, A.; Haddad, D. Dynamic Path Planning Based on Neural Networks for Aerial Inspection. J. Control. Autom. Electr. Syst. 2023, 34, 85–105.
  6. Mizoguchi, Y.; Hamada, D.; Fukuda, R.; Inniyaka, I.; Kuwata, K.; Nishimuta, K.; Sugino, A.; Tanaka, R.; Yoshiki, T.; Nishida, Y.; et al. Image-based navigation of Small-size Autonomous Underwater Vehicle “Kyubic” in International Underwater Robot Competition; ALife Robotics Corporation Ltd.: Oita, Japan, 2023; pp. 473–476.
  7. Pavel, M.D.; Roșioru, S.; Arghira, N.; Stamatescu, G. Control of Open Mobile Robotic Platform Using Deep Reinforcement Learning. In Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future: Proceedings of SOHOMA 2022; Springer: Berlin/Heidelberg, Germany, 2023; Volume 1083, pp. 368–379.
  8. Cerquitelli, T.; Ventura, F.; Apiletti, D.; Baralis, E.; Macii, E.; Poncino, M. Enhancing manufacturing intelligence through an unsupervised data-driven methodology for cyclic industrial processes. Expert Syst. Appl. 2021, 182, 115269.
  9. Cruz Ulloa, C.; Garcia, M.; del Cerro, J.; Barrientos, A. Deep Learning for Victims Detection from Virtual and Real Search and Rescue Environments. In ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics; Springer: Berlin/Heidelberg, Germany, 2023; Volume 590, pp. 3–13.
  10. Cordeiro, A.; Rocha, L.; Costa, C.; Silva, M. Object Segmentation for Bin Picking Using Deep Learning. In ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2023; Volume 590, pp. 53–66.
  11. Rodrigues, N.; Sousa, A.; Reis, L.; Coelho, A. Intelligent Wheelchairs Rolling in Pairs Using Reinforcement Learning. In ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2023; Volume 590, pp. 274–285.
  12. Crawford, B.; Sourki, R.; Khayyam, H.; Milani, A.S. A machine learning framework with dataset-knowledgeability pre-assessment and a local decision-boundary crispness score: An industry 4.0-based case study on composite autoclave manufacturing. Comput. Ind. 2021, 132, 103510.
  13. Vidal-Soroa, D.; Furelos, P.; Bellas, F.; Becerra, J. An Approach to 3D Object Detection in Real-Time for Cognitive Robotics Experiments. In ROBOT2022: Fifth Iberian Robotics Conference: Advances in Robotics; Lecture Notes in Networks and Systems; Springer: Berlin/Heidelberg, Germany, 2023; Volume 589, pp. 283–294.
  14. Lu, H.; Liu, J.; Luo, Y.; Hua, Y.; Qiu, S.; Huang, Y. An autonomous learning mobile robot using biological reward modulate STDP. Neurocomputing 2021, 458, 308–318.
  15. Sivaranjani, A.; Vinod, B. Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction. Intell. Autom. Soft Comput. 2023, 35, 1135–1150.
  16. Raja, V.; Talwar, D.; Manchikanti, A.; Jha, S. Autonomous Navigation for Mobile Robots with Sensor Fusion Technology. In Industry 4.0 and Advanced Manufacturing: Proceedings of I-4AM 2022; Lecture Notes in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2023; pp. 13–23.
  17. Cruz Ulloa, C.; Krus, A.; Barrientos, A.; Cerro, J.; Valero, C. Robotic Fertilization in Strip Cropping using a CNN Vegetables Detection-Characterization Method. Comput. Electron. Agric. 2022, 193, 106684.
  18. Herr, G.; Weerakoon, L.; Yu, M.; Chopra, N. Cardynet: Deep Learning Based Navigation for Car-Like Robots in Dynamic Environments; American Society of Mechanical Engineers (ASME): New York, NY, USA, 2022; Volume 5.
  19. Jaiswal, A.; Ashutosh, K.; Rousseau, J.; Peng, Y.; Wang, Z.; Ding, Y. RoS-KD: A Robust Stochastic Knowledge Distillation Approach for Noisy Medical Imaging; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 981–986.
  20. Chen, C.W.; Tsai, A.C.; Zhang, Y.H.; Wang, J.F. 3D Object Detection Combined with Inverse Kinematics to Achieve Robotic Arm Grasping; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022.
  21. Kulkarni, J.; Pantawane, P. Person Following Robot Based on Real Time Single Object Tracking and RGB-D Image; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022.
  22. M’Sila, C.; Ayad, R.; Ait-Oufroukh, N. Automated Foreign Object Debris Detection System Based on UAV; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022.
  23. Balachandran, A.; Lal S, A.; Sreedharan, P. Autonomous Navigation of an AMR Using Deep Reinforcement Learning in a Warehouse Environment; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022.
  24. Ghodake, A.; Uttam, P.; Ahuja, B. Accurate 6-DOF Grasp Pose Detection in Cluttered Environments Using Deep Learning; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022.
  25. Zhang, J.; Xu, Z.; Wu, J.; Chen, Q.; Wang, F. Lightweight Intelligent Autonomous Unmanned Vehicle Based on Deep Neural Network in ROS System; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 679–684.
  26. Miyama, M. Robust inference of multi-task convolutional neural network for advanced driving assistance by embedding coordinates. In Proceedings of the 8th World Congress on Electrical Engineering and Computer Systems and Science, EECSS 2022, Prague, Czech Republic, 28–30 July 2022; pp. 105-1–105-9.
  27. Jebbar, M.; Maizate, A.; Ait Abdelouahid, R. Moroccan’s Arabic Speech Training And Deploying Machine Learning Models with Teachable Machine; Elsevier: Amsterdam, The Netherlands, 2022; Volume 203, pp. 801–806.
  28. Copot, C.; Shi, L.; Smet, E.; Ionescu, C.; Vanlanduit, S. Comparison of Deep Learning Models in Position Based Visual Servoing; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; Volume 2022.
  29. Liu, J.; Rangwala, M.; Ahluwalia, K.; Ghajar, S.; Dhami, H.; Tokekar, P.; Tracy, B.; Williams, R. Intermittent Deployment for Large-Scale Multi-Robot Forage Perception: Data Synthesis, Prediction, and Planning. IEEE Trans. Autom. Sci. Eng. 2022, 1–21.
  30. Lai, J.; Ramli, H.; Ismail, L.; Hasan, W. Real-Time Detection of Ripe Oil Palm Fresh Fruit Bunch Based on YOLOv4. IEEE Access 2022, 10, 95763–95770.
  31. Lin, H.Z.; Chen, H.H.; Choophutthakan, K.; Li, C.H. Autonomous Mobile Robot as a Cyber-Physical System Featuring Networked Deep Learning and Control; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; Volume 2022, pp. 268–274.
  32. Mandel, N.; Sandino, J.; Galvez-Serna, J.; Vanegas, F.; Milford, M.; Gonzalez, F. Resolution-adaptive Quadtrees for Semantic Segmentation Mapping in UAV Applications; IEEE Computer Society: Piscataway, NJ, USA, 2022; Volume 2022.
  33. Chen, Y.; Li, D.; Zhong, H.; Zhao, R. The Method for Automatic Adjustment of AGV’s PID Based on Deep Reinforcement Learning. Inst. Phys. 2022, 2320, 012008.
  34. Chen, Y.; Li, D.; Zhong, H.; Zhu, O.; Zhao, Z. The Determination of Reward Function in AGV Motion Control Based on DQN. Inst. Phys. 2022, 2320, 012002.
  35. Chavez-Galaviz, J.; Mahmoudian, N. Underwater Dock Detection through Convolutional Neural Networks Trained with Artificial Image Generation; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 4621–4627.
  36. Carvalho, E.; Susbielle, P.; Hably, A.; Dibangoye, J.; Marchand, N. Neural Enhanced Control for Quadrotor Linear Behavior Fitting; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 378–385.
  37. Liu, K.; Zhou, X.; Zhao, B.; Ou, H.; Chen, B. An Integrated Visual System for Unmanned Aerial Vehicles Following Ground Vehicles: Simulations and Experiments. In Proceedings of the 2022 IEEE 17th International Conference on Control & Automation (ICCA), 2022 IEEE 17th International Conference on Control & Automation (ICCA), Naples, Italy, 27–30 June 2022; pp. 593–598.
  38. Yun, J.; Jiang, D.; Sun, Y.; Huang, L.; Tao, B.; Jiang, G.; Kong, J.; Weng, Y.; Li, G.; Fang, Z. Grasping Pose Detection for Loose Stacked Object Based on Convolutional Neural Network with Multiple Self-Powered Sensors Information. IEEE Sens. J. 2022.
  39. Zhu, C.; Chen, L.; Cai, Y.; Wang, H.; Li, Y. Vehicle-Mounted Multi-Object Tracking Based on Self-Query. In Proceedings of the International Conference on Advanced Algorithms and Neural Networks (AANN 2022), Zhuhai, China, 25–27 February 2022; Volume 12285.
  40. Nawabi, A.; Jinfang, S.; Abbasi, R.; Iqbal, M.; Heyat, M.; Akhtar, F.; Wu, K.; Twumasi, B. Segmentation of Drug-Treated Cell Image and Mitochondrial-Oxidative Stress Using Deep Convolutional Neural Network. Oxidative Med. Cell. Longev. 2022, 2022, 5641727.
  41. Saripuddin, M.; Suliman, A.; Sameon, S. Impact of Resampling and Deep Learning to Detect Anomaly in Imbalance Time-Series Data; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 37–41.
  42. Yu, Y.; Zhang, J.Z.; Cao, Y.; Kazancoglu, Y. Intelligent transformation of the manufacturing industry for Industry 4.0: Seizing financial benefits from supply chain relationship capital through enterprise green management. Technol. Forecast. Soc. Chang. 2021, 172, 120999.
  43. Santoso, F.; Finn, A. A Data-Driven Cyber-Physical System Using Deep-Learning Convolutional Neural Networks: Study on False-Data Injection Attacks in an Unmanned Ground Vehicle Under Fault-Tolerant Conditions. IEEE Trans. Syst. Man, Cybern. Syst. 2023, 53, 346–356.
  44. Sinulingga, H.R.; Munir, R. Road Recognition System with Heuristic Method and Machine Learning. In Proceedings of the 2020 7th International Conference on Advance Informatics: Concepts, Theory and Applications (ICAICTA), Tokoname, Japan, 8–9 September 2020; pp. 1–6.
  45. Bhattacharya, S.; Dutta, S.; Maiti, T.K.; Miura-Mattausch, M.; Navarro, D.; Mattausch, H.J. Machine learning algorithm for autonomous control of walking robot. In Proceedings of the 2018 International Symposium on Devices, Circuits and Systems (ISDCS), Howrah, India, 29–31 March 2018; pp. 1–4.
  46. Mishra, P.; Jain, U.; Choudhury, S.; Singh, S.; Pandey, A.; Sharma, A.; Singh, R.; Pathak, V.; Saxena, K.; Gehlot, A. Footstep planning of humanoid robot in ROS environment using Generative Adversarial Networks (GANs) deep learning. Robot. Auton. Syst. 2022, 158, 104269.
  47. Mahmeen, M.; Sanchez, R.; Friebe, M.; Pech, M.; Haider, S. Collision Avoidance Route Planning for Autonomous Medical Devices Using Multiple Depth Cameras. IEEE Access 2022, 10, 29903–29915.
  48. Isaac, M.M.; Wilscy, M.; Aji, S. A Skip-Connected CNN and Residual Image-Based Deep Network for Image Splicing Localization. In Pervasive Computing and Social Networking; Ranganathan, G., Bestak, R., Palanisamy, R., Rocha, Á., Eds.; Lecture Notes in Networks and Systems; Springer: Singapore, 2022; Volume 317.
  49. Domingo, J.; Gómez-García-Bermejo, J.; Zalama, E. Optimization and improvement of a robotics gaze control system using LSTM networks. Multimed. Tools Appl. 2022, 81, 3351–3368.
  50. Caesarendra, W.; Wijaya, T.; Pappachan, B.K.; Tjahjowidodo, T. Adaptation to industry 4.0 using machine learning and cloud computing to improve the conventional method of deburring in aerospace manufacturing industry. In Proceedings of the 2019 International Conference on Information and Communication Technology and Systems, ICTS 2019, Surabaya, Indonesia, 18 July 2019; pp. 120–124.
  51. Gonzalez, A.G.C.; Alves, M.V.S.; Viana, G.S.; Carvalho, L.K.; Basilio, J.C. Supervisory Control-Based Navigation Architecture: A New Framework for Autonomous Robots in Industry 4.0 Environments. IEEE Trans. Ind. Inform. 2018, 14, 1732–1743.
  52. Mayer, C.; Ofek, E.; Fridrich, D.; Molchanov, Y.; Yacobi, R.; Gazy, I.; Hayun, I.; Zalach, J.; Paz-Yaacov, N.; Barshack, I. Direct identification of ALK and ROS1 fusions in non-small cell lung cancer from hematoxylin and eosin-stained slides using deep learning algorithms. Mod. Pathol. 2022, 35, 1882–1887.
  53. Ma, C.Y.; Zhou, J.Y.; Xu, X.T.; Qin, S.B.; Han, M.F.; Cao, X.H.; Gao, Y.Z.; Xu, L.; Zhou, J.J.; Zhang, W.; et al. Clinical evaluation of deep learning–based clinical target volume three-channel auto-segmentation algorithm for adaptive radiotherapy in cervical cancer. BMC Med. Imaging 2022, 22, 123.
  54. Xin, J.; Zhao, H.; Liu, D.; Li, M. Application of deep reinforcement learning in mobile robot path planning. In Proceedings of the Proceedings—2017 Chinese Automation Congress, CAC 2017, Jinan, China, 20–22 October 2017; Volume 2017, pp. 7112–7116.
  55. Gattu, S.; Penumacha, K. Autonomous Navigation and Obstacle Avoidance using Self-Guided and Self-Regularized Actor-Critic. In Proceedings of the 8th International Conference on Robotics and Artificial Intelligence, Singapore, 14–16 September 2022; pp. 52–58.
  56. Xie, L.; Shen, Y.; Zhang, M.; Zhong, Y.; Lu, Y.; Yang, L.; Li, Z. Single-model multi-tasks deep learning network for recognition and quantitation of surface-enhanced Raman spectroscopy. Opt. Express 2022, 30, 41580–41589.
  57. Bravo-Arrabal, J.; Toscano-Moreno, M.; Fernandez-Lozano, J.; Mandow, A.; Gomez-Ruiz, J.; García-Cerezo, A. The internet of cooperative agents architecture (X-IoCA) for robots, hybrid sensor networks, and mec centers in complex environments: A search and rescue case study. Sensors 2021, 21, 7843.
  58. Liu, Z.; Shi, Y.; Chen, H.; Qin, T.; Zhou, X.; Huo, J.; Dong, H.; Yang, X.; Zhu, X.; Chen, X.; et al. Machine learning on properties of multiscale multisource hydroxyapatite nanoparticles datasets with different morphologies and sizes. NPJ Comput. Mater. 2021, 7, 142.
  59. Bae, S.Y.; Lee, J.; Jeong, J.; Lim, C.; Choi, J. Effective data-balancing methods for class-imbalanced genotoxicity datasets using machine learning algorithms and molecular fingerprints. Comput. Toxicol. 2021, 20, 100178.
  60. Choi, S.; Park, S. Development of Smart Mobile Manipulator Controlled by a Single Windows PC Equipped with Real-Time Control Software. Int. J. Precis. Eng. Manuf. 2021, 22, 1707–1717.
  61. Saeedvand, S.; Mandala, H.; Baltes, J. Hierarchical deep reinforcement learning to drag heavy objects by adult-sized humanoid robot. Appl. Soft Comput. 2021, 110, 107601.
  62. Chen, K.; Liang, Y.; Jha, N.; Ichnowski, J.; Danielczuk, M.; Gonzalez, J.; Kubiatowicz, J.; Goldberg, K. FogROS: An Adaptive Framework for Automating Fog Robotics Deployment. In Proceedings of the 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), Lyon, France, 23–27 August 2021; pp. 2035–2042.
  63. Ou, J.; Guo, X.; Lou, W.; Zhu, M. Learning the Spatial Perception and Obstacle Avoidance with the Monocular Vision on a Quadrotor; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 582–587.
  64. Jembre, Y.; Nugroho, Y.; Khan, M.; Attique, M.; Paul, R.; Shah, S.; Kim, B. Evaluation of reinforcement and deep learning algorithms in controlling unmanned aerial vehicles. Appl. Sci. 2021, 11, 7240.
  65. Yuhas, M.; Feng, Y.; Ng, D.; Rahiminasab, Z.; Easwaran, A. Embedded Out-of-Distribution Detection on an Autonomous Robot Platform; Association for Computing Machinery, Inc.: New York, NY, USA, 2021; pp. 13–18.
  66. Timmis, I.; Paul, N.; Chung, C.J. Teaching Vehicles to Steer Themselves with Deep Learning. In Proceedings of the 2021 IEEE International Conference on Electro Information Technology (EIT), Mt. Pleasant, MI, USA, 14–15 May 2021; pp. 419–421.
  67. Sui, J.; Yang, L.; Zhang, X.; Zhang, X. Laser measurement key technologies and application in robot autonomous navigation. Int. J. Pattern Recognit. Artif. Intell. 2011, 25, 1127–1146.
  68. Parikh, A.; Karamchandani, S.; Lalani, A.; Bhavsar, D.; Kamat, G. Unmanned Terrestrial Deep Stereo ConvNet Gofer Embedded with CNN Architecture. Int. J. Mech. Eng. Robot. Res. 2022, 11, 807–819.
  69. Aslan, M.; Durdu, A.; Yusefi, A.; Yilmaz, A. HVIOnet: A deep learning based hybrid visual–inertial odometry approach for unmanned aerial system position estimation. Neural Netw. 2022, 155, 461–474.
  70. Zhi, L.; Xuesong, M. Navigation and Control System of Mobile Robot Based on ROS. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 368–372.
  71. Yin, S.; Ji, W.; Wang, L. A machine learning based energy efficient trajectory planning approach for industrial robots. Procedia CIRP 2019, 81, 429–434.
  72. Wang, L. Automatic control of mobile robot based on autonomous navigation algorithm. Artif. Life Robot. 2019, 24, 494–498.
  73. Yuan, M.; Shan, J.; Mi, K. Deep Reinforcement Learning Based Game-Theoretic Decision-Making for Autonomous Vehicles. IEEE Robot. Autom. Lett. 2022, 7, 818–825.
  74. Guan, W.; Guo, Y. A Visual Learning based Robotic Grasping System. In Proceedings of the 2022 The 6th International Conference on Advances in Artificial Intelligence, Birmingham, UK, 22–24 October 2022; pp. 22–28.
  75. Liu, G.; Sun, W.; Xie, W.; Xu, Y. Learning visual path–following skills for industrial robot using deep reinforcement learning. Int. J. Adv. Manuf. Technol. 2022, 122, 1099–1111.
  76. Su, L.; Hua, Y.; Dong, X.; Ren, Z. Human-UAV swarm multi-modal intelligent interaction methods. Hangkong Xuebao/Acta Aeronaut. Astronaut. Sin. 2022, 43.
  77. Tsai, J.; Chang, C.C.; Ou, Y.C.; Sieh, B.H.; Ooi, Y.M. Autonomous Driving Control Based on the Perception of a Lidar Sensor and Odometer. Appl. Sci. 2022, 12, 7775.
  78. Yinka-Banjo, C.; Ugot, O.; Ehiorobo, E. Object Detection for Robot Coordination in Robotics Soccer. Niger. J. Technol. Dev. 2022, 19, 136–142.
  79. Khalifa, A.; Abdelrahman, A.; Strazdas, D.; Hintz, J.; Hempel, T.; Al-Hamadi, A. Face Recognition and Tracking Framework for Human–Robot Interaction. Appl. Sci. 2022, 12, 5568.
  80. Ravi, N.; El-Sharkawy, M. Real-Time Embedded Implementation of Improved Object Detector for Resource-Constrained Devices. J. Low Power Electron. Appl. 2022, 12, 21.
  81. Pu, L.; Zhang, X. Deep learning based UAV vision object detection and tracking. Beijing Hangkong Hangtian Daxue Xuebao/J. Beijing Univ. Aeronaut. Astronaut. 2022, 48, 872–880.
  82. Gong, H.; Wang, P.; Ni, C.; Cheng, N. Efficient Path Planning for Mobile Robot Based on Deep Deterministic Policy Gradient. Sensors 2022, 22, 3579.
  83. Baek, E.T.; Im, D.Y. ROS-Based Unmanned Mobile Robot Platform for Agriculture. Appl. Sci. 2022, 12, 4335.
  84. Zhu, X.; Li, N.; Wang, Y. Software change-proneness prediction based on deep learning. J. Software: Evol. Process. 2022, 34, e2434.
  85. Bui, K.; Truong, G.; Ngoc, D. GCTD3: Modeling of Bipedal Locomotion by Combination of TD3 Algorithms and Graph Convolutional Network. Appl. Sci. 2022, 12, 2948.
  86. Escobar-Naranjo, J.; Garcia, M.V. Self-supervised Learning Approach to Local Trajectory Planning for Mobile Robots Using Optimization of Trajectories. Lect. Notes Netw. Syst. 2023, 578, 741–748.
  87. Montalvo, W.; Garcia, C.A.; Naranjo, J.E.; Ortiz, A.; Garcia, M.V. Tele-operation system for mobile robots using in oil & gas industry. Risti - Rev. Iber. Sist. Tecnol. Inf. 2020, 2020, 351–365.
  88. Caiza, G.; Garcia, C.A.; Naranjo, J.E.; Garcia, M.V. Flexible robotic teleoperation architecture for intelligent oil fields. Heliyon 2020, 6, e03833.
  89. Mohanty, P.K.; Sah, A.K.; Kumar, V.; Kundu, S. Application of deep Q-learning for wheel mobile robot navigation. In Proceedings of the 2017 3rd International Conference on Computational Intelligence and Networks (CINE), Odisha, India, 28 October 2017; pp. 88–93.