Fundamentals of Path-Planning Algorithms

The rapid technological development of computing power today allows for the implementation of increasingly advanced algorithms, including path planning in real time.

Keywords: collision avoidance; path planning; obstacle detection; autonomous underwater vehicle

1. A* Algorithm

In 1968, the heuristic A* method was proposed [1]. It consists of dividing the known area into individual cells and calculating, for each of them, the total cost of reaching the target. Given the distance from the starting point to the currently analysed cell and the distance from the analysed cell to the target, the total path cost is calculated with the following formula:
F(n)=G(n)+H(n),            (1)
where:
F(n) is the total path cost;
G(n) is the cost of reaching from the starting point to the analysed cell; and
H(n) is the cost of reaching from the analysed cell to the target.
The A* algorithm focuses on finding the shortest path to a destination among static obstacles, assuming that both the environment and the locations of the obstacles are known. For large areas or areas with many obstacles, however, the method requires an increased amount of computation, which significantly lengthens the path-planning time. For marine vehicles, AUVs and autonomous surface vehicles (ASVs), the A* algorithm is in most cases used in combination with the visibility graph algorithm [2][3][4]. In [5], simulations of two grid-based methods in a 3D environment were compared; the multi-directional A* proved more efficient than the Dijkstra algorithm in terms of the number of nodes and the total path length. Practical implementation of this method is complicated by the variability of the underwater environment and the lack of precise knowledge about obstacle locations. Another challenge for practical implementation in an AUV may be the unforeseen effect of sea currents.
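To make the cost evaluation of Equation (1) concrete, the sketch below shows a minimal grid-based A* in Python. The 4-connected grid, unit step cost, and Euclidean heuristic are illustrative assumptions, not the exact formulation used in the cited studies.

```python
import heapq
import math

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = obstacle).

    G(n) is the accumulated cost from the start, H(n) the Euclidean distance
    to the goal; cells are expanded in increasing order of F(n) = G(n) + H(n).
    """
    def h(cell):
        return math.dist(cell, goal)

    open_set = [(h(start), start)]        # priority queue ordered by F(n)
    came_from = {}
    g = {start: 0.0}                      # best known G(n) per cell
    closed = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:                  # reconstruct the path back to start
            path = [cell]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = cell
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and grid[n[0]][n[1]] == 0):
                ng = g[cell] + 1.0        # unit cost per 4-connected step
                if ng < g.get(n, math.inf):
                    g[n] = ng
                    came_from[n] = cell
                    heapq.heappush(open_set, (ng + h(n), n))
    return None                           # no collision-free path exists
```

For example, `astar([[0] * 8 for _ in range(8)], (0, 0), (7, 7))` returns a list of cells from start to goal, or `None` when no collision-free path exists.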

2. Artificial Potential Field

The artificial potential field (APF) method has its origins in 1986 [6] and assumes the presence of a repulsive field around each obstacle and an attractive field around the target, both acting on a moving vehicle (e.g., an AUV). The interaction of these virtual fields determines a resultant force, which in turn sets the direction in which the vehicle moves. The method does not require prior knowledge of the environment or the location of obstacles, and it can be used regardless of whether the obstacles are static or dynamic and whether their shapes are regular or irregular. The crucial condition for this algorithm to be efficient is accurate obstacle detection. The advantages of the APF method are its ease of implementation and low computational requirements, which make it possible to control AUVs and avoid collisions in close to real time. Despite these advantages, the possibility of local minima or trap situations is a significant disadvantage [7]. Under certain conditions, e.g., near a U-shaped obstacle, when the resultant force acting on the AUV is balanced, the algorithm will steer the AUV in a closed infinite loop without reaching the goal. In addition, passage between closely spaced obstacles may be impossible or may cause oscillation due to the alternating influence of force fields from opposing obstacles [7][8]; the AUV also tends to move unstably when passing around obstacles. To solve the local minima problem, APF is combined with other methods [9][10]. In the simulation-based study [11], a solution for AUVs was proposed based on introducing a virtual obstacle at the place where the local minimum occurs. Another way to avoid a trap situation is to use random movements to lead the AUV out of the adverse area (randomised potential field) [12]. Despite its limitations, the method is used to control swarms of robots [13][14][15][16]. Simulation-based studies also demonstrate the possibility of controlling multi-AUV formations [17] and its application in mission planners [18]. In [19], a potential field-based method was implemented on the NPS ARIES AUV in a real-world environment. To limit the negative impact of the APF method's shortcomings in real implementations, it is necessary to use a global path-planner module based on, e.g., heuristic methods.
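The resultant-force calculation can be illustrated with the following minimal 2D sketch; the gains `k_att` and `k_rep` and the influence radius `d0` are hypothetical tuning values, not parameters from the cited works.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Resultant APF force at position `pos` (2D).

    The attractive term pulls the vehicle toward the goal; every obstacle
    closer than the influence radius d0 adds a repulsive term that grows
    rapidly as the distance d shrinks.
    """
    pos = np.asarray(pos, dtype=float)
    force = k_att * (np.asarray(goal, dtype=float) - pos)   # attraction
    for obs in obstacles:
        diff = pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                                    # inside influence radius
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force

# One kinematic integration step: move a small distance along the resultant.
pos, goal = np.array([0.0, 0.0]), np.array([20.0, 10.0])
f = apf_force(pos, goal, obstacles=[(10.0, 5.0)])
pos = pos + 0.1 * f / np.linalg.norm(f)
```

When the returned vector is close to zero while the goal is still distant, the vehicle has reached exactly the balanced-force trap situation described above.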

3. Rapidly Exploring Random Tree

Introduced in 1998 [20], the motion-planning algorithm named the rapidly exploring random tree (RRT) is a sampling-based method. Starting from the initial point, a random node in the space is sampled (Figure 1). Then, depending on the given direction of movement and the maximum section length from the analysed point, an intermediate node is determined. The longer the sections connecting the nodes, the higher the risk of moving in the wrong direction for a long time and encountering obstacles on the designated path. If an obstacle is detected between waypoints, further route calculation in that direction is abandoned. If no obstacle is detected, another random point in the space is sampled, followed by a new intermediate point.
Figure 1. An illustration of the RRT mechanism.
As a result of subsequent iterations (time steps), new edges and path points are determined. The method is easy to process and ensures finding a collision-free path (if one exists) in an unknown environment, both in 2D and 3D. RRT can be used in both static and dynamic environments [21]. A modification of this method was used in the SPARUS-II AUV and tested in a real-world environment [22]. The study [23] shows the validity of a modified RRT* algorithm for mapping in a 3D environment with multiple AUVs. In [24], an RRT-based approach was used to solve the local minima problem in the fast warm-starting module of the Aqua2 AUV. It was noted that this method is not always efficient in an environment where the path to the target leads through a narrow opening or gap. Another limitation is the need for information about large areas of the environment, which is not always available in practical implementations due to the technical limitations of sensors [25][26][27]. Additionally, the calculated path is suboptimal [28], which requires the use of additional optimisation algorithms.
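The growth mechanism illustrated in Figure 1 can be sketched as follows; the workspace bounds, step length, goal tolerance, and the user-supplied `collision_free` segment test are assumptions for illustration.

```python
import math
import random

def rrt(start, goal, collision_free, step=2.0, max_iters=5000, goal_tol=2.0):
    """Basic 2D RRT: sample a random point, extend the nearest tree node
    toward it by at most `step`, and keep the new node only if the connecting
    segment passes the assumed `collision_free(a, b)` test.
    """
    nodes, parent = [start], {start: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 100), random.uniform(0, 100))  # assumed bounds
        near = min(nodes, key=lambda n: math.dist(n, sample))      # nearest node
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not collision_free(near, new):
            continue                          # abandon growth in this direction
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= goal_tol:  # close enough: trace the path back
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                               # no path found within the budget
```

A shorter `step` reduces the risk of long wrong-direction growth described above, at the cost of more iterations.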

4. Artificial Neural Network

An artificial neural network (ANN) is a machine learning method based on a mathematical mapping of how the human brain processes information. The general structure consists of three types of layers: an input layer, hidden layers, and an output layer (Figure 2). The neurons in each layer are connected to all the neurons in the neighbouring layers and process their inputs based on the weights between them. The data from each neuron in the input layer, multiplied by the weights, is sent as input to the hidden layers, where each neuron is assigned a value (bias). Depending on the activation function and bias value, only neurons whose value exceeds the threshold are activated. The output of each layer is also the input of the next layer, and data is transferred between layers only by active neurons. The ANN method requires learning, i.e., an indication of the accuracy of the output data: based on the expected results, the neural network adjusts the weights between individual neurons in each iteration so that the output is as close to the expectations as possible.
Figure 2. Neural network architecture.
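The layer-to-layer data flow described above corresponds to the following forward-pass sketch; the ReLU activation, layer sizes, and random (untrained) weights are illustrative assumptions.

```python
import numpy as np

def forward(x, layers):
    """One forward pass: each layer multiplies its input by a weight matrix,
    adds a bias vector, and applies an activation; the output of one layer
    becomes the input of the next, as described in the text."""
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)   # ReLU: only neurons above threshold activate
    return x

rng = np.random.default_rng(0)
# Hypothetical shapes: 4 sensor inputs -> two hidden layers of 8 -> 2 outputs.
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8)),
          (rng.standard_normal((2, 8)), rng.standard_normal(2))]
print(forward(rng.standard_normal(4), layers))
```

Training then consists of iteratively adjusting `W` and `b` so that this output approaches the expected values.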
The method has learning ability and is applicable in systems that implement complex functions, supporting many outputs based on data from multiple sensors. The algorithm is also efficient for systems where the input data is incomplete or distorted, or in cases involving poorly modelled, nonlinear dynamical systems [29][30][31]. The main drawback of this algorithm is the need for long-term training to achieve satisfactory results [32]. In the case of practical implementations for AUVs, online learning does not bring satisfactory results due to slow learning speed and long training time [33]. Therefore, offline training of controllers is essential before an AUV can be used in a real-world environment. Reference [34] presents a collision-avoidance controller for static and dynamic obstacles based on neural networks that does not require training; the resulting behaviour of the AUV was defined by the neuron weights. It should be noted, however, that this type of controller is suitable only for simple AUV operational cases. In [35], the neural network method was used in a real implementation on AUV-UPCT; the vehicle required prior learning in ROV mode. In [36], a neural network algorithm was used in a simulation study to control and avoid collisions for multiple AUVs in a 3D environment. In [37], a neural network was used to process real images of the underwater environment (from a simple colour monocular camera) to determine the free space enabling the AUV to escape from a cluttered environment.

5. Genetic Algorithm

Genetic algorithm (GA) refers to search and optimisation methods inspired by the natural evolution process, in which only the fittest organisms have a chance of survival. In general, the algorithm starts looking for a solution to a problem by generating a random population of possible solutions. Depending on the applied fitness function, a selection is made during which the least suitable solutions are eliminated [38]. For path planning, the main criterion is the energy cost required to traverse each path [39]. Then, through an operation called crossover, further potential solutions to the problem (offspring) are created by combining the best solutions from the previous generation (parents). Additionally, in the mutation process, random modifications of the best solutions are introduced. The GA then runs the selection again, adding to the population successive possible solutions that are closer and closer to the correct result, until a specific final condition is reached, e.g., a given number of generations. The main advantage of the GA method is the possibility of fast, global, stochastic searching for optimal solutions [40]. In addition, the algorithm is easy to implement and can be used to solve complex problems, such as determining the optimal AUV path in the presence of static and dynamic obstacles in the underwater environment. Due to the random nature of the search, the algorithm reduces the risk of the local minima problem [41]. In general, the method does not require a large amount of computation to solve the path-planning problem. However, the route may be suboptimal if too few generations are executed. As the number of generations increases, the route approaches the optimal one due to the constant elimination of the least optimal solutions in the population, but this entails an increase in computational cost. Similarly, in a rapidly changing or very extensive environment, the amount of computation needed to determine a solution increases significantly. In [39], a modification of the GA was presented in order to determine the energy-optimal path. The modification involved iterations consisting of additional runs of the algorithm with different initial conditions, and an operator based on the random immigrants mechanism, which sets the level of randomness of the developing population. Reference [42] discussed the framework of a collision-avoidance system based on the GA; the simulation test proved the proper functioning of the method for static and dynamic obstacles. An improved GA was also used in a simulation-based study [43] to optimise energy-efficient routing in a complex 3D environment in the presence of static and dynamic obstacles. An algorithm with improved crossover and mutation probabilities and a modified fitness function was tested in simulation [40] against the traditional GA method; the simulation confirmed that the improved GA yields a shorter and smoother path with fewer generations, and the method worked very well for the energy optimisation of routing. It should be noted that the efficiency of this method in real testing depends on the correct detection and localisation of the obstacles as well as the target.
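The selection-crossover-mutation loop described above can be sketched generically; the population size, elimination of the worse half, and mutation rate are illustrative choices, and the `fitness`, `random_individual`, `crossover`, and `mutate` callables are assumed user-supplied, e.g., with fitness returning the energy cost of a candidate waypoint list [39].

```python
import random

def genetic_search(fitness, random_individual, crossover, mutate,
                   pop_size=50, generations=200, mutation_rate=0.1):
    """Generic GA loop: evaluate the population, keep the fittest half
    (selection), breed offspring from random parent pairs (crossover), and
    randomly perturb some offspring (mutation)."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)                  # lower cost = fitter
        survivors = population[:pop_size // 2]        # eliminate the least fit
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            p1, p2 = random.sample(survivors, 2)      # pick two parents
            child = crossover(p1, p2)
            if random.random() < mutation_rate:
                child = mutate(child)
            offspring.append(child)
        population = survivors + offspring
    return min(population, key=fitness)               # best solution found
```

Raising `generations` moves the returned path closer to the optimum at the computational cost discussed above.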

6. Fuzzy Logic

The fuzzy logic (FL) method [44][45] is based on the evaluation of input data against fuzzy rules, which can be determined using the knowledge and experience of experts. In AUV obstacle-avoidance systems, the fuzzy controller processes sensor data containing information about the surrounding environment and makes decisions based on it; appropriate signals are then transmitted to the actuators. The first stage of the algorithm's operation is fuzzification, i.e., assigning the input data to the appropriate membership functions. Each function corresponds to a descriptive classification of the input data, e.g., low, medium, or high collision risk. Then, in the fuzzy inference process, the data is evaluated against rules of the form "if premise, then conclusion". Finally, the defuzzification process determines specific system output values (e.g., actuator control signal values). The method can be used for both static and dynamic obstacles. However, when an AUV operates in an unknown environment, the usability of the algorithm depends directly on the implemented rules, and therefore also on the knowledge and experience of experts. The main advantage of the algorithm is its usability when information about the environment is incomplete or contains noise or errors [46]. The method is easy to implement and provides satisfactory results in real-time processing, but it requires a precise definition of the membership functions and fuzzy rules [47]. Additionally, as the number of inputs increases, the amount of data the system must process increases. Important issues affecting the effectiveness of the algorithm are also the AUV's speed and the complexity of the environment. A fuzzy system usually has at least two inputs, and in a very dynamic environment the use of additional inputs can increase the efficiency of the controller [48]. In some cases, the necessity of performing complex maneuvers to avoid a collision may divert the AUV far from the optimal path; for this reason, it is necessary to use additional algorithms that control the path in terms of energy. A simulation study [41] showed that using a GA to optimise a fuzzy logic path planner achieves greater efficiency and reduces cross-track errors and the total travelled path. A modification of this method was also used in a practical implementation in [49], where the Bandler and Kohout product of fuzzy relations was used for preplanning horizontal-plane maneuvers. The fuzzy logic method was also used in [50] to control a single unmanned surface vehicle (USV) and in the simulation [51] to control virtual AUVs in a leader-follower formation.
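The fuzzification, inference, and defuzzification stages can be illustrated with a single-input controller; the membership function shapes, distance thresholds, and turn-rate outputs below are hypothetical values, not rules from the cited systems.

```python
def ramp_down(x, a, b):
    """1 below a, falling linearly to 0 at b (left-shoulder membership)."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def ramp_up(x, a, b):
    """0 below a, rising linearly to 1 at b (right-shoulder membership)."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def avoidance_turn_rate(distance):
    """Fuzzification -> rule inference -> defuzzification for one input
    (obstacle distance, m) and one output (turn rate, deg/s)."""
    # Fuzzification: degree of membership in each linguistic class.
    near = ramp_down(distance, 5.0, 10.0)      # high collision risk
    medium = tri(distance, 5.0, 12.5, 20.0)    # medium collision risk
    far = ramp_up(distance, 15.0, 25.0)        # low collision risk
    # Rules: if near then turn hard (30); if medium then turn gently (10);
    # if far then keep course (0). Weighted-centroid defuzzification:
    w = near + medium + far
    return (near * 30.0 + medium * 10.0 + far * 0.0) / w if w > 0.0 else 0.0

print(avoidance_turn_rate(7.0))   # mostly "near": a strong avoidance turn
```

A practical controller would add further inputs (e.g., obstacle bearing), which multiplies the rule table in exactly the way discussed above.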

7. Reinforcement Learning

The reinforcement learning (RL) method is based on research into the observation of animal behaviour: reinforcing a pattern of behaviour with a stimulus increases the animal's tendency to choose the associated actions [52]. Machine learning based on this idea was developed very intensively in the second half of the 20th century. Depending on the requirements of the environment model, the simplicity of data processing, and the iterative nature of the calculations, several methods of solving RL problems have been developed, e.g., dynamic programming, Monte Carlo methods, and temporal difference learning. The most crucial of these is the temporal difference method. Depending on whether an explicit policy function is used, it is divided into several algorithms, e.g., the actor-critic, Q-learning, or Sarsa methods [52]. The actor-critic method consists of autonomously learning to solve a problem correctly depending on the reward or punishment received. By performing an action, the agent influences the environment and observes the effects of that action. Depending on the implemented reward function, different ways of influencing the environment are assessed; the objective is to learn to perform actions that yield the greatest reward. In the case of path planning, the agent receives the greatest reward for moving toward the given destination. In general, an agent combines an actor and a critic neural network, and the interaction between them forms a closed-loop learning situation [53]. The actor chooses the action to perform in order to improve the current policy. The critic observes the effects of this action and tries to assess them; the assessment is then compared with the reward function, and based on the error the critic network is updated to better predict the actor network's behaviour in the future. This approach was used in [54] to control four robots, each of which was able to avoid collisions with the other robots and with obstacles. In the study [55], the RL approach was used in a two-dimensional simulation of the cooperative pursuit of an unauthorized UAV by a team of UAVs in an urbanised environment. In [56], a simulation of a smart-ship control algorithm was presented. Moreover, in the simulation studies [57][58], the RL approach was used for the path planning and motion control of AUVs. The method can be used for both static and dynamic obstacles in an unknown environment. It is characterised by a strong focus on problem solving and shows high environmental adaptability, and the RL algorithm can learn to solve very complex problems as long as the correct reward function is used. The inability to manually change the parameters of the learned network is considered the main drawback of this method: to change the operation of the algorithm, the network must be redesigned and a long learning process performed. Also, verifying this method's operation by simulation does not guarantee its correct operation in a real-world environment.
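The temporal-difference idea can be illustrated with a minimal tabular Q-learning sketch; the grid-world start state, the learning parameters, and the `step` environment interface are assumptions for illustration, with the reward assumed largest for moving toward the goal, as described above.

```python
import random
from collections import defaultdict

def q_learning(step, actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning, a temporal-difference method: after each action,
    nudge Q(s, a) toward reward + gamma * max_a' Q(s', a').

    `step(state, action) -> (next_state, reward, done)` is an assumed
    environment interface supplied by the caller.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = (0, 0), False               # hypothetical start cell
        while not done:
            if random.random() < eps:             # explore occasionally
                action = random.choice(actions)
            else:                                  # otherwise exploit policy
                action = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(Q[(nxt, a)] for a in actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
    return Q   # greedy policy: argmax over actions per state
```

The reward function is the only place where the designer expresses the task, which is why, as noted above, a wrong reward choice cannot be fixed without retraining.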

References

  1. Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
  2. Eichhorn, M. An obstacle avoidance system for an autonomous underwater vehicle. In Proceedings of the 2004 IEEE International Symposium on Underwater Technology (IEEE Cat. No. 04EX869), Taipei, Taiwan, 20–23 April 2004; pp. 75–82.
  3. Casalino, G.; Turetta, A.; Simetti, E. A three-layered architecture for real time path planning and obstacle avoidance for surveillance USVs operating in harbour fields. In Proceedings of the Oceans 2009 IEEE-Europe, Bremen, Germany, 11–14 May 2009; pp. 1–8.
  4. Li, J.H.; Lee, M.J.; Park, S.H.; Kim, J.G. Real time path planning for a class of torpedo-type AUVs in unknown environment. In Proceedings of the 2012 IEEE/OES Autonomous Underwater Vehicles (AUV), Southampton, UK, 24–27 September 2012; pp. 1–6.
  5. Li, M.; Zhang, H. AUV 3D path planning based on A* algorithm. In Proceedings of the 2020 IEEE Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 11–16.
  6. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Autonomous Robot Vehicles; Springer: New York, NY, USA, 1986; pp. 396–404.
  7. Koren, Y.; Borenstein, J. Potential field methods and their inherent limitations for mobile robot navigation. In Proceedings of the ICRA, Sacramento, CA, USA, 9–11 April 1991; Volume 2, pp. 1398–1404.
  8. Teo, K.; Ong, K.W.; Lai, H.C. Obstacle detection, avoidance and anti collision for MEREDITH AUV. In Proceedings of the OCEANS IEEE 2009, Biloxi, MS, USA, 26–29 October 2009; pp. 1–10.
  9. Qing, L.; Li-jun, W.; Bo, C.; Zhou, Z.; Yi-xin, Y. An improved artificial potential field method with parameters optimization based on genetic algorithms. J. Univ. Sci. Technol. 2012, 34, 202–206.
  10. Barraquand, J.; Latombe, J.C. Robot motion planning: A distributed representation approach. Int. J. Robot. Res. 1991, 10, 628–649.
  11. Solari, F.J.; Rozenfeld, A.F.; Villar, S.A.; Acosta, G.G. Artificial potential fields for the obstacles avoidance system of an AUV using a mechanical scanning sonar. In Proceedings of the 2016 3rd IEEE/OES South American International Symposium on Oceanic Engineering (SAISOE), Buenos Aires, Argentina, 14–17 June 2016; pp. 1–6.
  12. Youakim, D.; Ridao, P. Motion planning survey for autonomous mobile manipulators underwater manipulator case study. Robot. Auton. Syst. 2018, 107, 20–44.
  13. Leonard, N.E.; Fiorelli, E. Virtual leaders, artificial potentials and coordinated control of groups. In Proceedings of the 40th IEEE Conference on Decision and Control (Cat. No. 01CH37228), Orlando, FL, USA, 4–7 December 2001; Volume 3, pp. 2968–2973.
  14. Gazi, V. Swarm aggregations using artificial potentials and sliding-mode control. IEEE Trans. Robot. 2005, 21, 1208–1214.
  15. Olfati-Saber, R. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Trans. Autom. Control 2006, 51, 401–420.
  16. Meng, Z.; Lin, Z.; Ren, W. Leader–follower swarm tracking for networked Lagrange systems. Syst. Control Lett. 2012, 61, 117–126.
  17. Hou, S.P.; Cheah, C.C. Can a simple control scheme work for a formation control of multiple autonomous underwater vehicles? IEEE Trans. Control Syst. Technol. 2010, 19, 1090–1101.
  18. Sorbi, L.; De Capua, G.P.; Toni, L.; Fontaine, J.G. Target detection and recognition: A mission planner for Autonomous Underwater Vehicles. In Proceedings of the IEEE OCEANS’11 MTS/IEEE KONA, Waikoloa, HI, USA, 19–22 September 2011; pp. 1–5.
  19. Horner, D.; Healey, A.; Kragelund, S. AUV experiments in obstacle avoidance. In Proceedings of the IEEE OCEANS 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; pp. 1464–1470.
  20. LaValle, S.M. Rapidly-Exploring Random Trees: A New Tool for Path Planning. Available online: https://www.cs.csustan.edu/~xliang/Courses/CS4710-21S/Papers/06 (accessed on 1 July 2022).
  21. Tan, C.S.; Sutton, R.; Chudley, J. An integrated collision avoidance system for autonomous underwater vehicles. Int. J. Control 2007, 80, 1027–1049.
  22. Hernández, J.D.; Vidal, E.; Vallicrosa, G.; Galceran, E.; Carreras, M. Online path planning for autonomous underwater vehicles in unknown environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1152–1157.
  23. Cui, R.; Li, Y.; Yan, W. Mutual information-based multi-AUV path planning for scalar field sampling using multidimensional RRT. IEEE Trans. Syst. Man Cybern. Syst. 2015, 46, 993–1004.
  24. Xanthidis, M.; Karapetyan, N.; Damron, H.; Rahman, S.; Johnson, J.; O’Connell, A.; O’Kane, J.M.; Rekleitis, I. Navigation in the presence of obstacles for an agile autonomous underwater vehicle. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 892–899.
  25. Tan, C.S. A Collision Avoidance System for Autonomous Underwater Vehicles. Available online: https://pearl.plymouth.ac.uk/handle/10026.1/2258 (accessed on 1 July 2022).
  26. Piskur, P.; Szymak, P.; Jaskólski, K.; Flis, L.; Gąsiorowski, M. Hydroacoustic system in a biomimetic underwater vehicle to avoid collision with vessels with low-speed propellers in a controlled environment. Sensors 2020, 20, 968.
  27. Piskur, P.; Gasiorowski, M. Digital Signal Processing for Hydroacoustic System in Biomimetic Underwater Vehicle. NAŠE MORE Znan. časopis Za More I Pomor. 2020, 67, 14–18.
  28. Tan, C.S.; Sutton, R.; Chudley, J. An incremental stochastic motion planning technique for autonomous underwater vehicles. IFAC Proc. 2004, 37, 483–488.
  29. Zhao, S.; Lu, T.F.; Anvar, A. Multiple obstacles detection using fuzzy interface system for auv navigation in natural water. In Proceedings of the 2010 IEEE 5th Conference on Industrial Electronics and Applications, Taichung, Taiwan, 15–17 June 2010; pp. 50–55.
  30. Piskur, P.; Szymak, P.; Flis, L.; Sznajder, J. Analysis of a Fin Drag Force in a Biomimetic Underwater Vehicle. NAŠE MORE Znan. časopis Za More I Pomor. 2020, 67, 192–198.
  31. Piskur, P.; Szymak, P.; Przybylski, M.; Naus, K.; Jaskólski, K.; Żokowski, M. Innovative energy-saving propulsion system for low-speed biomimetic underwater vehicles. Energies 2021, 14, 8418.
  32. Szymak, P.; Piskur, P.; Naus, K. The Effectiveness of Using a Pretrained Deep Learning Neural Networks for Object Classification in Underwater Video. Remote Sens. 2020, 12, 3020.
  33. Meng, F.; Liu, A.; Jing, S.; Zu, Y. FSM trajectory tracking controllers of OB-AUV in the horizontal plane. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan, 4–6 March 2021; pp. 204–208.
  34. DeMuth, G.; Springsteen, S. Obstacle avoidance using neural networks. In Proceedings of the IEEE Symposium on Autonomous Underwater Vehicle Technology, Washington, DC, USA, 5–6 June 1990; pp. 213–215.
  35. Guerrero-González, A.; García-Córdova, F.; Gilabert, J. A biologically inspired neural network for navigation with obstacle avoidance in autonomous underwater and surface vehicles. In Proceedings of the OCEANS 2011 IEEE, Santander, Spain, 6–9 June 2011; pp. 1–8.
  36. Ding, G.; Zhu, D.; Sun, B. Formation control and obstacle avoidance of multi-AUV for 3-D underwater environment. In Proceedings of the 33rd IEEE Chinese Control Conference, Nanjing, China, 28–30 July 2014; pp. 8347–8352.
  37. Gaya, J.O.; Gonçalves, L.T.; Duarte, A.C.; Zanchetta, B.; Drews, P.; Botelho, S.S. Vision-based obstacle avoidance using deep learning. In Proceedings of the 2016 XIII IEEE Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 7–12.
  38. Jurczyk, K.; Piskur, P.; Szymak, P. Parameters identification of the flexible fin kinematics model using vision and Genetic Algorithms. Pol. Marit. Res. 2020, 27, 39–47.
  39. Alvarez, A.; Caiti, A.; Onken, R. Evolutionary path planning for autonomous underwater vehicles in a variable ocean. IEEE J. Ocean. Eng. 2004, 29, 418–429.
  40. Yan, S.; Pan, F. Research on route planning of auv based on genetic algorithms. In Proceedings of the 2019 IEEE International Conference on Unmanned Systems and Artificial Intelligence (ICUSAI), Xi’an, China, 22–24 November 2019; pp. 184–187.
  41. Wu, X.; Feng, Z.; Zhu, J.; Allen, R. Line of sight guidance with intelligent obstacle avoidance for autonomous underwater vehicles. In Proceedings of the OCEANS 2006 IEEE, Singapore, 16–19 May 2006; pp. 1–6.
  42. Chang, Z.H.; Tang, Z.D.; Cai, H.G.; Shi, X.C.; Bian, X.Q. GA path planning for AUV to avoid moving obstacles based on forward looking sonar. In Proceedings of the 2005 IEEE International Conference on Machine Learning and Cybernetics, Waikoloa, HI, USA, 10–12 October 2005; Volume 3, pp. 1498–1502.
  43. Yao, P.; Zhao, S. Three-dimensional path planning for AUV based on interfered fluid dynamical system under ocean current (June 2018). IEEE Access 2018, 6, 42904–42916.
  44. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  45. Zadeh, L. Fuzzy algorithms. Inf. Control 1968, 12, 94–102.
  46. Galarza, C.; Masmitja, I.; Prat, J.; Gomaríz, S. Design of obstacle detection and avoidance system for Guanay II AUV. In Proceedings of the 2016 IEEE 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; pp. 410–414.
  47. Zhu, D.; Yang, Y.; Yan, M. Path planning algorithm for AUV based on a Fuzzy-PSO in dynamic environments. In Proceedings of the 2011 IEEE Eighth International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Shanghai, China, 26–28 July 2011; Volume 1, pp. 525–530.
  48. Li, X.; Wang, W.; Song, J.; Liu, D. Path planning for autonomous underwater vehicle in presence of moving obstacle based on three inputs fuzzy logic. In Proceedings of the 2019 IEEE 4th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), Nagoya, Japan, 13–15 July 2019; pp. 265–268.
  49. Braginsky, B.; Guterman, H. Obstacle avoidance approaches for autonomous underwater vehicle: Simulation and experimental results. IEEE J. Ocean. Eng. 2016, 41, 882–892.
  50. Szymak, P. Zorientowany na Sterowanie Model Ruchu oraz Neuro-Ewolucyjno-Rozmyta Metoda Sterowania bezzałOgowymi Jednostkami Pływającymi; Politechnika Krakowska: Krakow, Poland, 2015.
  51. Huang, W.; Fang, H.; Li, L. Obstacle avoiding policy of multi-AUV formation based on virtual AUV. In Proceedings of the 2009 IEEE Sixth International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, China, 14–16 August 2009; Volume 4, pp. 131–135.
  52. Sutton, R.S.; Barto, A.G. Introduction to Reinforcement Learning. Available online: https://login.cs.utexas.edu/sites/default/files/legacy_files/research/documents/1%20intro%20up%20to%20RL%3ATD.pdf (accessed on 1 July 2022).
  53. Szepesvári, C. Algorithms for reinforcement learning. Synth. Lect. Artif. Intell. Mach. Learn. 2010, 4, 1–103.
  54. Arai, Y.; Fujii, T.; Asama, H.; Kaetsu, H.; Endo, I. Collision avoidance in multi-robot systems based on multi-layered reinforcement learning. Robot. Auton. Syst. 1999, 29, 21–32.
  55. Du, W.; Guo, T.; Chen, J.; Li, B.; Zhu, G.; Cao, X. Cooperative pursuit of unauthorized UAVs in urban airspace via Multi-agent reinforcement learning. Transp. Res. Part C Emerg. Technol. 2021, 128, 103122.
  56. Chen, C.; Chen, X.Q.; Ma, F.; Zeng, X.J.; Wang, J. A knowledge-free path planning approach for smart ships based on reinforcement learning. Ocean Eng. 2019, 189, 106299.
  57. Gore, R.; Pattanaik, K.; Bharti, S. Efficient Re-Planned Path for Autonomous Underwater Vehicle in Random Obstacle Scenario. In Proceedings of the 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), Bombay, India, 29–31 March 2019; pp. 1–5.
  58. Li, W.; Yang, X.; Yan, J.; Luo, X. An obstacle avoiding method of autonomous underwater vehicle based on the reinforcement learning. In Proceedings of the 2020 IEEE 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 4538–4543.