Kot, R. Obstacle Detection Systems of Autonomous Underwater Vehicles. Encyclopedia. Available online: (accessed on 20 April 2024).
Obstacle Detection Systems of Autonomous Underwater Vehicles

The high efficiency of the obstacle detection system (ODS) is essential for achieving high performance of autonomous underwater vehicles (AUVs) carrying out missions in complex underwater environments. ODSs are continually being improved, providing greater detection accuracy and, as a result, better AUV response times. Almost all of the analysed methods are based on the conventional approach to obstacle detection. In the future, even better ODS parameters could be achieved by using artificial intelligence (AI) methods.

obstacle detection; collision avoidance; path planning; image processing

1. Introduction

The efficiency and accuracy of obstacle detection systems (ODSs) strictly depend on the parameters of the equipment and devices used. In recent years, significant technological progress has been made in this field, including increases in the accuracy and speed of operational devices and perception sensors as well as in the efficiency of computing systems. The air, ground, and underwater environments present different characteristics of signal attenuation, reflection, and propagation, so the hardware setup must be adapted to the environment in which the ODS is expected to operate.
The ODS is an essential element of an autonomous underwater vehicle (AUV), allowing collision-free movement in an unfamiliar environment in the presence of obstacles. It is also a necessary component of path planning and collision avoidance systems and of the high-level controller of an AUV [1]. The efficiency of the ODS has a decisive impact on the operation speed and on decision-making about the movement of the AUV when an obstacle is encountered. The initial stage includes pre-processing and environment detection. At this stage, the selection of the environmental perception sensor (e.g., sonar, camera, echosounder, laser scanner) and the appropriate tuning of the detection parameters are of key importance for the final parameters of the ODS. Next, the image processing steps, i.e., image segmentation and morphological operations, are performed. Based on the data collected from the above procedures, the AUV's path of movement is determined. Using the implemented collision avoidance algorithms, the vehicle moves in accordance with the designated path, performing collision avoidance maneuvers.

2. Image Processing in ODS

This section discusses the basic image processing operations related to the classical obstacle detection process. It should be noted that the scheme of obstacle detection in AI methods is different: each step in the classical sense of the problem corresponds to a different operation in, e.g., a convolutional neural network (CNN)-based scheme. This entry focuses mainly on the classical approach to the detection and image processing of underwater objects.
The raw sonar data includes the echo of the reflected signal and noise interference. Filtering out this noise is crucial for proper obstacle detection. For this purpose, sonar data is processed using methods such as mean or median filtering, histograms of local image values (e.g., 5 × 5 pixels), threshold segmentation, and filtering out groups of pixels smaller than an a × b pixel window. The next step in sonar image processing is morphological processing. In this case, methods such as edge determination, template matching, dilation, erosion, and combinations of the above techniques are used. After such processing of the sonar image, the obstacles' characteristics can be determined in detail, and false reflections can be rejected. Processing a high-resolution image increases the computation time needed to determine the features of the detected objects. Therefore, selecting the correct sonar resolution and image processing methods is crucial. In the case of vision, image processing is based mainly on visual characteristics such as color, contrast, and the intensity of individual colors. These features are also crucial in object detection techniques based on machine learning.

The use of AI methods in obstacle detection systems seems very promising. The development of this type of method began about ten years ago, when the authors of [2] presented an extensive deep convolutional neural network called AlexNet that recognized objects in high-resolution images with outstanding results. Since then, many modifications have been made to improve the speed and accuracy of detecting obstacles and their features using CNNs. The advantage of this type of solution is the high efficiency and precision of identifying and classifying obstacles, which classical methods cannot match. The prerequisite is appropriate adaptation of the neural network, i.e., training it with images of real obstacles. Supervised learning is very time-consuming.
Additionally, it is uncertain whether a CNN trained to detect, e.g., mines or submarines will deal equally well with other obstacles it has not been trained on. Another problem is acquiring a large enough quantity of training material for DL methods to guarantee a very high level of performance. Due to the limited training material and the quality of available sonar images, obstacle detection and classification based on deep learning algorithms are not well developed for sonar images [3]. Choosing inappropriate training resources can have a negative impact on the training results.
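As a rough illustration of the classical sonar-image pipeline outlined above, the following NumPy sketch applies a median filter followed by threshold segmentation to a synthetic frame. The window size, the threshold of 100, and the synthetic data are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

def median_filter(img, k=5):
    """Slide a k x k window over the image and take the local median."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def threshold_segment(img, t):
    """Binary segmentation: 1 where echo intensity exceeds the threshold."""
    return (img > t).astype(np.uint8)

# Synthetic sonar frame: weak background noise plus one strong echo patch.
rng = np.random.default_rng(0)
frame = rng.integers(0, 30, size=(32, 32)).astype(float)
frame[10:15, 10:15] = 200  # simulated obstacle echo
mask = threshold_segment(median_filter(frame, 5), 100)
```

The median filter suppresses isolated speckle before thresholding, so only the coherent echo patch survives in the binary mask; morphological clean-up (Section 2.3) would follow in a full pipeline.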

2.1. Pre-Processing and Detection

At this stage, depending on the purpose of the obstacle detection method, various operations may be performed. The main objective is to avoid disturbances at the detection stage or to reduce disturbances generated during detection. Pre-processing algorithms prepare the image for further image-processing steps. In the pre-processing and detection stage, methods based on white balance, color correction, and histogram equalization are applied to camera images. For example, the study [4] presents an algorithm based on a contrast mask and contrast-limited adaptive histogram equalization (CLAHE), which improves the image by visualizing the details of the object and compensates for light attenuation in images captured in an underwater environment. CLAHE was also used in [5] for video image processing in the pre-processing step of a simultaneous localization and mapping (SLAM) system. At this stage, the image can be divided into individual matrices containing the grayscale intensity of each color in the RGB system [6]. In the case of mine detection, preparing the image requires prior determination of shaded areas, areas of reflection from the bottom and the water surface, and reflections from the object [7]. The image is pre-normalized to reduce noise and distinguish the background from the highlight and shadow of the mine by, e.g., using the histogram equalization operation [8]. In references [9][10], median filtering was used as part of pre-processing for sonar data; it consists of ordering the adjacent pixels and then taking the median value within a specific filter size. The researchers in the references above chose a window size of 5 × 5 pixels. The mean filter pre-processing method was used in the detection method presented in [11]; its operating principle is analogous to that of the median method.
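The histogram equalization used in these pre-processing schemes can be sketched as follows. This is the plain global variant (CLAHE additionally applies the same remapping per tile with a clip limit), and the low-contrast test image is synthetic:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Remap the cumulative distribution onto the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image confined to values 100..120 is stretched to 0..255.
img = np.tile(np.arange(100, 121, dtype=np.uint8), (20, 1))
out = equalize_histogram(img)
```

Stretching the intensity range in this way makes faint object details visible, which is exactly what the underwater light-attenuation compensation in [4] builds upon.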
The authors of [12] present a comparison of the effects of the median, mean, wavelet-based, and morphological smoothing methods. The wavelet-based operation uses the wavelet transform to filter noise from the signal by splitting it into different scale (i.e., frequency) components [13]. The morphological smoothing method is based on erosion or dilation operations, which are more often used during the morphological processing stage; it reduces noise in the image obtained during the detection process. After comparing the processing time, the obtained effect, the peak signal-to-noise ratio (PSNR), and the mean square error (MSE) of the above methods, the researchers concluded that the median and mean methods are the most effective.
As part of the pre-processing step, the scanning sector or region of interest (ROI) can also be specified by selecting the distance range and the angular range of the area to be segmented later [14]. This reduces the amount of data processed in further image-processing steps, shortens the image-processing time, and is conducive to achieving real-time ODS operation.
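Limiting the scanning sector can be sketched as a simple crop of a polar intensity matrix. The axis labels, array shapes, and window values below are illustrative assumptions, not parameters from [14]:

```python
import numpy as np

def select_roi(scan, ranges, bearings, r_max, b_min, b_max):
    """Keep only the beams/bins inside the requested range and bearing window.

    `scan` is a (beams x bins) polar intensity matrix; `ranges` (metres) and
    `bearings` (degrees) label its axes.
    """
    beam_mask = (bearings >= b_min) & (bearings <= b_max)
    bin_mask = ranges <= r_max
    return scan[np.ix_(beam_mask, bin_mask)]

scan = np.ones((181, 100))             # 181 beams x 100 range bins
bearings = np.linspace(-90, 90, 181)   # one beam per degree
ranges = np.linspace(0.5, 50, 100)     # 0.5 m bins out to 50 m
roi = select_roi(scan, ranges, bearings, r_max=20, b_min=-45, b_max=45)
```

Here the crop reduces the data from 18,100 to 3,640 cells before any per-pixel processing runs, which is the time saving the paragraph above refers to.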

2.2. Image Segmentation

Image segmentation consists of creating classes of objects and assigning the individual pixels of the processed image to those classes. For example, in mine detection systems, the object classes can be the reflection from the object, the shadow, and the background [15]. The main groups of segmentation methods are threshold segmentation, clustering, and the Markov random field method.
Threshold segmentation methods are based on comparing individual pixel values with a set threshold value. Based on that comparison, the pixel value is set to a specific value (0 or 1). In the literature, modifications and improvements of this method can be found, such as the Otsu threshold [16]. A thresholding operation with a gradient operator was applied in [17] to vision-based image segmentation by searching for the edges between areas.
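A minimal implementation of the Otsu threshold mentioned above, which exhaustively searches for the threshold maximising the between-class variance (the bimodal test image is synthetic):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximising between-class variance (Otsu, 1979)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0, sum0 = 0.0, 0.0
    for t in range(256):
        w0 += hist[t]          # weight of the class at or below t
        if w0 == 0:
            continue
        w1 = total - w0        # weight of the class above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal test image: dark background (~20) with one bright region (~200).
img = np.full((40, 40), 20, dtype=np.uint8)
img[10:30, 10:30] = 200
t = otsu_threshold(img)
binary = img > t
```

Unlike a fixed threshold, Otsu's method adapts to the image's own intensity histogram, which is why it is a popular refinement for sonar and vision segmentation.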
Cluster analysis consists of classifying points into subgroups with an appropriate degree of similarity. The purpose of segmentation based on cluster analysis is to distinguish objects such as echoes, shadows, reflections, etc. Among the clustering-based methods, the K-means algorithm and the region growing method can be distinguished. The K-means clustering technique [18] is based on determining K random points in the image and then assigning the closest points to each of them; the centroid of each cluster is then calculated. Over time, the algorithm has been improved and modified. For example, [19] introduced a K-means clustering algorithm combined with mathematical morphology. Another method is the region growing technique, which iteratively checks neighboring pixels and compares their values with the averaged local value. A point is assigned to the region when the difference does not exceed a specified value. This method was used in [20] in an online processing framework for FLS images.
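A minimal K-means intensity segmentation in the spirit described above. Note one simplification: classic K-means seeds the centroids randomly, while this sketch seeds them evenly over the intensity range so the example is deterministic; the test image is synthetic:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=10):
    """Cluster pixel intensities with K-means and return a label image."""
    pixels = img.reshape(-1, 1).astype(float)
    # Deterministic seeding over the intensity range (classic K-means
    # would pick k random pixels instead).
    centroids = np.linspace(pixels.min(), pixels.max(), k).reshape(-1, 1)
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest centroid.
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # Update step: move each centroid to the mean of its cluster.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean()
    return labels.reshape(img.shape)

img = np.full((20, 20), 30.0)
img[5:15, 5:15] = 220.0   # bright "echo" region to separate from background
labels = kmeans_segment(img, k=2)
```

With k = 3 the same loop can separate echo, shadow, and background classes in a sonar image, as in the mine detection example above.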
The Markov random field is a method based on probability analysis of the connections between adjacent pixels [21]. For example, in an image obtained from side-scan sonar (SSS), if a pixel is close to a shadow, the probability that it belongs to the shadow increases [15]. Various Markovian models for the segmentation of sonar-based detection were presented in [22][23].

2.3. Image Morphological Operations

Image morphology operations aim to improve the features of detected objects by correcting segmentation imperfections [24]. The basic operations in this step are dilation, erosion, opening, closing, and edge detection. The dilation operation expands the shape of an object in the image; it removes irregularities in the object's shape by extending its surface by a number of pixels depending on the structuring element (e.g., a 3 × 3 or 5 × 5 pixel window). The erosion operation reduces the area of the object in the image as a result of comparison with the structuring element. In addition, image processing also uses other operations, which may be combinations of the above (opening, closing) or differ in how the structuring element affects individual pixels (skeletonization). After adjusting the image, features such as boundaries, edges, or points can be detected, depending on the application. Edge detectors usually work by looking for significant value differences between neighboring pixels and marking them as edges.
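The erosion and dilation operations, and their combination into an opening that removes speckle while preserving object shape, can be sketched as follows (the 3 × 3 square structuring element matches the window sizes mentioned above; the binary test image is synthetic):

```python
import numpy as np

def erode(binary, k=3):
    """Erosion: each pixel becomes the minimum over a k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=1)
    out = np.empty_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(binary, k=3):
    """Dilation: each pixel becomes the maximum over a k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant", constant_values=0)
    out = np.empty_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

# Opening (erosion then dilation) removes a 1-pixel speckle but keeps the blob.
img = np.zeros((12, 12), dtype=np.uint8)
img[3:9, 3:9] = 1   # solid object
img[0, 0] = 1       # isolated noise pixel
opened = dilate(erode(img))
```

The reverse composition (dilation then erosion, i.e., closing) instead fills small holes inside an object, which is why both combinations appear in the morphological processing stage.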

2.4. Summary

Traditional image processing methods ensure adequate reduction of the noise and interference generated in the sensor's perception of the environment. The result of such processing is detailed information about the objects/obstacles near the AUV. Because most techniques require checking the value of each pixel and subjecting it to mathematical or statistical operations, they often require a large amount of computation: the more complex the method, the longer the processing time. Additionally, the researchers' experience in analysing and interpreting images is necessary for the methods to be properly tuned [9].

3. ODS in Practically Tested AUV Capable of Collision Avoidance

This section discusses the obstacle detection approaches implemented in AUVs with obstacle avoidance and path planning capabilities. Based on the literature review presented in [25], only studies demonstrating path planning and collision avoidance algorithms tested practically in a real-world environment were selected for further analysis. Each selected study was analysed in detail regarding the equipment used to perceive the environment and the image processing operations. In addition, the solutions were assessed in terms of: the complexity of the environment, the ability to detect static and dynamic obstacles, operational speed, and path planning suitability.
The operation of the vehicle named NPS ARIES [26] starts with identification of the bottom. Then the region of interest (ROI) is determined. Once an obstacle is detected, information about its distance, height, and centroid is sent to the controller. In image processing, a binary image is first created using a threshold. Then, during the erosion process, the value of each pixel in a 3 × 3 window is set to the minimum value. In the next step, the algorithm searches for the linear features of the object, and based on that operation, the position of the bottom is determined. The obstacle is identified using a Kalman filter based on the vehicle's pitch, pitch rate, and rotation angle. In the segmentation process, areas forming lines are separated, and the most significant areas are treated as potential obstacles. On this basis, the ROI is determined. Then, using the binary image, the contours of the obstacles are detected and tracked with the Kalman filter. The method is effective and, together with appropriate path-planning algorithms, can provide optimal collision avoidance maneuvers. However, it is difficult to assess its effectiveness in a complex environment with more obstacles, because the algorithm was tested in a low-complexity environment. In [27][28], no imaging methods were applied; the ODS works with the obstacle distance data obtained from echosounders. Thanks to this, it operated effectively in the missions conducted by the researchers. However, it does not guarantee effective operation in complicated conditions, and the AUV's obstacle avoidance maneuvers may not be optimal.
In reference [29], thresholding and median filtering were first applied in image processing, and then, in the morphology step, erosion, dilation, and edge detection methods were used. In the experiment, the vehicle correctly avoided a breakwater obstacle. This proves the correct operation of the obstacle detection method, but the environment in which the experiment was performed was not complex. In reference [30], the vehicle's operation in an underwater environment with obstacles was tested. The obstacle detection method is based on measuring the distance to an object using a collision or proximity sensor and is effective for static obstacles. The authors of [31] presented a solution based on three echosounders measuring the distance to obstacles in front of the vehicle. In the study [32], five echosounders were used in the ODS for an octree space representation. Obstacle detection systems that use only echosounders to measure the distance to an obstacle are prone to interference; in addition, they do not allow for optimal collision avoidance maneuvers. In research [33], sonar imaging was used in the obstacle detection system: the image is filtered using mean and median filtering, a fuzzy K-means clustering algorithm is used in the segmentation step, and in morphological processing the occupancy map is determined from the grayscale image. The method ensures sufficient accuracy and, after applying appropriate algorithms, allows for near-optimal path planning. In reference [34], the sonar image first undergoes speckle noise suppression with a 17 × 17 filter, then the local image histogram entropy method (9 × 9 window) is used. In the next step, a hysteretic entropy threshold is applied to the image. Finally, the edge detection process is performed, and the obstacles are saved on the map. The method allows making decisions in near real time and planning a near-optimal path. The study [35] presented a vehicle capable of avoiding a collision.
An experiment confirmed correct operation in a real underwater environment. However, the obstacles were simulated, which bypassed the detection process and the related problems; therefore, this study is not considered in the further analysis. The authors of [36] presented a solution based on two line-laser and camera sets. The image obtained from the sensors in this configuration is subjected to a top-hat transformation based on opening and subtraction operations with a 1 × 21 pixel window to remove bright points from the background of the image (the background is not completely black). Then the image is binarized with a threshold value. White regions are annotated using the fast label method, and groups of fewer than 80 pixels are removed as noise. An obstacle is identified if it is present in 5 or more frames. The method is effective, but it has a small operating range, and the processing time does not allow the vehicle to be controlled in a complex environment close to real time. The same solution with an additional sensor in the form of an FLS was presented in [37]; with that implementation, the researchers obtained a greater operating range for the ODS. In reference [38], two sonars were used to provide a 210-degree FOV. The vehicle can follow a wall parallel to the AUV axis and performs avoidance maneuvers depending on the sector in which the obstacle appears. The system detects an obstacle when it is present in five or more returns or when the tracked wall is in front of the AUV. The method does not focus on the features of the obstacle or its size; therefore, it does not allow for determining the optimal route and effective movement of the AUV in a complex underwater environment. In [39], the echo intensity matrix obtained from sonar is filtered by a specified threshold. Then a range is computed for points with intensity greater than the threshold, representing the distance to the obstacle.
Extensive AUV experiments were carried out in various scenarios, confirming the ODS's effectiveness. With the use of appropriate path planning algorithms, the AUV can move along a near-optimal path. In reference [40], detection is based on a point cloud whose parameters in space are estimated using scaling factors based on depth and inertial measurement unit (IMU) measurements. In the pre-processing step, contrast adjustment is performed along with histogram equalization. The SVIn method [5] outputs a representation of the sensed environment as a 3D point cloud, from which visual objects with a high density of features are later extracted. The method uses a density-based clustering operation in the segmentation step; after cluster detection, the cluster centroids are determined. The method allows determining a near-optimal path and operating in real time. In the study [41], a threshold-based segmentation operation was first performed in sonar image processing, and then edge detection was applied. The method provides real-time obstacle detection and can be used to move in an underwater environment in the presence of both static and dynamic obstacles. Reference [42] presents an ODS based on DL methods for fishing net detection. Pre-processing of the FLS images is conducted by gray stretching and threshold operations. The researchers trained and tested their network to achieve automatic target recognition (ATR): MRF-Net is trained on data collected at sea together with a mix-up strategy that uses randomly synthesized virtual data. As a result, the system detects and classifies an obstacle as a fishing net with very high accuracy. The method is very effective and allows the AUV to operate in real time. However, applying the technique to other obstacles must be preceded by a learning process based on images of the specific obstacle. In reference [43], vision images are used in the ODS.
First, image features such as intensity, color, contrast, and light transmission contrast are determined. The next step is the appropriate global contrast calculation. After that, the ROI is detected, and threshold-based segmentation is executed. The AUV successfully detected and avoided obstacles in a complex environment in pool tests. The method allows for near real-time operation. The disadvantage of this method is the operating range, which depends on visibility.


  1. Szymak, P.; Kot, R. Trajectory Tracking Control of Autonomous Underwater Vehicle Called PAST. Pomiary Autom. Robot. 2022, 266, 112731.
  2. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  3. Karimanzira, D.; Renkewitz, H.; Shea, D.; Albiez, J. Object detection in sonar images. Electronics 2020, 9, 1180.
  4. Rizzini, D.L.; Kallasi, F.; Oleari, F.; Caselli, S. Investigation of vision-based underwater object detection with multiple datasets. Int. J. Adv. Robot. Syst. 2015, 12, 77.
  5. Rahman, S.; Li, A.Q.; Rekleitis, I. Svin2: An underwater slam system using sonar, visual, inertial, and depth sensor. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 1861–1868.
  6. Hożyń, S.; Zalewski, J. Shoreline Detection and Land Segmentation for Autonomous Surface Vehicle Navigation with the Use of an Optical System. Sensors 2020, 20, 2799.
  7. Reed, S.; Petillot, Y.; Bell, J. An automatic approach to the detection and extraction of mine features in sidescan sonar. IEEE J. Ocean. Eng. 2003, 28, 90–105.
  8. Hożyń, S. A Review of Underwater Mine Detection and Classification in Sonar Imagery. Electronics 2021, 10, 2943.
  9. Cao, X.; Ren, L.; Sun, C. Research on Obstacle Detection and Avoidance of Autonomous Underwater Vehicle Based on Forward-Looking Sonar. IEEE Trans. Neural Netw. Learn. Syst. 2022.
  10. Sheng, M.; Tang, S.; Wan, L.; Zhu, Z.; Li, J. Fuzzy Preprocessing and Clustering Analysis Method of Underwater Multiple Targets in Forward Looking Sonar Image for AUV Tracking. Int. J. Fuzzy Syst. 2020, 22, 1261–1276.
  11. Bharti, V.; Lane, D.; Wang, S. Robust subsea pipeline tracking with noisy multibeam echosounder. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–5.
  12. Tan, K.; Xu, X.; Bian, H. The application of NDT algorithm in sonar image processing. In Proceedings of the 2016 IEEE/OES China Ocean Acoustics (COA), Harbin, China, 9–11 January 2016; pp. 1–4.
  13. Al-Haj, A. Wavelets pre-processing of Artificial Neural Networks classifiers. In Proceedings of the 2008 5th International Multi-Conference on Systems, Signals and Devices, Amman, Jordan, 20–22 July 2008; pp. 1–5.
  14. Lee, M.; Kim, J.; Yu, S.C. Robust 3d shape classification method using simulated multi view sonar images and convolutional nueral network. In Proceedings of the OCEANS 2019-Marseille, Marseille, France, 17–20 June 2019; pp. 1–5.
  15. Dura, E.; Rakheja, S.; Honghai, L.; Kolev, N. Image processing techniques for the detection and classification of man made objects in side-scan sonar images. In Sonar Systems; InTech: Makati, Philippines, 2011.
  16. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man, Cybern. 1979, 9, 62–66.
  17. Żak, B.; Hożyń, S. Segmentation Algorithm Using Method of Edge Detection. Solid State Phenomena 2013, 196, 206–211.
  18. Krishna, K.; Murty, M.N. Genetic K-means algorithm. IEEE Trans. Syst. Man, Cybern. Part B (Cybern.) 1999, 29, 433–439.
  19. Wang, X.; Wang, Z.; Sheng, M.; Li, Q.; Sheng, W. An adaptive and opposite K-means operation based memetic algorithm for data clustering. Neurocomputing 2021, 437, 131–142.
  20. Zhang, T.; Liu, S.; He, X.; Huang, H.; Hao, K. Underwater target tracking using forward-looking sonar for autonomous underwater vehicles. Sensors 2019, 20, 102.
  21. Kato, Z.; Zerubia, J. Markov random fields in image segmentation. Found. Trends® Signal Process. 2012, 5, 1–155.
  22. Dugelay, S.; Graffigne, C.; Augustin, J. Deep seafloor characterization with multibeam echosounders by image segmentation using angular acoustic variations. In Proceedings of the Statistical and Stochastic Methods for Image Processing, Fort Lauderdale, FL, USA, 23–26 September 1996; Volume 2823, pp. 255–266.
  23. Mignotte, M.; Collet, C.; Pérez, P.; Bouthemy, P. Three-class Markovian segmentation of high-resolution sonar images. Comput. Vis. Image Underst. 1999, 76, 191–204.
  24. Goyal, M. Morphological image processing. IJCST 2011, 2, 59.
  25. Kot, R. Review of Collision Avoidance and Path Planning Algorithms Used in Autonomous Underwater Vehicles. Electronics 2022, 11, 2301.
  26. Horner, D.; Healey, A.; Kragelund, S. AUV experiments in obstacle avoidance. In Proceedings of the OCEANS 2005 MTS/IEEE, Washington, DC, USA, 17–23 September 2005; pp. 1464–1470.
  27. Pebody, M. Autonomous underwater vehicle collision avoidance for under-ice exploration. Proc. Inst. Mech. Eng. Part M J. Eng. Marit. Environ. 2008, 222, 53–66.
  28. McPhail, S.D.; Furlong, M.E.; Pebody, M.; Perrett, J.; Stevenson, P.; Webb, A.; White, D. Exploring beneath the PIG Ice Shelf with the Autosub3 AUV. In Proceedings of the Oceans 2009-Europe, Bremen, Germany, 11–14 May 2009; pp. 1–8.
  29. Teo, K.; Ong, K.W.; Lai, H.C. Obstacle detection, avoidance and anti collision for MEREDITH AUV. In Proceedings of the OCEANS 2009, Biloxi, MS, USA, 26–29 October 2009; pp. 1–10.
  30. Guerrero-González, A.; García-Córdova, F.; Gilabert, J. A biologically inspired neural network for navigation with obstacle avoidance in autonomous underwater and surface vehicles. In Proceedings of the OCEANS 2011 IEEE-Spain, Santander, Spain, 6–9 June 2011; pp. 1–8.
  31. Millar, G. An obstacle avoidance system for autonomous underwater vehicles: A reflexive vector field approach utilizing obstacle localization. In Proceedings of the 2014 IEEE/OES Autonomous Underwater Vehicles (AUV), Oxford, MS, USA, 6–9 October 2014; pp. 1–4.
  32. Hernández, J.D.; Vidal, E.; Vallicrosa, G.; Galceran, E.; Carreras, M. Online path planning for autonomous underwater vehicles in unknown environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1152–1157.
  33. Xu, H.; Gao, L.; Liu, J.; Wang, Y.; Zhao, H. Experiments with obstacle and terrain avoidance of autonomous underwater vehicle. In Proceedings of the OCEANS 2015-MTS/IEEE Washington, Washington, DC, USA, 19–22 October 2015; pp. 1–4.
  34. Braginsky, B.; Guterman, H. Obstacle avoidance approaches for autonomous underwater vehicle: Simulation and experimental results. IEEE J. Ocean. Eng. 2016, 41, 882–892.
  35. McMahon, J.; Plaku, E. Mission and motion planning for autonomous underwater vehicles operating in spatially and temporally complex environments. IEEE J. Ocean. Eng. 2016, 41, 893–912.
  36. Okamoto, A.; Sasano, M.; Seta, T.; Inaba, S.; Sato, K.; Tamura, K.; Nishida, Y.; Ura, T. Obstacle avoidance method appropriate for the steep terrain of the deep seafloor. In Proceedings of the 2016 Techno-Ocean (Techno-Ocean), Kobe, Japan, 6–8 October 2016; pp. 195–198.
  37. Okamoto, A.; Sasano, M.; Seta, T.; Hirao, S.C.; Inaba, S. Deployment of the auv hobalin to an active hydrothermal vent field with an improved obstacle avoidance system. In Proceedings of the 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans (OTO), Kobe, Japan, 28–31 May 2018; pp. 1–6.
  38. McEwen, R.S.; Rock, S.P.; Hobson, B. Iceberg wall following and obstacle avoidance by an AUV. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–8.
  39. Hernández, J.D.; Vidal, E.; Moll, M.; Palomeras, N.; Carreras, M.; Kavraki, L.E. Online motion planning for unexplored underwater environments using autonomous underwater vehicles. J. Field Robot. 2019, 36, 370–396.
  40. Xanthidis, M.; Karapetyan, N.; Damron, H.; Rahman, S.; Johnson, J.; O’Connell, A.; O’Kane, J.M.; Rekleitis, I. Navigation in the presence of obstacles for an agile autonomous underwater vehicle. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 892–899.
  41. Zhang, H.; Zhang, S.; Wang, Y.; Liu, Y.; Yang, Y.; Zhou, T.; Bian, H. Subsea pipeline leak inspection by autonomous underwater vehicle. Appl. Ocean Res. 2021, 107, 102321.
  42. Qin, R.; Zhao, X.; Zhu, W.; Yang, Q.; He, B.; Li, G.; Yan, T. Multiple receptive field network (MRF-Net) for autonomous underwater vehicle fishing net detection using forward-looking sonar images. Sensors 2021, 21, 1933.
  43. An, R.; Guo, S.; Zheng, L.; Hirata, H.; Gu, S. Uncertain moving obstacles avoiding method in 3D arbitrary path planning for a spherical underwater robot. Robot. Auton. Syst. 2022, 151, 104011.