Off-Road Detection Analysis for Autonomous Ground Vehicles

Detection is one of the essential abilities of autonomous ground vehicles (AGV). To navigate safely through any known or unknown environment, an AGV must be able to detect the important elements on its path. Detection is required both on-road and off-road, but the two settings differ substantially. The key elements an AGV must identify in any environment are the drivable pathway and any obstacles around it. Many works have been published that address the different detection components in various ways.

  • autonomous ground vehicles
  • off-road environment
  • drivable ground
  • positive obstacles
  • negative obstacles

1. Introduction

For years, industry and academia have been interested in developing autonomous technologies such as driverless vehicles. Driverless vehicles are also known as autonomous ground vehicles (AGV) [1], unmanned ground vehicles (UGV) [2], autonomous guided vehicles [3], or autonomous land vehicles (ALV) [4]. These terms refer to vehicles that can navigate with minimal or no human assistance [5]. They are among the first modern applications of robotics research. From the early days, AGV have been continuously improved to provide advanced driver assistance, road safety, and collision avoidance [6][7][8]. However, research in unstructured environments still lags behind research in structured environments. Several factors are responsible for this, such as the lack of labeled datasets, limited data accessibility, and a shortage of applicable data [9].
Off-road environments are suburban or non-urban regions with non-structured or weakly structured road areas that lack well-defined routes and driving aids such as road signs and traffic signals. Typical examples are forests, country roads, muddy or sandy roads, and terrain covered by emergent plants [10]. Brock et al. [11] defined unstructured environments as situations or environments that “have not been previously adjusted to make it easier for a robot to complete a task”. In simpler terms, an off-road environment is any environment that lacks basic driving facilities and road instructions and presents more challenging conditions than usual. Figure 1 shows some examples of off-road environments with unstructured roadways and without proper driving facilities. Figure 1a–c shows rocky, muddy, and sandy road environments, respectively.
Figure 1. Examples of some off-road environments with unstructured roadways; The figure presents (a) rocky, (b) muddy, and (c) sandy road environments. Images are collected from the autonomous testing ground of the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University (MSU).
The on-road environment is an urban area with structured roadways and the necessary driving instructions, e.g., road signs and street markings. On-road detection involves lanes, traffic signals, road signs, pedestrians, vehicles, and buildings. Most on-road objects are specific and identifiable with a vision-based method. In off-road scenarios, however, there may be no such specific elements, which makes sensing the environment very difficult. In the off-road environment, two detections are essential for smooth navigation. First, the vehicle must identify the traversable or drivable ground, where nothing blocks its movement. Second, it must look for obstacles to find an appropriate path. Any prior knowledge of the environment helps the vehicle reach its goal at minimum cost over the shortest distance.

2. Ground or Drivable Pathway Detection

One of the most important criteria in autonomous driving is identifying drivable regions. On-road pathways are easily identifiable because of their structure, color, smoothness, and horizontality. Off-road areas are more challenging and require advanced sensing equipment and techniques because the ground is not uniform: certain areas may be rough, slippery, or bumpy. In addition, with dense vegetation, grass, sand, or dirt, the driving pathway may not be distinguishable by a visual identifier. As a result, the AGV may lose its autonomous navigation capability.

2.1. Single Sensor-Based Detection

Liu et al. [12] used only a 3D lidar to detect drivable ground and positive, negative, and hanging obstacles. They primarily detected obstacles in order to uncover the traversable region. They analyzed radial and transverse features of the 3D lidar points; these features reveal the obstacles, and the leftover areas are then defined as drivable regions for the AGV to traverse.
Gao et al. [13] also used lidar and proposed a deep-learning approach in which vehicle trails serve as input and obstacle marks as labels for the network. The suggested network topology was designed with the obscure and uncertain zones of the off-road environment in mind. No human involvement is required to label the obstacles, as labeling is done automatically. The main advantage of this network is that it works with unlabeled or weakly labeled data and still provides satisfactory results.
Chen et al. [14] detected traversable road sections and obstacles in one unified model. They represented the lidar point cloud as a lidar image and generated a histogram map from it, in which the traversable road area in front of the vehicle, as well as the obstacles around it, becomes visible.
Katramados et al. [15] created a “traversability map” by extracting color and texture from images. They collected data from a camera mounted on top of the vehicle and removed unnecessary information. They then generated the map and adjusted the temporal filtering of the images, which helped detect edges in blurry frames. Lighting effects such as shadows and reflections were removed from the data, making the final detection result more accurate.
Shaban et al. [16] developed a deep learning model named Bird’s Eye View Network (BEVNet). The model takes lidar data and semantically segments a bird’s-eye-view map into four terrain classes: “free, low-cost, medium-cost, and obstacle”. Its advantage is that it can fill gaps using previous knowledge, and it can therefore overcome the problem of missing values in areas where no lidar hits are found.
In another work, Gao et al. [17] suggested an approach based on contrastive learning using camera images. In contrastive learning, the feature representation is trained so that similar samples are mapped close together and dissimilar samples are pushed apart. They used a set of human-labeled bounding boxes and detected different traversable areas, which were semantically segmented to generate an understandable map of the environment.
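As a rough illustration of the contrastive objective described above (not Gao et al.'s exact formulation), the sketch below computes a simple InfoNCE-style loss that pulls an anchor feature toward a positive sample and away from negatives; all names and the temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss (illustrative, not Gao et al.'s objective).

    anchor, positive: 1-D feature vectors; negatives: (n_neg, dim) array.
    Features are L2-normalized so dot products become cosine similarities.
    """
    def normalize(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

    a, p, n = normalize(anchor), normalize(positive), normalize(negatives)
    pos_sim = np.dot(a, p) / temperature          # similarity to the positive sample
    neg_sim = n @ a / temperature                 # similarities to the negative samples
    logits = np.concatenate([[pos_sim], neg_sim])
    # Loss is low when the anchor is closest to its positive sample.
    return -pos_sim + np.log(np.exp(logits).sum())

# Example: feature vectors for patches of the same terrain type vs. different types.
rng = np.random.default_rng(0)
loss = info_nce_loss(rng.normal(size=16), rng.normal(size=16), rng.normal(size=(8, 16)))
```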
Overall, lidar and camera sensors have been the primary choices for traversable ground detection when only a single sensor is used. Deep learning methods are also of particular interest to researchers because they can overcome limitations such as uncertain zones and missing data.

2.2. Multi-Sensor-Based Detection

Zhu et al. [18] combined three lidars to detect three different types of obstacles and traversable paths. They used lidar odometry, which converts the detection output into a structured form, and fused the detections from consecutive frames using Bayesian theory. However, this method does not work for long-distance traversability.
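The frame-to-frame Bayesian fusion can be illustrated with a standard log-odds occupancy update; this is a generic sketch under assumed sensor-model probabilities, not Zhu et al.'s exact formulation.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fuse_frames(detections, p_hit=0.7, p_miss=0.4, prior=0.5):
    """Fuse per-cell obstacle detections from successive lidar frames
    (generic Bayesian log-odds update, not Zhu et al.'s exact method).

    detections: (n_frames, n_cells) boolean array; True = cell flagged as obstacle.
    Returns the posterior obstacle probability per cell.
    """
    log_odds = np.full(detections.shape[1], logit(prior))
    for frame in detections:
        # Cells flagged in this frame push the belief up; unflagged cells push it down.
        log_odds += np.where(frame, logit(p_hit), logit(p_miss))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: 5 frames over 4 cells; cells 0 and 2 are consistently flagged as obstacles.
posterior = fuse_frames(np.array([[1, 0, 1, 0]] * 5, dtype=bool))
```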
A drivable ground detection method for a dynamic environment was presented by Dahlkamp et al. [19]. The method was applied to the vehicle that participated in and won the DARPA Grand Challenge robot race (DARPA is the Defense Advanced Research Projects Agency). A “drivability map” is created from a laser range finder and a camera sensor with an appearance model. The vehicle was able to navigate through desert terrain at high speed.
Mei et al. [20] designed an algorithm to detect the traversable area in the off-road environment. They captured images with a monocular camera. The same area was also captured by a lidar-based system and observed by a human, and the far-field detection capability was measured as well. The final traversable region is defined by comparing the image data with these three measurements.
An unsupervised learning-based method was developed by Tang et al. [21] for segmenting passable areas in unstructured environments. They used a deep convolutional neural network to classify free, obstacle, and unknown road areas. Both camera and laser sensors were used for the training data, with labels generated automatically. At test time, only a monocular camera is required.
Semantic segmentation, also known as image segmentation, is the process of allocating one of N predefined classes to every pixel of an image [22]. It is a deep learning technique that depends on large labeled datasets. Dabbiru et al. [9] published a labeled off-road dataset with semantic segmentation of three different vehicle types. They used two 2D cameras and a 3D lidar sensor for data collection and annotated the data based on vehicle class.
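To make the per-pixel definition concrete, the minimal sketch below turns a per-pixel score volume (e.g., the output of any segmentation network) into a class map by taking the argmax over N predefined classes; the class names are illustrative, not a specific dataset's label set.

```python
import numpy as np

CLASSES = ["drivable", "vegetation", "obstacle", "sky"]  # illustrative label set

def segment(scores):
    """Assign one of N predefined classes to every pixel.

    scores: (H, W, N) array of per-pixel class scores from a segmentation model.
    Returns an (H, W) integer label map (index into CLASSES).
    """
    return np.argmax(scores, axis=-1)

# Example with random scores for a 4x4 image and len(CLASSES) classes.
label_map = segment(np.random.rand(4, 4, len(CLASSES)))
```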
Reina et al. [23] detected drivable ground by combining lidar and stereo data with two classifiers. Each classifier takes data from one sensor, and the classification results are then fused to obtain the final result. Likewise, Sock et al. [24] used lidar and camera data to measure road boundaries and shape. They generated two probabilistic maps from the two sensors and classified the traversable region with a linear support vector machine (SVM); the two classification results were fused using Bayes’ rule. McDaniel et al. [25] proposed a method for detecting tree stems using an SVM classifier and a lightweight lidar scanner from a single point of view. The method has two steps: obvious non-ground points are first filtered out, and the SVM then classifies which of the remaining points belong to the ground.
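A minimal sketch of the late-fusion idea behind these two-classifier approaches: each sensor's classifier outputs a probability that a cell is traversable, and the probabilities are combined with Bayes' rule under a conditional-independence assumption. The prior and example numbers are assumptions, not values from the cited papers.

```python
def bayes_fuse(p_lidar, p_camera, prior=0.5):
    """Fuse two per-cell traversability probabilities with Bayes' rule,
    assuming the sensor classifiers are conditionally independent given the class.

    p_lidar, p_camera: P(traversable | sensor) from each classifier.
    Returns the fused posterior P(traversable | both sensors).
    """
    # Likelihood ratios relative to the shared prior, for each class hypothesis.
    pos = (p_lidar / prior) * (p_camera / prior) * prior
    neg = ((1 - p_lidar) / (1 - prior)) * ((1 - p_camera) / (1 - prior)) * (1 - prior)
    return pos / (pos + neg)

# Example: lidar is fairly sure the cell is traversable, camera is less certain.
fused = bayes_fuse(p_lidar=0.9, p_camera=0.6)   # about 0.93
```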
In summary, fusing multiple sensors can provide better detection of drivable ground. Lidar-camera fusion is the most common combination, and classification results from the different sensors are typically fused using Bayes’ rule. However, using multiple sensors can be costly. Table 1 summarizes the works presented in this section.
Table 1. The methods used for traversable ground detection.
Literature | Sensors | Method | Detection
Gao et al. [13], 2019 | Lidar | Deep learning | Drivable ground
Chen et al. [14], 2017 | Lidar | Lidar-histogram | Obstacles and drivable ground
Liu et al. [12], 2019 | Lidar | Radial and transverse features | Obstacles and drivable ground
Zhu et al. [18], 2019 | Lidar | Bayesian network | Obstacles and drivable ground
Katramados et al. [15], 2009 | Camera | Vision/traversability map | Drivable ground
Shaban et al. [16], 2021 | Lidar | Semantic segmentation | Drivable ground
Gao et al. [17], 2021 | Camera | Semantic segmentation | Drivable ground
Dabbiru et al. [9], 2021 | Lidar + Camera | Semantic segmentation | Drivable ground
Tang et al. [21], 2018 | Laser + Camera | Unsupervised learning | Drivable ground
Dahlkamp et al. [19], 2006 | Lidar/Laser + Camera | Supervised learning/SVM | Drivable ground
Reina et al. [23], 2016 | Lidar/Laser + Camera | Supervised learning/SVM | Drivable ground
Sock et al. [24], 2016 | Lidar/Laser + Camera | Supervised learning/SVM | Drivable ground
McDaniel et al. [25], 2012 | Lidar/Laser + Camera | Supervised learning/SVM | Drivable ground

3. Positive Obstacles Detection and Analysis

An AGV considers as an obstacle any object that obstructs its smooth driving. Navigating safely without colliding with objects or falling into gaps is a key requirement. Obstacle detection and navigation in an unknown environment are among the major capabilities of an AGV. Obstacles are objects that slow the vehicle down from its usual speed or bring it to a complete stop. Dima et al. [26] defined an obstacle as “a region that cannot or should not be traversed by the vehicle”. Pedestrians, vehicles, houses, trees, animals, boulders, giant cracks, large bodies of water, snow, etc., can be considered obstacles.

3.1. Single Sensor Based

Huertas et al. [27] determined the diameters of trees from stereo camera images to assess whether they constitute traversable obstacles, using edge contours. Edge contours are generated from the stereo pair (left and right images), producing 3D fragment information. This information is then coded with orientation to confine processing to object borders and to match the edges on opposing tree-trunk boundaries.
Maturana et al. [28] aimed to differentiate small objects from large obstacles. They built a 2.5D grid map labeled with terrain elevation, trail, and grass information; a 2.5D grid map is a grid representation in which each cell stores height information. The information was collected through lidar and other image data. The resulting semantic map dataset is then trained and tested with a customized CNN for path planning and cost calculation.
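A simple sketch of how a 2.5D grid map can be built from lidar points: each x-y cell stores the maximum point height, which is the kind of per-cell elevation information such maps carry. The cell size and map extents are assumed values, not parameters from Maturana et al.

```python
import numpy as np

def elevation_grid(points, cell_size=0.25, x_range=(0, 20), y_range=(-10, 10)):
    """Build a 2.5D elevation map: each x-y cell keeps the max z of the points in it.

    points: (N, 3) array of lidar points in vehicle coordinates [m].
    Returns a 2-D array of cell elevations (NaN where no points fell).
    """
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.full((nx, ny), np.nan)
    ix = ((points[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for i, j, z in zip(ix[valid], iy[valid], points[valid, 2]):
        grid[i, j] = z if np.isnan(grid[i, j]) else max(grid[i, j], z)
    return grid
```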
Manderson et al. [29] presented a reinforcement learning method using labeled images. They collected images from an on-board front-facing camera and from overhead views, covering various obstacles such as vegetation, different rock types, and sandy paths. These images are used as inputs, and the labeling process is self-supervised. Value Prediction Networks (VPN) [30] are used as the network; VPN is a hybrid architecture combining model-based and model-free components.
Nadav and Katz [30] presented a system with low computational cost. They collected images with a smartphone, converted them into a 3D point cloud model, and processed it to extract obstacle and distance information. Another benefit of this system is that it can operate on its own without additional sensors.
Broggi et al. [31] presented a system that provides real-time obstacle detection using stereoscopic images. The images are collected from a moving vehicle through two cameras mounted on the left and right sides. The system then calculates the “V-disparity”, which captures the pitch oscillation of the camera caused by vehicle movement; V-disparity is a method that uses a single pair of stereo pictures to determine the camera’s pitch angle at the moment of acquisition [32]. The obstacles are then identified and mapped in real-world coordinates.
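As an illustration of the V-disparity representation mentioned above, the sketch below builds a V-disparity image from a dense disparity map: for each image row it histograms the disparity values, so the ground plane appears as a dominant slanted line from which the road profile and camera pitch can be estimated. The disparity range is an assumed parameter, and this is a generic construction rather than Broggi et al.'s full pipeline.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build a V-disparity image from a dense disparity map.

    disparity: (H, W) disparity map (invalid pixels marked with values < 0).
    Returns an (H, max_disp) histogram: entry (v, d) counts pixels in image
    row v whose disparity equals d.
    """
    h, _ = disparity.shape
    out = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity[v]
        valid = (row >= 0) & (row < max_disp)
        out[v] += np.bincount(row[valid].astype(int), minlength=max_disp)
    return out
```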
Foroutan et al. [33] took a different approach from typical obstacle detection: they investigated the effect of understory vegetation density on obstacle detection using a machine learning-based framework. They obtained point cloud data from a lidar sensor in an autonomous driving simulator and found that classification performance decreases as understory vegetation density increases.
A laser-based system with the Sobel operator and a Gaussian kernel function was applied by Chen et al. [34]. To group the point cloud of each obstacle, the super-voxels are optimized with a Euclidean clustering technique (illustrated in the sketch below). A Levenberg–Marquardt back-propagation (LM-BP) neural network is then applied to extract the features of the obstacles.
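The Euclidean clustering step can be sketched as simple region-growing over a KD-tree: points closer than a distance threshold end up in the same cluster, giving roughly one cluster per obstacle. This is a generic formulation with assumed thresholds, not Chen et al.'s optimized super-voxel variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.5, min_size=10):
    """Group 3-D points into clusters by growing regions of neighbors within `radius`.

    points: (N, 3) array of obstacle points [m].
    Returns a list of index arrays, one per cluster with at least `min_size` points.
    """
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.append(nb)
                    frontier.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters
```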
Zhang et al. [35] provided a faster detection method using stereo images. The method has two stages: the first rapidly identifies easily visible obstacles, and the second uses space-variant resolution (SVR) [35] to improve small-obstacle detection. SVR is an algorithm that analyzes the geometric features and the level of interest in each area; however, it has a very high computational cost.
Overall, cameras are the most widely used sensors for obstacle detection, although lidar and laser also perform well. Different image processing techniques and detection algorithms have been applied to identify obstacles from images, and some works took distinct approaches by specifically considering small objects.

3.2. Multi-Sensor Based

Kuthirummal et al. [36] presented an approach that can be applied to either lidar or camera sensors. They created a grid-based obstacle map to define traversability: for each cell, they compute an elevation histogram and label the cell by traversing the resulting graph, from which the obstacle information is obtained.
Manduchi et al. [37] presented a sensor processing algorithm using two sensors. A color stereo camera detects obstacles and terrain type based on color, while a single-axis ladar classifies traversable regions and large obstacles. The camera is mounted on top of the vehicle and the ladar in the lower portion; together, the two systems provide better navigation.
Reina et al. [38] built perception algorithms to improve the autonomous terrain traversability of AGV. They presented two approaches: one uses stereo data and a self-learning method to classify drivable ground, while the other uses a radar-stereo integrated system to detect and classify obstacles. Field experiments were carried out in different environments, including rural areas and agricultural land.
A low-cost, multi-sensor obstacle detection method was developed by Giannì et al. [39]. They unified three sensing technologies: radar, lidar, and sonar. The data are “sieved” and passed to a Kalman filter, which estimates the distance of the obstacle accurately. Meichen et al. [40] take a similar Kalman filter-based approach: they use an IMU (inertial measurement unit) and a lidar sensor to collect the coordinates of the obstacle, and both coordinates are fused to obtain the obstacle position.
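A minimal sketch of the Kalman-filter idea behind these fusion approaches (not the exact formulation of Giannì et al. or Meichen et al.): a static obstacle distance is estimated by sequentially folding in noisy measurements from different sensors, each weighted by its measurement variance. All numeric values are made-up examples.

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """One scalar Kalman update: blend the current estimate with a new measurement.

    Sensors with smaller `meas_variance` (e.g., lidar) pull the estimate harder
    than noisier ones (e.g., radar or sonar).
    """
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Example: fuse obstacle distance readings from three sensors (illustrative values).
est, var = 10.0, 4.0                                    # prior distance [m] and variance
for meas, meas_var in [(9.2, 0.2), (9.6, 1.0), (8.8, 2.5)]:   # lidar, radar, sonar
    est, var = kalman_update(est, var, meas, meas_var)
```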
Kragh and Underwood [41] used semantic segmentation by fusing lidar and camera sensors. The appearance information comes from a 2D camera and the geometry information from 3D lidar data, and the two are fused to obtain the final detection result. The advantage of this model is that it can differentiate traversable overgrown grass or fallen leaves from non-traversable trees and bushes. Furthermore, ground, sky, vegetation, and objects can be classified with this method.
Ollis and Jochem [42] used a set of sensors to generate a “density map” for detecting different obstacles and classifying terrain. They used various ladar and radar sensors that can update 70 times per second. The terrain was divided into six subclasses based on the obstacles present, and traversable regions were further classified by difficulty level, such as non-traversable, traversable, and partially traversable.
Bradley et al. [43] utilized near-infrared light, which had previously been used to detect chlorophyll, for detecting vegetation. A near-infrared camera and a video camera are fused to capture image data, and the visible-light contribution is removed from each pixel by applying a threshold value. In this way, the presence of chlorophyll, and hence vegetation, can be determined.
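The chlorophyll cue can be illustrated with the standard normalized difference vegetation index (NDVI), which compares near-infrared and visible-red reflectance; this is a common formulation and not necessarily the exact thresholding scheme of Bradley et al., and the threshold value is an assumption.

```python
import numpy as np

def vegetation_mask(nir, red, threshold=0.3):
    """Flag vegetation pixels using NDVI = (NIR - Red) / (NIR + Red).

    nir, red: float arrays of near-infrared and visible-red reflectance in [0, 1].
    Chlorophyll reflects strongly in NIR and absorbs red light, so vegetated
    pixels have a high NDVI; `threshold` is an illustrative value.
    """
    ndvi = (nir - red) / (nir + red + 1e-6)
    return ndvi > threshold
```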

4. Negative Obstacles Detection and Analysis

Positive obstacles are objects with a positive height above the ground, while negative obstacles have a negative height below the ground [4]. Positive obstacles are easily visible and captured by the sensors. Because of their negative height and position below the ground surface, negative obstacles are more challenging: a regular vision-based system may not measure their depth and area, and it can be unsafe for the vehicle if they are not identified correctly.
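Following the height-sign definition above, a minimal sketch that labels elevation-map cells as positive obstacles, negative obstacles, or ground by comparing cell elevation against a local ground estimate; the thresholds are assumed values, and real systems also account for vehicle clearance and wheel size.

```python
import numpy as np

def label_cells(elevation, ground_height=0.0, pos_thresh=0.3, neg_thresh=-0.3):
    """Label cells as 'positive' above ground, 'negative' below, or 'ground' otherwise.

    elevation: 2-D array of cell heights relative to the local ground estimate [m].
    """
    labels = np.full(elevation.shape, "ground", dtype=object)
    labels[elevation - ground_height > pos_thresh] = "positive"
    labels[elevation - ground_height < neg_thresh] = "negative"
    return labels
```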

4.1. Missing Data Analysis

A common practice in negative obstacle detection is dealing with missing data in the sensor signals. Much of the literature works with three-dimensional lidar data. For example, Larson and Trivedi [44] presented an algorithm for negative obstacle detection, using a 3D laser to collect point-cloud data. Two classifiers were used: the Negative Obstacle DetectoR (NODR) and a support vector machine (SVM). NODR, a geometry-based classifier, works by identifying missing information that can indicate a negative obstacle, while the SVM classifies the rays returned from the lidar.
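The missing-data cue can be sketched on a single lidar scan line: where ground returns are absent, or consecutive returns show a large range jump, the gap is flagged as a negative-obstacle candidate. This is a simplified illustration of the general idea, not Larson and Trivedi's full NODR algorithm, and the spacing and gap-factor parameters are assumptions.

```python
import numpy as np

def negative_obstacle_candidates(ranges, expected_step=0.3, gap_factor=3.0):
    """Flag gaps in a forward-looking ground-scan line as negative-obstacle candidates.

    ranges: 1-D array of consecutive ground-return ranges [m] along a scan line,
            with np.nan where no return came back.
    A candidate is declared where a return is missing or the range jumps by much
    more than the expected spacing between neighboring ground hits.
    """
    candidates = np.isnan(ranges)                      # no return at all
    diffs = np.abs(np.diff(ranges))
    jump = np.zeros(len(ranges), dtype=bool)
    jump[1:] = diffs > gap_factor * expected_step      # sudden range jump
    return candidates | jump
```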
Sinha and Papadakis [45] also considered the information gap, working with 2D morphological images for negative obstacle detection. The advantage of their approach is that it runs in real time and provides high accuracy without analyzing the 3D scene. Unlike other works, they denoised the signals, extracted features through principal component analysis (PCA), and classified them according to area.
Similarly, Heckman et al. [46] also considered missing data from a 3D laser. By identifying areas where data are missing, they aim to detect the negative obstacles that could be the cause of the missing data. The benefit of this method is that it can be applied to complex environments, even with sloped ground.
Analyzing missing sensor data has proven to be a distinctive yet effective method for detecting negative obstacles; the data typically come from point clouds produced by lidar or laser sensors.

4.2. Vision-Based Detection

In addition to lidar, much of the literature focuses on stereo vision for detecting negative obstacles. Karunasekera et al. [47] utilized the stereo camera’s 3D and color information. They generated a disparity map and applied v-disparity and u-disparity to identify the road profile; U-V-disparity is an algorithm for understanding a 3D road scene that can classify its different features [48]. Negative obstacles are then identified by scanning through every pixel of the disparity map.
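Building on the U-V-disparity idea, a simplified sketch: once the expected road disparity per image row is known (e.g., fitted from the V-disparity image), pixels whose disparity is noticeably smaller than expected lie farther than the road surface at that row and can be flagged as negative-obstacle candidates. The margin is an assumed value, and this is not Karunasekera et al.'s exact pipeline.

```python
import numpy as np

def negative_candidates(disparity, road_profile, margin=3):
    """Flag pixels whose disparity falls well below the expected road disparity.

    disparity: (H, W) disparity map; road_profile: length-H array giving the
    expected road disparity for each image row.
    Pixels with much smaller disparity than the road profile are farther away
    than the road surface at that row, a cue for holes or ditches.
    """
    expected = road_profile[:, None]          # broadcast expected disparity per row
    valid = disparity > 0
    return valid & (disparity < expected - margin)
```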
Shang et al. [4] also used lidar to collect data about negative obstacles. They mounted the sensor in an upright position, which has a significant advantage: in this configuration the vehicle collects more information about its blind spot. The width and background information were fused with a Bayesian network, and SVM classifiers were then used to obtain the final detection.
Bajracharya et al. [49] used stereo vision on a legged vehicle to detect sparse vegetation and negative obstacles, building a terrain map as the vehicle walks along. The system uses spatial and temporal information, making the detection results more accurate, and it works in dynamic locations and weather conditions such as rain, snow, and even nighttime.
Hu et al. [50] fuse different geometric cues from stereo camera image sequences to detect negative obstacles. The stereo images contain range, color, and geometric information, and a Bayesian network calculates the probability of detection. The benefit of this method is that obstacles can be detected from a far distance.
In [51], the performance of seven obstacle detection methods was examined on 21 obstacles, two of which were negative obstacles. One was detected with stereo imagery, and the other on the local map through software.
Overall, detecting negative obstacles with vision is a difficult task, but it has been made possible with image processing techniques and stereo vision.

This entry is adapted from the peer-reviewed paper 10.3390/s22218463

References

  1. Gomi, T.; Ide, K.-I.; Matsuo, H. The development of a fully autonomous ground vehicle (FAGV). In Proceedings of the Intelligent Vehicles’94 Symposium, Paris, France, 24–26 October 1994; pp. 62–67.
  2. Gage, D.W. Ugv history 101: A brief history of unmanned ground vehicle (ugv) development efforts. DTIC Document. Tech. Rep. 1995, 13, 1–9.
  3. Thakkar, J.J. Applications of structural equation modelling with AMOS 21, IBM SPSS. In Structural Equation Modelling; Springer: Berlin/Heidelberg, Germany, 2020; pp. 35–89.
  4. Shang, E.; An, X.; Li, J.; He, H. A novel setup method of 3D LIDAR for negative obstacle detection in field environment. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC 2014), Qingdao, China, 8–11 October 2014; pp. 1436–1441.
  5. Luettel, T.; Himmelsbach, M.; Wuensche, H.-J. Autonomous Ground Vehicles—Concepts and a Path to the Future. Proc. IEEE 2012, 100, 1831–1839.
  6. Folsom, T. Energy and Autonomous Urban Land Vehicles. IEEE Technol. Soc. Mag. 2012, 31, 28–38.
  7. Islam, F.; Nabi, M.M.; Farhad, M.; Peranich, P.L.; Ball, J.E.; Goodin, C. Evaluating performance of extended Kalman filter based adaptive cruise control using PID controller. Auton. Syst. Sens. Process. Secur. Veh. Infrastruct. 2021, 11748, 46–56.
  8. Johnson, E.N.; Mooney, J.G. A Comparison of Automatic Nap-of-the-earth Guidance Strategies for Helicopters. J. Field Robot. 2014, 31, 637–653.
  9. Dabbiru, L.; Sharma, S.; Goodin, C.; Ozier, S.; Hudson, C.R.; Carruth, D.W.; Doude, M.; Mason, G.; Ball, J.E. Traversability mapping in off-road environment using semantic segmentation. Auton. Syst. Sens. Process. Secur. Veh. Infrastruct. 2021, 11748, 78–83.
  10. Choi, J.; Lee, J.; Kim, D.; Soprani, G.; Cerri, P.; Broggi, A.; Yi, K. Environment-Detection-and-Mapping Algorithm for Autonomous Driving in Rural or Off-Road Environment. IEEE Trans. Intell. Transp. Syst. 2012, 13, 974–982.
  11. Brock, O.; Park, J.; Toussaint, M. Mobility and Manipulation. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: New York, NY, USA, 2016; pp. 1007–1036.
  12. Liu, T.; Liu, D.; Yang, Y.; Chen, Z. Lidar-based Traversable Region Detection in Off-road Environment. In Proceedings of the 38th Chinese Control Conference (CCC2019), Guangzhou, China, 27–30 July 2019; pp. 4548–4553.
  13. Gao, B.; Xu, A.; Pan, Y.; Zhao, X.; Yao, W.; Zhao, H. Off-Road Drivable Area Extraction Using 3D LiDAR Data. In Proceedings of the Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1505–1511.
  14. Chen, L.; Yang, J.; Kong, H. Lidar-histogram for fast road and obstacle detection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1343–1348.
  15. Katramados, I.; Crumpler, S.; Breckon, T.P. Real-Time Traversable Surface Detection by Colour Space Fusion and Temporal Analysis. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5815, pp. 265–274.
  16. Shaban, A.; Meng, X.; Lee, J.; Boots, B.; Fox, D. Semantic terrain classification for off-road autonomous driving. In Proceedings of the Machine Learning Research (PMLR), Almería, Spain, 5–7 October 2022; pp. 619–629.
  17. Gao, B.; Hu, S.; Zhao, X.; Zhao, H. Fine-Grained Off-Road Semantic Segmentation and Mapping via Contrastive Learning. In Proceedings of the IRC 2021: IEEE International Conference on Robotic Computing, Taichung, Taiwan, 15–17 November 2021; pp. 5950–5957.
  18. Zhu, B.; Xiong, G.; Di, H.; Ji, K.; Zhang, X.; Gong, J. A Novel Method of Traversable Area Extraction Fused with LiDAR Odometry in Off-road Environment. In Proceedings of the IEEE ICVES 2019 2019: IEEE International Conference on Vehicular Electronics and Safety 2019, Cairo, Egypt, 4–6 September 2019; pp. 1–6.
  19. Dahlkamp, H.; Kaehler, A.; Stavens, D.; Thrun, S.; Bradski, G. Self-supervised Monocular Road Detection in Desert Terrain. Robot. Sci. Syst. 2006, 38.
  20. Mei, J.; Yu, Y.; Zhao, H.; Zha, H. Scene-Adaptive Off-Road Detection Using a Monocular Camera. In Proceedings of the 5th IEEE International Conference on Models and Technologies for Intelligent Transportation Systems, Naples, Italy, 26–28 June 2017; pp. 242–253.
  21. Tang, L.; Ding, X.; Yin, H.; Wang, Y.; Xiong, R. From one to many: Unsupervised traversable area segmentation in off-road environment. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics, Parisian Macao, China, 5–8 December 2017; pp. 1–6.
  22. Zhang, H.; Dana, K.; Shi, J.; Zhang, Z.; Wang, X.; Tyagi, A.; Agrawal, A. Context Encoding for Semantic Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7151–7160.
  23. Reina, G.; Milella, A.; Worst, R. LIDAR and stereo combination for traversability assessment of off-road robotic vehicles. Robotica 2015, 34, 2823–2841.
  24. Sock, J.; Kim, J.; Min, J.; Kwak, K. Probabilistic traversability map generation using 3D-LIDAR and camera. In Proceedings of the 2016 IEEE International Symposium on Robotics and Manufacturing Automation (IEEE-ROMA2016), Ipoh, Malaysia, 25–27 September 2016; pp. 5631–5637.
  25. McDaniel, M.W.; Nishihata, T.; Brooks, C.A.; Salesses, P.; Iagnemma, K. Terrain classification and identification of tree stems using ground-based LiDAR. J. Field Robot. 2012, 29, 891–910.
  26. Dima, C.; Vandapel, N.; Hebert, M. Classifier fusion for outdoor obstacle detection. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; pp. 665–671.
  27. Huertas, A.; Matthies, L.; Rankin, A. Stereo-Based Tree Traversability Analysis for Autonomous Off-Road Navigation. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Breckenridge, CO, USA, 5–7 January 2005; pp. 210–217.
  28. Maturana, D.; Chou, P.-W.; Uenoyama, M.; Scherer, S. Real-Time Semantic Mapping for Autonomous Off-Road Navigation. Field Serv. Robot. 2017, 5, 335–350.
  29. Manderson, T.; Wapnick, S.; Meger, D.; Dudek, G. Learning to Drive Off Road on Smooth Terrain in Unstructured Environments Using an On-Board Camera and Sparse Aerial Images. In Proceedings of the 2020 International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–4 June 2020; pp. 1263–1269.
  30. Nadav, I.; Katz, E. Off-road path and obstacle detection using monocular camera. In Proceedings of the 20th International Computer Science and Engineering Conference 2016, Chiang Mai, Thailand, 14–17 December 2017; pp. 22–26.
  31. Broggi, A.; Caraffi, C.; Fedriga, R.; Grisleri, P. Obstacle Detection with Stereo Vision for Off-Road Vehicle Navigation. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 21–23 September 2005.
  32. Labayrade, R.; Aubert, D. A single framework for vehicle roll, pitch, yaw estimation and obstacles detection by stereovision. In Proceedings of the 2003 IEEE Symposium on Intelligent Vehicle, Columbus, OH, USA, 9–11 June 2003; pp. 31–36.
  33. Foroutan, M.; Tian, W.; Goodin, C.T. Assessing Impact of Understory Vegetation Density on Solid Obstacle Detection for Off-Road Autonomous Ground Vehicles. ASME Lett. Dyn. Syst. Control 2020, 1, 021008.
  34. Chen, W.; Liu, Q.; Hu, H.; Liu, J.; Wang, S.; Zhu, Q. Novel Laser-Based Obstacle Detection for Autonomous Robots on Unstructured Terrain. Sensors 2020, 20, 5048.
  35. Zhang, Y.; Xu, X.; Lu, H.; Dai, Y. Two-Stage Obstacle Detection Based on Stereo Vision in Unstructured Environment. In Proceedings of the 2014 6th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC 2014), Hangzhou, China, 26–27 August 2014; pp. 168–172.
  36. Kuthirummal, S.; Das, A.; Samarasekera, S. A graph traversal based algorithm for obstacle detection using lidar or stereo. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3874–3880.
  37. Manduchi, R.; Castano, A.; Talukder, A.; Matthies, L. Obstacle Detection and Terrain Classification for Autonomous Off-Road Navigation. Auton. Robot. 2005, 18, 81–102.
  38. Reina, G.; Milella, A.; Rouveure, R. Traversability analysis for off-road vehicles using stereo and radar data. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT 2015), Seville, Spain, 17–19 March 2015; pp. 540–546.
  39. Giannì, C.; Balsi, M.; Esposito, S.; Fallavollita, P. Obstacle detection system involving fusion of multiple sensor technologies. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 127–134.
  40. Meichen, L.; Jun, C.; Xiang, Z.; Lu, W.; Yongpeng, T. Dynamic obstacle detection based on multi-sensor information fusion. IFAC-PapersOnLine 2018, 51, 861–865.
  41. Kragh, M.; Underwood, J. Multimodal obstacle detection in unstructured environments with conditional random fields. J. Field Robot. 2019, 37, 53–72.
  42. Ollis, M.; Jochem, T.M. Structural method for obstacle detection and terrain classification. Unmanned Ground Veh. Technol. V 2013, 5083, 1–12.
  43. Bradley, D.; Thayer, S.; Stentz, A.; Rander, P. Vegetation Detection for Mobile Robot Navigation; Technical Report CMU-RI-TR-04-12; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 2004.
  44. Larson, J.; Trivedi, M. Lidar based off-road negative obstacle detection and analysis. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 192–197.
  45. Sinha, A.; Papadakis, P. Mind the gap: Detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation. Robotica 2013, 31, 1085–1101.
  46. Heckman, N.; Lalonde, J.-F.; Vandapel, N.; Hebert, M. Potential negative obstacle detection by occlusion labeling. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 2168–2173.
  47. Karunasekera, H.; Zhang, H.; Xi, T.; Wang, H. Stereo vision based negative obstacle detection. In Proceedings of the 2017 13th IEEE International Conference on Control & Automation (ICCA), Ohrid, Macedonia, 3–6 July 2017; pp. 834–838.
  48. Hu, Z.; Uchimura, K. U-V-disparity: An efficient algorithm for stereovision based scene analysis. In Proceedings of the 2005 IEEE Intelligent Vehicles Symposium Proceedings, Las Vegas, NV, USA, 6–8 June 2005; pp. 48–54.
  49. Bajracharya, M.; Ma, J.; Malchano, M.; Perkins, A.; Rizzi, A.A.; Matthies, L. High fidelity day/night stereo mapping with vegetation and negative obstacle detection for vision-in-the-loop walking. In Proceedings of the IROS 2013—IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3663–3670.
  50. Hu, T.; Nie, Y.; Wu, T.; He, H. Negative obstacle detection from image sequences. In Proceedings of the ICDIP 2011: 2011 3rd International Conference on Digital Image Processing, Chengdu, China, 15–17 April 2011; p. 80090Y.
  51. Rankin, A.; Huertas, A.; Matthies, L. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation. In Proceedings of the AUVSI's Unmanned Systems North America, Baltimore, MD, USA, 28–30 June 2005; pp. 1197–1211.