Image-Based Obstacle Detection Methods: History

Mobile robots lack a driver or a pilot and, thus, should be able to detect obstacles autonomously. Image-based obstacle detection techniques have been developed for a range of mobile robots, including Unmanned Surface Vehicles (USVs), Unmanned Aerial Vehicles (UAVs), and Micro Aerial Vehicles (MAVs). The techniques are divided into monocular and stereo approaches: the former uses a single camera, while the latter makes use of images taken by two synchronised cameras. Monocular obstacle detection methods are discussed in appearance-based, motion-based, depth-based, and expansion-based categories. Monocular approaches involve simple, fast, and straightforward computations and are, therefore, better suited to robots such as MAVs and compact UAVs, which are usually small and have limited processing power. Stereo-based methods, on the other hand, use pair(s) of synchronised cameras to generate a real-time 3D map of the surrounding objects and locate the obstacles within it. Stereo-based approaches are classified into Inverse Perspective Mapping (IPM)-based and disparity histogram-based methods. Whether aerial or terrestrial, disparity histogram-based methods suffer from common problems: computational complexity, sensitivity to illumination changes, and the need for accurate camera calibration, especially when implemented on small robots.

  • obstacle detection
  • image-based
  • UAV

1. Introduction

The use of mobile robots such as Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) has increased in recent years for photogrammetry [1][2][3] and many other applications. Such robots, which lack an onboard driver or pilot, should be able to detect obstacles automatically. In general, obstacle detection techniques can be divided into three groups: image-based [4][5], sensor-based [6][7], and hybrid [8][9]. In sensor-based methods, various active sensors such as lasers [10][11][12][13], radar [14][15], sonar [16], ultrasonic [17][18], and Kinect [19] have been used.
Sensor-based methods have their own merits and disadvantages. For instance, in addition to being reasonably priced, sonar and ultrasonic sensors can determine the direction and position of an obstacle. However, sonar and ultrasonic waves are affected by both constructive and destructive interference of ultrasonic reflections from multiple environmental obstacles [20]. In some situations, radar waves may be an excellent alternative, mainly when no visual data is available. Nevertheless, radar sensors are neither small nor light, which means installing them on small robots is not always feasible [21][22]. Moreover, infrared waves have a limited Field Of View (FOV), and their performance depends on weather conditions [23]. Despite being a popular sensor, LiDAR is relatively large and, thus, cannot always be installed on small robots like MAVs. Therefore, despite their popularity and ease of use, active sensors may not be ideal for obstacle detection when weight, size, energy consumption, sensitivity to weather conditions, and radio-frequency interference matter [23][24].
An alternative to active sensors is lightweight cameras, which provide visual information about the environment the robot travels in. Cameras are passive sensors and are used by numerous image-based algorithms to detect obstacles using grayscale values [25] and point [26] or edge [27][28] features. They can provide details regarding the robot’s movement or displacement as well as the obstacle’s colour, shape, and size [29]. In addition to enabling real-time and safe obstacle detection, image-based techniques are not disturbed by environmental electromagnetic noise. Additionally, the visual data can be used to guide the robot through the various image-based navigation techniques currently available.

 

2. Monocular Obstacle Detection Techniques

2.1. Appearance-Based

These methods consider an obstacle as a foreground object against a uniform background (i.e., ground or sky). They work based on prior knowledge of the relevant background in the form of edge [21], colour [25], texture [30], or shape [23] features. Obstacle detection is performed on single images taken sequentially by a camera mounted in front of the robot. Each pixel of the acquired image is examined to see whether it conforms to the sky or ground features; if it does not, it is labelled as an obstacle pixel. This process is repeated for every pixel in the image. The result is a binary image in which obstacles are shown in white and the remaining pixels in black.
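As an illustration, the following minimal sketch (Python with OpenCV) classifies pixels that do not match a ground colour model as obstacles; the HSV range, file name, and thresholds are illustrative assumptions rather than values from the studies cited below.
```python
import cv2
import numpy as np

# Illustrative HSV range assumed to describe the ground (e.g., a grey floor);
# in practice it would be learned from training images of the environment.
GROUND_LOWER = np.array([0, 0, 60])
GROUND_UPPER = np.array([180, 60, 200])

def appearance_based_obstacle_mask(bgr_image):
    """Label every pixel that does not match the ground colour model as an obstacle."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    ground = cv2.inRange(hsv, GROUND_LOWER, GROUND_UPPER)  # 255 where ground-like
    obstacles = cv2.bitwise_not(ground)                    # obstacles become white
    # A morphological opening suppresses isolated noisy pixels.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(obstacles, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = cv2.imread("frame.png")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("obstacle_mask.png", appearance_based_obstacle_mask(frame))
```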
In terrestrial robots, ground data such as the road or floor are detected first and then used to detect the obstacles. Ulrich and Nourbakhsh [30] proposed a technique where each pixel in the image is labelled as an obstacle or ground using its pixel values. Their system is trained by moving the robot within different environments. As a result, in practical situations, if the illumination conditions vary from those used during the training phase, obstacles will not be identified effectively. In another study, Lee et al. [23] used Markov Random Field (MRF) segmentation to detect small and narrow obstacles in an indoor environment. However, the camera should not be more than 6.3 cm away from the ground [23]. An omnidirectional camera system and the Hue Saturation Value (HSV) colour model were used to separate obstacles by Shih An et al. [31]. They created a binary image and used a width-first search technique to filter out image noise. In another study [32], object-based background subtraction and image-based obstacle detection techniques were used for static and moving objects. They used a single wide-angle camera for real-time obstacle detection. Moreover, Liu et al. [33] proposed a real-time monocular obstacle detection method for USVs like boats or ships, based on water horizon line detection and saliency estimation. The system was developed to detect objects below the water edge that may pose a threat to USVs. They claimed their method outperforms similar state-of-the-art techniques [33].
Conventional image processing techniques do not usually meet the expectations of real-time applications. Therefore, recent research has focused on increasing the speed of obstacle detection using Convolutional Neural Networks (CNNs). For example, to increase the speed, Talele et al. [34] used TensorFlow [35] and OpenCV [36] to detect obstacles by scanning the ground for distinct pixels and classifying them as obstacles. Similarly, Rane et al. [37] used TensorFlow to identify pixels different from the ground. Their method was real-time and applicable to various environments. To recognise and track typical moving obstacles, Qiu et al. [38] used YOLOv3 and Simple Online and Real-time Tracking (SORT). To address the low accuracy and slow reaction time of existing detection systems, He et al. [40] employed the You Only Look Once v4 (YOLOv4) network, an upgraded version of YOLO [39]. It improved the recognition of obstacles at medium and long distances [40].
Furthermore, He and Liu [41] developed a real-time technique for fusing features to boost the effectiveness of detecting obstacles in misty conditions. Additionally, Liu et al. [42] introduced a novel semantic segmentation algorithm based on a spatially constrained mixture model for real-time obstacle detection in marine environments. A Prior Estimation Network (PEN) was proposed to improve the mixture model.
As for airborne robots, most research looks for a way to separate the sky from the ground. For example, Huh et al. [21] separated the sky from the ground using a horizon line and then detected moving obstacles using the particle filter algorithm. Their method could be used in complex environments and low-altitude flights. Despite being efficient in detecting moving obstacles, their technique could not be used for stationary obstacles [21]. In another study, Mashaly et al. [25] introduced a method to find the sky in a complex environment, with obstacles separated from the sky in a binary image. De Croon and De Wagter [43] suggested a self-supervised learning method to discover the horizon line.
As can be seen, appearance-based methods have been used on various robots, each study with its own focus. One study investigated narrow and small obstacle detection (i.e., [23]), whereas moving obstacle detection has been the centre of attention in a few others. Except for Shih An et al. [31], the remaining studies have focused on detecting obstacles in front of the robot. It is also evident that the attention in the last three years has been on the speed of obstacle detection. Appearance-based methods are generally limited to environments where obstacles can easily be distinguished from the background. This assumption can easily be violated, particularly in complex environments containing objects, like buildings, trees, and humans [44], with varying shapes and colours. Some of the techniques are affected by the distance to the object or the noise in the images. Moreover, when using deep learning approaches, obstacle detection is primarily affected by changes in the environment and by the number and types of samples in the training data set. Therefore, researchers suggest using a semantic segmentation algorithm performed by deep learning networks, provided that sufficient training data is available. Another alternative would be to enrich appearance-based methods by providing them with distance-to-object data that can be obtained using a sensor or a depth-based algorithm (see the following sections).

2.2. Motion-Based

In motion-based methods, it is assumed that nearby objects produce larger apparent movements in the image, which can be detected using motion vectors. The process involves taking two successive images or frames within a very short time. First, a set of points is extracted and matched across both frames. Then, the displacement vectors of the matched points are computed. Since objects closer to the camera have larger displacements, any point whose displacement exceeds a particular threshold is considered an obstacle pixel.
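As a simple illustration of this idea, the sketch below tracks corner features between two frames with pyramidal Lucas-Kanade optical flow and flags points whose displacement exceeds a threshold; the feature counts and the threshold are illustrative assumptions, not values taken from the cited studies.
```python
import cv2
import numpy as np

def motion_based_obstacle_points(prev_gray, curr_gray, disp_threshold=8.0):
    """Flag matched points whose inter-frame displacement exceeds a threshold."""
    # Shi-Tomasi corners in the first frame.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Track the corners into the second frame with pyramidal Lucas-Kanade.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.flatten() == 1
    pts0 = p0[ok].reshape(-1, 2)
    pts1 = p1[ok].reshape(-1, 2)
    displacement = np.linalg.norm(pts1 - pts0, axis=1)
    # Large displacements are assumed to belong to nearby (obstacle) points.
    return pts1[displacement > disp_threshold], pts1[displacement <= disp_threshold]
```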
Various studies have been conducted in this field. Jia et al. [45] introduced a novel method that uses motion features to distinguish obstacles from shadows and road markings. Instead of using all pixels, they only used corners and Scale Invariant Feature Transform (SIFT) features to achieve real-time obstacle detection. Such an algorithm can fail if the number of mismatched features is high [45].
Optical flow is the data used in most motion-based approaches. Ohnishi and Imiya [46] used it to prevent a mobile robot from colliding with obstacles without having a map of the environment. Gharani and Karimi [47] used two consecutive frames to estimate the optical flow for obstacle detection on smartphones to help visually impaired people navigate indoor environments. Using a context-aware data combination method, they estimated the displacement of points between two consecutive frames. Tsai et al. [48] used a Support Vector Machine (SVM) [49] to validate Speeded-up Robust Features (SURF) [50] point detector locations as obstacles. In their research, a dense optical flow approach was used to extract the data for training the SVM. They then used the obstacle points and measures related to a spatially weighted saliency map to find the obstacle locations. The algorithm presented in their research applies to mobile robots with a camera installed at low heights. Consequently, it might not be applicable to UAVs, which usually fly at high altitudes.
Motion-based obstacle detection relies mainly on the quality of the matched points; thus, its quality can decrease if the number of mismatched features is high. In addition, if optical flow is used for motion estimation, care must be taken with image points close to the centre, because the flow vectors near the image centre (the focus of expansion for forward motion) are small. Indeed, detecting obstacles in front of the robot using optical flow is still challenging [51][52]. To resolve this problem, researchers suggest using an expansion-based approach to detect obstacles in the central parts of the images. This integration exploits the strength of expansion-based techniques in detecting frontal objects, while the other parts are analysed by the motion-based method. Alternatively, researchers may use deep learning networks to solve such problems.

2.3. Depth-Based

Like motion-based methods, depth-based approaches obtain depth information from images taken by a single camera. There are two ways to accomplish this: motion stereo and deep learning. In the former, the camera captures a pair of consecutive images as the robot moves. Although these images are taken with a single camera, the two viewpoints can be treated as a pair of stereo images, from which the depth of object points can be estimated. For this, the images are searched for matching points. Then, using standard depth estimation calculations [53], the depth of the object points is computed. Pixels whose depth is less than a threshold value are regarded as obstacles.
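The following sketch illustrates the motion-stereo idea under the simplifying assumption of a purely horizontal camera translation of known length between the two frames; the ORB features, focal length, baseline, and 5 m depth threshold are illustrative assumptions.
```python
import cv2

def motion_stereo_obstacle_points(img1, img2, focal_px, baseline_m, depth_thresh_m=5.0):
    """Treat two consecutive frames from one moving camera as a stereo pair and
    flag matched points whose estimated depth falls below a threshold."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    obstacle_points = []
    for m in matcher.match(des1, des2):
        x1 = kp1[m.queryIdx].pt[0]
        x2 = kp2[m.trainIdx].pt[0]
        disparity = abs(x1 - x2)          # pixels, valid only for horizontal motion
        if disparity < 1e-3:
            continue                      # degenerate or very distant point
        depth = focal_px * baseline_m / disparity   # classic stereo relation Z = f*B/d
        if depth < depth_thresh_m:
            obstacle_points.append(kp2[m.trainIdx].pt)
    return obstacle_points
```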
A recent alternative to the above process is to employ a deep learning network. The network is first trained on appropriate data so that it can produce a depth map from a single image [54]; samples of such networks can be found in [55][56]. The trained network is then applied to any image taken by the robot’s camera to estimate the depth of its pixels. Then, as in the classic approach, pixels having a depth smaller than a threshold are considered obstacles.
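As an example of the learning-based route, the sketch below uses the publicly available MiDaS network from the PyTorch Hub to predict a depth map from a single frame; this specific model is an assumption chosen for illustration (the cited works train their own networks), and since MiDaS outputs relative inverse depth, the obstacle threshold is scene-dependent rather than metric.
```python
import cv2
import torch

# Load a small pretrained monocular depth network (MiDaS) from the PyTorch Hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # hypothetical input
batch = midas_transforms.small_transform(img)
with torch.no_grad():
    pred = midas(batch)
    pred = torch.nn.functional.interpolate(pred.unsqueeze(1), size=img.shape[:2],
                                           mode="bicubic", align_corners=False).squeeze()

# MiDaS predicts relative inverse depth (larger = closer), so "near" pixels are those
# above a scene-dependent threshold rather than below a metric depth value.
inverse_depth = pred.cpu().numpy()
obstacle_mask = inverse_depth > inverse_depth.mean() + inverse_depth.std()
```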
In some approaches, instead of using motion vectors, a complete three-dimensional model of the surroundings is constructed and used to detect nearby obstacles [57][58]. Some of these methods use motion stereo. For example, Häne et al. [59] used motion stereo to produce depth maps from four fisheye images. In this system, any object on the ground was considered an obstacle. Such an algorithm cannot detect moving obstacles. Moreover, it builds a complete map of the environment, which requires complex computations.
In another study, Lin et al. [60] used a fisheye camera and an Inertial Measurement Unit (IMU) for the autonomous navigation of a MAV. As part of the system, an algorithm was developed to detect obstacles in a wide FOV using fisheye images. Each fisheye image was converted into two undistorted pinhole images with a combined horizontal viewing angle of 180°. Depth estimation was based on keyframes. Because the depth can only be estimated when the drone moves, this system does not work on MAVs in hovering mode. Moreover, as the quality of the peripheral parts of a fisheye image is low, the accuracy of the resulting depth image can be low. Besides, producing two horizontal pinhole images decreases the vertical FOV and, thus, limits the areas where obstacles can be detected.
Artificial neural networks and deep learning have been used to estimate depth in recent years [61]. Contrary to methods like 3D model construction, deep learning-based techniques do not require complex computations for obstacle detection. Kumar et al. [62] used four single fisheye images and a CNN to estimate the depth in all directions. To train the network, they used LiDAR data as the ground truth for depth estimation. The dataset, collected with their self-driving car, included a 64-beam Velodyne LiDAR and four wide-angle fisheye cameras. The distortion of the fisheye images was not corrected. The authors recommend improving the results by using more consecutive frames to exploit motion parallax and by using better CNN encoders [62]. Since their approach required further training, another future goal of the work is to improve semi-supervised learning using synthetic data and to run unsupervised learning algorithms [62]. In another study, Mancini et al. [63] developed a new CNN framework that uses image features obtained via fine-tuning the VGG19 network to compute the depth and, consequently, detect obstacles.
Moreover, Haseeb et al. [64] presented DisNet, a distance estimation system based on multi-hidden-layer neural networks. They evaluated the system under static conditions, while evaluation of the system mounted on a moving locomotive remained a challenge. In another study, Hatch et al. [65] presented an obstacle avoidance system for small UAVs in which the depth is computed using a vision algorithm; the system incorporates a high-level control network, a collision prediction network, and a contingency policy. Urban and Caplier [52] developed a navigation module for visually impaired pedestrians, using a video camera embedded in lightweight smart glasses. It includes two modules: a static data extractor and a dynamic data extractor. The first is a convolutional neural network used to determine the obstacle’s location and distance. The dynamic data extractor then computes the Time-to-Collision with a fully connected neural network by stacking the obstacle data from multiple frames.
Furthermore, some researchers have developed methods to create and use a semantic map of the environment to recognise obstacles [66][67]. A semantic map is a representation of the robot’s environment that incorporates both geometric (e.g., height, roughness) and semantic data (e.g., navigation-relevant classes such as trail, grass, obstacle, etc.) [68]. When used in urban autonomous vehicle applications, semantic maps can provide autonomous vehicles with a longer sensing range and greater manoeuvrability than onboard sensory devices alone. Some studies have used multi-sensor fusion to improve the robustness of the segmentation algorithms used to create semantic maps. More details can be found in [69][70][71][72][73].

2.4. Expansion–Based

These methods employ the same principle used by humans to detect obstacles, i.e., the object expansion rate between consecutive images. An object appears continuously larger as it approaches. Thus, to determine obstacles, points and/or regions on two sequential images are used to estimate the object’s enlargement. This value can be computed from homologous areas, distances, or even the SIFT scales of the extracted points. In expansion-based algorithms, if the enlargement value of an object exceeds a specific threshold, that object is considered an obstacle.
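For illustration, the sketch below estimates the enlargement value from the SIFT scales of matched keypoints, one of the criteria mentioned above; the 1.2 scale-ratio threshold is an illustrative assumption.
```python
import cv2

def expansion_based_obstacle_points(prev_gray, curr_gray, scale_ratio_thresh=1.2):
    """Match SIFT keypoints between two frames and flag those whose feature scale
    has grown by more than a threshold, i.e., points on expanding (approaching) objects."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

    obstacle_points = []
    for m in matches:
        scale_prev = kp1[m.queryIdx].size   # SIFT keypoint scale in the first frame
        scale_curr = kp2[m.trainIdx].size   # and in the second frame
        if scale_prev > 0 and scale_curr / scale_prev > scale_ratio_thresh:
            obstacle_points.append(kp2[m.trainIdx].pt)
    return obstacle_points
```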
Several expansion-based studies have been conducted, in which an obstacle is defined as an object that enlarges across consecutive frames, and various enlargement criteria are used to detect it. For example, Mori and Scherer [74] used the characteristics of the SURF algorithm to detect the initial positions of obstacles that differed in size. This algorithm involves simple calculations but may fail due to its slow reaction time to obstacles. In another study, Zeng et al. [44] used edge motion in two successive frames to identify approaching obstacles: if an object’s edges shift outwards relative to its centre in successive frames, the object is growing larger [44]. This approach applies to both fixed and mobile robots when the background is homogeneous. However, if the background is complicated, it only applies to static objects. Aguilar et al. [75] only detected obstacles conforming to some primary patterns; as a result, obstacles other than those following the predefined patterns cannot be identified.
To detect obstacles, Al-Kaff et al. [26] used SIFT [76] to extract and match points across successive frames and then formed the convex hull of the matched points. The points were regarded as obstacle points if the change in their SIFT scale values and in the convex hull area exceeded a certain threshold. The technique may simultaneously identify both near and far points as obstacles, which limits the mobile robot’s manoeuvrability in complex environments. In addition, the convex hull area-change criterion loses its efficiency if the point correspondences are wrong.
To solve this problem, Badrloo and Varshosaz [77] identified obstacle points as those whose average distance ratio exceeded a specified threshold. Their technique was able to distinguish far and near obstacles properly. Others have solved the problem in different ways. For example, similar to Badrloo and Varshosaz [77], Padhy et al. [28] computed the Euclidean distance between each point and the centroid of all other matched points. In another study, Escobar et al. [78] computed the optical flow to obtain the expansion rate for obstacle recognition in unknown and complex environments. Badrloo et al. [79] used the expansion rate of region areas for accurate obstacle detection.
Recently, deep learning solutions have been proposed to improve both the speed and the accuracy of obstacle detection, especially in complex and unknown environments. For instance, Lee et al. [5] detected obstacle trees in tree plantations. They trained a Faster Region-based Convolutional Neural Network (Faster R-CNN) to detect tree trunks for drone navigation. This approach uses the ratio of an obstacle’s height in the image to the image height. Additionally, the image widths between trees were used to find obstacle-free pathways.
Compared to other monocular techniques, expansion-based approaches employ a simple principle, i.e., the expansion rate. Such techniques are fast, as they do not require extensive computations. However, they may fail when the surrounding objects become complex. Thus, in recent years, deep neural networks have been employed to meet the expectations of real-time applications.
As seen from the above, expansion-based approaches use points or convex hulls to detect obstacles in order to increase speed. This leads to incomplete obstacle shapes, which can limit accuracy. A recent method extracts the regions of an obstacle for complete and precise obstacle detection [79], although it does not yet meet the requirements of real-time applications. Researchers suggest using methods based on deep neural networks to accelerate the complete and precise detection of obstacles in expansion-based methods.

3. Stereo-Based Obstacle Detection Techniques

Stereo-based obstacle detection uses two synchronised cameras fixed on the robot [27]. These methods are commonly classified into IPM-based and disparity histogram-based approaches, which are discussed below.

3.1. IPM–Based Method

IPM-based methods were initially used to detect all types of road obstacles [80] and to eliminate the perspective effect of the original images in lane detection problems [81][82]. Currently, IPM images are mostly used in monocular methods [83][84][85].
Assuming the road has a flat surface, the IPM algorithm produces an image representing the road seen from the top, using internal and external parameters of the cameras. Then, the difference in the grey levels of pixels in the overlapping regions is computed, from which a polar histogram image is generated. If the image textures are uniform, this histogram contains two triangles/peaks: one for the lane and one for the potential obstacle. These peaks are then used to detect the obstacle, i.e., non-lane object. In effect, obstacle detection relies on identifying these two triangles based on their shapes and positions. In practice, it becomes difficult to form such ideal triangles due to the diversity of textures in the images, objects of irregular shapes, and variations in the brightness of the pixels.
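A minimal sketch of the warping-and-differencing step is given below; it assumes the two road-plane homographies have already been obtained from the internal and external camera parameters and omits the polar-histogram analysis described above. The threshold and output size are illustrative.
```python
import cv2

def ipm_difference_mask(left_gray, right_gray, H_left, H_right,
                        bev_size=(400, 400), diff_thresh=40):
    """Warp both images onto a common bird's-eye (road-plane) view and difference them.
    Under the flat-road assumption, the road looks identical in both warped images,
    so large differences indicate pixels that violate the assumption, i.e., obstacles."""
    bev_left = cv2.warpPerspective(left_gray, H_left, bev_size)
    bev_right = cv2.warpPerspective(right_gray, H_right, bev_size)
    diff = cv2.absdiff(bev_left, bev_right)
    _, obstacle_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return obstacle_mask
```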
There is limited research on stereo IPM obstacle detection [86]. An example is that by Bertozzi et al. [86] for short-range obstacle detection. This method detects obstacles using the difference between the left and right IPM images. Although it may be accurate in some conditions, it has a limited range and cannot show the actual distance to obstacles. Kim et al. [87] used a stereo pair of cameras to create two IPM images for each camera. These images were then combined with another IPM image created using a pair of consecutive images taken with the camera having a smaller FOV to detect the obstacles.
Although IPM-based methods are very fast, they have two notable limitations. First, since they use object portions with a uniform texture or colour for obstacle detection [88][89], they can only detect objects with uniform material, such as cars [90]. Second, errors in the homography model, unknown camera motion, and light reflection from the floor can generate noise in the images [90]. Indeed, implementing the IPM transform requires a priori knowledge of the specific acquisition conditions (camera location, orientation, etc.) and some assumptions regarding the objects being imaged. Consequently, it can only be utilised in structured environments, where, for instance, the camera is fixed or where the system calibration and the surrounding environment can be monitored by another type of sensor [91]. Due to these limitations, researchers recommend using IPM only for lane detection and obstacle detection in cars, because the data and conditions this method requires are not necessarily available in unknown environments, particularly when using drones.

3.2. Disparity Histogram-Based

In these methods, two cameras with similar properties, such as focal length and FOV, are installed at a fixed distance in front of the robot. They simultaneously capture two images of the surroundings, which are then rectified. The distance between matched pixels (the disparity) is then calculated for all image pixels. The result is a disparity map, which is used to compute the depth map of the surrounding objects [92]. Pixels having a depth smaller than a threshold are considered obstacle points. The majority of stereo-based obstacle detection techniques developed so far are disparity histogram-based and are studied in this section.
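The pipeline above can be sketched with OpenCV’s semi-global block matcher; the matcher parameters, focal length, baseline, and 3 m depth threshold are illustrative assumptions, and the inputs are assumed to be rectified grayscale frames.
```python
import cv2
import numpy as np

def stereo_obstacle_mask(rect_left, rect_right, focal_px, baseline_m, depth_thresh_m=3.0):
    """Compute a dense disparity map from a rectified grayscale stereo pair and
    mark pixels whose depth falls below a threshold as obstacles."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                                 P1=8 * 5 * 5, P2=32 * 5 * 5)
    disparity = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0  # fixed-point
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
    return ((depth > 0) & (depth < depth_thresh_m)).astype(np.uint8) * 255
```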
Disparity histogram-based methods can be discussed separately for terrestrial and aerial robots; both groups are reviewed below.

3.2.1. Disparity Histogram-Based Obstacle Detection for Terrestrial Robots

Disparity histogram-based obstacle detection techniques were initially developed for terrestrial robots. Kim et al. [93] proposed a Hierarchical Census Transform (HCT) matching method for car parking assistance using images taken by a pair of synchronised fisheye cameras. As the quality of points at the edges of a fisheye image is low, the detection was only accurate enough in areas close to the image centre. Moreover, the algorithm’s accuracy decreased when shadows and complicated or reflective backgrounds were present. Later, Ball et al. [94] introduced an obstacle detection algorithm that could continuously adapt to changes in illumination and brightness in farm environments. They developed two distinct steps for obstacle detection. The first removes both the crop and the stubble; stereo matching is then performed only on the remaining small portions to increase the speed. This step alone is unable to detect hidden obstacles, so the second step bypasses this constraint by defining obstacles as observations that are unique in their appearance and structural cues [94]. Salhi and Amiri [95] proposed a faster algorithm, implemented on Field Programmable Gate Arrays (FPGAs), that simulates the human visual system.
Disparity histogram-based techniques rely on computationally intensive matching algorithms. A solution to speed up the computations is to reduce the matching search space. Jung et al. [96] and Huh et al. [27] removed the road pixels to reduce the search space and regarded the other pixels as obstacles for vehicles travelling along a road. To detect the road, they used cameras with a normal FOV. As a result, only obstacles in front of the vehicle could be detected. In a similar approach, to guide visually impaired individuals, Huang et al. [97] used depth data obtained with a Kinect scanner to identify and remove the road points. Furthermore, Muhovič et al. [98] approximated the water surface by fitting a plane to the point cloud and processed the outlying points further to identify potential obstacles. As a recent technique, Murmu and Nandi [99] presented a novel lane and obstacle detection algorithm that uses video frames captured by a low-cost stereo vision system; the suggested system generates a real-time disparity map from the sequential frames to identify lanes and other cars. Moreover, Sun et al. [100] used 3D point cloud candidates extracted by height analysis for obstacle detection instead of using all 3D points.
With the development of neural networks, many researchers have recently turned their attention to deep learning methods [101][102][103]. Choe et al. [104] proposed a stereo object matching technique that uses 2D contextual information from images and 3D object-level information. Luo et al. [105] also used CNNs that can produce extremely accurate stereo matching results in less than one second. Moreover, Dairi et al. [101] developed a hybrid encoder that combines Deep Boltzmann Machines (DBM) and Auto-Encoders (AE). In addition, Song et al. [106] trained a convolutional neural network using manually labelled Regions Of Interest (ROIs) from the KITTI data set to classify the left/right side of the host lane; the 3D data generated by stereo matching is then used to generate an obstacle mask. Zhang et al. [102] introduced a method that uses stereo images and deep learning to avoid car accidents. The algorithm was developed for drivers reversing with a limited view of the objects behind them. This method detects and locates obstacles in the image using a Faster R-CNN algorithm.
Haris and Hou [107] addressed the robustness of obstacle detection in complex environments by integrating an MRF model for obstacle detection and road segmentation with a CNN model for safe navigation. Their research evaluated the detection of small obstacles left on the road [107]. Furthermore, Mukherjee et al. [108] provided a method for detecting and localising pedestrians using a ZED stereo camera. They used Darknet YOLOv2 to achieve more accurate and rapid obstacle detection and localisation.
Compared with traditional methods, deep learning has the advantages of robustness, accuracy, and speed. In addition, it can achieve real-time, high-precision recognition and distance measurement through the combination of stereo vision techniques [102].

3.2.2. Disparity Histogram-Based Obstacle Detection for Aerial Robots

While terrestrial robots are mainly surrounded by known objects, aerial robots move in unknown environments, and the processing required by disparity histogram-based methods can be too heavy for onboard MAV microprocessors. To simplify the search space and speed up the depth calculation, McGuire et al. [29] used vertical edges within the stereo images to detect obstacles. Such an algorithm would not work in complicated environments where horizontal and diagonal edges are present.
Tijmons et al. [109] introduced the Droplet strategy to identify and use only strongly matched points and, thus, decrease the search space. The resolution of the images was reduced to increase the speed of disparity map generation. When the environment becomes complex, the processing speed of this technique decreases. Moreover, reducing the resolution may eliminate tiny obstacles such as tree branches, ropes, and wires.
Barry et al. [4] concentrated only on fixed obstacles at a 5–10 m distance from a UAV to speed up the process. The algorithm was implemented on a light drone (less than 1 kg) and detected obstacles at 120 frames per second. The baseline of the cameras was only 14 inches. Thus, in addition to being limited to detecting fixed objects, a major challenge of the work would be its need for accurate calibration of the system to obtain reliable results. Lin et al. [110] considered dynamic environments and stereo cameras to detect moving obstacles using depth from stereo images. However, they only considered the obstacles’ estimated position, size, and velocity. Therefore, some characteristics of objects such as direction, volume, shape, and influencing factors like environmental conditions were not considered. Hence, such algorithms may have difficulty detecting some of the moving obstacles.
One of the state-of-the-art methods is that of Grinberg and Ruf [111], which includes several components: image rectification, pixel matching, Semi-Global Matching (SGM) optimisation, a compatibility check, and median filtering. The algorithm runs on an ARM processor of the Towards Ubiquitous Low-Power Image Processing Platforms (TULIPP) hardware and, therefore, achieves a performance suitable for real-time applications on a UAV [111].
Besides computational complexity, which was discussed above, disparity histogram-based methods face two further problems. The second problem is sensitivity to illumination variations, which has been tackled using deep learning networks in recent years. The third problem is the need for accurate calibration of the stereo cameras. If the cameras are not calibrated correctly, the detection error increases very quickly over time. This system instability affects the accuracy of computing the distance to the obstacle, especially when the baseline of the cameras is small, e.g., when they are mounted on small-sized UAVs.

This entry is adapted from the peer-reviewed paper 10.3390/rs14153824

References

  1. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
  2. Reinoso, J.; Gonçalves, J.; Pereira, C.; Bleninger, T. Cartography for Civil Engineering Projects: Photogrammetry Supported by Unmanned Aerial Vehicles. Iran. J. Sci. Technol. Trans. Civ. Eng. 2018, 42, 91–96.
  3. Janoušek, J.; Jambor, V.; Marcoň, P.; Dohnal, P.; Synková, H.; Fiala, P. Using UAV-Based Photogrammetry to Obtain Correlation between the Vegetation Indices and Chemical Analysis of Agricultural Crops. Remote Sens. 2021, 13, 1878.
  4. Barry, A.J.; Florence, P.R.; Tedrake, R. High-speed autonomous obstacle avoidance with pushbroom stereo. J. Field Robot. 2018, 35, 52–68.
  5. Lee, H.; Ho, H.; Zhou, Y. Deep Learning-based Monocular Obstacle Avoidance for Unmanned Aerial Vehicle Navigation in Tree Plantations. J. Intell. Robot. Syst. 2021, 101, 5.
  6. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36.
  7. Goodin, C.; Carrillo, J.; Monroe, J.G.; Carruth, D.W.; Hudson, C.R. An Analytic Model for Negative Obstacle Detection with Lidar and Numerical Validation Using Physics-Based Simulation. Sensors 2021, 21, 3211.
  8. Hu, J.-w.; Zheng, B.-y.; Wang, C.; Zhao, C.-h.; Hou, X.-l.; Pan, Q.; Xu, Z. A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inf. Technol. Electron. Eng. 2020, 21, 675–692.
  9. John, V.; Mita, S. Deep Feature-Level Sensor Fusion Using Skip Connections for Real-Time Object Detection in Autonomous Driving. Electronics 2021, 10, 424.
  10. Serna, A.; Marcotegui, B. Urban accessibility diagnosis from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 84, 23–32.
  11. Díaz-Vilariño, L.; Boguslawski, P.; Khoshelham, K.; Lorenzo, H.; Mahdjoubi, L. INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 275–281.
  12. Li, F.; Wang, H.; Akwensi, P.H.; Kang, Z. Construction of Obstacle Element Map Based on Indoor Scene Recognition. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 819–825.
  13. Keramatian, A.; Gulisano, V.; Papatriantafilou, M.; Tsigas, P. Mad-c: Multi-stage approximate distributed cluster-combining for obstacle detection and localization. J. Parallel Distrib. Comput. 2021, 147, 248–267.
  14. Giannì, C.; Balsi, M.; Esposito, S.; Fallavollita, P. Obstacle Detection System Involving Fusion of Multiple Sensor Technologies. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 127–134.
  15. Xie, G.; Zhang, J.; Tang, J.; Zhao, H.; Sun, N.; Hu, M. Obstacle detection based on depth fusion of lidar and radar in challenging conditions. In Industrial Robot: The International Journal of Robotics Research and Application; Emerald Group Publishing Limited: Bradford, UK, 2021.
  16. Qin, R.; Zhao, X.; Zhu, W.; Yang, Q.; He, B.; Li, G.; Yan, T. Multiple Receptive Field Network (MRF-Net) for Autonomous Underwater Vehicle Fishing Net Detection Using Forward-Looking Sonar Images. Sensors 2021, 21, 1933.
  17. Yılmaz, E.; Özyer, S.T. Remote and Autonomous Controlled Robotic Car based on Arduino with Real Time Obstacle Detection and Avoidance. Univers. J. Eng. Sci. 2019, 7, 1–7.
  18. Singh, B.; Kapoor, M. A Framework for the Generation of Obstacle Data for the Study of Obstacle Detection by Ultrasonic Sensors. IEEE Sens. J. 2021, 21, 9475–9483.
  19. Kucukyildiz, G.; Ocak, H.; Karakaya, S.; Sayli, O. Design and implementation of a multi sensor based brain computer interface for a robotic wheelchair. J. Intell. Robot. Syst. 2017, 87, 247–263.
  20. Tondin Ferreira Dias, E.; Vieira Neto, H.; Schneider, F.K. A Compressed Sensing Approach for Multiple Obstacle Localisation Using Sonar Sensors in Air. Sensors 2020, 20, 5511.
  21. Huh, S.; Cho, S.; Jung, Y.; Shim, D.H. Vision-based sense-and-avoid framework for unmanned aerial vehicles. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 3427–3439.
  22. Aswini, N.; Krishna Kumar, E.; Uma, S. UAV and obstacle sensing techniques–a perspective. Int. J. Intell. Unmanned Syst. 2018, 6, 32–46.
  23. Lee, T.-J.; Yi, D.-H.; Cho, D.-I. A monocular vision sensor-based obstacle detection algorithm for autonomous robots. Sensors 2016, 16, 311.
  24. Zahran, S.; Moussa, A.M.; Sesay, A.B.; El-Sheimy, N. A new velocity meter based on Hall effect sensors for UAV indoor navigation. IEEE Sens. J. 2018, 19, 3067–3076.
  25. Mashaly, A.S.; Wang, Y.; Liu, Q. Efficient sky segmentation approach for small UAV autonomous obstacles avoidance in cluttered environment. In Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International, Beijing, China, 10–15 July 2016; pp. 6710–6713.
  26. Al-Kaff, A.; García, F.; Martín, D.; De La Escalera, A.; Armingol, J.M. Obstacle detection and avoidance system based on monocular camera and size expansion algorithm for UAVs. Sensors 2017, 17, 1061.
  27. Huh, K.; Park, J.; Hwang, J.; Hong, D. A stereo vision-based obstacle detection system in vehicles. Opt. Lasers Eng. 2008, 46, 168–178.
  28. Padhy, R.P.; Choudhury, S.K.; Sa, P.K.; Bakshi, S. Obstacle Avoidance for Unmanned Aerial Vehicles: Using Visual Features in Unknown Environments. IEEE Consum. Electron. Mag. 2019, 8, 74–80.
  29. McGuire, K.; de Croon, G.; De Wagter, C.; Tuyls, K.; Kappen, H.J. Efficient Optical Flow and Stereo Vision for Velocity Estimation and Obstacle Avoidance on an Autonomous Pocket Drone. IEEE Robot. Autom. Lett. 2017, 2, 1070–1076.
  30. Ulrich, I.; Nourbakhsh, I. Appearance-based obstacle detection with monocular color vision. In Proceedings of the AAAI/IAAI, Austin, TX, USA, 30 July 2000; pp. 866–871.
  31. Shih An, L.; Chou, L.-H.; Chang, T.-H.; Yang, C.-H.; Chang, Y.-C. Obstacle Avoidance of Mobile Robot Based on HyperOmni Vision. Sens. Mater. 2019, 31, 1021.
  32. Wang, S.-H.; Li, X.-X. A Real-Time Monocular Vision-Based Obstacle Detection. In Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 20–23 April 2020; pp. 695–699.
  33. Liu, J.; Li, H.; Liu, J.; Xie, S.; Luo, J. Real-Time Monocular Obstacle Detection Based on Horizon Line and Saliency Estimation for Unmanned Surface Vehicles. Mob. Netw. Appl. 2021, 26, 1372–1385.
  34. Talele, A.; Patil, A.; Barse, B. Detection of real time objects using TensorFlow and OpenCV. Asian J. Converg. Technol. (AJCT) 2019, 5.
  35. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283.
  36. Bradski, G. The openCV library. In Dr. Dobb’s Journal: Software Tools for the Professional Programmer; M & T Pub., the University of Michigan: Ann Arbor, MI, USA, 2000; Volume 25, pp. 120–123.
  37. Rane, M.; Patil, A.; Barse, B. Real object detection using TensorFlow. In ICCCE 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 39–45.
  38. Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved yolov3 and deep SORT. Sensors 2020, 20, 4082.
  39. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
  40. He, D.; Zou, Z.; Chen, Y.; Liu, B.; Yao, X.; Shan, S. Obstacle detection of rail transit based on deep learning. Measurement 2021, 176, 109241.
  41. He, Y.; Liu, Z. A Feature Fusion Method to Improve the Driving Obstacle Detection under Foggy Weather. IEEE Trans. Transp. Electrif. 2021, 7, 2505–2515.
  42. Liu, J.; Li, H.; Luo, J.; Xie, S.; Sun, Y. Efficient obstacle detection based on prior estimation network and spatially constrained mixture model for unmanned surface vehicles. J. Field Robot. 2021, 38, 212–228.
  43. de Croon, G.; De Wagter, C. Learning what is above and what is below: Horizon approach to monocular obstacle detection. arXiv 2018, arXiv:1806.08007.
  44. Zeng, Y.; Zhao, F.; Wang, G.; Zhang, L.; Xu, B. Brain-Inspired Obstacle Detection Based on the Biological Visual Pathway. In Proceedings of the International Conference on Brain and Health Informatics, Omaha, NE, USA, 13–16 October 2016; pp. 355–364.
  45. Jia, B.; Liu, R.; Zhu, M. Real-time obstacle detection with motion features using monocular vision. Vis. Comput. 2015, 31, 281–293.
  46. Ohnishi, N.; Imiya, A. Appearance-based navigation and homing for autonomous mobile robot. Image Vis. Comput. 2013, 31, 511–532.
  47. Gharani, P.; Karimi, H.A. Context-aware obstacle detection for navigation by visually impaired. Image Vis. Comput. 2017, 64, 103–115.
  48. Tsai, C.-C.; Chang, C.-W.; Tao, C.-W. Vision-Based Obstacle Detection for Mobile Robot in Outdoor Environment. J. Inf. Sci. Eng. 2018, 34, 21–34.
  49. Gunn, S.R. Support vector machines for classification and regression. ISIS Tech. Rep. 1998, 14, 5–16.
  50. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  51. de Croon, G.; De Wagter, C.; Seidl, T. Enhancing optical-flow-based control by learning visual appearance cues for flying robots. Nat. Mach. Intell. 2021, 3, 33–41.
  52. Urban, D.; Caplier, A. Time- and Resource-Efficient Time-to-Collision Forecasting for Indoor Pedestrian Obstacles Avoidance. J. Imaging 2021, 7, 61.
  53. Nalpantidis, L.; Gasteratos, A. Stereo vision depth estimation methods for robotic applications. In Depth Map and 3D Imaging Applications: Algorithms and Technologies; IGI Global: Hershey/Derry/Dauphin, PA, USA, 2012; pp. 397–417.
  54. Lee, J.; Jeong, J.; Cho, J.; Yoo, D.; Lee, B.; Lee, B. Deep neural network for multi-depth hologram generation and its training strategy. Opt. Express 2020, 28, 27137–27154.
  55. Almalioglu, Y.; Saputra, M.R.U.; De Gusmao, P.P.; Markham, A.; Trigoni, N. GANVO: Unsupervised deep monocular visual odometry and depth estimation with generative adversarial networks. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Colombo, Sri Lanka, 9–10 May 2019; pp. 5474–5480.
  56. Kim, D.; Ga, W.; Ahn, P.; Joo, D.; Chun, S.; Kim, J. Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth. arXiv 2022, arXiv:2201.07436.
  57. Gao, W.; Wang, K.; Ding, W.; Gao, F.; Qin, T.; Shen, S. Autonomous aerial robot using dual-fisheye cameras. J. Field Robot. 2020, 37, 497–514.
  58. Silva, A.; Mendonça, R.; Santana, P. Monocular Trail Detection and Tracking Aided by Visual SLAM for Small Unmanned Aerial Vehicles. J. Intell. Robot. Syst. 2020, 97, 531–551.
  59. Häne, C.; Heng, L.; Lee, G.H.; Fraundorfer, F.; Furgale, P.; Sattler, T.; Pollefeys, M. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image Vis. Comput. 2017, 68, 14–27.
  60. Lin, Y.; Gao, F.; Qin, T.; Gao, W.; Liu, T.; Wu, W.; Yang, Z.; Shen, S. Autonomous aerial navigation using monocular visual-inertial fusion. J. Field Robot. 2018, 35, 23–51.
  61. Zhao, C.; Sun, Q.; Zhang, C.; Tang, Y.; Qian, F. Monocular depth estimation based on deep learning: An overview. Sci. China Technol. Sci. 2020, 63, 1612–1627.
  62. Kumar, V.R.; Milz, S.; Simon, M.; Witt, C.; Amende, K.; Petzold, J.; Yogamani, S. Monocular Fisheye Camera Depth Estimation Using Semi-supervised Sparse Velodyne Data. arXiv 2018, arXiv:1803.06192.
  63. Mancini, M.; Costante, G.; Valigi, P.; Ciarfuglia, T.A. J-MOD 2: Joint Monocular Obstacle Detection and Depth Estimation. IEEE Robot. Autom. Lett. 2018, 3, 1490–1497.
  64. Haseeb, M.A.; Guan, J.; Ristic-Durrant, D.; Gräser, A. DisNet: A novel method for distance estimation from monocular camera. In Proceedings of the 10th Planning, Perception and Navigation for Intelligent Vehicles (PPNIV18), IROS, Madrid, Spain, 1 October 2018.
  65. Hatch, K.; Mern, J.M.; Kochenderfer, M.J. Obstacle Avoidance Using a Monocular Camera. In Proceedings of the AIAA Scitech 2021 Forum, Virtual Event, 11–22 January 2021; p. 0269.
  66. Máttyus, G.; Luo, W.; Urtasun, R. Deeproadmapper: Extracting road topology from aerial images. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3438–3446.
  67. Homayounfar, N.; Ma, W.-C.; Liang, J.; Wu, X.; Fan, J.; Urtasun, R. Dagmapper: Learning to map by discovering lane topology. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Long Beach, CA, USA, 16–17 June 2019; pp. 2911–2920.
  68. Maturana, D.; Chou, P.-W.; Uenoyama, M.; Scherer, S. Real-time semantic mapping for autonomous off-road navigation. In Proceedings of the Field and Service Robotics, Toronto, ON, Canada, 25 April 2018; pp. 335–350.
  69. Sengupta, S.; Sturgess, P.; Ladický, L.u.; Torr, P.H. Automatic dense visual semantic mapping from street-level imagery. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal, 7–12 October 2012; pp. 857–862.
  70. Gerke, M.; Xiao, J. Fusion of airborne laserscanning point clouds and images for supervised and unsupervised scene classification. ISPRS J. Photogramm. Remote Sens. 2014, 87, 78–92.
  71. Seif, H.G.; Hu, X. Autonomous driving in the iCity—HD maps as a key challenge of the automotive industry. Engineering 2016, 2, 159–162.
  72. Jiao, J. Machine learning assisted high-definition map creation. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018; pp. 367–373.
  73. Ye, C.; Zhao, H.; Ma, L.; Jiang, H.; Li, H.; Wang, R.; Chapman, M.A.; Junior, J.M.; Li, J. Robust lane extraction from MLS point clouds towards HD maps especially in curve road. IEEE Trans. Intell. Transp. Syst. 2020, 23, 1505–1518.
  74. Mori, T.; Scherer, S. First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles. In Proceedings of the Robotics and Automation (Icra), 2013 IEEE International Conference on, Karlsruhe, Germany, 6–10 May 2013; pp. 1750–1757.
  75. Aguilar, W.G.; Casaliglla, V.P.; Pólit, J.L. Obstacle avoidance based-visual navigation for micro aerial vehicles. Electronics 2017, 6, 10.
  76. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  77. Badrloo, S.; Varshosaz, M. Monocular vision based obstacle detection. Earth Obs. Geomat. Eng. 2017, 1, 122–130.
  78. Escobar-Alvarez, H.D.; Johnson, N.; Hebble, T.; Klingebiel, K.; Quintero, S.A.; Regenstein, J.; Browning, N.A. R-ADVANCE: Rapid Adaptive Prediction for Vision-based Autonomous Navigation, Control, and Evasion. J. Field Robot. 2018, 35, 91–100.
  79. Badrloo, S.; Varshosaz, M.; Pirasteh, S.; Li, J. A novel region-based expansion rate obstacle detection method for MAVs using a fisheye camera. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102739.
  80. Parmar, M.M.; Rawlo, R.R.; Shirke, J.L.; Sangam, S. Self-Driving Car. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET). 2022, 10, 2305–2309.
  81. Wang, C.; Shi, Z.-k. A novel traffic stream detection method based on inverse perspective mapping. Procedia Eng. 2012, 29, 1938–1943.
  82. Muad, A.M.; Hussain, A.; Samad, S.A.; Mustaffa, M.M.; Majlis, B.Y. Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system. In Proceedings of the 2004 IEEE Region 10 Conference TENCON, Chiang Mai, Thailand, 21–24 November 2004; pp. 207–210.
  83. Kuo, L.-C.; Tai, C.-C. Robust Image-Based Water-Level Estimation Using Single-Camera Monitoring. IEEE Trans. Instrum. Meas. 2022, 71, 1–11.
  84. Hu, Z.; Xiao, H.; Zhou, Z.; Li, N. Detection of parking slots occupation by temporal difference of inverse perspective mapping from vehicle-borne monocular camera. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2021, 235, 3119–3126.
  85. Lin, C.-T.; Shen, T.-K.; Shou, Y.-W. Construction of fisheye lens inverse perspective mapping model and its applications of obstacle detection. EURASIP J. Adv. Signal Processing 2010, 2010, 296598.
  86. Bertozzi, M.; Broggi, A. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Trans. Image Processing 1998, 7, 62–81.
  87. Kim, C.-h.; Lee, T.-j. An application of stereo camera with two different FoVs for SLAM and obstacle detection. IFAC-PapersOnLine 2018, 51, 148–153.
  88. Wang, H.; Yuan, K.; Zou, W.; Peng, Y. Real-time region-based obstacle detection with monocular vision. In Proceedings of the 2005 IEEE International Conference on Robotics and Biomimetics-ROBIO, Hong Kong, China, 5–9 July 2005; pp. 615–619.
  89. Fazl-Ersi, E.; Tsotsos, J.K. Region classification for robust floor detection in indoor environments. In Proceedings of the International Conference Image Analysis and Recognition, Halifax, NS, Canada, 6–8 July 2009; pp. 717–726.
  90. Cucchiara, R.; Perini, E.; Pistoni, G. Efficient Stereo Vision for Obstacle Detection and AGV Navigation. In Proceedings of the ICIAP, Modena, Italy, 10–14 September 2007; pp. 291–296.
  91. Tanveer, M.H.; Sgorbissa, A. An Inverse Perspective Mapping Approach using Monocular Camera of Pepper Humanoid Robot to Determine the Position of Other Moving Robot in Plane. In Proceedings of the ICINCO (2), Porto, Portugal, 29–31 July 2018; pp. 229–235.
  92. Song, W.; Xiong, G.; Cao, L.; Jiang, Y. Depth calculation and object detection using stereo vision with subpixel disparity and hog feature. In Advances in Information Technology and Education; Springer: Berlin/Heidelberg, Germany, 2011; pp. 489–494.
  93. Kim, D.; Choi, J.; Yoo, H.; Yang, U.; Sohn, K. Rear obstacle detection system with fisheye stereo camera using HCT. Expert Syst. Appl. 2015, 42, 6295–6305.
  94. Ball, D.; Ross, P.; English, A.; Milani, P.; Richards, D.; Bate, A.; Upcroft, B.; Wyeth, G.; Corke, P. Farm Workers of the Future: Vision-Based Robotics for Broad-Acre Agriculture. IEEE Robot. Autom. Mag. 2017, 24, 97–107.
  95. Salhi, M.S.; Amiri, H. Design on FPGA of an obstacle detection module over stereo image for robotic learning. Indian J. Eng. 2022, 19, 72–84.
  96. Jung, H.; Lee, Y.; Kim, B.; Yoon, P.; Kim, J. Stereo vision-based forward obstacle detection. Int. J. Automot. Technol. 2007, 8, 493–504.
  97. Huang, H.-C.; Hsieh, C.-T.; Yeh, C.-H. An indoor obstacle detection system using depth information and region growth. Sensors 2015, 15, 27116–27141.
  98. Muhovič, J.; Bovcon, B.; Kristan, M.; Perš, J. Obstacle tracking for unmanned surface vessels using 3-D point cloud. IEEE J. Ocean. Eng. 2019, 45, 786–798.
  99. Murmu, N.; Nandi, D. Lane and Obstacle Detection System Based on Single Camera-Based Stereo Vision System. In Applications of Advanced Computing in Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 259–266.
  100. Sun, T.; Pan, W.; Wang, Y.; Liu, Y. Region of Interest Constrained Negative Obstacle Detection and Tracking With a Stereo Camera. IEEE Sens. J. 2022, 22, 3616–3625.
  101. Dairi, A.; Harrou, F.; Senouci, M.; Sun, Y. Unsupervised obstacle detection in driving environments using deep-learning-based stereovision. Robot. Auton. Syst. 2018, 100, 287–301.
  102. Zhang, J.; Hu, S.; Shi, H. Deep learning based object distance measurement method for binocular stereo vision blind area. Methods 2018, 9, 606–613.
  103. Zhang, Y.; Song, J.; Ding, Y.; Yuan, Y.; Wei, H.-L. FSD-BRIEF: A Distorted BRIEF Descriptor for Fisheye Image Based on Spherical Perspective Model. Sensors 2021, 21, 1839.
  104. Choe, J.; Joo, K.; Rameau, F.; Kweon, I.S. Stereo object matching network. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xian, China, 30 May–5 June 2021; pp. 12918–12924.
  105. Luo, W.; Schwing, A.G.; Urtasun, R. Efficient deep learning for stereo matching. In Proceedings of the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 30 June 2016; pp. 5695–5703.
  106. Song, W.; Yang, Y.; Fu, M.; Li, Y.; Wang, M. Lane detection and classification for forward collision warning system based on stereo vision. IEEE Sens. J. 2018, 18, 5151–5163.
  107. Haris, M.; Hou, J. Obstacle Detection and Safely Navigate the Autonomous Vehicle from Unexpected Obstacles on the Driving Lane. Sensors 2020, 20, 4719.
  108. Mukherjee, A.; Adarsh, S.; Ramachandran, K. ROS-Based Pedestrian Detection and Distance Estimation Algorithm Using Stereo Vision, Leddar and CNN. In Intelligent System Design; Springer: Berlin/Heidelberg, Germany, 2021; pp. 117–127.
  109. Tijmons, S.; de Croon, G.C.; Remes, B.D.; De Wagter, C.; Mulder, M. Obstacle avoidance strategy using onboard stereo vision on a flapping wing mav. IEEE Trans. Robot. 2017, 33, 858–874.
  110. Lin, J.; Zhu, H.; Alonso-Mora, J. Robust vision-based obstacle avoidance for micro aerial vehicles in dynamic environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2682–2688.
  111. Grinberg, M.; Ruf, B. UAV Use Case: Real-Time Obstacle Avoidance System for Unmanned Aerial Vehicles Based on Stereo Vision. In Towards Ubiquitous Low-Power Image Processing Platforms; Springer: Berlin/Heidelberg, Germany, 2021; pp. 139–149.