Autonomous Landing Site Detection

The advancement of autonomous technology in Unmanned Aerial Vehicles (UAVs) has ushered in a new era in aviation. While UAVs were initially used only for military, rescue, and disaster-response operations, they are now employed for domestic and civilian purposes as well. To support these expanded applications and to increase autonomy, the ability to perform autonomous landing will be a crucial capability for UAVs.

  • autonomous landing
  • unmanned aerial vehicle
  • computer vision

1. Introduction

Drones, or Unmanned Aerial Vehicles (UAVs), have a wide range of uses, from military applications to domestic ones, owing to their ease of control, maneuverability, cost-effectiveness, and lack of pilot involvement. Certain drone operations, such as resource exploration and extraction, surveillance, and disaster management, demand autonomous landing, as human involvement in these operations may be dangerous, particularly if communication is lost. As a result, recent developments in drone technology have focused on replacing human involvement with autonomous controllers that can make their own decisions.
Launching a drone autonomously is relatively straightforward; the challenging aspect is landing it precisely at a specific location, especially in emergency situations. For an autonomous landing to be successful, the drone must have accurate information about its landing position, altitude, wind speed, and wind direction. Armed with these data, the UAV can adjust its landing approach, such as reducing speed or altering its altitude, to ensure a safe and successful landing. Landing areas can be classified into three categories: static, dynamic, and complex. Static locations are firmly fixed to the ground, such as helipads, airports, and roads. Dynamic locations are landing areas positioned on moving objects, such as a helipad on a ship or a landing pad on a truck or car. Complex landing areas have no markings on the surface and can pose a challenge; examples include areas near water bodies, hilly regions, rocky terrain, and areas affected by natural disasters such as earthquakes and floods. Figure 1 gives a view of these types of landing areas.
Figure 1. Type of Landing Area: (A) Static. (B) Dynamic. (C) Complex.

2. Type of Landing Areas

Target locations are categorized into three types: Static, Dynamic, and Complex, with Static locations further subdivided into cooperative target-based landings, and Dynamic locations classified into vehicle-based and ship-based landings. Xin et al. [27][1] discuss cooperative target-based autonomous landing, for which solutions are classified into classical machine-learning approaches such as the Hough transform, template matching, edge detection, line mapping, and sliding-window methods. The authors begin with basic clustering algorithms and progress to deep learning algorithms to address static locations.
Cooperative target-based landing pertains to landing sites that are clearly defined and labeled with identifiable patterns, such as the letter “T” or “H”, a circle, a rectangle, or a combination of these shapes, based on specific geometric principles, as described in Xin et al. (2022) [27][1]. The different types of landing area markings are depicted in Figure 2.
Figure 2. Commonly used Landing Signs/Markings.

2.1. Static Landing Area

The localization of different types of markings relies on a range of techniques, from image processing to advanced machine learning. Localization of the “T”-marked helipad achieves maximum precision at specific poses using Canny edge detection, the Hough transform, Hu invariant moments, affine moments, and adaptive threshold selection [28,29][2][3]. Localization of the “H” marking achieves a success rate of 97.3% using image segmentation, depth-first search, and adaptive threshold selection in [30][4], while maximum precision at pose 0.56 is achieved using image extraction and Zernike moments in [31][5].
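To make the marking-localization step concrete, the following is a minimal OpenCV sketch, not the cited authors' pipelines: it thresholds the frame adaptively, extracts contours, and compares each contour against a template of the marking using Hu-moment shape matching. The thresholds, the helper name find_marking, and the assumption that a clean template image of the marking is available are all illustrative.

```python
# Requires OpenCV >= 4 (findContours returns two values).
import cv2
import numpy as np

def find_marking(frame_gray: np.ndarray, template_gray: np.ndarray,
                 max_dissimilarity: float = 0.15):
    """Return the pixel centroid of the contour that best matches the template, or None."""
    # Template contour: assume the template image shows only the marking.
    _, tmpl_bin = cv2.threshold(template_gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    tmpl_contours, _ = cv2.findContours(tmpl_bin, cv2.RETR_EXTERNAL,
                                        cv2.CHAIN_APPROX_SIMPLE)
    tmpl = max(tmpl_contours, key=cv2.contourArea)

    # Adaptive thresholding keeps the segmentation robust to lighting changes;
    # the cited works additionally use Canny edges and Hough lines.
    frame_bin = cv2.adaptiveThreshold(frame_gray, 255,
                                      cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                      cv2.THRESH_BINARY, 31, 5)
    contours, _ = cv2.findContours(frame_bin, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)

    best, best_score = None, max_dissimilarity
    for c in contours:
        if cv2.contourArea(c) < 100:        # ignore tiny speckles
            continue
        # matchShapes compares Hu-moment invariants (rotation/scale invariant).
        score = cv2.matchShapes(tmpl, c, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = c, score

    if best is None:
        return None
    m = cv2.moments(best)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # (u, v) centroid
```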
The “Circle” marking’s localization is addressed in two studies [32,33][6][7]. Through the implementation of solvePnPRansac and Kalman filters, Benini et al. [32][6] achieved a position error of less than 8% of the diameter of the landing area, while maximum precision at pose 0.08 was achieved using an Extended Kalman filter in [33][7]. Detecting combined marking types is discussed in [34,35,36,37][8][9][10][11]. Maximum precision at specific poses using template matching, a Kalman filter, and a profile-checker algorithm was achieved in [34][8]. Another combined detection method achieves maximum precision at pose 0.5 using tag boundary segmentation, image gradients, and adaptive thresholding [35][9]. Ref. [36][10] achieves maximum precision at a position error of less than 10 cm using Canny edge detection, adaptive thresholding, and Levenberg–Marquardt optimization. Lastly, the final combined detection approach obtains maximum precision at a position error of less than 1% using HOG, NCC, and AprilTags [37][11].
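Once a marker has been localized, its camera-relative pose can be recovered with a perspective-n-point solver, as several of the cited works do. The sketch below assumes a calibrated camera and a square fiducial of known side length whose four corner pixels have already been detected; the function name and corner ordering are illustrative. cv2.solvePnPRansac, mentioned above, is the outlier-robust variant used when many feature points are available.

```python
import cv2
import numpy as np

def marker_pose(img_corners: np.ndarray, side_m: float,
                K: np.ndarray, dist: np.ndarray):
    """Pose of a square marker from its 4 detected corners (ordered clockwise from top-left)."""
    s = side_m / 2.0
    # Marker corners in the marker's own frame (Z = 0 plane), same order as img_corners.
    obj_pts = np.array([[-s,  s, 0.0],
                        [ s,  s, 0.0],
                        [ s, -s, 0.0],
                        [-s, -s, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_corners.astype(np.float64), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)     # rotation: marker frame -> camera frame
    return R, tvec                 # tvec is the marker position in metres (camera frame)

# Illustrative call:
# K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
# corners = np.array([[300, 200], [340, 200], [340, 240], [300, 240]], dtype=float)
# pose = marker_pose(corners, side_m=0.5, K=K, dist=np.zeros(5))
```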
Battiato et al. [46][20] designed a system to detect and track a target mounted on a vehicle using computer vision algorithms and hardware. The system, used in the Mohamed Bin Zayed International Robotics Challenge 2017 (MBZIRC), employed precise RTK-DGPS to determine the target location, followed by a circle Hough transform to accurately detect the center of the target. By tracking the target, the UAV adjusted its trajectory to match the movement of the vehicle. The system successfully met the requirements of the task and was ranked as the best solution, considering various constraints. The system has some limitations, such as weather conditions and vehicle speed, which may affect its performance. Overall, this research provides an interesting approach to an important problem in robotics, with potential applications in fields such as aerial monitoring and humanitarian demining operations.
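A circle Hough transform of the kind described above can be prototyped in a few lines with OpenCV; the parameter values below are illustrative and would need tuning for a real landing target.

```python
import cv2
import numpy as np

def detect_target_center(frame_bgr: np.ndarray):
    """Return (x, y, radius) of the most prominent circle, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                     # suppress speckle noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=gray.shape[0] // 4,
                               param1=120,             # Canny high threshold
                               param2=60,              # accumulator vote threshold
                               minRadius=10, maxRadius=0)
    if circles is None:
        return None
    x, y, r = max(circles[0], key=lambda c: c[2])      # keep the largest circle
    return float(x), float(y), float(r)
```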

2.2. Dynamic Landing Area

Two categories of dynamic landing areas exist based on the motion of the platform, namely ship-based and vehicle-based landings. Owing to the complexity of landing on a moving platform, LiDAR sensors are used in conjunction with computer vision techniques. In one approach, the landing process is facilitated using a model predictive feedback linear Kalman filter, resulting in a landing time of 25 s and a position error of less than 10 cm [39][13]. Another algorithm uses nonlinear controllers, state estimation, a convolutional neural network (CNN), and a velocity observer to achieve maximum precision at positions of less than (10, 10) cm [40][14], while a third employs a deep deterministic policy gradient with Gazebo-based reinforcement learning and achieves a landing time of 17.5 s with a position error of less than 6 cm [41][15]. Lastly, the algorithm proposed in [42][16] uses an extended Kalman filter, extended H∞, perspective-n-point (PnP), and visual–inertial data fusion to achieve maximum precision at positions of less than 13 cm.
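Several of these approaches rely on a filter that tracks the moving platform and predicts where it will be at touchdown. The sketch below is a generic constant-velocity linear Kalman filter over planar position measurements from the vision front end; the motion model, noise values, and class name are illustrative assumptions, not the cited controllers.

```python
import numpy as np

class PlatformKF:
    """Tracks a moving landing platform in the ground plane (state: x, y, vx, vy)."""
    def __init__(self, dt: float, meas_std: float = 0.05, accel_std: float = 0.5):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        q = accel_std ** 2                         # simplified diagonal process noise
        self.Q = q * np.diag([dt**4 / 4, dt**4 / 4, dt**2, dt**2])
        self.R = (meas_std ** 2) * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z_xy):
        # Predict with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the latest vision measurement of the platform centre.
        y = np.asarray(z_xy, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def predict_ahead(self, horizon_s: float):
        """Expected platform position after `horizon_s` seconds (for aim-point selection)."""
        x, y, vx, vy = self.x
        return x + vx * horizon_s, y + vy * horizon_s
```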
The algorithm presented in [43][17] utilizes extended Kalman filtering and visual–inertial data fusion to achieve a landing time of 40 s for ship-based landing area detection. Meanwhile, another algorithm employs a Kalman filter, an artificial neural network (ANN), feature matching, and Hu moments to achieve a position error of (4.33, 1.42) cm [44][18]. The approach outlined in [45][19] utilizes the EPnP algorithm and a Kalman filter, but no experimental results are presented. Forster et al. [38][12] introduced a system that enables real-time 3D terrain reconstruction and detection of landing spots for micro aerial vehicles. The system is designed to run on an onboard smartphone processor and uses only a single down-looking camera and an inertial measurement unit. A probabilistic two-dimensional elevation map, centered around the robot, is generated and continuously updated at a rate of 1 Hz using probabilistic depth maps computed from multiple monocular views. This mapping framework is shown to be useful for the autonomous navigation of micro aerial vehicles, as demonstrated through a successful fully autonomous landing. The proposed system is efficient in terms of resources, computation, and the accumulation of measurements from different observations, and it is less susceptible to drifting pose estimates.
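The robot-centred probabilistic elevation map described above can be approximated by keeping a height mean and variance per grid cell and fusing each new depth measurement with a one-dimensional Kalman (product-of-Gaussians) update. The sketch below assumes depth estimates with per-point variances are already available; the grid size, resolution, and flatness test are illustrative, not the published system.

```python
import numpy as np

class ElevationMap:
    """Robot-centred grid; each cell keeps a height mean and variance."""
    def __init__(self, size: int = 100, resolution_m: float = 0.1):
        self.res = resolution_m
        self.mean = np.zeros((size, size))          # cell height estimate [m]
        self.var = np.full((size, size), 1e3)       # large variance = unknown

    def update(self, xs, ys, zs, z_vars):
        """Fuse 3D points (x, y relative to the robot) with height variances."""
        half = self.mean.shape[0] // 2
        i = np.clip((np.asarray(xs) / self.res).astype(int) + half, 0, self.mean.shape[0] - 1)
        j = np.clip((np.asarray(ys) / self.res).astype(int) + half, 0, self.mean.shape[1] - 1)
        for a, b, z, v in zip(i, j, np.asarray(zs), np.asarray(z_vars)):
            # Standard product-of-Gaussians (1D Kalman) fusion for one cell.
            k = self.var[a, b] / (self.var[a, b] + v)
            self.mean[a, b] += k * (z - self.mean[a, b])
            self.var[a, b] *= (1.0 - k)

    def flat_enough(self, a: int, b: int, window: int = 5, max_std: float = 0.05) -> bool:
        """Landing candidate test: small height spread in the local neighbourhood."""
        a0, b0 = max(a - window, 0), max(b - window, 0)
        patch = self.mean[a0:a + window + 1, b0:b + window + 1]
        return patch.size > 0 and float(patch.std()) < max_std
```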

2.3. Complex Landing Area

Identifying a landing site in a complex landing area is a challenging task for autonomous landing systems. The terrain in these areas can contain various obstacles and hazards, and it is not always possible to find a suitable landing area. Researchers have explored different methods for identifying safe landing areas in complex terrain, but research in this area is limited. Fitzgerald and Mejias' research on a UAV critical landing site selection system is one such effort [43,44][17][18]. To locate a good landing spot, a monocular camera and a digital elevation model (DEM) are used. The system has multiple stages, including primary landing location selection, candidate landing area identification, DEM flatness analysis, and decision-making. The method's limitations stem from the fact that it uses only the Canny operator to extract edges and that the DEM computation in the flatness estimation stage lacks robustness.
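DEM flatness analysis of the kind used in such pipelines can be expressed as a slope and roughness test over the elevation grid. The sketch below uses finite-difference slope and a sliding-window height standard deviation; the thresholds and function name are illustrative assumptions, not the cited system's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def landing_mask(dem: np.ndarray, cell_m: float,
                 max_slope_deg: float = 5.0, max_rough_m: float = 0.10,
                 window: int = 7) -> np.ndarray:
    """Boolean mask of DEM cells that are flat and smooth enough to land on."""
    gy, gx = np.gradient(dem, cell_m)                  # dz/dy, dz/dx
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Local roughness: std of heights in a sliding window, via E[z^2] - E[z]^2.
    mean = uniform_filter(dem, size=window)
    mean_sq = uniform_filter(dem ** 2, size=window)
    rough = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return (slope_deg < max_slope_deg) & (rough < max_rough_m)
```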
Research on unstructured emergency autonomous UAV landing using SLAM was conducted in 2018, in which the use of DEM and LiDAR was evaluated and their advantages and limitations discussed [47][21]. The study used monocular-vision SLAM and a point cloud map to localize the UAV and partitioned the grid by height to locate a safe landing zone. After denoising and filtering the map using a mid-pass filter and 3D attributes, the landing process lasted 52 s, starting from a height of 20 m. The experimental validation was conducted in challenging environments, demonstrating the system's adaptability. To fulfill the demands of an autonomous landing, the sparse point cloud was partitioned based on different elevations. Furthermore, Lin et al. [48][22] considered landing scenarios in low-illumination settings.
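Partitioning a sparse SLAM point cloud by elevation to find a safe zone can be sketched as binning the map points into a 2D grid and ranking cells by their height spread; the cell size, minimum point count, and function name below are illustrative assumptions, not the cited pipeline.

```python
import numpy as np

def safest_cell(points: np.ndarray, cell_m: float = 1.0, min_pts: int = 10):
    """points: (N, 3) array of SLAM map points. Returns (x, y) of the flattest cell."""
    ij = np.floor(points[:, :2] / cell_m).astype(int)
    best_key, best_spread = None, np.inf
    for key in {tuple(k) for k in ij}:
        mask = (ij[:, 0] == key[0]) & (ij[:, 1] == key[1])
        if mask.sum() < min_pts:                      # too sparse to trust
            continue
        spread = points[mask, 2].std()                # height variation in the cell
        if spread < best_spread:
            best_key, best_spread = key, spread
    if best_key is None:
        return None
    # Centre of the winning cell in map coordinates.
    return (best_key[0] + 0.5) * cell_m, (best_key[1] + 0.5) * cell_m
```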
To account for the constantly changing terrain in complex landing areas, it is advisable to identify multiple suitable landing sites to ensure the safety of the UAV. Once landing coordinates have been identified, the optimal landing spot and approach path should be determined. Cui et al. [49][23] proposed a method that evaluates landing areas based on energy consumption, terrain safety, and the craft's performance. A clearer picture of landing point selection is provided in Figure 3, which shows the identification of two landing targets, Target A and Target B, along with the corresponding trajectories to reach them. It is advisable to determine the shortest trajectory to the alternative target in the event of a last-minute change.
Figure 3. Landing Spot and Trajectory Selection.
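A landing-spot selection step of this kind can be sketched as a weighted score over candidate sites that combines terrain safety, the energy needed to reach the site, and approach feasibility, keeping the runner-up as a fallback target. The weights, the candidate dictionary fields, and the function name below are illustrative; this is not Cui et al.'s safety index.

```python
import numpy as np

def rank_landing_spots(candidates, uav_xy, w_safety=0.5, w_energy=0.3, w_approach=0.2):
    """candidates: list of dicts with keys 'xy', 'safety' (0..1), 'approach' (0..1)."""
    dists = np.array([np.hypot(*(np.asarray(c["xy"], float) - np.asarray(uav_xy, float)))
                      for c in candidates])
    energy = 1.0 - dists / (dists.max() + 1e-9)      # closer => cheaper to reach
    scores = [w_safety * c["safety"] + w_energy * e + w_approach * c["approach"]
              for c, e in zip(candidates, energy)]
    order = np.argsort(scores)[::-1]                 # best first; the runner-up is the fallback
    return [{**candidates[i], "score": float(scores[i])} for i in order]

# Illustrative call with two candidate sites (Target A and Target B):
# rank_landing_spots([{"xy": (5, 2), "safety": 0.9, "approach": 0.8},
#                     {"xy": (1, 8), "safety": 0.7, "approach": 0.9}], uav_xy=(0, 0))
```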

3. Landing on Extra-Terrestrial Bodies

For the aforementioned problem, researchers can also consider adopting the landing approaches used by spacecraft on extra-terrestrial bodies [50][24]. Precision Landing and Hazard Avoidance (PL&HA) [51,52,53,54,55,56][25][26][27][28][29][30] and the Safe and Precise Landing Integrated Capabilities Evolution (SPLICE) are two projects that NASA has been working on [57][31]. Moreover, NASA has worked on a project called ALHAT, which stands for Autonomous Landing and Hazard Avoidance Technology [58,59,60,61,62,63,64,65,66,67,68][32][33][34][35][36][37][38][39][40][41][42]. However, a significant limitation of ALHAT is that it relies solely on static information such as slope and roughness to select a landing area, making the maneuvering of the craft more challenging. Ploen et al. [69][43] proposed an algorithm that uses Bayesian networks to compute approximate landing footprints, making safe site selection feasible. Researchers are currently working on similar aspects; in this context, classical object classification methods can be employed to simplify the identification of objects in space. However, prior to delving into object classification, it is essential to determine the angle and pose of the spacecraft for greater accuracy.

3.1. Pose Estimation Techniques

Determining the spacecraft's pose requires an estimation based on the camera's angle with respect to the ground view. Deep learning models are mainly used for pose estimation. Previously available pose estimation techniques were surveyed by Uma Rani et al. [70][44] and are summarized as follows.
In their respective studies, Sharma et al. (2018) [71][45], Harvard et al. (2020) [72][46], and Proença et al. (2020) [73][47] employed different techniques to analyze the PRISMA dataset. Sharma et al. [71][45] utilized the Sobel operator, the Hough transform, and a WGW approach to detect features of the target regardless of their size. Harvard et al. [72][46] used landmark locations as keypoint detectors to address the challenge of high relative dynamics between the object and the camera, and employed the 2010 ImageNet ILSVRC dataset in their approach. A deep learning framework that uses soft classification for orientation and outperforms straightforward regression is presented by Proença et al. [73][47]; the dataset used in that study was URSO, generated with an Unreal Engine 4 simulator. According to Sharma et al. (2018) [74][48], their approach demonstrated higher feature-identification accuracy than traditional methods, even when dealing with high levels of Gaussian white noise in the images. The SA-LMPE method used by Chen et al. [75][49] improves pose refinement and removes erroneous predictions, while their HRNet model accurately predicted two-dimensional landmarks on the SPEED dataset. Zeng et al. (2017) [76][50] utilized deep learning methods to identify and efficiently represent significant features from a simulated space-target dataset generated by the Systems Tool Kit (STK). Wu et al. (2019) [77][51] used a T-SCNN model trained on images from a public database to successfully identify and detect space targets in deep-space photos. Finally, Tao et al. (2018) [78][52] used a DCNN model trained on the Apollo spacecraft simulation dataset from TERRIER, which showed resistance to variations in brightness, rotation, and reflections, as well as efficacy in learning and detecting high-level characteristics.
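The soft-classification idea for orientation can be illustrated in one dimension: the network outputs a probability distribution over discretized angle bins, and the continuous estimate is the probability-weighted circular mean of the bin centres. This is a simplified 1D yaw version for illustration only, not Proença et al.'s full 3D formulation.

```python
import numpy as np

def yaw_from_soft_bins(probs: np.ndarray) -> float:
    """probs: softmax output over K yaw bins covering [0, 2*pi)."""
    k = probs.shape[0]
    centers = (np.arange(k) + 0.5) * (2.0 * np.pi / k)   # bin centre angles [rad]
    # Circular expectation avoids the 0 / 2*pi wrap-around problem.
    s = np.sum(probs * np.sin(centers))
    c = np.sum(probs * np.cos(centers))
    return float(np.arctan2(s, c) % (2.0 * np.pi))

# Illustrative call: a distribution peaked around bin 3 of 16 yields a yaw near
# 3.5 * (2*pi / 16).
# probs = np.exp(-0.5 * ((np.arange(16) - 3) / 1.0) ** 2); probs /= probs.sum()
# yaw = yaw_from_soft_bins(probs)
```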

3.2. Object Classification

The objects of interest in this scenario are craters and boulders found on a particular extra-terrestrial body. A crater is a concave structure that typically forms as a result of a meteoroid, asteroid, or comet impacting the surface of a planet or moon. Craters vary greatly in size, from very small to extremely large. In contrast, a boulder is a large rock, usually with a diameter exceeding 25 cm. Boulders emerge on the surfaces of planets, moons, and asteroids due to several factors, including impact events, volcanic activity, and erosion.

3.2.1. Deep-Learning Approach

Once the spacecraft’s pose has been determined, the objective is to land the spacecraft safely by avoiding craters and boulders. Several algorithms have been developed to achieve this. Li et al. (2021) [8][53] suggest a novel approach to detect and classify planetary craters using deep learning. The approach involves three main steps: extracting candidate regions, detecting edges, and recognizing craters. The first stage extracts candidate regions that are likely to contain craters using a structured random forest algorithm. In the second stage, edge detection, the edges of craters are extracted from the candidate regions through the application of morphological techniques. Lastly, in the recognition stage, the extracted features are classified as craters or non-craters using a deep learning model based on the AlexNet architecture. Wang et al. [9][54] propose a new architecture called “ERU-Net” (Effective Residual U-Net) for lunar crater recognition. ERU-Net is an improvement over the standard U-Net architecture, which is commonly used in image segmentation tasks. The ERU-Net architecture employs residual connections between its encoder and decoder blocks, along with an attention mechanism that helps the network prioritize significant features during training.
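The residual-connection idea behind ERU-Net can be illustrated with a small PyTorch building block: two convolutions whose output is added to a projected copy of the input. This is a generic residual U-Net stage for illustration, not the published ERU-Net architecture or its attention mechanism.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, usable as one U-Net stage."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Illustrative encoder built from residual stages:
# enc = nn.Sequential(ResidualConvBlock(1, 32), nn.MaxPool2d(2), ResidualConvBlock(32, 64))
# out = enc(torch.randn(1, 1, 256, 256))   # -> shape (1, 64, 128, 128)
```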

3.2.2. Supervised Learning Approach

Supervised detection approaches utilize machine learning techniques and a labeled training dataset in the relevant domain to create classifiers, such as neural networks (Li & Hsu, 2020) [79][55], support vector machines (Kang et al., 2018) [22][56], and the AdaBoost method (Xin et al., 2017) [23][57]. Kang et al. (2018) [22][56] presented a method for automatically detecting small-scale impact craters from charge-coupled device (CCD) images using a coarse-to-fine resolution approach. The proposed method involves two stages. First, large-scale craters are extracted as samples from Chang'E-1 images with a spatial resolution of 120 m; then, histogram of oriented gradients (HOG) features and a support vector machine (SVM) classifier are used to establish criteria for distinguishing craters from non-craters. Finally, the established criteria are used to extract small-scale craters from higher-resolution Chang'E-2 CCD images with spatial resolutions of 1.4 m, 7 m, and 50 m. Apart from that, Xin et al. [23][57] propose an automated approach to identify fresh impact sites on the Martian surface using images captured by the High-Resolution Imaging Science Experiment (HiRISE) camera aboard the Mars Reconnaissance Orbiter. The proposed method comprises three primary stages: pre-processing of the HiRISE images, detection of potential impact sites, and validation of the detected sites using the AdaBoost method. The potential impact sites are detected using a machine-learning-based approach that uses multiple features, such as intensity, texture, and shape information. The detected sites are validated by comparing them with a database of known impact sites on Mars.
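The HOG-plus-SVM criterion at the heart of the coarse-to-fine method can be prototyped as follows; the HOG parameters, SVM settings, and function names are illustrative assumptions rather than the Chang'E pipeline's actual configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    """patches: iterable of 2D grayscale arrays, all resized to the same shape."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_crater_svm(crater_patches, background_patches):
    """Fit an RBF SVM that separates crater patches from background patches."""
    X = np.vstack([hog_features(crater_patches), hog_features(background_patches)])
    y = np.hstack([np.ones(len(crater_patches)), np.zeros(len(background_patches))])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, y)
    return clf

# Detection on higher-resolution imagery would then slide a window over the image,
# resize each window to the training patch size, and call
# clf.predict(hog_features([window])).
```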
An automated approach for detecting small craters (diameters less than 1 km) on planetary surfaces using high-resolution images is presented by Urbach and Stepinski [24][58]. The technique has three primary stages: pre-processing, candidate selection, and crater recognition, with the pre-processing stage transforming the input image to enhance features and minimize noise. In the candidate selection step, a Gaussian filter and adaptive thresholding are used to detect potential crater candidates. In the crater recognition step, a shape-based method is employed to differentiate between craters and non-craters. The technique is shown to work well for finding small craters on the Moon and Mars.
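The candidate-selection stage described above translates naturally into a few OpenCV calls: Gaussian smoothing, adaptive thresholding, and a circularity test on the resulting contours. The thresholds and function name below are illustrative, not the cited method's parameters.

```python
import cv2
import numpy as np

def crater_candidates(image_gray: np.ndarray, min_area: int = 30,
                      min_circularity: float = 0.6):
    """Return (x, y, radius) of blob-like, roughly circular candidate regions."""
    blurred = cv2.GaussianBlur(image_gray, (5, 5), 0)
    binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 3)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area < min_area or perim == 0:
            continue
        circularity = 4.0 * np.pi * area / (perim ** 2)   # 1.0 for a perfect circle
        if circularity >= min_circularity:
            (x, y), r = cv2.minEnclosingCircle(c)
            candidates.append((x, y, r))
    return candidates
```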

3.2.3. Unsupervised Learning Approach

The unsupervised detection approach utilizes image processing and target identification theory to identify craters by estimating their boundaries based on the circular or elliptical properties of the image [80][59]. The Hough transform and its improved variants (Emami et al., 2019) [17][60], the genetic algorithm (Hong et al., 2012) [18][61], the radial consistency approach (Earl et al., 2005) [19][62], and the template matching method (Cadogan, 2020; Lee et al., 2020) [20][63] are among the common techniques used. The morphological image-processing-based approach for identifying craters involves three primary steps: first, the morphological method is used to identify candidate regions, followed by the removal of noise to pinpoint potential crater areas; second, fast-Fourier-transform-based template matching is used to establish the association between candidate regions and templates; and finally, a probability analysis is used to identify the crater areas [21][64]. The advantage of the unsupervised approach is that it can train an accurate classifier without requiring a sizable number of labeled samples, which makes it suitable when an autonomous navigation system's processing power is constrained. Nevertheless, it struggles to recognize challenging terrain.
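The FFT-based template-matching step can be sketched with scikit-image, whose match_template function computes normalized cross-correlation using FFT-based convolution; the score threshold and peak-spacing choices below are illustrative, and the final probability analysis of the cited method is not reproduced.

```python
import numpy as np
from skimage.feature import match_template, peak_local_max

def match_crater_template(image: np.ndarray, template: np.ndarray,
                          min_score: float = 0.6):
    """Return (row, col, score) for each correlation peak above `min_score`."""
    # Normalised cross-correlation between the template and every image location.
    ncc = match_template(image, template, pad_input=True)   # values in [-1, 1]
    peaks = peak_local_max(ncc, min_distance=max(template.shape) // 2,
                           threshold_abs=min_score)
    return [(int(r), int(c), float(ncc[r, c])) for r, c in peaks]
```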

3.2.4. Combined Learning Approach

To detect craters, a combined detection methodology employs both unsupervised and supervised detection methods. For example, one combined technique uses the KLT detector to extract probable crater regions [25][65]. In this approach, a supervised detection methodology takes image blocks as inputs, and the detection accuracy is significantly influenced by the KLT detector's parameters. Li and Hsu's (2020) [26][66] study employed template matching and neural networks for crater identification. However, this approach has the drawback of being unable to significantly decrease the number of candidate craters in rocky areas, leading to weak crater recognition in mountainous regions. Li et al. [8][53] propose a three-stage approach for combined crater detection and recognition. In the first stage, a structured random forest algorithm is utilized for extracting the crater edges. The second stage involves candidate area determination through edge detection techniques based on morphological methods. The third stage involves the recognition of candidate regions using the AlexNet deep learning model. Experimental results demonstrate that the recommended crater edge detection technique outperforms other edge detection methods, and the proposed approach shows relatively high detection accuracy and an accurate detection rate compared to other crater detection approaches.
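The KLT-based candidate-extraction idea can be illustrated with OpenCV's Shi-Tomasi "good features to track" detector, the corner detector underlying KLT: each detected corner proposes an image block that a supervised classifier would then accept or reject. The block size, detector parameters, and function name below are illustrative assumptions.

```python
import cv2
import numpy as np

def klt_candidate_blocks(image_gray: np.ndarray, block: int = 32,
                         max_corners: int = 200, quality: float = 0.01):
    """Propose fixed-size image blocks around Shi-Tomasi corners for later classification."""
    corners = cv2.goodFeaturesToTrack(image_gray, maxCorners=max_corners,
                                      qualityLevel=quality, minDistance=block // 2)
    blocks = []
    if corners is None:
        return blocks
    h, w = image_gray.shape
    for x, y in corners.reshape(-1, 2):
        x0, y0 = int(x) - block // 2, int(y) - block // 2
        if x0 >= 0 and y0 >= 0 and x0 + block <= w and y0 + block <= h:
            blocks.append(image_gray[y0:y0 + block, x0:x0 + block])
    return blocks   # image blocks to feed into the supervised crater classifier
```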

References

  1. Xin, L.; Tang, Z.; Gai, W.; Liu, H. Vision-Based Autonomous Landing for the UAV: A Review. Aerospace 2022, 9, 634.
  2. Tsai, A.C.; Gibbens, P.W.; Stone, R.H. Terminal phase vision-based target recognition and 3D pose estimation for a tail-sitter, vertical takeoff and landing unmanned air vehicle. In Proceedings of the Advances in Image and Video Technology: First Pacific Rim Symposium, PSIVT 2006, Hsinchu, Taiwan, 10–13 December 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 672–681.
  3. Xu, G.; Zhang, Y.; Ji, S.; Cheng, Y.; Tian, Y. Research on computer vision-based for UAV autonomous landing on a ship. Pattern Recognit. Lett. 2009, 30, 600–605.
  4. Fucen, Z.; Haiqing, S.; Hong, W. The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle. In Proceedings of the 2009 International Conference on Information and Automation, Shenyang, China, 5–7 August 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 117–122.
  5. Fan, Y.; Haiqing, S.; Hong, W. A vision-based algorithm for landing unmanned aerial vehicles. In Proceedings of the 2008 International Conference on Computer Science and Software Engineering, Wuhan, China, 12–14 December 2008; IEEE: Piscataway, NJ, USA, 2008; Volume 1, pp. 993–996.
  6. Benini, A.; Rutherford, M.J.; Valavanis, K.P. Real-time, GPU-based pose estimation of a UAV for autonomous takeoff and landing. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3463–3470.
  7. Yuan, H.; Xiao, C.; Xiu, S.; Zhan, W.; Ye, Z.; Zhang, F.; Zhou, C.; Wen, Y.; Li, Q. A hierarchical vision-based UAV localization for an open landing. Electronics 2018, 7, 68.
  8. Nguyen, P.H.; Kim, K.W.; Lee, Y.W.; Park, K.R. Remote marker-based tracking for UAV landing using visible-light camera sensor. Sensors 2017, 17, 1987.
  9. Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 4193–4198.
  10. Xiu, S.; Wen, Y.; Xiao, C.; Yuan, H.; Zhan, W. Design and Simulation on Autonomous Landing of a Quad Tilt Rotor. J. Syst. Simul. 2020, 32, 1676.
  11. Li, Z.; Chen, Y.; Lu, H.; Wu, H.; Cheng, L. UAV autonomous landing technology based on AprilTags vision positioning algorithm. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 8148–8153.
  12. Forster, C.; Faessler, M.; Fontana, F.; Werlberger, M.; Scaramuzza, D. Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 111–118.
  13. Baca, T.; Stepan, P.; Spurny, V.; Hert, D.; Penicka, R.; Saska, M.; Thomas, J.; Loianno, G.; Kumar, V. Autonomous landing on a moving vehicle with an unmanned aerial vehicle. J. Field Robot. 2019, 36, 874–891.
  14. Yang, T.; Ren, Q.; Zhang, F.; Xie, B.; Ren, H.; Li, J.; Zhang, Y. Hybrid camera array-based uav auto-landing on moving ugv in gps-denied environment. Remote Sens. 2018, 10, 1829.
  15. Rodriguez-Ramos, A.; Sampedro, C.; Bavle, H.; De La Puente, P.; Campoy, P. A deep reinforcement learning strategy for UAV autonomous landing on a moving platform. J. Intell. Robot. Syst. 2019, 93, 351–366.
  16. Araar, O.; Aouf, N.; Vitanov, I. Vision based autonomous landing of multirotor UAV on moving platform. J. Intell. Robot. Syst. 2017, 85, 369–384.
  17. Fitzgerald, D.; Walker, R.; Campbell, D. A vision based forced landing site selection system for an autonomous UAV. In Proceedings of the 2005 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, Seoul, Republic of Korea, 5–9 June 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 397–402.
  18. Mejias, L.; Fitzgerald, D.L.; Eng, P.C.; Xi, L. Forced landing technologies for unmanned aerial vehicles: Towards safer operations. Aer. Veh. 2009, 1, 415–442.
  19. Morais, F.; Ramalho, T.; Sinogas, P.; Marques, M.M.; Santos, N.P.; Lobo, V. Trajectory and Guidance Mode for autonomously landing an UAV on a naval platform using a vision approach. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–7.
  20. Battiato, S.; Cantelli, L.; D’Urso, F.; Farinella, G.M.; Guarnera, L.; Guastella, D.; Melita, C.D.; Muscato, G.; Ortis, A.; Ragusa, F.; et al. A system for autonomous landing of a UAV on a moving vehicle. In Proceedings of the Image Analysis and Processing-ICIAP 2017: 19th International Conference, Catania, Italy, 11–15 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 129–139.
  21. Yang, B.; Ali, F.; Zhou, B.; Li, S.; Yu, Y.; Yang, T.; Liu, X.; Liang, Z.; Zhang, K. A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras. Comput. Electr. Eng. 2022, 99, 107804.
  22. Lin, S.; Jin, L.; Chen, Z. Real-time monocular vision system for UAV autonomous landing in outdoor low-illumination environments. Sensors 2021, 21, 6226.
  23. Cui, P.; Ge, D.; Gao, A. Optimal landing site selection based on safety index during planetary descent. Acta Astronaut. 2017, 132, 326–336.
  24. Wilkinson, F. The History of Space Exploration; National Geographic Society: Washington, DC, USA, 2022.
  25. Huertas, A.; Cheng, Y.; Matthies, L.H. Real-time hazard detection for landers. In Proceedings of the NASA Science Technology Conference, Galveston, TX, USA, 14–15 November 2007; Citeseer: Princeton, NJ, USA, 2007.
  26. Serrano, N. A bayesian framework for landing site selection during autonomous spacecraft descent. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 5112–5117.
  27. Johnson, A.E.; Klumpp, A.R.; Collier, J.B.; Wolf, A.A. Lidar-based hazard avoidance for safe landing on Mars. J. Guid. Control Dyn. 2002, 25, 1091–1099.
  28. Serrano, N.; Seraji, H. Landing site selection using fuzzy rule-based reasoning. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 4899–4904.
  29. Cheng, Y.; Johnson, A.E.; Matthies, L.H.; Wolf, A.A. Passive Imaging Based Hazard Avoidance for Spacecraft Safe Landing; NASA: Washington, DC, USA, 2001.
  30. Matthies, L.; Huertas, A.; Cheng, Y.; Johnson, A. Stereo vision and shadow analysis for landing hazard detection. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Woburn, MA, USA, 10–11 November 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 2735–2742.
  31. Dunbar, B. July 20, 1969: One Giant Leap for Mankind; NASA: Washington, DC, USA, 2015.
  32. Epp, C.; Robertson, E.; Carson, J.M. Real-time hazard detection and avoidance demonstration for a planetary lander. In Proceedings of the AIAA SPACE 2014 conference and exposition, San Diego, CA, USA, 4–7 August 2014; p. 4312.
  33. Brady, T.; Schwartz, J. ALHAT system architecture and operational concept. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–13.
  34. Paschall, S.; Brady, T. Demonstration of a safe & precise planetary landing system on-board a terrestrial rocket. In Proceedings of the 2012 IEEE Aerospace Conference, Beijing, China, 23–25 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–8.
  35. Johnson, A.E.; Huertas, A.; Werner, R.A.; Montgomery, J.F. Analysis of on-board hazard detection and avoidance for safe lunar landing. In Proceedings of the 2008 IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–9.
  36. Brady, T.; Robertson, E.; Epp, C.; Paschall, S.; Zimpfer, D. Hazard detection methods for lunar landing. In Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1–8.
  37. Rutishauser, D.; Epp, C.; Robertson, E. Free-flight terrestrial rocket lander demonstration for nasa’s autonomous landing and hazard avoidance technology (alhat) system. In Proceedings of the AIAA Space 2012 Conference & Exposition, Pasadena, CA, USA, 11–13 September 2012; p. 5239.
  38. Cohanim, B.E.; Collins, B.K. Landing point designation algorithm for lunar landing. J. Spacecr. Rocket. 2009, 46, 858–864.
  39. Ivanov, T.; Huertas, A.; Carson, J.M. Probabilistic hazard detection for autonomous safe landing. In Proceedings of the AIAA Guidance, Navigation, and Control (GNC) Conference, Boston, MA, USA, 19–22 August 2013; p. 5019.
  40. Huertas, A.; Johnson, A.E.; Werner, R.A.; Maddock, R.A. Performance evaluation of hazard detection and avoidance algorithms for safe Lunar landings. In Proceedings of the 2010 IEEE Aerospace Conference, Big Sky, MT, USA, 6–13 March 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–20.
  41. Epp, C.D.; Smith, T.B. Autonomous precision landing and hazard detection and avoidance technology (ALHAT). In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–7.
  42. Striepe, S.A.; Epp, C.D.; Robertson, E.A. Autonomous precision landing and hazard avoidance technology (ALHAT) project status as of May 2010. In Proceedings of the International Planetary Probe Workshop 2010 (IPPW-7), Barcelona, Spain, 15 June 2010; No. NF1676L-10317.
  43. Ploen, S.R.; Seraji, H.; Kinney, C.E. Determination of spacecraft landing footprint for safe planetary landing. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 3–16.
  44. Uma Rani, M.; Thangavel, S.K.; Lagisetty, R.K. Satellite Pose Estimation Using Modified Residual Networks. In Disruptive Technologies for Big Data and Cloud Applications: Proceedings of ICBDCC 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 869–882.
  45. Sharma, S.; Ventura, J.; D’Amico, S. Robust model-based monocular pose initialization for noncooperative spacecraft rendezvous. J. Spacecr. Rocket. 2018, 55, 1414–1429.
  46. Harvard, A.; Capuano, V.; Shao, E.Y.; Chung, S.J. Pose Estimation of Uncooperative Spacecraft from Monocular Images Using Neural Network Based Keypoints; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2020.
  47. Proença, P.F.; Gao, Y. Deep learning for spacecraft pose estimation from photorealistic rendering. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 6007–6013.
  48. Sharma, S.; Beierle, C.; D’Amico, S. Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks. In Proceedings of the 2018 IEEE Aerospace Conference, Chengdu, China, 7–11 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–12.
  49. Chen, B.; Cao, J.; Parra, A.; Chin, T.J. Satellite pose estimation with deep landmark regression and nonlinear pose refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Long Beach, CA, USA, 16–17 June 2019.
  50. Zeng, H.; Xia, Y. Space target recognition based on deep learning. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5.
  51. Wu, T.; Yang, X.; Song, B.; Wang, N.; Gao, X.; Kuang, L.; Nan, X.; Chen, Y.; Yang, D. T-SCNN: A two-stage convolutional neural network for space target recognition. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1334–1337.
  52. Tao, J.; Cao, Y.; Ding, M.; Zhang, Z. Visible and infrared image fusion for space debris recognition with convolutional sparse representation. In Proceedings of the 2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC), Xiamen, China, 10–12 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5.
  53. Li, H.; Jiang, B.; Li, Y.; Cao, L. A combined method of crater detection and recognition based on deep learning. Syst. Sci. Control Eng. 2021, 9, 132–140.
  54. Wang, S.; Fan, Z.; Li, Z.; Zhang, H.; Wei, C. An effective lunar crater recognition algorithm based on convolutional neural network. Remote Sens. 2020, 12, 2694.
  55. Hu, Y.; Xiao, J.; Liu, L.; Zhang, L.; Wang, Y. Detection of Small Impact Craters via Semantic Segmenting Lunar Point Clouds Using Deep Learning Network. Remote Sens. 2021, 13, 1826.
  56. Kang, Z.; Wang, X.; Hu, T.; Yang, J. Coarse-to-fine extraction of small-scale lunar impact craters from the CCD images of the Chang’E lunar orbiters. IEEE Trans. Geosci. Remote Sens. 2018, 57, 181–193.
  57. Xin, X.; Di, K.; Wang, Y.; Wan, W.; Yue, Z. Automated detection of new impact sites on Martian surface from HiRISE images. Adv. Space Res. 2017, 60, 1557–1569.
  58. Urbach, E.R.; Stepinski, T.F. Automatic detection of sub-km craters in high resolution planetary images. Planet. Space Sci. 2009, 57, 880–887.
  59. Lee, T.; Yoon, Y.; Chun, C.; Ryu, S. Cnn-based road-surface crack detection model that responds to brightness changes. Electronics 2021, 10, 1402.
  60. Kim, J.; Caire, G.; Molisch, A.F. Quality-aware streaming and scheduling for device-to-device video delivery. IEEE/ACM Trans. Netw. 2015, 24, 2319–2331.
  61. Hong, X.; Chen, S.; Harris, C.J. Using zero-norm constraint for sparse probability density function estimation. Int. J. Syst. Sci. 2012, 43, 2107–2113.
  62. Earl, J.; Chicarro, A.; Koeberl, C.; Marchetti, P.G.; Milnes, M. Automatic recognition of crater-like structures in terrestrial and planetary images. In Proceedings of the 36th Annual Lunar and Planetary Science Conference, League City, TX, USA, 14–18 March 2005; p. 1319.
  63. Cadogan, P.H. Automated precision counting of very small craters at lunar landing sites. Icarus 2020, 348, 113822.
  64. Pedrosa, M.M.; de Azevedo, S.C.; da Silva, E.A.; Dias, M.A. Improved automatic impact crater detection on Mars based on morphological image processing and template matching. Geomat. Nat. Hazards Risk 2017, 8, 1306–1319.
  65. Yan, Y.; Qi, D.; Li, C. Vision-based crater and rock detection using a cascade decision forest. IET Comput. Vis. 2019, 13, 549–555.
  66. Lee, H.; Choi, H.L.; Jung, D.; Choi, S. Deep neural network-based landmark selection method for optical navigation on lunar highlands. IEEE Access 2020, 8, 99010–99023.