Geometric Primitive-Guided UAV Path Planning

Image-based refined 3D reconstruction relies on high-resolution, multi-angle images of scenes. Multi-rotor UAVs equipped with gimbals greatly facilitate such image acquisition.

Keywords: 3D reconstruction; UAV path planning; geometric primitives

1. Introduction

There is an increasing demand for 3D reconstruction of large scenes in areas such as urban planning, autonomous driving, virtual reality and gaming. In terms of data sources, current 3D reconstruction methods can be divided into two types: laser scanning-based and image-based [1][2]. Generally, the laser scanning-based approach is more costly, and the reconstructed model lacks texture. Image-based 3D reconstruction methods, on the other hand, are less expensive and remain effective even with only a monocular camera [3]. Many fine-grained reconstruction efforts therefore rely on image-based methods. These works mainly build on SfM (Structure from Motion) and MVS (Multi-View Stereo) theory [4][5], observing the target from different perspectives and capturing 3D information to obtain better reconstruction results. The core idea of image-based 3D reconstruction is the efficient use of the geometric information contained in multi-view images [6][7][8].
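As a concrete illustration of how multi-view geometric information is exploited, the following is a minimal sketch of linear (DLT) triangulation, the step in SfM/MVS pipelines that recovers a 3D point from matched pixels in two calibrated views. It assumes the two 3×4 projection matrices are already known and only illustrates the principle; it is not the implementation of any particular system cited here.

```python
# Minimal DLT triangulation sketch (assumes known projection matrices).
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear triangulation of one scene point from two views.

    P1, P2 : (3, 4) camera projection matrices.
    x1, x2 : (2,) pixel coordinates of the same point in each image.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each image measurement contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2]·X) - P[0]·X = 0, etc.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```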
To obtain high-quality models, current research focuses on two aspects. One is to improve the accuracy and computational performance of reconstruction algorithms. Many mature algorithms and software packages can already achieve high-quality image-based 3D reconstruction: open-source systems include Colmap, OpenMVG and VisualSfM, while commercial software includes Context Capture, MetaShape, Reality Capture and Pix4D. In addition, studies such as MVSNet [9] and R-MVSNet [10] use end-to-end deep-learning depth estimation frameworks that infer depth directly from images to obtain dense 3D point clouds, improving accuracy in scenes with repeated or missing textures and drastic illumination changes. Overall, reconstruction algorithms are already relatively mature. The other aspect is to improve the quality of the reconstructed model by acquiring or selecting high-quality images. Unlike algorithmic optimization, this applies mainly to the data acquisition phase, feeding high-quality images into the reconstruction system. The quality of the images determines the quality of the reconstructed model [11], while the number and resolution of the images determine the time cost of the reconstruction process. Inadequate coverage can result in mismatches between images or holes in the reconstructed model. On the other hand, excessively redundant images increase the time and computational cost of both image acquisition and reconstruction, and can even degrade reconstruction quality [7][12]. Image collection is thus increasingly recognized as an essential issue in 3D reconstruction [13].
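To make the redundancy argument concrete, the following is a hedged sketch of one simple pose-based culling heuristic: an image is kept only if its camera position or viewing direction differs sufficiently from those already kept. The thresholds and the pose-only criterion are illustrative assumptions, not a method from this entry or from any particular software.

```python
# Hypothetical greedy redundancy filter over camera poses (illustrative only).
import numpy as np

def select_non_redundant(positions, directions, min_baseline=2.0, min_angle_deg=5.0):
    """positions : (N, 3) camera centres in metres.
    directions   : (N, 3) unit viewing directions.
    Returns indices of images kept."""
    kept = []
    for i in range(len(positions)):
        redundant = False
        for j in kept:
            baseline = np.linalg.norm(positions[i] - positions[j])
            cos_a = np.clip(directions[i] @ directions[j], -1.0, 1.0)
            angle = np.degrees(np.arccos(cos_a))
            # An image is redundant only if it is both close to and looking in
            # nearly the same direction as an already kept image.
            if baseline < min_baseline and angle < min_angle_deg:
                redundant = True
                break
        if not redundant:
            kept.append(i)
    return kept
```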
UAVs are widely used to acquire images for 3D reconstruction from multiple views, i.e., from different orientations and positions. In practice, however, most UAV flights are performed under manual control or predefined flight modes [14], which tends to capture more images than necessary, causing redundancy and long processing times. An efficient path planning solution that allows UAVs to capture images autonomously while ensuring flight safety and reconstructability is therefore urgently needed [15][16]. This research focuses on planning UAV photographic viewpoints and paths to achieve high-quality image acquisition and ultimately high-quality model reconstruction. A multi-rotor UAV equipped with an RTK (Real-Time Kinematic) module and a gimbal provides the hardware basis for high-quality image collection [17][18][19], so that a one-to-one correspondence between captured images and planned viewpoints can be maintained.
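The one-to-one correspondence between planned viewpoints and captured images suggests a simple per-viewpoint record. The sketch below is a hypothetical data structure combining RTK position, heading and gimbal pitch, serialised into a plain waypoint list; the field names are assumptions for illustration, and real flight-controller mission formats differ.

```python
# Hypothetical planned-viewpoint record and waypoint export (illustrative only).
from dataclasses import dataclass
from typing import List

@dataclass
class Viewpoint:
    lat: float               # WGS84 latitude of the camera position
    lon: float               # WGS84 longitude
    alt: float               # altitude above the take-off point, metres
    yaw_deg: float           # UAV heading
    gimbal_pitch_deg: float  # camera pitch: 0 = horizontal, -90 = nadir

def to_waypoint_rows(viewpoints: List[Viewpoint]) -> List[str]:
    """Serialise planned viewpoints into simple CSV rows for a flight mission."""
    rows = ["lat,lon,alt,yaw,gimbal_pitch"]
    for v in viewpoints:
        rows.append(
            f"{v.lat:.7f},{v.lon:.7f},{v.alt:.1f},{v.yaw_deg:.1f},{v.gimbal_pitch_deg:.1f}"
        )
    return rows
```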

2. Priori Geometry Proxy

Current viewpoint selection methods for UAV path planning can be divided into two categories depending on whether an initial proxy is required: estimating viewpoints iteratively in an unknown environment, or determining viewpoints based on an initial coarse model [20]. The former estimates new viewpoints through iterative computation to increase information gain, without prior knowledge [21][22][23][24][25][26][27][28]. Ref. [23] dynamically estimated 3D bounding boxes of buildings to guide online path planning for scene exploration and building observation. Ref. [22] estimated building height and captured close-up images within a SLAM framework to reveal architectural details. This approach, generally referred to as next-best-view planning, struggles to meet the full-coverage requirement of refined reconstruction and relies on the real-time computing power of the UAV.
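The information-gain idea underlying next-best-view planning can be sketched as follows: score each candidate pose by how many still-unknown cells of a coarse occupancy grid fall inside its field of view. The grid, the cone-shaped field-of-view test and the omission of occlusion handling are deliberate simplifications for illustration; none of the cited methods use exactly this criterion.

```python
# Simplified information-gain score for a candidate view (illustrative only).
import numpy as np

UNKNOWN = 0  # label of cells that have not been observed yet

def information_gain(grid, origin, pose, direction,
                     fov_deg=60.0, max_range=40.0, cell=1.0):
    """grid      : (X, Y, Z) int array of cell labels.
    origin       : (3,) world coordinate of grid cell (0, 0, 0).
    pose         : (3,) candidate camera position.
    direction    : (3,) unit viewing direction.
    Returns the number of unknown cells inside the view cone."""
    cos_half_fov = np.cos(np.radians(fov_deg) / 2.0)
    idx = np.argwhere(grid == UNKNOWN)        # (N, 3) indices of unknown cells
    if len(idx) == 0:
        return 0
    centres = origin + (idx + 0.5) * cell     # world-space cell centres
    offsets = centres - pose
    dists = np.linalg.norm(offsets, axis=1)
    near = dists < max_range
    # Keep cells whose direction from the camera lies inside the view cone.
    cos_angles = (offsets[near] @ direction) / np.maximum(dists[near], 1e-9)
    return int(np.sum(cos_angles > cos_half_fov))
```

An online planner would evaluate this score for each reachable candidate pose and fly to the best one, repeating until the expected gain falls below a threshold.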
The latter solution, also called the prior proxy-based viewpoint selection approach [15][16][20][21][29][30][31][32][33][34], is often referred to as explore-then-exploit. It requires an initial model, based on which analysis and planning are carried out to identify viewpoints that satisfy the reconstruction requirements. The proxy can be existing 3D data of the scene with height information, or a low-precision model obtained from an initial flight. Ref. [30] used a 2D map of the scene to estimate building heights from shadows and then generated a coarse 2.5D model. Ref. [31] took a reconstructed dense point cloud as the initial model and, after preprocessing and quality evaluation of the point cloud, determined viewpoints covering the whole scene. Some studies used a 3D mesh as the proxy and planned photographic viewpoints based on its surface geometry [15][16][20]. Considering safety and robustness, the researchers adopt the latter strategy: a rough proxy of the scene to be reconstructed is used to plan the flight path. For generality, a triangular mesh is used as the proxy; any 3D data that can be converted to a mesh can also serve, including a DEM (Digital Elevation Model), a point cloud, a BIM model or an existing 3D mesh of the large scene.
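As an illustration of how a triangular-mesh proxy can seed candidate viewpoints from surface geometry, the sketch below places one camera per face, offset along the outward face normal by a fixed viewing distance. The per-face sampling and the fixed offset are simplifying assumptions; the cited methods add coverage, overlap and safety constraints on top of such candidates.

```python
# Candidate viewpoints from a coarse triangular-mesh proxy (illustrative only).
import numpy as np

def viewpoints_from_mesh(vertices, faces, distance=30.0):
    """vertices : (V, 3) array of vertex positions.
    faces       : (F, 3) integer vertex indices per triangle.
    Returns camera positions and unit viewing directions, one per valid face."""
    positions, directions = [], []
    for a, b, c in faces:
        p0, p1, p2 = vertices[a], vertices[b], vertices[c]
        centroid = (p0 + p1 + p2) / 3.0
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # skip degenerate faces
        normal /= norm
        positions.append(centroid + distance * normal)  # camera in front of the face
        directions.append(-normal)                      # looking back at the face
    return np.array(positions), np.array(directions)
```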

3. Viewpoint Optimization

To meet the requirements of 3D reconstruction while keeping the UAV efficient, the viewpoint set must be optimized so that image acquisition and 3D reconstruction can be completed with fewer viewpoints. Ref. [35] selected viewpoints covering the whole scene considering visibility and self-occlusion, but did not account for the observation of particular parts or for limits on the number of viewpoints and flight time. Refs. [15][16][17] applied submodular optimization to select candidate viewpoints, considering factors such as the number of viewpoints, camera angle, flight time and obstacle avoidance, aiming to obtain the most information with the fewest viewpoints under the given constraints. Ref. [20] applied a reconstruction heuristic to plan the location and orientation of photographic viewpoints as a continuous optimization problem, intending to produce a more accurate and complete reconstruction with fewer images. The above methods require many iterations, and each iteration requires traversing every viewpoint. In addition, methods such as [15][20][23] generate many viewpoints, resulting in extremely long optimization times and convergence to local optima [33].
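The greedy strategy commonly used for this kind of submodular viewpoint selection can be sketched as follows: repeatedly add the viewpoint with the largest marginal coverage gain until a budget on the number of viewpoints is reached. Note that each iteration re-evaluates every remaining candidate, which mirrors the cost issue noted above. The coverage sets and the budget are assumed inputs; the cited works combine this idea with camera-angle, flight-time and obstacle constraints.

```python
# Greedy coverage-maximizing viewpoint selection under a budget (illustrative only).
from typing import List, Set

def greedy_select(coverage: List[Set[int]], budget: int) -> List[int]:
    """coverage[i] is the set of surface-sample ids that viewpoint i observes.
    Returns indices of at most `budget` selected viewpoints."""
    covered: Set[int] = set()
    selected: List[int] = []
    while len(selected) < budget:
        best, best_gain = None, 0
        # Each iteration traverses every remaining viewpoint to find the one
        # with the largest marginal gain in newly covered samples.
        for i, seen in enumerate(coverage):
            if i in selected:
                continue
            gain = len(seen - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining viewpoint adds coverage
            break
        selected.append(best)
        covered |= coverage[best]
    return selected
```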

References

  1. Aharchi, M.; Ait Kbir, M. A Review on 3D Reconstruction Techniques from 2D Images. In Innovations in Smart Cities Applications, 3rd ed.; Ben Ahmed, M., Boudhir, A.A., Santos, D., El Aroussi, M., Karas, İ.R., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 510–522.
  2. Ma, Z.; Liu, S. A Review of 3D Reconstruction Techniques in Civil Engineering and Their Applications. Adv. Eng. Inform. 2018, 37, 163–174.
  3. De Reu, J.; De Smedt, P.; Herremans, D.; Van Meirvenne, M.; Laloo, P.; De Clercq, W. On Introducing an Image-Based 3D Reconstruction Method in Archaeological Excavation Practice. J. Archaeol. Sci. 2014, 41, 251–262.
  4. Furukawa, Y.; Hernández, C. Multi-View Stereo: A Tutorial. Found. Trends® Comput. Graph. Vis. 2015, 9, 1–148.
  5. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314.
  6. Schönberger, J.L.; Zheng, E.; Frahm, J.-M.; Pollefeys, M. Pixelwise View Selection for Unstructured Multi-View Stereo. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 501–518.
  7. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 1, pp. 519–528.
  8. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo Tourism: Exploring Photo Collections in 3D. ACM Trans. Graph. 2006, 25, 835–846.
  9. Yao, Y.; Luo, Z.; Li, S.; Fang, T.; Quan, L. MVSNet: Depth Inference for Unstructured Multi-View Stereo. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 785–801.
  10. Yao, Y.; Luo, Z.; Li, S.; Shen, T.; Fang, T.; Quan, L. Recurrent MVSNet for High-Resolution Multi-View Stereo Depth Inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 5525–5534.
  11. Goesele, M.; Snavely, N.; Curless, B.; Hoppe, H.; Seitz, S.M. Multi-View Stereo for Community Photo Collections. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio De Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
  12. Hornung, A.; Zeng, B.; Kobbelt, L. Image Selection for Improved Multi-View Stereo. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  13. Fathi, H.; Dai, F.; Lourakis, M. Automated As-Built 3D Reconstruction of Civil Infrastructure Using Computer Vision: Achievements, Opportunities, and Challenges. Adv. Eng. Inform. 2015, 29, 149–161.
  14. Liu, X.; Ji, Z.; Zhou, H.; Zhang, Z.; Tao, P.; Xi, K.; Chen, L.; Junior, J. An Object-Oriented UAV 3D Path Planning Method Applied in Cultural Heritage Documentation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, V-1–2022, 33–40.
  15. Hepp, B.; Nießner, M.; Hilliges, O. Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo Reconstruction. ACM Trans. Graph. 2019, 38, 1–17.
  16. Roberts, M.; Shah, S.; Dey, D.; Truong, A.; Sinha, S.; Kapoor, A.; Hanrahan, P.; Joshi, N. Submodular Trajectory Optimization for Aerial 3D Scanning. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5334–5343.
  17. Koch, T.; Körner, M.; Fraundorfer, F. Automatic and Semantically-Aware 3D UAV Flight Planning for Image-Based 3D Reconstruction. Remote Sens. 2019, 11, 1550.
  18. Li, T.; Hailes, S.; Julier, S.; Liu, M. UAV-Based SLAM and 3D Reconstruction System. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 5–8 December 2017; pp. 2496–2501.
  19. Nex, F.; Remondino, F. UAV for 3D Mapping Applications: A Review. Appl. Geomat. 2014, 6, 1–15.
  20. Smith, N.; Moehrle, N.; Goesele, M.; Heidrich, W. Aerial Path Planning for Urban Scene Reconstruction: A Continuous Optimization Method and Benchmark. ACM Trans. Graph. 2018, 37, 1–15.
  21. Hepp, B.; Dey, D.; Sinha, S.N.; Kapoor, A.; Joshi, N.; Hilliges, O. Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 455–472.
  22. Kuang, Q.; Wu, J.; Pan, J.; Zhou, B. Real-Time UAV Path Planning for Autonomous Urban Scene Reconstruction. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1156–1162.
  23. Liu, Y.; Cui, R.; Xie, K.; Gong, M.; Huang, H. Aerial Path Planning for Online Real-Time Exploration and Offline High-Quality Reconstruction of Large-Scale Urban Scenes. ACM Trans. Graph. 2021, 40, 1–16.
  24. Palazzolo, E.; Stachniss, C. Effective Exploration for MAVs Based on the Expected Information Gain. Drones 2018, 2, 9.
  25. Wu, S.; Sun, W.; Long, P.; Huang, H.; Cohen-Or, D.; Gong, M.; Deussen, O.; Chen, B. Quality-Driven Poisson-Guided Autoscanning. ACM Trans. Graph. 2014, 33, 203:1–203:12.
  26. Xu, K.; Shi, Y.; Zheng, L.; Zhang, J.; Liu, M.; Huang, H.; Su, H.; Cohen-Or, D.; Chen, B. 3D Attention-Driven Depth Acquisition for Object Identification. ACM Trans. Graph. 2016, 35, 238:1–238:14.
  27. Song, S.; Kim, D.; Jo, S. Online Coverage and Inspection Planning for 3D Modeling. Auton. Robot. 2020, 44, 1431–1450.
  28. Schmid, L.; Pantic, M.; Khanna, R.; Ott, L.; Siegwart, R.; Nieto, J. An Efficient Sampling-Based Method for Online Informative Path Planning in Unknown Environments. IEEE Robot. Autom. Lett. 2020, 5, 1500–1507.
  29. Zhang, H.; Yao, Y.; Xie, K.; Fu, C.-W.; Zhang, H.; Huang, H. Continuous Aerial Path Planning for 3D Urban Scene Reconstruction. ACM Trans. Graph. 2021, 40, 1–15.
  30. Zhou, X.; Xie, K.; Huang, K.; Liu, Y.; Zhou, Y.; Gong, M.; Huang, H. Offsite Aerial Path Planning for Efficient Urban Scene Reconstruction. ACM Trans. Graph. 2020, 39, 1–16.
  31. Yan, F.; Xia, E.; Li, Z.; Zhou, Z. Sampling-Based Path Planning for High-Quality Aerial 3D Reconstruction of Urban Scenes. Remote Sens. 2021, 13, 989.
  32. Zheng, X.; Wang, F.; Li, Z. A Multi-UAV Cooperative Route Planning Methodology for 3D Fine-Resolution Building Model Reconstruction. ISPRS J. Photogramm. Remote Sens. 2018, 146, 483–494.
  33. Liu, Y.; Lin, L.; Hu, Y.; Xie, K.; Fu, C.-W.; Zhang, H.; Huang, H. Learning Reconstructability for Drone Aerial Path Planning. ACM Trans. Graph. 2022, 41, 1–17.
  34. Li, Q.; Huang, H.; Yu, W.; Jiang, S. Optimized Views Photogrammetry: Precision Analysis and a Large-Scale Case Study in Qingdao. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 1144–1159.
  35. Hoppe, C.; Wendel, A.; Zollmann, S.; Pirker, K.; Irschara, A.; Bischof, H.; Kluckner, S. Photogrammetric Camera Network Design for Micro Aerial Vehicles. In Proceedings of the Computer Vision Winter Workshop, Waikoloa, HI, USA, 3–7 January 2012; pp. 1–3.