UAV-Based Computer Vision for Tree Inventory

Accurate and efficient orchard tree inventories are essential for obtaining the up-to-date information required for effective treatments and for crop insurance purposes. Surveying orchard trees, including counting, locating, and assessing health status, plays a vital role in predicting production volumes and facilitating orchard management.

  • unmanned aerial vehicle (UAV)
  • DeepForest
  • YOLO

1. Background

Tree diseases can have a significant impact on orchard quality and productivity, which is a major concern for the agricultural industry. Unhealthy or stressed orchard trees are more susceptible to pests, diseases, and environmental stressors, which can reduce the yield and quality of fruit and lead to financial losses for growers. Therefore, developing methods for surveying and monitoring tree health and production quality is essential for orchard management. This can help growers make informed decisions about practices such as irrigation, fertilization, and pest control, optimize orchard yield and quality, reduce input usage, and improve the long-term sustainability of orchard production systems.
Traditional methods of monitoring orchard tree health, such as manual inspection and visual examination, rely on human expertise to determine quantitative orchard tree parameters. These methods are labor-intensive, time-consuming, costly, and prone to error. In recent years, remote sensing platforms such as satellites, airplanes, and unmanned aerial vehicles (UAVs) [1][2][3][4] have provided new tools that offer an alternative to traditional methods. Deep neural networks (DNNs) [5] have also emerged as a powerful tool in the field of machine learning. The high spatial resolution of UAV images, combined with computer vision algorithms, has enabled tremendous advances in several domains, such as forestry [6], agriculture [7], geology [8], surveillance [9], and traffic monitoring [10].

2. Tree Detection

Over the years, both classical machine learning and deep learning methods have been extensively explored to address the tree detection problem.
Classical object detection methods often rely on handcrafted features combined with machine learning algorithms. Local binary patterns (LBP) [11], the scale-invariant feature transform (SIFT) [12][13], and histograms of oriented gradients (HOG) [14][15] are the most frequently used handcrafted features in object detection. For example, the work in [16] presented a traditional method for walnut, mango, orange, and apple tree detection. It applies a template-matching image processing approach to very high resolution (VHR) Google Earth images acquired over a variety of orchard trees. The template is based on a geometrical optical model built from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. In [17][18], the authors detected palm trees in UAV RGB images by extracting a set of keypoints using SIFT. The keypoints are then analyzed with an extreme learning machine (ELM) classifier, trained beforehand on a set of palm and non-palm keypoints. Similarly, ref. [19] employed a support vector machine (SVM) to classify image patches into vegetation and non-vegetation. The HOG feature extractor was then applied to the vegetation patches, and the extracted features were used to train an SVM to distinguish palm tree images from background regions. The study in [20] proposed an object detection method using shape features for detecting and counting palm trees. The authors employed the circular autocorrelation of the polar shape (CAPS) matrix representation as the shape feature and a linear SVM to standardize and reduce the dimensionality of the features. Finally, a local maximum detection algorithm based on the spatial distribution of the standardized features was used to detect palm trees. The work in [7] presented a method to detect apple trees in multispectral UAV images. The authors identified trees by applying thresholding techniques to the Normalized Difference Vegetation Index (NDVI) and entropy images, since trees are chlorophyllous bodies with high NDVI values and are heterogeneous, with high entropy. The work in [21] proposed an automated approach to detect and count individual palm trees in UAV images. It is based on two processing steps: first, the authors employed the NDVI to classify image features as trees or non-trees; then, palm trees were identified through texture analysis using the Circular Hough Transform (CHT) and morphological operators. In [22], the authors applied k-means clustering on color, followed by a thresholding technique, to segment out the green portion of the image. Trees were then identified by applying an entropy filter and morphological operations to the segmented image.
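To make this classical pipeline concrete, the following is a minimal Python sketch in the spirit of the NDVI-thresholding methods of [7][21]: threshold an NDVI map, clean the mask with morphology, and count connected components as trees. The band inputs, threshold value, and kernel size are illustrative assumptions, not the cited authors' settings.

```python
import numpy as np
import cv2

def detect_trees(nir: np.ndarray, red: np.ndarray, ndvi_thresh: float = 0.4):
    """nir, red: float reflectance arrays of equal shape."""
    # NDVI = (NIR - Red) / (NIR + Red); high for chlorophyllous canopy
    ndvi = (nir - red) / (nir + red + 1e-8)
    mask = (ndvi > ndvi_thresh).astype(np.uint8)

    # Opening removes small weeds/noise; closing fills holes inside crowns
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Each remaining connected component is treated as one tree crown
    n_labels, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return n_labels - 1, centroids[1:]  # drop the background component
```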
On the other hand, numerous studies have investigated the use of deep learning algorithms to detect trees in UAV RGB imagery. For instance, ref. [23] detected citrus and other crop trees in UAV images using a CNN applied to four spectral bands (green, red, near-infrared, and red-edge). The initial detection was followed by a classification refinement procedure using superpixels derived from the Simple Linear Iterative Clustering (SLIC) algorithm and a thresholding technique, to address the confusion between trees and weeds and the difficulty of distinguishing small trees from large ones. In [24], the authors adopted a sliding-window approach for oil palm tree detection: a sliding window was combined with a pre-trained AlexNet classifier to scan the input image and identify regions containing trees. The work in [25] exploited state-of-the-art CNNs, including YOLO-v5 (with its four sub-versions), YOLO-v3, YOLO-v4, and SSD300, for detecting date palm trees. Similarly, in [26], the authors explored the use of YOLO-v5 and its sub-versions, as well as DeepForest, for the detection of orchard trees. In [27], three state-of-the-art object detection methods were evaluated for the detection of law-protected tree species: the Faster Region-based Convolutional Neural Network (Faster R-CNN) [28], YOLOv3 [29], and RetinaNet [30]. Similarly, the work in [31] explored the use of Faster R-CNN, the Single Shot MultiBox Detector (SSD) [32], and R-FCN [33] architectures to detect seedlings.
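As a concrete example of this detector-based route, the short sketch below runs the DeepForest package (mentioned above and in [26]) on a UAV orthomosaic. The file name is hypothetical, and the API calls follow DeepForest 1.x, so they may differ in other releases.

```python
from deepforest import main

model = main.deepforest()
model.use_release()  # load the prebuilt tree-crown detection weights

# Large orthomosaics are tiled into overlapping patches so that crowns
# appear at roughly the scale the model was trained on.
boxes = model.predict_tile("orchard_orthomosaic.tif",  # hypothetical file
                           patch_size=400, patch_overlap=0.25)
print(boxes[["xmin", "ymin", "xmax", "ymax", "score"]].head())
```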
Most of these works fine-tuned state-of-the-art object detectors for tree detection by taking an object detection model pre-trained on benchmark datasets [34][35][36] and adapting it specifically to the task of detecting trees. However, applying these methods to UAV images poses particular challenges [37] compared to conventional object detection tasks. For example, UAV images often have a large field of view with complex background regions, which can significantly disrupt detection accuracy. Furthermore, the objects of interest are often not uniformly distributed with respect to the background regions, creating an imbalance between positive and negative examples. Data imbalance can also be observed between easy and hard negative examples, since in UAV images a large part of the background consists of regular patterns that the detector can classify easily. Researchers consider that applying deep learning detection algorithms directly in these situations is suboptimal [38], as these algorithms mostly assign the same weight to all training examples, so easy examples may dominate the total loss during training and reduce training efficiency.
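One widely used remedy for this loss-domination effect is the focal loss of RetinaNet [30], which down-weights easy, well-classified examples so that they contribute little to the total loss. A minimal per-example binary version, assuming PyTorch, is sketched below.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits, targets: float tensors of shape (N,); targets hold 0.0/1.0 labels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma is near zero for easy examples, suppressing their loss
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```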
To mitigate this issue, hard negative mining (HNM) can be adopted for object detection. Various HNM approaches [37][39][40] iteratively bootstrap a small set of negative examples by selecting those that trigger a false positive in the detector. For example, ref. [41] trained a state-of-the-art face detector by exploiting the idea of hard negative mining, iteratively updating a Faster R-CNN-based face detector with hard negatives harvested from a large set of background examples. Their method outperforms state-of-the-art detectors on the Face Detection Data Set and Benchmark (FDDB). Similarly, an improved version of Faster R-CNN using hard negative sample mining is proposed in [42] for object detection on the PASCAL VOC dataset [36]. Likewise, ref. [43] used bootstrapping of hard negatives to improve face detection performance on the WIDER FACE dataset [44]; the authors pre-trained Faster R-CNN to mine hard negatives before retraining the model. The work in [45] presented a cascaded boosted forest for pedestrian detection, which performs effective hard negative mining and sample reweighting to classify the region proposals generated by the RPN. The A-Fast-RCNN method, described in [46], takes a different approach to generating hard examples, using occlusion and spatial deformations produced through an adversarial process; the authors conducted their experiments on the PASCAL VOC and MS-COCO datasets. Another approach to applying HNM with SSD is proposed in [47], where the authors use medium priors (anchor boxes with 20% to 50% overlap with the ground-truth boxes) to enhance detector performance on the PASCAL VOC dataset. The proposed framework updates the loss function so that it takes the anchor boxes with partial and marginal overlap into account.
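Most of these schemes share a common selection step: score the negative candidates with the current model and keep only the hardest ones for the update. The sketch below illustrates that step in the spirit of online hard example mining [40], assuming PyTorch; the 3:1 negative-to-positive ratio is an illustrative assumption.

```python
import torch

def ohem_loss(losses: torch.Tensor, is_positive: torch.Tensor, neg_ratio: int = 3):
    """losses: (N,) per-example losses; is_positive: (N,) boolean mask."""
    pos_losses = losses[is_positive]
    neg_losses = losses[~is_positive]
    # Keep only the hardest negatives, capped at neg_ratio * #positives
    k = min(neg_losses.numel(), neg_ratio * max(int(is_positive.sum()), 1))
    hard_neg_losses, _ = torch.topk(neg_losses, k)
    return torch.cat([pos_losses, hard_neg_losses]).mean()
```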

3. Tree Health Classification

Vegetation indices (VIs) have been introduced as indicators of vegetation status, as they provide information on the physiological and biochemical condition of trees. These mathematical combinations of reflectance measurements are sensitive to different vegetation parameters, such as chlorophyll content, leaf area, and water stress. Many studies [48][49][50] have shown that, by analyzing these indices, researchers can gain insights into the health and vitality of trees.
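For reference, the sketch below computes three of the indices discussed in this section from per-band reflectance arrays, using their standard definitions; the small epsilon is only an implementation guard against division by zero.

```python
import numpy as np

EPS = 1e-8

def ndvi(nir, red):          # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red + EPS)

def vari(red, green, blue):  # Visible Atmospherically Resistant Index
    return (green - red) / (green + red - blue + EPS)

def gli(red, green, blue):   # Green Leaf Index
    return (2 * green - red - blue) / (2 * green + red + blue + EPS)
```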
For example, the work in [51] presented a framework for orchard tree segmentation and health assessment, applied to five different orchard tree species: plum, apricot, walnut, olive, and almond. Two vegetation indices, the visible atmospherically resistant index (VARI) and the green leaf index (GLI), were used together with the standard score (also known as the z-score) for tree health assessment. The study in [52] proposed a process workflow for mapping and monitoring olive orchards at tree-scale detail. Five VIs were investigated: the normalized difference vegetation index (NDVI), the modified soil-adjusted vegetation index 2 (MSAVI2), the normalized difference red edge index (NDRE), the modified chlorophyll absorption ratio index improved (MCARI2), and NDVI2. The authors applied statistical analyses to all calculated VIs. Similarly, ref. [53] presented an approach for detecting Huanglongbing (HLB) disease in citrus trees. First, the trees were segmented using thresholding techniques applied to the NDVI. Then, for each segmented tree, a total of thirteen spectral features was computed, comprising six spectral bands and seven vegetation indices: NDVI, the green normalized difference vegetation index (GNDVI), the soil-adjusted vegetation index (SAVI), near-infrared (NIR) minus red (R), R/NIR, green (G)/R, and NIR/R. An SVM classifier was then applied to distinguish between healthy and HLB-infected trees. The work in [54] presented a method for identifying stress in olive trees: an SVM model was applied to VIs to classify each tree pixel into two categories, healthy and stressed. The work in [48] presented a method to monitor grapevine diseases affecting European vineyards. The authors explored the use of different features, including spectral bands, vegetation indices, and biophysical parameters, and conducted a statistical analysis to select the best discriminating variables for separating symptomatic vines, affected by Flavescence dorée (FD) or grapevine trunk diseases (GTD), from asymptomatic vines (Case 1), and FD vines from GTD vines (Case 2).
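As an illustration of the z-score assessment described in [51], the sketch below standardizes a per-tree mean index value (e.g., VARI or GLI) across the orchard and flags trees that fall well below the orchard mean; the cutoff value is an assumption chosen for illustration, not the cited paper's threshold.

```python
import numpy as np

def flag_stressed_trees(tree_vi_means: np.ndarray, z_cutoff: float = -1.5):
    """tree_vi_means: (n_trees,) mean VI value per segmented tree."""
    z = (tree_vi_means - tree_vi_means.mean()) / (tree_vi_means.std() + 1e-8)
    return np.where(z < z_cutoff)[0]  # indices of likely stressed trees
```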

This entry is adapted from the peer-reviewed paper https://doi.org/10.3390/rs15143558

References

  1. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned aerial vehicles in agriculture: A review of perspective of platform, control, and applications. IEEE Access 2019, 7, 105100–105115.
  2. Barbedo, J.G.A. A review on the use of unmanned aerial vehicles and imaging sensors for monitoring and assessing plant stresses. Drones 2019, 3, 40.
  3. Costa, F.G.; Ueyama, J.; Braun, T.; Pessin, G.; Osório, F.S.; Vargas, P.A. The use of unmanned aerial vehicles and wireless sensor network in agricultural applications. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5045–5048.
  4. Urbahs, A.; Jonaite, I. Features of the use of unmanned aerial vehicles for agriculture applications. Aviation 2013, 17, 170–175.
  5. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  6. Bouachir, W.; Ihou, K.E.; Gueziri, H.E.; Bouguila, N.; Bélanger, N. Computer Vision System for Automatic Counting of Planting Microsites Using UAV Imagery. IEEE Access 2019, 7, 82491–82500.
  7. Haddadi, A.; Leblon, B.; Patterson, G. Detecting and Counting Orchard Trees on Unmanned Aerial Vehicle (UAV)-Based Images Using Entropy and Ndvi Features. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1211–1215.
  8. Zhang, Y.; Wang, G.; Li, M.; Han, S. Automated classification analysis of geological structures based on images data and deep learning model. Appl. Sci. 2018, 8, 2493.
  9. Geng, L.; Zhang, Y.; Wang, P.; Wang, J.J.; Fuh, J.Y.; Teo, S. UAV surveillance mission planning with gimbaled sensors. In Proceedings of the 11th IEEE International Conference on Control & Automation (ICCA), Taichung, Taiwan, 21 November 2014; pp. 320–325.
  10. Kanistras, K.; Martins, G.; Rutherford, M.J.; Valavanis, K.P. A survey of unmanned aerial vehicles (UAVs) for traffic monitoring. In Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 28–31 May 2013; pp. 221–234.
  11. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
  12. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527.
  13. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  14. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the CVPR, San Diego, CA, USA, 20–26 June 2005; pp. 886–893.
  15. Shao, W.; Yang, W.; Liu, G.; Liu, J. Car detection from high-resolution aerial imagery using multiple features. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 4379–4382.
  16. Maillard, P.; Gomes, M.F. Detection and counting of orchard trees from vhr images using a geometrical-optical model and marked template matching. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 75.
  17. Malek, S.; Bazi, Y.; Alajlan, N.; AlHichri, H.; Melgani, F. Efficient framework for palm tree detection in UAV images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4692–4703.
  18. Bazi, Y.; Malek, S.; Alajlan, N.A.; Alhichri, H.S. An automatic approach for palm tree counting in UAV images. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 537–540.
  19. Wang, Y.; Zhu, X.; Wu, B. Automatic detection of individual oil palm trees from UAV images using HOG features and an SVM classifier. Int. J. Remote Sens. 2019, 40, 7356–7370.
  20. Manandhar, A.; Hoegner, L.; Stilla, U. Palm tree detection using circular autocorrelation of polar shape matrix. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 465.
  21. Mansoori, S.A.; Kunhu, A.; Ahmad, H.A. Automatic palm trees detection from multispectral UAV data using normalized difference vegetation index and circular Hough transform. Remote Sens. 2018, 10792, 11–19.
  22. Hassaan, O.; Nasir, A.K.; Roth, H.; Khan, M.F. Precision forestry: Trees counting in urban areas using visible imagery based on an unmanned aerial vehicle. IFAC-PapersOnLine 2016, 49, 16–21.
  23. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of citrus trees from unmanned aerial vehicle imagery using convolutional neural networks. Drones 2018, 2, 39.
  24. Li, W.; Fu, H.; Yu, L. Deep convolutional neural network based large-scale oil palm tree detection for high-resolution remote sensing images. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 846–849.
  25. Jintasuttisak, T.; Edirisinghe, E.; Elbattay, A. Deep neural network based date palm tree detection in drone imagery. Comput. Electron. Agric. 2022, 192, 106560.
  26. Jemaa, H.; Bouachir, W.; Leblon, B.; Bouguila, N. Computer vision system for detecting orchard trees from UAV images. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 661–668.
  27. Santos, A.A.D.; Marcato Junior, J.; Araújo, M.S.; Di Martini, D.R.; Tetila, E.C.; Siqueira, H.L.; Aoki, C.; Eltner, A.; Matsubara, E.T.; Pistori, H.; et al. Assessment of CNN-based methods for individual tree detection on images captured by RGB cameras attached to UAVs. Sensors 2019, 19, 3595.
  28. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 1137–1149.
  29. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
  30. Lin, T.Y.; Goyal, P.; Girshick, R.B.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327.
  31. Fromm, M.; Schubert, M.; Castilla, G.; Linke, J.; McDermid, G. Automated detection of conifer seedlings in drone imagery using convolutional neural networks. Remote Sens. 2019, 11, 2585.
  32. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37.
  33. Dai, J.; Li, Y.; He, K.; Sun, J. R-fcn: Object detection via region-based fully convolutional networks. Adv. Neural Inf. Process. Syst. 2016, 29, 379–387.
  34. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  35. Lin, T.Y.; Maire, M.; Belongie, S.J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
  36. Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
  37. Zhang, L.; Wang, Y.; Huo, Y. Object detection in high-resolution remote sensing images based on a hard-example-mining network. IEEE Trans. Geosci. Remote Sens. 2020, 59, 8768–8780.
  38. Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.J.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A Large-Scale Dataset for Object Detection in Aerial Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983.
  39. Jin, S.; RoyChowdhury, A.; Jiang, H.; Singh, A.; Prasad, A.; Chakraborty, D.; Learned-Miller, E.G. Unsupervised Hard Example Mining from Videos for Improved Object Detection. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 307–324.
  40. Shrivastava, A.; Gupta, A.K.; Girshick, R.B. Training Region-Based Object Detectors with Online Hard Example Mining. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 761–769.
  41. Wan, S.; Chen, Z.; Tao, Z.; Zhang, B.; kat Wong, K. Bootstrapping Face Detection with Hard Negative Examples. arXiv 2016, arXiv:1608.02236.
  42. Liu, Y. An Improved Faster R-CNN for Object Detection. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 8–9 December 2018; Volume 2, pp. 119–123.
  43. Sun, X.; Wu, P.; Hoi, S.C. Face detection using deep learning: An improved faster RCNN approach. Neurocomputing 2018, 299, 42–50.
  44. Yang, S.; Luo, P.; Loy, C.C.; Tang, X. WIDER FACE: A Face Detection Benchmark. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5525–5533.
  45. Zhang, L.; Lin, L.; Liang, X.; He, K. Is Faster R-CNN Doing Well for Pedestrian Detection? arXiv 2016, arXiv:1607.07032.
  46. Wang, X.; Shrivastava, A.; Gupta, A.K. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3039–3048.
  47. Ravi, N.; El-Sharkawy, M. Improved Single Shot Detector with Enhanced Hard Negative Mining Approach. In Proceedings of the 2022 International Conference on Advanced Computer Science and Information Systems (ICACSIS), Depok, Indonesia, 1–3 October 2022; pp. 25–30.
  48. Albetis, J.; Jacquin, A.; Goulard, M.; Poilvé, H.; Rousseau, J.; Clenet, H.; Dedieu, G.; Duthoit, S. On the potentiality of UAV multispectral imagery to detect Flavescence dorée and Grapevine Trunk Diseases. Remote Sens. 2018, 11, 23.
  49. Vélez, S.; Ariza-Sentís, M.; Valente, J. Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery. Eur. J. Agron. 2023, 142, 126691.
  50. Chang, A.; Yeom, J.; Jung, J.; Landivar, J. Comparison of canopy shape and vegetation indices of citrus trees derived from UAV multispectral images for characterization of citrus greening disease. Remote Sens. 2020, 12, 4122.
  51. Șandric, I.; Irimia, R.; Petropoulos, G.P.; Anand, A.; Srivastava, P.K.; Pleșoianu, A.; Faraslis, I.; Stateras, D.; Kalivas, D. Tree’s detection & health’s assessment from ultra-high resolution UAV imagery and deep learning. Geocarto Int. 2022, 37, 10459–10479.
  52. Solano, F.; Di Fazio, S.; Modica, G. A methodology based on GEOBIA and WorldView-3 imagery to derive vegetation indices at tree crown detail in olive orchards. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101912.
  53. Garcia-Ruiz, F.; Sankaran, S.; Maja, J.M.; Lee, W.S.; Rasmussen, J.; Ehsani, R. Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees. Comput. Electron. Agric. 2013, 91, 106–115.
  54. Navrozidis, I.; Haugommard, A.; Kasampalis, D.; Alexandridis, T.; Castel, F.; Moshou, D.; Ovakoglou, G.; Pantazi, X.E.; Tamouridou, A.A.; Lagopodi, A.L.; et al. Assessing Olive Trees Health Using Vegetation Indices and Mundi Web Services for Sentinel-2 Images. In Proceedings of the Hellenic Association on Information and Communication Technologies in Agriculture, Food & Environment, Thessaloniki, Greece, 24–27 September 2020; pp. 130–136.