Deep Learning-Based Weed Detection Using UAV Images
Subjects: Remote Sensing

Recently, Unmanned Aerial Vehicles (UAVs) have made significant progress in design and capability, including payload flexibility, communication and connectivity, navigation and autonomy, and speed and flight time, giving them the potential to revolutionize precision agriculture.

  • semantic segmentation
  • UAV
  • drones
  • deep learning
  • weed detection
  • precision agriculture

1. Introduction

Global food demand is projected to surge by 35% to 56% between 2010 and 2050 [1]. However, expanding industrialization, desertification and urbanization have reduced the area available for crop production and, hence, food productivity [2]. In addition, climate change is increasingly creating favorable conditions for pests such as insects and weeds, which damage crops [3]. Crop quality and quantity will therefore suffer unless appropriate treatment is devised in a timely manner. Traditionally, herbicides and pesticides have been employed as a means of control [4]. When herbicides are sprayed across entire fields without precise identification of weeds, they serve their purpose but harm both crop yield and the environment; excessive application where no weeds are present can itself reduce agricultural productivity [5]. It is therefore essential to distinguish weeds from crops precisely, so that cultivated plants are spared from herbicide damage. This creates a need for weed management methods that can gather and assess weed-related data within the agricultural field and take appropriate measures to regulate weed growth on farms [6].
Remote sensing (RS) approaches based on satellite imagery offer one alternative for automated weed detection [7]. However, the success of satellite-based RS in weed detection is constrained by three major limitations. First, satellites acquire images with spatial resolutions measured in meters (e.g., Landsat at 30 m and Sentinel at 10 m), which is generally insufficient for analyzing weeds at the individual-plant or plot level. Second, the fixed schedule of satellite revisits may not align with the timing needed to capture essential crop field images. Third, environmental factors such as cloud cover frequently degrade image quality.
Recently, Unmanned Aerial Vehicles (UAVs) have made significant progress in design and capability, including payload flexibility, communication and connectivity, navigation and autonomy, and speed and flight time [8]. They offer versatile revisiting capabilities, allowing farmers and researchers to deploy them whenever weather conditions permit and thereby capture images frequently (high temporal resolution). Moreover, UAVs can capture images with remarkable spatial detail, closely observing individual plants from an elevated perspective at centimeter-level resolution. Additionally, by flying at lower altitudes, UAVs can bypass cloud cover and obtain clear, high-quality images [9]. Combined with the high-resolution crop field images acquired by UAVs, semantic segmentation methods based on deep learning (DL) offer a promising route to precise weed detection.
Semantic segmentation (SS) in computer vision is a pixel-level classification task that has revolutionized various fields, such as medical image segmentation [10][11] and precision agriculture (PA) [12]. For instance, Ryu et al. [10] segmented retinal vessel images to help diagnose and treat retinal diseases. In the PA domain, SS has been adopted for problems such as agricultural field boundary segmentation [12], agricultural land segmentation [13], diseased vs. healthy plant detection [14] and weed segmentation [15]. Weed segmentation, which identifies unwanted plants that disturb the growth of crops, is considered one of the areas that contributes most directly to improving crop productivity.
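To make the evaluation side of this task concrete, the following is a minimal NumPy sketch of the per-class Intersection over Union (IoU) metric commonly reported for weed segmentation; the three-class label set of soil, crop and weed is an illustrative assumption, not taken from a specific cited work.

```python
import numpy as np

def per_class_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> list:
    """Compute Intersection over Union per class for integer label masks."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

# Illustrative 3-class example: 0 = soil, 1 = crop, 2 = weed (assumed labels)
pred = np.random.randint(0, 3, size=(256, 256))
gt = np.random.randint(0, 3, size=(256, 256))
print(per_class_iou(pred, gt, num_classes=3))
```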
Over recent years, SS has gained significant traction in the weed detection area of PA. Computer vision techniques that apply image processing and machine learning methods to weed detection have been widely investigated in the literature [16][17][18]. However, deep learning methods for SS have shown state-of-the-art (SOTA) results for image segmentation tasks in general. The availability of deep neural networks pre-trained on large datasets such as ImageNet [19] makes it possible to transfer cross-domain knowledge to agricultural field images. For instance, convolutional neural networks (CNNs) such as DeepLab [20], UNet [21] and SegNet [22] have been applied to weed detection on various crop fields. The performance of these networks depends on multiple factors, such as image resolution, crop type and field conditions. Since the color and texture of weeds are very similar to those of crops, differentiating between the two is a complex problem. Furthermore, if more than one type of weed is present in the field, segmenting such regions becomes even more challenging.
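As an illustration of this transfer-learning workflow, the sketch below loads a torchvision DeepLabV3 model pre-trained on a large generic dataset and swaps its final classifier for a three-class soil/crop/weed head; the label set, tile size and hyperparameters are assumptions for demonstration, not settings from the cited studies.

```python
import torch
from torch import nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # assumed label set: 0 = soil, 1 = crop, 2 = weed

# Load a backbone pre-trained on a large generic dataset, then swap the
# final 1x1 classifier for our 3-class weed-segmentation head.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of UAV image tiles.
images = torch.randn(2, 3, 512, 512)                   # (batch, RGB, H, W)
masks = torch.randint(0, NUM_CLASSES, (2, 512, 512))   # per-pixel labels

model.train()
optimizer.zero_grad()
logits = model(images)["out"]                          # (batch, classes, H, W)
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
```

When labeled UAV data are scarce, a common variant of this recipe is to freeze the early backbone layers and fine-tune only the decoder head.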

2. Deep Learning-Based Weed Detection Using UAV Images

Owing to recent advancements in drone and sensor technology, research on weed detection using DL methods has progressed swiftly. For instance, a CNN was implemented by dos Santos Ferreira et al. [15] for weed detection using aerial images. They acquired soybean (Glycine max) field images in Brazil with a drone and created a database of more than 1500 images covering soil, soybeans, broadleaf weeds and grass weeds. A classification accuracy of 98% was achieved using ConvNets for detecting the broadleaf and grass weeds. However, their approach classified whole images into different categories rather than segmenting image pixels into various classes. Similarly, a CNN was implemented for weed mapping in sod production using aerial images by Zhang et al. [23]. They first processed the UAV images using Pix4Dmapper to produce an orthomosaic of the agricultural field, which was then divided into smaller image tiles. A CNN was built with an input image size of 125 px × 125 px. The CNN achieved maximum precisions of 0.87, 0.82, 0.83, 0.90 and 0.88 for broadleaf weeds, grass weeds, spurge (Euphorbia spp.), sedges (Cyperus spp.) and no weeds, respectively. Ong et al. [24] performed weed detection on a Chinese cabbage field using UAV images. They adapted AlexNet [25] for weed detection and compared its performance with traditional machine learning classifiers such as random forest [26]. The results showed that the CNN achieved the highest accuracy of 92.41%, which was 6% higher than that of random forest. A lightweight deep learning framework for weed detection in soybean fields was implemented by Razfar et al. [27] using the MobileNetV2 [28] and ResNet50 [29] networks.
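A minimal sketch of the tile-and-classify workflow described above: an orthomosaic is split into 125 px × 125 px tiles (the size reported by Zhang et al.) and each tile is classified by a small CNN. The `TinyWeedCNN` network here is an illustrative stand-in, not the architecture from [23].

```python
import numpy as np
import torch
from torch import nn

TILE = 125  # tile size in pixels, as in the text

class TinyWeedCNN(nn.Module):
    """Illustrative patch classifier; not the exact network from [23]."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def tile_orthomosaic(mosaic: np.ndarray, tile: int = TILE):
    """Split an (H, W, 3) orthomosaic into non-overlapping tiles."""
    h, w, _ = mosaic.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield (y, x), mosaic[y:y + tile, x:x + tile]

# Classify each tile of a dummy orthomosaic into one of five classes
# (broadleaf, grass weed, spurge, sedge, no weed, per the text).
model = TinyWeedCNN().eval()
mosaic = np.random.rand(500, 500, 3).astype(np.float32)
with torch.no_grad():
    for (y, x), patch in tile_orthomosaic(mosaic):
        t = torch.from_numpy(patch).permute(2, 0, 1).unsqueeze(0)
        label = model(t).argmax(1).item()
```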
Aside from single-stage CNNs, a few works have used multi-stage pipelines for weed detection in UAV images. For instance, Bah et al. [30] implemented a three-step method for weed detection on spinach and bean fields. First, they detected the crop rows using the Hough transform [31]; the inter-row weeds then served as training samples for a CNN trained to detect crops and weeds in the UAV images. However, their proposal depends on the accuracy of the line detection step, which might not be robust when UAV images contain varying backgrounds and contrast. A two-stage classifier for weed detection in tobacco crops was implemented in [32]. There, the background pixels were first separated from the vegetation pixels, which included both weed and tobacco pixels; a three-class image segmentation model was then applied. Their proposal achieved a maximum Intersection over Union (IoU) of 0.91 for weed segmentation. However, the two-stage model requires separate training at each stage, so it cannot be trained end-to-end, which adds extra complexity to deployment.
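To make the crop-row step of Bah et al. concrete, the following OpenCV sketch masks vegetation with a simple HSV green range and then detects dominant row lines with the probabilistic Hough transform; the color thresholds, Hough parameters and file name are illustrative assumptions, not the authors' values.

```python
import cv2
import numpy as np

def detect_crop_rows(bgr: np.ndarray):
    """Detect candidate crop-row lines in a UAV image (illustrative parameters)."""
    # 1. Mask vegetation with a simple HSV green range (threshold is an assumption).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    veg = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    # 2. Probabilistic Hough transform on the binary vegetation mask.
    lines = cv2.HoughLinesP(veg, rho=1, theta=np.pi / 180, threshold=100,
                            minLineLength=200, maxLineGap=30)
    return lines  # each entry: [x1, y1, x2, y2]

image = cv2.imread("uav_field.jpg")  # hypothetical file name
if image is not None:
    rows = detect_crop_rows(image)
    print(0 if rows is None else len(rows), "candidate row segments")
```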
Object detection approaches such as the single shot detector (SSD) [33], Faster RCNN [34] and YOLO [35] have also been employed for weed detection using UAV images. For instance, Veeranampalayam Sivakumar et al. [33] compared two object detectors, Faster RCNN and SSD, for weed detection in UAV imagery. The InceptionV2 [36] model was used for feature extraction in both detectors. The comparison revealed that the Faster RCNN model achieved both higher accuracy and lower inference time for weed detection.
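The sketch below shows the standard torchvision recipe for adapting a COCO-pre-trained Faster R-CNN to a two-class (background vs. weed) detection task; the ResNet-50 FPN backbone is a stand-in for the InceptionV2 backbone used in [33], and the label set is assumed.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + weed (assumed label set)

# Start from a COCO-pre-trained detector and replace the box-predictor head.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Inference on a dummy UAV image tile: returns boxes, labels and scores.
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])[0]
print(preds["boxes"].shape, preds["scores"][:5])
```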
Segmenting images into weed and non-weed regions at the pixel level is more precise and can support the accurate application of pesticides. Xu et al. [20] combined a visible color index with a DL-based segmentation model for weed detection in soybean fields. They first generated a visible color index image for each UAV image and fed it into a segmentation model built on the DeepLabV3 [37] network. Compared with other SOTA segmentation architectures such as the fully convolutional network (FCN) [38] and UNet [39], their method achieved an accuracy of 90.50% and an IoU score of 95.90% for weed segmentation.
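Visible color indices in such pipelines are computed from the RGB bands alone; a common choice is the Excess Green (ExG) index, sketched below in NumPy. Whether ExG is the exact index used in [20] is an assumption, and the vegetation threshold is illustrative.

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Excess Green index ExG = 2g - r - b on chromatic coordinates.

    `rgb` is an (H, W, 3) float array scaled to [0, 1]. Higher values
    indicate vegetation; the index image can be fed to a segmentation
    model alongside (or instead of) the raw RGB tile.
    """
    total = rgb.sum(axis=2, keepdims=True) + 1e-8  # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, 2, 0)       # normalized channels
    return 2 * g - r - b

rgb = np.random.rand(256, 256, 3)
exg = excess_green(rgb)
vegetation_mask = exg > 0.05  # illustrative threshold, not from [20]
```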

This entry is adapted from the peer-reviewed paper https://doi.org/10.3390/drones7100624

References

  1. Van Dijk, M.; Morley, T.; Rau, M.L.; Saghai, Y. A meta-analysis of projected global food demand and population at risk of hunger for the period 2010–2050. Nat. Food 2021, 2, 494–501.
  2. Satterthwaite, D.; McGranahan, G.; Tacoli, C. Urbanization and its implications for food and farming. Philos. Trans. R. Soc. B Biol. Sci. 2010, 365, 2809–2820.
  3. Oerke, E.C. Crop losses to pests. J. Agric. Sci. 2006, 144, 31–43.
  4. Huang, H.; Lan, Y.; Deng, J.; Yang, A.; Deng, X.; Zhang, L.; Wen, S. A semantic labeling approach for accurate weed mapping of high resolution UAV imagery. Sensors 2018, 18, 2113.
  5. Molina-Villa, M.A.; Solaque-Guzmán, L.E. Machine vision system for weed detection using image filtering in vegetables crops. Rev. Fac. Ing. Univ. Antioq. 2016, 80, 124–130.
  6. Ofosu, R.; Agyemang, E.D.; Márton, A.; Pásztor, G.; Taller, J.; Kazinczi, G. Herbicide Resistance: Managing Weeds in a Changing World. Agronomy 2023, 13, 1595.
  7. Shendryk, Y.; Rossiter-Rachor, N.A.; Setterfield, S.A.; Levick, S.R. Leveraging high-resolution satellite imagery and gradient boosting for invasive weed mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4443–4450.
  8. Mohsan, S.A.H.; Othman, N.Q.H.; Li, Y.; Alsharif, M.H.; Khan, M.A. Unmanned aerial vehicles (UAVs): Practical aspects, applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 2023, 16, 109–137.
  9. Luna, I.; Lobo, A. Mapping crop planting quality in sugarcane from UAV imagery: A pilot study in Nicaragua. Remote Sens. 2016, 8, 500.
  10. Ryu, J.; Rehman, M.U.; Nizami, I.F.; Chong, K.T. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation. Comput. Biol. Med. 2023, 163, 107132.
  11. Liu, H.; Huo, G.; Li, Q.; Guan, X.; Tseng, M.L. Multiscale lightweight 3D segmentation algorithm with attention mechanism: Brain tumor image segmentation. Expert Syst. Appl. 2023, 214, 119166.
  12. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 111741.
  13. Safarov, F.; Temurbek, K.; Jamoljon, D.; Temur, O.; Chedjou, J.C.; Abdusalomov, A.B.; Cho, Y.I. Improved Agricultural Field Segmentation in Satellite Imagery Using TL-ResUNet Architecture. Sensors 2022, 22, 9784.
  14. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450.
  15. dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324.
  16. Al-Badri, A.H.; Ismail, N.A.; Al-Dulaimi, K.; Salman, G.A.; Khan, A.; Al-Sabaawi, A.; Salam, M.S.H. Classification of weed using machine learning techniques: A review—challenges, current and future potential techniques. J. Plant Dis. Prot. 2022, 129, 745–768.
  17. Tellaeche, A.; Pajares, G.; Burgos-Artizzu, X.P.; Ribeiro, A. A computer vision approach for weeds identification through Support Vector Machines. Appl. Soft Comput. 2011, 11, 908–915.
  18. Wu, Z.; Chen, Y.; Zhao, B.; Kang, X.; Ding, Y. Review of weed detection methods based on computer vision. Sensors 2021, 21, 3647.
  19. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  20. Xu, B.; Fan, J.; Chao, J.; Arsenijevic, N.; Werle, R.; Zhang, Z. Instance segmentation method for weed detection using UAV imagery in soybean fields. Comput. Electron. Agric. 2023, 211, 107994.
  21. Genze, N.; Ajekwe, R.; Güreli, Z.; Haselbeck, F.; Grieb, M.; Grimm, D.G. Deep learning-based early weed segmentation using motion blurred UAV images of sorghum fields. Comput. Electron. Agric. 2022, 202, 107388.
  22. Ma, X.; Deng, X.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, e0215676.
  23. Zhang, J.; Maleski, J.; Jespersen, D.; Waltz Jr, F.; Rains, G.; Schwartz, B. Unmanned Aerial System-Based Weed Mapping in Sod Production Using a Convolutional Neural Network. Front. Plant Sci. 2021, 12, 702626.
  24. Ong, P.; Teo, K.S.; Sia, C.K. UAV-based weed detection in Chinese cabbage using deep learning. Smart Agric. Technol. 2023, 4, 100181.
  25. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9.
  26. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Fleischfresser, D.B.; O’Connor, D.J.; Wright, G.C.; Guo, W. Peanut yield prediction with UAV multispectral imagery using a cooperative machine learning approach. Electron. Res. Arch. 2023, 31, 3343–3361.
  27. Razfar, N.; True, J.; Bassiouny, R.; Venkatesh, V.; Kashef, R. Weed detection in soybean crops using custom lightweight deep learning models. J. Agric. Food Res. 2022, 8, 100308.
  28. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
  29. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
  30. Bah, M.D.; Hafiane, A.; Canals, R. Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sens. 2018, 10, 1690.
  31. Mukhopadhyay, P.; Chaudhuri, B.B. A survey of Hough Transform. Pattern Recognit. 2015, 48, 993–1010.
  32. Moazzam, S.I.; Khan, U.S.; Qureshi, W.S.; Nawaz, T.; Kunwar, F. Towards automated weed detection through two-stage semantic segmentation of tobacco and weed pixels in aerial Imagery. Smart Agric. Technol. 2023, 4, 100142.
  33. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of object detection and patch-based classification deep learning models on mid- to late-season weed detection in UAV imagery. Remote Sens. 2020, 12, 2136.
  34. Ajayi, O.G.; Ashi, J. Effect of varying training epochs of a Faster Region-Based Convolutional Neural Network on the Accuracy of an Automatic Weed Classification Scheme. Smart Agric. Technol. 2023, 3, 100128.
  35. Gallo, I.; Rehman, A.U.; Dehkordi, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep object detection of crop weeds: Performance of YOLOv7 on a real case dataset from UAV images. Remote Sens. 2023, 15, 539.
  36. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
  37. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv 2014, arXiv:1412.7062.
  38. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  39. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.