Deep Learning-Based Methods for Crop Disease Estimation

Deep learning methods such as U-Net, SegNet, YOLO, Faster R-CNN, VGG and ResNet have been used extensively for crop disease estimation using Unmanned Aerial Vehicle (UAV) imagery. The basic building block of these deep learning architectures is the convolutional neural network (CNN). The deep learning models implemented for crop disease estimation using UAV imagery can be categorized into classification-based, segmentation-based and detection-based approaches. Segmentation-based models attempt to classify each pixel in an image into different categories such as healthy vs. diseased pixels, whereas classification-based models look at the image as a whole and assign it to one of several pre-defined disease classes.

Keywords: UAV; crop disease; drone; deep learning

1. Pixel-Based Segmentation Models

An image segmentation model normally classifies the image pixels into different regions or categories. Traditional methods of image segmentation include clustering the pixels into different groups using iterative techniques such as K-means or ISODATA. Deep learning-based image segmentation methods utilize an encoder–decoder structure, where the encoder combines convolutions and down-sampling operations to map the input image into a latent space, and the decoder then reconstructs the segmentation map from that latent space using up-sampling operations. The popular encoder–decoder architectures for image segmentation commonly used with UAV imagery are U-Net [1], PSPNet [2], SegNet [3] and others [4], as reported in Table 1.
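As a minimal sketch of this encoder–decoder idea (an illustrative toy model, not the exact networks used in the cited studies), the following PyTorch snippet down-samples the input into a latent representation and up-samples it back into a per-pixel segmentation map, with one U-Net-style skip connection:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)                         # down-sampling
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)   # up-sampling
        self.dec1 = conv_block(64, 32)                      # after skip concat
        self.head = nn.Conv2d(32, n_classes, 1)             # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)                  # full-resolution features
        e2 = self.enc2(self.pool(e1))      # latent (half-resolution) features
        d1 = self.up(e2)                   # decoder up-samples the latent space
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        return self.head(d1)               # (N, n_classes, H, W) logits

# e.g. a 5-band multispectral tile, healthy-vs-diseased segmentation:
model = TinyUNet(in_channels=5, n_classes=2)
logits = model(torch.randn(1, 5, 256, 256))  # -> (1, 2, 256, 256)
```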
The majority of the existing works [1][5][6] on crop disease segmentation in UAV imagery utilized U-Net [7], one of the most widely used DL architectures for semantic segmentation. Wheat yellow rust monitoring using UAV was implemented by Su et al. [1]. U-Net models with various input combinations were designed and tested, where the five-band input outperformed all other combinations, such as RGB-only and VIs. Furthermore, U-Net was used by Oliveira et al. [5] to detect nematodes on coffee with RGB images acquired at a flight altitude of 10 m. They also trained PSPNet to detect the nematode pest on coffee images at different resolutions and compared its performance to U-Net, where U-Net outperformed PSPNet with an overall precision of 69.00%. A modified version of U-Net was proposed by Zhang et al. [6] for wheat yellow rust detection with RGB aerial images. They improved the U-Net architecture by adding irregular encoder and decoder modules along with a channel-wise re-weight module and compared its performance with the original U-Net. Their results showed that the modified U-Net achieved an overall accuracy of 97.13% with a five-band input image. Another study on yellow rust detection on wheat with multispectral images and U-Net was conducted by [8], with an overall accuracy of 96.3%. These observations suggest that U-Net has merits for crop disease segmentation with aerial images from either multispectral or RGB sensors. However, it should also be noted that such UAV images should have high resolution and be acquired at altitudes below 30 m.
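For the five-band input case reported above, the input tensor simply carries five channels instead of three. A hedged sketch of assembling such an input for the toy model above (the band names, order and per-band scaling are illustrative assumptions, not the preprocessing used in [1]):

```python
import numpy as np
import torch

def stack_bands(r, g, b, red_edge, nir):
    """Stack co-registered single-band arrays of shape (H, W)
    into a (1, 5, H, W) float tensor for a 5-channel network."""
    bands = np.stack([r, g, b, red_edge, nir], axis=0).astype(np.float32)
    # Crude per-band max scaling; real pipelines use radiometric calibration.
    bands /= bands.max(axis=(1, 2), keepdims=True) + 1e-8
    return torch.from_numpy(bands).unsqueeze(0)

# x = stack_bands(*(np.random.rand(256, 256) for _ in range(5)))
# logits = TinyUNet(in_channels=5, n_classes=2)(x)
```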
Similarly, Mask R-CNN [9], SegNet [3], FCN [10], PSPNet [2], DeepLabV3 [4], CropDocNet [11] and VddNet [12] were also utilized for the segmentation of various crop diseases, as reported in Table 1. For instance, Stewart et al. [9] implemented an instance segmentation model based on Mask R-CNN for northern leaf blight (NLB) on maize. They achieved an average precision of 0.96 at an intersection over union (IoU) threshold of 0.50. With such promising results, deep learning-based instance segmentation using UAV imagery shows great potential for plant disease detection. Mildew disease detection in vine was investigated by Kerkech et al. [3] using multispectral images and SegNet [13]. They used SegNet to classify each pixel of the vine-field images into shadow, ground, healthy and mildew-symptom classes. Their method achieved the highest detection accuracy of 92% at the grapevine level, while the detection accuracy was 87% at the leaf level. Similarly, Cercospora leaf spot (CLS) detection on sugar beet was investigated by [10] using a fully convolutional network (FCN). Their FCN was based on DenseNet [14] and was trained on pixels labelled as CLS, healthy and background. Their method achieved F-scores of 44.48%, 88.26% and 93.90% for CLS, healthy and background pixels, respectively, under changing field conditions.
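The evaluation quantities quoted above can be made concrete with a small sketch: per-class IoU and F-score computed from predicted and ground-truth label maps (the class encoding is an illustrative assumption):

```python
import numpy as np

def class_iou_f1(pred, gt, cls):
    """pred, gt: integer label maps of equal shape; cls: class id."""
    p, g = (pred == cls), (gt == cls)
    tp = np.logical_and(p, g).sum()    # pixels correctly labelled cls
    fp = np.logical_and(p, ~g).sum()   # false alarms
    fn = np.logical_and(~p, g).sum()   # misses
    iou = tp / (tp + fp + fn + 1e-8)
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-8)
    return iou, f1
```

In the instance-segmentation setting of Stewart et al. [9], a predicted instance counts as correct when its mask IoU with a ground-truth instance exceeds the chosen threshold (0.50 above), and average precision is then computed over the ranked detections.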
Table 1. Summary of pixel-based segmentation DL models for crop disease detection using UAV imagery. Note that the disease abbreviations are denoted as NLB (northern leaf blight), VD (vine disease), CLS (Cercospora leaf spot), YR (yellow rust), NM (nematodes), SR (stripe rust), LB (late blight).

2. Object-Level Classification Models

The object-level classification models take an image as input and classify it into one of the predetermined object classes. Since UAV images are acquired as overlapping tiles of agricultural fields and are later stitched into a single field map, the crop field region can be divided into small object-level tiles. Using such tiles, a deep learning model can be trained to classify each tile as a diseased or healthy region. As a post-processing step, these outputs can be merged again to rebuild the original agricultural field map with diseased vs. healthy regions. As reported in Table 2, two types of DL methods were used for crop disease classification using UAV imagery: first, existing pre-trained deep learning architectures such as ResNet [15], Inception-v3 [16], VGG [17], DenseNet [18], MobileNet [19] and GoogleNet [20], which were mostly trained on ImageNet [21] and are readily reusable for other tasks via transfer learning; and second, custom-designed convolutional neural networks (CNNs) specific to a particular task that need training from scratch.
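A minimal sketch of this transfer-learning recipe, assuming torchvision ≥ 0.13 and a two-class healthy-vs-diseased tile problem (the class count and the backbone-freezing policy are illustrative assumptions, not a prescription from the cited works):

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-34 backbone pre-trained on ImageNet.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)

# Optionally freeze the backbone so only the new head is trained.
for p in model.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class tile classifier.
model.fc = nn.Linear(model.fc.in_features, 2)
# Fine-tune model.fc (and, if desired, the later blocks) on field tiles.
```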
A transfer learning approach using the ResNet [22] architecture was implemented by Wu et al. [15] for lesion detection on maize with high-resolution RGB UAV imagery captured by flying a drone 6 m above the ground, in two stages. In the first stage, they trained a backbone CNN (ResNet [22]) on randomly cropped sub-images of size 500 × 500, using a ResNet-34 pre-trained on ImageNet [21] for transfer learning. Next, a disease heat map was generated from the output of the trained CNN by feeding it patches produced with sliding windows over the original UAV images. Similarly, a transfer learning approach with multiple existing deep learning architectures, such as VGG, ResNet, Inception and Xception, for soybean leaf disease classification using RGB imagery was implemented by Tetila et al. [16]. Their framework included three steps: (a) UAV image acquisition, (b) leaf segmentation using SLIC and (c) the classification of leaves into various disease levels using existing DL methods. Comparing the performance of the DL models, the Inception network outperformed all others with an overall accuracy of 99.04%. Similarly, a deep learning method based on Inception-ResNet [23] was investigated by Zhang et al. [24] for yellow rust detection on wheat using hyperspectral imagery. Here, a sliding-window approach was used to create image patches, which were then fed into a DCNN (Inception-ResNet) for rust classification with an overall accuracy of 85.00%. Finally, post-processing was carried out to visualize the rust map.
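The sliding-window heat-map step can be sketched as follows, loosely following the 500 × 500 patches mentioned above (the window size, stride and diseased-class index are illustrative assumptions):

```python
import torch

@torch.no_grad()
def disease_heatmap(image, model, win=500, stride=250):
    """image: (C, H, W) tensor of the stitched UAV map;
    model: a patch classifier returning (N, 2) logits.
    Returns a coarse map of per-window disease probabilities."""
    _, H, W = image.shape
    rows = (H - win) // stride + 1
    cols = (W - win) // stride + 1
    heat = torch.zeros(rows, cols)
    for i, y in enumerate(range(0, H - win + 1, stride)):
        for j, x in enumerate(range(0, W - win + 1, stride)):
            patch = image[:, y:y + win, x:x + win].unsqueeze(0)
            heat[i, j] = model(patch).softmax(dim=1)[0, 1]  # P(diseased)
    return heat
```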
Besides the existing pre-trained DL models, a few researchers have implemented custom CNNs specially designed for the detection of particular crop diseases. For instance, Kerkech et al. [25] implemented a CNN (inspired by LeNet-5 [26]) that classifies sliding windows (patches) of RGB images into four designated classes: ground, healthy, partially diseased and diseased. Each image patch was then post-processed to generate the disease map. They reported a highest accuracy of 95.8% when classifying the tiles into the four classes. Similarly, a CNN sharing the basic architecture of the classic LeNet-5 was designed by Huang et al. [27] for HLB classification on wheat with RGB imagery. When its performance was compared with an SVM using various features such as LBP, histograms and VIs, the CNN achieved an overall accuracy of 91.43%, whereas the SVM provided only 90.00%.
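A LeNet-5-flavoured patch classifier for four such classes might look like the following sketch (the filter counts and the 32 × 32 patch size are assumptions for illustration, not the authors' exact design):

```python
import torch
import torch.nn as nn

class PatchLeNet(nn.Module):
    """LeNet-5-style CNN for small RGB patches, four output classes."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 14
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes),
        )

    def forward(self, x):               # x: (N, 3, 32, 32) RGB patches
        return self.classifier(self.features(x))

logits = PatchLeNet()(torch.randn(8, 3, 32, 32))  # -> (8, 4)
```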
Table 2. Summary of object-level classification-based DL models for crop disease detection using UAV imagery. Note that the abbreviations used are NLB (northern leaf blight), YR (yellow rust), SD (soybean disease), FW (Fusarium wilt), CD (corn disease), BD (banana diseases), FAW (fall armyworms), VD (vine disease) and HLB (Helminthosporium leaf blotch).

3. Object Detection-Based Models

Object detection is one of the most investigated tasks in the computer vision field [30]. It consists of both object classification and localization, which makes it more challenging compared to image classification tasks. Image classification involves assigning a specific class to a single image, whereas object detection involves assigning a label to an object and drawing a bounding box around the object of interest (localization) [31].
There are various methods proposed for object detection, which can be grouped into two broad categories: two-stage and one-stage detectors. Two-stage object detectors, such as those in the R-CNN family [32], first propose a set of regions of interest (RoIs) using an algorithm such as selective search. From these candidate regions, a DL architecture, such as VGG [33], extracts deep features, and finally a classifier, such as a linear SVM, assigns them to known classes. In one-stage detectors, by contrast, the input image passes through the DL model only once, and the bounding boxes are predicted directly. As shown in Table 3, most works use one-stage detectors such as YOLO [31], RetinaNet [34] and CenterNet [35]; two-stage detectors, such as Faster R-CNN [32], are used by very few works [36].
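As an illustration of a two-stage detector in practice, the following sketch runs torchvision's off-the-shelf Faster R-CNN (assuming torchvision ≥ 0.13) on a stand-in UAV tile. The cited works fine-tune such models on their own disease classes, which is not shown here, and the confidence threshold is an assumption:

```python
import torch
from torchvision import models

# COCO-pre-trained Faster R-CNN; disease detection would fine-tune this.
detector = models.detection.fasterrcnn_resnet50_fpn(
    weights=models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
detector.eval()

image = torch.rand(3, 512, 512)   # stand-in for a UAV tile, values in [0, 1]
with torch.no_grad():
    out = detector([image])[0]    # one call runs proposal + classification stages
boxes, labels, scores = out["boxes"], out["labels"], out["scores"]
keep = scores > 0.5               # keep confident detections only
print(boxes[keep], labels[keep])
```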
Table 3. Summary of object-detection-based crop disease detection using UAV imagery. Note that the abbreviations used for the diseases are CRR (cotton root rot), WW (wormhole), WLD (white leaf disease), DS (drought stress) and TLB (tea leaf blight).

References

  1. Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.H. Aerial visual perception in smart farming: Field study of wheat yellow rust monitoring. IEEE Trans. Ind. Inform. 2020, 17, 2242–2249.
  2. Pan, Q.; Gao, M.; Wu, P.; Yan, J.; Li, S. A deep-learning-based approach for wheat yellow rust disease recognition from unmanned aerial vehicle images. Sensors 2021, 21, 6540.
  3. Kerkech, M.; Hafiane, A.; Canals, R. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Comput. Electron. Agric. 2020, 174, 105446.
  4. Deng, J.; Zhou, H.; Lv, X.; Yang, L.; Shang, J.; Sun, Q.; Zheng, X.; Zhou, C.; Zhao, B.; Wu, J.; et al. Applying convolutional neural networks for detecting wheat stripe rust transmission centers under complex field conditions using RGB-based high spatial resolution images from UAVs. Comput. Electron. Agric. 2022, 200, 107211.
  5. Oliveira, A.J.; Assis, G.A.; Faria, E.R.; Souza, J.R.; Vivaldini, K.C.; Guizilini, V.; Ramos, F.; Mendes, C.C.; Wolf, D.F. Analysis of nematodes in coffee crops at different altitudes using aerial images. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruña, Spain, 2–6 September 2019; pp. 1–5.
  6. Zhang, T.; Xu, Z.; Su, J.; Yang, Z.; Liu, C.; Chen, W.H.; Li, J. IR-UNet: Irregular segmentation U-shape network for wheat yellow rust detection by UAV multispectral imagery. Remote Sens. 2021, 13, 3892.
  7. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  8. Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat yellow rust severity detection by efficient DF-UNet and UAV multispectral imagery. IEEE Sens. J. 2022, 22, 9057–9068.
  9. Stewart, E.L.; Wiesner-Hanks, T.; Kaczmar, N.; DeChant, C.; Wu, H.; Lipson, H.; Nelson, R.J.; Gore, M.A. Quantitative phenotyping of Northern Leaf Blight in UAV images using deep learning. Remote Sens. 2019, 11, 2209.
  10. Görlich, F.; Marks, E.; Mahlein, A.K.; König, K.; Lottes, P.; Stachniss, C. UAV-based classification of Cercospora leaf spot using RGB images. Drones 2021, 5, 34.
  11. Shi, Y.; Han, L.; Kleerekoper, A.; Chang, S.; Hu, T. Novel CropdocNet model for automated potato late blight disease detection from unmanned aerial vehicle-based hyperspectral imagery. Remote Sens. 2022, 14, 396.
  12. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine disease detection network based on multispectral images and depth map. Remote Sens. 2020, 12, 3305.
  13. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  14. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  15. Wu, H.; Wiesner-Hanks, T.; Stewart, E.L.; DeChant, C.; Kaczmar, N.; Gore, M.A.; Nelson, R.J.; Lipson, H. Autonomous detection of plant disease symptoms directly from aerial imagery. Plant Phenome J. 2019, 2, 1–9.
  16. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.d.S.; Alvarez, M.; Amorim, W.P.; Belete, N.A.D.S.; Da Silva, G.G.; Pistori, H. Automatic recognition of soybean leaf diseases using UAV images and deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 2019, 17, 903–907.
  17. Ha, J.G.; Moon, H.; Kwak, J.T.; Hassan, S.I.; Dang, M.; Lee, O.N.; Park, H.Y. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles. J. Appl. Remote Sens. 2017, 11, 042621.
  18. Ahmad, A.; Aggarwal, V.; Saraswat, D.; El Gamal, A.; Johal, G.S. GeoDLS: A deep learning-based corn disease tracking and location system using RTK geolocated UAS imagery. Remote Sens. 2022, 14, 4140.
  19. Ishengoma, F.S.; Rai, I.A.; Said, R.N. Identification of maize leaves infected by fall armyworms using UAV-based imagery and convolutional neural networks. Comput. Electron. Agric. 2021, 184, 106124.
  20. Dang, L.M.; Hassan, S.I.; Suhyeon, I.; kumar Sangaiah, A.; Mehmood, I.; Rho, S.; Seo, S.; Moon, H. UAV based wilt detection system via convolutional neural networks. Sustain. Comput. Inform. Syst. 2020, 28, 100250.
  21. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  23. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31.
  24. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A deep learning-based approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sens. 2019, 11, 1554.
  25. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243.
  26. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  27. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Zhang, L.; Wen, S.; Zhang, H.; Zhang, Y.; Deng, Y. Detection of helminthosporium leaf blotch disease based on UAV imagery. Appl. Sci. 2019, 9, 558.
  28. Sugiura, R.; Tsuda, S.; Tsuji, H.; Murakami, N. Virus-infected plant detection in potato seed production field by UAV imagery. In Proceedings of the 2018 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers, Detroit, MI, USA, 29 July–1 August 2018; p. 1.
  29. Selvaraj, M.G.; Vergara, A.; Montenegro, F.; Ruiz, H.A.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of banana plants and their major diseases through aerial images and machine learning methods: A case study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020, 169, 110–124.
  30. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
  31. Qian, Q.; Yu, K.; Yadav, P.K.; Dhal, S.; Kalafatis, S.; Thomasson, J.A.; Hardin IV, R.G. Cotton crop disease detection on remotely collected aerial images with deep learning. In Proceedings of the Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping VII; SPIE: Bellingham, WA, USA, 2022; Volume 12114, pp. 23–31.
  32. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 1137–1149.
  33. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
  34. Butte, S.; Vakanski, A.; Duellman, K.; Wang, H.; Mirkouei, A. Potato crop stress identification in aerial images using deep learning-based object detection. Agron. J. 2021, 113, 3991–4002.
  35. Zhao, R.; Shi, F. A novel strategy for pest disease detection of Brassica chinensis based on UAV imagery and deep learning. Int. J. Remote Sens. 2022, 43, 7083–7103.
  36. Amarasingam, N.; Gonzalez, F.; Salgadoe, A.S.A.; Sandino, J.; Powell, K. Detection of White Leaf Disease in Sugarcane Crops Using UAV-Derived RGB Imagery with Existing Deep Learning Models. Remote Sens. 2022, 14, 6137.
  37. Bao, W.; Zhu, Z.; Hu, G.; Zhou, X.; Zhang, D.; Yang, X. UAV remote sensing detection of tea leaf blight based on DDMA-YOLO. Comput. Electron. Agric. 2023, 205, 107637.