Machine Learning for Precision Agriculture with UAV

Unmanned aerial vehicles (UAVs) are increasingly being integrated into the domain of precision agriculture, revolutionizing the agricultural landscape. Specifically, UAVs are being used in conjunction with machine learning techniques to solve a variety of complex agricultural problems. 

Keywords: precision farming; UAVs; agriculture; machine learning; deep learning

1. Traditional Machine Learning

1.1. Support Vector Machines (SVM)

Support vector machines (SVM) were used for classifying vegetation by health status [1], classifying trees by type [2], identifying and classifying weeds to generate weed maps [3], and lastly, segmenting crop rows [4].
Tendolkar et al. [1] proposed the use of an Agrocopter, a multipurpose farming drone, to assess and evaluate plant health status and to take corrective actions. The system assessed plant health on the basis of the NDVI index, texture, and color features of the individual pixels. These features were extracted utilizing a filter bank of 17 Gaussian and Laplacian filters. SVM was then used to perform semantic segmentation on the image pixels and to classify the pixels as healthy or unhealthy. Lastly, a segmented mask was generated and used to find the health ratio of the images, i.e., the ratio of the area of healthy pixels to the total area of the image. The health ratio was then used to classify images as healthy, moderately healthy, or unhealthy. The trained model had 85% precision, 81% recall, and an F1-score of 79%.
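The pipeline above amounts to per-pixel feature extraction, a binary SVM, and an area ratio. The following sketch is purely illustrative (placeholder training data, an assumed four-band input with a NIR channel), not the authors' implementation:

import numpy as np
from sklearn.svm import SVC

def ndvi(nir, red, eps=1e-6):
    # Normalized difference vegetation index per pixel.
    return (nir - red) / (nir + red + eps)

# Hypothetical per-pixel training features (NDVI, R, G, B) and labels
# (1 = healthy, 0 = unhealthy), e.g. sampled from annotated images.
X_train = np.random.rand(500, 4)
y_train = np.random.randint(0, 2, 500)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def health_ratio(image_rgb, nir_band, classifier):
    # Classify every pixel, then return healthy area / total image area.
    h, w, _ = image_rgb.shape
    red = image_rgb[..., 0].astype(float)
    flat_rgb = image_rgb.reshape(-1, 3).astype(float)
    feats = np.column_stack([ndvi(nir_band.astype(float), red).ravel(), flat_rgb])
    healthy_mask = classifier.predict(feats).reshape(h, w)
    return healthy_mask.mean()

The resulting ratio can then be bucketed into healthy, moderately healthy, and unhealthy classes as described above.
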
Natividade et al. [2] proposed a pattern recognition system (PRS) to identify and classify vegetation using the NDVI scale as a segmentation threshold. An SVM was trained on two datasets: a tree dataset with five classes and a vineyard dataset with three classes. The best models achieved an accuracy of around 72% on the two datasets.
Pérez-Ortiz et al. [3] introduced a UAV-based weed mapping system for the early detection of weeds in crop fields. They used a semi-supervised SVM (SSVM), which aims to find an optimal labeling for the test portion of the data using both labeled and unlabeled data. The system used crop-row detection, vegetation indices, and spectral features to classify pixels in field images as belonging to one of three classes: crop, weed, or soil. Crop-row detection was introduced to improve classifier performance in differentiating crops and weeds because their spectral features were similar. The proposed system took UAV-captured images, partitioned them into 1000 × 1000 pixel tiles, and then calculated the vegetation index of all image pixels. NDVI was used for multispectral images, and the excess green index (ExG) was employed for visible images. The Otsu thresholding procedure was then applied to the vegetation indices to create thresholds that divided the indices into three classes, with the highest vegetation index (VI) values pertaining to crops, lower values to weeds, and the lowest values to soil. The image was then binarized by taking crop pixels as 1s and weed and soil pixels as 0s. The binarized image was then fed into the Hough transform (HT) method to detect crop rows in the images. Lastly, a crop-row data feature, along with VI and spectral features, was used to train different machine learning models to classify pixels as soil, crop, or weed. The SSVM returned an MAE of 12.68%.
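As a rough illustration of the index-thresholding and crop-row steps, the sketch below uses scikit-image in a simplified two-class variant (a single Otsu threshold and no separate weed class, unlike the paper's three-class procedure):

import numpy as np
from skimage.filters import threshold_otsu
from skimage.transform import hough_line, hough_line_peaks

def excess_green(rgb):
    # ExG = 2g - r - b on chromaticity-normalised channels.
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-6
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return 2 * g - r - b

def crop_row_lines(rgb_tile):
    exg = excess_green(rgb_tile)
    veg_mask = exg > threshold_otsu(exg)          # 1 = vegetation, 0 = soil
    h_space, angles, dists = hough_line(veg_mask)  # accumulate candidate lines
    _, peak_angles, peak_dists = hough_line_peaks(h_space, angles, dists)
    return peak_angles, peak_dists                 # dominant crop-row orientations

The peak orientations and offsets returned by the Hough transform play the role of the crop-row feature described above.
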
Pereira Júnior et al. [4] compared the performance of multiple machine learning algorithms for the problem of crop-row segmentation. Their study used a single image of a sugar cane field as its dataset and compared the segmentation results of running this image through different classifiers to a manually labeled image. The manually segmented image’s pixels were classified into the two classes of crop row and background. Spectral features were extracted using ExG and VI, and textural features were extracted through a four-filter Gabor filter bank and a gray-level co-occurrence matrix (GLCM). The feature vectors and color features (RGB) were used to train SVM models. For the linear SVM model, the best combination of features was RGB, ExG, and Gabor filters, which yielded an F1-score of 88.01% and an IoU of 78.86%. The worst feature combination was RGB and GLCM, which yielded an F1-score of 62.48% and an IoU of 46.08%.
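Texture features of the kind used here (a small Gabor bank plus GLCM statistics) can be computed with scikit-image; the sketch below uses assumed frequencies, orientations, and GLCM properties, and newer scikit-image releases spell the functions graycomatrix/graycoprops (older ones use greycomatrix/greycoprops):

import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray_u8):
    feats = []
    # Four-filter Gabor bank (frequency and orientations are assumptions).
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(gray_u8.astype(float) / 255.0, frequency=0.2, theta=theta)
        feats += [real.mean(), real.var()]
    # Gray-level co-occurrence matrix statistics.
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.append(graycoprops(glcm, prop)[0, 0])
    return np.array(feats)   # concatenate with RGB/ExG features before the SVM
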

1.2. K-Nearest Neighbors (KNN)

The K-nearest neighbor algorithm (KNN) has been used extensively in precision agriculture for land-cover classification [5], the detection of gaps (faults) in sugarcane planting lines [6], and crop-row segmentation [4].
Rodríguez-Garlito and Paz-Gallardo [5] proposed a KNN-based land-cover classification system. This system classified land cover into olive trees, soil, weeds, and shadow. In this system, high-resolution, multispectral images of the studied field were first captured using a UAV. These images went through spatial partitioning to reduce the memory costs of the machine learning algorithm. As a result, processing windows were formed, with each window holding the spectral information of a row of image pixels. The KNN algorithm was then applied to one processing window at a time to perform land-cover classification of the individual pixels. The trained KNN model had a precision of 95.5%, an accuracy of 91.8%, and an accuracy of 90.9% on an equally balanced dataset. Similarly, Rocha et al. [6] used KNN to detect gaps in curved sugarcane planting lines from aerial images. The training and test sets were created using RGB images and classified using decision trees, linear discriminant analysis, and KNN. KNN had the best results, with a relative error of 1.65%, and it effectively evaluated the planting conditions.
Pereira Júnior et al. [4] also studied the use of the KNN algorithm in crop-row segmentation. Two KNN models with K values of 3 and 11 were used and yielded similar results. The models used Euclidean distance and RGB, ExG, and Gabor filters as features, and both achieved an IoU of about 76% and an F1-score of about 86%.
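A minimal scikit-learn sketch of the comparison above, using placeholder per-pixel features in place of the study's RGB, ExG, and Gabor descriptors:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, jaccard_score

X = np.random.rand(2000, 5)            # placeholder features per pixel
y = np.random.randint(0, 2, 2000)      # 1 = crop row, 0 = background
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for k in (3, 11):
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean").fit(X_tr, y_tr)
    pred = knn.predict(X_te)
    print(k, f1_score(y_te, pred), jaccard_score(y_te, pred))  # F1 and IoU
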

1.3. Decision Trees (DT) and Random Forests (RF)

Decision tree classifiers were used in precision agriculture to classify vegetation like trees and vineyards [2]. Similarly, the random forest algorithm was used to classify sugar beet crops and weeds [7].
Natividade et al. [2] used decision trees to detect and classify trees and vineyards in a field, where trees were classified into five distinct types and vineyards into three types. On the tree data set, the best model resulted in 87% precision, 88% recall, and 74% accuracy. On the vineyard data, 87% precision, 90% recall, and 79% accuracy were achieved.
Lottes et al. [7] proposed a crop and weed detection, feature extraction, and classification system that could identify and classify sugar beets and several types of weeds. NDVI and ExG were used as features. A segmented mask based on the VI threshold was then used to extract a spectral feature vector per segmented object in the image and a feature vector per key point in the image. These feature vectors, along with geometric and statistical features, were used to train a random forest model. The Phantom and Matrice training datasets contained UAV-captured images of crops and weeds, while the JAI training dataset contained ground-captured images. The Phantom dataset was used to test how well the model could classify vegetation into sugar beet crops, saltbush weeds, chamomile weeds, and other weeds. The model yielded a precision of 85% for both saltbush and chamomile weeds. The recall values were 95% and 87% for saltbush weeds and chamomile weeds, respectively. Lastly, a recall of only 45% was attained for other weeds. The overall accuracy of the model was 86%. When weed-type classification was ignored and vegetation was classified into two classes, 99% recall and 97% precision were achieved.
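A hedged sketch of the classification stage only: a random forest over placeholder per-object feature vectors, with per-class precision and recall reported as in the study (class names and feature dimensions are assumptions):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

X = np.random.rand(3000, 12)          # placeholder spectral/geometric/statistical features
y = np.random.choice(["sugar_beet", "saltbush", "chamomile", "other_weed"], 3000)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:2400], y[:2400])
print(classification_report(y[2400:], rf.predict(X[2400:])))   # per-class precision/recall
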

2. Neural Networks and Deep Learning

2.1. Convolutional Neural Networks (CNN)

Convolutional neural networks (CNN) have been used extensively in analyzing images for precision agriculture. Specifically, transfer learning has often been used successfully with a variety of pretrained models, including Inception V3 and VGG. For example, Crimaldi et al. [8] used the Inception V3 model and achieved 78.1% accuracy for classifying a crop into one of 14 crop types using data consisting of 54,309 images. Milioto et al. [9] built a CNN model using RGB and NIR camera images. The model had 97.3% accuracy for images of early crop growth and 89.2% accuracy for images of crops in later stages, with high recall in both cases (98% for the early stage and 99% for the later stage). Similarly, Bah et al. [10] used the AlexNet model on spinach, beet, and bean datasets and achieved precision of 93%, 81%, and 69%, respectively. The authors attributed the weaker results primarily to leaves overlapping between crops and weeds.
Reddy et al. [11] used a customized CNN model for their work on plant species identification and achieved 99.5% precision on the Flavia, Swedish leaf, and UCI leaf datasets. Sembiring et al. [12] focused on tomato plant disease detection. Their proposed model achieved 97.15% validation accuracy using the tomato leaf dataset from Plant Village. However, their model did not achieve the highest validation accuracy among all four trained models; the highest score of 98.28% was achieved by the VGG16 model. Geetharamani et al. [13] achieved a classification accuracy of 96.46% using a customized nine-layer CNN model. The authors of [14] used a residual learning CNN with an attention mechanism to perform real-time tomato leaf disease recognition. They also used the Plant Village disease classification challenge dataset [15] and achieved an overall accuracy of 98%.
Nanni et al. [16] used different combinations of CNNs, including ResNet50, GoogleNet, ShuffleNet, MobileNetv2, and DenseNet201, with different Adam optimization methods. These CNN models were trained on three datasets of insect images: the Deng dataset, the IP102 dataset, and the Xie2 dataset. The best-performing CNN achieved state-of-the-art accuracy on the Deng and IP102 datasets: 95.52% on Deng, a score that competed with human expert classifications, and 73.46% on IP102.
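Most of the studies above rely on the same transfer-learning recipe: start from an ImageNet-pretrained backbone and retrain a new classification head on crop imagery. A minimal PyTorch sketch of that recipe (ResNet18 and the hyperparameters are arbitrary choices here, not tied to any specific paper):

import torch
import torch.nn as nn
from torchvision import models

num_classes = 14                       # e.g. 14 crop types, as in the example above
model = models.resnet18(weights="IMAGENET1K_V1")   # newer torchvision API; older: pretrained=True
for p in model.parameters():
    p.requires_grad = False            # freeze the convolutional backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # One optimization step on a mini-batch of labeled crop images.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
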
Atila et al. [17] proposed using the EfficientNet architecture for plant disease classification on the Plant Village dataset and achieved 99.91% and 99.97% accuracy on the original and augmented datasets, respectively. Prasad et al. [18] proposed a two-step machine learning approach that analyzed low-fidelity and high-fidelity images from drones in sequence, preserving the efficiency and accuracy of plant diagnosis. The Pathology 2020 dataset and a set of synthetically generated images were used. A semi-supervised model derived from EfficientNet, called EfficientDet, was used to perform segmentation and classification, and the identifier model achieved an average accuracy of 75.5%. Albattah et al. [19] proposed a customized model with an EfficientNetV2-B4 backbone to address plant disease classification. The Plant Village dataset and additional UAV images were used to train the model. The results were 99.63%, 99.93%, 99.99%, and 99.78% for precision, recall, accuracy, and F1-score, respectively.
Mishra et al. [20] developed a standard CNN model to detect corn plant diseases in real time. The model was deployed on an Intel Movidius NCS and a Raspberry Pi 3B+ module. The authors used the Plant Village disease classification challenge dataset and divided the images into three classes: rust, northern leaf blight, and healthy. The system achieved an accuracy of 98.40% using a GPU and 88.56% on the NCS chip. Bah et al. [21] used unsupervised data labeling for weed detection from UAV images. The dataset consisted of two fields: beans and spinach. Each dataset was divided into the two classes of crop and weed. Two-thirds of the data were labeled in a supervised manner, while one-third were labeled using unsupervised methods. The ResNet18 model was used to perform the classification. ResNet18 significantly outperformed SVM and RF methods in the bean field, achieving an average AUC of 91.7% on both supervised and unsupervised labeled data, in comparison to 52.68% using SVM and 66.7% using RF. On the other hand, RF resulted in a slightly better average AUC in the spinach field than ResNet18.
Zheng et al. [22] proposed multiple CNN models to estimate percentage canopy cover and vineyard leaf area index in each field. The authors compared the estimation performance of five different models: a CNN–ConvLSTM model, a vision transformer model, a joint model, a 71-layer CNN (Xception), and a ResNet50 model. The five models were trained on a dataset containing approximately 840 images extracted from UAV videos taken of vineyard fields at Alcorn State University and were evaluated using the RMSE of both leaf area index (LAI) and percentage canopy cover. For the prediction of leaf area index, Xception, CNN-ConvLSTM, the vision transformer, ResNet50, and the joint model had RMSEs of 0.28, 0.32, 0.34, 0.41, and 0.43, respectively. For predicting percentage canopy cover, the same models had RMSEs of 4.01, 4.50, 4.56, 5.98, and 6.08, respectively. Xception thus performed best in both LAI estimation and percentage canopy cover estimation.
Yang et al. [23] proposed a method of multisource data fusion for disease and pest detection of grape foliage using the ShuffleNet V2 model. The dataset consisted of 834 groups of grape foliage images. Each group contained three types of images of grape foliage: an RGB image (RGBI) (2592 × 1944, three channels), a multispectral image (MSI) (409 × 216, 25 channels), and a thermal infrared image (TIRI) (640 × 512, three channels). The accuracy was 93.41% with RGBI, 82.4% with MSI, and 68.26% with TIRI.
Briechle et al. [24] used the PointNet++ model to classify tree species and standing dead trees. The data used were UAV-based light detection and ranging (LiDAR) data, including the laser echo pulse width, and five-channel multispectral (MS) imagery. They also applied segmentation to the images during the preprocessing of the data. Their model achieved an accuracy of 90.2%.
Aiger et al. [25] proposed a method of image classification based on multi-view image projections. Their method used projections of multiple images at multiple depth planes near the reconstructed surface. This enabled the classification of categories whose most noticeable aspect was appearance change under different viewpoints, such as water, trees, and other materials with complex reflection/light response properties. They obtained the best accuracy of 96.3% on their proposed 3D CNN.
Weinstein et al. [26] developed a semi-supervised model for individual tree detection from UAV imagery. The model used an existing LiDAR algorithm to generate initial tree-crown labels over RGB imagery that could be used for training as a starting point. The model was then retrained using a small number of manual labels to correct errors from the unsupervised detection. A pretrained ResNet50 backbone was then used to classify the images. The model was tested on the NEON public dataset and achieved the best performance among existing LiDAR-based models, improving by 2% over that achieved by Silva et al. [27].

2.2. U-Net Architecture

The U-Net architecture was originally introduced in the medical domain by Ronneberger et al. [28] and is commonly used for image segmentation. U-Net follows an encoder–decoder architecture. Many factors, such as the density of the crops, their growth stage, and the flight height of the drone, have an impact on how well a U-Net will perform. According to Kitano et al. [29], U-Net did not perform well when the plants were very close together. However, some techniques could be used to mitigate this problem, such as applying the morphological opening operator [30].
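A deliberately tiny U-Net-style sketch in PyTorch, shown only to make the encoder–decoder and skip-connection structure concrete (channel counts and depth are placeholders far smaller than the published architectures):

import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)                 # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)   # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                         # encoder, full resolution
        e2 = self.enc2(self.pool(e1))             # encoder, half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

# logits = TinyUNet()(torch.randn(1, 3, 128, 128))   # -> (1, 2, 128, 128)
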
Lin et al. [31] used U-Net to achieve an accuracy of 95.5% and an RMSE of 2.5% with 1000 manually labeled training images. Arun et al. [32] achieved an accuracy of 95.34% and an RMSE of 7.45 with a reduced U-Net by designing an efficient pixel-wise classifier for weeds and crops in agricultural field images. Hoummaidi et al. [33] used the U-Net model to perform vegetation extraction and achieved an overall accuracy of 89.7%, although palm trees and Ghaf trees had higher detection rates of 96.03% and 94.54%, respectively. The authors attributed the lower overall accuracy to trees being obstructed by other trees. Palm trees also caused some errors due to their physical characteristics and the small crown sizes of some trees. The authors suggested that including young palms in the training data could improve the crown size error rate.
Doha et al. [34] used the U-Net architecture to detect crop rows by performing semantic segmentation on vertical aerial images. Zhang et al. [35] used the dual-flow U-Net (DF-U-Net) to detect yellow rust severity in farmlands. The dataset was from the Yangling experiment field and was collected with a red-edge camera on board a DJI M100 UAV with an image size of 1336 × 2991. The F1-score, accuracy, and precision were 94.13%, 96.93%, and 94.02%, respectively. Sparse channel attention (SCA) was designed to increase the receptive field of the network and improve the ability to distinguish each category. Using U-Net, Lin et al. [31] achieved high accuracy with a small dataset. Similarly, with only 48 images, Tsuichihara et al. [36] achieved an accuracy of about 80% in detecting broad-leaved weeds.

2.3. Other Segmentation Models

Efficient dense modules of asymmetric convolution (EDANet) is another model that works well for real-time semantic segmentation, which makes it useful for real-time applications such as UAVs. Yang et al. [37] used EDANet to perform semantic segmentation for detecting rice lodging. Lodging occurs when the stem weakens and the plant falls over. EDANet outperformed many systems because of its efficiency, low computational cost, and small model size. The model identified normal rice with 95.28% accuracy and lodging with 86.17% accuracy. The model accuracy improved to 99.25% when less than 2.5% of rice lodging was neglected.
Weyler et al. [38] proposed an ERFNet-based instance segmentation model that segments individual crop leaves in plant imagery to extract relevant phenotyping information and then groups the instances that belong to one crop together. This model made use of two decoders, one of which was used to predict the offset of image pixels from leaf regions, while the other was used to predict the offset of image pixels from plant regions. The two decoder outputs were then used to generate one image with leaf clusters and another with plant clusters. The model was trained on a dataset of 1316 RGB images of sugar beet fields captured by a camera onboard a UAV. The model was evaluated on its ability to perform crop leaf segmentation, as well as full crop segmentation. In crop leaf segmentation, the model was able to achieve an average precision of 48.7% and an average recall of 57.3%. The model achieved an average precision of 60.4% and an average recall of 68% for crop segmentation.
Guo et al. [39] developed a three-stage model to perform plant disease identification for smart farming. The model located the diseased leaves using a region proposal network (RPN) algorithm trained on a leaf dataset in complex environments, after which regression and classification neural networks were used to locate and retrieve the diseased leaves. Later, the Chan-Vese algorithm [40] was used to perform segmentation according to the set zero level set and minimum energy function. Lastly, the diseases were identified using a pretrained transfer learning model. The proposed model outperformed the traditional ResNet101 model significantly, with an accuracy of 83.75% in comparison to 42.5% by the latter.
Sanchez et al. [41] used a multilayer perceptron (MLP) neural network for the early detection of broad-leaved weeds and grass weeds in wide-row crops from UAV imagery. The data were manually collected using a UAV quadcopter equipped with a low-cost RGB camera. Image segmentation was done using the multiresolution segmentation algorithm (MRSA). The model achieved an average overall accuracy of 80.9% on two classes of crops.
Zhang et al. [42] proposed a unified CNN called UniStemNet for joint crop recognition and stem detection in real time. The architecture of UniStemNet is similar to that of Mask-RCNN. It consists of a backbone and two subnets, the first of which performs crop recognition while the other performs stem detection simultaneously. The backbone consists of five convolutional stages, where the first is a standard CNN with batch normalization, while the other four contain two MobileNetV2 inverted residual modules (IRMs). The subnets follow a varied-span feature fusion structure, as each has different detection targets. The evaluation was performed on the open-source CWF-788 dataset, and labels were manually annotated. The model obtained an F1-score of 97.4% and an IoU of 94.5% in segmentation, which were slightly lower than those achieved by CR-DSS [43]. Nonetheless, the model achieved the best-known results in stem detection, with an SDR of 97.8%.

2.4. You Only Look Once (YOLO)

You Only Look Once (YOLO) is a real-time object detection neural network model in which a single-stage neural network is applied to the full image. The network divides the image into regions and predicts bounding boxes along with probabilities for each region. The use of YOLO in agricultural disease and crop detection has recently been gaining popularity. For example, Chen et al. [44] proposed using a UAV to photograph crops and detect pests, employing a Tiny-YOLOv3 model running on an NVIDIA Jetson TX2 to recognize pest positions in real time. The detected pest positions could later be used to plan optimal pesticide spraying routes, which agricultural UAVs would then follow. The model attained mAP scores of 95.33% and 89.72% on 640 × 640 pixel test images.
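Conceptually, the single pass produces a grid of cell-relative box predictions that must be mapped back to image coordinates; the sketch below illustrates that decoding step for a toy (cx, cy, w, h, confidence) grid and is not an actual YOLO implementation:

import numpy as np

def decode_grid(preds, img_w, img_h, conf_thresh=0.5):
    # preds: array of shape (S, S, 5) with (cx, cy, w, h, conf) per grid cell,
    # where cx, cy are relative to the cell and w, h relative to the image.
    S = preds.shape[0]
    boxes = []
    for row in range(S):
        for col in range(S):
            cx, cy, w, h, conf = preds[row, col]
            if conf < conf_thresh:
                continue
            x_c = (col + cx) / S * img_w          # cell-relative centre -> image x
            y_c = (row + cy) / S * img_h          # cell-relative centre -> image y
            bw, bh = w * img_w, h * img_h
            boxes.append((x_c - bw / 2, y_c - bh / 2,
                          x_c + bw / 2, y_c + bh / 2, conf))
    return boxes
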
Similarly, Qin et al. [45] proposed a solution for precision crop protection based on a light deep neural network (DNN) called Ag-YOLO consisting of a modified version of ShuffleNet-v2 backbone, a ResBlock neck, and a YOLOv3 head. This model enabled the crop protection UAV to perform embedded real-time pest detection and autonomous spraying of pesticides. The model was tested on the Intel NCS2 hardware accelerator owing to its low weight and low power consumption. The detection system achieved an average F1-score of 92.05%.
Parico et al. [46] proposed YOLO-WEED, a YOLOv3-based weed detection system for green onion crops, trained with 720 annotated UAV images on an NVIDIA GeForce GTX 1060. They obtained an mAP score of 93.81% and an F1-score of 94%.
Rui et al. [47] proposed a novel comprehensive approach that combined transfer learning based on simulation data and adaptive fusion using YOLOv5 for improved detection of small objects. Their transfer learning and adaptive fusion mechanism led to a 7.1% improvement as compared to the original YOLOv5 model.
Parico et al. [48] proposed a robust real-time pear fruit counter for mobile applications using only RGB data. Several variants of YOLOv4 (YOLOv4, YOLOv4-tiny, and YOLOv4-CSP) were compared. In terms of accuracy, YOLOv4-CSP was the best model, with an AP of 98%. In terms of speed and computational cost, YOLOv4-tiny showed promising performance, running at a rate comparable to YOLOv4 at lower network resolutions. Considering the balance of accuracy, speed, and computational cost, YOLOv4 was found to be the most suitable, with an AP above 96%, an inference speed of 37.3 FPS, and an FN rate of 6%. Thus, YOLOv4-512 was chosen as the detection model for the pear counting system with Deep SORT.
Jintasuttisak et al. [49] demonstrated the effective use of YOLOv5 in detecting date palm trees in images captured by a UAV flying above farmlands in the Northern Emirates of the United Arab Emirates (UAE). The results of using YOLOv5 for date palm tree detection in drone imagery were compared, both quantitatively and qualitatively, with those obtainable with other popular CNN architectures: YOLOv3, YOLOv4, and SSD300. The results showed that, for the training data used, the YOLOv5m (medium depth) model had the highest accuracy, with an mAP of 92.34%. Furthermore, it was able to detect and localize date palm trees of varied sizes in crowded, overlapping environments and in areas where the date palm tree distribution was sparse.
Tian et al. [50] proposed an anthracnose lesion detection method based on deep learning. CycleGAN was used for data augmentation. DenseNet was then utilized to optimize the lower-resolution feature layers of the YOLOv3 model. The improved model outperformed Faster RCNN with VGG16 and the original YOLOv3 model and could perform real-time detection. The model obtained an F1-score of 81.6% and an IoU of 91.7% on the entire dataset.

2.5. Single-Shot Detector (SSD)

The single-shot detector (SSD) is a one-stage object detection network that can detect objects in one feed-forward pass with low-resolution input images [51]. The model consists of three modules. The first is a feature extraction module, made up of a truncated base CNN followed by convolutional layers used to extract features at various scales. The second is the object detection module, which takes in feature maps and runs a set of default bounding boxes over their cells; the result is a defined number of box predictions, each with an associated shape offset and class confidence score. The last module is the non-maximal suppression module, which chooses the best predictions out of the set presented by the detection module using specific IoU and confidence-score thresholds. Lately, SSDs have made an appearance in precision agriculture for their ability to perform fast inference and work with low-resolution input images, two features that make them desirable in real-time precision agriculture applications.
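The non-maximal suppression step can be illustrated in a few lines: keep boxes in descending score order and discard any box whose IoU with an already kept box exceeds a threshold (a plain NumPy sketch, not the SSD reference code):

import numpy as np

def iou(a, b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while len(order):
        best, order = order[0], order[1:]
        keep.append(best)
        order = np.array([i for i in order
                          if iou(boxes[best], boxes[i]) < iou_thresh], dtype=int)
    return keep                                # indices of the surviving boxes
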
Veeranampalayam Sivakumar et al. [52] proposed using a single-shot detector to detect mid-to-late season weeds in soybean fields for weed-spread suppression. The authors used a feature extractor from the Inception V2 network and a stack of four extra convolutional layers to extract features at varying scales. The output of this feature extraction module was six feature maps that were then fed into the SSD’s detection module. A set of bounding boxes with five different aspect ratios and six different scales was used at all locations in all six feature maps, resulting in several box-bounded detection predictions, each with its own shape offset and class confidence score. An RMSProp optimizer was used. After training for 25,000 epochs, the model achieved a precision of 66%, a recall of 68%, an F1-score of 67%, a mean IoU of 84%, and an inference time of 21 s on 1152 × 1152 pixel test images.
Ridho and Irwan [53] proposed a strawberry-picking robot that could detect strawberries of different health states in real time. The robot ran an SSD-MobileNet architecture on a single-board computer (SBC) to perform real-time inference. The network used a feature extraction module built with a MobileNet backbone, a choice prompted by the computational power and time restrictions associated with running a real-time inference model on a low-power single-board computer. Using transfer learning, the SSD-MobileNet V1 model, previously trained on 91 classes from the COCO dataset, was retrained on two new datasets containing a total of 250 training images of strawberries in good and bad condition. The trained model achieved an accuracy of 90% in detecting good and bad strawberries on frames extracted from a real-time video stream.

2.6. Region-Based Convolutional Neural Networks

The region-based convolutional neural network (RCNN) is a two-stage object detection system that extracts many region proposals from input images, uses a CNN to perform forward propagation on each region proposal to extract its features, and then uses these features to predict the class and bounding box of this region proposal.
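For orientation, torchvision ships a pretrained two-stage detector of this family; the sketch below runs a COCO-pretrained Faster R-CNN on a placeholder tile (the weights argument follows recent torchvision; older releases use pretrained=True), so a real agricultural application would fine-tune the box predictor on field imagery:

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)            # placeholder UAV image tile in [0, 1]
with torch.no_grad():
    out = model([image])[0]                # dict with 'boxes', 'labels', 'scores'
boxes = out["boxes"][out["scores"] > 0.5]  # keep only confident detections
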
Veeranampalayam Sivakumar et al. [52] proposed an approach in which object detection-based CNN models were trained and evaluated using low-altitude UAV images to detect weeds in the middle and late seasons in soybean fields. Faster RCNN and SSD were both evaluated and compared in terms of weed detection performance. When Faster RCNN was configured with 200 box proposals, its weed detection performance was similar to that of the SSD model. The Faster RCNN model with 200 box proposals returned a precision of 0.65, a recall of 0.68, an F1-score of 0.66, and an IoU of 0.85, while the SSD model returned 0.66, 0.68, 0.67, and 0.84 for precision, recall, F1-score, and IoU, respectively. The performance of a patch-based CNN model was also evaluated and compared to the previous models; the Faster RCNN model performed better than the patch-based CNN model.
Ammar et al. [54] proposed an original deep-learning framework for the automated counting and geolocation of palm trees from aerial images. They applied several recent convolutional neural network models (faster RCNN, YOLOv3, YOLOv4, and EfficientDet) to detect palm trees and other trees and conducted a complete comparative evaluation in terms of average precision and inference speed. YOLOv4 and EfficientDet-D5 yielded the best tradeoff between accuracy and speed (up to 99% mAP and 7.4 FPS).
Su et al. [55] used the Mask-RCNN model to identify Fusarium head blight disease in wheat spikes and its degree of severity. To perform this task, two Mask-RCNNs performed instance segmentation on the input images, one of which segmented individual spikes while the other segmented diseased areas of the spikes. Thereafter, the severity of the infection was evaluated by calculating the ratio of infected spike pixels in the images to the total number of spike pixels. The feature-extraction backbone of this model combined a ResNet101 model with a feature pyramid network (FPN). The model returned a prediction accuracy of 77.19% when the results were compared to a set of manually labeled images.
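The severity measure itself reduces to a ratio between the two predicted masks; a minimal sketch with hypothetical boolean arrays:

import numpy as np

def fhb_severity(spike_mask, disease_mask):
    # spike_mask, disease_mask: boolean arrays from the two Mask-RCNN outputs.
    spike_pixels = spike_mask.sum()
    infected_pixels = np.logical_and(spike_mask, disease_mask).sum()
    return infected_pixels / max(spike_pixels, 1)   # fraction of infected spike area
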
Yang et al. [56] used an FCN-AlexNet model to perform real-time crop classification using edge computing. The authors collected 224 images using a UAV during the growing period of rice and corn. The quantitative analysis showed that the SegNet model slightly outperformed FCN-AlexNet by 1% in the overall recall rate of object classification.
Menshchikov et al. [57] proposed an approach for the fast and accurate detection of hogweed. The approach includes a UAV with an embedded system on board running various fully convolutional neural networks (FCNNs). They proposed an optimal FCNN architecture for the embedded system relying on the tradeoff between detection quality and frame rate. In their pilot study, they determined that different architectures could successfully solve the two-class semantic segmentation task for aerial hogweed detection. The SegNet model achieved the best ROC AUC of 96.9% and could detect hogweed that was not initially labeled. The modified U-Net architecture was characterized by a high frame rate (up to 0.7 FPS) and a reasonable recognition quality (ROC AUC > 0.938). Along with its low power consumption, the U-Net architecture demonstrated its applicability for real-time scenarios and edge-computing devices. One of the U-Net modifications achieved 0.46 FPS on the NVIDIA Jetson Nano platform with an ROC AUC of 0.958.
Bah et al. [58] proposed a model that combined a CNN and the Hough transform to detect crop rows in images taken by a UAV. The model, called CRowNet, combined SegNet (S-SegNet) and a CNN-based Hough transform (HoughCNet). The model achieved an accuracy of 93.58% and an IoU of 70%.
Hosseiny et al. [59] proposed a framework whose core was a faster region-based CNN (Faster RCNN) with a ResNet101 backbone for object detection. The framework’s primary idea was to generate unlimited simulated training data from an input image automatically, yielding a fully unsupervised model for plant detection in UAV-acquired pictures of agricultural fields. Two datasets were used, with 442 and 328 field patches, respectively. The precision, recall, and F1-score were 0.868, 0.849, and 0.855, respectively.

2.7. Autoencoders

Weyler et al. [60] addressed the problem of automated, instance-level plant monitoring in agricultural fields and breeding plots. They proposed a vision-based approach to perform a joint instance segmentation of crop plants and leaves in breeding plots. They developed a CNN-based encoder–decoder network with lateral skip connections that follows a two-branch architecture with two task-specific decoders to determine the position of specific plant key points and group pixels to detect individual leaf and plant instances. Lastly, they conducted pixel-wise instance segmentation of each crop and its associated leaves based on orthorectified RGB images captured by UAVs. Their method outperformed state-of-the-art instance segmentation approaches such as Mask-RCNN on this task, achieving the highest AP50 score of 0.94 at intermediate growth stages, compared to 0.71 for Mask-RCNN, for the instance segmentation of sugar beet plants.
Lottes et al. [61] presented a novel approach for joint stem detection and crop–weed segmentation using a fully convolutional network (FCN) integrating sequential information. Their proposed architecture enables the sharing of feature computations in the encoder while using two distinct task-specific decoder networks for stem detection and pixel-wise semantic segmentation of the input images. All their experiments were conducted using different generations of the BoniRob platform. BoniRob was built by BOSCH DeepField Robotics as a multipurpose field robot for research and development applications in precision agriculture, such as weed control, plant phenotyping, and soil monitoring. The system achieved the best mAP scores of 85.4%, 66.9%, 42.9%, and 50.1% for Bonn, Stuttgart, Ancona, and Eschikon datasets, respectively, for stem detection and 69.7%, 58.9%, 52.9% and 44.2% mAP scores for Bonn, Stuttgart, Ancona, and Eschikon datasets, respectively, for segmentation.
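The shared-encoder, task-specific-decoder idea used in these works can be sketched as follows (layer sizes and heads are placeholders, not the published architectures):

import torch
import torch.nn as nn

class TwoHeadFCN(nn.Module):
    def __init__(self, n_seg_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared feature computation
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.seg_decoder = nn.Conv2d(64, n_seg_classes, 1)   # crop/weed/soil logits
        self.stem_decoder = nn.Conv2d(64, 1, 1)              # stem keypoint heatmap

    def forward(self, x):
        feats = self.encoder(x)                       # computed once, used twice
        return self.seg_decoder(feats), self.stem_decoder(feats)

# seg_logits, stem_heatmap = TwoHeadFCN()(torch.randn(1, 3, 256, 256))
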
Su et al. [62] proposed a deep neural network (DNN) that exploits the geometric location of ryegrass for the real-time segmentation of inter-row ryegrass weeds in a wheat field. Their proposed method introduced two subnets in a conventional encoder–decoder style DNN to improve segmentation accuracy. The two subnets treat inter-row and intra-row pixels differently and provide corrections to preliminary segmentation results of the conventional encoder–decoder DNN. A dataset captured in a wheat farm by an agricultural robot at different time instances was used to evaluate the segmentation performance, and the proposed method performed the best among various popular semantic segmentation algorithms (Bonnet, SegNet, PSPNet, DeepLabV3, and U-Net). The proposed method ran at 48.95 FPS with a consumer-level graphics processing unit and, thus, is real-time deployable at a camera frame rate. Their proposed model achieved the best mean accuracy and IoU scores of 96.22% and 64.21%, respectively.

2.8. Transformers

Vaswani et al. [63] proposed the transformer architecture based on the attention mechanism. A transformer is a sequence transduction model initially designed to tackle natural language processing (NLP) problems. The use of transformers for computer vision tasks was initially limited by the high computational cost of training. To address this issue, Dosovitskiy et al. [64] proposed the vision transformer (ViT), which requires fewer resources while outperforming convolutional networks (CNNs). Other notable contributions include detection transformers (DETR) targeting the same problem [65].
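The core of a ViT is its input stage: the image is cut into fixed-size patches, each patch is linearly embedded, and the resulting token sequence (with a class token and position embeddings) feeds a standard transformer encoder. A minimal sketch of that patch embedding (dimensions follow common ViT-Base settings but are assumptions here):

import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        # A strided convolution both cuts the image into patches and embeds them.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))

    def forward(self, x):                                   # x: (B, 3, H, W)
        tokens = self.proj(x).flatten(2).transpose(1, 2)    # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos   # transformer input

# tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))    # -> (2, 197, 768)
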
Thai et al. [66] used ViTs for the early detection of infected cassava leaves and the classification of their diseases. Initially, they used the ImageNet-pretrained ViT model published by the Google Research team [67]. The model was then fine-tuned using the cassava leaf disease dataset [68]. Later, the model was quantized to reduce its size and accelerate the inference step (FPS) before deploying it on a Raspberry Pi 4 Model B. Their model achieved a 90.3% F1-score in comparison to the best CNN score of 89.2%, achieved by the ResNet50 model. Furthermore, they proposed a smart solution powered by the Internet of Things (IoT) that can be used in the agriculture industry for the real-time detection of leaf diseases. The system consists of a drone that captures the leaf images, including the exact position of the spot in the field. The ViT model installed on the drone's Raspberry Pi classifies the images and clusters the infected leaves. The results are then combined with the spot's position and sent to a server via a 4G network to create a survey map of the field. Farmers and rescue agencies can obtain the map on their mobile phones and prevent the loss of crops beforehand.
Reedha et al. [69] used two different ViT models for plant classification from UAV images. Images were collected using a drone mounted with a high-resolution camera and deployed over a crop field of beet, parsley, and spinach located in France. The camera captured RGB orthorectified images at regular intervals over the field. The data were manually labeled into five classes: weeds, beet, parsley, spinach, and off-type green leaves. They also employed data augmentation to help improve the robustness of the model and the generalization capabilities of the training dataset. They then used the ViT-B32 and ViT-B16 models and also trained EfficientNet and ResNet CNN architectures for comparison purposes. The results showed that the ViT models outperformed the CNN models, as F1-scores of 99.4% and 99.2% were obtained with ViT-B16 and ViT-B32, respectively. In comparison, the CNN models achieved slightly lower scores of 98.7% for EfficientNet B0, 98.9% for EfficientNet B1, and a close 99.2% for ResNet50. The authors pointed out that although all techniques obtained high accuracy and F1-scores, the classification of crop and weed images using ViTs yielded the best prediction performance. However, the inefficiency of ViTs compared with CNNs is another consideration if the model is to be deployed for real-time processing on a UAV.
Karila et al. [70] used ViT models to estimate grass sward (i.e., short grass) quality and quantity in a field. The dataset was captured in the spring (primary growth phase) and again in the summer (regrowth phase) using a quadcopter drone equipped with two cameras, the first capturing RGB images and the second capturing Fabry–Pérot interferometer (FPI) images. The results showed that the ViT RGB models performed best on the different datasets, while VGG CNN models provided equally satisfactory results in most cases.
Dersch et al. [71] used a detection transformer (DETR) to detect single trees in high-resolution RGB true orthophotos (TDOPs) and compared it to a YOLOv4 single-stage detector. The multispectral images were collected by a 10-channel camera system with a horizontal field of view and post-processed using structure-from-motion (SFM) software. The data were manually labeled with a split of 80% training and 20% validation. DETR outperformed YOLOv4 in the mixed and deciduous plots, with F1-score differences of roughly 20 percentage points in the mixed plots (86% vs. 65%) and 4 points in the deciduous plots (71% vs. 67%). Across all three test plots, both methods had problems with over-segmentation. Furthermore, DETR was far worse than YOLOv4 at detecting smaller trees in multiple cases. The authors attributed these poor results to the fact that DETR uses lower-resolution feature maps than YOLOv4.
Chen et al. [72] proposed a new efficient deep learning model called the density transformer (DENT) for automatic tree counting from aerial images. The model’s architecture comprises a multi-receptive field CNN (Multi-RF CNN) that computes a feature map over the input images, a standard transformer encoder, and a density map generator (DMG) that predicts the density distribution over the input images. They also introduced a benchmark dataset of aerial images for tree counting, called the Yosemite tree dataset, and released it to the public [72]. The model outperformed most state-of-the-art methods, with an MAE of 10.7 and an RMSE of 13.7 in comparison to 17.3 and 22.6, respectively, using YOLOv3. It is worth mentioning that the CANNet model [73] achieved the closest values of 10.8 and 13.8, respectively, and achieved a better MAE score than DENT in one of the four regions.
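The density-map formulation behind such counting models has a simple readout: the predicted per-pixel density integrates to the estimated object count, as the toy function below illustrates:

import numpy as np

def count_from_density_map(density):
    # density: 2-D array predicted by the network; values are objects per pixel.
    return float(density.sum())

# e.g. a predicted map whose values sum to ~42 corresponds to an estimate of ~42 trees.
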
Lastly, Zhang et al. [74] developed a spectral–spatial attention-based vision transformer (SSVT) to estimate crop nitrogen status from UAV imagery. The model is an improved version of the standard vision transformer (ViT) that can extract the spatial information of images while also predicting the spectral information, which contains most of the relevant features in agricultural applications. The model also tackles the computational complexity of large images that ViT suffers from by adopting a self-supervised learning (SSL) technique that allows models to train with unlabeled data. The results showed that the model, with 96.2% accuracy, outperformed the ViT model with 94.4% accuracy. However, this model required four million additional parameters compared to a standard ViT model.

2.9. Semi-Supervised Convolutional Neural Networks

Bosilj et al. [75] used the fundamental SegNet architecture to perform pixel-level classification and segmentation into the three classes of crop, weed, and soil. The input comprised RGB and near-infrared (NIR) images. The authors used median frequency weighting to avoid unbalanced labeling, as soil pixels are dominant in any given field with respect to crops or weeds. The input data were taken directly in the form of RGB and NIR channels because NDVI preprocessing typically results in minimal differences. The model was trained on three different datasets of sugar beets, carrots, and onions (SB16, CA17, and ON17), containing both fully labeled and partially labeled examples, with pixel-level and object-level training. Object-based detection performed better than pixel-based detection in terms of precision, whereas pixel-based detection performed better in terms of recall. It is worth noting that the partially labeled ON17 dataset with SB16 weights outperformed the fully labeled dataset. The partially labeled CA17 dataset performed significantly worse than the fully labeled dataset, with a difference of almost 20% on weeds and 5% on crops.
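Median frequency weighting itself is a short computation: each class weight is the median class frequency divided by that class's frequency, so rare classes (crops, weeds) receive larger loss weights than soil. The sketch below is a simplified variant that computes frequencies over the whole label set:

import numpy as np

def median_frequency_weights(label_images, n_classes=3):
    counts = np.zeros(n_classes)
    for lab in label_images:                      # per-pixel integer class labels
        counts += np.bincount(lab.ravel(), minlength=n_classes)
    freq = counts / counts.sum()
    return np.median(freq) / (freq + 1e-12)       # weight_c = median_freq / freq_c

# The returned vector can be passed as class weights to a cross-entropy loss.
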

2.10. Miscellaneous

Coletta et al. [76] used a semi-supervised classification algorithm that aggregates information from clusters with the information provided by a supervised algorithm, such as an SVM, to discover new classes in an active learning manner. According to the authors, such an ability is particularly convenient for inconsistent agricultural environments. The data were collected with a SenseFly eBee equipped with an RGB camera. The model consisted of two blocks: a classification block (ClaB) representing an area of 0.16 m² to be classified and a contextual block (ConB) providing supplementary context information. Both blocks formed a concentric pair that generated the feature vectors to be classified. These vectors were manually labeled as belonging to one of three classes. A semi-supervised classifier was then used to quantify the uncertainty of classification, and a density measure evaluated the importance of a classified feature vector. Instances with highly uncertain labels were denoted as novelties to be learned; these were later labeled by a domain expert through entropy- and density-based selection (EDS) and incorporated into the training set. The results showed that the all-class accuracy and recall improved iteratively.
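The uncertainty side of such a selection strategy can be sketched with a simple entropy ranking over the classifier's predicted probabilities (a stand-in for the full EDS criterion, which also uses a density measure):

import numpy as np

def most_uncertain(proba, k=10):
    # proba: (N, C) predicted class probabilities for N unlabeled blocks.
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:k]          # indices of candidate novelties
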
Li et al. [77] used a radial basis function neural network (RBFNN) to predict farmland moisture accurately. In their work, they deployed a high-precision infrared sensor mounted on a UAV to collect discrete-time images of farmland for later analysis and used 20 uniformly distributed soil moisture sensors to extract ground-truth data. To extract relevant information from the images, the authors used an image preprocessing pipeline that included adaptive median filtering, mean filtering, and edge extraction using the Canny edge detection algorithm. Principal component analysis (PCA) was thereafter used for dimensionality reduction, and its effect was studied by comparing the original model trained on the full dataset with the model trained on the PCA-reduced dataset. The evaluation results showed that the performance of the two models was similar, with the original achieving an R-squared score of 0.92176 and a mean percentage error (MPE) of 0.063, and the PCA-RBFNN model achieving an R-squared of 0.90157 and an MPE of 0.061. Ultimately, it could be concluded that applying PCA helped reduce the model’s workload while maintaining similar accuracy.
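An illustrative comparison of training on the full feature set versus PCA-reduced features, with an RBF-kernel ridge regressor standing in for the paper's RBFNN and random placeholder data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X = np.random.rand(400, 30)          # placeholder image-derived features
y = np.random.rand(400)              # placeholder ground-truth soil moisture

def evaluate(X_feats):
    X_tr, X_te, y_tr, y_te = train_test_split(X_feats, y, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=1.0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    mpe = np.mean(np.abs(pred - y_te) / (np.abs(y_te) + 1e-12))
    return r2_score(y_te, pred), mpe

print("full features:", evaluate(X))
print("PCA features: ", evaluate(PCA(n_components=5).fit_transform(X)))
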

References

  1. Tendolkar, A.; Choraria, A.; Manohara Pai, M.M.; Girisha, S.; Dsouza, G.; Adithya, K.S. Modified crop health monitoring and pesticide spraying system using NDVI and Semantic Segmentation: An AGROCOPTER based approach. In Proceedings of the 2021 IEEE International Conference on Autonomous Systems (ICAS), Montreal, QC, Canada, 11–13 August 2021; pp. 1–5.
  2. Natividade, J.; Prado, J.; Marques, L. Low-cost multi-spectral vegetation classification using an Unmanned Aerial Vehicle. In Proceedings of the 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Coimbra, Portugal, 26–28 April 2017; pp. 336–342.
  3. Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method. Appl. Soft Comput. 2015, 37, 533–544.
  4. Júnior, P.C.P.; Monteiro, A.; Ribeiro, R.D.L.; Sobieranski, A.C.; Wangenheim, A.V. Comparison of Supervised Classifiers and Image Features for Crop Rows Segmentation on Aerial Images. Appl. Artif. Intell. 2020, 34, 271–291.
  5. Rodríguez-Garlito, E.C.; Paz-Gallardo, A. Efficiently Mapping Large Areas of Olive Trees Using Drones in Extremadura, Spain. IEEE J. Miniat. Air Space Syst. 2021, 2, 148–156.
  6. Rocha, B.M.; da Silva Vieira, G.; Fonseca, A.U.; Pedrini, H.; de Sousa, N.M.; Soares, F. Evaluation and Detection of Gaps in Curved Sugarcane Planting Lines in Aerial Images. In Proceedings of the 2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), London, ON, Canada, 30 August–2 September 2020; pp. 1–4.
  7. Lottes, P.; Khanna, R.; Pfeifer, J.; Siegwart, R.; Stachniss, C. UAV-based crop and weed classification for smart farming. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3024–3031.
  8. Crimaldi, M.; Cristiano, V.; De Vivo, A.; Isernia, M.; Ivanov, P.; Sarghini, F. Neural Network Algorithms for Real Time Plant Diseases Detection Using UAVs. In Innovative Biosystems Engineering for Sustainable Agriculture, Forestry and Food Production, Proceedings of the International Mid-Term Conference 2019 of the Italian Association of Agricultural Engineering (AIIA), Matera, Italy, 12–13 September 2019; Coppola, A., Di Renzo, G.C., Altieri, G., D’Antonio, P., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 827–835.
  9. Milioto, A.; Lottes, P.; Stachniss, C. Real-Time Blob-Wise Sugar Beets vs Weeds Classification for Monitoring Fields Using Convolutional Neural Networks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W3, 41–48.
  10. Bah, M.D.; Dericquebourg, E.; Hafiane, A.; Canals, R. Deep Learning Based Classification System for Identifying Weeds Using High-Resolution UAV Imagery. In Intelligent Computing; Arai, K., Kapoor, S., Bhatia, R., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2019; Volume 857, pp. 176–187. ISBN 978-3-030-01176-5.
  11. Reddy, S.R.G.; Varma, G.P.S.; Davuluri, R.L. Optimized convolutional neural network model for plant species identification from leaf images using computer vision. Int. J. Speech Technol. 2023, 26, 23–50.
  12. Sembiring, A.; Away, Y.; Arnia, F.; Muharar, R. Development of Concise Convolutional Neural Network for Tomato Plant Disease Classification Based on Leaf Images. J. Phys. Conf. Ser. 2021, 1845, 012009.
  13. Geetharamani, G.; Arun Pandian, J. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338.
  14. Karthik, R.; Hariharan, M.; Anand, S.; Mathikshara, P.; Johnson, A.; Menaka, R. Attention embedded residual CNN for disease detection in tomato leaves. Appl. Soft Comput. 2020, 86, 105933.
  15. Mohanty, S.P. PlantVillage-Dataset. 19 May 2023. Available online: https://github.com/spMohanty/PlantVillage-Dataset (accessed on 30 April 2023).
  16. Nanni, L.; Manfè, A.; Maguolo, G.; Lumini, A.; Brahnam, S. High performing ensemble of convolutional neural networks for insect pest image detection. Ecol. Inform. 2022, 67, 101515.
  17. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182.
  18. Prasad, A.; Mehta, N.; Horak, M.; Bae, W.D. A two-step machine learning approach for crop disease detection: An application of GAN and UAV technology. Remote Sens. 2022, 14, 4765.
  19. Albattah, W.; Javed, A.; Nawaz, M.; Masood, M.; Albahli, S. Artificial Intelligence-Based Drone System for Multiclass Plant Disease Detection Using an Improved Efficient Convolutional Neural Network. Front. Plant Sci. 2022, 13, 808380.
  20. Mishra, S.; Sachan, R.; Rajpal, D. Deep Convolutional Neural Network based Detection System for Real-Time Corn Plant Disease Recognition. Procedia Comput. Sci. 2020, 167, 2003–2010.
  21. Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote Sens. 2018, 10, 1690.
  22. Zheng, Y.; Sarigul, E.; Panicker, G.; Stott, D. Vineyard LAI and canopy coverage estimation with convolutional neural network models and drone pictures. In Sensing for Agriculture and Food Quality and Safety XIV, Proceedings of the SPIE Defense + Commercial Sensing, Orlando, FL, USA, 3 April–13 June 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12120, pp. 29–38.
  23. Yang, R.; Lu, X.; Huang, J.; Zhou, J.; Jiao, J.; Liu, Y.; Liu, F.; Su, B.; Gu, P. A Multi-Source Data Fusion Decision-Making Method for Disease and Pest Detection of Grape Foliage Based on ShuffleNet V2. Remote Sens. 2021, 13, 5102.
  24. Briechle, S.; Krzystek, P.; Vosselman, G. Classification of tree species and standing dead trees by fusing UAV-based LiDAR data and multispectral imagery in the 3D deep neural network pointnet++. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-2–2020, 203–210.
  25. Aiger, D.; Allen, B.; Golovinskiy, A. Large-Scale 3D Scene Classification with Multi-View Volumetric CNN. arXiv 2017.
  26. Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
  27. Silva, C.; Hudak, A.; Vierling, L.; Louise Loudermilk, E.; O’Brien, J.; Kevin Hiers, J.; Jack, S.; Gonzalez-Benecke, C.; Lee, H.; Falkowski, M.; et al. Imputation of Individual Longleaf Pine (Pinus palustris Mill.) Tree Attributes from Field and LiDAR Data. Can. J. Remote Sens. 2016, 42, 554–573.
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015.
  29. Kitano, B.T.; Mendes, C.C.T.; Geus, A.R.; Oliveira, H.C.; Souza, J.R. Corn Plant Counting Using Deep Learning and UAV Images. IEEE Geosci. Remote Sens. Lett. 2019, 1–5.
  30. Sreedhar, K. Enhancement of Images Using Morphological Transformations. IJCSIT 2012, 4, 33–50.
  31. Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front. Plant Sci. 2020, 11, 534853.
  32. Arun, R.A.; Umamaheswari, S.; Jain, A.V. Reduced U-Net Architecture for Classifying Crop and Weed Using Pixel-Wise Segmentation. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bangluru, India, 6–8 November 2020; pp. 1–6.
  33. El Hoummaidi, L.; Larabi, A.; Alam, K. Using unmanned aerial systems and deep learning for agriculture mapping in Dubai. Heliyon 2021, 7, e08154.
  34. Doha, R.; Al Hasan, M.; Anwar, S.; Rajendran, V. Deep Learning based Crop Row Detection with Online Domain Adaptation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14–18 August 2021; pp. 2773–2781.
  35. Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat Yellow Rust Severity Detection by Efficient DF-UNet and UAV Multispectral Imagery. IEEE Sens. J. 2022, 22, 9057–9068.
  36. Tsuichihara, S.; Akita, S.; Ike, R.; Shigeta, M.; Takemura, H.; Natori, T.; Aikawa, N.; Shindo, K.; Ide, Y.; Tejima, S. Drone and GPS Sensors-Based Grassland Management Using Deep-Learning Image Segmentation. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 608–611.
  37. Yang, M.-D.; Boubin, J.G.; Tsai, H.P.; Tseng, H.-H.; Hsu, Y.-C.; Stewart, C.C. Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet. Comput. Electron. Agric. 2020, 179, 105817.
  38. Weyler, J.; Magistri, F.; Seitz, P.; Behley, J.; Stachniss, C. In-Field Phenotyping Based on Crop Leaf and Plant Instance Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 2968–2977.
  39. Guo, Y.; Zhang, J.; Yin, C.; Hu, X.; Zou, Y.; Xue, Z.; Wang, W. Plant Disease Identification Based on Deep Learning Algorithm in Smart Farming. Discret. Dyn. Nat. Soc. 2020, 2020, 2479172.
  40. Getreuer, P. Chan-Vese Segmentation. Image Process. On Line 2012, 2, 214–224.
  41. Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; Jiménez-Brenes, F.M.; de Castro, A.I.; López-Granados, F. Early Detection of Broad-Leaved and Grass Weeds in Wide Row Crops Using Artificial Neural Networks and UAV Imagery. Agronomy 2021, 11, 749.
  42. Zhang, X.; Li, N.; Ge, L.; Xia, X.; Ding, N. A Unified Model for Real-Time Crop Recognition and Stem Localization Exploiting Cross-Task Feature Fusion. In Proceedings of the 2020 IEEE International Conference on Real-time Computing and Robotics (RCAR), Asahikawa, Japan, 28–29 September 2020; pp. 327–332.
  43. Li, N.; Zhang, X.; Zhang, C.; Guo, H.; Sun, Z.; Wu, X. Real-Time Crop Recognition in Transplanted Fields with Prominent Weed Growth: A Visual-Attention-Based Approach. IEEE Access 2019, 7, 185310–185321.
  44. Chen, C.-J.; Huang, Y.-Y.; Li, Y.-S.; Chen, Y.-C.; Chang, C.-Y.; Huang, Y.-M. Identification of Fruit Tree Pests With Deep Learning on Embedded Drone to Achieve Accurate Pesticide Spraying. IEEE Access 2021, 9, 21986–21997.
  45. Qin, Z.; Wang, W.; Dammer, K.-H.; Guo, L.; Cao, Z. A Real-time Low-cost Artificial Intelligence System for Autonomous Spraying in Palm Plantations. arXiv 2021.
  46. Parico, A.I.B.; Ahamed, T. An Aerial Weed Detection System for Green Onion Crops Using the You Only Look Once (YOLOv3) Deep Learning Algorithm. Eng. Agric. Environ. Food 2020, 13, 42–48.
  47. Rui, C.; Youwei, G.; Huafei, Z.; Hongyu, J. A Comprehensive Approach for UAV Small Object Detection with Simulation-based Transfer Learning and Adaptive Fusion. arXiv 2021.
  48. Parico, A.I.B.; Ahamed, T. Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT. Sensors 2021, 21, 4803.
  49. Jintasuttisak, T.; Edirisinghe, E.; Elbattay, A. Deep neural network based date palm tree detection in drone imagery. Comput. Electron. Agric. 2022, 192, 106560.
  50. Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of Apple Lesions in Orchards Based on Deep Learning Methods of CycleGAN and YOLOV3-Dense. J. Sens. 2019, 2019, 7630926.
  51. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37.
  52. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136.
  53. Ridho, M.F.; Irwan. Strawberry Fruit Quality Assessment for Harvesting Robot Using SSD Convolutional Neural Network. In Proceedings of the 2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Semarang, Indonesia, 20–21 October 2021; pp. 157–162.
  54. Ammar, A.; Koubaa, A.; Benjdira, B. Deep-Learning-Based Automated Palm Tree Counting and Geolocation in Large Farms from Aerial Geotagged Images. Agronomy 2021, 11, 1458.
  55. Su, W.-H.; Zhang, J.; Yang, C.; Page, R.; Szinyei, T.; Hirsch, C.D.; Steffenson, B.J. Automatic Evaluation of Wheat Resistance to Fusarium Head Blight Using Dual Mask-RCNN Deep Learning Frameworks in Computer Vision. Remote Sens. 2021, 13, 26.
  56. Yang, M.D.; Tseng, H.H.; Hsu, Y.C.; Tseng, W.C. Real-time Crop Classification Using Edge Computing and Deep Learning. In Proceedings of the 2020 IEEE 17th Annual Consumer Communications Networking Conference (CCNC), Las Vegas, NV, USA, 10–13 January 2020; pp. 1–4.
  57. Menshchikov, A.; Shadrin, D.; Prutyanov, V.; Lopatkin, D.; Sosnin, S.; Tsykunov, E.; Iakovlev, E.; Somov, A. Real-Time Detection of Hogweed: UAV Platform Empowered by Deep Learning. IEEE Trans. Comput. 2021, 70, 1175–1188.
  58. Bah, M.D.; Hafiane, A.; Canals, R. CRowNet: Deep Network for Crop Row Detection in UAV Images. IEEE Access 2020, 8, 5189–5200.
  59. Hosseiny, B.; Rastiveis, H.; Homayouni, S. An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery. Remote Sens. 2020, 12, 3521.
  60. Weyler, J.; Quakernack, J.; Lottes, P.; Behley, J.; Stachniss, C. Joint Plant and Leaf Instance Segmentation on Field-Scale UAV Imagery. IEEE Robot. Autom. Lett. 2022, 7, 3787–3794.
  61. Lottes, P.; Behley, J.; Chebrolu, N.; Milioto, A.; Stachniss, C. Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming. J. Field Robot. 2020, 37, 20–34.
  62. Su, D.; Qiao, Y.; Kong, H.; Sukkarieh, S. Real time detection of inter-row ryegrass in wheat farms using deep learning. Biosyst. Eng. 2021, 204, 198–211.
  63. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA; 2017; Volume 30.
  64. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021.
  65. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. arXiv 2020.
  66. Thai, H.-T.; Tran-Van, N.-Y.; Le, K.-H. Artificial Cognition for Early Leaf Disease Detection using Vision Transformers. In Proceedings of the 2021 International Conference on Advanced Technologies for Communications (ATC), Ho Chi Minh City, Vietnam, 14–16 October 2021; pp. 33–38.
  67. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; p. 8.
  68. Cassava Leaf Disease Classification. Available online: https://kaggle.com/competitions/cassava-leaf-disease-classification (accessed on 18 June 2022).
  69. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High Resolution UAV Images. Remote Sens. 2022, 14, 592.
  70. Karila, K.; Alves Oliveira, R.; Ek, J.; Kaivosoja, J.; Koivumäki, N.; Korhonen, P.; Niemeläinen, O.; Nyholm, L.; Näsi, R.; Pölönen, I.; et al. Estimating Grass Sward Quality and Quantity Parameters Using Drone Remote Sensing with Deep Neural Networks. Remote Sens. 2022, 14, 2692.
  71. Dersch, S.; Schottl, A.; Krzystek, P.; Heurich, M. Novel Single Tree Detection By Transformers Using Uav-Based Multispectral Imagery. ProQuest. Available online: https://www.proquest.com/openview/228f8f292353d30b26ebcdd38372d40d/1?pq-origsite=gscholar&cbl=2037674 (accessed on 15 June 2022).
  72. Chen, G.; Shang, Y. Transformer for Tree Counting in Aerial Images. Remote Sens. 2022, 14, 476.
  73. Liu, W.; Salzmann, M.; Fua, P. Context-Aware Crowd Counting. arXiv 2019.
  74. Zhang, X.; Han, L.; Sobeih, T.; Lappin, L.; Lee, M.; Howard, A.; Kisdi, A. The self-supervised spectral-spatial attention-based transformer network for automated, accurate prediction of crop nitrogen status from UAV imagery. arXiv 2022.
  75. Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture. J. Field Robot. 2020, 37, 7–19.
  76. Coletta, L.F.S.; de Almeida, D.C.; Souza, J.R.; Manzione, R.L. Novelty detection in UAV images to identify emerging threats in eucalyptus crops. Comput. Electron. Agric. 2022, 196, 106901.
  77. Li, W.; Liu, C.; Yang, Y.; Awais, M.; Li, W.; Ying, P.; Ru, W.; Cheema, M.J.M. A UAV-aided prediction system of soil moisture content relying on thermal infrared remote sensing. Int. J. Environ. Sci. Technol. 2022, 19, 9587–9600.