Deep Learning for Image Annotation in Agriculture: Comparison

The implementation of intelligent technology in agriculture is seriously investigated as a way to increase agriculture production while reducing the amount of human labor. In agriculture, recent technology has seen image annotation utilizing deep learning techniques. Due to the rapid development of image data, image annotation has gained a lot of attention. The use of deep learning in image annotation can extract features from images and has been shown to analyze enormous amounts of data successfully. Deep learning is a type of machine learning method inspired by the structure of the human brain and based on artificial neural network concepts. Through training phases that can label a massive amount of data and connect them up with their corresponding characteristics, deep learning can conclude unlabeled data in image processing. For complicated and ambiguous situations, deep learning technology provides accurate predictions. This technology strives to improve productivity, quality and economy and minimize deficiency rates in the agriculture industry.

  • Image annotation
  • Deep learning
  • Agriculture
  • Plant recognition
  • Disease detection
  • Classification
  • Yield estimation

1. Introduction

The agriculture sector is the backbone of most countries, providing enormous employment opportunities to the community as well as goods manufacturing and food supply. Fruit plantation is one of the most important agricultural activities. The production and protection of fruit per capita have recently been considered an essential indicator of a country’s growth and quality of life [1]. The world population is expected to grow from 7.2 to 9.6 billion by 2100, and advanced smart agriculture approaches must be adopted to meet the resulting demand for food [2]. Several studies have recommended addressing the critical issue of improving management and production in the agriculture industry [3][4]. Agricultural production faces challenges in terms of productivity, environmental impact and sustainability. Agriculture ecosystems necessitate constant monitoring of several variables, resulting in a large amount of data. These data may take the form of images that can be processed with various image processing algorithms to identify plants, diseases and other cases in varied agricultural situations [5]. Advanced technological improvements have been made in agriculture with limited resources to ensure production, quality, processing, storage and distribution [6]. The technology used in this field spans various scientific disciplines, covering sensors, big data, artificial intelligence and robotics [7]. Apart from the use of sensor technology to advance the agriculture industry [8], the use of image annotation techniques to improve agriculture production is a relatively recent development.
Image annotation has attracted widespread attention in the past few years due to the rapid growth of image data [9][10][11]. This method is used to analyze big data images and predict labels for the images [12]. Image annotation is the technique of labeling an image with keywords which reflect the character of the image and assist in the intelligent retrieval of relevant images using a simple query representation [13]. Image annotation in the agriculture sector can annotate images according to the user’s requirement. Everything from plants and fruits to soil can be annotated to be recognized and classified. Moreover, it helps in plant detection, classification and segmentation based on the plant species, type, health condition or maturity. It can predict the label of a given image and can correspond well to the image content [12]. Image annotation can describe images at the semantic level and has many applications that are not only focused on image analysis but also on urban management and biomedical engineering. Basically, image annotation algorithms are divided into traditional and deep neural network-based methods [14]. However, traditional or manual image annotation has inherent weaknesses. Therefore, automatic image annotation (AIA) was introduced in the late 1990s by Mori et al. [15].
The objective of automatic image annotation is to predict several textual labels for an unseen image representing its content, which is a labeling problem. This technology automatically annotates the image using its semantic tags and has been applied in image retrieval classification and the medical domain. The training data attempt to teach a model to assign semantic labels to the new image automatically. One or more tags will be transferred to the image based on image metadata or visual features. For instance, the technology has been proposed in many areas and shows outstanding achievement [13][16]. Large amounts of data are required to improve the accuracy of annotating images of plants or diseases. To assist researchers in overcoming these severe challenges, Deng et al. [17] introduced ImageNet, a publicly available collection of existing plants extensively used in computer vision. It has been frequently used as a benchmark for various visualization types of computer vision issues. Another public dataset is PlantVillage [18], an open-access platform for disease plant leaf images by Penn State University. Moreover, the datasets that are dedicated to fruit detection are MinneApple [19], Date Fruit [20] and MangoYOLO [21], weed control datasets are DeepWeeds [22] and Open Plant Phenotype Dataset [23] and a dataset of plant seedlings at different growth stages is V2 Plant Seedling Dataset [24].
AIA can be classified into many categories, which differ in their contribution, computational complexity, computational time and annotation accuracy. One of these categories is deep learning-based image annotation [25][26]. Deep learning research on AIA has attracted extensive attention, both in theoretical study and in various image processing and computer vision applications. It shows high potential in image processing capabilities for the future needs of agriculture [27][28]. Deep learning, a subset of machine learning, was first introduced to machine learning by Dechter [29] in 1986 and to artificial neural networks by Aizenberg et al. [30] in 2000. It can transform data using various functions that allow hierarchical representation, expressing complex concepts in terms of simpler ones. It learns to perform tasks directly from images and produces high-accuracy responses [31][32]. Several AIA techniques other than the deep learning approach have been proposed, such as support vector machines, Bayesian methods, texture resemblance and instance-based methods. Deep learning techniques, on the other hand, have been highly successful in image processing throughout the last decade [33]. The high accuracy of deep learning comes with high computational and storage requirements during the training and inference phases, because the training process is both space consuming and computationally intensive: millions of parameters must be refined over many iterations [34]. Due to the complexity of the data models, training is quite expensive. Furthermore, deep learning necessitates the use of costly graphics processing units (GPUs) and many machines, which raises the cost to users. Image annotation training based on deep learning can be classified into supervised, unsupervised and semi-supervised categories.
Supervised deep learning involves training on data samples that have been correctly classified at the source. The algorithm is trained on input data labeled for a certain output until it is able to discern the underlying links between the inputs and the output findings. The system is supplied with labeled datasets during the training phase, which inform it which outputs are associated with certain input values. Supervised learning presents a significant challenge due to the requirement for a huge amount of labeled data [35][36]; at least hundreds of annotated images are required during supervised training [37]. The training approach consists of providing a large number of annotated images to the algorithm to help the model learn, then testing the trained model on unannotated images. To determine the accuracy of this method, annotated images with hidden labels are often employed in the algorithm’s testing stage. Thus, sufficient annotated images are needed for supervised deep learning models to achieve acceptable performance levels. Most studies apply supervised learning, as this method promises high accuracy, as proposed in [38][39][40]. Another attractive annotation method is based on unsupervised learning. Unsupervised learning, in contrast to supervised learning, deals with unlabeled data. Labels are frequently difficult to obtain in these cases due to insufficient knowledge of the data, or because labeling is prohibitively expensive. Furthermore, the lack of labels makes setting goals for the trained model problematic; consequently, determining whether or not the results are accurate is difficult. The study by [41] employed unsupervised learning on two real weed datasets using a recent unsupervised deep clustering technique. The results on these datasets signal a promising direction for the use of unsupervised learning and clustering in agricultural challenges.
For circumstances where cluster and class numbers vary, the suggested modified unsupervised clustering accuracy has proven to be a robust and easier-to-interpret clustering evaluation measure. It is also feasible to demonstrate that data augmentation and transfer learning can significantly improve unsupervised learning.
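The supervised workflow described above — train on labeled samples, then test on samples whose labels are hidden during prediction — can be sketched with a toy nearest-centroid classifier on synthetic feature vectors. Everything here is illustrative (the class names, features and data are invented); real systems would train a deep network on image data.

```python
import numpy as np

# Toy illustration of supervised learning: labeled feature vectors train
# a model, which is then tested on held-out examples whose labels are
# hidden during prediction.
rng = np.random.default_rng(0)

# Synthetic "image features" for two hypothetical classes (crop vs. weed).
crop_train = rng.normal(loc=0.0, scale=0.5, size=(50, 4))
weed_train = rng.normal(loc=3.0, scale=0.5, size=(50, 4))

# "Training": a nearest-centroid model just memorises the class means.
centroids = {
    "crop": crop_train.mean(axis=0),
    "weed": weed_train.mean(axis=0),
}

def predict(x):
    """Assign the label of the closest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# "Testing": evaluate on fresh samples whose labels are hidden from the model.
test_x = np.vstack([rng.normal(0.0, 0.5, (20, 4)),
                    rng.normal(3.0, 0.5, (20, 4))])
test_y = ["crop"] * 20 + ["weed"] * 20
accuracy = np.mean([predict(x) == y for x, y in zip(test_x, test_y)])
print(f"test accuracy: {accuracy:.2f}")
```

The same train/hidden-label-test split underlies the supervised deep learning evaluations cited above; only the model differs.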
Semi-supervised learning, like supervised and unsupervised learning, involves working with a dataset; however, the dataset is separated into labeled and unlabeled parts. This technique is frequently used when labeling the acquired data is too difficult or expensive, and it can also be used when the labeled data are of poor quality [42]. The fundamental issue in large-scale image annotation approaches based on semi-supervised learning is dealing with a large, noisy dataset in which the number of images grows rapidly. The ability to identify unwanted plants has improved because of advances in farm image analysis. However, the majority of these systems rely on supervised learning, which necessitates a large number of manually annotated images. As a result, due to the huge variety of plant species being cultivated, supervised learning is economically infeasible for the individual farmer. Therefore, [43][44][45] proposed unsupervised image annotation techniques to solve weed detection in farms using deep learning approaches.
Deep learning has significant potential in the agriculture sector for increasing the amount and quality of produce through image-based classification. Consequently, many researchers have employed deep learning technologies and methods to improve and automate tasks [3]. Its role in this sector gives excellent results in plant counting, leaf counting, leaf segmentation and yield prediction [46]. Noon et al. [47] reviewed the application of deep learning in the agriculture sector for early detection of plant leaf stress, enabling farmers to apply suitable treatment. Deep learning is effective in detecting leaf stress for various plants. However, implementing deep learning in agriculture requires a large amount of data regarding the plants, in terms of both collection and processing. The necessary data are typically collected using wireless sensors, drones, robots and satellites [48]. The more data used to train the deep learning model, the more robust and pervasive the model becomes [49].
Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) are examples of robotic systems that provide a cost-effective, adaptable and scalable solution for product management and crop quality [50]. Weeds reduce crop production, and their growth must be monitored regularly to keep them under control. Additionally, applying the same amount of herbicide to the entire field results in waste, pollution and higher costs for farmers. Combining image analytics from UAV footage with precision agriculture can assist agronomists in advising farmers on where to focus herbicides in particular regions of the field [51][52]. As stated in [53], the first stage in site-specific weed management is to detect weed patches in the field quickly and accurately. Therefore, the authors proposed object detection implemented with Faster R-CNN for training and evaluating weed detection in soybean fields using a low-altitude UAV. The proposed technique was the best model for detecting weeds, obtaining an intersection over union (IoU) performance of 0.85. Franco et al. [54] captured a thistle weed species, Cirsium arvense, in cereal crops by utilizing a UAV. This tool is used to gather a detailed view of an agricultural site and is attractive due to its low operational costs and flexible flight. A UAV captured RGB images of thistles at 50 m above the ground; weed and cereal classes were annotated and grouped under unique pixel labels. According to [51], labeling plants in a field image consumes a lot of time, and very little attention has been paid to annotating the data for training a deep learning model. Therefore, the authors proposed a deep learning technique to detect weeds in UAV images by applying overlapping windows [51]. The deep learning technique provides the probability of the plant being a weed or a crop for each window location.
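The IoU metric reported above measures how well a predicted bounding box overlaps its ground-truth box: the area of their intersection divided by the area of their union. A minimal implementation (the box coordinates below are invented for illustration, not taken from the cited study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (empty if boxes don't overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted weed box half-overlapping a ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.333...
```

An IoU of 0.85, as in the soybean study, therefore indicates a very tight agreement between predicted and annotated weed patches.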
Deep learning can make harvesting robots more effective when generating robust and reliable computer vision algorithms to detect fruit [55]. The usage of UAVs in dataset collection has also been applied in palm oil tree detection [56], rice phenology [57], detection and classification of soybean pests [58], potato plant detection [59], paddy field yield assessment [60] and corn classification [61].
Over the last few decades, UGVs have been used to achieve efficiency, particularly by reducing manpower requirements. UGVs have been employed for soil analysis [62], precision spraying [63], controlled weeding [64] and crop harvesting [65]. Mazzia et al. [66] employed a UGV for path planning using deep learning as an estimator. Row-based crops are ideal for testing and deploying UGVs that can monitor and manage crop harvesting. The study proposed by the authors proved the feasibility of the deep learning technique by demonstrating the viability of a complete autonomous global path planner. In [67], a robot harvester implementing a deep learning algorithm is used to detect obstacles and observe the surrounding environment for rice. The image cascade network successfully detects obstacles and avoids collisions with an average success rate of 96.6%. Besides UAVs and UGVs, deep learning provides a practical solution in the agriculture field from satellite imagery. A vital component of agricultural monitoring systems is having accurate maps of crop types and acreage. Satellites can help determine the boundaries of smallholder farms, since these boundaries are hazy, irregular in shape and frequently mixed with other land uses. Persello et al. [68] presented a deep learning technique to automatically delineate smallholder farms using a convolutional network in combination with a globalization and grouping algorithm. The proposed solution outperforms alternative strategies by autonomously delineating field boundaries with F scores greater than 0.7 and 0.6 for the two proposed test regions, respectively. Furthermore, satellites are used to capture images for identifying crops, as presented in [69]. The authors utilized multiexposure satellite imagery of agricultural land using image analysis and deep learning techniques for edge segmentation in an image.
The implementation of a CNN for image edge smoothing achieves an accuracy of 98.17%. According to [70], enough data should be collected for training in order to predict crop yields and forecast crop prices reliably. Data availability is a significant limitation that can be overcome using satellite imagery, which can cover huge geographic areas. Combining deep learning with satellite imagery yields significant advantages in extracting field boundaries [71], monitoring agricultural areas [72], weather prediction [73], crop classification [74] and soil moisture forecasting [75].
Various implementations of deep learning approaches in agriculture have been extensively reviewed in recent years, as proposed in [5][37][76][77][78][79]. Among those, Koirala et al. [77] reviewed the application of deep learning in fruit detection and yield estimation, Zhang et al. [80] explored dense scene analysis in agricultural applications of deep learning and Moazzam et al. [79] emphasized the challenges of weed and crop classification using deep learning. Given the great attention paid to deep learning in the agriculture sector in recent years, and in contrast to existing surveys, this article concisely reviews the use of deep learning techniques in image annotation, focusing on plants and crop areas. This review article presents the most recent five years of research on this method in agriculture, covering new technologies and trends. The presentation covers the techniques of annotating images, the learning techniques, the various architectures proposed, the tools used and, finally, the applications. The application issues are basically plant detection, disease detection, counting, yield estimation, segmentation and classification in the agriculture sector. These tasks are difficult to perform manually, time consuming and labor intensive. The limits of people’s ability to identify objects for these tasks are compensated for by current technology and trends, particularly image annotation and deep learning techniques, which also boost process efficiency. There are many different types of plants, and identifying plants, especially rare ones, requires knowledge. Additionally, a systematic and disciplined approach to classifying various plants is crucial for recognizing and categorizing the vast amount of data acquired on the many known plants. To solve this problem, plant detection and classification are crucial tasks.
Since segmentation helps to extract features from an image, it will improve classification accuracy. A crucial concern in agriculture is disease detection. Disease control procedures can waste time and resources and result in additional plant losses without accurate identification of the disease and its causative agent. Furthermore, in the agriculture industry, counting is essential in managing orchards, yet it can be difficult because of various issues, including overlapping. In particular, counting leaves provides a clear image of the plant’s condition and stage of development. Especially in the age of global climate change, agricultural output assessment is essential for solving new concerns in food security. Accurate yield estimation benefits famine prevention efforts in addition to assisting farmers in making appropriate economic and management decisions.

2. Deep Learning for Image Annotation

Image annotation using deep learning is the most informative method, though it requires more complex training data. It is essential for functional datasets because it informs the training model about the crucial parts of the image, and those details may be used to recognize the classes in test images. The majority of automatic image annotation methods operate by first extracting features from training and testing images. Secondly, based on the training data, the annotation model is developed. Finally, annotations are produced based on the characteristics of the test images [81]. Figure 1 illustrates the image annotation process in detail. Feature extraction is a technique for indexing and extracting visual content from images. Color, texture, shape and domain-specific features are examples of primitive or low-level image features [82].
Figure 1. The process of the image annotation algorithm.
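The three-step pipeline described above (extract features, build an annotation model, annotate test images) can be illustrated with a deliberately simple baseline: color-histogram features plus nearest-neighbor tag transfer. This is a generic AIA sketch, not any cited system's method; the images, tags and helper names are all synthetic and hypothetical.

```python
import numpy as np

def color_histogram(image, bins=4):
    """Low-level feature: per-channel intensity histogram, normalised."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()

# Steps 1-2: extract features from annotated training images and keep
# them, with their tag sets, as the "model" (a nearest-neighbour memory).
rng = np.random.default_rng(1)
green_img = rng.integers(0, 100, (8, 8, 3));  green_img[..., 1] += 150
red_img = rng.integers(0, 100, (8, 8, 3));    red_img[..., 0] += 150
train = [(color_histogram(green_img), {"leaf", "healthy"}),
         (color_histogram(red_img), {"fruit", "ripe"})]

# Step 3: annotate a test image with the tags of its nearest neighbour.
def annotate(image):
    feat = color_histogram(image)
    _, tags = min(train, key=lambda t: np.linalg.norm(feat - t[0]))
    return tags

test_img = rng.integers(0, 100, (8, 8, 3)); test_img[..., 1] += 150
print(annotate(test_img))  # tags of the greenish training image
```

Deep learning-based annotators replace both the hand-crafted histogram features and the nearest-neighbour lookup with learned representations, but the three-step structure is the same.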
Depending on the approach utilized, various annotation types are used to annotate images. The popular image annotation techniques employed in agriculture based on deep learning are bounding boxes [83][84][85][86] and segmentation [87][88][89][90]. The study in [91] proposed tools to boost the efficiency of annotating agriculture images, which frequently contain more varied objects and more detailed shapes than those in many general datasets. Feature extraction in deep learning architectures can be found in imaging applications. The architecture types that have frequently been applied in recent years are unsupervised pre-trained networks (UPNs), recurrent neural networks (RNNs) and convolutional neural networks (CNNs) [92]. An RNN has the advantage of processing time-series data and making decisions about the future based on historical data. An RNN was proposed by Alibabaei et al. [93] to predict tomato yield according to the date, climate, irrigation amount and soil water content. RNN architectures include long short-term memory (LSTM), gated recurrent units (GRUs), bidirectional LSTM (BLSTM) and bidirectional GRU (BGRU). The study shows that BLSTM is able to capture the relationship between past and new observations and accurately predict the yield; however, the BLSTM model has a longer training time than the other implemented models. The authors also conclude that deep learning is able to estimate the yield at the end of the season.
A CNN is the deep learning architecture mainly used, due to its high detection accuracy, reliability and feasibility [94]. CNNs, or ConvNets, are designed to learn spatial features, for example edges, textures, corners or more abstract shapes. The core of learning these characteristics is the diverse and successive transformation of the input object: convolution at different spatial scales together with pooling operations. These operations identify and combine both high-level concepts and low-level features [95]. This method has been proven to be good at extracting abstract features from a raw image through convolutional and pooling layers [96]. The CNN architecture was introduced by Fukushima [97], who proposed supervised and unsupervised training algorithms for parameters that learn from the incoming data. In general, a CNN receives image data that form the input layer and generates a vector of characteristics assigned to object classes in the form of an output layer. Between the input and output layers are hidden layers consisting of a series of convolution and pooling layers and ending with a fully connected layer [98]. CNNs are widely used as a powerful class of models to classify images in multiple agriculture problems, such as fruit classification, plant disease detection, weed identification and pest classification [99]. In addition, they can also detect and count crops. Huang et al. [100] chose a CNN to classify green coffee beans because CNNs are good at extracting image color and shape.
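The convolution, non-linearity and pooling operations described above can be sketched in plain NumPy. The edge-detecting kernel below is hand-crafted purely for illustration; in a real CNN the kernels are learned from data during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1), as in a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, keeping the strongest activations."""
    h, w = fmap.shape
    return fmap[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge kernel responds where intensity changes left-to-right.
image = np.zeros((6, 6)); image[:, 3:] = 1.0   # dark left, bright right
kernel = np.array([[-1.0, 1.0]])
fmap = np.maximum(conv2d(image, kernel), 0.0)  # ReLU non-linearity
pooled = max_pool(fmap)
print(pooled.shape, pooled.max())  # reduced resolution, edge detected
```

Stacking many such convolution/pooling stages, each with many learned kernels, is what lets a CNN combine low-level edges into the high-level concepts mentioned above.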
Two categories of object detection in deep learning are defined by drawing bounding boxes around objects and classifying the object’s pixels. From a labeling perspective, drawing rectangular bounding boxes around an object is much easier than labeling the object’s pixels by drawing outlines. However, from a mapping perspective, pixel-level object detection is more accurate than the bounding box technique [101]. According to Hamidinekoo et al. [102], it is challenging to segment and count individual fruits in images. Therefore, the authors applied a CNN to classify various parts of the plant inflorescence and estimate fruit numbers from the images. CNNs are also used in detecting fruit and disease. Onishi et al. [103] proposed a high-speed and accurate method to detect the position of fruit for automated harvesting using a robot arm. The authors utilized a single shot multibox detector (SSD) based on the CNN method to detect objects in an image using a single deep neural network. To achieve a high level of recognition accuracy, the SSD creates multiscale predictions from multiscale feature maps and explicitly separates the predictions by aspect ratio. The fruit detection achieved by this method is shown in Figure 2. Although other fruits and leaves occlude some apples, the method can still detect them. The results of the study showed that fruit detection accuracy using the SSD is 90%, achieved in only 2 s.
Figure 2. Fruit detection using a CNN [103].
Another major concern in the agriculture sector nowadays is that many pathogens and insects threaten farms. Since deep learning can perform deep analysis and computation, it is one of the prominent methods for plant disease detection [104]. Many approaches help to monitor the health of the crop, from semantic segmentation to other popular image annotation techniques. Labeling data for segmentation is more challenging than labeling data for classification; for this reason, several image annotation methods based on supervised learning for object segmentation have been presented in recent years. Sharma et al. [105] used image segmentation to detect disease by employing the CNN method. In order to obtain maximal data on disease symptoms, the image is segmented by extracting the affected parts of the leaves rather than using the whole image. The quantified result for each type of disease shows that the model is trained very well and achieves excellent results even under real conditions. Kang and Chen [106] performed detection and segmentation of apple fruits and branches, as shown in Figure 3. In Figure 3a–f, apples are drawn in distinct colors and branches are drawn in blue. These detections and segmentations are obtained using a CNN. The experiment achieved an accuracy of 0.873 for instance segmentation of apple fruits and 0.794 for branch segmentation.
Figure 3. Detection and segmentation of fruit and branches [106].
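As a much-simplified illustration of pixel-level annotation of affected leaf regions, the sketch below masks pixels where green no longer dominates, on a synthetic leaf image. This is a crude color-threshold stand-in chosen for exposition, not the CNN segmentation used in the cited studies, and the threshold and image values are invented.

```python
import numpy as np

def diseased_mask(rgb, green_margin=20):
    """Pixel-level mask: flag pixels where green no longer clearly
    dominates red, a crude stand-in for 'affected leaf part' extraction."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    return (g - r) < green_margin

# Synthetic leaf: mostly green, with a brownish (high-red) lesion patch.
leaf = np.zeros((10, 10, 3), dtype=np.uint8)
leaf[..., 1] = 180                 # healthy green everywhere
leaf[2:5, 2:5] = (150, 100, 40)    # lesion: red exceeds green
mask = diseased_mask(leaf)
print(mask.sum(), "lesion pixels of", mask.size)
```

Training a segmentation CNN replaces this fixed color rule with learned per-pixel classification, which is what allows the cited methods to work under real field conditions.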
Khattak et al. [107] proposed a CNN to identify fruits and leaves in healthy and diseased conditions. The results show that the CNN achieved a test accuracy of 94.55%, making it a suggested support tool for farmers in classifying citrus fruit/leaf condition as either healthy or diseased. For yield estimation, Yang et al. [108] trained a CNN to estimate corn grain yield. The experiment conducted by the authors produced 75.50% classification accuracy on spectral and color images. Fuentes [109] successfully proved that a deep learning technique can detect diseases and pests in tomato plants and can deal with complex scenarios in the plant’s surrounding area. The results are shown in Figure 4a–d, where deep learning generates high accuracy in detecting diseases and pests. In each sub-figure, the images from left to right are the input image, the annotated image and the predicted results.
Figure 4. Detection results for diseases and pests affecting tomato plants: (a) gray mold; (b) canker; (c) leaf mold; (d) plague [109].
The architectures of CNNs have evolved gradually with an increasing number of convolutional layers, namely LeNet, AlexNet, Visual Geometry Group 16 (VGG16), VGG19, ResNet, GoogLeNet, ResNeXt, DenseNet and You Only Look Once (YOLO). The differences between these architectures are the number of layers, the non-linearity function and the pooling type used [110]. Mu et al. [111] applied VGGNet to detect the quality of blueberries through skin pigments during the seven stages of maturity. The technique was used to overcome the difficulty of identifying the maturity and quality grade of blueberry fruit by the human eye, and it improved the accuracy and efficiency of blueberry quality detection. Lee et al. [112] proposed three types of CNN architecture with different layers, namely VGG16 with 16 layers, InceptionV3 with 48 layers and GoogLeNetBN with 34 layers. InceptionV2 inspired the GoogLeNetBN and InceptionV3 architectures, improving accuracy and reducing computational complexity. Batch normalization (BN) has been proven to limit overfitting and speed up convergence. In a study by [113], three CNN architectures, AlexNet, InceptionV3 and SqueezeNet, were compared to assess their accuracy in evaluating tomato late blight disease. Among these architectures, AlexNet generated the highest feature extraction accuracy, at 93.4%. Gehlot and Saini [114] also compared the performance of CNN architectures in classifying diseases in tomato leaves. The architectures assessed in the study were AlexNet, GoogLeNet, VGG-16, ResNet-101 and DenseNet-121. The accuracy of all these architectures is almost equal; however, DenseNet-121 is much smaller, at 89.6 MB, while the largest, ResNet-101, is 504.33 MB.
Figure 5 presents the details of image annotation and its deep learning techniques. Low-level features are used to represent images in image classification and retrieval. The initial stage in semantic comprehension is to extract efficient and effective visual features from an image’s unstructured array of pixels. The performance of semantic learning approaches is considerably improved by appropriate feature representation. Numerous feature extraction techniques, including image segmentation, color features, texture characteristics, shape features and spatial relationships, have been proposed [115]. There are five categories of image annotation methods: generative model-based, nearest neighbor-based, discriminative model-based, tag completion-based and deep learning-based image annotation [25][26]. In the past decade, tremendous progress has been made in deep learning techniques, allowing image annotation tasks to be solved using deep learning-based feature representation. The most recent advancements in deep learning enable a number of deep models for large-scale image annotation. A CNN is commonly used by deep learning-based approaches to extract robust visual characteristics. Several versions of CNN architecture, such as LeNet, VGG, GoogLeNet, etc., have been proposed. The following section describes the most commonly employed CNN architectures. The four types of image annotation are image classification, object detection or recognition, segmentation and boundary recognition. All of these task types can be annotated using deep learning techniques. The training process of deep learning can be supervised, unsupervised or semi-supervised, depending on how the neural network is used. In most cases, supervised learning is used to predict a label or a number. Commonly used benchmarks for evaluating image annotation techniques are based on performance metrics.
Section 4.8 provides the specifics on performance evaluation metrics.
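The per-image precision, recall and F1 metrics commonly used to benchmark annotation methods can be computed directly from the predicted and ground-truth tag sets. The tag names below are invented for illustration.

```python
def precision_recall_f1(predicted, actual):
    """Per-image annotation metrics over predicted vs. ground-truth tag sets."""
    tp = len(predicted & actual)                       # correctly predicted tags
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

predicted = {"apple", "leaf", "branch"}
actual = {"apple", "leaf", "stem"}
print(precision_recall_f1(predicted, actual))
```

Dataset-level scores are then obtained by averaging these per-image values (or by micro-averaging the tag counts), which is what the benchmark comparisons in the literature report.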
Figure 5. Image annotation for deep learning-based technique.

This entry is adapted from 10.3390/agriculture12071033

 

References

  1. Khan, T.; Sherazi, H.; Ali, M.; Letchmunan, S.; Butt, U. Deep Learning-Based Growth Prediction System: A Use Case of China Agriculture. Agronomy 2021, 11, 1551.
  2. Ahmad, N.; Singh, S. Comparative study of disease detection in plants using machine learning and deep learning. In Proceedings of the 2nd International Conference on Electrical, Computer and Energy Technologies (ICECET), Prague, Czech Republic, 20–22 July 2021; pp. 54–59.
  3. Zheng, Y.-Y.; Kong, J.-L.; Jin, X.-B.; Wang, X.-Y.; Su, T.-L.; Zuo, M. CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture. Sensors 2019, 19, 1058.
  4. Velumani, K.; Madec, S.; de Solan, B.; Lopez-Lozano, R.; Gillet, J.; Labrosse, J.; Jezequel, S.; Comar, A.; Baret, F. An automatic method based on daily in situ images and deep learning to date wheat heading stage. Field Crop. Res. 2020, 252, 107793.
  5. Santos, L.; Santos, F.N.; Oliveira, P.M.; Shinde, P. Deep learning applications in agriculture: A short review. In Proceedings of the Iberian Robotics Conference, Proceedings of the Robot 2019: Fourth Iberian Robotics Conference, Porto, Portugal, 20–22 November 2019; Springer: Cham, Switzerland, 2019; pp. 139–151.
  6. Khan, N.; Ray, R.; Sargani, G.; Ihtisham, M.; Khayyam, M.; Ismail, S. Current Progress and Future Prospects of Agriculture Technology: Gateway to Sustainable Agriculture. Sustainability 2021, 13, 4883.
  7. Cecotti, H.; Rivera, A.; Farhadloo, M.; Pedroza, M.A. Grape detection with convolutional neural networks. Expert Syst. Appl. 2020, 159, 113588.
  8. Kayad, A.; Paraforos, D.; Marinello, F.; Fountas, S. Latest Advances in Sensor Applications in Agriculture. Agriculture 2020, 10, 362.
  9. Cheng, C.Y.; Liu, L.; Tao, J.; Chen, X.; Xia, R.; Zhang, Q.; Xiong, J.; Yang, K.; Xie, J. The image annotation algorithm using convolutional features from intermediate layer of deep learning. Multimed. Tools Appl. 2021, 80, 4237–4261.
  10. Niu, Y.; Lu, Z.; Wen, J.-R.; Xiang, T.; Chang, S.-F. Multi-Modal Multi-Scale Deep Learning for Large-Scale Image Annotation. IEEE Trans. Image Process. 2018, 28, 1720–1731.
  11. Blok, P.M.; van Henten, E.J.; van Evert, F.K.; Kootstra, G. Image-based size estimation of broccoli heads under varying degrees of occlusion. Biosyst. Eng. 2021, 208, 213–233.
  12. Chen, S.; Wang, M.; Chen, X. Image Annotation via Reconstitution Graph Learning Model. Wirel. Commun. Mob. Comput. 2020, 2020, 8818616.
  13. Bhagat, P.; Choudhary, P. Image annotation: Then and now. Image Vis. Comput. 2018, 80, 1–23.
  14. Wang, R.; Xie, Y.; Yang, J.; Xue, L.; Hu, M.; Zhang, Q. Large scale automatic image annotation based on convolutional neural network. J. Vis. Commun. Image Represent. 2017, 49, 213–224.
  15. Mori, Y.; Takahashi, H.; Oka, R. Image-to-word transformation based on dividing and vector quantizing images with words. In Proceedings of the First International Workshop on Multimedia Intelligent Storage and Retrieval Management, Orlando, FL, USA, October 1999; pp. 1–9.
  16. Ma, Y.; Liu, Y.; Xie, Q.; Li, L. CNN-feature based automatic image annotation method. Multimed. Tools Appl. 2019, 78, 3767–3780.
  17. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  18. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419.
  19. Hani, N.; Roy, P.; Isler, V. MinneApple: A Benchmark Dataset for Apple Detection and Segmentation. IEEE Robot. Autom. Lett. 2020, 5, 852–858.
  20. Altaheri, H.; Alsulaiman, M.; Muhammad, G.; Amin, S.U.; Bencherif, M.; Mekhtiche, M. Date fruit dataset for intelligent harvesting. Data Brief 2019, 26, 104514.
  21. Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep learning for real-time fruit detection and orchard fruit load estimation: Benchmarking of ‘MangoYOLO’. Precis. Agric. 2019, 20, 1107–1135.
  22. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058.
  23. Madsen, S.L.; Mathiassen, S.K.; Dyrmann, M.; Laursen, M.S.; Paz, L.-C.; Jørgensen, R.N. Open Plant Phenotype Database of Common Weeds in Denmark. Remote Sens. 2020, 12, 1246.
  24. Giselsson, T.M.; Jørgensen, R.N.; Jensen, P.K.; Dyrmann, M.; Midtiby, H.S. A public image database for benchmark of plant seedling classification algorithms. arXiv 2017, arXiv:1711.05458.
  25. Cheng, Q.; Zhang, Q.; Fu, P.; Tu, C.; Li, S. A survey and analysis on automatic image annotation. Pattern Recognit. 2018, 79, 242–259.
  26. Randive, K.; Mohan, R. A State-of-Art Review on Automatic Video Annotation Techniques. In Proceedings of the International Conference on Intelligent Systems Design and Applications, Vellore, India, 6–8 December 2018; Springer: Cham, Switzerland, 2018; pp. 1060–1069.
  27. Sudars, K.; Jasko, J.; Namatevs, I.; Ozola, L.; Badaukis, N. Dataset of annotated food crops and weed images for robotic computer vision control. Data Brief 2020, 31, 105833.
  28. Cao, J.; Zhao, A.; Zhang, Z. Automatic image annotation method based on a convolutional neural network with threshold optimization. PLoS ONE 2020, 15, e0238956.
  29. Dechter, R. Learning while searching in constraint-satisfaction problems. In Proceedings of the Fifth National Conference on Artificial Intelligence (AAAI-86), Philadelphia, PA, USA, 11–15 August 1986.
  30. Aizenberg, I.; Aizenberg, N.N.; Vandewalle, J.P. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications; Springer Science & Business Media: Cham, Switzerland, 2000.
  31. Schmidhuber, J. Deep learning. Scholarpedia 2015, 10, 32832.
  32. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117.
  33. Adnan, M.M.; Rahim, M.S.M.; Rehman, A.; Mehmood, Z.; Saba, T.; Naqvi, R.A. Automatic Image Annotation Based on Deep Learning Models: A Systematic Review and Future Challenges. IEEE Access 2021, 9, 50253–50264.
  34. Chen, J.; Ran, X. Deep Learning with Edge Computing: A Review. Proc. IEEE 2019, 107, 1655–1674.
  35. Caron, M.; Bojanowski, P.; Joulin, A.; Douze, M. Deep clustering for unsupervised learning of visual features. In Proceedings of the ECCV: European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 139–156.
  36. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160.
  37. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
  38. Bresilla, K.; Perulli, G.D.; Boini, A.; Morandi, B.; Corelli Grappadelli, L.; Manfrini, L. Single-Shot Convolution Neural Networks for Real-Time Fruit Detection Within the Tree. Front. Plant Sci. 2019, 10, 611.
  39. Tsironis, V.; Bourou, S.; Stentoumis, C. Evaluation of Object Detection Algorithms on A New Real-World Tomato Dataset. ISPRS Arch. 2020, 43, 1077–1084.
  40. Santos, T.T.; de Souza, L.L.; dos Santos, A.A.; Avila, S. Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association. Comput. Electron. Agric. 2020, 170, 105247.
  41. Dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Unsupervised deep learning and semi-automatic data labeling in weed discrimination. Comput. Electron. Agric. 2019, 165, 104963.
  42. Van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2020, 109, 373–440.
  43. Shorewala, S.; Ashfaque, A.; Sidharth, R.; Verma, U. Weed Density and Distribution Estimation for Precision Agriculture Using Semi-Supervised Learning. IEEE Access 2021, 9, 27971–27986.
  44. Hu, C.; Thomasson, J.A.; Bagavathiannan, M.V. A powerful image synthesis and semi-supervised learning pipeline for site-specific weed detection. Comput. Electron. Agric. 2021, 190, 106423.
  45. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Alam, M. A novel semi-supervised framework for UAV based crop/weed classification. PLoS ONE 2021, 16, e0251008.
  46. Karami, A.; Crawford, M.; Delp, E.J. Automatic Plant Counting and Location Based on a Few-Shot Learning Technique. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5872–5886.
  47. Noon, S.K.; Amjad, M.; Qureshi, M.A.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Sustain. Comput. Inform. Syst. 2020, 28, 100443.
  48. Fountsop, A.N.; Fendji, J.L.E.K.; Atemkeng, M. Deep Learning Models Compression for Agricultural Plants. Appl. Sci. 2020, 10, 6866.
  49. Xuan, G.; Gao, C.; Shao, Y.; Zhang, M.; Wang, Y.; Zhong, J.; Li, Q.; Peng, H. Apple Detection in Natural Environment Using Deep Learning Algorithms. IEEE Access 2020, 8, 216772–216780.
  50. Rahnemoonfar, M.; Sheppard, C. Real-time yield estimation based on deep learning. In Proceedings of SPIE Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping II, Anaheim, CA, USA, 8 May 2017; p. 1021809.
  51. Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote. Sens. 2018, 10, 1690.
  52. Huang, H.; Lan, Y.; Yang, A.; Zhang, Y.; Wen, S.; Deng, J. Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 2020, 41, 3446–3479.
  53. Veeranampalayam Sivakumar, A.N.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136.
  54. Franco, C.; Guada, C.; Rodríguez, J.T.; Nielsen, J.; Rasmussen, J.; Gómez, D.; Montero, J. Automatic detection of thistle-weeds in cereal crops from aerial RGB images. In Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Cádiz, Spain, 11–15 June 2018; Springer: Cham, Switzerland, 2018; pp. 441–452.
  55. Kalampokas, Τ.; Vrochidou, Ε.; Papakostas, G.A.; Pachidis, T.; Kaburlasos, V.G. Grape stem detection using regression convolutional neural networks. Comput. Electron. Agric. 2021, 186, 106220.
  56. Liu, X.; Ghazali, K.H.; Han, F.; Mohamed, I.I. Automatic Detection of Oil Palm Tree from UAV Images Based on the Deep Learning Method. Appl. Artif. Intell. 2021, 35, 13–24.
  57. Yang, Q.; Shi, L.; Han, J.; Yu, J.; Huang, K. A near real-time deep learning approach for detecting rice phenology based on UAV images. Agric. For. Meteorol. 2020, 287, 107938.
  58. Tetila, E.C.; Machado, B.B.; Astolfi, G.; Belete, N.A.D.S.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and classification of soybean pests using deep learning with UAV images. Comput. Electron. Agric. 2020, 179, 105836.
  59. Mhango, J.; Harris, E.; Green, R.; Monaghan, J. Mapping Potato Plant Density Variation Using Aerial Imagery and Deep Learning Techniques for Precision Agriculture. Remote Sens. 2021, 13, 2705.
  60. Tri, N.C.; Duong, H.N.; Van Hoai, T.; Van Hoa, T.; Nguyen, V.H.; Toan, N.T.; Snasel, V. A novel approach based on deep learning techniques and UAVs to yield assessment of paddy fields. In Proceedings of the 2017 9th International Conference on Knowledge and Systems Engineering (KSE), Hue, Vietnam, 19–21 October 2017; pp. 257–262.
  61. Trujillano, F.; Flores, A.; Saito, C.; Balcazar, M.; Racoceanu, D. Corn classification using Deep Learning with UAV imagery. An operational proof of concept. In Proceedings of the IEEE 1st Colombian Conference on Applications in Computational Intelligence (ColCACI), Medellin, Colombia, 16–18 May 2018; pp. 1–4.
  62. Vaeljaots, E.; Lehiste, H.; Kiik, M.; Leemet, T. Soil sampling automation case-study using unmanned ground vehicle. Eng. Rural Dev. 2018, 17, 982–987.
  63. Cantelli, L.; Bonaccorso, F.; Longo, D.; Melita, C.D.; Schillaci, G.; Muscato, G. A Small Versatile Electrical Robot for Autonomous Spraying in Agriculture. AgriEngineering 2019, 1, 29.
  64. Cutulle, M.A.; Maja, J.M. Determining the utility of an unmanned ground vehicle for weed control in specialty crop system. Ital. J. Agron. 2021, 16, 1426–1435.
  65. Jun, J.; Kim, J.; Seol, J.; Kim, J.; Son, H.I. Towards an Efficient Tomato Harvesting Robot: 3D Perception, Manipulation, and End-Effector. IEEE Access 2021, 9, 17631–17640.
  66. Mazzia, V.; Salvetti, F.; Aghi, D.; Chiaberge, M. Deepway: A deep learning estimator for unmanned ground vehicle global path planning. arXiv 2020, arXiv:2010.16322.
  67. Li, Y.; Iida, M.; Suyama, T.; Suguri, M.; Masuda, R. Implementation of deep-learning algorithm for obstacle detection and collision avoidance for robotic harvester. Comput. Electron. Agric. 2020, 174, 105499.
  68. Persello, C.; Tolpekin, V.; Bergado, J.; de By, R. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253.
  69. Mounir, A.J.; Mallat, S.; Zrigui, M. Analyzing satellite images by apply deep learning instance segmentation of agricultural fields. Period. Eng. Nat. Sci. 2021, 9, 1056–1069.
  70. Gastli, M.S.; Nassar, L.; Karray, F. Satellite images and deep learning tools for crop yield prediction and price forecasting. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8.
  71. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 111741.
  72. Nguyen, T.T.; Hoang, T.D.; Pham, M.T.; Vu, T.T.; Huynh, Q.-T.; Jo, J. Monitoring agriculture areas with satellite images and deep learning. Appl. Soft Comput. 2020, 95, 106565.
  73. Dhyani, Y.; Pandya, R.J. Deep learning oriented satellite remote sensing for drought and prediction in agriculture. In Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021; pp. 1–5.
  74. Gadiraju, K.K.; Ramachandra, B.; Chen, Z.; Vatsavai, R.R. Multimodal deep learning based crop classification using multispectral and multitemporal satellite imagery. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event, 6–10 July 2020; pp. 3234–3242.
  75. Ahmed, A.; Deo, R.; Raj, N.; Ghahramani, A.; Feng, Q.; Yin, Z.; Yang, L. Deep Learning Forecasts of Soil Moisture: Convolutional Neural Network and Gated Recurrent Unit Models Coupled with Satellite-Derived MODIS, Observations and Synoptic-Scale Climate Index Data. Remote Sens. 2021, 13, 554.
  76. Wang, C.; Liu, B.; Liu, L.; Zhu, Y.; Hou, J.; Liu, P.; Li, X. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253.
  77. Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep learning—Method overview and review of use for fruit detection and yield estimation. Comput. Electron. Agric. 2019, 162, 219–234.
  78. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.; Hemanth, D. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646.
  79. Moazzam, S.I.; Khan, U.S.; Tiwana, M.I.; Iqbal, J.; Qureshi, W.S.; Shah, S.I. A Review of application of deep learning for weeds and crops classification in agriculture. In Proceedings of the 2019 International Conference on Robotics and Automation in Industry (ICRAI), Rawalpindi, Pakistan, 21–22 October 2019; pp. 1–6.
  80. Zhang, Q.; Liu, Y.; Gong, C.; Chen, Y.; Yu, H. Applications of Deep Learning for Dense Scenes Analysis in Agriculture: A Review. Sensors 2020, 20, 1520.
  81. Chen, Y.; Zeng, X.; Chen, X.; Guo, W. A survey on automatic image annotation. Appl. Intell. 2020, 50, 3412–3428.
  82. Bouchakwa, M.; Ayadi, Y.; Amous, I. A review on visual content-based and users’ tags-based image annotation: Methods and techniques. Multimedia Tools Appl. 2020, 79, 21679–21741.
  83. Dananjayan, S.; Tang, Y.; Zhuang, J.; Hou, C.; Luo, S. Assessment of state-of-the-art deep learning based citrus disease detection techniques using annotated optical leaf images. Comput. Electron. Agric. 2022, 193, 106658.
  84. He, Z.; Xiong, J.; Chen, S.; Li, Z.; Chen, S.; Zhong, Z.; Yang, Z. A method of green citrus detection based on a deep bounding box regression forest. Biosyst. Eng. 2020, 193, 206–215.
  85. Morbekar, A.; Parihar, A.; Jadhav, R. Crop disease detection using YOLO. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, Karnataka, India, 5–7 June 2020; pp. 1–5.
  86. Lamb, N.; Chuah, M.C. A strawberry detection system using convolutional neural networks. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 2515–2520.
  87. Tassis, L.M.; de Souza, J.E.T.; Krohling, R.A. A deep learning approach combining instance and semantic segmentation to identify diseases and pests of coffee leaves from in-field images. Comput. Electron. Agric. 2021, 186, 106191.
  88. Fawakherji, M.; Youssef, A.; Bloisi, D.; Pretto, A.; Nardi, D. Crop and weeds classification for precision agriculture using context-independent pixel-wise segmentation. In Proceedings of the Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019; pp. 146–152.
  89. Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture. J. Field Robot. 2019, 37, 7–19.
  90. Storey, G.; Meng, Q.; Li, B. Leaf Disease Segmentation and Detection in Apple Orchards for Precise Smart Spraying in Sustainable Agriculture. Sustainability 2022, 14, 1458.
  91. Wspanialy, P.; Brooks, J.; Moussa, M. An image labeling tool and agricultural dataset for deep learning. arXiv 2021, arXiv:2004.03351.
  92. Biffi, L.; Mitishita, E.; Liesenberg, V.; Santos, A.; Gonçalves, D.; Estrabis, N.; Silva, J.; Osco, L.P.; Ramos, A.; Centeno, J.; et al. ATSS Deep Learning-Based Approach to Detect Apple Fruits. Remote Sens. 2020, 13, 54.
  93. Alibabaei, K.; Gaspar, P.D.; Lima, T.M. Crop Yield Estimation Using Deep Learning Based on Climate Big Data and Irrigation Scheduling. Energies 2021, 14, 3004.
  94. Mamdouh, N.; Khattab, A. YOLO-Based Deep Learning Framework for Olive Fruit Fly Detection and Counting. IEEE Access 2021, 9, 84252–84262.
  95. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49.
  96. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
  97. Fukushima, K. Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Netw. 1988, 1, 119–130.
  98. Rahnemoonfar, M.; Sheppard, C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors 2017, 17, 905.
  99. Thenmozhi, K.; Reddy, U.S. Crop pest classification based on deep convolutional neural network and transfer learning. Comput. Electron. Agric. 2019, 164, 104906.
  100. Huang, N.; Chou, D.; Lee, C.; Wu, F.; Chuang, A.; Chen, Y.; Tsai, Y. Smart agriculture: Real-time classification of green coffee beans by using a convolutional neural network. IET Smart Cities 2020, 2, 167–172.
  101. Asad, M.H.; Bais, A. Weed detection in canola fields using maximum likelihood classification and deep convolutional neural network. Inf. Process. Agric. 2020, 7, 535–545.
  102. Hamidinekoo, A.; Martínez, G.A.G.; Ghahremani, M.; Corke, F.; Zwiggelaar, R.; Doonan, J.H.; Lu, C. DeepPod: A convolutional neural network based quantification of fruit number in Arabidopsis. GigaScience 2020, 9, giaa012.
  103. Onishi, Y.; Yoshida, T.; Kurita, H.; Fukao, T.; Arihara, H.; Iwai, A. An automated fruit harvesting robot by using deep learning. ROBOMECH J. 2019, 6, 13.
  104. Adi, M.; Singh, A.K.; Reddy, H.; Kumar, Y.; Challa, V.R.; Rana, P.; Mittal, U. An overview on plant disease detection algorithm using deep learning. In Proceedings of the 2021 2nd International Conference on Intelligent Engineering and Management (ICIEM), London, UK, 28–30 April 2021; pp. 305–309.
  105. Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. 2020, 7, 566–574.
  106. Kang, H.; Chen, C. Fruit detection, segmentation and 3D visualisation of environments in apple orchards. Comput. Electron. Agric. 2020, 171, 105302.
  107. Khattak, A.; Asghar, M.U.; Batool, U.; Ullah, H.; Al-Rakhami, M.; Gumaei, A. Automatic Detection of Citrus Fruit and Leaves Diseases Using Deep Neural Network Model. IEEE Access 2021, 9, 112942–112954.
  108. Yang, W.; Nigon, T.; Hao, Z.; Paiao, G.D.; Fernández, F.G.; Mulla, D.; Yang, C. Estimation of corn yield based on hyperspectral imagery and convolutional neural network. Comput. Electron. Agric. 2021, 184, 106092.
  109. Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition. Sensors 2017, 17, 2022.
  110. Maheswari, P.; Raja, P.; Apolo-Apolo, O.E.; Pérez-Ruiz, M. Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques—A Review. Front. Plant Sci. 2021, 12, 684328.
  111. Mu, C.; Yuan, Z.; Ouyang, X.; Sun, P.; Wang, B. Non-destructive detection of blueberry skin pigments and intrinsic fruit qualities based on deep learning. J. Sci. Food Agric. 2021, 101, 3165–3175.
  112. Lee, S.H.; Goëau, H.; Bonnet, P.; Joly, A. New perspectives on plant disease characterization based on deep learning. Comput. Electron. Agric. 2020, 170, 105220.
  113. Verma, S.; Chug, A.; Singh, A.P. Application of convolutional neural networks for evaluation of disease severity in tomato plant. J. Discret. Math. Sci. Cryptogr. 2020, 23, 273–282.
  114. Gehlot, M.; Saini, M.L. Analysis of different CNN architectures for tomato leaf disease classification. In Proceedings of the 2020 5th IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE), Jaipur, India, 1–3 December 2020; pp. 1–6.
  115. Zhang, D.; Islam, M.; Lu, G. A review on automatic image annotation techniques. Pattern Recognit. 2012, 45, 346–362.