Smart Agriculture

Smart agriculture, or precision agriculture, is a crucial way to achieve greater yields by utilizing the natural resources of a diverse environment. The yield of a crop may vary from year to year depending on variations in climate, soil parameters and the fertilizers used. Automation in the agricultural industry moderates the usage of resources and can increase the quality of food in the post-pandemic world. Agricultural robots have been developed for crop seeding, monitoring, weed control, pest management and harvesting. Physical counting of fruitlets, flowers or fruits at various phases of growth is a labour-intensive as well as expensive procedure for crop yield estimation. Remote sensing technologies offer accuracy and reliability in crop yield prediction and estimation. Automation in image analysis with computer vision and deep learning models provides precise field and yield maps. In this review, it is observed that the application of deep learning techniques provides better accuracy for smart farming. The crops taken for the study are fruits such as grapes, apples, citrus and tomatoes, and field crops and vegetables such as sugarcane, corn, soybean, cucumber, maize and wheat. The research works reviewed here are available as products for applications such as robot harvesting, weed detection and pest infestation management. The methods that made use of conventional deep learning techniques provided an average accuracy of 92.51%.

precision agriculture; crop yield estimation; plant disease detection; robot harvesting; post harvesting

1. Introduction

Smart farming helps farmers plan their work with data obtained from agricultural drones, satellites and sensors. Detailed topography, climate forecasts, and the temperature and acidity of the soil can be accessed through sensors positioned on agricultural farms. Precision agriculture provides farmers with compiled statistics to:

  • create an outline of the agricultural land
  • detect environmental risks
  • manage the usage of fertilizers and pesticides
  • forecast crop yields
  • organize for harvest
  • improve the marketing and distribution of the farm products.

According to the 2011 census, nearly 54.6% of India's entire workforce is engaged in agricultural and associated sector tasks, which in 2017–2018 accounted for 17.1% of the nation's Gross Value Added. To safeguard farmers from the risks inherent to agriculture, the Ministry of Agriculture and Farmers Welfare announced a crop insurance scheme in 1985. Problems have emerged in the scheme's technology for collecting data and for reducing delays in responding to farmers' insurance claims. Crop yield estimation is mandatory for this purpose and is recorded through Crop Cutting Experiments (CCE) conducted in regions of the states by the Government of India. The Directorate of Economics and Statistics is presently guiding Crop Cutting Experiments for 13 chief crops under the General Crop Estimation Scheme. To improve the quality of the statistics collected from Crop Cutting Experiments, Global Positioning System (GPS) data such as field elevation, area, latitude and longitude are being recorded by remote sensing [1][2]. The vegetation indices acquired through satellite images track the phenological profiles of the crops throughout the year [3][4].
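
The most widely used of these vegetation indices is the normalized difference vegetation index (NDVI), computed per pixel from the red and near-infrared reflectance bands. The following is a minimal sketch, not taken from the cited works, assuming the two bands are already available as NumPy arrays:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).

    Values close to +1 indicate dense green vegetation; values near zero
    or below indicate bare soil, water or built-up areas.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Avoid division by zero over water / no-data pixels.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Example with synthetic reflectance values; a real workflow would read the
# satellite bands from a raster file (e.g. with rasterio), which is not shown here.
nir_band = np.array([[0.45, 0.50], [0.30, 0.05]])
red_band = np.array([[0.10, 0.12], [0.20, 0.04]])
print(ndvi(nir_band, red_band))
```

Tracking this index across acquisition dates yields the seasonal phenological profile referred to above.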

Conventional crop yield estimation requires crop acreages along with sample assessments that depend on crop cutting experiments. Crop yield data are the most essential input for area-yield insurance schemes such as Pradhan Mantri Fasal Bima Yojana (PMFBY) in India. The PMFBY scheme was launched to support Indian farmers financially during times of crop failure caused by natural disasters or pest attacks [5]. To implement these national-scale agricultural policies, crop cutting experiments are carried out by government officers in various regions across the districts of each state. Because the costs involved are high, the desired crop data from large regions are limited to small-scale crop cutting experiments and surveys of small zones. Present-day industry methods for yield estimation use automated computer vision technology to detect and estimate the count of various harvests [6]. Progress in computing capabilities has provided appropriate techniques for small-area yield estimation. The proficiency of crop yield estimation can be improved by using remote sensing data over a considerably larger area [7]. Satellite images are quantitatively processed to obtain high accuracy in agricultural applications such as crop yield estimation [8].

Crop yield prediction has been made possible by counting the number of flowers and comparing this number with the count of fruits prior to the harvesting stage for citrus trees [6]. The bloom intensity present in an orchard influences crop management in the early season. Estimating the flower count with a deep learning model is effective for crop yield prediction, thinning and pruning, which impact the fruit yield [9]. The prediction of vine yield helps the farmer prepare for harvest, transport the crop and plan for distribution in the market. Plant diseases during the flowering and fruit development stages may affect crop yield forecasts. Deep learning classifier models have been developed to perform crop disease identification in agricultural farms under both controlled and real cultivation environments [10][11].
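
As a simple illustration of relating early-season counts to harvest yield (not the method of [6]; the per-tree numbers below are made up), a linear model can map flower counts to expected fruit counts:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-tree data: flower counts early in the season and
# fruit counts recorded just before harvest.
flower_counts = np.array([[120], [340], [210], [500], [80], [260]])
fruit_counts = np.array([45, 130, 78, 190, 28, 95])

model = LinearRegression().fit(flower_counts, fruit_counts)

# Predict the expected fruit count for newly surveyed trees.
new_trees = np.array([[150], [400]])
print(model.predict(new_trees))
print("approximate fruit-set slope:", model.coef_[0])
```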

At Iwate University, Japan, a robotic harvester with a machine vision system was able to recognize Fuji apples on the tree and estimate the fruit yield with an accuracy of 88%. The system exploits the bimodal histogram of the enhanced image and applies optimal threshold segmentation to extract the fruit portion from the background [12]. The maturity level of tomato berries can be detected with a supervised backpropagation neural network classifier, using the green, orange and red color extraction technique explained in [13]. Agricultural robots execute their farm duties either as self-propelled autonomous vehicles or as manually controlled smart machines. The autonomous vehicles may be an unmanned aerial vehicle (UAV) or an unmanned ground vehicle (UGV) guided by GPS and a global navigation satellite system (GNSS). The autonomous agricultural tasks that the larger robots can accomplish range from seeding to harvesting and, in some cases, post-harvesting.
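
A minimal sketch of histogram-based optimal thresholding in the spirit of [12], using Otsu's method as implemented in OpenCV (the file name and channel choice are illustrative assumptions, not details of the cited work):

```python
import cv2

# Load an orchard image and work on a single channel; with red apples
# against green foliage, the red channel often gives a roughly bimodal histogram.
image = cv2.imread("apple_tree.jpg")          # hypothetical file name
red_channel = image[:, :, 2]                  # OpenCV stores images as BGR
blurred = cv2.GaussianBlur(red_channel, (5, 5), 0)

# Otsu's method selects the threshold that best separates the two
# histogram modes (fruit vs. background).
threshold, mask = cv2.threshold(
    blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
)
print("selected threshold:", threshold)
cv2.imwrite("fruit_mask.png", mask)
```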

Automation in agriculture must face challenges posed by lighting conditions and crop variations when performing farm duties [14][15]. In Norway, an autonomous strawberry harvester was developed with light variations in mind: its machine vision system changed its color threshold in response to alterations in the light intensity [16]. Robot harvesting machines achieve lower accuracy in spotting and picking crops due to occlusions caused by leaves and twigs [17][18]. Modern machine vision techniques and machine learning models with assorted sensors and cameras can overcome these inadequacies. The basic system of a robot harvester must perform functions such as: detect the fruit or the disease, pick the fruit or berry without damaging it, guide the harvester to navigate the field, maneuver irrespective of the lighting and weather conditions, be cost-effective and have a simple mechanical design [19].
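
The brightness-adaptive color thresholding idea can be sketched as follows; this is an illustrative approximation rather than the mechanism used in [16], and the tuning constants are arbitrary:

```python
import cv2
import numpy as np

def strawberry_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Segment red (ripe) regions in HSV, relaxing the saturation and value
    bounds when the scene is dark so detections are not lost."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    brightness = hsv[:, :, 2].mean()          # mean value channel, 0-255

    # Illustrative adaptation: darker scenes get more permissive bounds.
    s_min = 80 if brightness > 120 else 50
    v_min = 70 if brightness > 120 else 40

    # Red wraps around hue 0 in OpenCV's 0-179 hue range, so two ranges are combined.
    lower = cv2.inRange(hsv, (0, s_min, v_min), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, s_min, v_min), (179, 255, 255))
    return cv2.bitwise_or(lower, upper)

# Usage (hypothetical file name):
# mask = strawberry_mask(cv2.imread("strawberry_row.jpg"))
```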

2. Deep Architectures in Smart Farming

A deep convolutional neural network (DCNN) is a multi-layered neural network trained on complex patterns using appropriately labelled image features. The InceptionV3 model serves as a conventional image feature extractor to classify fruit and background pixels in an image; the classifier localizes the fruits to count the quantity of fruit present [20][21] and to classify the species of tomato [22]. A K-nearest neighbour (KNN) classifier was employed to classify fruit pixels in trained datasets, with a threshold pixel value used to label a pixel as fruit. The support vector machine (SVM) performs pattern classification as well as linear regression assessment, based on the selected features. A Darknet classifier with a trained "you only look once" (YOLO) model detected iceberg lettuce for harvesting with the Vegebot [23] and detected grapes [24]. YOLO models offer a higher real-time object detection rate when compared to the faster region-based CNN (FRCNN) [25].
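
A minimal sketch of this transfer-learning pattern, assuming TensorFlow 2.x with Keras and scikit-learn; the file names and labels are placeholders, and this is not the exact pipeline of [20][21]:

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

# InceptionV3 pretrained on ImageNet, used only as a frozen feature extractor
# (global-average-pooled 2048-dimensional features, no classification head).
extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg"
)

def features(image_paths):
    """Load images, preprocess them for InceptionV3 and extract feature vectors."""
    batch = []
    for path in image_paths:
        img = tf.keras.utils.load_img(path, target_size=(299, 299))
        batch.append(tf.keras.utils.img_to_array(img))
    batch = tf.keras.applications.inception_v3.preprocess_input(np.stack(batch))
    return extractor.predict(batch, verbose=0)

# Placeholder training data: image patches labelled fruit (1) or background (0).
train_paths = ["patch_fruit_01.jpg", "patch_bg_01.jpg"]   # hypothetical files
train_labels = [1, 0]

# n_neighbors=1 only because the placeholder set is tiny.
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(features(train_paths), train_labels)
# knn.predict(features(["patch_new.jpg"]))  -> fruit / background label
```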

The AdaBoost model builds a strong classifier by linearly combining weak classifiers based on simple thresholding of Haar-like features, and detected tomato berries with an accuracy of 96% [26]. A multi-modal faster region-based CNN model constructs an efficient fruit detection technique by fusing RGB and near-infrared images, improving performance to an F1 score of 0.83 [27]. The dataset images were fed to the R-CNN model to generate the feature map for classification.
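
A compact sketch of the boosting principle (depth-1 decision stumps as weak classifiers over a precomputed feature matrix, standing in for Haar-like feature responses); the data here are synthetic placeholders, not the setup of [26]:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature matrix: one row per image window, one column per
# Haar-like feature response (real responses could come from
# skimage.feature.haar_like_feature).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)      # 1 = tomato, 0 = background (synthetic)

# A depth-1 tree is a decision stump: a single threshold on one feature,
# which is exactly the weak classifier that AdaBoost combines linearly.
stump = DecisionTreeClassifier(max_depth=1)
# `estimator=` in scikit-learn >= 1.2; older versions use `base_estimator=`.
model = AdaBoostClassifier(estimator=stump, n_estimators=100)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```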

Spatiotemporal normalized difference vegetation index data from remote sensing images were used to train a spiking neural network (SNN) for crop yield prediction and crop yield estimation of winter wheat [28]. A better prediction algorithm for corn, soybean [29] and paddy crops was proposed with a feed-forward backpropagation artificial neural network (ANN) and later with a fusion of multiple linear regression (MLR). The linear discriminant analysis (LDA) approach removes the imbalance in the performance values attained through an ANN classifier [30]. The fusion of huge datasets was implemented and compared across various machine learning models such as SVM, DL, extremely randomized trees (ERT) and random forest (RF) for the estimation of corn yield [31]; the deep learning (DL) model achieved high accuracy in terms of correlation coefficients. Flower detection in an image, accomplished with a deep learning model combining CNN-based semantic segmentation and an SVM classifier, helps crop yield management. Image segmentation techniques and canopy features were used by a backpropagation neural network (BPNN) model to train the system for apple yield prediction [32]. The SVM and kNN classifiers were efficient, with accuracies of 98.49% and 98.50%. Deep convolutional neural networks were developed to identify plant diseases and to predict macronutrient deficiencies during the flowering and fruit development stages [33]. The visual geometry group (VGG) CNN architecture identified plant diseases from leaf images and communicated the results to farmers through smartphones [34][35]. The endemic fungal infection diagnosis in winter wheat [36] was trained and validated with ImageNet datasets and implemented with an adaptive deep CNN. A deep CNN model with GoogLeNet classified nine diseases in tomato leaves [37]. Defects in the external regions and the occlusion of flowers and berries of tomatoes were identified with deep autoencoders and a residual neural network (ResNet-50) classifier [38][39]. A leaf-based disease identification model was developed with a random forest classifier trained with HOG features and could detect diseases on papaya leaves [40].
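
A minimal sketch of a HOG-plus-random-forest pipeline in the spirit of [40]; the leaf images here are synthetic placeholders and the parameters are not those of the cited work:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier

def hog_features(gray_image: np.ndarray) -> np.ndarray:
    """Resize the leaf image and compute a HOG descriptor."""
    patch = resize(gray_image, (128, 128), anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder data: grayscale leaf images and disease labels
# (0 = healthy, 1 = diseased); real images would be loaded from disk.
rng = np.random.default_rng(0)
images = [rng.random((200, 200)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.stack([hog_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```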

Ripeness estimation is required in the agricultural industry to determine the quality and maturity level of the fruit. The ripening of tomatoes was detected by fusing the extracted features and classifying them with a weighted relevance vector machine (RVM) in a bilayer classification approach for harvesting agrobots. The maturity levels of tomatoes were detected with color features classified by a BPNN model. A fuzzy rule-based classification (FRBCS) approach based on color features, using decision trees (DT) and the Mamdani fuzzy technique, was proposed to estimate six maturity stages of tomato berries [41]. A mature tomato can be identified with an SVM classifier trained on HOG features, together with false-detection elimination and overlap removal.
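
An illustrative sketch of a color-feature maturity classifier (mean and spread of the HSV channels fed to a support vector machine); this is not the method of [41], and the training patches and labels are placeholders:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def color_features(bgr_patch: np.ndarray) -> np.ndarray:
    """Mean and standard deviation of the H, S and V channels."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV).reshape(-1, 3)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

# Placeholder: synthetic patches standing in for cropped tomatoes,
# labelled 0 = green, 1 = turning, 2 = ripe.
rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
           for _ in range(30)]
stages = rng.integers(0, 3, size=30)

X = np.stack([color_features(p) for p in patches])
clf = SVC(kernel="rbf", C=10.0).fit(X, stages)
# clf.predict(color_features(new_patch)[None, :])  -> predicted maturity stage
```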

3. Advantages and Disadvantages

The acquired images may be prone to degradation caused by camera misfocus, poor lighting conditions or sensor noise. Image enhancement techniques have a visual impact on the desired information in a real-time captured image, but they do not afford improved results unless color modifications are made for scenes under multiple light sources. The median filter removes the blurring effect and reduces noise. A nonlinear filtering technique can be employed to upgrade the quality of blurred images once the light source has been refined. In certain applications, adding noise to the image can even improve it.
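
A minimal denoising sketch using OpenCV's median filter (the file names are hypothetical):

```python
import cv2

image = cv2.imread("field_capture.jpg")        # hypothetical noisy field image

# Median filtering replaces each pixel with the median of its 5x5 neighbourhood,
# which suppresses salt-and-pepper sensor noise while preserving edges better
# than a simple mean (box) blur.
denoised = cv2.medianBlur(image, 5)

cv2.imwrite("field_capture_denoised.jpg", denoised)
```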

Image segmentation techniques are easy to implement and to modify to classify pixels with little computation. Threshold segmentation requires appropriate lighting conditions; an optimal threshold value has to be selected, but it may not be pertinent for every application, and any background complexity increases the error rate and computation time. Color-based segmentation is constrained by non-uniform light sensitivity. Otsu thresholding excels in the detection of edges and selects the threshold value based on the features of the image. Watershed segmentation provides continuous boundaries, however with consequent complexity in the calculation of the gradients. Texture- and shape-based segmentation are time-consuming and provide blurred boundaries. To optimize computer vision technology, further exploration in unstable agricultural environments has to be undertaken.
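
A short watershed sketch that separates touching fruit blobs in a binary mask by seeding from distance-transform peaks; this is an illustrative pattern with a synthetic mask, not taken from the cited works:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_fruit(mask: np.ndarray) -> np.ndarray:
    """Label individual fruit in a binary mask, splitting touching blobs.

    Peaks of the distance transform act as seeds; watershed then grows the
    seeds over the negated distance map, cutting blobs at their necks.
    """
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=10, labels=mask.astype(int))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

# Usage: a synthetic mask with two overlapping disks is split into two labels.
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 35) ** 2 + (yy - 50) ** 2 < 400) | ((xx - 65) ** 2 + (yy - 50) ** 2 < 400)
print("fruit found:", split_touching_fruit(mask).max())
```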

The feature selection process reduces the quantity of input data used while developing a predictive classifier model. Haar wavelet features combined with an AdaBoost classifier achieved high accuracy. Feature selection prioritizes the existing features in a dataset. PCA can outperform other features with high accuracy through pixel-level identification, comparing the original image with the input features. The SIFT detection algorithm requires scaling of local features in the images. The HOG method can extract global features by computing edge gradients. HOG+FCR+NMS achieved a computation time of 0.95 s for maturity detection. Hybrid approaches in feature extraction can improve classification accuracy and computation time.
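
A brief sketch of PCA-based dimensionality reduction over raw pixel features prior to classification (synthetic data, illustrative only):

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: 200 flattened 32x32 grayscale patches (1024 raw pixel features).
rng = np.random.default_rng(0)
patches = rng.random((200, 32 * 32))

# Keep enough principal components to explain 95% of the pixel variance;
# the reduced features can then feed any classifier (SVM, KNN, ...).
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(patches)
print("original features:", patches.shape[1])
print("retained components:", pca.n_components_)
```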

The DL models with SVM and BPNN classifiers outperformed other classifiers. SVM classifiers provide low error with effective prediction but require abundant datasets and are more complex and delicate when handling varied data types. The TensorFlow library endeavors to uncover an optimal policy and does not wait until termination to update the utility function. K-NN classifiers are robust in classifying data with zero cost in the learning process, but they require large datasets and high computation for mixed data. DL can extract the required features based on color, texture, shape and SIFT feature extraction processes. The combination of ANN and MLR classifiers provided the highest accuracy in crop prediction. DL classifiers were used in a wide range of agricultural applications with an average F1 score of 0.8; errors occurred due to the occlusion of leaves or clusters of fruits. Fruit detection for robot harvesting and yield estimation performed best with a combination of CNN and linear regression models. The need for large training datasets increases the computation time of the DL approach. SVM classifiers provide high accuracy with improved computation time. The fusion of classifiers with assorted features may improve the computer vision technique and the DL model.

References

  1. Adamchuk, V.I.; Hummel, J.W.; Morgan, M.K.; Upadhyaya, S. On-the-go soil sensors for precision agriculture. Comput. Electron. Agric. 2004, 44, 71–91.
  2. Perez-Ruiz, M.; Slaughter, D.C.; Gliever, C.; Upadhyaya, S.K. Tractor-based Real-time Kinematic-Global Positioning System (RTK-GPS) guidance system for geospatial mapping of row crop transplant. Biosyst. Eng. 2012, 111, 64–71.
  3. Pastor-Guzman, J.; Dash, J.; Atkinson, P.M. Remote sensing of mangrove forest phenology and its environmental drivers. Remote Sens. Environ. 2018, 205, 71–84.
  4. Zhang, X.; Friedl, M.A.; Schaaf, C.B.; Strahler, A.H.; Hodges, J.C.F.; Gao, F.; Reed, B.C.; Huete, A. Monitoring vegetation phenology using MODIS. Remote Sens. Environ. 2003, 84, 471–475.
  5. Tiwari, R.; Chand, K.; Anjum, B. Crop insurance in India: A review of Pradhan Mantri Fasal Bima Yojana (PMFBY). FIIB Bus. Rev. 2020, 9, 249–255.
  6. Dorj, U.-O.; Lee, M.; Yun, S.-S. An yield estimation in citrus orchards via fruit detection and counting using image processing. Comput. Electron. Agric. 2017, 140, 103–112.
  7. Singh, R.; Goyal, R.C.; Saha, S.K.; Chhikara, R.S. Use of satellite spectral data in crop yield estimation surveys. Int. J. Remote Sens. 1992, 13, 2583–2592.
  8. Ferencz, C.; Bognár, P.; Lichtenberger, J.; Hamar, D.; Tarcsai, G.; Timár, G.; Molnár, G.; Pásztor, S.; Steinbach, P.; Székely, B.; et al. Crop yield estimation by satellite remote sensing. Int. J. Remote Sens. 2004, 25, 4113–4149.
  9. Dias, P.A.; Tabb, A.; Medeiros, H. Multispecies fruit flower detection using a refined semantic segmentation network. IEEE Robot. Autom. Lett. 2018, 3, 3003–3010.
  10. Hong, H.; Lin, J.; Huang, F. Tomato disease detection and classification by deep learning. In Proceedings of the 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), Fuzhou, China, 12–14 June 2020; p. 0001.
  11. Liu, J.; Wang, X. Tomato diseases and pests detection based on improved YOLO V3 convolutional neural network. Front. Plant Sci. 2020, 11, 1–12.
  12. Bulanon, D.; Kataoka, T.; Ota, Y.; Hiroma, T. AE—automation and emerging technologies: A segmentation algorithm for the automatic recognition of Fuji apples at harvest. Biosyst. Eng. 2002, 83, 405–412.
  13. Wan, P.; Toudeshki, A.; Tan, H.; Ehsani, R. A methodology for fresh tomato maturity detection using computer vision. Comput. Electron. Agric. 2018, 146, 43–50.
  14. Payne, A.B.; Walsh, K.B.; Subedi, P.P.; Jarvis, D. Estimation of mango crop yield using image analysis–Segmentation method. Comput. Electron. Agric. 2013, 91, 57–64.
  15. Xiang, R.; Ying, Y.; Jiang, H. Research on image segmentation methods of tomato in natural conditions. In Proceedings of the 2011 4th International Congress on Image and Signal Processing, Shanghai, China, 15–17 October 2011; pp. 1268–1272.
  16. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot. 2020, 37, 202–224.
  17. Horng, G.-J.; Liu, M.-X.; Chen, C.-C. The smart image recognition mechanism for crop harvesting system in intelligent agriculture. IEEE Sensors J. 2020, 20, 2766–2781.
  18. Hua, Y.; Zhang, N.; Yuan, X.; Quan, L.; Yang, J.; Nagasaka, K.; Zhou, X.-G. Recent advances in intelligent automated fruit harvesting robots. Open Agric. J. 2019, 13, 101–106.
  19. Silwal, A.; Davidson, J.R.; Karkee, M.; Mo, C.; Zhang, Q.; Lewis, K. Design, integration, and field evaluation of a robotic apple harvester. J. Field Robot. 2017, 34, 1140–1159.
  20. Fourie, J.; Hsiao, J.; Werner, A. Crop yield estimation using deep learning. In Proceedings of the 7th Asian-Australasian Conference Precis. Agric., Hamilton, New Zealand, 16 October 2017; pp. 1–10.
  21. Lee, J.; Nazki, H.; Baek, J.; Hong, Y.; Lee, M. Artificial intelligence approach for tomato detection and mass estimation in precision agriculture. Sustainability 2020, 12, 9138.
  22. Alajrami, M.A.; Abunaser, S.S. Type of tomato classification using deep learning. Int. J. Acad. Pedagog. Res. 2020, 3, 21–25.
  23. Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 2019, 37, 225–245.
  24. Santos, T.T.; De Souza, L.L.; Dos Santos, A.A.; Avila, S. Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association. Comput. Electron. Agric. 2020, 170, 105247.
  25. Koirala, A.; Walsh, K.B.; Wang, Z.; Anderson, N. Deep learning for mango (Mangifera Indica) panicle stage classification. Agronomy 2020, 10, 143.
  26. Zhao, Y.; Gong, L.; Zhou, B.; Huang, Y.; Liu, C. Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and colour analysis. Biosyst. Eng. 2016, 148, 127–137.
  27. Bender, A.; Whelan, B.; Sukkarieh, S. A high-resolution, multimodal data set for agricultural robotics: A Ladybird's-eye view of Brassica. J. Field Robot. 2020, 37, 73–96.
  28. Bose, P.; Kasabov, N.K.; Bruzzone, L.; Hartono, R.N. Spiking neural networks for crop yield estimation based on spatiotemporal analysis of image time series. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6563–6573.
  29. Kaul, M.; Hill, R.L.; Walthall, C. Artificial neural networks for corn and soybean yield prediction. Agric. Syst. 2005, 85, 1–18.
  30. Pourdarbani, R.; Sabzi, S.; Hernández-Hernández, M.; Hernández-Hernández, J.L.; García-Mateos, G.; Kalantari, D.; Molina-Martínez, J.M. Comparison of different classifiers and the majority voting rule for the detection of plum fruits in garden conditions. Remote Sens. 2019, 11, 2546.
  31. Kim, N.; Lee, Y.-W. Machine learning approaches to corn yield estimation using satellite images and climate data: A case of Iowa state. J. Korean Soc. Surv. Geodesy Photogramm. Cartogr. 2016, 34, 383–390.
  32. Cheng, H.; Damerow, L.; Sun, Y.; Blanke, M. Early yield prediction using image analysis of apple fruit and tree canopy features with neural networks. J. Imaging 2017, 3, 6.
  33. Tran, T.-T.; Choi, J.-W.; Le, T.-T.H.; Kim, J.-W. A comparative study of deep CNN in forecasting and classifying the macronutrient deficiencies on development of tomato plant. Appl. Sci. 2019, 9, 1601.
  34. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
  35. Yang, K.; Zhong, W.; Li, F. Leaf segmentation and classification with a complicated background using deep learning. Agronomy 2020, 10, 1721.
  36. Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 2019, 161, 280–290.
  37. Brahimi, M.; Boukhalfa, K.; Moussaoui, A. Deep learning for tomato diseases: Classification and symptoms visualization. Appl. Artif. Intell. 2017, 31, 299–315.
  38. Da Costa, A.Z.; Figueroa, H.E.H.; Fracarolli, J.A. Computer vision based detection of external defects on tomatoes using deep learning. Biosyst. Eng. 2020, 190, 131–144.
  39. Sun, J.; He, X.; Ge, X.; Wu, X.; Shen, J.; Song, Y. Detection of key organs in tomato based on deep migration learning in a complex background. Agriculture 2018, 8, 196.
  40. Ramesh, S.; Hebbar, R.; Niveditha, M.; Pooja, R.; Shashank, N.; Vinod, P.V. Plant disease detection using machine learning. In Proceedings of the 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C), Bengaluru, India, 24–26 April 2018; pp. 41–45.
  41. Goel, N.; Sehgal, P. Fuzzy classification of pre-harvest tomatoes for ripeness estimation–An approach based on automatic rule learning using decision tree. Appl. Soft Comput. 2015, 36, 45–56.