Smart agriculture, or precision agriculture, is a crucial way to achieve greater yields by utilizing the natural resources of a diverse environment. The yield of a crop may vary from year to year depending on variations in climate, soil parameters and fertilizers used. Automation in the agricultural industry moderates the usage of resources and can increase the quality of food in the post-pandemic world. Agricultural robots have been developed for crop seeding, monitoring, weed control, pest management and harvesting. Physical counting of fruitlets, flowers or fruits at various phases of growth is a labour-intensive as well as expensive procedure for crop yield estimation. Remote sensing technologies offer accuracy and reliability in crop yield prediction and estimation. Automation of image analysis with computer vision and deep learning models provides precise field and yield maps. In this review, it has been observed that the application of deep learning techniques provides better accuracy for smart farming. The crops taken for the study are fruits such as grapes, apples, citrus and tomatoes, and field crops and vegetables such as sugarcane, corn, soybean, cucumber, maize and wheat. The research works reviewed in this paper are available as products for applications such as robot harvesting, weed detection and pest infestation monitoring. The methods that made use of conventional deep learning techniques provided an average accuracy of 92.51%.
Smart farming helps farmers plan their work with the data obtained from agricultural drones, satellites and sensors. Detailed topography, climate forecasts, and the temperature and acidity of the soil can be accessed by sensors positioned on agricultural farms. Precision agriculture provides farmers with compilations of statistics to plan and manage their field operations.
According to the 2011 census, nearly 54.6% of India's entire workforce is dedicated to agricultural and associated sector tasks, which in 2017–2018 accounted for 17.1% of the nation's Gross Value Added. To safeguard against the risks inherent to agriculture, the Ministry of Agriculture and Farmers Welfare announced an insurance scheme for crops in 1985. Problems have emerged in the scheme's technology for collecting data and for reducing delays in responding to farmers' insurance claims. Crop yield estimation is mandatory for this purpose and is recorded through Crop Cutting Experiments (CCEs) conducted in regions of the states by the Government of India. The Directorate of Economics and Statistics is presently guiding Crop Cutting Experiments for 13 chief crops under the General Crop Estimation Scheme. To improve the quality of the statistics collected from Crop Cutting Experiments, Global Positioning System (GPS) data such as field elevation, area, latitude and longitude are being recorded by remote sensing [1][2]. The vegetation indices acquired through satellite images track the phenological profiles of the crops throughout the year [3][4].
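As an illustration of how such vegetation indices are derived, the following minimal sketch computes the normalized difference vegetation index (NDVI) from red and near-infrared reflectance bands; the small arrays are placeholders for co-registered satellite band data and are not taken from the cited studies.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), clipped to the valid range [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

# Placeholder 3x3 reflectance tiles; real inputs would be co-registered
# near-infrared and red bands covering the same field.
nir_band = np.array([[0.60, 0.70, 0.65], [0.50, 0.55, 0.60], [0.70, 0.75, 0.80]])
red_band = np.array([[0.10, 0.12, 0.11], [0.20, 0.18, 0.15], [0.09, 0.10, 0.08]])
print(ndvi(nir_band, red_band))
```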
Conventional crop yield estimation requires crop acreages along with sample assessments that depend on crop cutting experiments. Crop yield data are the most essential data for area-yield insurance schemes such as the Pradhan Mantri Fasal Bima Yojana (PMFBY) in India. The PMFBY scheme was launched to support Indian farmers financially during times of crop failure caused by natural disasters or pest attacks [5]. To implement these national-scale agricultural policies, crop cutting experiments are carried out by government officers in various regions across the districts of each state. Because the costs involved are high, the desired crop data from large regions are limited to small-scale crop cutting experiments and surveys of small zones. Present-day industry methods for yield estimation use automated computer vision technology to detect and estimate the count of various harvests [6]. Progress in computing capabilities has provided appropriate techniques for small-area yield estimation. The proficiency of crop yield estimation can be improved by using remote sensing data over a considerably larger area [7]. Satellite images are quantitatively processed to obtain high accuracy in agricultural applications such as crop yield estimation [8].
Crop yield prediction has been made possible by counting the number of flowers and comparing this number with the count of fruits prior to the harvesting stage for citrus trees [6]. The bloom intensity existing in an orchard influences crop management in the early season. The estimation of flower count with a deep learning model is effective for crop yield prediction, thinning and pruning, which impact the fruit yield [9]. The prediction of vine yield helps the farmer prepare for harvest, transport the crop and plan for distribution in the market. Plant diseases during the flowering and fruit development stages may affect crop yield forecasts. Deep learning classifier models have been developed to perform crop disease identification in agricultural farms under both controlled and real cultivation environments [10][11].
At Iwate University, Japan, a robotic harvester with a machine vision system was able to recognize Fuji apples on the tree and estimate the fruit yield with an accuracy of 88%. The bimodal distribution of the enhanced image's histogram allows optimal thresholding segmentation to extract the fruit portion from the background [12]. The maturity level of tomato berries can be detected with a supervised backpropagation neural network classifier, using the green, orange and red color extraction technique explained in [13]. Agricultural robots execute their farm duties either as self-propelled autonomous vehicles or as manually controlled smart machines. The autonomous vehicles may be unmanned aerial vehicles (UAVs) or unmanned ground vehicles (UGVs) guided by GPS and a global navigation satellite system (GNSS). The autonomous agricultural tasks that can be accomplished by the larger robots range from seeding to harvesting and, in some cases, post-harvesting tasks.
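The optimal thresholding idea described above can be sketched with standard OpenCV operations: Otsu's method picks the threshold that best separates the two modes of the histogram. The synthetic image below stands in for an enhanced orchard image; it is an illustrative approximation, not the system described in [12].

```python
import cv2
import numpy as np

# Synthetic stand-in for an enhanced orchard image: bright "fruit" blobs on a
# darker background, which gives the bimodal histogram the method relies on.
img = np.full((200, 200), 60, np.uint8)
cv2.circle(img, (70, 90), 25, 180, -1)
cv2.circle(img, (140, 60), 20, 190, -1)

blur = cv2.GaussianBlur(img, (5, 5), 0)              # suppress sensor noise
otsu_t, mask = cv2.threshold(blur, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", otsu_t)
print("fruit pixel count:", cv2.countNonZero(mask))  # crude fruit-area proxy
```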
Automation of farm duties in agriculture faces challenges due to lighting conditions and crop variations [14][15]. In Norway, an autonomous strawberry harvester was developed with light variations in mind: its machine vision system changed its color threshold in response to alterations in light intensity [16]. Robot harvesting machines achieve lower accuracy in spotting and picking crops due to occlusions caused by leaves and twigs [17][18]. Modern machine vision techniques and machine learning models with assorted sensors and cameras can overcome these inadequacies. The basic system of a robot harvester must perform functions such as: detect the fruit or the disease, pick the fruit or berry without damaging it, navigate the field, maneuver irrespective of lighting and weather conditions, be cost-effective and have a simple mechanical design [19].
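A light-adaptive color threshold of the kind used by the strawberry harvester can be approximated as follows; the HSV bounds and the brightness-based adjustment are illustrative assumptions rather than the published calibration, and the test frame is synthetic.

```python
import cv2
import numpy as np

def red_fruit_mask(bgr):
    """Segment reddish fruit in HSV, relaxing the value bound in dim scenes."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    brightness = hsv[:, :, 2].mean() / 255.0          # 0 = dark, 1 = bright
    v_min = int(40 + 100 * brightness)                # illustrative adjustment
    lower = np.array([0, 80, v_min], np.uint8)
    upper = np.array([12, 255, 255], np.uint8)
    return cv2.inRange(hsv, lower, upper)

# Synthetic test frame: a reddish berry on a green background (BGR order).
frame = np.zeros((120, 120, 3), np.uint8)
frame[:] = (40, 120, 40)
cv2.circle(frame, (60, 60), 20, (30, 30, 200), -1)
print("fruit pixels:", cv2.countNonZero(red_fruit_mask(frame)))
```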
A deep convolutional neural network (DCNN) is a multi-layered neural network that is trained to recognize complex patterns from appropriately labelled image features. The InceptionV3 model serves as a conventional image feature extractor to classify fruit and background pixels in an image. The classifier localizes the fruits to count the quantity of fruit present [20][21] and to classify the species of tomato [22]. A K-nearest neighbour (KNN) classifier was employed to classify fruit pixels in the trained datasets, with pixels above a threshold value labelled as fruit pixels. The support vector machine (SVM) performs pattern classification as well as linear regression assessment based on the selected features. A Darknet classifier with a trained "you only look once" (YOLO) model detected iceberg lettuce [23] and grapes [24], along with their edges, for harvesting using a Vegebot. YOLO models offer a high object detection rate in real time compared to the faster region-based CNN (FRCNN) [25].
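A minimal sketch of using InceptionV3 as a fixed feature extractor with a KNN classifier, in the spirit of the pipeline above, is given below; the patch arrays and labels are random placeholders, and the preprocessing choices are generic rather than those of the cited works.

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

# InceptionV3 with the classification head removed acts as a 2048-d feature
# extractor (global average pooling over the last convolutional block).
extractor = tf.keras.applications.InceptionV3(weights="imagenet",
                                              include_top=False,
                                              pooling="avg")

def embed(patches):
    """Map (n, 299, 299, 3) uint8 patches to InceptionV3 feature vectors."""
    x = tf.keras.applications.inception_v3.preprocess_input(
        patches.astype("float32"))
    return extractor.predict(x, verbose=0)

# Placeholder patches and labels: 1 = fruit, 0 = background.
patches = np.random.randint(0, 256, (8, 299, 299, 3), dtype=np.uint8)
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

knn = KNeighborsClassifier(n_neighbors=3).fit(embed(patches), labels)
print(knn.predict(embed(patches[:2])))
```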
The AdaBoost model builds a strong traditional classifier by linearly combining weak classifiers with minimal thresholding tasks and Haar-like features, and detected tomato berries with an accuracy of 96% [26]. A multi-modal faster region-based CNN model constructs an efficient fruit yield detection technique with multiple modalities by fusing RGB and near-infrared images, improving performance up to an F1 score of 0.83 [27]. The dataset images were fed to the R-CNN model to generate the feature map for classification.
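A rough sketch of boosting over Haar-like features for fruit/background patch classification follows; the patch size, feature type and data are illustrative placeholders, not the configuration reported in [26].

```python
import numpy as np
from skimage.transform import integral_image
from skimage.feature import haar_like_feature
from sklearn.ensemble import AdaBoostClassifier

def haar_features(patch):
    """Haar-like 'type-2-x' features computed on the patch's integral image."""
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, ii.shape[0], ii.shape[1],
                             feature_type='type-2-x')

# Placeholder data: 16x16 grayscale patches, label 1 = fruit, 0 = background.
rng = np.random.default_rng(0)
patches = rng.random((40, 16, 16))
y = rng.integers(0, 2, 40)

X = np.array([haar_features(p) for p in patches])
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)   # boosted decision stumps
print(clf.predict(X[:5]))
```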
The spatiotemporal exploration of remote sensing image data of normalized difference vegetation indices was trained with a spiking neural network (SNN) for crop yield prediction and crop yield estimation of winter wheat [28]. A better prediction algorithm for corn, soybean [29] and paddy crops was proposed with a feed-forward back-propagation artificial neural network (ANN) and later with a fusion of the ANN and multiple linear regression (MLR). The linear discriminant analysis (LDA) approach eliminates the imbalance generated from the performance values attained through an ANN classifier [30]. The fusion of huge datasets was implemented and compared across various machine learning models such as SVM, DL, extremely randomized trees (ERT) and random forest (RF) for the estimation of corn yield [31]. The deep learning (DL) model achieved high accuracies with respect to correlation coefficients. The detection of flowers in an image, accomplished by a deep learning model with semantic segmentation using a CNN and an SVM classifier, helps crop yield management. Image segmentation techniques and canopy features were used by a backpropagation neural network (BPNN) model to train the system for apple yield prediction [32]. The SVM and kNN classifiers were efficient, with accuracies of 98.49% and 98.50%, respectively. Deep convolutional neural networks were developed to identify plant diseases and to predict macronutrient deficiencies during the flowering and fruit development stages [33]. The Visual Geometry Group (VGG) CNN architecture identified plant diseases from leaf images of the plants and communicated the results to farmers through smartphones [34][35]. The endemic fungal infection diagnosis in winter wheat [36] was trained and validated with ImageNet datasets and implemented with an adaptive deep CNN. A deep CNN model with GoogLeNet classified nine diseases in tomato leaves [37]. Defects in the external regions and the occlusion of tomato flowers and berries were identified with deep autoencoders and a ResNet-50 (residual neural network) classifier [38][39]. A leaf-based disease identification model was developed with a random forest classifier trained with histogram of oriented gradients (HOG) features and could detect diseases on papaya leaves [40].
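As one concrete example from the approaches listed above, a leaf-disease classifier combining HOG features with a random forest can be sketched as follows; the leaf crops and labels are random placeholders standing in for a real labelled dataset such as the papaya-leaf images of [40].

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: 128x128 grayscale leaf crops with three disease classes.
rng = np.random.default_rng(1)
leaf_images = rng.random((60, 128, 128))
labels = rng.integers(0, 3, 60)

# HOG descriptors summarise the edge-gradient structure of each leaf image.
X = np.array([hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for img in leaf_images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```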
Ripeness estimation is required in the agricultural industry to determine the quality and level of maturity of the fruit. The ripening of tomatoes was detected by fusing the extracted features and classifying them with a weighted relevance vector machine (RVM) in a bilayer classification approach for harvesting agrobots. The maturity levels of tomatoes were detected with color features classified by a BPNN model. A fuzzy rule-based classification (FRBCS) approach based on the color feature, with decision trees (DT) and the Mamdani fuzzy technique, was proposed to estimate six maturity stages of tomato berries [41]. A mature tomato can be identified with an SVM classifier trained on HOG features, along with false-detection elimination and overlap removal steps.
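A simplified color-feature maturity classifier can be sketched as below, with a decision tree standing in for the rule-based stage assignment; the HSV statistics, synthetic crops and two maturity stages are illustrative assumptions, not the six-stage scheme of [41].

```python
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def color_features(bgr_crop):
    """Mean and standard deviation of each HSV channel for one berry crop."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV).reshape(-1, 3)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

# Placeholder crops: greenish patches stand in for immature berries and
# reddish patches for mature ones (BGR channel order).
rng = np.random.default_rng(2)
green = rng.integers(0, 40, (10, 32, 32, 3), dtype=np.uint8)
green[..., 1] += 150
red = rng.integers(0, 40, (10, 32, 32, 3), dtype=np.uint8)
red[..., 2] += 150
crops = np.concatenate([green, red])
stages = np.array([0] * 10 + [1] * 10)        # 0 = immature, 1 = mature

X = np.array([color_features(c) for c in crops])
tree = DecisionTreeClassifier(max_depth=3).fit(X, stages)
print(tree.predict(X[[0, -1]]))               # expected: [0 1]
```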
The acquired images may be prone to degradation caused by camera misfocus, poor lighting conditions or sensor noise. Image enhancement techniques have a visual impact on the desired information in a real-time captured image. Image enhancement techniques do not afford improved results unless color modifications are made under multiple light sources. The median filter removes the blurring effect and reduces the noise. Nonlinear filtering techniques can be employed to upgrade the quality of blurred images once the light source has been refined. In certain applications, adding noise to the image can even improve it.
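The median-filtering step can be sketched as follows on a synthetic frame corrupted with salt-and-pepper noise; the noise level and frame contents are placeholders.

```python
import cv2
import numpy as np

# Synthetic frame with salt-and-pepper noise on roughly 5% of the pixels.
rng = np.random.default_rng(3)
frame = np.full((100, 100), 128, np.uint8)
noisy = rng.random(frame.shape) < 0.05
frame[noisy] = rng.choice([0, 255], noisy.sum()).astype(np.uint8)

denoised = cv2.medianBlur(frame, 3)           # 3x3 median filter
print("noisy std:", round(float(frame.std()), 2),
      "denoised std:", round(float(denoised.std()), 2))
```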
Image segmentation techniques are easy to implement and modify to classify pixels with little computation. Threshold segmentation requires appropriate lighting conditions. The optimal threshold value has to be selected, but it may not be pertinent for every application. Any background complexity increases the error rate and computation time. Color-based segmentation has constraints due to non-uniform light sensitivity. Otsu thresholding excels in the detection of edges and selects the threshold value based on the features present in the image. Watershed segmentation provides continuous boundaries, although with consequent complexity in the calculation of the gradients. Texture- and shape-based segmentation is time-consuming and provides blurred boundaries. To optimize the computer vision technology, further exploration in unstable agricultural environments has to be undertaken.
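A minimal marker-based watershed example, following the common distance-transform recipe in OpenCV, is sketched below to show how touching fruit-like blobs can be split into separate regions; the image is synthetic and the 0.6 distance threshold is an illustrative choice.

```python
import cv2
import numpy as np

# Two overlapping bright discs stand in for touching fruit on a dark background.
img = np.zeros((200, 200, 3), np.uint8)
cv2.circle(img, (70, 100), 40, (255, 255, 255), -1)
cv2.circle(img, (140, 100), 40, (255, 255, 255), -1)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Distance-transform peaks give one sure-foreground seed per fruit.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)

_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                          # background becomes label 1
markers[(binary > 0) & (sure_fg == 0)] = 0     # unknown region to be flooded
markers = cv2.watershed(img, markers)          # -1 marks watershed boundaries
print("segments found:", len(np.unique(markers)) - 2)  # drop boundary/background
```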
The feature selection process reduces the quantity of input data while developing a predictive classifier model. Haar wavelet features combined with an AdaBoost classifier achieved high accuracy. Feature selection prioritizes the existing features in a dataset. Principal component analysis (PCA) can outperform other features with high accuracy through pixel-level identification of the original image input compared with hand-crafted input features. The scale-invariant feature transform (SIFT) detection algorithm requires scaling of local features in the images. The HOG method can extract global features by computing the edge gradients. HOG+FCR+NMS achieved a computation time of 0.95 s for maturity detection. Hybrid approaches to feature extraction can improve classification and computation time.
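A short sketch of PCA-based reduction of pixel-level inputs is given below; the flattened patches are random placeholders and the 95% variance target is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder input: 100 flattened 32x32 grayscale patches.
rng = np.random.default_rng(4)
patches = rng.random((100, 32 * 32))

pca = PCA(n_components=0.95)                  # keep 95% of the variance
reduced = pca.fit_transform(patches)
print("original dims:", patches.shape[1], "reduced dims:", reduced.shape[1])
```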
The DL models with SVM and BPNN classifiers outperformed the other classifiers. SVM classifiers provide lower error with effective prediction but require abundant datasets and are more complex and delicate when handling varied data types. The TensorFlow library endeavors to uncover the optimal policy and does not wait until termination to update the utility function. K-NN classifiers are robust in classifying the data with zero cost in the learning process, but they require large datasets and high computation for mixed data. DL can extract the required features based on color, texture, shape and SIFT feature extraction processes. The combination of ANN and MLR classifiers provided the highest accuracy in crop prediction. DL classifiers were used in a wide range of agricultural applications with an average F1 score of 0.8. Errors occurred due to the occlusion of leaves or clusters of fruits. Fruit detection for robot harvesting and yield estimation performed best with a combination of CNN and linear regression models. The need for large training datasets increases the computation time of the DL approach. SVM classifiers provide high accuracy with improved computation time. The fusion of classifiers with assorted features may improve the computer vision techniques and DL models.
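The count-to-yield stage of such a CNN plus linear regression pipeline can be sketched as follows; the per-plot detection counts and measured yields are invented placeholder numbers used only to show the regression step.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data: fruit counts from a detector and the measured plot yields.
detected_counts = np.array([[12], [30], [45], [22], [51], [8]])
measured_yield_kg = np.array([5.1, 12.8, 19.0, 9.4, 21.5, 3.6])

reg = LinearRegression().fit(detected_counts, measured_yield_kg)
print(f"estimated yield for 35 detected fruits: {reg.predict([[35]])[0]:.1f} kg")
```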