Autonomous Navigation of Agricultural Robots

Agricultural harvesting robots typically consist of mobile platforms carrying robotic arms. These robots require advanced vision systems, employing adaptive thresholding algorithms, texture-based methods, and color and shape feature extraction, to identify target fruits.

Keywords: object detection; precision agriculture; agricultural robots; crop identification

1. Introduction

In the last hundred years, the world’s population has quadrupled. In 1915, there were 1.8 billion individuals inhabiting the planet. Based on the latest UN estimate, the global population stands at 8 billion, with projections suggesting a potential increase to 9.7 billion by 2050 and 10.4 billion by 2100 [1]. Population growth, compounded by growing challenges to global food security, including increasing dependence on animal-based foods, declining water and land resources, and the impacts of climate change, is amplifying the urgent global need for food [2].
Demand for food is expected to grow steadily over the next 30 years, with global demand for food crop production projected to be 46.8% higher in 2050 than in 2020 [2][3]. Farmers worldwide therefore need to boost crop production, either by expanding the agricultural land under cultivation or by improving productivity on existing farmland through fertilization and irrigation. One of the most promising strategies to emerge in recent years, however, is precision agriculture (PA), which applies modern information and communication technologies to enhance agricultural productivity and profitability [4]. PA integrates sensors, information systems, sophisticated machinery, and informed decision making, and aims to enhance agricultural production by effectively managing the variations and uncertainties inherent in agricultural systems.

2. Significance of Technology and Automation in Agriculture

Robotic systems have shown some of the greatest potential for applying advanced PA techniques, and their use and incorporation into crop fields have increased in recent years [5]. These robots are designed to perform all kinds of field tasks, such as guidance and mapping, automated harvesting, site-specific fertilization, environmental monitoring, livestock monitoring, pesticide spraying, and precision weed management, among others.
Localization and guidance techniques in agriculture encompass a variety of approaches, such as Extended Kalman Filter (EKF) [6], Particle Filter (PF) [7], and Visual Odometry (VO) [8], among others, which have enabled robots to determine their position within the agricultural environment. Mapping applications involve the creation of maps using techniques like metric-semantic mapping, Light Detection and Ranging (LiDAR) mapping, and fusion of point cloud maps, enabling robots to navigate and interact with the agricultural landscape while facilitating specific tasks like fruit monitoring or weed control [9].
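As an illustration of the filtering approach behind such localization methods, the following minimal sketch implements one predict/update cycle of an EKF for a ground robot, fusing wheel odometry with a GNSS position fix. The unicycle motion model, state layout, and noise matrices are simplifying assumptions for illustration, not the formulation of any of the cited works.

```python
# Minimal, illustrative EKF cycle for 2D field-robot localization:
# odometry drives the prediction, a GNSS fix drives the correction.
# Real systems add outlier gating and careful tuning of Q and R.
import numpy as np

def ekf_step(x, P, v, omega, dt, z_gnss, Q, R):
    """One predict/update cycle. State x = [px, py, theta]."""
    px, py, th = x
    # --- Predict with a unicycle motion model ---
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],   # motion Jacobian
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    P_pred = F @ P @ F.T + Q
    # --- Update with a GNSS position measurement z = [px, py] ---
    H = np.array([[1.0, 0, 0],
                  [0, 1.0, 0]])
    y = z_gnss - H @ x_pred                  # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

# Example: one step with a 0.1 s timestep and 1 m/s forward speed.
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-3, np.eye(2) * 0.05
x, P = ekf_step(x, P, v=1.0, omega=0.05, dt=0.1,
                z_gnss=np.array([0.1, 0.0]), Q=Q, R=R)
```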
Agricultural harvesting robots typically consist of mobile platforms carrying robotic arms [10]. These robots require advanced vision systems, employing adaptive thresholding algorithms, texture-based methods, and color and shape feature extraction, to identify target fruits. The integration of color and depth data, the use of reinforcement learning methods [11][12], and the application of deep Convolutional Neural Networks for image segmentation [13] are among the strategies currently used to separate foreground fruit from the background. Additionally, these robots rely on human–robot interaction strategies with 3D visualization, systematic operational planning, efficient grasping methods, and well-designed grippers [14].
Among the applications attracting the greatest interest in automation are pesticide spraying [15] and weed management in general [16]. These robots are designed to minimize pesticide wastage by targeting only pest-affected areas on plants. Convolutional Neural Network (CNN) strategies have proven highly beneficial for accurately identifying pests on crops, ensuring precise pesticide application. Moreover, several autonomous commercial robots designed for weed removal and precision agriculture tasks have emerged in recent years. The Small Robot Company [17] has introduced three robots: Tom digitizes the field, Dick zaps weeds with electricity, and Harry sows and records seed locations, reducing the need for chemicals and heavy machinery. EcoRobotix [18], a Swiss prototype, employs computer vision to identify weeds and selectively spray them with a small dose of herbicide, significantly reducing herbicide usage. AVO [19] uses machine learning for centimeter-precise weed detection and spraying, reducing herbicide volume by over 95% while preserving crop yield. Tertill, by Franklin Robotics [20], recognizes and cuts weeds using sensors and operates on solar power. TerraSentia [21] is an agricultural robot designed for autonomous weed detection.
One of the key enablers for deploying mobile robots in the field and executing precision agriculture tasks is Global Navigation Satellite Systems (GNSS) [22]. Currently, commercial mobile robots perform tasks in which full-coverage maps are generally used, such as land preparation and seeding [23]. Many others focus on perennial crops. This is because navigating arable land with crops at an early stage of growth is challenging, especially without a precise map of each plant's location, which creates a risk of damaging the plants.

3. Machine Vision in Agriculture

In the past five decades, artificial intelligence (AI) has demonstrated its resilience and ubiquity across various domains, and agriculture is no exception [24]. Agriculture currently faces numerous challenges, such as crop disease control, management of pesticide use, ecological weed control, and efficient irrigation management, among many others. Machine learning (ML), a subset of AI, has contributed to addressing all of these challenges [25]. AI-based systems, especially those utilizing artificial neural networks, have become reliable and effective solutions for agricultural purposes thanks to their predictive capabilities, parallel reasoning, and ability to adapt through training [26]. They excel in complex mapping tasks when provided with a reliable set of variables [27], such as forecasting water resource variables [28] or predicting nutrition levels in crops [29].
Neural networks have played a pivotal role in driving substantial advancements in machine vision, leading to a remarkable surge in their utilization within agriculture. Machine vision facilitates precise crop identification and assessment, delivering significant benefits [30]. Capturing insights into crop development enables actions in accordance with the agricultural cycle stage. Moreover, it aids in detecting plant pests and diseases, assessing fertilizer requirements, and optimizing irrigation management [31]. To leverage machine vision for identification, close-range or even overhead views are often needed.
Additionally, machine vision offers the valuable capability of accurate crop localization [32]. This feature empowers various on-field tasks, including treatments at distinct crop growth stages and efficient harvesting. Furthermore, having precise crop location data proves advantageous for field navigation, mitigating the risk of crop damage during agricultural operations [22].
Many tasks within smart agriculture call for agricultural robots, encompassing activities like fruit harvesting and crop yield tracking [33]. Consequently, agricultural robots and associated technologies hold a crucial role within the domain of smart agriculture [34]. Among these technologies, automatic navigation stands out as the fundamental and central function of autonomous agricultural robots [35].
In contrast to alternative navigation technologies, machine vision has seen growing adoption in autonomous agricultural robots. This preference arises from its merits, including cost-effectiveness, simplicity of maintenance, versatile applicability, and a high degree of intelligence [36]. Recent years have seen a proliferation of novel approaches, technologies, and platforms in machine vision, many of which have found pertinent applications in agricultural robotics [36]. Li et al. [37] proposed a navigation line extraction method specifically designed to cater to various stages of cotton plant growth, from emergence to blooming. Zhang et al. [38] investigated navigation in greenhouses by employing contour and height data of crop rows; fusing these features generated a confidence density image, which aided in determining heading and lateral errors. Experiments conducted in an indoor simulation environment showed that agricultural robots can maneuver through rows of crops in S and O configurations.
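As a simplified illustration of the guidance signals such vision systems produce (not the confidence-density method of [38]), the sketch below derives heading and lateral errors from a crop-row line already fitted in the image. The pixel-to-meter scale and the forward-facing camera geometry are assumptions.

```python
# Given a row line fitted in image coordinates as x = m*y + b,
# compute the two signals a row-following controller consumes.
import numpy as np

def row_errors(m, b, img_width, meters_per_pixel):
    """Heading error (rad) and lateral offset (m) from a row line."""
    heading_error = np.arctan(m)          # row tilt vs. the image axis
    x_at_bottom = b                       # line position at y = 0
    lateral_error = (x_at_bottom - img_width / 2) * meters_per_pixel
    return heading_error, lateral_error

# Example: slight rightward tilt, line 20 px left of image center.
print(row_errors(m=0.05, b=300, img_width=640, meters_per_pixel=0.002))
```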
In the agricultural field, image analysis stands as a significant research domain, where intelligent data analysis methods are actively deployed for tasks such as image recognition, classification, anomaly detection, and more across diverse agricultural applications [39].

4. Crop Detection and Localization

Numerous algorithms have been devised and tested to facilitate robot autonomous navigation, and new solutions are still being sought [36], many of which use crop row detection as a guide [40]. The challenge in row recognition lies in identifying robust features that remain unchanged in different environmental situations. The complexity of this task is heightened by factors like incomplete rows, absent plants, irregular size and shape of plants, and variability in light intensity, which is one of the significant challenges of computer vision in open environments. Additionally, the presence of weeds within the row can introduce noise and disrupt the row recognition process. These research studies mainly focus on advancing image segmentation techniques to extract orientation cues for crop row applications [41].
When dealing with crops in their early growth stages, characterized by considerable spacing between individual plants, applying segmentation techniques may not be the most suitable approach. While proficient at delineating the entire plant’s surface, image segmentation models face limitations in accurately capturing the morphology of plants during their initial growth phases [42]. Moreover, the presence of weeds and other vegetation can introduce inaccuracies in the results generated by segmentation algorithms. Another drawback lies in the intricacies associated with data preparation for effective model training. In such scenarios, it proves advantageous to employ object detection techniques [43], which facilitate the precise identification of each plant and subsequently enable the crop lines to be estimated. While segmentation models require delineating all plant edges, a task further complicated by the limited availability of open databases for training purposes, object detection models necessitate the identification of bounding boxes around objects of interest [44], simplifying the annotation procedure.
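A minimal sketch of this detection-based line estimation step is shown below: given plant bounding boxes from any object detector, the crop line is recovered by a least-squares fit through the box centers. The box format is an assumption; a robust alternative such as RANSAC would better tolerate weed false positives.

```python
# Estimate a crop line from detected plant bounding boxes by fitting
# x = m*y + b through the box centers (the same parameterization as
# the guidance sketch above, so near-vertical rows behave well).
import numpy as np

def crop_line_from_boxes(boxes):
    """boxes: array of [x1, y1, x2, y2]; returns slope m, intercept b."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0    # box center x
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0    # box center y
    m, b = np.polyfit(cy, cx, deg=1)          # least-squares fit
    return m, b

# Example with three detections roughly aligned in a column.
boxes = [[100, 40, 120, 60], [104, 140, 126, 162], [110, 240, 130, 262]]
print(crop_line_from_boxes(boxes))
```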
Object detection models have traditionally found primary applications in agriculture for tasks such as fruit detection [45]; however, their distinct advantages over segmentation models, particularly when crops exhibit spacing between individual plants, position them as a compelling alternative [36]. The versatility and accuracy offered by object detection models in identifying and localizing individual plants in such settings underscore their potential utility for various agricultural applications beyond the conventional use cases.

5. Object Detection

Object detection algorithms represent a vital computer vision technique within Artificial Intelligence and machine learning. Their fundamental purpose involves identifying and localizing objects within images or video frames. These algorithms play a pivotal role in recognizing multiple objects of interest within a given image while providing crucial information regarding their positions and shapes [25]. Over the past two decades, object detection has seen a remarkable technological evolution, significantly impacting the entire field of computer vision. This evolution has transitioned from traditional detection models in the early 2000s to the profound influence of deep learning, notably with the advent of Convolutional Neural Networks (CNNs) [46].
The introduction of regions with CNN features (RCNN) by R. Girshick in 2014 [44] marked a pivotal moment in the rapid advancement of object detection. In the era of deep learning, object detectors fall into two categories: “two-stage detectors” and “one-stage detectors”. The former follows a “coarse to fine” approach, while the latter aims to “complete in one step”. Examples of “two-stage detectors” include RCNN, SPPNet (Spatial Pyramid Pooling Network), Fast RCNN, Faster RCNN, and Feature Pyramid Networks (FPN). On the other hand, “one-stage detectors” encompass models like You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), CenterNet, and DEtection TRansformer (DETR) [46][47].
YOLO has emerged as a widely adopted and influential algorithm in object detection. Its distinctive features include model compactness and rapid computation speed [48]. YOLO gained prominence with its first version, introduced by Redmon et al. in 2015 [49], and scholars have since published several subsequent versions. Comparisons of open-source object detection models, such as the one by B. Jabir et al. [50], have underscored the speed and lightweight nature of YOLO v5.
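For reference, YOLO v5 can be exercised through the public Ultralytics torch.hub interface, as in the hedged sketch below. The weights loaded here are generic COCO weights and the image path is hypothetical; detecting a particular crop would require training on annotated field images.

```python
# Minimal YOLO v5 inference via the public Ultralytics hub API.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model('field_image.jpg')        # hypothetical path; URL or array also work
detections = results.xyxy[0]              # rows of [x1, y1, x2, y2, conf, class]
for *box, conf, cls in detections.tolist():
    print(f'class={int(cls)} conf={conf:.2f} box={box}')
```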
The Mean Average Precision (mAP) metric is commonly used to evaluate the performance of a trained model. mAP is a standard evaluation metric widely employed in machine learning models and benchmark challenges such as Pattern Analysis, Statistical Modelling, and Computational Learning Visual Object Classes (PASCAL VOC) [51]; Common Objects in Context (COCO) [52]; ImageNet (a database for visual object recognition software) [53]; and the Google Open Images Challenge [54]. mAP offers a comprehensive assessment of a model's performance, particularly in tasks involving multi-class or multi-label classification, such as object detection and image segmentation, by considering precision and recall across multiple classes or labels.
To determine the success of object detection models, it is crucial to establish the level of overlap between the predicted bounding box and the ground truth. This is typically measured with the intersection over union (IoU), the ratio between the area of overlap and the area of union of the two boxes. mAP50 refers to the mAP computed with an IoU threshold of 50%: a detection is counted as successful when prediction and ground truth overlap by more than 50%.
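A minimal implementation of this overlap measure might look as follows, for axis-aligned boxes in [x1, y1, x2, y2] form:

```python
# IoU between a predicted and a ground-truth box. A detection counts
# as correct under mAP50 when this value exceeds 0.5.
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))   # -> ~0.143, below the 0.5 bar
```

mAP then averages, over all classes, the area under the precision-recall curve obtained by sweeping the detector's confidence threshold with this IoU criterion.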

6. Crop Identification

Crop identification using computer vision is a research field that merges image processing, machine learning, and agronomic practices to recognize diverse crop types from images, taken mainly by color cameras. The objective is to facilitate the management and protection of crops.
Computer vision originated in the 1960s [55], when early studies recognized the necessity of aligning two-dimensional image features with three-dimensional object representations, applied to pattern recognition systems. Since then, its results have driven substantial progress in many sectors, among which agriculture stands out, where it has been applied to plant recognition with notable results. In the last decade, some works have identified plants and their positions based on the plants' outline characteristics [56]. CNNs have also been used for soybean image recognition [57]. Other methods have combined deep CNNs and binary hash codes for field weed identification [58].
Moreover, some methods have focused on color characteristics to discriminate between sugar beet plants and weeds, reaching accuracies greater than 90% [59]. Further work on sugar beet recognition [60] reports the use of local features such as speeded-up robust features (SURF), scale-invariant feature transform (SIFT), and twin leaf region (TLR) in place of features that describe the complete plant outline. The method used SIFT descriptors at points of interest found beforehand with the Hessian–Laplace detector [61]. This method can distinguish thistles from sugar beets with an accuracy close to 100%.
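An illustrative OpenCV sketch of this kind of local-feature extraction appears below. OpenCV does not expose the Hessian–Laplace detector used in [60], so its built-in SIFT detector stands in here, and the image path is hypothetical.

```python
# Extract SIFT keypoints and descriptors from a plant image.
import cv2

gray = cv2.imread('sugar_beet.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical path
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
print(f'{len(keypoints)} keypoints, descriptor shape: '
      f'{None if descriptors is None else descriptors.shape}')
```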
Significant advances have also been made in identifying maize for weeding tasks. This crop is challenging to identify because it has thin and sparsely distributed leaves [62], which adversely affects computation time and the robustness of the algorithms [63][64]. Some recent proposals [65] place the primary emphasis on comprehensive image processing and centroid detection strategies. The images are first processed to isolate the green color using the RGB color model and Otsu's threshold method [66]. Subsequently, the positions of the maize plants are computed from the pixel projection histogram. A sketch of this pipeline follows.
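The sketch below outlines that pipeline under stated assumptions: vegetation is isolated with an excess-green index (a common choice, not necessarily the exact index of [65]) and Otsu's threshold, and the strongest column of the vertical pixel projection approximates the row position. The image path is hypothetical.

```python
# Green-vegetation segmentation plus projection-histogram localization.
import cv2
import numpy as np

img = cv2.imread('maize_row.jpg').astype(np.float32)   # hypothetical path
b, g, r = cv2.split(img / 255.0)
exg = 2 * g - r - b                                    # excess-green index
exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
column_hist = mask.sum(axis=0)                         # vertical projection
row_center = int(np.argmax(column_hist))               # strongest column
print('estimated row position (px):', row_center)
```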
One challenge in this research is the constrained capacity to differentiate among plant species. Locating crop plants alone suffices for weeding with mechanical tools, where everything that is not a crop is destroyed. However, weeding tasks that require distinguishing each weed species in order to apply a specific treatment (herbicides, laser, etc.) need innovative techniques able to manage a significant variety of plant species.

7. Determination of Crop Growth Stage

Until a few years ago, estimating the growth stage of crops using computer vision techniques yielded low precision rates, and the experiments reported in the literature were carried out with very few images or were limited to very few growth stages [67]. For example, a study reported in [68] estimated two growth stages of maize (emergence and three leaves) using an image segmentation method combined with affinity propagation clustering, classifying only two growth stages and training with few samples. With these restrictions, the algorithm achieved a classification accuracy greater than 96%.
Other methods tested have been regression analyses from 2D images to model rice panicles [69]. This study focused only on the last growth stage, achieving an accuracy greater than 90%. At the same time, color histograms and a Support Vector Machine (SVM) classifier were studied to estimate four different stages of rice growth using RGB images [70].
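A hedged sketch of that histogram-plus-SVM idea appears below; the histogram layout, bin count, and the synthetic stand-in data are illustrative assumptions, not the setup of [70].

```python
# Color histograms as features for an SVM growth-stage classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

def color_histogram(img_bgr, bins=16):
    """Concatenated per-channel histogram, L1-normalized."""
    feats = []
    for ch in range(3):
        h = cv2.calcHist([img_bgr], [ch], None, [bins], [0, 256]).ravel()
        feats.append(h / (h.sum() + 1e-9))
    return np.concatenate(feats)

# Synthetic stand-in data: random "images" with pretend stage labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 4, 40)                 # four growth stages
X = np.stack([color_histogram(im) for im in images])
clf = SVC(kernel='rbf').fit(X, labels)
print('predicted stage:', clf.predict(X[:1])[0])
```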
Regarding maize, a previous study examined early growth (6 days) using RGB images [71]. The process involved acquiring digital images, converting them from RGB to grayscale, cropping the image, and then calculating plant growth using region growing. The study results indicate that region growing can be used to estimate plant length growth from length and time parameters.
Advances in machine learning, especially deep CNN processing, now allow crop growth to be estimated with reasonable precision from images. One applied method [72] uses low-level, scale-invariant image feature extraction, a mid-level representation, and an SVM to learn and classify wheat growth stages. The work focuses on estimating only two growth stages across six wheat varieties.

References

  1. Department of Economic and Social Affairs. World Population Prospects 2022: Summary of Results. Available online: https://population.un.org/wpp/ (accessed on 25 September 2023).
  2. Tian, X.; Engel, B.A.; Qian, H.; Hua, E.; Sun, S.; Wang, Y. Will Reaching the Maximum Achievable Yield Potential Meet Future Global Food Demand? J. Clean. Prod. 2021, 294, 126285.
  3. Falcon, W.P.; Naylor, R.L.; Shankar, N.D. Rethinking Global Food Demand for 2050. Popul. Dev. Rev. 2022, 48, 921–957.
  4. Precision Agriculture and Food Security|Science. Available online: https://www.science.org/doi/abs/10.1126/science.1183899 (accessed on 25 October 2023).
  5. Botta, A.; Cavallone, P.; Baglieri, L.; Colucci, G.; Tagliavini, L.; Quaglia, G. A Review of Robots, Perception, and Tasks in Precision Agriculture. Appl. Mech. 2022, 3, 830–854.
  6. Lv, M.; Wei, H.; Fu, X.; Wang, W.; Zhou, D. A Loosely Coupled Extended Kalman Filter Algorithm for Agricultural Scene-Based Multi-Sensor Fusion. Front. Plant Sci. 2022, 13, 849260.
  7. Blok, P.M.; van Boheemen, K.; van Evert, F.K.; IJsselmuiden, J.; Kim, G.-H. Robot Navigation in Orchards with Localization Based on Particle Filter and Kalman Filter. Comput. Electron. Agric. 2019, 157, 261–269.
  8. Yu, T.; Zhou, J.; Wang, L.; Xiong, S. Accurate and Robust Stereo Direct Visual Odometry for Agricultural Environment. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 2480–2486.
  9. Aguiar, A.S.; dos Santos, F.N.; Cunha, J.B.; Sobreira, H.; Sousa, A.J. Localization and Mapping for Robots in Agriculture and Forestry: A Survey. Robotics 2020, 9, 97.
  10. Bac, C.W.; van Henten, E.J.; Hemming, J.; Edan, Y. Harvesting Robots for High-Value Crops: State-of-the-Art Review and Challenges Ahead. J. Field Robot. 2014, 31, 888–911.
  11. Hernández-Hernández, J.L.; García-Mateos, G.; González-Esquiva, J.M.; Escarabajal-Henarejos, D.; Ruiz-Canales, A.; Molina-Martínez, J.M. Optimal Color Space Selection Method for Plant/Soil Segmentation in Agriculture. Comput. Electron. Agric. 2016, 122, 124–132.
  12. Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Fang, Y. Color-, Depth-, and Shape-Based 3D Fruit Detection. Precis. Agric. 2020, 21, 1–17.
  13. Kamilaris, A.; Prenafeta-Boldú, F.X. A Review of the Use of Convolutional Neural Networks in Agriculture. J. Agric. Sci. 2018, 156, 312–322.
  14. Droukas, L.; Doulgeri, Z.; Tsakiridis, N.L.; Triantafyllou, D.; Kleitsiotis, I.; Mariolis, I.; Giakoumis, D.; Tzovaras, D.; Kateris, D.; Bochtis, D. A Survey of Robotic Harvesting Systems and Enabling Technologies. J. Intell. Robot. Syst. 2023, 107, 21.
  15. Meshram, A.T.; Vanalkar, A.V.; Kalambe, K.B.; Badar, A.M. Pesticide Spraying Robot for Precision Agriculture: A Categorical Literature Review and Future Trends. J. Field Robot. 2022, 39, 153–171.
  16. Zhang, W.; Miao, Z.; Li, N.; He, C.; Sun, T. Review of Current Robotic Approaches for Precision Weed Management. Curr. Robot. Rep. 2022, 3, 139–151.
  17. Small Robot Co. Available online: https://smallrobotco.com/#perplant (accessed on 8 November 2023).
  18. Ecorobotix: Smart Spraying for Ultra-Localised Treatments. Available online: https://ecorobotix.com/en/ (accessed on 8 November 2023).
  19. Our Vision for the Future: Autonomous Weeding (in Development) AVO. Available online: https://ecorobotix.com/en/avo/ (accessed on 8 November 2023).
  20. Sanchez, J.; Gallandt, E.R. Functionality and Efficacy of Franklin Robotics’ TertillTM Robotic Weeder. Weed Technol. 2021, 35, 166–170.
  21. EarthSense. Available online: https://www.earthsense.co/home (accessed on 8 November 2023).
  22. Rovira-Más, F.; Chatterjee, I.; Sáiz-Rubio, V. The Role of GNSS in the Navigation Strategies of Cost-Effective Agricultural Robots. Comput. Electron. Agric. 2015, 112, 172–183.
  23. Galceran, E.; Carreras, M. A Survey on Coverage Path Planning for Robotics. Robot. Auton. Syst. 2013, 61, 1258–1276.
  24. Eli-Chukwu, N.C. Applications of Artificial Intelligence in Agriculture: A Review. Eng. Technol. Appl. Sci. Res. 2019, 9, 4377–4383.
  25. Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors 2021, 21, 3758.
  26. Kujawa, S.; Niedbała, G. Artificial Neural Networks in Agriculture. Agriculture 2021, 11, 497.
  27. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A Comprehensive Review on Automation in Agriculture Using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 1–12.
  28. Maier, H.R.; Dandy, G.C. Neural Networks for the Prediction and Forecasting of Water Resources Variables: A Review of Modelling Issues and Applications. Environ. Model. Softw. 2000, 15, 101–124.
  29. Song, H.; He, Y. Crop Nutrition Diagnosis Expert System Based on Artificial Neural Networks. In Proceedings of the Third International Conference on Information Technology and Applications (ICITA’05), Sydney, Australia, 4–7 July 2005; Volume 1, pp. 357–362.
  30. Mavridou, E.; Vrochidou, E.; Papakostas, G.A.; Pachidis, T.; Kaburlasos, V.G. Machine Vision Systems in Precision Agriculture for Crop Farming. J. Imaging 2019, 5, 89.
  31. Das, S.; Ghosh, I.; Banerjee, G.; Sarkar, U. Artificial Intelligence in Agriculture: A Literature Survey. Int. J. Sci. Res. Comput. Sci. Appl. Manag. Stud. 2018, 7, 1–6.
  32. Zhang, J.-L.; Su, W.-H.; Zhang, H.-Y.; Peng, Y. SE-YOLOv5x: An Optimized Model Based on Transfer Learning and Visual Attention Mechanism for Identifying and Localizing Weeds and Vegetables. Agronomy 2022, 12, 2061.
  33. Zhou, H.; Wang, X.; Au, W.; Kang, H.; Chen, C. Intelligent Robots for Fruit Harvesting: Recent Developments and Future Challenges. Precis. Agric. 2022, 23, 1856–1907.
  34. Idoje, G.; Dagiuklas, T.; Iqbal, M. Survey for Smart Farming Technologies: Challenges and Issues. Comput. Electr. Eng. 2021, 92, 107104.
  35. Gan, H.; Lee, W.S. Development of a Navigation System for a Smart Farm. IFAC-PapersOnLine 2018, 51, 1–4.
  36. Wang, T.; Chen, B.; Zhang, Z.; Li, H.; Zhang, M. Applications of Machine Vision in Agricultural Robot Navigation: A Review. Comput. Electron. Agric. 2022, 198, 107085.
  37. Li, J.; Zhu, R.; Chen, B. Image Detection and Verification of Visual Navigation Route during Cotton Field Management Period. Int. J. Agric. Biol. Eng. 2018, 11, 159–165.
  38. Zhang, Z.; Li, P.; Zhao, S.; Lv, Z.; Du, F.; An, Y. An Adaptive Vision Navigation Algorithm in Agricultural IoT System for Smart Agricultural Robots. Comput. Mater. Contin. 2020, 66, 1043–1056.
  39. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90.
  40. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-Based Navigation and Guidance for Agricultural Autonomous Vehicles and Robots: A Review. Comput. Electron. Agric. 2023, 205, 107584.
  41. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A Review of Autonomous Navigation Systems in Agricultural Environments; University of Southern Queensland: Barton, Australia, 2013.
  42. Hamuda, E.; Glavin, M.; Jones, E. A Survey of Image Processing Techniques for Plant Extraction and Segmentation in the Field. Comput. Electron. Agric. 2016, 125, 184–199.
  43. Bharati, S.; Wu, Y.; Sui, Y.; Padgett, C.; Wang, G. Real-Time Obstacle Detection and Tracking for Sense-and-Avoid Mechanism in UAVs. IEEE Trans. Intell. Veh. 2018, 3, 185–197.
  44. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
  45. Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep Learning—Method Overview and Review of Use for Fruit Detection and Yield Estimation. Comput. Electron. Agric. 2019, 162, 219–234.
  46. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276.
  47. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 213–229.
  48. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073.
  49. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788.
  50. Jabir, B.; Noureddine, F.; Rahmani, K. Accuracy and Efficiency Comparison of Object Detection Open-Source Models. Int. J. Online Biomed. Eng. 2021, 17, 165.
  51. Bentley, P. Pattern Analysis, Statistical Modelling and Computational Learning 2004–2008, 1st ed.; PASCAL Network of Excellence; University College London: London, UK, 2008; ISBN 978-0-9559301-0-2.
  52. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 740–755.
  53. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  54. Addison, H.; Alina; Walker, J.; Uijlings, J.; Pont-Tuset, J.; McDonald, M.G.V.; Kan, W. Google AI Open Images—Object Detection Track. Available online: https://kaggle.com/competitions/google-ai-open-images-object-detection-track (accessed on 6 November 2023).
  55. Andreopoulos, A.; Tsotsos, J.K. 50 Years of Object Recognition: Directions Forward. Comput. Vis. Image Underst. 2013, 117, 827–891.
  56. Shahbazi, N.; Ashworth, M.B.; Callow, J.N.; Mian, A.; Beckie, H.J.; Speidel, S.; Nicholls, E.; Flower, K.C. Assessing the Capability and Potential of LiDAR for Weed Detection. Sensors 2021, 21, 2328.
  57. dos Santos Ferreira, A.; Matte Freitas, D.; Gonçalves da Silva, G.; Pistori, H.; Theophilo Folhes, M. Weed Detection in Soybean Crops Using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324.
  58. Jiang, H.; Wang, P. Fast Identification of Field Weeds Based on Deep Convolutional Network and Binary Hash Code. Trans. Chin. Soc. Agric. Mach. (Nongye Jixie Xuebao) 2018, 49, 30–38.
  59. Åstrand, B.; Baerveldt, A.-J. An Agricultural Mobile Robot with Vision-Based Perception for Mechanical Weed Control. Auton. Robots 2002, 13, 21–35.
  60. Dyrmann, M.; Karstoft, H.; Midtiby, H.S. Plant Species Classification Using Deep Convolutional Neural Network. Biosyst. Eng. 2016, 151, 72–80.
  61. Ferrari, F.; Verbitsky, I.E. Radial Fractional Laplace Operators and Hessian Inequalities. J. Differ. Equ. 2012, 253, 244–272.
  62. Esposito, M.; Crimaldi, M.; Cirillo, V.; Sarghini, F.; Maggio, A. Drone and Sensor Technology for Sustainable Weed Management: A Review. Chem. Biol. Technol. Agric. 2021, 8, 18.
  63. Peteinatos, G.G.; Reichel, P.; Karouta, J.; Andújar, D.; Gerhards, R. Weed Identification in Maize, Sunflower, and Potatoes with the Aid of Convolutional Neural Networks. Remote Sens. 2020, 12, 4185.
  64. Liu, B.; Bruch, R. Weed Detection for Selective Spraying: A Review. Curr. Robot. Rep. 2020, 1, 19–26.
  65. Xu, B.; Chai, L.; Zhang, C. Research and Application on Corn Crop Identification and Positioning Method Based on Machine Vision. Inf. Process. Agric. 2023, 10, 106–113.
  66. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  67. Rasti, S.; Bleakley, C.J.; Silvestre, G.C.M.; Holden, N.M.; Langton, D.; O’Hare, G.M.P. Crop Growth Stage Estimation Prior to Canopy Closure Using Deep Learning Algorithms. Neural Comput. Appl. 2021, 33, 1733–1743.
  68. Yu, Z.; Cao, Z.; Wu, X.; Bai, X.; Qin, Y.; Zhuo, W.; Xiao, Y.; Zhang, X.; Xue, H. Automatic Image-Based Detection Technology for Two Critical Growth Stages of Maize: Emergence and Three-Leaf Stage. Agric. For. Meteorol. 2013, 174–175, 65–84.
  69. Zhao, S.; Zheng, H.; Chi, M.; Chai, X.; Liu, Y. Rapid Yield Prediction in Paddy Fields Based on 2D Image Modelling of Rice Panicles. Comput. Electron. Agric. 2019, 162, 759–766.
  70. Marsujitullah; Zainuddin, Z.; Manjang, S.; Wijaya, A.S. Rice Farming Age Detection Use Drone Based on SVM Histogram Image Classification. J. Phys. Conf. Ser. 2019, 1198, 092001.
  71. Yudhana, A.; Umar, R.; Ayudewi, F.M. The Monitoring of Corn Sprouts Growth Using The Region Growing Methods. J. Phys. Conf. Ser. 2019, 1373, 012054.
  72. Sadeghi-Tehran, P.; Sabermanesh, K.; Virlet, N.; Hawkesford, M.J. Automated Method to Determine Two Critical Growth Stages of Wheat: Heading and Flowering. Front. Plant Sci. 2017, 8, 252.