Methods and Algorithms for Crop-Row Detection: History

Crop row detection is one of the foundational and pivotal technologies of agricultural robots and autonomous vehicles for navigation, guidance, path planning, and automated farming in row-crop fields. However, due to the complex and dynamic agricultural environment, crop row detection remains a challenging task. Background elements such as weeds, trees, and stones can interfere with crop appearance and increase the difficulty of detection. Detection accuracy is also affected by different growth stages, environmental conditions, curved rows, and occlusion. Therefore, appropriate sensors and multiple adaptable models are required to achieve high-precision crop row detection.

Keywords: Hough transform; method; crop; extraction

1. Traditional Methods

1.1. Hough Transform (HT)

HT is a classical computer vision algorithm for crop row detection and navigation line extraction [1]. The idea behind this approach is to transform the image-coordinate space into the Hough parameter space using the mapping relationship between points and lines, and then detect the target lines in the image. The HT-based detection approach is robust to image noise and outliers and performs well even in parallel-structure crop fields with gaps [2]. To improve the efficiency and accuracy of detection, edge detection and image binarization are often performed prior to the HT-based detection process [3]. The classic Hough transform has two main limitations: its high computational complexity makes it unsuitable for real-time applications, and dense noise and outliers can still produce spurious peaks in the accumulator. To address these issues, researchers have proposed various modifications to HT, such as the Probabilistic Hough Transform (PHT), which uses a probabilistic voting scheme to reduce the effect of noise and outliers [4]. Other modifications include the Directional Hough Transform (DHT), which detects lines with a specific orientation [5], and the Multi-scale Hough Transform (MHT), which detects lines at different scales [6].
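
The (θ, ρ) voting scheme behind HT can be illustrated with a minimal pure-Python sketch (not the optimized implementations used in the cited studies). The image diagonal, bin counts, and the synthetic column of plant pixels are illustrative assumptions:

```python
import math

def hough_lines(points, img_diag, n_theta=180, rho_step=1.0):
    """Accumulate votes in (theta, rho) space for a set of (x, y)
    foreground pixels and return the best-supported line."""
    n_rho = int(2 * img_diag / rho_step) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    thetas = [math.pi * t / n_theta for t in range(n_theta)]
    for x, y in points:
        for t, theta in enumerate(thetas):
            # Each pixel votes for every line (theta, rho) passing through it.
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + img_diag) / rho_step))
            acc[t][r] += 1
    # The peak of the accumulator gives the dominant line's parameters.
    best_t, best_r, best_votes = 0, 0, -1
    for t in range(n_theta):
        for r in range(n_rho):
            if acc[t][r] > best_votes:
                best_t, best_r, best_votes = t, r, acc[t][r]
    return thetas[best_t], best_r * rho_step - img_diag, best_votes

# Synthetic "crop row": a vertical line of 20 plant pixels at x = 5.
points = [(5, y) for y in range(20)]
theta, rho, votes = hough_lines(points, img_diag=30)
```

All 20 collinear pixels vote into the same accumulator cell, so the peak recovers the vertical line x = 5 (theta ≈ 0, rho ≈ 5), which is why HT tolerates gaps within a row.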

1.2. Linear Regression Method (LRM)

LRM is a widely utilized technique for detecting crop rows in agriculture through image analysis. In regression analysis, one or more independent variables are studied to determine their impact on the dependent variable, with the aim of modeling and testing the relationship between them [7]. The most common implementation of LRM is the least squares method, in which the sum of the squared errors between the predicted and actual values is minimized to find the best-fit line. In the context of crop row detection, LRM can be used to predict the position and orientation of crop rows from image data. The goal is to find a linear relationship between the independent variables (such as pixel coordinates) and the dependent variable (crop row position or orientation). Before applying LRM to crop row detection, image preprocessing steps such as image segmentation and feature extraction can be performed to isolate the crop rows from the background and extract useful features for the regression [8]. One of the advantages of LRM is its simplicity and computational efficiency. However, it may struggle with complex, noisy farmland data; in such cases, additional steps, such as separating weed and crop pixels before fitting or switching to non-linear regression techniques, may be necessary to improve the accuracy of the model [9].
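
A minimal sketch of the least-squares fit, assuming plant pixels have already been segmented. Because crop rows are usually closer to vertical in the image, the column coordinate x is regressed on the row coordinate y here to avoid near-infinite slopes; the sample points are synthetic:

```python
def least_squares_line(points):
    """Closed-form least-squares fit of x = m*y + c to (x, y) pixels,
    minimizing the sum of squared horizontal errors."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    # Normal equations of ordinary least squares.
    m = (n * sxy - sx * sy) / (n * syy - sy * sy)
    c = (sx - m * sy) / n
    return m, c

# Noisy plant pixels scattered around the line x = 0.5*y + 10.
pts = [(0.5 * y + 10 + e, y) for y, e in zip(range(10), [0.1, -0.1] * 5)]
m, c = least_squares_line(pts)
```

The recovered slope and intercept stay close to the underlying line despite the alternating noise, which is the behavior the method relies on when weed pixels are sparse.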

1.3. Horizontal Strips Method

The horizontal strips method is a reliable approach for detecting crop rows using agronomic image analysis [10]. The key concept of this technique is to divide the input image into several horizontal strips, which serve as regions of interest (ROIs). Within each ROI, a feature point is determined from the calculated center of gravity. Compared with other crop row detection methods, horizontal strip analysis does not require an additional image segmentation step, which improves the computational efficiency of image processing and reduces storage requirements [11]. Moreover, this technique offers strong real-time performance and precision for continuous crop rows with low weed density. Nevertheless, the horizontal strip method might not perform well in agricultural environments where crop rows are partially missing or overgrown with weeds, as these factors can affect the accuracy of feature point detection. Furthermore, the accuracy of this method is sensitive to the camera angle, which can affect the determination of feature pixel values. To mitigate this issue, the vertical projection method is often used in conjunction with the horizontal strip method to enhance accuracy [12].
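
The strip-and-center-of-gravity idea can be sketched as follows. For simplicity this toy example assumes foreground plant pixels are already distinguishable as a small binary mask; the strip count is an illustrative choice:

```python
def strip_centers(mask, n_strips):
    """Split a binary mask (list of rows) into horizontal strips and
    return one (x, y) feature point per strip: the center of gravity
    of the foreground pixels inside it."""
    step = len(mask) // n_strips
    centers = []
    for s in range(n_strips):
        xs, ys = [], []
        for y in range(s * step, (s + 1) * step):
            for x, v in enumerate(mask[y]):
                if v:
                    xs.append(x)
                    ys.append(y)
        if xs:  # skip strips with no crop pixels (missing row segments)
            centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centers

# 6x6 toy mask with a vertical crop row occupying columns 2-3.
mask = [[1 if x in (2, 3) else 0 for x in range(6)] for _ in range(6)]
centers = strip_centers(mask, n_strips=3)
```

Each strip yields one feature point at x = 2.5; a line fitted through these points then gives the row's centerline.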

1.4. Blob Analysis (BA)

The Blob Analysis (BA) method is a useful technique for crop row detection that operates on binarized images to group connected pixels into blobs with the same gray value [13]. The blobs that contain more than a certain number of pixels are then used to generate straight lines that represent crop rows. Unlike other machine vision techniques, BA considers features in an image as objects rather than individual pixels or lines, leading to more accurate identification of crop rows [14]. This approach leverages the unique shape and color characteristics of crop rows to accurately locate and identify them by calculating the center of gravity and principal axis position of each crop row [15]. In crop row detection, the BA technique has proven effective, particularly in situations where the crop rows have a clear definition and a distinct contrast with the surrounding field, such as in the case of newly planted crops with a different color or texture than the soil. However, BA may have limitations in fields with a high weed density or an unclear crop row definition. In such cases, the noise in the clustered blobs can lead to errors, which can affect the accuracy of the crop row detection results [16].
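
A minimal sketch of blob extraction via 4-connected flood fill with a size threshold to discard small weed specks; the toy mask and the threshold value are illustrative assumptions, not the implementations of the cited works:

```python
def find_blobs(mask, min_pixels):
    """Label 4-connected foreground blobs in a binary mask and return
    the centroids of blobs containing at least `min_pixels` pixels."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0][x0] and not seen[y0][x0]:
                # Flood fill to collect one connected blob.
                stack, blob = [(x0, y0)], []
                seen[y0][x0] = True
                while stack:
                    x, y = stack.pop()
                    blob.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                if len(blob) >= min_pixels:  # size filter rejects specks
                    cx = sum(p[0] for p in blob) / len(blob)
                    cy = sum(p[1] for p in blob) / len(blob)
                    centroids.append((cx, cy))
    return centroids

# One 3x2 plant blob plus a single-pixel weed speck at (4, 1).
mask = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
]
blobs = find_blobs(mask, min_pixels=4)
```

Only the large blob survives the size filter; its centroid (and, in a fuller implementation, its principal axis) then localizes the row.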

1.5. Random Sample Consensus (RANSAC)

The RANSAC algorithm is a robust and widely used technique for row detection in crops. The algorithm estimates a mathematical model and calculates the optimal solution of parameters from a dataset that may contain outliers [17]. In crop row detection, outliers can be weed points, soil points, or other objects that do not belong to the crop row. This property makes it suitable for the centerline fitting of crop rows, even when a significant proportion of weed data points are present [18]. Furthermore, the RANSAC algorithm can optimize point cloud matching and 3D coordinate calculations for complex 3D crop row detection [19]. However, the effectiveness of the RANSAC algorithm depends on several factors, such as the number of iterations, the threshold values, and the size of the data set. In the case of crop row detection, the quality of the feature points extracted from the image data also plays a crucial role in the success of the algorithm [20]. In recent years, several variations of the RANSAC algorithm have been proposed to address some of its limitations in crop row detection, such as the Progressive Sample Consensus (PROSAC) algorithm and the M-estimator Sample Consensus (MSAC) algorithm [21].
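
The RANSAC loop for a single crop-row line can be sketched as follows; the iteration count, inlier threshold, and the synthetic weed outliers are illustrative assumptions:

```python
import math
import random

def ransac_line(points, n_iter=200, thresh=1.0, seed=0):
    """Repeatedly sample two points, hypothesize the line through them,
    and keep the hypothesis with the most inliers (points closer to the
    line than `thresh`)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line a*x + b*y + c = 0 through the sampled pair.
        a, b = y2 - y1, x1 - x2
        norm = math.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample (coincident points)
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 15 plant points along the row x = y, plus 3 weed outliers far off it.
row = [(t, t) for t in range(15)]
weeds = [(0, 12), (3, 14), (14, 2)]
inliers = ransac_line(row + weeds, seed=1)
```

Despite one sixth of the data being outliers, the consensus set recovers exactly the row points; a final least-squares fit over the inliers would then refine the centerline.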

1.6. Frequency Analysis

Frequency analysis is a signal processing technique for analyzing local spatial patterns that is widely used in crop row detection [22]. This mathematical method involves converting images from the image space to the frequency space through frequency-domain filtering. By analyzing the resulting spectrum, this method can extract details from the image and enhance object detection with simple logical operations. Common methods used in frequency-domain characterization include the Fourier transform (FT), the fast Fourier transform (FFT), and wavelet analysis [23]. Through these methods, the grayscale levels of weeds and shadows (cast by tractors or crops) in field images can be attenuated, enabling efficient detection of the position and direction of crop rows [24]. However, the frequency analysis method may not be suitable for detecting curved crop rows with irregular crop spacing. Furthermore, its accuracy may be affected by factors such as lighting conditions and the presence of noise in the image [25].
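
The core idea, finding the dominant spatial frequency of a vegetation profile summed down the image columns, can be sketched with a naive DFT; a real system would use an FFT, and the periodic profile here is synthetic:

```python
import math

def dominant_period(profile):
    """Naive DFT of a 1-D vegetation profile (e.g. green-ness summed
    over image rows); the strongest non-DC frequency gives the crop-row
    spacing in pixels."""
    n = len(profile)
    mean = sum(profile) / n
    centred = [v - mean for v in profile]  # remove the DC component
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n)
                 for i, v in enumerate(centred))
        im = sum(v * math.sin(2 * math.pi * k * i / n)
                 for i, v in enumerate(centred))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k  # period (row spacing) in pixels

# Synthetic profile: a row of plants every 8 pixels across a 64-pixel width.
profile = [1.0 if i % 8 < 2 else 0.0 for i in range(64)]
spacing = dominant_period(profile)
```

The spectral peak at the fundamental frequency recovers the 8-pixel spacing, which also illustrates why the approach assumes regular, roughly straight rows.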

2. Machine Learning Methods

2.1. Clustering

The clustering algorithm is an unsupervised learning method that automatically groups data points into clusters according to attributes or features such as color, texture, or edge information [26]. This method does not require labeled data, which makes it a useful tool for detecting crop rows. Cluster-based algorithms are known for fast object detection and high computational efficiency [27]. Data clustering methods mainly include partition-based, density-based, and hierarchical methods. Among these, the K-means clustering algorithm is the simplest and most commonly used in crop row detection [28]. It can cluster data effectively even when weed pixels are present between rows, provided the weeds are significantly smaller than the planted crops. The scalability and efficiency of the K-means algorithm make it suitable for processing large datasets in cropland [29]. However, the K-means algorithm assumes that clusters are spherical, equally sized, and of similar density, which can lead to over-clustering or under-clustering in certain situations [30]. In recent years, several studies have attempted to address the limitations of traditional clustering algorithms in crop row detection. For example, some researchers have used hybrid clustering algorithms that combine the strengths of multiple clustering methods to achieve better results. Others have adopted clustering algorithms that can detect irregularly shaped clusters, such as Gaussian mixture models (GMMs) or fuzzy clustering algorithms [31].
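
Lloyd's K-means iteration can be sketched on 1-D pixel column coordinates, assuming plant pixels from two adjacent rows have already been segmented; the coordinates and the simple spread-out initialization are illustrative choices:

```python
def kmeans_1d(xs, k, n_iter=20):
    """Lloyd's algorithm on 1-D column coordinates: assign each point
    to the nearest center, then move each center to the mean of its
    assigned points."""
    # Spread the initial centers across the sorted data.
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        # Keep an empty cluster's center where it was.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Column coordinates of plant pixels from two adjacent crop rows.
xs = [9, 9, 10, 10, 11, 29, 29, 30, 30, 31]
centers = kmeans_1d(xs, k=2)
```

The two converged centers land on the mean column of each row, giving one candidate row position per cluster.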

2.2. Deep Learning

Deep learning is a more recent branch of machine learning that has been applied to crop row detection [32]. Unlike traditional shallow learning, deep learning places more emphasis on the depth and feature learning of model structures, with the goal of establishing a neural network that can analyze and learn in a manner similar to the human brain. This method has demonstrated significant improvements over traditional computer vision algorithms for identifying crop rows, especially in challenging conditions such as variable lighting, weather, and field conditions [33]. One of the main advantages of deep learning is that it can autonomously learn from large datasets and adapt to new data distributions. This makes it well suited to precision agriculture, where it can be used to identify crops, pests, and diseases, optimize planting patterns, and monitor crop growth and health.

Object detection and semantic segmentation play crucial roles in crop row detection by enhancing the accuracy and understanding of field images. Object detection algorithms enable the identification and localization of crop rows within an image, allowing for the precise mapping and measurement of their positions. This helps when optimizing planting patterns and ensuring uniform spacing between rows. Moreover, object detection can locate other objects or obstacles in the field, such as machinery or structures, which helps to avoid collisions or disturbances during farming operations [34]. Semantic segmentation, on the other hand, goes beyond object detection by providing detailed pixel-level labeling of an image. In the context of crop row detection, semantic segmentation helps differentiate the crop rows from other objects or background elements present in the image. By accurately segmenting the crop rows, it facilitates the analysis of their spatial distribution and arrangement [35]. It enables the identification of irregularities or gaps between rows, which can indicate potential issues such as missing plants, weed infestations, or uneven growth. This information is invaluable for farmers when making informed decisions about subsequent farming operations.

Recent studies have used deep learning techniques such as Faster R-CNN, YOLOv3, Mask R-CNN, and DeepLabv3+ to detect crop rows from images captured by drones, tractors, or robots [36]. A significant challenge for deep learning-based crop row detection is the lack of annotated training data for specific crops, growth stages, and field conditions [37]. Creating such datasets requires significant time and resources, and their quality and size can significantly impact the accuracy and robustness of the models. Moreover, the computational cost of training deep learning models can be prohibitive for resource-constrained devices and systems [38].

This entry is adapted from the peer-reviewed paper 10.3390/agronomy13071780

References

  1. de Silva, R.; Cielniak, G.; Gao, J. Towards agricultural autonomy: Crop row detection under varying field conditions using deep learning. arXiv 2021, arXiv:2109.08247.
  2. Meng, Q.; Qiu, R.; He, J.; Zhang, M.; Ma, X.; Liu, G. Development of agricultural implement system based on machine vision and fuzzy control. Comput. Electron. Agric. 2015, 112, 128–138.
  3. Xu, Z.; Shin, B.S.; Klette, R. Closed form line-segment extraction using the Hough transform. Pattern Recognit. 2015, 48, 4012–4023.
  4. Marzougui, M.; Alasiry, A.; Kortli, Y.; Baili, J. A lane tracking method based on progressive probabilistic Hough transform. IEEE Access 2020, 8, 84893–84905.
  5. Chung, K.L.; Huang, Y.H.; Tsai, S.R. Orientation-based discrete Hough transform for line detection with low computational complexity. Appl. Math. Comput. 2014, 237, 430–437.
  6. Chai, Y.; Wei, S.J.; Li, X.C. The multi-scale Hough transform lane detection method based on the algorithm of Otsu and Canny. Adv. Mater. Res. 2014, 1042, 126–130.
  7. Akinwande, M.O.; Dikko, H.G.; Samson, A. Variance inflation factor: As a condition for the inclusion of suppressor variable(s) in regression analysis. Open J. Stat. 2015, 5, 754.
  8. Andargie, A.A.; Rao, K.S. Estimation of a linear model with two-parameter symmetric platykurtic distributed errors. J. Uncertain. Anal. Appl. 2013, 1, 13.
  9. Milioto, A.; Lottes, P.; Stachniss, C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2229–2235.
  10. Yang, Z.; Yang, Y.; Li, C.; Zhou, Y.; Zhang, X.; Yu, Y.; Liu, D. Tasseled Crop Rows Detection Based on Micro-Region of Interest and Logarithmic Transformation. Front. Plant Sci. 2022, 13, 916474.
  11. Zheng, L.Y.; Xu, J.X. Multi-crop-row detection based on strip analysis. In Proceedings of the 2014 International Conference on Machine Learning and Cybernetics, Lanzhou, China, 13–16 July 2014; Volume 2, pp. 611–614.
  12. Zhou, Y.; Yang, Y.; Zhang, B.; Wen, X.; Yue, X.; Chen, L. Autonomous detection of crop rows based on adaptive multi-ROI in maize fields. Int. J. Agric. Biol. Eng. 2021, 14, 217–225.
  13. Zhai, Z.; Zhu, Z.; Du, Y.; Song, Z.; Mao, E. Multi-crop-row detection algorithm based on binocular vision. Biosyst. Eng. 2016, 150, 89–103.
  14. Benson, E.R.; Reid, J.F.; Zhang, Q. Machine vision–based guidance system for an agricultural small–grain harvester. Trans. ASAE 2003, 46, 1255.
  15. Fontaine, V.; Crowe, T.G. Development of line-detection algorithms for local positioning in densely seeded crops. Can. Biosyst. Eng. 2006, 48, 7.
  16. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240.
  17. Zhou, M.; Xia, J.; Yang, F.; Zheng, K.; Hu, M.; Li, D.; Zhang, S. Design and experiment of visual navigated UGV for orchard based on Hough matrix and RANSAC. Int. J. Agric. Biol. Eng. 2021, 14, 176–184.
  18. Khan, N.; Rajendran, V.P.; Al Hasan, M.; Anwar, S. Clustering Algorithm Based Straight and Curved Crop Row Detection Using Color Based Segmentation. In Proceedings of the ASME 2020 International Mechanical Engineering Congress and Exposition, Virtual, 16–19 November 2020; American Society of Mechanical Engineers: New York, NY, USA, 2020; Volume 84553, p. V07BT07A003.
  19. Ghahremani, M.; Williams, K.; Corke, F.; Tiddeman, B.; Liu, Y.; Wang, X.; Doonan, J.H. Direct and accurate feature extraction from 3D point clouds of plants using RANSAC. Comput. Electron. Agric. 2021, 187, 106240.
  20. Guo, J.; Wei, Z.; Miao, D. Lane detection method based on improved RANSAC algorithm. In Proceedings of the 2015 IEEE Twelfth International Symposium on Autonomous Decentralized Systems, Taichung, Taiwan, 25–27 March 2015; pp. 285–288.
  21. Ma, S.; Guo, P.; You, H.; He, P.; Li, G.; Li, H. An image matching optimization algorithm based on pixel shift clustering RANSAC. Inf. Sci. 2021, 562, 452–474.
  22. Bossu, J.; Gée, C.; Jones, G.; Truchetet, F. Wavelet transform to discriminate between crop and weed in perspective agronomic images. Comput. Electron. Agric. 2009, 65, 133–143.
  23. Arts, L.P.; van den Broek, E.L. The fast continuous wavelet transformation (fCWT) for real-time, high-quality, noise-resistant time–frequency analysis. Nat. Comput. Sci. 2022, 2, 47–58.
  24. Hague, T.; Tillett, N.D. A bandpass filter-based approach to crop row location and tracking. Mechatronics 2001, 11, 1–12.
  25. García-Santillán, I.D.; Montalvo, M.; Guerrero, J.M.; Pajares, G. Automatic detection of curved and straight crop rows from images in maize fields. Biosyst. Eng. 2017, 156, 61–79.
  26. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Lin, C.T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681.
  27. Vidović, I.; Scitovski, R. Center-based clustering for line detection and application to crop rows detection. Comput. Electron. Agric. 2014, 109, 212–220.
  28. Behura, A. The cluster analysis and feature selection: Perspective of machine learning and image processing. Data Anal. Bioinform. Mach. Learn. Perspect. 2021, 10, 249–280.
  29. Steward, B.L.; Gai, J.; Tang, L. The use of agricultural robots in weed management and control. Robot. Autom. Improv. Agric. 2019, 44, 1–25.
  30. Yu, Y.; Bao, Y.; Wang, J.; Chu, H.; Zhao, N.; He, Y.; Liu, Y. Crop row segmentation and detection in paddy fields based on treble-classification otsu and double-dimensional clustering method. Remote Sens. 2021, 13, 901.
  31. Ezugwu, A.E.; Ikotun, A.M.; Oyelade, O.O.; Abualigah, L.; Agushaka, J.O.; Eke, C.I.; Akinyelu, A.A. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects. Eng. Appl. Artif. Intell. 2022, 110, 104743.
  32. Lachgar, M.; Hrimech, H.; Kartit, A. Optimization techniques in deep convolutional neuronal networks applied to olive diseases classification. Artif. Intell. Agric. 2022, 6, 77–89.
  33. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
  34. De Castro, A.I.; Torres-Sánchez, J.; Peña, J.M.; Jiménez-Brenes, F.M.; Csillik, O.; López-Granados, F. An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery. Remote Sens. 2018, 10, 285.
  35. You, J.; Liu, W.; Lee, J. A DNN-based semantic segmentation for detecting weed and crop. Comput. Electron. Agric. 2020, 178, 105750.
  36. Doha, R.; Al Hasan, M.; Anwar, S.; Rajendran, V. Deep learning based crop row detection with online domain adaptation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery Data Mining, Singapore, 14–18 August 2021; pp. 2773–2781.
  37. Picon, A.; San-Emeterio, M.G.; Bereciartua-Perez, A.; Klukas, C.; Eggers, T.; Navarra-Mestre, R. Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets. Comput. Electron. Agric. 2022, 194, 106719.
  38. de Silva, R.; Cielniak, G.; Wang, G.; Gao, J. Deep learning-based Crop Row Following for Infield Navigation of Agri-Robots. arXiv 2022, arXiv:2209.04278.