Agricultural Image Segmentation

The Segment Anything Model (SAM) is a versatile image segmentation model that enables zero-shot segmentation of various objects in any image using prompts such as bounding boxes, points, and text. However, studies have shown that the SAM performs poorly in agricultural tasks such as crop disease segmentation and pest segmentation. To address this issue, the agricultural SAM adapter (ASA) is proposed, which incorporates agricultural domain expertise into the segmentation model through a simple but effective adapter technique.

Keywords: image segmentation; adapters; agricultural image segmentation

1. Introduction

The development and use of foundation models in artificial intelligence have grown rapidly in recent years. These models are trained on large datasets so that they generalize across various tasks and domains. Large Language Models (LLMs) [1] have become widely adopted, and ever-larger models are a recent trend, exemplified by GPT-4 [2], developed by OpenAI (2023). The Segment Anything Model (SAM) [3] is a powerful and flexible visual segmentation model that has attracted considerable attention for its ability to generate accurate and detailed segmentation masks from user prompts. The model is trained on a large dataset of 11 million images and over 1 billion masks, and its promptable design and training process enable it to adapt to new image distributions and tasks in a zero-shot manner. The SAM has shown excellent segmentation capabilities in various scenarios and is bringing innovations to the fields of image segmentation and computer vision.
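This promptable workflow can be made concrete with a short sketch using the publicly released segment-anything package. The checkpoint path, image file, and prompt coordinates below are placeholders for illustration, not values taken from this entry.

```python
# Minimal sketch of SAM's promptable interface via the released
# "segment-anything" package. Checkpoint path, image file, and prompt
# coordinates are hypothetical placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("leaf.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # compute the image embedding once

# A single foreground point prompt (x, y); label 1 marks foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[scores.argmax()]  # keep the highest-scoring mask
```

Because the image embedding is computed once per image, additional point or box prompts on the same image are cheap, which is what makes the interactive, prompt-driven use of the SAM practical.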
The SAM is introduced here into agricultural segmentation tasks. In this context, it not only segments agricultural images in a zero-shot manner but also improves the accuracy and efficiency of crop disease identification and pest detection. The SAM can thus help solve agricultural segmentation problems that arise from the domain's specialization and the scarcity of available datasets, and provide scientific guidance and support for the agricultural industry. Crop disease segmentation is an important task in agriculture, as it can prevent the spread and worsening of diseases by detecting and locating them and enabling timely measures such as spraying pesticides and trimming leaves. This not only reduces pesticide usage and protects the ecosystem [4] but also improves crop yield and quality. Similarly, effective image segmentation of agricultural pests can reduce their harm to crops [5]. For example, the level of damage and the effectiveness of pest control can be evaluated by segmenting out different types and numbers of pests in order to formulate a reasonable control program.
However, the SAM, like other foundation models, also has limitations, especially given the very wide range of computer vision applications. Since the training data cannot cover all possible image types and working scenarios are constantly changing, the SAM does not perform well in some agricultural image segmentation tasks and difficult scenarios. To demonstrate this, the SAM was tested on crop disease and pest segmentation tasks, and it was found that the SAM does not "segment anything" well. This is because the training dataset of the SAM consists mainly of natural images, which often have clear edge information and differentiation, whereas agricultural images are taken in complex field environments and have the following characteristics: (1) low contrast between the target and the background, which makes segmentation difficult; (2) complex and varied backgrounds containing foliage, soil, and rocks; (3) uneven illumination and shadows that reduce image quality and clarity; and (4) targets, such as pests and disease lesions, that vary widely in morphology. These characteristics make it difficult for the SAM to meet the needs of agricultural image segmentation, resulting in inaccurate or incomplete segmentation results. The central theoretical and practical question is therefore the following: how can these challenges be overcome so that the segmentation ability the SAM learned on a massive dataset can benefit agricultural segmentation tasks?
Adaptation [6][7][8] is an effective tool for fine-tuning large pre-trained vision models for downstream tasks, not only in NLP but also in computer vision. It requires only a small number of parameters to be learned (typically less than 5% of the total) to allow efficient learning and faster updates while keeping most of the parameters frozen. It has also been shown that adaptation methods can work better than full fine-tuning because they avoid catastrophic forgetting and generalize better to out-of-domain scenarios, especially in low-data regimes. Adaptation is therefore believed to be the most suitable technique for transferring the SAM to the agricultural domain, and a simple but effective adapter specifically designed for agricultural image segmentation is proposed. This adapter is a lightweight module that leverages internal and external knowledge to adapt with relatively little data and injects task-specific guidance information from the samples of that task. By using visual prompts to transfer information to the network, it can efficiently adapt the frozen large-scale base model to various agricultural image segmentation tasks with minimal additional trainable parameters. This adapter is integrated into the SAM to obtain an adapted model called the agricultural SAM adapter (ASA). To assess the performance of the ASA, a dataset containing 5464 images of agricultural pests and a dataset containing 1100 images of coffee-leaf diseases are collected. The pest dataset covers 10 common pest types, whereas the coffee-leaf-disease dataset includes individual leaf images as well as localized images of coffee trees. Extensive experimental results on 12 two-dimensional agricultural image segmentation tasks demonstrate that this approach significantly enhances the performance of the SAM in agricultural image segmentation and compensates for its limitations in this context.
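The exact ASA architecture is not reproduced here, but the general adapter pattern it builds on (down-project, nonlinearity, up-project, residual connection) can be sketched in a few lines of PyTorch. The dimensions below are illustrative assumptions, not the paper's configuration.

```python
# Minimal bottleneck-adapter sketch in PyTorch. This illustrates the
# generic adapter pattern, not the exact ASA module; dim and bottleneck
# sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int = 768, bottleneck: int = 48):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)  # compress features
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)    # restore dimensionality
        nn.init.zeros_(self.up.weight)          # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection
```

With dim = 768 and bottleneck = 48, each adapter adds roughly 2 × 768 × 48 ≈ 74k parameters, a small fraction of a ViT encoder block (on the order of 7M parameters), consistent with the "less than 5% of the total" figure above.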

2. Agricultural Image Segmentation

Adapters. The concept of adapters was initially introduced in the field of Natural Language Processing (NLP) [9] to fine-tune a large pre-trained model using a compact and scalable module for each specific downstream task. Stickland and Murray [10] explored multi-task approaches that share a single BERT model with a small number of additional task-specific parameters, using novel adaptation modules called PALs ("Projected Attention Layers"), and obtained state-of-the-art results on the Recognizing Textual Entailment dataset. In the realm of computer vision, a method proposed by Facebook AI Research [11] achieved competitive results in object detection tasks with minimal adaptations for fine-tuning the ViT (Vision Transformer) architecture [12]. More recently, Chen et al. [13] designed a simple but powerful adapter for the ViT architecture for dense prediction tasks, which demonstrated excellent performance in several downstream tasks, including object detection, instance segmentation, and semantic segmentation. Liu et al. [14], inspired by the pre-training and prompt-tuning protocols widely used in NLP, proposed an EVP (Explicit Visual Prompting) technique that efficiently combines explicit visual prompting with an adapter, achieving state-of-the-art performance in low-level structure segmentation tasks. Additionally, Chen et al. presented the SAM adapter [15], achieving state-of-the-art results in camouflaged object detection and shadow detection tasks by incorporating domain-specific information and visual prompts into segmentation networks. Here, the adapter approach is applied to the SAM to solve agricultural image segmentation tasks.
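Common to all of these adapter methods is the training setup: the pre-trained backbone is frozen and only the adapter parameters are optimized. The hedged sketch below shows one conventional way to set this up in PyTorch; the "adapter" naming convention and the learning rate are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of parameter-efficient adapter training: freeze the
# pre-trained model and optimize only parameters whose names contain
# "adapter". Naming convention and learning rate are assumptions.
import torch

def configure_adapter_training(model: torch.nn.Module) -> torch.optim.Optimizer:
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name  # train adapters only

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable parameters: {trainable}/{total} "
          f"({100.0 * trainable / total:.2f}%)")

    return torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
```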
Segmentation. Image segmentation [16][17][18] is an important task in computer vision that aims to assign each pixel in an image to a specific semantic category. It has a wide range of applications in fields such as agriculture, medicine, and remote sensing. However, traditional image segmentation methods usually rely on large amounts of annotated data to train models, and such data are difficult or costly to obtain in some domains. Exploring how to achieve image segmentation with few or no annotated data is therefore both a challenging and valuable problem.
Zero-Shot Segmentation. Zero-shot segmentation methods segment objects of previously unseen classes without task-specific annotated training data, guided by various types of prompts, including points, bounding boxes, and text. The research field of zero-shot image segmentation has gained significant attention in recent years. For instance, Lüddecke and Ecker [19] proposed a backbone network based on CLIP and a Transformer-based decoder. This framework enables the generation of image segments by leveraging arbitrary text or image prompts, and its versatility in handling diverse segmentation tasks and generalizing to new queries has been demonstrated. Roy et al. [20] introduced SAM.MD, a zero-shot medical image segmentation method based on the SAM. The method effectively segments abdominal CT organs by utilizing point or bounding box prompts, and it has exhibited excellent performance across multiple medical datasets. Furthermore, research endeavors have explored the utilization of zero-shot image segmentation for more intricate tasks, including instance segmentation [21][22][23] and video segmentation [24]. Despite these advancements, zero-shot image segmentation methods remain underexplored in the field of agricultural image segmentation.
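Text-prompted zero-shot segmentation of the kind described by Lüddecke and Ecker [19] can be tried directly via the CLIPSeg model distributed through Hugging Face transformers. The sketch below is illustrative: the prompt strings, image file, and threshold are assumptions, not values from any cited experiment.

```python
# Hedged example of text-prompted zero-shot segmentation using CLIPSeg
# (in the spirit of Lüddecke and Ecker's CLIP-based approach). Prompts,
# image file, and threshold are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("coffee_leaf.jpg")  # placeholder file name
prompts = ["a leaf lesion", "healthy leaf tissue"]

inputs = processor(text=prompts, images=[image] * len(prompts),
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits  # one low-resolution mask per prompt

masks = torch.sigmoid(logits) > 0.5  # threshold into binary masks
```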
Agricultural Image Segmentation. Agricultural image segmentation is a crucial task with significant applications and practical relevance, as it helps agricultural producers identify and monitor crop conditions and pests, which in turn improves the efficiency and quality of agricultural production. In recent years, deep learning methods such as the FCN (Fully Convolutional Network) [25], U-Net [26], and Mask R-CNN (Region-Based Convolutional Neural Network) [23] have been widely used in agricultural image segmentation. These methods excel at automatically learning effective feature representations from data and are suitable for high-resolution and multi-class image segmentation tasks. Notably, deep learning techniques have achieved substantial performance advancements in crop disease and pest segmentation, making them a mainstream approach. For instance, Ma et al. [27] used a DCNN (Deep Convolutional Neural Network) to develop a method for the automatic diagnosis of cucumber diseases, providing a scientific basis for farmers to apply pesticides appropriately. Another notable work, by Esgario et al. [28], proposed a neural network-based multitask system for the classification and severity estimation of coffee-leaf diseases; the system provides a suitable tool to assist experts and farmers in identifying and quantifying biotic stresses in coffee plantations. However, traditional deep learning approaches require extensive annotated data for training, which are scarce in the domain of agricultural image segmentation. Therefore, exploring how to extend zero-shot image segmentation methods to agricultural image segmentation presents a challenging and valuable research problem.
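As a concrete illustration of this supervised baseline family, the hedged sketch below adapts torchvision's Mask R-CNN to a pest label set. The class count (10 pest types plus background, following the dataset described above) and all other settings are assumptions for illustration, not the configuration of any cited work.

```python
# Hedged sketch: adapt torchvision's pre-trained Mask R-CNN to a custom
# label set by replacing its box and mask heads. num_classes is an
# illustrative assumption (10 pest types + background).
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained backbone
num_classes = 11  # 10 pest types + background (assumption)

# Replace the box classification head.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
```

The contrast with the adapter approach is the data requirement: this baseline must be trained end-to-end on annotated masks, which is exactly the bottleneck the zero-shot and adapter-based methods above aim to avoid.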
To address this research gap, the agricultural SAM adapter (ASA) is proposed, which is a fine-tuned version of the SAM customized for agricultural image segmentation, specifically designed for zero-shot segmentation with appropriate prompts. Experiments on 12 agricultural image segmentation tasks are conducted, and the results are shown in Figure 1, which demonstrates that the proposed approach significantly outperforms the original SAM in terms of performance and generalization capability.
Figure 1. Visualized examples of the pre-trained SAM and ASA segmentation results. The ASA significantly improves segmentation performance in agricultural image segmentation tasks.

References

  1. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  2. OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774.
  3. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. arXiv 2023, arXiv:2304.02643.
  4. Martins, M.V.V.; Ventura, J.A.; de Santana, E.N.; Costa, H. Diagnóstico e manejo das doenças do cafeeiro Conilon. In Café Conilon; Ferrão, R.G., de Muner, L.H., da Fonseca, A.F.A., Ferrão, M.A.G., Eds.; Incaper: Vitória, Brazil, 2007.
  5. Oliveira, C.M.; Auad, A.M.; Mendes, S.M.; Frizzas, M.R. Crop losses and the economic impact of insect pests on Brazilian agriculture. Crop Prot. 2014, 56, 50–54.
  6. Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021, arXiv:2106.09685.
  7. Zhang, J.X.; Yang, G.H. Adaptive Fuzzy Fault-Tolerant Control of Uncertain Euler–Lagrange Systems With Process Faults. IEEE Trans. Fuzzy Syst. 2020, 28, 2619–2630.
  8. Zhang, J.X.; Yang, G.H. Fuzzy Adaptive Output Feedback Control of Uncertain Nonlinear Systems With Prescribed Performance. IEEE Trans. Cybern. 2018, 48, 1342–1354.
  9. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; Laroussilhe, Q.D.; Gesmundo, A.; Attariyan, M.; Gelly, S. Parameter-Efficient Transfer Learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2790–2799, ISSN: 2640-3498.
  10. Stickland, A.C.; Murray, I. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 5986–5995, ISSN: 2640-3498.
  11. Li, Y.; Mao, H.; Girshick, R.; He, K. Exploring Plain Vision Transformer Backbones for Object Detection. In Proceedings of the Computer Vision—ECCV 2022, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Lecture Notes in Computer Science. Springer Nature: Cham, Switzerland, 2022; pp. 280–296.
  12. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929.
  13. Chen, Z.; Duan, Y.; Wang, W.; He, J.; Lu, T.; Dai, J.; Qiao, Y. Vision Transformer Adapter for Dense Predictions. arXiv 2023, arXiv:2205.08534.
  14. Liu, W.; Shen, X.; Pun, C.M.; Cun, X. Explicit Visual Prompting for Low-Level Structure Segmentations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 19434–19445.
  15. Chen, T.; Zhu, L.; Ding, C.; Cao, R.; Wang, Y.; Li, Z.; Sun, L.; Mao, P.; Zang, Y. SAM Fails to Segment Anything?—SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More. arXiv 2023, arXiv:2304.09148.
  16. Cheng, B.; Misra, I.; Schwing, A.G.; Kirillov, A.; Girdhar, R. Masked-Attention Mask Transformer for Universal Image Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 1290–1299.
  17. Yan, H.; Zhang, J.X.; Zhang, X. Injected Infrared and Visible Image Fusion via L1 Decomposition Model and Guided Filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173.
  18. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  19. Lüddecke, T.; Ecker, A. Image Segmentation Using Text and Image Prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 7086–7096.
  20. Roy, S.; Wald, T.; Koehler, G.; Rokuss, M.R.; Disch, N.; Holzschuh, J.; Zimmerer, D.; Maier-Hein, K.H. SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model. arXiv 2023, arXiv:2304.05396.
  21. Zheng, Y.; Wu, J.; Qin, Y.; Zhang, F.; Cui, L. Zero-Shot Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 2593–2602.
  22. Li, F.; Zhang, H.; Xu, H.; Liu, S.; Zhang, L.; Ni, L.M.; Shum, H.Y. Mask DINO: Towards a Unified Transformer-Based Framework for Object Detection and Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 3041–3050.
  23. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  24. Wang, W.; Lu, X.; Shen, J.; Crandall, D.J.; Shao, L. Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
  25. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science. Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
  27. Ma, J.; Du, K.; Zheng, F.; Zhang, L.; Gong, Z.; Sun, Z. A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network. Comput. Electron. Agric. 2018, 154, 18–24.
  28. Esgario, J.G.M.; Krohling, R.A.; Ventura, J.A. Deep learning for classification and severity estimation of coffee leaf biotic stress. Comput. Electron. Agric. 2020, 169, 105162.