Vision Systems for Fruit and Vegetable Classification: History

Citrus fruits are the second most important crop worldwide. One of the most important post-harvest tasks is sorting, which separates the fruit based on its degree of maturity and is, in many cases, still carried out manually by human operators. A machine vision-based citrus sorting system can replace this manual labor in fruit inspection and sorting.

  • real-time system architecture
  • image segmentation
  • image classification
  • agriculture

1. Introduction

Nowadays, the food industry uses vision systems for fruit and vegetable classification. These systems commonly use conveyors to transport fruits through the sorting system [1]. The classifier system uses an interface to communicate with the actuators that perform the separation task. Generally, these systems determine the size, color, ripening, and quality of fruits [2][3][4][5][6][7][8][9][10][11][12].
Image processing techniques are used in agriculture to detect diseased leaves, stems, and fruit; to quantify the area affected by a disease and determine its shape and color; to estimate or evaluate productivity; to count the fruits entering a sorting machine; and to determine the size and shape of fruits.

2. Vision Systems for Fruit and Vegetable Classification

Video streaming and image processing are useful tasks in classification systems; however, they are expensive in terms of time and computational resources. Video streaming provides information about the environment or supplies useful visual features for visual quality inspection, and image processing techniques use these visual features as input to classification or clustering algorithms. Many IoT applications, such as video surveillance, healthcare, face recognition, human activity understanding, and farming, use video sensors [13][14]. Some approaches use low-cost and low-power machine vision systems [13]. However, most of these embedded video processing platforms exhibit low performance in real-time classification tasks due to their low computational power and bandwidth; in such cases, GPU-based or FPGA-based approaches are more suitable.
Machine vision-based fruit sorting systems are capable of replacing manual labor in the inspection of fruit size. Seema et al. [15] reviewed fruit grading and classification systems, summarizing the features most used to identify the kind of fruit and its degree of rotting and ripening, and the machine-learning (ML) models used by the reviewed algorithms. They found two approaches: the first, multiple fruit identification, focuses on differentiating fruit types but does not assess fruit quality; training these systems requires thousands of images of a series of different fruits. The second, specific fruit classification, uses large image sets of a single fruit type to train and test the sorter. Although the first approach is more general, the second is more suitable for single-type fruit sorting machines.
Concerning multiple fruit recognition approaches, Blasco et al. in 2003 [7] proposed a system to estimate the quality of oranges, peaches, and apples using four attributes: size, color, stem location, and detection of external blemishes. The proposed segmentation is based on Bayesian discriminant analysis, correlating fruit color using colorimetric index values. The authors tested the classification system with apples, obtaining a blemish detection accuracy of 86% and a size accuracy of 93%. Seng and Mirisaee [16] proposed an image retrieval method that combines classification models obtained from color-based, shape-based, and size-based features to increase recognition accuracy. The proposed system uses nearest-neighbor classification to recognize 15 different fruits from their feature values, obtaining an accuracy of 90%. Jana et al. [12] proposed a system that preprocesses images to separate the fruit in the foreground from the background. Their system extracts texture features from the Gray-Level Co-occurrence Matrix (GLCM) and statistical color features from the segmented image, concatenates them into a single feature descriptor, and trains a Support Vector Machine (SVM) classification model. The generated model predicts the category of an unlabeled image from the validation set, obtaining an 83.33% overall accuracy. De Goma et al. [11] proposed a system to recognize fruits using K-nearest neighbor clustering based on statistical color moments, GLCM features, area in pixels for size, and shape roundness. They used a dataset of 15 different categories with 2633 images, obtaining an 81.94% accuracy.
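The GLCM texture descriptors mentioned above can be illustrated with a short sketch. The following is a minimal NumPy implementation of a single-offset co-occurrence matrix and three classic Haralick-style statistics (contrast, energy, homogeneity). The function names, the 8-level quantization, and the pixel offset are illustrative choices, not details taken from the cited systems.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    Counts how often gray level i occurs next to gray level j, then
    normalizes the counts into joint probabilities.
    """
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Contrast, energy, and homogeneity from a normalized GLCM."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Example: a toy patch quantized to two of the 8 gray levels.
patch = (np.arange(64).reshape(8, 8) % 256) // 32
contrast, energy, homogeneity = texture_features(glcm(patch))
```

A real pipeline such as the one in [12] would compute these statistics at several offsets and angles and concatenate them with color statistics into the descriptor fed to the SVM.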
Concerning orange fruit classification systems, Subramaniam and Balasubramanian [17] used parallel computing techniques on a multi-core processor to grade citrus fruits, using the Task Parallel Library to add parallelism and concurrency to applications. They extracted geometrical features such as diameter, perimeter, area, and circularity under a laboratory-simulated real-time condition without a suitable conveyor. The system estimated the diameter of the fruit with 98% accuracy. Sirisathitkul et al. [18] proposed an image processing technique to perform Chokun orange maturity sorting. In the training step, they captured images of 90 Chokun oranges at three different degrees of maturity with a color digital camera under normal illumination conditions. They performed an RGB to HSV color transformation on each image, using the hue values to generate a set of decision rules. They tested the proposed model using 50 Chokun orange samples, obtaining a 98% accuracy. Chen et al. [19] proposed an orange sorting system that extracts four main features (fruit surface color, size, surface defects, and shape) using image processing and trains a backpropagation neural network with these features, reporting a sorting accuracy of 94.38%. Peter et al. [20] proposed an automatic system for disease identification in images of infected fruits. The approach was evaluated on three diseases of navel orange fruits, namely citrus canker, citrus melanose, and citrus black spot, achieving 93% accuracy using a global color histogram, local binary patterns, and Haralick texture features. Patel et al. (2019) [21] reported a system for orange sorting and bacteria spot defect detection based on four features: shape, size, color, and texture. They evaluated an SVM classifier, obtaining a 67.74% overall accuracy. Behera et al. [22] proposed a system to grade oranges and identify deformities.
They used a multi-class SVM with K-means clustering to classify orange diseases with an accuracy of 90%, and fuzzy logic to compute the degree of disease severity. Ifmalinda et al. [23] proposed an orange sorting program based on diameter and skin color, using the diameter and RGB index to generate a set of classification rules and obtaining an overall accuracy of 87%. Wang et al. [24] proposed an algorithm to predict the sugar content of citrus fruits and performed a classification of the sugar content using light in the visible spectrum. Similar approaches can be found for sorting apples in [5][9][10][25]; for tomatoes in [20][26]; for watermelons in [27]; for palm oil fruit in [28]; and for dates in [29].
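The hue-based decision rules and geometric features used by the orange graders above can be sketched with the Python standard library alone. The hue thresholds below are illustrative placeholders, not the rules derived in [18], and circularity follows the standard 4πA/P² definition used for shape features like those in [17].

```python
import colorsys
import math

def maturity_grade(r, g, b):
    """Classify maturity from the hue angle of the mean skin color.

    The thresholds (in degrees) are illustrative only: greener hues
    map to immature fruit, yellow to semi-mature, orange to mature.
    """
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    if hue_deg >= 70.0:
        return "immature"      # green skin
    if hue_deg >= 45.0:
        return "semi-mature"   # yellow skin
    return "mature"            # orange skin

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, lower for irregular blobs."""
    return 4.0 * math.pi * area / perimeter ** 2
```

In a full grader, the RGB triple would be the mean color of the segmented fruit region, and area/perimeter would come from the same segmentation mask.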
Related to high-performance implementation using FPGA, there are few works. Martínez-Usó et al. in 2005 [8] proposed an unsupervised segmentation algorithm based on a multi-resolution applied to multi-spectral images of fruits as a quality assessment application. Lyu et al. [30] proposed a citrus flower recognition model based on YOLOv4-Tiny lightweight neural network using software and hardware co-design patterns. They generated the dynamic link library and integrated it into the FPGA-based embedded platform. The recognition accuracy of the citrus flower recognition model deployed on the embedded platform for flowers and buds was not less than 89.30%, and the frame rate was not lower than 16 FPS.
Cong et al. proposed an analytical model to compare FPGA and GPU performance. FPGAs can provide comparable or even better performance than a GPU while consuming, on average, 28% of the power required by a GPU for most Rodinia kernels. Even though FPGAs use a lower clock frequency than GPUs, an FPGA usually achieves a higher number of operations per cycle in each computing pipeline due to its small pipeline initiation interval and considerable pipeline depth [31]. Zhang et al. proposed an FPGA acceleration of generalized sparse matrix–matrix multiplication, an essential computing kernel for many algorithms in artificial intelligence [32]. They evaluated a Huffman tree scheduler on 20 real-world benchmarks, finding that energy efficiency and performance increase by 6× and 4×, respectively. Qasaimeh et al. assessed the energy efficiency of CPU, GPU, and FPGA implementations of computer vision kernels, benchmarking the kernels of the OpenVX standard on GPU and FPGA platforms. Simple kernels implemented on GPUs obtain a 1.1–3.2× energy/frame reduction, but FPGAs outperform GPUs on complex kernels that require a complete vision pipeline, obtaining a 1.2–22.3× energy/frame reduction [33]. Guo et al. performed a state-of-the-art review of neural network accelerator designs, concluding that FPGAs achieve more than 10× better speed and energy efficiency than state-of-the-art GPUs [34]. Sanaullah and Herbordt evaluated a hardware implementation of 3D Fast Fourier Transforms (FFTs) using OpenCL as the hardware description language, achieving an average speedup of 29× versus the current CPU and 4.1× versus the recent GPU [35]. Fowers et al. compared the performance and energy of sliding-window applications implemented on FPGAs, GPUs, and multicore devices. They concluded that FPGAs provide a significant performance increase in most cases, with speedups up to 11× and 57× compared with GPUs and multicores [36].
Recently, there have been efforts to use deep learning as an effective technique for fruit sorting. In [4], the authors propose a real-time visual inspection system for sorting fruits using a classification model obtained from state-of-the-art deep-learning convolutional networks. They tested the system with apples and bananas; during real-time testing, it obtained an accuracy of 96.7% for apples and 93.8% for bananas. For the training stage, they used a database composed of 8791 apples and 300 bananas, covering both healthy and defective fruits. Kukreja and Dhiman, in 2020 [37], proposed a dense CNN algorithm to detect the apparent defects of citrus fruit. A first model, trained on 150 images without preprocessing or data augmentation, achieved an accuracy of 67%; a second model, trained on 1200 images with preprocessing and data augmentation, attained an accuracy of 89.1%. Sa et al. in 2016 [38] proposed an approach to fruit detection using deep convolutional neural networks, with application to automated harvesting on a robotic platform, performing fruit detection on imagery from two modalities: color (RGB) and near-infrared (NIR). They report precision and recall performance improving from 80.7% to 83.8% for the detection of sweet peppers. They created a model to detect seven fruits, requiring four hours per fruit to annotate and train the new model. Leelavathy et al., in 2021 [39], proposed a CNN-based classifier for orange fruit images using a binary cross-entropy loss function, obtaining an overall accuracy of 78.57%. Hossain et al., in 2019 [40], proposed a framework based on two different deep learning architectures: the first is a light model of six convolutional neural network layers, while the second is a fine-tuned Visual Geometry Group-16 (VGG16) pre-trained deep learning model. They used two color-image datasets to evaluate their proposed framework.
The first dataset contains clear fruit images, while the second dataset contains fruit images with noise, illumination, and pose variations, which are much harder to classify. Classification accuracies of 99.49% and 99.75% were achieved on dataset 1 for the first and second models, respectively. On dataset 2, the first and second models obtained accuracies of 85.43% and 96.75%, respectively.
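The binary cross-entropy loss mentioned above has a simple closed form; the following pure-Python sketch computes it over a batch of fresh/rotten probability predictions (the labels and values are illustrative, not data from the cited works).

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over a batch of predictions.

    y_true: labels in {0, 1} (e.g. 0 = rotten, 1 = fresh);
    y_pred: predicted probabilities in (0, 1) from the network's
    final sigmoid. eps guards against log(0).
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Lower loss means the predicted probabilities match the labels better.
loss = binary_cross_entropy([1, 1, 0], [0.95, 0.80, 0.10])
```

Training a CNN with this loss drives the sigmoid outputs toward 1 for one class and 0 for the other, which is why it fits two-category tasks such as fresh-versus-rotten grading.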
Recently, existing solutions have used deep learning approaches to classify defects in fruits. In [40], the authors propose a system that classifies orange images as fresh or rotten using a CNN with a SoftMax classifier, trained on 800 orange images and achieving an accuracy of 78.57%. In [2], the authors generated a dataset of eight different classes of date fruits and compared several CNN models, such as AlexNet, VGG16, InceptionV3, ResNet, and MobileNetV2; the MobileNetV2 architecture achieved an accuracy of 99%. In [41], the authors present a deep-learning system for multi-class fruit and vegetable categorization based on an improved YOLOv4 model that first recognizes the object type in an image before classifying it into one of two categories: fresh or rotten. The proposed method obtained a higher average precision (50.4%) than the original YOLOv4 (49.3%) and YOLOv3 (41.7%). In [42], the authors proposed an automatic image annotation approach to classify the ripeness of oil palm fruit and recognize a variety of fruits, trained with 100 images of oil palm fruit and 400 images of various fruits. Among these systems, few focus on classifying citrus fruits by color or size; most focus specifically on fruit defects.

This entry is adapted from the peer-reviewed paper 10.3390/electronics12183891

References

  1. Bhargava, A.; Bansal, A. Fruits and vegetables quality evaluation using computer vision: A review. J. King Saud Univ.-Comput. Inf. Sci. 2021, 33, 243–257.
  2. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A deep learning-based model for date fruit classification. Sustainability 2022, 14, 6339.
  3. Behera, S.K.; Sethy, P.K.; Sahoo, S.K.; Panigrahi, S.; Rajpoot, S.C. On-tree fruit monitoring system using IoT and image analysis. Concurr. Eng. 2021, 29, 6–15.
  4. Ismail, N.; Malik, O.A. Real-time visual inspection system for grading fruits using computer vision and deep learning techniques. Inf. Process. Agric. 2021, 9, 24–37.
  5. Leemans, V.; Destain, M.F. A real-time grading method of apples based on features extracted from defects. J. Food Eng. 2004, 61, 83–89.
  6. Cubero, S.; Moltó, E.; Gutiérrez, A.; Aleixos, N.; García-Navarrete, O.L.; Juste, F.; Blasco, J. Real-time inspection of fruit by computer vision on a mobile harvesting platform under field conditions. Prog. Agric. Eng. Sci. 2010, 6, 1–16.
  7. Blasco, J.; Aleixos, N.; Moltó, E. Machine Vision System for Automatic Quality Grading of Fruit. Biosyst. Eng. 2003, 85, 415–423.
  8. Martínez-Usó, A.; Pla, F.; García-Sevilla, P. Multispectral Image Segmentation for Fruit Quality Estimation. In Proceedings of the 2005 Conference on Artificial Intelligence Research and Development, Las Vegas, NV, USA, 27–30 June 2005; IOS Press: Amsterdam, The Netherlands, 2005; pp. 51–58.
  9. Unay, D.; Gosselin, B. Stem and calyx recognition on ‘Jonagold’ apples by pattern recognition. J. Food Eng. 2007, 78, 597–605.
  10. Feng, G.; Qixin, C. Study on color image processing based intelligent fruit sorting system. In Proceedings of the Fifth World Congress on Intelligent Control and Automation (IEEE Cat. No.04EX788), Hangzhou, China, 15–19 June 2004; Volume 6, pp. 4802–4805.
  11. De Goma, J.C.; Quilas, C.A.M.; Valerio, M.A.B.; Young, J.J.P.; Sauli, Z. Fruit recognition using surface and geometric information. J. Telecommun. Electron. Comput. Eng. (JTEC) 2018, 10, 39–42.
  12. Jana, S.; Basak, S.; Parekh, R. Automatic fruit recognition from natural images using color and texture features. In Proceedings of the 2017 Devices for Integrated Circuit (DevIC), Kalyani, India, 23–24 March 2017; pp. 620–624.
  13. Tresanchez, M.; Pujol, A.; Pallejà, T.; Martínez, D.; Clotet, E.; Palacín, J. A proposal of low-cost and low-power embedded wireless image sensor node for IoT applications. Procedia Comput. Sci. 2018, 134, 99–106.
  14. Idoje, G.; Dagiuklas, T.; Iqbal, M. Survey for smart farming technologies: Challenges and issues. Comput. Electr. Eng. 2021, 92, 107104.
  15. Seema; Kumar, A.; Gill, G. Automatic Fruit Grading and Classification System Using Computer Vision: A Review. In Proceedings of the 2015 Second International Conference on Advances in Computing and Communication Engineering, Dehradun, India, 1–2 May 2015; pp. 598–603.
  16. Seng, W.C.; Mirisaee, S.H. A new method for fruits recognition system. In Proceedings of the 2009 International Conference on Electrical Engineering and Informatics, Bangi, Malaysia, 5–7 August 2009; Volume 1, pp. 130–134.
  17. Subramaniam, K.; Balasubramanian, S. Application of parallel computing in image processing for grading of citrus fruits. In Proceedings of the 2015 International Conference on Advanced Computing and Communication Systems, Coimbatore, India, 5–7 January 2015; pp. 1–6.
  18. Sirisathitkul, Y.; Thumpen, N.; Puangtong, W. Automated Chokun Orange Maturity Sorting by Color Grading. Walailak J. Sci. Technol. (WJST) 2011, 3, 195–205.
  19. Chen, Y.; Wu, J.; Cui, M. Automatic Classification and Detection of Oranges Based on Computer Vision. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Chengdu, China, 7–10 December 2018; pp. 1551–1556.
  20. Peter, V.; Khan, M.A.; Luo, H. Automatic Orange Fruit Disease Identification Using Visible Range Images. In Proceedings of the Artificial Intelligence Algorithms and Applications, Sanya, China, 24–26 December 2020; Li, K., Li, W., Wang, H., Liu, Y., Eds.; Springer: Singapore, 2020; pp. 341–359.
  21. Patel, H.; Prajapati, R.; Patel, M. Detection of Quality in Orange Fruit Image using SVM Classifier. In Proceedings of the 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019; pp. 74–78.
  22. Behera, S.K.; Jena, L.; Rath, A.K.; Sethy, P.K. Disease Classification and Grading of Orange Using Machine Learning and Fuzzy Logic. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 0678–0682.
  23. Ifmalinda; Andasuryani; Putri, I. Design of orange grading system based on real time image processing. IOP Conf. Ser. Earth Environ. Sci. 2021, 644, 012078.
  24. Wang, X.; Wu, C.; Hirafuji, M. Visible Light Image-Based Method for Sugar Content Classification of Citrus. PLoS ONE 2016, 11, e0147419.
  25. Afrisal, H.; Faris, M.; Utomo P., G.; Grezelda, L.; Soesanti, I.; Andri, F.M. Portable smart sorting and grading machine for fruits using computer vision. In Proceedings of the 2013 International Conference on Computer, Control, Informatics and Its Applications (IC3INA), Jakarta, Indonesia, 19–21 November 2013; pp. 71–75.
  26. Wu, J.; Zhang, B.; Zhou, J.; Xiong, Y.; Gu, B.; Yang, X. Automatic Recognition of Ripening Tomatoes by Combining Multi-Feature Fusion with a Bi-Layer Classification Strategy for Harvesting Robots. Sensors 2019, 19, 612.
  27. Liantoni, F.; Perwira, R.I.; Putri, L.D.; Manurung, R.T.; Kahar, M.S.; Safitri, J.; Muharlisiani, L.T.; Chamidah, D.; Ghofur, A.; Kurniawan, P.S.; et al. Watermelon classification using k-nearest neighbours based on first order statistics extraction. J. Phys. Conf. Ser. 2019, 1175, 012114.
  28. Makky, M.; Soni, P. Development of an automatic grading machine for oil palm fresh fruits bunches (FFBs) based on machine vision. Comput. Electron. Agric. 2013, 93, 129–139.
  29. Al Ohali, Y. Computer vision based date fruit grading system: Design and implementation. J. King Saud Univ.-Comput. Inf. Sci. 2011, 23, 29–36.
  30. Lyu, S.; Zhao, Y.; Li, R.; Chen, Q.; Li, Z. The accurate recognition system of citrus flowers using YOLOv4-Tiny lightweight neural network and FPGA embedded platform. In Proceedings of the International Conference on Mechanical Engineering, Measurement Control, and Instrumentation, Guangzhou, China, 18 July 2021; Liu, G., Chen, S., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2021; Volume 11930, p. 119302E.
  31. Cong, J.; Fang, Z.; Lo, M.; Wang, H.; Xu, J.; Zhang, S. Understanding Performance Differences of FPGAs and GPUs. In Proceedings of the 2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), Boulder, CO, USA, 29 April–1 May 2018; pp. 93–96.
  32. Zhang, Z.; Wang, H.; Han, S.; Dally, W.J. SpArch: Efficient Architecture for Sparse Matrix Multiplication. In Proceedings of the 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), San Diego, CA, USA, 22–26 February 2020; pp. 261–274.
  33. Qasaimeh, M.; Denolf, K.; Lo, J.; Vissers, K.; Zambreno, J.; Jones, P.H. Comparing Energy Efficiency of CPU, GPU and FPGA Implementations for Vision Kernels. In Proceedings of the 2019 IEEE International Conference on Embedded Software and Systems (ICESS), Las Vegas, NV, USA, 2–3 June 2019; pp. 1–8.
  34. Guo, K.; Zeng, S.; Yu, J.; Wang, Y.; Yang, H. A Survey of FPGA-Based Neural Network Inference Accelerators. ACM Trans. Reconfigurable Technol. Syst. 2019, 12, 1–26.
  35. Sanaullah, A.; Herbordt, M.C. FPGA HPC Using OpenCL: Case Study in 3D FFT. In Proceedings of the HEART 2018: 9th International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies, Kusatsu, Japan, 14–16 June 2018; Association for Computing Machinery: New York, NY, USA, 2018.
  36. Fowers, J.; Brown, G.; Cooke, P.; Stitt, G. A Performance and Energy Comparison of FPGAs, GPUs, and Multicores for Sliding-Window Applications. In Proceedings of the FPGA ’12: ACM/SIGDA international symposium on Field Programmable Gate Arrays, Monterey, CA, USA, 22–24 February 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 47–56.
  37. Kukreja, V.; Dhiman, P. A Deep Neural Network based disease detection scheme for Citrus fruits. In Proceedings of the 2020 International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 10–12 September 2020; pp. 97–101.
  38. Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. DeepFruits: A Fruit Detection System Using Deep Neural Networks. Sensors 2016, 16, 1222.
  39. Sharma, R.; Kaur, S. Convolution Neural Network based Several Orange Leave Disease Detection and Identification Methods: A Review. In Proceedings of the 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 27–29 November 2019; pp. 196–201.
  40. Leelavathy, B.; Sri Datta, Y.S.S.; Rachana, Y.S. Quality Assessment of Orange Fruit Images Using Convolutional Neural Networks. In Proceedings of the International Conference on Computational Intelligence and Data Engineering; Chaki, N., Pejas, J., Devarakonda, N., Rao Kovvur, R.M., Eds.; Springer: Singapore, 2021; pp. 403–412.
  41. Mukhiddinov, M.; Muminov, A.; Cho, J. Improved classification approach for fruits and vegetables freshness based on deep learning. Sensors 2022, 22, 8192.
  42. Mamat, N.; Othman, M.F.; Abdulghafor, R.; Alwan, A.A.; Gulzar, Y. Enhancing Image Annotation Technique of Fruit Classification Using a Deep Learning Approach. Sustainability 2023, 15, 901.