Challenges in Agricultural Image Datasets and Filter Algorithms
Remote sensing facilitates smart farming because it allows for inexpensive crop monitoring, crop classification, stress detection, and yield forecasting using lightweight sensors over a wide area in a relatively short amount of time. Deep learning (DL)-based computer vision is one of the key enablers of automatic detection and monitoring of plant stress. Challenges for DL algorithms on agricultural datasets include size variation of objects, limited image resolution, background clutter, the need for precise expert annotation, high object density, and the demand for different spectral images.
  • pansharpening
  • filtering
  • unmanned aerial vehicle (UAV) images
  • deep learning

1. Challenges in Agricultural Image Datasets

Machine Learning (ML) models have great potential for various agricultural applications, e.g., plant stress detection. However, these applications pose their own specific challenges, which require image preprocessing. For instance, in disease detection in plant stands, different diseases can exhibit almost identical spectral features depending on disease stage and environmental influences, making digital image processing difficult; even experts can struggle to differentiate them visually [1]. Likewise, computer vision algorithms encounter difficulties in detecting these diseases [2]. In weed detection, weeds with the same leaf colour, and thus similar spectral features, as the crop to be protected can lead to low detection performance [3].
In general, large image datasets are required to train robust ML and deep learning (DL) models. An insufficient amount of training images can cause overfitting, reducing the model's generalization capability [4]. Careful annotation of the available image datasets is an indispensable preprocessing step for agricultural use cases. Disturbing background scenes and objects, such as soil and other biomass, complicate target annotation in visible images for automated disease and weed detection [5][6]. Noise, blurring, brightness, and contrast issues can degrade image quality, where image noise results from the interaction of natural light and camera mechanics [7]. Due to the speed of unmanned aerial vehicles (UAVs), captured images can be affected by motion blur and excessive brightness, posing a significant challenge for classification and object detection [8]. In addition, low-priced sensors produce low-quality images when UAVs fly at high altitudes, while UAV propellers create shadows and blurring in low-altitude photographs [9]. Clouds and their shadows are significant obstacles for space-borne sensors and interfere with identifying plants and their diseases [10]. Low-light conditions diminish image clarity and uniformity, resulting in low contrast and a distorted focal point, which is a significant issue in object detection [11].
Multispectral images comprise more bands than RGB images, where each band represents different characteristics (e.g., red, near-infrared) of the same scene. Hence, band-to-band registration is needed to merge the information from the individual bands. However, proper image alignment is a well-known issue in the registration task [12]. Another issue with multispectral images is that they often lack sufficient spatial resolution, which is undesirable for subsequent image-processing tasks. The fusion of multispectral images with high spatial resolution panchromatic (PAN) images, also called pansharpening, can yield images improved in both spectral and spatial detail [13]. Pansharpened images can improve the accuracy of object detection and image classification based on deep neural networks, as demonstrated, e.g., in [14][15]. On the other hand, Xu et al. [16] found that most pansharpening algorithms suffered from distortions of color and spatial detail. Moreover, misregistration and object size differences also resulted in poor and blurry pansharpened images. As a result, object detection algorithms struggled with spatially distorted pansharpened images [17].
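As an illustration of the pansharpening idea, the minimal component-substitution (Brovey-style) sketch below rescales each multispectral band by the ratio of the PAN intensity to the mean multispectral intensity. It assumes a co-registered, upsampled multispectral array and a PAN array as NumPy floats; it is a toy sketch, not the method of any cited study.

```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Simple Brovey-style pansharpening.

    ms  : (H, W, B) multispectral image, already upsampled to the PAN grid.
    pan : (H, W) panchromatic image, co-registered with `ms`.
    Each band is scaled by the ratio of PAN intensity to the mean MS intensity,
    injecting the spatial detail of the PAN band into every spectral band.
    """
    intensity = ms.mean(axis=2)          # synthetic intensity component
    ratio = pan / (intensity + eps)      # spatial detail ratio
    return ms * ratio[..., None]         # rescale every band

# Usage with random placeholder data (stand-ins for real co-registered imagery)
ms = np.random.rand(64, 64, 4).astype(np.float32)            # 4-band multispectral
pan = np.random.rand(256, 256).astype(np.float32)            # higher-resolution PAN
ms_up = np.kron(ms, np.ones((4, 4, 1), dtype=np.float32))    # naive 4x upsampling
sharpened = brovey_pansharpen(ms_up, pan)
print(sharpened.shape)  # (256, 256, 4)
```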

2. Filter Algorithms

Due to abnormalities in the capturing process, sensor issues, and environmental conditions, images can suffer degradation through blurring, noise, geometric distortions, inadequate illumination, and lack of sharpness [18]. Various high- and low-pass image filtering algorithms are available for image preprocessing tasks such as denoising, enhancement, deblurring, histogram equalization, and contrast correction. Image noise is unwanted information that degrades the visual quality of images and can be caused by various factors, such as data acquisition, signal transmission, and computational errors [19]. Usually, images are corrupted by additive noise, such as salt-and-pepper, Gaussian, or Poisson noise, or by multiplicative noise, such as speckle noise [20]. Salt-and-pepper noise arises from sudden disturbances in the image acquisition process, for example dust particles or overheated components, and appears as black and white dots [21]. As the name suggests, Gaussian noise follows a Gaussian distribution; in the noisy image, each pixel is the sum of the true pixel value and a random, normally distributed noise value [22]. Poisson noise occurs when the number of photons detected by the sensor is not sufficient to provide measurable statistical information [23]. Speckle noise is a common phenomenon in coherent imaging systems, for example laser, Synthetic Aperture Radar (SAR), ultrasonic, and acoustic imaging, where the noisy image is the product of the acquired signal and the speckle noise [24].
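To make these noise models concrete, the short sketch below synthesizes the four noise types on a grayscale image with NumPy; the scaling constants (noise fraction, sigma, photon budget) are illustrative assumptions, not values from the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((128, 128))   # placeholder grayscale image in [0, 1]

# Salt-and-pepper: randomly force a small fraction of pixels to 0 or 1.
sp = img.copy()
mask = rng.random(img.shape)
sp[mask < 0.02] = 0.0          # pepper
sp[mask > 0.98] = 1.0          # salt

# Gaussian (additive): true pixel plus zero-mean normal noise.
gaussian = img + rng.normal(0.0, 0.05, img.shape)

# Poisson (signal-dependent): simulate a low photon count per pixel.
photons = 30.0                 # assumed photon budget
poisson = rng.poisson(img * photons) / photons

# Speckle (multiplicative): image times (1 + noise).
speckle = img * (1.0 + rng.normal(0.0, 0.1, img.shape))

for name, noisy in [("salt-pepper", sp), ("gaussian", gaussian),
                    ("poisson", poisson), ("speckle", speckle)]:
    print(name, float(np.abs(noisy - img).mean()))
```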
To start with denoising, the Gaussian, Wiener, median, and bilateral filters are standard filters for eliminating unwanted noise from images [25]. The median filter is a nonlinear denoising filter that removes salt-and-pepper noise and softens edges. The bilateral filter is used in many denoising tasks due to its edge-preserving property [26][27]. The Wiener filter, in turn, performs well for denoising images corrupted by speckle and Gaussian noise [28]. Archana and Sahayadhas [29] stated that the Wiener filter outperformed the Gaussian and mean filters for noise removal in images of paddy leaves.
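A minimal sketch of these classical filters is given below, assuming OpenCV and SciPy are available; the kernel sizes and sigma values are illustrative choices rather than settings from the cited studies.

```python
import cv2
import numpy as np
from scipy.signal import wiener

# Placeholder noisy grayscale image in the 8-bit range.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (128, 128), dtype=np.uint8)

median    = cv2.medianBlur(noisy, 3)                        # salt-and-pepper removal
gaussian  = cv2.GaussianBlur(noisy, (5, 5), sigmaX=1.0)     # smooths Gaussian noise
bilateral = cv2.bilateralFilter(noisy, d=5,
                                sigmaColor=50, sigmaSpace=50)  # edge-preserving
wiener_out = wiener(noisy.astype(np.float64), mysize=5)     # adaptive Wiener filter

print(median.shape, gaussian.shape, bilateral.shape, wiener_out.shape)
```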
Image blurring refers to unsharp areas of an image caused by camera movement, shaking, or lack of focus, and is classified into average blur, Gaussian blur, motion blur, defocus blur, and atmospheric turbulence blur [30]. It is a bottleneck for high-quality imagery and corrupts important image information [31]. Image deblurring filters are inverse techniques that aim to restore the sharpness of degraded images [31]. Deblurring algorithms are broadly classified into two main categories, blind and non-blind, depending on the availability of the Point Spread Function (PSF) information [32]. Among them, the Wiener filter is one of the most common non-blind restoration techniques for images degraded by motion blur, unfocused optics, noise, and linear blur [33]. According to Al-Ameen et al. [34], the Laplacian sharpening filter performs well with Gaussian blur but poorly with noisy images. On the other hand, with a larger number of iterations, the optimized Richardson-Lucy algorithm is the more stable option for blurry and noisy images [34]. In plant disease diagnosis, edge-sharpening filters can highlight the pixels around the border of a region and thereby improve image segmentation [35]. Maximum likelihood-based image deblurring algorithms, in contrast, do not require PSF information and are therefore an effective tool for blind image deblurring [36]. Yi and Shimamura [37] developed an improved maximum likelihood-based blind image restoration technique for images degraded by noise and blur. Moreover, among blind restoration techniques, unsharp masking is a classic method to restore blurry and noisy images and subsequently enhance details and edge information [38].
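The sketch below illustrates non-blind Wiener and Richardson-Lucy deconvolution plus unsharp masking with SciPy and scikit-image, assuming the blur PSF is known; the PSF, noise level, and iteration count are toy assumptions, not the configurations used in the cited works.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import wiener, richardson_lucy
from skimage.filters import unsharp_mask

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))             # placeholder sharp image in [0, 1]

psf = np.ones((5, 5)) / 25.0             # assumed known 5x5 box-blur PSF
blurred = convolve2d(sharp, psf, mode="same", boundary="wrap")
blurred += rng.normal(0.0, 0.01, blurred.shape)   # mild additive noise

deblur_wiener = wiener(blurred, psf, balance=0.1)     # non-blind Wiener deconvolution
deblur_rl = richardson_lucy(blurred, psf, 30)         # iterative Richardson-Lucy
sharpened = unsharp_mask(blurred, radius=2, amount=1.0)  # unsharp masking

print(deblur_wiener.shape, deblur_rl.shape, sharpened.shape)
```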
Edge detection filters are essential for extracting edges at discontinuities. For image enhancement, Histogram Equalization (HE) is a standard method for transforming a dark image into a clearer one. It stretches the image's dynamic range by flattening the histogram, turning low-contrast areas into more distinct contrast [39]. Moreover, it can improve the accuracy of automatic leaf disease detection, as shown in [40]. Adaptive Histogram Equalization (AHE) reduces HE's limitations by increasing the local contrast of the input images. Contrast Limited Adaptive Histogram Equalization (CLAHE) is a further refinement of HE that reduces noise amplification and can further improve the clarity of distorted input images [41][42]. CLAHE has been demonstrated to improve Convolutional Neural Network (CNN) classification accuracy by enhancing images with low contrast and poor quality [43].
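A minimal OpenCV sketch of global HE versus CLAHE on a grayscale image is shown below; the clip limit and tile grid size are illustrative assumptions.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
gray = rng.integers(40, 120, (128, 128), dtype=np.uint8)  # low-contrast placeholder

# Global histogram equalization: flattens the whole-image histogram.
he = cv2.equalizeHist(gray)

# CLAHE: equalizes contrast locally in tiles and clips the histogram
# to limit noise amplification.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_out = clahe.apply(gray)

print(gray.std(), he.std(), clahe_out.std())  # contrast (std) before/after
```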

3. Deep Neural Networks and Generative Adversarial Networks

Deep neural networks have demonstrated strong performance on a variety of image restoration tasks in recent years, including denoising, deblurring, and super-resolution [44]. Tian et al. [45] reviewed contemporary state-of-the-art networks for image denoising and concluded that most DL-based denoising networks perform well with additive noise. On the other hand, ground truth is the most critical factor for robust feature learning in DL models, whereas images taken in real-world environments usually exhibit inherent noise that was not added artificially and thus lack a ground truth. Dealing with this issue constitutes a significant research direction in DL-based denoising. Recent studies have therefore developed self-supervised denoisers such as Self2Self [46], Neighbor2Neighbor [47], and Deformed2Self [48].
Generative Adversarial Networks (GANs) have been considered a breakthrough in the DL domain focusing on computer vision applications [49]. Since their introduction, GANs have been used in various computer vision tasks such as image preprocessing, super-resolution, and image fusion [50]. Recently, a GAN model known as the Hierarchical Generative Adversarial Network (HI-GAN) [51] was developed to address the aforementioned issue of real-world noisy images. Unlike other deep-CNN-based denoisers, HI-GAN not only maintains a higher Peak Signal-to-Noise Ratio (PSNR) score but also preserves high-frequency details and low-contrast features. As with denoising, deblurring has seen the development of several GAN models in recent years. However, most models require corresponding pairs of blurred and sharp images for training (the ground-truth issue again), which conflicts with the requirements for training on real-world data exhibiting innate noise [52].
Nevertheless, unsupervised GANs have been developed to address these issues. Nimisha et al. [53] developed the first self-supervised approach for the unsupervised deblurring of real-world and synthetic images. A self-supervised model for blind motion deblurring was later introduced by Liu et al. [54], while a self-supervised model for event-based real-world blurred photos was developed by Xu et al. [48]. Li et al. [55] designed a self-supervised You Only Look Yourself (YOLY) model that can enhance images without ground truth or any prior training, reducing the time and effort required for data collection.
In computer vision, deciphering low-resolution images represents a major hurdle for object detection and classification tasks because the resolution is not sufficient for disease recognition [9]. This challenge is tackled by image super-resolution techniques. Dong et al. [56] presented SRCNN, the first CNN-based lightweight Single Image Super-Resolution (SISR) approach, which performed better than previous sparse-coding-based super-resolution models. As for deblurring and denoising, unsupervised and self-supervised DL strategies have been devised to address this image upsampling issue for real-world images [57].
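For orientation, a minimal PyTorch sketch of a three-layer SRCNN-style network (9-1-5 kernels applied to a bicubically upsampled input) is given below; the 64/32 layer widths follow the commonly reported configuration but are assumptions for illustration, not a re-implementation of the authors' released model.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN-style network: patch extraction, non-linear
    mapping, and reconstruction, applied to a bicubically upsampled image."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Usage: the low-resolution input is first upsampled to the target size.
lr = torch.rand(1, 1, 32, 32)
upsampled = nn.functional.interpolate(lr, scale_factor=4, mode="bicubic",
                                      align_corners=False)
sr = SRCNN()(upsampled)
print(sr.shape)  # torch.Size([1, 1, 128, 128])
```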
In remote sensing, it is common to receive images from multiple sensors simultaneously, including panchromatic images providing high spatial resolution and lower-resolution multispectral images delivering valuable spectral data. Images with both higher spatial and spectral resolution can be obtained by fusing images captured simultaneously by the individual multispectral and panchromatic sensors, which is called pansharpening [58]. Broadly, pansharpening methods can be categorized into five main groups: those based on Component Substitution (CS), Multi-Resolution Analysis (MRA), Variational Optimization (VO), hybrid approaches, and DL-based methods [59][60]. According to Javan et al. [60], MRA-based methods achieved higher spectral quality, hybrid methods performed better in terms of spatial quality, and CS-based methods performed worst in maintaining both spectral and spatial quality. Nonetheless, both CS- and MRA-based pansharpening methods produce spatially and spectrally distorted images under misregistration; DL models have been shown to mitigate this issue [61]. CNN- and GAN-based pansharpening models (see [62] for a recent review) produce a more stable spatial and spectral balance, obtaining a high correlation with the original multispectral bands.
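To contrast with the component-substitution sketch above, the following minimal high-pass injection sketch illustrates the MRA idea: spatial detail is extracted from the PAN band with a low-pass filter and added to each upsampled multispectral band. The filter size and unit injection gain are illustrative assumptions, not parameters from the cited methods.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_injection_pansharpen(ms: np.ndarray, pan: np.ndarray,
                             scale: int = 4, size: int = 5) -> np.ndarray:
    """MRA-style pansharpening by high-pass detail injection.

    ms  : (h, w, B) low-resolution multispectral image.
    pan : (h*scale, w*scale) co-registered panchromatic image.
    The PAN detail (PAN minus its low-pass version) is added to every
    upsampled band with a unit injection gain.
    """
    ms_up = zoom(ms, (scale, scale, 1), order=1)    # bilinear upsampling
    detail = pan - uniform_filter(pan, size=size)   # high-pass PAN detail
    return ms_up + detail[..., None]

ms = np.random.rand(64, 64, 4)
pan = np.random.rand(256, 256)
print(hpf_injection_pansharpen(ms, pan).shape)  # (256, 256, 4)
```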
The current literature on applying the aforementioned DL-based techniques to agricultural image analysis reveals a strong and expanding body of research. Image Super-Resolution (SR) has demonstrated higher classification accuracy in plant disease detection. For instance, SR images produced by a Super-Resolution Generative Adversarial Network (SRGAN) achieved higher classification accuracy for wheat stripe rust classification [63]. Similarly, a Wide-activation, Attention-mechanism-based Generative Adversarial Network (WAGAN) led to higher classification accuracy for tomato diseases from low-resolution images [64]. Furthermore, the use of SR through a Residual Skip Network-based method for enhancing image resolution, as demonstrated for grape leaf disease detection [65], and the integration of dual-attention and topology-fusion mechanisms within a Generative Adversarial Network (DATFGAN) for agricultural image analysis [66], have collectively contributed to improved classification accuracy.
Despite the limited number of studies on DL-based deblurring and motion deblurring in the context of agricultural images, the available research indicates noteworthy advances in crop image classification accuracy. Shah and Kumar [67] utilized DeblurGAN-v2 [68] to correct motion blur in grape detection, which significantly improved classification accuracy. Correspondingly, WRA-Net (Wide Receptive Field Attention Network, see [69]) was introduced to deblur motion-blurred images, improving crop and weed segmentation accuracy. Moreover, Xiao et al. [70] introduced a hybrid technique, SR-DeblurUGAN, combining image deblurring and super-resolution, which achieved stable performance for agricultural drone image enhancement.

This entry is adapted from the peer-reviewed paper 10.3390/rs16050874

References

  1. Haridasan, A.; Thomas, J.; Raj, E.D. Deep learning system for paddy plant disease detection and classification. Environ. Monit. Assess. 2023, 195, 120.
  2. Bhujade, V.G.; Sambhe, V. Role of digital, hyper spectral, and SAR images in detection of plant disease with deep learning network. Multimed. Tools Appl. 2022, 81, 33645–33670.
  3. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240.
  4. Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection. Symmetry 2019, 11, 939.
  5. Barbedo, J.G.A. A review on the main challenges in automatic plant disease identification based on visible range images. Biosyst. Eng. 2016, 144, 52–60.
  6. Di Cicco, M.; Potena, C.; Grisetti, G.; Pretto, A. Automatic model based dataset generation for fast and accurate crop and weeds detection. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5188–5195.
  7. Labhsetwar, S.R.; Haridas, S.; Panmand, R.; Deshpande, R.; Kolte, P.A.; Pati, S. Performance Analysis of Optimizers for Plant Disease Classification with Convolutional Neural Networks. In Proceedings of the 2021 4th Biennial International Conference on Nascent Technologies in Engineering (ICNTE), Navi Mumbai, India, 15–16 January 2021; pp. 1–6.
  8. Barbedo, J.G.A.; Koenigkan, L.V.; Santos, T.T.; Santos, P.M. A Study on the Detection of Cattle in UAV Images Using Deep Learning. Sensors 2019, 19, 5436.
  9. Wen, D.; Ren, A.; Ji, T.; Flores-Parra, I.M.; Yang, X.; Li, M. Segmentation of thermal infrared images of cucumber leaves using K-means clustering for estimating leaf wetness duration. Int. J. Agric. Biol. Eng. 2020, 13, 161–167.
  10. Ouhami, M.; Hafiane, A.; Es-Saady, Y.; Hajji, M.E.; Canals, R. Computer Vision, IoT and Data Fusion for Crop Disease Detection Using Machine Learning: A Survey and Ongoing Research. Remote Sens. 2021, 13, 2486.
  11. Xu, J.X.; Ma, J.; Tang, Y.N.; Wu, W.X.; Shao, J.H.; Wu, W.B.; Wei, S.Y.; Liu, Y.F.; Wang, Y.C.; Guo, H.Q. Estimation of Sugarcane Yield Using a Machine Learning Approach Based on UAV-LiDAR Data. Remote Sens. 2020, 12, 2823.
  12. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral Remote Sensing from Unmanned Aircraft: Image Processing Workflows and Applications for Rangeland Environments. Remote Sens. 2011, 3, 2529–2551.
  13. Choi, M.; Kim, R.Y.; Nam, M.R.; Kim, H.O. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett. 2005, 2, 136–140.
  14. Lu, Y.; Perez, D.; Dao, M.; Kwan, C.; Li, J. Deep Learning with Synthetic Hyperspectral Images for Improved Soil Detection in Multispectral Imagery. In Proceedings of the 2018 9th IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON 2018), New York, NY, USA, 8–10 November 2018; pp. 666–672.
  15. Sekrecka, A.; Kedzierski, M.; Wierzbicki, D. Pre-Processing of Panchromatic Images to Improve Object Detection in Pansharpened Images. Sensors 2019, 19, 5146.
  16. Xu, Q.; Zhang, Y.; Li, B. Recent advances in pansharpening and key problems in applications. Int. J. Image Data Fusion 2014, 5, 175–195.
  17. Chen, F.; Lou, S.; Song, Y. Improving object detection of remotely sensed multispectral imagery via pan-sharpening. In Proceedings of the ICCPR 2020: 2020 9th International Conference on Computing and Pattern Recognition, Xiamen, China, 30 October–1 November 2020; ACM International Conference Proceeding Series. ACM: New York, NY, USA, 2020; pp. 136–140.
  18. Lagendijk, R.L.; Biemond, J. Basic methods for image restoration and identification. In The Essential Guide to Image Processing; Elsevier: Amsterdam, The Netherlands, 2009; pp. 323–348.
  19. Diwakar, M.; Kumar, M. A review on CT image noise and its denoising. Biomed. Signal Process. Control 2018, 42, 73–88.
  20. Saxena, C.; Kourav, D. Noises and image denoising techniques: A brief survey. Int. J. Emerg. Technol. Adv. Eng. 2014, 4, 878–885.
  21. Verma, R.; Ali, J. A comparative study of various types of image noise and efficient noise removal techniques. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2013, 3, 617–622.
  22. Vijaykumar, V.; Vanathi, P.; Kanagasabapathy, P. Fast and efficient algorithm to remove gaussian noise in digital images. IAENG Int. J. Comput. Sci. 2010, 37, 300–302.
  23. Kumain, S.C.; Singh, M.; Singh, N.; Kumar, K. An efficient Gaussian noise reduction technique for noisy images using optimized filter approach. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; IEEE: New York, NY, USA, 2018; pp. 243–248.
  24. Ren, R.; Guo, Z.; Jia, Z.; Yang, J.; Kasabov, N.K.; Li, C. Speckle noise removal in image-based detection of refractive index changes in porous silicon microarrays. Sci. Rep. 2019, 9, 15001.
  25. Aboshosha, A.; Hassan, M.; Ashour, M.; Mashade, M.E. Image denoising based on spatial filters, an analytical study. In Proceedings of the 2009 International Conference on Computer Engineering and Systems (ICCES’09), Cairo, Egypt, 14–16 December 2009; pp. 245–250.
  26. Bera, T.; Das, A.; Sil, J.; Das, A.K. A survey on rice plant disease identification using image processing and data mining techniques. Adv. Intell. Syst. Comput. 2019, 814, 365–376.
  27. Paris, S.; Kornprobst, P.; Tumblin, J.; Durand, F. Bilateral filtering: Theory and applications. Found. Trends Comput. Graph. Vis. 2009, 4, 1–73.
  28. Kumar, S.; Kumar, P.; Gupta, M.; Nagawat, A.K. Performance Comparison of Median and Wiener Filter in Image De-noising. Int. J. Comput. Appl. 2010, 12, 27–31.
  29. Archana, K.S.; Sahayadhas, A. Comparison of various filters for noise removal in paddy leaf images. Int. J. Eng. Technol. 2018, 7, 372–374.
  30. Gulat, N.; Kaushik, A. Remote sensing image restoration using various techniques: A review. Int. J. Sci. Eng. Res. 2012, 3, 1–6.
  31. Wang, R.; Tao, D. Recent progress in image deblurring. arXiv 2014, arXiv:1409.6838.
  32. Rahimi-Ajdadi, F.; Mollazade, K. Image deblurring to improve the grain monitoring in a rice combine harvester. Smart Agric. Technol. 2023, 4, 100219.
  33. Al-qinani, I.H. Deblurring image and removing noise from medical images for cancerous diseases using a Wiener filter. Int. Res. J. Eng. Technol. 2017, 4, 2354–2365.
  34. Al-Ameen, Z.; Sulong, G.; Johar, M.G.M.; Verma, N.; Kumar, R.; Dachyar, M.; Alkhawlani, M.; Mohsen, A.; Singh, H.; Singh, S.; et al. A comprehensive study on fast image deblurring techniques. Int. J. Adv. Sci. Technol. 2012, 44, 1–10.
  35. Petrellis, N. A Review of Image Processing Techniques Common in Human and Plant Disease Diagnosis. Symmetry 2018, 10, 270.
  36. Holmes, T.J.; Bhattacharyya, S.; Cooper, J.A.; Hanzel, D.; Krishnamurthi, V.; Lin, W.c.; Roysam, B.; Szarowski, D.H.; Turner, J.N. Light microscopic images reconstructed by maximum likelihood deconvolution. In Handbook of Biological Confocal Microscopy; Springer: Boston, MA, USA, 1995; pp. 389–402.
  37. Yi, C.; Shimamura, T. An Improved Maximum-Likelihood Estimation Algorithm for Blind Image Deconvolution Based on Noise Variance Estimation. J. Signal Process. 2012, 16, 629–635.
  38. Liu, L.; Jia, Z.; Yang, J.; Kasabov, N. A medical image enhancement method using adaptive thresholding in NSCT domain combined unsharp masking. Int. J. Imaging Syst. Technol. 2015, 25, 199–205.
  39. Chourasiya, A.; Khare, N. A Comprehensive Review of Image Enhancement Techniques. Int. J. Innov. Res. Growth 2019, 8, 60–71.
  40. Bashir, S.; Sharma, N. Remote area plant disease detection using image processing. IOSR J. Electron. Commun. Eng. 2012, 2, 31–34.
  41. Ansari, A.S.; Jawarneh, M.; Ritonga, M.; Jamwal, P.; Mohammadi, M.S.; Veluri, R.K.; Kumar, V.; Shah, M.A. Improved Support Vector Machine and Image Processing Enabled Methodology for Detection and Classification of Grape Leaf Disease. J. Food Qual. 2022, 2022, 9502475.
  42. Rubini, C.; Pavithra, N. Contrast Enhancement of MRI Images using AHE and CLAHE Techniques. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 2442–2445.
  43. Lilhore, U.K.; Imoize, A.L.; Lee, C.C.; Simaiya, S.; Pani, S.K.; Goyal, N.; Kumar, A.; Li, C.T. Enhanced Convolutional Neural Network Model for Cassava Leaf Disease Identification and Classification. Mathematics 2022, 10, 580.
  44. Dong, W.; Wang, P.; Yin, W.; Shi, G.; Wu, F.; Lu, X. Denoising Prior Driven Deep Neural Network for Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2305–2318.
  45. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.W. Deep learning on image denoising: An overview. Neural Netw. 2020, 131, 251–275.
  46. Quan, Y.; Chen, M.; Pang, T.; Ji, H. Self2self with dropout: Learning self-supervised denoising from single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
  47. Huang, T.; Li, S.; Jia, X.; Lu, H.; Liu, J. Neighbor2neighbor: Self-supervised denoising from single noisy images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021.
  48. Xu, J.; Adalsteinsson, E. Deformed2Self: Self-supervised Denoising for Dynamic Medical Imaging. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021; Springer International Publishing: Cham, Switzerland, 2021; pp. 25–35.
  49. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
  50. Iglesias, G.; Talavera, E.; Díaz-Álvarez, A. A survey on GANs for computer vision: Recent research, analysis and taxonomy. Comput. Sci. Rev. 2023, 48, 100553.
  51. Vo, D.M.; Nguyen, D.M.; Le, T.P.; Lee, S.W. HI-GAN: A hierarchical generative adversarial network for blind denoising of real photographs. Inf. Sci. 2021, 570, 225–240.
  52. Zhang, K.; Ren, W.; Luo, W.; Lai, W.S.; Stenger, B.; Yang, M.H.; Li, H. Deep Image Deblurring: A Survey. Int. J. Comput. Vis. 2022, 130, 2103–2130.
  53. Nimisha, T.M.; Sunil, K.; Rajagopalan, A.N. Unsupervised class-specific deblurring. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
  54. Liu, P.; Janai, J.; Pollefeys, M.; Sattler, T.; Geiger, A. Self-Supervised Linear Motion Deblurring. IEEE Robot. Autom. Lett. 2020, 5, 2475–2482.
  55. Li, B.; Gou, Y.; Gu, S.; Liu, J.Z.; Zhou, J.T.; Peng, X. You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network. Int. J. Comput. Vis. 2021, 129, 1754–1767.
  56. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
  57. Wang, Z.; Chen, J.; Hoi, S.C.H. Deep Learning for Image Super-Resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3365–3387.
  58. Ehlers, M. Multi-image fusion in remote sensing: Spatial enhancement vs. spectral characteristics preservation. In Advances in Visual Computing—ISVC 2008; Lecture Notes in Computer Science—LNCS (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2008; Volume 5359, pp. 75–84.
  59. Meng, X.; Shen, H.; Li, H.; Zhang, L.; Fu, R. Review of the pansharpening methods for remote sensing images based on the idea of meta-analysis: Practical discussion and challenges. Inf. Fusion 2019, 46, 102–113.
  60. Javan, F.D.; Samadzadegan, F.; Mehravar, S.; Toosi, A.; Khatami, R.; Stein, A. A review of image fusion techniques for pan-sharpening of high-resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2021, 171, 101–117.
  61. Saxena, N.; Saxena, G.; Khare, N.; Rahman, M.H. Pansharpening scheme using spatial detail injection–based convolutional neural networks. IET Image Process. 2022, 16, 2297–2307.
  62. Wang, P.; Alganci, U.; Sertel, E. Comparative analysis on deep learning based pan-sharpening of very high-resolution satellite images. Int. J. Environ. Geoinform. 2021, 8, 150–165.
  63. Maqsood, M.H.; Mumtaz, R.; Haq, I.U.; Shafi, U.; Zaidi, S.M.H.; Hafeez, M. Super resolution generative adversarial network (Srgans) for wheat stripe rust classification. Sensors 2021, 21, 7903.
  64. Salmi, A.; Benierbah, S.; Ghazi, M. Low complexity image enhancement GAN-based algorithm for improving low-resolution image crop disease recognition and diagnosis. Multimed. Tools Appl. 2022, 81, 8519–8538.
  65. Yeswanth, P.; Deivalakshmi, S.; George, S.; Ko, S.B. Residual skip network-based super-resolution for leaf disease detection of grape plant. Circuits Syst. Signal Process. 2023, 42, 6871–6899.
  66. Dai, Q.; Cheng, X.; Qiao, Y.; Zhang, Y. Crop leaf disease image super-resolution and identification with dual attention and topology fusion generative adversarial network. IEEE Access 2020, 8, 55724–55735.
  67. Shah, M.; Kumar, P. Improved handling of motion blur for grape detection after deblurring. In Proceedings of the 2021 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; IEEE: New York, NY, USA, 2021; pp. 949–954.
  68. Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8878–8887.
  69. Yun, C.; Kim, Y.H.; Lee, S.J.; Im, S.J.; Park, K.R. WRA-Net: Wide Receptive Field Attention Network for Motion Deblurring in Crop and Weed Image. Plant Phenomics 2023, 5, 0031.
  70. Xiao, Y.; Zhang, J.; Chen, W.; Wang, Y.; You, J.; Wang, Q. SR-DeblurUGAN: An End-to-End Super-Resolution and Deblurring Model with High Performance. Drones 2022, 6, 162.