Methods for Remote Sensing Image Clouds

Remote sensing images are highly susceptible to cloud interference during the imaging process. Cloud occlusion, especially thick cloud occlusion, significantly degrades image quality, which in turn affects a variety of downstream tasks that rely on the images.

Keywords: deep learning network; multi-temporal cloud removal; remote sensing images

1. Introduction

Owing to their abundance of data, stable geometric characteristics, intuitive and interpretable content, and other features, remote sensing images have been widely used in resource investigation, environmental monitoring, military reconnaissance, and other fields in recent years. Cloud occlusion is one of the major challenges to information extraction from remote sensing images. According to the relevant literature[1], ground features in remote sensing images are frequently obscured by clouds and cloud shadows under the influence of the geographic environment and weather conditions, with cloud coverage reaching about 55% over land and 72% over the ocean. By employing an appropriate cloud removal method to restore the scene information under cloud occlusion, particularly thick cloud occlusion, the availability of remote sensing image data can be significantly increased[2][3][4].
The last ten years have seen rapid advances in deep learning technology, and many researchers, both domestically and internationally, have studied remote sensing image cloud removal using it. However, there is still a significant need for research on how to use deep learning more effectively for the cloud removal task, particularly with respect to recovery accuracy and processing efficiency.
Many researchers have worked on the challenging problem of removing clouds from remote sensing images in recent years, and various technical solutions have been proposed. These solutions can be broadly divided into three types: multi-spectral-based methods, inpainting-based methods, and multi-temporal-based methods[5], as shown in Table 1.
Table 1. Summary of cloud removal methods.

Methods | Example Studies
Multi-spectral-based methods | FSSRF[6], MGLRTA[7], MEcGANs[8], Slope-Net[9]
Inpainting-based methods | RSTRS[10], RFR-Net[11], SICR[12], MEN-UIN[13], AACNet[14]
Multi-temporal-based methods | TRGFid[15], RTCR[16], WLR[17], STS-CNN[18], CMSN[19]
2. Multi-Spectral-Based Methods

Multi-spectral-based methods exploit the different spectral responses of clouds across the multi-spectral bands of an image. By combining the spatial characteristics of each band with the correlations between bands, they establish a functional relationship between bands and use it to restore the information of the cloud-occluded region of the remote sensing image[20].
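As a rough illustration of this functional-relationship idea (and not of any specific published method), the sketch below fits a random forest that predicts a cloud-contaminated band from the remaining bands using only clear pixels, then applies it inside the cloud mask. The array layout, the mask convention, and the assumption that the other bands are unaffected are all illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def restore_band(image, cloud_mask, target_band):
    """Predict one contaminated band from the other bands.

    image: (H, W, B) multi-spectral array; cloud_mask: (H, W) bool,
    True where the target band is cloud-contaminated.
    """
    h, w, b = image.shape
    other = [i for i in range(b) if i != target_band]
    X = image[:, :, other].reshape(-1, len(other))
    y = image[:, :, target_band].reshape(-1)
    clear = ~cloud_mask.reshape(-1)

    # Learn the inter-band relationship on clear pixels only.
    model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
    model.fit(X[clear], y[clear])

    # Replace contaminated pixels with the regression prediction.
    restored = y.copy()
    restored[~clear] = model.predict(X[~clear])
    return restored.reshape(h, w)
```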
Multi-spectral-based methods tend to achieve satisfactory results when removing thin clouds, but they rely on a wide range of partly redundant bands and place high demands on sensor precision and band-alignment technology. Moreover, thick clouds contaminate the majority of bands in a remote sensing image, making multi-spectral-based methods ineffective for images with thick clouds[21].
Representative methods of this type include the following: Building on the spatial–spectral random forests (SSRF) method, Wang et al. proposed a fast spatial–spectral random forests (FSSRF) method. FSSRF uses principal component analysis to extract useful information from hyperspectral bands containing much redundant information, thereby increasing computational efficiency while maintaining cloud removal accuracy[6]. By combining proximity-related geometrical information with low-rank tensor approximation (LRTA), Liu et al. improved the restoration ability of hyperspectral imagery (HSI) restoration methods[7]. To remove clouds from remote sensing images, Hasan et al. proposed the multi-spectral edge-filtered conditional generative adversarial networks (MEcGANs) method, in which the discriminator identifies and restores the cloud occlusion region and compares the generated and target images with their respective edge-filtered versions[8]. Zi et al. use U-Net and Slope-Net to estimate the thin cloud thickness maps and the per-band thickness coefficients, respectively; the scaled thickness maps are then subtracted from the cloud occlusion images to produce cloud-free images[9].
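The subtraction step at the end of that pipeline is simple once the thickness map and per-band coefficients are in hand. Below is a minimal numpy sketch of an additive thin-cloud correction, assuming those quantities have already been estimated (for example by networks such as those in [9]); the variable names are placeholders, not the authors' code.

```python
import numpy as np

def subtract_thin_cloud(cloudy, thickness_ref, band_coeffs):
    """Additive thin-cloud correction: clear_b = cloudy_b - k_b * thickness.

    cloudy: (H, W, B) thin-cloud image; thickness_ref: (H, W) reference
    thickness map; band_coeffs: (B,) per-band thickness coefficients.
    """
    correction = thickness_ref[..., None] * np.asarray(band_coeffs)
    # Clip at zero so over-correction does not produce negative reflectance.
    return np.clip(cloudy - correction, 0.0, None)
```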

3. Inpainting-Based Methods

Because multi-spectral bands have relatively low spectral resolution and wide bandwidths, the amount of information they provide is limited; inpainting-based methods effectively avoid this reliance on multi-spectral information. Inpainting-based methods aim to restore the texture details of the cloud occlusion region from the nearby cloud-free parts of the same image and thereby patch the cloud-occluded part[2].
These methods generally rely on mathematical and physical models to estimate and restore the information in the cloud-occluded part from the information surrounding the region covered by thick clouds. They are primarily applicable when the scene is simple, the region covered by thick cloud is small, and the texture is repetitive.
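To make the single-image idea concrete, here is a minimal sketch using a classical diffusion-based inpainting routine (OpenCV's Telea algorithm), rather than any of the learning-based methods cited below. The file names and the way the cloud mask was produced are assumptions for illustration.

```python
import cv2
import numpy as np

# Hypothetical inputs: an 8-bit RGB composite and a binary cloud mask
# (non-zero where pixels are cloud-covered).
image = cv2.imread("scene_rgb.tif")
cloud_mask = cv2.imread("cloud_mask.tif", cv2.IMREAD_GRAYSCALE)

# Slightly dilate the mask so cloud edges are also re-synthesized.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate((cloud_mask > 0).astype(np.uint8) * 255, kernel)

# Fill the masked region from the surrounding cloud-free pixels.
restored = cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)
cv2.imwrite("scene_inpainted.tif", restored)
```

Classical routines like this only diffuse or copy nearby texture, which is why they work best on small, texturally repetitive gaps.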
Deep learning in computer vision has advanced rapidly in recent years as a result of the widespread availability of high-performance graphics processing units (GPUs) and the ease with which big data can be accessed. Its benefit is that cloud removal models can be trained on large amounts of remote sensing image data by exploiting the feature learning and representation ability of neural networks. Compared with traditional cloud removal methods, this strong feature representation ability significantly improves the semantic plausibility and detail of the restored images. The limitation, however, is that restoring an image from its own internal correlations remains challenging when the cloud-occluded part is large[15].
Typical examples of this type of method are as follows: By employing similar pixels and distance weights to determine the values of missing pixels, Wang et al. developed a fast restoration method for cloud occlusion images of various resolutions[10]. To restore a cloud occlusion image, Li et al. propose a recurrent feature reasoning network (RFR-Net), which gradually enriches the information for the masked region[11]. To accomplish cloud removal, Zheng et al. propose a two-stage method that first uses U-Net for cloud segmentation and thin cloud removal and then uses generative adversarial networks (GANs) to restore the thick cloud occlusion regions[12]. To perform image restoration with a single data source as input, Shao et al. propose a GAN-based unified framework for restoring missing information in remote sensing images[13]. For restoring missing information in remote sensing images, Huang et al. propose an adaptive attention method that uses an offset-position subnet to dynamically reduce irrelevant feature dependencies and avoid introducing irrelevant noise[14].
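For readers unfamiliar with how such learning-based inpainting is set up, the following deliberately small PyTorch sketch shows the common mask-conditioned encoder-decoder formulation; it is not RFR-Net or any of the cited architectures, and the channel counts, shapes, and names are assumptions.

```python
import torch
import torch.nn as nn

class TinyCloudFiller(nn.Module):
    """Toy encoder-decoder: input is the cloudy image plus its cloud mask."""

    def __init__(self, bands=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(bands + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, bands, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, cloudy, mask):
        x = torch.cat([cloudy, mask], dim=1)      # condition on the mask
        pred = self.decoder(self.encoder(x))      # full-image prediction
        return cloudy * (1 - mask) + pred * mask  # keep clear pixels as-is

# Example usage on a random batch (1 = cloud in the mask).
net = TinyCloudFiller(bands=3)
cloudy = torch.rand(2, 3, 128, 128)
mask = (torch.rand(2, 1, 128, 128) > 0.8).float()
out = net(cloudy, mask)  # (2, 3, 128, 128)
```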

4. Multi-Temporal-Based Methods

When thick cloud occludes a large region, it is difficult for either of the previous two types of methods to remove the cloud, so multi-temporal-based methods can be used instead. Multi-temporal-based methods seek to restore the cloudy region by exploiting the correlation between images acquired at different times[15]. With the rapid advancement of remote sensing (RS) technology in the past few decades, it has become possible to acquire multi-temporal remote sensing images of the same region. These methods use the RS platform to observe the same region at various times, acquiring complementary image information with which to restore thick cloud occlusion images. Information restoration of thick cloud and large cloud occlusion regions is therefore most often achieved with multi-temporal-based methods.
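The simplest form of this idea is temporal compositing: for each pixel, take the first cloud-free observation across a stack of dates. The numpy sketch below illustrates only that compositing step, under the assumption of co-registered images; real methods additionally correct radiometric differences between dates, which this sketch ignores.

```python
import numpy as np

def temporal_composite(stack, cloud_masks):
    """stack: (T, H, W, B) images of the same scene at T dates;
    cloud_masks: (T, H, W) bool, True where a date is cloudy."""
    t, h, w, b = stack.shape
    out = np.zeros((h, w, b), dtype=stack.dtype)
    filled = np.zeros((h, w), dtype=bool)
    for i in range(t):                    # dates ordered by preference
        take = ~cloud_masks[i] & ~filled  # clear here and not yet filled
        out[take] = stack[i][take]
        filled |= take
    return out, filled                    # filled == False: no clear date found
```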
Multi-temporal-based methods have also transitioned from traditional mathematical models to deep learning. Because deep-learning-based methods can independently learn the distribution characteristics of image data and better account for the overall image information, the restored images are superior both in objective image evaluation indexes and in the naturalness of their visual appearance compared with methods based on traditional mathematical models. However, the limitations are also more obvious: building a multi-temporal matched data set requires a great deal of time and effort, and deep learning models themselves have high parameter complexity. Therefore, when processing large numbers of RS cloud images, this type of method is inefficient, and its cloud removal performance still needs to be improved[22][23].
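Training such networks typically uses a reconstruction loss that pays extra attention to the cloud-occluded region. Below is a hedged PyTorch sketch of one possible weighted L1 objective in the spirit of a global-local loss; the weighting scheme and names are assumptions for illustration, not the exact loss used in [18].

```python
import torch

def weighted_l1_loss(pred, target, cloud_mask, local_weight=5.0):
    """Global L1 over the whole image plus an extra-weighted term inside
    the cloud mask (1 = cloud).

    pred, target: (N, B, H, W); cloud_mask: (N, 1, H, W)."""
    diff = torch.abs(pred - target)
    global_term = diff.mean()
    # Average only over cloud-covered pixels (broadcast mask across bands).
    local_term = (diff * cloud_mask).sum() / (
        cloud_mask.sum() * pred.shape[1] + 1e-8)
    return global_term + local_weight * local_term
```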
The following are typical examples of this type of method: Using remote sensing images of the same scene with similar gradients at various temporalities, and estimating the gradients of cloud occlusion regions from cloud-free regions at other temporalities, Li et al. propose a low-rank tensor ring decomposition model based on gradient-domain fidelity (TRGFid) to solve the problem of thick cloud removal in multi-temporal remote sensing images[15]. By combining tensor factorization with an adaptive threshold algorithm, Lin et al. propose a robust thick cloud/shadow removal (RTCR) method that accurately removes clouds and shadows from multi-temporal remote sensing images under inaccurate mask conditions, together with a multi-temporal information restoration model for the cloud occlusion region[16]. Using a regression model and a non-reference regularization algorithm to fill the gaps, Zeng et al. propose an integrated method that predicts the missing information of the cloud occlusion region and restores the scene details[17]. Zhang et al. propose a unified spatio-temporal-spectral framework based on deep convolutional neural networks, together with a global–local loss function, and optimize the training model over the cloud occlusion region[18]. Exploiting the frequency-domain distribution of thick cloud occlusion images, Jiang et al. propose a learnable three-input, three-output network (CMSN) that divides the thick cloud removal problem into a coarse stage and a refined stage, offering a new technical solution for the thick cloud removal issue[19].
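To illustrate the regression idea behind methods such as [17], the sketch below fits, for each band, a simple linear relationship between a cloud-free reference date and the target date on pixels that are clear in both, and then predicts the cloudy pixels of the target from the reference. A single global least-squares fit is used here for brevity (rather than the locally weighted fit of the published method), and the names are assumptions.

```python
import numpy as np

def regress_from_reference(target, reference, cloud_mask):
    """target, reference: (H, W, B) images of the same scene at two dates;
    cloud_mask: (H, W) bool, True where the target is cloud-covered
    (the reference is assumed clear there)."""
    restored = target.astype(float)
    clear = ~cloud_mask
    for b in range(target.shape[2]):
        # Least-squares fit: target_b ~ a * reference_b + c on clear pixels.
        a, c = np.polyfit(reference[..., b][clear], target[..., b][clear], 1)
        restored[..., b][cloud_mask] = a * reference[..., b][cloud_mask] + c
    return restored
```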

References

  1. Michael D. King; Steven Platnick; W. Paul Menzel; Steven A. Ackerman; Paul A. Hubanks; Spatial and Temporal Distribution of Clouds Observed by MODIS Onboard the Terra and Aqua Satellites. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3826–3852.
  2. Chao Tao; Siyang Fu; Ji Qi; Haifeng Li; Thick Cloud Removal in Optical Remote Sensing Images Using a Texture Complexity Guided Self-Paced Learning Method. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
  3. Shoaib Imran; Muhammad Tahir; Zubair Khalid; Momin Uppal; A Deep Unfolded Prior-Aided RPCA Network For Cloud Removal; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2022; pp. 2048–2052.
  4. Meng Xu; Furong Deng; Sen Jia; Xiuping Jia; Antonio J. Plaza; Attention mechanism-based generative adversarial networks for cloud removal in Landsat images. Remote Sens. Environ. 2022, 271, 112902.
  5. Yong Chen; Wei He; Naoto Yokoya; Ting-Zhu Huang; Total Variation Regularized Low-Rank Sparsity Decomposition for Blind Cloud and Cloud Shadow Removal from Multitemporal Imagery; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2019; pp. 1970–1973.
  6. Lanxing Wang; Qunming Wang; Fast spatial-spectral random forests for thick cloud removal of hyperspectral images. Int. J. Appl. Earth Obs. Geoinformation 2022, 112, 102916.
  7. Na Liu; Wei Li; Ran Tao; Qian Du; Jocelyn Chanussot; Multigraph-Based Low-Rank Tensor Approximation for Hyperspectral Image Restoration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  8. Cengis Hasan; Ross Horne; Sjouke Mauw; Andrzej Mizera; Cloud removal from satellite imagery using multispectral edge-filtered conditional generative adversarial networks. Int. J. Remote Sens. 2022, 43, 1881–1893.
  9. Yue Zi; Fengying Xie; Ning Zhang; Zhiguo Jiang; Wentao Zhu; Haopeng Zhang; Thin Cloud Removal for Multispectral Remote Sensing Images Using Convolutional Neural Networks Combined With an Imaging Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3811–3823.
  10. Yuxi Wang; Wenjuan Zhang; Shanjing Chen; Zhen Li; Bing Zhang; Rapidly Single-Temporal Remote Sensing Image Cloud Removal based on Land Cover Data; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2022; pp. 3307–3310.
  11. Jingyuan Li; Ning Wang; Lefei Zhang; Bo Du; Dacheng Tao; Recurrent Feature Reasoning for Image Inpainting; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2020; pp. 7757–7765.
  12. Jiahao Zheng; Xiao-Yang Liu; Xiaodong Wang; Single Image Cloud Removal Using U-Net and Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6371–6385.
  13. Mingwen Shao; Chao Wang; Wangmeng Zuo; Deyu Meng; Efficient Pyramidal GAN for Versatile Missing Data Reconstruction in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  14. Wenli Huang; Ye Deng; Siqi Hui; Jinjun Wang; Adaptive-Attention Completing Network for Remote Sensing Image. Remote Sens. 2023, 15, 1321.
  15. Li-Yuan Li; Ting-Zhu Huang; Yu-Bang Zheng; Wen-Jie Zheng; Jie Lin; Guo-Cheng Wu; Xi-Le Zhao; Thick Cloud Removal for Multitemporal Remote Sensing Images: When Tensor Ring Decomposition Meets Gradient Domain Fidelity. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14.
  16. Jie Lin; Ting-Zhu Huang; Xi-Le Zhao; Yong Chen; Qiang Zhang; Qiangqiang Yuan; Robust Thick Cloud Removal for Multitemporal Remote Sensing Images Using Coupled Tensor Factorization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
  17. Chao Zeng; Huanfeng Shen; Liangpei Zhang; Recovering missing pixels for Landsat ETM+ SLC-off imagery using multi-temporal regression analysis and a regularization method. Remote Sens. Environ. 2013, 131, 182–194.
  18. Qiang Zhang; Qiangqiang Yuan; Chao Zeng; Xinghua Li; Yancong Wei; Missing Data Reconstruction in Remote Sensing Image With a Unified Spatial–Temporal–Spectral Deep Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4274–4288.
  19. Bo Jiang; Xiaoyang Li; Haozhan Chong; Yuwei Wu; Yaowei Li; Junhao Jia; Shuaibo Wang; Jinshuai Wang; Xiaoxuan Chen; A deep-learning reconstruction method for remote sensing images with large thick cloud cover. Int. J. Appl. Earth Obs. Geoinformation 2022, 115, 103079.
  20. Claas Grohnfeldt; Michael Schmitt; Xiaoxiang Zhu; A Conditional Generative Adversarial Network to Fuse Sar And Multispectral Optical Data For Cloud Removal From Sentinel-2 Images; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2018; pp. 1726–1729.
  21. Chengyue Zhang; Zhiwei Li; Qing Cheng; Xinghua Li; Huanfeng Shen; Cloud removal by fusing multi-source and multi-temporal images; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2017; pp. 2577–2580.
  22. Danang Surya Candra; Stuart Phinn; Peter Scarth; Cloud and cloud shadow removal of landsat 8 images using Multitemporal Cloud Removal method; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, United States, 2017; pp. 1–5.
  23. Patrick Ebel; Yajin Xu; Michael Schmitt; Xiao Xiang Zhu; SEN12MS-CR-TS: A Remote-Sensing Data Set for Multimodal Multitemporal Cloud Removal. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
Subjects: Remote Sensing