
Cite
Xiang, Y.; Tian, X.; Xu, Y.; Guan, X.; Chen, Z. Edge-Guided Multimodal Transformers Change Detection. Encyclopedia. Available online: https://encyclopedia.pub/entry/55729 (accessed on 21 April 2024).
Edge-Guided Multimodal Transformers Change Detection

Change detection from heterogeneous satellite and aerial images plays an increasingly important role in many fields, including disaster assessment, urban construction, and land use monitoring. Researchers have mainly devoted their attention to change detection using homologous image pairs and have achieved many remarkable results. However, in practical scenarios it is sometimes necessary to use heterogeneous images for change detection, owing to missing images, emergency situations, and cloud and fog occlusion.

change detection; remote sensing; transformer

1. Introduction

Remote sensing change detection refers to detecting the changes between a pair of images of the same geographical area on the Earth obtained at different times [1]. Accurate monitoring of changes on the Earth’s surface is important for understanding the relationship between humans and the natural environment. With the advancement of aerospace remote sensing (RS) technology, massive multi-temporal remote sensing images provide ample data support for change detection (CD) research and have promoted the vigorous development of its application fields. Change detection is a promising research topic in the field of remote sensing. As an advanced method for monitoring land cover conditions, CD has played a major role in important fields such as land monitoring [2], urban management [3], geological disasters [4], and emergency support [5].
With the diversification of remote sensing methods, the refined and integrated monitoring of satellite and aerial data has become a new development trend. Aerial remote sensing offers strong mobility, sub-meter resolution, and rapid data acquisition, but in CD tasks it is constrained by a lack of pre-temporal historical data and a narrow coverage range. It is therefore necessary to complement it with satellite images to form a complete change monitoring system. According to whether a pair of CD images is obtained from the same RS platform or sensor, change detection algorithms can be divided into homologous change detection and heterogeneous change detection [6]. Traditional satellite image CD algorithms require multi-temporal images of the same area from identical sensors, which is a fairly strict condition. Owing to limitations such as weather (e.g., fog), the orbital revisit period, and the payload swath width, these algorithms cannot fully meet the complex and diverse application needs of the real world today. Thus, it is necessary to use satellite and aerial images together for heterogeneous change detection.
Satellite-aerial change detection (SACD) has played an important role in many practical applications, especially in emergency scenarios of disaster evaluation and rescue, where fast, flexible, and accurate methods are needed for timely assessment. With the rise and rapid development of aerial remote sensing technology, its high maneuverability, high pixel resolution, and timely data capture make it well suited to such tasks. The pre-event image is usually a satellite image, owing to the abundance of historical data and wide coverage, while the post-event image is obtained through direct flights using aircraft, which is the fastest option and can provide higher resolution and more accurate information [7]. Furthermore, SACD has also played a significant role in land resource monitoring. Currently, land resource monitoring and urban management rely mainly on satellite RS image monitoring. However, satellite monitoring still falls short in mobility, resolution, and timeliness over cloudy and foggy areas, as it is easily constrained by weather conditions. Aerial remote sensing offers high spatial resolution, high revisit frequency, and high cost-effectiveness. At the same time, it avoids the limitations of insufficient coverage and resolution under rain and fog, and thus complements the capabilities of satellite remote sensing.
However, CD between satellite and aerial images remains a huge challenge. The main challenges are as follows:
(1)
Huge difference in resolution between satellite and aerial images. Because satellites and aircraft have different flying heights and sensors, a satellite image’s resolution is usually lower than that of an aerial image. A high-resolution (HR) satellite image has a resolution of approximately 0.5–2 m [8], while an aerial image’s resolution is usually finer than 0.5 m [9] and can even reach the centimeter level. Aligning the resolutions of satellite and aerial image pairs through interpolation, convolution, or pooling is a direct solution, but it causes the images to lose a large amount of detailed information and introduces accumulated errors and speckle noise.
(2)
Blurred edges caused by complex terrain scenes and interference from the satellite-aerial image gap. Dense building clusters are often affected by shadow occlusion, similar ground objects, and intraclass differences caused by very different materials, resulting in blurred edges. Moreover, the parallax and the interference arising from the lower resolution of satellite images relative to aerial images further increase the difficulty of change detection for buildings.

2. Edge-Guided Multimodal Transformers Change Detection from Satellite and Aerial Images

2.1. Different Resolution for Change Detection

To address the issue of different resolutions in change detection, existing methods typically reconstruct the image samples so that homologous CD methods become applicable to SACD tasks. Statistics-based interpolation is the most direct and convenient way to match SACD images of different resolutions. However, the ability of image interpolation to restore information is limited. More specifically, interpolation methods such as bilinear and bicubic interpolation perform poorly in the face of large resolution differences, producing more background noise and blurry edges, which increases the difficulty of feature alignment and generates many pseudo changes [10].
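As a concrete illustration of this direct approach, the following is a minimal NumPy sketch (not taken from any of the cited works) of bilinear upsampling used to bring a coarse satellite patch onto the grid of a finer aerial patch; the 4x factor is a hypothetical case of matching 2 m satellite imagery to 0.5 m aerial imagery:

```python
import numpy as np

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample a single-band image with bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel back to fractional source coordinates
    # (align-corners style, so image corners are preserved exactly).
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Hypothetical 4x4 "2 m" satellite patch upsampled 4x to a "0.5 m" grid:
sat = np.arange(16, dtype=float).reshape(4, 4)
up = bilinear_resize(sat, 16, 16)
```

Note that the upsampled grid contains no new detail: every value is a weighted average of the original four neighbors, which is exactly why this approach blurs edges and hampers feature alignment.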
Beyond the simplest interpolation methods, sub-pixel-based methods have been studied most widely. Considering the superior ability of sub-pixel convolution to obtain high-resolution feature maps from low-resolution images [11][12][13], Ling et al. [14] first introduced sub-pixel convolution into CD to address the gap caused by different resolutions in heterogeneous images. They adopted the principle of spatial correlation and designed a new land cover change pattern to obtain changes with sub-pixel convolution. Later, Wang et al. [15] proposed a Hopfield neural network with sub-pixel convolution to bridge the resolution gap between Landsat and MODIS images. Overall, compared to interpolation, sub-pixel-based methods, which rely on a carefully designed learnable up-sampling module, can better reconstruct LR images. However, they are largely restricted by the accuracy of the preceding feature map, focusing solely on shallow feature reconstruction without utilizing deep semantic information, which results in the accumulation of redundant errors.
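The core rearrangement behind sub-pixel convolution, the "pixel shuffle" (depth-to-space) step, can be sketched in plain NumPy. This is a generic illustration under the assumption of an upscale factor r = 2, not code from any of the cited methods; in practice a learned convolution first produces the r² channels per output band:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange (C*r^2, H, W) feature maps into (C, H*r, W*r).

    This is the depth-to-space step of sub-pixel convolution: a
    preceding (learned) convolution emits r^2 channels per output
    band, and the shuffle interleaves them into a finer grid.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

feat = np.random.rand(4, 8, 8)  # 4 = 1 band x 2^2 sub-pixel channels
hr = pixel_shuffle(feat, 2)     # -> shape (1, 16, 16)
```

Unlike interpolation, the values filling the fine grid come from learned channels rather than neighborhood averages, which is what lets the up-sampling module be trained end to end.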
Furthermore, super resolution (SR) has long been an independent task aimed at recovering high-resolution detail from low-resolution (LR) images [10]. Li et al. [16] introduced an iterative super-resolution CD method for Landsat-MODIS CD, which combines end-member estimation, spectral unmixing, and sub-pixel-based methods. Wu et al. [17] designed a back-propagation network to obtain sub-pixel land cover change (LCC) maps from the soft-classification results of LR images [10]. However, SR is not flexible enough and may be limited to fixed zoom factors in image recovery.

2.2. Deep Learning for Change Detection

Deep learning has been widely applied in the field of remote sensing vision [18][19][20]. In CD tasks, deep learning methods have demonstrated their superiority and good generalization ability [21]. At present, deep learning CD models are mostly based on the Siamese Network [22], which has two identical branches; the branches share weights for homologous change detection and keep separate weights for heterogeneous change detection. Previous research [23][24] used a Siamese Network as the encoder to extract features and computed changes by directly concatenating the features. Subsequent researchers improved the regional accuracy of change detection by designing various attention modules, including dense attention [25], spatial attention [26], spatial-temporal attention [12], and others. However, existing CD methods strive for regional-change accuracy through attention mechanisms without recognizing the importance of edge information.
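The Siamese encode-and-compare pattern can be illustrated with a toy NumPy sketch. Here a single shared 3x3 convolution stands in for a real weight-sharing encoder, and the 0.5 decision threshold is an arbitrary assumption for illustration; none of this is the architecture of a cited method:

```python
import numpy as np

def encode(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Toy shared encoder: one valid-mode 3x3 convolution.

    Both temporal branches call this with the SAME kernel,
    mirroring weight sharing in a homologous Siamese network.
    """
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))  # stand-in for learned weights
t1 = rng.random((8, 8))               # pre-event image
t2 = t1.copy()
t2[2:5, 2:5] += 1.0                   # simulate a changed region

f1, f2 = encode(t1, kernel), encode(t2, kernel)
fused = np.concatenate([f1, f2], axis=0)  # feature concatenation, as in [23][24]
change = np.abs(f1 - f2) > 0.5            # crude change decision (arbitrary threshold)
```

In real networks the concatenated (or differenced) features feed a decoder that predicts a per-pixel change mask, and the threshold is replaced by a learned classifier.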
Many remote sensing objects, especially buildings, have their own distinctive and clear edge features [27]. However, most existing deep learning CD methods design various attention modules to improve regional accuracy without utilizing building edge information. Ignoring edge information leads to poor change detection performance in some cases, especially in heterogeneous SACD. In particular, dense building clusters are often affected by shadow and by interference from similar objects such as buildings and roads, and the resulting blurred edges interfere with change detection [28]. In SACD, the lower resolution of satellite images relative to aerial images can worsen this situation.
In building segmentation tasks, utilizing edge information as prior knowledge can help networks attend to both semantic and boundary features [25][29][30]. Reference [31] designed an edge detection module and fused segmentation masks, with the loss function also incorporating edge optimization. Reference [32] used an edge refinement module cooperating with channel and location attention modules to enhance the network’s ability in CD tasks. Researchers in a previous study [7] fused and aligned satellite and aerial images in high-dimensional feature space through convolutional networks, and used the Hough method to obtain building edges as extra information to help the model focus more on building contours and spatial positions. However, existing methods only use edge information as prior knowledge and do not let it interact with deep semantic information; that is, they do not fully integrate edge features into the whole network as a learnable component.
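A simple way to obtain such an edge prior is a gradient-magnitude edge map, sketched below in NumPy with the classic Sobel operator. This is a generic illustration of an edge prior, not the edge module of any cited method:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map via 3x3 Sobel filters.

    A map like this can be supplied to a CD network as an auxiliary
    input or supervision signal so it attends to building contours.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude

# A step edge (e.g. a building roof against the ground) gives a strong response:
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

Hand-crafted priors like this (or Hough-based contours, as in [7]) are fixed; the point made above is that edge features ideally should instead be learned jointly with the deep semantic features.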

References

  1. Jérôme, T. Change Detection. In Springer Handbook of Geographic Information; Springer: Berlin/Heidelberg, Germany, 2022; pp. 151–159.
  2. Hu, J.; Zhang, Y. Seasonal Change of Land-Use/Land-Cover (Lulc) Detection Using Modis Data in Rapid Urbanization Regions: A Case Study of the Pearl River Delta Region (China). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1913–1920.
  3. Jensen, J.R.; Im, J. Remote Sensing Change Detection in Urban Environments. In Geo-Spatial Technologies in Urban Environments; Springer: Berlin/Heidelberg, Germany, 2007; pp. 7–31.
  4. Zhang, J.-F.; Xie, L.-L.; Tao, X.-X. Change Detection of Earthquake-Damaged Buildings on Remote Sensing Image and Its Application in Seismic Disaster Assessment. In Proceedings of the IGARSS 2003, 2003 IEEE International Geoscience and Remote Sensing Symposium, Proceedings (IEEE Cat. No. 03CH37477), Toulouse, France, 21–25 July 2003.
  5. Bitelli, G.; Camassi, R.; Gusella, L.; Mognol, A. Image Change Detection on Urban Area: The Earthquake Case. In Proceedings of the Xth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004.
  6. Zhan, T.; Gong, M.; Jiang, X.; Li, S. Log-based transformation feature learning for change detection in heterogeneous images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1352–1356.
  7. Shao, R.; Du, C.; Chen, H.; Li, J. SUNet: Change detection for heterogeneous remote sensing images from satellite and UAV using a dual-channel fully convolution network. Remote Sens. 2021, 13, 3750.
  8. Lu, D.; Mausel, P.; Brondizio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401.
  9. Zongjian, L. UAV for mapping—Low altitude photogrammetric survey. Int. Arch. Photogram. Remote Sens. Beijing China 2008, 37, 1183–1186.
  10. Liu, M.; Shi, Q.; Marinoni, A.; He, D.; Liu, X.; Zhang, L. Super-resolution-based change detection network with stacked attention module for images with different resolutions. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–18.
  11. Papadomanolaki, M.; Verma, S.; Vakalopoulou, M.; Gupta, S.; Karantzalos, K. Detecting urban changes with recurrent neural networks from multitemporal Sentinel-2 data. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 214–217.
  12. Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional Siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP 2018), Athens, Greece, 7–10 October 2018; pp. 4063–4067.
  13. Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. Urban change detection for multispectral earth observation using convolutional neural networks. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2115–2118.
  14. Chen, H.; Shi, Z. A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sens. 2020, 12, 1662.
  15. Wang, Q.; Shi, W.; Atkinson, P.M.; Li, Z. Land cover change detection at subpixel resolution with a hopfield neural network. IEEE J. Sel.Topics Appl. Earth Observ. Remote Sens. 2015, 8, 1339–1352.
  16. Li, X.; Ling, F.; Foody, G.M.; Du, Y. A super resolution land-cover change detection method using remotely sensed images with different spatial resolutions. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3822–3841.
  17. Wu, K.; Du, Q.; Wang, Y.; Yang, Y. Supervised sub-pixel mapping for change detection from remotely sensed images with different resolutions. Remote Sens. 2017, 9, 284.
  18. Lin, S.; Zhang, M.; Cheng, X.; Shi, L.; Gamba, P.; Wang, H. Dynamic Low-Rank and Sparse Priors Constrained Deep Autoencoders for Hyperspectral Anomaly Detection. IEEE Trans. Instrum. Meas. 2023, 73, 2500518.
  19. Lin, S.; Zhang, M.; Cheng, X.; Zhou, K.; Zhao, S.; Wang, H. Hyperspectral anomaly detection via sparse representation and collaborative representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 946–961.
  20. Cheng, X.; Zhang, M.; Lin, S.; Li, Y.; Wang, H. Deep Self-Representation Learning Framework for Hyperspectral Anomaly Detection. IEEE Trans. Instrum. Meas. 2023, 73, 5002016.
  21. Bai, T.; Wang, L.; Yin, D.; Sun, K.; Chen, Y.; Li, W. Deep learning for change detection in remote sensing: A review. Geo-Spat. Inf. Sci. 2023, 26, 262–288.
  22. Bromley, J.; Guyon, I.; Lecun, Y.; Sackinger, E.; Shah, R. Signature verification using a “siamese” time delay neural network. Int. J. Pattern Recognit. Artif. Intell. 1993, 7, 669–688.
  23. Arabi, M.E.A.; Karoui, M.S.; Djerriri, K. Optical remote sensing change detection through deep siamese network. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5041–5044.
  24. Zhang, M.; Xu, G.; Chen, K.; Yan, M.; Sun, X. Triplet-based semantic relation learning for aerial remote sensing image change detection. IEEE Geosci. Remote Sens. Lett. 2018, 16, 266–270.
  25. Zheng, H.; Gong, M.; Liu, T.; Jiang, F.; Zhan, T.; Lu, D.; Zhang, M. HFA-Net: High frequency attention siamese network for building change detection in VHR remote sensing images. Pattern Recogn. 2022, 129, 108717.
  26. Song, K.; Jiang, J. AGCDetNet: An Attention-Guided Network for Building Change Detection in High-Resolution Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 4816–4831.
  27. Wei, Y.; Zhao, Z.; Song, J. Urban Building Extraction from High-Resolution Satellite Panchromatic Image Using Clustering and Edge Detection. In Proceedings of the IGARSS 2004. 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004.
  28. Chen, Z.; Zhou, Y.; Wang, B.; Xu, X.; He, N.; Jin, S.; Jin, S. EGDE-Net: A building change detection method for high-resolution remote sensing imagery based on edge guidance and differential enhancement. ISPRS J. Photogramm. Remote Sens. 2022, 191, 203–222.
  29. Zheng, Z.; Wan, Y.; Zhang, Y.; Xiang, S.; Peng, D.; Zhang, B. CLNet: Cross-layer convolutional neural network for change detection in optical remote sensing imagery. ISPRS J. Photogram. Remote Sens. 2021, 175, 247–267.
  30. Zhou, Y.; Chen, Z.; Wang, B.; Li, S.; Liu, H.; Xu, D.; Ma, C. BOMSC-Net: Boundary Optimization and Multi-Scale Context Awareness Based Building Extraction from High-Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17.
  31. Jung, H.; Choi, H.-S.; Kang, M. Boundary Enhancement Semantic Segmentation for Building Extraction from Remote Sensed Image. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12.
  32. Zhang, J.; Shao, Z.; Ding, Q.; Huang, X.; Wang, Y.; Zhou, X.; Li, D. AERNet: An attention-guided edge refinement network and a dataset for remote sensing building change detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
Update Date: 05 Mar 2024