Version | Created by | Modification Content | Size | Created at
1 | -- | -- | 1448 | 2023-10-23 15:08:23
2 | layout | Meta information modification | 1448 | 2023-10-24 03:05:17

Zhou, R.; Wang, G.; Xu, H.; Zhang, Z. A Sub-Second Method for SAR Image Registration. Encyclopedia. Available online: https://encyclopedia.pub/entry/50689 (accessed on 22 May 2024).
A Sub-Second Method for SAR Image Registration

For Synthetic Aperture Radar (SAR) image registration, both the traditional feature-based method and the deep learning method require successive processes following feature extraction. Among these, the feature matching process, whose time and space complexity depend on the number of feature points extracted from the sensed and reference images as well as the dimension of the feature descriptors, proves particularly time-consuming. Additionally, the successive processes introduce data-sharing and memory-occupancy issues, requiring careful design to prevent memory leaks.

Keywords: reinforcement learning; episodic control; synthetic aperture radar; image registration

1. Introduction

Researchers have an ongoing commitment to monitor and study the Earth’s complex surface and its changes. As an effective means of remote sensing, SAR images are indispensable in various fields, such as ecological development, environmental protection, resource exploration, and military reconnaissance. Research involving change detection, information extraction, and image fusion using multiple SAR images can provide information that a single image cannot convey. This makes a concise, efficient, and high-precision image registration process an essential first step.
Existing SAR image registration methods can be categorized into traditional methods and deep-learning-based methods. Traditional methods mainly fall into two categories: grayscale-based methods and structural-feature-based methods. Grayscale registration methods operate directly on pixel intensity values; they are computationally intensive and susceptible to image quality issues, noise, and geometric distortion. Registration methods based on structural features typically comprise feature extraction, feature matching, fitting of the transformation matrix, and interpolation resampling. Among them, feature points extracted with SIFT [1] or SAR-SIFT [2] are largely invariant to changes in rotation, position, scale, and grayscale and have been widely used [3][4][5][6]. Such methods can usually extract a significant number of feature points; for instance, SIFT can extract about 2000 feature points from images of size 500×500 [1].
Feature matching involves complex mathematical calculations, and its time and space complexity depend on the number of feature points extracted, the dimension of the feature descriptors, and the matching algorithm used. This process requires substantial computing resources.
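To make this complexity concrete, the following is a minimal numpy sketch of brute-force descriptor matching with a Lowe-style ratio test. The function name and the 0.8 threshold are illustrative assumptions, not taken from the entry; the point is that the distance matrix alone costs O(Ns × Nr × d) time and O(Ns × Nr) space.

```python
import numpy as np

def match_descriptors(desc_s, desc_r, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    desc_s: (Ns, d) descriptors from the sensed image.
    desc_r: (Nr, d) descriptors from the reference image.
    Returns a list of (i, j) index pairs. The distance matrix alone
    costs O(Ns * Nr * d) time and O(Ns * Nr) space, which is why this
    step dominates when thousands of keypoints are extracted.
    """
    # Pairwise squared Euclidean distances, shape (Ns, Nr)
    d2 = ((desc_s[:, None, :] - desc_r[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i in range(d2.shape[0]):
        order = np.argsort(d2[i])
        best, second = order[0], order[1]
        # Accept only if the best match clearly beats the runner-up
        if d2[i, best] < (ratio ** 2) * d2[i, second]:
            matches.append((i, int(best)))
    return matches
```

With 2000 keypoints per image and 128-dimensional SIFT descriptors, the distance matrix already holds four million entries, which is the cost the entry refers to.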
When employing feature-point-based methods for SAR image registration, the accuracy can be compromised due to the impact of speckle noise on the primary orientation of traditional feature descriptors [7][8]. There are two primary approaches to address this issue. The first approach involves enhancing feature-based registration techniques such as SIFT [8] or SAR-SIFT [9]. Alternatively, the second approach utilizes neural networks [7].
In recent years, deep neural networks have found wide application in SAR image registration. These networks can flexibly extract multidimensional and deeper features, achieving promising and robust results [7][10]. The common processing flow for deep-learning-based registration involves applying traditional feature-extraction algorithms to obtain image feature points, extracting image blocks around these points, using deep networks to learn features and matching labels for image patch pairs, employing constraint algorithms to eliminate mismatches, and calculating transformation matrices from the matched point pairs.
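The "constraint algorithms to eliminate mismatches" in the flow above are typically robust estimators such as RANSAC. As an illustration only (not the specific algorithm of any cited work), a toy numpy RANSAC that fits a 2D affine transform from putative matches and rejects outliers might look like this:

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, tol=1.0, seed=0):
    """Toy RANSAC: fit a 2D affine transform from putative matches and
    reject mismatches. src, dst: (N, 2) matched coordinates.
    Returns (A, b, inlier_mask) for the model dst ~ src @ A.T + b."""
    rng = np.random.default_rng(seed)
    n = len(src)
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)     # minimal sample
        # Solve [x y 1] @ P = [x' y'] for the 3 sampled pairs
        M = np.hstack([src[idx], np.ones((3, 1))])
        try:
            P = np.linalg.solve(M, dst[idx])           # (3, 2)
        except np.linalg.LinAlgError:
            continue                                    # degenerate (collinear) sample
        pred = np.hstack([src, np.ones((n, 1))]) @ P
        mask = np.linalg.norm(pred - dst, axis=1) < tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Refit on all inliers by least squares
    M = np.hstack([src[best_mask], np.ones((best_mask.sum(), 1))])
    P, *_ = np.linalg.lstsq(M, dst[best_mask], rcond=None)
    return P[:2].T, P[2], best_mask
```

On real data the tolerance `tol` would be set relative to the expected registration accuracy; the iterative sampling is one source of the "iterative computations" cost mentioned below.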
While deep-learning-based SAR image registration holds promise, the scarcity of open-source SAR datasets poses challenges, as creating such datasets requires specialized personnel and resources. A common workaround is to perform self-learning using existing images, involving multiple affine transformations to generate a large training dataset with known correspondences. Despite this, many deep-learning-based SAR registration studies still rely on traditional methods for matching processing. These methods have high time and space complexity, often involving iterative computations and significant computing resource requirements.
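The self-learning workaround described above can be sketched as follows: random affine transforms applied to one image's keypoints yield positive pairs with known correspondence, while shuffled pairs serve as negatives. The function name and parameter ranges are hypothetical illustrations, not from any cited work:

```python
import numpy as np

def make_training_pairs(keypoints, n_transforms=4, seed=0):
    """Self-learning data generation sketch: each random affine applied
    to one image's keypoints yields pairs with known correspondence.
    keypoints: (N, 2). Returns a list of (src_xy, dst_xy, label) where
    label is 1 for a true correspondence, 0 for a shuffled (negative) pair."""
    rng = np.random.default_rng(seed)
    pairs = []
    n = len(keypoints)
    for _ in range(n_transforms):
        # Random small rotation + scale + translation
        theta = rng.uniform(-0.3, 0.3)
        s = rng.uniform(0.9, 1.1)
        A = s * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
        b = rng.uniform(-10, 10, size=2)
        warped = keypoints @ A.T + b
        for i in range(n):
            pairs.append((keypoints[i], warped[i], 1))   # positive pair
            j = (i + rng.integers(1, n)) % n             # some other point
            pairs.append((keypoints[i], warped[j], 0))   # negative pair
    return pairs
```

In an actual pipeline the coordinates would index image patches cropped around each keypoint, and the (patch, patch, label) triples would train the matching network.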
Reinforcement learning, a branch of machine learning, has found extensive application in areas such as robot control and intelligent decision-making. Reinforcement learning adjusts model behavior dynamically according to rewards, offering more flexible error correction than supervised learning. Although reinforcement-learning-based computer vision applications have been proposed, they remain relatively unexplored in the realm of SAR image registration.
It is worth noting that mainstream reinforcement learning needs to strike a balance between exploration and exploitation. However, in computer vision application scenarios, extensive exploration might not be necessary. Therefore, the reinforcement learning framework based on Episodic Memory is better suited for computer vision applications. Hierarchical reinforcement learning can further enhance training efficiency, especially in scenarios with significant state differences.

2. Deep Learning

Image registration based on the deep learning framework centers on image feature extraction, leveraging booming neural network architectures such as the Transformer. Previous research has indicated that applying deep neural networks to the registration of complex and diverse SAR image pairs can yield more accurate matching features than manually designed feature extraction algorithms, showcasing their promising performance and applicability [7][11][12]. These methods require a sufficient number of samples for training. However, several challenges remain, including the limited availability of publicly accessible datasets, the scarcity of labeled data [13][14], the substantial computational and time costs during the training phase, and the need for high-performance computer hardware. Moreover, local similarities may lead to mistaken matches. Addressing these challenges is a critical research area when applying deep learning to SAR image registration.
Neural-network-based SAR image registration [15] falls under the umbrella of feature-based registration [7], transcending the limitations of manually designed features. It can extract multi-level features that reflect distributional, structural, and semantic characteristics. Various researchers have explored this approach, employing methods such as correlation coefficients and neural networks [16][17], utilizing Deep Convolutional Neural Networks (CNNs) and Conditional Generative Adversarial Networks (CGANs) to extract geographic features [18], applying Pulse-Coupled Neural Networks (PCNNs) [19] to edge information [20], and combining SIFT algorithms with deep learning [21]. Fang Shang [22] constructed position vectors and change vectors that cleverly characterize image pixels and classified Polarimetric Synthetic Aperture Radar (PolSAR) images of complex terrain with a Quaternion Neural Network (QNN), which is not influenced by height information. Moreover, advanced techniques integrate self-learning with SIFT feature points for near-subpixel-level registration [7], employ deep forest models to enhance robustness [11], utilize unsupervised learning frameworks for multiscale registration [23][24][25], and leverage Transformer networks for efficient and accurate registration [26][27][28][29][30][31]. Deng, X. [28] employed a unique approach in which each key point serves as a distinct class in a multi-class model, effectively circumventing the challenge of constructing matched-point pairs typically encountered in two-class registration models. In a similar vein, S. Mao [29] introduced an adaptive self-supervised SAR-image-registration method that achieved comparable results. Meanwhile, Li, B. [27] presented a novel Siamese Dense Capsule Network designed to facilitate a more even distribution of correctly matched keypoint pairs in SAR images featuring complex scenes. Fan, Y. [26] introduced an advanced, high-precision dense matching technique tailored to registering SAR images under weak texture conditions. The approaches of B. Zou [32] and Ming Zhao [33] adopt a pseudo-label-generation method, eliminating the need for additional annotations. Y. Ye [24] and D. Quan [34] separately built coarse-to-fine deep-learning registration frameworks by stacking several deep models, which significantly improves multimodal image registration performance.
In summary, deep-learning-based registration methods for SAR images can leverage multi-level, latent, and multi-structural features to capture complex data variations. They guide feature extraction using registration results, eliminating the need for manually set metrics. These methods have demonstrated favorable accuracy and applicability. However, they require a substantial number of training samples and high computational power during the training phase.
Matching itself plays a significant role in the entire registration process. It identifies correspondences between two images or two patches; for two images, it estimates their mapping matrix, ultimately transforming one image to match the other. For two patches cropped around key points, matching classification performs well. Quan et al. [35] introduced a deep feature Correlation learning network (Cnet) along with a novel feature correlation loss function for multi-modal remote sensing image registration. The experiments demonstrated that the well-designed loss function improved the stability of network training and decreased the risk of overfitting. Li, L. [36] and D. Xiang [37] utilized networks to extract feature information and generate descriptors, which can be used to obtain more correct matching point pairs.
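Once the mapping matrix is known, transforming one image to match the other amounts to inverse-warping with interpolation (the "interpolation resampling" step of the traditional pipeline). The following is a minimal numpy sketch of bilinear resampling under a 2D affine map x' = A x + b; it is an illustrative toy, not any cited method's implementation:

```python
import numpy as np

def warp_image(img, A, b):
    """Inverse-warp img under the affine map x' = A x + b with bilinear
    interpolation, so the output aligns with the reference grid.
    Pixels that fall outside the input image are set to 0."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # For each output pixel, find its source location in the input image
    src = np.linalg.solve(A, np.stack([xs.ravel() - b[0], ys.ravel() - b[1]]))
    x, y = src
    x0 = np.clip(np.floor(x), 0, w - 2).astype(int)
    y0 = np.clip(np.floor(y), 0, h - 2).astype(int)
    fx, fy = x - x0, y - y0
    # Bilinear blend of the four surrounding input pixels
    vals = (img[y0, x0] * (1 - fx) * (1 - fy) +
            img[y0, x0 + 1] * fx * (1 - fy) +
            img[y0 + 1, x0] * (1 - fx) * fy +
            img[y0 + 1, x0 + 1] * fx * fy)
    vals[(x < 0) | (x > w - 1) | (y < 0) | (y > h - 1)] = 0.0
    return vals.reshape(h, w)
```

In practice a library routine such as `scipy.ndimage.affine_transform` would be used instead; the sketch only makes the resampling step explicit.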

3. Reinforcement Learning

Blundell and colleagues introduced the Model-Free Episodic Control (MFEC) algorithm [38] as one of the earliest episodic reinforcement learning algorithms. Unlike traditional parametric deep reinforcement learning methods, MFEC employs non-parametric Episodic Memory for value-function estimation, achieving higher sample efficiency than DQN algorithms. Neural Episodic Control (NEC) [39] introduced a differentiable neural dictionary to store episodic memories, enabling the estimation of state–action value functions based on the similarity between stored neighboring states.
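The episodic-control idea in MFEC reduces to a small sketch: a non-parametric memory stores, per action, the highest discounted return observed from each state, and unseen states are valued by a k-nearest-neighbour average over stored states. The class and method names below are illustrative, not from [38]:

```python
import numpy as np

class EpisodicControl:
    """Toy MFEC-style value estimation: Q_EC(s, a) keeps the highest
    discounted return ever observed, and novel states are valued by
    averaging the k nearest stored neighbours."""

    def __init__(self, k=3):
        self.k = k
        self.memory = {}   # action -> (list of state arrays, list of returns)

    def update(self, state, action, ret):
        keys, rets = self.memory.setdefault(action, ([], []))
        for i, s in enumerate(keys):
            if np.array_equal(s, state):
                rets[i] = max(rets[i], ret)   # keep the best return seen
                return
        keys.append(np.asarray(state, dtype=float))
        rets.append(ret)

    def value(self, state, action):
        keys, rets = self.memory.get(action, ([], []))
        if not keys:
            return 0.0
        state = np.asarray(state, dtype=float)
        d = [np.linalg.norm(state - s) for s in keys]
        for i, dist in enumerate(d):
            if dist == 0.0:
                return rets[i]                # exact hit: stored return
        nearest = np.argsort(d)[: self.k]     # k-NN estimate otherwise
        return float(np.mean([rets[i] for i in nearest]))
```

NEC [39] replaces this exact table with a differentiable neural dictionary over learned state embeddings, but the read-out follows the same similarity-based idea.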
Savinov et al. [40] utilized Episodic Memory to devise a curiosity-driven exploration strategy. Episodic Memory DQN (EMDQN) [41] combined parameterized neural networks with non-parametric Episodic Memory, enhancing the generalization capabilities of Episodic Memory. Generalizable Episodic Memory (GEM) [42] parameterized the memory module using neural networks, further enhancing the generalization capabilities of Episodic Memory algorithms. Additionally, GEM extended the applicability of Episodic Memory to continuous action spaces.
These algorithms represent significant advancements in the field of episodic reinforcement learning, offering improved memory and learning strategies that contribute to more effective and efficient training processes.

References

  1. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  2. Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-like algorithm for SAR images. IEEE Trans. Geosci. Remote Sens. 2014, 53, 453–466.
  3. Pan, B.; Jiao, R.; Wang, J.; Han, Y.; Hang, H. SAR image registration based on KECA-SAR-SIFT operator. In Proceedings of the 2022 2nd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), Nanjing, China, 23–25 September 2022; pp. 114–119.
  4. Hossein-Nejad, Z.; Nasri, M. Image Registration Based on Redundant Keypoint Elimination SARSIFT Algorithm and MROGH Descriptor. In Proceedings of the 2022 International Conference on Machine Vision and Image Processing (MVIP), Ahvaz, Iran, 23–24 February 2022; pp. 1–5.
  5. Wang, M.; Zhang, J.; Deng, K.; Hua, F. Combining optimized SAR-SIFT features and RD model for multisource SAR image registration. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
  6. Yu, Q.; Wu, P.; Ni, D.; Hu, H.; Lei, Z.; An, J.; Chen, W. SAR pixelwise registration via multiscale coherent point drift with iterative residual map minimization. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–19.
  7. Wang, S.; Quan, D.; Liang, X.; Ning, M.; Guo, Y.; Jiao, L. A deep learning framework for remote sensing image registration. ISPRS J. Photogramm. Remote Sens. 2018, 145, 148–164.
  8. Chang, Y.; Xu, Q.; Xiong, X.; Jin, G.; Hou, H.; Man, D. SAR image matching based on rotation-invariant description. Sci. Rep. 2023, 13, 14510.
  9. Pourfard, M.; Hosseinian, T.; Saeidi, R.; Motamedi, S.A.; Abdollahifard, M.J.; Mansoori, R.; Safabakhsh, R. KAZE-SAR: SAR image registration using KAZE detector and modified SURF descriptor for tackling speckle noise. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–12.
  10. Quan, D.; Wang, S.; Ning, M.; Xiong, T.; Jiao, L. Using deep neural networks for synthetic aperture radar image registration. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2799–2802.
  11. Mao, S.; Yang, J.; Gou, S.; Jiao, L.; Xiong, T.; Xiong, L. Multi-Scale Fused SAR Image Registration Based on Deep Forest. Remote Sens. 2021, 13, 2227.
  12. Jaderberg, M.; Simonyan, K.; Zisserman, A.; Kavukcuoglu, K. Spatial transformer networks. arXiv 2015, arXiv:1506.02025.
  13. Chen, J.; Huang, Z.; Xia, R.; Wu, B.; Sheng, L.; Sun, L.; Yao, B. Large-scale multi-class SAR image target detection dataset-1.0. J. Radars 2022. Available online: https://radars.ac.cn/web/data/getData?dataType=MSAR (accessed on 22 April 2018).
  14. Xia, R.; Chen, J.; Huang, Z.; Wan, H.; Wu, B.; Sun, L.; Yao, B.; Xiang, H.; Xing, M. A Visual Transformer Based on Contextual Joint Representation Learning for SAR Ship Detection. Remote Sens. 2022, 14, 1488.
  15. Schwegmann, C.P.; Kleynhans, W.; Salmon, B. The development of deep learning in synthetic aperture radar imagery. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 18–21 May 2017; pp. 1–2.
  16. Jianxu, M. Research on Three-Dimensional Imaging Processing Techniques for Synthetic Aperture Radar Interferometry (InSAR). Ph.D. Thesis, Hunan University, Changsha, China, 2002.
  17. Chang, H.H. Remote Sensing Image Registration Based upon Extensive Convolutional Architecture with Transfer Learning and Network Pruning. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
  18. Jie, R. Key Technology Research for Cartographic Applications of Multi-Source Remote Sensing Data. Ph.D. Thesis, University of Chinese Academy of Sciences (Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences), Beijing, China, 2017.
  19. Yide, M.; Lian, L.; Yafu, W.; Ruolan, D. The Principles and Applications of Pulse-Coupled Neural Networks. 2006. Available online: https://item.jd.com/10052980.html (accessed on 22 April 2018).
  20. Del Frate, F.; Licciardi, G.; Pacifici, F.; Pratola, C.; Solimini, D. Pulse Coupled Neural Network for automatic features extraction from COSMO-Skymed and TerraSAR-X imagery. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 3, pp. III-384–III-387.
  21. Zhao, C. SAR Image Registration Method Based on SAR-SIFT and Deep Learning. Master’s Thesis, Xidian University, Xi’an, China, 2017.
  22. Shang, F.; Hirose, A. Quaternion neural-network-based PolSAR land classification in Poincare-sphere-parameter space. IEEE Trans. Geosci. Remote Sens. 2013, 52, 5693–5703.
  23. Hu, J.; Lu, J.; Tan, Y.P. Sharable and individual multi-view metric learning. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2281–2288.
  24. Ye, Y.; Tang, T.; Zhu, B.; Yang, C.; Li, B.; Hao, S. A multiscale framework with unsupervised learning for remote sensing image registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  25. Quan, D.; Wei, H.; Wang, S.; Li, Y.; Chanussot, J.; Guo, Y.; Hou, B.; Jiao, L. Efficient and Robust: A Cross-modal Registration Deep Wavelet Learning Method for Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 4739–4754.
  26. Fan, Y.; Wang, F.; Wang, H. A Transformer-Based Coarse-to-Fine Wide-Swath SAR Image Registration Method under Weak Texture Conditions. Remote Sens. 2022, 14, 1175.
  27. Li, B.; Guan, D.; Zheng, X.; Chen, Z.; Pan, L. SD-CapsNet: A Siamese Dense Capsule Network for SAR Image Registration with Complex Scenes. Remote Sens. 2023, 15, 1871.
  28. Deng, X.; Mao, S.; Yang, J.; Lu, S.; Gou, S.; Zhou, Y.; Jiao, L. Multi-Class Double-Transformation Network for SAR Image Registration. Remote Sens. 2023, 15, 2927.
  29. Mao, S.; Yang, J.; Gou, S.; Lu, K.; Jiao, L.; Xiong, T.; Xiong, L. Adaptive Self-Supervised SAR Image Registration with Modifications of Alignment Transformation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15.
  30. Liu, M.; Zhou, G.; Ma, L.; Li, L.; Mei, Q. SIFNet: A self-attention interaction fusion network for multisource satellite imagery template matching. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103247.
  31. Chen, J.; Chen, X.; Chen, S.; Liu, Y.; Rao, Y.; Yang, Y.; Wang, H.; Wu, D. Shape-Former: Bridging CNN and Transformer via ShapeConv for multimodal image matching. Inf. Fusion 2023, 91, 445–457.
  32. Zou, B.; Li, H.; Zhang, L. Self-Supervised SAR Image Registration With SAR-Superpoint and Transformation Aggregation. IEEE Trans. Geosci. Remote Sens. 2022, 61, 1–15.
  33. Zhao, M.; Zhang, G.; Ding, M. Heterogeneous self-supervised interest point matching for multi-modal remote sensing image registration. Int. J. Remote Sens. 2022, 43, 915–931.
  34. Quan, D.; Wei, H.; Wang, S.; Gu, Y.; Hou, B.; Jiao, L. A Novel Coarse-to-Fine Deep Learning Registration Framework for Multi-Modal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
  35. Quan, D.; Wang, S.; Gu, Y.; Lei, R.; Yang, B.; Wei, S.; Hou, B.; Jiao, L. Deep feature correlation learning for multi-modal remote sensing image registration. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
  36. Li, L.; Han, L.; Ye, Y. Self-supervised keypoint detection and cross-fusion matching networks for multimodal remote sensing image registration. Remote Sens. 2022, 14, 3599.
  37. Xiang, D.; Xu, Y.; Cheng, J.; Xie, Y.; Guan, D. Progressive Keypoint Detection with Dense Siamese Network for SAR Image Registration. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 5847–5858.
  38. Blundell, C.; Uria, B.; Pritzel, A.; Li, Y.; Ruderman, A.; Leibo, J.Z.; Rae, J.; Wierstra, D.; Hassabis, D. Model-free episodic control. arXiv 2016, arXiv:1606.04460.
  39. Pritzel, A.; Uria, B.; Srinivasan, S.; Badia, A.P.; Vinyals, O.; Hassabis, D.; Wierstra, D.; Blundell, C. Neural episodic control. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2827–2836.
  40. Savinov, N.; Raichuk, A.; Marinier, R.; Vincent, D.; Pollefeys, M.; Lillicrap, T.; Gelly, S. Episodic curiosity through reachability. arXiv 2018, arXiv:1810.02274.
  41. Lin, Z.; Zhao, T.; Yang, G.; Zhang, L. Episodic memory deep q-networks. arXiv 2018, arXiv:1805.07603.
  42. Hu, H.; Ye, J.; Zhu, G.; Ren, Z.; Zhang, C. Generalizable Episodic Memory for deep reinforcement learning. arXiv 2021, arXiv:2103.06469.