Version history:
1. Created 25 November 2023 (size 1611).
2. Meta-information format correction, 27 November 2023 (size 1611).

Cite
Lu, L.; Liu, T.; Jiang, F.; Han, B.; Zhao, P.; Wang, G. Building Footprint Extraction in Very-High-Resolution Remote Sensing Images. Encyclopedia. Available online: https://encyclopedia.pub/entry/52052 (accessed on 19 November 2024).
Building Footprint Extraction in Very-High-Resolution Remote Sensing Images

With the rapid development of very-high-resolution (VHR) remote-sensing technology, the automatic identification and extraction of building footprints are significant for tracking urban development and evolution. Nevertheless, while VHR imagery characterizes building details more accurately, it also inevitably amplifies background interference and noise, which degrades the fine-grained detection of building footprints. To tackle these issues, the attention mechanism has been intensively exploited as a feasible solution. The attention mechanism is a computational intelligence technique, inspired by the biological vision system, that rapidly and automatically captures critical information.

computational intelligence; neural networks; building footprint extraction; attention mechanism; remote-sensing images

1. Introduction

With the rapid development of satellite, aircraft, and UAV technology, it has become easier to obtain high-resolution and very-high-resolution (VHR) remote-sensing images [1]. Based on these high-quality remote-sensing images, the detailed information of ground objects can be clearly depicted, which facilitates many remote-sensing tasks, including land-cover classification [2], object detection [3], and change detection [4]. Among the ground objects covered by VHR images, buildings, as the carrier of human production and living activities, are of vital significance to the human living environment, and are good indicators of population aggregation, energy consumption intensity, and regional development [5]. Therefore, the accurate extraction of buildings from remote-sensing images is conducive to the study of urban dynamic expansion and population distribution patterns, promoting the digital construction and management of cities, and enhancing the sustainable development of cities [6].
Although some research progress has been made in building footprint extraction in recent years, the diversity of remote-sensing image sources and the complexity of the environment still bring many challenges to this task, mainly including:
(a)
In optical remote-sensing images, buildings exhibit small inter-class variance and large intra-class variance [7]. For example, non-building objects such as roads, playgrounds, and parking lots share similar characteristics with buildings (such as spectrum, shape, size, and structure), which easily confuse extraction methods [8].
(b)
Due to the different imaging angles of sensors, high-rise buildings often exhibit varying degrees of geometric distortion, which makes algorithmic recognition more difficult [9].
(c)
Because the sun's elevation angle differs between acquisitions, buildings cast shadows at different angles; these shadows not only interfere with the coverage area of the building itself but also conceal the characteristics of other buildings they cover [10].
In recent years, deep learning methods represented by the convolutional neural network (CNN) have shown great potential in the fields of computer vision [11][12] and remote-sensing image interpretation [13][14]. With their powerful ability to extract high-level features, CNN-based building footprint extraction methods alleviate the above-mentioned problems to a certain extent. Most of these methods adopt a fully convolutional encoder–decoder architecture. For example, Ji et al. proposed a Siamese U-shaped network named SiU-Net for building extraction, which enhances robustness to buildings of different scales by simultaneously processing original images and downsampled low-resolution images [15]. The method proposed by Sun et al. improves the detection accuracy of building edges by combining a CNN with an active contour model [16]. Yuan et al. designed a CNN with a simple structure that integrates pixel-level predictions from multiple layer activations and introduces a signed distance function to represent boundaries in the output, yielding a stronger representation ability [17][18]. In addition, BRRNet proposed by Shao et al. introduces atrous convolutions with different dilation rates to extract more global features by gradually enlarging the receptive field during feature extraction, along with a residual refinement module to further refine the residual between the prediction module's result and the ground truth [19]. However, existing approaches still suffer from challenges and limitations. Most of the methods above are extensions of general end-to-end semantic segmentation, do not carry out targeted analysis of the characteristics of buildings themselves, and do not filter noise effectively.
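As an aside on the atrous-convolution idea, the receptive-field growth from stacking dilated 3×3 convolutions can be verified with a short calculation using the standard recurrence. This is a generic sketch of the technique, not BRRNet's actual layer configuration:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.
    Each layer is a tuple (kernel_size, stride, dilation)."""
    rf, jump = 1, 1
    for k, s, d in layers:
        eff_k = d * (k - 1) + 1      # effective kernel size under dilation
        rf += (eff_k - 1) * jump     # grow by the effective kernel extent
        jump *= s                    # spacing between output samples
    return rf

# Four plain 3x3 convs vs. the same stack with dilation rates 1, 2, 4, 8
plain   = receptive_field([(3, 1, 1)] * 4)                    # -> 9
dilated = receptive_field([(3, 1, d) for d in (1, 2, 4, 8)])  # -> 31
```

With the same number of parameters, the dilated stack sees a 31-pixel context instead of 9, which is why increasing dilation rates are used to capture more global features.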

2. Building Footprint Extraction Methods 

Remote-sensing imagery provides effective data support for human activities that transform the natural environment, and it has been widely used in Earth observation [20][21][22]. With the rapid development of satellite and aerial imaging technology, high-resolution remote-sensing images allow detailed ground targets such as buildings, roads, and vehicles to be observed. In particular, building footprint extraction is of great significance for urban development planning and urban disaster prevention and mitigation, since buildings are among the main man-made targets through which humans transform the Earth's surface [23][24][25][26]. Building footprint extraction has long been a concern of scholars, and many methods have been proposed in the past decade. These methods can be grouped into two categories: conventional building footprint extraction methods and deep-learning-based building footprint extraction methods.

2.1. Conventional Building Footprint Extraction Methods

Building footprint extraction plays an important role in the interpretation and application of remote-sensing images [27]. In the early stage, scholars worked on extracting building footprints through different mathematical models or by combining multiple types of data. For instance, Reference [28] designed a fully automatic building footprint extraction approach based on the differential morphological profile of high-resolution satellite imagery. In Reference [29], a Bayesian approach was proposed to extract building footprints from aerial LiDAR data; it employs the shortest-path algorithm and maximizes the posterior probability using linear optimization to obtain building footprints automatically. Sahar et al. utilized vector parcel geometries and their attributes to extract building footprints from integrated aerial imagery and geographic information system (GIS) data [23]. These methods often require several types of data support to achieve building footprint extraction, and their results are not sufficiently reliable [30][31]. In addition, scholars have devoted themselves to designing various hand-crafted features to automatically extract building footprints from high-resolution remote-sensing images. Zhang et al. devised a pixel shape index to extract buildings by classifying the shape and contour information of pixels [32]. Huang et al. proposed a morphological building index for automatic building extraction in [33]. Similarly, Huang et al. also developed a morphological shadow index for building extraction from high-resolution remote-sensing images [34]. Moreover, some methods use morphological attributes to achieve building footprint extraction [35][36]. In summary, these conventional approaches rely on auxiliary data or hand-crafted features to extract building footprints from high-resolution remote-sensing images.
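The morphological indices of [33][34] are built from multi-scale, multi-directional morphological profiles; the core intuition, that bright compact structures respond strongly to a white top-hat while large homogeneous regions do not, can be sketched in plain NumPy. This is a toy illustration of the underlying operation, not the published indices:

```python
import numpy as np

def grey_opening(img, size):
    """Naive grey-scale opening (erosion then dilation) with a
    size x size square structuring element."""
    pad = size // 2
    h, w = img.shape
    p = np.pad(img, pad, mode="edge")
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    eroded = windows.min(axis=0)                 # erosion: local minimum
    p2 = np.pad(eroded, pad, mode="edge")
    windows = np.stack([p2[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    return windows.max(axis=0)                   # dilation: local maximum

def building_response(img, sizes=(3, 7, 11)):
    """Sum of white top-hats (img - opening) over several sizes:
    compact bright structures survive; flat background cancels out."""
    return sum(img - grey_opening(img, s) for s in sizes)

# Toy scene: dark ground (0.1) with one bright 5x5 "building" (0.9)
scene = np.full((32, 32), 0.1)
scene[10:15, 10:15] = 0.9
resp = building_response(scene)
```

Building pixels accumulate a strong response at structuring-element sizes larger than the building, while the homogeneous background responds with exactly zero at every size.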

2.2. Deep-Learning-Based Building Footprint Extraction Methods

Computational intelligence (CI) is a biology- and linguistics-driven computational paradigm [37][38]. In recent years, deep learning technology, as a main pillar of CI, has been widely used in remote-sensing image interpretation thanks to its powerful layer-by-layer learning and nonlinear fitting capabilities, in tasks such as change detection [14], scene classification [39], semantic segmentation [40], and object detection [41][42]. In this context, deep-learning-based building footprint extraction has attracted the attention of many scholars. The building footprint extraction task can be treated as a single-class semantic segmentation task [43]. Therefore, a direct idea is to use a deep-learning-based semantic segmentation network for building footprint extraction, which can fully utilize mainstream deep neural networks (such as VGGNet [44] and ResNet [45]) to mine deep semantic features for recognizing buildings. For example, compared with conventional methods, semantic segmentation networks based on VGGNet, such as the fully convolutional network (FCN) [46] and U-Net [47], can achieve a substantial improvement in building footprint extraction performance [17]. These results have spurred research on deep-learning-based building footprint extraction, and many such approaches have recently been proposed for high-resolution remote-sensing images in an end-to-end manner [43]. These recent methods can be broadly reviewed as follows.
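Because building footprint extraction is framed as binary segmentation, predictions are usually scored against a reference mask with intersection-over-union (IoU). A minimal NumPy sketch of this standard metric (generic, not tied to any cited method):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union between two binary building masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                     # both masks empty: perfect agreement
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # predicted footprint
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 3:7] = 1     # reference footprint
score = iou(pred, gt)   # overlap 9 px, union 16 + 16 - 9 = 23 px -> 9/23
```

Pixel accuracy alone is misleading here because background dominates the image; IoU penalizes both missed building pixels and false alarms.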
As the spatial resolution of images continues to increase, features of various building styles, such as material, color, texture, shape, scale, and distribution, differ more obviously, which makes it difficult to accurately extract pixel-wise building footprints with conventional semantic segmentation networks [48]. To overcome these challenges, many novel networks based on multi-scale and attention structures have been proposed for building footprint extraction. For example, Ji et al. proposed a Siamese U-Net (SiU-Net) for multi-source building extraction [15]. SiU-Net [15] feeds down-sampled counterparts of the original images into a second Siamese branch during training, enhancing the network's multi-scale perception and improving building extraction performance. In [49], a novel network with an encoder–decoder structure, named the building residual refine network (BRRNet), is devised for building extraction; it introduces a residual refinement module to enlarge the receptive field of the network, thus improving the extraction of buildings at various scales. Chen et al. proposed a context feature enhancement network (CFENet) to extract building footprints [50], which builds a spatial fusion module and a focus enhancement module to enhance multi-scale feature representation. Other similar networks can be found in [51][52]. In addition to these networks with multi-scale structures, attention-based networks can also enhance multi-scale feature representation, thus effectively improving building footprint extraction accuracy. For instance, Guo et al. developed a U-Net with an attention block for building extraction in [53]. In Reference [54], a scene-driven multitask parallel attention convolutional network is proposed for building extraction from high-resolution remote-sensing images.
An attention-gate-based and pyramid network (AGPNet) with an encoder–decoder structure is designed for building extraction in [55], which is integrated with a grid-based attention gate and atrous spatial pyramid pooling module to enhance multi-scale features. Other attention-based building footprint extraction methods are available in [56][57][58][59].
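The attention blocks cited above differ in detail, but many share a squeeze-and-excitation-style channel gate: pool a feature map globally, pass the summary through a small bottleneck MLP, and use sigmoid outputs to reweight channels. A minimal NumPy sketch of that generic pattern, with illustrative random weights rather than any paper's trained architecture:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map:
    global average pool -> bottleneck MLP -> sigmoid gate -> reweight."""
    z = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)                # excitation with ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # per-channel gates in (0, 1)
    return feat * gate[:, None, None]          # emphasize informative channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))          # toy feature map, C = 4
w1 = rng.standard_normal((2, 4)) * 0.1         # bottleneck: 4 -> 2 channels
w2 = rng.standard_normal((4, 2)) * 0.1         # expansion:  2 -> 4 channels
out = channel_attention(feat, w1, w2)
```

Because the gates lie strictly in (0, 1), the block can only suppress channels; training drives the gates toward 1 for building-relevant channels and toward 0 for background clutter.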
Recently, some methods have introduced edge information and frequency information to enhance the recognition of buildings [48][60]. For instance, Zhu et al. proposed an edge-detail network for building extraction [61], which exploits the edge information of the images to enhance the identification of building footprints. In [62], a multi-task frequency–spatial learning network is proposed for building extraction. Zhao et al. adopted a multi-scale attention-guided UNet++ with an edge constraint to achieve accurate building footprint segmentation in [63]. For other related papers, one can refer to [64][65][66]. In addition, advanced transformer-based networks have also received attention for building extraction, such as References [57][67][68]. These methods have contributed substantially to the development of building footprint extraction.
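Edge-aware methods of this kind typically supervise an auxiliary boundary target derived from the building mask itself. One common way to obtain such a target is a morphological-gradient-style operation on the binary mask; the sketch below is a generic version of this idea, not the exact scheme of [61] or [63]:

```python
import numpy as np

def mask_boundary(mask):
    """Boundary of a binary building mask: pixels that are 1 but have at
    least one 0 in their 4-neighbourhood (pixels at the array border are
    treated as having no out-of-bounds neighbours)."""
    m = mask.astype(bool)
    interior = m.copy()
    interior[1:, :]  &= m[:-1, :]   # up neighbour is also 1
    interior[:-1, :] &= m[1:, :]    # down neighbour is also 1
    interior[:, 1:]  &= m[:, :-1]   # left neighbour is also 1
    interior[:, :-1] &= m[:, 1:]    # right neighbour is also 1
    return m & ~interior            # mask minus its interior = boundary ring

mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1                  # a 4x4 "building"
edge = mask_boundary(mask)          # 12-pixel ring around the 2x2 interior
```

Supervising this thin ring with an extra loss term pushes the network to place footprint edges precisely, instead of letting boundary pixels be dominated by the much larger interior and background regions.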

References

  1. Lv, Z.; Liu, T.; Benediktsson, J.A.; Falco, N. Land cover change detection techniques: Very-high-resolution optical images: A review. IEEE Geosci. Remote. Sens. Mag. 2021, 10, 44–63.
  2. Gong, M.; Li, J.; Zhang, Y.; Wu, Y.; Zhang, M. Two-path aggregation attention network with quad-patch data augmentation for few-shot scene classification. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 1.
  3. Gong, Y.; Xiao, Z.; Tan, X.; Sui, H.; Xu, C.; Duan, H.; Li, D. Context-aware convolutional neural network for object detection in VHR remote sensing imagery. IEEE Trans. Geosci. Remote. Sens. 2019, 58, 34–44.
  4. Jiang, F.; Gong, M.; Zheng, H.; Liu, T.; Zhang, M.; Liu, J. Self-Supervised Global-Local Contrastive Learning for Fine-Grained Change Detection in VHR Images. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 1–13.
  5. Liu, P.; Liu, X.; Liu, M.; Shi, Q.; Yang, J.; Xu, X.; Zhang, Y. Building footprint extraction from high-resolution images via spatial residual inception convolutional neural network. Remote. Sens. 2019, 11, 830.
  6. Zhang, Y.; Gong, M.; Li, J.; Zhang, M.; Jiang, F.; Zhao, H. Self-supervised monocular depth estimation with multiscale perception. IEEE Trans. Image Process. 2022, 31, 3251–3266.
  7. Zhu, Q.; Liao, C.; Hu, H.; Mei, X.; Li, H. MAP-Net: Multiple attending path neural network for building footprint extraction from remote sensed imagery. IEEE Trans. Geosci. Remote. Sens. 2020, 59, 6169–6181.
  8. Liu, T.; Gong, M.; Lu, D.; Zhang, Q.; Zheng, H.; Jiang, F.; Zhang, M. Building change detection for VHR remote sensing images via local–global pyramid network and cross-task transfer learning strategy. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1.
  9. Sun, Y.; Hua, Y.; Mou, L.; Zhu, X.X. CG-Net: Conditional GIS-aware network for individual building segmentation in VHR SAR images. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1.
  10. Kadhim, N.; Mourshed, M. A shadow-overlapping algorithm for estimating building heights from VHR satellite images. IEEE Geosci. Remote. Sens. Lett. 2017, 15, 8–12.
  11. Wu, Y.; Liu, J.; Gong, M.; Gong, P.; Fan, X.; Qin, A.; Miao, Q.; Ma, W. Self-Supervised Intra-Modal and Cross-Modal Contrastive Learning for Point Cloud Understanding. IEEE Trans. Multimed. 2023, 1–13.
  12. Gong, M.; Zhao, Y.; Li, H.; Qin, A.; Xing, L.; Li, J.; Liu, Y.; Liu, Y. Deep Fuzzy Variable C-Means Clustering Incorporated with Curriculum Learning. IEEE Trans. Fuzzy Syst. 2023, 1–15.
  13. Zhang, Y.; Gong, M.; Zhang, M.; Li, J. Self-Supervised Monocular Depth Estimation With Self-Perceptual Anomaly Handling. IEEE Trans. Neural Netw. Learn. Syst. 2023. ahead of print.
  14. Wu, Y.; Li, J.; Yuan, Y.; Qin, A.; Miao, Q.G.; Gong, M.G. Commonality autoencoder: Learning common features for change detection from heterogeneous images. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4257–4270.
  15. Ji, S.; Wei, S.; Lu, M. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set. IEEE Trans. Geosci. Remote. Sens. 2018, 57, 574–586.
  16. Zhang, Y.; Gong, M.; Li, J.; Feng, K.; Zhang, M. Autonomous perception and adaptive standardization for few-shot learning. Knowl.-Based Syst. 2023, 277, 110746.
  17. Yuan, J. Learning building extraction in aerial scenes with convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2793–2798.
  18. Liu, T.; Gong, M.; Jiang, F.; Zhang, Y.; Li, H. Landslide inventory mapping method based on adaptive histogram-mean distance with bitemporal VHR aerial images. IEEE Geosci. Remote. Sens. Lett. 2021, 19, 1–5.
  19. Wu, Y.; Liu, J.; Yuan, Y.; Hu, X.; Fan, X.; Tu, K.; Gong, M.; Miao, Q.; Ma, W. Correspondence-Free Point Cloud Registration Via Feature Interaction and Dual Branch. IEEE Comput. Intell. Mag. 2023, 18, 66–79.
  20. Lv, Z.; Zhong, P.; Wang, W.; You, Z.; Shi, C. Novel Piecewise Distance based on Adaptive Region Key-points Extraction for LCCD with VHR Remote Sensing Images. IEEE Trans. Geosci. Remote. Sens. 2023, 61.
  21. Li, J.; Li, H.; Liu, Y.; Gong, M. Multi-fidelity evolutionary multitasking optimization for hyperspectral endmember extraction. Appl. Soft Comput. 2021, 111, 107713.
  22. Lv, Z.; Zhang, P.; Sun, W.; Benediktsson, J.A.; Li, J.; Wang, W. Novel Adaptive Region Spectral-Spatial Features for Land Cover Classification with High Spatial Resolution Remotely Sensed Imagery. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 5609412.
  23. Sahar, L.; Muthukumar, S.; French, S.P. Using aerial imagery and GIS in automated building footprint extraction and shape recognition for earthquake risk assessment of urban inventories. IEEE Trans. Geosci. Remote. Sens. 2010, 48, 3511–3520.
  24. Van Etten, A.; Hogan, D.; Manso, J.M.; Shermeyer, J.; Weir, N.; Lewis, R. The multi-temporal urban development spacenet dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 6398–6407.
  25. Ma, H.; Liu, Y.; Ren, Y.; Yu, J. Detection of collapsed buildings in post-earthquake remote sensing images based on the improved YOLOv3. Remote. Sens. 2019, 12, 44.
  26. Li, H.; Li, J.; Zhao, Y.; Gong, M.; Zhang, Y.; Liu, T. Cost-sensitive self-paced learning with adaptive regularization for classification of image time series. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2021, 14, 11713–11727.
  27. Song, W.; Haithcoat, T.L. Development of comprehensive accuracy assessment indexes for building footprint extraction. IEEE Trans. Geosci. Remote. Sens. 2005, 43, 402–404.
  28. Shackelford, A.K.; Davis, C.H.; Wang, X. Automated 2-D building footprint extraction from high-resolution satellite multispectral imagery. In Proceedings of the IGARSS 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 3, pp. 1996–1999.
  29. Wang, O.; Lodha, S.K.; Helmbold, D.P. A bayesian approach to building footprint extraction from aerial lidar data. In Proceedings of the 3rd International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT’06), Washington, DC, USA, 14–16 June 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 192–199.
  30. Zabuawala, S.; Nguyen, H.; Wei, H.; Yadegar, J. Fusion of LiDAR and aerial imagery for accurate building footprint extraction. In Image Processing: Machine Vision Applications II; SPIE: Bellingham, WA, USA, 2009; Volume 7251, pp. 337–347.
  31. Wang, J.; Zeng, C.; Lehrbass, B. Building extraction from LiDAR and aerial images and its accuracy evaluation. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 64–67.
  32. Zhang, L.; Huang, X.; Huang, B.; Li, P. A pixel shape index coupled with spectral information for classification of high spatial resolution remotely sensed imagery. IEEE Trans. Geosci. Remote. Sens. 2006, 44, 2950–2961.
  33. Huang, X.; Zhang, L. A Multidirectional and Multiscale Morphological Index for Automatic Building Extraction from Multispectral GeoEye-1 Imagery. Photogramm. Eng. Remote. Sens. 2011, 77, 721–732.
  34. Huang, X.; Zhang, L. Morphological building/shadow index for building extraction from high-resolution imagery over urban areas. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2011, 5, 161–172.
  35. Ma, W.; Wan, Y.; Li, J.; Zhu, S.; Wang, M. An automatic morphological attribute building extraction approach for satellite high spatial resolution imagery. Remote. Sens. 2019, 11, 337.
  36. Li, J.; Cao, J.; Feyissa, M.E.; Yang, X. Automatic building detection from very high-resolution images using multiscale morphological attribute profiles. Remote. Sens. Lett. 2020, 11, 640–649.
  37. Wu, Y.; Ding, H.; Gong, M.; Qin, A.; Ma, W.; Miao, Q.; Tan, K.C. Evolutionary multiform optimization with two-stage bidirectional knowledge transfer strategy for point cloud registration. IEEE Trans. Evol. Comput. 2022, 1.
  38. Wu, Y.; Zhang, Y.; Ma, W.; Gong, M.; Fan, X.; Zhang, M.; Qin, A.; Miao, Q. Rornet: Partial-to-partial registration network with reliable overlapping representations. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–14.
  39. Li, J.; Gong, M.; Liu, H.; Zhang, Y.; Zhang, M.; Wu, Y. Multiform Ensemble Self-Supervised Learning for Few-Shot Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote. Sens. 2023, 61, 4500416.
  40. Yuan, X.; Shi, J.; Gu, L. A review of deep learning methods for semantic segmentation of remote sensing imagery. Expert Syst. Appl. 2021, 169, 114417.
  41. Hoeser, T.; Kuenzer, C. Object detection and image segmentation with deep learning on earth observation data: A review-part I: Evolution and recent trends. Remote. Sens. 2020, 12, 1667.
  42. Hoeser, T.; Bachofer, F.; Kuenzer, C. Object detection and image segmentation with deep learning on Earth observation data: A review—Part II: Applications. Remote. Sens. 2020, 12, 3053.
  43. Luo, L.; Li, P.; Yan, X. Deep learning-based building extraction from remote sensing images: A comprehensive review. Energies 2021, 14, 7982.
  44. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  45. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
  46. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  47. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  48. Gong, M.; Liu, T.; Zhang, M.; Zhang, Q.; Lu, D.; Zheng, H.; Jiang, F. Context-content collaborative network for building extraction from high-resolution imagery. Knowl.-Based Syst. 2023, 263, 110283.
  49. Shao, Z.; Tang, P.; Wang, Z.; Saleem, N.; Yam, S.; Sommai, C. BRRNet: A fully convolutional neural network for automatic building extraction from high-resolution remote sensing images. Remote. Sens. 2020, 12, 1050.
  50. Chen, J.; Zhang, D.; Wu, Y.; Chen, Y.; Yan, X. A context feature enhancement network for building extraction from high-resolution remote sensing imagery. Remote. Sens. 2022, 14, 2276.
  51. Ma, J.; Wu, L.; Tang, X.; Liu, F.; Zhang, X.; Jiao, L. Building extraction of aerial images by a global and multi-scale encoder-decoder network. Remote. Sens. 2020, 12, 2350.
  52. Ji, S.; Wei, S.; Lu, M. A scale robust convolutional neural network for automatic building extraction from aerial and satellite imagery. Int. J. Remote. Sens. 2019, 40, 3308–3322.
  53. Guo, M.; Liu, H.; Xu, Y.; Huang, Y. Building extraction based on U-Net with an attention block and multiple losses. Remote. Sens. 2020, 12, 1400.
  54. Guo, H.; Shi, Q.; Du, B.; Zhang, L.; Wang, D.; Ding, H. Scene-driven multitask parallel attention network for building extraction in high-resolution remote sensing images. IEEE Trans. Geosci. Remote. Sens. 2020, 59, 4287–4306.
  55. Deng, W.; Shi, Q.; Li, J. Attention-gate-based encoder–decoder network for automatical building extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2021, 14, 2611–2620.
  56. Yang, H.; Wu, P.; Yao, X.; Wu, Y.; Wang, B.; Xu, Y. Building extraction in very high resolution imagery by dense-attention networks. Remote. Sens. 2018, 10, 1768.
  57. Yuan, W.; Xu, W. MSST-Net: A multi-scale adaptive network for building extraction from remote sensing images based on swin transformer. Remote. Sens. 2021, 13, 4743.
  58. Tian, Q.; Zhao, Y.; Li, Y.; Chen, J.; Chen, X.; Qin, K. Multiscale building extraction with refined attention pyramid networks. IEEE Geosci. Remote. Sens. Lett. 2021, 19, 1–5.
  59. Zhou, D.; Wang, G.; He, G.; Long, T.; Yin, R.; Zhang, Z.; Chen, S.; Luo, B. Robust building extraction for high spatial resolution remote sensing images with self-attention network. Sensors 2020, 20, 7241.
  60. Zheng, H.; Gong, M.; Liu, T.; Jiang, F.; Zhan, T.; Lu, D.; Zhang, M. HFA-Net: High frequency attention siamese network for building change detection in VHR remote sensing images. Pattern Recognit. 2022, 129, 108717.
  61. Zhu, Y.; Liang, Z.; Yan, J.; Chen, G.; Wang, X. ED-Net: Automatic building extraction from high-resolution aerial images with boundary information. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2021, 14, 4595–4606.
  62. Yu, B.; Chen, F.; Wang, N.; Yang, L.; Yang, H.; Wang, L. MSFTrans: A multi-task frequency-spatial learning transformer for building extraction from high spatial resolution remote sensing images. GISci. Remote Sens. 2022, 59, 1978–1996.
  63. Zhao, H.; Zhang, H.; Zheng, X. A multiscale attention-guided UNet++ with edge constraint for building extraction from high spatial resolution imagery. Appl. Sci. 2022, 12, 5960.
  64. Jung, H.; Choi, H.S.; Kang, M. Boundary enhancement semantic segmentation for building extraction from remote sensed image. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1–12.
  65. Xu, Z.; Xu, C.; Cui, Z.; Zheng, X.; Yang, J. CVNet: Contour Vibration Network for Building Extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1383–1391.
  66. Chen, S.; Shi, W.; Zhou, M.; Zhang, M.; Xuan, Z. CGSANet: A Contour-Guided and Local Structure-Aware Encoder–Decoder Network for Accurate Building Extraction From Very High-Resolution Remote Sensing Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2021, 15, 1526–1542.
  67. Wang, L.; Fang, S.; Meng, X.; Li, R. Building extraction with vision transformer. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 1–11.
  68. Hu, Y.; Wang, Z.; Huang, Z.; Liu, Y. PolyBuilding: Polygon transformer for building extraction. ISPRS J. Photogramm. Remote. Sens. 2023, 199, 15–27.