Ge, J.; Tang, H.; Ji, C. Building Damage Identification Methods and Transfer Learning Methods. Encyclopedia. Available online: https://encyclopedia.pub/entry/48739 (accessed on 25 June 2024).
Building Damage Identification Methods and Transfer Learning Methods

Building damage caused by natural disasters seriously threatens human safety. Applying deep learning algorithms to identify collapsed buildings from remote sensing images is crucial for rapid post-disaster emergency response.

Keywords: building damage; remote sensing; self-incremental learning; sample selection; disaster emergency response

1. Introduction

The frequent occurrence of extreme natural disasters seriously threatens human life. Timely access to the distribution of collapsed buildings is crucial for emergency response and post-disaster rescue efforts [1]. Remote sensing technology currently provides an efficient solution for the accurate and rapid extraction of building damage, and post-disaster remote sensing images with high spatial resolution have become indispensable basic data for identifying disaster damage in numerous studies [2][3]. Among these, optical imagery stands out as a common and accessible source of remote sensing data [3], with a wide variety of sensors facilitating data acquisition. Some studies have also used radar mounted on drones to scan post-disaster buildings [4]; radar is unaffected by post-disaster weather conditions and can be combined with optical images for comprehensive analysis [5]. Moreover, LiDAR data are useful for detecting height changes in buildings, enabling precise extraction of collapsed parts [2].
The vast diversity of buildings in different regions makes it challenging to accurately identify buildings and assess their damage with a pre-trained model [6]. Currently, deep learning technology, particularly convolutional neural networks, has achieved state-of-the-art results in building damage extraction [3]. Most research in this area focuses on proposing or improving a change detection model to extract building damage information from paired bitemporal pre- and post-disaster images [3][7][8][9]. However, in emergency response scenarios, relying on pre-disaster imagery can significantly reduce both the effectiveness and the efficiency of damage assessment. An alternative worth considering is therefore to prepare building distribution maps, extracted from pre-disaster images, before any disaster occurs.
Building distribution maps can filter out complex background categories and provide key information, including building location and shape, which is clearly helpful for building damage identification. Currently, only a few studies exclusively combine pre-disaster building distribution maps with post-disaster imagery, despite the availability of building footprint or rooftop data covering the vast majority of the world [10]; notable open-access examples include Open Street Map (http://www.openstreetmap.org/, accessed on 13 October 2022) and Microsoft Bing maps (https://github.com/microsoft/GlobalMLBuildingFootprints, accessed on 21 February 2023). Admittedly, these building distribution data currently cannot guarantee a high update frequency, so the interval between a pre-disaster building distribution map and a post-disaster image can span several years.
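As a minimal illustration of how a building distribution map can filter out background and restrict analysis to building pixels, consider the sketch below. The integer building-id mask, the toy 4x4 image, and the mean-intensity statistic are all illustrative assumptions, not part of any cited method.

```python
import numpy as np

def masked_building_pixels(post_image, building_mask):
    """Per-building pixel statistics from a post-disaster image, using a
    pre-disaster building mask (0 = background, k > 0 = building id)
    to discard background pixels entirely."""
    stats = {}
    for bid in np.unique(building_mask):
        if bid == 0:
            continue                              # skip background
        pixels = post_image[building_mask == bid]  # only this building's pixels
        stats[bid] = float(pixels.mean())          # mean rooftop intensity
    return stats

# toy post-disaster grayscale image and a two-building pre-disaster mask
post = np.array([[10, 10, 200, 200],
                 [10, 10, 200, 200],
                 [ 0,  0,  50,  50],
                 [ 0,  0,  50,  50]], dtype=float)
mask = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 2],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(masked_building_pixels(post, mask))  # mean rooftop intensity per building id
```

The point of the sketch is that the mask supplies location and shape for free, so downstream damage logic never has to separate buildings from complex background.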
In addition, it is still difficult to accurately identify buildings in post-disaster images because the training data may come from different sensors or different geographical regions [11]. Simply applying a pre-trained model to post-disaster scenarios can therefore lead to a considerable drop in generalization performance and poor recognition results [9].
Transfer learning is a common solution for adapting the original model to perform better on the target domain. Data-based transfer learning has been shown to improve a model's target application by utilizing target-domain data [12][13]. Hu et al. [14] demonstrated that using post-disaster samples effectively enhances the identification accuracy of damaged buildings. Incremental learning, on the other hand, is a model-based transfer learning approach that improves generalization in specific scenarios by adding new base learners [15][16]. Ge et al. [6] confirmed that incremental learning significantly reduces transfer time during emergency response, as it trains only on new data containing post-disaster information. Therefore, incrementally learning sufficient post-disaster samples can effectively and rapidly improve the model's performance. The key technology in this process is selecting high-quality post-disaster samples with the assistance of building distribution maps.
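The incremental idea above (freeze the original model, add a new base learner trained only on post-disaster samples) can be sketched as follows. The score-averaging combination rule and the toy threshold "learners" are assumptions for illustration; real base learners would be trained networks.

```python
import numpy as np

class IncrementalEnsemble:
    """Model-based transfer learning sketch: keep pre-trained base learners
    frozen and append a new learner fitted only on post-disaster samples."""
    def __init__(self, base_learners):
        self.learners = list(base_learners)   # frozen pre-disaster models

    def add_learner(self, learner):
        self.learners.append(learner)         # only this learner saw post-disaster data

    def predict(self, X):
        # average the scores of all base learners, then threshold
        scores = np.mean([learn(X) for learn in self.learners], axis=0)
        return (scores >= 0.5).astype(int)

# toy "learners": simple threshold rules standing in for trained networks
pre_model  = lambda X: (X[:, 0] > 0.5).astype(float)
post_model = lambda X: (X[:, 1] > 0.5).astype(float)

ensemble = IncrementalEnsemble([pre_model])
ensemble.add_learner(post_model)              # incremental step: pre_model is not retrained
X = np.array([[0.9, 0.9], [0.1, 0.9], [0.1, 0.1]])
print(ensemble.predict(X))
```

Because the old learners are never retrained, the transfer cost is only the training of the new learner, which is the source of the time savings reported for emergency response.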

2. Building Damage Identification Methods

Most studies have focused on using paired pre- and post-disaster images to identify building damage [8][17]. Durnov [18] proposed a change detection method with a Siamese structure that achieved top-ranking results in a building damage identification competition. Several similar change detection models were subsequently introduced [3][7], focusing primarily on optimizing the model structure. However, these methods require bitemporal images for damage detection, and the reliance on pre-disaster images limits the efficiency of disaster emergency response. Additionally, methods combining multi-source images with various auxiliary data have been employed to extract high-precision disaster results [5]. For instance, Wang et al. [2] employed multiple types of data, including LiDAR and optical images, to extract collapsed areas from changes in building height and corner-point information. In reality, however, many such specialized data sources may be difficult to obtain in the short time after a disaster.
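The Siamese change detection idea (one shared encoder applied to both dates, then a comparison of the resulting features) can be caricatured in a few lines. The mean-pooling "encoder" and the fixed threshold below are illustrative stand-ins for learned, weight-shared CNN branches, not any published model.

```python
import numpy as np

def shared_encoder(img, k=2):
    """Toy shared 'encoder': k x k mean pooling (a stand-in for shared CNN weights)."""
    h, w = img.shape
    img = img[:h - h % k, :w - w % k]             # crop to a multiple of k
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def change_map(pre, post, thresh=50.0):
    """Siamese-style detection: the SAME encoder on both dates, then |difference|."""
    return (np.abs(shared_encoder(pre) - shared_encoder(post)) > thresh).astype(int)

# a building present pre-disaster (bright block) collapses (dark) post-disaster
pre  = np.zeros((4, 4)); pre[:2, :2] = 200.0
post = np.zeros((4, 4))
print(change_map(pre, post))                      # change fires only where the building stood
```

The weight sharing is the essential property: both dates are mapped into the same feature space, so their difference isolates genuine change rather than sensor or style differences.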
Methods that rely solely on post-disaster images aim to identify damaged buildings efficiently [19][20]. Based on the morphological and spectral characteristics of post-earthquake buildings, Ma et al. [21] proposed a method for depicting collapsed buildings using only post-disaster high-resolution images. Munsif et al. [22] built a lightweight CNN model of just 3 MB that can be deployed on Unmanned Aerial Vehicles (UAVs) with limited hardware resources, using several data augmentation techniques to improve the efficiency and accuracy of multi-hazard damage identification. Nia et al. [23] introduced a deep model based on ground-level post-disaster images and demonstrated that using semantic segmentation results as the foreground positively impacts building damage assessment. Miura et al. [20] developed a collapsed building identification method using a CNN model and post-disaster aerial images, achieving a damage distribution basically consistent with earthquake inventories. However, existing studies have shown that the separability between collapsed buildings and the background is relatively low [24], so post-disaster information alone often falls short of accurately locating damaged areas.
It is difficult to meet both the accuracy and efficiency requirements of emergency conditions by relying on bitemporal images or post-disaster images alone. Therefore, combining key pre-disaster knowledge (such as pre-disaster building distribution maps or a pre-trained building identification model) with post-disaster images to identify damaged buildings quickly and accurately is a solution being developed in some studies [25]. For example, Galanis et al. [26] introduced the DamageMap model for wildfire disasters, which leverages pre-disaster building segmentation results and post-disaster aerial or satellite imagery in a classification task that determines whether buildings are damaged. At present, few studies make full use of pre-disaster building distribution maps. Even though the building distribution data may not strictly correspond to each building in the post-disaster images, they can still provide much effective information about building location and shape. Devising methods that better exploit pre-disaster information is therefore a promising direction for future disaster response tasks.
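A pipeline in the spirit of DamageMap [26] reduces damage mapping to a per-building classification over pre-disaster footprints. In the sketch below, the bounding-box footprints and the rooftop-variance heuristic (collapsed roofs tend to look heterogeneous) are purely illustrative assumptions, not the published method.

```python
import numpy as np

def classify_buildings(post_image, footprints, var_thresh=100.0):
    """Classify each pre-disaster footprint from the post-disaster image alone.
    Hypothetical heuristic: a high pixel variance inside the footprint is
    read as 'damaged'; a real system would use a trained classifier."""
    labels = {}
    for bid, (r0, r1, c0, c1) in footprints.items():  # footprints as row/col boxes
        patch = post_image[r0:r1, c0:c1]
        labels[bid] = "damaged" if patch.var() > var_thresh else "intact"
    return labels

# toy post-disaster scene: building A keeps a uniform roof, building B is rubble
scene = np.full((4, 8), 100.0)
scene[:, 4:] = np.indices((4, 4)).sum(axis=0) % 2 * 255.0  # checkerboard "rubble"
footprints = {"A": (0, 4, 0, 4), "B": (0, 4, 4, 8)}
print(classify_buildings(scene, footprints))
```

Framing the task per footprint sidesteps the hard localization problem entirely: the pre-disaster map says where the buildings are, and the model only decides their state.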

3. Transfer Learning Methods

When a pre-trained model is directly applied to a target domain whose features differ significantly from the training data, accuracy can drop considerably. Transfer learning is used to address this practical problem. Current transfer learning methods fall into three categories: (1) Data-based transfer learning [27] usually uses some samples of the target domain to enhance the model's performance in target applications; an example is the self-training method [28], which improves the model's generalization ability by automatically generating pseudo-labels. (2) Feature-based transfer learning [29][30] transforms the data of the two domains into the same feature space, reducing the distance between source-domain and target-domain features, as in domain adversarial networks [31]. (3) Model-based transfer learning [32] usually adds new layers or integrates new base learners to optimize the original model, as in incremental learning [16].
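The self-training method in category (1) can be sketched as a single pseudo-labeling round. The confidence threshold and the toy scoring function standing in for a pre-trained network are assumptions for illustration.

```python
import numpy as np

def self_training_round(score_fn, unlabeled, conf=0.9):
    """Data-based transfer sketch: keep only target-domain samples on which
    the pre-trained model is confident, and use its own predictions as
    pseudo-labels for a subsequent fine-tuning step."""
    scores = score_fn(unlabeled)                        # predicted P(damaged) per sample
    confident = (scores > conf) | (scores < 1 - conf)   # drop uncertain samples
    pseudo_labels = (scores > 0.5).astype(int)
    return unlabeled[confident], pseudo_labels[confident]

# toy scoring function standing in for a pre-trained network
score_fn = lambda X: X[:, 0]
unlabeled = np.array([[0.95], [0.50], [0.02]])
X_sel, y_sel = self_training_round(score_fn, unlabeled)
print(X_sel.ravel(), y_sel)                             # the 0.50 sample is discarded
```

Only the confident samples enter training, which is what keeps noisy pseudo-labels from degrading the model; in a full system this round would be repeated as confidence grows.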
Transfer learning in building damage extraction tasks aims to improve model performance in post-disaster scenes. In practice, however, these methods face challenges such as the scarcity of post-disaster samples, variations in image style, and the distinctive characteristics of the buildings themselves. Hu et al. [14] compared three transfer learning methods for post-disaster building recognition and found that utilizing samples from disaster areas can significantly boost recognition accuracy for various types of disasters. Lin et al. [33] proposed a novel method that filters historical data relevant to the target task from earthquake cases, aiming to improve the reliability of classification results.
In addition to transfer learning, data augmentation is often used to improve the generalization performance of models [16][24]; it applies various transformations to existing images, such as rotations, flips, or zooms, so that the model becomes more robust. Data synthesis is another valuable strategy that can address data scarcity by combining real data with computer simulations or generative models [34]. These methods can be combined with transfer learning to provide more precise and timely disaster information in emergency missions. Ge et al. [6] employed a generative network to transfer the style of remote sensing images under an incremental learning framework and used a data augmentation strategy to train the models, improving the accuracy of building damage recognition.

References

  1. Motosaka, M.; Mitsuji, K. Building damage during the 2011 off the Pacific coast of Tohoku Earthquake. Soils Found. 2012, 52, 929–944.
  2. Wang, X.; Li, P. Extraction of urban building damage using spectral, height and corner information from VHR satellite images and airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2020, 159, 322–336.
  3. Zheng, Z.; Zhong, Y.; Wang, J.; Ma, A.; Zhang, L. Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to man-made disasters. Remote Sens. Environ. 2021, 265, 112636.
  4. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99.
  5. Brunner, D.; Lemoine, G.; Bruzzone, L. Earthquake damage assessment of buildings using vhr optical and sar imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2403–2420.
  6. Ge, J.; Tang, H.; Yang, N.; Hu, Y. Rapid identification of damaged buildings using incremental learning with transferred data from historical natural disaster cases. ISPRS J. Photogramm. Remote Sens. 2022, 195, 105–128.
  7. Bai, Y.; Hu, J.; Su, J.; Liu, X.; Liu, H.; He, X.; Meng, S.; Mas, E.; Koshimura, S. Pyramid pooling module-based semi-siamese network: A benchmark model for assessing building damage from xBD satellite imagery datasets. Remote Sens. 2020, 12, 4055.
  8. Shen, Y.; Zhu, S.; Yang, T.; Chen, C.; Pan, D.; Chen, J.; Xiao, L.; Du, Q.F. BDANet: Multiscale convolutional neural network with cross-directional attention for building damage assessment from satellite images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
  9. Yang, W.; Zhang, X.; Luo, P. Transferability of convolutional neural network models for identifying damaged buildings due to earthquake. Remote Sens. 2021, 13, 504.
  10. Hu, Y.; Liu, C.; Li, Z.; Xu, J.; Han, Z.; Guo, J. Few-Shot Building Footprint Shape Classification with Relation Network. ISPRS Int. J. Geo-Inf. 2022, 11, 311.
  11. Wang, C. Investigation and analysis of building structure damage in Yushu Earthquake. Build. Struct. 2010, 40, 106–109.
  12. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680.
  13. Park, T.; Efros, A.; Zhang, R.; Zhu, J. Contrastive learning for unpaired image-to-image translation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020. Proceedings, Part IX 16.
  14. Hu, Y.; Tang, H. On the generalization ability of a global model for rapid building mapping from heterogeneous satellite images of multiple natural disaster scenarios. Remote Sens. 2021, 13, 984.
  15. Ring, M.B. Continual Learning in Reinforcement Environments. Ph.D. Thesis, University of Texas at Austin, Austin, TX, USA, 1994. Available online: https://www.researchgate.net/publication/2600799 (accessed on 10 April 2023).
  16. Yang, N.; Tang, H. GeoBoost: An incremental deep learning approach toward global mapping of buildings from VHR remote sensing images. Remote Sens. 2020, 12, 1794.
  17. Weber, E.; Kan, H. Building disaster damage assessment in satellite imagery with multi-temporal fusion. arXiv 2020, arXiv:2004.05525.
  18. Durnov, V. xview2 First Place Framework. 2020. Available online: https://github.com/DIUx-xView/xView2_first_place (accessed on 10 April 2023).
  19. Li, X.; Yang, W.; Ao, T.; Li, H.; Chen, W. An improved approach of information extraction for earthquake-damaged buildings using high-resolution imagery. J. Earthq. Tsunami 2011, 5, 389–399.
  20. Miura, H.; Aridome, T.; Matsuoka, M. Deep learning-based identification of collapsed, non-collapsed and blue tarp-covered buildings from post-disaster aerial images. Remote Sens. 2020, 12, 1924.
  21. Ma, J.; Qin, S. Automatic depicting algorithm of earthquake collapsed buildings with airborne high resolution image. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; IEEE: Munich, Germany, 2012; pp. 939–942.
  22. Munsif, M.; Afridi, H.; Ullah, M.; Khan, S.D.; Cheikh, F.A.; Sajjad, M. A lightweight convolution neural network for automatic disasters recognition. In Proceedings of the 2022 10th European Workshop on Visual Information Processing (EUVIP), Lisbon, Portugal, 11–14 September 2022.
  23. Nia, K.R.; Mori, G. Building damage assessment using deep learning and ground-level image data. In Proceedings of the 2017 14th Conference on Computer and Robot Vision (CRV), Edmonton, AB, Canada, 16–19 May 2017.
  24. Qing, Y.; Ming, D.; Wen, Q.; Weng, Q.; Xu, L.; Chen, Y.; Zhang, Y.; Zeng, B. Operational earthquake-induced building damage assessment using CNN-based direct remote sensing change detection on superpixel level. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102899.
  25. Tilon, S.; Nex, F.; Kerle, N.; Vosselman, G. Post-disaster building damage detection from earth observation imagery using unsupervised and transferable anomaly detecting generative adversarial networks. Remote Sens. 2020, 12, 4193.
  26. Galanis, M.; Rao, K.; Yao, X.; Tsai, Y.; Ventura, J. Damagemap: A post-wildfire damaged buildings classifier. Int. J. Disaster Risk Reduct. 2021, 65, 102540.
  27. Liu, X.; Liu, Z.; Wang, G.; Zhang, H. Ensemble transfer learning algorithm. IEEE Access 2017, 6, 2389–2396.
  28. Gu, X.; Zhang, C.; Shen, Q.; Han, J.; Plamen, P.A.; Peter, M.A. A self-training hierarchical prototype-based ensemble framework for remote sensing scene classification. Inf. Fusion 2022, 80, 179–204.
  29. Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J. CyCADA: Cycle-consistent adversarial domain adaptation. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018.
  30. Tasar, O.; Giros, A.; Tarabalka, Y.; Alliez, P.; Clerc, S. Daugnet: Unsupervised, multisource, multitarget, and life-long domain adaptation for semantic segmentation of satellite images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1067–1081.
  31. Na, J.; Jung, H.; Chang, H.; Hwang, W. Fixbi: Bridging domain spaces for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021.
  32. Zhao, Z.; Chen, Y.; Liu, J.; Shen, Z.; Liu, M. Cross-people mobile-phone based activity recognition. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011.
  33. Lin, Q.; Ci, T.; Wang, L.; Mondal, S.; Yin, H.; Wang, Y. Transfer learning for improving seismic building damage assessment. Remote Sens. 2022, 14, 201.
  34. Antoniou, A.; Storkey, A.; Edwards, H. Data augmentation generative adversarial networks. arXiv 2017, arXiv:1711.04340.
Subjects: Remote Sensing
Update Date: 04 Sep 2023