1. Introduction
The frequent occurrence of extreme natural disasters poses a serious threat to human life. Timely access to information on the distribution of collapsed buildings is crucial for emergency response and post-disaster rescue efforts
[1]. Currently, remote sensing technology provides an efficient solution for the accurate and rapid extraction of building damage. As a result, post-disaster remote sensing images with high spatial resolution have become indispensable basic data for identifying disaster damage in numerous studies
[2][3]. Among these, optical imagery stands out as a common and accessible source of remote sensing data
[3], with a wide variety of sensors facilitating easy data acquisition. Some studies have also utilized radar equipment mounted on drones to scan post-disaster buildings
[4]. Such radar is unaffected by post-disaster weather conditions and can be combined with optical images for comprehensive analysis
[5]. Moreover, LiDAR data proves useful in detecting height changes in buildings, enabling precise extraction of collapsed parts
[2].
The vast diversity of buildings in different regions presents a significant challenge in accurately identifying buildings and assessing their damage using a pre-trained model
[6]. Currently, deep learning technology, particularly convolutional neural networks, has achieved state-of-the-art results in the task of building damage extraction
[3]. Most of the research in this area focuses on proposing or improving a model for change detection to extract building damage information from paired bitemporal images of pre- and post-disaster
[3][7][8][9]. However, in emergency response scenarios, relying on pre-disaster imagery can significantly reduce both the effectiveness and the efficiency of damage assessment. An alternative worth considering, therefore, is to prepare building distribution maps extracted from pre-disaster images before any disaster occurs.
Building distribution maps can filter out complex background categories and provide key information, including building location and shape, which is clearly helpful for building damage identification. Currently, only a few studies exclusively utilize pre-disaster building distribution maps in combination with post-disaster imagery, despite the availability of building footprint or rooftop data covering the vast majority of the world
[10]. Notable examples of open-access data include OpenStreetMap (http://www.openstreetmap.org/, accessed on 13 October 2022) and Microsoft's Bing building footprints (https://github.com/microsoft/GlobalMLBuildingFootprints, accessed on 21 February 2023). Admittedly, these building distribution datasets cannot currently guarantee a high update frequency, so the interval between an available pre-disaster building distribution map and the post-disaster images may span several years.
In addition, it is still difficult to accurately identify buildings from post-disaster images because the training data may come from different sensors or from different geographical regions
[11]. Therefore, simply applying a pre-trained model to post-disaster scenarios can lead to a considerable drop in generalization performance and poor recognition results
[9].
Transfer learning is a common solution to adapt the original model for better performance on the target domain. Data-based transfer learning has been shown to improve the model’s application by utilizing target domain data
[12][13]. Hu et al.
[14] demonstrated that using post-disaster samples effectively enhances the identification accuracy of damaged buildings. On the other hand, incremental learning is a model-based transfer learning approach that improves generalization in specific scenarios by adding new base learners
[15][16]. Ge et al.
[6] confirmed that incremental learning significantly reduces transfer time during emergency response, as it trains only on new data containing post-disaster information. Therefore, incrementally learning from sufficient post-disaster samples can effectively and rapidly improve the model's performance. In this process, the key technology lies in selecting high-quality post-disaster samples with the assistance of building distribution maps.
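As a rough illustration of this model-based idea, the following sketch keeps a frozen pre-trained model and adds a new base learner trained only on post-disaster samples, combining their scores by a weighted average. The `IncrementalEnsemble` class and the stand-in models are hypothetical, not the implementation from the cited papers:

```python
import numpy as np

class IncrementalEnsemble:
    """Hypothetical sketch of model-based transfer: keep the pre-trained
    model frozen and add new base learners trained only on newly
    collected post-disaster samples."""

    def __init__(self, base_model):
        self.learners = [base_model]   # frozen pre-trained model
        self.weights = [1.0]

    def add_learner(self, learner, weight=1.0):
        # Only the new learner needs training, which keeps the
        # transfer step fast compared with full retraining.
        self.learners.append(learner)
        self.weights.append(weight)

    def predict_proba(self, x):
        # Weighted average of all base learners' damage scores.
        scores = [w * m(x) for m, w in zip(self.learners, self.weights)]
        return np.sum(scores, axis=0) / np.sum(self.weights)

# Toy usage with stand-in "models" (callables returning per-sample scores)
pretrained = lambda x: np.full(len(x), 0.2)   # trained pre-disaster
new_learner = lambda x: np.full(len(x), 0.8)  # fitted on post-disaster samples

ens = IncrementalEnsemble(pretrained)
ens.add_learner(new_learner)
print(ens.predict_proba(np.zeros(4)))  # → [0.5 0.5 0.5 0.5]
```

Because the original learners stay frozen, only the new learner's parameters are updated during the emergency transfer step.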
2. Building Damage Identification Methods
Most studies have focused on using paired pre- and post-disaster images to identify building damage
[8][17]. Durnov
[18] proposed a change detection method utilizing a Siamese structure that achieved top-ranking results in a competition focused on building damage identification. Subsequently, several similar change detection models were introduced
[3][7], with a primary focus on optimizing the model structure. However, these methods necessitate the use of bitemporal images for damage detection, thus limiting the efficiency of disaster emergency response due to the reliance on pre-disaster images. Additionally, methods that combine multi-source images with various auxiliary data have been employed to extract high-precision disaster results
[5]. For instance, Wang et al.
[2] employed multiple types of data, including LiDAR and optical images, to extract collapsed areas from changes in building height and corner-point information. In reality, however, many specific types of data may be difficult to obtain in the short time after a disaster.
Methods that solely rely on post-disaster images aim to efficiently identify damaged buildings
[19][20]. Based on the morphological and spectral characteristics of post-earthquake buildings, Ma et al.
[21] proposed a method for delineating collapsed buildings using only post-disaster high-resolution images. Munsif et al.
[22] developed a lightweight CNN model of only 3 MB that can be deployed on Unmanned Aerial Vehicles (UAVs) with limited hardware resources, utilizing several data augmentation techniques to enhance the efficiency and accuracy of multi-hazard damage identification. Nia et al.
[23] introduced a deep model based on ground-level post-disaster images and demonstrated that using semantic segmentation results as the foreground positively impacted building damage assessment. Miura et al.
[20] developed a collapsed building identification method using a CNN model and post-disaster aerial images, achieving a damage distribution largely consistent with earthquake damage inventories. However, existing studies have shown that the separability between collapsed buildings and the background is relatively low
[24]. Solely using post-disaster information often falls short in accurately locating the damaged areas.
It is difficult to meet both accuracy and efficiency requirements under emergency conditions by relying on bitemporal images or on post-disaster images alone. Therefore, combining key pre-disaster knowledge (such as pre-disaster building distribution maps and a pre-trained model for building identification) with post-disaster images to identify damaged buildings quickly and accurately is a solution being developed in some studies
[25]. For example, Galanis et al.
[26] introduced the DamageMap model for wildfire disasters, which leverages pre-disaster building segmentation results and post-disaster aerial or satellite imagery in a classification task that determines whether each building is damaged. At present, few studies make full use of pre-disaster building distribution maps. Even though the building distribution data may not strictly correspond to each building in the post-disaster images, it can still provide much effective information about building location and shape. Devising methods that better exploit pre-disaster information is therefore a promising direction for future disaster response tasks.
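The footprint-guided idea can be illustrated with a minimal sketch: given per-pixel damage scores from a post-disaster image and a rasterized pre-disaster building footprint map, each building is classified by averaging the scores inside its footprint. The function name, inputs, and threshold are illustrative assumptions, not DamageMap's actual pipeline:

```python
import numpy as np

def classify_buildings(damage_score, building_ids, threshold=0.5):
    """For each building footprint (a labelled region in `building_ids`,
    with 0 = background), average the per-pixel damage scores from the
    post-disaster image and threshold them into damaged / intact."""
    labels = {}
    for b in np.unique(building_ids):
        if b == 0:
            continue  # the footprint map lets us skip background entirely
        mean_score = damage_score[building_ids == b].mean()
        labels[int(b)] = bool(mean_score >= threshold)
    return labels

# Toy 4x4 scene: building 1 mostly damaged, building 2 intact
ids = np.array([[1, 1, 0, 2],
                [1, 1, 0, 2],
                [0, 0, 0, 2],
                [0, 0, 0, 2]])
score = np.array([[0.9, 0.8, 0.1, 0.1],
                  [0.7, 0.9, 0.0, 0.2],
                  [0.1, 0.0, 0.0, 0.1],
                  [0.0, 0.0, 0.0, 0.2]])
print(classify_buildings(score, ids))  # → {1: True, 2: False}
```

Aggregating over footprints turns a noisy per-pixel segmentation problem into a simpler per-building classification, which is one reason the pre-disaster map is so valuable even when it is slightly outdated.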
3. Transfer Learning Methods
When a pre-trained model is directly applied to a target domain whose features differ significantly from the training data, accuracy can drop considerably. Transfer learning is used to address this practical problem. Current transfer learning methods can be grouped into three categories: (1) Data-based transfer learning
[27] usually uses some samples of the target domain to enhance the model’s performance in target applications. An example is the self-training method
[28], which improves the model’s generalization ability by automatically generating pseudo-labels. (2) Feature-based transfer learning
[29][30] transforms the data of two domains into the same feature space, reducing the distance between the features of the source domain and the target domain, such as domain adversarial networks
[31]. (3) Model-based transfer learning
[32] usually adds new layers or integrates new base learners to optimize the original model, such as incremental learning
[16].
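A minimal sketch of the self-training idea in category (1): the current model scores unlabeled target-domain samples, and only high-confidence predictions are kept as pseudo-labels for retraining. The function and the stand-in model below are hypothetical illustrations, not a specific published method:

```python
import numpy as np

def self_training_round(model_predict, unlabeled_x, conf_thresh=0.9):
    """One round of self-training: run the current model on unlabeled
    target-domain samples and keep only high-confidence predictions
    as pseudo-labels for the next training set."""
    probs = model_predict(unlabeled_x)            # P(class = 1) per sample
    confident = (probs >= conf_thresh) | (probs <= 1 - conf_thresh)
    pseudo_labels = (probs >= 0.5).astype(int)
    # Low-confidence samples are discarded rather than mislabeled.
    return unlabeled_x[confident], pseudo_labels[confident]

# Toy example: 5 target-domain samples with stand-in model scores
x = np.arange(5)
predict = lambda _: np.array([0.95, 0.55, 0.05, 0.6, 0.92])
kept_x, kept_y = self_training_round(predict, x)
print(kept_x, kept_y)  # → [0 2 4] [1 0 1]
```

The confidence threshold trades pseudo-label quantity against quality; in the post-disaster setting, building distribution maps can serve as an additional filter on which pseudo-labels to trust.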
Transfer learning in building damage extraction tasks aims to improve the performance of models in post-disaster scenes. However, these methods encounter challenges in practical applications, such as the scarcity of post-disaster samples, variations in image styles, and the unique features of buildings themselves. Hu et al.
[14] conducted a comparison of three transfer learning methods for post-disaster building recognition and discovered that utilizing samples from disaster areas can significantly boost the recognition accuracy for various types of disasters. On the other hand, Lin et al.
[33] proposed a novel method to filter historical data relevant to the target task from earthquake cases, aiming to improve the reliability of classification results.
In addition to transfer learning, data augmentation is often used to improve the generalization performance of the models
[16][24], including applying various transformations to existing images, such as rotations, flips, or zooms, so that the model becomes more robust. Data synthesis is another valuable strategy that can address data scarcity by combining real data with computer simulations or generative models
[34]. In fact, these methods can be combined with transfer learning to provide more precise and timely disaster information in emergency missions. Ge et al.
[6] employed a generative network to transfer the style of remote sensing images under an incremental learning framework and used a data augmentation strategy to train the models, which improved the accuracy of building damage recognition.
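The geometric augmentations mentioned above (rotations and flips) can be sketched in a few lines of numpy; this is a generic illustration, not the specific pipeline used in the cited work:

```python
import numpy as np

def augment(image):
    """Generate simple geometric variants of an image patch so the
    model sees buildings in multiple orientations."""
    variants = [image]
    for k in (1, 2, 3):                  # 90/180/270-degree rotations
        variants.append(np.rot90(image, k))
    variants.append(np.fliplr(image))    # horizontal flip
    variants.append(np.flipud(image))    # vertical flip
    return variants

patch = np.arange(9).reshape(3, 3)
print(len(augment(patch)))  # → 6 (original + 5 transforms)
```

Such label-preserving transforms cost nothing to compute and are especially useful when post-disaster samples are scarce.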