Remote Sensing Data Fusion

With the growth of satellite- and airborne-based platforms, remote sensing has attracted increasing attention in recent decades. Every day, sensors acquire data with different modalities and at several resolutions. Leveraging their complementary properties is a key scientific challenge, usually called remote sensing data fusion.


Satellite- and airborne-based remote sensing plays a critical role in a vast number of applications of high societal impact, such as environmental monitoring, defense-related issues, management of natural hazards and disasters, urban planning, and so forth. With the recent exponential growth of operating sensors, the amount of available data has dramatically increased. These data come with different modalities and at several resolutions. Taking optimal advantage of their complementary properties and integrating multiple data and knowledge related to the same real-world scene into a consistent, accurate, and useful representation remain a key scientific challenge, referred to as remote sensing data fusion.

Data fusion can be performed at three different processing levels: 1) the pixel or raw level; 2) the object or feature level; 3) the decision level. Fusion at the pixel level is often called image fusion. It operates at the lowest processing level, merging digital numbers or measured physical quantities, and uses co-registered raster data acquired by different sources. The co-registration step is of crucial importance because misregistration usually causes evident artifacts. Fusion at the feature level requires the extraction of objects recognized in the several data sources; an example of a feature could be the shape or the extent of an acquired object. The fusion is then performed starting from this information. Finally, at the decision level, the value-added data (obtained by individually processing the data acquired by each sensor) are combined by applying decision rules to resolve conflicts among the extracted information. A classical example is a multi-expert system combining thematic maps (i.e., decisions) yielded by applying classification approaches to each data source.
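To make the pixel- and decision-level cases concrete, the Python sketch below fuses two co-registered rasters with a simple weighted average and combines several thematic maps with a per-pixel majority vote. The function names, the averaging weight, and the synthetic data are illustrative assumptions rather than methods prescribed here; operational fusion pipelines typically rely on far more sophisticated rules.

```python
import numpy as np

def fuse_pixel_level(raster_a, raster_b, weight_a=0.5):
    """Pixel-level (image) fusion: weighted average of two co-registered rasters."""
    if raster_a.shape != raster_b.shape:
        raise ValueError("pixel-level fusion requires co-registered rasters")
    return weight_a * raster_a + (1.0 - weight_a) * raster_b

def fuse_decision_level(class_maps):
    """Decision-level fusion: per-pixel majority vote over thematic maps.

    Ties are resolved in favor of the lowest class label.
    """
    stacked = np.stack(class_maps, axis=0)              # (n_sources, H, W)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0)        # vote count per class
                      for c in range(n_classes)], axis=0)
    return votes.argmax(axis=0)                         # winning class per pixel

# Toy usage with synthetic data (purely illustrative).
rng = np.random.default_rng(0)
optical = rng.random((64, 64))                          # stand-in optical band
sar = rng.random((64, 64))                              # stand-in SAR band
fused = fuse_pixel_level(optical, sar, weight_a=0.7)

maps = [rng.integers(0, 3, size=(64, 64)) for _ in range(3)]
consensus = fuse_decision_level(maps)
```

Note that the pixel-level function assumes co-registration has already been performed, mirroring the point above that misregistration would otherwise introduce artifacts into the fused product.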

However, in all cases, exploiting the available data requires the development of advanced signal and image processing techniques. This is the goal of this Entry Collection, which will focus on both methodological and practical aspects of remote sensing data fusion.

This Entry Collection will cover (but will not be limited to) the following topics:

• Multimodal Data Fusion

• Machine/Deep Learning for Data Fusion

• Multitemporal Data Fusion

• Multiplatform Data Fusion

• Image Enhancement and Restoration

• Pansharpening

• Superresolution

• 3D Reconstruction and Multi-View Stereo