Classified Urban Land Cover Classes

The urban land cover consists of very complex physical materials and surfaces under constant anthropogenic impact. Urban surface types are a mosaic of seminatural surfaces, such as grass, trees, bare soil, and water bodies, and human-made materials of diverse age and composition, such as asphalt, concrete, and roof tiles (relevant to energy conservation and fire danger), and impervious surfaces in general (relevant to urban flooding and pollution studies). The complexity of urban analysis also depends on the scale chosen and its purpose.

Keywords: remote sensing; urban environment; data fusion; sensor fusion; urban mapping; land cover classification

1. Introduction

Over the last few decades, global urbanization has grown rapidly. By 2050, around 68% of the world’s population will be living in urban areas [1]. This growth can cause environmental challenges, including ecological problems, poor air quality, deterioration of public health, microclimate changes leading to severe weather, higher temperatures, limited access to water, persistent vulnerability to natural hazards, and the release of toxic particles from rapid industrialization into the atmosphere [2][3]. Addressing these challenges calls for advanced urban analyses, which are complicated by the spectral and structural diversity and complexity of urban surfaces over small areas [4][5]. Continuous monitoring of urban areas is therefore essential. Systematic monitoring and map updating are critical in urban areas, where many objects are mobile (vehicles and temporary buildings) and the infrastructure, vegetation, and construction are constantly changing.
Advances in remote sensing technology today enable spatiotemporal investigations of urban regions [6]. Airborne remote sensing in particular is a powerful and rapidly developing tool for urban analysis, offering time-efficient mapping of a city that is essential for diverse planning [7], management activities [8], and monitoring of urban and suburban land uses [9]. It has become a common technique for mapping urban land cover changes to investigate, e.g., social preferences, regional ecosystems, urbanization change, and biodiversity [10]. Urban remote sensing is widely used to investigate three-dimensional urban geometry, which is crucial for modeling urban morphology [11] and for identifying various objects, heterogeneous materials, and mixtures. However, the growing challenges require state-of-the-art technological solutions in terms of both sensors and analysis methods. The continuous development and improvement of remote sensing sensors increase interest in identifying urban land cover types based on spectral, spatial, and structural properties [12][13]. In urban mapping, lidar (light detection and ranging), hyperspectral (HS) data, and synthetic aperture radar (SAR) have become significant. Different portions of the electromagnetic spectrum, from the reflective spectral range to microwave radar, are useful for analyzing urban environments [14]. SAR provides high-resolution images independent of the time of day and weather; however, because it requires oblique illumination of the scene, occlusion and layover appear, making the analysis of dynamic urban areas difficult [15].
Urban land cover classification accuracy and interpretability based on a single sensor alone are often insufficient in complex, dense urban areas [16]. Heterogeneity in urban areas leads to high spectral variation within one land cover type, resulting in very complex analyses. Impervious surfaces (roofs, parking lots, roads, and pavements) vary notably in both spectral and spatial-structural terms. In addition, scale and spatial resolution are relevant for estimating urban heterogeneity. Scale determines which materials are considered individually, omitted, or grouped into one class, e.g., individual trees or tree species versus forest, or vegetation in general [17]. Spatial resolution, on the other hand, determines the level of pixel mixing. Higher spatial resolution, however, exposes more physical material heterogeneity, increasing the complexity of the analysis.
HS data provide spectral information about materials, differentiating them without elevation context. The challenge in purely spectral analysis is that it neglects object identification: objects are mostly built from various materials and therefore exhibit very high intra-object heterogeneity. By contrast, lidar data can distinguish between land cover classes made of the same material at different heights, such as asphaltic open parking lots and roads [18][19]. Furthermore, passive remote sensors such as HS are sensitive to atmospheric conditions and illumination, whereas lidar, as an active sensor, is far less sensitive to these factors. This property of lidar enables, e.g., physical correction of shadow and illumination effects when combined with HS data [20][21][22][23][24][25] and intensity measurements for urban land cover mapping in shaded areas [26]. Regardless of the spatial and spectral resolution of airborne HS sensors, urban environments are characterized by spectral ambiguity and reduced spectral response in shadows cast by topography, buildings, and trees, which can be overcome by adding lidar data, as presented in [27].
Moreover, a fusion of spectral, spatial, and elevation features provides robust and unique information relevant to the urban environment [28]. Airborne HL-Fusion has already been investigated for urban land cover classification [28][29][30]. However, diverse combination methods are implemented on different data and product levels, based on either physical or empirical approaches [31]. Furthermore, since all fusion processes are very complex, there is no defined framework for fusing these sensors. A comprehensive summary of previous research on data fusion may therefore enhance the understanding of fusion possibilities, challenges, and common issues that limit classification results in the urban environment.

2. Classified Urban Land Cover Classes

The urban land cover consists of very complex physical materials and surfaces under constant anthropogenic impact. Urban surface types are a mosaic of seminatural surfaces, such as grass, trees, bare soil, and water bodies, and human-made materials of diverse age and composition, such as asphalt, concrete, and roof tiles (relevant to energy conservation and fire danger [32]), and impervious surfaces in general (relevant to urban flooding and pollution studies [33]). The complexity of urban analysis also depends on the scale chosen and its purpose. Many classifications target urban materials at fine spatial resolution, which deepens the heterogeneity but allows more detailed mapping results. The classification of urban objects, although significant (e.g., for city map updates), becomes a challenge: objects consist of many different materials with considerable within-class variance, and their surfaces have a highly nonlinear and heterogeneous composition. More training data are therefore needed for classification, which is time-consuming and computationally expensive.

2.1. Buildings

Buildings in an urban context can be recognized as shapes with planar surfaces and straight lines [34]. Building detection based on remote sensing methods plays a crucial role in many urban applications, such as 3D monitoring of urban development over time [35], urban planning, telecommunication network planning, vehicle navigation [30], urban energy planning [35], city management, and damage assessment [36]. Many mapping techniques are based on shape identification, outlines, and preliminary model data [36]. Besides detecting buildings as objects, building roof extraction has recently been a hot topic within the remote sensing community. Building roofs are defined by planarity properties and height derivatives based on elevation. A 3D visualization of buildings is of great importance for infrastructure management and modeling, 3D city mapping, simulations, change detection, and more [37]. Both airborne optical and lidar data have recently been used to map buildings. A common way to detect buildings is to use a digital surface model (DSM) [38][39], a normalized DSM (nDSM) [40][41], or a point cloud extracted from lidar data [42][43][44][45]. Lidar is capable of extracting building heights and planar roof faces [30]. It is beneficial for the spatiotemporal assessment and investigation of building density in sustainability studies and residential development in cities [35].
By contrast, airborne HS data can better distinguish between materials at roof surfaces due to their spectral differences [30]. However, without the elevation information from a lidar scanner, the classification of buildings and their roofs can be too complex to resolve without human expertise.

2.2. Vegetation

Vegetation is recognized by its geometrical complexity, defined by parameters such as roughness and the point density ratio measure [46], and by its chlorophyll spectral feature. In the last decade, active (Sentinel-1, lidar, and radar) and passive (Quickbird, Worldview, Sentinel-2, Landsat, and MODIS) remote sensing has been widely applied to vegetation detection. Lidar data are used to generate virtual 3D tree models [47], map low and high vegetation [48], and, using multispectral lidar, assess vegetation variety regarding health and density [49], as well as to extract vegetation indices such as the NDVI [50] for monitoring changes caused by urbanization, anthropogenic activities, and harvesting, e.g., using the wavelet transform [51][52]. However, vegetation detection is not straightforward. The analysis is often complex and detailed due to the increasingly fine spatial resolution of remote sensing devices, for example when distinguishing photosynthetic from nonphotosynthetic vegetation [53]. Vegetation is often defined not as a whole but in groups, for example, as low vegetation (grass), middle vegetation (shrubs), and high vegetation (trees). One of the more complex challenges is the similar morphology of low/young trees and shrubs, causing misclassification of shrubs as high trees [54]. HS data are also used to detect vegetation on a spectral basis (chlorophyll reflectance), differentiating between vegetation types and states of health. Because HS sensors offer many more spectral bands than multispectral lidar (usually 2–3 wavelengths), more biophysical parameters can be derived, such as the leaf area index, fractional cover, and foliage biochemistry [55]. Both sensors have been fused in many studies, e.g., for canopy characterization for biomass assessment and estimation of natural hazard risk [56] and for urban tree species mapping [57].
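As a minimal sketch of the index extraction mentioned above, the NDVI can be computed per pixel from two bands of an HS cube with NumPy. The band indices and the 0.4 vegetation threshold below are illustrative assumptions that depend on the sensor's band-to-wavelength mapping, not values taken from the cited studies.

```python
import numpy as np

def ndvi(cube, red_idx, nir_idx, eps=1e-10):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) from a cube of shape (rows, cols, bands)."""
    red = cube[:, :, red_idx].astype(np.float64)
    nir = cube[:, :, nir_idx].astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Hypothetical cube and band positions (e.g., ~670 nm red, ~800 nm NIR).
cube = np.random.rand(100, 100, 186)       # stand-in for calibrated reflectance
vegetation_mask = ndvi(cube, red_idx=83, nir_idx=120) > 0.4
```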

2.3. Roads

Road detection from airborne HS and lidar data is essential in remote sensing applications, e.g., road navigation systems, urban planning and management, and the updating of geographic information [58][59]. The elevation feature derived from lidar data has proven to be a significant parameter for time-efficient road extraction compared with optical methods [60]. A DSM delineates surface boundaries more precisely, even in occluded regions [61]. However, classification based on lidar data alone is limited when roads are at the same elevation but made of different materials, such as asphalt, concrete, or other impervious materials [18]. HS imaging, by contrast, can differentiate between materials and their conditions, complementing road classification. Herold et al. [62] demonstrated this for mapping alteration, degradation, and structural damage of road surfaces based on spectral analysis. Texture information is also commonly used to detect roads [63]. In addition, lane marks can be used as an indicator of new roads; however, this approach is illumination sensitive [64]. HS data classification without topographic information is challenging when differentiating between objects made from the same material, such as a parking lot, ground-level parking, a cycleway, and a road [28].

2.4. Miscellaneous

Apart from the land cover classes described above, the urban environment consists of more complex thematic classes. These commonly cannot be described chemically or physically by a single hyperspectral absorption feature or by another single feature such as height or shape; instead, they must be extracted from contextual information. Spatial context is thus critical and necessary for identifying industrial areas, commercial or residential buildings, playgrounds, and harbors in coastal cities. The combination of spectral and spatial features from HS and lidar data shows potential here, allowing a thematic class to be identified and its condition assessed in terms of quality and materials.

3. Key Characteristics of Hyperspectral and Lidar Data

3.1. Hyperspectral (HS) Images

HS data retrieved from an imaging spectrometer form a three-dimensional cube that combines two-dimensional spatial information (x, y) with spectral information at each pixel position (x_i, y_j) [65]. Each pixel in the obtained digital data contains a nearly continuous spectrum covering the reflective spectral range of the visible and near-infrared (VNIR: 400–1000 nm) and short-wave infrared (SWIR: 1000–2500 nm) [66][67]. HS imaging, as a passive technique, depends on the given lighting conditions, resulting in high intraclass (within-class) spectral variability. In these wavelength ranges of the electromagnetic spectrum, particular absorption features and shapes make it possible to identify a material’s chemical and physical properties [68].
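To make the cube structure concrete, the NumPy sketch below shows how such a cube is typically indexed. The dimensions and the even wavelength grid are illustrative assumptions, not properties of any specific sensor.

```python
import numpy as np

# Hypothetical cube: 500 x 500 pixels, 400 bands spanning 400-2500 nm.
rows, cols, bands = 500, 500, 400
wavelengths = np.linspace(400.0, 2500.0, bands)    # nm; evenly spaced for illustration
cube = np.random.rand(rows, cols, bands)           # stand-in for calibrated reflectance

spectrum = cube[120, 200, :]                       # near-continuous spectrum at one pixel (x_i, y_j)
band_1550 = cube[:, :, np.argmin(np.abs(wavelengths - 1550.0))]  # band image nearest 1550 nm
vnir = cube[:, :, wavelengths <= 1000.0]           # VNIR subset (400-1000 nm)
```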

3.1.1. Spectral Features

Within one material, spectral features can vary due to color, coating, degradation, alteration, roughness, illumination, data acquisition conditions, the location of the material, and data preprocessing (Figure 1) [69][70][71]. These within-material variations are increasingly investigated by generating spectral libraries of complex urban materials [12][72][73] and by normalization based on advanced preprocessing. HS images yield high-dimensional data, leading to computationally expensive analyses.
Figure 1. At-surface reflectance of some urban surfaces (HySpex sensors VNIR-1800 and SWIR-384). The hyperspectral dataset was acquired by the Terratec AS Company in August 2019 over Baerum municipality, Oslo, Norway.

3.1.2. Spatial Information

Spatial-context information is widely used to achieve robust and accurate classification maps by considering the neighborhood of the target pixel. While spectral features are the most relevant features in material-based classification, adding spatial features to object classification makes it easier to group pixels with some spectral variance into one class representing an object or land cover type [74]. In addition, the spatial noise of the classification results can be reduced [75][76].
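One simple way spatial context reduces such noise is a window-based majority (mode) filter applied to a pixel-wise classification map. The SciPy sketch below is a generic illustration of that idea, not the specific spatial-spectral methods of the cited works.

```python
import numpy as np
from scipy import ndimage

def majority_filter(labels, size=3):
    """Assign each pixel the most frequent class label in its size x size neighborhood."""
    def mode(window):
        values, counts = np.unique(window.astype(int), return_counts=True)
        return values[np.argmax(counts)]
    return ndimage.generic_filter(labels, mode, size=size)

noisy_map = np.random.randint(0, 4, (50, 50))    # a noisy 4-class pixel-wise map
smoothed_map = majority_filter(noisy_map, size=3)
```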

3.2. Lidar Data

Lidar data are a three-dimensional point cloud (x, y, z) that by default delivers information about elevation, multiple returns, reflected intensity, texture, and waveform-derived feature spaces for the objects hit by the laser pulse [77][78]. As an active sensor, a lidar system emits radiation at one wavelength (or several, in the case of multiwavelength lidar scanners) toward the object surface at high repetition rates. Lidar scanners are whiskbroom-type instruments and typically use a monochromatic laser in the visible (532 nm, for bathymetric/coastal mapping) or near-infrared (1064 and 1550 nm, used, for example, for vegetation detection and for differentiating asphaltic from nonasphaltic roads [79]); the recorded return can serve as an additional intensity feature for land cover mapping in the reflective spectral range [77]. The advantage of airborne lidar is its insensitivity to relief displacement and illumination conditions [77], while retaining the full 3D geometry of the data.
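For concreteness, the sketch below reads these per-point attributes from a LAS file with the laspy library (assuming laspy 2.x; the file name is a placeholder).

```python
import numpy as np
import laspy  # assumes laspy 2.x

las = laspy.read("urban_tile.las")               # placeholder path
xyz = np.vstack([las.x, las.y, las.z]).T         # 3D coordinates, shape (n_points, 3)
intensity = np.asarray(las.intensity)            # reflected peak amplitude per point
return_number = np.asarray(las.return_number)    # index of this return within its pulse
num_returns = np.asarray(las.number_of_returns)  # total returns recorded for the pulse
```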

3.2.1. Height Features and Their Derivatives (HD)

The height feature is used to calculate the three-dimensional coordinates (x, y, z) that generate a gridded 2.5-dimensional topographical profile of the area of interest [77]. Especially in the urban environment, the z value is crucial for precise contour generation of elevated objects [77]. In addition, the height difference between a lidar return and the lowest point in a cylindrical volume has been investigated and proven an important feature for discriminating ground from nonground points [80][81]. Moreover, a digital surface model (DSM) (Figure 2A) is extracted from the height information by interpolating the 3D points onto a 2D grid. From a DSM, a surface roughness layer [82] and a normalized DSM (nDSM) (Figure 2C) are calculated, the latter by subtracting the digital terrain model (DTM) (Figure 2B) from the DSM [77]. The terrain height is thus removed from the building height information, reducing the heterogeneity of the object representation and aiding the classification procedure.
Figure 2. Examples of DSM (A), DTM (B), and nDSM (C) from Riegl VG-1560i LiDAR scanner acquired by the Terratec AS Company in August 2019 over Baerum municipality, Oslo, Norway.
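A minimal sketch of this chain, reusing the xyz array from the laspy example above: the DSM keeps the highest return per grid cell, a crude DTM keeps the lowest (a stand-in for proper ground filtering and interpolation), and the nDSM is their difference.

```python
import numpy as np

def rasterize(xyz, cell=1.0, reducer=max):
    """Grid a point cloud onto a 2D raster, keeping one height per cell via `reducer`."""
    x, y, z = xyz.T
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):  # simple loop for clarity, not speed
        grid[r, c] = h if np.isnan(grid[r, c]) else reducer(grid[r, c], h)
    return grid

dsm = rasterize(xyz, cell=1.0, reducer=max)   # highest returns: surface model
dtm = rasterize(xyz, cell=1.0, reducer=min)   # lowest returns: crude terrain model
ndsm = dsm - dtm                              # object heights above the terrain
```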

3.2.2. Intensity Data

Intensity values extracted from lidar data correspond to the peak amplitudes of the returns from the illuminated object [77]. Applying intensity as a feature space, Song et al. [83] presented an approach for determining asphalt roads, grass, house roofs, and trees. However, the diverse intensity values of trees undermine the classification due to the complex geometry of canopies [84].

3.2.3. Multiple-Return

A lidar laser pulse can split into multiple returns if it hits a permeable object such as a tree canopy, obtaining responses from, e.g., branches, leaves, stems, and the ground [77]. Multiple-return data have recently been used as an additional feature space in urban mapping for determining commercial buildings, small houses, and trees [85].
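Continuing the laspy sketch above, per-point return counts can be turned into a simple permeability cue. This is an illustrative heuristic, not the method of [85].

```python
# Pulses with several returns typically penetrated a permeable object such as a canopy;
# single-return pulses tend to come from solid surfaces (roofs, roads).
multi = num_returns > 1
canopy_top_candidates = (return_number == 1) & multi
solid_surface_candidates = num_returns == 1
canopy_fraction = multi.mean()  # scene-level share of permeable hits
```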

3.2.4. Waveform-Derived Features

Full-waveform lidar scanners retrieve the entire signal of the backscattered laser pulse as a 1D profile in chronological sequence [78][86][87]. A full-waveform lidar system can correct intensity values better than discrete-return systems, e.g., through accurate estimation of the surface slope [88], eliminating the assumption of Lambertian reflectors [89]. However, before any classification approach is applied, proper radiometric calibration is needed to adjust waveform data from different flight campaigns.

3.2.5. Eigenvalue-Based Features

The eigenvalues λ1, λ2, and λ3 are calculated from the covariance matrix of the x, y, and z dimensions of the 3D point cloud. Eigenvalue-based features help detect geometrical primitives such as planes, edges, and corners [90]. The following structure features have been applied to lidar data: omnivariance, anisotropy, planarity, sphericity, linearity, and eigenentropy, for context-driven target detection [91] and building detection [90]. Some of them are shown in Figure 3. Planarity has proven relevant for classifying roads and other flat surfaces, and sphericity for detecting buildings and natural ground (low vegetation) [80].
Figure 3. Structure features derived from lidar data: omnivariance (A) and linearity (B) from [90].
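Below is a sketch of computing several of these structure features per point from k-nearest neighborhoods, using the common eigenvalue definitions from the literature (the article itself does not give formulas, and k is an assumed parameter).

```python
import numpy as np
from scipy.spatial import cKDTree

def structure_features(xyz, k=20):
    """Eigenvalue-based features from the 3x3 covariance of each point's k neighbors."""
    _, idx = cKDTree(xyz).query(xyz, k=k)
    feats = np.zeros((len(xyz), 4))
    for i, nbrs in enumerate(idx):
        cov = np.cov(xyz[nbrs].T)                        # covariance of x, y, z
        l1, l2, l3 = np.maximum(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-12)
        feats[i] = ((l1 - l2) / l1,                      # linearity
                    (l2 - l3) / l1,                      # planarity
                    l3 / l1,                             # sphericity
                    (l1 * l2 * l3) ** (1.0 / 3.0))       # omnivariance
    return feats

# High planarity flags roads and roofs; high sphericity flags low vegetation [80].
features = structure_features(xyz, k=20)
```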

3.3. Common Features—HS and Lidar

3.3.1. Textural Features

Besides the spectral information of hyperspectral sensors, pixel-wise spatial features such as textural features are relevant to image content. The textural attributes of a hyperspectral scene can be extracted with the local binary patterns (LBP) operator proposed by [92], providing information about surface granularity [93]. To include spatial information in classification, the textural operators are window based. Peng et al. [94] extracted them as rotation-invariant features for urban classification, in addition to spectral features and Gabor features [95]. The latter are frequency-domain filters that interpret the texture of the hyperspectral bands, as used by [96][97]. Texture can also be analyzed by applying gray-level co-occurrence matrix (GLCM) measures [35][98].
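The sketch below extracts both feature families from a single band with scikit-image (assuming version >= 0.19, where the GLCM functions are spelled graycomatrix/graycoprops). Parameters such as the LBP radius and GLCM distances are illustrative choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

band = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for one HS band

# Rotation-invariant uniform LBP: surface granularity per pixel [92][93].
lbp = local_binary_pattern(band, P=8, R=1, method="uniform")

# GLCM measures [98]: co-occurrences at distance 1 over four directions.
glcm = graycomatrix(band, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()
```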

3.3.2. Morphological Features

Mathematical morphology comprises operators such as erosion, dilation, opening, closing, rank filters, top hat, and other derived transforms. These operators are mainly applied to panchromatic images from hyperspectral sensors, or to binary or greyscale images, with isotropic and geodesic metrics and a structural element [99]. For example, the opening operator acts on bright spots, removing objects smaller than the structural element, whereas the closing operator acts on dark objects (Figure 4).
Figure 4. Opening and closing operations on lidar dataset with different kernel sizes (3 and 5) of the structural element.
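A minimal SciPy sketch of greyscale opening and closing on an elevation raster (here the ndsm from the earlier sketch, with empty cells filled), mirroring the structural-element sizes 3 and 5 of Figure 4.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

ndsm_filled = np.nan_to_num(ndsm, nan=0.0)        # fill empty grid cells before filtering

opened3 = grey_opening(ndsm_filled, size=(3, 3))  # removes bright objects smaller than 3x3
closed3 = grey_closing(ndsm_filled, size=(3, 3))  # fills dark gaps smaller than 3x3
opened5 = grey_opening(ndsm_filled, size=(5, 5))
closed5 = grey_closing(ndsm_filled, size=(5, 5))
```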

3.4. Hyperspectral-Lidar Data Fusion

HL-Fusion combines the spectral-contextual information obtained by an HS sensor with the spectral-spatial-geometrical information of a lidar scanner. Although the active and passive sensors are governed by different physics, features from both sensors can be combined. Both sensors cover the reflective spectral range, intersecting either in the VIS (532 nm) or the SWIR (1064, 1550 nm) wavelength regions. More rarely, multispectral lidar systems are used, which overlap in several of the three common wavelengths, allowing the identification of materials or objects using spectral properties [100]. Under laboratory conditions, prototypical hyperspectral lidar systems are being developed [50][101][102].
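As a minimal illustration of feature-level HL-Fusion, the sketch below stacks per-pixel HS spectra with lidar-derived layers and trains a random forest, one common empirical approach [80][81]. All arrays, the class count, and the classifier choice are assumptions for illustration; as noted above, fusion can also happen at other data and product levels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels = 10_000

hs_features = rng.random((n_pixels, 186))         # stand-in for per-pixel HS spectra
lidar_features = rng.random((n_pixels, 4))        # e.g., nDSM height, intensity, planarity, sphericity
fused = np.hstack([hs_features, lidar_features])  # feature-level fusion by simple stacking

labels = rng.integers(0, 5, n_pixels)             # 5 hypothetical land cover classes
train = rng.random(n_pixels) < 0.1                # 10% labeled training pixels

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
clf.fit(fused[train], labels[train])
land_cover = clf.predict(fused)                   # per-pixel class predictions
```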

References

  1. United Nations. 2018 Year in Review; United Nations: New York, NY, USA, 2018.
  2. Chen, F.; Kusaka, H.; Bornstein, R.; Ching, J.; Grimmond, C.S.B.; Grossman-Clarke, S.; Loridan, T.; Manning, K.W.; Martilli, A.; Miao, S. The integrated WRF/urban modelling system: Development, evaluation, and applications to urban environmental problems. Int. J. Climatol. 2011, 31, 273–288.
  3. Lee, J.H.; Woong, K.B. Characterization of urban stormwater runoff. Water Res. 2000, 34, 1773–1780.
  4. Forster, B.C. Coefficient of variation as a measure of urban spatial attributes, using SPOT HRV and Landsat TM data. Int. J. Remote Sens. 1993, 14, 2403–2409.
  5. Sadler, G.J.; Barnsley, M.J.; Barr, S.L. Information extraction from remotely-sensed images for urban land analysis. In Proceedings of the 2nd European GIS Conference (EGIS’91), Brussels, Belgium, 2–5 April 1991; pp. 955–964.
  6. Carlson, T. Applications of remote sensing to urban problems. Remote Sens. Environ. 2003, 86, 273–274.
  7. Coutts, A.M.; Harris, R.J.; Phan, T.; Livesley, S.J.; Williams, N.S.G.; Tapper, N.J. Thermal infrared remote sensing of urban heat: Hotspots, vegetation, and an assessment of techniques for use in urban planning. Remote Sens. Environ. 2016, 186, 637–651.
  8. Huo, L.Z.; Silva, C.A.; Klauberg, C.; Mohan, M.; Zhao, L.J.; Tang, P.; Hudak, A.T. Supervised spatial classification of multispectral LiDAR data in urban areas. PLoS ONE 2018, 13.
  9. Jürgens, C. Urban and suburban growth assessment with remote sensing. In Proceedings of the OICC 7th International Seminar on GIS Applications in Planning and Sustainable Development, Cairo, Egypt, 13–15 February 2001; pp. 13–15.
  10. Hepinstall, J.A.; Alberti, M.; Marzluff, J.M. Predicting land cover change and avian community responses in rapidly urbanizing environments. Landsc. Ecol. 2008, 23, 1257–1276.
  11. Batty, M.; Longley, P. Fractal Cities: A Geometry of Form and Function; Academic Press: London, UK; San Diego, CA, USA, 1994.
  12. Ben-Dor, E.; Levin, N.; Saaroni, H. A spectral based recognition of the urban environment using the visible and near-infrared spectral region (0.4-1.1 µm). A case study over Tel-Aviv, Israel. Int. J. Remote Sens. 2001, 22, 2193–2218.
  13. Herold, M.; Gardner, M.E.; Roberts, D.A. Spectral resolution requirements for mapping urban areas. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1907–1919.
  14. Brenner, A.R.; Roessing, L. Radar Imaging of Urban Areas by Means of Very High-Resolution SAR and Interferometric SAR. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2971–2982.
  15. Soergel, U. Review of Radar Remote Sensing on Urban Areas. In Radar Remote Sensing of Urban Areas; Soergel, U., Ed.; Springer: Berlin, Germany, 2010; pp. 1–47.
  16. Ghamisi, P.; Höfle, B.; Zhu, X.X. Hyperspectral and LiDAR data fusion using extinction profiles and deep convolutional neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10.
  17. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
  18. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; Kasteren, T.v.; Liao, W.; Bellens, R.; Pizurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 550.
  19. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of hyperspectral and LiDAR remote sensing data for classification of complex forest areas. IEEE Trans. Geosci. Remote Sens. 2008. Available online: https://rslab.disi.unitn.it/papers/R59-TGARS-Dalponte.pdf (accessed on 2 May 2021).
  20. Sohn, H.-G.; Yun, K.-H.; Kim, G.-H.; Park, H.S. Correction of building height effect using LIDAR and GPS. In Proceedings of the International Conference on High Performance Computing and Communications, Sorrento, Italy, 21–23 September 2005; pp. 1087–1095.
  21. Guislain, M.; Digne, J.; Chaine, R.; Kudelski, D.; Lefebvre-Albaret, P. Detecting and correcting shadows in urban point clouds and image collections. In Proceedings of the Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 537–545.
  22. George, G.E. Cloud Shadow Detection and Removal from Aerial Photo Mosaics Using Light Detection and Ranging (LIDAR) Reflectance Images; The University of Southern Mississippi: Hattiesburg, MS, USA, 2011.
  23. Brell, M.; Segl, K.; Guanter, L.; Bookhagen, B. Hyperspectral and Lidar Intensity Data Fusion: A Framework for the Rigorous Correction of Illumination, Anisotropic Effects, and Cross Calibration. IEEE Trans. Geosci. Remote Sens. 2017. Available online: https://www.researchgate.net/publication/313687025_Hyperspectral_and_Lidar_Intensity_Data_Fusion_A_Framework_for_the_Rigorous_Correction_of_Illumination_Anisotropic_Effects_and_Cross_Calibration (accessed on 2 May 2021).
  24. Hui, L.; Di, L.; Xianfeng, H.; Deren, L. Laser intensity used in classification of LiDAR point cloud data. In Proceedings of the International Symposium on Geoscience and Remote Sensing, Boston, MA, USA, 8–11 July 2008.
  25. Liu, W.; Yamazaki, F. Object-based shadow extraction and correction of high-resolution optical satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1296–1302.
  26. Zhou, W.; Huang, G.; Troy, A.; Cadenasso, M.L. Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: A comparison study. Remote Sens. Environ. 2009, 113, 1769–1777.
  27. Priem, F.; Canters, F. Synergistic use of LiDAR and APEX hyperspectral data for high-resolution urban land cover mapping. Remote Sens. 2016, 8, 787.
  28. Li, H.; Ghamisi, P.; Soergel, U.; Zhu, X.X. Hyperspectral and LiDAR fusion using deep three-stream convolutional neural networks. Remote Sens. 2018, 10, 1649.
  29. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83.
  30. Kokkas, N.; Dowman, I. Fusion of airborne optical and LiDAR data for automated building reconstruction. In Proceedings of the ASPRS Annual Conference, Reno, Nevada, 1–5 May 2006.
  31. Torabzadeh, H.; Morsdorf, F.; Schaepman, M.E. Fusion of imaging spectroscopy and airborne laser scanning data for characterization of forest ecosystems. ISPRS J. Photogramm. Remote Sens. 2014, 97, 25–35.
  32. Medina, M.A. Effects of shingle absorptivity, radiant barrier emissivity, attic ventilation flowrate, and roof slope on the performance of radiant barriers. Int. J. Energy Res. 2000, 24, 665–678.
  33. Ridd, M.K. Exploring a V-I-S-(vegetation—impervious surface-soil) model for urban ecosystem analysis through remote sensing: Comparative anatomy for cities. Int. J. Remote Sens. 1995, 16, 2165–2185.
  34. Haala, N.; Brenner, C. Extraction of buildings and trees in urban environments. ISPRS J. Photogramm. Remote Sens. 1999, 54, 130–137.
  35. Shirowzhan, S.; Trinder, J. Building classification from LiDAR data for spatial-temporal assessment of 3D urban developments. Procedia Eng. 2017, 180, 1453–1461.
  36. Zhou, Z.; Gong, J. Automated residential building detection from airborne LiDAR data with deep neural networks. Adv. Eng. Inform. 2018, 36, 229–241.
  37. Shajahan, D.A.; Nayel, V.; Muthuganapathy, R. Roof classification from 3-D LiDAR point clouds using multiview CNN with self-attention. IEEE Geosci. Remote Sens. Lett. 2019, 99, 1–5.
  38. Matikainen, L.; Hyyppa, J.; Hyyppa, H. Automatic detection of buildings from laser scanner data for map updating. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Dresden, Germany, 8–10 October 2003.
  39. Hug, C.; Wehr, A. Detecting and identifying topographic objects in imaging laser altimetry data. In Proceedings of the International Archives of the Photogrammetry and Remote Sensing, Stuttgart, Germany, 17–19 September 1997; pp. 16–29.
  40. Maas, H.G. The potential of height texture measures for the segmentation of airborne laserscanner data. In Proceedings of the 4th International Airborne Remote Sensing Conference and Exhibition and 21st Canadian Symposium on Remote Sensing, Ottawa, ON, Canada, 21–24 June 1999; pp. 154–161.
  41. Tóvári, D.; Vögtle, T. Object classification in laserscanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.—ISPRS Arch. 2012, 36. Available online: https://www.researchgate.net/publication/228962142_Object_Classification_in_LaserScanning_Data (accessed on 8 May 2021).
  42. Galvanin, E.A.; Poz, A.P.D. Extraction of building roof contours from LiDAR data using a markov-random-field-based approach. IEEE Trans. Geosci. Remote Sens. 2012, 50, 981–987.
  43. Vosselman, G. Slope based filtering of laser altimetry data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Amsterdam, The Netherlands, 16–22 July 2000; pp. 935–942.
  44. Lohmann, P.; Koch, A.; Schaeffer, M. Approaches to the filtering of laser scanner data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Amsterdam, The Netherlands, 16–22 July 2000.
  45. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P.; Smigiel, E. New approach for automatic detection of buildings in airborne laser scanner data using first echo only. In Proceedings of the ISPRS Commission III Symposium, Photogrammetric Computer Vision, Bonn, Germany, 20–22 September 2006; pp. 25–30.
  46. Rutzinger, M.; Höfle, B.; Pfeifer, N. Detection of high urban vegetation with airborne laser scanning data. In Proceedings of the Forestsat, Montpellier, France, 5–7 November 2007; pp. 1–5.
  47. Morsdorf, F.; Nichol, C.; Matthus, T.; Woodhouse, I.H. Assessing forest structural and physiological information content of multi-spectral LiDAR waveforms by radiative transfer modelling. Remote Sens. Environ. 2009, 113, 2152–2163.
  48. Wang, C.K.; Tseng, Y.H.; Chu, H.J. Airborne dual-wavelength LiDAR data for classifying land cover. Remote Sens. 2014, 6, 700–715.
  49. Wichmann, V.; Bremer, M.; Lindenberger, J.; Rutzinger, M.; Georges, C.; Petrini-Monteferri, F. Evaluating the potential of multispectral airborne LiDAR for topographic mapping and land cover classification. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, La Grande Motte, France, 28 September–3 October 2015.
  50. Puttonen, E.; Hakala, T.; Nevalainen, O.; Kaasalainen, S.; Krooks, A.; Karjalainen, M.; Anttila, K. Artificial target detection with a hyperspectral LiDAR over 26-h measurement. Opt. Eng. 2015. Available online: https://www.spiedigitallibrary.org/journals/optical-engineering/volume-54/issue-01/013105/Artificial-target-detection-with-a-hyperspectral-LiDAR-over-26-h/10.1117/1.OE.54.1.013105.full?SSO=1 (accessed on 8 May 2021).
  51. Ghaderpour, E.; Abbes, A.B.; Rhif, M.; Pagiatakis, S.D.; Farah, I.R. Non-stationary and unequally spaced NDVI time series analyses by the LSWAVE software. Int. J. Remote Sens. 2020, 41, 2374–2390.
  52. Martinez, B.; Gilabert, M.A. Vegetation dynamics from NDVI time series analysis using the wavelet transform. Remote Sens. Environ. 2009, 113, 1823–1842.
  53. Okin, G.S. Relative spectral mixture analysis—A multitemporal index of total vegetation cover. Remote Sens. Environ. 2007, 106, 467–479.
  54. Yang, H.; Chen, W.; Qian, T.; Shen, D.; Wang, J. The Extraction of Vegetation Points from LiDAR Using 3D Fractal Dimension Analyses. Remote Sens. 2015, 7, 10815–10831.
  55. Widlowski, J.L.; Pinty, B.; Gobron, N.; Verstraete, M.M. Detection and characterization of boreal coniferous forests from remote sensing data. J. Geophys. Res. 2001, 106, 33405–33419.
  56. Koetz, B.; Sun, G.; Morsdorf, F.; Ranson, K.J.; Kneubühler, M.; Itten, K.; Allgöwer, B. Fusion of imaging spectrometer and LIDAR data over combined radiative transfer models for forest canopy characterization. Remote Sens. Environ. 2007, 106, 449–459.
  57. Dian, Y.; Pang, Y.; Dong, Y.; Li, Z. Urban tree species mapping using airborne LiDAR and hyperspectral data. J. Indian Soc. Remote Sens. 2016, 44, 595–603.
  58. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual u-net. IEEE Geosci. Remote Sens. Lett. 2018. Available online: https://arxiv.org/abs/1711.10684 (accessed on 8 May 2021).
  59. Yang, X.; Li, X.; Ye, Y.; Zhang, X.; Zhang, H.; Huang, X.; Zhang, B. Road detection via deep residual dense u-net. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019.
  60. Miliaresis, G.; Kokkas, N. Segmentation and object-based classification for the extraction of the building class from LiDAR DEMs. Comput. Geosci. 2007, 33, 1076–1087.
  61. Zhao, X.; Tao, R.; Li, W.; Li, H.C.; Du, Q.; Liao, W.; Philips, W. Joint Classification of Hyperspectral and LiDAR Data Using Hierarchical Random Walk and Deep CNN Architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370.
  62. Herold, M.; Roberts, D.; Smadi, O.; Noronha, V. Road condition mapping with hyperspectral remote sensing. In Proceedings of the Airborne Earth Science Workshop, Pasadena, CA, USA, 31 March–2 April 2004.
  63. Kong, H.; Audibert, J.Y.; Ponce, J. General Road Detection from a Single Image. IEEE Trans. Image Process. 2010. Available online: https://www.di.ens.fr/willow/pdfs/tip10b.pdf (accessed on 7 May 2021).
  64. Wu, P.C.; Chang, C.Y.; Lin, C. Lane-mark extraction for automobiles under complex conditions. Pattern Recognit. 2014, 47, 2756–2767.
  65. Clark, R.N. Spectroscopy of rocks and minerals, and principles of spectroscopy. In Manual of Remote Sensing, Remote Sensing for the Earth Sciences; Rencz, A.N., Ed.; John Wiley and Sons: New York, NY, USA, 1999; Volume 3.
  66. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52.
  67. Ben-Dor, E. Imaging spectrometry for urban applications. In Imaging Spectrometry; van der Meer, F.D., de Jong, S.M., Eds.; Kluwer Academic Publishers: Amsterdam, The Netherlands, 2001; pp. 243–281.
  68. Ortenberg, F. Hyperspectral Sensor Characteristics. In Fundamentals, Sensor Systems, Spectral Libraries, and Data Mining for Vegetation, 2nd ed.; Huete, A., Lyon, J.G., Thenkabail, P.S., Eds.; Hyperspectral remote sensing of vegetation Volume I; CRC Press: Boca Raton, FL, USA, 2011; p. 449.
  69. Heiden, U.; Segl, K.; Roessner, S.; Kaufmann, H. Determination of robust spectral features for identification of urban surface materials in hyperspectral remote sensing data. Remote Sens. Environ. 2007, 111, 537–552.
  70. Heiden, U.; Segl, K.; Roessner, S.; Kaufmann, H. Determination and verification of robust spectral features for an automated classification of sealed urban surfaces. In Proceedings of the EARSeL Workshop on Imaging Spectroscopy, Warsaw, Poland, 27–29 April 2005.
  71. Lacherade, S.; Miesch, C.; Briottet, X.; Men, H.L. Spectral variability and bidirectional reflectance behavior of urban materials at a 20 cm spatial resolution in the visible and near-infrared wavelength. A case study over Toulouse (France). Int. J. Remote Sens. 2005, 26, 3859–3866.
  72. Herold, M.; Roberts, D.A.; Gardner, M.E.; Dennison, P.E. Spectrometry for urban area remote sensing—Development and analysis of a spectral library from 350 to 2400 nm. Remote Sens. Environ. 2004, 91, 304–319.
  73. Ilehag, R.; Schenk, A.; Huang, Y.; Hinz, S. KLUM: An Urban VNIR and SWIR Spectral Library Consisting of Building Materials. Remote Sens. 2019, 11, 2149.
  74. Xue, J.; Zhao, Y.; Bu, Y.; Liao, W.; Chan, J.C.-W.; Philips, W. Spatial-Spectral Structured Sparse Low-Rank Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2021, 30, 3084–3097.
  75. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise Reduction in Hyperspectral Imagery: Overview and Application. Remote Sens. 2018, 3, 482.
  76. Gómez-Chova, L.; Alonso, L.; Guanter, L.; Camps-Valls, G.; Calpe, J.; Moreno, J. Correction of systematic spatial noise in push-broom hyperspectral sensors: Application to CHRIS/PROBA images. Appl. Opt. 2008, 47, 46–60.
  77. Yan, W.Y.; El-Ashmawy, N.; Shaker, A. Urban land cover classification using airborne LiDAR data: A review. Remote Sens. Environ. 2015.
  78. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82.
  79. Clode, S.; Rottensteiner, F.; Kootsookos, P.; Zelniker, E. Detection and vectorization of roads from LiDAR data. Photogramm. Eng. Remote Sens. 2007, 73, 517–535.
  80. Chehata, N.; Guo, L.; Mallet, C. Airborne LiDAR feature selection for urban classification using random forests. Laserscanning 2009, 38.
  81. Guo, L.; Chehata, N.; Mallet, C.; Boukir, S. Relevance of airborne LiDAR and multispectral image data for urban scene classification using random forests. ISPRS J. Photogramm. Remote Sens. 2011, 66, 56–66.
  82. Priestnall, G.; Jaafar, J.; Duncan, A. Extracting urban features from LiDAR digital surface models. Comput. Environ. Urban. Syst. 2000, 24, 65–78.
  83. Song, J.H.; Han, S.H.; Yu, K.Y.; Kim, Y.I. Assessing the possibility of land-cover classification using LiDAR intensity data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2002, 34, 259–262.
  84. Yoon, J.-S.; Lee, J.-I. Land cover characteristics of airborne LiDAR intensity data: A case study. IEEE Geosci. Remote Sens. Lett. 2008, 5, 801–805.
  85. Bartels, M.; Wei, H. Maximum likelihood classification of LiDAR data incorporating multiple co-registered band. In Proceedings of the 4th International Workshop on Pattern Recognition in Remote Sensing in conjunction with the 18th International Conference on Pattern Recognition, Hong Kong, 20–24 August 2006.
  86. Mallet, C.; Bretar, F. Full-waveform topographic LiDAR: State-of-the-art. ISPRS J. Photogramm. Remote Sens. 2009, 64, 1–16.
  87. Bretar, F.; Chauve, A.; Mallet, C.; Jutzi, B. Managing full waveform LiDAR data: A challenging task for the forthcoming years. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2008, XXXVII, 415–420.
  88. Kirchhof, M.; Jutzi, B.; Stilla, U. Iterative processing of laser scanning data by full waveform analysis. ISPRS J. Photogramm. Remote Sens. 2008, 63, 99–114.
  89. Höfle, B.; Pfeifer, N. Correction of laser scanning intensity data: Data and model-driven approaches. ISPRS J. Photogramm. Remote Sens. 2007, 62, 1415–1433.
  90. Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. In Proceedings of the ISPRS Conference Photogrammetric Image Analysis (PIA), Bonn, Germany, 20–22 September 2006; pp. 87–91.
  91. West, K.F.; Webb, B.N.; Lersch, J.R.; Pothier, S.; Triscari, J.M.; Iverson, A.E. Context-driven automated target detection in 3-D data. In Proceedings of the Automatic Target Recognition XIV, Orlando, FL, USA, 13–15 April 2004; pp. 133–143.
  92. Ojala, T.; Pietikainen, M.; Maenpaa, T.T. Multi resolution gray scale and rotation invariant texture classification with local binary pattern. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
  93. Ge, C.; Du, Q.; Sun, W.; Wang, K.; Li, J.; Li, Y. Deep Residual Network-Based Fusion Framework for Hyperspectral and LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2458–2472.
  94. Peng, B.; Li, W.; Xie, X.; Du, Q.; Liu, K. Weighted-Fusion-Based Representation Classifiers for Hyperspectral Imagery. Remote Sens. 2015, 7, 14806–14826.
  95. Manjunath, B.S.; Ma, W.Y. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 837–842.
  96. Rajadell, O.; García-Sevilla, P.; Pla, F. Textural Features for Hyperspectral Pixel Classification. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Póvoa de Varzim, Portugal, 10–12 June 2009; pp. 208–216.
  97. Aksoy, S. Spatial techniques for image classification. In Signal and Image Processing for Remote Sensing; CRC Press: Boca Raton, FL, USA, 2006; pp. 491–513.
  98. Zhang, G.; Jia, X.; Kwok, N.M. Spectral-spatial based super pixel remote sensing image classification. In Proceedings of the 4th International Congress on Image and Signal Processing, Shanghai, China, 15–17 October 2011; pp. 1680–1684.
  99. Pesaresi, M.; Benediktsson, J.A. A New Approach for the Morphological Segmentation of High-Resolution Satellite Imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 309–320.
  100. Morsy, S.S.A.; El-Rabbany, A. Multispectral LiDAR Data for Land Cover Classification of Urban Areas. Sensors 2017, 17, 958.
  101. Suomalainen, J.; Hakala, T.; Kaartinen, H.; Räikkönen, E.; Kaasalainen, S. Demonstration of a virtual active hyperspectral LiDAR in automated point cloud classification. ISPRS J. Photogramm. Remote Sens. 2011, 66, 637–641.
  102. Hakala, T.; Suomalainen, J.; Kaasalainen, S.; Chen, Y. Full waveform hyperspectral LiDAR for terrestrial laser scanning. Opt. Express 2012, 20.