Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage

In the cultural heritage field, point clouds, as important raw geomatics data, are not only three-dimensional (3D) spatial representations of objects; they also have the potential to gradually advance towards an intelligent data structure with scene understanding, autonomous cognition, and decision-making ability. Point cloud semantic segmentation, as a preliminary stage, can help to realize this advancement.

Keywords: point cloud semantic segmentation; classification; cultural heritage; machine learning; deep learning

1. Introduction

The use of 3D point cloud data for cultural heritage assets is becoming widespread, since 3D models support various applications such as digital documentation [1][2][3][4], interpretation of information and knowledge [5][6], and visual experience [7][8]. Point cloud data include massive 3D geometric information with colour and reflection intensity attributes. However, semi-automatic or automatic, efficient, and reliable solutions for linking point clouds with semantic information are still lacking. Three-dimensional point cloud semantic segmentation (3DPCSS) has been regarded as a technology that links a single point or a set of points in the point cloud data with semantic labels (e.g., roof, facade, wall of architectural heritage, plants, damaged area, etc.) [9]. Moreover, 3DPCSS is the key step in extracting implicit geometric features and semantic information from point cloud data for 3D scene understanding and cognition. Point clouds enriched with semantic and geometric feature information are easier for non-remote-sensing users, such as cultural heritage managers, researchers, conservators, and restorers, to apply and understand [10]. For example, the scan-to-BIM (building information modeling) [11][12] approach, from point cloud to BIM [13] or heritage building information modeling (H-BIM) [14][15], mainly depends on 3DPCSS to extract geometric primitives, semantic information, and building structure for parametric modeling. Smart point clouds [16][17] and rich point clouds [18] are new intelligent point cloud data structures that can be used for 3D scene understanding and decision-making, in which 3DPCSS acts as a core step of the technology framework. 3DPCSS can also support damaged-area investigation, thematic mapping, and surface material analysis of cultural relics [19][20][21]. In addition, a scientometric study has shown that semantic segmentation of point clouds is one of the hot topics in leveraging point cloud data in the cultural heritage field [22]. Therefore, it is evident that 3DPCSS has become an essential data processing step in cultural heritage.
Cultural heritage is represented by objects of varying nature, size, and complexity, which can be classified into tangible and intangible cultural heritage. Tangible cultural heritage consists of two categories: immovable cultural heritage (e.g., built heritage, monuments, archaeological sites, etc.) and movable cultural heritage (e.g., paintings, sculptures, furniture, coins, etc.) [23]. The tasks of point cloud semantic segmentation differ across spatial scales, from the landscape (i.e., immovable heritage and its surrounding environment) to small artifacts (i.e., a part of an immovable heritage). In macro geographical space, a cultural heritage landscape is regarded as a complex dynamic environment composed of elements with semantic information such as land, vegetation, water, buildings (modern and historical), and artifacts [24]. These elements can be reconstructed in geographic information systems (GIS) to support landscape and archaeological site annotation [25] and landscape planning [26]. With regard to immovable heritage, point cloud semantic segmentation can improve the degree of automation of parametric modeling of objects at different levels of detail [27][28]. The common goal is to locate different elements in a three-dimensional point cloud scene, annotate them, and associate them with semantics, attributes, and external knowledge databases [29]. At present, the application of point cloud semantic segmentation mainly focuses on immovable cultural heritage, especially historical buildings.
3DPCSS has attracted significant attention in computer vision, remote sensing, and robotics. As a result, similar terms have emerged to describe the same concept: in existing studies, point cloud semantic segmentation has also been called classification [30][31] or point labeling [32][33]. In addition, 3D point cloud segmentation (3DPCS) is also an essential task in point cloud data processing. 3DPCS is essentially a process of extracting specific geometric structures or features in cultural heritage scenes based on geometric constraints and statistical rules rather than explicit supervised learning or prior knowledge [9]. 3DPCS groups point clouds into subsets with one or more common characteristics, whereas 3DPCSS defines and assigns points to specific classes with semantic labels according to different criteria [34].

2. Three-Dimensional Point Cloud Data in Cultural Heritage

2.1. Point Cloud Data Acquisition Technologies

Photogrammetry and 3D laser scanning have become the principal approaches for 3D point cloud data acquisition of cultural heritage. Salonia et al. (2009) [35] presented a quick photogrammetric system for surveying archaeological and architectural artifacts at different spatial scales. Multi-image photogrammetry with UAVs has become the most economical and convenient 3D survey technology for cultural heritage landscapes, archaeological sites, and immovable heritage (e.g., historical buildings) [36][37][38]. Jeon et al. (2017) [39] compared the performance of image-based 3D reconstruction from UAV photography using different commercial software packages, e.g., ContextCapture, PhotoScan, and Pix4Dmapper. Kingsland (2020) [40] used three digital photogrammetry processing packages, Metashape, ContextCapture, and RealityCapture, for small-scale artifact digitization. In addition, computer vision (CV) provides mathematical techniques for reconstructing 3D models from imagery [41]. CV together with photogrammetry is known as the image-based point cloud data acquisition method.
Three-dimensional laser scanning is another technology used to acquire cultural heritage point cloud data. A variety of 3D laser scanning platforms can be used to obtain point cloud data of cultural heritage at different spatial scales, such as airborne laser scanning (ALS), terrestrial laser scanning (TLS), mobile laser scanning (MLS), and handheld laser scanning. For example, Risbøl et al. (2014) [42] used ALS data for automatic change detection of detailed landscape information and individual monuments. Damięcka-Suchocka et al. (2020) [43] used millimeter-level TLS point cloud data to investigate historical buildings, walls, and structures for inventory, documentation, and conservation work. Di Filippo et al. (2018) [44] proposed a wearable MLS for collecting indoor and outdoor point cloud data of complex cultural heritage buildings.

2.2. A Single Platform with Multiple Sensors

In the field of cultural heritage, the equipment used for acquiring point cloud data is mainly based on 3D lidar and photogrammetric cameras mounted on unmanned aerial vehicles (UAVs), mobile vehicles, robots, terrestrial stations, handheld devices, and other platforms. The purpose of mounting multiple sensors on one platform is to obtain more information in a single data acquisition process. For example, Nagai et al. (2009) [45] proposed a UAV 3D mapping platform consisting of charge-coupled device cameras, a laser scanner, an inertial measurement unit, and a global positioning system (GPS). Erenoglu et al. (2017) [46] used digital, thermal, and multi-spectral camera systems on a UAV platform to collect the visible, thermal, and infrared parts of the electromagnetic spectrum. This multi-source information was employed to produce a highly accurate geometric model of an ancient theater and revealed material features through spectral classification. Rodríguez-Gonzálvez et al. (2017) [47] employed the Optech LYNX Mobile Mapper platform on a vehicle to acquire 3D point clouds of large cultural heritage sites. The platform includes two lidar sensors, four RGB cameras, and an inertial navigation system. During data collection, the system can simultaneously acquire point cloud data, colour information, and spatial geographic references. Milella et al. (2018) [48] assembled a stereo camera, a visible and near-infrared camera, and a thermal imager on a robot with an inertial measurement unit (IMU) to identify soil characteristics and detect changes by integrating data from the different sensors.
To sum up, two main trends exist in using a single platform for scientific observation of cultural heritage. The first is the mobile platform combining a global positioning system and inertial navigation, which can provide a global geospatial reference for cultural heritage without ground control points. The mobile platform reduces the time required for planning the sensor network and installing instrumentation, and can quickly obtain point cloud data over large areas with complex spatial structures, such as cultural heritage landscapes and historical buildings. Planning sensor network deployment by non-professional researchers may not be justified due to a lack of expertise [49]. However, cultural heritage authorities may prohibit UAVs and mobile vehicles from accessing cultural heritage sites, and sometimes there is not enough space or road access for flying drones or driving vehicles. In addition, the accuracy of mobile platforms is lower than that of ground-station-based methods. The second trend is that the content of point cloud data is evolving from purely geometric information towards the synchronous acquisition of spectral and texture information in a single data acquisition.

2.3. Multi-Platform Data Fusion

In cultural heritage, it is difficult for a single platform, with its limited observation perspective, to meet the requirements of point cloud completeness and accuracy [50]. Multi-platform collaborative observation can effectively integrate the advantages of different platforms, and multi-source point cloud fusion gives the scene more complete spatial coverage and geometric information.
Multi-platform data fusion can compensate for the incomplete data and excessive errors of a single platform. For example, Fassi et al. (2011) [51] argued that in order to understand the global structure of complex artifacts (e.g., the main spire of Milan Cathedral, Italy), together with its reconstruction accuracy, connections, and topological and geometrical logic, different instruments and modeling methods must be used and integrated. Achille et al. (2015) [52] constructed a 3D model of the interior and exterior of buildings with complex structures by integrating UAV photogrammetry and TLS data. Galeazzi (2016) [53] combined 3D laser scanning and photogrammetry to reconstruct the archaeological record of cave microtopography in extreme environments, characterized by extreme humidity, difficulty of access, and challenging light conditions.
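Before clouds from two platforms can be fused, they must be aligned in a common reference frame. The following is a minimal point-to-point ICP registration sketch, not the workflow of any cited study; the file names, iteration limits, and thresholds are assumed values for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, max_iter=50, tol=1e-6):
    """Rigidly align `source` (N x 3) to `target` (M x 3) by iterative closest point."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                 # nearest target point for each source point
        matched = target[idx]
        src_c, tgt_c = src.mean(0), matched.mean(0)
        H = (src - src_c).T @ (matched - tgt_c)     # Kabsch: optimal rotation via SVD
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = (R @ src.T).T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:               # stop when the mean residual stabilizes
            break
        prev_err = err
    return R_total, t_total, src

# Hypothetical usage: align a UAV photogrammetric cloud to a TLS cloud, then concatenate.
# tls_xyz = np.loadtxt("tls_cloud.xyz")            # assumed ASCII XYZ exports
# uav_xyz = np.loadtxt("uav_cloud.xyz")
# R, t, uav_aligned = icp_point_to_point(uav_xyz, tls_xyz)
# fused = np.vstack([tls_xyz, uav_aligned])        # naive fusion by concatenation
```

In practice such fine registration would be preceded by a coarse alignment (targets, GNSS, or feature matching), which is omitted here.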

3. 3D Point Cloud Segmentation

This section introduces the main methods and applications of 3DPCS in cultural heritage. 3DPCS is mainly used for the recognition and extraction of basic geometric shapes from point clouds, especially for immovable cultural heritage. The 3DPCS algorithms can be divided into three categories: region growing, model fitting, and clustering-based methods. Among them, the region growing method is used to detect and segment planes from point clouds of historic buildings. The model fitting method can segment further basic geometric shapes (e.g., cylinders and spheres) as well as some irregular geometric shapes. These geometric elements are mainly used for the parametric modeling of BIM and HBIM. The model fitting and clustering-based methods can also support surface defect detection and deformation analysis of immovable heritage by calculating the distance between the measured points and a fitted plane.

3.1. Region Growing

The region growing method is a classic 3DPCS algorithm, widely used in the geometric segmentation of planar structures [54][55]. It has also been applied to segment planar elements from the point clouds of historic buildings [56][57]. The basic idea of the region growing algorithm is to merge two spatial points or two spatial regions when they are close enough according to a particular geometric measure.
Several critical factors need to be considered when constructing a region growing algorithm. The first key factor is selecting appropriate seed points or regions [58][59][60]. The second key factor is dividing the data into growth units to improve computational efficiency, using unit or hybrid-unit divisions such as voxel [61], supervoxel [62], KD-tree [58], and octree [63] structures.
The region growing method is mainly applied to segmenting the walls and roofs of historic buildings. For example, Grussenmeyer et al. (2008) [56] selected the centre of gravity of each voxel unit as the seed point for region growing and then extracted planes for parametric modeling from the TLS point cloud of a medieval castle. Paiva et al. (2020) [57] extracted planar elements from the point cloud data of five historical buildings of different styles and periods. Their method combines a hierarchical watershed transform and curvature analysis with region growing to obtain more suitable growing seeds.
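The sketch below illustrates the general idea (low-curvature seeds, growth by normal similarity); it is not the method of [56] or [57], and the neighbourhood size and angle/curvature thresholds are assumed values.

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(pts, k=20):
    """PCA normal and curvature estimate from each point's k-nearest neighbourhood."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    normals = np.zeros_like(pts)
    curvature = np.zeros(len(pts))
    for i, nb in enumerate(idx):
        d = pts[nb] - pts[nb].mean(0)
        w, v = np.linalg.eigh(d.T @ d)          # eigenvalues ascending
        normals[i] = v[:, 0]                    # smallest eigenvector approximates the normal
        curvature[i] = w[0] / w.sum()
    return normals, curvature

def region_growing(pts, normals, curvature, k=20, angle_deg=8.0, curv_max=0.05):
    """Group points into smooth (near-planar) regions by normal-angle similarity."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    labels = -np.ones(len(pts), dtype=int)
    cos_thr = np.cos(np.radians(angle_deg))
    region = 0
    for seed in np.argsort(curvature):          # lowest-curvature points act as seeds first
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = [seed]
        while queue:
            p = queue.pop()
            for q in idx[p]:
                if labels[q] == -1 and abs(normals[p] @ normals[q]) >= cos_thr:
                    labels[q] = region
                    if curvature[q] < curv_max:  # only smooth points keep growing the region
                        queue.append(q)
        region += 1
    return labels
```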

3.2. Model Fitting

Model fitting is a shape detection method that matches point clouds to different geometric primitives. Therefore, it can also segment regular geometric shapes from point clouds. The two most important model fitting algorithms are the Hough transform (HT) and random sample consensus (RANSAC).

3.2.1. Hough Transform (HT)

The Hough transform can detect parametric geometric objects such as lines [64], planes [65], cylinders [66], and spheres [67] in point clouds, and several articles have reviewed the 3D Hough transform [68][69][70]. In HT, each sample extracted from 2D images or 3D point cloud data in the original space is mapped into a discretized parameter space, in which an accumulator with an array of cells is maintained. Each sample then casts a vote for the basic geometric elements it could belong to, represented by coordinates in the parameter space, and the cells with locally maximal scores are selected as the output.
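As a concrete illustration of this voting scheme, the following is a brute-force sketch of plane detection by 3D HT, parameterizing a plane by its normal direction (two angles) and distance; it is a simplified version of the accumulator designs reviewed in [68], and the discretization steps are assumed values.

```python
import numpy as np

def hough_plane_detection(pts, n_theta=30, n_phi=60, rho_step=0.05):
    """Every point votes for every candidate plane orientation; return the winning plane."""
    thetas = np.linspace(0, np.pi / 2, n_theta)            # polar angle of the plane normal
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    t, p = np.meshgrid(thetas, phis, indexing="ij")
    normals = np.stack([np.sin(t) * np.cos(p),
                        np.sin(t) * np.sin(p),
                        np.cos(t)], axis=-1).reshape(-1, 3)

    rho = pts @ normals.T                                  # signed distance for each (point, normal)
    rho_idx = np.round((rho - rho.min()) / rho_step).astype(int)
    acc = np.zeros((normals.shape[0], rho_idx.max() + 1), dtype=int)
    normal_idx = np.broadcast_to(np.arange(normals.shape[0]), rho_idx.shape)
    np.add.at(acc, (normal_idx.ravel(), rho_idx.ravel()), 1)   # cast the votes

    best_n, best_r = np.unravel_index(acc.argmax(), acc.shape)
    return normals[best_n], best_r * rho_step + rho.min()      # plane: n . x = d

# Hypothetical usage on a synthetic noisy horizontal plane at z = 0.5:
# pts = np.column_stack([np.random.rand(2000, 2), 0.5 + 0.01 * np.random.randn(2000)])
# normal, d = hough_plane_detection(pts)
```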

One application of the HT algorithm in cultural heritage is to extract planar features. For example, Lerma and Biosca (2005) [71] extracted planar surfaces from the point cloud of a monument using the HT algorithm to remove irrelevant points and reduce the data volume. Another application is creating digital orthophoto maps of cultural relics. A digital orthophoto map is an essential form of documentation in the two-dimensional reference systems of cultural heritage, and selecting an appropriate projection surface is the key to generating orthoimages with sufficient geometric accuracy and visual quality [72].

The advantage of the HT algorithm is that all points are processed independently, so it is not affected by outliers; it is robust to noise and can recognize multiple geometric shapes (such as several different planes) in one pass. Its disadvantages are heavy computation, high computational complexity, and the difficulty of choosing an appropriate parameter discretization [70].

3.2.2. Random Sample Consensus (RANSAC)

The RANSAC algorithm supports the segmentation of historic buildings into subsets of points belonging to building components with basic geometric shapes such as planes, spheres, cylinders, cones, and tori. For example, Aitelkadi et al. (2013) [73] extracted the principal planes of classical architecture from coloured point cloud data combining RGB values, laser intensity, and geometric data. Chan et al. (2021) [74] used the RANSAC algorithm to separate individual planar features, the first step of a point cloud colourization method based on point-to-pixel orthogonal projection. Kivilcim and Duran (2021) [75] extracted the geometries of architectural façade elements from noisy airborne and terrestrial lidar data; the extracted geometric elements facilitate transfer to BIM following the Industry Foundation Classes (IFC) standard. Macher et al. (2014) [76] segmented geometric elements such as planes, cylinders, cones, and spheres based on the RANSAC paradigm from TLS point cloud data of churches and fortresses from the 11th and 12th centuries.
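A minimal RANSAC plane-fitting sketch is shown below; the distance threshold and iteration count are assumed values, not those used in the cited studies.

```python
import numpy as np

def ransac_plane(pts, dist_thresh=0.01, n_iter=1000, rng=None):
    """Fit one dominant plane n.x + d = 0; return the model and inlier indices."""
    rng = np.random.default_rng(rng)
    best_inliers = np.array([], dtype=int)
    best_model = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]   # minimal sample: 3 points
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample, try again
            continue
        n /= norm
        d = -n @ sample[0]
        dist = np.abs(pts @ n + d)            # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):  # keep the model with the most support
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Hypothetical iterative extraction of several planes (walls, floors) from a building scan:
# remaining = pts.copy()
# for _ in range(5):
#     (n, d), idx = ransac_plane(remaining)
#     remaining = np.delete(remaining, idx, axis=0)   # remove inliers, fit the next plane
```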

The RANSAC algorithm can also detect defects on the surfaces of cultural relics by calculating the spatial distance between the original point cloud and the fitted plane. For example, Nespeca and De Luca (2016) [77] created a deviation map between the point cloud of a wall and the RANSAC-fitted plane in the Saint-Maurice church in Caromb, southern France. The deviation map can reveal areas of material loss on the artifact's surface, while the roughness of the material surface indicates the degree of surface corrosion and can point to stormwater runoff.
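Given a plane fitted as above, a deviation map of this kind can be derived by computing the signed point-to-plane distance for every point; the sketch below is illustrative only, and the loss threshold and colour mapping are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def deviation_map(pts, normal, d, loss_thresh=0.005):
    """Signed deviations of wall points from the fitted plane n.x + d = 0 (in cloud units)."""
    dev = pts @ normal + d
    loss_mask = dev < -loss_thresh            # assumed convention: negative = material loss
    return dev, loss_mask

# Hypothetical visualisation of deviations as a colour-coded scatter (wall_pts from RANSAC step):
# dev, loss = deviation_map(wall_pts, n, d)
# plt.scatter(wall_pts[:, 0], wall_pts[:, 2], c=dev, s=1, cmap="RdBu")
# plt.colorbar(label="deviation from fitted plane (m)")
# plt.show()
```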

3.3. Unsupervised Clustering Based

Unsupervised point cloud classification algorithms mainly include K-means [78], mean shift [79], and fuzzy clustering [80]. Clustering-based 3DPCS methods are more suitable for irregular geometric objects than region growing and model fitting methods because they do not require the basic geometry of the objects to be predefined. Surface or structural deterioration of cultural heritage often manifests as irregular geometric features such as biological deterioration, weathering, cracks, and partial loss. Therefore, these algorithms are well suited to extracting surface defects.
Segmenting damaged areas with similar geometric, colour, and reflection intensity features from 3D point cloud data and then mapping the deterioration of cultural heritage has become an efficient method for diagnosing conservation status [81][82]. Armesto-González et al. (2010) [83] used TLS with unsupervised classification methods to produce a thematic map of the damage affecting building materials. This work tested three types of 3D laser scanning equipment (FARO Photon, TRIMBLE GX200, and RIEGL-Z390i) with different wavelengths. The best classification results were obtained with the fuzzy K-means algorithm rather than K-means or ISODATA.
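A minimal K-means sketch of this workflow is given below; the feature choice, scaling, and cluster count are assumed values, and the cited study used fuzzy K-means rather than the plain K-means shown here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_damage(colors, intensity, n_clusters=4, seed=0):
    """Cluster points by colour and reflectance to separate damaged from sound material."""
    features = np.column_stack([colors, intensity])        # per-point RGB + intensity
    features = StandardScaler().fit_transform(features)    # put features on a common scale
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)

# Hypothetical usage on a TLS scan exported with RGB and intensity columns (x y z r g b i):
# data = np.loadtxt("facade_scan.txt")
# labels = cluster_damage(data[:, 3:6], data[:, 6])
# for c in np.unique(labels):
#     print(f"cluster {c}: {np.sum(labels == c)} points")   # inspect which cluster is damage
```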

4. Three-Dimensional Point Cloud Semantic Segmentation

Currently, the case studies of 3DPCSS algorithms are mainly based on historical buildings in Europe, with complex structures, diverse roof types, decorated windows, columns of different styles, and complex geometric decorations. A 3DPCSS algorithm can segment building elements and the surrounding environment (the ground and trees) in one task. These elements have different semantic and physical properties and functions in cultural heritage. The segmentation results are used for parametric modeling and for incorporating digital models of artifacts into the HBIM environment.

4.1. Supervised Machine Learning

3DPCSS based on machine learning has been widely used for the semantic understanding of indoor and outdoor 3D urban scenes [84][85]. This method has also become a trend for classifying complex structures in cultural heritage point clouds, especially ancient architecture [86][87]. The basic processing pipeline, from importing point clouds to outputting semantically annotated 3D point clouds, includes four steps [32][88][89] (a minimal sketch of the pipeline follows the list):
(1) Point cloud neighborhood selection.
(2) Local feature extraction.
(3) Salient feature selection.
(4) Point cloud supervised classification.
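The sketch below walks through these four steps using eigenvalue-based covariance features and a random forest; the neighbourhood size, feature set, class labels, and classifier settings are assumptions for illustration, not the exact choices of [32][88][89].

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def covariance_features(pts, k=30):
    """Steps (1)-(2): k-NN neighbourhood selection and local eigenvalue (covariance) features."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    feats = np.zeros((len(pts), 4))
    for i, nb in enumerate(idx):
        d = pts[nb] - pts[nb].mean(0)
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(d.T @ d / k))   # l1 >= l2 >= l3 after unpacking
        l1 = max(l1, 1e-12)
        feats[i] = [(l1 - l2) / l1,        # linearity
                    (l2 - l3) / l1,        # planarity
                    l3 / l1,               # sphericity
                    pts[i, 2]]             # height (z) as an additional, assumed-useful feature
    return feats

# Hypothetical usage with manually labelled training points (0 = wall, 1 = roof, 2 = vegetation, ...):
# X = covariance_features(train_xyz)
# selector = SelectKBest(f_classif, k=3).fit(X, train_labels)                  # step (3)
# clf = RandomForestClassifier(n_estimators=200).fit(selector.transform(X),
#                                                    train_labels)             # step (4)
# pred = clf.predict(selector.transform(covariance_features(test_xyz)))
```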

4.2. Deep Learning

With the emergence of deep learning techniques, point cloud semantic segmentation has improved tremendously, and in recent years many deep learning models have been proposed to achieve semantic segmentation of 3D point clouds [90]. Compared with traditional segmentation algorithms, deep-learning-based methods perform better at linking multi-scale spatial 3D information with semantic information at different levels of granularity, such as semantic segmentation (scene level), instance segmentation (object level), and part segmentation (part level) [91][92]. Owing to the irregular structure of point clouds, deep-learning-based point cloud semantic segmentation models are divided into indirect and direct methods [93]. The indirect method converts the irregular point cloud into a regular structure to achieve segmentation, whereas the direct method operates on the raw points.
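As an illustration of the direct approach, the sketch below defines a strongly simplified PointNet-style per-point segmentation network in PyTorch; it is not the architecture of any specific cited work, and the layer sizes and class count are assumed values.

```python
import torch
import torch.nn as nn

class TinyPointSeg(nn.Module):
    """Per-point semantic segmentation: shared point-wise MLPs plus a global max-pooled context."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.local = nn.Sequential(          # shared per-point MLP: (B, 3, N) -> (B, 128, N)
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU())
        self.head = nn.Sequential(           # classify each point from [local, global] features
            nn.Conv1d(128 + 128, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1))

    def forward(self, xyz):                                  # xyz: (B, N, 3)
        x = self.local(xyz.transpose(1, 2))                  # per-point features (B, 128, N)
        g = torch.max(x, dim=2, keepdim=True).values         # global scene feature (B, 128, 1)
        x = torch.cat([x, g.expand(-1, -1, x.shape[2])], dim=1)
        return self.head(x)                                  # per-point logits (B, C, N)

# Hypothetical usage: a batch of point blocks from an annotated heritage scene.
# model = TinyPointSeg(num_classes=5)
# logits = model(torch.rand(2, 4096, 3))                     # -> torch.Size([2, 5, 4096])
# loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (2, 4096)))
```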

4.3. Public Benchmark Dataset

A public standard benchmark dataset helps train 3DPCSS models, based on which the automation of point cloud data processing and 3D reconstruction of cultural heritage can be improved. An intensive search indicated only four benchmark datasets that include cultural heritage, most of which concern architectural heritage. The architectural cultural heritage (ArCH) dataset was collected by 3D laser scanning and oblique photography and includes 17 annotated and 10 unlabeled scenes [94]. The WHU-TLS point cloud benchmark dataset [95][96][97] is not dedicated to the cultural heritage field but includes a small amount of architectural heritage. SEMANTIC3D.NET [98] presents a 3D point cloud classification benchmark dataset with manually labeled points, covering a few historic buildings.

References

  1. Bakirman, T.; Bayram, B.; Akpinar, B.; Karabulut, M.F.; Bayrak, O.C.; Yigitoglu, A.; Seker, D.Z. Implementation of ultra-light UAV systems for cultural heritage documentation. J. Cult. Herit. 2020, 44, 174–184.
  2. Pan, Y.; Dong, Y.; Wang, D.; Chen, A.; Ye, Z. Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds. Remote Sens. 2019, 11, 1204.
  3. Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult. Herit. 2007, 8, 423–427.
  4. Pavlidis, G.; Koutsoudis, A.; Arnaoutoglou, F.; Tsioukas, V.; Chamzas, C. Methods for 3D digitization of Cultural Heritage. J. Cult. Herit. 2007, 8, 93–98.
  5. Pepe, M.; Costantino, D.; Alfio, V.S.; Restuccia, A.G.; Papalino, N.M. Scan to BIM for the digital management and representation in 3D GIS environment of cultural heritage site. J. Cult. Herit. 2021, 50, 115–125.
  6. Poux, F.; Neuville, R.; Van Wersch, L.; Nys, G.-A.; Billen, R. 3D Point Clouds in Archaeology: Advances in Acquisition, Processing and Knowledge Integration Applied to Quasi-Planar Objects. Geosciences 2017, 7, 96.
  7. Barrile, V.; Bernardo, E.; Fotia, A.; Bilotta, G. A Combined Study of Cultural Heritage in Archaeological Museums: 3D Survey and Mixed Reality. Heritage 2022, 5, 1330–1349.
  8. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. J. Comput. Cult. Herit. 2018, 11, 1–36.
  9. Xie, Y.; Tian, J.; Zhu, X.X. Linking Points with Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote Sens. Mag. 2020, 8, 38–59.
  10. Poux, F.; Billen, R. Voxel-based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs. Deep Learning Methods. ISPRS Int. J. Geo-Inf. 2019, 8, 213.
  11. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213.
  12. Rocha, G.; Mateus, L.; Fernández, J.; Ferreira, V. A Scan-to-BIM Methodology Applied to Heritage Buildings. Heritage 2020, 3, 47–67.
  13. Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127.
  14. López, F.; Lerones, P.; Llamas, J.; Gómez-García-Bermejo, J.; Zalama, E. A Review of Heritage Building Information Modeling (H-BIM). Multimodal Technol. Interact. 2018, 2, 21.
  15. Pocobelli, D.P.; Boehm, J.; Bryan, P.; Still, J.; Grau-Bové, J. BIM for heritage science: A review. Herit. Sci. 2018, 6, 30.
  16. Yang, S.; Hou, M.; Shaker, A.; Li, S. Modeling and Processing of Smart Point Clouds of Cultural Relics with Complex Geometries. ISPRS Int. J. Geo-Inf. 2021, 10, 617.
  17. Poux, F.; Billen, R. A Smart Point Cloud Infrastructure for intelligent environments. In Laser Scanning; CRC Press: London, UK, 2019; p. 23.
  18. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Model for Semantically Rich Point Cloud Data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-4/W5, 107–115.
  19. Alkadri, M.F.; Alam, S.; Santosa, H.; Yudono, A.; Beselly, S.M. Investigating Surface Fractures and Materials Behavior of Cultural Heritage Buildings Based on the Attribute Information of Point Clouds Stored in the TLS Dataset. Remote Sens. 2022, 14, 410.
  20. Arias, P.; GonzÁLez-Aguilera, D.; Riveiro, B.; Caparrini, N. Orthoimage-Based Documentation of Archaeological Structures: The Case of a Mediaeval Wall in Pontevedra, Spain. Archaeometry 2011, 53, 858–872.
  21. Chen, S.; Hu, Q.; Wang, S.; Yang, H. A Virtual Restoration Approach for Ancient Plank Road Using Mechanical Analysis with Precision 3D Data of Heritage Site. Remote Sens. 2016, 8, 828.
  22. Yang, S.; Xu, S.; Huang, W. 3D Point Cloud for Cultural Heritage: A Scientometric Survey. Remote Sens. 2022, 14, 5542.
  23. Ronchi, A.M. Cultural Content. In eCulture: Cultural Content in the Digital Age; Springer: Berlin/Heidelberg, Germany, 2009; pp. 15–20.
  24. Van Eetvelde, V.; Antrop, M. Indicators for assessing changing landscape character of cultural landscapes in Flanders (Belgium). Land Use Policy 2009, 26, 901–910.
  25. Soler, F.; Melero, F.J.; Luzón, M.V. A complete 3D information system for cultural heritage documentation. J. Cult. Herit. 2017, 23, 49–57.
  26. Sánchez, M.L.; Cabrera, A.T.; Del Pulgar, M.L.G. Guidelines from the heritage field for the integration of landscape and heritage planning: A systematic literature review. Landsc. Urban Plan. 2020, 204, 103931.
  27. Moyano, J.; Justo-Estebaranz, Á.; Nieto-Julián, J.E.; Barrera, A.O.; Fernández-Alconchel, M. Evaluation of records using terrestrial laser scanner in architectural heritage for information modeling in HBIM construction: The case study of the La Anunciación church (Seville). J. Build. Eng. 2022, 62, 105190.
  28. Barrile, V.; Fotia, A. A proposal of a 3D segmentation tool for HBIM management. Appl. Geomat. 2021, 14, 197–209.
  29. Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Frontoni, E.; Lingua, A.M. Point cloud semantic segmentation using a deep learning framework for cultural heritage. Remote Sens. 2020, 12, 1005.
  30. Grilli, E.; Remondino, F. Classification of 3D Digital Heritage. Remote Sens. 2019, 11, 847.
  31. Li, Y.; Luo, Y.; Gu, X.; Chen, D.; Gao, F.; Shuang, F. Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels. Remote Sens. 2021, 13, 3156.
  32. Hackel, T.; Wegner, J.D.; Savinov, N.; Ladicky, L.; Schindler, K.; Pollefeys, M. Large-Scale Supervised Learning For 3D Point Cloud Labeling: Semantic3d.Net. Photogramm. Eng. Remote Sens. 2018, 84, 297–308.
  33. Ramiya, A.M.; Nidamanuri, R.R.; Ramakrishnan, K. A supervoxel-based spectro-spatial approach for 3D urban point cloud labelling. Int. J. Remote Sens. 2016, 37, 4172–4200.
  34. Grilli, E.; Menna, F.; Remondino, F. A Review of Point Clouds Segmentation and Classification Algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344.
  35. Salonia, P.; Scolastico, S.; Pozzi, A.; Marcolongo, A.; Messina, T.L. Multi-scale cultural heritage survey: Quick digital photogrammetric systems. J. Cult. Herit. 2009, 10, e59–e64.
  36. McCarthy, J. Multi-image photogrammetry as a practical tool for cultural heritage survey and community engagement. J. Archaeol. Sci. 2014, 43, 175–185.
  37. Nikolakopoulos, K.G.; Soura, K.; Koukouvelas, I.K.; Argyropoulos, N.G. UAV vs. classical aerial photogrammetry for archaeological studies. J. Archaeol. Sci. Rep. 2017, 14, 758–773.
  38. Vavulin, M.V.; Chugunov, K.V.; Zaitceva, O.V.; Vodyasov, E.V.; Pushkarev, A.A. UAV-based photogrammetry: Assessing the application potential and effectiveness for archaeological monitoring and surveying in the research on the ‘valley of the kings’ (Tuva, Russia). Digit. Appl. Archaeol. Cult. Herit. 2021, 20, e00172.
  39. Jeon, E.-I.; Yu, S.-J.; Seok, H.-W.; Kang, S.-J.; Lee, K.-Y.; Kwon, O.-S. Comparative evaluation of commercial softwares in UAV imagery for cultural heritage recording: Case study for traditional building in South Korea. Spat. Inf. Res. 2017, 25, 701–712.
  40. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157.
  41. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Nature: Berlin/Heidelberg, Germany, 2022.
  42. Risbøl, O.; Briese, C.; Doneus, M.; Nesbakken, A. Monitoring cultural heritage by comparing DEMs derived from historical aerial photographs and airborne laser scanning. J. Cult. Herit. 2015, 16, 202–209.
  43. Damięcka-Suchocka, M.; Katzer, J.; Suchocki, C. Application of TLS Technology for Documentation of Brickwork Heritage Buildings and Structures. Coatings 2022, 12, 1963.
  44. di Filippo, A.; Sánchez-Aparicio, L.; Barba, S.; Martín-Jiménez, J.; Mora, R.; González Aguilera, D. Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site. Remote Sens. 2018, 10, 1897.
  45. Nagai, M.; Tianen, C.; Shibasaki, R.; Kumagai, H.; Ahmed, A. UAV-Borne 3-D Mapping System by Multisensor Integration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 701–708.
  46. Erenoglu, R.C.; Akcay, O.; Erenoglu, O. An UAS-assisted multi-sensor approach for 3D modeling and reconstruction of cultural heritage site. J. Cult. Herit. 2017, 26, 79–90.
  47. Rodríguez-Gonzálvez, P.; Jiménez Fernández-Palacios, B.; Muñoz-Nieto, Á.; Arias-Sanchez, P.; Gonzalez-Aguilera, D. Mobile LiDAR System: New Possibilities for the Documentation and Dissemination of Large Cultural Heritage Sites. Remote Sens. 2017, 9, 189.
  48. Milella, A.; Reina, G.; Nielsen, M. A multi-sensor robotic platform for ground mapping and estimation beyond the visible spectrum. Precis. Agric. 2018, 20, 423–444.
  49. Alsadik, B. Practicing the geometric designation of sensor networks using the Crowdsource 3D models of cultural heritage objects. J. Cult. Herit. 2018, 31, 202–207.
  50. Ramos, M.M.; Remondino, F. Data fusion in Cultural Heritage—A Review. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W7, 359–363.
  51. Fassi, F.; Achille, C.; Fregonese, L. Surveying and modelling the main spire of Milan Cathedral using multiple data sources. Photogramm. Rec. 2011, 26, 462–487.
  52. Achille, C.; Adami, A.; Chiarini, S.; Cremonesi, S.; Fassi, F.; Fregonese, L.; Taffurelli, L. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications--Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy). Sensors 2015, 15, 15520–15539.
  53. Galeazzi, F. Towards the definition of best 3D practices in archaeology: Assessing 3D documentation techniques for intra-site data recording. J. Cult. Herit. 2016, 17, 159–169.
  54. Nurunnabi, A.; Belton, D.; West, G. Robust segmentation in laser scanning 3D point cloud data. In Proceedings of the 2012 International Conference on Digital Image Computing Techniques and Applications (DICTA), Fremantle, Australia, 3–5 December 2012; pp. 1–8.
  55. Su, Z.; Gao, Z.; Zhou, G.; Li, S.; Song, L.; Lu, X.; Kang, N. Building Plane Segmentation Based on Point Clouds. Remote Sens. 2021, 14, 95.
  56. Grussenmeyer, P.; Landes, T.; Voegtle, T.; Ringle, K. Comparison methods of terrestrial laser scanning, photogrammetry and tacheometry data for recording of cultural heritage buildings. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 213–218.
  57. Paiva, P.V.V.; Cogima, C.K.; Dezen-Kempter, E.; Carvalho, M.A.G. Historical building point cloud segmentation combining hierarchical watershed transform and curvature analysis. Pattern Recognit. Lett. 2020, 135, 114–121.
  58. Deschaud, J.-E.; Goulette, F. A fast and accurate plane detection algorithm for large noisy point clouds using filtered normals and voxel growing. In 3DPVT; Hal Archives-Ouvertes: Paris, France, 2010.
  59. Fan, Y.; Wang, M.; Geng, N.; He, D.; Chang, J.; Zhang, J.J. A self-adaptive segmentation method for a point cloud. Vis. Comput. 2017, 34, 659–673.
  60. Ning, X.; Zhang, X.; Wang, Y.; Jaeger, M. Segmentation of architecture shape information from 3D point cloud. In Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry, Yokohama, Japan, 14–15 December 2009; pp. 127–132.
  61. Saglam, A.; Makineci, H.B.; Baykan, N.A.; Baykan, Ö.K. Boundary constrained voxel segmentation for 3D point clouds using local geometric differences. Expert Syst. Appl. 2020, 157, 113439.
  62. Aijazi, A.; Checchin, P.; Trassoudaine, L. Segmentation Based Classification of 3D Urban Point Clouds: A Super-Voxel Based Approach with Evaluation. Remote Sens. 2013, 5, 1624–1650.
  63. Vo, A.-V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100.
  64. Dalitz, C.; Schramke, T.; Jeltsch, M. Iterative Hough Transform for Line Detection in 3D Point Clouds. Image Process. Line 2017, 7, 184–196.
  65. Tian, P.; Hua, X.; Yu, K.; Tao, W. Robust Segmentation of Building Planar Features From Unorganized Point Cloud. IEEE Access 2020, 8, 30873–30884.
  66. Rabbani, T.; Van Den Heuvel, F. Efficient hough transform for automatic detection of cylinders in point clouds. Isprs Wg Iii/3 Iii/4 2005, 3, 60–65.
  67. Camurri, M.; Vezzani, R.; Cucchiara, R. 3D Hough transform for sphere recognition on point clouds. Mach. Vis. Appl. 2014, 25, 1877–1891.
  68. Borrmann, D.; Elseberg, J.; Lingemann, K.; Nüchter, A. The 3d hough transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 3.
  69. Hassanein, A.S.; Mohammad, S.; Sameer, M.; Ragab, M.E. A survey on Hough transform, theory, techniques and applications. arXiv 2015, arXiv:1502.02160.
  70. Kaiser, A.; Ybanez Zepeda, J.A.; Boubekeur, T. A survey of simple geometric primitives detection methods for captured 3D data. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2019; pp. 167–196.
  71. Lerma, J.; Biosca, J. Segmentation and filtering of laser scanner data for cultural heritage. In Proceedings of the CIPA 2005 XX International Symposium, Torino, Italy, 26 September–1 October 2005; p. 6.
  72. Pierrot-Deseilligny, M.; De Luca, L.; Remondino, F. Automated image-based procedures for accurate artifacts 3D Modeling and orthoimage. J. Geoinform. FCE CTU 2011, 6, 1–10.
  73. Aitelkadi, K.; Tahiri, D.; Simonetto, E.; Sebari, I.; Polidori, L. Segmentation of heritage building by means of geometric and radiometric components from terrestrial laser scanning. ISPRS Ann. Photogramm. Remote Sens Spat. Inf. Sci 2013, 1, 1–6.
  74. Chan, T.O.; Xiao, H.; Liu, L.; Sun, Y.; Chen, T.; Lang, W.; Li, M.H. A Post-Scan Point Cloud Colourization Method for Cultural Heritage Documentation. ISPRS Int. J. Geo-Inf. 2021, 10, 737.
  75. Kivilcim, C.Ö.; Duran, Z. Parametric Architectural Elements from Point Clouds for HBIM Applications. Int. J. Environ. Geoinform. 2021, 8, 144–149.
  76. Macher, H.; Landes, T.; Grussenmeyer, P.; Alby, E. Semi-automatic segmentation and modelling from point clouds towards historical building information modelling. In Proceedings of the Euro-Mediterranean Conference, Limassol, Cyprus, 3– November 2014; pp. 111–120.
  77. Nespeca, R.; De Luca, L. Analysis, thematic maps and data mining from point cloud to ontology for software development. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2016, XLI-B5, 347–354.
  78. Shi, B.-Q.; Liang, J.; Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput.-Aided Des. 2011, 43, 910–922.
  79. Melzer, T. Non-parametric segmentation of ALS point clouds using mean shift. J. Appl. Geod. 2007, 1, 159–170.
  80. Biosca, J.M.; Lerma, J.L. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS J. Photogramm. Remote Sens. 2008, 63, 84–98.
  81. Quagliarini, E.; Clini, P.; Ripanti, M. Fast, low cost and safe methodology for the assessment of the state of conservation of historical buildings from 3D laser scanning: The case study of Santa Maria in Portonovo (Italy). J. Cult. Herit. 2017, 24, 175–183.
  82. Galantucci, R.A.; Fatiguso, F. Advanced damage detection techniques in historical buildings using digital photogrammetry and 3D surface anlysis. J. Cult. Herit. 2019, 36, 51–62.
  83. Armesto-González, J.; Riveiro-Rodríguez, B.; González-Aguilera, D.; Rivas-Brea, M.T. Terrestrial laser scanning intensity data applied to damage detection for historical buildings. J. Archaeol. Sci. 2010, 37, 3037–3047.
  84. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
  85. Vosselman, G.; Coenen, M.; Rottensteiner, F. Contextual segment-based classification of airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 2017, 128, 354–371.
  86. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine Learning for Cultural Heritage: A Survey. Pattern Recognit. Lett. 2020, 133, 102–108.
  87. Mesanza-Moraza, A.; García-Gómez, I.; Azkarate, A. Machine Learning for the Built Heritage Archaeological Study. J. Comput. Cult. Herit. 2021, 14, 1–21.
  88. Weinmann, M.; Jutzi, B.; Hinz, S.; Mallet, C. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers. ISPRS J. Photogramm. Remote Sens. 2015, 105, 286–304.
  89. Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184.
  90. Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep learning for 3d point clouds: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4338–4364.
  91. Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Deep learning on 3D point clouds. Remote Sens. 2020, 12, 1729.
  92. Liu, W.; Sun, J.; Li, W.; Hu, T.; Wang, P. Deep learning on point clouds and its application: A survey. Sensors 2019, 19, 4188.
  93. Zhang, J.; Zhao, X.; Chen, Z.; Lu, Z. A review of deep learning-based semantic segmentation for point cloud. IEEE Access 2019, 7, 179118–179133.
  94. Matrone, F.; Lingua, A.; Pierdicca, R.; Malinverni, E.; Paolanti, M.; Grilli, E.; Remondino, F.; Murtiyoso, A.; Landes, T. A benchmark for large-scale heritage point cloud semantic segmentation. In Proceedings of the XXIV ISPRS Congress, Nice, France, 31 August–2 September 2020; pp. 1419–1426.
  95. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
  96. Dong, Z.; Yang, B.; Liang, F.; Huang, R.; Scherer, S. Hierarchical registration of unordered TLS point clouds based on binary shape context descriptor. ISPRS J. Photogramm. Remote Sens. 2018, 144, 61–79.
  97. Dong, Z.; Yang, B.; Liu, Y.; Liang, F.; Li, B.; Zang, Y. A novel binary shape context for 3D local surface description. ISPRS J. Photogramm. Remote Sens. 2017, 130, 431–452.
  98. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.net: A new large-scale point cloud classification benchmark. arXiv 2017, arXiv:1704.03847.