Geomatic Sensors for Heritage Documentation: History

Geomatic technologies have become widely popular in cultural heritage applications, and the scientific field is quite broad: from underwater to close-range to low-altitude and satellite observations. Geomatic sensors have been used in applications such as close-range approaches with red-green-blue (RGB) cameras and Terrestrial Laser Scanners (TLS), as well as underwater studies. Low-altitude sensors on Unmanned Aerial Vehicles (UAVs) have also been widely used with RGB and multispectral cameras, as well as lidar and thermal sensors.

  • Sensors
  • Heritage
  • Cameras

1. Introduction

In the past, a variety of sensors have been used for the documentation and monitoring of heritage sites [1,2,3,4,5,6]. As technology has advanced and sensor capabilities have increased, studies on the subject have multiplied. Geomatic sensors have been used in applications such as close-range approaches with red-green-blue (RGB) cameras and Terrestrial Laser Scanners (TLS), as well as underwater studies [7,8,9,10,11]. Low-altitude sensors on Unmanned Aerial Vehicles (UAVs) have also been widely used with RGB and multispectral cameras, as well as lidar and thermal sensors [12,13,14,15]. Additionally, researchers have been interested in using aerial and satellite sensors for observing heritage sites and monuments on a macro scale [16,17].

2. Geomatic Sensors for Heritage Documentation

2.1. Close-Range Sensors

2.1.1. RGB Sensors

RGB sensors are passive sensors, commonly used for close-range photogrammetric applications. These sensors operate in the visible part of the spectrum, between 380 and 750 nm. While CMOS sensors are sensitive to approximately 350–1050 nm, an infrared-blocking filter (cutting roughly 750–1000 nm) is applied to reproduce the natural colors visible to humans. By replacing this infrared-blocking filter with a red one, some cameras can easily be converted to near infrared, so that the CMOS records infrared wavelengths in the red channel of the RGB image file [18,19].
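As a minimal sketch of how data from such a converted camera might be exploited, the following assumes the red channel approximates near infrared and contrasts it against the blue channel to form a rough NDVI-like index; the exact band mixing is camera- and filter-dependent, so the channel assignment here is an illustrative assumption:

```python
import numpy as np

def pseudo_ndvi(img):
    """With an IR-converted camera, the red channel records near infrared,
    so a rough vegetation/material index can be formed against the blue
    channel. Illustrative only; band mixing varies per camera and filter."""
    nir = img[..., 0].astype(np.float64)  # red channel ~ NIR after conversion
    vis = img[..., 2].astype(np.float64)  # blue channel as visible reference
    return (nir - vis) / (nir + vis + 1e-9)

# Synthetic patch with a strong NIR response and weak visible response:
demo = np.zeros((2, 2, 3), np.uint8)
demo[..., 0], demo[..., 2] = 200, 50
print(round(float(pseudo_ndvi(demo)[0, 0]), 2))  # → 0.6
```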
The versatility and variety of RGB cameras available on the market are unmatched by any other sensor in the cultural heritage field [20]. Cameras can be classified based on their sensors and lenses [21]. CMOS sensors are usually classified according to their physical size and resolution, which affect the physical pixel size and the amount of recorded light. Other sensor characteristics of interest include the color pattern and the antialiasing digital filter.
Lenses are characterized by their focal length and material. Focal length affects the size of the covered area at a given object-to-camera distance and depends on the application. Two lens materials are available: plastic (acrylic) and glass (crystal), with the latter being preferable. Other important characteristics are the number of elements in the lens, chromatic aberrations, distortion, and whether it is a prime or zoom lens [21].
Camera manufacturers prioritize different characteristics, based on the intended application and the final cost of the camera. For example, camera rigidity is desirable for 3D reconstruction applications, but it increases camera weight, making it challenging to mount the camera on a drone. Cameras with interchangeable lenses are more versatile, but this feature also increases camera size and weight. LCD screens and control dials are desirable for professional users, but useless if the device is mounted on a drone. Other characteristics of interest for specific applications include recording in raw format, a wired or wireless connection for remote control, triggering and data recording, a hot shoe for flash, and synchronization for precise triggering.
The primary purpose of such sensors is general-purpose recording and documentation, but they are increasingly used for 3D reconstruction through Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques [22,23]. Calibration is necessary to achieve high standards in both radiometric and geometric measurements [24]. A color checker in the frame of each photo is usually sufficient to achieve color accuracy, while accurate 3D reconstruction requires a rigid camera [25,26]. The camera must be either calibrated before the photo acquisition or self-calibrated during post-processing, and several ground control points must be measured using a method of higher-order accuracy, e.g., a total station, to ensure high geometric accuracy and georeferencing.
While there are many different sensors on the market, it is essential to highlight the 360° cameras, which have gained attention from the community as an easy means to record data quickly without missing any information [27,28,29]. There are two main applications for such cameras: virtual tours and 3D reconstruction. The former is served even by affordable commercial cameras, but the latter requires high-end dedicated cameras. Such cameras consist of an array of sensors triggered simultaneously. The most common and affordable approach is two small image sensors mounted back-to-back, coupled with 180° spherical lenses. More expensive implementations consist of 3–25 sensors and lenses mounted in a rigid body and triggered simultaneously. The advantage of more sensors is that each covers a much smaller field of view, limiting lens distortions and increasing the overall resolution. The same considerations as for single-lens cameras apply to each sensor–lens pair. It should be mentioned that a multi-lens/multi-sensor camera should be used for 3D reconstruction if good results are expected [27].

2.1.2. Terrestrial Laser Scanners

Terrestrial Laser Scanners (TLS) are active sensors based on LiDAR technology [30]: they emit laser pulses in the 900–1064 nm wavelength range, which are reflected by the surrounding objects and returned to the scanner. The scanner measures the time of flight (TOF) or the phase shift and calculates the distance to the reflecting surface. Modern TLS cover a 360° × 270° window or even more, and they can acquire points at rates between 30 thousand and 2 million points per second [31,32,33].
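The two ranging principles mentioned above can be sketched numerically; the timing and modulation values below are illustrative and not tied to any particular scanner:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight ranging: the pulse travels to the target and back,
    so the range is half the distance covered in the measured time."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, modulation_freq_hz, full_cycles=0):
    """Phase-shift ranging: distance is inferred from the phase offset of a
    continuously modulated beam; it is unambiguous only within half a
    modulation wavelength, so the integer cycle count must be resolved."""
    wavelength = C / modulation_freq_hz
    return (full_cycles + phase_rad / (2.0 * math.pi)) * wavelength / 2.0

# A pulse returning after about 667 ns corresponds to roughly 100 m:
print(round(tof_distance(667e-9), 1))  # → 100.0
```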
Besides the laser measurement technology (TOF or phase shift), which directly affects the acquisition rate and range, other essential characteristics of TLS include the angular resolution, distance accuracy, signal-to-noise ratio, support for multiple returns, and a coaxial RGB camera. The final point accuracy from the laser head is a combination of the distance and angular accuracies, and varies roughly within the 5–15 mm @ 100 m range.
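As a rough sketch of how the two accuracy components combine, the following treats the distance error and the range-scaled angular error as independent sources; the instrument figures used are hypothetical:

```python
import math

def point_accuracy(range_m, distance_acc_m, angular_acc_rad):
    """Combine the distance accuracy and the angular accuracy (whose
    effect grows linearly with range) into a rough 3D point error.
    Illustrative only; real instrument error budgets are more involved."""
    lateral = range_m * angular_acc_rad        # angular error projected at range
    return math.hypot(distance_acc_m, lateral)  # combined as independent errors

# Hypothetical scanner: 3 mm distance accuracy, 8" angular accuracy, at 100 m:
arcsec8 = math.radians(8 / 3600)
print(round(point_accuracy(100.0, 0.003, arcsec8) * 1000, 1))  # → 4.9 (mm)
```

The result falls inside the 5–15 mm @ 100 m range quoted above, illustrating why both components matter at long range.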
The point clouds collected by TLS are co-registered or geo-registered during post-processing. The former may be done using sphere targets, and the latter using targets measured with other methods, usually a combination of total stations and Global Navigation Satellite Systems (GNSS). For the final alignment, Iterative Closest Point (ICP) algorithms are employed [34,35,36,37]. Most TLS incorporate complementary sensors, such as GNSS receivers, barometers, and digital compasses, to estimate the initial position and accelerate alignment during post-processing. Some modern TLS use LiDAR-based or visual Simultaneous Localization And Mapping (SLAM) techniques to co-register neighboring scans instead of relying on the aforementioned sensors.
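A minimal single-iteration ICP sketch in NumPy, using brute-force nearest-neighbor matching and the SVD-based (Kabsch) rigid-transform solution; real TLS registration pipelines add outlier rejection, subsampling, and convergence criteria:

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then solve the best-fit rigid transform via SVD (Kabsch)."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    matched = target[np.argmin(d2, axis=1)]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t

# Toy example: recover a small known shift between two point-cloud copies.
rng = np.random.default_rng(0)
target = rng.random((200, 3))
source = target + np.array([0.004, -0.002, 0.003])
for _ in range(10):
    source = icp_step(source, target)
print(float(np.abs(source - target).max()))  # residual after alignment
```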
SLAM (visual, inertial, or combined) is also used to eliminate the need for the scanner to be stationary [38,39,40,41,42]. Such scanners may be handheld or mounted on a backpack, car, or drone. The user holds a rotating laser profiler while walking around and inside the monument; the recorded data are stored and merged into a single point cloud during post-processing. Such methods are faster in data acquisition, i.e., a monument can be covered in a fraction of the time required by stationary TLS. However, they are of inferior accuracy, varying from 30 mm to 50 mm @ 100 m range. Professional calibration and servicing of TLS are necessary, as they are complex and sensitive pieces of equipment [43,44].

2.2. Low-Altitude Sensors

Similar sensors are also used in low-altitude applications. The drone RGB sensor is like the RGB sensor discussed previously, but it is carried onboard a drone, allowing for more advantageous positions and angles for photography. The rise of location-aware drones equipped with single-frequency GNSS at the beginning of the 2010s allowed for autonomous flights aimed at large-scale mapping [45,46]. In the following years, multicopter drones were extensively used with oblique photographs for the detailed 3D reconstruction of cultural heritage monuments and sites [47,48,49,50,51].
Cameras onboard drones have similar characteristics to standard ones but must be optimized for weight and space. Additionally, given that the object-to-camera distance can easily be altered by selecting a proper flying height, the need for interchangeable lenses is limited: the image scale can be controlled by the flying height rather than the lens focal length. Wide-angle lenses are adopted in most cases, since they also provide a favorable base-to-height ratio for better height precision.
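The relation between flying height, image scale, and height precision can be sketched as follows; the sensor, lens, and flight values are hypothetical, and the height-precision formula is a common rule of thumb assuming about one pixel of image-matching error:

```python
def gsd_cm(pixel_size_um, focal_length_mm, flying_height_m):
    """Ground sampling distance: the ground footprint of one pixel,
    GSD = pixel_size * H / f, converted to centimeters."""
    return pixel_size_um * 1e-6 * flying_height_m / (focal_length_mm * 1e-3) * 100.0

def height_precision_cm(gsd, base_m, flying_height_m):
    """Rule-of-thumb forward-intersection height precision:
    sigma_Z ~ (H / B) * GSD, so a larger base-to-height ratio B/H
    (as wide-angle lenses allow) improves height precision."""
    return flying_height_m / base_m * gsd

# Hypothetical 3.3 um pixels behind an 8.8 mm lens at 60 m flying height:
gsd = gsd_cm(3.3, 8.8, 60.0)
print(round(gsd, 2))                                    # → 2.25 (cm/pixel)
print(round(height_precision_cm(gsd, 30.0, 60.0), 2))   # → 4.5 (cm)
```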
Drone vendors prefer small and light cameras, hence cameras free of LCD screens, dials and buttons, viewfinders, etc. In fact, they adopt small custom-made (OEM, Original Equipment Manufacturer) cameras, focusing on the best lens–sensor selection and optimizing them for size and weight.
Although the camera and drone should be considered as two separate pieces of equipment, each with its own characteristics, vendors dominating the recreational market have introduced combo solutions and have unified characteristics for their products, limiting users’ choices. Some drones allow for payload choices, including various RGB cameras, thermal, multispectral, hyperspectral, and LiDAR sensors [52,53,54,55,56,57], but these are aimed at specialized applications/customers.

2.3. Underwater Sensors

The underwater RGB camera is a passive sensor, although it effectively becomes active when flash or artificial lights are used. Natural sunlight is heavily attenuated with depth, and taking photos without an artificial light source becomes impracticable. Apart from the passive/active nature of the underwater RGB sensor and the limitations imposed by the environment, two more shortcomings concerning the recorded information need to be noted.
Water strongly absorbs the infrared, red, and green wavelengths (in that order, from shallow to deep), leaving a predominantly blue cast. Therefore, color accuracy cannot be ensured, even with color checkers, because the light attenuation depends on environmental parameters and on the lights-to-object-to-camera distance, which varies from pixel to pixel. As a result, intense illumination and color differences appear across underwater photos. This problem is an active field of research on haze-removal and color-restoration techniques [58]; so far, no algorithm works universally.
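To illustrate why a single global correction cannot fully solve the problem, here is a minimal gray-world white-balance sketch; it assumes the scene is neutral on average, an assumption that underwater scenes with per-pixel, depth-dependent attenuation generally violate:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so its mean matches the
    overall mean. A crude global first step; real underwater color
    restoration must model depth-dependent attenuation per pixel."""
    f = img.astype(np.float64)
    channel_means = f.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(f * gains, 0, 255).astype(np.uint8)

# A synthetic blue-cast image: red and green suppressed, blue dominant.
cast = np.zeros((4, 4, 3), np.uint8)
cast[..., 0], cast[..., 1], cast[..., 2] = 40, 80, 160
balanced = gray_world(cast)
print(balanced[0, 0])  # → [93 93 93], all channels pulled to a common mean
```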
Having the camera in a watertight enclosure means that the light travels through multiple media (water, glass, air, glass, sensor); hence, the photogrammetric principle of straight light transmission is invalid. Provided that the camera is rigidly fixed to the housing port and there are no severe misalignments, the geometric image deformations are radial and tangential about the principal point or near it. Therefore, they can be compensated for with the existing lens-distortion models, and the whole process is resolved through camera self-calibration. Dome ports are more suitable than flat ones, since the latter introduce several other deformations, such as strong chromatic aberration. After the emergence of SfM–MVS techniques, several applications for underwater heritage geometric documentation have been released [59,60,61,62,63,64].
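The radial and tangential terms mentioned above are typically expressed with the standard Brown–Conrady model, whose coefficients self-calibration estimates; a minimal sketch on normalized image coordinates (the coefficient values in the example are hypothetical):

```python
def brown_distort(x, y, k1, k2, p1, p2):
    """Apply Brown–Conrady radial (k1, k2) and tangential (p1, p2)
    distortion to normalized image coordinates, with the origin at the
    principal point. Two radial terms shown; models may use more."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With hypothetical coefficients, a point is displaced from its ideal
# position; negative k1 pulls it toward the center (barrel distortion):
print(brown_distort(0.1, 0.2, -0.25, 0.1, 0.001, 0.0005))
```

With all coefficients zero the mapping is the identity, which is what self-calibration converges away from as it fits the observed deformations.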

2.4. Aerial and Satellite Sensors

Aerial and satellite sensors have been widely used for cultural heritage [65,66,67]. Aerial photogrammetry is one of the oldest techniques for reconnaissance over extensive archaeological landscapes and heritage objects [68]. Archival aerial images are now considered of great value, as they can provide valuable information about landscapes that have been changing due to modern construction [69,70]. Similarly, the role of sensors onboard satellite platforms has increased in the last decade, mainly due to the improved capabilities of the space sector, which can now provide enhanced spatial and spectral imagery. Satellites today can provide multispectral and hyperspectral data covering approximately 380 nm to 2500 nm, while thermal sensors are becoming available at very high resolution (5 m) [71,72,73,74,75].

This entry is adapted from the peer-reviewed paper 10.3390/heritage6100357
