Assessment of Computer Vision-Based Construction Progress Monitoring Process

The progress monitoring (PM) of construction projects is an essential aspect of project control that enables stakeholders to make timely decisions for successful project delivery, yet current practices remain largely manual and document-centric. The integration of technologically advanced tools into construction practices has shown the potential to automate construction PM (CPM) through real-time data collection, analysis, and visualization for effective and timely decision making. CPM entails periodically measuring on-site progress and comparing the data with the planned schedule to determine the actual status of a construction project. Traditional CPM practices involve manual data collection, which requires human intervention and is therefore slow, error-prone, and labor-intensive. To overcome these issues, various automated CPM processes have been proposed, including, but not limited to, the use of enhanced information, geospatial, and imaging technologies. The imaging technologies comprise photogrammetry, videogrammetry, laser scanning, and range imaging. Laser scanning is a promising tool for as-built data acquisition (DAQ) owing to its accuracy; however, it requires expensive equipment, is technically complex, and needs experts to capture, model, and manipulate data for meaningful interpretation. An alternative is computer vision (CV)-based technology, which comprises photogrammetry, videogrammetry, and range imaging. The CV-based CPM process comprises four sub-processes: data acquisition (DAQ), information retrieval, progress estimation, and output visualization. Each sub-process involves various methods and techniques to achieve the desired output, each posing several benefits and limitations.

Keywords: construction progress monitoring; process assessment; computer vision
Update Date: 05 Aug 2022

    1. Introduction

    Traditional construction progress monitoring (CPM) is based on manual and labor-intensive procedures of periodically collecting, documenting, and reporting the status of a construction project [1]. These documented reports are used for project monitoring and control against the as-planned project schedule and act as an as-built record throughout the project lifecycle [2]. Accurate progress reporting keeps stakeholders informed about the state of a project, helps them make effective decisions to avoid construction delays and cost overruns by applying the required controls to slipping operations, and prepares them for managing delay claims [3]. However, traditional progress reporting practices are tedious, error-prone, slow, and often report redundant information, preventing stakeholders from making proactive decisions [4]. More than 70% of contracting firms cite poor job site coordination as the primary cause of projects running over budget and past deadlines. As a result, fewer than 30% of contractors finish projects within the planned budget and on time [5].
    Various emerging disruptive technologies are currently being explored in construction to address these issues [6]. Applications of emerging technologies in construction have shown great potential to digitalize project progress monitoring (PM) by providing the real-time status of site activities via automated capturing and reporting of site data using digital tools [7]. These technologies include barcodes, which collect real-time data regarding material, equipment, and labor for calculating project progress [4]. Similarly, Radio Frequency Identification (RFID) has been implemented to capture live information from earthmoving equipment for accurate progress estimation [8]. Ultra-Wide Band (UWB) has been implemented for material tracking and activity-based progress monitoring, especially in remote and harsh environments [9]. A more advanced technology, three-dimensional (3D) laser scanning, has been deployed to collect as-built data, transform it into 3D point clouds, and estimate construction progress by comparison with Building Information Models (BIMs) [10]. Furthermore, Augmented Reality (AR), by comparing the real environment with as-planned models, has enabled project teams to visualize progress and make necessary decisions [11]. One such advanced technological tool for automated CPM is Computer Vision (CV).
    CV is a technology-driven process that accepts inputs in the form of visual media, either photos or videos, and generates outputs as either decisions or other forms of representation [12]. CV mimics human visualization and can derive 3D objects or data from two-dimensional (2D) inputs, either photos or videos, providing the opportunity to automatically analyze captured images and measure construction progress [13][14]. Its integration into the construction field is an interdisciplinary endeavor that incorporates the computer science, architecture, construction engineering, and management disciplines. Realizing the full potential of CV-based CPM requires fully automated processes of capturing, processing, and retrieving useful information without human intervention [13][15].

    2. Data Acquisition (DAQ)

    Successful project management and delivery require control over all aspects of a project, e.g., resource usage, including labor hours, material, and equipment [8]. For efficient project control, project management teams require an accurate data collection strategy to gather data from the worksite and compare them with as-planned data, so that they stay aware of progress and can deliver the project within the planned cost and time [16]. DAQ is the first sub-process of the CV-based CPM process and refers to the collection of vision datasets as inputs for the said process. Construction projects are complex and involve hundreds of activities, which create an unstructured and complex environment [17]. Various activities are performed simultaneously at a construction site, with hundreds of laborers, pieces of equipment, and materials present at all times. A CV-based system requires an accurate vision dataset—image and video datasets are collectively called vision datasets—to identify features or create a point cloud dataset. Owing to construction site complexity, obtaining a clear vision dataset of either photos or videos for efficient DAQ is challenging [18]. Many studies have proposed DAQ techniques using various digital cameras [19][20]. The literature reveals that the following methods are being used to capture site data that can serve as input for a CV-based CPM system.

    2.1. Unmanned Aerial Vehicles (UAVs)

    A UAV is an aircraft that operates with no human pilot onboard [21]. Recently, UAVs have rapidly entered the architecture, engineering, and construction industry, and their use is expected to grow [22]. For CV-based applications, UAVs are equipped with an optical sensor or a digital camera. Modern UAVs are also equipped with a communication system to transmit the captured vision dataset in real time [23]. UAVs are quick and cost-effective and allow data collection at places inaccessible to ground-based or manned vehicles [24]. To capture an accurate vision dataset, UAVs require an expert operator and a well-planned flight path with various data capturing angles. However, modern UAVs allow a pre-planned flight path to be programmed into them, permitting a certain degree of automation in DAQ [25][26].
    Mahami et al. attempted to reduce the number of photos required to create an accurate point cloud model and experimented in a physical construction environment. A high-quality camera attached to a UAV acquired the vision dataset, from which the measurements of as-built walls were extracted to calculate the volume of work achieved. The proposed method, with data acquired through UAVs, reported 99% accuracy for the volume of completed work [27]. Similarly, Kielhauser et al. [28] attempted to estimate the cost of UAV deployment for CPM and quality management, selecting a mixed-use commercial building as a test project. The UAV was programmed for an automatic flight on a pre-determined path targeting only an external wall section and a concrete slab. The study acquired volumetric as-built data and compared it with the as-planned model to estimate the percent completion of the targeted activities. The study successfully demonstrated the use of UAVs for progress monitoring; however, it reported the untidiness and clutter of construction sites as a potential hindrance to data acquisition through UAVs. The usefulness of UAVs for acquiring data to enable the CV-based CPM process is evident; however, studies report several limitations to their adoption: use is mostly limited to external construction features, the overall process is time-consuming, and it requires expert manpower and costly equipment, making it costlier than traditional field practices [22][28][29].
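The percent-completion logic underlying such volumetric comparisons can be sketched as follows (an illustrative function, not code from the cited studies; the function name and the capping behavior are assumptions):

```python
def percent_complete(as_built_volume: float, as_planned_volume: float) -> float:
    """Estimate activity completion from measured vs. planned volume (m^3)."""
    if as_planned_volume <= 0:
        raise ValueError("as-planned volume must be positive")
    # Cap at 100% in case the as-built measurement overshoots the plan.
    return min(as_built_volume / as_planned_volume, 1.0) * 100.0

# e.g., 37.5 m^3 of wall built out of a planned 50 m^3 -> 75% complete
progress = percent_complete(37.5, 50.0)
```

The same ratio generalizes to any measurable as-built quantity (area, length, count) extracted from the point cloud.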
    Useful 3D point clouds can be generated if all features of the construction process are visible throughout the UAV’s flight path [30]. UAV-enabled vision DAQ has been compared with crane-mounted and terrestrial handheld digital cameras, showing that the UAV-enabled technique was more efficient and flexible and provided better coverage [20]. Most studies have explored and reported the benefits of UAVs for exterior construction; hence, the use of UAVs for interior CPM remains the least explored research area [30][31]. In summary, UAVs are very useful tools for capturing the vision dataset for an automated CV-based CPM process, provided that there is a good-quality digital camera, a global positioning system, a communication system, and a well-programmed automated flight path covering all possible elements of a construction project [32][33][34].

    2.2. Handheld Devices

    A handheld device is any compact and portable device that can be held, carried, or used by one or both hands. The use of handheld imaging devices, such as smartphones and digital cameras, is currently common [35]. Smartphone, digital single-lens reflex, mirrorless, film, and 360° cameras are well-known handheld devices for acquiring vision datasets. The workflow—from setting up the acquisition geometry and collecting vision datasets to transmitting the data for further processing—is manual [20]. Various studies have explored the potential of handheld devices for vision DAQ to measure construction progress based on feature detection, e.g., concrete walls, drywalls, and bricks [36][37][38]. Daily site photologs captured by handheld devices are useful for generating point clouds, identifying various construction features, and estimating construction progress [39][40][41]. For example, Golparvar-Fard [42] identified that construction site staff usually take more than 500 photos a day using several handheld, off-the-shelf cameras and utilized these unordered daily site photologs to extract useful information for comparison with as-planned data. The study successfully demonstrated the extraction of point cloud models for comparison and analysis after automatically ordering and calibrating the images and removing occlusions. However, the study focused entirely on the technical feasibility of the proposed concept rather than on the progress monitoring and tracking of various construction features. Mahami et al. [39] photographed a real construction environment and extracted the measurements of external construction features. The study used a handheld camera, and the photographer moved around the entire site taking photos at specific intervals, varying angles, and fixed orientations, making the data acquisition process entirely manual and labor-intensive.
Early research in CV-based CPM utilized unordered daily site photologs and other vision datasets captured specifically for extracting point cloud models, but the focus has since shifted towards acquiring vision datasets automatically, without human intervention. Handheld devices provide flexibility during vision DAQ to adapt to site conditions and the types of data required; however, their coverage is limited, making them less suitable for an automated CV-based CPM process.

    2.3. Fixed on Mounts

    The term fixed on mounts indicates various camera systems mounted on camera stands, poles, formworks, cranes, robots, etc., for collecting the required vision datasets. These systems can be designed to capture vision data on a short- or long-term basis. They are mounted at a specific place to capture the required elements at the desired angles. These systems are sometimes connected to a wired or wireless communication system to transmit data for further processing [19].
    Various studies have employed these systems for element recognition, 3D point cloud generation, and progress calculation [19][42][43]. A few studies have also mounted these systems on a crane to cover a large area, providing less occluded 3D point clouds for construction progress estimation [44][45]. One study [45] addressed the technical challenge of multi-building extraction and alignment of as-built point clouds, utilizing data captured by two stereo cameras installed on a tower crane on a mixed-use construction project comprising a shopping mall, hotel, housing, and offices. The researchers reported the successful acquisition of vision datasets using crane-mounted cameras and subsequent analyses to estimate construction progress. Tuttas et al. argued that, despite the effort of installing a camera on a crane jib, the associated maintenance, the limited range of motion of the crane, and its fixed position in a single plane of view, data acquisition from crane-mounted cameras can be designed as a fully automated process [20]. A self-navigating robot-mounted camera system has been explored to create 3D point clouds of a building interior, and the usefulness of such systems for various construction management purposes has been reported [46]. A fixed-on-mount vision DAQ technique can be fully automated by equipping the camera system with a pan–tilt–zoom function and programming a pre-determined coverage area into it [19][20]. Despite several proofs-of-concept, cameras fixed on mounts do not provide complete coverage of the construction site or of all construction features, leaving the resulting point clouds fragmented. Future research should explore acquiring vision datasets using multiple cameras and integrating the outputs to obtain more detailed point cloud models and, hence, a more accurate comparison between as-planned and as-built.

    2.4. Surveillance Cameras

    Surveillance cameras are video cameras installed to observe an area for multiple security and monitoring-related purposes. These camera systems transmit video and audio signals to a digital video recorder where the video data can be viewed, recorded, or processed for the required purpose.
    A few studies have attempted CV-based CPM using the video feed or video data from surveillance cameras installed on a construction site, as opposed to most available studies, which explored the viability of image data and retrieved information by image processing using various techniques [47]. For example, Wu et al. [48] recognized the work cycles of an earthmoving excavator by constructing its Stretching–Bending Sequential Patterns (SBSP). The study utilized long video sequences and recognized the complete cycle of an excavator, i.e., digging, hauling, swinging, and dumping. The approach accurately recognized the work cycle and estimated the progress of the equipment by multiplying the excavator’s capacity by the number of cycles counted. In another study [49], the authors presented a framework encompassing object detection, instance segmentation, and multiple object tracking to collect the location and temporal information of precast concrete wall installation on a construction site. However, the study reported that camera movement and the view range of the surveillance camera on a construction site significantly influence the effectiveness of the vision datasets. Video data from surveillance cameras have been successfully processed to obtain the progress of various prefabricated construction elements and the working of machinery at a construction site [47][48][49]. Surveillance cameras can be a viable DAQ technique for the automated CV-based CPM process, provided a well-planned layout is used and several cameras are installed throughout the vicinity of the construction site [47].
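The progress arithmetic described above—multiplying equipment capacity by the number of recognized work cycles—can be sketched as follows (a minimal illustration with hypothetical state labels, not the SBSP method of the cited study):

```python
from typing import List

CYCLE = ["dig", "haul", "swing", "dump"]  # one full excavator work cycle

def count_cycles(states: List[str]) -> int:
    """Count complete dig-haul-swing-dump cycles in a detected state sequence.

    Repeated detections of the same state (common with per-frame classifiers)
    are tolerated: the counter only advances when the next expected state appears.
    """
    idx = 0       # position within the expected cycle
    cycles = 0
    for s in states:
        if s == CYCLE[idx]:
            idx += 1
            if idx == len(CYCLE):
                cycles += 1
                idx = 0
    return cycles

def earth_moved(states: List[str], bucket_capacity_m3: float) -> float:
    """Progress estimate: cycles counted times excavator bucket capacity (m^3)."""
    return count_cycles(states) * bucket_capacity_m3
```

In practice the state sequence would come from a per-frame activity classifier applied to the surveillance video.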

    3. Information Retrieval

    The acquired vision datasets contain vital as-built information from the construction environment. In the construction environment, the data collected from the worksite hold significant importance as they help in analyzing and reporting the progress of the project and enable project management teams to gain valuable insights regarding the actual status of the project in terms of physical progress, earned labor hours, material consumed, equipment utilized, etc. [1]. Once the DAQ has been performed and data are transmitted or transferred to a storage medium, the next and most important sub-process is to extract useful information from the vision data. Information retrieval is performed through signal processing or, more precisely, image processing. Images from an image dataset or frames from a video dataset are the inputs, and the outputs are usually some characteristics or features associated with the inputs. For CPM, usually, the information retrieval sub-process aims to obtain an as-built model in the form of a 3D model or a 3D dataset, which is then compared with an as-planned model to estimate the progress of various activities of a construction process [50].
    The retrieved and selected studies propose various techniques to extract the required information from the acquired data. These can be grouped into four distinct categories: (1) classification, (2) edge detection, (3) quantification, and (4) object tracking [51]. In addition, each category includes several techniques to process the associated vision datasets. The key techniques are discussed below.

    3.1. Structure from Motion (SfM)

    SfM is a technique that reconstructs a 3D structure/model/point cloud using 2D images of a scene or an object. It is a photogrammetric imaging technique and lies in the quantification category along with digital image correlation [51]. The term quantification means a method of obtaining real-life measurements from a 2D image dataset [52]. SfM reconstructs 3D models by matching features in various images and estimating the relative position of a camera. The inputs are in the form of image data with recommended 60% side overlap and 80% forward overlap between images to realize high-quality and detailed 3D models or 3D point clouds as outputs [53]. This technique automatically detects and matches features from an image dataset of varying scales, angles, and orientations. Various studies have demonstrated the use of image-based reconstruction utilizing high-quality images taken from the construction environment for progress monitoring, productivity measurement, quality control, and safety management, providing the project management teams with a remarkable opportunity to visualize as-built data [27][39][54][55].
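The quantification step at the core of SfM—recovering 3D coordinates from matched 2D observations in multiple views—can be illustrated with two-view linear (DLT) triangulation; the minimal sketch below uses a synthetic camera setup (the intrinsics and poses are assumed toy values, and a full SfM pipeline would also estimate them from the images):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points (pixels)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> Euclidean

def proj(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a 1 m baseline along the x axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.3, -0.2, 5.0])   # a 3D point 5 m in front of the cameras
X_hat = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

Running this over thousands of matched features, with camera poses estimated from the matches themselves, yields the dense point clouds reported in the studies above.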
    Unordered image collections from construction sites have been used in various studies to test the effectiveness of SfM, and high accuracy of the generated 3D as-built models has been reported [42][56]. Moreover, one study [55] utilized high-quality images taken from the interior scene of a construction project to demonstrate image-based 3D reconstruction through SfM and compared it with the output of a laser scanner. The study concluded that the accuracy of the model generated from image-based reconstruction was lower than that of the laser scanner; however, the proposed approach automatically overlays high-resolution images onto 3D point cloud models, demonstrating its potential for progress monitoring through as-built visualization. Another study [39] collected several images from proper positions on two real-life construction projects, i.e., one-story and two-story residential buildings. The SfM technique was deployed to generate 3D point cloud models for the two case study projects, and quantities were calculated using the proposed technique. The study reported 99% accuracy and identified that the system becomes less accurate as the length of the building/element increases. The process of reconstructing a 3D model from an image dataset remains reliant on human intervention at various steps to improve the output quality.

    3.2. Convolutional Neural Network (CNN)

    CNN is a technique that identifies and differentiates various objects or features in an image by assigning weights and biases to them [57]. CNN is a Deep Learning (DL) algorithm and falls under the feature detection/classification category of CV-based analysis. Long Short-Term Memory (LSTM), which can analyze and obtain information from video frames, belongs to the same category. The term DL refers to Machine Learning (ML) in an artificial learning environment that is capable of learning unlabeled or unstructured data without supervision. A CNN comprises convolutional and pooling layers; usually, a pooling layer is added after a convolutional layer. The input is an image; the convolutional layer scans over the image with a matrix-based kernel and identifies features. The pooling layer then reduces the number of parameters to learn and the computations required by the network, thereby reducing the size of the feature maps, which are summarized versions of the features detected in the input [58]. In recent years, CNN-based techniques have seen further development in the construction domain [59]. Object detection and tracking have interested many researchers: unsafe behaviors were detected by tracking workers walking on formwork supports [60]; diverse construction activities were recognized to save project management teams valuable time [61]; a single worker and piece of equipment were tracked over long periods to calculate productivity [62]; and multiple workers and machinery were tracked, also for productivity estimation [63].
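The convolution and pooling operations described above can be sketched in a few lines (a minimal NumPy illustration of the layer mechanics, not a trained network; in a real CNN the kernel weights are learned, whereas here a vertical-edge kernel is hand-crafted):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation (no padding, stride 1), as in a CNN conv layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: downsamples the feature map, keeping the strongest responses."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Synthetic image: dark left half, bright right half (a crude "wall edge").
image = np.zeros((6, 6)); image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]] * 2)   # responds to dark-to-bright transitions
fmap = conv2d(image, edge_kernel)           # feature map, shape (5, 5)
pooled = max_pool(fmap)                     # summarized map, shape (2, 2)
```

The pooled map still records where the edge is, but with far fewer values, which is exactly the parameter reduction the text describes.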
    A few researchers have used CNNs to monitor the progress of construction machinery and the installation of various prefabricated components [48][49][64]. In a recent study [49], the installation of precast concrete walls was monitored by detecting and tracking individual wall panels in the video feed from a surveillance camera installed on a construction site. This vision method was designed to obtain two types of information, i.e., time information and location information. The study reported the usability of such algorithms for CPM purposes and directed further research towards extending the technique to detect other construction features as well. Similarly, another study [48] combined CNN techniques to identify the work cycle of an earthmoving excavator from long video sequences. It demonstrated the feasibility of counting the stretching–bending cycles of the excavator to estimate the quantity of earth moved during the overall operation. However, the proposed technique was relatively simple, and further research was directed towards exploring the viability of such techniques for accurate measurement of work cycles and, hence, accurate measurement of progress. The CNN process requires pre-training of the algorithm to efficiently identify various features from the input and can automate the entire process.

    3.3. Support Vector Machines (SVM)

    SVM is a technique that classifies the features or information in an image by assigning positive and negative values to features across a hyperplane. SVM is a classifier and lies in the classification/feature detection category. Unlike Artificial Neural Networks (ANNs), SVM is a supervised ML technique highly regarded for two-group classification with a high degree of accuracy; however, multigroup classification can be achieved by dividing a problem into several two-group classification problems [65]. The input is an image, and a pre-trained SVM classifier performs a binary analysis, classifying various features by drawing a hyperplane between two groups.
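The hyperplane idea can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss (a toy sketch on hypothetical 2D feature vectors; real implementations would use an established library and features extracted from images):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM trained with sub-gradient descent on the hinge loss.
    y must be in {-1, +1}; returns weights w and bias b of the hyperplane."""
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                          # inside margin: push it out
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                   # correct side: only regularize
                w = (1 - lr * lam) * w
    return w, b

# Hypothetical "feature present vs. absent" 2D feature vectors (illustrative).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)   # classify by which side of the hyperplane
```

Multigroup classification, as the text notes, is built by combining several such binary classifiers (one-vs-rest or one-vs-one).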
    Various studies have implemented SVM to detect various construction materials and estimate project progress [66][67]. For example, the authors of [67] inferred the construction activity of girder launching for a rail project. The study utilized the structural responses collected from the girder launching equipment and identified the exact state of a girder, i.e., auto launching, segment lifting, post-tensioning, and span lowering. However, this was a demonstration of such techniques; the study highlighted the limitation of relying only on structural responses and directed future studies towards integrating more sensors for accurate feedback. Another study [38] investigated the installation of drywall using a video feed from an interior construction environment. The progress of drywall installation was measured based on the identification of three different states of a drywall panel during installation, i.e., installation, plastering, and painting. The SVM was trained with the extracted features to demonstrate the success of the proposed technique. The learning of SVM can be significantly improved using the k-nearest neighbor algorithm [68]; however, few studies have tested its performance on a real-world construction site with a great degree of uncertainty, occlusion, and variability.

    3.4. Simultaneous Localization and Mapping (SLAM)

    SLAM is a technique of reconstructing or updating a 3D map of an unknown location while navigating through it [69]. SLAM is similar to SfM; however, SLAM maps an environment in real time. Similar to SfM, SLAM is a photogrammetric technique and is in the quantification category. SLAM learns by moving around an environment and searching for known features, which can be achieved by moving around in the environment once or multiple times. The inputs are in the form of images obtained from video frames and the outputs are in the form of feature points [46].
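A much-simplified sketch of the mapping half of SLAM—fusing repeated landmark observations into a world-frame map—is shown below, under the strong assumption that the camera poses are already known (real SLAM estimates poses and map jointly; all names and values here are illustrative):

```python
import numpy as np

def update_map(landmark_map, pose, observations):
    """Fuse landmark observations (robot frame) into a world-frame map.

    pose: (x, y, theta) of the sensor in the world frame.
    observations: {landmark_id: (lx, ly)} measured in the sensor frame.
    Repeated sightings of the same landmark are averaged incrementally.
    """
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    for lid, obs in observations.items():
        world = R @ np.asarray(obs) + np.array([x, y])
        if lid in landmark_map:
            n, est = landmark_map[lid]
            # incremental mean: new_est = est + (world - est) / (n + 1)
            landmark_map[lid] = (n + 1, est + (world - est) / (n + 1))
        else:
            landmark_map[lid] = (1, world)
    return landmark_map

m = {}
update_map(m, (0.0, 0.0, 0.0), {"col1": (2.0, 1.0)})   # see a column from origin
update_map(m, (1.0, 0.0, 0.0), {"col1": (1.0, 1.0)})   # see it again after moving 1 m
```

Consistent world-frame estimates from different poses are what allow the map to accumulate as the sensor moves through the site once or multiple times.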
    A preliminary study [70] investigated the effectiveness of SLAM and reported its potential application in tracking construction equipment. The pilot study demonstrated real-time 3D reconstruction of a construction environment by utilizing visual SLAM and a UAV, and discussed the use of the proposed technique for three different purposes: calculating the volume of earthwork between two instances, measuring the progress of pavement compaction by tracking the equipment on a job site, and tracking site assets, e.g., labor, equipment, and material. The study proposed a primitive SLAM algorithm and highlighted various limitations: limited performance in complex construction environments, the limited sensing range of visual sensors, memory management, and the difficulty of maneuvering UAVs through construction worksites. Despite this demonstration, the technique remains little explored for construction progress estimation, and its effectiveness is subject to further research in this domain.

    3.5. Cascading Classifiers (CC)

    CC is a technique for detecting and tracking an object or a feature in an image and lies in the classification/detection category [71]. It is an ML-based approach in which the classifier is trained by inputting many positive and negative images. The positive images are those intended to be recognized by the classifier; all others are negative. The inputs are images from a construction environment; a pre-trained CC identifies various features from the dataset and indicates or highlights them on the input images. The accuracy of this technique depends on a detailed algorithm and pre-training with a well-sorted image dataset.
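The early-rejection structure that defines a cascade can be sketched generically (the stages and the "drywall panel" window features below are purely illustrative assumptions, not a trained Haar cascade):

```python
def make_cascade(stages):
    """Build a cascade: a candidate window passes only if every stage accepts it.

    Each stage is a cheap test; most negative windows are rejected by the first
    stages, so later (more expensive) stages rarely run -- the property that
    makes Haar-like cascades fast in practice.
    """
    def classify(window):
        for stage in stages:
            if not stage(window):
                return False      # rejected early, later stages never run
        return True
    return classify

# Toy stages for a hypothetical "drywall panel" candidate window:
stages = [
    lambda w: w["mean_brightness"] > 0.5,   # stage 1: panels are bright
    lambda w: w["edge_density"] < 0.2,      # stage 2: panel surfaces are smooth
    lambda w: w["aspect_ratio"] > 1.5,      # stage 3: panels are elongated
]
detect = make_cascade(stages)

panel = {"mean_brightness": 0.8, "edge_density": 0.1, "aspect_ratio": 2.0}
clutter = {"mean_brightness": 0.9, "edge_density": 0.6, "aspect_ratio": 2.0}
```

In a real Haar cascade each stage is itself a boosted combination of simple rectangle features learned from the positive and negative training images.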
    A few studies have attempted to use CC in progress monitoring by detecting construction features such as drywall or concrete walls and have reported good performance [38][72]. For example, [72] attempted to automate progress monitoring for an interior construction environment, focusing on visualization and computer vision techniques through an object-based approach. In the proposed approach, the study compared the as-built BIM and as-planned images in a 3D walkthrough model. A rapid object detection scheme based on the Haar-like cascading classifier was deployed to detect features from the acquired vision dataset. The cascading classifier was first trained to detect specific construction features from the images using a couple of hundred positive and negative samples. However, the proposed algorithm was limited to specific construction features, and the study suggested that detecting multiple features in a complex construction environment requires further research towards modifying and improving such algorithms. The supervised training this technique requires makes it less desirable for a fully automated CV-based CPM process.

    3.6. Histogram of Oriented Gradients (HoG)

    HoG is a feature descriptor used for object detection. HoG is a feature extraction technique and lies in the classification/detection category. HoG identifies features in an image by returning a descriptor for each cell that it creates when an input image is given to the algorithm. Each input is decomposed into small cells or blocks, and the algorithm computes the HoG by counting gradient-orientation occurrences in each cell or block, returning the detection of the various features present in an image. This technique accurately detects various construction features by focusing on their shapes [73]. Detection methods that rely on visual features, e.g., shape and color, have been proposed and tested in construction scenarios. The HoG feature is one of the two most popular shape-based features used to detect construction workers and equipment [74].
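The per-cell histogram computation at the heart of HoG can be sketched as follows (a minimal NumPy version without the block normalization used in full HoG implementations; parameter values are the common defaults, assumed here):

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Histogram of oriented gradients for one grayscale image cell."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)                              # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    hist = np.zeros(n_bins)
    bins = (ang / (180 / n_bins)).astype(int) % n_bins
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                    # vote weighted by magnitude
    return hist

def hog_descriptor(image, cell_size=8, n_bins=9):
    """Concatenate per-cell histograms into one shape descriptor."""
    h, w = image.shape
    cells = [hog_cell(image[i:i + cell_size, j:j + cell_size], n_bins)
             for i in range(0, h - cell_size + 1, cell_size)
             for j in range(0, w - cell_size + 1, cell_size)]
    return np.concatenate(cells)

# A 16x16 horizontal intensity ramp: every gradient points the same way,
# so all votes land in the first orientation bin of each cell.
desc = hog_descriptor(np.tile(np.arange(16.0), (16, 1)))
```

The resulting descriptor (here 2×2 cells × 9 bins = 36 values) is what a classifier such as an SVM consumes to recognize workers or equipment by shape.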
    A few studies have explored the effectiveness of the HoG technique in CPM using CV-based datasets by identifying and tracking construction workers and equipment [71][75]. For example, [75] attempted to automate the estimation of earthmoving progress by monitoring the movement of dump trucks on large-scale construction projects. The study evaluated a combination of HoG algorithms to recognize off-highway dump trucks in a noisy video stream. It effectively demonstrated the ability of HoG algorithms to detect truck activity in an effective and timely manner and presented its usefulness for productivity measurement, performance control, and other safety-related applications on large-scale civil construction projects. The combination of HoG with tracking techniques has been reported to yield very precise detection and tracking of workers and equipment on a construction site for CPM purposes [74]. However, very few studies have explored this technique, and further research is required to ascertain its effectiveness in CPM-related applications.

    3.7. Laplacian of Gaussian (LoG)

    LoG is a well-known algorithm for detecting edges and is widespread in image processing and CV applications. The Laplacian algorithm is used to detect edges but is sensitive to noise; therefore, a Gaussian filter is commonly applied to images to remove noise, yielding LoG as their combination [76]. LoG lies in the edge detection category. LoG filters are derivative filters that work by detecting the rapid changes in an image. They detect objects and boundaries and extract features.
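The smooth-then-differentiate idea of LoG can be sketched with a hand-built kernel applied to a synthetic image (illustrative kernel size and sigma; production code would use an image processing library):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel: Gaussian smoothing folded
    into the Laplacian, so noise is suppressed before differentiation."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()          # zero-sum: flat regions give zero response

def convolve(image, kernel):
    """Valid convolution, stride 1 (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "mortar joint": a dark vertical band in a bright wall image.
wall = np.ones((20, 20)); wall[:, 9:11] = 0.0
response = convolve(wall, log_kernel())
# The response is near zero in flat regions and changes sign around the
# band; the zero crossings mark the edges.
```

Finding those zero crossings is the edge-localization step that brick-counting approaches build on.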
    CPM by counting bricks has been attempted using LoG, with relatively high precision values reported [36][77]. Hui and Brilakis [77] attempted to automatically count the number of bricks ordered vs. consumed to eliminate manual surveys for this purpose. The study proposed a novel, automated method to count bricks during the facade construction of a building. The method used images and videos from the construction site and selected color thresholds based on the color of the bricks. LoG was deployed to detect the edges of the bricks in the constructed wall, and known features, i.e., shape and size, were compared to accurately count the bricks. However, the implementation of LoG in CPM remains one of the least explored areas. The technique requires various manual steps to achieve the desired accuracy, making it a less desirable option for automating the entire CV-based CPM process.
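    The LoG filter itself is easy to construct: it is the Laplacian of a 2D Gaussian, so smoothing and differentiation happen in a single convolution. A minimal sketch (kernel construction and a naive convolution for demonstration; sigma and size values are illustrative, not taken from the cited brick-counting studies):

```python
import numpy as np

def log_kernel(sigma=1.4, size=9):
    """Laplacian-of-Gaussian kernel: Gaussian smoothing and Laplacian in one filter."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2 * sigma**2))
    k = (r2 - 2 * sigma**2) / sigma**4 * g
    return k - k.mean()  # zero-sum kernel: flat image regions give zero response

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution (kernel is symmetric, so flip is unneeded)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out
```

    Edges are then located at the zero crossings of the filter response, which is how brick boundaries would be delineated before comparing their shape and size.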

    3.8. Speeded-Up Robust Features (SURF)

    The SURF technique is a template matching technique that detects features in an image; it is a feature extraction/detection technique that lies in the classification category [78]. It can be used for object recognition, image registration, and 3D point cloud generation. SURF approximates its operators with box filters, which enables fast computation and thereby allows real-time object detection and tracking.
    A recent study [47] attempted CPM of prefabricated timber construction using surveillance cameras and reported near real-time monitoring. It proposed an automated installation rate measurement system using inexpensive digital cameras installed on the mast of a tower crane. Time-lapse footage of the construction sequence was processed and analyzed for precise progress information. The study also successfully demonstrated the ability of SURF to align images and remove view differences resulting from wind and tower crane vibrations, and reported a 95% accuracy rate in detecting timber panels during the observation period. It further directed future research efforts towards the proper setup of equipment to ensure a minimum level of noise in the footage, and towards algorithm improvements for multiple camera feeds. The process of point cloud generation and registration can also be enhanced using this technique [46]. Implementing this technique in CV-based CPM could automate the whole process; however, assessing its benefits in achieving the intended targets requires further research.
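    The key to SURF's speed is the integral image (summed-area table), which lets any rectangular box filter be evaluated in constant time from four table lookups, regardless of filter size. A minimal sketch of that mechanism (our own illustrative functions, not the full SURF detector):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img.astype(float).cumsum(axis=0).cumsum(axis=1),
                  ((1, 0), (1, 0)))  # zero row/column on top and left

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

    SURF combines such box sums into approximations of Gaussian second derivatives to find interest points; in practice the full detector is typically accessed through a library implementation (e.g., OpenCV's contrib modules) rather than re-implemented.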

    4. Progress Estimation

    Progress estimation is the process of determining whether construction execution is proceeding according to a pre-planned or baseline schedule. In CV-based CPM, this process can also be termed the comparison between as-built and as-planned. This comparison indicates whether the intended construction activities are being executed according to the schedule, upon which construction managers can take necessary actions to keep the project on track and avoid construction delays. Distinct techniques have been proposed in the retrieved and selected research to obtain the necessary progress status of various construction activities, chiefly by comparing as-built and as-planned models.

    4.1. Building Information Models (BIMs) Registration

    Building information modeling is a process of creating and managing digital representations of any built entity in a highly collaborative environment. BIMs are highly intelligent, data-rich, and object-oriented models; they not only represent various objects and spaces of buildings but also contain knowledge of how these objects and spaces relate [79]. These qualities make BIMs effective as-planned models for the as-built vs. as-planned comparison used to estimate progress [80]. Usually, 4D BIMs, i.e., 3D models integrated with the fourth dimension of time, are used as as-planned models to be compared with superimposed as-built models [81]. For automated CV-based CPM, many studies have attempted various techniques for acquiring 3D point clouds or as-built models and reported the intended results by comparing as-built models with as-planned BIMs [82][83][84][85].
    The process of superimposing an as-built model over an as-planned model is called registration. The registration process requires post-processing of the acquired CV-based data to remove noise. There are two distinct methods of model registration: coarse registration and fine registration. Coarse registration along with post-processing allows for rough alignment, whereas fine registration can achieve near-optimal alignment [45]. Coarse registration can be achieved through various approaches, such as plane-based matching, principal component analysis-based alignment [86], plane patch-based matching, 3D to 2D transformation [87], and building extraction and alignment for multi-building point clouds [45]. For example, [82] proposed a semi-automated plane-based coarse registration approach, compared it with existing general-purpose registration software, and reduced the complexities and time requirements associated with the process. The system addressed self-similarities at the object and model level through a semi-automated matching stage and demonstrated resilience and robustness in challenging registration cases. Plane-based registration finds matching planes in the as-built and as-planned datasets and aligns both models [82]. The plane patch-based system, however, is the state of the art in current practice, allowing for automatic registration using a 4-plane rather than a 3-plane approach as in the plane-based process [83]. Moreover, the Iterative Closest Point (ICP) algorithm is most frequently used for fine registration in CV-based CPM studies [45][82][86][87]. For example, [88] proposed a fully automated registration of 3D data to a 3D CAD model for CPM purposes.
The study deployed a two-step global-to-local registration procedure, i.e., principal component analysis-based global registration followed by ICP-based local registration, and demonstrated that the proposed technique not only fully automates the process but is also beneficial for project progress monitoring. In conclusion, a few studies have proposed semi-automated and fully automated registration techniques and directed further research efforts towards exploring the usability of these techniques for CPM in real construction environments.
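    The ICP fine-registration step alternates between matching each as-built point to its nearest as-planned counterpart and solving for the rigid transform that best aligns the matched pairs. A minimal NumPy sketch (brute-force nearest neighbours and an SVD-based Kabsch solve; real pipelines use k-d trees, outlier rejection, and point-to-plane variants):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Iterative Closest Point: alternate nearest-neighbour matching and alignment."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in dst for every point in cur.
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    # Composite transform from the original source to the aligned cloud.
    R, t = best_rigid_transform(src, cur)
    return R, t, cur
```

    ICP converges to a local minimum, which is why the coarse registration step described above is needed to provide a good initial alignment.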

    4.2. Progress Estimation through Object Recognition/Matching

    The progress estimation process requires the recognition of various features or objects present in the built environment following model registration. Merely superimposing the as-built and as-planned datasets does not provide useful information; the construction features in the as-built point cloud must be properly recognized, identified, or classified and then compared against the information available in the object-based as-planned BIMs.
    Many scholars have proposed techniques to detect, classify, and recognize various construction features, such as walls, panels, tiles, ducts, doors, windows, and furniture [49][89][90]. A few of these techniques are Mask R-CNN [49], DeepSORT [49], Voxel [42], OpenGL [72], probabilistic models [91], PointNet [52], surface-based recognition [82], timestamps [49], point calculation [92], segmentation by color thresholding [43], color images [17], etc. These techniques retrieve useful information from object-based models, from which progress is estimated. Apart from as-built vs. as-planned comparison, a few studies have also estimated progress based on counting equipment cycles [48], material classification [41], material usage [38][93], and installation speed [47].
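    Once elements are recognized, the comparison itself can be quite simple: each scheduled element is checked against the set of elements detected as-built. A minimal sketch of that bookkeeping step (element identifiers, dates, and status strings are illustrative, not from any cited study):

```python
from datetime import date

def progress_report(as_planned, as_built_ids, today):
    """Compare recognized as-built elements against the as-planned schedule.

    as_planned: {element_id: planned completion date}
    as_built_ids: set of element ids recognized in the as-built point cloud
    Returns a per-element status dict and the overall fraction complete.
    """
    report = {}
    for eid, planned in as_planned.items():
        if eid in as_built_ids:
            report[eid] = "ahead of schedule" if today < planned else "on schedule"
        else:
            report[eid] = "behind schedule" if today > planned else "not yet due"
    done = sum(1 for eid in as_planned if eid in as_built_ids)
    return report, done / len(as_planned)
```

    In a full pipeline the element identifiers and planned dates would come from the 4D BIM, and the as-built set from the recognition techniques listed above.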

    5. Output Visualization

    Output visualization corresponds to the presentation of useful information or results obtained from the information retrieval or progress estimation. In the CV-based CPM process, output visualization is as essential as DAQ, information retrieval, and progress estimation. The results of this sub-process are crucial to CMTs, as they must make decisions based on the output extracted from the entire process. Traditionally, CMTs use reports, Gantt charts, or other visual techniques. The literature on the output of the CV-based construction management process suggests a few visualization techniques, which are discussed as follows.

    5.1. Color Labels

    Color labels are the most frequently used form for representing information on a vision dataset. In CV environments, these labels are typically drawn as bounding boxes. They can convey a range of information depending on the purpose of the algorithm or process; the output shown by these labels can be classification, identification, segmentation, verification, detection, recognition, etc. [45][94][95].
    A few studies have superimposed various forms of color labels on input images to visualize the current state of the construction activities under consideration [42][56][77]. For example, in [56] the researchers used color labels for performance monitoring: a color label was assigned to each construction component to indicate whether that component was built ahead of schedule (green label), on time (semitransparent white label), or behind schedule (red label). Further color variations were also suggested to annotate other factors, e.g., a darker blue label to indicate that a component had not been built as planned. Similarly, another study [42] color-coded construction elements to indicate whether each element was behind or on schedule: green labels annotated elements on schedule, red labels were assigned to elements behind schedule, and grey labels indicated elements whose progress had not been reported. However, the proposed thresholds were tailored to specific construction elements and require further research to prove significant in different cases [44]. The size, shape, and description of these color labels depend on the selected technique or algorithm and can be modified as per project requirements.
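    Rendering such status labels is a small overlay step at the end of the pipeline. A minimal sketch that draws a status-colored bounding box onto an RGB image array (the color scheme follows the green/red/grey convention described above; exact RGB values and the function name are our own):

```python
import numpy as np

# Illustrative RGB values for the labelling convention described above.
STATUS_COLOURS = {
    "on_schedule": (0, 255, 0),       # green
    "behind_schedule": (255, 0, 0),   # red
    "not_reported": (128, 128, 128),  # grey
}

def draw_label(image, box, status):
    """Draw a 2-pixel-thick bounding box coloured by progress status (in place)."""
    r0, c0, r1, c1 = box
    colour = STATUS_COLOURS[status]
    image[r0:r0+2, c0:c1] = colour   # top edge
    image[r1-2:r1, c0:c1] = colour   # bottom edge
    image[r0:r1, c0:c0+2] = colour   # left edge
    image[r0:r1, c1-2:c1] = colour   # right edge
    return image
```

    In practice the box coordinates come from the detection stage and the status from the progress estimation stage, so visualization reuses outputs already computed upstream.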

    5.2. Augmented Reality (AR) and Virtual Reality (VR)

    AR is an interactive experience of the physical world with useful information overlaid onto the video feed for multiple purposes and operations [96][97]. Output visualization of CV-based CPM can also be enabled by AR/VR after processing vision datasets to extract construction progress status. Some studies have explored the use of AR by linking it with processed BIMs for monitoring construction progress [72][98]. An object-based interior CPM approach was proposed by [72], utilizing common as-built construction photographs and displaying interior construction progress by imposing color and pattern coding based on the actual status. The study reported difficulty with object detection and classification in interior construction environments and directed future studies to improve the algorithm to automatically detect various types of interior objects without manual human intervention. Another study [98] proposed a real-time AR-based system for modular CPM; the system demonstrated an automatic AR registration method based on relative coordinates and a fixed camera and successfully presented a live animation of the construction sequence. However, the study was conducted in a controlled lab environment using a simple mockup of a building. In summary, AR-based visualization requires accurate alignment of BIMs with real-world data, which calls for sophisticated surveying equipment; an alternative approach is to install fiduciary markers to locate and estimate the exact position. Mobile-based AR systems can provide the required accuracy along with construction status, but further research is needed before they can be implemented for CV-based CPM [8].

    5.3. Earned Value Management (EVM)

    EVM is a project monitoring and control technique that integrates cost, time, and scope to calculate project performance [99]. It requires as-planned and actual information on all three constraints to calculate schedule and cost variances, and provides schedule and cost performance indexes as an alternative means of assessing project performance. EVM is a valued output for CMTs as it enables them to assess the current state of a project and make necessary decisions to keep the project on track.
    A recent study has suggested that the output of CV-based monitoring of construction projects can be integrated with EVM systems [31]. An automated calculation of EVM indicators can provide necessary project control information to identify potential delays and make useful decisions to control construction delays and cost overruns [31]. However, the retrieved literature does not provide practical evidence of EVM-based output from CV-based CPM.
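    The standard EVM indicators are simple arithmetic on three quantities: planned value (PV), earned value (EV), and actual cost (AC). A minimal sketch of the calculation that a CV-based pipeline would feed with automatically measured EV:

```python
def evm_indicators(pv, ev, ac):
    """Standard EVM metrics from planned value, earned value, and actual cost."""
    return {
        "SV": ev - pv,    # schedule variance (negative: behind schedule)
        "CV": ev - ac,    # cost variance (negative: over budget)
        "SPI": ev / pv,   # schedule performance index (< 1: behind schedule)
        "CPI": ev / ac,   # cost performance index (< 1: over budget)
    }
```

    For example, a project with PV = 100, EV = 80, and AC = 90 (in any consistent currency unit) yields SV = -20 and CPI below 1, signalling both a schedule slip and a cost overrun.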

    References

    1. Hegazy, T. Computer-Based Construction Project Management. Available online: https://www.pearson.ch/HigherEducation/Pearson/EAN/9781292027128/Computer-Based-Construction-Project-Management-Pearson-New-International-Edition (accessed on 14 October 2021).
    2. Son, H.; Bosché, F.; Kim, C. As-built data acquisition and its use in production monitoring and automated layout of civil infrastructure: A survey. Adv. Eng. Inform. 2015, 29, 172–183.
    3. El-Sabek, L.M.; McCabe, B.Y. Coordination Challenges of Production Planning in the Construction of International Mega-Projects in The Middle East. Int. J. Constr. Educ. Res. 2017, 14, 118–140.
    4. Navon, R.; Sacks, R. Assessing research issues in Automated Project Performance Control (APPC). Autom. Constr. 2007, 16, 474–484.
    5. Wolfe, S., Jr. 2020 Construction Survey: Contractors Waste Time & Get Paid Slowly. Available online: https://www.levelset.com/blog/2020-report-construction-wasted-time-slow-payment/ (accessed on 5 April 2022).
    6. Manfren, M.; Tagliabue, L.C.; Cecconi, F.R.; Ricci, M. Long-Term Techno-Economic Performance Monitoring to Promote Built Environment Decarbonisation and Digital Transformation—A Case Study. Sustainability 2022, 14, 644.
    7. Omar, T.; Nehdi, M.L. Data acquisition technologies for construction progress tracking. Autom. Constr. 2016, 70, 143–155.
    8. El-Omari, S.; Moselhi, O. Data acquisition from construction sites for tracking purposes. Eng. Constr. Arch. Manag. 2009, 16, 490–503.
    9. Cheng, T.; Mantripragada, U.; Teizer, J.; Vela, P.A. Automated Trajectory and Path Planning Analysis Based on Ultra Wideband Data. J. Comput. Civ. Eng. 2012, 26, 151–160.
    10. Bosché, F.; Guillemet, A.; Turkan, Y.; Haas, C.T.; Haas, R. Tracking the Built Status of MEP Works: Assessing the Value of a Scan-vs-BIM System. J. Comput. Civ. Eng. 2014, 28, 05014004.
    11. Ibrahim, Y.M.; Kaka, A.P.; Aouad, G.; Kagioglou, M. As-built Documentation of Construction Sequence by Integrating Virtual Reality with Time-lapse Movies. Arch. Eng. Des. Manag. 2008, 4, 73–84.
    12. Bradski, G.; Kaehler, A. Learning OpenCV: Computer Vision with the OpenCV Library; O’Reilly Media, Inc.: Newton, MA, USA, 2008.
    13. Zhang, X.; Bakis, N.; Lukins, T.C.; Ibrahim, Y.M.; Wu, S.; Kagioglou, M.; Aouad, G.; Kaka, A.P.; Trucco, E. Automating progress measurement of construction projects. Autom. Constr. 2009, 18, 294–301.
    14. Fisher, R.B.; Breckon, T.P.; Dawson-Howe, K.; Fitzgibbon, A.; Robertson, C.; Trucco, E.; Williams, C.K.; Williams, I. Dictionary of Computer Vision and Image Processing. In Dictionary of Computer Vision and Image Processing; John Wiley: New York, NY, USA, 2016.
    15. Omar, H.; Mahdjoubi, L.; Kheder, G. Towards an automated photogrammetry-based approach for monitoring and controlling construction site activities. Comput. Ind. 2018, 98, 172–182.
    16. Atkinson, R. Project management: Cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. Int. J. Proj. Manag. 1999, 17, 337–342.
    17. Hwang, N.; Son, H.; Kim, C. Is Color an Intrinsic Property of Construction Object’s Representation? Evaluating Color-Based Models to Detect Objects by Using Data Mining Techniques. In Proceedings of the 29th International Symposium of Automation and Robotics in Construction, Eindhoven, The Netherlands, 26–29 June 2012.
    18. Hamledari, H.; McCabe, B. Automated Visual Recognition of Indoor Project-Related Objects: Challenges and Solutions. In Proceedings of the 2016 Construction Research Congress, San Juan, Puerto Rico, 31 May–2 June 2016; pp. 2573–2582.
    19. Kim, C.; Kim, B.; Kim, H. 4D CAD model updating using image processing-based construction progress monitoring. Autom. Constr. 2013, 35, 44–52.
    20. Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U. Evaluation of Acquisition Strategies for Image-Based Construc-tion Site Monitoring. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS-2016), Prague, Czech Republic, 12–19 July 2016; Copernicus Publications on behalf of ISPRS: Prague, Czech Republic, 2016.
    21. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
    22. de Melo, R.R.S.; Costa, D.B.; Álvares, J.S.; Irizarry, J. Applicability of unmanned aerial system (UAS) for safety inspection on construction sites. Saf. Sci. 2017, 98, 174–185.
    23. Taj, G.; Anand, S.; Haneefi, A.; Kanishka, R.P.; Mythra, D. Monitoring of Historical Structures using Drones. IOP Conf. Ser. Mater. Sci. Eng. 2020, 955, 012008.
    24. Gheisari, M.; Esmaeili, B. Unmanned Aerial Systems (UAS) for Construction Safety Applications. Construction Research Congress 2016: Old and New Construction Technologies Converge in Historic San Juan. In Proceedings of the 2016 Construction Research Congress, CRC, San Juan, Puerto Rico, 31 May–2 June 2016; pp. 2642–2650.
    25. Ibrahim, A.; Golparvar-Fard, M.; El-Rayes, K. Metrics and methods for evaluating model-driven reality capture plans. Comput. Civ. Infrastruct. Eng. 2021, 37, 55–72.
    26. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
    27. Mahami, H.; Nasirzadeh, F.; Ahmadabadian, A.H.; Esmaeili, F.; Nahavandi, S. Imaging network design to improve the automated construction progress monitoring process. Constr. Innov. 2019, 19, 386–404.
    28. Kielhauser, C.; Manzano, R.R.; Hoffman, J.J.; Adey, B.T. Automated Construction Progress and Quality Monitoring for Commercial Buildings with Unmanned Aerial Systems: An Application Study from Switzerland. Infrastructures 2020, 5, 98.
    29. Álvares, J.S.; Costa, D.B. Literature Review on Visual Construction Progress Monitoring Using Unmanned Aerial Vehicles. In Proceedings of the 26th Annual Conference of the International Group for Lean Construction: Evolving Lean Construction Towards Mature Production Management Across Cultures and Frontiers, Chennai, India, 18–22 July 2018.
    30. Braun, A.; Borrmann, A. Combining inverse photogrammetry and BIM for automated labeling of construction site images for machine learning. Autom. Constr. 2019, 106, 102879.
    31. Ekanayake, B.; Wong, J.K.-W.; Fini, A.A.F.; Smith, P. Computer vision-based interior construction progress monitoring: A literature review and future research directions. Autom. Constr. 2021, 127, 103705.
    32. Han, K.K.; Golparvar-Fard, M. Potential of big visual data and building information modeling for construction performance analytics: An exploratory study. Autom. Constr. 2017, 73, 184–198.
    33. Han, K.; Degol, J.; Golparvar-Fard, M. Geometry- and Appearance-Based Reasoning of Construction Progress Monitoring. J. Constr. Eng. Manag. 2018, 144, 04017110.
    34. McCabe, B.Y.; Hamledari, H.; Shahi, A.; Zangeneh, P.; Azar, E.R. Roles, Benefits, and Challenges of Using UAVs for Indoor Smart Construction Applications. In Proceedings of the Congress on Computing in Civil Engineering, Seattle, Washington, DC, USA, 25–27 June 2017.
    35. Jeon, S.; Hwang, J.; Kim, G.J.; Billinghurst, M. Interaction Techniques in Large Display Environments Using Hand-Held Devices. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Limassol, Cyprus, 1–3 November 2006.
    36. Hui, L.; Park, M.-W.; Brilakis, I. Automated Brick Counting for Façade Construction Progress Estimation. J. Comput. Civ. Eng. 2015, 29, 04014091.
    37. Son, H.; Kim, C.; Kim, C. Automated Color Model–Based Concrete Detection in Construction-Site Images by Using Machine Learning Algorithms. J. Comput. Civ. Eng. 2012, 26, 421–433.
    38. Kropp, C.; Koch, C.; König, M. Drywall State Detection in Image Data for Automatic Indoor Progress Monitoring. In Proceedings of the 2014 International Conference on Computing in Civil and Building Engineering, Orlando, FL, USA, 23–25 June 2014; pp. 347–354.
    39. Mahami, H.; Nasirzadeh, F.; Ahmadabadian, A.H.; Nahavandi, S. Automated Progress Controlling and Monitoring Using Daily Site Images and Building Information Modelling. Buildings 2019, 9, 70.
    40. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Monitoring of Construction Performance Using Daily Progress Photograph Logs and 4d As-Planned Models. In Proceedings of the 2009 ASCE International Workshop on Computing in Civil Engineering, Austin, TX, USA, 24–27 June 2009; Volume 346, pp. 53–63.
    41. Han, K.K.; Golparvar-Fard, M. Appearance-based material classification for monitoring of operation-level construction progress using 4D BIM and site photologs. Autom. Constr. 2015, 53, 44–57.
    42. Golparvar-Fard, M.; Pena-Mora, F.; Savarese, S. Monitoring changes of 3D building elements from unordered photo collections. In Proceedings of the IEEE International Conference on Computer Vision, Washington, DC, USA, 6–13 November 2011; pp. 249–256.
    43. Son, H.; Kim, C. 3D structural component recognition and modeling method using color and 3D data for construction progress monitoring. Autom. Constr. 2010, 19, 844–854.
    44. Borrmann, A.; Stilla, U. Automated Progress Monitoring Based on Photogrammetric Point Clouds and Precedence Relationship Graphs. In Proceedings of the 32nd ISARC, Oulu, Finland, 15–18 June 2015; pp. 1–7.
    45. Masood, M.K.; Aikala, A.; Seppänen, O.; Singh, V. Multi-Building Extraction and Alignment for As-Built Point Clouds: A Case Study With Crane Cameras. Front. Built Environ. 2020, 6, 581295.
    46. Kim, P.; Chen, J.; Kim, J.; Cho, Y.K. SLAM-driven intelligent autonomous mobile robot navigation for construction applications. In Workshop of the European Group for Intelligent Computing in Engineering; Springer: Cham, Switzerland, 2018; pp. 254–269.
    47. Fini, A.A.F.; Maghrebi, M.; Forsythe, P.J.; Waller, T.S. Using existing site surveillance cameras to automatically measure the installation speed in prefabricated timber construction. Eng. Constr. Arch. Manag. 2021, 29, 573–600.
    48. Wu, Y.; Wang, M.; Liu, X.; Wang, Z.; Ma, T.; Xie, Y.; Li, X.; Wang, X. Construction of Stretching-Bending Sequential Pattern to Recognize Work Cycles for Earthmoving Excavator from Long Video Sequences. Sensors 2021, 21, 3427.
    49. Wang, Z.; Zhang, Q.; Yang, B.; Wu, T.; Lei, K.; Zhang, B.; Fang, T. Vision-Based Framework for Automatic Progress Monitoring of Precast Walls by Using Surveillance Videos during the Construction Phase. J. Comput. Civ. Eng. 2021, 35, 04020056.
    50. Chen, J.; Kira, Z.; Cho, Y.K. Deep Learning Approach to Point Cloud Scene Understanding for Automated Scan to 3D Reconstruction. J. Comput. Civ. Eng. 2019, 33, 04019027.
    51. Mostafa, K.; Hegazy, T. Review of image-based analysis and applications in construction. Autom. Constr. 2020, 122, 103516.
    52. Liu, C.; Tang, C.-S.; Shi, B.; Suo, W.-B. Automatic quantification of crack patterns by image processing. Comput. Geosci. 2013, 57, 77–80.
    53. Messinger, M.; Silman, M. Unmanned aerial vehicles for the assessment and monitoring of environmental contamination: An example from coal ash spills. Environ. Pollut. 2016, 218, 889–894.
    54. Braun, A.; Tuttas, S.; Borrmann, A.; Stilla, U. Improving progress monitoring by fusing point clouds, semantic data and computer vision. Autom. Constr. 2020, 116, 103210.
    55. Golparvar-Fard, M.; Bohn, J.; Teizer, J.; Savarese, S.; Peña-Mora, F. Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques. Autom. Constr. 2011, 20, 1143–1155.
    56. Karsch, K.; Golparvar-Fard, M.; Forsyth, D. ConstructAide: Analyzing and Visualizing Construction Sites through Photographs and Building Models. ACM Trans. Graph. 2014, 33, 176.
    57. Shin, H.-C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
    58. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
    59. Choi, S.; Myeong, W.; Jeong, Y.; Myung, H.; Choi, S.; Myeong, W.; Jeong, Y.; Myung, H. Vision-Based Hybrid 6-DOF Displacement Estimation for Precast Concrete Member Assembly. Smart Struct. Syst. 2017, 20, 397.
    60. Fang, W.; Zhong, B.; Zhao, N.; Love, P.E.; Luo, H.; Xue, J.; Xu, S. A deep learning-based approach for mitigating falls from height with computer vision: Convolutional neural network. Adv. Eng. Inform. 2019, 39, 170–177.
    61. Luo, X.; Li, H.; Cao, D.; Dai, F.; Seo, J.; Lee, S. Recognizing Diverse Construction Activities in Site Images via Relevance Networks of Construction-Related Objects Detected by Convolutional Neural Networks. J. Comput. Civ. Eng. 2018, 32, 04018012.
    62. Park, M.-W.; Makhmalbaf, A.; Brilakis, I. Comparative study of vision tracking methods for tracking of construction site resources. Autom. Constr. 2011, 20, 905–915.
    63. Bügler, M.; Borrmann, A.; Ogunmakin, G.; Vela, P.A.; Teizer, J. Fusion of Photogrammetry and Video Analysis for Productivity Assessment of Earthwork Processes. Comput. Civ. Infrastruct. Eng. 2016, 32, 107–123.
    64. Brilakis, I.; Soibelman, L.; Shinagawa, Y. Material-Based Construction Site Image Retrieval. J. Comput. Civ. Eng. 2005, 19, 341–355.
    65. Seong, H.; Choi, H.; Cho, H.; Lee, S.; Son, H.; Kim, C. Vision-Based Safety Vest Detection in a Construction Scene. In Proceedings of the 34th International Symposium on Automation and Robotics in Construction (ISARC 2017), Taipei, Taiwan, 28 June–1 July 2017.
    66. Dimitrov, A.; Golparvar-Fard, M. Vision-based material recognition for automated monitoring of construction progress and generating building information modeling from unordered site image collections. Adv. Eng. Inform. 2014, 28, 37–49.
    67. Harichandran, A.; Raphael, B.; Varghese, B.R.A.K. Inferring Construction Activities from Structural Responses Using Support Vector Machines. In Proceedings of the International Symposium on Automation and Robotics in Construction, Berlin, Germany, 20–25 July 2018; pp. 1–8.
    68. Caputo, B.; Hayman, E.; Fritz, M.; Eklundh, J.-O. Classifying materials in the real world. Image Vis. Comput. 2010, 28, 150–163.
    69. Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping (SLAM): Part II. IEEE Robot. Autom. Mag. 2006, 13, 108–117.
    70. Shang, Z.; Shen, Z. Real-Time 3D Reconstruction on Construction Site Using Visual SLAM and UAV. arXiv 2015, 151, 10–17.
    71. Memarzadeh, M.; Heydarian, A.; Golparvar-Fard, M.; Niebles, J.C. Real-Time and Automated Recognition and 2D Tracking of Construction Workers and Equipment from Site Video Streams. In Proceedings of the ASCE International Conference on Computing in Civil Engineering, Atlanta, GA, USA, 17–19 June 2012; pp. 429–436.
    72. Roh, S.; Aziz, Z.; Peña-Mora, F. An object-based 3D walk-through model for interior construction progress monitoring. Autom. Constr. 2011, 20, 66–75.
    73. Peker, M.; Altun, H.; Karakaya, F. Hardware emulation of HOG and AMDF based scale and rotation invariant robust shape detection. In Proceedings of the International Conference on Engineering and Technology, ICET 2012–Conference Booklet, Caire, Egypt, 10–11 October 2012.
    74. Zhu, Z.; Ren, X.; Chen, Z. Integrated detection and tracking of workforce and equipment from construction jobsite videos. Autom. Constr. 2017, 81, 161–171.
    75. Azar, E.R.; McCabe, B. Automated Visual Recognition of Dump Trucks in Construction Videos. J. Comput. Civ. Eng. 2012, 26, 769–781.
    76. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005; pp. 886–893.
    77. Hui, L.; Brilakis, I. Real-Time Brick Counting for Construction Progress Monitoring. In Proceedings of the 2013 ASCE International Workshop on Computing in Civil Engineering, Los Angeles, CA, USA, 23–25 June 2013; pp. 818–824.
    78. Herbert, B.; Andreas, E.; Tinne, T.; Luc, V.G. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
    79. Azhar, S. Building Information Modeling (BIM): Trends, Benefits, Risks, and Challenges for the AEC Industry. Leadersh. Manag. Eng. 2011, 11, 241–252.
    80. Rehman, M.S.U.; Thaheem, M.J.; Nasir, A.R.; Khan, K.I.A. Project schedule risk management through building information modelling. Int. J. Constr. Manag. 2020, 22, 1489–1499.
    81. Kopsida, M.; Brilakis, I.; Vela, P. A Review of Automated Construction Progress and Inspection Methods. In Proceedings of the 32nd CIB W78 Conference on Construction IT, Tokyo, Japan, 27–29 January 2015; pp. 421–431.
    82. Bosché, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Inform. 2012, 26, 90–102.
    83. Bueno, M.; Bosché, F.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. 4-Plane congruent sets for automatic registration of as-is 3D point clouds with 3D BIM models. Autom. Constr. 2018, 89, 120–134.
    84. Kropp, C.; Koch, C.; König, M. Interior construction state recognition with 4D BIM registered image sequences. Autom. Constr. 2018, 86, 11–32.
    85. Asadi, K.; Ramshankar, H.; Noghabaei, M.; Han, K. Real-Time Image Localization and Registration with BIM Using Perspective Alignment for Indoor Monitoring of Construction. J. Comput. Civ. Eng. 2019, 33, 04019031.
    86. Kim, C.; Son, H.; Kim, C. Automated construction progress measurement using a 4D building information model and 3D data. Autom. Constr. 2013, 31, 75–82.
    87. Wang, Q.; Kim, M.-K.; Cheng, J.C.; Sohn, H. Automated quality assessment of precast concrete elements with geometry irregularities using terrestrial laser scanning. Autom. Constr. 2016, 68, 170–182.
    88. Kim, C.; Son, H.; Kim, C. Fully automated registration of 3D data to a 3D CAD model for project progress monitoring. Autom. Constr. 2013, 35, 587–594.
    89. Kim, Y.; Nguyen, C.H.P.; Choi, Y. Automatic pipe and elbow recognition from three-dimensional point cloud model of industrial plant piping system using convolutional neural network-based primitive classification. Autom. Constr. 2020, 116, 103236.
    90. Deng, H.; Hong, H.; Luo, D.; Deng, Y.; Su, C. Automatic Indoor Construction Process Monitoring for Tiles Based on BIM and Computer Vision. J. Constr. Eng. Manag. 2020, 146, 04019095.
    91. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Automated Progress Monitoring Using Unordered Daily Construction Photographs and IFC-Based Building Information Models. J. Comput. Civ. Eng. 2015, 29, 04014025.
    92. Zhang, C.; Arditi, D. Automated progress control using laser scanning technology. Autom. Constr. 2013, 36, 108–116.
    93. Bunrit, S.; Kerdprasop, N.; Kerdprasop, K. Evaluating on the Transfer Learning of CNN Architectures to a Construction Material Image Classification Task. Int. J. Mach. Learn. Comput. 2019, 9, 201–207.
    94. Chen, J.; Fang, Y.; Cho, Y.K. Unsupervised Recognition of Volumetric Structural Components from Building Point Clouds. In Proceedings of the ASCE International Workshop on Computing in Civil Engineering 2017, Seattle, WA, USA, 25–27 June 2017.
    95. Xu, Y.; Shen, X.; Lim, S. CorDet: Corner-Aware 3D Object Detection Networks for Automated Scan-to-BIM. J. Comput. Civ. Eng. 2021, 35, 04021002.
    96. Wang, X.; Dunston, P.S. Design, Strategies, and Issues towards an Augmented Reality-Based Construction Training Platform. Electron. J. Inf. Technol. Constr. 2007, 12, 363–380.
    97. Casini, M. Extended Reality for Smart Building Operation and Maintenance: A Review. Energies 2022, 15, 3785.
    98. Lin, Z.; Petzold, F.; Ma, Z. A Real-Time 4D Augmented Reality System for Modular Construction Progress Monitoring. In Proceedings of the 36th International Symposium on Automation and Robotics in Construction, ISARC 2019, Banff, AB, Canada, 21–24 May 2019; pp. 743–748.
    99. Kim, E.; Wells, W.G.; Duffey, M.R. A model for effective implementation of Earned Value Management methodology. Int. J. Proj. Manag. 2003, 21, 375–382.
      Rehman, M.S.U.; Shafiq, M.T.; Ullah, F. Assessment of Computer Vision-Based Construction Progress Monitoring Process. Encyclopedia. Available online: https://encyclopedia.pub/entry/25702 (accessed on 07 February 2023).