Object 3D Reconstruction
Subjects: Mathematics

This entry summarizes the existing methods of 3D reconstruction of objects by the Shape-From-Focus (SFF) method. This is a method for recovering depth from a series of images of the same object taken with different focus settings; such a series is referred to as a multifocal image.

  • 3D reconstruction
  • shape-from-focus
  • optical cut
  • multifocal image
  • Fourier transform
  • phase correlation
  • focusing criteria

1. Introduction

Three-dimensional reconstruction of general surfaces plays an important role in a number of fields: the morphological analysis of fracture surfaces, for example, reveals information on the mechanical properties of natural or construction materials.

Several techniques are capable of producing digital three-dimensional (3D) replicas of solid surfaces. In mechanical engineering, contacting electronic profilometers can be used to determine digital two-dimensional (2D) profiles, which are then combined into 3D surface profiles; see [1,2], for example. The contacting mode of atomic force microscopes also falls into this mechanical category [3]. In addition to the mechanical tools, optical devices exist in diverse modifications [2]: light section microscopy [4,5], coherence scanning interferometry [6], speckle metrology [7], stereo projection [8], photogrammetry [9], and various types of optical profile measurement [10], to mention some of them.

3D laser scanning techniques are among other ways of obtaining 3D data. They have also been tested in some rock engineering projects, such as 3D digital fracture mapping [11,12,13].

This entry summarizes the existing methods of 3D reconstruction of objects by the Shape-From-Focus (SFF) method. This is a method for recovering depth from a series of images of the same object taken with different focus settings; such a series is referred to as a multifocal image.

2. Data Acquisition

In technical practice, the confocal microscope serves as a standard instrument for imaging microscopic three-dimensional surfaces. It has a very small depth of field, and its advanced hardware is capable of removing non-sharp points from the images. The points of the object situated close to the focal plane appear sharp. The parts lying farther above or below the focal plane (outside the sharpness zone) are invisible and are represented as black regions when the confocal mode is on. In this way, a so-called optical cut is obtained. In the non-confocal (standard) mode, the areas lying outside the sharpness zone are displayed as blurred, as they would be with a standard camera. With a confocal microscope or CCD camera, one can assume that the field of view is small and the projection used is parallel. In this case, all images cover an identical field of view, and the corresponding pixels have the same coordinates in the individual focused partial images (see Figure 1).

However, the confocal microscope is hardly suitable for larger technical samples due to the small size of its visual field (the maximal visual field is about 2 cm [5,20,21,24]).

The same output (Figure 1b) can be obtained with a classical microscope or, for a wider field of view, with a common camera. The difference between the confocal microscope or CCD camera and a standard camera lies in the central projection, which varies the scaling of the partial images in a series, and in the non-sharp regions, which are displayed by a classic camera but are missing when the images are taken by a confocal microscope in the confocal mode (see [25] for more information). The different image scalings, however, require subsequent corrections (including shifts and rotations).

To create a 2D or 3D reconstruction, it is necessary to obtain a series of images of an identical object, each with a different focus setting, such that every object point is focused in one of the images (in the ideal case, such a series is referred to as a multifocal image). For the acquisition of a large multifocal image, the camera must be mounted on a stand so that it can be moved with a controlled step in the direction approximately orthogonal to the surface. The different transformations and the different sharp parts must then be identified and composed into a 2D or 3D model.

3. Image Registration

With a confocal microscope or CCD camera, we can assume that the field of view is small and the projection used is parallel. In this case, all images cover an identical field of view, and the corresponding pixels have identical coordinates in the individual focused partial images. However, this assumption does not hold for larger samples; the angle of the projection lines is then not negligible, and the fields of view (and the coordinates of the corresponding pixels) are clearly different for each image (see Figure 2 and Figure 3).

In practice, the situation may be more complicated. The images may differ not only in scale but also in the content displayed (different parts being focused in different images). Due to mechanical inaccuracies, the step along the z-axis may not be fully constant, and the images can also be mutually shifted along the x- or y-axis or rotated. Image registration is further complicated by the non-planarity of samples (see Figure 4 on the right). Therefore, sophisticated pre-processing of the image series may be necessary. A suitable tool for this is the Fourier transform and phase correlation.

Function F is also referred to as the Fourier spectrum of function f . It is possible to obtain the function f from its Fourier spectrum F by the inverse Fourier transform.
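For reference, one common convention for the two-dimensional Fourier transform and its inverse reads as follows (the placement of the normalization constant is only an illustrative choice and may differ from the convention used in the original paper):

$$
F(\xi, \eta) = \iint_{\mathbb{R}^2} f(x, y)\, e^{-i(x\xi + y\eta)}\, \mathrm{d}x\, \mathrm{d}y,
\qquad
f(x, y) = \frac{1}{4\pi^2} \iint_{\mathbb{R}^2} F(\xi, \eta)\, e^{i(x\xi + y\eta)}\, \mathrm{d}\xi\, \mathrm{d}\eta .
$$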

This is the principal idea of phase correlation: instead of searching for a shift between two images, we only need to find the single non-zero point of a matrix. If the images are not identical up to a shift, i.e., if the images are not ideal, the phase-correlation function is more complex, but it still has a global maximum at the point whose coordinates correspond to the shift vector.
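A minimal sketch of this idea in Python/NumPy is given below; the function name estimate_shift, the restriction to integer shifts, and the small regularization constant are assumptions made only for this illustration, not part of the original entry.

```python
import numpy as np

def estimate_shift(img1: np.ndarray, img2: np.ndarray) -> tuple[int, int]:
    """Estimate the integer (row, col) shift d such that img2 is approximately
    img1 shifted by d, by locating the peak of the phase-correlation function."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    # Normalized cross-power spectrum: only the phase difference is kept.
    eps = 1e-12  # guard against division by zero
    cross = F2 * np.conj(F1)
    cross_power = cross / (np.abs(cross) + eps)
    # For ideally shifted images the inverse transform is (close to) a single
    # impulse whose position encodes the shift vector.
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return int(shifts[0]), int(shifts[1])
```

For two images that differ only by a cyclic shift, e.g., img2 = np.roll(img1, (5, -3), axis=(0, 1)), the correlation matrix is essentially a single impulse and the function returns (5, -3).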

4. 2D and 3D Reconstructions

The principal deficiency of most of the current methods of 3D reconstruction is that they assume the profile height at a point to be precisely determined by the value of the given sharpness detector. Based on this unrealistic hypothesis, these values are then interpolated, using either parabolic interpolation [16,34] or Gaussian interpolation [35,36].
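As an illustration of this conventional approach (the three-point formulation and the function below are illustrative assumptions, not the method advocated in this entry), the peak of the sharpness measure can be interpolated as follows:

```python
import numpy as np

def interpolate_focus_peak(z: np.ndarray, f: np.ndarray, gaussian: bool = False) -> float:
    """Estimate the focus-peak position from three equally spaced samples z[0..2]
    of a sharpness measure f[0..2], where f[1] is the discrete maximum.
    With gaussian=True the parabola is fitted to log(f), i.e., Gaussian interpolation."""
    y = np.log(f) if gaussian else np.asarray(f, dtype=float)
    dz = z[1] - z[0]
    denom = y[0] - 2.0 * y[1] + y[2]
    if np.isclose(denom, 0.0):
        return float(z[1])  # degenerate (flat) case: keep the discrete maximum
    return float(z[1] + 0.5 * dz * (y[0] - y[2]) / denom)
```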

This assumption, however, is false. The series $\{P_{ijk}\}$, $k = 1, 2, \dots, n$, can be used to assess the height of the pixel $(i, j)$. Being of a random rather than deterministic nature, it cannot be interpolated but must be processed by a statistical method. One such method is regression analysis, which is, however, rather complicated. A direct calculation of the mean value is much easier.

For each pixel $(i, j)$, virtually infinitely many probability distribution functions $p_{ij}^{(r)}(k)$, $k = 1, 2, \dots, n$, can be constructed using different exponents $r$ applied to the series terms $P_{ijk}$:

$$p_{ij}^{(r)}(k) = \frac{P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \tag{29}$$

The mean values of the random variables $P_{ij}^{(r)}$ given by these probability distribution functions estimate the height $h_{ij}^{(r)}$ of the surface at the pixel $(i, j)$:

$$h_{ij}^{(r)} = E\!\left(P_{ij}^{(r)}\right) = \sum_{k=1}^{n} k \cdot p_{ij}^{(r)}(k) = \frac{\sum_{k=1}^{n} k \cdot P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \tag{30}$$
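A minimal sketch of this estimate, assuming the sharpness values are already stacked in a NumPy array P of shape (n, H, W) (the array name, the default exponent r, and the small regularization constant are assumptions made only for this illustration), could look as follows:

```python
import numpy as np

def estimate_heights(P: np.ndarray, r: float = 2.0) -> np.ndarray:
    """Estimate the surface height (in units of the image index) for every pixel.

    P has shape (n, H, W); P[k - 1] holds the sharpness values of the k-th image.
    Follows Eqs. (29)-(30): the height of pixel (i, j) is the mean of the index k
    under the probability distribution proportional to P_ijk ** r."""
    n = P.shape[0]
    weights = P ** r                            # P_ijk^r
    total = weights.sum(axis=0) + 1e-12         # sum_s P_ijs^r (guarded against zero)
    k = np.arange(1, n + 1).reshape(n, 1, 1)    # image indices k = 1, ..., n
    return (k * weights).sum(axis=0) / total    # Eq. (30)
```

Multiplying the result by the (approximately constant) z-step of the stand then converts the height from image-index units to physical depth.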

This entry is adapted from the peer-reviewed paper 10.3390/math9182253
