Topic review

# Object 3D Reconstruction

Subjects: Mathematics
Submitted by: Dalibor Martišek

## Definition

This entry summarizes the existing methods of 3D reconstruction of objects by the Shape-From-Focus (SFF) method. SFF recovers depth from a series of images of the same object taken with different focus settings, referred to as a multifocal image.

## 1. Introduction

Three-dimensional reconstruction of general surfaces has an important role in a number of fields: the morphological analysis of fracture surfaces, for example, reveals information on the mechanical properties of natural or construction materials.

Several techniques are capable of producing digital three-dimensional (3D) replicas of solid surfaces. In mechanical engineering, contacting electronic profilometers can be used to determine digital two-dimensional (2D) profiles that are combined into 3D surface profiles (see [1][2], for example). The contacting mode of atomic force microscopes also belongs to this mechanical category [3]. Besides mechanical tools, optical devices exist in diverse modifications [2]: light section microscopy [4][5], coherence scanning interferometry [6], speckle metrology [7], stereo projection [8], photogrammetry [9], and various types of optical profile measurement [10], to mention some of them.

3D laser scanning techniques are among other ways of obtaining 3D data. They have also been tested in some rock engineering projects, such as 3D digital fracture mapping [11][12][13].

The present entry summarizes the existing methods of 3D reconstruction of objects by the Shape-From-Focus (SFF) method, i.e., recovering depth from an image series of the same object taken with different focus settings (a multifocal image).

## 2. Data Acquisition

In technical practice, the confocal microscope serves as a standard instrument for imaging microscopic three-dimensional surfaces. It has a very small depth of field, and its advanced hardware is capable of removing non-sharp points from the images. Points of the object situated close to the focal plane appear sharp. Parts lying farther above or beneath the focal plane (outside the sharpness zone) are invisible, represented as black regions when the confocal mode is on. In this way, a so-called optical cut is obtained. In the non-confocal (standard) mode, areas lying outside the sharpness zone are displayed as blurred, as they would be with a standard camera. With a confocal microscope or a CCD camera, one can assume that the field of view is small and the projection used is parallel. In this case, all images cover an identical field of view, and the corresponding pixels have the same coordinates in the individual partially focused images (see Figure 1).

However, the confocal microscope is hardly suitable for many technical purposes due to the small size of its visual field (at most about 2 cm [5][14][15][16]).

The same output (Figure 1b) can be obtained with a classical microscope or (for a wider field) a common camera. The difference between the microscope or CCD camera and a standard camera lies in the central projection, which varies the scaling of the partial images in a series, and in the non-sharp regions, which a classic camera displays but a confocal microscope in confocal mode omits (see [17] for more information). The different image scalings, however, require subsequent corrections (including shifts and rotations).

To create a 2D or 3D reconstruction, it is necessary to obtain a series of images of an identical object, each with a different focus setting, such that every object point is focused in one of the images (in the ideal case, this series is referred to as a multifocal image). To acquire a large multifocal image, the camera must be mounted on a stand so that it can be moved with a controlled step in the direction approximately orthogonal to the surface. The different transformations and the different sharp parts must then be identified and composed into a 2D or 3D model.
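Identifying the sharp parts of each image in the series requires a per-pixel sharpness (focus) measure. The following is a minimal illustrative sketch, not the detector of the cited paper: it assumes the series is a stack of grayscale NumPy arrays and uses a simple modified-Laplacian measure; the function names are ours.

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian sharpness map: large where fine detail is in focus."""
    lap_x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    lap_y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lap_x + lap_y

def sharpest_index(stack):
    """For each pixel, the index of the series image in which it is sharpest."""
    measures = np.array([focus_measure(img) for img in stack])  # shape (n, H, W)
    return np.argmax(measures, axis=0)
```

Selecting, for each pixel, the value from the image with the highest measure yields an everywhere-focused 2D composite; the winning indices themselves already form a first, coarse depth map.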

## 3. Image Registration

With a confocal microscope or CCD camera, we can assume that the field of view is small and the projection used is parallel. In this case, all images cover an identical field of view, and the corresponding pixels have identical coordinates in the individual partially focused images. However, this assumption does not hold for larger samples; then, the angle of the projection lines is not negligible, and the fields of view (and the coordinates of corresponding pixels) clearly differ between images (see Figure 2 and Figure 3).

In practice, the situation may be more complicated. The images may differ not just in the scale used but also in the content displayed (different parts are focused in different images). Due to mechanical inaccuracies, the step in the z-axis may not be fully constant, and the images can also be mutually shifted along the x- or y-axis or rotated. Image registration is further complicated by the non-planarity of samples (see Figure 4 on the right). Therefore, sophisticated pre-processing of the image series may be necessary. Suitable tools for this are the Fourier transform and phase correlation.

If f is an image function and F its Fourier transform, F is also referred to as the Fourier spectrum of f. The function f can be recovered from its spectrum F by the inverse Fourier transform.

This is the principal idea of phase correlation: instead of searching for the shift between two images directly, we only need to find the single non-zero point of the phase-correlation matrix. If the images are not identical up to a shift, i.e., if the images are not ideal, the phase-correlation function is more complex, but it still has a global maximum at the point whose coordinates correspond to the shift vector.
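For pure translations, this idea can be sketched in a few lines of NumPy. This is an illustrative sketch only: the registration described here must also handle scaling and rotation, which the snippet below does not.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the cyclic shift (dy, dx) with b ~ np.roll(a, (dy, dx), axis=(0, 1))."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))     # ideally a single sharp peak
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    H, W = a.shape
    if dy > H // 2:                         # wrap large indices to negative shifts
        dy -= H
    if dx > W // 2:
        dx -= W
    return dy, dx
```

For real (non-ideal) image pairs the peak is blurred but remains the global maximum, so the same `argmax` still recovers the shift vector.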

## 4. 2D and 3D Reconstructions

The principal deficiency of most of the current methods of 3D reconstruction is that they assume the profile height at a point to be precisely determined by the value of the given sharpness detector. Based on this unrealistic hypothesis, these values are then interpolated, using parabolic interpolation [18][19] or Gaussian interpolation [20][21].
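For reference, the criticized parabolic interpolation refines the peak of a per-pixel sharpness series by fitting a parabola through the maximal value and its two neighbours and taking the vertex. A minimal sketch (the function name is ours):

```python
import numpy as np

def refine_peak(f):
    """Sub-step position of the maximum of a sharpness series f[0..n-1]."""
    k = int(np.argmax(f))
    if k == 0 or k == len(f) - 1:
        return float(k)                     # boundary peak: nothing to fit
    num = f[k - 1] - f[k + 1]
    den = 2.0 * (f[k - 1] - 2.0 * f[k] + f[k + 1])
    return k + num / den                    # vertex of the fitted parabola
```

This recovers the true maximum exactly when the series really is a sampled parabola, which is precisely the deterministic assumption questioned below.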

This assumption, however, is false. We can use the series $\{P_{ijk}\}$, $k = 1, 2, \dots, n$, to assess the height of pixel $(i, j)$. Being of a random rather than deterministic nature, it cannot be interpolated but must be processed by a statistical method. One such method is regression analysis, which, however, is rather complicated. A direct calculation of the mean value is much easier.

For each pixel $(i, j)$, virtually infinitely many probability distribution functions $p_{ij}^{(r)}$ can be constructed by applying different exponents $r$ to the series terms $P_{ijk}$:

$$p_{ij}^{(r)}(k) = \frac{P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \qquad (29)$$

The mean values of the random variables $P_{ij}^{(r)}$ given by these probability distribution functions estimate the height $h_{ij}^{(r)}$ of the surface at pixel $(i, j)$:

$$h_{ij}^{(r)} = E\!\left(P_{ij}^{(r)}\right) = \sum_{k=1}^{n} k \cdot p_{ij}^{(r)}(k) = \frac{\sum_{k=1}^{n} k \cdot P_{ijk}^{\,r}}{\sum_{s=1}^{n} P_{ijs}^{\,r}} \qquad (30)$$
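Equations (29) and (30) translate directly into vectorized code. A minimal sketch, assuming the sharpness-detector values are stored as a non-negative array `P` of shape (n, H, W); the function name is ours:

```python
import numpy as np

def height_map(P, r=2.0):
    """Expected image index per pixel, following Eqs. (29) and (30).

    P: non-negative sharpness values, shape (n, H, W); r: exponent.
    """
    n = P.shape[0]
    Pr = P.astype(float) ** r                     # P_ijk^r
    p = Pr / Pr.sum(axis=0, keepdims=True)        # Eq. (29): distributions p_ij^(r)
    k = np.arange(1, n + 1).reshape(-1, 1, 1)     # image indices k = 1..n
    return (k * p).sum(axis=0)                    # Eq. (30): mean value h_ij^(r)
```

Multiplying the result by the physical z-step converts the index-valued heights into real units; the exponent r controls how strongly the sharpest images dominate the mean.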

This entry is adapted from DOI: 10.3390/math9182253.

## References

1. Halling, J. Introduction to Tribology; John Wiley & Sons: London, UK, 1976.
2. Bennett, J.M.; Mattsson, L. Introduction to Surface Roughness and Scattering; Optical Society of America: Washington, DC, USA, 1999.
3. Bowen, W.R. Atomic Force Microscopy in Process Engineering; Hilal, N., Ed.; Butterworth-Heinemann: Oxford, UK, 2009.
4. Tolansky, S. A light-profile microscope for surface studies. Z. Elektrochem. 1952, 56, 263–267.
5. Thiery, V.; Green, D.I. The multifocus imaging technique in petrology. Comput. Geosci. 2012, 45, 131–138.
6. De Groot, P. Principles of interference microscopy for the measurement of surface topography. Adv. Opt. Photonics 2015, 7, 1–65.
7. Kaufmann, G.H. Advances in Speckle Metrology and Related Techniques; Wiley: Weinheim, Germany, 2011.
8. Mettänen, M.; Hirn, U. A comparison of five optical surface topography measurement methods. TAPPI J. 2015, 14, 27–38.
9. Bertin, S.; Friedrich, H.; Delmas, P.; Chan, E.; Gimel’farb, G. Digital stereo photogrammetry for grain-scale monitoring of fluvial surfaces: Error evaluation and workflow optimization. ISPRS J. Photogramm. Remote Sens. 2015, 101, 193–208.
10. Tang, S.; Zhang, X.; Tu, D. Micro-phase measuring profilometry: Its sensitivity analysis and phase unwrapping. Opt. Lasers Eng. 2015, 72, 47–57.
11. Feng, Q. Novel Methods for 3-D Semi-Automatic Mapping of Fracture Geometry at Exposed Rock Surfaces. Ph.D. Thesis, KTH, Stockholm, Sweden, 2001.
12. Slob, S.; Hack, H.R.G.K.; Van Knapen, B.; Turner, K.; Kemeny, J. A method for automated discontinuity analysis of rock slopes with three-dimensional laser scanning. Transp. Res. Rec. J. Transp. Res. Board. 2005, 1913, 187–194.
13. Slob, S.; Hack, H.R.G.K. 3D terrestrial laser scanning as a new field measurement and monitoring technique. In Engineering Geology for Infrastructure Planning in Europe. A European Perspective; Azzam, R.H.R.a., Charlier, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 179–190.
14. Lange, D.; Jennings, H.M.; Shah, S.P. Analysis of surface roughness using confocal microscopy. J. Mater. Sci. 1993, 28, 3879–3884.
15. Ichikawa, Y.; Toriwaki, J.-I. Confocal Microscope 3D Visualizing Method for Fine Surface Characterization of Microstructures; International Society for Optics and Photonics: Denver, CO, USA, 1996.
16. Ficker, T.; Martišek, D. Digital fracture surfaces and their roughness analysis: Applications to cement-based materials. Cem. Concr. Res. 2012, 42, 827–833.
17. Ficker, T. Sectional techniques for 3D imaging of microscopic and macroscopic objects. Optik 2017, 144, 289–299.
18. Agard, D.A.; Hiraoka, Y.; Shaw, P.; Sedat, J. Fluorescence microscopy in three dimensions. In Fluorescence Microscopy of Living Cells in Culture, Part B: Quantitative Fluorescence Microscopy: Imaging and Spectroscopy; Methods in Cell Biology; Taylor, D.L., Wang, Y., Eds.; Academic Press: San Diego, CA, USA, 1989; Volume 30, pp. 359–362.
19. Brenner, J.F.; Dew, B.S.; Horton, J.B.; King, T.; Neurath, P.W.; Selles, W.D. An automated microscope for cytologic research: A preliminary evaluation. J. Histochem. Cytochem. 1976, 24, 100–111.
20. Pieper, R.J.; Korpel, A. Image processing for extended depth of field. Appl. Opt. 1983, 22, 1449–1453.
21. Sugimoto, S.A.; Ichioka, Y. Digital composition of images with increased depth of focus considering depth information. Appl. Opt. 1985, 24, 2076–2080.