On Ghost Imaging Studies for Information Optical Imaging

Understanding, studying, and optimizing optical imaging systems from the information-theoretic viewpoint has been an important research subfield. However, the "direct point-to-point" image information acquisition mode of traditional optical imaging lacks a "coding–decoding" operation on the image information, which limits the development of further imaging capabilities. Ghost imaging (GI) systems, on the other hand, combined with modern light-field modulation and digital photoelectric detection technologies, behave more in line with the modulation–demodulation information transmission mode than traditional optical imaging. This puts forward imperative demands and challenges for understanding and optimizing ghost imaging systems from the viewpoint of information theory, and brings more development opportunities for the research field of information optical imaging. Here, several specific GI systems and studies with various extended imaging capabilities are briefly reviewed.

  • ghost imaging
  • information theory
  • information optical imaging

1. Mapping Higher-Dimensional Light-Field Information into Lower-Dimensional Domain

GI can modulate the object's image information with controllable light fields and map it into lower detectable dimensions through encoding, thus making it possible to realize direct imaging in the high-dimensional light-field domain and enable further-developed imaging capabilities. Typical examples include the GI LiDAR and the GI camera.
The principle of GI LiDAR is shown in Figure 1. It was first made public in 2011 [1][2] and demonstrated to perform three-dimensional (3-D) imaging of natural scenes [3]. GI LiDAR uses light fields with designed spatiotemporal fluctuations, which are generated by modulating a pulse laser with a diffuser to illuminate the object, then acquires the return signal with a bucket detector and retrieves the 3-D spatial-depth image information by the second-order correlation between illuminated light fields and return signals as

$$\langle \Delta I_t(t_d)\, \Delta I_r(\mathbf{r}_r) \rangle \propto T(\mathbf{r}_r, d_{t_d}),$$
where td is the relative time delay of the detection signal. During the detection process, the encoding is performed via the designed light fields to transform the 3-D information into a time-serial signal which is recorded by the bucket detector. Compared with traditional LiDAR systems which only perform encoding on the time-frequency dimension, GI LiDAR essentially extends the encoding domain to higher dimensions including both spatial and temporal domains. Hence, different from traditional LiDAR, which can only measure the distance and velocity, GI LiDAR can simultaneously perform the imaging of scenes of different distances as well as obtain their depth and velocity information [4], which belong to higher dimensions.
Figure 1. Schematic diagram of the GI LiDAR system. It uses light fields with designed spatiotemporal fluctuations, which are generated by modulating a pulse laser with a diffuser to illuminate the object, then acquires the return signal with a bucket detector, and retrieves the 3-D spatial-depth image information by the second-order correlation between illuminated light fields and the return signal.
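As a rough numerical illustration of the correlation above, the following NumPy sketch simulates a scene whose reflectors sit in different range (time-delay) bins, illuminates it with pseudothermal speckle patterns, records one bucket value per pulse and per delay, and recovers each range slice by correlating the bucket fluctuations with the illumination fluctuations. All sizes, noise levels, and the discretized range-bin model are illustrative assumptions, not parameters of the cited systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative scene: reflectivity maps at discrete range (time-delay) bins ---
N = 32                          # transverse pixels (N x N)
n_bins = 4                      # range bins
scene = np.zeros((n_bins, N, N))
scene[1, 8:14, 8:14] = 1.0      # square reflector in range bin 1
scene[3, 20:26, 18:26] = 1.0    # rectangular reflector in range bin 3

# --- spatiotemporally fluctuating illumination (pseudothermal speckles) ---
n_pulses = 5000
speckles = rng.exponential(1.0, size=(n_pulses, N, N))   # one pattern per pulse

# --- bucket detection: each range bin contributes at its own time delay ---
# bucket[n, d] = sum_r I_n(r) * T_d(r)  (plus detection noise)
bucket = np.einsum('nij,dij->nd', speckles, scene)
bucket += 0.01 * bucket.mean() * rng.standard_normal(bucket.shape)

# --- second-order correlation: <dI(r) db(t_d)>_n  is proportional to  T(r, d_{t_d}) ---
dI = speckles - speckles.mean(axis=0)
db = bucket - bucket.mean(axis=0)
G2 = np.einsum('nij,nd->dij', dI, db) / n_pulses

for d in range(n_bins):
    print(f"range bin {d}: peak correlation {G2[d].max():.3f}")
```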
For GI LiDAR with incoherent bucket detection, the phase information (which may include the vibration information) of the target is lost during recording. By contrast, in GI variants at microwave bands (e.g., correlated imaging), the micro-vibration information of the target can be obtained while performing imaging. This is mainly owing to the use of coherent detection for signal recording, since it is the first-order field correlation that is involved there [5][6][7]. Recently, with the development of coherent optical communication [8] and fiber optical time-frequency transmission [9], coherent detection methods for optical-waveband signals have been largely developed. Thus, they have also been introduced into GI LiDAR at optical wavebands, leading to coherent-detection GI LiDAR [10][11][12], which can acquire information in a higher dimension (e.g., the micro-Doppler vibration information) and offers a stronger anti-interference capability. In this scheme, a long-pulse laser is modulated in the spatial-temporal/spatial-frequency domains to illuminate the object, and the return signal is detected via a typical coherent detection approach. Then, by combining the second-order correlation with time-frequency analysis techniques, the 3-D spatial-depth image information as well as the spatial distributions of the target's velocity and vibration can be retrieved together.
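The time-frequency part of the coherent-detection scheme can be illustrated with a toy beat signal: a sinusoidal phase modulation (the target's vibration) spreads the beat tone into micro-Doppler sidebands whose spacing reveals the vibration frequency. The sketch below omits the spatial correlation step entirely and uses made-up sample rates and modulation depths; it only shows why coherent detection preserves the vibration information that an incoherent bucket measurement discards.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy coherently detected return from a vibrating target ---
fs = 20_000.0                        # sample rate of the detected beat signal (Hz)
t = np.arange(0, 1.0, 1 / fs)        # 1 s record
f_if = 2_000.0                       # intermediate (beat) frequency after mixing (Hz)
f_vib, depth = 40.0, 1.0             # vibration frequency (Hz), phase-modulation depth (rad)
phase = 2 * np.pi * f_if * t + depth * np.sin(2 * np.pi * f_vib * t)
signal = np.cos(phase) + 0.05 * rng.standard_normal(t.size)

# --- frequency analysis: the vibration appears as micro-Doppler sidebands spaced
#     by f_vib around the beat tone ---
spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > f_if - 200) & (freqs < f_if + 200)
peaks = np.sort(freqs[band][np.argsort(spec[band])[-5:]])   # five strongest lines
print("strongest spectral lines near the beat tone (Hz):", peaks)
print("line spacing (Hz), i.e. the vibration frequency:", np.diff(peaks))
```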
Different from GI LiDAR with active illumination, the GI camera [13] is a passive (using natural light only) GI scheme inspired by near-field diffraction speckle phenomena [14][15], which utilizes the ergodicity of thermal light fields to extract image information from the second-order correlation in the spatial domain rather than the original temporal domain. The principle of a representative GI spectral camera is shown in Figure 2. The object illuminated by natural light is first imaged by a front imaging module, then diffracted by a spatial random phase modulator (SRPM), and finally recorded by a detector as an intensity distribution It(rt). By the spatial-dimensional second-order correlation between this intensity distribution and the pre-measured impulse response Ir(rt;r,k) corresponding to a monochromatic point source δ(r,k), where k=1/λ, the object's spatial-spectral information T(r,k) can be retrieved, namely [13]
$$\Delta G^{(2)}(\mathbf{r},k) = \langle \Delta I_t(\mathbf{r}_t)\, \Delta I_r(\mathbf{r}_t;\mathbf{r},k) \rangle_{\mathbf{r}_t} \propto T(\mathbf{r},k) \ast g^{(2)}(\mathbf{r},k),$$
where g(2)(r,k) is the normalized second-order correlation function that characterizes the resolving ability for high-dimensional light-field information. In addition, a GI camera that realizes higher-dimensional spectral-polarization imaging has also been demonstrated [16].
Figure 2. Schematic diagram of the GI spectral camera system [13]. The object illuminated by natural light is first imaged by a front imaging module, then diffracted by a spatial random phase modulator (SRPM), and finally recorded by a detector as an intensity distribution It(rt). By the spatial-dimensional second-order correlation between this intensity distribution and the pre-measured impulse response Ir(rt;r,k) corresponding to a monochromatic point source δ(r,k), where k=1/λ, the object's spatial-spectral information can be retrieved.
Unlike the typical traditional camera, which only maps the object's 2-D spatial information onto the detection plane in a "direct point-to-point" mode, the GI camera encodes the desired information in the high-dimensional light-field domain with randomly distributed speckle patterns and maps it onto the 2-D detection plane. Hence, the GI camera can transmit high-dimensional image information with less degeneracy; namely, it makes much more efficient use of the imaging channel capacity than the traditional camera. For example, besides spectral imaging, a spectral-polarization GI camera has also been demonstrated [16]. Further, to improve this efficiency, a GI camera with a super-Rayleigh modulator has been proposed by designing the distribution of the encoding speckle patterns [17], which exhibits better anti-noise imaging performance than the unoptimized one. Additionally, extending the super-Rayleigh modulator to a broad spectral band for a hyperspectral GI camera has been considered, and progress has been made by combining it with dispersion control techniques [18].
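A minimal numerical sketch of the GI camera's decoding step described above is given below: assuming a pre-measured speckle impulse response for every (pixel, band) point source, a single detected intensity pattern is correlated over the detector pixels with each calibration pattern, which recovers the spatial-spectral reflectance up to the g(2) blur. The calibration-matrix model, sizes, and noise level are illustrative assumptions, not the parameters of [13].

```python
import numpy as np

rng = np.random.default_rng(2)

# --- illustrative calibration: speckle impulse response for every (pixel, band) ---
n_pix = 16 * 16            # spatial points r (flattened 16x16 grid)
n_bands = 3                # spectral channels k
n_det = 4096               # detector pixels r_t
# I_r(r_t; r, k): one pre-measured speckle pattern per monochromatic point source
impulse = rng.exponential(1.0, size=(n_pix * n_bands, n_det))

# --- object: spatial-spectral reflectance T(r, k) with two point sources ---
T = np.zeros((n_pix, n_bands))
T[50, 0] = 1.0
T[120, 2] = 0.7
t_flat = T.reshape(-1)

# --- single detected pattern: incoherent sum of the impulse responses ---
I_t = t_flat @ impulse + 0.01 * rng.standard_normal(n_det)

# --- spatial-domain second-order correlation over detector pixels r_t ---
dI_t = I_t - I_t.mean()
d_imp = impulse - impulse.mean(axis=1, keepdims=True)
G2 = (d_imp @ dI_t) / n_det        # proportional to T(r, k) blurred by the speckle g2
G2 = G2.reshape(n_pix, n_bands)

print("recovered peaks (pixel, band):",
      np.unravel_index(np.argsort(G2, axis=None)[-2:], G2.shape))
```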

2. Resolution Analysis in the High-Dimensional Light-Field Domain

For the GI system, since it maps information of higher-dimensional light-field domains into lower dimensions and achieves imaging of the high-dimensional image information via the second-order correlation, the description of its resolution should be reconsidered from a different perspective. Intuitively, it is the resolution in the high-dimensional light-field domain, rather than in the 2-D spatial domain, that is limited by the optical diffraction effect. In this context, several analyses of the resolution of GI from the viewpoint of information theory have been conducted. For example, taking the GI camera as the research object, Tong et al. [19][20] proposed a criterion to measure its discernibility of high-dimensional light-field image information by combining it with compressed sensing (CS) theory [21][22]. Specifically, the relationship between the normalized second-order correlation in the high-dimensional light-field domain g(2)(|τi−τj|) (τ denotes a high-dimensional coordinate that can contain spatial, spectral, and polarization dimensions) and the mutual coherence μ of the system's transmission matrix is first constructed as
$$\mu = \max_{i \neq j} g^{(2)}\!\left(\lvert \tau_i - \tau_j \rvert\right),$$
and the condition for resolving sparse point-like targets is given as the minimal separation of two points satisfying
$$g^{(2)}\!\left(\lvert \tau_i - \tau_j \rvert\right) < \frac{1}{2K-1}$$
by combining it with the exact recovery condition [23], where K is the sparsity level of the point-like targets. Through this description, it can be roughly seen that multiple dimensions of light fields interact with each other when considering the resolving ability. Since the resolution is fundamentally limited by the system's transmission function, which transfers the high-dimensional image information, it is possible to greatly increase the spatial resolving ability beyond the Rayleigh criterion by utilizing discrepancies of the imaging object in other dimensions [20]. Similar to the typical deconvolution operation [24][25] in traditional imaging that exploits the prior information of the system's transmission function, it is possible to achieve spatial super-resolution imaging in GI. By performing regularized preconditioning operations [26][27][28] on both the detection signal and the transmission matrix and solving the preconditioned problem via optimization algorithms, it has been experimentally demonstrated that a high-quality imaging result with resolution beyond the diffraction limit can be obtained, and further theoretical analysis from the perspective of the Fourier frequency domain also verifies this capability [29].
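The criterion can be probed numerically: for a toy measurement matrix whose columns are speckle responses of neighbouring high-dimensional coordinates, the mutual coherence is the largest normalized column correlation, and the smallest separation whose correlation falls below 1/(2K−1) gives an estimate of the resolvable distance. The correlated-speckle construction below is only an illustrative stand-in for a real transmission matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- measurement matrix: each column is the speckle response of one
#     high-dimensional coordinate tau_i (here just a 1-D grid of coordinates) ---
n_meas, n_coords = 2000, 64
sigma = 1.5                                   # speckle correlation width (grid units)
x = np.arange(n_coords)
# smooth white speckles so that neighbouring coordinates share speckle structure,
# mimicking a finite g2 width
white = rng.standard_normal((n_meas, n_coords))
kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
A = white @ kernel

# --- mutual coherence mu = max_{i != j} |<a_i, a_j>| / (|a_i| |a_j|) ---
An = A / np.linalg.norm(A, axis=0)
gram = np.abs(An.T @ An)
np.fill_diagonal(gram, 0.0)
mu = gram.max()

# --- resolvable if g2(|tau_i - tau_j|) < 1/(2K-1) for K sparse point targets ---
K = 2
threshold = 1.0 / (2 * K - 1)
seps = np.arange(1, n_coords)
corr_vs_sep = np.array([np.diag(gram, d).max() for d in seps])
resolvable = seps[corr_vs_sep < threshold]
print(f"mutual coherence mu = {mu:.3f}, threshold 1/(2K-1) = {threshold:.3f}")
print("minimum resolvable separation (grid units):",
      resolvable.min() if resolvable.size else "none")
```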

3. Optimizing the Encoding Mode to Reduce Unnecessary Sampling Redundancy

The issue of unnecessary sampling redundancy in the traditional imaging mode is hard to address due to its "direct point-to-point" imaging architecture. In the GI system, however, since the encoding mode can be varied more flexibly during the practical imaging process, unnecessary redundancy in the detection signal can be largely reduced by designing light fields based on information theory. From the theoretical perspective, several studies in this direction have been performed. In 2013, Li et al. [30] first theoretically analyzed the mutual information between the detection signal and the imaging object given determined light fields in GI and, on this basis, gave the optimal parameter of Bernoulli light fields that maximizes the mutual information. However, this research was restricted to designing light fields subject to some specific distributions. Besides ideas inspired by Shannon information theory, by combining CS theory, a light-field optimization scheme that minimizes the mutual coherence between the sampling matrix (consisting of light-field patterns) and the orthogonal sparsity basis was proposed [31] according to the incoherent sampling principle [32]. The effect of this scheme on improving the imaging quality was also demonstrated by practical experiments [31]. To further incorporate image statistics into the optimization, an optimization framework combined with dictionary learning [33][34] was proposed [35]. In this framework, an over-complete sparsity basis that describes the statistics of imaging objects is obtained via dictionary learning, and the light fields are optimized by similarly solving a mutual coherence minimization problem through a concise method. A closed-form solution for the optimal light fields was theoretically derived and experimentally applied to further enhance the imaging quality. Briefly speaking, these studies have shown that optimizing light fields with the help of information theory can essentially increase the image information transmission efficiency. In addition, some heuristic encoding-mode optimization studies have been proposed, which adaptively adjust the encoding mode according to previously acquired information [36][37][38][39] to reduce the redundancy contained in subsequent samplings.
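A rough sketch of the mutual-coherence-driven pattern optimization idea is shown below. It is not the closed-form solution of [35]; instead it runs a crude gradient descent on the common surrogate ||DᵀD − I||² (with D the product of the pattern matrix and a DCT sparsity basis) and simply reports the worst-case coherence before and after. Matrix sizes, step size, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 64          # pixels per pattern (flattened)
m = 16          # number of illumination patterns (rows of the sampling matrix)

# sparsity basis Psi: orthonormal DCT-II matrix built directly with NumPy
k = np.arange(n)
Psi = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
Psi[:, 0] *= 1 / np.sqrt(2)
Psi *= np.sqrt(2.0 / n)

def mutual_coherence(Phi, Psi):
    D = Phi @ Psi                            # effective sensing dictionary
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

# start from random (pseudothermal-like) patterns
Phi = rng.standard_normal((m, n))
print("initial coherence:", round(mutual_coherence(Phi, Psi), 3))

# crude gradient descent on ||D^T D - I||_F^2 with unit-norm columns of D,
# a standard surrogate for pushing the worst-case coherence down
lr = 1e-3
for _ in range(2000):
    D = Phi @ Psi
    D /= np.linalg.norm(D, axis=0)
    grad_D = 4 * D @ (D.T @ D - np.eye(n))
    Phi -= lr * grad_D @ Psi.T               # chain rule back to the patterns
print("optimized coherence:", round(mutual_coherence(Phi, Psi), 3))
```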
On the other hand, the information content that the GI system can transmit is a more fundamental problem that could inspire encoding optimization. As aforementioned, the mutual information between the detection signal and the imaging object given a determined type of light fields has been analyzed [30] and applied to light-field optimization. Besides the mutual information, Fisher information [40] has also been used to analyze the information that detection signals contain about the imaging object in the GI system [41]. In this study, it was shown that signals with larger fluctuations contain more information, which is consistent with several existing studies [42][43][44][45] and provides a potential idea for the encoding mode optimization.
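The qualitative statement that larger fluctuations carry more information can be checked on a toy linear Gaussian bucket-detection model, where the Fisher information matrix is AᵀA/σ² and its inverse bounds the estimation variance (Cramér–Rao). The sketch below compares a weakly fluctuating illumination ensemble with a thermal-like one of the same mean intensity; it is an illustrative model, not the analysis of [41].

```python
import numpy as np

rng = np.random.default_rng(5)

n_pix, n_meas, sigma = 16, 200, 0.05    # object pixels, measurements, noise std

def fisher_info(A, sigma):
    """Fisher information matrix of the linear Gaussian model y = A x + noise,
    noise ~ N(0, sigma^2 I): F = A^T A / sigma^2."""
    return A.T @ A / sigma ** 2

mean_I = 1.0
low_fluct = mean_I + 0.1 * rng.standard_normal((n_meas, n_pix))    # weak fluctuations
thermal = mean_I * rng.exponential(1.0, size=(n_meas, n_pix))      # thermal-like, unit contrast

for name, A in [("weakly fluctuating", low_fluct), ("thermal-like", thermal)]:
    F = fisher_info(A, sigma)
    crb = np.trace(np.linalg.inv(F)) / n_pix   # mean Cramer-Rao bound per pixel
    print(f"{name:>18}: mean CRB per pixel = {crb:.2e}")
```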
Generally, by utilizing fundamental information-theoretic studies, it is possible to provide a solid foundation for research topics of the GI system such as encoding light-field optimization, resolution analysis, and performance investigation. Hence, there is significant demand for this research direction, considering that existing studies on the information-content analysis of the GI system are still rather limited.

4. Task-Oriented GI System Design

In many practical scenarios, a high-quality image is not necessarily required; instead, it is sufficient to obtain only the information related to specific tasks [46][47]. Intuitively, this can be realized by designing the imaging system with the help of information theory. Traditional imaging systems, however, usually need to first perform high-quality imaging and then carry out a specific analysis based on integrated sensing, storage, and computing equipment, owing to their fixed image-information transmission mode. Since the GI system can perform more flexible encoding than traditional imaging systems, it is well suited to such task-oriented application scenarios, so that the desired information can be acquired and retrieved without performing high-quality imaging. Currently, several task-oriented GI studies have been performed, mainly realized by designing the encoding light-field patterns according to the desired task information and assisting them with data-processing methods. They can be classified into several categories, including non-imaging object detection [48], non-imaging object classification [49][50], non-imaging object edge detection [51][52], object tracking [53], and progressive imaging [54]. In brief, these studies largely utilize the flexibility of the information mapping mode of GI systems. However, they are mostly developed from a heuristic perspective rather than a theoretically solid one. Thus, for further task-oriented studies, GI systems may be combined with the task-specific information measure [46] to develop a more complete information-theoretic framework for designing systems to perform a specific task. Task-oriented GI systems should significantly reduce the excessive demand for the transmission and storage of redundant information, thereby saving much of the energy consumption of devices and leading to a "greener" task-information acquisition mode for many application scenarios, such as automatic driving and the industrial internet of things.
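As a toy illustration of the image-free idea, the sketch below classifies two simple object classes directly from a handful of bucket measurements with a linear classifier, without ever reconstructing an image. The objects, random patterns, and classifier are all illustrative choices; the cited works use carefully designed or learned patterns and more capable models.

```python
import numpy as np

rng = np.random.default_rng(6)

# two toy object classes (8x8 binary shapes): horizontal bar vs vertical bar
def make_object(cls):
    obj = np.zeros((8, 8))
    if cls == 0:
        obj[3:5, 1:7] = 1.0      # horizontal bar
    else:
        obj[1:7, 3:5] = 1.0      # vertical bar
    return (obj + 0.05 * rng.standard_normal(obj.shape)).ravel()

n_patterns = 10                              # far fewer measurements than pixels (64)
patterns = rng.random((n_patterns, 64))      # structured illumination patterns

# bucket signals for a labelled training set -- no image is ever reconstructed
X, y = [], []
for _ in range(200):
    cls = rng.integers(2)
    X.append(patterns @ make_object(cls))    # one bucket value per pattern
    y.append(cls)
X, y = np.array(X), np.array(y)

# ridge-regularized least-squares classifier acting directly on bucket signals
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_patterns), X.T @ (2 * y - 1))
pred = (X @ w > 0).astype(int)
print("training accuracy without reconstructing any image:", (pred == y).mean())
```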

5. X-ray Diffraction GI

Different from GI LiDAR and the GI camera, which perform imaging in the spatial domain, diffraction GI [55][56] (shown in Figure 3), by designing the CTF, instead extracts image information of the Fourier diffraction spectra via the second-order correlation
$$\Delta G^{(2)}(\mathbf{r}_r,\mathbf{r}_t) = \langle \Delta I_t(\mathbf{r}_t)\, \Delta I_r(\mathbf{r}_r) \rangle \propto \left\lvert F_t(\mathbf{f}) \right\rvert^{2}\Big\rvert_{\mathbf{f} = \frac{\mathbf{r}_t - \mathbf{r}_r}{\lambda z_2}},$$
thus enabling, in principle, the analysis of a non-crystalline object with an incoherent thermal source. Hence, it has the potential to address the imperative demand of coherent diffraction imaging (CDI) for a high-brightness coherent radiation source at the wavebands of X-rays and fermions. The first X-ray diffraction GI experiment was realized in 2016 [57]. Since X-ray diffraction GI can acquire the Fourier spectral information in the Fresnel region rather than the distant Fraunhofer region with a thermal X-ray source, it offers the potential to realize miniaturized thermal X-ray microscopy equipment.
Figure 3. Schematic diagram of the diffraction GI. In the reference path, the distance from the source to the detector array is z. In the object path, the light passes through the imaging object, which is z1 away from the source, and then propagates z2 to be recorded by a point-like detector located at rt (the point-like detection is represented here by a shaded grid cell, denoting that only the intensity of one cell is recorded). The Fourier-transform diffraction spectrum of the sample can be obtained via the second-order correlation when z = z1 + z2.
From the perspective of information theory, the diffraction GI essentially performs encoding on information in the Fourier domain, and this can be understood more clearly when performing analysis on the angular spectra [58]. By further combining with Equation (11), it can be found from information theory that more flexible encoding can be performed on the reference path [56] without increasing the radiation dose incident on the sample, compared to the fixed-architecture traditional CDI scheme. For example, a non-local encoding method (adding masks in the reference path) for the X-ray diffraction GI scheme was experimentally demonstrated [59], which can significantly enhance the imaging quality without increasing the radiation dose on the sample. To achieve high-quality decoding, a deep-learning-based Y-net has been proposed [60], whose structure is inspired by the form of Equations (9) and (10). Additionally, imaging performance analysis and system optimization of the diffraction GI system have been performed with information-theoretic studies [19][41].
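A simplified 1-D simulation of the correlation above is sketched below, with the object placed directly against an incoherent (delta-correlated) thermal source so that both arms reduce to plain Fourier transforms. Averaging the intensity cross-correlation between the reference speckle pattern and one test-arm point detector then reproduces the object's Fourier power spectrum |T(f)|². The geometry and scalings are deliberately idealized assumptions, not the experimental configuration of [57].

```python
import numpy as np

rng = np.random.default_rng(7)

# --- 1-D toy model of Fourier-transform (diffraction) GI ---
n = 256
obj = np.zeros(n)
obj[118:122] = 1.0
obj[134:138] = 1.0                     # a double slit -> fringed Fourier spectrum

n_frames, batch = 200_000, 10_000
u_t = 0                                # the single "point detector" pixel (test arm)

cross = np.zeros(n)
it_sum, ir_sum = 0.0, np.zeros(n)
for _ in range(n_frames // batch):
    # delta-correlated (spatially incoherent) thermal field at the source plane
    a = (rng.standard_normal((batch, n)) +
         1j * rng.standard_normal((batch, n))) / np.sqrt(2)
    E_r = np.fft.fft(a, axis=1) / np.sqrt(n)                # reference-arm far field
    E_t = np.fft.fft(a * obj, axis=1)[:, u_t] / np.sqrt(n)  # test-arm point detector
    I_r, I_t = np.abs(E_r) ** 2, np.abs(E_t) ** 2
    cross += I_t @ I_r
    it_sum += I_t.sum()
    ir_sum += I_r.sum(axis=0)

# <dI_t dI_r> over the thermal ensemble
dG2 = cross / n_frames - (it_sum / n_frames) * (ir_sum / n_frames)

spectrum = np.abs(np.fft.fft(obj)) ** 2    # object's true Fourier power spectrum
print("correlation with |T(f)|^2:", round(np.corrcoef(dG2, spectrum)[0, 1], 3))
```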

References

  1. Zhao, C.; Gong, W.; Chen, M.; Li, E.; Wang, H.; Xu, W.; Han, S. Ghost imaging lidar via sparsity constraints. Appl. Phys. Lett. 2012, 101, 141123.
  2. Gong, W. Theoretical and Experimental Investigation On Ghost Imaging Radar with Thermal Light. Ph.D. Thesis, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China, 2011.
  3. Gong, W.; Zhao, C.; Yu, H.; Chen, M.; Xu, W.; Han, S. Three-dimensional ghost imaging lidar via sparsity constraint. Sci. Rep. 2016, 6, 1–6.
  4. Wang, C.; Mei, X.; Pan, L.; Wang, P.; Li, W.; Gao, X.; Bo, Z.; Chen, M.; Gong, W.; Han, S. Airborne near infrared three-dimensional ghost imaging lidar via sparsity constraint. Remote. Sens. 2018, 10, 732.
  5. Ma, Y.; He, X.; Meng, Q.; Liu, B.; Wang, D. Microwave staring correlated imaging and resolution analysis. In Geo-Informatics in Resource Management and Sustainable Ecosystem; Springer: Berlin, Germany, 2013; pp. 737–747.
  6. Li, D.; Li, X.; Qin, Y.; Cheng, Y.; Wang, H. Radar coincidence imaging: An instantaneous imaging technique with stochastic signals. IEEE Trans. Geosci. Remote. Sens. 2013, 52, 2261–2277.
  7. Cheng, Y.; Zhou, X.; Xu, X.; Qin, Y.; Wang, H. Radar coincidence imaging with stochastic frequency modulated array. IEEE J. Sel. Top. Signal Process. 2016, 11, 414–427.
  8. Kikuchi, K. Fundamentals of coherent optical fiber communications. J. Light. Technol. 2015, 34, 157–179.
  9. Secondini, M.; Foggi, T.; Fresi, F.; Meloni, G.; Cavaliere, F.; Colavolpe, G.; Forestieri, E.; Poti, L.; Sabella, R.; Prati, G. Optical time–frequency packing: Principles, design, implementation, and experimental demonstration. J. Light. Technol. 2015, 33, 3558–3570.
  10. Deng, C.; Gong, W.; Han, S. Pulse-compression ghost imaging lidar via coherent detection. Opt. Express 2016, 24, 25983–25994.
  11. Pan, L.; Wang, Y.; Deng, C.; Gong, W.; Bo, Z.; Han, S. Micro-Doppler effect based vibrating object imaging of coherent detection GISC lidar. Opt. Express 2021, 29, 43022–43031.
  12. Gong, W.; Sun, J.; Deng, C.; Lu, Z.; Zhou, Y.; Han, S. Research progress on single-pixel imaging lidar via coherent detection. Laser Optoelectron. Prog. 2021, 58, 1011003.
  13. Liu, Z.; Tan, S.; Wu, J.; Li, E.; Shen, X.; Han, S. Spectral camera based on ghost imaging via sparsity constraints. Sci. Rep. 2016, 6, 25718.
  14. Giglio, M.; Carpineti, M.; Vailati, A. Space intensity correlations in the near field of the scattered light: A direct measurement of the density correlation function g (r). Phys. Rev. Lett. 2000, 85, 1416.
  15. Cerbino, R.; Peverini, L.; Potenza, M.; Robert, A.; Bösecke, P.; Giglio, M. X-ray-scattering information obtained from near-field speckle. Nat. Phys. 2008, 4, 238–243.
  16. Chu, C.; Liu, S.; Liu, Z.; Hu, C.; Zhao, Y.; Han, S. Spectral polarization camera based on ghost imaging via sparsity constraints. Appl. Opt. 2021, 60, 4632–4638.
  17. Liu, S.; Liu, Z.; Hu, C.; Li, E.; Shen, X.; Han, S. Spectral ghost imaging camera with super-Rayleigh modulator. Opt. Commun. 2020, 472, 126017.
  18. Wang, P.; Liu, Z.; Wu, J.; Shen, X.; Han, S. Dispersion control of broadband super-Rayleigh speckles for snapshot spectral ghost imaging. Chin. Opt. Lett. 2022, 20, 091102.
  19. Liu, Z.; Hu, C.; Tong, Z.; Chu, C.; Han, S. Some research progress on the theoretical study of ghost imaging in Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences. Infrared Laser Eng. 2021, 50, 20211059.
  20. Tong, Z.; Liu, Z.; Wang, J. Spatial resolution limit of ghost imaging camera via sparsity constraints-break Rayleigh’s criterion based on the discernibility in high-dimensional light field space. arXiv 2020, arXiv:2004.00135.
  21. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  22. Candès, E.J.; Tao, T. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
  23. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
  24. Sekko, E.; Thomas, G.; Boukrouche, A. A deconvolution technique using optimal Wiener filtering and regularization. Signal Process. 1999, 72, 23–32.
  25. Orieux, F.; Giovannelli, J.F.; Rodet, T. Bayesian estimation of regularization and point spread function parameters for Wiener–Hunt deconvolution. JOSA A 2010, 27, 1593–1607.
  26. Jin, A.; Yazici, B.; Ale, A.; Ntziachristos, V. Preconditioning of the fluorescence diffuse optical tomography sensing matrix based on compressive sensing. Opt. Lett. 2012, 37, 4326–4328.
  27. Yao, R.; Pian, Q.; Intes, X. Wide-field fluorescence molecular tomography with compressive sensing based preconditioning. Biomed. Opt. Express 2015, 6, 4887–4898.
  28. Tong, Z.; Wang, F.; Hu, C.; Wang, J.; Han, S. Preconditioned generalized orthogonal matching pursuit. EURASIP J. Adv. Signal Process. 2020, 2020, 1–14.
  29. Tong, Z.; Liu, Z.; Hu, C.; Wang, J.; Han, S. Preconditioned deconvolution method for high-resolution ghost imaging. Photonics Res. 2021, 9, 1069–1077.
  30. Li, E.; Chen, M.; Gong, W.; Yu, H.; Han, S. Mutual information of ghost imaging systems. Acta Opt. Sin. 2013, 33, 1211003.
  31. Xu, X.; Li, E.; Shen, X.; Han, S. Optimization of speckle patterns in ghost imaging via sparse constraints by mutual coherence minimization. Chin. Opt. Lett. 2015, 13, 071101.
  32. Candès, E.J.; Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23, 969.
  33. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
  34. Sulam, J.; Ophir, B.; Zibulevsky, M.; Elad, M. Trainlets: Dictionary learning in high dimensions. IEEE Trans. Signal Process. 2016, 64, 3180–3193.
  35. Hu, C.; Tong, Z.; Liu, Z.; Huang, Z.; Wang, J.; Han, S. Optimization of light fields in ghost imaging using dictionary learning. Opt. Express 2019, 27, 28734–28749.
  36. Aßmann, M.; Bayer, M. Compressive adaptive computational ghost imaging. Sci. Rep. 2013, 3, 1545.
  37. Yu, W.K.; Li, M.F.; Yao, X.R.; Liu, X.F.; Wu, L.A.; Zhai, G.J. Adaptive compressive ghost imaging based on wavelet trees and sparse representation. Opt. Express 2014, 22, 7133–7144.
  38. Li, Z.; Suo, J.; Hu, X.; Dai, Q. Content-adaptive ghost imaging of dynamic scenes. Opt. Express 2016, 24, 7328–7336.
  39. Liu, B.; Wang, F.; Chen, C.; Dong, F.; McGloin, D. Self-evolving ghost imaging. Optica 2021, 8, 1340–1349.
  40. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. London. Ser. Contain. Pap. Math. Phys. Character 1922, 222, 309–368.
  41. Hu, C.; Zhu, R.; Yu, H.; Han, S. Correspondence Fourier-transform ghost imaging. Phys. Rev. A 2021, 103, 043717.
  42. Luo, K.H.; Huang, B.Q.; Zheng, W.M.; Wu, L.A. Nonlocal imaging by conditional averaging of random reference measurements. Chin. Phys. Lett. 2012, 29, 074216.
  43. Sun, M.J.; Meng, L.T.; Edgar, M.P.; Padgett, M.J.; Radwell, N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci. Rep. 2017, 7, 3464.
  44. Yu, W.K. Super sub-Nyquist single-pixel imaging by means of cake-cutting Hadamard basis sort. Sensors 2019, 19, 4122.
  45. Yu, W.K.; Liu, Y.M. Single-pixel imaging with origami pattern construction. Sensors 2019, 19, 5135.
  46. Neifeld, M.A.; Ashok, A.; Baheti, P.K. Task-specific information for imaging system analysis. JOSA A 2007, 24, B25–B41.
  47. Buzzi, S.; Lops, M.; Venturino, L. Track-before-detect procedures for early detection of moving target from airborne radars. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 937–954.
  48. Zhai, X.; Cheng, Z.; Wei, Y.; Liang, Z.; Chen, Y. Compressive sensing ghost imaging object detection using generative adversarial networks. Opt. Eng. 2019, 58, 013108.
  49. Chen, H.; Shi, J.; Liu, X.; Niu, Z.; Zeng, G. Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition. Opt. Commun. 2018, 413, 269–275.
  50. Zhang, Z.; Li, X.; Zheng, S.; Yao, M.; Zheng, G.; Zhong, J. Image-free classification of fast-moving objects using “learned” structured illumination and single-pixel detection. Opt. Express 2020, 28, 13269–13278.
  51. Liu, X.F.; Yao, X.R.; Lan, R.M.; Wang, C.; Zhai, G.J. Edge detection based on gradient ghost imaging. Opt. Express 2015, 23, 33802–33811.
  52. Wang, L.; Zou, L.; Zhao, S. Edge detection based on subpixel-speckle-shifting ghost imaging. Opt. Commun. 2018, 407, 181–185.
  53. Yang, D.; Chang, C.; Wu, G.; Luo, B.; Yin, L. Compressive ghost imaging of the moving object using the low-order moments. Appl. Sci. 2020, 10, 7941.
  54. Sun, S.; Gu, J.H.; Lin, H.Z.; Jiang, L.; Liu, W.T. Gradual ghost imaging of moving objects by tracking based on cross correlation. Opt. Lett. 2019, 44, 5594–5597.
  55. Zhang, M.; Wei, Q.; Shen, X.; Liu, Y.; Liu, H.; Cheng, J.; Han, S. Lensless Fourier-transform ghost imaging with classical incoherent light. Phys. Rev. A 2007, 75, 021803.
  56. Zhang, M. Experimental Investigation on Non-local Lensless Fourier-transform Imaging with Classical Incoherent Light. Ph.D. Thesis, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, China, 2007.
  57. Yu, H.; Lu, R.; Han, S.; Xie, H.; Du, G.; Xiao, T.; Zhu, D. Fourier-transform ghost imaging with hard X rays. Phys. Rev. Lett. 2016, 117, 113901.
  58. Liu, H.; Cheng, J.; Han, S. Ghost imaging in Fourier space. J. Appl. Phys. 2007, 102, 103102.
  59. Tan, Z.; Yu, H.; Lu, R.; Zhu, R.; Han, S. Non-locally coded Fourier-transform ghost imaging. Opt. Express 2019, 27, 2937–2948.
  60. Zhu, R.; Yu, H.; Tan, Z.; Lu, R.; Han, S.; Huang, Z.; Wang, J. Ghost imaging based on Y-net: A dynamic coding and decoding approach. Opt. Express 2020, 28, 17556–17569.