Damaged QR Code Reconstruction: Comparison
Please note this is a comparison between Version 2 by Rita Xu and Version 1 by Zheng Jianhua.

QR codes often become difficult to recognize due to damage. Traditional restoration methods exhibit limited effectiveness on severely damaged or densely encoded QR codes, are time-consuming, and struggle with extensive information loss.

  • image inpainting
  • damaged QR code reconstruction
  • GAN

1. Introduction

In recent years, QR codes have found widespread applications globally, especially in areas such as mobile payments, logistics tracking, identity verification, and access control [1][2][3][4][5]. Take traceable QR codes as an example: they help users track critical information such as the product origin, production date, and manufacturer, ensuring product quality and safety [2][3][6]. However, in practical usage, QR codes can become damaged for various reasons, such as daily wear and tear, liquid splashes, or other accidents, causing them to become unreadable and thereby compromising product traceability and security [7][8][9][10].
The objective of repairing damaged QR codes is to restore their missing areas so that they can be recognized again [11]. From this standpoint, the restoration of damaged QR codes can be considered a specific application within the domain of image restoration. With their excellent image generation capabilities, Generative Adversarial Networks (GANs) have achieved considerable research success [9][10][12][13][14][15][16][17]. However, the GAN-based methods common in image restoration have rarely been applied to restoring damaged QR codes. From this perspective, this research aims to explore the feasibility of applying GANs to the task of repairing damaged QR codes and to propose a novel solution. As previously mentioned, the restoration of damaged QR codes is essentially a special case of image restoration; however, due to the specific characteristics of QR code images, researchers encounter unique challenges when employing GANs for restoration.
Challenge 1: Information recovery takes priority over image quality in QR code restoration. The core of QR code image restoration is to recover the encoded information so that the code can be read again. Whereas general image restoration focuses on visual quality, QR code restoration must first consider the recognition rate after restoration. This means the recognition rate metric outweighs image-quality metrics when evaluating restoration models; the recoverability of the information is an additional constraint unique to QR code restoration tasks.
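The distinction drawn here can be made concrete: two restorations with nearly identical image-quality scores can differ entirely in whether the code decodes. A minimal sketch of the two kinds of metric (the `decode_rate` helper and the toy flags are illustrative, not the paper's evaluation protocol):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two uint8 images (image quality)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def decode_rate(decoded_flags):
    """Fraction of restored QR codes a decoder could read again (usability)."""
    return sum(decoded_flags) / len(decoded_flags)

# Toy example: a restoration with only one wrong module still scores a
# reasonable PSNR, yet a single flipped module in a critical region can
# make the whole code undecodable.
ref = np.full((8, 8), 255, dtype=np.uint8)
out = ref.copy()
out[0, 0] = 0                      # one wrong module
print(round(psnr(ref, out), 2))    # image quality looks fine...
print(decode_rate([True, False]))  # ...but only half the codes decode: 0.5
```

This is why the recognition (decode) rate, not PSNR alone, is the primary metric for QR code restoration.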
Challenge 2: The lack of local texture makes it difficult to rely on local context in QR code restoration. QR code images inherently contain few texture details, which makes purely local restoration difficult; models must rely more on non-local contextual information to make structurally reasonable inferences [15]. Unlike general image inpainting, which cares only about visual effects, QR code restoration must prioritize the recognizability of the information, attending more to restoring the global structure than to reconstructing local texture details.
To address these unique challenges, the researchers propose a novel deep learning model named the Edge-enhanced Hierarchical Feature Pyramid GAN (EHFP-GAN). The model comprises two sub-modules: the Edge Restoration Module and the QR Code Reconstruction Module. The Edge Restoration Module repairs the edge image generated by Canny edge detection [18], and its output is fed alongside the original image into the QR Code Reconstruction Module, so that the restored edges improve the quality of the final reconstruction.
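Because QR code images are nearly binary, their edge maps carry most of the structural information, which is why an edge-repair stage is informative. As a rough illustration of the edge-extraction step only, here is a simplified Sobel-magnitude detector standing in for the Canny detector the model actually uses (the threshold and the toy test image are arbitrary assumptions):

```python
import numpy as np

def sobel_edges(img, thresh=128.0):
    """Binary edge map from gradient magnitude; a simplified stand-in for
    the Canny detector used in the actual pipeline (no non-maximum
    suppression or hysteresis)."""
    img = img.astype(np.float64)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T  # Sobel y-kernel is the transpose of the x-kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8)

# A black square on a white background, mimicking one QR module:
# edges appear only at the module border, not in flat regions.
img = np.full((10, 10), 255.0)
img[3:7, 3:7] = 0.0
edges = sobel_edges(img)
print(edges.sum() > 0)  # True: the border produces edge responses
```

In practice the Canny detector (with smoothing, non-maximum suppression, and hysteresis thresholding) yields thinner, cleaner edges, which is why the paper uses it rather than a raw gradient threshold.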

2. QR Code-Related Work

Since the emergence of QR codes, ensuring their reliable scanning and decoding in different environments has been a research hotspot. Relevant studies have mainly aimed to improve QR code reliability through error correction codes, denoising techniques, and image enhancement. In terms of error correction, QR codes primarily use Reed–Solomon error correction to rectify reading errors, enhancing data reliability and readability [19]. In cases of actual damage, however, the performance of this technique is not ideal. Because of potential interferences such as stains, blurriness, rotation, and scaling, researchers have developed a range of anti-interference techniques, including filtering, denoising, rotation correction, and scale normalization, to reduce the impact of interference on QR code recognition. In terms of denoising, Tomoyoshi Shimobaba et al. (2017) proposed a holographic image restoration algorithm using autoencoders: reconstructed images are obtained through numerical diffraction calculations or holographic optical reconstruction, and autoencoders remove noise pollution to restore clearer holographic images of QR codes [20]. Researchers have also employed the Cahn–Hilliard equation for QR code image restoration, particularly for restoring low-order sets (edges, corners) and enhancing edges [21]. When dealing with severely damaged QR codes, the effectiveness of traditional image processing techniques is relatively limited, so in recent years researchers have shifted their focus towards deep learning. For blurred QR images, for instance, Michael John Fanous et al. proposed GANscan, a high-speed imaging method based on generative adversarial networks for capturing QR codes on rapidly moving scanning devices.
This method primarily utilizes GANs to process motion-blurred QR video frames into clear images [12]. The above studies extended QR technology to adapt to various environments from different perspectives. However, these methods mainly addressed QR code blurriness and noise issues, with limited research on addressing damaged QR codes.
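The capacity limit of Reed–Solomon correction mentioned above can be quantified: an RS(n, k) code corrects at most t = ⌊(n − k)/2⌋ symbol errors, so once damage destroys more codewords than that, the built-in redundancy fails, which is why heavily damaged codes need restoration before decoding. A small sketch, using the standard codeword counts for a version 1-H QR code (26 codewords total, 9 of them data):

```python
def rs_correctable(n, k):
    """Maximum number of erroneous symbols an RS(n, k) code can correct:
    t = floor((n - k) / 2)."""
    return (n - k) // 2

# Version 1, error-correction level H: 26 total codewords, 9 data codewords,
# 17 error-correction codewords (standard figures from the QR specification).
n, k = 26, 9
print(rs_correctable(n, k))  # 8 codeword errors at most
```

Beyond that bound, decoding fails outright, regardless of how much of the symbol remains visually intact.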

3. Image Inpainting Work

Image inpainting refers to the task of reconstructing lost or corrupted parts of images from the surrounding available pixels. Its core idea is to exploit the spatial continuity and texture similarity of natural images to synthesize plausible content for missing regions. Early methods relied on traditional signal processing techniques, extrapolating from inherent image similarity and structure, with limited effect. For example, pixel-by-pixel and block-by-block filling both start from the boundaries of the damaged areas and, following computed priorities, gradually fill the unknown regions using known information from surrounding or similar areas, aiming to synthesize visually continuous images [22][23]. However, such methods are only suitable for small-area restoration and produce blurriness and unnatural textures when repairing complex backgrounds and large missing areas. In recent years [24], deep learning has developed rapidly and excelled at image inpainting, learning intrinsic image priors and yielding more realistic completions. For example, the Context Encoders model uses an end-to-end convolutional neural network for image completion and represents one of the earliest successful applications of deep learning to this task; its core innovation is the encoder–decoder structure of generative models, which can directly process images with holes [25]. Given this progress, more attention has turned to applied research such as object removal and photo restoration. For example, Wan et al. proposed a deep learning method to restore severely degraded old photos.
The method trains two VAEs to map old photos and clean photos into latent spaces, learns the translation between the two latent spaces on synthesized image pairs, and then uses global and local branches to handle structural and non-structural defects in old photos, respectively; the two branches are fused in the latent space to improve recovery from compound defects [14]. This provides a new perspective for QR code restoration research. However, unlike ordinary restoration tasks such as object removal and photo restoration, QR images carry semantic information, requiring a delicate balance between visual quality and readability. Applying image inpainting techniques to high-quality QR code restoration still offers many unexplored possibilities.
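The boundary-inward, priority-driven filling described earlier can be sketched in miniature. This toy version uses a uniform priority (plain onion peeling, averaging known 4-neighbours) rather than the exemplar-based priority terms of the actual algorithms, and it illustrates why such local schemes only suit small, smooth regions:

```python
import numpy as np

def onion_peel_fill(img, mask, max_iters=100):
    """Fill masked pixels from the boundary inward: each pass assigns every
    unknown pixel adjacent to known pixels the mean of its known
    4-neighbours. A toy version of pixel-by-pixel inpainting."""
    img = img.astype(np.float64).copy()
    known = ~mask.copy()
    h, w = img.shape
    for _ in range(max_iters):
        if known.all():
            break
        new_known = known.copy()
        for i in range(h):
            for j in range(w):
                if known[i, j]:
                    continue
                vals = [img[y, x]
                        for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= y < h and 0 <= x < w and known[y, x]]
                if vals:  # at least one known neighbour: fill this pixel
                    img[i, j] = sum(vals) / len(vals)
                    new_known[i, j] = True
        known = new_known
    return img

# A constant region with a small hole: the fill recovers the constant value.
img = np.full((6, 6), 200.0)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True
img[mask] = 0.0
out = onion_peel_fill(img, mask)
print(np.allclose(out[2:4, 2:4], 200.0))  # True
```

On a QR code, this averaging produces gray, structureless fills across large holes, which is exactly the failure mode that motivates learning-based, globally aware restoration.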

References

  1. Jiao, S.; Zou, W.; Li, X. QR code based noise-free optical encryption and decryption of a gray scale image. Opt. Commun. 2017, 387, 235–240.
  2. Bai, H.; Zhou, G.; Hu, Y.; Sun, A.; Xu, X.; Liu, X.; Lu, C. Traceability technologies for farm animals and their products in China. Food Control 2017, 79, 35–43.
  3. Tarjan, L.; Šenk, I.; Tegeltija, S.; Stankovski, S.; Ostojic, G. A readability analysis for QR code application in a traceability system. Comput. Electron. Agric. 2014, 109, 1–11.
  4. Chen, R.; Zheng, Z.; Yu, Y.; Zhao, H.; Ren, J.; Tan, H.-Z. Fast Restoration for Out-of-Focus Blurred Images of QR Code With Edge Prior Information via Image Sensing. IEEE Sens. J. 2021, 21, 18222–18236.
  5. Karrach, L.; Pivarčiová, E.; Bozek, P. Recognition of Perspective Distorted QR Codes with a Partially Damaged Finder Pattern in Real Scene Images. Appl. Sci. 2020, 10, 7814.
  6. Fröschle, H.-K.; Gonzales-Barron, U.; McDonnell, K.; Ward, S. Investigation of the potential use of e-tracking and tracing of poultry using linear and 2D barcodes. Comput. Electron. Agric. 2009, 66, 126–132.
  7. Chen, R.; Zheng, Z.; Pan, J.; Yu, Y.; Zhao, H.; Ren, J. Fast Blind Deblurring of QR Code Images Based on Adaptive Scale Control. Mob. Netw. Appl. 2021, 26, 2472–2487.
  8. van Gennip, Y.; Athavale, P.; Gilles, J.; Choksi, R. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes. IEEE Trans. Image Process. 2015, 24, 2864–2873.
  9. Wang, M.; Chen, K.; Lin, F. Multi-residual generative adversarial networks for QR code deblurring. In Proceedings of the International Conference on Electronic Information Technology (EIT 2022), Chengdu, China, 18–20 March 2022; pp. 589–594.
  10. Wang, B.; Xu, J.; Zhang, J.; Li, G.; Wang, X. Motion deblur of QR code based on generative adversative network. In Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 20–22 December 2019; pp. 166–170.
  11. Bertalmio, M.; Sapiro, G.; Caselles, V.; Ballester, C. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 417–424.
  12. Fanous, M.J.; Popescu, G. GANscan: Continuous scanning microscopy using deep learning deblurring. Light. Sci. Appl. 2022, 11, 265.
  13. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
  14. Wan, Z.; Zhang, B.; Chen, D.; Zhang, P.; Chen, D.; Liao, J.; Wen, F. Bringing old photos back to life. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2747–2757.
  15. Zeng, Y.; Fu, J.; Chao, H.; Guo, B. Aggregated Contextual Transformations for High-Resolution Image Inpainting. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3266–3280.
  16. Nazeri, K.; Ng, E.; Joseph, T.; Qureshi, F.Z.; Ebrahimi, M. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv 2019, arXiv:1901.00212.
  17. Liu, G.; Reda, F.A.; Shih, K.J.; Wang, T.-C.; Tao, A.; Catanzaro, B. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 85–100.
  18. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698.
  19. Reed, I.S.; Solomon, G. Polynomial Codes Over Certain Finite Fields. J. Soc. Ind. Appl. Math. 1960, 8, 300–304.
  20. Shimobaba, T.; Endo, Y.; Hirayama, R.; Nagahama, Y.; Takahashi, T.; Nishitsuji, T.; Kakue, T.; Shiraki, A.; Takada, N.; Masuda, N.; et al. Autoencoder-based holographic image restoration. Appl. Opt. 2017, 56, F27–F30.
  21. Theljani, A.; Houichet, H.; Mohamed, A. An adaptive Cahn-Hilliard equation for enhanced edges in binary image inpainting. J. Algorithms Comput. Technol. 2020, 14, 1748302620941430.
  22. Telea, A. An image inpainting technique based on the fast marching method. J. Graph. Tools 2004, 9, 23–34.
  23. Criminisi, A.; Perez, P.; Toyama, K. Region Filling and Object Removal by Exemplar-Based Image Inpainting. IEEE Trans. Image Process. 2004, 13, 1200–1212.
  24. Nguyen, V.-T.; Nguyen, A.-T.; Nguyen, V.-T.; Bui, H.-A. A real-time human tracking system using convolutional neural network and particle filter. In Intelligent Systems and Networks: Selected Articles from ICISN 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 411–417.
  25. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544.