Generative Attentional Networks for Image-to-Image Translation: Progressive U-GAT-IT

Unsupervised image-to-image translation has received considerable attention owing to the recent remarkable advances in generative adversarial networks (GANs). In image-to-image translation, state-of-the-art methods use unpaired image data to learn mappings between the source and target domains. However, despite their promising results, existing approaches often fail under challenging conditions, particularly when images contain multiple target instances, when the translation requires significant shape changes, or when visual artifacts arise because low-level information is translated rather than high-level semantics. To tackle these problems, we propose a novel framework called Progressive Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization (PRO-U-GAT-IT) for the unsupervised image-to-image translation task. In contrast to existing attention-based models, which fail to handle geometric transitions between the source and target domains, our model can translate images that require extensive and holistic changes in shape. Experimental results show the superiority of the proposed approach over existing state-of-the-art models on different datasets.

  • generative adversarial networks
  • image-to-image translation
  • style transfer
  • cartoon styles
  • anime

1. Introduction

In recent years, generative adversarial networks (GANs) have made significant progress in image-to-image translation. Researchers in machine learning and computer vision have given this topic considerable attention because of its wide range of practical applications [1][2], which include image inpainting [3][4], colorization [5][6], super-resolution [7][8], and style transfer [9][10]. Image-to-image translation refers to a category of vision and graphics problems in which the goal is to learn the mapping between an input image (source domain) and an output image (target domain) from a set of aligned image pairs [11]. In the case of portrait stylization, various methods have been explored, such as selfie-to-anime [1] and cartoon [12]. Many tasks, however, do not offer paired training data. When paired data are available, the mapping model can be trained in a supervised manner using a conditional generative model [13][14][15] or a simple regression model [5][16][17]. Various works [18][19][20][21][22][23][24][25] have successfully translated images in unsupervised settings, without paired data, by assuming a shared latent space [22] or cycle consistency [11][21]. Nevertheless, supervised approaches require paired datasets for training, which can be laborious and expensive to prepare manually, when preparing them is feasible at all. In contrast, unsupervised methods need a large volume of unpaired data and often struggle to reach stable training convergence and to generate high-resolution results [26].

Despite this progress, previous techniques still have shortcomings and often fail on challenging tasks, especially when the target images contain multiple instances to be translated [27] or when the shape of the target instances changes drastically [11]. For example, they are effective for style transfer tasks that map local textures, such as photo2vangogh and photo2portrait, but ineffective for image translation tasks with extensive shape transformations, such as selfie2anime and cat2dog, on wild images. Pre-processing steps such as image cropping and alignment can significantly alleviate these difficulties by limiting the complexity of the data distributions [1][2]. Furthermore, current methods such as DRIT [28] cannot produce the desired results for both appearance-preserving translation (such as horse2zebra) and shape-transforming translation (such as cat2dog) because of their fixed network structure and hyperparameters; the network architecture or hyperparameters must be adjusted for each dataset.

In 2014, Ian Goodfellow et al. [29] introduced generative adversarial networks (GANs), which can solve image-to-image problems, including anime face style transfer. Pix2Pix [13] and CycleGAN [11], both published in 2017, are the two primary GAN-based approaches that successfully address image-to-image problems. CartoonGAN [30] was introduced in 2018 as an improvement over Pix2Pix, specializing in the cartoon domain. Nevertheless, all of these earlier methods merely transfer textures. Junho Kim et al. therefore presented U-GAT-IT [1], a CycleGAN-based technique that can handle both texture and geometry transfer. However, the geometry of the generated image can still differ dramatically from that of the input human face image, so the output does not preserve the signature of the input.
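Many of the unsupervised approaches cited above rely on the cycle-consistency assumption [11][21], which constrains the learned mappings by requiring that an image translated to the other domain and back is reconstructed. As a point of reference, the minimal PyTorch sketch below shows one common way such a loss is written; the generator names G_ab and G_ba and the loss weight are illustrative assumptions rather than details of any particular method.

```python
# Minimal sketch of a cycle-consistency loss in the spirit of CycleGAN [11].
# G_ab and G_ba are hypothetical generators mapping domain A -> B and B -> A.
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, weight=10.0):
    """Penalize the reconstruction error after a round trip A -> B -> A
    (and B -> A -> B), so unpaired training still constrains both mappings."""
    rec_a = G_ba(G_ab(real_a))  # translate A -> B, then back to A
    rec_b = G_ab(G_ba(real_b))  # translate B -> A, then back to B
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))
```

In CycleGAN-style training, this term is added to the adversarial losses of both generators.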
This paper proposes Progressive U-GAT-IT (PRO-U-GAT-IT), a novel framework for unsupervised image-to-image translation tasks that incorporates an attention module and a learnable normalization function in an end-to-end manner. Guided by the attention maps obtained from an auxiliary classifier that distinguishes between the source and target domains, our model focuses the translation on the most important regions and disregards minor areas. These attention maps are embedded in both the generator and the discriminator to emphasize relevant critical areas, thereby enabling shape transformation: the generator’s attention map focuses on the regions that distinguish the two domains, whereas the discriminator’s attention map assists fine-tuning by concentrating on the differences between real and fake images. Additionally, we found that the choice of normalization function substantially influences the quality of the translated results on datasets with varying degrees of shape and texture change. Earlier approaches also have limitations, including blurry results, unstable training, low resolutions, and limited variation. Moreover, high-resolution images are difficult to generate because, at higher resolutions, generated images are more easily distinguished from real training images. Finally, due to memory constraints, large resolutions also require smaller mini-batches, which compromises training stability. Nevertheless, recent work has considerably improved the resolution and quality of images produced by generative methods, particularly GANs. The contributions of our work are summarized as follows:
  • We propose a framework that improves the image-to-image translation model through a progressive block-training approach (a schematic sketch of the block fade-in is given after this list). This technique allows distinct features to be acquired during different training phases, leading to several notable advantages, including reduced VRAM usage, training speed on par with or surpassing other methods on the same device, and the ability to achieve successful image translation at higher resolutions.
  • Furthermore, we outline a new research direction that emphasizes the exploration and refinement of progressive image-to-image translation techniques. Our aim is to improve both the quality of the results and the overall efficiency of image-to-image translation models.
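To make the progressive block-training idea in the first contribution more concrete, the snippet below illustrates the kind of fade-in blending used when a block at a new resolution is added, in the spirit of progressive growing [34]; the function names, blending scheme, and upsampling mode are assumptions for illustration, not the exact training procedure of PRO-U-GAT-IT.

```python
# Schematic fade-in used when a new resolution block is added, in the spirit
# of progressive growing [34]; names and schedule are illustrative assumptions.
import torch.nn.functional as F

def grow_step(features, old_to_rgb, new_block, new_to_rgb, alpha):
    """Blend outputs while a resolution-doubling block is being faded in.

    features:   activations from the already-trained trunk
    old_to_rgb: output head at the previous (lower) resolution
    new_block:  newly added block that doubles the spatial resolution
    new_to_rgb: output head at the new (higher) resolution
    alpha:      fade-in weight, annealed from 0 to 1 during the growth phase
    """
    coarse = F.interpolate(old_to_rgb(features), scale_factor=2, mode="nearest")
    fine = new_to_rgb(new_block(features))
    return (1.0 - alpha) * coarse + alpha * fine
```

In practice, the fade-in weight alpha is annealed from 0 to 1 over each growth phase so that the untrained block is introduced gradually without destabilizing the already-trained layers.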

2. Related Work

2.1. Generative Adversarial Networks (GANs)

GANs [29] are powerful generative models that have achieved impressive results in various computer vision applications, such as super-resolution imaging [31], image generation [32], and video generation [33]. Karras et al. proposed progressive growing of GANs [34], a simple method for synthesizing large (for example, 256 × 256), realistic images in an unconditional setting. In the GAN framework, the goal of the generative model is to fool the discriminator by generating fake images, whereas the goal of the discriminative model is to distinguish generated samples from real samples. Furthermore, to generate meaningful images that satisfy user needs, conditional GANs (CGANs) [35][36] add auxiliary information, such as discrete labels [19][37], object key points [38], human skeletons [39][40], and semantic maps [36][41][42], to assist the image generation process.
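For reference, the adversarial game described above is commonly implemented as a pair of losses; the minimal PyTorch sketch below uses the standard binary cross-entropy (non-saturating) formulation, which is one common instantiation rather than the specific objective of any of the cited works.

```python
# Minimal sketch of standard adversarial losses [29] in PyTorch; D is assumed
# to output raw logits, and `fake` is a batch produced by the generator.
import torch
import torch.nn.functional as F

def discriminator_loss(D, real, fake):
    logits_real = D(real)
    logits_fake = D(fake.detach())  # do not backpropagate into the generator here
    # The discriminator scores real samples as 1 and generated samples as 0.
    loss_real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return loss_real + loss_fake

def generator_loss(D, fake):
    logits_fake = D(fake)
    # Non-saturating objective: the generator tries to make D label its outputs as real.
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```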

2.2. Image-to-Image Translation

Convolutional neural networks (CNNs) have been used to learn a translation function for image-to-image translation, where the task is to find a mapping between a source and a target domain. Early methods adopt a supervised framework that learns from paired examples, for instance by employing a conditional GAN to learn the mapping function [13][15][20]. Philip Isola et al. proposed Pix2pix [13], a conditional framework that uses a CGAN to learn a mapping from input to output images. Wang et al. proposed Pix2pixHD [15], a high-resolution image-to-image translation method that can produce photo-realistic interpretations of semantic label maps. Similar approaches have been applied to several other tasks, including the generation of hand gestures [39]. However, many real-world tasks have few or no paired input-output samples available, and the image-to-image translation problem becomes ill-posed in the absence of paired training data. Several unpaired image-to-image translation methods have recently been proposed to address this limitation and have produced remarkable results; they learn the mapping function without paired training data and are therefore essential for applications in which paired data are unavailable or cannot be obtained. In particular, CycleGAN [11] learns to map between two domains of images rather than between pairs of images. Besides CycleGAN, many other GAN variants have been proposed [18][21][25][43][44][45] to deal with the cross-domain problem. However, these models are easily affected by undesired content and cannot identify the most discriminative semantic information in the images during translation.

Several works have employed attention mechanisms to alleviate these shortcomings. Attention mechanisms, which allow models to concentrate on the most significant parts of the input, have been applied successfully in many computer vision tasks, including depth estimation [46]. In recent studies, attention modules have been learned in an unsupervised manner to attend to the region of interest (ROI) in image translation tasks; these approaches can be divided into two categories. The first category provides attention through additional data. For example, Liang et al. introduced ContrastGAN [47], which utilizes the object mask annotations of every dataset as additional input, and Mo et al. proposed InstaGAN [2], which incorporates instance information (such as object segmentation masks) to enhance multi-instance transfiguration. The second category trains a segmentation or attention model to produce attention maps and incorporates them into the translation network. For example, Chen et al. [8] generated attention maps with an additional attention network to better highlight objects of interest, Kastaniotis et al. presented ATA-GAN [48], which generates attention maps using a teacher network, and Yang et al. [49] proposed a module that predicts an attention map to guide the image translation process. Kim et al. [1] introduced the U-GAT-IT model to circumvent the challenge of geometry transfer; its key objective is to pay more attention to the regions that contain distinctive anime-style representations, and an auxiliary classifier is used for this purpose to generate attention masks. In a study by Mejjati et al. [50], attention mechanisms were implemented alongside the generators and discriminators using two additional attention networks.
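To illustrate how an auxiliary classifier can yield attention maps of the kind used in U-GAT-IT [1], the sketch below reuses the classifier’s weights to re-weight feature channels; the single global-average-pooling branch and layer sizes shown here are simplified assumptions (U-GAT-IT additionally uses a max-pooling branch and applies the module in both the generator and the discriminator).

```python
# Simplified sketch of a CAM-style attention module driven by an auxiliary
# domain classifier, in the spirit of U-GAT-IT [1]; details are illustrative.
import torch.nn as nn

class CAMAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Auxiliary classifier: one logit predicting source vs. target domain.
        self.fc = nn.Linear(channels, 1, bias=False)

    def forward(self, feat):                          # feat: (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))                # global average pooling -> (B, C)
        domain_logit = self.fc(pooled)                # auxiliary domain prediction
        # Reuse the classifier weights to re-weight feature channels, which
        # highlights the regions most useful for telling the domains apart.
        weights = self.fc.weight.view(1, -1, 1, 1)    # (1, C, 1, 1)
        attended = feat * weights                     # attention-weighted features
        heatmap = attended.sum(dim=1, keepdim=True)   # (B, 1, H, W) attention map
        return attended, heatmap, domain_logit
```

The resulting heatmap emphasizes the regions that discriminate between the two domains, which is what allows the translation to concentrate on those regions.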

References

  1. Kim, J.; Kim, M.; Kang, H.; Lee, K. U-gat-it: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation. arXiv 2019, arXiv:1907.10830.
  2. Mo, S.; Cho, M.; Shin, J. Instagan: Instance-aware image-to-image translation. arXiv 2018, arXiv:1812.10889.
  3. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544.
  4. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. 2017, 36, 1–14.
  5. Zhang, R.; Isola, P.; Efros, A.A. Colorful image colorization. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 649–666.
  6. Zhang, R.; Zhu, J.Y.; Isola, P.; Geng, X.; Lin, A.S.; Yu, T.; Efros, A.A. Real-time user-guided image colorization with learned deep priors. arXiv 2017, arXiv:1705.02999.
  7. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
  8. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
  9. Gatys, L.A.; Ecker, A.S.; Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2414–2423.
  10. Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1501–1510.
  11. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
  12. Li, J. Twin-GAN–unpaired cross-domain image translation with weight-sharing GANs. arXiv 2018, arXiv:1809.00946.
  13. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
  14. Li, C.; Liu, H.; Chen, C.; Pu, Y.; Chen, L.; Henao, R.; Carin, L. Alice: Towards understanding adversarial learning for joint distribution matching. Adv. Neural Inf. Process. Syst. 2017, 30, 5501–5509.
  15. Wang, T.C.; Liu, M.Y.; Zhu, J.Y.; Tao, A.; Kautz, J.; Catanzaro, B. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8798–8807.
  16. Larsson, G.; Maire, M.; Shakhnarovich, G. Learning representations for automatic colorization. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 577–593.
  17. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  18. Anoosheh, A.; Agustsson, E.; Timofte, R.; Van Gool, L. Combogan: Unrestrained scalability for image domain translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 783–790.
  19. Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797.
  20. Huang, X.; Liu, M.Y.; Belongie, S.; Kautz, J. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 172–189.
  21. Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to discover cross-domain relations with generative adversarial networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, NSW, Australia, 6–11 August 2017; pp. 1857–1865.
  22. Liu, M.Y.; Breuel, T.; Kautz, J. Unsupervised image-to-image translation networks. Adv. Neural Inf. Process. Syst. 2017, 30, 700–708.
  23. Royer, A.; Bousmalis, K.; Gouws, S.; Bertsch, F.; Mosseri, I.; Cole, F.; Murphy, K. Xgan: Unsupervised image-to-image translation for many-to-many mappings. In Domain Adaptation for Visual Understanding; Springer: Berlin/Heidelberg, Germany, 2020; pp. 33–49.
  24. Taigman, Y.; Polyak, A.; Wolf, L. Unsupervised cross-domain image generation. arXiv 2016, arXiv:1611.02200.
  25. Yi, Z.; Zhang, H.; Tan, P.; Gong, M. Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2849–2857.
  26. Song, G.; Luo, L.; Liu, J.; Ma, W.C.; Lai, C.; Zheng, C.; Cham, T.J. AgileGAN: Stylizing portraits by inversion-consistent transfer learning. ACM Trans. Graph. 2021, 40, 1–13.
  27. Gokaslan, A.; Ramanujan, V.; Ritchie, D.; Kim, K.I.; Tompkin, J. Improving shape deformation in unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 649–665.
  28. Lee, H.Y.; Tseng, H.Y.; Huang, J.B.; Singh, M.; Yang, M.H. Diverse image-to-image translation via disentangled representations. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 35–51.
  29. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
  30. Chen, Y.; Lai, Y.K.; Liu, Y.J. Cartoongan: Generative adversarial networks for photo cartoonization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9465–9474.
  31. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
  32. Tang, H.; Sebe, N. Total generate: Cycle in cycle generative adversarial networks for generating human faces, hands, bodies, and natural scenes. IEEE Trans. Multimed. 2021, 24, 2963–2974.
  33. Liu, G.; Tang, H.; Latapie, H.M.; Corso, J.J.; Yan, Y. Cross-view exocentric to egocentric video synthesis. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 974–982.
  34. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of gans for improved quality, stability, and variation. arXiv 2017, arXiv:1710.10196.
  35. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
  36. Tang, H.; Liu, H.; Sebe, N. Unified generative adversarial networks for controllable image-to-image translation. IEEE Trans. Image Process. 2020, 29, 8916–8929.
  37. Perarnau, G.; Van De Weijer, J.; Raducanu, B.; Álvarez, J.M. Invertible conditional gans for image editing. arXiv 2016, arXiv:1611.06355.
  38. Tang, H.; Xu, D.; Liu, G.; Wang, W.; Sebe, N.; Yan, Y. Cycle in cycle generative adversarial networks for keypoint-guided image generation. In Proceedings of the 27th ACM international conference on multimedia, Nice, France, 21–25 October 2019; pp. 2052–2060.
  39. Tang, H.; Wang, W.; Xu, D.; Yan, Y.; Sebe, N. Gesturegan for hand gesture-to-gesture translation in the wild. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 774–782.
  40. Tang, H.; Bai, S.; Zhang, L.; Torr, P.H.; Sebe, N. Xinggan for person image generation. In Proceedings of the European Conference on Computer Vision, Virtual, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 717–734.
  41. Tang, H.; Xu, D.; Sebe, N.; Wang, Y.; Corso, J.J.; Yan, Y. Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2417–2426.
  42. Tang, H.; Xu, D.; Yan, Y.; Torr, P.H.; Sebe, N. Local class-specific and global image-level generative adversarial networks for semantic-guided scene generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7870–7879.
  43. Benaim, S.; Wolf, L. One-sided unsupervised domain mapping. Adv. Neural Inf. Process. Syst. 2017, 30, 752–762.
  44. Tang, H.; Xu, D.; Wang, W.; Yan, Y.; Sebe, N. Dual generator generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the Asian Conference on Computer Vision, Perth, WA, Australia, 2–6 December 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–21.
  45. Wang, Y.; van de Weijer, J.; Herranz, L. Mix and match networks: Encoder-decoder alignment for zero-pair image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5467–5476.
  46. Xu, D.; Wang, W.; Tang, H.; Liu, H.; Sebe, N.; Ricci, E. Structured attention guided convolutional neural fields for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3917–3925.
  47. Liang, X.; Zhang, H.; Xing, E.P. Generative semantic manipulation with contrasting gan. arXiv 2017, arXiv:1708.00315.
  48. Kastaniotis, D.; Ntinou, I.; Tsourounis, D.; Economou, G.; Fotopoulos, S. Attention-aware generative adversarial networks (ATA-GANs). In Proceedings of the 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Zagori, Greece, 10–12 June 2018; IEEE: New York, NY, USA, 2018; pp. 1–5.
  49. Yang, C.; Kim, T.; Wang, R.; Peng, H.; Kuo, C.C.J. Show, attend, and translate: Unsupervised image translation with self-regularization and attention. IEEE Trans. Image Process. 2019, 28, 4845–4856.
  50. Alami Mejjati, Y.; Richardt, C.; Tompkin, J.; Cosker, D.; Kim, K.I. Unsupervised attention-guided image-to-image translation. Adv. Neural Inf. Process. Syst. 2018, 31, 3697–3707.