Yang, P.; He, C.; Luo, S.; Wang, T.; Wu, H. Underwater Image Enhancement. Encyclopedia. Available online: https://encyclopedia.pub/entry/52024 (accessed on 21 May 2024).
Underwater Image Enhancement

The complex underwater environment and the light scattering effect cause severe degradation in underwater images, including color distortion, noise interference, and loss of detail. To address these problems, researchers propose a triple-branch dense block-based generative adversarial network (TDGAN) for the quality enhancement of underwater images.

Keywords: generative adversarial network (GAN); underwater image enhancement; multiscale dense

1. Introduction

Underwater imaging technology is widely used in deep-sea resource exploration, marine rescue, biodiversity monitoring, and submarine cable laying. However, images captured by underwater cameras suffer from several degradation problems, such as color distortion, low contrast, and blurring [1]. The main reasons for these problems are threefold. Firstly, light attenuates exponentially as it propagates through water, and the attenuation rate depends on wavelength; the red channel attenuates most strongly. Raw underwater images therefore appear bluish or greenish compared with images taken in air. Secondly, stray light reaches the sensor because of the scattering effect [2], producing a haze over the entire scene and reducing image resolution and quality. Finally, the colors of underwater images are often distorted by external conditions such as water depth and lighting [3]. Together, these three factors degrade underwater images to the point where they cannot meet practical application requirements. Moreover, the limitations of camera equipment also contribute to the degradation, because cameras, unlike the human eye, cannot capture the full range of scene brightness [4]. Note that underwater images here refer to photographs captured by cameras in the underwater environment.
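The exponential, wavelength-dependent attenuation described above can be sketched with a simple Beer-Lambert-style decay applied per color channel. The coefficients below are hypothetical values chosen only to illustrate that red light decays fastest, not measured constants:

```python
import numpy as np

# Hypothetical attenuation coefficients (1/m), illustrating only the
# qualitative ordering: red decays fastest, blue slowest.
ATTENUATION = np.array([0.60, 0.15, 0.08])  # R, G, B

def attenuate(rgb, depth_m):
    """Apply exponential decay I = I0 * exp(-c * d) independently to each
    color channel, mimicking wavelength-dependent underwater absorption."""
    return rgb * np.exp(-ATTENUATION * depth_m)

# A neutral gray surface photographed through 5 m of water turns blue-green:
surface = np.array([0.8, 0.8, 0.8])
print(attenuate(surface, 5.0))
```

Because the red component decays far more quickly than green and blue, the simulated image shifts toward the bluish-green cast typical of raw underwater photographs.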
Many image enhancement and restoration methods have been developed over the past few years to improve the quality of underwater images. Underwater image enhancement methods mainly redistribute pixel intensities to improve the color and contrast of images; for example, the enhancement method in [5] adjusts image pixel values. Unlike enhancement, underwater image restoration usually requires an effective degradation model that accounts for the underwater imaging mechanism and the physical properties of light propagation in water. The key parameters of the constructed physical model are derived from prior knowledge, and the underwater image is restored through a transmission compensation process [6]. For example, Refs. [7][8][9] used an image formation model to restore underwater images. However, because the enhancement and restoration process must account for complex underwater physics and optics, the limitations of traditional methods are apparent. Constrained by insufficient training data and difficult parameter selection, traditional methods generalize poorly: the enhanced images of some scenes are over-enhanced or under-enhanced. With the development of artificial intelligence, deep learning methods have been applied to underwater image processing [10][11][12][13]. Given abundant training data, deep learning dramatically improves generalization over traditional methods and can enhance image quality across different underwater scenes. For instance, Li et al. [10] proposed WaterGAN, which uses synthetic real-world images, raw underwater images, and depth data to train a deep learning network for color correction of underwater images. Similar to WaterGAN, Fabbri et al. also used a GAN-based method to enhance underwater images.
Firstly, the distorted image is reconstructed based on CycleGAN [14], and the underwater GAN (UGAN) [15] is then trained on the reconstructed underwater image pairs. Finally, a clear underwater image is obtained with the pix2pix [16] model. Furthermore, building on Ref. [16], Islam et al. [17] designed a fast underwater image enhancement model (FunieGAN) that uses a U-net network to generate underwater images with rich visual perception. Although deep learning-based methods can obtain high-quality underwater images, the training time and the resulting image quality (e.g., noise and image details) depend on the structure of the neural network. Moreover, color consistency and training stability are the main problems restricting the performance of existing deep learning-based methods.
To solve the problems of color distortion, noise interference, and detail loss in underwater images, researchers propose a generative adversarial network with a triple-branch dense block (TDGAN). Firstly, a triple-branch dense block (TBDB) is developed that requires neither an underwater degradation model nor prior image knowledge. The TBDB, which combines dense concatenation, multi-scale techniques, and residual learning, can fully exploit feature information and recover image details. Secondly, a dual-branch discriminative network is designed that extracts high-dimensional features, including high-frequency information, and produces low-dimensional discriminative output. The discriminator guides the generator to attend to both the global semantics and the local details of the image, yielding outputs with prominent local detail. In addition, a multinomial loss function is constructed to enrich the visual appearance of the images and obtain high-quality underwater images that align with human visual perception. Non-reference and full-reference metrics are used for quantitative comparison, and extensive experiments show that TDGAN achieves higher evaluation scores on underwater images. Moreover, two ablation studies demonstrate the contribution of each component, and application tests verify the effectiveness of TDGAN.

2. Traditional Underwater Image Enhancement Methods

Recently, many traditional methods have been proposed to enhance or restore underwater images, such as the dark channel prior (DCP) [18], histogram equalization (HE) [19], contrast limited adaptive histogram equalization (CLAHE) [20], the unsupervised colour correction method (UCM) [21], and the underwater light attenuation prior (ULAP) [22]. To improve underwater image contrast, Deng et al. presented a removing light source color and dehazing (RLSCD) method [23], which considers the correlation between scene depth and attenuation. The results show that RLSCD improves the contrast and brightness of underwater images. Tao et al. [24] reconstructed high-quality underwater images by improving the white balance and image fusion strategies; in [24], a multi-scale fusion scheme adjusts the contrast, saturation, and brightness of the image.
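As a concrete example of the pixel-redistribution idea behind HE [19], a minimal grayscale histogram equalization can be sketched as follows. This is a generic textbook implementation, not any particular reference code:

```python
import numpy as np

def histogram_equalize(gray):
    """Classic histogram equalization: remap pixel intensities so that the
    cumulative distribution becomes approximately uniform over [0, 255]."""
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Standard HE lookup table: rescale the CDF to the full output range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A low-contrast image confined to [100, 140] is stretched toward [0, 255]:
img = np.random.randint(100, 141, size=(64, 64), dtype=np.uint8)
eq = histogram_equalize(img)
print(eq.min(), eq.max())
```

CLAHE [20] refines this idea by equalizing small tiles independently and clipping each tile's histogram, which limits noise amplification in nearly uniform regions.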
Moreover, to solve the color cast and low visibility of underwater images, Ref. [25] proposed an underwater image enhancement method called MLLE. In this method, the color and details of the image are locally adjusted through a fusion strategy, and the contrast is adaptively adjusted using the mean and variance of local image blocks. Images enhanced by MLLE are characterized by high contrast and clarity. To address color distortion, Ke et al. [26] designed a framework for underwater image restoration: color correction is first performed in the Lab color space to remove the color cast, and the transmission map of each channel is then corrected using the relationship between the scattering coefficient and wavelength. Experiments show that this method significantly improves underwater image detail and color saturation. For underwater image color correction, Zhang et al. [27] proposed a method in which a dual-histogram-based iterative thresholding technique yields globally contrast-enhanced images and a finite histogram method with a Rayleigh distribution yields locally contrast-enhanced images.

3. Underwater Image Enhancement Method Based on Deep Learning

Over the past few years, deep learning-based underwater image enhancement methods have made remarkable achievements. However, many of these methods produce artifacts and color distortion. To solve these problems, Wang et al. [28] proposed a two-phase underwater domain adaptation network (TUDA) to generate underwater images that are competitive in both visual quality and quantitative metrics. Sun et al. [29] developed an underwater multi-scene generative adversarial network (UMGAN), which uses a feedback mechanism and a denoising network to suppress noise and artifacts in generated images. To study the inherent degradation factors of underwater images and improve generalization, Xue et al. [30] designed a multi-branch aggregation network (MBANet). The MBANet analyzes underwater degradation factors from the perspectives of color distortion and the veil effect and can significantly improve the performance of underwater object detection.
Furthermore, Cai et al. [31] proposed CURE-Net to enhance the details of underwater images. CURE-Net is composed of three cascaded subnetworks, a detail enhancement block, and a supervisory restoration block, and the results indicate that it achieves a gradual improvement of degraded underwater images. Ref. [32] developed a prior-guided adaptive underwater compressed sensing framework (UCSNet) to reproduce underwater image details. UCSNet adopts a multi-network design, in which a sampling matrix generation network (SMGNet) captures structural information and highlights image details.

4. Underwater Image Evaluation Metrics

Since image quality is affected by many factors, image quality assessment is usually divided into two types: qualitative and quantitative. The underwater image quality metrics most commonly used by researchers are the Underwater Color Image Quality Evaluation (UCIQE) [33] and the Underwater Image Quality Metric (UIQM) [34]. In 2015, Yang and Sowmya [33] identified a correlation among image sharpness, color, and the subjective perception of image quality and proposed UCIQE, a linear model of contrast, hue, and saturation. Similarly, UIQM is a linear combination of the Underwater Image Color Metric (UICM), the Underwater Image Sharpness Metric (UISM), and the Underwater Image Contrast Metric (UIConM). In both cases, the larger the UCIQE or UIQM value, the better the underwater image quality.
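Both metrics reduce to weighted sums of their component scores. The sketch below shows only this final combination step, using the default coefficients commonly cited from the original papers [33][34]; the computation of the individual components (UICM, UISM, UIConM, chroma deviation, luminance contrast, and mean saturation) is omitted here:

```python
def uiqm(uicm, uism, uiconm, c1=0.0282, c2=0.2953, c3=3.5753):
    """UIQM as a weighted linear combination of its three component
    metrics [34]; the components themselves are computed elsewhere."""
    return c1 * uicm + c2 * uism + c3 * uiconm

def uciqe(sigma_chroma, contrast_lum, mu_saturation,
          c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE as a linear model of chroma standard deviation, luminance
    contrast, and mean saturation [33]."""
    return c1 * sigma_chroma + c2 * contrast_lum + c3 * mu_saturation
```

The large weight on UIConM reflects that contrast dominates the UIQM score, which is consistent with the human-visual-system motivation of [34].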
Additionally, the full-reference image quality assessment metrics Peak Signal-to-Noise Ratio (PSNR) [35] and Structural Similarity Index Measure (SSIM) [36] are often used to evaluate the generated image against a reference image. The larger the PSNR and SSIM values, the better the quality of the generated image.
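PSNR follows directly from the standard definition 10 · log10(MAX² / MSE), as a minimal sketch shows (SSIM, which additionally compares local luminance, contrast, and structure [36], is more involved and omitted here):

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE). Higher values
    mean the generated image is closer to the reference."""
    diff = reference.astype(np.float64) - generated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 5 gray levels gives MSE = 25:
ref = np.full((32, 32), 100, dtype=np.uint8)
gen = np.full((32, 32), 105, dtype=np.uint8)
print(round(psnr(ref, gen), 2))  # → 34.15
```

Note the cast to float64 before subtraction: computing the difference directly on uint8 arrays would wrap around and silently corrupt the MSE.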

References

  1. Kocak, D.M.; Dalgleish, F.R.; Caimi, F.M.; Schechner, Y.Y. A focus on recent developments and trends in underwater imaging. Mar. Technol. Soc. J. 2008, 42, 52.
  2. Ghani, A.S.A.; Isa, N.A.M. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015, 27, 219–230.
  3. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  4. Abd-Alhamid, F.; Kent, M.; Bennett, C.; Calautit, J.; Wu, Y. Developing an innovative method for visual perception evaluation in a physical-based virtual environment. Build. Environ. 2019, 162, 106278.
  5. Li, C.; Guo, J.; Cong, R.; Pang, Y.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677.
  6. Chang, H.; Cheng, C.; Sung, C. Single underwater image restoration based on depth estimation and transmission compensation. IEEE J. Oceanic Eng. 2018, 44, 1130–1149.
  7. Kar, A.; Dhara, S.K.; Sen, D.; Biswas, P.K. Zero-Shot Single Image Restoration Through Controlled Perturbation of Koschmieder’s Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2021, Nashville, TN, USA, 20–25 June 2021; pp. 16205–16215.
  8. Marques, T.P.; Albu, A.B. L2uwe: A framework for the efficient enhancement of low-light underwater images using local contrast and multi-scale fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2020, Seattle, WA, USA, 14–19 June 2020; pp. 538–539.
  9. Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145.
  10. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
  11. Naik, A.; Swarnakar, A.; Mittal, K. Shallow-UWnet: Compressed model for underwater image enhancement. arXiv 2021, arXiv:2101.02073.
  12. Ye, X.; Li, Z.; Sun, B.; Wang, Z.; Xu, R.; Li, H.; Fan, X. Deep joint depth estimation and color correction from monocular underwater images based on unsupervised adaptation networks. IEEE Trans. Circ. Syst. Vid. 2019, 30, 3995–4008.
  13. Yang, H.; Huang, K.; Chen, W. Laffnet: A lightweight adaptive feature fusion network for underwater image enhancement. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 685–692.
  14. Zhu, J.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision 2017, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
  15. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 7159–7165.
  16. Isola, P.; Zhu, J.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
  17. Islam, M.J.; Xia, Y.; Sattar, J. Fast Underwater Image Enhancement for Improved Visual Perception. IEEE Robot. Autom. Lett. 2020, 5, 3227–3234.
  18. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  19. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; Romeny, B.T.H.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
  20. Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200.
  21. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing the low quality images using unsupervised colour correction method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709.
  22. Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Advances in Multimedia Information Processing–PCM 2018: Proceedings of the 19th Pacific-Rim Conference on Multimedia, Hefei, China, 21–22 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 678–688.
  23. Deng, X.; Wang, H.; Liu, X. Underwater image enhancement based on removing light source color and dehazing. IEEE Access 2019, 7, 114297–114309.
  24. Tao, Y.; Dong, L.; Xu, W. A novel two-step strategy based on white-balancing and fusion for underwater image enhancement. IEEE Access 2020, 8, 217651–217670.
  25. Zhang, W.; Zhuang, P.; Sun, H.; Li, G.; Kwong, S.; Li, C. Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010.
  26. Ke, K.; Zhang, C.; Wang, Y.; Zhang, Y.; Yao, B. Single underwater image restoration based on color correction and optimized transmission map estimation. Meas. Sci. Technol. 2023, 34, 55408.
  27. Zhang, W.; Wang, Y.; Li, C. Underwater image enhancement by attenuated color channel correction and detail preserved contrast enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735.
  28. Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale residual network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 517–532.
  29. Sun, B.; Mei, Y.; Yan, N.; Chen, Y. UMGAN: Underwater Image Enhancement Network for Unpaired Image-to-Image Translation. J. Mar. Sci. Eng. 2023, 11, 447.
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 26 June–1 July 2016.
  31. Cai, X.; Jiang, N.; Chen, W.; Hu, J.; Zhao, T. CURE-Net: A Cascaded Deep Network for Underwater Image Enhancement. IEEE J. Ocean. Eng. 2023.
  32. Huang, G.; Liu, Z.; Laurens, V.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  33. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071.
  34. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2016, 41, 541–551.
  35. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
  36. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Update Date: 24 Nov 2023