Liu, Z.;  Zhuang, Y.;  Jia, P.;  Wu, C.;  Xu, H.;  Liu, Z. Underwater Image Enhancement and Underwater Biological Detection. Encyclopedia. Available online: https://encyclopedia.pub/entry/26863 (accessed on 12 October 2024).
Underwater Image Enhancement and Underwater Biological Detection

For aquaculture resource evaluation and ecological environment monitoring, the automatic detection and identification of marine organisms is critical. However, because of the low quality of underwater images and the characteristics of underwater organisms, the lack of abundant features can impede both traditional hand-designed feature extraction approaches and CNN-based object detection algorithms, particularly in complex underwater environments.

Keywords: underwater biological detection; underwater image enhancement; attention mechanism

1. Introduction

The exploration of aquatic environments has recently become popular due to the growing scarcity of natural resources and the growth of the global economy [1]. Machine vision has been shown to be a low-cost and dependable method that offers non-contact monitoring, long-term stable operation, and a broad range of applications. Underwater object detection is pivotal in numerous applications, such as underwater search and rescue operations, deep-sea exploration and archaeology, and sea life monitoring [2]. These applications require effective and precise vision-based underwater analytics, including image enhancement, image quality assessment, and object detection methods. However, capturing underwater images with optical imaging systems poses greater challenges than capturing images in open air. More specifically, underwater images frequently suffer from degradation due to severe color distortion, low contrast, non-uniform illumination, and noise from artificial lighting sources, which dramatically reduces image visibility and lowers the accuracy of underwater object detection tasks [1]. Over recent years, underwater image enhancement technologies have been developed as preprocessing operations that boost detection accuracy by improving the visual quality of underwater images.
On the other hand, underwater object detection performance is tied to the characteristics of underwater organisms. Because of differences in size or shape and the overlapping or occlusion of marine organisms, traditional hand-designed feature extraction methods usually cannot meet the detection requirements of actual underwater scenes. Most studies have emphasized the extraction of traditional low-level features, such as color, texture, contours, and shape [3], which leads to the typical disadvantages of traditional object detection methods: poor recognition performance, low accuracy, and slow speed. By directly benefiting from deep learning, object detection has witnessed a great boost in performance over recent years. However, general deep learning-based object detection algorithms have not yet demonstrated strong detection performance for marine organisms due to the low quality of underwater imaging and the complexity of underwater environments.

2. Underwater Image Enhancement and Underwater Biological Detection

2.1. Underwater Image Enhancement (UIE) Methods

Underwater image enhancement (UIE) is a necessary step to improve the visual quality of underwater images. UIE can be divided into three categories: model-free, physical model-based, and deep learning-based approaches.
White balance [4], Gray World theory [5], and histogram equalization [6] are examples of model-free enhancement methods that improve the visual quality of underwater images by directly adjusting the pixel values of images. Ancuti et al. suggested a multi-scale fusion underwater image enhancement method that combined color correction and contrast enhancement to obtain high-quality images [7]. Building on this work, Ancuti et al. also proposed a weighted multi-scale fusion method for underwater image white balance that could restore faded color and edge information in the original images using gamma variation and sharpening [8]. Fu et al. proposed a Retinex-based enhancement system that included color correction, layer decomposition, and underwater image enhancement in the Lab color space [9]. Zhang et al. extended the Retinex-based method by using bilateral and trilateral filters to enhance the three channels of underwater images in the CIELAB color space [10]. However, because they do not take the physical degradation process of underwater images into account, model-free UIE approaches can introduce noise, artifacts, and color distortion, which makes them unsuitable for many applications.
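As a rough illustration of how a model-free method operates directly on pixel values, the Gray World idea can be sketched in a few lines of NumPy. This is a generic sketch, not the implementation from any of the cited papers, and the sample image values are hypothetical:

```python
import numpy as np

def gray_world_balance(img):
    """Gray World: scale each channel so its mean matches the global mean."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    gain = channel_means.mean() / channel_means      # per-channel gain
    return np.clip(np.rint(img * gain), 0, 255).astype(np.uint8)

# Hypothetical bluish-green "underwater" patch: red is attenuated the most.
img = np.zeros((4, 4, 3), np.uint8)
img[..., 0], img[..., 1], img[..., 2] = 40, 120, 140   # R, G, B
balanced = gray_world_balance(img)
```

Here the red channel is amplified (gain 2.5) while green and blue are attenuated until all three channel means coincide, which is precisely the kind of color-cast correction degraded underwater images need.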
Physical model-based methods regard underwater image enhancement as an inverse image degradation problem; these algorithms recover clear images by estimating the transmission and background light using Definition 1. Because underwater imaging models are similar to atmospheric models for fog, dehazing algorithms are also used to enhance underwater images. He et al. proposed a dehazing algorithm based on the dark channel prior (DCP), which could effectively estimate the thickness of fog and obtain fog-free images [11]. Building on the DCP, Drews et al. proposed an underwater dark channel prior that considered red light attenuation in water [12]. Peng et al. developed a generalized dark channel prior (GDCP) for underwater image enhancement that incorporated adaptive color correction into an image formation model [13]. However, model-based approaches often need prior information, and the quality of the enhanced images depends on precise parameter estimation.
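The dark channel computation at the heart of DCP-style methods is simple to state: take the per-pixel minimum over the color channels, then a minimum filter over a local patch, and derive a transmission estimate from it. The NumPy sketch below follows that recipe; the patch size, ω value, and test image are illustrative assumptions, not values taken from [11][12][13]:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Min over color channels, then min over a patch x patch neighborhood."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty((h, w), img.dtype)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, A, omega=0.95):
    """DCP-style transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img.astype(np.float64) / A)

# A uniformly hazy patch whose intensity equals the background light A.
hazy = np.full((5, 5, 3), 200, np.uint8)
t = transmission(hazy, A=200.0)
```

A region as bright as the background light is treated as almost fully hazed (t ≈ 0.05 with ω = 0.95), while a haze-free region, whose dark channel is near zero, keeps t ≈ 1.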
Deep learning enhancement methods usually construct convolutional neural networks and train them on pairs of degraded underwater images and their high-quality counterparts [14]. Li et al. suggested an unsupervised generative adversarial network (WaterGAN) that generated underwater images from aerial RGB-D images and then trained an underwater image recovery network on the synthesized training data [15]. To produce paired underwater image datasets, Fabbri et al. suggested an underwater color transfer model based on CycleGAN [16] and built an underwater image recovery network using a gradient penalty technique [17]. Ye et al. proposed an unsupervised adaptive network for joint learning that could jointly estimate scene depth and correct the color of underwater images [18]. Chen et al. proposed two perceptual enhancement cascade models, which used gradient strategy feedback information to enhance the more prominent features in images [14]. Deep learning UIE approaches trained on composite images generally require large amounts of data [19], and because the quality of the composite images cannot be guaranteed, these methods often fail to generalize to real underwater scenes.

2.2. Attention Mechanisms

Several attention mechanisms have been presented in the literature. Attention models enable networks to extract information from crucial areas with reduced energy consumption, thereby enhancing CNN performance. Wang et al. proposed a residual attention network based on an attention mechanism, which could continuously extract large amounts of attention information [20]. Hu et al. proposed SENet, which contains architectural “squeeze” and “excitation” units; these modules enhance network expressiveness by modeling the interdependencies between channels [21]. Woo et al. proposed a lightweight module (CBAM) that combines feature channel and feature spatial attention to refine features [22]. This method achieves considerable performance improvements while maintaining a small overhead.
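The “squeeze” and “excitation” steps of [21] can be summarized in plain NumPy: global average pooling produces one descriptor per channel, a small bottleneck MLP turns it into per-channel gates in (0, 1), and the gates rescale the feature map. The weights and feature map below are hypothetical placeholders, not trained parameters:

```python
import numpy as np

def se_attention(feat, w1, w2):
    """SE-style channel attention: squeeze (pool), excite (MLP), scale."""
    z = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                # bottleneck + ReLU: (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # sigmoid gates in (0, 1): (C,)
    return feat * gate[:, None, None]          # scale each channel

# Hypothetical feature map with C = 4 channels and placeholder (untrained) weights.
feat = np.ones((4, 2, 2))
w1 = np.zeros((2, 4))   # reduction ratio r = 2
w2 = np.zeros((4, 2))
out = se_attention(feat, w1, w2)   # zero weights: every gate is sigmoid(0) = 0.5
```

Because the gates depend only on channel statistics, the module adds a negligible number of parameters relative to the convolutional backbone, which is why [21] and [22] report large gains at small overhead.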

2.3. Underwater Object Detection Algorithms

Deep learning-based object detection algorithms are currently divided into two categories: one-stage regression detectors and two-stage region generation detectors. One-stage detection methods mainly include the YOLO series [23][24][25], SSD [26], RetinaNet [27], and RefineDet [28], which directly predict objects without region generation. Two-stage detection methods mainly include R-CNN [29], Fast R-CNN [30], Faster R-CNN [31], and Cascade R-CNN [32]. Initially, these object detection methods were used in natural environments on land. As deep learning technology has advanced, more and more object detection algorithms have been applied to challenging underwater environments. Li et al. used a Fast R-CNN to detect fish species and achieved outstanding performance [33]. Li et al. employed a residual network to detect deep-sea plankton; their experiments revealed that deep residual networks generalized well for plankton classification [34]. Cui et al. introduced a CNN-based fish detection system and optimized it using data augmentation, network simplification, and training process acceleration [35]. Huang et al. presented three data augmentation approaches for underwater imaging that could imitate the illumination of marine environments [36]. Fan et al. suggested a cascade underwater detection framework with feature augmentation and anchor refinement, which could address the issue of imbalanced underwater samples [37]. However, little research has been conducted in the field of underwater object detection using YOLO.
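Both one-stage and two-stage detectors end with the same post-processing step: greedy non-maximum suppression over scored boxes, which matters in underwater scenes where overlapping organisms produce many redundant detections. A minimal NumPy sketch (the boxes, scores, and threshold are illustrative, not tied to any detector above):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]           # highest score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # Drop every remaining box that overlaps the kept one too much.
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < thresh])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
keep = nms(boxes, scores)   # the second box overlaps the first (IoU = 0.81)
```

The second box is suppressed by the first, while the distant third box survives, leaving one detection per object.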

References

  1. Yeh, C.H.; Lin, C.H.; Kang, L.W.; Huang, C.H.; Wang, C. Lightweight Deep Neural Network for Joint Learning of Underwater Object Detection and Color Conversion. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–15.
  2. Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-World Underwater Enhancement: Challenges, Benchmarks, and Solutions Under Natural Light. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4861–4875.
  3. Zhao, Z.; Liu, Y.; Sun, X.; Liu, J.; Yang, X.; Zhou, C. Composited FishNet: Fish Detection and Species Recognition From Low-Quality Underwater Videos. IEEE Trans. Image Process. 2021, 30, 4719–4734.
  4. van de Weijer, J.; Gevers, T.; Gijsenij, A. Edge-Based Color Constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214.
  5. Provenzi, E.; Gatta, C.; Fierro, M.; Rizzi, A. A Spatially Variant White-Patch and Gray-World Method for Color Image Enhancement Driven by Local Contrast. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1757–1770.
  6. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. Graph. Gems 1994, 474–485.
  7. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 81–88.
  8. Ancuti, C.O.; Ancuti, C.; Vleeschouwer, C.D.; Bekaert, P. Color Balance and Fusion for Underwater Image Enhancement. IEEE Trans. Image Process. 2017, 27, 379–393.
  9. Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4572–4576.
  10. Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater Image Enhancement via Extended Multi-Scale Retinex. Neurocomputing 2017, 245, 1–9.
  11. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
  12. Drews, P.; Nascimento, E.R.; Botelho, S.; Campos, M. Underwater Depth Estimation and Image Restoration Based on Single Images. IEEE Comput. Graph. Appl. 2016, 36, 24–35.
  13. Peng, Y.-T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868.
  14. Chen, L.; Jiang, Z.; Tong, L.; Liu, Z.; Zhao, A.; Zhang, Q.; Dong, J.; Zhou, H. Perceptual Underwater Image Enhancement With Deep Learning and Physical Priors. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3078–3092.
  15. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
  16. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  17. Fabbri, C.; Jahidul Islam, M.; Sattar, J. Enhancing Underwater Imagery using Generative Adversarial Networks. arXiv 2018, arXiv:1801.04011.
  18. Ye, X.; Li, Z.; Sun, B.; Wang, Z.; Fan, X. Deep Joint Depth Estimation and Color Correction From Monocular Underwater Images Based on Unsupervised Adaptation Networks. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3995–4008.
  19. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389.
  20. Wang, F.; Jiang, M.; Qian, C.; Yang, S.; Li, C.; Zhang, H.; Wang, X.; Tang, X. Residual Attention Network for Image Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
  21. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
  22. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018.
  23. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
  24. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
  25. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
  26. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37.
  27. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  28. Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Single-Shot Refinement Neural Network for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018.
  29. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014.
  30. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
  31. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2015; Volume 28.
  32. Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving Into High Quality Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162.
  33. Li, X.; Shang, M.; Qin, H.; Chen, L. Fast accurate fish detection and recognition of underwater images with Fast R-CNN. In Proceedings of the OCEANS 2015—MTS/IEEE Washington, Washington, DC, USA, 19–22 October 2015; pp. 1–5.
  34. Li, X.; Shang, M.; Hao, J.; Yang, Z. Accelerating fish detection and recognition by sharing CNNs with objectness learning. In Proceedings of the OCEANS 2016—Shanghai, Shanghai, China, 10–13 April 2016; pp. 1–5.
  35. Li, X.; Cui, Z. Deep residual networks for plankton classification. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–4.
  36. Huang, H.; Zhou, H.; Yang, X.; Zhang, L.; Qi, L.; Zang, A.Y. Faster R-CNN for marine organisms detection and recognition using data augmentation. Neurocomputing 2019, 337, 372–384.
  37. Fan, B.; Chen, W.; Cong, Y.; Tian, J. Dual Refinement Underwater Object Detection Network. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 275–291.
Update Date: 05 Sep 2022