Single-Frame Low-Resolution Infrared Small Target Detection

Infrared small target detection technology is widely used in infrared search and tracking, infrared precision guidance, detection of low-altitude, slow-speed small aircraft, and other applications. Its detection capability is critical for finding unknown targets as early as possible, issuing timely warnings, and giving the security system enough response time.

Keywords: infrared image; small target detection; deep learning; self-attention

1. Introduction

Compared with visible-light imaging detection and active radar imaging detection, infrared imaging detection technology has the following characteristics [1]: it is unaffected by lighting conditions, works in all weather, operates passively, offers high imaging spatial resolution, adapts to various environments, has strong resistance to electromagnetic interference, and is simple in structure, small in size, and easy to carry and conceal. Benefiting from these advantages, infrared detection and imaging technology has been widely used in infrared search and tracking, infrared precision guidance, detection and identification of low-altitude, slow-speed small aircraft, and other applications [2].
In scenarios that require early warning, the target to be detected is far from the infrared imaging system, so it appears in the image as a dim, small target that often lacks texture information. The targets of interest are usually aircraft, drones, missiles, ships, vehicles, and other fast-moving objects [3][4], so their imaged outlines are blurred. In addition, because of the surrounding environment and the detection equipment itself, small infrared targets are easily submerged in noise and complex backgrounds [5]. All of these factors make infrared small target detection challenging.
At present, many infrared detection devices with low imaging resolution are applied in various fields [6]. It is therefore of practical significance to design a small target detection method for low-resolution infrared images that improves detection performance. A target in a low-resolution infrared image occupies only a few pixels [7], so predicting each of those pixels more accurately (that is, improving the pixel-level metrics of low-resolution infrared small target detection) can significantly improve overall detection performance.
Research on infrared small target detection is divided into single-frame and multi-frame image target detection [2]. This text focuses on the former. Early researchers mainly proposed model-driven methods. Filter-based methods [8][9] require the filtering template to be determined in advance from the structural characteristics of the image, so they adapt poorly to complex backgrounds. Local-contrast-based methods [10][11][12][13] are suitable when there is a significant grayscale difference between the target and the surrounding background, but they are prone to missed detections and false alarms. Low-rank-based [14][15] and tensor-based [16][17][18] methods can achieve good results, but their computational cost is high and their hyperparameters are sensitive to the image scene.
With the development of deep learning, data-driven methods and infrared small target datasets [7][19][20][21][22] have emerged in recent years. Given the weak and small characteristics of infrared small targets, infrared small target detection is usually modeled as a semantic segmentation problem. To ensure that the features of small targets are not submerged, some methods [7][19][22][23] enhance the fusion of features from different layers of the network. Because small targets occupy only a small proportion of the image, other methods [24][25] suppress the background area so that the network pays more attention to the target area. There are also studies [26][27][28][29] that improve and innovate on classic encoder-decoder structures.
Existing single-frame infrared small target detection methods [7][23][24] suffer from poor adaptability and high false-alarm and missed-detection rates when applied to low-resolution infrared small target images. This is not only because the quality of existing datasets is limited, which leads to unsatisfactory network training, but also because existing network structures have large parameter counts or pay insufficient local attention to small targets.

2. Infrared Small Target Datasets

The Society of Photo-Optical Instrumentation Engineers (SPIE) defines infrared small targets as having a total spatial extent of less than 81 pixels (9 × 9) in a 256 × 256 image [30]; that is, a small target occupies roughly 0.12% of the entire image (81/65,536 ≈ 0.12%) or less. In addition, the size of small infrared targets varies greatly, ranging from a single pixel (a point target) to dozens of pixels (an extended target) [29].
In recent years, some scholars have done a lot of work on the collection and production of infrared small target datasets and have publicly released these datasets, which include single-frame datasets [7][19][20][21][22] (see Table 1) and multi-frame datasets [31][32][33] (see Table 2).
Table 1. Details on the present single-frame infrared small target datasets.
Table 2. Details on the present multi-frame infrared small target datasets.
As Table 1 and Table 2 show, the amount of real single-frame infrared small target data is relatively small, whereas multi-frame infrared small target data are abundant and can be used to expand the single-frame data. Constructing a single-frame infrared small target dataset with a larger volume and higher quality can promote the development of single-frame infrared small target detection.

3. Infrared Small Target Detection Methods

In recent years, deep learning has developed rapidly in terms of solving visual tasks such as image classification, object detection, and semantic segmentation. Some methods based on deep learning have also emerged for infrared small target detection.
Due to their weak and small characteristics, infrared small targets are easily overwhelmed in a network's high-level features. However, low-level features alone cannot capture sufficient semantic information, which leads to missed detections and false alarms. Therefore, some researchers have combined attention mechanisms with methods for enhancing the fusion of features from different layers. Dai et al. proposed a bottom-up channel attention modulation method (ACM) [23] to preserve and highlight infrared small target features in high-level layers. Subsequently, Dai et al. [19] modularized the traditional local contrast measurement method [10] within the network to design a model-driven deep learning network (ALCNet). Li et al. [7] proposed DNANet, which achieves progressive information interaction between high-level and low-level features through densely nested interactive modules (DNIM). Chen et al. [22] introduced the self-attention mechanism of the transformer into IRSTFormer, which extracts multi-scale features from the input image through a hierarchical, overlapping-patch self-attention structure.
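To make the idea of cross-layer fusion with channel attention concrete, the following PyTorch sketch weights a deep (high-level) feature map with channel statistics derived from a shallow (low-level) one, so that small-target cues from early layers are preserved. This is only an illustrative simplification of what methods such as ACM [23] and ALCNet [19] do; the module and parameter names are assumptions, not the authors' designs.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Illustrative cross-layer fusion: low-level features modulate
    high-level features through channel-attention weights (a simplified
    sketch, not the exact ACM/ALCNet architecture)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # channel weights in (0, 1)
        )

    def forward(self, low_feat, high_feat):
        # Weight the deep features by channel statistics of the shallow
        # features, then add the shallow features back in.
        weights = self.mlp(low_feat)
        return high_feat * weights + low_feat

# Usage (hypothetical shapes; the deep map is assumed upsampled to match):
low = torch.randn(1, 64, 64, 64)    # shallow feature map
high = torch.randn(1, 64, 64, 64)   # deep feature map
fused = ChannelAttentionFusion(64)(low, high)
print(fused.shape)                  # torch.Size([1, 64, 64, 64])
```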
Because small targets occupy only a small proportion of the image, some researchers address infrared small target detection by suppressing the background area so that the network pays more attention to the target area. Wang et al. [24] proposed IAANet, a coarse-to-fine two-stage network: in the coarse stage, candidate target regions are obtained by a region proposal network (RPN), and in the fine stage, the global features of all candidate target regions in the image are extracted by an attention encoder (AE). IAANet uses a hard-decision approach to suppress background regions as much as possible. Chen et al. [25] designed a supervised attention module, trained with small target diffusion maps, in the proposed LPNet to suppress most background pixels irrelevant to small target features in a soft-decision manner.
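The soft-decision suppression idea can be sketched as a single-channel attention map predicted from the features, optionally supervised by the small-target mask, and multiplied back onto the features. This is a minimal illustration of the general mechanism under assumed names, not the actual LPNet [25] design.

```python
import torch
import torch.nn as nn

class SupervisedAttention(nn.Module):
    """Illustrative soft background suppression: predict a per-pixel weight
    in (0, 1) and use it to down-weight background features. The weight map
    can receive auxiliary supervision from the target mask."""
    def __init__(self, channels):
        super().__init__()
        self.attn_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        attn = self.attn_head(feat)       # (B, 1, H, W) attention map
        return feat * attn, attn          # attn can be trained against a mask

feat = torch.randn(2, 32, 128, 128)
suppressed, attn_map = SupervisedAttention(32)(feat)
# Hypothetical auxiliary loss: nn.BCELoss()(attn_map, target_mask)
```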
The classical encoder-decoder structure has been proven to achieve good results in semantic segmentation tasks [34], and some researchers have improved and innovated on this structure. Tong et al. [26] proposed MSAFFNet, which introduces the EIFAM module, incorporating edge information, into the encoder-decoder structure and constructs multi-scale labels to focus on target contour details and internal features. Wu et al. [27] proposed UIU-Net (U-Net in U-Net), which embeds a tiny U-Net into a larger U-Net backbone to realize multi-level, multi-scale representation learning of objects. Chen et al. [28] proposed MultiTask-UNet (MTUNet), which has both detection and segmentation heads; by sharing the backbone, the similar semantic features of the two tasks are fully exploited, and compared with a compound of single-task models, MTUNet has fewer parameters and faster inference. Wu et al. [29] proposed a multi-level TransUNet (MTU-Net), in whose encoder the features extracted by convolution are passed through a multi-level ViT module (MVTM) to capture long-range dependencies.
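All of these works build on the classical encoder-decoder (U-Net) layout [34]. The following minimal two-level sketch shows the skip-connected encoder-decoder structure and the single-channel segmentation head that infrared small target networks typically attach; it is a generic sketch with assumed names, not any of the cited architectures.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with skip connection,
    and a 1-channel head producing a pixel-wise target probability map."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.down = nn.MaxPool2d(2)
        self.bott = conv_block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        b = self.bott(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(b), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

img = torch.randn(1, 1, 256, 256)          # single-channel infrared image
mask_prob = TinyUNet()(img)                 # (1, 1, 256, 256) segmentation map
```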
Networks that combine attention mechanisms with multi-scale feature fusion [7][19][22][23] enhance the network's ability to extract image features, but they pay insufficient local attention to small targets, which limits the improvement in small target detection. Networks that focus on the localized region of small targets [24][25] have larger parameter counts and computational costs, and therefore slower prediction.

4. Evaluation Metrics

The output of the infrared small target detection network is pixel-level segmentation. Therefore, it is common to use semantic segmentation metrics to evaluate network performance, such as precision, recall, F1 score, ROC curve, and PR curve.
Precision and recall are the proportions of correctly predicted positive samples out of all predicted positive samples and out of all true positive samples, respectively. The F1 score is the harmonic mean of precision and recall. The definitions of P (precision), R (recall), and F1 (F1 score) are as follows:
$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F1 = \frac{2PR}{P + R}$$
where TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative pixels, respectively, and T and P (used below) denote the numbers of ground-truth target pixels and predicted target pixels.
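Assuming binary prediction and ground-truth masks, the pixel-level metrics above can be computed as in the following NumPy sketch (function names are illustrative):

```python
import numpy as np

def precision_recall_f1(pred, target, eps=1e-8):
    """Pixel-level precision, recall, and F1 from binary masks (0/1 arrays),
    following the definitions above."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true-positive pixels
    fp = np.logical_and(pred, ~target).sum()   # false-positive pixels
    fn = np.logical_and(~pred, target).sum()   # false-negative pixels
    p = tp / (tp + fp + eps)
    r = tp / (tp + fn + eps)
    f1 = 2 * p * r / (p + r + eps)
    return p, r, f1
```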
The receiver operating characteristic (ROC) curve shows how the model performs across all classification thresholds. The horizontal coordinate of the ROC curve is the false positive rate (FPR), and the vertical coordinate is the true positive rate (TPR). It goes through the points (0, 0) and (1, 1). The horizontal coordinate of the precision-recall (PR) curve is the recall rate, which reflects the classifier’s ability to cover positive examples. The vertical coordinate is the precision, which reflects the accuracy of the classifier’s prediction of positive examples.
However, because infrared small target detection is ultimately a detection task, some researchers have proposed pixel-level and target-level evaluation metrics, building on the existing metrics, to better evaluate detection performance for infrared small targets.
IoU and nIoU are pixel-level metrics. IoU represents the ratio of intersection and union between the predicted and true results:
$$\mathrm{IoU} = \frac{TP}{T + P - TP}$$
nIoU [19] is the IoU computed for each individual target and averaged over all targets, as shown below, where N represents the total number of targets.
$$\mathrm{nIoU} = \frac{1}{N}\sum_{i=1}^{N}\frac{TP[i]}{T[i] + P[i] - TP[i]}$$
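A minimal NumPy sketch of IoU and nIoU following the definitions above; here each element of the input lists is assumed to be the mask pair of one target, and function names are illustrative.

```python
import numpy as np

def iou(pred, target, eps=1e-8):
    """Pixel-level IoU = TP / (T + P - TP), where T and P are the pixel
    counts of the ground-truth and predicted masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    return tp / (target.sum() + pred.sum() - tp + eps)

def niou(preds, targets, eps=1e-8):
    """nIoU: the per-target IoU averaged over all N targets."""
    return float(np.mean([iou(p, t, eps) for p, t in zip(preds, targets)]))
```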
Pd (probability of detection) and Fa (false-alarm rate) are target-level metrics [22]. Pd measures the ratio of the number of correctly predicted targets to the number of all targets. Fa measures the ratio of incorrectly predicted pixels to all pixels in the image.
$$P_d = \frac{\#\ \text{correctly detected targets}}{\#\ \text{actual targets}}, \qquad F_a = \frac{\#\ \text{falsely predicted pixels}}{\#\ \text{all pixels in the image}}$$
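Pd and Fa can be sketched as follows. The matching rule used here (a ground-truth target counts as detected if any predicted pixel overlaps it) is an assumption for illustration; published implementations often use a centroid-distance threshold instead.

```python
import numpy as np
from scipy import ndimage

def pd_fa(pred, target):
    """Target-level Pd and pixel-level Fa from binary masks.
    Detection criterion (overlap with the ground-truth component) is an
    illustrative choice, not a standardized one."""
    pred, target = pred.astype(bool), target.astype(bool)
    labels, n_targets = ndimage.label(target)          # connected GT targets
    detected = sum(
        1 for i in range(1, n_targets + 1)
        if np.logical_and(pred, labels == i).any()
    )
    pd = detected / max(n_targets, 1)
    false_pixels = np.logical_and(pred, ~target).sum() # falsely predicted pixels
    fa = false_pixels / pred.size                      # over all image pixels
    return pd, fa
```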

References

  1. Levenson, E.; Lerch, P.; Martin, M.C. Infrared imaging: Synchrotrons vs. arrays, resolution vs. speed. Infrared Phys. Technol. 2006, 49, 45–52.
  2. Zhao, M.; Li, W.; Li, L.; Hu, J.; Ma, P.; Tao, R. Single-frame infrared small-target detection: A survey. IEEE Trans. Geosci. Remote Sens. 2022, 10, 87–119.
  3. Sun, X.; Guo, L.; Zhang, W.; Wang, Z.; Yu, Q. Small aerial target detection for airborne infrared detection systems using LightGBM and trajectory constraints. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 9959–9973.
  4. Yang, P.; Dong, L.; Xu, W. Infrared small maritime target detection based on integrated target saliency measure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2369–2386.
  5. Qi, S.; Ma, J.; Tao, C.; Yang, C.; Tian, J. A robust directional saliency-based method for infrared small-target detection under various complex backgrounds. IEEE Geosci. Remote Sens. Lett. 2012, 10, 495–499.
  6. Rogalski, A.; Martyniuk, P.; Kopytko, M. Challenges of small-pixel infrared detectors: A review. Rep. Prog. Phys. 2016, 79, 046501.
  7. Li, B.; Xiao, C.; Wang, L.; Wang, Y.; Lin, Z.; Li, M.; An, W.; Guo, Y. Dense nested attention network for infrared small target detection. IEEE Trans. Image Process. 2023, 32, 1745–1758.
  8. Bai, X.; Zhou, F. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recognit. 2010, 43, 2145–2156.
  9. Comaniciu, D. An algorithm for data-driven bandwidth selection. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 281–288.
  10. Chen, C.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y.Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2013, 52, 574–581.
  11. Qin, Y.; Li, B. Effective infrared small target detection utilizing a novel local contrast method. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1890–1894.
  12. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. 2016, 58, 216–226.
  13. Deng, H.; Sun, X.; Liu, M.; Ye, C.; Zhou, X. Infrared small-target detection using multiscale gray difference weighted image entropy. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 60–72.
  14. Gao, C.; Meng, D.; Yang, Y.; Wang, Y.; Zhou, X.; Hauptmann, A.G. Infrared Patch-Image Model for Small Target Detection in a Single Image. IEEE Trans. Image Process. 2013, 22, 4996–5009.
  15. Zhu, H.; Ni, H.; Liu, S.; Xu, G.; Deng, L. Tnlrs: Target-aware non-local low-rank modeling with saliency filtering regularization for infrared small target detection. IEEE Trans. Image Process. 2020, 29, 9546–9558.
  16. Pang, D.; Shan, T.; Li, W.; Ma, P.; Tao, R.; Ma, Y. Facet derivative-based multidirectional edge awareness and spatial–temporal tensor model for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15.
  17. Guan, X.; Zhang, L.; Huang, S.; Peng, Z. Infrared small target detection via non-convex tensor rank surrogate joint local contrast energy. Remote Sens. 2020, 12, 1520.
  18. Pang, D.; Ma, P.; Shan, T.; Li, W.; Tao, R.; Ma, Y.; Wang, T. STTM-SFR: Spatial–temporal tensor modeling with saliency filter regularization for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
  19. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Attentional local contrast networks for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9813–9824.
  20. Zhang, M.; Zhang, R.; Yang, Y.; Bai, H.; Zhang, J.; Guo, J. ISNet: Shape Matters for Infrared Small Target Detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 867–876.
  21. Wang, H.; Zhou, L.; Wang, L. Miss Detection vs. False Alarm: Adversarial Learning for Small Object Segmentation in Infrared Images. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8508–8517.
  22. Chen, G.; Wang, W.; Tan, S. IRSTFormer: A Hierarchical Vision Transformer for Infrared Small Target Detection. Remote Sens. 2022, 14, 3258.
  23. Dai, Y.; Wu, Y.; Zhou, F.; Barnard, K. Asymmetric contextual modulation for infrared small target detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 950–959.
  24. Wang, K.; Du, S.; Liu, C.; Cao, Z. Interior Attention-Aware Network for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  25. Chen, F.; Gao, C.; Liu, F.; Zhao, Y.; Zhou, Y.; Meng, D.; Zuo, W. Local patch network with global attention for infrared small target detection. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 3979–3991.
  26. Tong, X.; Su, S.; Wu, P.; Guo, R.; Wei, J.; Zuo, Z.; Sun, B. MSAFFNet: A multi-scale label-supervised attention feature fusion network for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
  27. Wu, X.; Hong, D.; Chanussot, J. UIU-Net: U-Net in U-Net for infrared small object detection. IEEE Trans. Image Process. 2022, 32, 364–376.
  28. Chen, Y.; Li, L.; Liu, X.; Su, X. A multi-task framework for infrared small target detection and segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–9.
  29. Wu, T.; Li, B.; Luo, Y.; Wang, Y.; Xiao, C.; Liu, T.; Yang, J.; An, W.; Guo, Y. MTU-Net: Multilevel TransUNet for Space-Based Infrared Tiny Ship Detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15.
  30. Zhang, W.; Cong, M.; Wang, L. Algorithms for optical weak small targets detection and tracking. In Proceedings of the International Conference on Neural Networks and Signal Processing (ICNNSP), Nanjing, China, 14–17 December 2003; pp. 643–647.
  31. Pang, D.; Ma, P.; Feng, Y.; Shan, T.; Tao, R.; Jin, Q. Tensor Spectral k-support Norm Minimization for Detecting Infrared Dim and Small Target against Urban Backgrounds. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–13.
  32. Hui, B.; Song, Z.; Fan, H.; Zhong, P.; Hu, W.; Zhang, X.; Ling, J.; Su, H.; Jin, W.; Zhang, Y. A dataset for infrared detection and tracking of dim-small aircraft targets under ground/air background. China Sci. Data 2020, 5, 291–302.
  33. Sun, H.; Bai, J.; Yang, F.; Bai, X. Receptive-field and Direction Induced Attention Network for Infrared Dim Small Target Detection with a Large-scale Dataset IRDST. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–13.
  34. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.