X-ray Images for Cargo Inspection of Nuclear Items

As part of establishing a management system to prevent the illegal transfer of nuclear items, automatic nuclear item detection technology is required during customs clearance. Inserting multiple items into a single image, which reflects actual X-ray cargo inspection conditions, and the occlusions that result significantly affect the performance of segmentation models.

Keywords: cargo inspection; semantic segmentation; nuclear items

1. Introduction

Security inspections using X-ray images check items revealed by X-ray scanning against a set standard to ensure national security. X-ray-based security inspections are also important for detecting potential risks early in various settings, such as public transportation and large events. In the nuclear industry, the need for X-ray-based security inspections is emerging to ensure the accurate detection of major items that can be risk factors [1][2][3][4]. Most X-ray-based identification processes cannot identify target items accurately and efficiently because many types of objects are often mixed with prohibited items. An inspector must monitor the X-ray images obtained from security inspection devices to detect prohibited items such as guns, ammunition, explosives, toxic substances, and radioactive materials. However, this requires the inspector to focus on the X-ray screen for extended periods, increasing fatigue [5][6]. Thus, there is great demand for technology that can automatically find objects with special uses and prevent the spread of prohibited objects [7][8]. Applying recent deep-learning-based object detection could be one way to solve this problem. However, recent deep-learning-based object recognition technologies require considerable amounts of high-quality training data.

2. Automatic Cargo Inspection

With the recent development of deep-learning-based computer vision technology, AI-based X-ray inspection systems are attracting attention. Object detection and semantic segmentation are the computer vision techniques applicable to X-ray inspections. In particular, semantic segmentation has the advantage of producing more detailed results than object detection because it classifies images at the pixel level.
Representative network structures for semantic segmentation include UNet [9], FPN [10], DeepLabV3+ [11], and HRNet-OCR [12]. UNet is composed of an encoder that generates representation features and a decoder that generates segmentation results from those features; it is characterized by combining decoder feature maps with the encoder feature maps of identical resolution, which yields good segmentation results. FPN handles multi-scale images well by fusing multi-scale features through a feature pyramid structure. DeepLabV3+ uses atrous separable convolution, which combines depth-wise separable convolution with atrous convolution, showing good segmentation performance while dramatically reducing the number of parameters and the amount of computation. HRNet-OCR generates segmentation results that take object regions into account by utilizing object-contextual representations, achieving state-of-the-art performance on many segmentation benchmarks. However, such AI-based systems are limited in that large, high-quality X-ray image datasets must be available to achieve good results [5][13]. In addition, because most publicly available X-ray inspection benchmarks contain X-ray images for baggage inspection, they cannot be used for cargo inspections, which involve different X-ray imaging energy levels [5][14][15][16][17][18][19].
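To make the architectural differences concrete, the following is a minimal sketch, not taken from the original work, of how these segmentation networks can be instantiated with the third-party segmentation_models_pytorch library; the encoder choice, input channel count, and number of classes are illustrative assumptions. HRNet-OCR is not part of that library and is usually obtained from the authors' reference implementation or a toolbox such as MMSegmentation.

```python
# Illustrative only: the library choice (segmentation_models_pytorch), encoder,
# channel count, and NUM_CLASSES are assumptions, not values from the entry.
import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 4  # e.g., background plus several nuclear-item categories (assumed)

models = {
    # Encoder-decoder with same-resolution skip connections [9]
    "unet": smp.Unet(encoder_name="resnet34", encoder_weights=None,
                     in_channels=1, classes=NUM_CLASSES),
    # Multi-scale feature fusion via a feature pyramid [10]
    "fpn": smp.FPN(encoder_name="resnet34", encoder_weights=None,
                   in_channels=1, classes=NUM_CLASSES),
    # Atrous separable convolution in the decoder [11]
    "deeplabv3plus": smp.DeepLabV3Plus(encoder_name="resnet34", encoder_weights=None,
                                       in_channels=1, classes=NUM_CLASSES),
}

x = torch.randn(2, 1, 512, 512)  # dummy batch of single-channel X-ray images
for name, model in models.items():
    with torch.no_grad():
        print(name, tuple(model(x).shape))  # -> (2, NUM_CLASSES, 512, 512)
```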
The first study to introduce a publicly available benchmark for cargo inspections was MFA-net [13], which provides the CargoX dataset, compiled by synthesizing prohibited items (e.g., knives) onto randomly cropped cargo X-ray images. More specifically, the CargoX dataset is a cargo inspection benchmark created by inserting prohibited items at random locations. However, because the CargoX dataset does not include X-ray images of nuclear items and only one banned item is synthesized per image, models trained on it cannot readily detect overlapping items, so it is not suitable for nuclear item detection. In this study, multiple X-ray images of nuclear items are inserted into each cargo X-ray image after being randomly rotated, randomly scaled, and randomly positioned, in order to respond to more diverse scanning environments and item variations.
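The following is a minimal sketch, under stated assumptions, of this kind of multi-item synthesis: the cargo image and item crops are assumed to be single-channel transmittance images normalized to [0, 1], item crops are assumed smaller than the cargo image, and overlapping regions are combined multiplicatively (a standard approximation for X-ray transmission). The function and parameter names are hypothetical, and this is not necessarily the authors' exact pipeline.

```python
# Hypothetical sketch of multi-item insertion with random rotation, scale, and
# position; multiplicative compositing of [0, 1] transmittance images is an
# assumption, not necessarily the compositing rule used in the original work.
import cv2
import numpy as np

def insert_items(cargo, items, masks):
    """Composite item crops onto a cargo X-ray image; returns image and label map."""
    out = cargo.astype(np.float32).copy()
    label = np.zeros(cargo.shape[:2], dtype=np.uint8)
    h, w = out.shape[:2]
    for cls_id, (item, mask) in enumerate(zip(items, masks), start=1):
        scale = np.random.uniform(0.5, 1.5)          # random scale
        angle = np.random.uniform(0.0, 360.0)        # random rotation
        ih, iw = item.shape[:2]
        m = cv2.getRotationMatrix2D((iw / 2, ih / 2), angle, scale)
        # Empty space transmits fully (value 1.0), so it leaves the cargo unchanged.
        item_t = cv2.warpAffine(item.astype(np.float32), m, (iw, ih), borderValue=1.0)
        mask_t = cv2.warpAffine(mask.astype(np.float32), m, (iw, ih)) > 0.5
        y0 = np.random.randint(0, h - ih + 1)        # random location
        x0 = np.random.randint(0, w - iw + 1)        # (items assumed smaller than cargo)
        out[y0:y0 + ih, x0:x0 + iw] *= item_t        # multiplicative overlap
        label[y0:y0 + ih, x0:x0 + iw][mask_t] = cls_id
    return out, label
```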

3. Data Augmentation for Semantic Segmentation

High-quality, large-scale X-ray datasets are needed to train a semantic segmentation model for X-ray cargo inspections. However, X-ray images from cargo inspections are rare. In particular, X-ray images of nuclear items taken with a cargo inspection scanner, together with accurate label information, are not publicly available. Therefore, to prevent the illegal transfer of nuclear items in the future, it is crucial to create a synthetic dataset using data augmentation and to train a semantic segmentation model on it.
Various augmentations for image classification have been introduced [20][21][22][23]. Cutout [22] is a simple regularization method that randomly masks patches of the input image; it forces the model to use the entire input image rather than relying on a specific region. AugMix [23] improves robustness and uncertainty estimates by mixing the results of several augmentation chains in a convex combination, preventing image degradation while maintaining diversity. These augmentations mainly encode invariances to data transformations and are well suited to image classification.
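As a concrete illustration, the following is a minimal sketch of Cutout in the spirit of [22]; the patch size and fill value are illustrative defaults rather than settings from the original paper.

```python
# Minimal Cutout sketch [22]: zero out one randomly positioned square patch.
# patch_size and fill are illustrative defaults, not the paper's settings.
import numpy as np

def cutout(image, patch_size=64, fill=0.0):
    """Return a copy of `image` with a random square patch replaced by `fill`."""
    h, w = image.shape[:2]
    out = image.copy()
    cy, cx = np.random.randint(h), np.random.randint(w)           # patch centre
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    out[y0:y1, x0:x1] = fill
    return out
```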
Various augmentation methods have also been introduced for semantic segmentation, most notably CutMix [24] and Copy-Paste [25]. CutMix [24] fills the masked region with a patch from another image, addressing the drawback of Cutout, which discards a significant number of informative pixels. With CutMix, every pixel within the image remains informative, while the benefits of local dropout are retained. Copy-Paste [25] pastes objects from one image onto another to create many new combinations of training data. These augmentations target the semantic segmentation of general images, so they cannot reproduce the X-ray transmittance that results from the physical properties of overlapping objects.
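For segmentation use, CutMix can be sketched as follows: a random rectangle of a second sample replaces the corresponding region of the first, in both the image and the label map, so no pixels are discarded. The Beta-distributed mixing ratio follows the original formulation [24], while the per-pixel label handling is the common adaptation for segmentation rather than a prescription from this entry.

```python
# Minimal CutMix sketch [24], adapted for segmentation: the cut region is taken
# from a second sample in both the image and its label map.
import numpy as np

def cutmix_pair(img_a, lbl_a, img_b, lbl_b, beta=1.0):
    """Paste a random rectangle of sample B into sample A (image and labels)."""
    h, w = img_a.shape[:2]
    lam = np.random.beta(beta, beta)                      # fraction kept from sample A
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)   # rectangle centre
    y0, y1 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x0, x1 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    img_mix, lbl_mix = img_a.copy(), lbl_a.copy()
    img_mix[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    lbl_mix[y0:y1, x0:x1] = lbl_b[y0:y1, x0:x1]
    return img_mix, lbl_mix
```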
To train a segmentation model for automatic cargo inspection without real cargo X-ray images, a method of synthesizing the textures of backscatter X-ray (BSX) images using a GAN was also introduced [14]. Here, however, it is assumed that general cargo X-ray images and X-ray images of nuclear items, both taken with the same high-energy electromagnetic radiation used for cargo inspections, are available; hence, artificial texture generation is not required.
At present, it is crucial to develop a semantic segmentation technique for X-ray cargo inspections in order to prevent the illegal transfer of nuclear items. While large, high-quality X-ray datasets are required to train a deep-learning-based segmentation model, X-ray images from cargo inspections are rare; in particular, X-ray images of nuclear items loaded into cargo are almost impossible to obtain.

References

  1. Yorozu, T.; Hirano, M.; Oka, K.; Tagawa, Y. Electron spectroscopy studies on magneto-optical media and plastic substrate interface. IEEE Transl. J. Magn. Jpn. 1987, 2, 740–741.
  2. Mery, D.; Riffo, V.; Zscherpel, U.; Mondragón, G.; Lillo, I.; Zuccar, I.; Lobel, H.; Carrasco, M. GDXray: The database of X-ray images for nondestructive testing. J. Nondestruct. Eval. 2015, 34, 1–12.
  3. Kim, M.; Ou, E.; Loh, P.L.; Allen, T.; Agasie, R.; Liu, K. RNN-Based online anomaly detection in nuclear reactors for highly imbalanced datasets with uncertainty. Nucl. Eng. Des. 2020, 364, 110699.
  4. Ma, B.; Jia, T.; Su, M.; Jia, X.; Chen, D.; Zhang, Y. Automated Segmentation of Prohibited Items in X-ray Baggage Images Using Dense De-overlap Attention Snake. IEEE Trans. Multimed. 2022.
  5. Miao, C.; Xie, L.; Wan, F.; Su, C.; Liu, H.; Jiao, J.; Ye, Q. Sixray: A large-scale security inspection X-ray benchmark for prohibited item discovery in overlapping images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2119–2128.
  6. Wang, Q.; Bhowmik, N.; Breckon, T.P. Multi-class 3D object detection within volumetric 3D computed tomography baggage security screening imagery. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020; pp. 13–18.
  7. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93.
  8. Tao, R.; Wei, Y.; Jiang, X.; Li, H.; Qin, H.; Wang, J.; Ma, Y.; Zhang, L.; Liu, X. Towards real-world X-ray security inspection: A high-quality benchmark and lateral inhibition module for prohibited items detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 10923–10932.
  9. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  10. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
  11. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
  12. Yuan, Y.; Chen, X.; Wang, J. Object-contextual representations for semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 173–190.
  13. Viriyasaranon, T.; Chae, S.H.; Choi, J.H. MFA-net: Object detection for complex X-ray cargo and baggage security imagery. PLoS ONE 2022, 17, e0272961.
  14. Cho, H.; Park, H.; Kim, I.J.; Cho, J. Data Augmentation of Backscatter X-ray Images for Deep Learning-Based Automatic Cargo Inspection. Sensors 2021, 21, 7294.
  15. Wei, Y.; Tao, R.; Wu, Z.; Ma, Y.; Zhang, L.; Liu, X. Occluded prohibited items detection: An x-ray security inspection benchmark and de-occlusion attention module. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 138–146.
  16. Chang, A.; Zhang, Y.; Zhang, S.; Zhong, L.; Zhang, L. Detecting prohibited objects with physical size constraint from cluttered X-ray baggage images. Knowl.-Based Syst. 2022, 237, 107916.
  17. Hassan, T.; Bettayeb, M.; Akçay, S.; Khan, S.; Bennamoun, M.; Werghi, N. Detecting prohibited items in X-ray images: A contour proposal learning approach. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Virtual, 25–28 October 2020; pp. 2016–2020.
  18. Hassan, T.; Shafay, M.; Akçay, S.; Khan, S.; Bennamoun, M.; Damiani, E.; Werghi, N. Meta-transfer learning driven tensor-shot detector for the autonomous localization and recognition of concealed baggage threats. Sensors 2020, 20, 6450.
  19. Hassan, T.; Akcay, S.; Bennamoun, M.; Khan, S.; Werghi, N. Tensor pooling-driven instance segmentation framework for baggage threat recognition. Neural Comput. Appl. 2022, 34, 1239–1250.
  20. Dornaika, F.; Sun, D.; Hammoudi, K.; Charafeddine, J.; Cabani, A.; Zhang, C. Object-centric Contour-aware Data Augmentation Using Superpixels of Varying Granularity. Pattern Recognit. 2023, 139, 109481.
  21. Meethongjan, K.; Hoang, V.T.; Surinwarangkoon, T. Data augmentation by combining feature selection and color features for image classification. Int. J. Electr. Comput. Eng. (IJECE) 2022, 12, 6172–6177.
  22. DeVries, T.; Taylor, G.W. Improved regularization of convolutional neural networks with cutout. arXiv 2017, arXiv:1708.04552.
  23. Hendrycks, D.; Mu, N.; Cubuk, E.D.; Zoph, B.; Gilmer, J.; Lakshminarayanan, B. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
  24. Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6023–6032.
  25. Ghiasi, G.; Cui, Y.; Srinivas, A.; Qian, R.; Lin, T.Y.; Cubuk, E.D.; Le, Q.V.; Zoph, B. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 2918–2928.