Defect Detection in Clothing: Comparison
Please note this is a comparison between Version 2 by Sirius Huang and Version 1 by Vítor Carvalho.

Blind people often encounter challenges in managing their clothing, specifically in identifying defects such as stains or holes. With the progress of the computer vision field, it is crucial to minimize these limitations as much as possible and assist blind people in selecting appropriate clothing.

  • blind people
  • clothing defect detection
  • object detection
  • deep learning

1. Introduction

Visual impairment, e.g., blindness, can have a significant impact on the psychological and cognitive functioning of an individual. Several studies have shown that vision impairment is associated with a variety of negative health outcomes and a poor quality of life [1][2]. Additionally, blindness currently affects a significant number of individuals and should therefore not be regarded as a minor concern for society. According to a recent study, 33.6 million people worldwide suffer from blindness, which clearly shows the dimension of this population group [3].
The use of assistive technology can help mitigate the negative effects of blindness and improve the quality of life of people who are blind. Although there has been a proliferation of smart devices and advancements in cutting-edge technology for blind people, most research efforts have been directed towards navigation, mobility, and object recognition, leaving aesthetics aside [4][5][6]. The selection of clothing and preferred style for different occasions is a fundamental aspect of one’s personal identity [7]. It has a significant impact on the way we perceive ourselves and on the way we are perceived by others [7][8]. Nonetheless, individuals who are blind may experience insecurity and stress when dressing up because they cannot assess the condition of their garments. This inability to perceive visual cues can make dressing up a daily challenge. In addition, blind people may be more prone to staining and tearing their clothing due to the inherent difficulties in handling objects and performing daily tasks. In particular, detecting stains in a timely manner is crucial to prevent them from becoming permanent or hard to remove. Despite the promising potential of technological solutions, significant challenges still need to be overcome. The lack of vision makes it difficult for these individuals to identify small irregularities or stains in the textures or fabrics of clothing, leaving them reliant on others for assistance.

2. Defect Detection in Clothing

Defect detection in clothing remains a barely addressed topic in the literature. However, if the scope is expanded to industry, some interesting works have been carried out, mainly regarding fabric quality control in the textile industry. This quality control approach still plays an important role in industry and can be an appealing starting point for defect detection in clothing with other purposes in sight [9]. Based on this premise, a quick literature survey shows that machine vision based on image processing technology has replaced manual inspection, reducing costs and increasing detection accuracy. The automatic detection of fabric defects is an integral part of modern textile manufacturing [10]. More recently, owing to their success in a variety of applications, deep learning methods have been applied to the detection of fabric defects [11]. A wide range of applications have been developed using convolutional neural networks (CNNs), such as image classification, object detection, and image segmentation [12]. Defect detection using CNNs can be applied to several different objects [13][14][15]. Compared with traditional image processing methods, CNNs can automatically extract useful features from data without requiring complex handcrafted feature designs [16]. Zhang et al. [17] presented a comparative study between different YOLOv2 networks, with proper optimization, on a collected yarn-dyed fabric defect dataset, achieving an intersection over union (IoU) of 0.667. Another, unsupervised, method based on multi-scale convolutional denoising autoencoder networks was presented by Mei et al. [18]. A particularity of this approach is that it can be trained with only a small number of defects, without ground-truth labels or human intervention. A maximum accuracy of 85.2% was reported across four datasets.
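The IoU metric reported above measures the overlap between a predicted bounding box and the ground-truth box. A minimal sketch of the computation (boxes as `(x1, y1, x2, y2)` tuples; the function name is illustrative, not taken from any of the cited works):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 0.667, as in Zhang et al., therefore means the predicted box overlaps roughly two-thirds of the combined area of prediction and ground truth.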
A deep-fusion fabric defect detection algorithm, DenseNet121-SSD (Densely Connected Convolutional Networks 121-Single-Shot Multi-Box Detector), was proposed by He et al. [19]. The deep-fusion method makes detection both more accurate and faster, achieving a mean average precision (mAP) of 78.6%. Later, Jing et al. [20] proposed a deep learning segmentation model, Mobile-Unet, for fabric defect segmentation. A benchmark against conventional networks on two fabric image databases, the Yarn-dyed Fabric Images (YFI) and the Fabric Images (FI), reached IoU values of 0.92 and 0.70, respectively. A novel defect detection system using artificial defect data, based on stacked convolutional autoencoders, was then proposed by Han et al. [21]. Their method was evaluated through a comparative study with U-Net on real defect data, and it was concluded that actual defects could be detected using only non-defect and artificial data. Additionally, an optimized version of the Levenberg–Marquardt (LM)-based artificial neural network (ANN) was developed by Mohammed et al. for leather surfaces [22]. It enables the classification and identification of defects in computer vision-based automated systems with an accuracy of 97.85%, compared with the 60–70% obtained through manual inspection. Likewise, Xie et al. [23] proposed a robust fabric defect detection method based on the improved RefineDet, evaluated on three databases. Additionally, a segmentation network with a decision network was proposed by Huang et al. [24]; a major advantage is the reduced number of images needed to achieve accurate segmentation results. Furthermore, a deep learning model to classify fabric defects into seven categories based on CapsNet was proposed by Kahraman et al. [25], achieving an accuracy of 98.71%.
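For segmentation models such as Mobile-Unet, IoU is computed per pixel over binary defect masks rather than over bounding boxes. A sketch of this variant, using flat 0/1 sequences as masks (illustrative only, not the authors' implementation):

```python
def mask_iou(pred, target):
    """Pixel-wise IoU between two binary masks given as flat 0/1 sequences."""
    inter = sum(p & t for p, t in zip(pred, target))  # pixels marked in both
    union = sum(p | t for p, t in zip(pred, target))  # pixels marked in either
    # If both masks are empty, the prediction is a perfect match.
    return inter / union if union > 0 else 1.0
```

The IoU of 0.92 on YFI thus indicates that predicted defect pixels and ground-truth defect pixels agree almost everywhere.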
Table 1 summarizes the main results of the aforementioned works, including the datasets used.
Table 1. Literature overview on textile fabric defect detection, including the author, year, method, dataset, defect classes, and metrics.
The results presented in Table 1 reveal a lack of standardization in evaluation metrics and datasets across studies, making accurate comparison of results difficult. This can be attributed to the diversity of tasks in defect detection, including defect classification, defect location, defect segmentation, and defect semantic segmentation, each requiring distinct evaluation metrics. Furthermore, the studies focus on either one-stage or two-stage detectors, without a comparative study between them. One-stage detectors, such as You Only Look Once (YOLO) [26] and the Single-Shot Detector (SSD) [27], are known for their speed, but also for their lower accuracy compared with two-stage detectors, such as Faster R-CNN [28] and Mask R-CNN (region-based convolutional neural network) [29]. Two-stage detectors offer improved accuracy, but at the cost of slower performance. Despite the similarities between clothing and textiles, a new approach is needed for detecting defects in clothing, especially to assist blind people. For that, different types of images must be analyzed, beyond just textiles, resulting in the creation of new datasets. In the textile industry, fabrics usually emerge from the manufacturing process in a roll and undergo stretching, which facilitates the detection of defects. Furthermore, the magnification of images to fit the fabrics coming off the roll can greatly amplify any defects present, as depicted in Figure 1.
Figure 1. Examples of defects from the TILDA dataset: (a) large oil stain in the upper right corner, and (b) medium-sized hole in the upper left corner.
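Both one-stage and two-stage detector families emit many overlapping candidate boxes per object and rely on non-maximum suppression (NMS) to keep only the best-scoring one per defect. A minimal greedy NMS sketch, with an IoU threshold as used in the detectors cited above (function and parameter names are illustrative):

```python
def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: visit boxes by descending score and
    keep a box only if it overlaps every kept box by at most iou_thresh."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[keep_j]) <= iou_thresh for keep_j in keep):
            keep.append(i)
    return keep  # indices of surviving detections
```

For clothing-defect detection the IoU threshold is a tuning knob: a lower threshold merges near-duplicate detections of the same stain, while a higher one preserves closely spaced but distinct defects.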
It becomes clear that a comprehensive dataset that captures the entirety of a garment can provide crucial insights into identifying defects in the piece as a whole, thus, leading to significant advancements in this field. Furthermore, textile fabrics’ datasets may not capture important clothing features, such as wrinkles, patterns, and buttonholes, which can present a significant challenge during the analysis, since defects can be hidden in the wrinkles of the clothes, or simply hidden by the way the garment was folded or stored, as illustrated in Figure 2.
Figure 2. Visibility of the defects on clothing: (a) imperceptible sweat stain and (b) visible sweat stain.

References

  1. Chia, E.-M.; Mitchell, P.; Ojaimi, E.; Rochtchina, E.; Wang, J.J. Assessment of vision-related quality of life in an older population subsample: The Blue Mountains Eye Study. Ophthalmic Epidemiol. 2006, 13, 371–377.
  2. Langelaan, M.; de Boer, M.R.; van Nispen, R.M.A.; Wouters, B.; Moll, A.C.; van Rens, G.H.M.B. Impact of visual impairment on quality of life: A comparison with quality of life in the general population and with other chronic conditions. Ophthalmic Epidemiol. 2007, 14, 119–126.
  3. Steinmetz, J.D.; Bourne, R.A.A.; Briant, P.S.; Flaxman, S.R.; Taylor, H.R.B.; Jonas, J.B.; Abdoli, A.A.; Abrha, W.A.; Abualhasan, A.; Abu-Gharbieh, E.G.; et al. Causes of blindness and vision impairment in 2020 and trends over 30 years, and prevalence of avoidable blindness in relation to VISION 2020: The Right to Sight: An analysis for the Global Burden of Disease Study. Lancet Glob. Health 2021, 9, e144–e160.
  4. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172.
  5. Messaoudi, M.D.; Menelas, B.-A.J.; Mcheick, H. Review of Navigation Assistive Tools and Technologies for the Visually Impaired. Sensors 2022, 22, 7888.
  6. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565.
  7. Johnson, K.; Lennon, S.J.; Rudd, N. Dress, body and self: Research in the social psychology of dress. Fash. Text. 2014, 1, 20.
  8. Adam, H.; Galinsky, A.D. Enclothed cognition. J. Exp. Soc. Psychol. 2012, 48, 918–925.
  9. Ngan, H.Y.T.; Pang, G.K.H.; Yung, N.H.C. Automated fabric defect detection—A review. Image Vis. Comput. 2011, 29, 442–458.
  10. Li, C.; Li, J.; Li, Y.; He, L.; Fu, X.; Chen, J. Fabric Defect Detection in Textile Manufacturing: A Survey of the State of the Art. Secur. Commun. Netw. 2021, 2021, 9948808.
  11. Kahraman, Y.; Durmuşoğlu, A. Deep learning-based fabric defect detection: A review. Text. Res. J. 2022, 93, 1485–1503.
  12. Lu, Y. Artificial intelligence: A survey on evolution, models, applications and future trends. J. Manag. Anal. 2019, 6, 1–29.
  13. Roslan, M.I.B.; Ibrahim, Z.; Abd Aziz, Z. Real-Time Plastic Surface Defect Detection Using Deep Learning. In Proceedings of the 2022 IEEE 12th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 21–22 May 2022; pp. 111–116.
  14. Lv, B.; Zhang, N.; Lin, X.; Zhang, Y.; Liang, T.; Gao, X. Surface Defects Detection of Car Door Seals Based on Improved YOLO V3. J. Phys. Conf. Ser. 2021, 1986, 12127.
  15. Ding, F.; Zhuang, Z.; Liu, Y.; Jiang, D.; Yan, X.; Wang, Z. Detecting Defects on Solid Wood Panels Based on an Improved SSD Algorithm. Sensors 2020, 20, 5315.
  16. Tabernik, D.; Šela, S.; Skvarč, J.; Skočaj, D. Segmentation-based deep-learning approach for surface-defect detection. J. Intell. Manuf. 2020, 31, 759–776.
  17. Zhang, H.; Zhang, L.; Li, P.; Gu, D. Yarn-dyed Fabric Defect Detection with YOLOV2 Based on Deep Convolution Neural Networks. In Proceedings of the 2018 IEEE 7th Data Driven Control and Learning Systems Conference (DDCLS), Enshi, China, 25–27 May 2018; pp. 170–174.
  18. Mei, S.; Wang, Y.; Wen, G. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model. Sensors 2018, 18, 1604.
  19. He, X.; Wu, L.; Song, F.; Jiang, D.; Zheng, G. Research on Fabric Defect Detection Based on Deep Fusion DenseNet-SSD Network. In Proceedings of the International Conference on Wireless Communication and Sensor Networks, Association for Computing Machinery, New York, NY, USA, 13–15 May 2020; pp. 60–64.
  20. Jing, J.; Wang, Z.; Rätsch, M.; Zhang, H. Mobile-Unet: An efficient convolutional neural network for fabric defect detection. Text. Res. J. 2022, 92, 30–42.
  21. Han, Y.-J.; Yu, H.-J. Fabric Defect Detection System Using Stacked Convolutional Denoising Auto-Encoders Trained with Synthetic Defect Data. Appl. Sci. 2020, 10, 2511.
  22. Mohammed, K.M.C.; Srinivas Kumar, S.; Prasad, G. Defective texture classification using optimized neural network structure. Pattern Recognit. Lett. 2020, 135, 228–236.
  23. Xie, H.; Wu, Z. A Robust Fabric Defect Detection Method Based on Improved RefineDet. Sensors 2020, 20, 4260.
  24. Huang, Y.; Jing, J.; Wang, Z. Fabric Defect Segmentation Method Based on Deep Learning. IEEE Trans. Instrum. Meas. 2021, 70, 5005715.
  25. Kahraman, Y.; Durmuşoğlu, A. Classification of Defective Fabrics Using Capsule Networks. Appl. Sci. 2022, 12, 5285.
  26. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
  27. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing: Cham, Switzerland, 2016.
  28. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
  29. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.