Multi-Scene Mask Detection: Comparison
Please note this is a comparison between Version 2 by Camila Xu and Version 1 by Chao Ren.

Deep learning-based mask detection is in strong demand in medical settings and industrial production, reflecting the application of neural networks and image sensors in daily life. During an epidemic of respiratory viruses, mask detection can effectively supervise the wearing of masks, thereby reducing the risk of virus transmission.

  • multi-scene mask detection
  • deep learning
  • multi-scale residual
  • channel-spatial attention

1. Introduction

Deep learning-based mask detection is in strong demand in medical settings and industrial production, reflecting the application of neural networks and image sensors in daily life [1][2]. During an epidemic of respiratory viruses, mask detection can effectively supervise the wearing of masks, thereby reducing the risk of virus transmission. In hospitals, doctors and patients are often required to wear masks under certain circumstances for their safety. Mask detection technology used in industrial production can also prevent dust from entering workers' respiratory tracts. In December 2019, the novel coronavirus (COVID-19) swept the world, causing immense casualties and enormous economic losses. Basic protective measures against infection therefore need to be taken in public places during an epidemic. The coronavirus is transmitted mainly through droplets and close contact [3], so wearing a mask is very effective in slowing the spread of the virus and reducing the risk of infection [4]. On the one hand, a mask controls the spread of the virus at the source, effectively reducing the risk of an infected person transmitting droplets into the surrounding air [5]; on the other hand, masks protect uninfected people by filtering virus-containing particles from the air [6]. The importance of masks is reflected not only in protection against respiratory virus infection but also in areas such as industrial production.
For example, masks can prevent workers from inhaling harmful gases and particles produced during manufacturing [7]. An effective mask supervision and management mechanism is therefore clearly necessary. At present, the supervision of masks in public places is mostly manual, which is labor-intensive and often yields unsatisfactory results. Research on algorithms that automatically detect mask wearing is therefore of great practical significance for the health and safety of citizens.
At present, deep learning is developing rapidly in the field of computer vision. Compared with object detection based on traditional machine learning, such as the V-J detection network [8], the DPM network [9], and the HOG detection network [10], object detection based on deep learning has significant advantages in object recognition ability and algorithm adaptability [11]. In 2021, Batagelj, Peer et al. constructed a large face image dataset called the Face-Mask-Label Dataset (FMLD) from the publicly available MAFA [12] and Wider Face [13] datasets, and proposed a pipeline to recognize whether a mask is worn correctly [14]. They also conducted a comprehensive evaluation of various face detectors on the face mask dataset and found that all tested models performed worse on masked faces than on unmasked faces. Crowd density varies across scenarios, as do the types of masks worn. In many scenarios, the masks people wear are small and easily occluded, so mask detection is primarily a small target detection task. Small target detection faces many technical difficulties, such as few available features, high positioning accuracy requirements, and scarce datasets, and there are only a few related studies on deep learning-based small target mask detection. Siradjuddin et al. applied Faster R-CNN, which extends Fast R-CNN with a Region Proposal Network (RPN), Region of Interest (ROI) pooling, and classification layers, to mask detection [15]. In 2020, Bochkovskiy et al. proposed YOLOv4 [16]; YOLOv5 [17] followed, improving both detection accuracy and detection speed. In [18], Ju et al. proposed ARFFDet based on a modified Darknet53 [19], using a scale matching strategy to select appropriate scales and anchor sizes for small target detection. Zhang et al. improved Faster R-CNN with considerate features and an effectively expanded sample size for small target detection [20].
All of the above studies can effectively detect faces and masks. However, natural scenes for small target mask detection are often complex, with dense crowds, occluded masks, and poor ambient lighting, which degrade the model's detection performance and prevent it from achieving the expected level of crowd protection. The mask detection model therefore needs a strong ability to detect small targets across multiple scenes.

2. Object Detection

Traditional object detection relies on manual feature extraction, but extracting effective features is very challenging, and the performance of this approach has saturated. With the rapid development of deep learning in recent years, deep learning-based object detection has gradually become mainstream [21]. Since AlexNet [22] emerged in 2012, convolutional neural networks (CNNs) have exploded, developing from the deepening VGG structure to the widening Inception networks [23][24][25] and ResNet [26][27], then to the lightweight MobileNet [28] and EfficientNet [29]. The focus of CNN research has also shifted from parameter optimization to designing network architectures, such as new architectures based on the attention mechanism [30][31][32], to improve network performance. Deep learning-based object detection can be divided into two categories: two-stage detection and one-stage detection. Two-stage detection networks mainly include R-CNN [33], SPP-Net [34], Fast R-CNN [35], and Faster R-CNN [36]. Although two-stage networks achieve high accuracy, they have a series of shortcomings, including cumbersome training steps, slow training speed, and excessive storage requirements during training [37]. One-stage detection algorithms include the YOLO series [16][17][19][38][39][40][41], SSD [42], and RetinaNet [43]. Huang et al. used a generalized feature pyramid to improve the adaptability of the network [44]. Wang et al. added the SimAM spatial and channel attention mechanism to enhance the convergence of the model [45]. In [46], detection is improved by adding an attention mechanism and an output layer to strengthen feature extraction and feature fusion.
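Attention modules such as CBAM [31] recalibrate feature maps along the channel and spatial dimensions. As a rough illustration only, the channel attention branch can be sketched in NumPy; the shapes, reduction ratio, and random weights below are assumptions for the sketch, not the cited implementation:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map:
    global average- and max-pooled descriptors pass through a shared
    two-layer MLP, and their sum is squashed into per-channel weights."""
    avg = feat.mean(axis=(1, 2))                # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))                  # (C,) max-pooled descriptor

    def mlp(v):
        return w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU hidden layer

    weights = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate per channel
    return feat * weights[:, None, None]        # rescale each channel

rng = np.random.default_rng(1)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio 4: 8 -> 2 channels
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to its learned importance; the spatial attention branch of CBAM applies an analogous gate over the H x W positions.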
Compared with the two-stage target detection algorithms, one-stage methods have significantly faster inference speed but may suffer from lower detection accuracy [37].
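Part of what keeps one-stage detectors fast is that their dense box predictions are pruned by simple post-processing such as non-maximum suppression (NMS). A minimal plain-Python sketch, with box format and threshold chosen purely for illustration rather than taken from any cited network:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box
    and discard every box overlapping it above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two heavily overlapping detections and one distant one:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the lower-scoring overlapping box is suppressed
```

Production detectors use vectorized variants of this loop, but the greedy keep-and-suppress logic is the same.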

3. Small Target Detection

Thanks to advances in deep learning, object detection has made remarkable progress [47]. However, small target detection has long remained a challenging task: traditional object detection algorithms show a significant gap between detecting small objects and normal-sized objects. Five mainstream approaches are currently available to enhance a network's small object detection capability. The first is image enhancement, represented by the scale matching strategy of [48] and the artificial augmentation of [49], which pastes copies of small objects into training images. The second is multi-scale learning, represented by the Inside-Outside Network (ION) object detection method of [50] and the extended feature pyramid network of [51]. The third is context learning, including the multi-scale context feature enhancement method of [52] and the adaptive context modeling and iterative improvement method of [53]. The fourth is generative adversarial learning, represented by GAN [54] and MTGAN [55]. The final type is the anchor-free method, represented by DeNet [56] and PLN [57]. These five methods greatly enhance the small target detection capability of detection networks, but each has significant drawbacks. Image enhancement and multi-scale learning can introduce noise during training, i.e., minor image noise may significantly affect the outcome [58]. Multi-scale learning also requires suitable fusion strategies and loss functions, increasing the model's complexity and design difficulty. Context learning relies on large-scale pretrained models, which are costly to train and may contain biases and errors, and it may not provide sufficient contextual information to meet the training requirements.
Generative adversarial learning requires balancing the training progress of the generator and the discriminator to avoid mode collapse or oscillation, which demands careful tuning of hyperparameters and loss functions. When two or more target center points overlap or lie close to each other, the anchor-free method may produce semantic ambiguity; and because it directly predicts the location and size of the target, it suffers from an imbalance of positive and negative samples during training, which may affect the convergence and generalization of the model. The five small object detection methods above therefore offer only limited improvements in detection performance.
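As a concrete illustration of the image-enhancement family, the oversampling-by-pasting idea of [49] can be sketched with NumPy. The image size, object crop, and paste location below are arbitrary choices for the sketch, not parameters from the cited work:

```python
import numpy as np

def paste_small_object(image, obj_patch, top, left):
    """Oversample a small object by pasting a copy of its patch at a
    new location; a trivial stand-in for copy-paste augmentation."""
    out = image.copy()
    h, w = obj_patch.shape[:2]
    out[top:top + h, left:left + w] = obj_patch
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)
patch = img[10:18, 10:18].copy()            # an 8x8 "small object" crop
aug = paste_small_object(img, patch, 40, 40)  # duplicate it elsewhere
```

In a real pipeline the corresponding bounding-box annotation would be duplicated alongside the pixels, which is where the training noise mentioned above can creep in if pasted objects clash with their new background.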

4. Mask Wearing Detection

Various deep learning-based mask detection methods have been studied in recent years. Batagelj et al. [14] constructed a large face mask dataset from the publicly available MAFA and Wider Face datasets, a great contribution to testing and developing mask detection models. Khandelwal et al. used a MobileNetV2 backbone to detect whether people are wearing masks [59]. Fan et al. proposed a one-stage face mask detector based on a context attention module to extract features related to the mask wearing status [60]. In [61], Qin et al. developed a new mask detection network combining image super-resolution with a classification network (SRCNet), quantifying a tri-classification problem on unconstrained 2D face images. Jiang et al. introduced facial landmarks, a key face feature, into the detection layer of YOLOv5, improving detection accuracy on occluded faces and dense crowds [62]. In [63], a real-time CNN-based approach with transfer learning was introduced to detect whether a mask is worn. Asghar et al. proposed a depthwise separable convolutional neural network based on MobileNet [28], which improved learning performance and decreased the number of parameters [64]. Balaji et al. employed a VGG-16 CNN model built with Keras/TensorFlow and OpenCV to identify people not wearing face masks in government workplaces [65]. In addition to these studies, some commercial solutions also provide face mask detection features; they integrate a video stream from image sensors and apply vision techniques to monitor whether crowds are wearing masks [66][67].
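The parameter savings that motivate depthwise separable convolutions in MobileNet-based detectors [28] follow from a simple count: a standard k x k convolution needs C_in * C_out * k^2 weights, whereas a depthwise k x k convolution followed by a 1x1 pointwise convolution needs C_in * k^2 + C_in * C_out. A quick check, with layer sizes chosen only for illustration:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1x1 pointwise conv (bias ignored)."""
    return c_in * k * k + c_in * c_out

# Example layer: 128 -> 256 channels, 3x3 kernel.
standard = conv_params(128, 256, 3)                   # 294,912 weights
separable = depthwise_separable_params(128, 256, 3)   # 33,920 weights
reduction = standard / separable                      # roughly 8.7x fewer
```

The reduction factor approaches k^2 as the output channel count grows, which is why 3x3 layers shrink by close to a factor of nine.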

References

  1. Benifa, J.B.; Chola, C.; Muaad, A.Y.; Hayat, M.A.B.; Bin Heyat, M.B.; Mehrotra, R.; Akhtar, F.; Hussein, H.S.; Vargas, D.L.R.; Castilla, Á.K.; et al. FMDNet: An Efficient System for Face Mask Detection Based on Lightweight Model during COVID-19 Pandemic in Public Areas. Sensors 2023, 23, 6090.
  2. Su, X.; Gao, M.; Ren, J.; Li, Y.; Dong, M.; Liu, X. Face mask detection and classification via deep transfer learning. Multimed. Tools Appl. 2022, 81, 4475–4494.
  3. Li, Y.; Guo, F.; Cao, Y.; Li, L.; Guo, Y. Insight into COVID-2019 for pediatricians. Pediatr. Pulmonol. 2020, 55, E1–E4.
  4. Jung, H.R.; Park, C.; Kim, M.; Jhon, M.; Kim, J.W.; Ryu, S.; Lee, J.Y.; Kim, J.M.; Park, K.H.; Jung, S.I.; et al. Factors associated with mask wearing among psychiatric inpatients during the COVID-19 pandemic. Schizophr. Res. 2021, 228, 235.
  5. Leung, N.H.; Chu, D.K.; Shiu, E.Y.; Chan, K.H.; McDevitt, J.J.; Hau, B.J.; Yen, H.L.; Li, Y.; Ip, D.K.; Peiris, J.; et al. Respiratory virus shedding in exhaled breath and efficacy of face masks. Nat. Med. 2020, 26, 676–680.
  6. Van der Sande, M.; Teunis, P.; Sabel, R. Professional and home-made face masks reduce exposure to respiratory infections among the general population. PLoS ONE 2008, 3, e2618.
  7. Ingle, M.A.; Talmale, G.R. Respiratory mask selection and leakage detection system based on canny edge detection operator. Procedia Comput. Sci. 2016, 78, 323–329.
  8. Xu, Y.; Yu, G.; Wu, X.; Wang, Y.; Ma, Y. An enhanced Viola-Jones vehicle detection method from unmanned aerial vehicles imagery. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1845–1856.
  9. Yan, J.; Lei, Z.; Yang, Y.; Li, S.Z. Stacked deformable part model with shape regression for object part localization. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 568–583.
  10. Dehghani, A.; Moloney, D.; Griffin, I. Object recognition speed improvement using BITMAP-HoG. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 659–663.
  11. Shinde, P.P.; Shah, S. A review of machine learning and deep learning applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6.
  12. Ge, S.; Li, J.; Ye, Q.; Luo, Z. Detecting masked faces in the wild with LLE-CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2682–2690.
  13. Yang, S.; Luo, P.; Loy, C.C.; Tang, X. Wider face: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5525–5533.
  14. Batagelj, B.; Peer, P.; Štruc, V.; Dobrišek, S. How to correctly detect face-masks for COVID-19 from visual information? Appl. Sci. 2021, 11, 2070.
  15. Siradjuddin, I.A.; Reynaldi; Muntasa, A. Faster Region-based Convolutional Neural Network for Mask Face Detection. In Proceedings of the 2021 5th International Conference on Informatics and Computational Sciences (ICICoS), Semarang, Indonesia, 24–25 November 2021; pp. 282–286.
  16. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
  17. Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; Kwon, Y. ultralytics/yolov5: v6.1 - TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference; Zenodo, 2022.
  18. Ju, M.; Luo, J.; Liu, G.; Luo, H. A real-time small target detection network. Signal Image Video Process. 2021, 15, 1265–1273.
  19. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
  20. Zhang, J.; Meng, Y.; Chen, Z. A Small Target Detection Method Based on Deep Learning with Considerate Feature and Effectively Expanded Sample Size. IEEE Access 2021, 9, 96559–96572.
  21. Liu, J.; Huang, W.; Xiao, L.; Huo, Y.; Xiong, H.; Li, X.; Xiao, W. Deep Learning Object Detection. In Proceedings of the Smart Computing and Communication: 7th International Conference, SmartCom 2022, New York, NY, USA, 18–20 November 2022; Springer: Berlin/Heidelberg, Germany, 2023; pp. 300–309.
  22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  23. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 448–456.
  24. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
  25. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 630–645.
  28. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
  29. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
  30. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
  31. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
  32. Zhang, Y.; Ge, H.; Lin, Q.; Zhang, M.; Sun, Q. Research of Maritime Object Detection Method in Foggy Environment Based on Improved Model SRC-YOLO. Sensors 2022, 22, 7786.
  33. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
  35. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
  36. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
  37. Xiao, Y.; Tian, Z.; Yu, J.; Zhang, Y.; Liu, S.; Du, S.; Lan, X. A review of object detection based on deep learning. Multimed. Tools Appl. 2020, 79, 23729–23791.
  38. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
  39. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
  40. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976.
  41. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696.
  42. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37.
  43. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  44. Huang, L.; Huang, W. RD-YOLO: An effective and efficient object detector for roadside perception system. Sensors 2022, 22, 8097.
  45. Wang, Y.; Guo, W.; Zhao, S.; Xue, B.; Zhang, W.; Xing, Z. A Big Coal Block Alarm Detection Method for Scraper Conveyor Based on YOLO-BS. Sensors 2022, 22, 9052.
  46. Xue, J.; Zheng, Y.; Dong-Ye, C.; Wang, P.; Yasir, M. Improved YOLOv5 network method for remote sensing image-based ground objects recognition. Soft Comput. 2022, 26, 10879–10889.
  47. Al Jaberi, S.M.; Patel, A.; AL-Masri, A.N. Object tracking and detection techniques under GANN threats: A systemic review. Appl. Soft Comput. 2023, 139, 110224.
  48. Yu, X.; Gong, Y.; Jiang, N.; Ye, Q.; Han, Z. Scale match for tiny person detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass, CO, USA, 1–5 March 2020; pp. 1257–1265.
  49. Kisantal, M.; Wojna, Z.; Murawski, J.; Naruniec, J.; Cho, K. Augmentation for small object detection. arXiv 2019, arXiv:1902.07296.
  50. Bell, S.; Zitnick, C.L.; Bala, K.; Girshick, R. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2874–2883.
  51. Deng, C.; Wang, M.; Liu, L.; Liu, Y.; Jiang, Y. Extended feature pyramid network for small object detection. IEEE Trans. Multimed. 2021, 24, 1968–1979.
  52. Li, J.; Wei, Y.; Liang, X.; Dong, J.; Xu, T.; Feng, J.; Yan, S. Attentive contexts for object detection. IEEE Trans. Multimed. 2016, 19, 944–954.
  53. Chen, Q.; Song, Z.; Dong, J.; Huang, Z.; Hua, Y.; Yan, S. Contextualizing object detection and classification. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 13–27.
  54. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
  55. Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. SOD-MTGAN: Small object detection via multi-task generative adversarial network. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 206–221.
  56. Tychsen-Smith, L.; Petersson, L. Denet: Scalable real-time object detection with directed sparse sampling. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 428–436.
  57. Wang, X.; Chen, K.; Huang, Z.; Yao, C.; Liu, W. Point linking network for object detection. arXiv 2017, arXiv:1706.03646.
  58. Konar, D.; Sarma, A.D.; Bhandary, S.; Bhattacharyya, S.; Cangi, A.; Aggarwal, V. A shallow hybrid classical–quantum spiking feedforward neural network for noise-robust image classification. Appl. Soft Comput. 2023, 136, 110099.
  59. Khandelwal, P.; Khandelwal, A.; Agarwal, S.; Thomas, D.; Xavier, N.; Raghuraman, A. Using computer vision to enhance safety of workforce in manufacturing in a post COVID world. arXiv 2020, arXiv:2005.05287.
  60. Fan, X.; Jiang, M. RetinaFaceMask: A single stage face mask detector for assisting control of the COVID-19 pandemic. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 832–837.
  61. Qin, B.; Li, D. Identifying facemask-wearing condition using image super-resolution with classification network to prevent COVID-19. Sensors 2020, 20, 5236.
  62. Jiang, C.; Tan, L.; Lin, T.; Huang, Z. Mask wearing detection algorithm based on improved YOLOv5. In Proceedings of the International Conference on Computer, Artificial Intelligence, and Control Engineering (CAICE 2023), Guangzhou, China, 17–19 February 2023; SPIE: Bellingham, WA, USA, 2023; Volume 12645, pp. 1057–1064.
  63. Tomás, J.; Rego, A.; Viciano-Tudela, S.; Lloret, J. Incorrect facemask-wearing detection using convolutional neural networks with transfer learning. Healthcare 2021, 9, 1050.
  64. Asghar, M.Z.; Albogamy, F.R.; Al-Rakhami, M.S.; Asghar, J.; Rahmat, M.K.; Alam, M.M.; Lajis, A.; Nasir, H.M. Facial mask detection using depthwise separable convolutional neural network model during COVID-19 pandemic. Front. Public Health 2022, 10, 855254.
  65. Balaji, S.; Balamurugan, B.; Kumar, T.A.; Rajmohan, R.; Kumar, P.P. A brief survey on AI based face mask detection system for public places. Ir. Interdiscip. J. Sci. Res. 2021, 5, 108–117.
  66. Udemans, C. Baidu Releases Open-Source Tool to Detect Faces without Masks. 2020. Available online: https://technode.com/2020/02/14/baidu-open-source-face-masks (accessed on 14 February 2020).
  67. Aerialtronics. Face Mask Detection Software. 2022. Available online: https://www.aerialtronics.com/en/products/face-mask-detection-software#featuresfacemask (accessed on 14 February 2020).