Smoke and Fire Detection Approaches: Comparison
Forest fires rank among the costliest and deadliest natural disasters globally. Detecting the smoke generated by forest fires is pivotal to the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke still face persistent issues, including slow detection speed, suboptimal detection accuracy, and difficulty in distinguishing smoke from small sources.
  • wildfire smoke detection
  • forest fire
  • UAV images

1. Introduction

The escalation of the global warming trend has become particularly evident in recent years, precipitating climate-induced drought and the emergence of El Niño events. Between 2013 and 2022, an annual mean of 61,410 wildfires occurred, burning an average of 7.2 million acres per year. In 2022, a total of 68,988 wildfires burned 7.6 million acres of land. Remarkably, Alaska bore the brunt of this devastation, accounting for over 40% of the total acreage affected (3.1 million acres). As of 1 June 2023, the current year had witnessed approximately 18,300 wildfires, affecting a cumulative area of more than 511,000 acres. Notably, most wildfires are caused by human activities, which account for 89% of the average annual wildfire count from 2018 to 2022. Conversely, wildfires ignited by lightning tend to be larger and consume more extensive acreage, accounting for approximately 53% of the mean acreage burned over the period from 2018 to 2022 [1].
Forest fires pose a serious hazard to both human lives and property, exerting a markedly harmful impact on the natural ecological balance of forest ecosystems. Furthermore, their occurrence remains unpredictable and poses tough challenges for rescue operations [2,3]. As a result, the prevention of forest fires has consistently held a significant position in the strategic planning of public infrastructure across diverse nations. In forest fire outbreaks, smoke typically appears before the actual ignition and provides detectable early indicators [4,5,6]. Timely and precise detection of wildfire-induced smoke is therefore of immense significance, not only for early forest fire warning and firefighting measures but also for reducing the loss of human lives and property.
Traditional methods for monitoring forest fires involve manual observation through ground-based surveys and observation towers. Manual observation is sensitive to external factors such as logistical limitations, communication challenges, and weather, leading to inefficiencies. Observation towers likewise have limitations, including restricted coverage, blind spots outside their field of view, and high maintenance expenses [7]. Despite its broad coverage, satellite-based monitoring [8] of forest fires faces limitations such as inadequate spatial resolution of satellite imagery, dependence on orbital cycles, susceptibility to weather and cloud cover interference, and the limited number of satellites. Furthermore, achieving real-time forest fire monitoring with satellite systems is challenging.
Aerial monitoring has emerged as a productive method for forest fire surveillance [9], primarily using aircraft or unmanned aerial vehicles (UAVs) for observation. Nevertheless, this approach incurs substantial operational expenses owing to the vast extent of the forested landscape under consideration. Conventional methods of early forest fire detection predominantly rely on smoke and temperature-sensitive sensors, often in a combined configuration [10,11,12]. These sensors are engineered to detect airborne smoke particulates and rapid rises in ambient temperature, thereby facilitating fire detection. Notably, an alert is triggered only when a predetermined threshold in either smoke concentration or ambient temperature is reached. Despite their utility, these hardware-based sensors exhibit spatial and temporal constraints, compounded by maintenance challenges after deployment. Consequently, sensor-based solutions fall short of meeting the demands of real-time monitoring and preemptive detection and mitigation of forest fires within vast and complex ecosystems such as forests.
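As a minimal, illustrative sketch of this threshold logic (the sensor readings and threshold values below are hypothetical, not taken from any deployed system), an alarm is raised as soon as either the smoke concentration or the rate of temperature rise exceeds its preset limit:

```python
# Minimal sketch of threshold-based fire alerting from hardware sensors.
# Threshold values are illustrative assumptions, not from a specific system.
SMOKE_PPM_THRESHOLD = 300.0   # hypothetical smoke-concentration limit (ppm)
TEMP_RISE_THRESHOLD = 8.0     # hypothetical temperature rise per minute (deg C)

def should_alert(smoke_ppm: float, temp_rise_per_min: float) -> bool:
    """Trigger an alarm if either smoke concentration or the rate of
    temperature increase exceeds its predetermined threshold."""
    return smoke_ppm >= SMOKE_PPM_THRESHOLD or temp_rise_per_min >= TEMP_RISE_THRESHOLD

# Example: moderate smoke but a rapid temperature rise still triggers an alert.
print(should_alert(120.0, 10.0))  # True
```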
With the advancement of computer technology, there has been a shift towards more systematic approaches for detecting fire smoke, moving beyond purely manual observation. This paradigm predominantly revolves around surveillance systems positioned at observation points that capture forest fire images or videos, from which features are extracted manually and used to build distinctive identifiers. This process is demonstrated in the work of Maruta et al. [13], who used textural features of smoke to train a support vector machine (SVM) for identifying wildfire smoke. The efficacy of this approach depends on a sufficient number of training cases and the precision of feature extraction, both of which influence the recognition performance of the SVM. However, this technique entails substantial data storage requirements and exhibits slow computational processing speeds. Filonenko et al. [14] performed smoke recognition by leveraging color and visual attributes inherent in smoke regions within surveillance videos. Exploiting the stability of the camera’s perspective, they extracted smoke regions by computing pixel edge roughness and subsequently employed background subtraction for identification. Nevertheless, this technique’s susceptibility to noise impairs its ability to achieve precise and rapid smoke detection. Tao et al. [15] worked on automating smoke detection using a hidden Markov model, focusing on capturing the changing characteristics of smoke areas in videos. They divided the color changes in consecutive frames into distinct blocks and used Markov models to classify each block. Despite these efforts, the strategy remains challenged by the intricacies of its operational setting. Traditional methods that use image or video analysis to detect forest fire smoke have achieved good results but also have limitations. The underlying feature extraction process requires expert domain knowledge for feature selection, introducing the possibility of suboptimal design. Moreover, factors such as background, fog, cloud, and lighting can reduce detection and recognition accuracy. Furthermore, these methods may not work as well in complex and changing forest environments.
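To make the texture-plus-classifier pipeline above concrete, the following Python sketch pairs a local binary pattern (LBP) texture descriptor with an SVM. The LBP descriptor, patch size, kernel, and placeholder data are assumptions for illustration and do not reproduce the cited study’s exact features:

```python
# Sketch of texture-based smoke classification in the spirit of the SVM
# approach described above. The LBP descriptor and its parameters are an
# illustrative assumption; the original work used its own texture features.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbourhood: 8 samples at radius 1

def lbp_histogram(gray_patch: np.ndarray) -> np.ndarray:
    """Describe a grayscale patch by the normalized histogram of its
    uniform local binary patterns."""
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder patches; real use would crop labeled smoke / non-smoke regions.
train_patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
train_labels = [0] * 10 + [1] * 10   # 1 = smoke, 0 = non-smoke

X = np.stack([lbp_histogram(p) for p in train_patches])
clf = SVC(kernel="rbf").fit(X, train_labels)

# At test time, a new patch is classified from its texture histogram alone.
test_patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
prediction = clf.predict(lbp_histogram(test_patch).reshape(1, -1))
```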
With the rapid progress of deep learning techniques, researchers are increasingly using them to detect forest fire smoke. Deep learning enables automatic detection and feature extraction through more sophisticated models, leading to faster learning, better accuracy, and improved performance in dense forest conditions. For example, Zhang et al. [16] expanded their dataset by creating synthetic instances of forest fire smoke and used the Faster R-CNN framework for detection. This approach avoids the need for manual feature extraction but requires more computational resources. Another study by Qiang et al. [17] used a dual-stream fusion method that combines a motion detection algorithm with deep learning to detect wildfire smoke, achieving an accuracy of 90.6% by extracting temporal and spatial features from smoke images. However, effectively capturing feature information from long sequences at the early stage of a fire remains a challenge. In the study by Filonenko et al. [18], several established convolutional classification networks, including VGG-19 [19], AlexNet [20], ResNet [21], VGG-16, and Xception, were used to classify wildfire smoke images. They employed Yuan’s dataset [22], which comprises four sets of smoke images, for both training and validation. Their assessment of these networks’ smoke recognition performance on this dataset highlighted Xception as the most effective detector. In another work, Li et al. [23] introduced a technique called the Adaptive Depthwise Convolution module, which dynamically adjusts the weights of convolutional layers to better capture features related to forest fire smoke. Their methodology yielded an accuracy of 87.26% at a frame rate of 43 FPS. Pan et al. [24] explored the deployment of ShuffleNet, coupled with weakly supervised fine segmentation and a Faster R-CNN framework, for predicting the presence of fire smoke. However, owing to the intricate nature of fire smoke and the high memory requirements for model training, the task demanded exceedingly robust hardware resources.
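Several of the CNN-based studies above follow a common transfer-learning recipe: take an ImageNet-pretrained backbone and replace its final layer with a two-class smoke/no-smoke head. The sketch below illustrates that recipe in PyTorch; the ResNet-50 backbone, learning rate, and head-only fine-tuning are illustrative assumptions rather than any particular paper’s configuration:

```python
# Sketch of transfer learning for smoke / no-smoke classification:
# an ImageNet-pretrained backbone with a replaced two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # smoke / no-smoke head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # fine-tune only the new head

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of normalized 224x224 RGB images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch for illustration; real use iterates over a labeled smoke dataset.
loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0]))
```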
The extensive adaptability, rapidity, and precision of UAVs have led to their widespread integration into forest fire detection. Their capacity to operate at low altitudes allows UAVs to capture high-resolution images of forested regions, enabling early fire identification. Moreover, UAVs are adept at navigating difficult and inaccessible terrain [25]. They can carry diverse cameras and sensors covering various spectral ranges, including infrared radiation, which facilitates the discernment of latent heat sources beyond human visual perception. Furthermore, UAVs can be equipped with real-time communication systems, enabling quick responsiveness by firefighters and furnishing pertinent information about a fire’s parameters, position, and trajectory [26,27]. These collective attributes render UAV deployment in forest fire detection increasingly pivotal, poised to assume an even more consequential role in the future of wildfire management.

2. Smoke and Fire Detection Approaches

Various approaches exist for smoke and fire detection, broadly categorized as (a) vision-based methods and (b) sensor-based methods. Vision-based methods can be further divided into two groups: the first relies on hand-crafted feature extraction coupled with machine learning techniques, while the second employs deep neural networks.

2.1. Feature Extraction and Machine Learning-Based Approaches

In this category, smoke and fire detection begins with the computation of a feature vector based on user-specified attributes, which encompass color, motion, optical flow, and object morphology within the captured image. The computed features are then analyzed by a decision algorithm tasked with ascertaining the presence or absence of smoke or fire in the image. An approach for fire detection based on color and motion characteristics is expounded by Töreyin et al. [28]. In that study, alongside conventional color and motion attributes, the wavelet transform is applied for behavioral analysis and feature extraction within video content; the methodology requires thresholding to identify candidate fire areas. Furthermore, Chen et al. [29] introduce a method centered on the analysis of color and motion characteristics to detect smoke and fire. Their technique applies thresholds to RGB and HSI (hue, saturation, intensity) values, supplemented by a distinct motion-detection threshold based on temporal variations in pixel color. Additionally, Dang-Ngoc et al. [30] employ image processing to discern fires within forested regions. In this work, an algorithm founded on the YCbCr color space, with Y as luma (brightness), Cb as blue minus luma (B-Y), and Cr as red minus luma (R-Y), is introduced alongside conventional RGB values to heighten the accuracy of fire detection. Furthermore, Ghosh et al. [31] leverage color and motion attributes jointly to detect smoke and fire, employing fuzzy rules to enhance classification performance. Conversely, Sankarasubramanian et al. [32] employ an edge detection algorithm to identify fire. Chen et al. [33] employ dynamic fire properties for fire area identification; however, objects resembling fire within the image can degrade the method’s performance. Lastly, Xie et al. [34] employ static and dynamic features in tandem for fire detection.
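As an illustration of the color-rule stage common to these methods, the following Python sketch keeps pixels whose red channel dominates with a flame-like channel ordering, in the spirit of the RGB rules cited above. The threshold value and the exact rules are illustrative assumptions, not the published ones, and a motion cue would normally be applied afterwards to prune static fire-colored objects:

```python
# Sketch of a color-rule candidate stage for fire pixels. Thresholds and
# rules are illustrative assumptions, not the values from the cited papers.
import numpy as np

R_MIN = 150  # hypothetical minimum red intensity for a candidate fire pixel

def fire_candidate_mask(rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of candidate fire pixels for an HxWx3 uint8 image."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    rule1 = r > R_MIN             # red channel above a global threshold
    rule2 = (r >= g) & (g > b)    # flame-like channel ordering R >= G > B
    return rule1 & rule2

# A motion cue (frame differencing or background subtraction) would then be
# combined with this mask before raising an alarm.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mask = fire_candidate_mask(frame)
```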
An important advantage of these approaches is their minimal data requirement. Additionally, their incorporation of motion cues mitigates the misclassification of objects such as the sun as fire sources. Nonetheless, a drawback of these methods arises from their reliance on feature extraction anchored in attributes such as color. Consequently, these algorithms exhibit substantial error rates; for instance, a moving orange box might erroneously trigger a fire detection. Another noteworthy issue is the need to fine-tune the relevant thresholds, a labor-intensive process that often results in elevated false alarm rates. Moreover, these methods require considerable expert experience to design and configure suitable features.

2.2. Deep Learning-Based Approaches

In recent times, the adoption of deep learning techniques for identifying smoke or fire in images has gained significant attention. Approaches grounded in artificial intelligence (AI) have effectively reduced the aforementioned shortcomings of feature-centric methodologies. For instance, Abdusalomov et al. [35] introduced a YOLOv3-based strategy for fire detection in indoor and outdoor environments, demonstrating its efficacy on a real-world image dataset and achieving an accuracy of 92.8%. In another study, Khan et al. [36] proposed a hybrid approach that combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for fire detection in smart urban settings, yielding high accuracy coupled with low false alarm rates. CNNs, a class of deep neural networks adept at image processing tasks, have been widely applied to fire detection. Various researchers have proposed CNN-based fire detection systems, including the study by Jeon et al. [37], which presents a CNN-centered fire detection methodology evaluated on indoor and outdoor fire image datasets and achieving an accuracy of 91%. Further contributing, Norkobil Saydirasulovich et al. [38] introduced a CNN-grounded fire detection system showcasing remarkable performance in video-based fire detection. Noteworthy studies in this field are discussed below.
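For a sense of how such YOLO-family detectors are trained and used at inference time, the sketch below uses the ultralytics Python package. The dataset configuration file "fire_smoke.yaml" is hypothetical, and the shipped "yolov8n.pt" weights are generic COCO-pretrained weights, so a fire/smoke-specific model would first have to be fine-tuned on a labeled fire dataset:

```python
# Sketch of fine-tuning and running a YOLO-family detector with the
# ultralytics package, in the spirit of the YOLO-based studies above.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                  # generic pretrained checkpoint
model.train(data="fire_smoke.yaml", epochs=50, imgsz=640)   # hypothetical fire/smoke dataset config

results = model("surveillance_frame.jpg", conf=0.4)         # run on a single frame
for box in results[0].boxes:
    # class name (e.g., "fire" or "smoke" after fine-tuning), confidence, box coordinates
    print(model.names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```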
In one study [39], a transfer learning method is presented that uses the pre-trained InceptionResNetV2 network to classify images as smoke or non-smoke. The effectiveness of this approach in predicting smoke and non-smoke images is assessed and compared with existing CNN methods using various performance metrics. Across a diverse and extensive new dataset, this method achieves accurate predictions with a precision of 97.32%, accuracy of 96.87%, and recall of 96.46%. Talaat et al. [40] introduce an enhanced YOLOv8 model for fire detection using a dataset of fire and smoke images. The model incorporates a novel optimization function that effectively reduces computational costs. Compared to other studies, the adapted YOLOv8-based model demonstrates superior performance in minimizing false positives. Additionally, Liu et al. [41] propose a unique metric called “fire alarm authenticity”, which utilizes the duration of multiple smoke alarms’ alerts to determine fire location and severity. This criterion contributes to an algorithm for identifying alert sequences, validated through simulations involving real fires and false alarms.
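A minimal sketch of the transfer-learning setup described in the first study, assuming a frozen ImageNet-pretrained InceptionResNetV2 backbone and a small binary classification head (the published model’s exact head, preprocessing, and training schedule are not reproduced here), might look as follows in Keras:

```python
# Minimal sketch of an InceptionResNetV2 transfer-learning classifier for
# smoke vs. non-smoke images; head and training details are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2

base = InceptionResNetV2(weights="imagenet", include_top=False,
                         pooling="avg", input_shape=(299, 299, 3))
base.trainable = False                                 # keep ImageNet features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability of "smoke"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # labeled smoke / non-smoke dataset
```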

References

  1. Hoover, K.; Hanson, L.A. Wildfire Statistics; Congressional Research Service (CRS) in Focus; Congressional Research Service (CRS): Washington, DC, USA, 2023.
  2. Xu, X.; Li, F.; Lin, Z.; Song, X. Holocene fire history in China: Responses to climate change and human activities. Sci. Total Environ. 2020, 753, 142019.
  3. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An improved forest fire detection method based on the detectron2 model and a deep learning approach. Sensors 2023, 23, 1512.
  4. Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast forest fire smoke detection using MVMNet. Knowl.-Based Syst. 2022, 241, 108219.
  5. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors 2022, 22, 3307.
  6. Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire detection method in smart city environments using a deep-learning-based approach. Electronics 2021, 11, 73.
  7. Zhang, F.; Zhao, P.; Xu, S.; Wu, Y.; Yang, X.; Zhang, Y. Integrating multiple factors to optimize watchtower deployment for wildfire detection. Sci. Total Environ. 2020, 737, 139561.
  8. Yao, J.; Raffuse, S.M.; Brauer, M.; Williamson, G.J.; Bowman, D.M.; Johnston, F.H.; Henderson, S.B. Predicting the minimum height of forest fire smoke within the atmosphere using machine learning and data from the CALIPSO satellite. Remote Sens. Environ. 2018, 206, 98–106.
  9. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384.
  10. Fernández-Berni, J.; Carmona-Galán, R.; Martínez-Carmona, J.F.; Rodríguez-Vázquez, Á. Early forest fire detection by vision-enabled wireless sensor networks. Int. J. Wildland Fire 2012, 21, 938.
  11. Ullah, F.; Ullah, S.; Naeem, M.R.; Mostarda, L.; Rho, S.; Cheng, X. Cyber-threat detection system using a hybrid approach of transfer learning and multi-model image representation. Sensors 2022, 22, 5883.
  12. Abdusalomov, A.B.; Mukhiddinov, M.; Kutlimuratov, A.; Whangbo, T.K. Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People. Sensors 2022, 22, 7305.
  13. Maruta, H.; Nakamura, A.; Kurokawa, F. A new approach for smoke detection with texture analysis and support vector machine. In Proceedings of the International Symposium on Industrial Electronics, Bari, Italy, 4–7 July 2010; pp. 1550–1555.
  14. Filonenko, A.; Hernández, D.C.; Jo, K.H. Fast smoke detection for video surveillance using CUDA. IEEE Trans. Ind. Inform. 2017, 14, 725–733.
  15. Tao, H.; Lu, X. Smoke vehicle detection based on multi-feature fusion and hidden Markov model. J. Real-Time Image Process. 2019, 32, 1072–1078.
  16. Zhang, Q.X.; Lin, G.H.; Zhang, Y.M.; Xu, G.; Wang, J.J. Wildland Forest Fire Smoke Detection Based on Faster R-CNN using Synthetic Smoke Images. Procedia Eng. 2018, 211, 441–446.
  17. Qiang, X.; Zhou, G.; Chen, A.; Zhang, X.; Zhang, W. Forest fire smoke detection under complex backgrounds using TRPCA and TSVB. Int. J. Wildland Fire 2021, 30, 329–350.
  18. Filonenko, A.; Kurnianggoro, L.; Jo, K.H. Comparative study of modern convolutional neural network for smoke detection on image data. In Proceedings of the 2017 10th International Conference on Human System Interactions (HSI), Ulsan, Republic of Korea, 17–19 July 2017; pp. 64–68.
  19. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  22. Yuan, F.; Shi, J.; Xia, X.; Fang, Y.; Fang, Z.; Mei, T. High-order local ternary patterns with locality preserving projection for smoke detection and image classification. Inf. Sci. 2016, 372, 225–240.
  23. Li, J.; Zhou, G.; Chen, A.; Wang, Y.; Jiang, J.; Hu, Y.; Lu, C. Adaptive linear feature-reuse network for rapid forest fire smoke detection model. Ecol. Inform. 2022, 68, 101584.
  24. Pan, J.; Ou, X.; Xu, L. A Collaborative Region Detection and Grading Framework for Forest Fire Smoke using weakly Supervised Fine Segmentation and Lightweight Faster-RCNN. Forests 2021, 12, 768.
  25. Li, T.; Zhao, E.; Zhang, J.; Hu, C. Detection of wildfire smoke images based on a densely dilated convolutional network. Electronics 2019, 8, 1131.
  26. Kanand, T.; Kemper, G.; König, R.; Kemper, H. Wildfire detection and disaster monitoring system using UAS and sensor fusion technologies. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1671–1675.
  27. Rahman, E.U.; Khan, M.A.; Algarni, F.; Zhang, Y.; Irfan Uddin, M.; Ullah, I.; Ahmad, H.I. Computer vision-based wildfire smoke detection using UAVs. Math. Probl. Eng. 2021, 2021, 9977939.
  28. Töreyin, B.U.; Dedeoğlu, Y.; Güdükbay, U.; Cetin, A.E. Computer vision based method for real-time fire and flame detection. Pattern Recognit. Lett. 2006, 27, 49–58.
  29. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing ICIP’04, Singapore, 24–27 October 2004; pp. 1707–1710.
  30. Dang-Ngoc, H.; Nguyen-Trung, H. Aerial forest fire surveillance-evaluation of forest fire detection model using aerial videos. In Proceedings of the 2019 International Conference on Advanced Technologies for Communications (ATC), Hanoi, Vietnam, 17–19 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 142–148.
  31. Ghosh, R.; Kumar, A. A hybrid deep learning model by combining convolutional neural network and recurrent neural network to detect forest fire. Multimed. Tools Appl. 2022, 81, 38643–38660.
  32. Sankarasubramanian, P.; Ganesh, E.N. Artificial Intelligence-Based Detection System for Hazardous Liquid Metal Fire. In Proceedings of the 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 17–19 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6.
  33. Chen, Y.; Xu, W.; Zuo, J.; Yang, K. The fire recognition algorithm using dynamic feature fusion and IV-SVM classifier. Clust. Comput. 2019, 22, 7665–7675.
  34. Xie, Y.; Zhu, J.; Cao, Y.; Zhang, Y.; Feng, D.; Zhang, Y.; Chen, M. Efficient video fire detection exploiting motion-flicker-based dynamic features and deep static features. IEEE Access 2020, 8, 81904–81917.
  35. Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An improvement of the fire detection and classification method using YOLOv3 for surveillance systems. Sensors 2021, 21, 6519.
  36. Khan, S.; Khan, A. FFireNet: Deep learning based forest fire classification and detection in smart cities. Symmetry 2022, 14, 2155.
  37. Jeon, M.; Choi, H.S.; Lee, J.; Kang, M. Multi-scale prediction for fire detection using convolutional neural network. Fire Technol. 2021, 57, 2533–2551.
  38. Norkobil Saydirasulovich, S.; Abdusalomov, A.; Jamil, M.K.; Nasimov, R.; Kozhamzharova, D.; Cho, Y.I. A YOLOv6-based improved fire detection approach for smart city environments. Sensors 2023, 23, 3161.
  39. Khan, A.; Khan, S.; Hassan, B.; Zheng, Z. CNN-based smoker classification and detection in smart city application. Sensors 2022, 22, 892.
  40. Talaat, F.M.; ZainEldin, H. An improved fire detection approach based on YOLO-v8 for smart cities. Neural Comput. Appl. 2023, 35, 20939–20954.
  41. Liu, G.; Yuan, H.; Huang, L. A fire alarm judgment method using multiple smoke alarms based on Bayesian estimation. Fire Saf. J. 2023, 136, 103733.