Jamil, M.K.; Cho, Y. Fire Detection Based on Deep Learning Approaches. Encyclopedia. Available online: (accessed on 14 June 2024).
Fire Detection Based on Deep Learning Approaches

The field of image recognition has witnessed the rise of the convolutional neural network (CNN), a particular type of deep neural network (DNN). These learnable networks comprise numerous layers, each performing a separate function when extracting or identifying features. Computer vision, a compelling form of AI, is ubiquitous and often experienced without us realizing it. Image processing is the area of computer vision and science devoted to imitating elements of the human visual system, enabling computers to discern and process objects in images and videos much as humans do. Several deep learning (DL) techniques have been effectively applied in various fields of fire and face detection research.

fire flame detection; deep learning; fire image dataset

1. Introduction

Each year, the number of fire-related disasters increases, resulting in more human deaths. In addition to human and material losses, fires frequently cause extensive economic harm. Both natural and anthropogenic forces are significant contributors: factors such as dryness, wind, heating appliances, chemical fires, and cooking are conducive to fire ignition. Accidental fires can start with alarming randomness and rapidly spread out of control. To prevent unforeseen fires and ensure the safety of individuals, potential threats must be evaluated and mitigated promptly. According to the State Fire Agency of Korea, there were 40,030 fires in the country in 2019, which resulted in 284 deaths and 2219 hospitalizations [1]. These fires caused record-breaking levels of property damage. Thus, numerous research organizations have developed techniques for identifying fires. Fire alarm systems, sensor-based frameworks, and other sensing technologies are just a few examples of the warning systems and identification devices adopted over the past several decades to detect specific fire and flame characteristics; however, numerous issues remain unresolved [2]. Recent research has demonstrated the effectiveness of computer vision and deep-learning-based methods for fire detection. Computer vision and artificial intelligence (AI) based approaches, such as static and dynamic texture analysis [3][4], convolutional neural networks (CNNs), and 360-degree sensors [5][6][7], are widely used in the field of fire detection.

2. Fire Detection Strategies Based on Image Processing and Computer Vision

Location, rate of spread, length, and surface are only a few of the geometrical features of flames that Toulouse et al. [8] aimed to identify with their novel approach. Pixels depicting fire were sorted into categories based on their color, while smoke was detected from non-refractive pixels sorted by their average intensity. The edge computing framework for early fire detection developed by Avgeris et al. [9] is a multi-step process that significantly facilitates boundary identification. However, these computer-vision-based frameworks were only used on relatively static images of fire. Techniques based on the fast Fourier transform (FFT) and wavelets have been utilized by other researchers to analyze the boundaries of wildfires in videos [10]. Studies have demonstrated that these methods are effective only in specific scenarios.
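The intuition behind such frequency-domain boundary analysis can be sketched as follows: a flame's contour flickers irregularly, so a 1-D radial-distance signature of its boundary carries more high-frequency energy than the boundary of a smooth object. The snippet below is a minimal illustration of this idea, not the implementation of [10]; the function names, the naive DFT, and the frequency cutoff are all assumptions chosen for clarity.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform, O(n^2), standard library only."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def high_freq_ratio(radii, cutoff_div=8):
    """Fraction of non-DC spectral energy above n // cutoff_div cycles.

    A flickering flame boundary is irregular, so its radial signature
    concentrates more energy at high frequencies than a smooth boundary.
    """
    n = len(radii)
    spectrum = [abs(c) for c in dft(radii)]
    cutoff = n // cutoff_div
    total = high = 0.0
    for k in range(1, n):                 # skip k = 0 (DC component)
        freq = min(k, n - k)              # real signals mirror energy at n-k
        energy = spectrum[k] ** 2
        total += energy
        if freq > cutoff:
            high += energy
    return high / total if total else 0.0

# Smooth, slowly varying boundary vs. a jagged, flame-like boundary
smooth = [10.0 + 2.0 * math.cos(2 * math.pi * t / 64) for t in range(64)]
jagged = [10.0 + (3.0 if t % 2 else -3.0) for t in range(64)]
```

A real detector would combine such a spectral cue with color and motion evidence rather than thresholding it in isolation.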
Color pixel statistics have been used to examine both foreground and background images for signs of fire. By fusing color information with recorded foreground and background frames, Celik et al. [11] created a real-time fire detection system. Fire color data are derived from statistical assessments of representative fire photographs, and the pixel color information in each channel is modeled using three Gaussian distributions. This technique suits simple, adaptive data scenarios. Despite the widespread use of color in flame and smoke recognition, such methods are often infeasible due to the influence of environmental factors such as lighting conditions, shadows, and other distractions. Moreover, because fire exhibits long-term dynamic movement, purely color-based approaches are inferior to newer dynamics-based methods for fire and smoke detection.
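A per-channel Gaussian color model of this kind can be sketched in a few lines. The snippet below is a hedged illustration, not the model of [11]: the mean/standard-deviation values in `FIRE_MODEL` and the likelihood threshold are hypothetical placeholders, since real parameters would be estimated from a training set of fire photographs.

```python
import math

# Hypothetical per-channel (mean, std) parameters; real values would be
# estimated from representative fire photographs during training.
FIRE_MODEL = {"r": (220.0, 25.0), "g": (140.0, 40.0), "b": (60.0, 35.0)}

def gaussian_pdf(x, mean, std):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def fire_likelihood(pixel):
    """Product of per-channel Gaussian likelihoods for an (R, G, B) pixel,
    treating the channels as independent for simplicity."""
    r, g, b = pixel
    return (gaussian_pdf(r, *FIRE_MODEL["r"])
            * gaussian_pdf(g, *FIRE_MODEL["g"])
            * gaussian_pdf(b, *FIRE_MODEL["b"]))

def is_fire_pixel(pixel, threshold=1e-8):
    """Classify a pixel as fire-colored when its likelihood clears an
    (illustrative) threshold."""
    return fire_likelihood(pixel) > threshold
```

The limitation noted above is visible even in this sketch: a sunset-colored pixel would score as highly as a flame, which is why color models are usually fused with motion or texture cues.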
By analyzing the motion of smoke and flames with linear dynamic systems (LDSs), researchers in [3] created a method for detecting fires. They discovered that by including color, motion, and spatio-temporal features in their model, they could achieve both high detection rates and a considerable reduction in false alarms. The researchers aim to enhance the efficiency of the current fire detection monitoring system, which issues early warning alerts, by employing two different support vector classifier methodologies. To locate forest fires, researchers analyzed the fire’s spatial and temporal dynamic textures [12]. In a static texture investigation, hybrid surface descriptors were employed to generate a significant feature vector that could differentiate flames from distortions without using conventional texture descriptors. These approaches rely heavily on easily discernible cues, such as the presence of visible flames in images. The appearance of fire is affected by several factors, including its color, movement speed, surroundings, size, and borders. Challenges to using such methods include poor picture and video quality, adverse weather, and an overcast sky. Therefore, modern supplemental methods must be implemented to enhance current approaches.

3. Techniques for Fire Detection Based on Deep Learning Approaches

Recently, several deep learning (DL) techniques have been effectively applied in various fields of fire and face detection research [13][14][15]. In contrast to the hand-crafted features of the techniques discussed above, DL methods can automate feature selection and extraction. Automatic feature extraction from learned data is another area where DNNs have proven useful [16][17]. Rather than spending time manually engineering features, developers may instead focus on building a solid dataset and a well-designed neural network.
The researchers have previously presented [4] a novel DL-based technique for fire detection that uses a CNN with dilated convolutions. To evaluate the efficacy of the approach, they trained and tested it on a dataset they created, containing photographs of fire collected from the web and manually tagged. The proposed methodology is contactless and applicable to previously unseen data; therefore, it can generalize well and eliminate false positives. The contributions of the suggested fire detection approach are fourfold: a custom-built dataset, few layers, small kernel sizes, and dilation filters are all used in the experiments. Researchers may find this collection a valuable resource of fire and smoke images.
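The key property of a dilated convolution is that it enlarges the receptive field without adding parameters, by sampling the input with gaps between kernel taps. The 1-D sketch below illustrates that mechanism only; it is not the CNN of [4], and the function name and signature are assumptions for this example.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D dilated ('atrous') convolution, standard library only.

    With dilation d, consecutive kernel taps are d positions apart, so a
    k-tap kernel covers a receptive field of (k - 1) * d + 1 samples while
    still using only k weights.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span + 1)]

# Same 3-tap difference kernel, two receptive fields:
ramp = list(range(10))
narrow = dilated_conv1d(ramp, [1, 0, -1], dilation=1)  # spans 3 samples
wide = dilated_conv1d(ramp, [1, 0, -1], dilation=2)    # spans 5 samples
```

In a 2-D CNN the same trick lets a small-kernel stack see large flame regions cheaply, which matches the "few layers, small kernel sizes, and dilation filters" design summarized above.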
To improve feature representations for visual classification, Ba et al. [2] created a novel CNN model called SmokeNet that uses spatial and channel-wise attention. An approach for identifying flames was proposed by Luo et al. [18], which uses a CNN together with smoke’s kinetic characteristics. Initially, they separated the candidate regions into two groups, one using dynamic frame references from the background and the other from the foreground. A CNN with five convolutional layers plus three fully connected layers then automatically extracted features from the candidate pixels. Deep convolutional segmentation networks have also been developed for analyzing fire emergency scenes, identifying and classifying items in an image based on cues such as color, relatively high brightness compared to the surroundings, frequent changes in shape and size, and the items’ propensity to catch fire [19].
The proposed CNN models enabled a unique image fire detection system in [20] to achieve a maximum accuracy of 83.7%. In addition, CNN techniques have been utilized to improve the performance of image fire detection software [21][22][23][24]. Algorithms based on DL require large amounts of data for training, validation, and testing. Furthermore, CNNs are prone to false detections and are computationally expensive due to the large datasets required for training. The researchers compiled a large dataset to address these issues, and the associated image collections will soon be made publicly accessible.

4. Fire Detection Approaches Based on YOLO (You Only Look Once) Networks

YOLO (You Only Look Once), introduced in 2016 by Redmon et al. [25], is an object detection system. Built on CNNs, it was developed to be quick, precise, and adaptable. The system takes an input image and divides it into a grid of cells, each representing a different region of the image. It then analyzes these regions, and once an object has been recognized, it constructs a bounding box around the object and assigns one of a set of predefined classes to the box. With the object’s identity established, the system can report its coordinates and dimensions. Park et al. suggested a fire detection approach for urban areas at night that uses static ELASTIC-YOLOv3 [26]. They recommended ELASTIC-YOLOv3, an improvement on YOLOv2 (which is only effective for detecting tiny objects), because it can boost detection performance without adding more parameters at the initial stage of the algorithm. They also proposed constructing a temporal fire-tube that considers the particularities of the flame. Traditional nocturnal fire flame recognition algorithms suffer from several issues: a lack of color information, a relatively high brightness intensity compared to the surroundings, changes in the shape and size of the flames due to light blur, and movement in all directions. Improved real-time fire warning systems based on advanced technologies and YOLO versions (v3, v4, and v5) for early fire detection have also been proposed [27][28][29][30].
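The grid-cell prediction scheme described above can be made concrete with a small decoding sketch: each cell predicts a box center as an offset inside the cell and a size as a fraction of the whole image, and these normalized values are converted to absolute pixel coordinates. This is an illustrative simplification of YOLO-style decoding (no anchors, no confidence scores), and the function name and argument layout are assumptions for this example.

```python
def decode_cell(cell_x, cell_y, pred, grid_size, img_w, img_h):
    """Convert one grid cell's normalized prediction (tx, ty, tw, th)
    into an absolute box (center_x, center_y, width, height) in pixels.

    tx, ty in [0, 1] locate the box center inside cell (cell_x, cell_y);
    tw, th give box width/height as fractions of the whole image.
    """
    tx, ty, tw, th = pred
    cell_w = img_w / grid_size
    cell_h = img_h / grid_size
    x = (cell_x + tx) * cell_w      # absolute center x in pixels
    y = (cell_y + ty) * cell_h      # absolute center y in pixels
    w = tw * img_w
    h = th * img_h
    return x, y, w, h

# A box centered in cell (3, 2) of a 7x7 grid over a 448x448 image
box = decode_cell(3, 2, (0.5, 0.5, 0.25, 0.5), 7, 448, 448)
```

Because offsets are bounded to the cell, each cell specializes in objects whose centers fall inside it, which is what makes single-pass grid prediction fast enough for real-time fire monitoring.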


  1. Korean Statistical Information Service. Available online: (accessed on 10 August 2021).
  2. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702.
  3. Dimitropoulos, K.; Barmpoutis, P.; Grammalidis, N. Spatio-temporal flame modeling and dynamic texture analysis for automatic video-based fire detection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 339–351.
  4. Valikhujaev, Y.; Abdusalomov, A.; Cho, Y.I. Automatic Fire and Smoke Detection Method for Surveillance Systems Based on Dilated CNNs. Atmosphere 2020, 11, 1241.
  5. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures. Remote Sens. 2020, 12, 3177.
  6. Lu, G.; Gilabert, G.; Yan, Y. Vision based monitoring and characterization of combustion flames. J. Phys. Conf. Ser. 2005, 15, 194–200.
  7. Gagliardi, A.; Saponara, S. AdViSED: Advanced Video SmokE Detection for Real-Time Measurements in Antifire Indoor and Outdoor Systems. Energies 2020, 13, 2098.
  8. Toulouse, T.; Rossi, L.; Celik, T.; Akhloufi, M. Automatic fire pixel detection using image processing: A comparative analysis of rule-based and machine learning-based methods. SIViP 2016, 10, 647–654.
  9. Avgeris, M.; Spatharakis, D.; Dechouniotis, D.; Kalatzis, N.; Roussaki, I.; Papavassiliou, S. Where There Is Fire There Is SMOKE: A Scalable Edge Computing Framework for Early Fire Detection. Sensors 2019, 19, 639.
  10. Zhang, Z.; Zhao, J.; Zhang, D.; Qu, C.; Ke, Y.; Cai, B. Contour based forest fire detection using FFT and wavelet. Proc. Int. Conf. CSSE 2008, 1, 760–763.
  11. Celik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire detection using statistical color model in video sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185.
  12. Prema, C.E.; Vinsley, S.S.; Suresh, S. Efficient flame detection based on static and dynamic texture analysis in forest fire detection. Fire Technol. 2018, 54, 255–288.
  13. Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire Detection Method in Smart City Environments Using a Deep-Learning-Based Approach. Electronics 2022, 11, 73.
  14. Farkhod, A.; Abdusalomov, A.B.; Mukhiddinov, M.; Cho, Y.-I. Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces. Sensors 2022, 22, 8704.
  15. Mamieva, D.; Abdusalomov, A.B.; Mukhiddinov, M.; Whangbo, T.K. Improved Face Detection Method via Learning Small Faces on Hard Images Based on a Deep Learning Approach. Sensors 2023, 23, 502.
  16. Abdusalomov, A.B.; Safarov, F.; Rakhimov, M.; Turaev, B.; Whangbo, T.K. Improved Feature Parameter Extraction from Speech Signals Using Machine Learning Algorithm. Sensors 2022, 22, 8122.
  17. Khan, F.; Tarimer, I.; Alwageed, H.S.; Karadağ, B.C.; Fayaz, M.; Abdusalomov, A.B.; Cho, Y.-I. Effect of Feature Selection on the Accuracy of Music Popularity Classification Using Machine Learning Algorithms. Electronics 2022, 11, 3518.
  18. Luo, Y.; Zhao, L.; Liu, P.; Huang, D. Fire smoke detection algorithm based on motion characteristic and convolutional neural networks. Multimed. Tools Appl. 2018, 77, 15075–15092.
  19. Sharma, J.; Granmo, O.C.; Goodwin, M. Emergency Analysis: Multitask Learning with Deep Convolutional Neural Networks for Fire Emergency Scene Parsing. In Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices; IEA/AIE 2021. Lecture Notes in Computer Science; Fujita, H., Selamat, A., Lin, J.C.W., Ali, M., Eds.; Springer: Cham, Switzerland, 2021; Volume 12798.
  20. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625.
  21. Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional Neural Networks Based Fire Detection in Surveillance Videos. IEEE Access 2018, 6, 18174–18183.
  22. Pan, H.; Badawi, D.; Cetin, A.E. Computationally Efficient Wildfire Detection Method Using a Deep Convolutional Network Pruned via Fourier Analysis. Sensors 2020, 20, 2891.
  23. Li, T.; Zhao, E.; Zhang, J.; Hu, C. Detection of Wildfire Smoke Images Based on a Densely Dilated Convolutional Network. Electronics 2019, 8, 1131.
  24. Kim, B.; Lee, J. A Video-Based Fire Detection Using Deep Learning Models. Appl. Sci. 2019, 9, 2862.
  25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016.
  26. Park, M.; Ko, B.C. Two-Step Real-Time Night-Time Fire Detection in an Urban Environment Using Static ELASTIC-YOLOv3 and Temporal Fire-Tube. Sensors 2020, 20, 2202.
  27. Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An improvement of the fire detection and classification method using YOLOv3 for surveillance systems. Sensors 2021, 21, 6519.
  28. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors 2022, 22, 3307.
  29. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5. Sensors 2022, 22, 9384.
  30. Abdusalomov, A.B.; Mukhiddinov, M.; Kutlimuratov, A.; Whangbo, T.K. Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People. Sensors 2022, 22, 7305.