# Forest Fire Detection and Identification Using Image Processing and SVM

Mubarak Adam Ishag Mahmoud* and Honge Ren*

## Abstract

Accurate forest-fire detection remains a challenging problem because some objects share the same visual features as fire, which may result in a high false-alarm rate. This paper presents a new video-based, image-processing forest-fire detection method that consists of four stages. First, a background-subtraction algorithm is applied to detect moving regions. Second, candidate fire regions are determined using the CIE L∗a∗b∗ color space. Third, because candidate regions may contain moving fire-like objects, spatial wavelet analysis is used to differentiate between actual fire and fire-like objects. Finally, a support vector machine is used to classify each region of interest as either real fire or non-fire. The experimental results verify that the proposed method effectively identifies forest fires.

Keywords: Background Subtraction, CIE L∗a∗b∗ Color Space, Forest Fire, SVM, Wavelet

## 1. Introduction

Forest fires are real threats to human lives, environmental systems, and infrastructure. It is predicted that forest fires could destroy half of the world's forests by the year 2030 [1]. The only efficient way to minimize forest-fire damage is to adopt early fire detection mechanisms. Thus, forest-fire detection systems are gaining attention in research centers and universities around the world. Many commercial fire detection sensor systems currently exist, but all of them are difficult to apply in large open areas such as forests because of their response delay, necessary maintenance, high cost, and other problems.

In this study, an image-processing-based approach is used for several reasons: digital camera technology is developing quickly, a camera can cover large areas with excellent results, the response time of image-processing methods is better than that of existing sensor systems, and the overall cost of an image-processing system is lower than that of a sensor system.

Several forest-fire detection methods based on image processing have been proposed. The methods presented in [2,3] share the same framework: forest-fire detection using the YCbCr color space. In these methods, detection is based on four rules: the first and second rules segment flame regions, while the third and fourth rules segment high-temperature regions. The first rule is based on the fact that, in any fire image, the red color value is larger than the green and the green is larger than the blue; in YCbCr this is expressed as the luminance Y being larger than the chrominance blue (Y > Cb). In the second rule, the Y value is larger than the average Y of the image (Y > Ymean), the Cb component is smaller than the average Cb (Cb < Cbmean), and the Cr component is larger than the average Cr (Cr > Crmean). The third rule depends on the fact that the center of a fire region at high temperature is white, which reduces the red component and increases the blue component at the fire center; this is expressed as (Cb > Y > Cr). The fourth rule is that Cr is smaller than the standard deviation of Cr for the same image (Crstd) multiplied by a constant τ (Cr ≤ τ × Crstd). These methods are fast; however, they are susceptible to false positives because they cannot differentiate between moving fire-like objects and actual fire. Wang and Ye [4] proposed a forest-fire disaster prevention method that can detect both fire and smoke. For fire detection, in any fire image the red value is larger than the green, the green is larger than the blue, and the R component is larger than the average R of the image; this rule is expressed as (R > G > B), (R > Rmean). The RGB images are then converted to the HSV color space.
Fire pixels are accepted if the following conditions are met: 0 ≤ H ≤ 60, 0.2 ≤ S ≤ 1, 100 ≤ V ≤ 255. For smoke detection, RGB and k-means algorithms are used. Standard RGB smoke values C are taken from an image with significant smoke; the C value must be adjusted experimentally based on the results. The cluster center P is determined from the video stream after the image frames are clustered by the k-means algorithm, and smoke is detected if |P – C| < threshold. This method works well; nevertheless, smoke can spread quickly and takes different colors depending on the burning materials, leading to false alarms. Chen et al. [5] designed a fire detection algorithm that combines the saturation channel of the HSV color space with RGB color. This method detects fire using three rules: R ≥ RT, R ≥ G > B, and S ≥ ((255 – R) × ST / RT), which require determining two thresholds (ST and RT). Based on the experimental results, the selected ranges are 55–65 for ST and 115–135 for RT. This method is fast and computationally simple compared to the other methods; however, it suffers from false-positive alarms in the case of moving fire-like objects.
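As an illustration, the three pixel-level rules reported for the method of Chen et al. [5] can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the function name is ours, and the thresholds RT=125 and ST=60 are hypothetical mid-range picks from the reported 115–135 and 55–65 intervals.

```python
import numpy as np

def chen_fire_mask(rgb, s, rt=125, st=60):
    """Flag candidate fire pixels with the three rules reported for [5].

    rgb: HxWx3 uint8 array; s: HxW saturation channel scaled to 0..255.
    rt and st are the RT/ST thresholds (hypothetical mid-range choices).
    """
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    rule1 = r >= rt                         # R >= RT
    rule2 = (r >= g) & (g > b)              # R >= G > B
    rule3 = s >= (255 - r) * st / rt        # S >= (255-R)*ST/RT
    return rule1 & rule2 & rule3
```

A bright reddish pixel with high saturation passes all three rules, while a blue-dominant pixel fails rule 2 regardless of saturation.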

In this study, a multi-stage forest-fire detection method is proposed. The final results indicate that the proposed algorithm has a good detection rate and few false alarms. The proposed algorithm is able to distinguish between fire and fire-like objects, which is the main problem for most existing methods.

The paper is organized as follows: Section 2 describes the methodology, Section 3 presents the experimental results, and Section 4 summarizes the achieved results and potential future directions.

## 2. Methodology

This section presents the proposed method, which consists of multiple stages. First, background subtraction is applied, because the fire boundaries continuously change. Second, a color segmentation model is used to mark the candidate regions. Third, spatial wavelet analysis is carried out to distinguish between actual fire and fire-like objects. Finally, a support vector machine (SVM) is used to classify the candidate regions as either actual fire or non-fire. The stages of the proposed algorithm are described in detail in the following subsections. Fig. 1 shows a flowchart of the proposed method.

### 2.1 Background Subtraction

Detecting moving objects is an essential step in most video-based fire detection methods, because fire boundaries continuously fluctuate. Eq. (1) computes the contrast between the current frame and the background to determine the regions of motion. Fig. 2 shows an example of background subtraction. A pixel at (x, y) is considered moving if it satisfies Eq. (1):

##### (1)
[TeX:] $$\left| I _ { n } ( x , y ) - B _ { n } ( x , y ) \right| > t h r$$

where In(x, y) and Bn(x, y) represent the pixel values at (x, y) in the current frame and the background frame, respectively, and thr is a threshold value, set to 3 experimentally.

The background value is continuously updated using Eq. (2) as follows:

##### (2)
[TeX:] $$B _ { n + 1 } ( x , y ) = \left\{ \begin{array} { l l } { B _ { n } ( x , y ) + 1 } & { \text { if } I _ { n } ( x , y ) > B _ { n } ( x , y ) } \\ { B _ { n } ( x , y ) - 1 } & { \text { if } I _ { n } ( x , y ) < B _ { n } ( x , y ) } \\ { B _ { n } ( x , y ) } & { \text { if } I _ { n } ( x , y ) = B _ { n } ( x , y ) } \end{array} \right.$$

where Bn+1(x, y) and Bn(x, y) represent the background intensity value at (x, y) for the next and current frames, respectively [6].
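Eqs. (1) and (2) can be sketched directly in NumPy. This is a minimal sketch under the stated definitions; the function names are ours, and `np.sign` implements the three cases of Eq. (2) in one step (+1, –1, or 0 depending on the frame/background difference).

```python
import numpy as np

def moving_mask(frame, background, thr=3):
    """Eq. (1): pixels whose absolute difference from the background exceeds thr."""
    diff = frame.astype(np.int32) - background.astype(np.int32)
    return np.abs(diff) > thr

def update_background(frame, background):
    """Eq. (2): nudge each background pixel one intensity step toward the frame."""
    bg = background.astype(np.int32)
    fr = frame.astype(np.int32)
    return (bg + np.sign(fr - bg)).astype(background.dtype)
```

For example, a background pixel of 10 against a frame pixel of 15 is flagged as moving (|15 − 10| > 3) and the background value steps up to 11 on the next frame.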

Fig. 1. The proposed method flowchart.

Fig. 2. An original frame containing fire (a) and the frame after background subtraction (b).

### 2.2 Color-based Segmentation

Background subtraction can capture different kinds of moving objects (e.g., trees, people, and birds) as well as fire. Thus, the CIE L∗a∗b∗ color space is used to select candidate regions with fire color.

#### 2.2.1 RGB to CIE L*a*b* conversion

The conversion from RGB to CIE L∗a∗b∗ color space is performed by using Eq. (3):

##### (3)
[TeX:] $$\left[ \begin{array} { l } { X } \\ { Y } \\ { Z } \end{array} \right] = \left[ \begin{array} { c c c } { 0.412453 } & { 0.357580 } & { 0.180423 } \\ { 0.212671 } & { 0.715160 } & { 0.072169 } \\ { 0.019334 } & { 0.119193 } & { 0.950227 } \end{array} \right] \left[ \begin{array} { l } { R } \\ { G } \\ { B } \end{array} \right] \\ L ^ { * } = \left\{ \begin{array} { l l } { 116 \left( Y / Y _ { n } \right) ^ { 1 / 3 } - 16 , } & { \text { if } \left( Y / Y _ { n } \right) > 0.008856 } \\ { 903.3 \left( Y / Y _ { n } \right) , } & { \text { otherwise } } \end{array} \right. \\ a ^ { * } = 500 \left( f \left( X / X _ { n } \right) - f \left( Y / Y _ { n } \right) \right) \\ b ^ { * } = 200 \left( f \left( Y / Y _ { n } \right) - f \left( Z / Z _ { n } \right) \right) \\ f ( t ) = \left\{ \begin{array} { l l } { t ^ { 1 / 3 } , } & { \text { if } t > 0.008856 } \\ { 7.787 t + 16 / 116 , } & { \text { otherwise } } \end{array} \right.$$

where Xn, Yn, and Zn represent the reference white values. The RGB color channels range from 0 to 255 for an 8-bit representation, and the ranges of L*, a*, and b* are [0, 100], [–110, 110], and [–110, 110], respectively.
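The conversion in Eq. (3) can be sketched as follows. This is a minimal NumPy sketch under two assumptions not fixed by the text: the RGB input is linear and scaled to [0, 1], and the reference white is the D65 white obtained by pushing RGB = (1, 1, 1) through the matrix.

```python
import numpy as np

# RGB -> XYZ matrix from Eq. (3)
M = np.array([[0.412453, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])

def rgb_to_lab(rgb, white=(0.950456, 1.0, 1.088754)):
    """Eq. (3): linear RGB in [0, 1] -> XYZ -> CIE L*a*b* (assumed D65 white)."""
    xyz = rgb @ M.T
    xr, yr, zr = (xyz[..., i] / white[i] for i in range(3))

    def f(t):
        return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

    L = np.where(yr > 0.008856, 116.0 * np.cbrt(yr) - 16.0, 903.3 * yr)
    a = 500.0 * (f(xr) - f(yr))
    b = 200.0 * (f(yr) - f(zr))
    return np.stack([L, a, b], axis=-1)
```

As a sanity check, the reference white RGB = (1, 1, 1) maps to L* = 100 with a* = b* = 0.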

After calculating the values of color channels (L*, a*, b*), the values of average channel (L*m, a*m, b*m) are obtained using the following equations:

##### (4)
[TeX:] \begin{aligned} L _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } L ^ { * } ( x , y ) \\ a _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } a ^ { * } ( x , y ) \\ b _ { m } ^ { * } = \frac { 1 } { N } \sum _ { x } \sum _ { y } b ^ { * } ( x , y ) \end{aligned}

where L*m, a*m, and b*m are the average values of the CIE L*a*b* channels, and N is the total number of image pixels.

To detect the candidate fire region using CIE L*a*b*, four rules are defined based on the notion that the fire region is the brightest area with near red color in the image. The rules are as follows:

##### (5)
[TeX:] $$R 1 ( x , y ) = \left\{ \begin{array} { l l } { 1 \text { if } \left( L ^ { * } ( x , y ) \geq L ^ { * } m \right) } \\ { 0 } \ { \text { Otherwise } } \end{array} \right.$$

##### (6)
[TeX:] $$R 2 ( x , y ) = \left\{ \begin{array} { l l } { 1 \text { if } \left( a * ( x , y ) \geq a ^ { * } m \right) } \\ { 0 } \ { \text { Otherwise } } \end{array} \right.$$

##### (7)
[TeX:] $$R 3 ( x , y ) = \left\{ \begin{array} { l l } { 1 \text { if } \left( b ^ { * } ( x , y ) \geq b ^ { * } m \right) } \\ { 0 } \ { \text { Otherwise } } \end{array} \right.$$

##### (8)
[TeX:] $$R 4 ( x , y ) = \left\{ \begin{array} { l l } { 1 \text { if } \left( b ^ { * } ( x , y ) \geq a ^ { * } ( x , y ) \right) } \\ { 0 } \ { \text { Otherwise } } \end{array} \right.$$

where R1(x, y), R2(x, y), R3(x, y), and R4(x, y) are binary images. Fig. 3 shows the result of applying rules (5) through (8).
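Rules (5)–(8) reduce to elementwise comparisons against the channel averages of Eq. (4). The following is a minimal NumPy sketch (the function name is ours); it returns the conjunction of the four binary images, i.e., the combined candidate mask shown in Fig. 3(vi).

```python
import numpy as np

def candidate_fire_mask(lab):
    """Rules (5)-(8): a pixel is a fire candidate when it is brighter and
    redder/yellower than the image average, and b* dominates a*."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    r1 = L >= L.mean()   # rule (5): brighter than average
    r2 = a >= a.mean()   # rule (6): a* above average
    r3 = b >= b.mean()   # rule (7): b* above average
    r4 = b >= a          # rule (8): b* dominates a*
    return r1 & r2 & r3 & r4
```

A bright, warm-toned pixel (high L*, a*, b* with b* ≥ a*) passes all four rules, while a dark or cool-toned pixel fails rule (5) or (6).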

Fig. 3. Applying rules (5)–(8) to the input images: (i) original RGB images, (ii) binary images using rule (5), (iii) binary images using rule (6), (iv) binary images using rule (7), (v) binary images using rule (8), and (vi) binary images using rules (5) through (8).

### 2.3 Spatial Wavelet Analysis for Color Variations

Genuine fire regions exhibit higher luminance contrast than fire-like colored objects because of turbulent fire flicker. Spatial wavelet analysis is an effective image-processing technique for distinguishing genuine fire regions from fire-like colored regions. Thus, a 2D wavelet filter is applied to the red channel and the spatial wavelet energy is calculated for each pixel. Fig. 4 shows the wavelet energies of two videos, one containing actual fire and the other containing fire-like objects. Regions containing actual fire clearly show high variation and high wavelet energy. The wavelet energy is calculated as follows:

##### (9)
[TeX:] $$E ( x , y ) = \left( H L ( x , y ) ^ { 2 } + L H ( x , y ) ^ { 2 } + H H ( x , y ) ^ { 2 } \right)$$

where E(x, y) is the spatial wavelet energy of a given pixel, and HL, LH, and HH are the high-low, low-high, and high-high wavelet sub-images, respectively. The spatial wavelet energy of each block is calculated by adding the energies of the pixels in the block as follows [7]:

##### (10)
[TeX:] $$E _ { b l o c k } = \frac { 1 } { N _ { b } } \sum _ { x , y } E ( x , y )$$

where Nb is the total number of pixels in the block. Eblock is used in the next stage as the SVM input to classify the regions of interest as either fire or non-fire.
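Eqs. (9) and (10) can be sketched with a single-level 2-D Haar transform; the paper does not name its wavelet filter, so the Haar basis here is an assumption, and the function names are ours. The detail sub-images come from sums and differences over non-overlapping 2×2 neighbourhoods.

```python
import numpy as np

def haar_energy(img):
    """Eq. (9) sketch: one-level 2-D Haar transform of a channel, then the
    summed detail energy HL^2 + LH^2 + HH^2 per 2x2 neighbourhood."""
    x = img.astype(np.float64)
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    lh = (a - b + c - d) / 2.0   # horizontal detail (LH)
    hl = (a + b - c - d) / 2.0   # vertical detail (HL)
    hh = (a - b - c + d) / 2.0   # diagonal detail (HH)
    return hl**2 + lh**2 + hh**2

def block_energy(e, nb=8):
    """Eq. (10): mean wavelet energy over nb x nb blocks of the energy map."""
    h, w = e.shape
    h, w = h - h % nb, w - w % nb
    return e[:h, :w].reshape(h // nb, nb, w // nb, nb).mean(axis=(1, 3))
```

A flat region yields zero energy, while a flickering high-frequency pattern (e.g., a checkerboard) concentrates energy in the HH detail, which is why fire regions score high.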

Fig. 4. Wavelet energy for an actual fire (a) and a fire-like object (b).

### 2.4 Classification using SVM

SVM is commonly used in many fields of pattern recognition because it provides high performance and accurate classification with a limited training data set. The idea of SVM is to construct an optimal hyperplane that divides the input data into two classes with maximum margin. In this study, SVM is used to classify the regions of interest as either fire or non-fire. The SVM classification function is defined as follows:

##### (11)
[TeX:] $$f ( x ) = \operatorname { sign } \left( \sum _ { i = 0 } ^ { l - 1 } w _ { i } \cdot k \left( x , x _ { i } \right) + b \right)$$

where sign() determines whether x belongs to the fire class (+1) or the non-fire class (–1), wi are the output weights of the kernel, k() is a kernel function, xi are the support vectors, and l is the number of support vectors. In the proposed method, a one-dimensional feature vector is used. The data in this study are not linearly separable, so no hyperplane exists that separates the input data into two parts; therefore, the non-linear radial basis function (RBF) kernel [8] is used:

##### (12)
[TeX:] $$k ( x , y ) = \exp \left( - \frac { \| x - y \| ^ { 2 } } { 2 \sigma ^ { 2 } } \right) \text { for } \sigma > 0$$

where x and y are input feature vectors and σ is a parameter controlling the width of the basis function, experimentally set to 0.1, which gives good performance. To train the SVM, a data set consisting of 500 wavelet energies from actual fire videos and 500 from fire-like and non-fire moving objects was used.
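The training setup can be sketched with scikit-learn's `SVC`. This is a hedged stand-in, not the paper's MATLAB implementation: the uniform energy ranges below are synthetic placeholders for the real 500 + 500 wavelet-energy samples, and σ = 0.1 is mapped to scikit-learn's `gamma` via Eq. (12), gamma = 1/(2σ²) = 50.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the 500 fire / 500 non-fire block energies.
rng = np.random.default_rng(0)
fire_energy = rng.uniform(0.5, 1.0, 500).reshape(-1, 1)
nonfire_energy = rng.uniform(0.0, 0.2, 500).reshape(-1, 1)
X = np.vstack([fire_energy, nonfire_energy])
y = np.hstack([np.ones(500), -np.ones(500)])   # +1 fire, -1 non-fire

# RBF kernel, Eq. (12), with sigma = 0.1 -> gamma = 1 / (2 * 0.1**2) = 50.
clf = SVC(kernel="rbf", gamma=1.0 / (2 * 0.1**2)).fit(X, y)

# Classify a high-energy block and a low-energy block.
pred = clf.predict([[0.9], [0.05]])
```

With well-separated energies, the high-energy block is labeled +1 (fire) and the low-energy block –1 (non-fire).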

## 3. Results

This section presents the experimental results of the proposed method. The model is implemented in MATLAB (R2017a) and tested on an Intel Core i7 2.97 GHz PC with 8 GB of RAM.

To measure the performance of the proposed algorithm, 10 videos with dimensions of 256×256 were collected from the Internet (http://www.ultimatechase.com); eight of them contain fire. Table 1 shows snapshots of the tested videos. If an image frame contains fire pixels and the proposed algorithm labels it as fire, it counts as a true positive; if the frame contains no fire but the algorithm labels it as fire, it counts as a false positive. The results are shown in Table 2.

Table 1. Videos used for the proposed algorithm evaluation

The final experimental results in Table 2 show that the proposed method achieves an average true-positive rate of 93.46% on the eight fire videos and a false-positive rate of 6.89% on the two videos of fire-colored moving objects. These results indicate the good performance of the proposed method.

Table 2. Experimental results for testing the proposed forest-fire detection method

### 3.1 Performance Evaluation

To evaluate the performance of the proposed algorithm, it is compared with the above-mentioned methods. All of the methods are tested on a data set of 300 images (200 forest-fire images and 100 non-fire images) collected from the Internet. The algorithms' performance is measured using the F-score evaluation metric.

#### 3.1.1 F-score

The F-score [9] is used to evaluate the performance of the detection methods. For any given detection method, there are four possible outcomes. If an image contains fire pixels and the algorithm labels it as fire, the result is a true positive; if the algorithm labels the same image as non-fire, the result is a false negative. If an image contains no fire and the algorithm labels it as non-fire, the result is a true negative; if the algorithm labels it as fire, the result is a false positive. Fire detection methods are evaluated using the following equations:

##### (13)
[TeX:] $$F = 2 * \frac { ( \text { precision } \times \text { recall } ) } { ( \text { precision } + \text { recall } ) }$$

##### (14)
[TeX:] $$\text { precision } = \frac { T P } { ( T P + F P ) }$$

##### (15)
[TeX:] $$\text { recall } = \frac { T P } { ( T P + F N ) }$$

where F refers to the F-score, and TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. A higher F-score means better overall performance. Table 3 shows the comparison results.

- TP rate is TP divided by the total number of fire images.
- TN rate is TN divided by the total number of non-fire images.
- FN rate is FN divided by the total number of fire images.
- FP rate is FP divided by the total number of non-fire images.

Table 3. Evaluations of the four tested fire detection methods

Table 3 shows the F-scores of the four methods. The F-score of the proposed method is 3.78% higher than that of the method of Premal and Vinsley [2], which indicates the reliability of the proposed method.

## 4. Conclusion

This work presented an effective forest-fire detection method using image processing. Background subtraction and spatial wavelet analysis are used to extract candidate regions, and SVM is used to classify each candidate region as either real fire or non-fire. A comparison between existing methods and the proposed method was carried out. The final results indicate that the proposed forest-fire detection method achieves a good detection rate (93.46%) and a low false-alarm rate (6.89%) on fire-like objects. These results indicate that the proposed method is accurate and can be used in automatic forest-fire alarm systems.

For future work, the method’s accuracy could be improved by extracting more fire features and increasing the training data set.

## Acknowledgement

This work is supported by the Fundamental Research Funds for the Central Universities (No. 2572017PZ10).

## Biography

##### Mubarak Adam Ishag Mahmoud
https://orcid.org/0000-0001-5745-7376

He received a B.S. in Engineering Technology from the Faculty of Engineering and Technology, University of Gezira, in 2006 and an M.S. in Electronics Engineering from the Sudan University of Science and Technology in 2012. He is currently a Ph.D. candidate in Information and Computer Engineering at Northeast Forestry University, China.

## Biography

##### Honge Ren
https://orcid.org/0000-0002-5334-7636

She received her Ph.D. degree from Northeast Forestry University, China, in 2009. She is currently a professor in the College of Information and Computer Engineering at Northeast Forestry University, a supervisor of doctoral students, and the director of the Heilongjiang Provincial Forestry Intelligent Equipment Engineering Research Center. Her main research interests include various aspects of artificial intelligence and distributed systems.

## References

• 1 D. Stipanicev, T. Vuko, D. Krstinic, M. Stula, and L. Bodrozic, "Forest fire protection by advanced video detection system: Croatian experiences," in Proceedings of the 3rd TIEMS Workshop on Improvement of Disaster Management Systems: Local and Global Trends, Trogir, Croatia, 2006.
• 2 C. E. Premal and S. S. Vinsley, "Image processing based forest fire detection using YCbCr colour model," in Proceedings of the 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2014, pp. 1229-1237.
• 3 V. Vipin, "Image processing based forest fire detection," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 2, pp. 87-95, 2012.
• 4 Y. L. Wang and J. Y. Ye, "Research on the algorithm of prevention forest fire disaster in the Poyang Lake Ecological Economic Zone," Advanced Materials Research, vol. 518-523, pp. 5257-5260, 2012.
• 5 T. H. Chen, P. H. Wu, and Y. C. Chiou, "An early fire-detection method based on image processing," in Proceedings of the 2004 International Conference on Image Processing, Singapore, 2004, pp. 1707-1710.
• 6 M. Kang, T. X. Tung, and J. M. Kim, "Efficient video-equipped fire detection approach for automatic fire alarm systems," Optical Engineering, vol. 52, no. 1, 2013.
• 7 B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, and A. E. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognition Letters, vol. 27, no. 1, pp. 49-58, 2006.
• 8 S. Theodoridis, A. Pikrakis, K. Koutroumbas, and D. Cavouras, Introduction to Pattern Recognition: A MATLAB Approach. New York, NY: Academic Press, 2010.
• 9 T. Fawcett, 2004 [Online]. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.9777.


Table 2. Experimental results for testing the proposed forest-fire detection method

| Video | Number of frames | Number of fire frames | TP | TP rate (%) | FP | FP rate (%) |
|---|---|---|---|---|---|---|
| Video_NO. 1 | 260 | 260 | 230 | 88.46 | - | - |
| Video_NO. 2 | 246 | 246 | 232 | 94.30 | - | - |
| Video_NO. 3 | 208 | 208 | 203 | 97.6 | - | - |
| Video_NO. 4 | 200 | 200 | 185 | 92.5 | - | - |
| Video_NO. 5 | 245 | 245 | 234 | 95.51 | - | - |
| Video_NO. 6 | 585 | 0 | - | - | 34 | 5.81 |
| Video_NO. 7 | 219 | 219 | 206 | 94.06 | - | - |
| Video_NO. 8 | 216 | 216 | 198 | 91.67 | - | - |
| Video_NO. 9 | 218 | 218 | 204 | 93.58 | - | - |
| Video_NO. 10 | 251 | 0 | - | - | 20 | 7.97 |

Table 3. Evaluations of the four tested fire detection methods

| Method | TP rate (%) | FN rate (%) | TN rate (%) | FP rate (%) | Recall | Precision | F-score (%) |
|---|---|---|---|---|---|---|---|
| Premal and Vinsley [2] | 91.5 | 8 | 89 | 13 | 0.920 | 0.876 | 89.74 |
| Vipin [3] | 86 | 9.5 | 82 | 11 | 0.901 | 0.887 | 89.38 |
| Chen et al. [5] | 83 | 16.5 | 88 | 26 | 0.834 | 0.761 | 79.58 |
| Proposed method | 94 | 5 | 90 | 8 | 0.949 | 0.922 | 93.52 |