1. Introduction
In harsh environments such as rain, snow, and heavy fog, acquired images suffer from blurring and haze, which creates serious difficulties for fields that rely on visual imaging technology, such as traffic monitoring and intelligent driving systems. Experts at home and abroad have studied this problem extensively [1]. Ye et al. [2] proposed an improved dark channel image dehazing method that combines bright-area detection with image enhancement, using the enhancement step to further optimize the restored image and compensate for its low brightness. You et al. [3] exploited the advantages of polarization imaging and combined them with the dark channel prior principle to propose a polarization image dehazing enhancement algorithm, realizing dehazing and enhancement on the basis of the atmospheric physical degradation model. He et al. [4] proposed an image enhancement dehazing algorithm based on guidance coefficient weighting and adaptive image enhancement, in which the restored image is combined with the atmospheric scattering model and optimized by an adaptive linear contrast enhancement method. Fu et al. [5] proposed a dehazing algorithm, inspired by ordinary differential equations, that gradually refines multi-level features and enhances edges; the enhanced edges are superimposed on the dehazed image so that the final result retains fine details.
In summary, common image dehazing techniques fail to preserve the jumps at abrupt regions when suppressing image noise, so the improvement in image quality is limited and details remain indistinct. To further improve the dehazing effect and enhance image quality, this study proposes an image denoising and enhancement algorithm based on mean-guided filtering. The algorithm establishes an image denoising and enhancement model according to the dark channel prior rule and, at the same time, applies a median filter with low computational complexity to denoise the image in real time while preserving the jumps at abrupt regions, thereby realizing both denoising and enhancement. Compared with traditional dehazing methods, preserving these jumps further strengthens the image processing capability, which is of significant reference value for improving visual imaging technology.
2. Image Preprocessing
The method flow designed in this paper is shown in Fig. 1.
Step 1 inputs the image samples; Step 2 pre-segments the image using the superpixel computing method; Step 3 judges whether two adjacent regions should be merged [6]; Step 4 calculates the transmittance of the image; Step 5 establishes an image denoising and enhancement model based on the dark channel prior rules; Step 6 checks whether the control factor lies within the range [0, 1]: if not, the process returns to Step 5; otherwise, it proceeds to the next step; Step 7 outputs the enhanced dehazed image and ends the process [7].
2.1 Pre-segmentation
Since natural images are easily affected by factors such as illumination, pre-segmentation is performed before image processing. The two methods most widely used for pre-segmentation are the superpixel computing method and the watershed segmentation algorithm. Because the watershed algorithm lacks robustness, the superpixel method was selected in this paper [8]. It divides the image into several sub-regions, each composed of a series of adjacent pixels with similar features such as color and texture; these sub-regions are less susceptible to illumination than individual edges and pixels [9], which helps obtain a better image enhancement effect under the influence of external factors.
2.2 Image Segmentation Implementation
Taking an actual image (Fig. 2) as an example, the specific steps of image segmentation are given.
Fig. 2. Schematic diagram of image segmentation steps: (a) the original image, (b) initial over-segmentation, (c) processed over-segmentation, and (d) segmentation result.
(1) The Ncut algorithm is used to pre-segment the original image into regions [TeX:] $$a_1, a_2, \ldots, a_M$$, which include some extremely small regions, as shown in Fig. 2(b) [10].
(2) Under the condition of formula (1), the extremely small regions in the initial segmentation are deleted, as shown in Fig. 2(c); the differences at some corresponding positions in Fig. 2(b) and 2(c) are marked with green circles.
In formula (1), m and n represent the length and width of the original image, respectively; [TeX:] $$N_{a_i}$$ represents the number of pixels in region [TeX:] $$a_i$$; and M is the total number of regions obtained by the Ncut algorithm. Since the extremely small regions in the initial segmentation have little effect on the overall segmentation, deleting them improves the computational efficiency of the subsequent steps [11].
(3) The consistency and similarity of different regions are calculated according to formulas (2) and (3) [8]:
In formula (2), [TeX:] $$Z\left(a_i\right)$$ is the amount of mutual information in the region; [TeX:] $$T(x, y)$$ denotes the phase consistency of different regions, where x and y indicate the neighborhood and structural information, respectively [12].
(4) It is determined whether two adjacent regions should be merged, and then whether the merging process should stop.
(5) The image is updated and steps (2)–(4) are repeated until no more regions are merged; the result is then output, as shown in Fig. 2(d). A code sketch of this pre-segmentation stage is given below.
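As a concrete illustration, the following is a minimal Python sketch of the pre-segmentation stage, assuming SLIC superpixels (via scikit-image) in place of the paper's Ncut over-segmentation and a mean-color distance in place of the consistency and similarity measures of formulas (2) and (3); the file name and thresholds are hypothetical.

from skimage import graph, io, segmentation  # scikit-image >= 0.20

img = io.imread("hazy_input.png")  # hypothetical input image

# Over-segmentation into superpixels (cf. Fig. 2(b)).
labels = segmentation.slic(img, n_segments=400, compactness=10, start_label=1)

# Iterative merging of adjacent, similar regions (steps (3)-(5)): build a
# region adjacency graph weighted by mean-color distance and merge every
# edge whose weight falls below a threshold; extremely small regions are
# absorbed into their most similar neighbor as a side effect.
rag = graph.rag_mean_color(img, labels, mode='distance')
merged = graph.cut_threshold(labels, rag, thresh=30)

# Visual check of the final segmentation (cf. Fig. 2(d)).
overlay = segmentation.mark_boundaries(img, merged)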
3. Image Dehazing Enhancement Processing
Based on the image segmentation results, the transmittance of the image to be dehazed is solved, and the image edge detection results are obtained through the mean-guided filtering method. A linear model is then established from the detection results to achieve image dehazing enhancement.
3.1 Transmittance Solution
First, the minimum value among the three RGB channels at each pixel of the image is selected to obtain the dark image. Then, the minimum value within a local small block of the dark image is taken as the value of the current pixel, which yields the dark primary color image [13]. The expression is as follows:
In formula (4), [TeX:] $$f_a$$ represents the color channel; [TeX:] $$u_a$$ expresses the local patch centered on region a. To solve for the restored image from formula (4), it is necessary to obtain the transmittance w(t) and the atmospheric light vector H. First, the transmittance w(t) is estimated. Assuming that the atmospheric light vector H is known and that w(t) is constant within the local small block [TeX:] $$u_a$$, the minimum operation is applied twice to both sides of formula (4), and both sides are divided by H to obtain formula (5) [14]:
Since w(t) is constant within the local block [TeX:] $$u_a$$, it can be moved outside the min operator and denoted as [TeX:] $$w_i(t).$$ Since H is positive, substituting the dark primary color prior of formula (4) into formula (5) yields formula (6):
The second term on the right side of formula (6) is the dark channel of the normalized image [TeX:] $$d^2(a) / H^2$$. In an actual fog-free scene, some haze still remains in regions with a large depth of field [15]. Therefore, the parameter r = 0.95 (0 < r < 1) is introduced to weaken the dehazing slightly and make the restored scene more realistic.
The transmittance [TeX:] $$w_i(t)$$ is rewritten as:
In formula (7), [TeX:] $$L_i \text { and } L_j$$ both denote sparse features of hazy images.
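To make the derivation concrete, the following is a minimal Python sketch of the transmittance estimation, following the standard dark-channel-prior form that formulas (4)–(7) describe; the patch size and the atmospheric light estimator are assumptions, while r = 0.95 follows the text.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # img: float array of shape (h, w, 3) scaled to [0, 1]. Per-pixel minimum
    # over the RGB channels, then a local minimum over the patch u_a
    # (the dark primary color image of formula (4)).
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_H(img, patch=15, frac=0.001):
    # A common (assumed) estimate of the atmospheric light H: the mean color
    # of the brightest fraction of dark-channel pixels.
    dc = dark_channel(img, patch)
    n = max(1, int(frac * dc.size))
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def transmittance(img, H, patch=15, r=0.95):
    # w_i(t) = 1 - r * dark_channel(I / H), in the spirit of formulas (6)-(7);
    # r < 1 deliberately leaves a little haze at large depths of field.
    return 1.0 - r * dark_channel(img / H, patch)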
3.2 Implementation of Image Dehazing Enhancement
According to the dark channel prior law, in most outdoor haze-free images, every local block contains at least one pixel whose intensity in some color channel is very low. The following classic image dehazing enhancement model can be established:
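Consistent with the symbol definitions that follow, formula (8) is the standard atmospheric scattering model: [TeX:] $$I(t)=O(t) C(t)+\Phi(1-C(t))$$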
In formula (8), I(t) is the image with fog; O(t) denotes the image without fog; [TeX:] $$\Phi$$ indicates the atmospheric light value of the entire image; C(t) represents the medium transmittance; O(t)C(t) expresses the attenuation of the object's light during propagation; and [TeX:] $$\Phi(1-C(t))$$ refers to the atmospheric light component, that is, the fog concentration. This model removes spatially isotropic fog by estimating the fog concentration and subtracting the fog component from the foggy image [16]. The algorithm in this paper is based on this classic dehazing model [17]. Since I(t) is known, the haze-free image O(t) under ideal lighting conditions must be solved for. To simplify the calculation and control the number of parameters, the atmospheric light component is written as formula (9):
Transforming formulas (8) and (9) yields formula (10):
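Substituting the fog concentration [TeX:] $$\Psi(t)=\Phi(1-C(t))$$ of formula (9) into formula (8) and solving for O(t) gives: [TeX:] $$O(t)=\frac{I(t)-\Psi(t)}{C(t)}$$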
To protect the image edges and remove noise in the transmittance map, the transmittance map must be smoothed [18]. After comparing the advantages and disadvantages of several filters, this paper used the median filter, which has lower computational complexity, to remove noise while keeping the jumps in the abrupt regions [19]. After the global atmospheric light [TeX:] $$\Phi$$ and the fog concentration [TeX:] $$\Psi(t)$$ are estimated using the dark channel principle, the image O(t) is recovered using the physical model of the fog image:
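A minimal Python sketch of this recovery step follows; the median window size and the lower bound on the transmittance are assumptions, and the computation inverts formula (8) as in formula (10) after smoothing the transmittance map.

import numpy as np
from scipy.ndimage import median_filter

def dehaze(img, Phi, C, window=11, t_floor=0.1):
    # Smooth the transmittance map C with a low-complexity median filter,
    # which removes noise while keeping the jumps at abrupt regions.
    C_s = np.clip(median_filter(C, size=window), t_floor, 1.0)
    psi = Phi * (1.0 - C_s)[..., None]                       # formula (9)
    return np.clip((img - psi) / C_s[..., None], 0.0, 1.0)  # formula (10)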
The specific implementation steps of image dehazing enhancement are as follows:
(1) The atmospheric light component [TeX:] $$\Psi(t)=\Phi(1-C(t)),$$ called the fog concentration, is defined, and [TeX:] $$\Psi(t)$$ is estimated.
(2) The dark channel is calculated, and median filtering is performed on it.
(3) Foggy and fog-free areas are distinguished by the local standard deviation.
(4) The degree of image dehazing is controlled.
(5) Since not all parts of the image contain fog, regions that already have good contrast must be treated differently. To smooth the image while retaining its boundaries, step (4) is used to shield the regions that do not need dehazing, making the result more robust [20].
(6) A control factor with value range [0, 1] is introduced; as its value increases, the dehazing ability becomes stronger. This factor effectively controls the dehazing effect and avoids a final result in which the dehazing is too weak to meet requirements or so strong that the image looks unnatural. A sketch of steps (3)–(6) is given below.
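The following sketch illustrates steps (3)–(6) under the assumption that foggy regions are detected by a low local standard deviation (poor contrast) and that the control factor rho scales the estimated fog concentration; the window size and contrast threshold are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def fog_mask(gray, window=15, thresh=0.08):
    # Local standard deviation from local moments; low contrast marks fog.
    mean = uniform_filter(gray, size=window)
    mean_sq = uniform_filter(gray * gray, size=window)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return (local_std < thresh).astype(float)

def controlled_fog(psi, gray, rho=0.8):
    # rho in [0, 1]: larger values remove more fog; regions that already
    # have good contrast are shielded so they are not over-processed.
    return rho * psi * fog_mask(gray)[..., None]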
4. Experimental Studies
To verify the effectiveness and rationality of the proposed image dehazing enhancement algorithm based on mean-guided filtering, different types of images with rich detail and depth-of-field information were selected for testing.
4.1 Experimental Environment and Dataset
This experiment was implemented using PyTorch. The full-reference evaluation indices were implemented in Python, and the no-reference evaluation indices in MATLAB. The specific experimental environment is shown in Table 1.
The RESIDE dataset and the dataset specified by the 2018 NTIRE challenge were divided into training and test sets as needed. The details of the experimental data are shown in Table 2.
Table 1. Experimental environment parameters
Under the above experimental environment and dataset settings, the experimental research was carried out. The improved dark channel image dehazing method [2], the polarization image dehazing enhancement algorithm [3], the image enhancement dehazing algorithm [4], and the multi-level feature gradual thinning and edge enhancement dehazing algorithm [5] were compared with the proposed algorithm. The comparison results are analyzed below.
4.2 Analysis of Experimental Results
A foggy image was arbitrarily selected from the dataset, and the polarization image dehazing enhancement algorithm based on the dark channel prior principle [3], the image enhancement dehazing algorithm based on guidance coefficient weighting and adaptive enhancement [4], and the proposed algorithm were used for dehazing enhancement. The results are shown in Fig. 3.
According to Fig. 3, the visual effect of the proposed algorithm was better: it not only effectively achieved dehazing but also highlighted the scene details in bright areas and preserved those in darker regions. The visual quality of the image was clearly better than that obtained by the traditional algorithms, with high color fidelity and clear structural information.
Fig. 3. Comparison of image dehazing enhancement effect: (a) image with fog, (b) the proposed algorithm, (c) dark channel prior principle, and (d) guidance coefficient weighting.
To further measure image quality, this study used information entropy, peak signal-to-noise ratio (PSNR), structural similarity, and other evaluation indices. The calculation formulas are as follows.
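With the symbols defined below, formula (12) is the standard Shannon entropy of the gray-level histogram: [TeX:] $$E=-\sum_{i=0}^{L-1} P\left(\rho_i\right) \log _2 P\left(\rho_i\right)$$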
In formula (12), [TeX:] $$P\left(\rho_i\right)$$ is the probability of gray level [TeX:] $$\rho_i$$, and L is the number of gray levels of the image. The larger the entropy of the image, the greater the amount of information and the richer the detailed information of the image.
The information entropy of the images processed by the five methods is shown in Fig. 4.
Fig. 4. Information entropy test results.
PSNR is the most widely used evaluation index in the field of image processing. The larger its value, the smaller the degradation of the processed image and the less the distortion compared with the original image. The PSNR is calculated as follows:
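With ς as defined below and MSE denoting the mean squared error between the processed and reference images, formula (13) takes the standard form: [TeX:] $$\mathrm{PSNR}=10 \log _{10} \frac{\left(2^{\varsigma}-1\right)^2}{\mathrm{MSE}}$$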
In formula (13), [TeX:] $$\varsigma$$ expresses the number of sampling bits, which is usually set to 8.
The PSNR values obtained by the five methods are shown in Fig. 5.
Structural similarity is used to evaluate an algorithm's ability to preserve structural information; the higher the value, the better. Given two images [TeX:] $$I_1 \text { and } I_2 \text {, }$$ the structural similarity is calculated as follows:
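With the symbols defined below, formula (14) is the standard structural similarity index: [TeX:] $$\operatorname{SSIM}\left(I_1, I_2\right)=\frac{\left(2 \varphi_{I_1} \varphi_{I_2}+\tau_1\right)\left(2 \pi_{I_1 I_2}+\tau_2\right)}{\left(\varphi_{I_1}^2+\varphi_{I_2}^2+\tau_1\right)\left(\pi_{I_1}^2+\pi_{I_2}^2+\tau_2\right)}$$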
In formula (14), [TeX:] $$\varphi_{I_1} \text { and } \varphi_{I_2}$$ represent the averages of [TeX:] $$I_1 \text { and } I_2,$$ respectively; [TeX:] $$\pi_{I_1}^2 \text { and } \pi_{I_2}^2$$ stand for their variances; [TeX:] $$\pi_{I_1 I_2}$$ indicates their covariance; and [TeX:] $$\tau_1 \text { and } \tau_2$$ are constants.
The structural similarity obtained by the five methods is shown in Fig. 6.
Fig. 6. Structural similarity test results.
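As a sketch of how the three indices of formulas (12)–(14) can be computed, assuming 8-bit grayscale inputs and the standard definitions, with PSNR and SSIM supplied by scikit-image:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def information_entropy(img_u8):
    # Shannon entropy of the gray-level histogram, formula (12), in bits.
    p = np.bincount(img_u8.ravel(), minlength=256) / img_u8.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def evaluate(reference_u8, restored_u8):
    return {
        "entropy": information_entropy(restored_u8),
        "psnr": peak_signal_noise_ratio(reference_u8, restored_u8),  # formula (13)
        "ssim": structural_similarity(reference_u8, restored_u8),    # formula (14)
    }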
Combining the above objective evaluation results, the information entropy of the proposed algorithm was higher than 0.7 bit, the PSNR was higher than 40 dB, and the structural similarity was higher than 80%. All of these indices exceeded those of the other four traditional algorithms, showing that the proposed algorithm can effectively improve the evaluation indices, obtain better image color, and improve the image restoration effect.
5. Conclusion
To improve the image restoration effect and address unclear detail restoration, low tone fidelity, and the loss of image details, an image dehazing enhancement algorithm based on mean-guided filtering was proposed. The main research results are as follows:
(1) The proposed algorithm achieved better visual effects, high color fidelity, and clear structural information; its visual quality was clearly better than that obtained by traditional algorithms.
(2) The information entropy, PSNR, and structural similarity of the proposed algorithm were all higher than those of the traditional algorithms: the information entropy exceeded 0.7 bit, the PSNR exceeded 40 dB, and the structural similarity exceeded 80%. A better dehazing effect was therefore obtained while image quality was guaranteed.
(3) Compared with traditional dehazing algorithms, the proposed algorithm adds a median filter to handle image details and, to guarantee image quality, preserves the jumps at abrupt regions during noise processing. The experimental results also showed that the proposed algorithm outperforms the other algorithms in detail processing, dehazing effect, and related aspects.
However, the proposed algorithm still has shortcomings: the denoising required by environmental changes has not been fully considered during dehazing. In future work, more complex environmental conditions need to be considered to improve the overall performance of the algorithm.