Article Information
Corresponding Author: Juncheng Zou*, 195627249@qq.com
Ganghua Liu, Department of Support Services, The State Grid Chongqing Electric Power Company, Chongqing, China, kas365@163.com
Wei Tian, Department of Support Services, The State Grid Chongqing Electric Power Company, Chongqing, China, tianfangyang@163.com
Yushun Luo, Department of Support Services, The State Grid Chongqing Electric Power Company, Chongqing, China, qnlys2010@163.com
Juncheng Zou*, Department of Support Services, The State Grid Chongqing Electric Power Company, Chongqing, China, 195627249@qq.com
Shu Tang, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China, tangshu@cqupt.edu.cn
Received: July 13, 2021
Revision received: September 2, 2021
Accepted: October 10, 2021
Published (Print): February 28, 2022
Published (Electronic): February 28, 2022
1. Introduction
The problem of blind image restoration is to estimate the blur kernel (BK) k from a blurred image y and then restore the clear image x. This process is modeled as formula (1):
[TeX:] $$y=x \otimes k+n,$$
where [TeX:] $$\otimes$$ denotes the convolution operation and n is additive noise.
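To make the degradation model in formula (1) concrete, the following minimal sketch (assuming NumPy and SciPy are available; the kernel shape and noise level are illustrative, not taken from this paper) synthesizes a blurred observation y from a clear image x and a kernel k:

```python
import numpy as np
from scipy.signal import convolve2d

def blur_observation(x, k, noise_sigma=0.01, seed=0):
    """Simulate y = x (*) k + n for a grayscale image x and kernel k."""
    rng = np.random.default_rng(seed)
    y = convolve2d(x, k, mode="same", boundary="symm")  # x convolved with k
    y += noise_sigma * rng.standard_normal(y.shape)     # additive noise n
    return np.clip(y, 0.0, 1.0)

# Example: a 15x15 horizontal motion kernel (illustrative only)
k = np.zeros((15, 15)); k[7, :] = 1.0; k /= k.sum()
x = np.random.default_rng(1).random((128, 128))          # stand-in clear image
y = blur_observation(x, k)
```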
Accurate estimation of the BK is key to successful blind image restoration, and researchers have proposed many blind restoration methods in recent years. Shan et al. [1] proposed a piecewise function to estimate the BK in 2008. Almeida and Almeida [2] proposed an edge extraction filter to estimate the motion BK in 2010. In 2011, Krishnan et al. [3] combined the L1 norm and the L2 norm to estimate the BK. In 2013, Xu et al. [4] used the L0 norm to extract large-scale edges in an image and then used these edges to estimate the BK. In 2015, Ma et al. [5] used a set of sparse priors to extract significant edge structures for BK estimation. In 2016, Zuo et al. [6] proposed an Lp norm to achieve accurate BK estimation. Also in 2016, Pan et al. [7] first introduced the dark channel prior (DCP) into blind image deblurring, which achieved both accurate BK estimation and high-quality image restoration. In 2017, Pan et al. [8] introduced the L0 norm in both the gradient domain and the spatial domain. In 2019, Guo and Ma [9] proposed a local image block prior and used the resulting edges to guide BK estimation, which yields better restoration results. In 2020, Chen et al. [10] combined the inherent structural correlation and spatial sparsity of images and used Laplacian priors to achieve blind image deblurring. In 2019, Chen et al. [11] found that blurring reduces the magnitude of image gradients and proposed a blind restoration method based on the local maximum gradient (LMG). In 2020, Lim et al. [12] fused texture-aware priors based on the L0 norm and the L2 norm to process remotely sensed blurred images; this method restores textured regions well. In 2020, Cai et al. [13] proposed DBCPeNet, a neural-network-based approach. In 2020, Wu et al. [14] proposed a network for video deblurring. In 2020, Zhang et al. [15] used two GANs to learn deblurring: the first GAN learned how to blur clear images and then guided the second GAN to learn how to convert blurred images into clear ones. In 2020, Li et al. [16] found that, for images produced by learning-based methods, the peak signal-to-noise ratio (PSNR) cannot accurately reflect image quality, so they proposed a new image quality measure. In 2021, Ren et al. [17] presented a spatially aware deblurring technique. Also in 2021, Ren et al. [18] found that the residuals caused by blur or other image degradation are spatially dependent and complex in distribution, and therefore trained on a set of blurred and real image pairs to parameterize and learn the regularization term of the restored image. Hu et al. [19] proposed a network for single-image reconstruction, and Lu et al. [20] proposed an unsupervised deblurring method.
In this paper, a windowed-total-variation regularization constraint that achieves accurate estimation of the BK is proposed. Different from existing methods, the windowed-total-variation regularization constraint is obtained from the spatial scale of image edges rather than from their magnitude, so it achieves more accurate BK estimation and higher-quality image restoration.
2. The Proposed Windowed-Total-Variation Constraint
In [21], a relative total variation model (RTVM) was proposed for structure extraction. Our method builds on the RTVM through the windowed measure shown in formula (2):
[TeX:] $$r(i)=\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}+\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}$$
The differences between formula (2) and the RTVM in [21] are that the weight in [21] is removed and squared terms are used instead of absolute values. N(i) denotes an image block centered on pixel i, [TeX:] $$\nabla_{h} x \text { and } \nabla_{v} x$$ are the discrete first-order difference operators in the horizontal and vertical directions, respectively, and [TeX:] $$\varepsilon$$ is a small positive number.
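As an illustration, the windowed measure in formula (2) can be evaluated with box-filtered sums. The sketch below is a minimal implementation assuming NumPy/SciPy, with the window width and ε chosen only for demonstration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_tv_measure(x, win=15, eps=1e-3):
    """Per-pixel r(i) of formula (2): windowed sum of squared differences
    divided by the squared windowed sum of differences, in both directions."""
    gh = np.diff(x, axis=1, append=x[:, -1:])   # horizontal first-order difference
    gv = np.diff(x, axis=0, append=x[-1:, :])   # vertical first-order difference
    area = win * win                            # uniform_filter returns a mean; scale to a sum
    num_h = uniform_filter(gh**2, size=win) * area
    den_h = (uniform_filter(gh, size=win) * area) ** 2 + eps
    num_v = uniform_filter(gv**2, size=win) * area
    den_v = (uniform_filter(gv, size=win) * area) ** 2 + eps
    return num_h / den_h + num_v / den_v        # r(i) for every pixel i

# R(x) in Section 2 is then the sum of r(i) over all pixels:
# R = windowed_tv_measure(x).sum()
```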
It can be seen from formula (2) that, if the width of N(i) is set to the same as that of the BK, then by minimizing formula (2) we can extract edges whose spatial scale is larger than the BK, regardless of their magnitude. Therefore, to estimate the BK accurately, this paper presents a windowed-total-variation constraint, shown in formula (3):
[TeX:] $$\min _{x, k}\|x \otimes k-y\|_{2}^{2}+\lambda_{u} R(x)+\lambda_{k}\|k\|_{p},$$
where [TeX:] $$R(x)=\sum_{i} r(i),\|\cdot\|_{p} \text { and }\|\cdot\|_{2}$$ denote the Lp norm and the L2 norm, respectively, and [TeX:] $$\lambda_{u} \text { and } \lambda_{k}$$ are regularization parameters. From Eq. (3) we can see that R(x) is a windowed-total-variation regularization term, which extracts only the useful large-scale edges, and [TeX:] $$\|k\|_{p}$$ is a sparsity term, which enforces the sparsity of the BK. In Section 3, we discuss in detail how to solve the proposed windowed-total-variation regularization constraint model.
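Putting the pieces together, a hedged sketch of evaluating the objective in formula (3) is given below; it reuses windowed_tv_measure from the previous sketch, and the regularization weights are placeholder values rather than this paper's settings:

```python
import numpy as np
from scipy.signal import convolve2d

def energy(x, k, y, lam_u=0.004, lam_k=2.0, p=1, win=15, eps=1e-3):
    """Illustrative evaluation of the objective in formula (3):
    data fidelity + lambda_u * R(x) + lambda_k * sparsity of k.
    Parameter values are placeholders, not the paper's settings."""
    data = np.sum((convolve2d(x, k, mode="same", boundary="symm") - y) ** 2)
    R = windowed_tv_measure(x, win=win, eps=eps).sum()   # helper from the sketch above
    sparsity = np.sum(np.abs(k) ** p)                    # p-th power of the Lp norm, for simplicity
    return data + lam_u * R + lam_k * sparsity
```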
3. The Solution of the Proposed Windowed-Total-Variation Method
In this section, we adopt an alternating iterative algorithm to solve model (3) by splitting it into an x sub-problem and a k sub-problem.
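The overall scheme can be summarized by the following skeleton, where update_x and update_k are placeholders for the sub-problem solvers derived in Sections 3.1 and 3.2, and the kernel size and iteration count are illustrative:

```python
import numpy as np

def blind_deblur(y, update_x, update_k, kernel_size=15, n_iters=20):
    """Alternating minimization skeleton for model (3).

    update_x(y, k) and update_k(y, x) are the sub-problem solvers
    described in Sections 3.1 and 3.2, passed in as callables."""
    x = y.copy()                                                    # initialize the latent image with the blurry input
    k = np.full((kernel_size, kernel_size), 1.0 / kernel_size**2)   # flat initial kernel
    for _ in range(n_iters):
        x = update_x(y, k)   # fix k, solve the x sub-problem (Section 3.1)
        k = update_k(y, x)   # fix x, solve the k sub-problem (Section 3.2)
    return x, k
```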
3.1 Solving the x Sub-problem
For the solution of the x sub-problem, we fix k, and formula (3) is transformed into:
Obviously, the key to solving formula (4) is to handle the regularization term R(x). First, we reorganize the elements in [TeX:] $$\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}$$ as follows:
In the same way, for [TeX:] $$\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon},$$ we can get:
Therefore, we can get formula (7):
where [TeX:] $$W_{h}(j)=w_{h}(j) \text { and } W_{v}(j)=w_{v}(j)$$ are the weight maps, and ∘ denotes element-wise multiplication. Therefore, x can be solved using the fast Fourier transform (FFT).
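Eq. (8) itself is not reproduced above; as a rough illustration of the FFT machinery behind this kind of quadratic sub-problem, the sketch below solves the data term with a plain L2 gradient penalty using constant weights, whereas the actual method uses the spatially varying windowed weights W_h and W_v. The helper psf2otf and the parameter lam are assumptions introduced only for this example:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image size, circularly shift its center
    to the origin, and take the 2-D FFT (standard PSF-to-OTF conversion)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def update_x(y, k, lam=0.01):
    """Generic FFT solve of min_x ||k*x - y||^2 + lam * ||grad x||^2.
    Constant weights are used purely for illustration; Eq. (8) in the
    paper uses the windowed weights W_h and W_v instead."""
    K = psf2otf(k, y.shape)
    Dh = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal difference filter
    Dv = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical difference filter
    num = np.conj(K) * np.fft.fft2(y)
    den = np.abs(K) ** 2 + lam * (np.abs(Dh) ** 2 + np.abs(Dv) ** 2)
    return np.real(np.fft.ifft2(num / den))
```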
3.2 Solving k Sub-problem
For the solution of the k sub-problem, we fix x and solve for k in the gradient domain; formula (3) is thus transformed into:
For the sparsity of BKs, we set p=1. To solve the k sub-problem efficiently, we use an extra variable [TeX:] $$b_{k}$$ and get:
where [TeX:] $$b_{k}$$ is the extra variable and [TeX:] $$\beta$$ is a penalty parameter. Similarly, we can solve Eq. (10) by converting it into two sub-problems [TeX:] $$b_{k}$$ and k, respectively:
Fixing [TeX:] $$b_{k},$$ we can solve k by:
Thus, the same FFT technique as in Eq. (8) can be used directly to solve for k:
Fixing k, we can get [TeX:] $$b_{k}$$ by:
Using the method in [6], Eq. (13) can be solved by:
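Since the exact forms of Eqs. (11)-(14) are not reproduced above, the following is only a rough sketch of the inner alternation, assuming the standard soft-thresholding (shrinkage) solution for the L1 sub-problem and a generic gradient-domain FFT solve for the quadratic k sub-problem. The names and parameter values (update_k, lam_k, beta, n_inner) are illustrative, and this update could be plugged into the alternating skeleton shown earlier in Section 3:

```python
import numpy as np

def soft_threshold(v, tau):
    """Standard shrinkage solution of min_b tau*||b||_1 + 0.5*||b - v||_2^2."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def update_k(y, x, ksize=15, lam_k=2.0, beta=1.0, n_inner=5):
    """Illustrative gradient-domain kernel update with an auxiliary variable b_k.
    Parameter values are placeholders; the exact update follows Eqs. (11)-(14)."""
    gy_v, gy_h = np.gradient(y)           # vertical / horizontal differences of the blurred image
    gx_v, gx_h = np.gradient(x)           # differences of the current latent image estimate
    Xh, Xv = np.fft.fft2(gx_h), np.fft.fft2(gx_v)
    Yh, Yv = np.fft.fft2(gy_h), np.fft.fft2(gy_v)
    k = np.zeros(y.shape); k[0, 0] = 1.0  # kernel represented on the image grid (delta init)
    b = k.copy()
    for _ in range(n_inner):
        # k step (cf. Eqs. (11)-(12)): quadratic in k, solved in the Fourier domain
        num = np.conj(Xh) * Yh + np.conj(Xv) * Yv + beta * np.fft.fft2(b)
        den = np.abs(Xh) ** 2 + np.abs(Xv) ** 2 + beta
        k = np.real(np.fft.ifft2(num / den))
        # b_k step (cf. Eqs. (13)-(14)): element-wise shrinkage of k
        b = soft_threshold(k, lam_k / beta)
    # crop the kernel support to ksize x ksize around the circular origin
    k = np.roll(np.roll(k, ksize // 2, axis=0), ksize // 2, axis=1)[:ksize, :ksize]
    return k
```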
Finally, to get a meaningful BK, we use the following constraint on k:
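The constraint itself is not shown above; in practice a meaningful BK is commonly obtained by clipping negative entries, pruning very small values, and normalizing the kernel to unit sum. A minimal sketch of such a projection follows (the pruning ratio is an assumed value, not necessarily this paper's exact constraint):

```python
import numpy as np

def project_kernel(k, prune_ratio=0.05):
    """Common post-processing for an estimated BK (a sketch, not necessarily
    the paper's exact constraint): non-negativity, pruning of tiny entries,
    and normalization so that the kernel sums to one."""
    k = np.maximum(k, 0.0)                      # enforce k >= 0
    k[k < prune_ratio * k.max()] = 0.0          # remove weak, noisy responses
    s = k.sum()
    return k / s if s > 0 else k
```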
4. Experimental Results
We compare our method with the methods of [3,7,8,11] to verify its superiority (Table 1).
Table 1. Average PSNR and mean SSIM of all methods on all 704 artificially blurred images
4.1 The Artificial Image Dataset Experiments
In the experiments on artificial image datasets, we used three different datasets [22-24], with 704 artificially blurred images in total. Fig. 1 and Table 2 show the deconvolution error ratio of all methods on all 704 artificially blurred images.
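For reference, the deconvolution error ratio used on benchmarks of this kind is commonly computed as the reconstruction error obtained with the estimated BK divided by the error obtained with the ground-truth BK; the sketch below states this assumed definition explicitly:

```python
import numpy as np

def error_ratio(x_blind, x_oracle, x_true):
    """Deconvolution error ratio as commonly defined on these benchmarks
    (an assumption about the exact definition): SSD error of the result
    restored with the estimated BK over the SSD error of the result
    restored with the ground-truth BK. Ratios near 1 are near-oracle."""
    err_blind = np.sum((x_blind - x_true) ** 2)
    err_oracle = np.sum((x_oracle - x_true) ** 2)
    return err_blind / err_oracle
```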
Fig. 1. Cumulative histogram of the deconvolution error ratio of all methods on all 704 artificially blurred images.
Table 2. Statistical percentages (%) of the deconvolution error ratio
Fig. 2. Artificially blurred image experiments (campsite): (a) original clear image, (b) corresponding blurred image, (c)-(f) results of the methods in [3], [7], [8], and [11], respectively, and (g) the result of the proposed method.
Fig. 3. Artificially blurred image experiments (terrace view): (a) original clear image, (b) corresponding blurred image, (c)-(f) results of the methods in [3], [7], [8], and [11], respectively, and (g) the result of the proposed method.
The estimated BKs and the final deblurred images produced by the methods of [3,7,8,11] all show obvious defects to different degrees: kernel defects such as divergence, discontinuity, and smearing eventually lead to ringing, over-smoothing, and noise in the final deblurred images. By contrast, the proposed method not only estimates the most accurate BK (with good continuity and sparsity), but its restored images also have sharper edge details and suppress these various blemishes very well (Figs. 2 and 3).
4.2 Experiments on Real Images
Next, we conduct experiments on real blurry images (Fig. 4).
Fig. 4. Real blurred image experiments (licence plate): (a) the real blurry image, (b) the restored result and its magnified area for the method of [3], (c)-(e) the results of [7], [8], and [11], respectively, and (f) the result of our method.
Fig. 5 shows that, among the estimated BKs, those estimated by [7] and [8] have obvious expansion and tailing defects, the BK estimated by [3] is concentrated at a single point, and the BK estimated by [11] has obvious discontinuities. In the final restored images obtained by the methods of [3,7,8], obvious trailing-shadow defects can be seen at the highlight on the eyeball, along with color dispersion and blurring. Although the restoration result of [11] shows no obvious highlight smear, it is still somewhat blurred. In contrast, the method proposed in this paper estimates a more accurate BK and obtains the highest-quality restored image; see the corresponding magnified areas in Fig. 5(b)-5(f).
In Fig. 6, the BK estimated by [3] exhibits serious distortion and many flaws, the BK estimated by [7] has low resolution, and the BKs estimated by [8] and [11] show blur and discontinuity, respectively. In addition, the image restored by [3] is still blurry, the image restored by [7] shows color diffusion in the details, and the images restored by [8] and [11] suffer from blurring, with ripple-like flaws appearing in richly textured areas. See the corresponding magnified areas in Fig. 6(b)-6(f).
Fig. 5. Real blurred image experiments (face picture): (a) the real blurry image, (b) the restored result, estimated BK, and magnified area for the method of [3], (c)-(e) the results of [7], [8], and [11], respectively, and (f) the result of our method.
Fig. 6. Real blurred image experiments (dolls): (a) the real blurry image, (b) the restored result, estimated BK, and magnified area for the method of [3], (c)-(e) the results of [7], [8], and [11], respectively, and (f) the result of our method.
5. Conclusion
We propose a windowed-total-variation regularization constraint model for blind image deblurring. The proposed method relies on the spatial scale of image edges rather than their amplitude to accurately extract the useful edges needed for accurate BK estimation and high-quality image restoration. Extensive experiments demonstrate the superiority of our method.
Acknowledgement
All the authors thank D. Krishnan, J. S. Pan, and L. Chen for providing their code.