Ganghua Liu, Wei Tian, Yushun Luo, Juncheng Zou* and Shu Tang

A Windowed-Total-Variation Regularization Constraint Model for Blind Image Restoration

Abstract: Blind restoration of motion-blurred images remains a research hotspot, and the key to blind restoration is accurate blur kernel (BK) estimation. To achieve high-quality blind image restoration, this paper presents a novel windowed-total-variation method. The proposed method is based on the spatial scale of edges rather than their amplitude, so it can extract the image edges that are useful for accurate BK estimation and then recover high-quality clear images. Extensive experiments demonstrate its superiority.

Keywords: Edge Amplitude, Image Restoration, Kernel, Spatial Scale, Windowed-Total-Variation

1. Introduction

The problem of blind image restoration is to estimate the blur kernel (BK) k from a blurred image y, and then restore the clear image x. This process is modeled as:

(1)[TeX:] $$y=x * k+n$$

where * denotes convolution and n is additive noise.
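As a concrete illustration of the degradation model in Eq. (1), the sketch below synthesizes a blurred, noisy observation from a toy image and a normalized box kernel. The image, kernel size, and noise level are placeholders for illustration, not data from this paper.

```python
# Sketch of the degradation model y = x * k + n:
# x is the latent sharp image, k the blur kernel, n additive noise.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
x = rng.random((64, 64))                  # toy latent sharp image
k = np.ones((5, 5)) / 25.0                # simple box blur kernel, sum = 1
n = 0.01 * rng.standard_normal(x.shape)   # small Gaussian noise

y = convolve(x, k, mode='wrap') + n       # blurred, noisy observation
```

Blind restoration must recover both x and k from y alone, which is why the regularization terms introduced below are essential.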
Accurate estimation of the BK is the key to successful blind image restoration, and researchers have proposed many blind restoration methods in recent years. Shan et al. [1] proposed a piecewise function to estimate the BK in 2008. Almeida and Almeida [2] proposed an edge extraction filter to estimate the motion BK in 2010. In 2011, Krishnan et al. [3] combined the L1 norm and the L2 norm to estimate the BK. In 2013, Xu et al. [4] used the L0 norm to extract large-scale edges in an image, and then used those edges to estimate the BK. In 2015, Ma et al. [5] used a set of sparse priors to extract significant edge structures for BK estimation. In 2016, Zuo et al. [6] proposed an Lp norm to achieve accurate BK estimation. In 2016, Pan et al. [7] first introduced the dark channel prior (DCP) into blind image deblurring, achieving both accurate BK estimation and high-quality image restoration. In 2017, Pan et al. [8] introduced the L0 norm in both the gradient domain and the spatial domain. In 2019, Guo and Ma [9] proposed a local image block prior and used the extracted edges to guide BK estimation, obtaining better restoration results. In 2020, Chen et al. [10] combined the inherent structural correlation and spatial sparsity of images and used Laplacian priors to achieve blind image deblurring. In 2019, Chen et al. [11] found that blurring reduces the magnitude of the image gradient and proposed a blind restoration method based on the local maximum gradient (LMG). In 2020, Lim et al. [12] fused an L0-norm and L2-norm texture-aware prior to process remotely sensed blurred images; this method restores textured regions well. In 2020, Cai et al. [13] proposed DBCPeNet, a neural-network-based approach. In 2020, Wu et al. [14] proposed a network for video deblurring. In 2020, Zhang et al.
[15] used two GANs to learn deblurring: the first GAN learned how to blur clear images and then guided the second GAN to learn how to convert blurred images into clear ones. In 2020, Li et al. [16] found that, for images produced by learning-based methods, the peak signal-to-noise ratio (PSNR) cannot accurately reflect image quality, so they proposed a new image evaluation metric. In 2021, Ren et al. [17] presented a spatially-aware deblurring technique. In 2021, Ren et al. [18] found that the residuals caused by blur or other image degradation are spatially dependent and complex in distribution; they therefore trained on a set of blurred and sharp image pairs to parameterize and learn the regularization term of the restored image. Hu et al. [19] proposed a network for single-image reconstruction. Lu et al. [20] proposed an unsupervised deblurring method.

Here, a windowed-total-variation regularization constraint that achieves accurate estimation of the BK is proposed. Unlike existing methods, the windowed-total-variation regularization constraint is built from the spatial scale of image edges rather than their magnitude, so it achieves more accurate BK estimation and higher-quality image restoration.

2. The Proposed Windowed-Total-Variation Constraint

In [21], a relative total variation model (RTVM) was proposed for structure extraction. Our method builds on the RTVM, which is shown in formula (2):
(2)[TeX:] $$r(i)=\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}+\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}$$

The differences between formula (2) and the RTVM in [21] are: the weight in [21] is removed, and squares are used instead of absolute values. N(i) denotes an image block centered on pixel i, [TeX:] $$\nabla_{h} x \text { and } \nabla_{v} x$$ are the discrete first-order difference operators in the horizontal and vertical directions, respectively, and [TeX:] $$\varepsilon$$ is a small positive number. From formula (2) it can be seen that if the width of N(i) is set equal to that of the BK, then minimizing formula (2) extracts the edges whose spatial scale is larger than the BK, regardless of their magnitude. Therefore, to estimate the BK accurately, this paper presents a windowed-total-variation constraint model, shown in formula (3):

(3)[TeX:] $$\min _{x, k}\|x * k-y\|_{2}^{2}+\lambda_{u} R(x)+\lambda_{k}\|k\|_{p}$$
where [TeX:] $$R(x)=\sum_{i} r(i),\|\cdot\|_{p} \text { and }\|\cdot\|_{2}$$ denote the Lp norm and the L2 norm, respectively, and [TeX:] $$\lambda_{u} \text { and } \lambda_{k}$$ are regularization parameters. From Eq. (3) we can see that R(x) is a windowed-total-variation regularization term, which extracts only the useful large-scale edges, while [TeX:] $$\|k\|_{p}$$ is a sparsity term that preserves the sparsity of the BK. Section 3 discusses how to solve the proposed windowed-total-variation regularization constraint model in detail.

3. The Solution of the Proposed Windowed-Total-Variation Method

In this section, we adopt an alternating iterative algorithm that splits model (3) into two sub-problems, one in x and one in k.

3.1 Solving the x Sub-problem

To solve the x sub-problem, we fix k, and formula (3) becomes:

(4)[TeX:] $$\min _{x}\|x * k-y\|_{2}^{2}+\lambda_{u} R(x)$$
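The windowed measure r(i) of Eq. (2) can be evaluated with box filters. The sketch below is a plain NumPy/SciPy rendering under stated assumptions: forward differences for the gradients, a square window of width `win` (which would be matched to the BK size), and a uniform filter for the windowed sums. It is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_tv_measure(x, win=5, eps=1e-3):
    """Per-pixel measure r(i) from Eq. (2): windowed sum of squared
    gradients over the squared windowed sum of gradients, for both
    the horizontal and vertical directions."""
    gh = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward difference
    gv = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward difference
    area = win * win
    # uniform_filter returns windowed means; multiply by the window
    # area to obtain windowed sums over N(i).
    num_h = uniform_filter(gh ** 2, size=win) * area
    den_h = (uniform_filter(gh, size=win) * area) ** 2 + eps
    num_v = uniform_filter(gv ** 2, size=win) * area
    den_v = (uniform_filter(gv, size=win) * area) ** 2 + eps
    return num_h / den_h + num_v / den_v
```

On a large-scale step edge the windowed gradients share one sign, so the denominator grows with the numerator and r(i) stays small; fine texture produces cancelling gradients, a small denominator, and hence a large penalty.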
Obviously, the key to solving formula (4) is the regularization term R(x). First, we reorganize the elements in [TeX:] $$\frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}$$ as follows:
(5)[TeX:] $$\sum_{i} \frac{\sum_{j \in N(i)}\left(\left(\nabla_{h} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{h} x\right)(j)\right)^{2}+\varepsilon}=\sum_{j} \sum_{i \in N(j)} \frac{1}{\left(\sum_{j^{\prime} \in N(i)}\left(\nabla_{h} x\right)\left(j^{\prime}\right)\right)^{2}+\varepsilon}\left(\left(\nabla_{h} x\right)(j)\right)^{2}=\sum_{j} w_{h}(j)\left(\left(\nabla_{h} x\right)(j)\right)^{2}$$

In the same way, for [TeX:] $$\frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon},$$ we can get:
(6)[TeX:] $$\sum_{i} \frac{\sum_{j \in N(i)}\left(\left(\nabla_{v} x\right)(j)\right)^{2}}{\left(\sum_{j \in N(i)}\left(\nabla_{v} x\right)(j)\right)^{2}+\varepsilon}=\sum_{j} \sum_{i \in N(j)} \frac{1}{\left(\sum_{j^{\prime} \in N(i)}\left(\nabla_{v} x\right)\left(j^{\prime}\right)\right)^{2}+\varepsilon}\left(\left(\nabla_{v} x\right)(j)\right)^{2}=\sum_{j} w_{v}(j)\left(\left(\nabla_{v} x\right)(j)\right)^{2}$$

Therefore, we can get formula (7):

(7)[TeX:] $$\min _{x}\|x * k-y\|_{2}^{2}+\lambda_{u} \sum_{j}\left(w_{h}(j)\left(\left(\nabla_{h} x\right)(j)\right)^{2}+w_{v}(j)\left(\left(\nabla_{v} x\right)(j)\right)^{2}\right)$$
where, in Eq. (8), [TeX:] $$W_{h}(j)=w_{h}(j) \text { and } W_{v}(j)=w_{v}(j),$$ and ∘ denotes element-wise (Hadamard) multiplication. Since the weights are fixed within each iteration, formula (7) is quadratic in x, and x can be solved in closed form using the fast Fourier transform (FFT).
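Following Eqs. (5) and (6), each weight w_h(j) sums, over every window N(i) containing j, the reciprocal of the squared windowed gradient sum; this double windowed sum can itself be evaluated with two box filters. The sketch below makes its own assumptions (forward differences, a uniform square window) and is illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reweighting_maps(x, win=5, eps=1e-3):
    """Weight maps w_h, w_v from Eqs. (5)-(6): for each pixel j, sum
    1 / ((windowed gradient sum at i)^2 + eps) over all windows N(i)
    that contain j -- itself another windowed (box-filter) sum."""
    gh = np.diff(x, axis=1, append=x[:, -1:])
    gv = np.diff(x, axis=0, append=x[-1:, :])
    area = win * win
    inv_h = 1.0 / ((uniform_filter(gh, size=win) * area) ** 2 + eps)
    inv_v = 1.0 / ((uniform_filter(gv, size=win) * area) ** 2 + eps)
    w_h = uniform_filter(inv_h, size=win) * area
    w_v = uniform_filter(inv_v, size=win) * area
    return w_h, w_v
```

These maps are recomputed from the current estimate of x at each outer iteration and then held fixed, which is what makes the x sub-problem quadratic.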
(8)[TeX:] $$x=F^{-1}\left(\frac{\overline{F(k)} \circ F(y)}{\overline{F(k)} \circ F(k)+\lambda_{u}\left(W_{h} \circ \overline{F\left(\nabla_{h}\right)} \circ F\left(\nabla_{h}\right)+W_{v} \circ \overline{F\left(\nabla_{v}\right)} \circ F\left(\nabla_{v}\right)\right)}\right)$$

where [TeX:] $$F(\cdot), F^{-1}(\cdot), \text { and } \overline{F(\cdot)}$$ denote the FFT, the inverse FFT, and the complex conjugate of the FFT, respectively.

3.2 Solving the k Sub-problem

To solve the k sub-problem, we fix x and solve for k in the gradient domain; formula (3) thus becomes:
(9)[TeX:] $$\min _{k}\left\|\left(\nabla_{h} x, \nabla_{v} x\right) * k-\left(\nabla_{h} y, \nabla_{v} y\right)\right\|_{2}^{2}+\lambda_{k}\|k\|_{p}$$

To promote the sparsity of BKs, we set p=1. To solve the k sub-problem efficiently, we introduce an auxiliary variable [TeX:] $$b_{k}$$ and get:
(10)[TeX:] $$\min _{k, b_{k}}\|\nabla x * k-\nabla y\|_{2}^{2}+\lambda_{k}\left\|b_{k}\right\|_{1}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$

where [TeX:] $$b_{k}$$ is the auxiliary variable and [TeX:] $$\beta$$ is a penalty parameter. Similarly, we solve Eq. (10) by alternating between the two sub-problems in [TeX:] $$b_{k}$$ and k. Fixing [TeX:] $$b_{k},$$ we solve k by:
(11)[TeX:] $$\min _{k}\left\|\left(\nabla_{h} x, \nabla_{v} x\right) * k-\left(\nabla_{h} y, \nabla_{v} y\right)\right\|_{2}^{2}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$So the same FFT as Eq. (8) can be directly used to solve k:
(12)[TeX:] $$k=F^{-1}\left(\frac{\overline{F\left(\nabla_{h} x\right)} \circ F\left(\nabla_{h} y\right)+\overline{F\left(\nabla_{v} x\right)} \circ F\left(\nabla_{v} y\right)+\beta F\left(b_{k}\right)}{\overline{F\left(\nabla_{h} x\right)} \circ F\left(\nabla_{h} x\right)+\overline{F\left(\nabla_{v} x\right)} \circ F\left(\nabla_{v} x\right)+\beta}\right)$$

Fixing k, we can get [TeX:] $$b_{k}$$ by:
(13)[TeX:] $$\min _{b_{k}} \lambda_{k}\left\|b_{k}\right\|_{1}+\beta\left\|b_{k}-k\right\|_{2}^{2}$$

Using the method in [6], Eq. (13) can be solved by:
(14)[TeX:] $$b_{k}=\operatorname{sign}(k) \cdot \max \left(|k|-\frac{\lambda_{k}}{2 \beta}, 0\right)$$

Finally, to obtain a meaningful BK, we constrain k to be non-negative and normalize it so that its elements sum to one.
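The shrinkage step for Eq. (13) and the final kernel projection are both element-wise and cheap. In this sketch the soft threshold is written as λ_k/(2β), the exact minimizer of λ_k‖b‖₁ + β‖b − k‖₂²; the projection (non-negativity plus unit-sum normalization) is the standard practice in blind-deblurring pipelines and is assumed here rather than taken verbatim from the paper.

```python
import numpy as np

def shrink(k, lam_k, beta):
    """Closed-form minimizer of lam_k*||b||_1 + beta*||b - k||_2^2:
    element-wise soft-thresholding with threshold lam_k / (2*beta)."""
    return np.sign(k) * np.maximum(np.abs(k) - lam_k / (2.0 * beta), 0.0)

def project_kernel(k):
    """Standard 'meaningful BK' projection (assumed): clip negative
    entries to zero and normalize the kernel to unit sum."""
    k = np.maximum(k, 0.0)
    s = k.sum()
    return k / s if s > 0 else k
```

For example, `shrink(np.array([0.5, -0.5, 0.05]), 0.1, 1.0)` applies a threshold of 0.05, zeroing the small entry while shrinking the large ones toward zero.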
4. Experimental Results

We compare our method with the methods of [3,7,8,11] to verify its superiority (Table 1).

Table 1.
4.1 The Artificial Image Dataset Experiments

In the artificial-image experiments, we used three different datasets [22-24], comprising 704 artificially blurred images in total. Fig. 1 and Table 2 show the deconvolution error ratio of all methods on all 704 artificially blurred images.

Table 2.
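The deconvolution error ratio is commonly defined, as in Levin et al.'s benchmark protocol, as the SSD of the blindly restored image divided by the SSD of the image deconvolved with the ground-truth kernel; the paper does not spell out its exact protocol, so the sketch below assumes that standard definition.

```python
import numpy as np

def error_ratio(x_est, x_gt_kernel, x_true):
    """Deconvolution error ratio: SSD of the blind restoration x_est
    over the SSD of x_gt_kernel, the restoration obtained with the
    ground-truth kernel. Ratios near 1 mean the blind estimate is
    almost as good as knowing the true kernel."""
    ssd_blind = np.sum((x_est - x_true) ** 2)
    ssd_known = np.sum((x_gt_kernel - x_true) ** 2)
    return ssd_blind / ssd_known
```

Cumulative curves of this ratio over a dataset (as in Fig. 1) show what fraction of images each method restores below a given error ratio.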
The BKs estimated by the methods of [3,7,8,11] and the corresponding deblurred images all show defects of varying degrees: BK defects such as divergence, discontinuity, and smearing eventually lead to ringing, over-smoothing, and noise in the final deblurred images. By contrast, the proposed method not only estimates the most accurate BK (with good continuity and sparsity), but its restored images also have sharper edge details and suppress the various blemishes very well (Figs. 2 and 3).

4.2 Experiments on Real Images

Next, we conduct experiments on real blurry images (Fig. 4). Fig. 5 shows that, among the estimated BKs, those of [7] and [8] have obvious expansion and tailing defects, the BK of [3] collapses to a single point, and the BK of [11] has obvious discontinuities. Moreover, in the restored images obtained by the methods of [3,7,8], obvious trailing-shadow defects are visible at the eyeball highlight, along with color dispersion and blurring. Although the result of [11] shows no obvious highlight smearing, it is still somewhat blurry. In contrast, the method proposed in this paper estimates a more accurate BK and obtains the highest-quality restored image; see the corresponding magnified areas in Fig. 5(b)-5(f). In Fig. 6, the BK estimated by [3] is seriously distorted and full of flaws, the BK estimated by [7] has low resolution, and the BKs estimated by [8] and [11] show blur and discontinuity, respectively. Moreover, the restored image of [3] is still blurry, that of [7] shows color diffusion in the details, and those of [8] and [11] suffer from blurring, with ripple artifacts in richly textured areas; see the corresponding magnified areas in Fig. 6(b)-6(f).

5. Conclusion

We propose a windowed-total-variation regularization constraint model for blind image deblurring. The proposed method uses the spatial scale of image edges rather than their amplitude to extract the useful edges for accurate BK estimation and high-quality image restoration. Extensive experiments demonstrate the superiority of our method.

Biography

Shu Tang
https://orcid.org/0000-0001-7517-7992
He received an M.E. degree in computer science from Chongqing University of Posts and Telecommunications, Chongqing, China, in 2007, and a Ph.D. degree from Chongqing University, China, in 2013. He is currently an associate professor in the College of Computer Science and Technology at Chongqing University of Posts and Telecommunications, China. His research interests include signal processing, image processing, and computer vision.

References