1. Introduction
Distributed compressed sensing (DCS) is a signal recovery framework for acquiring sparse signals in which sensing and compression are performed at the same time. It exploits the sparsity of the signals to recover them from the measurements [1,2].
The DCS model can be stated as
[TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}, \qquad(1)$$
where [TeX:] $$\mathbf{\Phi} \in \mathbb{R}^{M \times N}$$ is a random measurement matrix with [TeX:] $$M<N$$, so the system is under-determined. [TeX:] $$\mathbf{Y}=\left[\mathbf{y}_{1}, \mathbf{y}_{2}, \cdots, \mathbf{y}_{J}\right]$$ are the measurement vectors and [TeX:] $$\mathbf{X}=\left[\mathbf{x}_{1}, \mathbf{x}_{2}, \cdots, \mathbf{x}_{J}\right]$$ are the unknown sparse vectors. The signal vector [TeX:] $$\mathbf{x}_{i}$$ has [TeX:] $$K^{(i)}$$ non-zero components, with support cardinality [TeX:] $$\left|\operatorname{supp}\left(\mathbf{x}_{i}\right)\right|=\left\|\mathbf{x}_{i}\right\|_{0}=K^{(i)}$$. The goal of the DCS model is to reconstruct X from Y by solving
[TeX:] $$\min _{\mathbf{X}} \sum_{i=1}^{J}\left\|\mathbf{x}_{i}\right\|_{0} \quad \text { s.t. } \quad \mathbf{Y}=\mathbf{\Phi} \mathbf{X}, \qquad(2)$$
where [TeX:] $$\left\|\mathbf{x}_{i}\right\|_{0}$$ is the [TeX:] $$l_{0}$$-norm of [TeX:] $$\mathbf{x}_{i}$$, i.e., the number of its non-zero entries.
In practice, the DCS problem can be relaxed to the following approximation problem [3]:
[TeX:] $$\min _{\mathbf{X}}\|\mathbf{Y}-\mathbf{\Phi} \mathbf{X}\|_{F} \quad \text { s.t. } \quad\left\|\mathbf{x}_{i}\right\|_{0} \leq K, \quad i=1,2, \cdots, J, \qquad(3)$$
where the parameter K is an upper bound on the sparsity of each [TeX:] $$\mathbf{x}_{i}$$.
If the signals X are independent, the reconstruction problem decouples into J individual problems, and each signal can be reconstructed independently using the compressed sensing (CS) framework [4,5]. The more interesting case arises when the signals [TeX:] $$\mathbf{x}_{i}, i=1,2, \cdots, J$$ are correlated with each other; then X can be reconstructed jointly using DCS algorithms [6-10]. Moreover, it has been shown that reconstruction based on DCS can save about 30% of the measurements compared with applying CS to each source separately [6].
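For concreteness, the following Python sketch (all dimensions and names here are ours, chosen only for illustration) builds J channels that share a common sparse support, the kind of inter-signal correlation that DCS exploits, and measures them with a single random Gaussian matrix as in (1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: J correlated channels sharing one support of size K.
M, N, J, K = 64, 128, 3, 8

support = rng.choice(N, size=K, replace=False)
X = np.zeros((N, J))
X[support, :] = rng.standard_normal((K, J))     # shared support, distinct values

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
Y = Phi @ X                                     # Y = Phi X, with M < N (under-determined)
print(Y.shape)  # (64, 3): one measurement vector per channel
```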
Problem (3) is thought to be NP-hard [6]. Many pursuit algorithms [7-10] have been introduced to recover the signals with tractable complexity. It has been shown in [11] that an l1-norm constraint is sufficient to ensure the sparsest solution in many high-dimensional cases.
DCS was first defined by Duarte et al. [12], who proposed two different joint sparse models (JSMs). Subsequently, many algorithms were developed. A new DCS algorithm that exploits signal-to-signal correlation structures was introduced in [7]. Wakin et al. [13] proposed a simultaneous orthogonal matching pursuit (OMP) [14] method for DCS (named jointOMP), which reduces the number of measurements. Unfortunately, jointOMP has no backtracking mechanism, which degrades its recovery performance. To overcome this drawback, the subspace pursuit method for DCS (DCSSP) was proposed in [15]. A joint sparse recovery method called orthogonal subspace matching pursuit (OSMP) was proposed in [16]. Nevertheless, these algorithms share a common limitation: the signal sparsity must be known in advance, which is usually impractical. Recently, two new recovery methods, the forward-backward pursuit method for DCS (DCSFBP) and the backtracking-based adaptive OMP method for DCS (DCSBAOMP), were proposed [9,10]. In [17], the l1/l2-norm is used to enforce joint sparsity on the signals. However, the above methods do not take the structure of the signals, or of their representations, into account. In many cases each signal xj is structured in nature, e.g., block sparse: the non-zero coefficients occur in clusters. Block-sparse signals appear in many applications, including gene expression analysis [18] and equalization of sparse communication channels [19].
Block-sparse signal reconstruction algorithms have been introduced and investigated in the recent literature. Here, we focus on the DCS of block-sparse signals. For a given matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$, we reconstruct block-sparse signals X from [TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$. Here X with block size d can be written as
[TeX:] $$\mathbf{X}=\left[\mathbf{X}[1]^{T}, \mathbf{X}[2]^{T}, \cdots, \mathbf{X}[l]^{T}\right]^{T}, \qquad(4)$$
where [TeX:] $$\mathbf{X}[i]=\left(\mathbf{x}_{(i-1) d+1: i d, 1}, \mathbf{x}_{(i-1) d+1: i d, 2}, \cdots, \mathbf{x}_{(i-1) d+1: i d, J}\right) \in \mathbb{R}^{d \times J}$$ denotes the ith block matrix of length d and N = ld. The matrix X is said to be block K-sparse if X[i] has nonzero Euclidean norm for at most K indices i. Denote
[TeX:] $$\|\mathbf{X}\|_{2,0}=\sum_{i=1}^{l} I\left(\|\mathbf{X}[i]\|_{2}>0\right), \qquad(5)$$
where [TeX:] $$I\left(\|\mathbf{X}[i]\|_{2}>0\right)$$ is an indicator function. In this case, a block K-sparse matrix X can be defined by [TeX:] $$\|\mathbf{X}\|_{2,0} \leq K$$ [20]. The total sparsity of X is then [TeX:] $$K_{\text {total }}=K \times d \times J$$. Similarly to (4), the measurement matrix Φ is written in the block form
[TeX:] $$\mathbf{\Phi}=\left[\mathbf{\Phi}[1], \mathbf{\Phi}[2], \cdots, \mathbf{\Phi}[l]\right], \qquad(6)$$
where Φ[i] is a submatrix of size [TeX:] $$M \times d$$. The support of the coefficient matrix X is [TeX:] $$\Gamma(\mathbf{X})=\left\{i:\|\mathbf{X}[i]\|_{2} \neq 0\right\}$$ [20].
Here, the reconstruction problem can be formulated as the optimization problem
[TeX:] $$\min _{\mathbf{X}}\|\mathbf{X}\|_{2,0} \quad \text { s.t. } \quad \mathbf{Y}=\mathbf{\Phi} \mathbf{X}. \qquad(7)$$
This problem is also NP-hard. One natural idea is to replace the [TeX:] $$l_{2} / l_{0}$$-norm by the [TeX:] $$l_{2} / l_{1}$$-norm, that is,
[TeX:] $$\min _{\mathbf{X}}\|\mathbf{X}\|_{2,1} \quad \text { s.t. } \quad \mathbf{Y}=\mathbf{\Phi} \mathbf{X}, \qquad(8)$$
where [TeX:] $$\|\mathbf{X}\|_{2,1}=\sum_{i=1}^{l}\|\mathbf{X}[i]\|_{2}$$.
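Both mixed norms are straightforward to compute. The sketch below (function names are ours) evaluates [TeX:] $$\|\mathbf{X}\|_{2,0}$$ from (5) and [TeX:] $$\|\mathbf{X}\|_{2,1}$$ as defined above for a given block size d, taking the block 2-norm to be the largest singular value, in line with the convention stated in Section 2:

```python
import numpy as np

def block_norms(X, d):
    """2-norms (largest singular values) of the row-blocks X[i] of size d x J."""
    l = X.shape[0] // d
    return np.array([np.linalg.norm(X[i*d:(i+1)*d, :], ord=2) for i in range(l)])

def mixed_l20(X, d):
    """||X||_{2,0}: number of blocks with nonzero norm, cf. Eq. (5)."""
    return int(np.count_nonzero(block_norms(X, d) > 0))

def mixed_l21(X, d):
    """||X||_{2,1}: sum of block norms, as defined above."""
    return float(block_norms(X, d).sum())
```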
Specifically, the block compressive sampling matching pursuit (BCoSaMP), based on the compressive sampling matching pursuit (CoSaMP) [22], was proposed in [21]. Eldar et al. [23] proposed the block version of OMP (BOMP) for block-sparse signal recovery. In [24], a dictionary optimization problem for block-sparse signal reconstruction was investigated, but this method needs the maximum block length as prior information. Other approaches, such as iterative hard thresholding (IHT) [21], block subspace pursuit (BSP) [25], and block StOMP (BStOMP) [26], have also been investigated. However, most block algorithms deal with single-signal reconstruction.
Existing DCS algorithms do not take the block structure of the signals into account, and most existing block compressed sensing algorithms handle only a single channel. In this paper, by combining the DCS technique with the additional structure of block-sparse signals, we propose a new recovery method for the block DCS problem, called the backtracking-based adaptive OMP for block DCS (DCSBBAOMP). In contrast to most existing approaches, the new method recovers block-sparse signals without prior knowledge of the sparsity structure, and it can recover multiple sparse signals from their compressive measurements simultaneously. Numerical experiments, including recovery of random artificial sparse signals and real-life data, illustrate the desirable performance of the proposed method.
The rest of the paper is organized as follows. In Section 2, the block-sparse signal reconstruction method is presented. Experimental results comparing the proposed algorithm with DCSBAOMP, backtracking-based adaptive OMP for block-sparse signals (BBAOMP), DCSFBP, and DCSSP are presented in Section 3. Conclusions and future work are given in Section 4.
2. Proposed Algorithm for Block-Sparse Signals
As extensions of OMP, BBAOMP and DCSBAOMP were proposed by Qi et al. [27] and Zhang et al. [9], respectively. These algorithms first select the atom or atoms with the largest correlations between the measurement vectors and the residual. Since some atoms chosen in this stage may be wrong, a backtracking procedure is then used to refine the estimated support set. Next, both methods obtain a new residual using a least-squares fit. BBAOMP and DCSBAOMP do not need the signal sparsity as prior knowledge; they iterate until the residual is smaller than a threshold or the iteration count reaches the maximum number of iterations. The main difference between them is that BBAOMP is a single-channel method that deals with block-sparse signals, while DCSBAOMP is a multi-channel method. In this paper, we generalize the DCSBAOMP method to block-sparse signals.
Generalizing the DCSBAOMP algorithm to block-sparse signals, we obtain the DCSBBAOMP algorithm. In this method, the measurement matrix [TeX:] $$\Phi$$ is treated as a dictionary whose atoms are the block column matrices [TeX:] $$\Phi[i]$$. The algorithm first chooses a set of block indices [TeX:] $$C^{n}$$ whose correlations between the residual [TeX:] $$\operatorname{res}^{n-1}$$ and the block columns [TeX:] $$\Phi[i]$$ are not smaller than [TeX:] $$\mu_{1} \cdot \max _{j \in \Omega_{1}}\left\|\left\langle\operatorname{res}^{n-1}, \Phi[j]\right\rangle\right\|_{2}, \Omega_{1}=\{1,2, \cdots, l\}$$; it then removes the wrong block indices [TeX:] $$\Gamma^{n}$$, i.e., those i for which [TeX:] $$\left\|\mathbf{X}_{F}^{n}[i]\right\|_{2}$$ is not larger than [TeX:] $$\mu_{2} \cdot \max _{j \in C^{n}}\left\|\mathbf{X}_{F}^{n}[j]\right\|_{2}$$. The final estimated support set is identified after several iterations. [TeX:] $$\|\mathbf{X}\|_{2}$$ denotes the 2-norm of the matrix [TeX:] $$\mathbf{X}$$, which equals its largest singular value. The details of the DCSBBAOMP algorithm are given in Algorithm 1.
In this algorithm, μ1 is a constant that determines the number of block indices chosen at each iteration. When μ1 = 1, only one block index is selected; as μ1 becomes smaller, DCSBBAOMP selects more than one block index per iteration. A smaller μ1 thus selects more block indices at a time and speeds up the algorithm, although some of the selected block indices may be wrong. μ2 is a parameter determining the number of deleted block indices; a larger μ2 deletes fewer block indices and slows down the algorithm. In many experiments we found that the appropriate choice of μ1 and μ2 is the same as for BBAOMP and DCSBAOMP, so we do not discuss their tuning here and use the same values, μ1 = 0.4 and μ2 = 0.6. After updating the support set F, we generate a new residual using a least-squares fit. Because the block sparsity K is not known in advance, DCSBBAOMP is repeated until the residual resn is smaller than a threshold ε or the iteration count n reaches the maximum number of iterations nmax.
Algorithm 1. The DCSBBAOMP algorithm
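Since the tabulated pseudocode is not reproduced here, the following Python sketch (function and variable names are ours) implements the loop described above under our reading of the selection and deletion rules: pick blocks whose correlation with the residual is at least μ1 times the maximum, fit by least squares, delete blocks whose estimated energy is at most μ2 times the maximum over the newly chosen blocks, refit, and update the residual. The bookkeeping of Algorithm 1 may differ in detail.

```python
import numpy as np

def dcsbbaomp(Y, Phi, d, mu1=0.4, mu2=0.6, eps=1e-6, n_max=None):
    """Minimal sketch of the DCSBBAOMP loop: mu1-thresholded block selection,
    least-squares fit, mu2-thresholded backtracking deletion, residual update."""
    M, N = Phi.shape
    l = N // d
    n_max = M if n_max is None else n_max
    blocks = [Phi[:, i*d:(i+1)*d] for i in range(l)]
    F = set()                              # estimated block support
    res = Y.copy()
    X_hat = np.zeros((N, Y.shape[1]))

    def ls_fit(support):
        # least-squares fit of Y on the columns indexed by the block support
        cols = np.concatenate([np.arange(i*d, (i+1)*d) for i in sorted(support)])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], Y, rcond=None)
        X = np.zeros((N, Y.shape[1]))
        X[cols, :] = coef
        return X

    for n in range(n_max):
        # selection: block correlations with the residual (spectral norm)
        corr = np.array([np.linalg.norm(B.T @ res, ord=2) for B in blocks])
        C = set(np.flatnonzero(corr >= mu1 * corr.max()))
        F |= C
        X_hat = ls_fit(F)
        # backtracking: delete blocks with small estimated energy
        thr = mu2 * max(np.linalg.norm(X_hat[j*d:(j+1)*d, :], ord=2) for j in C)
        F = {i for i in F
             if np.linalg.norm(X_hat[i*d:(i+1)*d, :], ord=2) > thr}
        if not F:
            break
        X_hat = ls_fit(F)
        res = Y - Phi @ X_hat
        if np.linalg.norm(res) < eps:
            break
    return X_hat, sorted(F)
```

With the parameters used in Section 3 (μ1 = 0.4, μ2 = 0.6, ε = 10^-6, nmax = M), this sketch should recover the block support of the synthetic signals described below when M is sufficiently large relative to the total sparsity.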
3. Simulations and Results
In this section, several simulated experiments illustrate the performance of the proposed DCSBBAOMP method, compared with the DCSBAOMP, BBAOMP, DCSFBP, and DCSSP algorithms.
In each trial, the block-sparse signals [TeX:] $$\mathbf{X}$$ are generated artificially as follows: for a fixed sparsity K, we randomly choose the nonzero blocks. Each element in the nonzero blocks is drawn from the standard Gaussian distribution [TeX:] $$N(0,1)$$ and the elements of the other blocks are zero. The observation matrix is [TeX:] $$\mathbf{Y}=\mathbf{\Phi} \mathbf{X}$$, where the entries of the sensing matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$ are drawn independently from N(0,1). First, we run two experiments, varying the sparsity and the number of measurements, to compare the SNR and run time of DCSBBAOMP with those of DCSBAOMP, BBAOMP, DCSFBP, and DCSSP. Then we examine how the reconstruction performance changes when the block size is unknown. Finally, the proposed method is tested on electrocardiography (ECG) signals.
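A minimal sketch of this test-signal generation (names are ours; for illustration we use the small-sample setting of Section 3.1 with sparse_value = K × d = 30):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_block_sparse(N, d, K, J, rng):
    """Generate X with K nonzero blocks of size d; nonzero entries ~ N(0,1)."""
    l = N // d
    X = np.zeros((N, J))
    for i in rng.choice(l, size=K, replace=False):
        X[i*d:(i+1)*d, :] = rng.standard_normal((d, J))
    return X

M, N, d, K, J = 128, 256, 2, 15, 3       # K * d = sparse_value = 30
X = make_block_sparse(N, d, K, J, rng)
Phi = rng.standard_normal((M, N))        # Gaussian sensing matrix
Y = Phi @ X
```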
To evaluate recovery performance, we use the signal-to-noise ratio (SNR), defined as in [10]:
[TeX:] $$\mathrm{SNR}=10 \log _{10} \frac{\|\mathbf{X}\|_{F}^{2}}{\|\mathbf{X}-\hat{\mathbf{X}}\|_{F}^{2}}, \qquad(9)$$
where [TeX:] $$\mathbf{X}$$ denotes the original signals and [TeX:] $$\hat{\mathbf{X}}$$ the reconstructed signals.
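A direct transcription of Eq. (9) as reconstructed above (the helper name is ours):

```python
import numpy as np

def snr_db(X, X_hat):
    """SNR in dB: signal energy over reconstruction-error energy, cf. Eq. (9)."""
    return 10 * np.log10(np.linalg.norm(X)**2 / np.linalg.norm(X - X_hat)**2)
```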
Note that each test is repeated 100 times, and we report the average SNR and running time. In the following experiments, the proposed method uses [TeX:] $$\mu_{1}=0.4, \mu_{2}=0.6, n_{\mathrm{max}}=M, \varepsilon=10^{-6}$$ as the input parameters. For simplicity, in all the experiments the sparse_value on the x-axis is equal to [TeX:] $$K_{\text {total}} / J$$, that is,
[TeX:] $$\text { sparse\_value }=K_{\text {total }} / J=K \times d. \qquad(10)$$
3.1 The Recovery Performance versus Sparsity with Small Sample
In the first experiment, the recovery performance is observed as the sparse_value varies from 10 to 70 with fixed block size d=2 and M=128, N=256, J=3. The average SNR and run time are shown in Fig. 1.
As can be seen from Fig. 1(a), the SNR of all the algorithms decreases slightly as the sparse_value varies from 10 to 70. The DCSBAOMP and DCSBBAOMP algorithms obtain similarly good SNR here. When sparse_value > 60, the performance drops significantly, because the sparse_value is then close to M/2, beyond which recovery performance deteriorates [28]. Fig. 1(b) shows that the run time of all the algorithms increases as the sparse_value varies from 10 to 70. In particular, DCSBAOMP has by far the shortest run time among the five algorithms. The reason lies in two aspects: first, DCSBAOMP is a multi-channel algorithm, which speeds it up; second, DCSBAOMP uses the backtracking procedure, which leads to better reconstruction performance and shorter run time.
Fig. 1. Reconstruction results versus the sparse_value. The values on the x-axis denote the sparse_value of the signals and those on the y-axis represent the SNR (a) and run time (b).
Fig. 2. Reconstruction results versus the number of measurements M. The values on the x-axis denote the number of measurements M and those on the y-axis represent the SNR (a) and run time (b).
3.2 The Recovery Performance versus Number of Measurements
In this experiment, we compare the SNR and run time as the number of measurements M varies from 64 to 176, with N=256, sparse_value=30, J=3.
From Fig. 2, as the number of measurements M increases, the SNR of all five algorithms increases while the run time decreases. All the algorithms obtain similar SNR and run time for M > 90. For M < 90, the performance of our algorithm is better than that of the other algorithms except DCSFBP. Although DCSSP has the largest SNR in this experiment, its run time is not stable compared with the other algorithms.
3.3 The Recovery Performance versus Sparsity with Medium Sample
In this experiment, the recovery performance is observed as the sparse_value varies from 40 to 200 with fixed block size d=8 and M=512, N=1024, J=3. The average SNR and run time are shown in Fig. 3.
As can be seen from Fig. 3, the SNR curves decrease slightly and the run time curves increase as the sparse_value increases. The DCSSP algorithm clearly achieves the highest SNR, while the SNRs of the other four algorithms are roughly similar. However, the run time of DCSSP is the longest among these methods. Note that although the SNR of our method is not the highest, our method is more computationally efficient than the others.
Fig. 3. Reconstruction results versus the sparse_value. The values on the x-axis denote the sparse_value and those on the y-axis represent the SNR (a) and run time (b).
3.4 The Recovery Performance versus Block Size
In this experiment, we compare the SNR and run time as the block size d varies from 4 to 16 with step size 4; M=512, N=1024, sparse_value=120, J=3 are fixed. Fig. 4 shows the experimental results.
From Fig. 4, we can see that the SNR and run time of all the algorithms barely change as the block size d varies from 4 to 16. That is, if the total sparsity of the block-sparse signals is fixed, the block size does not affect the performance of any of the algorithms.
Fig. 4. Reconstruction results versus the block size d. The values on the x-axis denote the block size d and those on the y-axis represent the SNR (a) and run time (b).
3.5 The Recovery Performance with Unknown Block Size
3.5.1 Sources generated with block size d = 8
In this trial, we test the performance of our algorithm when the block size d is unknown; M=512, N=1024, sparse_value=120, J=3 are fixed. We generate the sources with block size d=8. Note that the block size is unknown in the recovery process. We then vary the block size used in recovery from 2 to 26 with step size 2. The results are shown in Fig. 5.
Fig. 5. Reconstruction results with unknown block size. The values on the x-axis denote the block size d used in recovery and those on the y-axis represent the SNR (a) and run time (b); the sources are generated with block size d = 8.
3.5.2 Sources generated with block size d = 5
In this trial, we generate the sources with block size d=5 instead of d=8; M=512, N=1024, Ktotal=120, J=3 are fixed. We then vary the block size used in recovery from 3 to 16 with step size 1. The results are shown in Fig. 6.
Fig. 6. Reconstruction results with unknown block size. The values on the x-axis denote the block size d used in recovery and those on the y-axis represent the SNR (a) and run time (b); the sources are generated with block size d = 5.
From Figs. 5 and 6, when the block size chosen in our algorithm is a multiple of the real block size, we obtain much better reconstruction performance than with other block sizes. Generally speaking, if the real block size d of the sources is known in advance, our algorithm achieves the best performance in these experiments.
3.6 ECG Signals Recovery
In this subsection, we apply our algorithm to ECG data. Because DCSSP needs the signal sparsity to be known, we only compare the performance of DCSBBAOMP with that of DCSBAOMP, DCSFBP, DCSSAMP, and BBAOMP. We obtain the ECG data from PhysioNet [29]. In this experiment, three patients are chosen randomly as the source signals [TeX:] $$\mathbf{X}$$. Then we randomly generate one Gaussian matrix [TeX:] $$\Phi \in \mathbb{R}^{M \times N}$$, with N=1024, M=512. The signal generation process is the same as in [9]. From Fig. 7(a), we can see that the ECG data themselves are not sparse, so we apply the orthogonal Daubechies wavelet (db1) basis [TeX:] $$\Psi$$ to the ECG signals; Fig. 7(b) shows the transformed sparse signals. All of the algorithms are then applied to recover [TeX:] $$\theta \text { from } \mathbf{Y}=\mathbf{\Phi} \mathbf{X}=\mathbf{\Phi} \mathbf{\Psi} \boldsymbol{\theta}$$. Fig. 7(c) depicts the recovered signals [TeX:] $$\tilde{\mathbf{X}}$$ and Fig. 7(d) shows the corresponding sparse signals [TeX:] $$\tilde{\theta}$$. The performance and run time are shown in Table 1.
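The sketch below shows one way to build an orthonormal db1 (Haar) analysis matrix W with PyWavelets, so that θ = Wx and x = Ψθ with Ψ = W^T; the ECG trace is replaced by a random placeholder rather than the PTB data, and this construction is ours, not necessarily the one used in [9]:

```python
import numpy as np
import pywt

def wavelet_analysis_matrix(N, wavelet="db1"):
    """Build the orthonormal analysis matrix W (theta = W @ x) by applying a
    periodized orthogonal DWT to each column of the identity matrix."""
    W = np.empty((N, N))
    for i in range(N):
        e = np.zeros(N)
        e[i] = 1.0
        coeffs = pywt.wavedec(e, wavelet, mode="periodization")
        W[:, i] = np.concatenate(coeffs)
    return W

N = 1024
W = wavelet_analysis_matrix(N)      # analysis operator
Psi = W.T                           # synthesis basis: x = Psi @ theta
x = np.random.randn(N)              # placeholder for an ECG trace
theta = W @ x                       # sparse representation (for real ECG data)
assert np.allclose(x, Psi @ theta)  # db1 with periodization is orthonormal
```

Given Φ, the recovery algorithms can then be applied to the combined matrix ΦΨ to estimate θ̃, and the signal estimate is X̃ = Ψθ̃.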
Table 1. Average reconstruction SNR and run time of block-sparse signals using different methods
From Table 1, one can see that BBAOMP performs best, but it needs more time to recover the signals since it is a single-channel method. Except for BBAOMP, our algorithm obtains the highest average SNR, with a run time similar to the other methods.
Fig. 7. The electrocardiography (ECG) signals in channel no. 1 of three patients selected randomly from the PTB Diagnostic ECG Database: (a) the original signals [TeX:] $$\mathbf{X}$$, (b) [TeX:] $$\theta$$ with orthogonal Daubechies wavelets (db1), (c) [TeX:] $$\tilde{\mathbf{X}}$$ recovered by our algorithm, and (d) [TeX:] $$\tilde{\boldsymbol{\theta}}$$ recovered by our algorithm.
4. Conclusion
In this paper, a DCSBBAOMP method for the recovery of block-sparse signals is proposed. The method first chooses atoms adaptively and then uses a backtracking procedure to remove atoms that were wrongly chosen in the previous step, which improves the reconstruction quality. The most useful advantage of the proposed algorithm is that it can recover multiple sparse signals from their compressive measurements simultaneously; moreover, it does not need the block sparsity as prior knowledge. Simulation results demonstrate that our method achieves much better reconstruction quality than many existing algorithms.
The two parameters μ1 and μ2 play a key role in our method and provide some flexibility between reconstruction quality and computational complexity. However, there is no theoretical guidance on how to select μ1 and μ2. In addition, theoretical guarantees that the proposed method accurately recovers the original signals have not been proved. Future work includes a theoretical analysis of exact reconstruction for the proposed algorithm and the selection of the parameters μ1 and μ2.
Acknowledgement
This work was supported by the Natural Science Foundation of China (No. 61601417).