1. Introduction
Massive multiple-input multiple-output (MIMO) systems are equipped with a large number of antennas at a base station. This method can effectively improve transmission and spectrum efficiency. As the number of antennas at the base station tends to infinity, the influence of thermal noise on the system can be ignored [1-3]. To obtain these benefits, the base station must perform a precoding process on the transmitted signal, which requires the base station to know the downlink channel state information (CSI) in advance. The structure of the transmitted symbols in pilot-based channel estimation is shown in Fig. 1. To ensure the accuracy of channel estimation, the proportion of the pilot signal in the data frame should increase with the number of antennas, resulting in a lower communication efficiency. Furthermore, it is very difficult for mobile users to estimate the downlink channel with massive antennas in real time. Therefore, research on channel estimation algorithms with low pilot overhead and high accuracy at mobile user terminals has attracted considerable attention in recent years.
Experimental results on channel measurement of actual communication systems have shown that the channels of massive MIMO systems exhibit sparse or nearly sparse characteristics in many cases [4,5]. Leveraging these sparse characteristics, compressed sensing-based channel estimation algorithms can estimate the CSI of a massive MIMO system with shorter pilot sequences [6-8]. This approach has the advantage of low computational complexity. The matching pursuit (MP) and orthogonal matching pursuit (OMP) algorithms are commonly used to perform sparse channel recovery. Although their computational complexity is low, the accuracy of their estimations cannot meet the requirements of massive MIMO systems. Therefore, some modified algorithms have been proposed to improve estimation accuracy by exploiting the joint sparse feature of the channel [9-11]; however, their application environment involves very strict restrictions.
Fig. 1. Frame containing pilot and data symbols; the pilot symbol block is used to estimate the channel.
MP is an iterative algorithm in which the estimated values obtained in different iterations contribute differently to the final estimation result. However, the original MP algorithm does not distinguish between the estimated values obtained in different iterations, resulting in a low estimation accuracy. To improve the accuracy of channel estimation, the iterations can be divided into different stages [12]. The number of selected indices remains constant over the iterations of a stage and successively increases with a fixed step size from one stage to the next. Similarly, an algorithm called the variable metric method based on gradient pursuit (VMMGP) was proposed, which applies the gradient descent algorithm to channel estimation [13]. The estimated elements obtained in different iterations are multiplied by weight values to build the final estimation result. The weight values are calculated based on the residual signal and the gradient of the objective function. In [14], an adaptive weighted OMP (AWOMP) algorithm was proposed, in which the reciprocals of the residual signal in each iteration were used as weight values to improve accuracy. Observing its accuracy over a larger range of signal-to-noise ratio (SNR) values reveals that the weighted algorithm improves the accuracy of channel estimation at low SNR but degrades it at high SNR. Therefore, an adaptive weighting and MP (AWMP) algorithm was proposed in [15] to weight the channel estimations obtained in each iteration under conditions of low SNR and to use the original MP algorithm under conditions of high SNR.
The weighted algorithms discussed above have the following disadvantages. First, the weight values used in these weighted algorithms are not optimal. Second, the method for calculating weight values based on residual signals is imprecise. The residual signal is related not only to noise but also to the representation error of the measurement signal. In the case of low SNR, noise plays a dominant role. A small weight value mainly suppresses noise, thus improving accuracy. In the case of high SNR, noise is no longer the dominant factor; hence, a small weight value mainly suppresses the useful measurement signal and degrades the estimation accuracy. Finally, accurately estimating the SNR is very difficult in actual communication systems; it is thus challenging to select different channel estimation algorithms according to the SNR level.
To solve these problems, in this study, we propose an improved weighted MP algorithm based on compressed sensing. The algorithm assigns weight values to estimated elements obtained in different iterations and then builds the final estimated channel. The weight value is obtained by training the channel data and can be changed as the number of iterations increases. The results of simulations conducted to evaluate the proposed approach show that the channel estimation accuracy is improved compared to that of existing methods under conditions of both high and low SNR.
2. System Model
2.1 Massive MIMO
The channel between a mobile user and a base station in a massive MIMO system can be represented by a high-dimensional vector. The elements of the vector are often assumed to be Gaussian random variables with zero mean and unit variance. Considering a massive MIMO system with a base station equipped with [TeX:] $$N_{\mathrm{t}}$$ antennas, the channel between the base station and a mobile user with a single antenna can be represented as an [TeX:] $$N_{\mathrm{t}}$$-dimensional vector. If multiple users with a single antenna or a user with multiple antennas are considered, the channel is represented by a channel matrix, which can be vectorized to form a channel vector. For simplicity, we use an N-dimensional vector h to represent the channel, which may be an actual channel vector or a channel vector obtained by vectorization of a channel matrix. In the downlink, the pilot signals are transmitted from the base station to the mobile users. The pilot response signal y received by the mobile user can be expressed as

[TeX:] $$\mathbf{y}=\boldsymbol{\Phi} \mathbf{h}+\mathbf{n}$$ (1)
where [TeX:] $$\boldsymbol{\Phi}=\left[\boldsymbol{\Phi}(1), \boldsymbol{\Phi}(2), \ldots \boldsymbol{\Phi}\left(L_{\mathrm{p}}\right)\right]^{T}$$ is an [TeX:] $$L_{\mathrm{p}} \times N$$-dimensional pilot matrix, [TeX:] $$\boldsymbol{\Phi}(i)$$ represents the N-dimensional pilot vector transmitted by all transmit antennas at time i, and [TeX:] $$L_{\mathrm{p}}$$ is the length of the pilot sequence. n is an [TeX:] $$L_{\mathrm{p}} \times 1$$-dimensional additive white Gaussian noise vector, and each element of n obeys a Gaussian distribution with zero mean and variance [TeX:] $$\sigma^{2}$$.
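As a concrete illustration, the pilot measurement model can be simulated as follows. This is a minimal Python sketch; the dimensions, the Gaussian pilot matrix, the sparsity level, and the noise standard deviation are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256   # channel dimension (number of base-station antennas), illustrative
Lp = 64   # pilot sequence length, Lp << N

# Pilot matrix: row i is the pilot vector Phi(i) sent by all antennas at time i.
Phi = rng.standard_normal((Lp, N)) / np.sqrt(Lp)

# A sparse channel vector for illustration (13 non-zero taps).
h = np.zeros(N)
support = rng.choice(N, size=13, replace=False)
h[support] = rng.standard_normal(13)

# Additive white Gaussian noise with variance sigma^2.
sigma = 0.1
n = sigma * rng.standard_normal(Lp)

# Received pilot response: y = Phi @ h + n.
y = Phi @ h + n
```

The mobile user observes only y and knows Phi; the estimation task in the remainder of the paper is to recover h from this short measurement.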
If the actual channel does not exhibit sparse features, it can be converted to a sparse form using a sparse dictionary. For convenience, only channels with a sparse form are considered herein. In traditional channel models, it is often assumed that the elements of the channel are subject to a Gaussian distribution; however, this assumption may not hold for the actual channels in practice. In the following subsection, a universal channel model is introduced, which can be applied to sparse channels with any probability distribution, including an unknown distribution.
2.2 Sparse Channel Model
If most elements of a signal vector are zero or close to zero, we call the signal vector sparse. The number of non-zero elements in the sparse signal vector is called the sparsity degree, which is represented by K. In addition to the sparsity degree, it is necessary to introduce the probability distribution of the elements in the sparse signal vector. To adopt an identical model to describe sparse signals with different probability distributions, a universal model is introduced in Eq. (2) below.

[TeX:] $$\mathbf{h}=\mathbf{h}_{\mathrm{a}} \odot \mathbf{h}_{\mathrm{b}}$$ (2)
where h is an N-length sparse signal column vector, [TeX:] $$\odot$$ represents the Hadamard product, [TeX:] $$\mathbf{h}_{\mathrm{a}}$$ is a column vector of length N, and each element in [TeX:] $$\mathbf{h}_{\mathrm{a}}$$ is a random variable obeying a certain distribution. [TeX:] $$\mathbf{h}_{\mathrm{b}}$$ is also a random variable column vector of length N, and each element in [TeX:] $$\mathbf{h}_{\mathrm{b}}$$ obeys the Bernoulli distribution with the success probability parameter p. The value of parameter p is small, that is, the probability of elements in vector h being non-zero is low, which ensures the sparsity of channel h. The elements in [TeX:] $$\mathbf{h}_{\mathrm{a}}$$, regardless of the probability distribution, do not affect the sparse characteristics of the channel.
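The universal model is straightforward to sample. The sketch below uses N = 256 and p = 0.05, the parameters later used in the simulations; Gaussian amplitudes in h_a are one possible choice of distribution, since the model places no restriction on it.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 256, 0.05

h_a = rng.standard_normal(N)             # amplitudes; any distribution may be used
h_b = (rng.random(N) < p).astype(float)  # Bernoulli(p) support indicator
h = h_a * h_b                            # Hadamard product of h_a and h_b

K = int(np.count_nonzero(h_b))           # realized sparsity degree
```

Because p is small, roughly N*p = 12.8 elements of h are non-zero on average, regardless of the distribution chosen for h_a.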
According to the compressed sensing theory, the high-dimensional sparse vector h can be reconstructed using the measurement vector y, even in a noisy environment. Reconstruction requires the design of a measurement matrix and algorithm to estimate the non-zero positions of vector h and the values at those positions. The non-zero positions are determined by vector [TeX:] $$\mathbf{h}_{\mathrm{b}}$$, and the values at those positions are determined by vector [TeX:] $$\mathbf{h}_{a}$$. In the proposed improved weighted MP algorithm, the non-zero positions of h are first determined, and then the values at these non-zero positions are estimated. This is equivalent to estimating [TeX:] $$\mathbf{h}_{b}$$ first and then estimating [TeX:] $$\mathbf{h}_{\mathrm{a}}$$. For simplicity, we no longer specifically emphasize the channel estimation of [TeX:] $$\mathbf{h}_{a}$$ or [TeX:] $$\mathbf{h}_{\mathrm{b}}$$ in the subsequent sections.
3. Weighted MP Algorithm
Compressed sensing techniques can estimate a sparse channel with short pilot sequences. An N-dimensional sparse signal h can be reconstructed from M-dimensional random sample values [TeX:] $$\mathbf{y}=\mathbf{\Phi h}$$, where [TeX:] $$\boldsymbol{\Phi}$$ is an [TeX:] $$M \times N$$-dimensional measurement matrix (M<N). To ensure the accuracy of the reconstruction, the measurement matrix is required to satisfy the restricted isometry property (RIP). When the condition [TeX:] $$M \geq K \log \left(\frac{N}{K}\right)$$ is satisfied, where K is the sparsity degree of the sparse signal, the original sparse signal can be reconstructed using Eq. (3) below, even in a noisy environment [16].

[TeX:] $$\hat{\mathbf{h}}=\arg \min _{\mathbf{h}}\|\mathbf{y}-\boldsymbol{\Phi} \mathbf{h}\|_{\ell_{2}} \text { s.t. }\|\mathbf{h}\|_{\ell_{0}} \leq K$$ (3)
where [TeX:] $$\|\cdot\|_{\ell_{0}} \text { denotes } \ell_{0}$$-norm, and [TeX:] $$\|\cdot\|_{\ell_{2}}$$ denotes [TeX:] $$\ell_{2}$$-norm.
The MP algorithm can be used to obtain a solution to Eq. (3). Only one element of the sparse signal is estimated in each iteration. All the estimated elements obtained in the current and previous iterations are stacked together according to their respective positions to construct the final estimated signal.
3.1 Weight Value Analysis
If the estimated element obtained in the initial iteration contains an error, this discrepancy causes the estimated elements obtained in the subsequent iterations to contain a larger error; this phenomenon is referred to as error propagation. The estimated element [TeX:] $$\hat{\mathbf{h}}_{i}$$ obtained in the i-th iteration consists of two parts: the true value [TeX:] $$\mathbf{h}_{i}$$ and the estimation error [TeX:] $$\mathbf{e}_{i}$$. According to the proportional relationship between the true value and the estimation error, a weight value is assigned to the estimated element. The weight value is calculated as follows.
Because the true value and error cannot be accurately known in practice, it is impossible to calculate the weight value [TeX:] $$u_{\mathrm{i}}$$. However, the range of the weight value can be determined. When the SNR is high, the error caused by noise is relatively small; thus, the weight value should be larger. The maximum value of the weight, [TeX:] $$u_{\mathrm{i}}=1$$, is obtained when the noise approaches 0. When the SNR is low, the error caused by noise is large, and the weight should therefore be smaller. The minimum value of the weight, [TeX:] $$u_{\mathrm{i}}=0$$, is obtained when the noise tends to infinity. Therefore, the actual weight value falls within the range 0–1. In addition, the number of iterations affects the weight value. The non-zero elements in the sparse vector are estimated in descending order according to the modulus value of the original MP algorithm. The estimated value of each element is calculated as the inner product of the residual signal and the atom (the column of the measurement matrix [TeX:] $$\boldsymbol{\Phi}$$). As the number of iterations increases, the residual signal sequentially removes the contribution of the estimated elements. Therefore, the modulus of the true value and the error both decrease gradually with the number of iterations.
The estimation error [TeX:] $$\mathbf{e}_{i}$$ consists of two parts and can be expressed as

[TeX:] $$\mathbf{e}_{i}=\mathbf{a}_{i}+\mathbf{b}_{i}$$

where [TeX:] $$\mathbf{a}_{i}$$ is the error caused by using atoms to approximate the i-th residual signal, and [TeX:] $$\mathbf{b}_{i}$$ is the error caused by the noise in the i-th iteration. The elements of the measurement matrix are independent and identically distributed Gaussian variables. Hence, the projections of the noise on each atom are approximately equal; that is, [TeX:] $$\mathbf{b}_{i}$$ can be considered independent of iteration i. In comparison, [TeX:] $$\mathbf{a}_{i}$$ decreases as the number of iterations increases. By observing the variation curves of the actual signals [TeX:] $$\mathbf{a}_{i}$$ and [TeX:] $$\mathbf{h}_{i}$$, it is found that [TeX:] $$\mathbf{a}_{i}$$ attenuates faster than [TeX:] $$\mathbf{h}_{i}$$. Therefore, the weight value should increase with successive iterations.
The growth curve of the weight value over successive iterations is affected by the probability distribution and attenuation speed of the channel element, as well as by noise contamination. As it is difficult to obtain an accurate expression of the growth of the weight value, it is assumed that the weight value increases linearly for convenience of analysis. Therefore, the initial value and step of increase are sufficient to obtain the required weight values for each iteration. These parameters can be easily obtained from an actual channel environment. After training the channel data, a suitable weight value [TeX:] $$u_{\mathrm{op}}$$ can be obtained, which implies that the loss of estimation accuracy after weighting in the high-SNR region is reduced, whereas the gain of estimation accuracy in the low-SNR region is increased. Given a suitable weight [TeX:] $$u_{\mathrm{op}}$$, the weight increase at each step is calculated as [TeX:] $$\Delta u=2\left(1-u_{o p}\right) / K$$, where K is the number of iterations. The initial weight value is calculated as [TeX:] $$u_{0}=2 u_{\mathrm{op}}-1$$.
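Under this linear-growth assumption, the whole weight schedule follows from [TeX:] $$u_{\mathrm{op}}$$ and K alone. A small helper might look like this (the function name is ours; clamping to [0, 1] follows the constraint on weight values described in Section 3):

```python
def weight_schedule(u_op, K):
    """Linearly increasing weights whose mean is approximately u_op.

    u0 = 2*u_op - 1 is the initial weight and du = 2*(1 - u_op)/K the step,
    as given in the text; values are clamped to the valid range [0, 1].
    """
    u0 = 2.0 * u_op - 1.0
    du = 2.0 * (1.0 - u_op) / K
    return [min(1.0, max(0.0, u0 + k * du)) for k in range(K)]
```

For u_op = 0.8 and K = 13, this yields weights starting at 0.6 and rising by roughly 0.03 per iteration, matching the VWMP1 settings used later in Section 5.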
3.2 Algorithm Description
To improve the accuracy of the estimation, an improved weighted MP algorithm is proposed, in which a weight value [TeX:] $$u_{k}$$ is applied to the newly added non-zero element in each iteration. A flowchart of the algorithm is shown in Fig. 2.
Fig. 2. Flowchart of the proposed algorithm.
The main steps of the algorithm are described as follows.
(1) Parameter initialization
The parameter initialization operation is performed first. The residual signal [TeX:] $$\mathbf{r}_{0}=\mathbf{y}$$, the number of iterations k = 1, the support set (a set of non-zero positions of the sparse signal) is initialized as an empty set, i.e., [TeX:] $$\Lambda_{0}=\emptyset$$, the sparsity degree is set as K, the set of indices of the atoms [TeX:] $$\Omega=\{1,2, \ldots N\}$$, and the estimated signal [TeX:] $$\hat{\mathbf{h}}$$ is initialized as the zero vector [TeX:] $$\hat{\mathbf{h}}_{0}=\mathbf{0}$$.
(2) Select atom
By calculating the absolute value of the inner product between the residual signal and each atom, the index of the atom that has the maximum correlation with the current residual signal [TeX:] $$\mathbf{r}_{k-1}$$, denoted as [TeX:] $$\lambda(k)$$, is selected:

[TeX:] $$\lambda(k)=\arg \max _{i \in \Omega}\left|\left\langle\mathbf{r}_{k-1}, \boldsymbol{\Phi}_{i}\right\rangle\right|$$
Here, [TeX:] $$\left\langle\mathbf{r}_{k-1}, \boldsymbol{\Phi}_{i}\right\rangle$$ represents the inner product of the two vectors.
(3) Update support set
Combine the selected atom index [TeX:] $$\lambda(k)$$ with the support set [TeX:] $$\Lambda_{k-1}$$ obtained in previous iterations to form a new support set [TeX:] $$\Lambda_{k}$$, i.e., [TeX:] $$\Lambda_{k}=\Lambda_{k-1} \cup\{\lambda(k)\}$$.
(4) Calculate the value of [TeX:] $$\hat{\mathbf{h}}_{k}$$ at position [TeX:] $$\lambda(k)$$
Calculate the estimated value of [TeX:] $$\hat{\mathbf{h}}_{k}$$ at position [TeX:] $$\lambda(k)$$, [TeX:] $$\hat{\mathbf{h}}_{k}(\lambda(k))=\left\langle\mathbf{r}_{k-1}, \boldsymbol{\Phi}_{\lambda(k)}\right\rangle$$.
(5) Update the estimated signal [TeX:] $$\hat{\mathbf{h}}_{k}$$ after weighting the estimated value of [TeX:] $$\hat{\mathbf{h}}_{k}$$ at the position of [TeX:] $$\lambda(k)$$
Multiply [TeX:] $$\left\langle\mathbf{r}_{k-1}, \Phi_{\lambda(k)}\right\rangle$$ by the weight value [TeX:] $$\boldsymbol{u}_{k}$$ and substitute it into the corresponding position of [TeX:] $$\hat{\mathbf{h}}_{k}$$ to complete the update of the estimated signal [TeX:] $$\hat{\mathbf{h}}_{k}$$.
(6) Update the weight value
Update the weight value [TeX:] $$u_{k}$$ based on the adjustment value [TeX:] $$\Delta u$$.
The weight adjustment value [TeX:] $$\Delta u$$ in the algorithm may take a positive or a negative value. A positive value corresponds to an increase in weight value, while a negative value corresponds to a decrease in weight value. If the weight value is increased, the maximum value cannot be greater than 1, and if it is decreased, the minimum value cannot be less than 0.
(7) Update the residual signal [TeX:] $$\mathbf{r}_{k}$$ based on [TeX:] $$\hat{\mathbf{h}}_{k}$$.
If [TeX:] $$k \geq K$$, the iterative process is complete; otherwise, k = k + 1, and repeat steps (2)–(7) to perform the next iteration.
(8) Output the final estimation [TeX:] $$\hat{\mathbf{h}}_{K}$$.
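Steps (1)–(8) above can be sketched in Python as follows. This is a minimal illustration of the iteration, assuming unit-norm columns of Φ and a linear weight schedule; the function name and default weight parameters are ours, not the paper's.

```python
import numpy as np

def weighted_mp(y, Phi, K, u0=0.6, du=0.05):
    """Weighted MP following steps (1)-(8); assumes unit-norm columns in Phi."""
    N = Phi.shape[1]
    r = y.astype(float).copy()   # (1) residual r0 = y
    h_hat = np.zeros(N)          # (1) estimate initialized to the zero vector
    u = u0
    for k in range(K):
        corr = Phi.T @ r                    # inner products with all atoms
        lam = int(np.argmax(np.abs(corr)))  # (2)-(3) select atom index lambda(k)
        val = corr[lam]                     # (4) estimated value at position lam
        h_hat[lam] += u * val               # (5) weighted update of the estimate
        r = r - u * val * Phi[:, lam]       # (7) update the residual
        u = min(1.0, max(0.0, u + du))      # (6) step the weight, clamp to [0, 1]
    return h_hat                            # (8) final estimation
```

In a small noiseless test the weighted estimate steadily reduces the residual [TeX:] $$\|\mathbf{y}-\boldsymbol{\Phi} \hat{\mathbf{h}}\|$$; with noise, the smaller initial weight damps the contribution of the less reliable early iterations.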
4. Weighted OMP Algorithm
The concept of weighting is introduced into the OMP algorithm to verify whether the weighting method is valid for other MP algorithms. The weighted OMP algorithm with a fixed weight value is called the fixed-weighted OMP (FWOMP). The weighted OMP algorithm using a varying weight value is referred to as varying weighted OMP (VWOMP). The OMP algorithm differs from the MP algorithm in terms of the signal estimated in the current iteration. In the OMP algorithm, the sparse signal estimated in the current iteration already contains non-zero elements obtained in all previous iterations. The final estimated signal is not a superposition of the estimations obtained in different iterations. Therefore, the signal estimated in the current iteration is multiplied by the weight value first, and then removed from the previous residual signal to obtain the residual signal of the current iteration. The key steps of the weighted OMP algorithm are as follows.
(1) Parameter initialization
First, a parameter initialization operation is performed, in which the residual signal [TeX:] $$\mathbf{r}_{0}=\mathbf{y}$$, the number of iterations k = 1, the support set is initialized as an empty set, i.e., [TeX:] $$\Lambda_{0}=\emptyset$$, the sparsity degree is set as K, the set of indices of the atoms [TeX:] $$\Omega=\{1,2, \ldots N\}$$, and the estimated signal [TeX:] $$\hat{\mathbf{h}}$$ is initialized as the zero vector [TeX:] $$\hat{\mathbf{h}}_{0}=\mathbf{0}$$.
(2) Select atoms
By calculating the absolute value of the inner product between the residual signal and the atom, the corresponding index of the atom, denoted as [TeX:] $$\lambda(k)$$, which has the maximum correlation with the current residual signal [TeX:] $$\mathbf{r}_{k-1}$$, is selected.
(3) Update support set
Combine the selected atom index [TeX:] $$\lambda(k)$$ with the support set [TeX:] $$\Lambda_{k-1}$$ obtained in the previous iteration to form a new support set [TeX:] $$\Lambda_{k}$$ , that is, [TeX:] $$\Lambda_{k}=\Lambda_{k-1} \cup\{\lambda(k)\}$$.
(4) Calculate the value of [TeX:] $$\hat{\mathbf{h}}_{k}$$ at the positions in [TeX:] $$\Lambda_{k}$$ by solving a least-squares problem restricted to the atoms in the support set.
(5) Update the estimated signal after the weighting.
Multiply [TeX:] $$\hat{\mathbf{h}}_{k}$$ by the weight value [TeX:] $$u_{k}$$ to update [TeX:] $$\widehat{\boldsymbol{h}}_{k}$$.
(6) Update the weight value
Update the weight value [TeX:] $$u_{k}$$ based on the adjustment value [TeX:] $$\Delta u$$.
The weight adjustment value [TeX:] $$\Delta u$$ in the algorithm may take a positive or a negative value. A positive value corresponds to an increase in weight value, while a negative value corresponds to a decrease in weight value. If the weight value is increased, the maximum value cannot be greater than 1, and if it is decreased, the minimum value cannot be less than 0.
(7) Update the residual signal [TeX:] $$\mathbf{r}_{k}$$ based on the estimated signal [TeX:] $$\hat{\mathbf{h}}_{k}$$.
If [TeX:] $$k \geq K$$, the iterative process is complete; otherwise, k = k + 1, and repeat steps (2)–(7) to perform the next iteration.
(8) Output the final estimation [TeX:] $$\hat{\mathbf{h}}_{K}$$.
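The weighted OMP steps can be sketched analogously, with the least-squares estimate over the current support computed via `numpy.linalg.lstsq`; as before, the function name and default weight parameters are illustrative assumptions.

```python
import numpy as np

def weighted_omp(y, Phi, K, u0=0.8, du=0.01):
    """Weighted OMP: the full LS estimate on the support is scaled by u_k."""
    N = Phi.shape[1]
    r = y.astype(float).copy()       # (1) residual r0 = y
    support = []                     # (1) support set, initially empty
    h_hat = np.zeros(N)
    u = u0
    for k in range(K):
        lam = int(np.argmax(np.abs(Phi.T @ r)))  # (2) best-matching atom
        if lam not in support:
            support.append(lam)                  # (3) update the support set
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # (4) LS over the support
        h_hat[:] = 0.0
        h_hat[support] = u * coef                # (5) weight the current estimate
        u = min(1.0, max(0.0, u + du))           # (6) step and clamp the weight
        r = y - Phi @ h_hat                      # (7) residual from weighted h_hat
    return h_hat                                 # (8) final estimation
```

Note the key difference from weighted MP: the estimate of the current iteration already contains all previously selected positions, so the weight multiplies the whole least-squares solution before the residual is recomputed.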
5. Simulation Results
In this section, we present the results of simulations performed to evaluate the proposed approach. The accuracy of channel estimation was evaluated in terms of the normalized mean square error (NMSE), which was calculated using the following equation:

[TeX:] $$\mathrm{NMSE}=\frac{\|\hat{\mathbf{h}}-\mathbf{h}\|_{\ell_{2}}^{2}}{\|\mathbf{h}\|_{\ell_{2}}^{2}}$$
where [TeX:] $$\hat{\mathbf{h}}$$ and h are the estimated and actual channels, respectively.
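For reference, the NMSE, expressed in dB as plotted in the figures, can be computed with a few lines of Python; this is a sketch of the standard definition.

```python
import numpy as np

def nmse_db(h_hat, h):
    """Normalized mean square error ||h_hat - h||^2 / ||h||^2, in dB."""
    err = np.linalg.norm(h_hat - h) ** 2
    ref = np.linalg.norm(h) ** 2
    return 10.0 * np.log10(err / ref)
```

For example, an estimate that uniformly underestimates the channel by 10% corresponds to an NMSE of -20 dB.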
The proposed algorithm with a fixed weight value is called the fixed-weighted MP (FWMP) algorithm. A suitable value of [TeX:] $$u_{\mathrm{op}}$$ was obtained by training the channel data. The influence of different weight values on the channel estimation accuracy was determined first. The simulation parameters were set as follows. The length of vector [TeX:] $$\mathbf{h}_{a}$$ was 256. Each element of [TeX:] $$\mathbf{h}_{a}$$ was a normally distributed random variable with zero mean and unit variance. The length of vector [TeX:] $$\mathbf{h}_{b}$$ was also 256. Each element obeyed the Bernoulli distribution with a success probability of p = 0.05.
The estimation accuracy achieved by the FWMP algorithm using different weight values is shown in Fig. 3. The FWMP algorithm with a weight value u = 1 is the original MP algorithm. It can be observed that the NMSE first decreased and then increased as the weight value increased from 0.4 to 1.2. Compared with the original MP algorithm, the NMSE performance of the FWMP algorithm in the low-SNR regime was improved with [TeX:] $$u_{\text {op }}=0.8$$. In contrast, the NMSE performance was degraded in the high-SNR regime, because the channel noise has less influence on the measured signal under such conditions, and a weight value other than 1 distorts the estimated signal and degrades the estimation accuracy. After training with different weight values, the suitable value [TeX:] $$u_{\mathrm{op}}=0.8$$ was obtained for the specific channel with the model parameters given above.
Fig. 3. Comparison of NMSE performance of the FWMP algorithm with different weight values.
Because the estimation error varies between iterations, it is more reasonable to assign varying weight values to the estimated elements obtained in each iteration. This reconstruction algorithm is called varying weighted MP (VWMP). The VWMP algorithm is similar to the FWMP algorithm, except that the weight value changes between iterations. A comparison of the performance of the FWMP, VWMP, and original MP algorithms in the same simulation environment is shown in Fig. 4. The fixed weight value in the FWMP algorithm was set to 0.8. In the VWMP1 algorithm, the initial weight value was set as [TeX:] $$2 \times 0.8-1=0.6$$, and the step of the weight increase was calculated as [TeX:] $$\Delta u=2(1-0.8) / 13=0.03$$. To ensure that the weight value of the final iteration could increase to 1, [TeX:] $$\Delta u$$ was increased to 0.05. In the VWMP2 algorithm, the initial weight value was set to 1, and the weight value decreased by 0.05 in each iteration. For fairness, the mean weight values of the VWMP1 and VWMP2 algorithms were approximately equal. It can be observed from Fig. 4 that the VWMP1 algorithm exhibited superior NMSE performance compared to the MP algorithm, achieving a gain of approximately 1 dB in both the low- and high-SNR regimes. Compared with the FWMP algorithm (u = 0.8), the VWMP1 algorithm achieved a gain of approximately 5 dB under conditions of high SNR and similar NMSE performance in the low-SNR regime. The VWMP2 algorithm showed the worst performance in most cases, except for a weak performance improvement in the low-SNR regime. This shows that the proposed method of initializing the weight value and increasing it iteratively can improve the accuracy of the estimation under both low- and high-SNR conditions.
Fig. 4. Comparison of NMSE performance of different MP algorithms.
A suitable weight value was also obtained for the weighted OMP (WOMP) algorithm by learning the sparse channel data. The parameters of the sparse channel model were the same as those used for the weighted MP algorithm, except for the success probability p = 0.07. The estimation accuracy of the FWOMP algorithm with different weight values is shown in Fig. 5. It can be observed that the accuracy of the FWOMP algorithm first improved and then decreased as the weight value increased from 0.7 to 1.0, and a suitable weight value of [TeX:] $$u_{o p}$$ = 0.9 was obtained. It may also be noted from the simulation results that the FWOMP algorithm performed better than the original OMP at low SNR but worse at high SNR, because the weight value does not change with iterations. The accuracy of the FWOMP algorithm with the weight value u = 1.1 was inferior to that of the OMP algorithm in both the high- and low-SNR regimes because the weight value exceeded the maximum value of 1.
Fig. 5. Comparison of NMSE performance of the FWOMP algorithm with different weight values.
Fig. 6. Comparison of NMSE performance of different OMP algorithms: (a) low SNR and (b) high SNR.
A comparison of the performance of the FWOMP, VWOMP, gradient pursuit (GP) [13], AWOMP [14], and the original OMP algorithm is shown in Fig. 6. The FWOMP algorithm used a weight value of 0.9. In the VWOMP algorithm, the weight value was initialized as 0.8, and the weight increase step [TeX:] $$\Delta u$$ was set to 0.01. It can be observed from the simulation results that the GP and AWOMP algorithms showed slightly improved performance compared with the original OMP algorithm under low-SNR conditions, but exhibited reduced estimation accuracy under high-SNR conditions. Compared with the GP, AWOMP, and FWOMP algorithms with a weight value of 0.9, the VWOMP algorithm showed significantly improved accuracy at high SNR and almost the same performance at low SNR. This shows that the weighting method can be successfully applied to the OMP algorithm.
To further illustrate the performance of the proposed algorithm, a comparison of the bit error rate (BER) of the different algorithms is shown in Fig. 7. The success probability parameter was p = 0.09. Quadrature phase-shift keying modulation and a half-rate convolutional code with generator sequences G = [1011011, 1111001] were utilized. As shown in Fig. 7, the proposed weighted algorithms exceeded the other algorithms in terms of BER performance. This shows that the proposed algorithm obtains a more accurate channel estimate and thus a better BER.
Fig. 7. Comparison of BER performance of different algorithms.
6. Conclusion
To solve the problem of poor channel estimation accuracy in massive MIMO systems, we proposed an improved weighted channel estimation algorithm, in which the estimated signals obtained in each iteration are weighted to improve the estimation accuracy. The estimated signal is weighted to find the correct atom with greater probability and achieve a balanced performance between the low- and high-SNR regimes. Because the residual signal becomes increasingly smaller with successive iterations, increasing the weight values iteratively can further improve the estimation accuracy under both low- and high-SNR conditions. The validity of the improved weighted MP algorithm was verified by the simulation results.
Acknowledgement
This work was supported in part by the Science and Technology Breakthrough Project of the Henan Science and Technology Department (No. 192102210249, 192102210116, and 212102210470) and the Key Projects of Colleges and Universities in Henan Province (No. 19B510007 and 19A520006). It was also supported in part by the Science and Technology Development Plan of Henan Province (No. 2021023 10625).