Novel Lossless Compression Method for Hyperspectral Images Based on Variable Forgetting Factor Recursive Least Squares

Changguo Li and Fuquan Zhu

Abstract

Forgetting factor recursive least squares (FFRLS) is an effective lossless compression technique for hyperspectral images. However, the forgetting factor of the FFRLS algorithm is a predetermined fixed value that cannot be adjusted in real time, which can affect prediction accuracy. To address this problem, a new lossless compression method for hyperspectral images using variable forgetting factor recursive least squares was developed. The impact of the forgetting factor on the FFRLS algorithm was analyzed, and a forgetting factor adjustment function was constructed using the average of the posterior prediction residuals in a causal neighborhood as a variable to adjust the forgetting factor dynamically. The performance of this algorithm was verified using NASA's AIRS and CCSDS's 2006 AVIRIS images with minimum average bit rates of 3.66 and 4.07 bits per pixel, respectively. The experimental results show that the proposed algorithm improves prediction accuracy compared with the algorithm with a fixed forgetting factor and achieves better compression performance.

Keywords: Causal Neighborhood, Hyperspectral Image, Lossless Compression, Variable Forgetting Factor Recursive Least Squares

1. Introduction

Hyperspectral imaging is a new remote-sensing monitoring method that integrates spatial imagery with spectral information. Hyperspectral remote-sensing images not only provide spatial information, such as the size, shape, and orientation of ground objects, but also provide continuous spectral curves reflecting the material and component information of ground objects, which improves the accuracy and reliability of remote-sensing quantitative analysis. In recent years, hyperspectral image processing has become one of the most popular research directions in the field of remote sensing. It has practical value and development prospects in various application fields [1], such as the investigation of mine resources, investigation of crop growth and agricultural development, monitoring of serious environmental pollution, and early warning of natural disasters. However, with the large-scale demand for hyperspectral data and the diversified development of imaging sensors, the amount of hyperspectral image data collected has increased sharply. In addition, the continuous improvement of spatial and spectral resolution means that hyperspectral images exhibit a trend toward high dimensionality and massive data volumes, which considerably increases the burden of sampling, storage, and transmission and restricts the development of hyperspectral remote-sensing earth observation technologies. Thus, there is an urgent need to develop an efficient compression method. Lossless compression has naturally become the first choice for hyperspectral image compression to ensure the long-term preservation and application value of hyperspectral images.

Various algorithms for the compression of hyperspectral images that exploit spatial and spectral correlations have been proposed, such as transform coding [2,3], vector quantization [4], compressed sensing [5,6], neural networks [7-9], and predictive coding [10]. Because of its low complexity and flexibility in the design of lossless decoding and compression schemes, predictive coding has become one of the most important technologies. Clustered differential pulse code modulation with adaptive prediction length (C-DPCM-APL) [11] and clustered differential pulse code modulation with removal of local spectral outliers (C-DPCM-RLSO) [12] are classic predictive lossless compression algorithms. Similarly, adaptive filtering has been widely applied to the lossless compression of hyperspectral images. In an earlier study [13], the recursive least squares (RLS) algorithm was first applied to lossless hyperspectral image compression. The local difference between the current pixel and the local mean is computed; this local difference serves as the expected signal value, whereas the co-located differences in the previous bands form the input vector of RLS. The prediction is performed by multiplying the input vector by the weight vector, followed by rounding. The conventional recursive least squares (CRLS) [14] predictor expands its context window from 4 neighboring pixels to 24, and the optimal input vectors are found by an adaptive search of previous prediction bands to improve the prediction accuracy. The bimodal conventional recursive least squares (B-CRLS) algorithm [15] optimizes the input vectors of CRLS. It contains two types of input vectors: one formed only by spectral neighborhoods, and the other formed by spectral and spatial neighborhoods. In a prior study [16], the superpixel-based recursive least squares (SuperRLS) algorithm was introduced, which partitions the hyperspectral image into multiple small regions according to superpixel boundaries; the RLS predictor is then run on each region in parallel. In the same year, the fast recursive least squares based on adaptive length prediction (F-RLS-ALP) method [17], which exploits a property of the projection matrix of RLS, significantly accelerated the ALP operation while obtaining the same compression bit rate as the RLS-ALP algorithm. In another work [18], based on spectral vector clustering, band and predictor adaptive selection strategies were adopted with the CRLS algorithm (CRLS-ABS-APS). By enhancing the spectral correlations between the prediction reference bands, the compression performance was further optimized. Recently, a near-lossless recursive least squares (NLRLS) compression scheme [19] was proposed that supports both near-lossless and lossless compression modes for hyperspectral images.

The above analysis reveals that the lossless compression of hyperspectral images based on RLS has received widespread attention in recent years. The forgetting factor of RLS is used to weight old and new data during the compression process to achieve the desired prediction accuracy and compression results. In practice, the forgetting factor is usually a fixed value, and the resulting algorithm is referred to as forgetting factor recursive least squares (FFRLS). Consequently, the value of the forgetting factor cannot be adjusted in real time using the prediction residuals, which affects the compression performance for hyperspectral images. To address this issue, a variable forgetting factor (VFF) strategy is introduced, and a novel variable forgetting factor recursive least squares (VFFRLS) compression method is proposed. The VFF of the proposed algorithm is determined by the average of the posterior prediction residuals in the causal neighborhood of the current pixel and changes adaptively from pixel to pixel. This strengthens the real-time correlation between the VFF and the posterior prediction residuals, thereby improving the prediction accuracy and optimizing the compression performance. The proposed algorithm was validated using the National Aeronautics and Space Administration Atmospheric Infrared Sounder (AIRS) and the Consultative Committee for Space Data Systems 2006 Airborne Visible Infrared Imaging Spectrometer (AVIRIS) datasets.

The remainder of this paper is organized as follows. In Section 2, the VFFRLS algorithm for hyperspectral image lossless compression is described. The experimental results for two publicly available hyperspectral images in terms of the compression results and computing complexity are presented in Section 3. The main conclusions are summarized in Section 4.

2. Lossless Compression of Hyperspectral Images based on the VFFRLS Algorithm

2.1 FFRLS Algorithm

RLS is an adaptive filtering algorithm whose basic principle is to minimize the sum of squares of the difference between the output signal and the expected signal. Recursive operations are performed on the instantaneous input data so that the weight vector is optimally updated at each iteration. Thus, RLS has high prediction accuracy and has been widely applied in the field of hyperspectral image compression. FFRLS, an extended version of RLS, has received considerable attention [11-13]. The general computational procedure is described below. The predictive output and prior prediction residual of the model are given by Eq. (1):

(1)
[TeX:] $$\left\{\begin{array}{l} S_z(t)=\left[s_z(t-1), s_z(t-M), s_z(t-M-1), s_z(t-M+1), \ldots, s_{z-n}(t+M)\right] \\ \mu_z(t)=\left[\mu_z(t), \mu_z(t), \mu_z(t), \mu_z(t), \ldots, \mu_{z-n}(t)\right] \\ d_z(t)=S_z(t)-\mu_z(t) \\ e_z(t)=s_z(t)-{round}\left(d_z(t) W^T(t-1)+\mu_z(t)\right) \end{array}\right.$$

where M and N are the width and height, respectively, of the hyperspectral image, [TeX:] $$s_z(t)$$ is the pixel value at location (x, y) in band z, and t denotes the t-th spatial position, calculated as [TeX:] $$t=M x+y .$$ Moreover, [TeX:] $$\mu_z(t)$$ is the local mean, computed from the 24 neighboring pixels of the current pixel. In addition, [TeX:] $$S_z(t), \mu_z(t), \text { and } d_z(t)$$ are the pixel vector, local mean vector, and input vector, respectively, each of length 5n + 4, where n is the number of prediction bands. Finally, round(x) is a rounding function, and [TeX:] $$e_z(t)$$ is the prior prediction residual of [TeX:] $$s_z(t).$$ After this calculation, K(t), P(t), and W(t) are updated as shown in Eq. (2):

(2)
[TeX:] $$\left\{\begin{array}{c} K(t)=\frac{P(t-1) d_z^T(t)}{\lambda+d_z(t) P(t-1) d_z^T(t)} \\ P(t)=\frac{1}{\lambda}\left[I-K(t) d_z(t)\right] P(t-1) \\ W(t)=W(t-1)+K^T(t) e_z(t) \end{array}\right.$$

Here, λ is the fixed forgetting factor, with a value between 0 and 1, and I is the identity matrix of size 5n+4.
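To make the recursion concrete, the following minimal Python/NumPy sketch implements one FFRLS prediction and update step corresponding to Eqs. (1) and (2). The function name ffrls_step, the toy vector length, and the way the input vector is supplied are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of one FFRLS step (Eqs. (1)-(2)); names and toy sizes are assumptions.
import numpy as np

def ffrls_step(d, s, mu, W, P, lam=0.9995):
    """One FFRLS iteration for the current pixel.

    d   : (L,) input vector d_z(t) (local-mean-removed neighborhood values)
    s   : scalar, actual pixel value s_z(t)
    mu  : scalar, local mean mu_z(t) of the current pixel
    W   : (L,) weight vector W(t-1)
    P   : (L, L) inverse correlation matrix P(t-1)
    lam : fixed forgetting factor lambda
    Returns the prior residual e_z(t) and the updated W(t), P(t).
    """
    pred = np.rint(d @ W + mu)               # rounded prediction of s_z(t)
    e = s - pred                             # prior prediction residual e_z(t)
    Pd = P @ d
    K = Pd / (lam + d @ Pd)                  # gain vector K(t)
    P_new = (P - np.outer(K, d @ P)) / lam   # P(t) = (1/lam)[I - K(t) d_z(t)] P(t-1)
    W_new = W + K * e                        # W(t) = W(t-1) + K^T(t) e_z(t)
    return e, W_new, P_new

# Toy call with vector length 9 (5n + 4 with n = 1).
rng = np.random.default_rng(0)
d = rng.standard_normal(9)
e, W, P = ffrls_step(d, s=100, mu=98.0, W=np.zeros(9), P=np.eye(9))
```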

2.2 VFFRLS Algorithm

Among the many parameters of the FFRLS algorithm, λ is a crucial parameter that directly affects the compression results of hyperspectral images. When λ is 1, the compression results are affected by all the previous prior prediction residuals, and the RLS method degenerates to the LS method. When λ is 0, the compression result is only impacted by the current prior prediction residual. Analyzing Eqs. (1) and (2) reveals that the smaller the λ value, the larger the Kalman gain K(t), which increases the correction effect of the prior prediction residual of the past pixel on the prediction of the current pixel, resulting in a larger prior prediction residual of the current pixel. Similarly, if the λ value is larger, the prior prediction residual is smaller.

To assess the impact of different λ values on the prior prediction residual, the uncalibrated and calibrated 16-bit Yellowstone scene 0 images were selected as the test images. Using the FFRLS algorithm, the compression processes with forgetting factor values of 0.985, 0.9995, and 1 were obtained, as shown in Figs. 1–3, respectively. Figs. 1(a), 2(a), and 3(a) show the compression process for the 60th band of the uncalibrated Yellowstone scene 0, and Figs. 1(b), 2(b), and 3(b) show the compression process for the 60th band of the calibrated Yellowstone scene 0. A comparison shows that, when λ is 1, the prediction values cannot effectively follow the changes in the pixels, and the residuals fluctuate greatly for both the uncalibrated and the calibrated images; this value is therefore unsuitable for hyperspectral image compression. When λ is 0.985, only the uncalibrated Yellowstone scene 0 image displays good compression performance. Only when λ is 0.9995 do the compression results track the pixel changes for both images, and the prediction accuracy is satisfactory in this case.

In practical applications, pixel changes do not follow a fixed prediction trend, and the residuals in the calculation process change over time. Therefore, FFRLS can be improved by continuously adjusting the forgetting factor: adjusting it according to the change in the residuals at different moments can enhance the compression performance. When the residual is large, the prediction accuracy may not be high; in this case, the forgetting factor should be reduced appropriately to improve the prediction accuracy. When the residual is small, the prediction value approximates the actual one, and it is unnecessary to modify the forgetting factor substantially. Based on these analyses, a new VFF [TeX:] $$\lambda(t)$$ can be obtained.

Fig. 1.
Compression processes for the 60th band of (a) uncalibrated and (b) calibrated Yellowstone 0 images when the forgetting factor is 0.985.
Fig. 2.
Compression processes for the 60th band of (a) uncalibrated and (b) calibrated Yellowstone 0 images when the forgetting factor is 0.9995.
Fig. 3.
Compression processes for the 60th band of (a) uncalibrated and (b) calibrated Yellowstone 0 images when the forgetting factor is 1.

(3)
[TeX:] $$\left\{\begin{array}{l} \lambda(t)=\lambda_{\text {min }}+\left(1-\lambda_{\text {min }}\right) \cdot 2^{-L(t)} \\ L(t)=\operatorname{round}\left(\rho\left(\frac{\sum_{i=t-S+1}^t \varepsilon_z(i)}{S}\right)^2\right) \\ \varepsilon_z(t)=s_z(t)-\operatorname{round}\left(d_z(t) W^T(t)+\mu_z(t)\right) \end{array}\right.$$

where ρ and [TeX:] $$\lambda_\min$$ are fixed parameters representing the sensitivity gain and the minimum forgetting factor, respectively, and [TeX:] $$\varepsilon_z(t)$$ is the posterior prediction residual of [TeX:] $$s_z(t).$$ Using a single posterior prediction residual to correct the forgetting factor may introduce randomness and errors; therefore, the average of the posterior prediction residuals in the causal neighborhood of the current pixel is used instead. Here, S is the size of the causal neighborhood. Eq. (3) reveals that the VFF [TeX:] $$\lambda(t)$$ fluctuates between [TeX:] $$\lambda_\min$$ and 1. If the squared mean of the residuals approaches 0, [TeX:] $$\lambda(t)$$ approaches 1; if it approaches infinity, [TeX:] $$\lambda(t)$$ approaches [TeX:] $$\lambda_\min.$$ For any [TeX:] $$\varepsilon_z(t), \lambda_{\text {min }} \leq \lambda(t) \leq 1.$$ In summary, a new hyperspectral image compression scheme, VFFRLS, is obtained by introducing the VFF strategy.
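As a worked illustration of Eq. (3), the sketch below computes λ(t) from the most recent posterior prediction residuals. The function name, the list-based residual window, and the choice to divide by S even before the window is full are assumptions made for illustration; the parameter defaults follow Section 3.1.

```python
# Sketch of the variable forgetting factor of Eq. (3); names are assumptions.
import numpy as np

def variable_forgetting_factor(post_residuals, lam_min=0.9995, rho=10, S=16):
    """Compute lambda(t) from the last S posterior residuals eps_z(i)."""
    window = list(post_residuals)[-S:]                # causal neighborhood of size <= S
    mean_eps = sum(window) / S                        # average posterior residual
    L_t = np.rint(rho * mean_eps ** 2)                # L(t) = round(rho * mean^2)
    return lam_min + (1.0 - lam_min) * 2.0 ** (-L_t)  # lambda(t) in [lam_min, 1]

# Small residuals keep lambda(t) near 1; large residuals push it toward lam_min.
print(variable_forgetting_factor([1.0, -2.0, 0.5, 1.5]))   # close to 1
print(variable_forgetting_factor([40.0] * 16))              # close to lam_min
```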

Fig. 4.
Flowchart of the proposed algorithm implementation.

The forgetting factor can be adjusted to satisfy the compression expectations of the uncalibrated and calibrated hyperspectral images. Fig. 4 shows an implementation flowchart of the proposed algorithm. It is divided into two parts: the VFFRLS predictor and entropy encoding. During the implementation process, the input hyperspectral image is passed through the VFFRLS predictor point by point. Subsequently, the arithmetic encoder encodes the prior prediction residuals and outputs a compressed bit stream.
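The sketch below ties the pieces together for one band of a tiny synthetic cube, mimicking the pixel-by-pixel flow of Fig. 4. For brevity the input vector contains only the co-located pixels of the n previous bands and the local mean is omitted (both simplifications of Eq. (1)), and the arithmetic coder is replaced by simply collecting the prior residuals; all function names and defaults are illustrative assumptions.

```python
# Simplified end-to-end VFFRLS prediction loop for a single band (illustrative only).
import numpy as np

def vffrls_predict_band(cube, z, n=4, lam_min=0.9995, rho=10, S=16):
    """Return the prior residuals of band z predicted from bands z-n..z-1."""
    bands, height, width = cube.shape
    W = np.zeros(n)                          # weight vector W(0)
    P = np.eye(n)                            # P(0); the paper uses its own initialization
    lam = lam_min
    post_window = []                         # posterior residuals for Eq. (3)
    residuals = np.zeros((height, width), dtype=np.int64)

    for y in range(height):
        for x in range(width):
            d = cube[z - n:z, y, x].astype(float)          # simplified input vector d_z(t)
            s = float(cube[z, y, x])
            e = s - np.rint(d @ W)                         # prior residual e_z(t)
            Pd = P @ d
            K = Pd / (lam + d @ Pd)                        # gain with current lambda(t)
            P = (P - np.outer(K, d @ P)) / lam
            W = W + K * e
            eps = s - np.rint(d @ W)                       # posterior residual eps_z(t)
            post_window = (post_window + [eps])[-S:]
            L_t = np.rint(rho * (sum(post_window) / S) ** 2)
            lam = lam_min + (1 - lam_min) * 2.0 ** (-L_t)  # lambda(t), Eq. (3)
            residuals[y, x] = int(e)
    return residuals                          # these would be fed to the arithmetic coder

# Toy usage on a random 12-bit cube.
cube = np.random.randint(0, 4096, size=(8, 16, 16))
res = vffrls_predict_band(cube, z=6)
```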

3. Experimental Results

To verify the proposed hyperspectral image compression method based on VFFRLS, the following experiments were performed using images from AIRS and AVIRIS sensors. These images are available for download (http://cwe.ccsds.org/sls/docs/sls-dc/123.0-B-Info/TestData) and are summarized in Table 1.

Table 1.
Test datasets
3.1 Parameter Settings

The initial values of the algorithm in the experiments were set as follows. Referring to the parameters in a previous report [18], W(0) and P(0) were initialized to zero, [TeX:] $$\lambda, \lambda(0) \text { and } \lambda_{\text {min }}$$ were all set to 0.9995, and the number (n) of previous bands was set to 44. The remaining parameters are the sensitivity gain ρ and the window size S; ρ was fixed at 10, and S was determined experimentally. In the experiment, S was selected from 1, 4, 9, 16, 25, 36, 49, and 64. The compression results of VFFRLS on the four hyperspectral images are shown in Fig. 5. The algorithm yielded the best compression results when S was 16. Thus, in the experiments, ρ and S were set to 10 and 16, respectively.

3.2 AIRS Images

For the ten AIRS datasets, each granule includes 1,501 bands in the 3.74–15.4 μm region of the spectrum and consists of 135 scan lines with 90 cross-track footprints per scan line. The bit depth ranges from 12 to 14 bits for different bands. Fig. 6(a)–6(j) show the 256th band images.

An experiment was conducted to compare the compression performance of the proposed VFFRLS algorithm with that of several lossless compression algorithms on the 10 uncalibrated AIRS granules. In addition to RLS, FFRLS, and the proposed VFFRLS, the other prediction-based methods compared were the simple lossless algorithm (SLA) [10] and NLRLS [19]. Table 2 summarizes the compression results in bits per pixel (bpp). Comparing RLS and SLA shows that the former consistently outperformed the latter, mainly because SLA makes predictions based on neighbor-driven decision making, whereas AIRS images are raw and do not contain calibration-induced artifacts. As an effective application of adaptive filtering to hyperspectral data compression, RLS is suitable for both uncalibrated and calibrated hyperspectral images. The difference in compression performance between RLS and NLRLS is significant, demonstrating the importance of the loop quantizer in the compression process of hyperspectral images. In lossless prediction mode, NLRLS achieved 3.84 bpp, which is very close to the result for FFRLS. Owing to the introduction of VFFs, the proposed algorithm achieved the best compression results, outperforming FFRLS by 0.02 bpp.

Fig. 5.
Compression results on four hyperspectral images with different window sizes S: (a) calibrated Yellowstone 0, (b) uncalibrated Yellowstone 0, (c) uncalibrated Maine 10, and (d) uncalibrated Hawaii 1.
Fig. 6.
(a)–(j) represent the 256th band images of Granules 9, 16, 60, 82, 120, 126, 129, 151, 182, and 193, respectively.
Table 2.
Compression results on the AIRS images (unit: bpp)
3.3 AVIRIS Images

The twelve AVIRIS datasets consist of five 16-bit calibrated and five 16-bit uncalibrated Yellowstone (CY and UY) images, one 12-bit uncalibrated Maine (UM) image, and one 12-bit uncalibrated Hawaii (UH) image. The CY images have 677 × 512 pixels/band, the UH image has 614 × 512 pixels/band, and the remaining images have 680 × 512 pixels/band. Each image contains 224 spectral bands. Fig. 7(a)–7(l) show the 128th band images.

The performance of VFFRLS was compared with that of other state-of-the-art lossless compression methods on the 12 2006 AVIRIS images. Among these algorithms, the bit rates for C-DPCM-APL and C-DPCM-RLSO were obtained from a previous work [12]. The results for RLS, B-CRLS, B-SuperRLS, and CRLS-ABS-APS were obtained from a previous report [18]. The results for SLA, NLRLS, FFRLS, and VFFRLS are new to this study. Table 3 lists the compression results in bpp. SLA does not work well on AVIRIS images in raw and calibrated formats. NLRLS focuses on near-lossless compression, which results in poor lossless compression. Their average bit rates were 6.10 and 4.21 bpp, respectively; the bit rate savings of VFFRLS over them are hence approximately 2.03 and 0.14 bpp, respectively. C-DPCM-RLSO, B-CRLS, and B-SuperRLS have the same average bit rate of 4.09 bpp. This result is surprising because they adopt different compression schemes. FFRLS and C-DPCM-APL are very close to this value, at 4.13 and 4.10 bpp, respectively. Among the methods, the VFFRLS algorithm achieved the best average bit rate. Compared with CRLS-ABS-APS and B-SuperRLS, its performance gains were 0.01 and 0.02 bpp, respectively. In addition, Table 3 shows that some algorithms achieved excellent compression performance on individual images. For example, CRLS-ABS-APS employs two optimization strategies, adaptive band selection and adaptive predictor selection, which contribute to the spectral decorrelation of the algorithm; it obtained the best compression results on CY11, UY11, and UH1. Similarly, C-DPCM-APL, C-DPCM-RLSO, and B-CRLS show this advantage on UY10, CY3 and CY10, and UY18, respectively.

Fig. 7.
(a)–(l) represent the 128th band images of UH1, UM10, CY0, CY3, CY10, CY11, CY18, UY0, UY3, UY10, UY11, and UY18, respectively.
Table 3.
Compression results for various lossless compression algorithms (unit: bpp)
3.4 Computing Complexity and Image Quality Analysis

To evaluate the computational complexity of the proposed algorithm, the algorithms run on the AIRS and AVIRIS images were compared, and the running times of the compression part were measured. Table 4 lists the running time of SLA as the unit time and those of the other algorithms as multiples of it. RLS, NLRLS, FFRLS, and VFFRLS are approximately 18.6, 82.5, 112.9, and 119.6 times more complex than SLA, respectively. Analysis of these algorithms indicates that their complexity is mainly determined by the calculation of the RLS filter. FFRLS uses a larger number of prediction reference bands (44) than RLS (8), which results in a significant difference in the lengths of the vectors involved in the calculations. In addition, VFFRLS must further compute the posterior prediction residuals and the VFFs, which makes it slightly slower than FFRLS.

Because the lossless compression and decompression of the proposed algorithm are symmetrical, decompression is the inverse operation of compression. Naturally, regarding the image quality, the decompressed image remains the same as the original one.
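This symmetry can be shown with a toy round trip: the decoder forms exactly the same rounded prediction from the same causal data, so adding the decoded residual recovers the original pixel exactly. The helper names below are assumptions for illustration only.

```python
# Toy encode/decode round trip illustrating lossless symmetry (illustrative names).
import numpy as np

def encode_pixel(s, d, W, mu):
    pred = int(np.rint(d @ W + mu))   # prediction shared by encoder and decoder
    return s - pred                   # prior residual sent to the entropy coder

def decode_pixel(e, d, W, mu):
    pred = int(np.rint(d @ W + mu))   # identical prediction from identical causal data
    return e + pred                   # exact reconstruction of s_z(t)

d, W, mu, s = np.array([3.0, -1.0]), np.array([0.5, 0.2]), 120.0, 123
assert decode_pixel(encode_pixel(s, d, W, mu), d, W, mu) == s
```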

Table 4.
Algorithm running times for the compression part (normalized to SLA)

4. Conclusion

Improving the prediction accuracy of prediction-based hyperspectral image compression algorithms is a pressing and difficult research problem. The novel hyperspectral image compression approach using the VFFRLS algorithm improves prediction accuracy and achieves better compression performance. The findings of this study are as follows: (1) to solve the problem of the low prediction accuracy of the FFRLS algorithm, the VFFRLS algorithm based on a VFF strategy was developed; (2) to avoid the randomness and errors caused by correcting the forgetting factor with a single posterior prediction residual, the causal neighborhood of the current pixel is used, and the average of the posterior prediction residuals in this causal neighborhood determines the VFF; and (3) experiments were conducted on AIRS and AVIRIS images to compare the performance of VFFRLS with that of other algorithms. The experimental results indicate that the VFFRLS method achieves minimum average bit rates of 3.66 and 4.07 bpp, respectively.

Combined with the analysis in the previous section, the proposed algorithm improves compression performance at the expense of computational complexity. Thus, future research should focus on optimizing the algorithm through evolutionary algorithms, such as global mean particle swarm optimization. During the iterative process of the VFFRLS algorithm, global mean particle swarm optimization could automatically find the optimal parameters to reduce the prediction residuals and the adjustment time.

Biography

Changguo Li
https://orcid.org/0000-0001-7822-4440

He received his M.S. degree in computational mathematics from Sichuan Normal University, China, in 2007, and his Ph.D. degree in earth exploration and information technology from Chengdu University of Technology, China, in 2015. He is currently an associate professor at Sichuan Normal University. His current research interests include hyperspectral image processing, parallel computing, and pattern recognition.

Biography

Fuquan Zhu
https://orcid.org/0009-0004-2035-1155

He received his M.S. degree in computational mathematics from Sichuan Normal University, China, in 2007, and his Ph.D. degree in earth exploration and information technology from Chengdu University of Technology, China, in 2020. He is currently an associate professor at Sichuan Police College. His research interests include image processing and data mining.

References

  • 1 F. Liu and Z. Chen, "An adaptive spectral decorrelation method for lossless MODIS image compression," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 803-814, 2019. https://doi.org/10.1109/TGRS.2018.2860686
  • 2 S. Bajpai, "Low complexity and low memory compression algorithm for hyperspectral image sensors," Wireless Personal Communications, vol. 131, no. 2, pp. 805-833, 2023. https://doi.org/10.1007/s11277-023-10455-8
  • 3 C. Li, D. Chen, C. Xie, Y. Gao, and J. Liu, "Research on lossless compression coding algorithm of N-band parametric spectral integer reversible transformation combined with the lifting scheme for hyperspectral images," IEEE Access, vol. 10, pp. 88632-88643, 2022. https://doi.org/10.1109/ACCESS.2022.3199737
  • 4 R. Li, Z. Pan, and Y. Wang, "The linear prediction vector quantization for hyperspectral image compression," Multimedia Tools and Applications, vol. 78, pp. 11701-11718, 2019. https://doi.org/10.1007/s11042-018-6724-8
  • 5 J. Luo, T. Xu, T. Pan, X. Han, and W. Sun, "An efficient compression method of hyperspectral images based on compressed sensing and joint optimization," Integrated Ferroelectrics, vol. 208, no. 1, pp. 194-205, 2020. https://doi.org/10.1080/10584587.2020.1728625
  • 6 J. Zhang, Y. Zhang, X. Cai, and L. Xie, "Three-stages hyperspectral image compression sensing with band selection," CMES-Computer Modeling in Engineering & Sciences, vol. 134, no. 1, pp. 293-316, 2022. https://doi.org/10.32604/cmes.2022.020426
  • 7 Y. Dua, R. S. Singh, K. Parwani, S. Lunagariya, and V. Kumar, "Convolution neural network based lossy compression of hyperspectral images," Signal Processing: Image Communication, vol. 95, article no. 116255, 2021. https://doi.org/10.1016/j.image.2021.116255
  • 8 S. Mijares i Verdu, J. Balle, V. Laparra, J. Bartrina-Rapesta, M. Hernandez-Cabronero, and J. Serra-Sagrista, "A scalable reduced-complexity compression of hyperspectral remote sensing images using deep learning," Remote Sensing, vol. 15, no. 18, article no. 4422, 2023. https://doi.org/10.3390/rs15184422
  • 9 S. Pan, X. Gu, and Y. Zhong, "Hyperspectral image compression based on spatial and spectral content," Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 51, no. 9, pp. 74-80, 2023. https://doi.org/10.13245/j.hust.238614
  • 10 V. Joshi and J. S. Rani, "A simple lossless algorithm for on-board satellite hyperspectral data compression," IEEE Geoscience and Remote Sensing Letters, vol. 20, article no. 5504305, 2023. https://doi.org/10.1109/LGRS.2023.3275436
  • 11 J. Mielikainen and B. Huang, "Lossless compression of hyperspectral images using clustered linear prediction with adaptive prediction length," IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 6, pp. 1118-1121, 2012. https://doi.org/10.1109/LGRS.2012.2191531
  • 12 J. Wu, W. Kong, J. Mielikainen, and B. Huang, "Lossless compression of hyperspectral imagery via clustered differential pulse code modulation with removal of local spectral outliers," IEEE Signal Processing Letters, vol. 22, no. 12, pp. 2194-2198, 2015. https://doi.org/10.1109/LSP.2015.2443913
  • 13 J. Song, Z. Zhang, and X. Chen, "Lossless compression of hyperspectral imagery via RLS filter," Electronics Letters, vol. 49, no. 16, pp. 992-994, 2013. https://doi.org/10.1049/el.2013.1315
  • 14 F. Gao and S. Guo, "Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands," Journal of Applied Remote Sensing, vol. 10, no. 1, article no. 015010, 2016. https://doi.org/10.1117/1.JRS.10.015010
  • 15 A. C. Karaca and M. K. Gullu, "Lossless hyperspectral image compression using bimodal conventional recursive least-squares," Remote Sensing Letters, vol. 9, no. 1, pp. 31-40, 2018. https://doi.org/10.1080/2150704X.2017.1375612
  • 16 A. C. Karaca and M. K. Gullu, "Superpixel based recursive least-squares method for lossless compression of hyperspectral images," Multidimensional Systems and Signal Processing, vol. 30, pp. 903-919, 2019. https://doi.org/10.1007/s11045-018-0590-4
  • 17 J. Song, L. Zhou, C. Deng, and J. An, "Lossless compression of hyperspectral imagery using a fast adaptive-length-prediction RLS filter," Remote Sensing Letters, vol. 10, no. 4, pp. 401-410, 2019. https://doi.org/10.1080/2150704X.2018.1562257
  • 18 F. Zhu, H. Wang, L. Yang, C. Li, and S. Wang, "Lossless compression for hyperspectral images based on adaptive band selection and adaptive predictor selection," KSII Transactions on Internet and Information Systems (TIIS), vol. 14, no. 8, pp. 3295-3311, 2020. https://doi.org/10.3837/tiis.2020.08.008
  • 19 T. Zheng, Y. Dai, C. Xue, and L. Zhou, "Recursive least squares for near-lossless hyperspectral data compression," Applied Sciences, vol. 12, no. 14, article no. 7172, 2022. https://doi.org/10.3390/app12147172

Table 1.

Test datasets
Dataset Numbers Size Type Sensor
Granule 9, 16, 60, 82, 120, 126, 129, 151, 182, 193 135 × 90 × 1501 Uncalibrated AIRS
Yellowstone 0, 3, 10, 11, 18 512 × 677 × 224 Calibrated AVIRIS
Yellowstone 0, 3, 10, 11, 18 512 × 680 × 224 Uncalibrated AVIRIS
Maine 10 512 × 680 × 224 Uncalibrated AVIRIS
Hawaii 1 512 × 614 × 224 Uncalibrated AVIRIS

Table 2.

Compression results on the AIRS images (unit: bpp)
Granule SLA RLS NLRLS FFRLS VFFRLS
9 6.47 4.17 3.79 3.63 3.61
16 6.54 4.37 3.78 3.62 3.61
60 6.56 4.43 3.83 3.69 3.68
82 6.08 4.27 3.72 3.59 3.57
120 5.97 4.20 3.81 3.67 3.66
126 6.39 4.38 3.89 3.71 3.69
129 5.97 3.67 3.73 3.59 3.58
151 6.65 4.40 3.94 3.75 3.73
182 6.61 4.43 4.01 3.78 3.75
193 6.44 4.27 3.91 3.73 3.71
Average 6.37 4.26 3.84 3.68 3.66

Table 3.

Compression results for various lossless compression algorithms (unit: bpp)
Image SLA RLS NLRLS FFRLS C-DPCM-APL C-DPCM-RLSO B-CRLS B-SuperRLS CRLS-ABS-APS VFFRLS
CY0 5.61 3.77 3.62 3.52 3.52 3.50 3.50 3.51 3.48 3.47
CY3 5.53 3.63 3.46 3.42 3.36 3.34 3.39 3.39 3.35 3.35
CY10 4.66 3.24 3.11 3.06 2.93 2.91 3.01 2.95 2.97 3.02
CY11 5.23 3.48 3.32 3.25 3.25 3.21 3.23 3.22 3.20 3.20
CY18 5.65 3.67 3.53 3.43 3.42 3.39 3.41 3.41 3.38 3.37
UY0 7.94 6.01 5.97 5.80 5.81 5.82 5.75 5.77 5.76 5.69
UY3 7.72 5.86 5.72 5.69 5.65 5.66 5.63 5.65 5.63 5.57
UY10 7.03 5.46 5.28 5.30 5.17 5.18 5.25 5.19 5.23 5.23
UY11 7.46 5.70 5.61 5.53 5.47 5.49 5.46 5.46 5.45 5.47
UY18 7.92 5.90 5.82 5.71 5.71 5.70 5.66 5.67 5.67 5.69
UM10 4.28 2.63 2.59 2.51 2.51 2.51 2.44 2.48 2.48 2.42
UH1 4.15 2.50 2.48 2.34 2.35 2.36 2.31 2.33 2.30 2.30
Average 6.10 4.32 4.21 4.13 4.10 4.09 4.09 4.09 4.08 4.07

Table 4.

Algorithm running times for the compression part (normalized to SLA)
Algorithm SLA RLS NLRLS FFRLS VFFRLS
Complexity 1 18.6 82.5 112.9 119.6