

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 9, NO. 2, MARCH 2012

Low-Complexity Compression Method for Hyperspectral Images Based on Distributed Source Coding Xuzhou Pan, Rongke Liu, Member, IEEE, and Xiaoqian Lv

Abstract—In this letter, we propose a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme for hyperspectral images. First, the DCT is applied to the hyperspectral images. Second, a set-partitioning-based approach is utilized to reorganize the DCT coefficients into a waveletlike tree structure and to extract the sign, refinement, and significance bitplanes. Third, a low-density parity-check (LDPC)-based Slepian–Wolf (SW) coder is adopted to implement the DSC strategy. Finally, an auxiliary reconstruction method is employed to improve the reconstruction quality. Experimental results on the Airborne Visible/Infrared Imaging Spectrometer data set show that the proposed paradigm significantly outperforms the DSC-based coder in the wavelet transform domain (set partitioning in hierarchical trees with SW coding) and that its performance is comparable to that of the DSC scheme based on informed quantization at low bit rates.

Index Terms—Auxiliary reconstruction, discrete cosine transform (DCT), distributed source coding (DSC), hyperspectral images.

I. INTRODUCTION

HYPERSPECTRAL image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. However, traditional encoders are not simple enough: many algorithms, including those based on JPEG2000 [1] and 3-D transforms, have excessive complexity [2]. Given the close correlation among hyperspectral images, we can employ the distributed source coding (DSC) principle to compress them efficiently at a lower cost. Compared with conventional source coding schemes, the DSC method can shift complexity from the encoder to the decoder. Zhang et al. propose a lossless compression method for multispectral images based on DSC and prediction [3], [4] with very low encoder complexity. Cheung et al. propose a DSC-based lossy method in the wavelet transform (WT) domain, named set partitioning in hierarchical trees with Slepian–Wolf coding (SW-SPIHT) [5], [6]. Magli et al. put forth lossless and lossy DSC-based compression methods [7], [8]. Moreover, these works demonstrate that DSC-based compression frameworks are very promising.

In this letter, we put forward a low-complexity DSC scheme for onboard compression of hyperspectral images. In particular, our method operates in the discrete cosine transform (DCT) domain rather than the WT domain. We modify Xiong's embedded zerotree DCT (EZDCT) [9] to reorganize the DCT coefficients into a waveletlike tree structure and extract their bitplanes, yielding significance, sign, and refinement bitplanes. Auxiliary reconstruction is applied at the decoder to improve the reconstruction quality: exploiting the characteristics of DSC, we make further use of the side information to reconstruct the DCT coefficients, reducing the quantization errors. Furthermore, we use the Gray code [10] for the refinement bits of the DCT coefficients, which significantly and efficiently improves the correlation between the source and the side information.

Additionally, the proposed scheme supports progressive image coding and region-of-interest (ROI) coding. Progressive image coding is easily implemented via the SPIHT algorithm; thus, our algorithm offers the flexibility to control the length of the stream. Our method can also conveniently employ traditional ROI methods, such as Maxshift, to provide ROI coding capability. Progressive coding, in contrast, is not supported by the scheme in [8]. The algorithm presented in [4] divides the images into several slices and employs an adaptive region-based predictor to capture spatially varying spectral correlation, providing lossless compression and progressive coding. It achieves good lossless compression performance through complex prediction and a Markov random field, while our algorithm uses the DCT and SPIHT to implement lossy compression. Furthermore, the methods in [3] and [4] target multispectral images, whose spectral resolution is much lower and whose statistical properties are more complex and nonstationary than those of hyperspectral images. Therefore, this letter does not compare against [3] and [4].

The rest of this letter is organized as follows. Section II presents the architecture of the method. Section III demonstrates the experimental results. Section IV concludes the letter.
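As a small illustration of why Gray coding the refinement bits helps (a sketch of the general principle, not the letter's implementation): in natural binary, numerically close coefficient values can disagree in many bitplane positions, whereas consecutive Gray codewords differ in exactly one bit, so side information that is numerically close to the source produces bitplanes with fewer disagreements, i.e., a lower crossover probability.

```python
def to_gray(n: int) -> int:
    """Map natural binary to reflected Gray code."""
    return n ^ (n >> 1)

# Consecutive magnitudes 7 and 8 disagree in four bitplanes in natural
# binary (0111 vs 1000) but in only one bitplane after Gray mapping.
natural_diff = bin(7 ^ 8).count("1")                  # 4 disagreeing bitplanes
gray_diff = bin(to_gray(7) ^ to_gray(8)).count("1")   # 1 disagreeing bitplane
```

The property holds for every pair of consecutive integers, which is what makes the mapping attractive for bitplane-wise SW coding.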

Manuscript received May 3, 2011; revised July 1, 2011 and July 22, 2011; accepted August 6, 2011. Date of publication September 29, 2011; date of current version February 8, 2012. This work was supported in part by the National Natural Science Foundation of China under Project 60702012 and in part by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry. The authors are with the School of Electronic and Information Engineering, Beihang University, Beijing 100191, China (e-mail: [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LGRS.2011.2165271

II. CODEC DESIGN

Fig. 1. DSC-based coding architecture for hyperspectral images in DCT domain.

In terms of the tradeoff among memory, complexity, and performance, we put forth a low-complexity DCT-based DSC scheme for hyperspectral images with auxiliary reconstruction. DCT-based coders are less expensive to implement in hardware than wavelet-based ones. Furthermore, the regrouped DCT coefficients exhibit a waveletlike tree structure [11] and close correlation. This feature gives us the opportunity to borrow from wavelet-based coders to obtain better performance than conventional DCT-based coders, and to adopt the DSC technique to achieve lower complexity than most wavelet-based ones.
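The regrouping can be sketched as follows (a minimal NumPy sketch under our own conventions, not the authors' code): the (u, v)-th coefficient of every 8 x 8 DCT block is collected into a subband of size (H/8) x (W/8), and the 64 subbands are tiled with the low frequencies in the top-left corner, mimicking a wavelet decomposition.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_regroup(img: np.ndarray, n: int = 8) -> np.ndarray:
    """Blockwise 2-D DCT, then gather same-position coefficients into subbands."""
    H, W = img.shape
    C = dct_matrix(n)
    # split the image into an (H/n, W/n) grid of n x n blocks
    blocks = img.reshape(H // n, n, W // n, n).transpose(0, 2, 1, 3)
    # 2-D DCT of every block: C @ block @ C.T
    coeffs = np.einsum('ij,hwjk,lk->hwil', C, blocks, C)
    # subband (u, v) = coefficient (u, v) of every block, tiled top-left first
    return coeffs.transpose(2, 0, 3, 1).reshape(H, W)
```

For a flat 16 x 16 image of ones, all energy lands in the DC subband (the top-left 2 x 2 tile), just as the approximation subband of a wavelet transform would collect it.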

A. Auxiliary Reconstruction

We modify the EZDCT to quantize the DCT coefficients. At the decoder, successive approximation quantization is adopted: according to the significance of the coefficients and the threshold of each bitplane, midpoint reconstruction is employed to recover the coefficients, and the coefficients are updated as the threshold is gradually halved. However, relying solely on midpoint reconstruction, the reconstruction quality is not optimal, particularly at low code rates. Therefore, we propose a new auxiliary reconstruction to improve the reconstruction quality. In accordance with the characteristics of DSC, we further utilize the side information to reconstruct the DCT coefficients after midpoint reconstruction at the decoder.

Assume that the threshold of the final encoded bitplane is T; the coefficient quantization step Δ then equals T. Let K be the original value of coefficient A, S the value of the corresponding side information, and K' the midpoint reconstruction value, so that the quantization interval belonging to coefficient A is (K' − T/2, K' + T/2). In the auxiliary reconstruction, we use the SW code rate R to evaluate the correlation between the source and the side information; a smaller R indicates higher correlation. According to the relationship between K' and S, there are four reconstruction rules:

1) If S ∈ (K' − T/2, K' + T/2) and K' ≠ 0, then a) if R < 0.7, the reconstruction value equals S, and b) if R ≥ 0.7, the reconstruction value equals R · K' + (1 − R) · S.
2) If S < K' − T/2 and K' ≠ 0, the reconstruction value equals K' − T/2.
3) If S > K' + T/2 and K' ≠ 0, the reconstruction value equals K' + T/2.
4) If K' = 0, the reconstruction value equals S.

With the auxiliary reconstruction, we can not only efficiently control the quantization error range but also make full use of the strong correlation between the original information and the side information to further reduce the quantization errors and improve the reconstruction quality.

B. Our Proposed Architecture

Fig. 1 shows the proposed architecture. We divide the hyperspectral images into several groups. Each group has a key band, which is compressed directly by the modified EZDCT at a relatively high quality; all the other bands are compressed based on DSC.

The reference band Xi−1 is transmitted by means of the modified EZDCT, in which we employ the zerotree quantizer of the SPIHT algorithm and the SPIHT coder rather than the EZW coder. As a result, its reconstructed image X̂i−1 is generated and made available at the decoder side. In order to make Xi−1 globally as "similar" as possible to Xi, a first-order linear filter, i.e., X̃i = a · Xi−1 + b, is applied at the encoder to yield an approximate version of Xi; the coefficients a and b are obtained by fitting the pixels of Xi−1 to those of Xi in a least-squares sense. Since this linear model is unknown to the decoder, the coefficients a and b must be conveyed to the decoder, where the filter X̃'i = a · X̂i−1 + b yields the estimated version of band Xi.

In the case of Xi, the DCT is applied, and the DCT coefficients at the same position of each block are regrouped into the waveletlike tree structure. The bitplanes of the reorganized DCT coefficients are then extracted via the set-partitioning algorithm; they consist of significance, sign, and refinement bits, and the refinement bits are Gray encoded.

In our scheme, we choose an LDPC code as the error-correcting code to realize the DSC strategy: an LDPC-based SW coder encodes the sign and refinement bits. The essential parameter determining the compression rate, and the key ingredient of the LDPC-based SW encoder, is the crossover probability between the corresponding bitplane locations of Xi and X̃'i, with a binary symmetric channel (BSC) as the virtual channel. The significance tree of Xi is applied to regroup the DCT coefficients of X̃'i in order to extract its sign and refinement bitplanes; these bits are compared with those of Xi to compute the crossover probabilities. Additionally, a table associating crossover probabilities with compression rates is built offline at both the encoder and the decoder, so that once the crossover probability is obtained, a proper compression rate can be selected. Specifically, we use the LDPC accumulate (LDPCA) code [12] to implement the SW encoder.

The structure on the right side of Fig. 1 shows how the DSC decoder works. The estimate X̃'i is generated directly from X̂i−1 and then transformed by the DCT; all the DCT coefficients are offered to the auxiliary reconstruction. Once the significance bits are transmitted to the decoder, the sign and refinement bits of X̃'i are reconstructed and made available as side information. Combining this side information with the received syndromes, the LDPC-based SW decoder reconstitutes the sign and refinement bits of X̂i. Afterward, the refinement bits in Gray code are transformed back to natural binary code. Together with the significance bits, we obtain the intermediate data X̂'i. After the auxiliary reconstruction and
inverse DCT, we finally reproduce the band X̂i.
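The auxiliary reconstruction rules of Section II-A can be sketched as follows (an illustrative Python sketch; the 0.7 threshold on the SW code rate R is the one quoted in the letter, while the function and variable names are our own):

```python
def auxiliary_reconstruct(k_mid: float, s: float, t: float, r: float) -> float:
    """Refine a midpoint-reconstructed DCT coefficient using side information.

    k_mid: midpoint reconstruction K'    s: side-information value S
    t: threshold of the final encoded bitplane    r: SW code rate R
    """
    if k_mid == 0:
        return s                                   # rule 4: trust the side information
    lo, hi = k_mid - t / 2, k_mid + t / 2          # quantization interval of the coefficient
    if s < lo:
        return lo                                  # rule 2: clamp to the lower interval edge
    if s > hi:
        return hi                                  # rule 3: clamp to the upper interval edge
    return s if r < 0.7 else r * k_mid + (1 - r) * s   # rule 1: blend by correlation
```

For example, with K' = 10, T = 4, and side information S = 10.5 inside the interval (8, 12), a low SW rate R = 0.5 returns S itself, while R = 0.8 returns 0.8 · 10 + 0.2 · 10.5 = 10.1.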

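The offline rate table described above can be emulated as follows (a sketch under our own assumptions: we take the LDPCA rate grid to be multiples of 1/66, consistent with the length-396 code of [12], and apply the letter's roughly 10% margin over the Slepian–Wolf bound H(p); the grid and margin values are illustrative, not the authors' table):

```python
import numpy as np

def binary_entropy(p: float) -> float:
    """Slepian-Wolf bound H(p) for a BSC with crossover probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def select_sw_rate(src_bits, side_bits, margin: float = 1.1) -> float:
    """Estimate the crossover probability and pick the smallest feasible rate."""
    src_bits, side_bits = np.asarray(src_bits), np.asarray(side_bits)
    p = float(np.mean(src_bits != side_bits))   # crossover probability estimate
    rates = np.arange(1, 67) / 66.0             # hypothetical LDPCA rate grid
    need = margin * binary_entropy(p)           # SW bound plus ~10% margin
    feasible = rates[rates >= need]
    return float(feasible[0]) if feasible.size else 1.0
```

A bitplane pair with one disagreement in eight positions gives p = 0.125, H(p) ≈ 0.544, and thus the first grid rate at or above 1.1 · H(p) is selected.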
III. EXPERIMENTAL RESULTS AND DISCUSSION

A. Rate–Distortion Comparison

The proposed system is implemented, and its performance is evaluated on scenes sc0 and sc3 of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Yellowstone data set. AVIRIS is a spectrometer with 224 bands, and the raw images have 512 lines of 680 pixels. We divide the images into four groups, and the code rate of the key bands is 2 bpp. In this experiment, the DCT kernel size is eight, and the WT decomposition level is correspondingly three. The LDPCA code of length 396 specified in [12] is applied, which performs within 10% of the SW bound at moderate rates.

Fig. 2. Rate–distortion comparison (view: sc0).

Fig. 3. Rate–distortion comparison (view: sc3).

Figs. 2 and 3 compare our method with SW-SPIHT. According to these two figures, the proposed framework outperforms SW-SPIHT by up to 3 dB regardless of the bit rate, and the advantage is largest at low bit rates. Fig. 2 also compares our method with informed quantization [8]. As shown in the graph, informed quantization achieves very good rate–distortion performance: the proposed algorithm is close to informed quantization at low bit rates and becomes worse at high bit rates. However, our scheme supports progressive image coding, while the informed quantization method does not.

There are two key reasons why our scheme outperforms the others. First, the Gray code is used to enhance the correlation between the source and the side information; the DSC coder can achieve better coding efficiency with more

accurate side information. Second, auxiliary reconstruction is adopted to reduce the quantization errors. As a result, we can exploit the close dependence among the hyperspectral bands more effectively than the other schemes.

B. Complexity Comparison

The remarkable difference between our scheme and SW-SPIHT is the transform domain: our method operates in the DCT domain, while SW-SPIHT works in the WT domain. The computational load of the DCT is much lower than that of the WT. One of the most efficient algorithms for the 2-D 8 × 8 DCT requires only 54 multiplications and 462 additions, i.e., merely 0.84 multiplications and 7.22 additions per point. For the discrete wavelet transform, the computational cost depends on the length of the wavelet filters; the bior4.4 wavelet needs approximately 18 multiplications and 16 additions per point. Additionally, the auxiliary reconstruction does not increase the complexity of the encoder. In conclusion, our scheme has lower computational complexity than wavelet-based schemes such as SW-SPIHT [5], [6].
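The per-point figures quoted above follow directly from dividing the per-block counts by the 64 points of an 8 × 8 block; a quick check (with the bior4.4 per-point costs taken from the text):

```python
# 2-D 8x8 DCT: 54 multiplications and 462 additions per 64-point block
dct_mults_per_point = 54 / 64    # = 0.84375, i.e., about 0.84
dct_adds_per_point = 462 / 64    # = 7.21875, i.e., about 7.22

# bior4.4 DWT (figures from the text): ~18 multiplications, ~16 additions per point
dwt_mults_per_point, dwt_adds_per_point = 18.0, 16.0

# the DCT needs roughly 21x fewer multiplications per point
mult_ratio = dwt_mults_per_point / dct_mults_per_point
```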


IV. CONCLUSION

We have proposed a low-complexity DCT-based DSC scheme for hyperspectral images. Owing to the auxiliary reconstruction and the modified transform, our proposal is very competitive compared with other DSC-based coding methods for hyperspectral images. Our scheme also supports ROI coding and progressive image coding. Therefore, the low-complexity DSC-based scheme with auxiliary reconstruction is feasible for hyperspectral image compression.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers and the editors who helped to improve the quality of this letter.

REFERENCES

[1] B. Penna, T. Tillo, and E. Magli, "Transform coding techniques for lossy hyperspectral data compression," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1408–1421, May 2007.
[2] W. Pan, Y. Zou, and A. Lu, "A compression algorithm of hyperspectral remote sensing image based on 3-D wavelet transform and fractal," in Proc. 3rd Int. Conf. Intell. Syst. Knowl. Eng., Xiamen, China, Nov. 2008, pp. 1237–1241.
[3] J. Zhang, H. Li, and C. W. Chen, "Distributed coding techniques for onboard lossless compression of multispectral images," in Proc. ICME, New York, Jul. 2009, pp. 141–144.


[4] J. Zhang, H. Li, and C. W. Chen, "Progressive distributed coding of multispectral images," in Proc. 5th ICST Mobile Multimedia Commun. Conf., London, U.K., Sep. 2009, pp. 1–7.
[5] C. Tang, N.-M. Cheung, and A. Ortega, "Efficient inter-band prediction and wavelet-based compression for hyperspectral imagery: A distributed source coding approach," in Proc. IEEE Data Compression Conf., Los Angeles, CA, Mar. 2005, pp. 437–446.
[6] N.-M. Cheung, C. Tang, and A. Ortega, "Efficient wavelet-based predictive Slepian–Wolf coding for hyperspectral imagery," Signal Process., vol. 86, no. 11, pp. 3180–3195, Nov. 2006.
[7] A. Abrardo, M. Barni, E. Magli, and F. Nencini, "Error-resilient and low-complexity onboard lossless compression of hyperspectral images by means of distributed source coding," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 4, pp. 1892–1904, Mar. 2010.
[8] A. Abrardo, M. Barni, and E. Magli, "Low-complexity lossy compression of hyperspectral images via informed quantization," in Proc. IEEE ICIP, Siena, Italy, Dec. 2010, pp. 505–508.
[9] Z. Xiong, O. Guleryuz, and M. T. Orchard, "A DCT-based embedded image coder," IEEE Signal Process. Lett., vol. 3, no. 11, pp. 289–290, Aug. 1996.
[10] W. Liu and W. Zeng, "Scalable non-binary distributed source coding using Gray codes," in Proc. IEEE Int. Workshop Multimedia Signal Process., Columbia, MO, Nov. 2005, pp. 1–4.
[11] J. Chen and C. Wu, "An efficient embedded subband coding algorithm for DCT image compression," in Proc. SPIE—Image Compression and Encryption Technologies, 2001, vol. 4551, pp. 44–48.
[12] D. Varodayan, A. Aaron, and B. Girod, "Rate-adaptive distributed source coding using low-density parity-check codes," in Proc. 39th Asilomar Conf. Signals, Syst. Comput., Pacific Grove, CA, Nov. 2005, pp. 1203–1207.
