Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains

Yitzhak August,1 Chaim Vachman,1 Yair Rivenson,2 and Adrian Stern1,*

Department of Electro-Optical Engineering, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

Department of Electrical & Computer Engineering, Ben-Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel

*Corresponding author: [email protected]

Received 5 November 2012; revised 18 February 2013; accepted 21 February 2013; posted 22 February 2013 (Doc. ID 179331); published 22 March 2013

An efficient method and system for compressive sensing of hyperspectral data is presented. Compression efficiency is achieved by randomly encoding both the spatial and the spectral domains of the hyperspectral datacube. A separable sensing architecture is used to reduce the computational complexity associated with the compressive sensing of a large volume of data, which is typical of hyperspectral imaging. The system enables optimization of the balance between the spatial and the spectral compression ratios. The method is demonstrated by simulations performed on real hyperspectral data. © 2013 Optical Society of America

OCIS codes: 110.4155, 110.4190, 110.4234, 110.1758.

1. Introduction

Hyperspectral (HS) images are used in numerous fields such as biomedical imaging, remote sensing, the food industry, art conservation and restoration, and many more. The amount of data typically captured with HS imaging systems is very large, and it is often highly compressible. This has motivated the application of compressive sensing techniques to HS imaging. Compressive sensing (CS) [1–3] is a fast-emerging field in the area of digital signal sensing and processing. CS theory provides a framework for sampling sparse or compressible signals more efficiently than with the classical Shannon–Nyquist sampling scheme. With CS, a compressed version of the signal is obtained already in the acquisition stage, thus obviating the need for digital compression. Since CS requires fewer measurements, it can be applied to reduce the number of

APPLIED OPTICS / Vol. 52, No. 10 / 1 April 2013

sensors or to reduce the acquisition time. One natural arena for implementing CS theory is the field of imaging. The first implementation of CS for imaging was the single-pixel CS camera [4]. The single-pixel CS camera architecture has been used for imaging in the visible, terahertz [5,6], and short-wave infrared [7] spectral regions. Single-pixel CS cameras are suitable in cases where large detector arrays are not available or are too expensive. Another use of the single-pixel CS camera is in aerospace remote sensing [8,9]; in this case, the motivation is to reduce the cost of data acquisition. Other compressive imaging techniques include single-shot compressive imaging [10,11], compressive holography [12,13], progressive compressive imaging [14], compressive motion tracking [15,16], and CS applications for microscopy [17–19], to name but a few. An overview of CS techniques in optics may be found in [20]. In this work, we focus on using CS for HS imaging. Hyperspectral and multispectral imaging may benefit from CS, since HS data is typically highly compressible.
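To make the single-pixel measurement model concrete, here is a minimal sketch (not taken from any of the referenced systems; the sizes are illustrative, and random binary masks stand in for the DMD patterns):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # scene is n x n pixels
scene = rng.random((n, n))               # stand-in for the imaged object
M = 1024                                 # number of mask patterns, M << n*n

# Each measurement: a random binary mask is displayed and the single
# photodiode integrates the transmitted light into one number.
masks = rng.integers(0, 2, size=(M, n, n)).astype(float)
g = (masks * scene).sum(axis=(1, 2))     # M compressive measurements

print(g.shape)  # (1024,)
```

Here M = 1024 readings replace the n² = 4096 pixel values; the scene is later recovered from g by sparse reconstruction.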

Hyperspectral data is typically organized in the form of a cube, a three-dimensional (3D) digital array, as shown in Fig. 1. The x–y plane represents the spatial information, and the third dimension represents the spectral reflection as a function of wavelength. Each point in the x–y plane has its own spectral signature, described by a spectral vector. The number of spectral bands in an HS image ranges from dozens to thousands, where the typical wavelength width of each spectral band ranges from 0.5 up to 10 nm, with some spectral overlap. The common acquisition techniques for HS data are based on spectrometer point scanning and spectrometer line scanning [21,22]. One of the main limitations of these two methods is the relatively slow scanning process. Other limitations arise from the fact that huge amounts of data need to be processed and transmitted. CS-inspired methods can help in handling these limitations. The applicability of CS rests on the fundamental notion that the data are sparse, or at least compressible, properties that HS data typically possess; various studies show that an HS cube is sparse and sometimes even extremely sparse [23–30]. If we look at a single narrow spectral window, that is, at a single x–y plane, we have a regular image, which is typically compressible in the wavelet domain. On the other

Fig. 1. (Color online) Hyperspectral cube.

hand, if we look in the spectral direction, λ, we generally also find the data to be extremely redundant. For example, the spectral signature of green grass is unique; thus, all the vectors in the HS image that represent reflection from grass have the same spectral signature. In recent years, several types of CS systems for HS imaging have been proposed [31–36]. In [37], CS HS cube acquisition is accomplished by a method called the coded aperture snapshot spectral imager (CASSI). In the CASSI architecture, the spatial information is first randomly encoded, and then the spectral information is mixed by a shearing operation. CASSI is suboptimal in terms of CS because it employs random signal multiplexing only in the x–y plane, while the spectral domain undergoes a deterministic, uniform transformation. Another implementation of a CS system for HS imaging, presented in [35], is shown in Fig. 2(b). This method follows the single-pixel CS camera technique [Fig. 2(a)], expanded to 3D imaging by replacing the standard detector (a single photodiode) in the single-pixel CS camera with a spectrometer probe. With this architecture, the spatial information is encoded while the spectral information remains unchanged. This mechanism can be considered a parallel spectral acquisition, leaving the spectral dimension unmixed and uncompressed. In this work, we present a new method for HS image acquisition using CS separable encoding in both the spatial and the spectral domains. We propose a scheme for 3D multiplexing using two stages of multiplexing: the first stage is spatial multiplexing, performed with the classical scheme of the single-pixel CS camera, and the second stage is spectral multiplexing, introduced in Section 4. The spectral encoding is performed in a single step, and thus the proposed method requires the same number of projections as in [27], while benefiting

Fig. 2. (Color online) (a) Schematic diagram of single pixel CS camera and its photodiode detector. (b) Expansion to multispectral imaging using a grating and a CCD vector.


from random multiplexing of the wavelength domain too.

2. Compressive Sensing

In this section, we briefly review CS theory, a technique for recovering sparse signals from significantly fewer measurements than required by traditional sampling theory. A block diagram of a CS system is depicted in Fig. 3. In this figure, f represents a physical signal, e.g., an object's intensities, and α is the vector of components in the sparsifying domain used to represent f; α is a mathematical representation vector that contains mainly zeros or near-zero values. In the image acquisition step, the signal vector f is sampled using the Φ operator, yielding the measurement vector g. The final step in Fig. 3 is the image reconstruction, accomplished by estimating f using l1-type minimization [1–3]. We assume that the N × 1 vector f to be measured can be expressed as f = Ψα, where the N × 1 vector α contains only k ≪ N nonzero elements and Ψ is a sparsifying operator. The measurement vector g ∈ R^(M×1) is obtained by

g = Φf,    (1)

where Φ ∈ R^(M×N) is a sensing matrix. By properly choosing M and Φ, and assuming sparsity of f in the Ψ domain, the signal f can be recovered from the measurements g. The crucial step here is to build a sensing matrix Φ that enables accurate recovery of an N-sized f from fewer, M, measurements g. Reconstruction of f from g is guaranteed if the number of measurements M meets the following condition [1,3]:

M ≥ Cμ²k log N.    (2)

It can be seen that the number of measurements M required depends on the size of the signal N, its sparsity k, and μ, the mutual coherence between Φ and Ψ. The mutual coherence is defined by

μ(Φ, Ψ) = √N · max_{1≤i≠j≤M} |⟨Φᵢᴴ, Ψⱼ⟩|,    (3)

where Φᵢ and Ψⱼ are vectors of Φ and Ψ, respectively. The value of μ lies in the range 1 ≤ μ ≤ √N; the lower μ is, the better the performance of the system. The original signal f can be recovered by solving the following problem:

Fig. 3. Compressive sensing block diagram [10].


f̂ = Ψα̂  subject to  α̂ = arg min_α {‖g − ΦΨα‖₂² + γ‖α‖₁},    (4)

where ‖·‖₁ denotes the l1 norm and γ is a regularization weight. One of the difficulties of using the CS method for HS imaging is the huge size of the matrices Φ required to represent the sensing operation. Signals in CS theory are represented by vectors with N components. The measurement data are M-dimensional, so the sensing matrix is of size Φ ∈ R^(M×N). Hyperspectral imaging involves 3D signals F ∈ R^(N₁×N₂×N₃), which can be converted by lexicographic ordering to an N-length vector [f = vec(F)]. Since N = N₁ × N₂ × N₃, the sensing matrix size is of the order of (N₁ × N₂ × N₃)². For instance, consider the computational aspects of randomly encoding a 3D HS datacube F ∈ R^(N₁×N₂×N₃) with N₁ = N₂ = N₃ = 256. In this case, the sensing matrix Φ will be of size Φ ∈ R^(2²⁴×2²⁴). Such matrices cannot be handled in standard computational systems because of their challenging storage and memory requirements. The optical implementation and sensor calibration of such systems also present a great challenge, because the realization of a random Φ requires the system to have N × M nearly independent modes (degrees of freedom).

3. Separable Compressive Sensing

Separable sensing operators are common in many optical systems (e.g., wave propagation) and are often applied in image processing tasks. Separable CS was proposed in [38–40] to overcome the practical limitations of compressive imaging implementations involving large data, and because separable sensing operators often arise naturally in multidimensional signal processing. As shown in [38–40], a separable system matrix significantly reduces the implementation complexity at the expense of some loss of compression efficiency; i.e., more samples are required than with nonseparable CS to accurately reconstruct the signal. A separable sensing operator Φ can be represented in the form Φy ⊗ Φx, where the symbol ⊗ denotes the Kronecker product, also referred to as the direct product or the tensor product. If Φy = [ϕy₁; ϕy₂; …; ϕyₙ] is an n × p matrix and Φx is an m × q matrix, then the Kronecker product of Φy and Φx is given by

Φyx = Φy ⊗ Φx =
⎡ ϕy(1,1)Φx   ϕy(1,2)Φx   …   ϕy(1,p)Φx ⎤
⎢ ϕy(2,1)Φx   ϕy(2,2)Φx   …   ϕy(2,p)Φx ⎥
⎢     ⋮           ⋮        ⋱       ⋮     ⎥
⎣ ϕy(n,1)Φx   ϕy(n,2)Φx   …   ϕy(n,p)Φx ⎦.    (5)

As described in the previous section, in the case of an n-dimensional signal we use the vec operator to create a column vector from a matrix F by stacking its columns:

vec(F) = ⎡ f₁ ⎤
         ⎢ f₂ ⎥
         ⎢ ⋮  ⎥
         ⎣ fₙ ⎦.    (6)

Let us consider the two-dimensional (2D) signal F = [f₁, f₂, …, fₙ] and the measurement G = [g₁, g₂, …, gₙ], where F and G are matrix representations of f and g. In such a case, Eq. (1) can be written in the form [38]

vec(G) = Φyx · vec(F) = (Φyᵀ ⊗ Φx) · vec(F),    (7)

and, using properties of the Kronecker product, we can write

G = Φy F Φx.    (8)
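The equivalence between the Kronecker form, Eq. (7), and the separable form, Eq. (8), is easy to check numerically. The following sketch uses illustrative sizes and writes the transpose of the x-operator explicitly so that the dimensions are unambiguous; it relies on the standard column-stacking identity vec(A F Bᵀ) = (B ⊗ A) vec(F):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
Phi_y = rng.standard_normal((4, n))   # sensing operator, y direction
Phi_x = rng.standard_normal((5, n))   # sensing operator, x direction
F = rng.standard_normal((n, n))       # 2D signal

# Separable form, Eq. (8)-style: only small matrix products
G = Phi_y @ F @ Phi_x.T

# Kronecker form on the vectorized signal, using the column-major
# (column-stacking) identity vec(A F B^T) = (B kron A) vec(F)
vecF = F.flatten(order="F")
vecG = np.kron(Phi_x, Phi_y) @ vecF

assert np.allclose(vecG, G.flatten(order="F"))
```

For factors of size ∼1000 × 1000, the explicit Kronecker matrix would have ∼10⁶ × 10⁶ entries, whereas the separable form touches only the two small factors — exactly the saving exploited below.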

Consequently, Eq. (4) can be rewritten to solve

F̂ = ΨÂΨᵀ  subject to  Â = arg min_A {‖vec(G) − vec(ΦyΨAΨᵀΦx)‖₂² + γ‖vec(A)‖₁},    (9)

where α = vec(A). Equation (9) provides a simple way to handle the huge matrix–vector multiplication of Eq. (4). For example, if each of Φy, Φx, and F has ∼1000 × 1000 entries, Eq. (9) requires operations with matrices of the same order, whereas the standard compressive sensing recovery problem, Eq. (4), involves algebraic manipulations with matrices of the order of ∼10⁶ × 10⁶. When considering CS with a separable sensing scheme, it was shown in [38] that the mutual coherence of the separable sensing system is given by

μ(Φyx, Ψyx) = μ(Φy ⊗ Φx, Ψy ⊗ Ψx) = μ(Φy, Ψy) · μ(Φx, Ψx).    (10)

The mutual coherence, Eq. (10), can be shown to be larger than that of a nonseparable sensing operator. Therefore, according to Eq. (2), the number of measurements M required to accurately reconstruct the signal with the separable sensing scheme is larger. For example, if Φ is a random orthogonal matrix uniformly distributed on the unit sphere, it can be shown that

μ(Φy ⊗ Φx, Ψ) / μ(Φ, Ψ) ≈ (log₁₀ N) / √(2 log₁₀ N) = √(½ log₁₀ N),    (11)

meaning that √(½ log₁₀ N) times more measurements are required to accurately reconstruct the signal using a separable sensing operator than with a nonseparable random operator [38]. This is a reasonable cost for the computational simplification gained. In practice, as was numerically demonstrated in [38], the loss in compression efficiency is quite moderate and typically smaller than predicted by Eq. (11).

4. Implementation Architecture for Spatial and Separable Spectral Encoding for Hyperspectral Compressive Sensing

In this section, we present an optical implementation scheme that permits both spatial and spectral random encoding. In Section 4.A we describe the spectral encoding method, and in Section 4.B we give the full description of the system architecture for compressive HS imaging by separable spatial and spectral operators (CHISSS).

A. Spectral Encoding

In this section we describe the principle of the proposed separable spectrum sensing operation. Figure 4 provides a schematic description of the spectral encoding principle. In this description, the input signal is the optical, spatially multiplexed signal at the detector S3 in Fig. 2; Fig. 4 shows a mechanism that replaces detector (a) or (b) in Fig. 2. The input signal, S3, is the output of the single-pixel CS camera presented in Fig. 2. Thus, it is a spectral vector that we wish to encode and measure using the photosensor. In Fig. 4, the input optical signal at S3 passes through a diffractive or dispersive element acting as a spectral-to-spatial converter. A spatial grating can be used to separate the spectral components in the horizontal y direction, thus converting the light spot into a spectral line. The spectral line in Fig. 4 (along the y direction) is spatially encoded using the coded aperture mask C1. Here, C1 is a single line of coded apertures. This operation gives each wavelength its own weight; i.e., each wavelength is multiplied by the local coded aperture transmission value. To focus and collect the different spectral components, regular converging lenses can be used. In practice, we

Fig. 4. (Color online) Schematic diagram of the spectral separable operator.


propose a parallel process for the spectral encoding with a cylindrical lens; this will be explained in the next section. The technique described above provides a single randomly encoded measurement of the spectral component. However, for CS we need M measurements satisfying Eq. (2), where each measurement is the result of a different encoding of the datacube. Multiple encodings of the spectral vector can be achieved by time-division multiplexing, i.e., by changing the aperture pattern for each measurement of the image sensor; however, this would result in a long acquisition time. Alternatively, the various spectral encodings can be achieved by space-division multiplexing. The system described in the next subsection implements such space-division multiplexing, essentially by duplicating the apparatus described in Fig. 4 in the x direction. The spectral information is multiplied by different random codes and captured by a line array of sensors. In this way, parallel spectrally encoded measurements are obtained within one exposure for a given spectral vector. The ability to measure all the spectral projections with a single exposure provides a way to measure HS images with the same number of spatial measurements as is needed for a monochromatic single-pixel CS camera [4].

B. System Structure

In this section we describe the proposed CHISSS architecture. The architecture implements an optical CS system using separable operators. In contrast to previous architectures for CS HS imaging [32,41,42], the CHISSS architecture provides a way to encode both the spatial and the spectral domains using separate, random operations, with the ability to change the compression ratio between the spectral and the spatial domains. The CHISSS system uses two separable random encoding codes, one for the spatial domain and the other for the spectral domain. Figure 5 depicts the proposed CHISSS system.

The spatial multiplexing process is performed in a way similar to that of the single-pixel HS camera [35]. As in Fig. 2, the lens L1 is used to image the object onto the digital micromirror device (DMD) D1. A random code of size Nx × Ny is displayed on D1. The encoded light reflected from D1 is then focused on the central point of the grating G1 using the lens L2. At this point, the spot on the G1 plane contains the same mixed spatial information for the entire spectrum. One can view the process up to this stage as a parallel encoding of the spatial data for each wavelength: each spectral component is a result of the spatial x–y multiplexing (provided by the DMD), where every component undergoes the same multiplexing process. The spectral multiplexing is achieved by applying a second, separate encoding operator. The spectral encoder is based on the method described in Fig. 4. By means of the cylindrical lenses L3 and L4 and the coded aperture C1, the spectral encoding process described in Section 4.A is performed in parallel. Grating G1 splits and diffracts the beam S3 into Nλ spectral spots, which are spread along parallel rays on the coding device C1 by means of the cylindrical lens L3. The coded aperture C1 has a random reflection pattern; therefore, each horizontal spectral line is encoded by a different random pattern. The coded aperture in Fig. 5 has Mλ horizontal elements and Nλ vertical elements. Next, the Nλ spectrally encoded components reflected from the vertical lines of C1 are summed by means of the cylindrical lens L4 and collected by the appropriate pixel in a line array sensor. The different spectral modulations pass through the cylindrical lens L4 in parallel. Note that the encoding process of the CHISSS system in Fig. 5 is separable in the x–y and λ domains.
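The two multiplexing stages just described can be summarized in a small numerical sketch of the separable forward model. The toy sizes and the binary 0/1 codes (standing in for the DMD and coded-aperture patterns) are assumptions of this illustration, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
Ny, Nx, Nl = 16, 16, 12        # toy hyperspectral cube dimensions
My, Mx, Ml = 8, 8, 6           # spatial / spectral measurement counts
F = rng.random((Ny, Nx, Nl))   # datacube F(y, x, lambda)

Phi_y = rng.integers(0, 2, (My, Ny)).astype(float)  # DMD code, y axis
Phi_x = rng.integers(0, 2, (Mx, Nx)).astype(float)  # DMD code, x axis
C = rng.integers(0, 2, (Ml, Nl)).astype(float)      # coded aperture C1

# Stage 1 — spatial multiplexing (DMD): each (i, j) exposure yields one
# mixed spectral vector S[i, j, :], the light arriving at the grating G1.
S = np.einsum('iy,jx,yxl->ijl', Phi_y, Phi_x, F)

# Stage 2 — spectral multiplexing (grating + C1 + line sensor): each
# spectral vector is projected onto Ml random codes in a single exposure.
G = np.einsum('ml,ijl->ijm', C, S)

print(G.shape)  # (8, 8, 6)
```

The second stage runs in parallel over the line sensor, so the number of exposures is set by the spatial stage alone, matching the space-division multiplexing described above.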
Since the spectral encoding is performed in parallel in a single step (by space-division multiplexing), the overall acquisition time is determined solely by the spatial encoding. Therefore, the CHISSS acquisition time is similar to that of the single-pixel CS camera. We note that the system and method described above perform universal HS CS; i.e., they are designed to image arbitrary HS data. Since no a priori information about the spatial or spectral features of the imaged scene is assumed to be available, random projections are preferable [1]. However, if a priori information about the imaged scene is available, one can imprint appropriate nonrandom masks on D1 and C1 to achieve improved task-specific CS [43]. The system in Fig. 5 can also easily be adapted to perform adaptive spectral imaging [44] by replacing the static coded mask C1 with a variable one (such as a DMD).

5. Simulation Results

Fig. 5. (Color online) Schematic diagram of the CHISSS system for CS HS imaging.


We simulated the acquisition process with the CHISSS system shown in Fig. 5 and investigated the reconstructions. To simulate the system, we used a computer procedure that implements the appropriate spatial and spectral separable encoding operators.
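As an illustration of such a procedure, the following sketch simulates separable sensing and recovery for a small 2D case. It is a simplified stand-in, not the paper's pipeline: plain ISTA replaces the TwIST solver, and the signal is assumed sparse in the identity basis rather than in 3D Haar wavelets:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 32, 24, 20              # signal side, measurements per axis, sparsity
F = np.zeros((n, n))              # k-sparse 2D "signal"
F.flat[rng.choice(n * n, size=k, replace=False)] = rng.standard_normal(k)

Phi_y = rng.standard_normal((m, n)) / np.sqrt(m)   # separable random sensing
Phi_x = rng.standard_normal((m, n)) / np.sqrt(m)
G = Phi_y @ F @ Phi_x.T           # separable measurements, Eq. (8)-style

def soft(x, t):                   # l1 proximal (soft-thresholding) operator
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# ISTA on 0.5*||G - Phi_y A Phi_x^T||_F^2 + gamma*||A||_1
L = np.linalg.norm(Phi_y, 2) ** 2 * np.linalg.norm(Phi_x, 2) ** 2
step, gamma = 1.0 / L, 1e-3
A = np.zeros_like(F)
for _ in range(2000):
    R = Phi_y @ A @ Phi_x.T - G               # residual
    A = soft(A - step * (Phi_y.T @ R @ Phi_x), step * gamma)

err = np.linalg.norm(A - F) / np.linalg.norm(F)
print(err)  # relative reconstruction error
```

The same iteration extends to the 3D separable model of Eq. (9) by additionally applying the spectral operator (and a sparsifying transform Ψ) along the third axis.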

We used real data from an HS camera. The HS image of the Iris painting (Fig. 6, left) was taken indoors using a halogen light source, and the parking-lot image (Fig. 6, right) was taken outdoors in daylight. Both images were recorded in 256 spectral bands from 500 to 657 nm, where the spectral width of each band is about 0.61–0.62 nm. The spatial image size was 256 × 256 pixels. We used these two HS cubes as objects and sampled them according to the CHISSS system structure shown in Fig. 5. As described in Fig. 4, each HS cube was first spatially encoded and then spectrally encoded. In the simulation we used three orthogonal random masks, Φx, Φy, and Φλ, to compose the separable sensing operator. Note that with the CHISSS shown in Fig. 5 (as with the systems in Fig. 2), the spatial sampling operator Φyx does not have to be separable in the x and y directions. However, to alleviate the computational burden required for CS and reconstruction of data of size N = 256³, we chose to use spatial masks obtained from a Kronecker product of Φx and Φy. While a nonseparable spatial sensing operator Φyx is represented by a matrix of the order of 256² × 256², the matrices Φy and Φx are of the order of 256 × 256, and the system's forward model is implemented simply by Eq. (8). For the recovery process, we used MATLAB R2012a and the TwIST [45] solver. The programs were run on an Intel i7-2600 3.4 GHz processor with 8 GB of memory. We used 3D Haar wavelets as the sparsifying operators, Ψᵀ, together with l1 regularization according to Eq. (9). Reconstructed images from the simulated CHISSS are shown in Fig. 6 (lower row). These results are for a total compression ratio (Mx × My × Mλ)/(Nx × Ny × Nλ) of 10% of the original HS datacube. For the Iris painting, the spatial-domain (x–y) compressive sensing ratio was set to

(Mx × My)/(Nx × Ny) = (217/256)² ≅ 71.2%,

while the spectral compressive sensing ratio was

Mλ/Nλ = 38/256 ≅ 14.8%.

For the parking lot, the HS data compression ratios are

(Mx × My)/(Nx × Ny) = (181/256)² ≅ 49.9%

and

Mλ/Nλ = 51/256 ≅ 19.9%,

for the spatial and spectral domains, respectively. As can be seen in Fig. 6, despite the ×10 compression, the reconstructions are quite similar to the original images. The reconstruction peak signal-to-noise ratio (PSNR) for the Iris painting was ∼21 dB and for the parking lot ∼25 dB. The dependence of the reconstruction quality on the CS ratio M/N is demonstrated in Fig. 7. Figure 7(a) shows an RGB projection of the HS source, and Figs. 7(b)–7(e) show reconstructions from data compressively sensed with ratios of 10%, 38%, 5%, and 13%, respectively. As can be seen, the results are reasonable even with compression as deep as 5%, while at compression ratios larger than 10% the degradation is hardly noticeable. Since the sparsity of the HS datacube in the spatial dimension is typically different from that in the spectral dimension, it is interesting to investigate the dependence of the CHISSS performance on the spatial and spectral compression ratios. Figure 8 shows the PSNR for the parking lot image compressively

Fig. 6. (Color online) Left: original image of “Iris painting,” and (lower image) its reconstruction from 10% samples. Right: original image of “Parking lot,” and (lower image) its reconstruction from 10% samples.


Fig. 7. (Color online) RGB projection of the 256 × 256 × 256 HS cube. (a) Source; (b) reconstruction from (128 × 128 spatial) × (102 spectral) measurements = 10%; (c) reconstruction from (197 × 197 spatial) × (163 spectral) measurements = 38%; (d) reconstruction from (204 × 204 spatial) × (20 spectral) measurements = 5%; (e) reconstruction from (204 × 204 spatial) × (51 spectral) measurements = 13%.

sampled with various spectral and spatial ratios, yielding given overall sampling ratios M/N. Dotted contours represent locations of the same total compression ratio. From Fig. 8 it is evident that, as expected, the PSNR increases as a function of the total sensing ratio. In addition, we can also see that the reconstruction PSNR increases as the spectral compression contribution to the total compression ratio becomes higher. This reflects the well-known fact that HS cubes are more compressible in the spectral dimension [26,46,47]. Figure 9 shows the reconstruction PSNR contour lines of the interpolated surface in Fig. 8. The contour lines show, from another perspective, the observation obtained from Fig. 8 that the influence of the spectral compression is larger than that of the spatial compression. For instance, in the upper part of Fig. 9 the equi-PSNR contours are approximately vertically aligned, implying that introducing spectral compression (taking above about 70% of the samples) at a given spatial compression induces negligible PSNR degradation. The greater influence of the spectral compression is evident in the rest of the graph too (below 70% spectral samples). For example, to achieve a PSNR of 30 dB, one can choose a spatial compression of 42% together with a spectral compression of 72% (point A), yielding a total compression of 30%. Alternatively, the same PSNR can be achieved with a spatial compression of 75% together with a spectral compression of 25% (point B), yielding a total compression of 19%.

Fig. 8. (Color online) Reconstruction PSNR for the “Parking lot” HS cube as a function of the spatial and spectral compression ratios. Points with the same color represent the same overall compression ratio. For visualization purposes, a surface grid was built by bilinear interpolation.

Fig. 9. (Color online) Reconstruction PSNR contour plots for the CHISSS reconstruction of the “Parking lot.”

6. Conclusion

We have presented a technique and simulations for HS compressive imaging using separable random projections in all three dimensions of the HS data. The proposed CHISSS architecture can provide both spatial and spectral random encoding in a relatively simple way. The spectral multiplexing is done in

parallel and only once per spatial multiplexing; therefore, an HS cube can be acquired with the same number of spatial projections. Simulation results demonstrate the need to balance the compression depths in the spatial and spectral domains to optimize the CHISSS performance for a given total compression sensing ratio. Because of the higher redundancy in the spectral domain, more spatial projections than spectral projections are needed.

Adrian Stern wishes to thank the Israel Science Foundation (grant No. 1039/09). The authors wish to thank Iris Tresman (Arts Department, Ben-Gurion University) for providing her painting for HS imaging. We also acknowledge Professor Ohad Ben-Shahar’s research group (the interdisciplinary Computational Vision Lab) for providing the hyperspectral camera.

References
1. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).
2. M. Stojnic, W. Xu, and B. Hassibi, “Compressed sensing of approximately sparse signals,” in IEEE International Symposium on Information Theory (IEEE, 2008), pp. 2182–2186.
3. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
4. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).
5. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93, 121105 (2008).
6. W. L. Chan, M. L. Moravec, R. G. Baraniuk, and D. M. Mittleman, “Terahertz imaging with compressed sensing and phase retrieval,” Opt. Lett. 33, 974–976 (2008).
7. L. McMackin, M. A. Herman, B. Chatterjee, and M. Weldon, “A high-resolution SWIR camera via compressed sensing,” Proc. SPIE 8353, 835303 (2012).
8. J. Ma, “Single-pixel remote sensing,” IEEE Geosci. Remote Sens. Lett. 2, 199–203 (2009).
9. J. Ma, “A single-pixel imaging system for remote sensing by two-step iterative curvelet thresholding,” IEEE Geosci. Remote Sens. Lett. 6, 676–680 (2009).
10. A. Stern and B. Javidi, “Random projections imaging with extended space-bandwidth product,” J. Disp. Technol. 3, 315–320 (2007).
11. A. Stern, Y. Rivenson, and B. Javidi, “Optically compressed image sensing using random aperture coding,” Proc. SPIE 6975, 69750D (2008).
12. Y. Rivenson and A. Stern, “Compressive sensing techniques in holography,” in 10th Euro-American Workshop on Information Optics (WIO) (IEEE, 2011), pp. 1–2.
13. Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Disp. Technol. 6, 506–509 (2010).
14. S. Evladov, O. Levi, and A. Stern, “Progressive compressive imaging from Radon projections,” Opt. Express 20, 4260–4271 (2012).
15. Y. Kashter, O. Levi, and A. Stern, “Optical compressive change and motion detection,” Appl. Opt. 51, 2491–2496 (2012).
16. D. J. Townsend, P. K. Poon, S. Wehrwein, T. Osman, A. V. Mariano, E. M. Vera, M. D. Stenner, and M. E. Gehm, “Static compressive tracking,” Opt. Express 20, 21160–21172 (2012).
17. M. de Moraes Marim, E. D. Angelini, and J. Olivo-Marin, “Compressed sensing in biological microscopy,” Proc. SPIE 7446, 744605 (2009).
18. S. Schwartz, A. Wong, and D. A. Clausi, “Compressive fluorescence microscopy using saliency-guided sparse reconstruction ensemble fusion,” Opt. Express 20, 17281–17296 (2012).
19. V. Studer, “PNAS plus: compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012).
20. R. M. Willett, R. F. Marcia, and J. M. Nichols, “Compressed sensing for practical optical imaging systems: a tutorial,” Opt. Eng. 50, 072601 (2011).
21. J. S. Sanders, R. E. Williams, R. G. Driggers, and C. E. Halford, “A novel concept for hyperspectral remote sensing,” in Proceedings of IEEE Southeastcon (IEEE, 1992), Vol. 1, pp. 363–367.
22. T. Wilson and R. Felt, “Hyperspectral remote sensing technology (HRST) program,” in Proceedings of the IEEE Aerospace Conference (IEEE, 1998), Vol. 5, pp. 193–200.
23. J. In, S. Shirani, and F. Kossentini, “JPEG compliant efficient progressive image coding,” in Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 1998), Vol. 5, pp. 2633–2636.
24. Q. Wang and Y. Shen, “A JPEG2000 and nonlinear correlation measurement based method to enhance hyperspectral image compression,” in Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IEEE, 2005), pp. 2009–2011.
25. J. Lv, Y. Li, B. Huang, and C. Wu, “Hyperspectral compressive sensing,” Proc. SPIE 7810, 781003 (2010).
26. S. Lim, K. Sohn, and C. Lee, “Compression for hyperspectral images using three dimensional wavelet transform,” in IEEE 2001 International Geoscience and Remote Sensing Symposium (IEEE, 2001), Vol. 1, pp. 109–111.
27. S. Lim, K. H. Sohn, and C. Lee, “Principal component analysis for compression of hyperspectral images,” in IEEE 2001 International Geoscience and Remote Sensing Symposium (IEEE, 2001), Vol. 1, pp. 97–99.
28. G. A. Shaw and H.-H. K. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J. 14, 3–28 (2003).
29. M. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Sparse unmixing of hyperspectral data,” IEEE Trans. Geosci. Remote Sens. 49, 2014–2039 (2011).
30. N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Process. Mag. 19(1), 44–57 (2002).
31. H. Arguello and G. R. Arce, “Code aperture optimization for spectrally agile compressive imaging,” J. Opt. Soc. Am. A 28, 2400–2413 (2011).
32. Y. Wu and G. Arce, “Snapshot spectral imaging via compressive random convolution,” in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2011), pp. 1465–1468.
33. H. Arguello and G. Arce, “Code aperture agile spectral imaging (CAASI),” in Imaging Systems Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), paper ITuA4.
34. Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, “Development of a digital-micromirror-device-based multishot snapshot spectral imaging system,” Opt. Lett. 36, 2692–2694 (2011).
35. T. Sun and K. Kelly, “Compressive sensing hyperspectral imager,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CTuA5.
36. C. Li, T. Sun, K. F. Kelly, and Y. Zhang, “A compressive sensing and unmixing scheme for hyperspectral data processing,” IEEE Trans. Image Process. 21, 1200–1210 (2012).
37. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).
38. Y. Rivenson and A. Stern, “Compressed imaging with a separable sensing operator,” IEEE Signal Process. Lett. 16, 449–452 (2009).
39. Y. Rivenson and A. Stern, “Practical compressive sensing of large images,” presented at the 16th International Conference on Digital Signal Processing (IEEE, 2009), pp. 1–9.
40. M. F. Duarte and R. G. Baraniuk, “Kronecker compressive sensing,” IEEE Trans. Image Process. 21, 494–504 (2012).
41. A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express 17, 6368–6388 (2009).
42. Q. Zhang, R. Plemmons, D. Kittle, D. Brady, and S. Prasad, “Reconstructing and segmenting hyperspectral images from compressed measurements,” in 3rd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS) (IEEE, 2011).
43. A. Ashok, P. K. Baheti, and M. A. Neifeld, “Compressive imaging system design using task-specific information,” Appl. Opt. 47, 4457–4471 (2008).
44. D. Dinakarababu, D. Golish, and M. Gehm, “Adaptive feature specific spectroscopy for rapid chemical identification,” Opt. Express 19, 4595–4610 (2011).
45. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).
46. M. J. Ryan and J. F. Arnold, “Lossy compression of hyperspectral data using vector quantization,” Remote Sens. Environ. 61, 419–436 (1997).
47. S.-E. Qian, A. B. Hollinger, M. Dutkiewicz, H. A. Z. Tsang, and J. R. Freemantle, “Effect of lossy vector quantization hyperspectral data compression on retrieval of red-edge indices,” IEEE Trans. Geosci. Remote Sens. 39, 1459–1470 (2001).