
arXiv:1304.0941v1 [cs.IT] 3 Apr 2013

A New Look at Generalized Orthogonal Matching Pursuit: Stable Signal Recovery under Measurement Noise

Jian Wang, Suhyuk (Seokbeop) Kwon, and Byonghyo Shim
School of Information and Communication, Korea University, Seoul, Korea 136-713
Email: {jwang,shkwon,bshim}@isl.korea.ac.kr, Phone: 82-2-3290-4842

Abstract—Generalized orthogonal matching pursuit (gOMP) is an extension of the orthogonal matching pursuit (OMP) algorithm designed to improve the recovery performance of sparse signals. In this paper, we provide a new analysis of the gOMP algorithm for both noiseless and noisy scenarios. We show that if the measurement matrix $\Phi \in \mathbb{R}^{m \times n}$ satisfies the restricted isometry property (RIP) with $\delta_{7K+N-1} \le 0.0231$, then gOMP can perfectly recover any $K$-sparse signal $x \in \mathbb{R}^n$ from the measurements $y = \Phi x$ within $\lceil \frac{6K}{N} \rceil$ iterations, where $N$ is the number of indices chosen in each iteration. We also show that if $\Phi$ satisfies the RIP with $\delta_{11K+N-1} \le 0.0627$, then gOMP can perform a stable recovery of a $K$-sparse signal $x$ from the noisy measurements $y = \Phi x + v$ within $\lceil \frac{10K}{N} \rceil$ iterations. For Gaussian random measurements, the results indicate that the required measurement size is $m = O(K \log(\frac{n}{K}))$, which is much smaller than the existing result $m = O(K^2 \log(\frac{n}{K}))$.

Index Terms—Compressive sensing (CS), sparse recovery, stable recovery, generalized orthogonal matching pursuit (gOMP), restricted isometry property (RIP)

I. INTRODUCTION

Orthogonal matching pursuit (OMP) is a greedy algorithm widely used for solving sparse recovery problems [1]–[6]. The goal of OMP is to recover a $K$-sparse vector $x \in \mathbb{R}^n$ from the measurements
$$y = \Phi x \qquad (1)$$
where $\Phi \in \mathbb{R}^{m \times n}$ is a measurement matrix. In each iteration, OMP estimates the support (positions of nonzero elements) of $x$ by adding the index of the column of $\Phi$ that is maximally correlated with the residual. The contribution of the columns in the estimated support is then eliminated from the measurements $y$, yielding an updated residual for the next iteration. While the number of iterations of OMP is usually set to the sparsity level of the underlying signal to be recovered, there have been some attempts to relax this constraint to enhance the recovery performance. In one direction, an approach allowing more iterations than the sparsity level has been suggested [7]–[10]. In another direction, an algorithm selecting multiple indices in each iteration, referred to as generalized OMP (gOMP) [11], OMMP [12], [13], or KOMP [14], has been proposed; the procedure is summarized in Table I.

This work was supported by the KCC (Korea Communications Commission), Korea, under the R&D program supervised by the KCA (Korea Communication Agency) (KCA-12-911-01-110) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012R1A2A2A01047510).

TABLE I
THE gOMP ALGORITHM

Input: measurement matrix $\Phi \in \mathbb{R}^{m \times n}$, measurements $y \in \mathbb{R}^m$, sparsity level $K$, number of indices for each selection $N$.
Initialize: iteration count $k = 0$, estimated list $T^0 = \emptyset$, residual vector $r^0 = y$.
While $\|r^k\|_2 > \epsilon$ and $k < m/N$ do
  $k = k + 1$.
  $\Lambda^k = \arg\max_{\Lambda : |\Lambda| = N} \|(\Phi' r^{k-1})_\Lambda\|_1$. (Identification)
  $T^k = T^{k-1} \cup \Lambda^k$. (Augmentation)
  $x^k = \arg\min_{u : \mathrm{supp}(u) = T^k} \|y - \Phi u\|_2$. (Estimation)
  $r^k = y - \Phi x^k$. (Residual Update)
End
Output: the estimated signal $x^k$.
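For concreteness, the following is a minimal NumPy sketch of the iteration in Table I (our own illustration, not the authors' code; the function name `gomp` and the toy problem sizes are ours). Note that the identification step in Table I, maximizing $\|(\Phi' r^{k-1})_\Lambda\|_1$ over all $|\Lambda| = N$, is equivalent to picking the $N$ columns with the largest correlation magnitudes.

```python
import numpy as np

def gomp(Phi, y, K, N, eps=1e-6):
    """Sketch of gOMP (Table I): select N indices per iteration."""
    m, n = Phi.shape
    support = []                      # estimated list T^k
    r = y.copy()                      # residual r^0 = y
    x_hat = np.zeros(n)
    k = 0
    while np.linalg.norm(r) > eps and k < m // N:
        k += 1
        # Identification: the N largest |correlations| maximize the l1 norm in Table I.
        corr = np.abs(Phi.T @ r)
        corr[support] = 0             # r is already orthogonal to the chosen columns
        new_idx = np.argsort(corr)[-N:]
        # Augmentation: T^k = T^{k-1} union Lambda^k
        support = sorted(set(support) | set(new_idx.tolist()))
        # Estimation: least squares restricted to the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        # Residual update
        r = y - Phi @ x_hat
    return x_hat

# Toy run with Gaussian measurements and mild noise (sizes are arbitrary).
rng = np.random.default_rng(0)
m, n, K, N = 128, 512, 8, 2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
v = 0.01 * rng.standard_normal(m)
x_hat = gomp(Phi, Phi @ x + v, K, N, eps=np.linalg.norm(v))
print(np.linalg.norm(x_hat - x), np.linalg.norm(v))
```

With the stopping threshold set near the noise level, one would expect the run to terminate after a few iterations with a final error on the order of $\|v\|_2$, in the spirit of the stability result (10) below.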

Since it is possible to choose more than one support index in each iteration, gOMP is in general terminated in fewer than $K$ iterations. Also, it has been empirically shown that gOMP is better than the OMP algorithm in recovery performance and computational cost [11], [14].

In analyzing the theoretical performance of gOMP, a property called the restricted isometry property (RIP) has been popularly used [11], [12], [14]. A measurement matrix $\Phi$ is said to satisfy the RIP of order $K$ if there exists a constant $\delta$ such that [15]
$$(1 - \delta) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta) \|x\|_2^2 \qquad (2)$$
for any $K$-sparse vector $x$. In particular, the minimum of all constants $\delta$ satisfying (2) is called the isometry constant $\delta_K$. It has been shown that the perfect recovery of any $K$-sparse signal via the gOMP algorithm is guaranteed under [11]
$$\delta_{NK} < \frac{\sqrt{N}}{\sqrt{K} + 3\sqrt{N}}. \qquad (3)$$
For Gaussian random measurements, this result implies that the required size of measurements is [16]
$$m = O\left( K^2 \log\left( \frac{n}{K} \right) \right). \qquad (4)$$
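Computing $\delta_K$ exactly is combinatorial (it involves all $\binom{n}{K}$ supports), so in practice one can only probe (2) numerically. The sketch below (our own illustration; the function name and the problem sizes are ours) samples random $K$-sparse vectors and reports the worst observed deviation of $\|\Phi x\|_2^2 / \|x\|_2^2$ from 1, which gives a lower estimate of $\delta_K$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rip_constant(Phi, K, trials=2000):
    """Monte-Carlo lower estimate of the order-K isometry constant in (2).

    Exact computation of delta_K is combinatorial, so this only samples
    random K-sparse vectors and reports the worst observed distortion.
    """
    n = Phi.shape[1]
    worst = 0.0
    for _ in range(trials):
        S = rng.choice(n, size=K, replace=False)
        x = np.zeros(n)
        x[S] = rng.standard_normal(K)
        ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst

# Gaussian matrix with variance-1/m entries, as in the setting of [16].
m, n, K = 128, 512, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
print(sample_rip_constant(Phi, K))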


While the perfect recovery analysis is possible for the noiseless scenario, such an analysis is not possible for the noisy scenario, where the measurements are contaminated by a noise vector $v$ as
$$y = \Phi x + v. \qquad (5)$$
To analyze the recovery performance in this scenario, the $\ell_2$-norm of the recovery distortion has been commonly employed. In [11], it has been shown that under
$$\delta_{NK} < \frac{\sqrt{N}}{\sqrt{K} + 3\sqrt{N}} \quad \text{and} \quad \delta_{N(K+1)} < 1, \qquad (6)$$
gOMP generates an estimate $x^K$ such that
$$\|x^K - x\|_2 \le C_K \|v\|_2 \qquad (7)$$

where $C_K = O(\sqrt{K})$.

The main purpose of this paper is to provide improved recovery bounds for the gOMP algorithm in both noiseless and noisy scenarios. The primary contributions of this paper are twofold:
1) In the noiseless scenario, we show that if the measurement matrix $\Phi$ satisfies the RIP with
$$\delta_{7K+N-1} \le 0.0231, \qquad (8)$$
then gOMP perfectly recovers any $K$-sparse signal within $\lceil \frac{6K}{N} \rceil$ iterations.
2) In the noisy scenario, we show that under
$$\delta_{11K+N-1} \le 0.0627, \qquad (9)$$
gOMP can perform a stable recovery of a $K$-sparse signal $x$ from the noisy measurements $y = \Phi x + v$ within $\lceil \frac{10K}{N} \rceil$ iterations, with the recovery distortion satisfying
$$\left\| x^{\lceil \frac{10K}{N} \rceil} - x \right\|_2 \le C \|v\|_2 \qquad (10)$$

where $C$ is a constant.

In comparison to our previous work in [11], our new results are distinct and important in two respects. Firstly, in contrast to the previous recovery bounds in (3) and (6), which are expressed as monotonically decreasing functions of $K$ and hence vanish for large $K$, the proposed bounds in (8) and (9) are constants. When Gaussian random measurements are employed, the proposed bounds imply that the required measurement size is [16]
$$m = O\left( K \log\left( \frac{n}{K} \right) \right), \qquad (11)$$
which is clearly better than the previous result $m = O(K^2 \log(\frac{n}{K}))$ [11]. Secondly, and perhaps more importantly, while our previous work demonstrates that the reconstruction error (i.e., the $\ell_2$-norm of the recovery distortion) in the noisy scenario depends linearly on $\sqrt{K}\|v\|_2$ [11], our new result shows that the reconstruction error is upper bounded by a constant multiple of $\|v\|_2$, which ensures the stability of the gOMP algorithm in the noisy scenario.

The remainder of the paper is organized as follows. In Section II, we introduce lemmas and propositions that are useful in our analysis. In Section III, we provide the recovery bound analysis of the gOMP algorithm in both the noiseless and noisy scenarios. In Sections IV and V, we provide the recovery condition analysis of gOMP in the noiseless and noisy scenarios, respectively, and conclude the paper in Section VI.

We briefly summarize the notation used in this paper. For a vector $x \in \mathbb{R}^n$, $T = \mathrm{supp}(x) = \{i \,|\, x_i \ne 0\}$ represents the set of its nonzero positions, and $\Omega = \{1, \cdots, n\}$. For a set $S \subseteq \Omega$, $|S|$ denotes the cardinality of $S$, and $T \backslash S$ is the set of all elements contained in $T$ but not in $S$. $\Phi_S \in \mathbb{R}^{m \times |S|}$ is the submatrix of $\Phi$ containing only the columns indexed by $S$, and $\Phi'_S$ is the transpose of $\Phi_S$. $x_S \in \mathbb{R}^{|S|}$ is the vector that equals $x$ on the elements indexed by $S$. If $\Phi_S$ has full column rank, then $\Phi^{\dagger}_S = (\Phi'_S \Phi_S)^{-1} \Phi'_S$ is the pseudoinverse of $\Phi_S$. $\mathrm{span}(\Phi_S)$ stands for the span of the columns of $\Phi_S$, $P_S = \Phi_S \Phi^{\dagger}_S$ is the projection onto $\mathrm{span}(\Phi_S)$, and $P^{\perp}_S = I - P_S$ is the projection onto the orthogonal complement of $\mathrm{span}(\Phi_S)$.

II. PRELIMINARIES

In this section, we provide lemmas and propositions that will be used throughout the paper. Let $\Gamma^k = T \backslash T^k$; then $\Gamma^k$ is the set of remaining (unselected) support indices after $k$ iterations of gOMP (see Fig. 1(a)). Here and in the rest of the paper, we assume without loss of generality that $\Gamma^k = \{1, \cdots, |\Gamma^k|\}$. Then it is clear that $0 \le |\Gamma^k| \le K$. For example, if $k = 0$, then $T^k = \emptyset$ and $|\Gamma^k| = |T| = K$, whereas if $T^k \supseteq T$, then $\Gamma^k = \emptyset$ and $|\Gamma^k| = 0$. Also, for notational convenience we assume that $\{x_i\}_{i=1,2,\cdots,|\Gamma^k|}$ are arranged in descending order of their magnitudes (i.e., $|x_1| \ge |x_2| \ge \cdots \ge |x_{|\Gamma^k|}|$). Now, we define the subset $\Gamma^k_\tau$ of $\Gamma^k$ as
$$\Gamma^k_\tau = \begin{cases} \emptyset & \tau = 0, \\ \{1, 2, \cdots, 2^{\tau-1} N\} & \tau = 1, 2, \cdots, \lceil \log_2 \frac{|\Gamma^k|}{N} \rceil, \\ \Gamma^k & \tau = \lceil \log_2 \frac{|\Gamma^k|}{N} \rceil + 1. \end{cases} \qquad (12)$$
See Fig. 1(b) for an illustration of $\Gamma^k_\tau$. Note that the last set $\Gamma^k_{\lceil \log_2 (|\Gamma^k|/N) \rceil + 1}$ does not necessarily have $2^{\lceil \log_2 (|\Gamma^k|/N) \rceil} N$ elements. For a given set $\Gamma^k$ and a constant $\mu \ge 2$, let $L \in \{1, 2, \cdots, \lceil \log_2 \frac{|\Gamma^k|}{N} \rceil + 1\}$ be the minimum positive integer satisfying $\|x_{\Gamma^k \backslash \Gamma^k_0}\|_2^2 \cdots$
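To make the nested set definition in (12) concrete, here is a small sketch (our own illustration; it assumes $\Gamma^k$ is listed in descending order of the magnitudes of the corresponding entries of $x$, as stated above):

```python
import math

def gamma_subsets(gamma_k, N):
    """Build the nested subsets of (12) from the sorted index list gamma_k."""
    t_max = math.ceil(math.log2(len(gamma_k) / N)) if len(gamma_k) > N else 0
    subsets = [[]]                                   # tau = 0: the empty set
    for tau in range(1, t_max + 1):                  # tau = 1 .. ceil(log2(|Gamma|/N))
        subsets.append(list(gamma_k[: 2 ** (tau - 1) * N]))
    subsets.append(list(gamma_k))                    # last set: Gamma^k itself
    return subsets

# With |Gamma^k| = 10 and N = 2: [], [1,2], [1..4], [1..8], [1..10].
print(gamma_subsets(list(range(1, 11)), 2))
```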




$B_u$, $B_\ell$.

Then, one can easily check that (30) holds whenever $B_u \le B_\ell$. Also, since (30) is a sufficient condition for (29) and (29) is equivalent to (25), one can conclude that (25) is guaranteed whenever $B_u \le B_\ell$.

Let us first find the upper bound $B_u$ of $\|x_{\Gamma^{k_L}}\|_2^2$. Observe that
$$\|r^{k_L}\|_2^2 = \|y - \Phi x^{k_L}\|_2^2 = \|\Phi(x - x^{k_L})\|_2^2 \qquad (31)(32)$$
$$\ge (1 - \delta_{|T \cup T^{k_L}|}) \|x - x^{k_L}\|_2^2 \ge (1 - \delta_{|T \cup T^{k_L}|}) \|x_{\Gamma^{k_L}}\|_2^2, \qquad (33)$$
so that
$$\|x_{\Gamma^{k_L}}\|_2^2 \le \frac{\|r^{k_L}\|_2^2}{1 - \delta_{|T \cup T^{k_L}|}}. \qquad (34)$$
After some manipulations, one can easily check that
$$\|r^{k_L}\|_2^2 \le \eta^L \|r^k\|_2^2 + (1 + \delta_\gamma) \cdots \|r^k\|_2^2. \qquad (35)$$

We next find a lower bound $B_\ell$ of $\|x_{\Gamma^k \backslash \Gamma^{k_{L-1}}}\|_2^2$. By applying Proposition 2.4 to $r^{k_1}, r^{k_2}, \cdots, r^{k_L}$, we have
$$\|r^{k_1}\|_2^2 \le C_{1,k_0,k_1} \|r^k\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_1}}\|_2^2,$$
$$\|r^{k_2}\|_2^2 \le C_{2,k_1,k_2} \|r^{k_1}\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_2}}\|_2^2,$$
$$\vdots$$
$$\|r^{k_L}\|_2^2 \le C_{L,k_{L-1},k_L} \|r^{k_{L-1}}\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_L}}\|_2^2. \qquad (38)$$
Since $k_i - k_{i-1} \ge \kappa \lceil \frac{|\Gamma^k_i|}{N} \rceil$,⁵ we further have
$$\|r^{k_1}\|_2^2 \le \eta \|r^k\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_1}}\|_2^2,$$
$$\|r^{k_2}\|_2^2 \le \eta \|r^{k_1}\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_2}}\|_2^2,$$
$$\vdots$$
$$\|r^{k_L}\|_2^2 \le \eta \|r^{k_{L-1}}\|_2^2 + (1 + \delta_\gamma) \|x_{\Gamma^k \backslash \Gamma^{k_L}}\|_2^2. \qquad (44)$$
From (44), one can show that
$$\|r^{k_L}\|_2^2 \le \eta^L \|r^k\|_2^2 + (1 + \delta_\gamma) \sum_{\tau=0}^{L} \eta^{L-\tau} \|x_{\Gamma^k \backslash \Gamma^{k_\tau}}\|_2^2 \qquad (49)$$
for $\tau = 0, 1, \cdots, L$. Substituting (49) into (50), we have
$$\|r^{k_L}\|_2^2 \le \cdots$$

⁵From (12), we have $\lceil \frac{|\Gamma^k_\tau|}{N} \rceil = 2^{\tau-1}$ for $\tau = 1, \cdots, L-1$. As a result, $k_1 - k_0 = \lceil \kappa (1 + \lceil \frac{|\Gamma^k_1|}{N} \rceil) \rceil = \lceil 2\kappa \rceil$, $k_i - k_{i-1} = \left( k + \lceil \kappa (1 + \sum_{\tau=1}^{i} 2^{\tau-1}) \rceil \right) - \left( k + \lceil \kappa (1 + \sum_{\tau=1}^{i-1} 2^{\tau-1}) \rceil \right) = \lceil \kappa \cdot 2^i \rceil - \lceil \kappa \cdot 2^{i-1} \rceil$ for $i = 2, \cdots, L-1$, and $k_L - k_{L-1} = \lceil \kappa (2^{L-1} + \lceil \frac{|\Gamma^k_L|}{N} \rceil) \rceil - \lceil \kappa \cdot 2^{L-1} \rceil$. Since $\kappa$ is a multiple of $\frac{1}{2}$, we further have $k_1 - k_0 = 2\kappa \ge \kappa = \kappa \lceil \frac{|\Gamma^k_1|}{N} \rceil$, $k_i - k_{i-1} = \kappa \cdot 2^i - \kappa \cdot 2^{i-1} = \kappa \cdot 2^{i-1} = \kappa \lceil \frac{|\Gamma^k_i|}{N} \rceil$ for $i = 2, \cdots, L-1$, and $k_L - k_{L-1} = \kappa \cdot 2^{L-1} + \kappa \lceil \frac{|\Gamma^k_L|}{N} \rceil - \kappa \cdot 2^{L-1} = \kappa \lceil \frac{|\Gamma^k_L|}{N} \rceil$. In summary, $k_i - k_{i-1} \ge \kappa \lceil \frac{|\Gamma^k_i|}{N} \rceil$ holds true for $i = 1, \cdots, L$.
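As a quick numerical sanity check of the index arithmetic in footnote 5 (an illustration only; the values of $\kappa$, $k$, and $L$ below are arbitrary), one can tabulate the gaps $k_i - k_{i-1}$ for a $\kappa$ that is a multiple of $\frac{1}{2}$:

```python
import math

# Footnote 5: k_i = k + ceil(kappa * (1 + sum_{t=1}^{i} 2^(t-1))); for kappa a
# multiple of 1/2, the gaps k_i - k_{i-1} should equal kappa * 2^(i-1).
kappa, k, L = 1.5, 3, 6
ks = [k + math.ceil(kappa * (1 + sum(2 ** (t - 1) for t in range(1, i + 1))))
      for i in range(1, L)]
gaps = [b - a for a, b in zip(ks, ks[1:])]
print(gaps)                                      # observed gaps
print([kappa * 2 ** (i - 1) for i in range(2, L)])  # predicted kappa * 2^(i-1)
```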
APPENDIX B

Assume that the elements of $u_W$ are arranged in descending order of their magnitudes. We define the subset $W_i$ of $W$ as⁷
$$W_i = \begin{cases} \{N(i-1)+1, \cdots, Ni\} & i = 1, \cdots, \lceil \frac{|W|}{N} \rceil - 1, \\ \{N(\lceil \frac{|W|}{N} \rceil - 1)+1, \cdots, |W|\} & i = \lceil \frac{|W|}{N} \rceil. \end{cases} \qquad (111)$$
Then it is clear that
$$\langle u, z \rangle = \langle u_W, z_W \rangle \le \sum_i |\langle u_{W_i}, z_{W_i} \rangle| \le \sum_i \|u_{W_i}\|_2 \|z_{W_i}\|_2, \qquad (112)\text{–}(114)$$
where the second inequality is due to Hölder's inequality. Next, consider the case $N > |W|$. In this case, it is clear that $\lceil \frac{|W|}{N} \rceil = 1$ and $\|u_U\|_2 \ge \|u_W\|_2$, and hence
$$\sqrt{\left\lceil \frac{|W|}{N} \right\rceil} \, \|u_U\|_2 \|z_W\|_2 = \|u_U\|_2 \|z_W\|_2 \ge \|u_W\|_2 \|z_W\|_2 \ge \langle u_W, z_W \rangle = \langle u, z \rangle, \qquad (118)$$
which is the desired result.

⁷We note that when $\lceil \frac{|W|}{N} \rceil > \frac{|W|}{N}$, the last set $W_{\lceil |W|/N \rceil}$ has less than $N$ elements.

The following lemma characterizes the reduction of the residual (in the $\ell_2$-norm) in the $(l+1)$-th iteration of gOMP.

Lemma B.2: Let $x \in \mathbb{R}^n$ be any $K$-sparse vector supported on $T$ and let $\Phi \in \mathbb{R}^{m \times n}$ be the measurement matrix with unit $\ell_2$-norm columns. Then the residual of gOMP satisfies
$$\|r^l - r^{l+1}\|_2 \ge \frac{1}{\sqrt{1 + \delta_N}} \|\Phi'_{\Lambda^{l+1}} r^l\|_2.$$
Proof: Since $x^{l+1}$ is the least-squares estimate on $T^{l+1}$ (see Table I), we have
$$r^{l+1} = P^{\perp}_{T^{l+1}} y = P^{\perp}_{T^{l+1}} (r^l + \Phi x^l) = P^{\perp}_{T^{l+1}} r^l, \qquad (120)(121)$$
where (121) is because $\Phi x^l \in \mathrm{span}(\Phi_{T^l})$ and $T^l \subset T^{l+1}$, and hence $P^{\perp}_{T^{l+1}} \Phi x^l = 0$. As a result,
$$r^l - r^{l+1} = r^l - P^{\perp}_{T^{l+1}} r^l = P_{T^{l+1}} r^l. \qquad (122)$$
Noting that $\Lambda^{l+1} \subseteq T^{l+1}$, we have
$$\|r^l - r^{l+1}\|_2 = \|P_{T^{l+1}} r^l\|_2 \ge \|P_{\Lambda^{l+1}} r^l\|_2. \qquad (123)$$
Since $P_{\Lambda^{l+1}} = P'_{\Lambda^{l+1}} = (\Phi^{\dagger}_{\Lambda^{l+1}})' \Phi'_{\Lambda^{l+1}}$, we further have
$$\|r^l - r^{l+1}\|_2 \ge \|(\Phi^{\dagger}_{\Lambda^{l+1}})' \Phi'_{\Lambda^{l+1}} r^l\|_2 \ge \frac{1}{\sqrt{1 + \delta_N}} \|\Phi'_{\Lambda^{l+1}} r^l\|_2, \qquad (124)$$
where (124) is due to the fact that the singular values of $\Phi_{\Lambda^{l+1}}$ lie between $\sqrt{1 - \delta_N}$ and $\sqrt{1 + \delta_N}$.

In the next lemma, we find a lower bound on $\|\Phi'_{\Lambda^{l+1}} r^l\|_2$.

Lemma B.3: Let $x \in \mathbb{R}^n$ be any $K$-sparse vector supported on $T$ and let $\Phi \in \mathbb{R}^{m \times n}$ be the measurement matrix with unit $\ell_2$-norm columns. Then, for a given set $\Gamma^k$ and an integer $l \ge k$, the residual of gOMP satisfies
$$\|\Phi'_{\Lambda^{l+1}} r^l\|_2^2 \ge \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil} \left( \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right). \qquad (125)$$
Proof: Let $z \in \mathbb{R}^n$ be the vector satisfying $z_{T \cap T^k \cup \Gamma^k_\tau} = x_{T \cap T^k \cup \Gamma^k_\tau}$ and $z_{\Omega \backslash (T \cap T^k \cup \Gamma^k_\tau)} = 0$. Further, let $u = \Phi' r^l$, $U = \Lambda^{l+1}$, and $W = \Gamma^k_\tau \backslash T^l$.⁸ Then, using Lemma B.1, we have
$$\langle \Phi' r^l, z \rangle \le \sqrt{\left\lceil \frac{|\Gamma^k_\tau \backslash T^l|}{N} \right\rceil} \|\Phi'_{\Lambda^{l+1}} r^l\|_2 \|z_{\Gamma^k_\tau \backslash T^l}\|_2 \le \sqrt{\left\lceil \frac{|\Gamma^k_\tau|}{N} \right\rceil} \|\Phi'_{\Lambda^{l+1}} r^l\|_2 \|z_{\Gamma^k_\tau \backslash T^l}\|_2. \qquad (126)$$
Since $\mathrm{supp}(x^l) = T^l$ and $\mathrm{supp}(\Phi' r^l) = \Omega \backslash T^l$, we have $\langle \Phi' r^l, x^l \rangle = 0$ and $\langle \Phi' r^l, z \rangle = \langle \Phi' r^l, z - x^l \rangle$, and thus
$$\|\Phi'_{\Lambda^{l+1}} r^l\|_2 \ge \frac{\langle \Phi' r^l, z \rangle}{\sqrt{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil} \, \|z_{\Gamma^k_\tau \backslash T^l}\|_2} = \frac{\langle \Phi' r^l, z - x^l \rangle}{\sqrt{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil} \, \|z_{\Gamma^k_\tau \backslash T^l}\|_2}. \qquad (127)$$

⁸$U = \Lambda^{l+1}$ is because $\Lambda^{l+1}$ contains the indices corresponding to the $N$ most significant elements of $\Phi' r^l$. Since $\mathrm{supp}(u) = \Omega \backslash T^l$ and $\mathrm{supp}(z) = T \cap T^k \cup \Gamma^k_\tau$, and also noting that $T^k \subseteq T^l$, we have $W = \mathrm{supp}(u) \cap \mathrm{supp}(z) = \Gamma^k_\tau \backslash T^l$.
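The chain (120)–(124) hinges on the projection identity (122). A small numerical sanity check (our own illustration with arbitrary supports; not part of the proof), using the notation $P_S = \Phi_S \Phi_S^{\dagger}$:

```python
import numpy as np

# Check (122): r^l - r^{l+1} = P_{T^{l+1}} r^l, for nested supports T^l in T^{l+1}.
rng = np.random.default_rng(1)
m, n = 64, 256
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = rng.standard_normal(m)

def residual(T):
    """r = P_T^perp y: y minus its projection onto span(Phi_T)."""
    P = Phi[:, T] @ np.linalg.pinv(Phi[:, T])
    return y - P @ y

T_l, T_l1 = [3, 17, 42], [3, 17, 42, 88, 101]          # hypothetical T^l subset of T^{l+1}
r_l, r_l1 = residual(T_l), residual(T_l1)
P_next = Phi[:, T_l1] @ np.linalg.pinv(Phi[:, T_l1])   # P_{T^{l+1}}
print(np.allclose(r_l - r_l1, P_next @ r_l))           # expected: True
```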


Furthermore, we have
$$\langle \Phi' r^l, z - x^l \rangle = \langle \Phi(z - x^l), r^l \rangle \qquad (128)$$
$$= \tfrac{1}{2} \left( \|\Phi(z - x^l)\|_2^2 + \|r^l\|_2^2 - \|r^l - \Phi(z - x^l)\|_2^2 \right) \qquad (129)$$
$$= \tfrac{1}{2} \left( \|\Phi(z - x^l)\|_2^2 + \|r^l\|_2^2 - \|\Phi(x - z) + v\|_2^2 \right) \qquad (130)$$
$$= \tfrac{1}{2} \left( \|\Phi(z - x^l)\|_2^2 + \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right), \qquad (131)$$
where (130) uses the fact that $r^l + \Phi x^l = y = \Phi x + v$.

In proving (125), we consider the following two cases.
• First, if $\|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 < 0$, then (125) holds trivially since $\|\Phi'_{\Lambda^{l+1}} r^l\|_2^2 \ge 0$.
• Next, if $\|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \ge 0$, then using the inequality $\frac{1}{2}(a + b) \ge \sqrt{ab}$ (with $a = \|\Phi(z - x^l)\|_2^2$ and $b = \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2$), (131) becomes
$$\langle \Phi' r^l, z - x^l \rangle \ge \|\Phi(z - x^l)\|_2 \sqrt{\|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2}. \qquad (132)$$
Noting that $z - x^l$ is supported on $\Gamma^k_\tau \cup T^l$, we have
$$\|\Phi(z - x^l)\|_2 \ge \sqrt{1 - \delta_{|\Gamma^k_\tau \cup T^l|}} \, \|z - x^l\|_2 \ge \sqrt{1 - \delta_{|\Gamma^k_\tau \cup T^l|}} \, \|(z - x^l)_{\Omega \backslash T^l}\|_2 = \sqrt{1 - \delta_{|\Gamma^k_\tau \cup T^l|}} \, \|z_{\Omega \backslash T^l}\|_2, \qquad (133)$$
where (133) is due to $(x^l)_{\Omega \backslash T^l} = 0$. Plugging (132) and (133) into (127), we have
$$\|\Phi'_{\Lambda^{l+1}} r^l\|_2 \ge \sqrt{\frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil}} \sqrt{\|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2}.$$
Combining these two cases, we obtain the desired result.

We are now ready to prove Proposition 2.3.
Proof: Using Lemmas B.2 and B.3 and also noting that $\|r^l - r^{l+1}\|_2^2 = \|r^l\|_2^2 - \|r^{l+1}\|_2^2$, we have
$$\|r^l\|_2^2 - \|r^{l+1}\|_2^2 \ge \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{(1 + \delta_N) \lceil \frac{|\Gamma^k_\tau|}{N} \rceil} \left( \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right), \qquad (134)$$
which completes the proof.

APPENDIX C
PROOF OF PROPOSITION 2.4

Proof: Subtracting $\|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2$ from both sides of (17), we have
$$\|r^{l+1}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \le \left( 1 - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right) \left( \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right). \qquad (135)$$
Since $\frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil |\Gamma^k_\tau|/N \rceil (1 + \delta_N)} > 0$, we have
$$1 - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \le \exp\left( - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right). \qquad (136)$$
Hence,
$$\|r^{l+1}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \le \exp\left( - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^l|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right) \left( \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right),$$
and also
$$\|r^{l+2}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \le \exp\left( - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^{l+1}|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right) \left( \|r^{l+1}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right), \qquad (137)$$
$$\vdots$$
$$\|r^{l'}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \le \exp\left( - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^{l'-1}|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right) \left( \|r^{l'-1}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right). \qquad (138)$$
After some manipulations, we obtain
$$\|r^{l'}\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \le \prod_{i=l}^{l'-1} \exp\left( - \frac{1 - \delta_{|\Gamma^k_\tau \cup T^i|}}{\lceil \frac{|\Gamma^k_\tau|}{N} \rceil (1 + \delta_N)} \right) \left( \|r^l\|_2^2 - \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2 \right). \qquad (139)$$
Since $C_{\tau,l,l'} > 0$, it is clear from (139) that
$$\|r^{l'}\|_2^2 \le C_{\tau,l,l'} \|r^l\|_2^2 + \|\Phi_{\Gamma^k \backslash \Gamma^k_\tau} x_{\Gamma^k \backslash \Gamma^k_\tau} + v\|_2^2, \qquad (140)$$

which completes the proof.

REFERENCES

[1] S. G. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
[2] J. A. Tropp, "Greed is good: Algorithmic results for sparse approximation," IEEE Trans. Inform. Theory, vol. 50, no. 10, pp. 2231–2242, Oct. 2004.
[3] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inform. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.
[4] M. A. Davenport and M. B. Wakin, "Analysis of orthogonal matching pursuit using the restricted isometry property," IEEE Trans. Inform. Theory, vol. 56, no. 9, pp. 4395–4401, Sep. 2010.
[5] J. Wang and B. Shim, "On the recovery limit of sparse signals using orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4973–4976, Sep. 2012.
[6] J. Ding, L. Chen, and Y. Gu, "Perturbation analysis of orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 61, no. 2, pp. 398–410, Jan. 2013.
[7] E. Livshitz, "On efficiency of orthogonal matching pursuit," 2010 [Online]. Available: http://arxiv.org/PS_cache/arxiv/pdf/1004/1004.3946v1.pdf
[8] T. Zhang, "Sparse recovery with orthogonal matching pursuit under RIP," IEEE Trans. Inform. Theory, vol. 57, no. 9, pp. 6215–6221, Sep. 2011.
[9] S. Foucart, "Stability and robustness of weak orthogonal matching pursuits," 2011 [Online]. Available: http://www.math.drexel.edu/~foucart/WOMMP.pdf
[10] J. Wang and B. Shim, "Improved recovery bounds of orthogonal matching pursuit using restricted isometry property," arXiv:1211.4293, 2012.
[11] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit," IEEE Trans. Signal Process., vol. 60, no. 12, pp. 6202–6216, Dec. 2012.
[12] E. Liu and V. N. Temlyakov, "The orthogonal super greedy algorithm and applications in compressed sensing," IEEE Trans. Inform. Theory, vol. 58, no. 4, pp. 2040–2047, Apr. 2012.
[13] S. Huang and J. Zhu, "Recovery of sparse signals using OMP and its variants: Convergence analysis based on RIP," Inverse Problems, vol. 27, no. 3, p. 035003, 2011.
[14] R. Maleh, "Improved RIP analysis of orthogonal matching pursuit," arXiv:1102.4311, 2011.
[15] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inform. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
[16] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253–263, 2008.
[17] D. L. Donoho and M. Elad, "On the stability of the basis pursuit in the presence of noise," Signal Processing, vol. 86, no. 3, pp. 511–532, 2006.
[18] E. J. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9–10, pp. 589–592, 2008.
[19] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 310–316, Apr. 2010.
[20] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, pp. 301–321, Mar. 2009.