Sparse Blind Deconvolution: What Cannot Be Done

Sunav Choudhary and Urbashi Mitra
Dept. of Electrical Engineering, University of Southern California
3740 McClintock Ave, Los Angeles CA 90089
Email: {sunavcho,ubli}@usc.edu
Abstract—Identifiability is a key concern in ill-posed blind deconvolution problems arising in wireless communications and image processing. The single channel version of the problem is the most challenging, and there have been efforts to use sparse models to regularize the problem. Identifiability of the sparse blind deconvolution problem is analyzed, and it is established that a simple sparsity assumption in the canonical basis is insufficient for unique recovery, a surprising negative result. The proof technique involves lifting the deconvolution problem into a rank one matrix recovery problem and analyzing the rank two null space of the resultant linear operator. A DoF-wise (degrees-of-freedom) tight parametrized subset of this rank two null space is constructed to establish the results.

Index Terms—Identifiability, rank one matrix recovery, blind deconvolution, parametric representation, rank two null space
I. INTRODUCTION

Blind deconvolution is an important inverse problem that is routinely encountered in many practical applications like blind image restoration [1], [2] in image processing, blind system identification [3] in control theory, and blind channel estimation and equalization [4], [5] in wireless communications. In the absence of additional constraints, blind deconvolution is known to be ill-posed, and each application above uses some form of prior knowledge about the underlying signal structures to render the problem well-posed. In recent years, sparsity based models have been demonstrated to be good at capturing hidden signal structures in many practical applications; prominent examples include natural images admitting sparse wavelet domain representations [6], ultra wide band communication channels exhibiting sparsity in Doppler-delay domain representations [7], and user preferences and topic models displaying low-rank structures [8] (sparsity in the eigenvalue domain). While there have been a few attempts at exploiting sparsity as prior knowledge for blind deconvolution [9]–[11], satisfactory results have not been obtained on the key issue of identifiability of such sparse models, except in a few cases. We address this question in the present article and discover some surprising negative results in the process.

A. Contributions and Organization

In this paper, we address the question of identifiability for the noiseless sparse blind deconvolution problem. Specifically, given model orders m, n ∈ Z+, we investigate some sparse and non-sparse choices of domain restriction (x, y) ∈ K ⊆ R^m × R^n and establish that the vectors x and y cannot be uniquely determined from their linearly convolved resultant
vector z = x ⋆ y, even in non-pathological cases. The focus is on algorithm-independent identifiability analysis and hence we shall not restrict ourselves to efficient/polynomial-time algorithms. Section I-B discusses relevant prior results. Section II describes the system model and sets up the notion of identifiability as well as the lifted reformulation of the blind deconvolution problem as a rank one matrix recovery problem. Section III presents the non-identifiability results and Section IV concludes the paper. Our approach leads to the following novelties.

1) We state a parametric representation of a subset of the rank two null space of the linear convolution map that agrees with the true rank two null space in DoF. Exploiting this representation, we explicitly demonstrate the non-identifiable nature of unconstrained blind deconvolution by constructing adversarial signal pairs for even model orders m and n. In fact, this approach applies generically to other problems (like dictionary learning [12]) that admit a simple characterization of the null space after lifting. Further, characterizing the rank two null space is useful for the study of scaling laws in bilinear inverse problems [13].

2) We show that sparsity alone is not sufficient to ensure identifiability, even in the presence of perfect model order and sparse support information. In fact, given a support set on each input vector, one can find input instances of the problem that are unidentifiable.

B. Related Work

Most research on blind system and channel identification [3]–[5] has focused on single-input-multiple-output (SIMO) and multiple-input-multiple-output (MIMO) systems, also known as blind multi-channel finite-impulse-response (FIR) estimation.
Identifiability and successful recovery of the multiple channel vectors critically depend on the diversity across the channels, either stochastic (cyclostationary second order statistics) [4], [14] or deterministic (no common zero across all channels) [5], [15]. As pointed out in [14], such diversity is not available in the single-input-single-output (SISO) setup (thus making it the most challenging), which is our primary interest because of its equivalence to the blind deconvolution problem. To the best of our knowledge, blind deconvolution was first cast as a rank one matrix recovery problem in [16]; we adopted this framework in our earlier works [17], [18] on the characterization of identifiability in general bilinear inverse problems. Herein, we specifically consider the blind deconvolution problem, which
is an instance of a bilinear inverse problem, and hence we are able to derive much stronger results than [17]. Further, [16] focused mostly on the algorithmic aspects of the deconvolution problem and did not explicitly address identifiability. The follow-up paper [9] does (implicitly) address identifiability (through a study of recoverability by convex programming) but assumes knowledge of the support of the sparse signal. A promising identifiability analysis was proposed in [19], leveraging results from [12] on matrix factorization for sparse dictionary learning using the ℓ1 norm and the ℓp quasi-norm for 0 < p < 1. Their approach and setup differ from ours in two important respects. Firstly, [19] deals with the SIMO setup while we are interested in the SISO setup. Secondly, [19] analyzes identifiability as a local optimum of a non-convex ℓ1 (or ℓp for 0 < p < 1) optimization and hence is heavily dependent on the algorithmic formulation, whereas we consider the solution as the local/global optimum of the ℓ0 optimization problem, and our impossibility results are information theoretic in nature and hence hold regardless of the algorithmic formulation. We emphasize that the constrained ℓ1 optimization formulation in [19] is non-convex and therefore does not imply the existence of provably correct efficient recovery algorithms, despite identifiability of the channel. Nonetheless, it would be interesting to try to extend their approach to the SISO setup and compare with our results. A closely related problem to blind deconvolution is the Fourier phase retrieval problem [20], [21], where a signal has to be reconstructed from its autocorrelation function. This is clearly a special case of the blind deconvolution problem with far fewer DoF, and it allows identifiability and tractable recovery with a sparsity prior on the signal [20].
A second important difference is that, after lifting [22], the Fourier phase retrieval problem has one linear constraint involving a positive semidefinite matrix. This feature is known to be helpful in the conditioning of the problem and in the development of recovery algorithms [23]. While the blind deconvolution problem does not enjoy the same advantage, this seems to be a good avenue to explore if additional constraints are allowed.

C. Notation

By default, vectors are column vectors. Lowercase and uppercase boldface letters respectively denote column vectors (e.g. z) and matrices (e.g. A). The MATLAB® indexing rules are used to denote parts of a vector/matrix (e.g. A(2:3, 4:6) denotes the sub-matrix of A formed by the rows {2, 3} and columns {4, 5, 6}). The all zero vector/matrix is denoted by 0 (dimension dictated by usage context). Special sets are denoted by uppercase blackboard bold font (e.g. R for real numbers). Other sets are denoted by uppercase calligraphic font (e.g. S). Linear operators on matrices are denoted by uppercase script font (e.g. S).

II. SYSTEM MODEL

A. The Blind Deconvolution Problem

We shall consider the noiseless convolution system model

z = x ⋆ y,    (1)

where z is the vector of observations, ⋆ : R^m × R^n → R^(m+n−1) denotes the linear convolution map, and (x, y) denotes the pair of unknown signals with a given domain restriction (x, y) ∈ K. We are interested in solving for the vectors x and y from the noiseless observation z in (1). The blind linear deconvolution problem corresponding to (1) is represented by the feasibility problem

find (x, y)
subject to x ⋆ y = z,    (P1)
    (x, y) ∈ K.
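To make the convolution model in (1) concrete, the following sketch (our own illustrative code, not from the paper) computes the linear convolution map and exhibits the inherent scaling ambiguity that makes identifiability meaningful only up to a scalar:

```python
# Linear convolution z = x * y as in (1): z(k) = sum_i x(i) y(k - i).
# Illustrative sketch; the function name `conv` is ours.

def conv(x, y):
    m, n = len(x), len(y)
    z = [0.0] * (m + n - 1)          # z lives in R^{m+n-1}
    for i in range(m):
        for j in range(n):
            z[i + j] += x[i] * y[j]
    return z

x = [1.0, 2.0, -1.0]                 # m = 3
y = [3.0, 0.0, 1.0, 2.0]             # n = 4
z = conv(x, y)
assert len(z) == len(x) + len(y) - 1

# Inherent scaling ambiguity: (a*x, y/a) convolves to the same z,
# which is why identifiability is only demanded up to a scalar.
a = 2.5
z2 = conv([a * xi for xi in x], [yi / a for yi in y])
assert all(abs(p - q) < 1e-12 for p, q in zip(z, z2))
```

This ambiguity is exactly what Definition 1 factors out via the scalar α.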
We are interested in whether the pair (x, y) can be uniquely identified in a meaningful sense. We assume that the lengths (or model orders) m and n of the vectors x and y, respectively, are fixed and known a priori. Notice that the deconvolution problem (P1) has an inherent scaling ambiguity. Thus, we use the following definition of identifiability.

Definition 1 (Identifiability). A vector pair (x, y) ∈ K ⊆ R^m × R^n is identifiable w.r.t. (with respect to) the linear convolution map ⋆ if, for every (x′, y′) ∈ K ⊆ R^m × R^n satisfying x ⋆ y = x′ ⋆ y′, there exists α ≠ 0 such that (x′, y′) = (αx, (1/α)y).

B. Lifting

We use the 'lifting' technique from optimization [22] to rewrite Problem (P1) as a rank minimization problem subject to linear equality constraints [16], [17], which is more amenable to an identifiability analysis:

minimize_W  rank(W)
subject to  S(W) = z,    (P2)
    W ∈ K′,

where K′ ⊆ R^(m×n) is any set satisfying

K′ ∩ {W ∈ R^(m×n) | rank(W) ≤ 1} = {xy^T | (x, y) ∈ K},    (2)

and S : R^(m×n) → R^(m+n−1) is a linear operator that can be deterministically constructed from the linear convolution map. We shall refer to S(·) as the lifted linear convolution map. Specifically, S(·) is the unique linear operator that satisfies

S(xy^T) = x ⋆ y,  ∀(x, y) ∈ R^m × R^n.    (3)

By construction, the optimal solution to Problem (P2) is a rank one matrix W_opt, and its singular value decomposition W_opt = σuv^T yields a solution (x, y)_opt = (√σ u, √σ v) to Problem (P1). The proof of equivalence of Problems (P1) and (P2) is in [13]. Our results in Section III shall be based on an analysis of Problem (P2).

Remark 1. It is well known that a linear operation can be decomposed into a set of inner product operations that collectively define the linear operation. The lifted linear convolution map S(·) can be decomposed into a functionally equivalent set of m + n − 1 matrices using coordinate projections. Let φ_j : R^(m+n−1) → R denote the j-th coordinate projection operator and S_j ∈ R^(m×n) denote the j-th matrix in the decomposition for 1 ≤ j ≤ m + n − 1. Then we have the
[Fig. 1. Lifted matrices S_k ∈ {0, 1}^(3×4), 1 ≤ k ≤ 6, for the linear convolution map with m = 3 and n = 4; the 0/1 entries of S_1, …, S_6 are not reproduced here.]
relation

φ_j ∘ S(·) = ⟨S_j, ·⟩,  ∀ 1 ≤ j ≤ m + n − 1,    (4)

where ⟨·, ·⟩ denotes the trace inner product in the space of matrices R^(m×n). The matrices S_j, 1 ≤ j ≤ m + n − 1, are Hankel matrices in {0, 1}^(m×n) given by

S_j(k, l) = 1 if k + l = j + 1, and 0 otherwise,    (5)

for 1 ≤ k ≤ m and 1 ≤ l ≤ n. Fig. 1 illustrates these matrices forming the decomposition corresponding to the lifted linear operator S(·) for (m, n) = (3, 4).

III. MAIN RESULTS

Let S(·) be the lifted linear convolution map. We denote the rank two null space of S(·) by N(S, 2), defined by

N(S, 2) ≜ {X ∈ R^(m×n) | rank(X) ≤ 2, S(X) = 0}.    (6)

We demonstrate some parametrized subsets of N(S, 2) in Section III-A, followed by some pathological cases of non-identifiability in Section III-B. Section III-C states a non-identifiability result for non-sparse blind deconvolution with proof. The proof strategy yields valuable insight for the sparsity constrained blind deconvolution result that we present in Section III-D. Throughout this section, we assume that K represents a (not necessarily convex) cone, i.e. ∀(x, y) ∈ K, (αx, αy) ∈ K for every α ≠ 0.

A. Parametrized Subsets of the Rank Two Null Space

Lemma 1. Let X ∈ R^(m×n) admit a factorization of the form

X = [u 0; 0 −u] [0 v^T; v^T 0],    (7)

for some v ∈ R^(n−1) and u ∈ R^(m−1). Then X ∈ N(S, 2).

Proof: Let X admit a factorization as in (7). Then,

X = [0 uv^T; 0 0^T] + [0^T 0; −uv^T 0],    (8)

where the first and second summands are denoted X1 and X2, respectively. We see that X2 in (8) is obtained by shifting the elements of X1 down by one unit along the anti-diagonals and then flipping the sign of each element. Since the convolution operator S(·) sums elements along the anti-diagonals (see Fig. 1 for an illustration), the representation of X in (8) immediately implies that S(X) = 0. Since (7) implies that rank(X) ≤ 2, we have X ∈ N(S, 2).

Remark 2. Notice that S(·) maps R^(m×n) to R^(m+n−1). An m × n rank two matrix has 2(m + n − 2) DoF, so that N(S, 2) has at most (2m + 2n − 4) − (m + n − 1) = (m + n − 3) DoF. Since the representation on the r.h.s. of (7) also has (m + n − 3) DoF, our parametrization is tight up to DoF. However, the converse of Lemma 1 is false in general, as stated in Proposition 1 below without proof (see the extended version [24] for the proof). This means that proving positive identifiability results for blind deconvolution using parametric representations of N(S, 2) is not as straightforward.

Proposition 1. For m, n ≥ 3, there exists X ∈ N(S, 2) that is not representable in the form of (7).

B. Pathological Cases

Let x ∈ R^m, y ∈ R^n, and K = R^m × R^n. If x(m) = y(1) = 0, then x^T = [u^T, 0] and y^T = [0, v^T] for u = x(1 : m − 1) and v = y(2 : n). Lemma 1 and the representation in (7) imply that, under the linear convolution map, (x, y) is indistinguishable from (x∗, y∗), where x∗^T = [0, u^T] and y∗^T = [v^T, 0], since X = [x, −x∗][y, y∗]^T ∈ N(S, 2). As x and x∗ are linearly independent, (x, y) is unidentifiable by Definition 1. Thus, for identifiability of (x, y) it is necessary that at least one of x(m) or y(1) is nonzero.

Similarly, if x(1) = y(n) = 0, then invoking Lemma 1 and the representation (7) implies that (x, y) is indistinguishable (under the convolution map) from (x∗, y∗), where x∗^T = [x(2 : m)^T, 0] and y∗^T = [0, y(1 : n − 1)^T]. By the linear independence of x and x∗, (x, y) is unidentifiable. Thus, for identifiability it is necessary that at least one of x(1) or y(n) is nonzero.

The examples above highlight that the model orders m and n play a critical role for identifiability in the SISO setup, i.e. overestimating the model orders is fatal for blind deconvolution. This is a well known fact in the SIMO setup [25].
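The Hankel decomposition (4)–(5), property (3), Lemma 1, and the shift ambiguity of Section III-B are all easy to verify numerically. The following sketch is our own illustrative code (plain Python, helper names ours):

```python
# The lifted map S sends W in R^{m x n} to its vector of anti-diagonal sums,
# so that S(x y^T) = x * y; component j is <S_j, W> for the Hankel S_j of (5).

def conv(x, y):
    z = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            z[i + j] += xi * yj
    return z

def lifted_S(W):
    m, n = len(W), len(W[0])
    out = [0.0] * (m + n - 1)
    for k in range(m):
        for l in range(n):
            out[k + l] += W[k][l]    # S_j(k, l) = 1 iff k + l = j + 1 (1-indexed)
    return out

m, n = 3, 4
x = [1.0, -2.0, 3.0]
y = [2.0, 1.0, 0.0, -1.0]
W = [[xi * yj for yj in y] for xi in x]          # rank-one lift W = x y^T
assert lifted_S(W) == conv(x, y)                 # property (3)

# Lemma 1: X = [u;0][0,v^T] - [0;u][v^T,0] lies in the rank two null space.
u = [1.0, -1.0]                                  # u in R^{m-1}
v = [2.0, 0.5, -3.0]                             # v in R^{n-1}
X = [[0.0] * n for _ in range(m)]
for i in range(m - 1):
    for j in range(n - 1):
        X[i][j + 1] += u[i] * v[j]               # X1: u v^T shifted right
        X[i + 1][j] -= u[i] * v[j]               # X2: shifted down, sign flipped
assert all(abs(s) < 1e-12 for s in lifted_S(X))  # S(X) = 0

# Pathological case of Section III-B: trailing/leading zeros give a pure
# shift ambiguity, e.g. ([u,0], [0,v]) and ([0,u], [v,0]) convolve identically.
assert conv(u + [0.0], [0.0] + v) == conv([0.0] + u, v + [0.0])
```

The cancellation in `lifted_S(X)` is exactly the anti-diagonal argument in the proof of Lemma 1.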
C. Non-sparse Blind Deconvolution

We will assume that x(1) ≠ 0 in order to examine non-pathological examples (see Section III-B); the treatment of the other case, viz. y(n) ≠ 0, is ideologically identical. Since Definition 1 allows for unspecified scalar multiplicative constants, we can further assume that x(1) = 1.

Theorem 1. Let K = R^m × R^n with m ≥ 4 and n ≥ 4 being even integers and x(1) = 1. Then (x, y) ∈ K is unidentifiable almost everywhere w.r.t. any continuous n-dimensional measure over y with no point masses (e.g. the n-dimensional Lebesgue product measure over y).
Proof: The idea is to construct a vector u ∈ R^(m−1) such that x^T can be written as a linear combination of the vectors [0, u^T] and [u^T, 0].

If x(m) = 0, then we simply set u = x(1 : m − 1). Otherwise, x(m) ≠ 0, which implies u(m − 1) ≠ 0. We select u(1) = 1 and let θ represent a parameter such that

tan θ = −x(m)/u(m − 1).    (9)

By the nonzero assumption on x(m), tan θ ∉ {0, ±∞}, and we solve the following system of equations for u(2 : m − 1):

x(2 : m − 1) = u(2 : m − 1) − tan θ · u(1 : m − 2),    (10)

i.e. x(j) = u(j) − tan θ · u(j − 1) for 2 ≤ j ≤ m − 1. It is easy to see, by reverse substituting for tan θ from (9), that one ends up with an order m − 1 polynomial equation in u(m − 1) (with a nonzero constant term) which necessarily has a nonzero real root if m is even. Thus, up to an immaterial scalar (cf. Definition 1), one can construct a real vector u such that

x = cos θ [u^T, 0]^T − sin θ [0, u^T]^T.    (11)

By a similar argument, a vector v ∈ R^(n−1) can be constructed such that y has the representation

y = cos φ [v^T, 0]^T − sin φ [0, v^T]^T.    (12)

If y(1) ≠ 0, then the construction proceeds exactly as in the case of x; else we have y(1) = 0 and we set v = y(2 : n). Since x and y are independently parametrized, φ − θ ∉ {sπ | s ∈ Z} holds almost everywhere by the absence of point masses for the measure over y.

Once we have u and v, we consider the decomposition

xy^T − x′y′^T = sin(θ − φ) [u 0; 0 −u] [0 v^T; v^T 0],    (13)

where the pair (x′, y′), defined in (14) below, is obtained by exchanging the roles of θ and φ. Setting

x′ = cos φ [u^T, 0]^T − sin φ [0, u^T]^T,  y′ = cos θ [v^T, 0]^T − sin θ [0, v^T]^T,    (14)

and observing that the r.h.s. of (13) is of the form (7) and hence lies in N(S, 2) by Lemma 1, we conclude that the pairs (x, y) and (x′, y′) produce the same convolved output. Since x and x′ are linearly independent if φ − θ ∉ {sπ | s ∈ Z}, (x, y) is unidentifiable by Definition 1.

Notice that Theorem 1 states a much stronger (almost everywhere) unidentifiability result than any counterpart in the literature (e.g. [19]), which only asserts the existence of some unidentifiable input. The requirement that m and n be even positive integers is due to our insistence on this strong result. A weaker version asserting the existence of an unidentifiable signal pair follows as a special case of Theorem 2 in Section III-D on setting Λ1 = Λ2 = ∅ (see Theorem 2 below), and indeed does not require m or n to be even integers. This agrees with the absence of any conditions on the model orders in [19].

We note that the statement of Theorem 1 is asymmetric w.r.t. x and y, since it applies to every x but not to every y (only almost every y). Clearly, it is possible to state a slightly weaker symmetric version of Theorem 1 that holds for almost every x and y.

Remark 2 and Theorem 1 suggest that O(m) additional constraints on x (a semi-blind deconvolution problem) in Problem (P1) are (almost) necessary for identifiability. Theorem 2 in Section III-D below supports this observation for sparse blind deconvolution, where the sparsity prior introduces additional constraints into Problem (P1). An example for second hop channel estimation is discussed in [26], where m − 1 additional subspace constraints are imposed by system design; this not only leads to identifiability but also to efficient and provably correct recovery algorithms.

D. Sparse Blind Deconvolution

We now prove a negative result about using a sparsity prior in the canonical basis. This is an instance of a more general result for subspace based constraints that we defer to the extended version [27] of this paper, due to space limitations. We define the sparse domains

K1(Λ1) = {x ∈ R^m | x(1) ≠ 0, x(m) ≠ 0, x(Λ1) = 0},    (15)
K2(Λ2) = {y ∈ R^n | y(1) ≠ 0, y(n) ≠ 0, y(Λ2) = 0},    (16)

for any m ≥ 5, n ≥ 5 and index sets Λ1 ⊆ {3, 4, …, m − 2}, Λ2 ⊆ {3, 4, …, n − 2}, assuming sparsity in the canonical basis for both x and y. Note that Λ1 and Λ2 denote sets of zero indices, so that a larger cardinality of Λ1 or Λ2 implies sparser problem instances. We have intentionally imposed x(1) ≠ 0, x(m) ≠ 0, y(1) ≠ 0, and y(n) ≠ 0 to examine non-pathological examples (see Section III-B).

Theorem 2. For any given index sets Λ1 ⊆ {3, 4, …, m − 2} and Λ2 ⊆ {3, 4, …, n − 2}, let K = K1(Λ1) × K2(Λ2) represent the feasible cone in Problem (P1). Then there exists an unidentifiable pair (x, y) ∈ K.
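As a numeric sanity check of the adversarial construction (our own illustrative script, with the equations in the hedged form stated here; the helper name `pair` is ours): exchanging the roles of θ and φ in (11)–(12) produces a distinct pair with an identical convolved output, and forcing consecutive zeros in u zeroes the corresponding entry of x, as used in the proof of Theorem 2.

```python
# x = cos(t)[u;0] - sin(t)[0;u], y = cos(p)[v;0] - sin(p)[0;v]; swapping the
# angles t and p gives (x', y') with the same convolution, cf. (13)-(14).
from math import cos, sin

def conv(x, y):
    z = [0.0] * (len(x) + len(y) - 1)
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            z[i + j] += xi * yj
    return z

def pair(u, v, t, p):
    x = [cos(t) * a for a in u] + [0.0]
    for i, a in enumerate(u):
        x[i + 1] -= sin(t) * a           # x = cos(t)[u;0] - sin(t)[0;u]
    y = [cos(p) * b for b in v] + [0.0]
    for j, b in enumerate(v):
        y[j + 1] -= sin(p) * b           # y = cos(p)[v;0] - sin(p)[0;v]
    return x, y

u = [1.0, 0.3, -2.0, 0.7]                # arbitrary u in R^{m-1}
v = [1.0, -0.4, 1.5]                     # arbitrary v in R^{n-1}
t, p = 0.4, 1.1                          # p - t not a multiple of pi
x, y = pair(u, v, t, p)
xp, yp = pair(u, v, p, t)                # angles swapped, cf. (14)
z1, z2 = conv(x, y), conv(xp, yp)
assert max(abs(a - b) for a, b in zip(z1, z2)) < 1e-12   # same output
# x and x' are not proportional, so (x, y) is unidentifiable per Definition 1:
assert abs(x[0] * xp[1] - x[1] * xp[0]) > 1e-9

# Theorem 2 flavor: u(j-1) = u(j) = 0 forces x(j) = 0 for every angle t,
# so the ambiguity survives a canonical-basis sparsity constraint.
u_sp = [1.0, 0.0, 0.0, 0.7, 2.0]         # u(2) = u(3) = 0 -> x(3) = 0
x_sp, _ = pair(u_sp, v, t, p)
assert x_sp[2] == 0.0
```

The first assertion is a numeric instance of (13) lying in N(S, 2); the last is the zero-pattern argument in the proof of Theorem 2.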
Proof: The proof relies on the use of (10) as a generative model for x, to establish a representation like (11). To do so, we shall construct a vector u ∈ R^(m−1) such that any vector x ∈ R^m admitting the representation in (11) satisfies x(Λ1) = 0. By (11), x(m) ≠ 0 implies u(m − 1) ≠ 0, and setting x(1) = 1 implies u(1) = 1. From (9), tan θ ∉ {0, ±∞}. For every j ∈ Λ1 we set u(j − 1) = 0. Since x(j) = 0, we get u(j) = 0 for (10) to be consistent. We assign arbitrary nonzero values to all other elements of u(2 : m − 1) that were not explicitly set to zero. Since u(j − 1) = u(j) = 0 for every j ∈ Λ1, it is clear that any vector x that is representable as in (11) satisfies x(Λ1) = 0.

Clearly, a similar argument for the vector y ∈ R^n yields a vector v ∈ R^(n−1) such that v(j − 1) = v(j) = 0 for every j ∈ Λ2, and any vector y representable as in (12) satisfies y(Λ2) = 0. Next, we select any values of θ and φ satisfying φ − θ ∉ {sπ | s ∈ Z}, tan θ ∉ {0, ±∞} and tan φ ∉ {0, ±∞}. By (13), the pairs (x, y) (generated by (11) and (12)) and (x′, y′) (generated by (14)) are indistinguishable under the linear convolution map. Clearly, (x, y), (x′, y′) ∈ K and x and x′ are linearly independent since φ − θ ∉ {sπ | s ∈ Z}. Thus, (x, y) ∈ K is unidentifiable by Definition 1.

Notice that the result of Theorem 2 is in stark contrast to what is usually seen in compressed sensing [28] or low-rank matrix recovery [29], i.e. if the sparsity of the signal (or the rank of the matrix) is small enough, then identifiability is a non-issue. More generally, bilinear observation models display different characteristics than linear observation models w.r.t. conic priors like sparsity. In this particular case, the difference arises because the lifted convolution map S(·) has a large rank two null space.

It is instructive to compare the assumptions and implications of Theorems 1 and 2. Since the feasible set K in Theorem 1 is essentially unstructured, the set of unidentifiable inputs is quite large (almost every input is unidentifiable). In contrast, the feasible set in Theorem 2 has a lot more structure, and hence it is intuitive to expect that the set of unidentifiable inputs should be much smaller. This is indeed true and is established by a stronger version of Theorem 2 in the extended version [27] of the present paper.

IV. CONCLUSION

This paper develops a foundation for characterizing identifiability in the sparse and non-sparse blind deconvolution problems via a lifting method and a null space analysis. We showed the insufficiency of a simple sparsity assumption in the canonical basis, a surprising negative result. The key technique is the use of a parametrized subset of the rank two null space of the convolution map, a result of independent interest.
An identifiability analysis for different sparsity bases is a topic of ongoing research.

ACKNOWLEDGMENT

This work has been funded in part by the following grants and organizations: ONR N00014-09-1-0700, AFOSR FA9550-12-1-0215, NSF CNS-0832186, NSF CNS-1213128, and NSF CCF-1117896. The authors thank the anonymous reviewers for their thoughtful and constructive remarks.

REFERENCES

[1] D. Kundur and D. Hatzinakos, “Blind Image Deconvolution,” IEEE Signal Process. Mag., vol. 13, no. 3, pp. 43–64, May 1996.
[2] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding Blind Deconvolution Algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2354–2367, Dec. 2011.
[3] K. Abed-Meraim, W. Qiu, and Y. Hua, “Blind System Identification,” Proc. IEEE, vol. 85, no. 8, pp. 1310–1322, Aug. 1997.
[4] C. R. Johnson, Jr., P. Schniter, T. J. Endres, J. D. Behm, D. R. Brown, and R. A. Casas, “Blind Equalization Using the Constant Modulus Criterion: A Review,” Proc. IEEE, vol. 86, no. 10, pp. 1927–1950, Oct. 1998.
[5] H. Liu, G. Xu, L. Tong, and T. Kailath, “Recent developments in blind channel equalization: From cyclostationarity to subspaces,” Signal Processing, vol. 50, no. 1, pp. 83–99, 1996, special issue on Subspace Methods, Part I: Array Signal Processing and Subspace Computations.
[6] D. L. Donoho, “Sparse components of images and optimal atomic decompositions,” Constr. Approx., vol. 17, no. 3, pp. 353–382, 2001.
[7] C. R. Berger, S. Zhou, J. C. Preisig, and P. Willett, “Sparse Channel Estimation for Multicarrier Underwater Acoustic Communication: From Subspace Methods to Compressed Sensing,” IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1708–1721, Mar. 2010.
[8] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan, “Large-Scale Parallel Collaborative Filtering for the Netflix Prize,” in Algorithmic Aspects in Information and Management, ser. Lecture Notes in Computer Science, R. Fleischer and J. Xu, Eds. Springer Berlin Heidelberg, 2008, vol. 5034, pp. 337–348.
[9] A. Ahmed, B. Recht, and J. Romberg, “Blind deconvolution using convex programming,” IEEE Trans. Inf. Theory, vol. 60, no. 3, pp. 1711–1732, 2014.
[10] K. Herrity, R. Raich, and A. O. Hero, III, “Blind Reconstruction of Sparse Images with Unknown Point Spread Function,” Computational Imaging VI, vol. 6814, no. 1, p. 68140K, 2008.
[11] C. Hegde and R. G. Baraniuk, “Sampling and Recovery of Pulse Streams,” IEEE Trans. Signal Process., vol. 59, no. 4, pp. 1505–1517, 2011.
[12] R. Gribonval and K. Schnass, “Dictionary identification—sparse matrix factorization via ℓ1-minimization,” IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3523–3539, 2010.
[13] S. Choudhary and U. Mitra, “Identifiability Scaling Laws in Bilinear Inverse Problems,” ArXiv e-prints, vol. abs/1402.2637, Feb. 2014. [Online]. Available: http://arxiv.org/abs/1402.2637
[14] O. Grellier, P. Comon, B. Mourrain, and P. Trébuchet, “Analytical blind channel identification,” IEEE Trans. Signal Process., vol. 50, no. 9, pp. 2196–2207, 2002.
[15] E. de Carvalho and D. T. M. Slock, “Blind and semi-blind FIR multichannel estimation: (global) identifiability conditions,” IEEE Trans. Signal Process., vol. 52, no. 4, pp. 1053–1064, 2004.
[16] M. S. Asif, W. Mantzel, and J. K.
Romberg, “Random Channel Coding and Blind Deconvolution,” in 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sep. 2009, pp. 1021–1025.
[17] S. Choudhary and U. Mitra, “On Identifiability in Bilinear Inverse Problems,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, May 2013, pp. 4325–4329.
[18] ——, “Identifiability Bounds for Bilinear Inverse Problems,” in 47th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, USA, Nov. 2013, pp. 1677–1681.
[19] A. Kammoun, A. Aissa El Bey, K. Abed-Meraim, and S. Affes, “Robustness of blind subspace based techniques using ℓp quasi-norms,” in 2010 IEEE Eleventh International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2010, pp. 1–5.
[20] K. Jaganathan, S. Oymak, and B. Hassibi, “Sparse Phase Retrieval: Convex Algorithms and Limitations,” in 2013 IEEE International Symposium on Information Theory Proceedings (ISIT), Jul. 2013, pp. 1022–1026.
[21] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” SIAM J. Imaging Sci., vol. 6, no. 1, pp. 199–225, 2013.
[22] E. Balas, “Projection, lifting and extended formulation in integer and combinatorial optimization,” Ann. Oper. Res., vol. 140, pp. 125–161, 2005.
[23] A. Beck, “Convexity properties associated with nonconvex quadratic matrix functions and applications to quadratic programming,” J. Optim. Theory Appl., vol. 142, no. 1, pp. 1–29, 2009.
[24] S. Choudhary and U. Mitra, “Fundamental Limits of Blind Deconvolution Part I: Ambiguity Kernel,” ArXiv e-prints, vol. abs/1411.3810, Nov. 2014. [Online]. Available: http://arxiv.org/abs/1411.3810
[25] A. Liavas, P. Regalia, and J.-P. Delmas, “Blind channel approximation: effective channel order determination,” IEEE Trans. Signal Process., vol. 47, no. 12, pp. 3336–3344, 1999.
[26] S. Choudhary and U.
Mitra, “Sparse recovery from convolved output in underwater acoustic relay networks,” in 2012 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Hollywood, USA, Dec. 2012, pp. 1–8.
[27] ——, “Fundamental Limits of Blind Deconvolution Part II: Sparsity-Ambiguity Trade-offs,” ArXiv e-prints, vol. abs/1503.03184, Mar. 2015. [Online]. Available: http://arxiv.org/abs/1503.03184
[28] D. L. Donoho, “Compressed Sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[29] D. Gross, “Recovering Low-Rank Matrices From Few Coefficients in Any Basis,” IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1548–1566, 2011.