The Rate-Distortion Function and Error Exponent of Sparse Regression Codes with Optimal Encoding

Ramji Venkataramanan∗
University of Cambridge

Sekhar Tatikonda
Yale University

December 21, 2015
Abstract

This paper studies the performance of Sparse Regression Codes for lossy compression with the squared-error distortion criterion. In a sparse regression code, codewords are linear combinations of subsets of columns of a design matrix. It is shown that with minimum-distance encoding, sparse regression codes achieve the Shannon rate-distortion function for i.i.d. Gaussian sources R*(D) as well as the optimal error exponent. This completes a previous result which showed that R*(D) and the optimal exponent were achievable for distortions below a certain threshold. The proof of the rate-distortion result is based on the second moment method, a popular technique to show that a non-negative random variable X is strictly positive with high probability. We first identify the reason behind the failure of the standard second moment method for certain distortions, and illustrate the different failure modes via a stylized example. We then use a refinement of the second moment method to show that R*(D) is achievable for all distortion values. Finally, the refinement technique is applied to Suen's correlation inequality to prove the achievability of the optimal Gaussian error exponent.
1 Introduction
Developing practical codes for lossy compression at rates approaching Shannon's rate-distortion bound has long been an important goal in information theory. A practical compression code requires a codebook with low storage complexity as well as encoding and decoding with low computational complexity. A class of codes called Sparse Superposition Codes or Sparse Regression Codes (SPARCs) has been recently proposed for lossy compression with the squared-error distortion criterion [1, 2]. These codes were introduced by Barron and Joseph for communication over the AWGN channel [3, 4]. The codewords in a SPARC are linear combinations of columns of a design matrix A. The storage complexity of the code is proportional to the size of the matrix, which is polynomial in the block length n. A computationally efficient encoder for compression with SPARCs was proposed in [2] and shown to achieve rates approaching the Shannon rate-distortion function for Gaussian sources.

In this paper, we study the compression performance of SPARCs with the squared-error distortion criterion under optimal (minimum-distance) encoding. We show that for any ergodic source with variance σ², SPARCs with optimal encoding achieve a rate-distortion trade-off given by R*(D) := ½ log(σ²/D). Note that R*(D) is the optimal rate-distortion function for an i.i.d. Gaussian source with variance σ². The performance of SPARCs with optimal encoding was first studied

∗ This work was partially supported by a Marie Curie Career Integration Grant and NSF Grant CCF-1217023. The material in this paper was presented in part at the 2014 IEEE International Symposium on Information Theory.
[Figure 1 appears here: a plot of rate (bits) versus D/σ², showing the curves 0.5 log(σ²/D) and 1 − D/σ².]
Figure 1: The solid line shows the previous achievable rate R0 (D), given in (1). The rate-distortion function R∗ (D) is shown in dashed lines. It coincides with R0 (D) for D/σ 2 ≤ x∗ , where x∗ ≈ 0.203.
in [1], where it was shown that for any distortion-level D, rates greater than

R₀(D) := max{ ½ log(σ²/D), 1 − D/σ² }    (1)

are achievable with the optimal Gaussian error exponent. The rate R₀(D) in (1) is equal to R*(D) when D/σ² ≤ x*, but is strictly larger than R*(D) when D/σ² > x*, where x* ≈ 0.203; see Fig. 1. In this paper, we complete the result of [1] by proving that sparse regression codes achieve the Gaussian rate-distortion function R*(D) for all distortions D ∈ (0, σ²). We also show that these codes attain the optimal error exponent for i.i.d. Gaussian sources at all rates. Though minimum-distance encoding is not practically feasible (indeed, the main motivation for sparse regression codes is that they enable low-complexity encoding and decoding), characterizing the rate-distortion function and error exponent under optimal encoding establishes a benchmark against which the performance of various computationally efficient encoding schemes can be compared. Further, the results of this paper and [1] together show that SPARCs retain the good covering properties of the i.i.d. Gaussian random codebook, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length.

Let us specify some notation before proceeding. Upper-case letters are used to denote random variables, and lower-case letters their realizations. Bold-face letters denote random vectors and matrices. All vectors have length n. The source sequence is S := (S₁, . . . , Sₙ), and the reconstruction sequence is Ŝ := (Ŝ₁, . . . , Ŝₙ). ‖x‖ denotes the ℓ₂-norm of the vector x, and |x| = ‖x‖/√n is its normalized version. N(µ, σ²) denotes the Gaussian distribution with mean µ and variance σ². Logarithms are with base e and rate is measured in nats, unless otherwise mentioned. The notation aₙ ∼ bₙ means that lim_{n→∞} (1/n) log aₙ = lim_{n→∞} (1/n) log bₙ, and w.h.p. is used to abbreviate the phrase 'with high probability'. We will use κ, κ₁, κ₂ to denote generic positive constants whose exact values are not needed.
The codewords in a SPARC are of the form Aβ, where A is a pre-specified matrix and β is a sparse vector. The positions of non-zeros in β uniquely determine the codeword Aβ. The precise structure of β is described in Section 2. To show that a rate R can be achieved at distortion-level D, we need to show that with high probability at least one of the e^{nR} choices for β satisfies

|S − Aβ|² ≤ D.    (2)
If β satisfies (2), we call it a solution. Denoting the number of solutions by X, the goal is to show that X > 0 with high probability when R > R*(D). Analyzing the probability P(X > 0) is challenging because the codewords in a SPARC are dependent: codewords Aβ(1) and Aβ(2) will be dependent if β(1) and β(2) share common non-zero terms. To handle the dependence, we use the second moment method (second MoM), a technique commonly used to study properties of random graphs and random constraint satisfaction problems [5]. For any non-negative random variable X, the second MoM [6] lower bounds the probability of the event X > 0 as¹

P(X > 0) ≥ (EX)² / E[X²].    (3)

Therefore the second MoM succeeds if we can show that (EX)²/E[X²] → 1 as n → ∞. In [1], it is shown that the second MoM succeeds for R > R₀(D), but for R*(D) < R < R₀(D) we find that (EX)²/E[X²] → 0, so the second MoM fails. (R₀(D) is defined in (1).) From this result in [1], it is not clear whether the gap from R*(D) is due to an inherent weakness of the sparse regression codebook, or if it is just a limitation of the second MoM as a proof technique. In this paper, we demonstrate that it is the latter, and refine the second MoM to prove that all rates greater than R*(D) are achievable.

Our refinement of the second MoM is inspired by the work of Coja-Oghlan and Zdeborová [7] on finding sharp thresholds for two-coloring of random hypergraphs. The high-level idea is as follows. The key ratio (EX)²/E[X²] can be expressed as (EX)/E[X(β)], where X(β) denotes the total number of solutions conditioned on the event that a given β is a solution. (Recall that β is a solution if |S − Aβ|² ≤ D.) Thus when the second MoM fails, i.e., the ratio goes to zero, we have a situation where the expected number of solutions is much smaller than the expected number of solutions conditioned on the event that β is a solution. This happens because for any S, there are atypical realizations of the design matrix that yield a very large number of solutions. The total probability of these matrices is small enough that EX is not significantly affected by these realizations. However, conditioning on β being a solution increases the probability that the realized design matrix is one that yields an unusually large number of solutions. At low rates, the conditional probability of the design matrix being atypical is large enough to make E[X(β)] ≫ EX, causing the second MoM to fail.²

The key to rectifying the second MoM failure is to show that X(β) ≈ EX with high probability although E[X(β)] ≫ EX. We then apply the second MoM to count just the 'good' solutions, i.e., solutions β for which X(β) ≈ EX. This succeeds, letting us conclude that X > 0 with high probability. The idea of applying the second moment method to a random variable that counts just the 'good' solutions has been recently used to obtain improved thresholds for problems involving random discrete structures including random hypergraph 2-coloring [7], k-colorability of random graphs [8], and random k-SAT [9]. However, the key step of showing that a given solution is 'good' with

¹ (3) follows from the Cauchy-Schwarz inequality (E[XY])² ≤ E[X²]E[Y²] by substituting Y = 1{X>0}.
² This is similar to the inspection paradox in renewal processes.
[Figure 2 appears here: the matrix A partitioned into Sections 1, . . . , L of M columns each, and the vector β with one non-zero entry equal to c/√L in each section.]

Figure 2: A is an n × ML matrix and β is an ML × 1 binary vector. The positions of the non-zeros in β correspond to the gray columns of A, which combine to form the codeword Aβ.
high probability depends heavily on the geometry of the problem being considered. This step requires identifying a specific property of the random object being considered (e.g., SPARC design matrix, hypergraph, or boolean formula) that leads to a very large number of solutions in atypical realizations of the object. For example, in SPARC compression, the atypical realizations are design matrices with columns that are unusually well-aligned with the source sequence to be compressed; in random hypergraph 2-coloring, the atypical realizations are hypergraphs with an edge structure such that an unusually large number of vertices can take on either color [7].

The paper is structured as follows. The construction of sparse regression codes is reviewed in Section 2, and the main result is stated in Section 3. In Section 4, we set up the proof and show why the second MoM fails for R < (1 − D/ρ²). Since the proofs of the main theorems are technical, we motivate the main ideas with a stylized example in Section 4.3. The proofs of the main results are given in Section 5, with the proof of the main technical lemma given in Section 6.
2 SPARCs with Optimal Encoding
A sparse regression code is defined in terms of a design matrix A of dimension n × ML whose entries are i.i.d. N(0, 1). Here n is the block length, and M and L are integers whose values will be specified in terms of n and the rate R. As shown in Fig. 2, one can think of the matrix A as composed of L sections with M columns each. Each codeword is a linear combination of L columns, with one column from each section. Formally, a codeword can be expressed as Aβ, where β is an ML × 1 vector (β₁, . . . , β_{ML}) with the following property: there is exactly one non-zero βᵢ for 1 ≤ i ≤ M, one non-zero βᵢ for M + 1 ≤ i ≤ 2M, and so forth. The non-zero values of β are all set equal to c/√L, where c is a constant that will be specified later. Denote the set of all β's that satisfy this property by B_{M,L}.

Minimum-distance Encoder: This is defined by a mapping g : Rⁿ → B_{M,L}. Given the source sequence S, the encoder determines the β that produces the codeword closest in Euclidean distance, i.e.,

g(S) = argmin_{β ∈ B_{M,L}} ‖S − Aβ‖.
Decoder: This is a mapping h : B_{M,L} → Rⁿ. On receiving β ∈ B_{M,L} from the encoder, the decoder produces the reconstruction h(β) = Aβ.

Since there are M columns in each of the L sections, the total number of codewords is M^L. To obtain a compression rate of R nats/sample, we therefore need

M^L = e^{nR}.    (4)

For our constructions, we choose M = L^b for some b > 1, so that (4) implies

L log L = nR/b.    (5)

Thus L is Θ(n/log n), and the number of columns ML in the dictionary A is Θ((n/log n)^{b+1}), a polynomial in n.
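To make the construction concrete, the following is a minimal sketch (not from the paper) of a toy SPARC with an i.i.d. N(0, 1) design matrix and brute-force minimum-distance encoding. The parameter values and helper names are illustrative only, chosen so that enumerating all M^L codewords is feasible; they do not satisfy M = L^b.

```python
# Illustrative sketch (not from the paper): a toy SPARC codebook and
# brute-force minimum-distance encoding by enumerating all M^L codewords.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, L, M = 24, 3, 8          # block length, number of sections, columns per section
c = 1.0                     # non-zero entries of beta equal c / sqrt(L)
A = rng.standard_normal((n, L * M))

def codeword(A, picks, c, L):
    """Sum of one column per section, each scaled by c/sqrt(L)."""
    cols = [A[:, sec * M + j] for sec, j in enumerate(picks)]
    return (c / np.sqrt(L)) * np.sum(cols, axis=0)

def min_distance_encode(A, s, c, L, M):
    """Return the section-wise column indices minimizing ||s - A*beta||."""
    best, best_picks = np.inf, None
    for picks in itertools.product(range(M), repeat=L):
        d = np.sum((s - codeword(A, picks, c, L)) ** 2)
        if d < best:
            best, best_picks = d, picks
    return best_picks, best / n     # normalized squared-error distortion

s = rng.standard_normal(n)          # a unit-variance source sequence
picks, distortion = min_distance_encode(A, s, c, L, M)
print("chosen column per section:", picks)
print("per-sample distortion    :", round(distortion, 3))
```

At realistic sizes this enumeration is of course infeasible; the point here is only to show the codebook structure and what minimum-distance encoding means.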
3 Main Results
The probability of error at distortion-level D of a rate-distortion code Cn with block length n and encoder and decoder mappings g, h is

Pe(Cn, D) = P( |S − h(g(S))|² > D ).    (6)

For a SPARC generated as described in Section 2, the probability measure in (6) is with respect to the random source sequence S and the random design matrix A.
3.1 Rate-Distortion Trade-off of SPARC
Definition 3.1. A rate R is achievable at distortion level D if there exists a sequence of SPARCs {Cn}_{n=1,2,...} such that lim_{n→∞} Pe(Cn, D) = 0, where for all n, Cn is a rate R code defined by an n × Ln Mn design matrix whose parameter Ln satisfies (5) with a fixed b and Mn = Ln^b.

Theorem 1. Let S be drawn from an ergodic source with mean 0 and variance σ². For D ∈ (0, σ²), let R*(D) = ½ log(σ²/D). Fix R > R*(D) and b > bmin(σ²/D), where

bmin(x) = 20R x⁴ / [ (1 + x⁻¹)² (1 − x⁻¹)² ( −1 + ( 1 + (√x/(x−1)) (R − ½(1 − x⁻¹)) )^{1/2} )² ],    1 < x ≤ e^{2R}.    (7)

Then there exists a sequence of rate R SPARCs {Cn}_{n=1,2,...} for which lim_{n→∞} Pe(Cn, D) = 0, where Cn is defined by an n × Ln Mn design matrix with Mn = Ln^b and Ln determined by (5).

Remark: Though the theorem is valid for all D ∈ (0, σ²), it is most relevant for the case D/σ² > x*, where x* ≈ 0.203 is the solution to the equation (1 − x) + ½ log x = 0. For D/σ² ≤ x*, [1, Theorem 1] already guarantees that the optimal rate-distortion function can be achieved, with a smaller value of b than that required by the theorem above.
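As a quick numerical illustration (not part of the paper's argument), the sketch below solves (1 − x) + ½ log x = 0 for x* by bisection and evaluates bmin(x) using the expression printed in (7); the source variance, distortion, and rate used are arbitrary example values.

```python
# Illustrative sketch: solve (1 - x) + 0.5*log(x) = 0 for x*, and evaluate
# b_min(x) as printed in (7). Example values only.
import numpy as np

def bisect(g, lo, hi, iters=100):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_star = bisect(lambda x: (1 - x) + 0.5 * np.log(x), 1e-6, 0.99)
print("x* =", round(x_star, 4))                      # approximately 0.203

def b_min(x, R):
    """b_min(x) from (7), valid for 1 < x <= exp(2R)."""
    inner = 1.0 + (np.sqrt(x) / (x - 1.0)) * (R - 0.5 * (1.0 - 1.0 / x))
    denom = (1 + 1 / x) ** 2 * (1 - 1 / x) ** 2 * (np.sqrt(inner) - 1.0) ** 2
    return 20.0 * R * x ** 4 / denom

sigma2, D = 1.0, 0.3                                 # example source variance and distortion
R = 0.5 * np.log(sigma2 / D) + 0.1                   # a rate slightly above R*(D)
print("b_min(sigma^2/D) =", round(b_min(sigma2 / D, R), 1))
```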
3.2 Error Exponent of SPARC
The error exponent at distortion-level D of a sequence of rate R codes {Cn}_{n=1,2,...} is given by

r(D, R) = − lim sup_{n→∞} (1/n) log Pe(Cn, D),    (8)

where Pe(Cn, D) is defined in (6). The optimal error exponent for a rate-distortion pair (R, D) is the supremum of the error exponents over all sequences of codes with rate R, at distortion-level D. The optimal error exponent for discrete memoryless sources was obtained by Marton [10], and the result was extended to memoryless Gaussian sources by Ihara and Kubo [11].

Fact 1. [11] For an i.i.d. Gaussian source distributed as N(0, σ²) and squared-error distortion criterion, the optimal error exponent at rate R and distortion-level D is

r*(D, R) = ½( a²/σ² − 1 − log(a²/σ²) )  for R > R*(D),  and  r*(D, R) = 0  for R ≤ R*(D),    (9)

where a² = De^{2R}.

For R > R*(D), the exponent in (9) is the Kullback-Leibler divergence between two zero-mean Gaussians, distributed as N(0, a²) and N(0, σ²), respectively. The next theorem characterizes the error-exponent performance of SPARCs.

Theorem 2. Let S be drawn from an ergodic source with mean zero and variance σ². Let D ∈ (0, σ²), R > ½ log(σ²/D), and γ² ∈ (σ², De^{2R}). Let

b > max{ 2, (7/5) bmin(γ²/D) },    (10)

where bmin(·) is defined in (7). Then there exists a sequence of rate R SPARCs {Cn}_{n=1,2,...}, where Cn is defined by an n × Ln Mn design matrix with Mn = Ln^b and Ln determined by (5), whose probability of error at distortion-level D can be bounded as follows for all sufficiently large n:

Pe(Cn, D) ≤ P(|S|² ≥ γ²) + exp(−κ n^{1+c}),    (11)

where κ, c are strictly positive constants.

Corollary 1. Let S be drawn from an i.i.d. Gaussian source with mean zero and variance σ². Fix rate R > ½ log(σ²/D), and let a² = De^{2R}. Fix any ε ∈ (0, a²), and

b > max{ 2, (7/5) bmin( (a² − ε)/D ) }.    (12)

There exists a sequence of rate R SPARCs with parameter b that achieves the error exponent

½( (a² − ε)/σ² − 1 − log( (a² − ε)/σ² ) ).

Consequently, SPARCs achieve the optimal error exponent for i.i.d. Gaussian sources given by (9).
Proof. From Theorem 2, we know that for any ε > 0, there exists a sequence of rate R SPARCs {Cn} for which

Pe(Cn, D) ≤ P(|S|² ≥ a² − ε) [ 1 + exp(−κn^{1+c}) / P(|S|² ≥ a² − ε) ]    (13)

for sufficiently large n, as long as the parameter b satisfies (12). For S that is i.i.d. N(0, σ²), Cramér's large deviation theorem [12] yields

lim_{n→∞} −(1/n) log P(|S|² ≥ a² − ε) = ½( (a² − ε)/σ² − 1 − log((a² − ε)/σ²) )    (14)

for a² − ε > σ². Thus P(|S|² ≥ a² − ε) decays exponentially with n; in comparison, exp(−κn^{1+c}) decays faster than exponentially with n. Therefore, from (13) we have

lim inf_{n→∞} (−1/n) log Pe(Cn, D) ≥ lim inf_{n→∞} (−1/n) [ log P(|S|² ≥ a² − ε) + log( 1 + exp(−κn^{1+c}) / P(|S|² ≥ a² − ε) ) ]
  = ½( (a² − ε)/σ² − 1 − log((a² − ε)/σ²) ).    (15)

Since ε > 0 can be chosen arbitrarily small, we conclude that the error exponent in (9) is achievable.

We remark that the function bmin(x) is increasing in x. Therefore (12) implies that larger values of the design parameter b are required to achieve error exponents closer to the optimal value (i.e., smaller values of ε in Corollary 1).
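For a concrete sense of the exponent in (9), here is a small numerical sketch (illustrative only) that evaluates r*(D, R) for a unit-variance Gaussian source at a few rates.

```python
# Numerical sketch of the optimal error exponent in (9): for R > R*(D) it is
# the KL divergence between N(0, a^2) and N(0, sigma^2) with a^2 = D*exp(2R).
import numpy as np

def r_star(D, R, sigma2):
    """Optimal excess-distortion exponent of (9), rates in nats."""
    if R <= 0.5 * np.log(sigma2 / D):
        return 0.0
    a2 = D * np.exp(2 * R)
    return 0.5 * (a2 / sigma2 - 1 - np.log(a2 / sigma2))

sigma2, D = 1.0, 0.25
for R in (0.5, 0.7, 1.0):            # here R*(D) = 0.5*log(4) ~= 0.693 nats
    print(f"R = {R:.2f}  ->  r*(D,R) = {r_star(D, R, sigma2):.4f}")
```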
4 Inadequacy of the Direct Second MoM

4.1 First steps of the proof
Fix a rate R > R*(D), and b greater than the minimum value specified by the theorem. Note that De^{2R} > σ² since R > ½ log(σ²/D). Let γ² be any number such that σ² < γ² < De^{2R}.

Code Construction: For each block length n, pick L as specified by (5) and M = L^b. Construct an n × ML design matrix A with entries drawn i.i.d. N(0, 1). The codebook consists of all vectors Aβ such that β ∈ B_{M,L}. The non-zero entries of β are all set equal to a value specified below.

Encoding and Decoding: If the source sequence S is such that |S|² ≥ γ², then the encoder declares an error. If |S|² ≤ D, then S can be trivially compressed to within distortion D using the all-zero codeword. The addition of this extra codeword to the codebook affects the rate in a negligible way. If |S|² ∈ (D, γ²), then S is compressed in two steps. First, quantize |S|² with an n-level uniform scalar quantizer Q(·) with support in the interval (D, γ²]. For input x ∈ (D, γ²], the quantizer output is

Q(x) = D + (γ² − D)(i − ½)/n,  if x ∈ ( D + (γ² − D)(i − 1)/n, D + (γ² − D)i/n ],  i ∈ {1, . . . , n}.    (16)

Conveying the scalar quantization index to the decoder (with an additional log n nats) allows us to adjust the codebook variance according to the norm of the observed source sequence.³ The non-zero entries of β are each set to √((Q(|S|²) − D)/L), so that each SPARC codeword has variance (Q(|S|²) − D). Define a "quantized-norm" version of S as

S̃ := √( Q(|S|²)/|S|² ) S.    (17)

Note that |S̃|² = Q(|S|²). We use the SPARC to compress S̃. The encoder finds

β̂ := argmin_{β ∈ B_{M,L}} ‖S̃ − Aβ‖².

³ The scalar quantization step is only included to simplify the analysis. In fact, we could use the same codebook variance (γ² − D) for all S that satisfy |S|² ≤ (γ² − D), but this would make the forthcoming large deviations analysis quite cumbersome.
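The scalar quantizer in (16) is straightforward to implement; the following minimal sketch (illustrative, with arbitrary example values) returns the cell index and the midpoint reconstruction.

```python
# Minimal sketch of the n-level uniform scalar quantizer in (16), which maps
# |S|^2 in (D, gamma^2] to the midpoint of its cell.
import numpy as np

def Q(x, D, gamma2, n):
    """Return (cell index i, quantized value) for x in (D, gamma^2]."""
    step = (gamma2 - D) / n
    i = int(np.ceil((x - D) / step))      # cell index in {1, ..., n}
    i = min(max(i, 1), n)
    return i, D + step * (i - 0.5)

D, gamma2, n = 0.25, 1.2, 100
i, q = Q(0.9, D, gamma2, n)
print("cell index:", i, " quantized |S|^2:", round(q, 4))
```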
The decoder receives β̂ and reconstructs Ŝ = Aβ̂. Note that for block length n, the total number of bits transmitted by the encoder is log n + L log M, yielding an overall rate of R + (log n)/n nats/sample.

Error Analysis: For S such that |S|² ∈ (D, γ²), the overall distortion can be bounded as

|S − Aβ̂|² = |S − S̃ + S̃ − Aβ̂|² ≤ |S − S̃|² + 2|S − S̃||S̃ − Aβ̂| + |S̃ − Aβ̂|²
          ≤ κ₁/n² + κ₂|S̃ − Aβ̂|/n + |S̃ − Aβ̂|²    (18)

for some positive constants κ₁, κ₂. The last inequality holds because the step-size of the scalar quantizer is (γ² − D)/n, and |S|² ∈ (D, γ²).

Let E(S̃) be the event that the minimum of |S̃ − Aβ|² over β ∈ B_{M,L} is greater than D. The encoder declares an error if E(S̃) occurs. If E(S̃) does not occur, the overall distortion in (18) can be bounded as

|S − Aβ̂|² ≤ D + κ/n,    (19)

for some positive constant κ. The overall rate (including that of the scalar quantizer) is R + (log n)/n. Denoting the probability of error for this random code by Pe,n, we have

Pe,n ≤ P(|S|² ≥ γ²) + max_{ρ²∈(D,γ²)} P( E(S̃) | |S̃|² = ρ² ).    (20)

As γ² > σ², the ergodicity of the source guarantees that

lim_{n→∞} P(|S|² ≥ γ²) = 0.    (21)
To bound the second term in (20), without loss of generality we can assume that the source sequence is S̃ = (ρ, . . . , ρ). This is because the codebook distribution is rotationally invariant, due to the i.i.d. N(0, 1) design matrix A. We enumerate the codewords as Aβ(i), where β(i) ∈ B_{M,L} for i = 1, . . . , e^{nR}. For any β(i), the entries of Aβ(i) are i.i.d. N(0, ρ² − D). Define the indicator random variables

Ui(S̃) = 1 if |Aβ(i) − S̃|² ≤ D, and 0 otherwise.    (22)

We can then write

P( E(S̃) ) = P( ∑_{i=1}^{e^{nR}} Ui(S̃) = 0 ).    (23)
For a fixed S̃, the Ui(S̃)'s are dependent. To see this, consider codewords Ŝ(i), Ŝ(j) corresponding to the vectors β(i), β(j) ∈ B_{M,L}, respectively. Recall that a vector in B_{M,L} is uniquely defined by the position of the non-zero value in each of its L sections. If β(i) and β(j) overlap in r of their non-zero positions, then the column sums forming codewords Ŝ(i) and Ŝ(j) will share r common terms, and consequently Ui(S̃) and Uj(S̃) will be dependent.

For brevity, we henceforth denote Ui(S̃) by just Ui. Applying the second MoM with

X := ∑_{i=1}^{e^{nR}} Ui,

we have from (3)

P(X > 0) ≥ (EX)²/E[X²] =(a) EX / E[X | U1 = 1],    (24)

where (a) is obtained by expressing E[X²] as follows:

E[X²] = E[ X ∑_i Ui ] = ∑_{i=1}^{e^{nR}} E[X Ui] = ∑_{i=1}^{e^{nR}} P(Ui = 1) E[X | Ui = 1] = EX · E[X | U1 = 1].    (25)

The last equality in (25) holds because EX = ∑_{i=1}^{e^{nR}} P(Ui = 1), and because, due to the symmetry of the code construction, E[X | Ui = 1] is the same for all i. As E[X²] ≥ (EX)², (24) implies that E[X | U1 = 1] ≥ EX. Therefore, to show that X > 0 w.h.p., we need

E[X | U1 = 1] / EX → 1 as n → ∞.    (26)
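The quantities in (24)–(26) can be estimated by simulation at toy sizes. The sketch below (not from the paper; parameters chosen only for speed) estimates P(X > 0), EX and E[X²] for a tiny SPARC, so the true probability can be compared with the second-MoM lower bound (3).

```python
# Small Monte Carlo sketch (toy parameters, not from the paper): estimate
# P(X > 0), EX and E[X^2] for a tiny SPARC and compare P(X > 0) with the
# second moment method bound (EX)^2 / E[X^2] from (3).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, L, M = 12, 2, 6
rho2, D = 1.0, 0.45
trials = 2000
counts = []

for _ in range(trials):
    A = rng.standard_normal((n, L * M))
    s = np.full(n, np.sqrt(rho2))                 # |s|^2 = rho^2, as in the proof set-up
    scale = np.sqrt((rho2 - D) / L)
    X = 0
    for picks in itertools.product(range(M), repeat=L):
        cw = scale * sum(A[:, sec * M + j] for sec, j in enumerate(picks))
        if np.mean((s - cw) ** 2) <= D:           # beta is a solution
            X += 1
    counts.append(X)

counts = np.array(counts)
EX, EX2 = counts.mean(), (counts ** 2).mean()
print("P(X > 0)        :", np.mean(counts > 0))
print("(EX)^2 / E[X^2] :", EX ** 2 / EX2)
```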
4.2 EX versus E[X | U1 = 1]
Using Cramér's large deviation theorem [12] (or a Chernoff bound), we can bound the expected number of solutions as

EX = e^{nR} P(U1 = 1) ≤ e^{nR} e^{−n f(ρ², ρ²−D, D)},    (27)

where for x, y, z > 0, the large-deviation rate function f(x, y, z) is

f(x, y, z) = (x+z)/(2y) − xz/(Ay) − A/(4y) − ½ ln( A/(2x) )  if z ≤ x + y,  and 0 otherwise,    (28)

and

A = √(y² + 4xz) − y.    (29)

Note that

f(ρ², ρ² − D, D) = ½ log(ρ²/D).    (30)

Using the strong version of Cramér's large deviation theorem due to Bahadur and Rao [12, 13], we also have a lower bound for sufficiently large n:

EX = e^{nR} P(U1 = 1) ≥ e^{nR} · (κ/√n) e^{−n f(ρ², ρ²−D, D)}.    (31)

In words, when |S|² = x, the probability that a randomly chosen i.i.d. N(0, y) codeword is within distortion z of the sequence S is approximately e^{−n f(x,y,z)}. The function f will play an important role in establishing Lemma 5.1, the key technical ingredient in the proof.
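The following sketch evaluates f(x, y, z) as printed in (28)–(29) and numerically checks the identity (30), as well as the fact (used later, in Section 6) that for fixed x > z the minimum of f over y is ½ log(x/z), attained at y = x − z. It assumes the displayed form of (28).

```python
# Sketch of the rate function f(x, y, z) from (28)-(29), with two quick checks:
# (i) f(rho^2, rho^2 - D, D) = 0.5*log(rho^2/D) as in (30);
# (ii) the minimum over y (fixed x > z) is 0.5*log(x/z), attained at y = x - z.
import numpy as np

def f(x, y, z):
    if z > x + y:
        return 0.0
    A = np.sqrt(y * y + 4 * x * z) - y
    return (x + z) / (2 * y) - x * z / (A * y) - A / (4 * y) - 0.5 * np.log(A / (2 * x))

rho2, D = 1.0, 0.3
print(f(rho2, rho2 - D, D), "vs", 0.5 * np.log(rho2 / D))     # identity (30)

x, z = 1.0, 0.3
ys = np.linspace(0.2, 2.0, 1901)
vals = np.array([f(x, y, z) for y in ys])
print("argmin over y :", ys[vals.argmin()], " (predicted:", x - z, ")")
print("minimum value :", vals.min(), " vs 0.5*log(x/z) =", 0.5 * np.log(x / z))
```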
Next consider E[X | U1 = 1]. If β(i) and β(j) overlap in r of their non-zero positions, the column sums forming codewords Ŝ(i) and Ŝ(j) will share r common terms. Therefore,

E[X | U1 = 1] = ∑_{i=1}^{e^{nR}} P(Ui = 1 | U1 = 1) = ∑_{i=1}^{e^{nR}} P(Ui = 1, U1 = 1)/P(U1 = 1)
  =(a) ∑_{r=0}^{L} (L choose r) (M − 1)^{L−r} P(U2 = U1 = 1 | F12(r)) / P(U1 = 1),    (32)

where F12(r) is the event that the codewords corresponding to U1 and U2 share r common terms. In (32), (a) holds because for each codeword Ŝ(i), there are a total of (L choose r)(M − 1)^{L−r} codewords which share exactly r common terms with Ŝ(i), for 0 ≤ r ≤ L. From (32) and (31), we obtain

E[X | U1 = 1]/EX = ∑_{r=0}^{L} (L choose r)(M − 1)^{L−r} P(U2 = U1 = 1 | F12(r)) / ( e^{nR} (P(U1 = 1))² )
  =(a) 1 + ∑_{α=1/L, ..., L/L} (L choose Lα) P(U2 = U1 = 1 | F12(α)) / ( M^{Lα} (P(U1 = 1))² )
  ∼(b) 1 + ∑_{α=1/L, ..., L/L} e^{n∆α},    (33)

where (a) is obtained by substituting α = r/L and e^{nR} = M^L. The equality (b) is from [1, Appendix A], where it was also shown that

∆α ≤ κ/L + (R/b) min{ α, ᾱ, log 2/log L } − h(α),    (34)

where

h(α) := αR − ½ log( (1 + α)/(1 − α(1 − 2D/ρ²)) ).    (35)
The term e^{n∆α} in (33) may be interpreted as follows. Conditioned on β(1) being a solution, the expected number of solutions that share αL common terms with β(1) is ∼ e^{n∆α} EX. Recall that we require the left side of (33) to tend to 1 as n → ∞. Therefore, we need ∆α < 0 for α = 1/L, . . . , L/L. The inequality in (34) is asymptotically tight, hence we need h(α) to be positive in order for ∆α < 0. However, when R < (1 − D/ρ²), it can be verified that h(α) < 0 for α ∈ (0, α*), where α* ∈ (0, 1) is the solution to h(α) = 0. Thus ∆α is positive for α ∈ (0, α*) when

½ log(ρ²/D) ≤ R ≤ (1 − D/ρ²).

Consequently, (33) implies that

E[X | U1 = 1]/EX ∼ ∑_α e^{n∆α} → ∞ as n → ∞,    (36)

and the second MoM fails.
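To see the failure region concretely, the sketch below (illustrative values only) evaluates h(α) from (35) for a rate R strictly between ½ log(ρ²/D) and 1 − D/ρ², and locates the crossing point α* where h changes sign.

```python
# Sketch: evaluate h(alpha) from (35) for a rate between 0.5*log(rho^2/D) and
# 1 - D/rho^2, illustrating that h < 0 on an interval (0, alpha*).
import numpy as np

def h(alpha, R, rho2, D):
    return alpha * R - 0.5 * np.log((1 + alpha) / (1 - alpha * (1 - 2 * D / rho2)))

rho2, D = 1.0, 0.5                 # here 0.5*log(rho2/D) ~= 0.347 and 1 - D/rho2 = 0.5
R = 0.40                           # a rate in between, so the plain second MoM fails
alphas = np.linspace(1e-4, 0.999, 2000)
vals = h(alphas, R, rho2, D)
alpha_star = alphas[np.argmax(vals > 0)]     # first grid point where h turns positive
print("h(0.05) =", round(h(0.05, R, rho2, D), 4))    # negative
print("alpha*  ~", round(alpha_star, 3))
print("h(0.9)  =", round(h(0.9, R, rho2, D), 4))     # positive
```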
4.3 A Stylized Example
Before describing how to rectify the second MoM failure in the SPARC setting, we present a simple example to give intuition about the failure modes of the second MoM. The proofs in the next two sections do not rely on the discussion here.

Consider a sequence of generic random structures (e.g., a sequence of random graphs or SPARC design matrices) denoted by Rn, n ≥ 1. Suppose that for each n, the realization of Rn belongs to one of two types: a type T1 structure which has e^n solutions, or a type T2 structure which has e^{2n} solutions. Here, we do not specify precisely what a "solution" is—for concreteness, one could think of a solution as a coloring of the graph, or in the case of SPARC, a codeword that is within the target distortion. Let the probabilities of Rn being of each type be

P(Rn ∈ T1) = 1 − e^{−np},   P(Rn ∈ T2) = e^{−np},    (37)

where p > 0 is a constant. Regardless of the realization, we note that Rn always has at least e^n solutions. We now examine whether the second MoM can guarantee the existence of a solution for this problem as n → ∞. The number of solutions X can be expressed as a sum of indicator random variables:

X = ∑_{i=1}^{N} Ui,
where Ui = 1 if configuration i is a solution, and N is the total number of configurations. (In the SPARC context, a configuration is a codeword; in graph coloring, a configuration is an assignment of colors to the vertices.) We assume that the configurations are symmetric (as in the SPARC set-up), so that each one has equal probability of being a solution, i.e.,

P(Ui = 1 | Rn ∈ T1) = e^n/N,   P(Ui = 1 | Rn ∈ T2) = e^{2n}/N.    (38)

Due to symmetry, the second moment ratio can be expressed as

E[X²]/(EX)² = E[X | U1 = 1]/EX = E[X | U1 = 1] / ( (1 − e^{−np}) e^n + e^{−np} e^{2n} ).    (39)

The conditional expectation in the numerator can be computed as follows:

E[X | U1 = 1] = P(Rn ∈ T1 | U1 = 1) E[X | U1 = 1, T1] + P(Rn ∈ T2 | U1 = 1) E[X | U1 = 1, T2]
  =(a) [ (1 − e^{−np})(e^n/N) / ( (1 − e^{−np})(e^n/N) + e^{−np}(e^{2n}/N) ) ] e^n
       + [ e^{−np}(e^{2n}/N) / ( (1 − e^{−np})(e^n/N) + e^{−np}(e^{2n}/N) ) ] e^{2n}
  = ( (1 − e^{−np}) e^{2n} + e^{n(4−p)} ) / ( (1 − e^{−np}) e^n + e^{n(2−p)} ),    (40)

where (a) is obtained by using Bayes' rule to compute P(Rn ∈ T1 | U1 = 1). The second MoM ratio in (39) therefore equals

E[X²]/(EX)² = E[X | U1 = 1]/EX = ( (1 − e^{−np}) e^{2n} + e^{n(4−p)} ) / [ (1 − e^{−np}) e^n + e^{n(2−p)} ]².    (41)
We examine the behavior of the ratio above as n → ∞ for different values of p.

Case 1: p ≥ 2. The dominant term in both the numerator and the denominator of (41) is e^{2n}, and we get

E[X | U1 = 1]/EX → 1 as n → ∞,    (42)

and the second MoM succeeds.

Case 2: 1 < p ≤ 2. The dominant term in the numerator is e^{n(4−p)}, while the dominant term in the denominator is e^{2n}. Hence

E[X | U1 = 1]/EX = ( e^{n(4−p)}/e^{2n} )(1 + o(1)) ∼ e^{n(2−p)} → ∞ as n → ∞.    (43)

Case 3: 0 < p ≤ 1. The dominant term in the numerator is e^{n(4−p)}, while the dominant term in the denominator is e^{n(4−2p)}. Hence

E[X | U1 = 1]/EX = ( e^{n(4−p)}/e^{n(4−2p)} )(1 + o(1)) ∼ e^{np} → ∞ as n → ∞.    (44)
Thus in both Case 2 and Case 3, the second MoM fails because the expected number of solutions conditioned on a solution (U1 = 1) is exponentially larger than the unconditional expected value. However, there is an important distinction between the two cases, which allows us to fix the failure of the second MoM in Case 2 but not in Case 3.

Consider the conditional distribution of the number of solutions given U1 = 1. From the calculation in (40), we have

P(X = e^n | U1 = 1) = P(Rn ∈ T1 | U1 = 1) = (1 − e^{−np}) e^n / ( (1 − e^{−np}) e^n + e^{n(2−p)} ),
P(X = e^{2n} | U1 = 1) = P(Rn ∈ T2 | U1 = 1) = e^{n(2−p)} / ( (1 − e^{−np}) e^n + e^{n(2−p)} ).    (45)

When 1 < p ≤ 2, the first term in the denominator of the RHS dominates, and the conditional distribution of X is

P(X = e^n | U1 = 1) = 1 − e^{−n(p−1)}(1 + o(1)),   P(X = e^{2n} | U1 = 1) = e^{−n(p−1)}(1 + o(1)).    (46)

Thus the conditional probability of a realization Rn being type T1 given U1 = 1 is slightly smaller than the unconditional probability, which is 1 − e^{−np}. However, conditioned on U1 = 1, a realization Rn is still extremely likely to have come from type T1, i.e., to have e^n solutions. Therefore, when 1 < p ≤ 2, conditioning on a solution does not change the nature of the 'typical' or 'high-probability' realization. This makes it possible to fix the failure of the second MoM in this case. The idea is to define a new random variable X′ which counts the number of solutions coming from typical realizations, i.e., only type T1 structures. The second MoM is then applied to X′ to show that it is strictly positive with high probability.

When p < 1, conditioning on a solution completely changes the distribution of X. The dominant term in the denominator of the RHS in (45) is e^{n(2−p)}, so the conditional distribution of X is

P(X = e^n | U1 = 1) = e^{−n(1−p)}(1 + o(1)),   P(X = e^{2n} | U1 = 1) = 1 − e^{−n(1−p)}(1 + o(1)).    (47)

Thus, conditioned on a solution, a typical realization of Rn belongs to type T2, i.e., has e^{2n} solutions. On the other hand, if we draw from the unconditional distribution of Rn in (37), a typical realization has e^n solutions. In this case, the second moment method cannot be fixed by counting only the solutions from realizations of type T1, because the total conditional probability of such realizations is very small. This is the analog of the "condensation phase" that is found in problems such as random hypergraph coloring [7]. In this phase, although solutions may exist, even an enhanced second MoM does not prove their existence.

Fortunately, there is no condensation phase in the SPARC compression problem. Despite the failure of the direct second MoM, we prove (Lemma 5.1) that conditioning on a solution does not significantly alter the total number of solutions for a very large fraction of design matrices. Analogous to Case 2 above, we can apply the second MoM to a new random variable that counts only the solutions coming from typical realizations of the design matrix. This yields the desired result that solutions exist for all rates R > R*(D).
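The three regimes of the stylized example are easy to check numerically. The following sketch (illustrative only) evaluates the ratio (41) in log-space and the conditional probability of a type-T1 realization from (45), for one value of p in each regime.

```python
# Numerical sketch of the stylized example: the second-MoM ratio (41) and the
# conditional probability of a type-T1 realization from (45), computed in
# log-space to avoid overflow.
import numpy as np

def lse(vals):
    """log-sum-exp of a list of log-values."""
    m = max(vals)
    return m + np.log(sum(np.exp(v - m) for v in vals))

def log_ratio(n, p):
    """log of E[X | U1 = 1]/EX from (41)."""
    log_num = lse([np.log1p(-np.exp(-n * p)) + 2 * n, n * (4 - p)])
    log_den = 2 * lse([np.log1p(-np.exp(-n * p)) + n, n * (2 - p)])
    return log_num - log_den

def prob_T1_given_solution(n, p):
    """P(Rn in T1 | U1 = 1) from (45)."""
    a = np.log1p(-np.exp(-n * p)) + n
    return np.exp(a - lse([a, n * (2 - p)]))

for p in (2.5, 1.5, 0.5):
    print(f"p = {p}: log-ratio at n=50 is {log_ratio(50, p):8.1f}, "
          f"P(T1 | solution) = {prob_T1_given_solution(50, p):.3e}")
```

The output reflects the discussion above: for p = 2.5 the log-ratio stays near 0 (second MoM succeeds); for p = 1.5 the ratio diverges but the conditional distribution still concentrates on T1 (fixable); for p = 0.5 the ratio diverges and, conditioned on a solution, T2 dominates (the condensation regime).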
5 Proofs of Main Results

5.1 Proof of Theorem 1
The code parameters, encoding and decoding are as described in Section 4.1. We build on the proof set-up of Section 4.2. Given that β ∈ B_{M,L} is a solution, for α = 0, 1/L, . . . , L/L define Xα(β) to be the number of solutions that share αL non-zero terms with β. The total number of solutions given that β is a solution is

X(β) = ∑_{α=0, 1/L, ..., L/L} Xα(β).    (48)

Using this notation, we have

E[X | U1 = 1]/EX =(a) E[X(β)]/EX = ∑_{α=0, 1/L, ..., L/L} E[Xα(β)]/EX ∼(b) 1 + ∑_{α=1/L, ..., L/L} e^{n∆α},    (49)

where (a) holds because the symmetry of the code construction allows us to condition on a generic β ∈ B_{M,L} being a solution; (b) follows from (33). Note that E[Xα(β)] and E[X(β)] are expectations evaluated with the conditional distribution over the space of design matrices given that β is a solution.

The key ingredient in the proof is the following lemma, which shows that Xα(β) is much smaller than EX w.h.p. for all α ∈ {1/L, . . . , L/L}. In particular, Xα(β) ≪ EX even for α for which E[Xα(β)]/EX ∼ e^{n∆α} → ∞ as n → ∞.

Lemma 5.1. Let R > ½ log(ρ²/D). If β ∈ B_{M,L} is a solution, then for sufficiently large L,

P( Xα(β) ≤ L^{−3/2} EX  for 1/L ≤ α ≤ (L−1)/L ) ≥ 1 − η,    (50)

where

η = L^{−2.5( b/bmin(ρ²/D) − 1 )}.    (51)
The function bmin(·) is defined in (7).

Proof. The proof of the lemma is given in Section 6.

The probability measure in Lemma 5.1 is the conditional distribution on the space of design matrices A given that β is a solution.

Definition 5.1. For ε > 0, call a solution β "ε-good" if

∑_{α=1/L, ..., L/L} Xα(β) < ε EX.    (52)

Since we have fixed S̃ = (ρ, . . . , ρ), whether a solution β is ε-good or not is determined by the design matrix. Lemma 5.1 guarantees that w.h.p. any solution β will be ε-good, i.e., if β is a solution, w.h.p. the design matrix is such that the number of solutions sharing any common terms with β is less than εEX. The key to proving Theorem 1 is to apply the second MoM only to ε-good solutions.

Fix ε = L^{−0.5}. For i = 1, . . . , e^{nR}, define the indicator random variables

Vi = 1 if |Aβ(i) − S̃|² ≤ D and β(i) is ε-good, and 0 otherwise.    (53)

The number of ε-good solutions, denoted by Xg, is given by

Xg = V1 + V2 + . . . + V_{e^{nR}}.    (54)

We will apply the second MoM to Xg to show that P(Xg > 0) → 1 as n → ∞. We have

P(Xg > 0) ≥ (EXg)²/E[Xg²] = EXg / E[Xg | V1 = 1],    (55)

where the second equality is obtained by writing E[Xg²] = (EXg) E[Xg | V1 = 1], similar to (25).

Lemma 5.2. a) EXg ≥ (1 − η)EX, where η is defined in (51). b) E[Xg | V1 = 1] ≤ (1 + L^{−0.5})EX.

Proof. Due to the symmetry of the code construction, we have

EXg = e^{nR} P(V1 = 1) =(a) e^{nR} P(U1 = 1) P(V1 = 1 | U1 = 1) = EX · P(β(1) is ε-good | β(1) is a solution).    (56)

In (56), (a) follows from the definitions of Vi in (53) and Ui in (22). Given that β(1) is a solution, Lemma 5.1 shows that

∑_{α=1/L, ..., L/L} Xα(β(1)) < (EX) L^{−0.5}    (57)

with probability at least 1 − η. As ε = L^{−0.5}, β(1) is ε-good according to Definition 5.1 if (57) is satisfied. Thus EXg in (56) can be lower bounded as

EXg ≥ (1 − η)EX.    (58)
For part (b), first observe that the total number of solutions X is an upper bound for the number of ε-good solutions Xg. Therefore

E[Xg | V1 = 1] ≤ E[X | V1 = 1].    (59)

Given that β(1) is an ε-good solution, the expected number of solutions can be expressed as

E[X | V1 = 1] = E[ X₀(β(1)) | V1 = 1 ] + E[ ∑_{α=1/L, ..., L/L} Xα(β(1)) | V1 = 1 ].    (60)

There are (M − 1)^L codewords that share no common terms with β(1). Each of these codewords is independent of β(1), and thus independent of the event V1 = 1. Hence

E[ X₀(β(1)) | V1 = 1 ] = E[ X₀(β(1)) ] = (M − 1)^L P(|S̃ − Aβ|² ≤ D) ≤ M^L P(|S̃ − Aβ|² ≤ D) = EX.    (61)

Next, note that conditioned on β(1) being an ε-good solution (i.e., V1 = 1),

∑_{α=1/L, ..., L/L} Xα(β(1)) < ε EX    (62)

with certainty. This follows from the definition of ε-good in (52). Using (61) and (62) in (60), we conclude that

E[X | V1 = 1] < (1 + ε)EX.    (63)

Combining (63) with (59) completes the proof of Lemma 5.2.

Using Lemma 5.2 in (55), we obtain

P(Xg > 0) ≥ EXg / E[Xg | V1 = 1] ≥ (1 − η)/(1 + ε) = ( 1 − L^{−2.5( b/bmin(ρ²/D) − 1 )} ) / ( 1 + L^{−1/2} ),    (64)

where the last equality is obtained by using the definition of η in (51) and ε = L^{−0.5}. Hence the probability of the existence of at least one good solution goes to 1 as L → ∞. Thus we have shown that for any ρ² ∈ (D, γ²), the quantity P( E(S̃) | |S̃|² = ρ² ) in (20) tends to zero whenever R > ½ log(ρ²/D) and b > bmin(ρ²/D). Combining this with (19)–(21), we conclude that the probability that

|S − Aβ̂|² ≤ D + κ/n

goes to one as n → ∞. As γ² > σ² can be chosen arbitrarily close to σ², the proof of Theorem 1 is complete.
5.2 Proof of Theorem 2
The code construction is as described in Section 4.1, with the parameter b now chosen to satisfy (10). Recall the definition of an ε-good solution in Definition 5.1. We follow the set-up of Section 5.1 and count the number of ε-good solutions, for an appropriately defined ε. As before, we want an upper bound for the probability of the event Xg = 0, where the number of ε-good solutions Xg is defined in (54). Theorem 2 is obtained by using Suen's correlation inequality to upper bound the probability of the event Xg = 0. Suen's inequality yields a sharper upper bound than the second MoM. We use it to prove that the probability of Xg = 0 decays super-exponentially in L. In comparison, the second MoM only guarantees a polynomial decay.

We begin with some definitions required for Suen's inequality.

Definition 5.2 (Dependency Graphs [6]). Let {Vi}_{i∈I} be a family of random variables (defined on a common probability space). A dependency graph for {Vi} is any graph Γ with vertex set V(Γ) = I whose set of edges satisfies the following property: if A and B are two disjoint subsets of I such that there are no edges with one vertex in A and the other in B, then the families {Vi}_{i∈A} and {Vi}_{i∈B} are independent.

Fact 2. [6, Example 1.5, p.11] Suppose {Yα}_{α∈A} is a family of independent random variables, and each Vi, i ∈ I, is a function of the variables {Yα}_{α∈Ai} for some subset Ai ⊆ A. Then the graph with vertex set I and edge set {ij : Ai ∩ Aj ≠ ∅} is a dependency graph for {Vi}_{i∈I}.
In our setting, we fix ε = L^{−3/2}, and let Vi be the indicator random variable defined in (53). Note that Vi is one if and only if β(i) is an ε-good solution. The set of codewords that share at least one common term with β(i) are the ones that play a role in determining whether β(i) is an ε-good solution or not. Hence, the graph Γ with vertex set V(Γ) = {1, . . . , e^{nR}} and edge set e(Γ) given by

{ij : i ≠ j and the codewords β(i), β(j) share at least one common term}

is a dependency graph for the family {Vi}_{i=1}^{e^{nR}}. This follows from Fact 2 by observing that: i) each Vi is a function of the columns of A that define β(i) and all other codewords that share at least one common term with β(i); and ii) the columns of A are generated independently of one another.

For a given codeword β(i), there are (L choose r)(M − 1)^{L−r} other codewords that have exactly r terms in common with β(i), for 0 ≤ r ≤ (L − 1). Therefore each vertex in the dependency graph for the family {Vi}_{i=1}^{e^{nR}} is connected to

∑_{r=1}^{L−1} (L choose r)(M − 1)^{L−r} = M^L − 1 − (M − 1)^L

other vertices.

Fact 3 (Suen's Inequality [6]). Let Vi ∼ Bern(pi), i ∈ I, be a finite family of Bernoulli random variables having a dependency graph Γ. Write i ∼ j if ij is an edge in Γ. Define

λ = ∑_{i∈I} EVi,   ∆ = ½ ∑_{i∈I} ∑_{j∼i} E(Vi Vj),   δ = max_{i∈I} ∑_{k∼i} EVk.

Then

P( ∑_{i∈I} Vi = 0 ) ≤ exp( − min{ λ/2, λ/(6δ), λ²/(8∆) } ).    (65)
We apply Suen's inequality with the dependency graph specified above for {Vi}_{i=1}^{e^{nR}} to compute an upper bound for P(Xg = 0), where Xg = ∑_{i=1}^{e^{nR}} Vi is the total number of ε-good solutions for ε = L^{−3/2}. Note that the ε chosen here is smaller than the value of L^{−1/2} used for Theorem 1. This smaller value is required to prove the super-exponential decay of the error probability via Suen's inequality. We also need a stronger version of Lemma 5.1.

Lemma 5.3. Let R > ½ log(ρ²/D). If β ∈ B_{M,L} is a solution, then for sufficiently large L,

P( Xα(β) ≤ L^{−5/2} EX  for 1/L ≤ α ≤ L/L ) ≥ 1 − ξ,    (66)

where

ξ = L^{−2.5( b/bmin(ρ²/D) − 7/5 )}.    (67)

Proof. The proof is nearly identical to that of Lemma 5.1 in Section 6, with the terms L^{−3/2} and 3/(2L) replaced by L^{−5/2} and 5/(2L), respectively, throughout the lemma. Thus we obtain the following condition on b, which is the analog of (103):

b > max_{α∈{1/L, ..., L/L}} [ R/(min{αΛ(α), c₁}) ] ( min{ α, ᾱ, log 2/log L } + 5/(2L) ) = 3.5R/Λ(0) + O(1/L) = (7/5) bmin(ρ²/D) + O(1/L).    (68)
The result is then obtained using arguments analogous to (104) and (105).

We now compute each of the three terms in the RHS of Suen's inequality.

First term, λ/2: We have

λ = ∑_{i=1}^{e^{nR}} EVi = EXg =(a) EX · P(β(1) is ε-good | β(1) is a solution),    (69)

where (a) follows from (56). Given that β(1) is a solution, Lemma 5.3 shows that

∑_{α=1/L, ..., L/L} Xα(β(1)) < (EX) L^{−3/2}    (70)

with probability at least 1 − ξ. As ε = L^{−3/2}, β(1) is ε-good according to Definition 5.1 if (70) is satisfied. Thus the RHS of (69) can be lower bounded as follows:

λ = EX · P(β(1) is ε-good | β(1) is a solution) ≥ EX · (1 − ξ).    (71)

Using the expression from (31) for the expected number of solutions EX, we have

λ ≥ (1 − ξ) (κ/√n) e^{n(R − ½ log(ρ²/D))},    (72)
where κ > 0 is a constant. For b > (7/5) bmin(ρ²/D), (67) implies that ξ approaches 0 with growing L.

Second term, λ/(6δ): Due to the symmetry of the code construction, we have

δ = max_{i∈{1,...,e^{nR}}} ∑_{k∼i} P(Vk = 1) = ∑_{k∼i} P(Vk = 1)  ∀i ∈ {1, . . . , e^{nR}}
  = ∑_{r=1}^{L−1} (L choose r)(M − 1)^{L−r} · P(V1 = 1)
  = ( M^L − 1 − (M − 1)^L ) P(V1 = 1).    (73)

Combining this together with the fact that

λ = ∑_{i=1}^{M^L} EVi = M^L P(V1 = 1),

we obtain

λ/δ = M^L / ( M^L − 1 − (M − 1)^L ) = 1 / ( 1 − L^{−bL} − (1 − L^{−b})^L ),    (74)

where the second equality is obtained by substituting M = L^b. Using a Taylor series bound for the denominator of (74) (see [1, Sec. V] for details) yields the following lower bound for sufficiently large L:

λ/δ ≥ L^{b−1}/2.    (75)
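The Taylor-series bound (75) can be sanity-checked numerically; the sketch below (illustrative values of L and b) compares the exact ratio in (74) with L^{b−1}/2.

```python
# Quick illustrative check of (75): for M = L^b, the ratio in (74),
# lambda/delta = 1 / (1 - L^{-bL} - (1 - L^{-b})^L), exceeds L^{b-1}/2.
b = 3
for L in (50, 200, 1000):
    denom = 1.0 - float(L) ** (-b * L) - (1.0 - float(L) ** (-b)) ** L
    print(f"L = {L:4d}:  lambda/delta = {1.0 / denom:12.1f}   L^(b-1)/2 = {L ** (b - 1) / 2:10.1f}")
```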
Third term, λ²/(8∆): We have

∆ = ½ ∑_{i=1}^{M^L} ∑_{j∼i} E[Vi Vj] = ½ ∑_{i=1}^{M^L} P(Vi = 1) ∑_{j∼i} P(Vj = 1 | Vi = 1)
  =(a) ½ EXg ∑_{j∼1} P(Vj = 1 | V1 = 1)
  = ½ EXg E[ ∑_{j∼1} 1{Vj = 1} | V1 = 1 ]
  ≤(b) ½ EXg E[ ∑_{α=1/L, ..., (L−1)/L} Xα(β(1)) | V1 = 1 ].    (76)

In (76), (a) holds because of the symmetry of the code construction. The inequality (b) is obtained as follows. The number of ε-good solutions that share common terms with β(1) is bounded above by the total number of solutions sharing common terms with β(1). The latter quantity can be expressed as the sum of the number of solutions sharing exactly αL common terms with β(1), for α ∈ {1/L, . . . , (L−1)/L}. Conditioned on V1 = 1, i.e., the event that β(1) is an ε-good solution, the total number of solutions that share common terms with β(1) is bounded by εEX. Therefore, from (76) we have

∆ ≤ ½ EXg E[ ∑_{α=1/L, ..., (L−1)/L} Xα(β(1)) | V1 = 1 ] ≤ ½ (EXg)(L^{−3/2} EX) ≤ (L^{−3/2}/2)(EX)²,    (77)

where we have used ε = L^{−3/2}, and the fact that Xg ≤ X. Combining (77) and (71), we obtain

λ²/(8∆) ≥ (1 − ξ)²(EX)² / ( 4 L^{−3/2} (EX)² ) ≥ κ L^{3/2},    (78)
where κ is a strictly positive constant.

Applying Suen's inequality: Using the lower bounds obtained in (72), (75), and (78) in (65), we obtain

P( ∑_{i=1}^{e^{nR}} Vi = 0 ) ≤ exp( −κ min{ e^{n(R − ½ log(ρ²/D) − (log n)/(2n))}, L^{b−1}, L^{3/2} } ),    (79)

where κ is a positive constant. Recalling from (5) that L = Θ(n/log n) and R > ½ ln(ρ²/D), we see that for b > 2,

P( ∑_{i=1}^{e^{nR}} Vi = 0 ) ≤ exp( −κ n^{1+c} ),    (80)

where c > 0 is a constant. Note that the condition b > (7/5) bmin(ρ²/D) was also needed to obtain (79) via Suen's inequality. In particular, this condition on b is required for ξ in Lemma 5.3 to go to 0 with growing L.

Using (80) in (20), we conclude that for any γ² ∈ (σ², De^{2R}) the probability of error can be bounded as

Pe,n ≤ P(|S|² ≥ γ²) + max_{ρ²∈(D,γ²)} P( E(S̃) | |S̃|² = ρ² )
     ≤ P(|S|² ≥ γ²) + exp(−κ n^{1+c}),    (81)

provided the parameter b satisfies

b > max_{ρ²∈(D,γ²)} max{ 2, (7/5) bmin(ρ²/D) }.    (82)

It can be verified from the definition in (7) that bmin(x) is strictly increasing in x ∈ (1, e^{2R}). Therefore, the maximum on the RHS of (82) is bounded by max{2, (7/5) bmin(γ²/D)}. Choosing b to be larger than this value will guarantee that (81) holds. This completes the proof of the theorem.
6 Proof of Lemma 5.1
We begin by listing three useful properties of the function f(x, y, z) defined in (28). Recall that the probability that an i.i.d. N(0, y) sequence is within distortion z of a sequence with normalized norm-squared x is ∼ e^{−n f(x,y,z)}.

1. For fixed x, y, f is strictly decreasing in z ∈ (0, x + y).
2. For fixed y, z, f is strictly increasing in x ∈ (z, ∞).
3. For fixed x, z with x > z, f is convex in y and attains its minimum value of ½ log(x/z) at y = x − z.
These properties are straightforward to verify from the definition (28) using elementary calculus. For K ⊆ {1, . . . , L}, let βK denote the restriction of β to the set K, i.e., βK coincides with β in the sections indicated by K and the remaining entries are all equal to zero. For example, if K = {2, 3}, the second and third sections of βK will each have one non-zero entry, the other entries are all zeros. Definition 6.1. Given that β is a solution, for α =
1 L L, . . . , L,
define Fα (β) as the event that
˜ − AβK |2 ≥ Dα |S for every size αL subset K ⊂ {1, . . . , L}, where Dα is the solution to the equation Rα = f (ρ2 , (ρ2 − D)α, Dα ).
(83)
The intuition behind choosing Dα according to (83) is the following. Any subset of αL sections of the design matrix A defines a SPARC of rate Rα, with each codeword consisting of i.i.d N (0, (ρ2 − D)α) entries. (Note that the entries of a single codeword are i.i.d., though the codewords are dependent due to the SPARC structure.) The probability that a codeword from this rate Rα code ˜ is ∼ e−nf (ρ2 ,(ρ2 −D)α,z) . Hence the expected number is within distortion z of the source sequence S ˜ is of codewords in the rate Rα codebook within distortion z of S enRα e−nf (ρ
2 ,(ρ2 −D)α,z)
.
As f (ρ2 , (ρ2 −D)α, z) is a strictly decreasing function of z in (0, ρ2 ), (83) says that Dα is the smallest expected distortion for any rate Rα code with codeword entries chosen i.i.d. N (0, (ρ2 − D)α). 4 ˜ is vanishingly small. For z < Dα , the expected number of codewords within distortion z of S 4 Note that Dα is not the distortion-rate function at rate Rα as the codewords are not chosen with the optimal variance for rate Rα.
19
Conditioned on Fα (β), the idea is that any αL sections of β cannot by themselves represent ˜ S with distortion less than Dα . In other words, in a typical realization of the design matrix, all ˜ On the other the sections contribute roughly equal amounts to finding a codeword within D of S. ˜ with distortion less than Dα , the remaining hand, if some αL sections of the SPARC can represent S α ¯ L sections have “less work” to do—this creates a proliferation of solutions that share these αL common sections with β. Consequently, the total number of solutions is much greater than EX for these atypical design matrices. The first step in proving the lemma is to show that for any β, the event Fα (β) holds w.h.p. The second step is showing that when Fα (β) holds, the expected number of solutions that share any common terms with β is small compared to EX. Indeed, using Fα (β) we can write P Xα (β) > L−3/2 EX = P {Xα (β) > L−3/2 EX}, Fαc (β) + P {Xα (β) > L−3/2 EX}, Fα (β) ≤ P (Fαc (β)) + P (Fα (β)) P Xα (β) < L−3/2 EX | Fα (β) ≤ P (Fαc (β)) +
E[Xα (β) | Fα (β)] L−3/2 EX
(84)
where the last line follows from Markov’s inequality. We will show that the probability on the left side of (84) is small for any solution β by showing that each of the two terms on the RHS of (84) is small. First, a bound on Dα . Lemma 6.1. For α ∈ (0, 1], Rα > f (ρ2 , (ρ2 − D)α, ρ2 α ¯ + Dα) = Consequently, Dα < ρ2 α ¯ + Dα for α =
1 ρ2 log 2 . 2 ρ α ¯ + Dα
(85)
1 L L, . . . , L.
Proof. The last equality in (85) holds because f (x, x − z, z) = g(α) = Rα −
1 2
ln xz . Define a function
1 ρ2 log 2 . 2 ρ α ¯ + Dα
2
Then g(0) = 0, g(1) = R − 12 ln ρD > 0, and the second derivative is −(1 − ρD2 )2 d2 g = < 0. dα2 (1 − (1 − ρD2 )α)2 Therefore g is strictly concave in [0, 1], and its minimum value (at α = 0) is 0. This proves (85). Recalling the definition of Dα in (83), (85) implies that f (ρ2 , (ρ2 − D)α, Dα ) = Rα > f (ρ2 , (ρ2 − D)α, ρ2 α ¯ + Dα) As f is decreasing in its third argument (the distortion), we conclude that Dα < ρ2 α ¯ + Dα. We now bound each term on the RHS of (84). Showing that the first term of (84) is small implies that w.h.p any αL sections by themselves will leave a residual distortion of at least Dα . Showing that the second term is small implies that under this condition, the expected number of solutions sharing any common terms with β is small compared to EX. 20
Bounding Fαc (β): From the definition of the event Fα (β), we have ˜ − AβK |2 < Dα | β is a solution) P (Fαc (β)) = ∪K P (|S
(86)
where the union is over all size-αL subsets of {1, . . . , L}. Using a union bound, (86) becomes ˜ − AβK |2 < Dα , |S ˜ − Aβ|2 < D) L P (|S c P (Fα (β)) ≤ (87) ˜ − Aβ|2 < D) Lα P (|S where K is a generic size-αL subset of {1, . . . , L}, say K = {1, . . . , αL}. Recall from (31) that for sufficiently large n, the denominator in (87) can be bounded from below as ˜ − Aβ|2 < D) ≥ √κ e−nf (ρ2 ,ρ2 −D,D) P (|S n 2
log ρD . The numerator in (87) can be expressed as Z Dα 2 2 ˜ ˜ ˜ − Aβ|2 < D | |S ˜ − AβK |2 = y) dy P (|S − AβK | < Dα , |S − Aβ| < D) = ψ(y) P (|S
and f (ρ2 , ρ2 − D, D) =
1 2
(88)
(89)
0
˜ − AβK |2 . Using the cdf at y to bound ψ(y) in the where ψ is the density of the random variable |S RHS of (89), we obtain the following upper bound for sufficiently large n. ˜ − AβK |2 < Dα , |S ˜ − Aβ|2 < D) P (|S Z Dα ˜ − AβK |2 < y) · P (|S ˜ − Aβ|2 < D | |S ˜ − AβK |2 = y) dy P (|S ≤ 0 Z Dα (a) κ 2 2 ˜ − AβK ) − AβKc |2 < D | |S ˜ − AβK |2 = y) dy √ e−nf (ρ ,(ρ −D)α,y) · P (|(S ≤ n 0 Z Dα (b) κ 2 2 2 ¯ √ e−nf (ρ ,(ρ −D)α,y) · e−nf (y,(ρ −D)α,D) ≤ dy n 0 Z Dα (c) κ 2 2 2 ¯ √ e−nf (ρ ,(ρ −D)α,Dα ) · e−nf (Dα ,(ρ −D)α,D) ≤ dy. n 0
(90)
In (90), (a) holds for sufficiently large n and is obtained using the strong version of Cram´er’s large deviation theorem: note that AβK is a linear combination of αL columns of A, hence it is ˜ Inequality (b) a Gaussian random vector with N (0, (ρ2 − D)α) entries that is independent of S. 2 ˜ and α) entries, and is independent of both S is similarly obtained: AβKc has i.i.d. N (0, (ρ − D)¯ AβK . Finally, (c) holds because the overall exponent f (ρ2 , (ρ2 − D)α, y) + f (y, (ρ2 − D)α ¯ , D) is a decreasing function of y, for y ∈ (0, ρ2 α ¯ + Dα], and Dα ≤ ρ2 α ¯ + Dα. Using (88) and (90) in (87), for sufficiently large n we have L 2 2 2 c ¯ (ρ2 ,ρ2 −D,D)] e−n[f (ρ ,(ρ −D)α,Dα )+f (Dα ,(ρ −D)α,D)−f . (91) P (Fα (β)) ≤ κ Lα L Bounding E[Xα (β)| Fα (β)]: There are Lα (M − 1)Lα¯ codewords which share αL common terms with β. Therefore L ˜ − Aβ 0 |2 < D | |S ˜ − Aβ|2 < D, Fα (β)) E[Xα (β) | Fα (β)] = (M − 1)Lα¯ · P (|S (92) Lα 21
where β 0 is a codeword that shares exactly αL common terms with β. If K is the size-αL set of 0 and common sections between β and β 0 , then β 0 = βK + βK c ˜ − Aβ 0 |2 < D | |S ˜ − Aβ|2 < D, Fα (β)) P (|S ˜ − AβK ) − Aβ 0 c |2 < D | |S ˜ − Aβ|2 < D, Fα (β)) = P (|(S K ! n X (a) (b) κ 2 0 2 ¯ 1 (Dα − (AβKc )i ) < D ≤ √ e−nf (Dα ,(ρ −D)α,D) ≤P n , n
(93)
i=1
where (b) holds for sufficiently large n. In (93), (a) is obtained as follows. Under the event Fα (β), ˜ − AβK |2 is at least Dα , and Aβ 0 c is an i.i.d. N (0, (ρ2 − D)¯ ˜ the norm |S α) vector independent of S, K 0 β, and βK . (a) then follows from the rotational invariance of the distribution of AβKc . Inequality (b) is obtained using the strong version of Cram´er’s large deviation theorem. Using (93) in (92), we obtain for sufficiently large n L L κ Lα ¯ κ −nf (Dα ,(ρ2 −D)α,D) ¯ (Dα ,(ρ2 −D)α,D)) ¯ ¯ √ en(Rα−f E[Xα (β) | Fα (β)] ≤ (M − 1) √ e ≤ . Lα Lα n n (94) Overall bound: Substituting the bounds from (91), (94) and (31) in (84), for sufficiently large n we have for L1 ≤ α ≤ 1: L 2 2 2 −3/2 ¯ (ρ2 ,ρ2 −D,D)] P Xα (β) > L EX ≤ κ e−n[f (ρ ,(ρ −D)α,Dα )+f (Dα ,(ρ −D)α,D)−f Lα (95) 3/2 −n[Rα+f (Dα ,(ρ2 −D)α,D)−f ¯ (ρ2 ,(ρ2 −D),D)] +L e . Since Dα is chosen to satisfy Rα = f (Dα , (ρ2 − D)¯ α, D), the two exponents in (95) are equal. To bound (95), we use the following lemma. Lemma 6.2. For α ∈ { L1 , . . . , L−1 L }, we have 2
2
2
2
2
f (ρ , (ρ − D)α, Dα ) + f (Dα , (ρ − D)¯ α, D) − f (ρ , (ρ − D), D) >
αΛ(α) c1
if Dα > D if Dα ≤ D.
(96)
where Dα is the solution of (83), c1 is a positive constant given by (133), and 1 Λ(α) = 8
D ρ2
4 D 2 D 1+ 2 1− 2 −1 + ρ ρ
p !1/2 2 ρ2 /D 1 ρ2 . 1 + ρ2 R− log 2 2α ρ α ¯ + Dα ( D − 1) (97) 2
Proof. See Appendix. We observe that Λ(α) is strictly decreasing for α ∈ (0, 1]. This can be seen by using the Taylor expansion of log(1 − x) for 0 < x < 1 to write ∞ ρ2 1 D D k αk−1 1 1X R− ln =R+ log 1 − α 1 − 2 =R− 1− 2 . 2α ρ2 α ¯ + Dα 2α ρ 2 ρ k k=1
22
(98)
Since
1 ρ2 1 R > log > 2 D 2
D 1− 2 , ρ
(98) shows that Λ(α) is strictly positive and strictly decreasing in α ∈ (0, 1) with p 4 2 !1/2 2 2 2 ρ /D 1 D D D 1 D , Λ(0) := lim Λ(α) = 1+ 2 1 − 2 −1 + 1 + ρ2 R− 1− 2 2 α→0 8 ρ ρ ρ 2 ρ ( D − 1) p !1/2 2 2 ρ2 /D 1 1 D 4 D 2 D ρ2 . R − log Λ(1) = 1+ 2 1− 2 −1 + 1 + ρ2 8 ρ2 ρ ρ 2 D ( D − 1)
(99)
Substituting (96) in (95), we obtain L −3/2 P Xα (β) > L EX < κ L3/2 exp(−n · min{αΛ(α), c1 }), Lα
α∈
1 L ,..., L L
.
(100)
Taking logarithms and dividing both sides by L log L, we obtain L log Lα 1 log κ 3 n(min{αΛ(α), c1 }) −3/2 log P Xα (β) > L EX < + + − L log L L log L L log L 2L L log L log 2 3 (min{αΛ(α), c1 })b (a) log κ = + min α, α ¯, + − L log L log L 2L R (101) where to obtain (a), we have used the bound L log < min {αL log L, (1 − α)L log L, L log 2} Lα and the relation (5). For the right side of (101) to be negative for sufficiently large L, we need (min{αΛ(α), c1 })b log 2 3 > min α, α ¯, + . (102) R log L 2L This can be arranged by choosing b to be large enough. Since (102) has to be satisfied for all α ∈ { L1 , . . . , L−1 L }, we need R log 2 3 1 (a) 2.5R min α, α ¯, + = +O b > max 1 L log L 2L Λ(0) L α∈{ L ,..., L } (min{αΛ(α), c1 }) (103) 2 ρ 1 = bmin +O . D L In (103), (a) holds because Λ(α) is of constant order for all α ∈ (0, 1], hence the maximum is attained at α = L1 . The constant Λ(0) is given by (99), and bmin (.) is defined in the statement of Theorem 1.
23
When b satisfies (103) and L is sufficiently large, for α ∈ { L1 , . . . , L L }, the bound in (101) becomes min{αΛ(α), c1 }(b − bmin − O( L1 )) 1 log κ log P Xα (β) > L−3/2 EX < − L log L L log L R (104) b 2.5( bmin − 1) log κ Λ(0) (b − bmin ) log κ ≤ − = − . L log L L R L log L L Therefore
−2.5( b b −1) min . P Xα (β) > L−3/2 EX < κL
(105)
This completes the proof of Lemma 5.1.
A
Appendix
A.1
Proof of Lemma 6.2
For α ∈ { L1 , . . . , L−1 L }, define the function gα : R → R as gα (u) = f (ρ2 , (ρ2 − D)α, u) + f (u, (ρ2 − D)¯ α, D) −
1 ρ2 ln . 2 D
(106)
We want a lower bound for gα (Dα ) ≥ Λ(α)α, where Dα is the solution to Rα = f (ρ2 , (ρ2 − D)α, Dα ).
(107)
We consider the cases Dα > D and Dα ≤ D separately. Recall from Lemma 6.1 that Dα < ρ2 α ¯ +Dα. Case 1: D < Dα < ρ2 α ¯ + Dα. In this case, both the f (.) terms in the definition of gα (Dα ) are strictly positive. We can write Dα = ρ 2 α ¯ + Dα − δ,
(108)
where δ ∈ (0, (ρ2 − D)¯ α). Expanding gα (ρ2 α ¯ + Dα − δ) around ρ2 α ¯ + Dα using Taylor’s theorem, we obtain δ2 δ2 g(Dα ) = g(ρ2 α ¯ + Dα) − g 0 (ρ2 α ¯ + Dα)δ + g 00 (ξ) = g 00 (ξ) (109) 2 2 since g(ρ2 α ¯ + Dα) = g 0 (ρ2 α ¯ + Dα) = 0. Here ξ is a number in the interval (D, ρ2 α ¯ + Dα). We bound g(Dα ) from below by obtaining separate lower bounds for g 00 (ξ) and δ. Lower Bound for g 00 (ξ): Using the definition of f in (28), the second derivative of g(u) is g 00 (u) =
−1 + 2u2
2ρ4 h i2 p p α(ρ2 − D) (ρ2 − D)2 α2 + 4ρ2 u (ρ2 − D)2 α2 + 4ρ2 u − (ρ2 − D)α
2D2 + hp i2 . p 2 2 2 2 2 2 2 2 α ¯ (ρ − D) (ρ − D) α ¯ + 4Du (ρ − D) α ¯ + 4Du − (ρ − D)¯ α
24
(110)
It can be verified that g 00 (u) is a decreasing function, and hence for ξ ∈ (D, ρ2 α ¯ + Dα), g 00 (ξ) ≥ g 00 (ρ2 α ¯ + Dα)
−1 ρ4 1 + + 2 2 2 2 2 2 2 2 2(ρ α ¯ + Dα) 2α(ρ − D)(ρ α ¯ + Dα) (ρ (1 + α ¯ ) + Dα) 2¯ α(ρ − D)(ρ α ¯ + D(1 + α)) 2 (ρ + D) 1 = . ≥ 2 2 2 2 2αα ¯ (ρ − D)(ρ (1 + α ¯ ) + Dα)(ρ α ¯ + D(1 + α)) 4αα ¯ (ρ − D)ρ2 (111) =
Lower bound for δ: From (107) and (108), note that δ is the solution to Rα = f (ρ2 , (ρ2 − D)α, ρ2 α ¯ + Dα − δ).
(112)
Using Taylor’s theorem for f in its third argument around the point p := (ρ2 , (ρ2 − D)α, ρ2 α ¯ + Dα), we have ∂ 2 f δ 2 ∂f 1 ρ2 1 1 ∂ 2 f 2 (113) Rα = f (p) − = ln 2 + δ + δ + 2 δ , ∂z p ∂z p˜ 2 2 ρ α ¯ + Dα 2(ρ2 α ¯ + Dα) 2 ∂z 2 p˜ where p˜ = p = (ρ2 , (ρ2 − D)α, z˜) for some z˜ ∈ (D, ρ2 α ¯ + Dα). As (113) is a quadratic in δ with positive coefficients for the δ and δ 2 terms, replacing the δ 2 coefficient with an upper bound and solving the resulting quadratic will yield a lower bound for δ. Since the function ∂ 2 f 2x2 p p = (114) ∂z 2 (x,y,z) y y 2 + 4xz( y 2 + 4xz − y)2 is decreasing in z, the δ 2 coefficient can be bounded as follows. 1 ∂ 2 f 1 ∂ 2 f ∗ ≤ a := , 2 ∂z 2 p˜=(ρ2 ,(ρ2 −D)α,˜z ) 2 ∂z 2 (ρ2 ,(ρ2 −D)α,D)
(115)
where a∗ can be computed to be a∗ =
ρ4 h i2 . p p α(ρ2 − D) (ρ2 − D)2 α2 + 4ρ2 D (ρ2 − D)2 α2 + 4ρ2 D − (ρ2 − D)α
Therefore we can obtain a lower bound to δ, denoted δ, by solving the equation 1 1 ρ2 2 ∗ δ a +δ − Rα − ln 2 = 0. 2(ρ2 α ¯ + Dα) 2 ρ α ¯ + Dα
(116)
(117)
We thus obtain " 1/2 # 2 1 1 ρ δ>δ= ln . −1 + 1 + 16(ρ2 α ¯ + Dα)2 a∗ α R − 4(ρ2 α ¯ + Dα)a∗ 2α ρ2 α ¯ + Dα
(118)
We now show that δ can be bounded from below by αΛ(α) by obtaining lower and upper bounds for a∗ α. From (116) we have p ρ2 /D ρ4 ∗ , a α= ≥ h i 2 p p 8D(ρ2 − D) (ρ2 − D) (ρ2 − D)2 α2 + 4ρ2 D (ρ2 − D)2 α2 + 4ρ2 D − (ρ2 − D)α (119)
25
where the inequality is obtained by noting that a∗ α is strictly increasing in α, and hence taking α = 0 gives a lower bound. Analogously, taking α = 1 yields the upper bound a∗ α ≤
ρ4 . 4D2 (ρ4 − D2 )
(120)
Using the bounds of (119) and (120) in (118), we obtain p !1/2 2 4 2 2 2 2 ρ /D D (ρ − D ) 1 ρ . δ>δ≥α R− −1 + 1 + ρ2 ln 2 ρ6 2α ρ α ¯ + Dα ( D − 1)
(121)
Finally, using the lower bounds for g 00 (ξ) and δ from (121) and (111) in (109), we obtain p 4 2 !1/2 2 2 2 2 ρ /D D D α D 1 ρ 1+ 2 1 − 2 −1 + 1 + ρ2 g(Dα ) > R− ln 2 8 ρ2 ρ ρ 2α ρ α ¯ + Dα ( − 1) D
= αΛ(α). (122) Case 2: Dα ≤ D. In this case, g(Dα ) is given by g(Dα ) = f (ρ2 , (ρ2 − D)α, Dα ) + f (Dα , (ρ2 − D)α ¯ , D) −
1 ρ2 1 ρ2 ln = Rα − ln , 2 D 2 D
(123)
where we have used (107) and the fact that f (Dα , (ρ2 − D)¯ α, D) = 0 for Dα ≤ D. The right hand side of the equation Rα = f (ρ2 , (ρ2 − D)α, z) is decreasing in z for z ∈ (0, D]. Therefore, it is sufficient to consider Dα = D in order to obtain a lower bound for Rα that holds for all Dα ∈ (0, D]. Next, we claim that the α that solves the equation Rα = f (ρ2 , (ρ2 − D)α, D)
(124)
lies in the interval ( 21 , 1). Indeed, observe that the LHS of (124) is increasing in α, while the RHS is 2 decreasing in α for α ∈ (0, 1]. Since the LHS is strictly greater than the RHS at α = 1 (R > 21 ln ρD ), the solution is strictly less than 1. On the other hand, for α ≤ 12 , we have 2 1 D 1 ρ2 R (125) ≤ 1 − 2 < ln = f (ρ2 , (ρ2 − D), D) < f (ρ2 , (ρ −D) , D), Rα ≤ 2 2 2 ρ 2 D i.e., the LHS of (124) is strictly less than the RHS. Therefore the α that solves (124) lies in ( 12 , 1). To lower bound the RHS of (123), we expand f (ρ2 , (ρ2 − D)α, D) using Taylor’s theorem for the second argument. 1 ρ2 ∂f ∆2 ∂ 2 f f (ρ2 , (ρ2 − D)α, D) = f (ρ2 , (ρ2 − D) − ∆, D) = ln −∆ 2 2 + 2 D ∂y (ρ ,(ρ −D),D) 2 ∂y 2 (ρ2 ,y0 ,D) 1 ρ2 ∆2 ∂ 2 f = ln + , 2 D 2 ∂y 2 (ρ2 ,y0 ,D) (126) 26
where ∆ = (ρ2 − D)¯ α, and y0 lies in the interval ( 12 (ρ2 − D), (ρ2 − D)). Using (126) and the shorthand ∆2 ∂ 2 f f 00 (y0 ) := , 2 ∂y 2 (ρ2 ,y0 ,D) (124) can be written as Rα −
(ρ2 − D)2 00 1 ρ2 ln =α ¯2 f (y0 ) 2 D 2
(127)
or
1 ρ2 (ρ2 − D)2 00 ln = Rα ¯ + α ¯2 f (y0 ). 2 D 2 Solving the quadratic in α ¯ , we get R−
(128)
2
−R + [R2 + 2(ρ2 − D)2 (R − 21 ln ρD )f 00 (y0 )]1/2 α ¯= . (ρ2 − D)2 f 00 (y0 )
(129)
Using this in (127), we get
Rα −
1 ρ2 ln = 2 D
h i1/2 2 ρ2 1 2 2 2 00 −R + R + 2(ρ − D) (R − 2 ln D )f (y0 ) 2(ρ2 − D)2 f 00 (y0 )
.
(130)
The LHS is exactly the quantity we want to bound from below. From the definition of f in (28), the second partial derivative with respect to y can be computed: f 00 (y) =
∂ 2 f ρ2 1 y = + 2− . 2 3 2 2 ∂y (ρ ,y,D) y y 2(y + 4ρ2 D)3/2
The RHS of (131) is strictly decreasing in y. We can therefore bound f 00 (y0 ) as 2 12ρ2 ρ2 00 2 00 00 ρ − D < f (ρ − D) < f (y ) < f < . 0 (ρ2 − D)3 2 (ρ2 − D)3
(131)
(132)
Substituting these bounds in (130), we conclude that for Dα ≤ D, " #1/2 2 2 (R − 1 ln ρ2 ) 2ρ 1 ρ2 (ρ2 − D) 2 D . g(Dα ) = Rα − ln ≥ c1 := −R + R2 + 2 D 24ρ2 (ρ2 − D)
(133)
References

[1] R. Venkataramanan, A. Joseph, and S. Tatikonda, "Lossy compression via sparse linear regression: Performance under minimum-distance encoding," IEEE Trans. Inf. Theory, vol. 60, pp. 3254–3264, June 2014.
[2] R. Venkataramanan, T. Sarkar, and S. Tatikonda, "Lossy compression via sparse linear regression: Computationally efficient encoding and decoding," IEEE Trans. Inf. Theory, vol. 60, pp. 3265–3278, June 2014.
[3] A. Barron and A. Joseph, "Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity," IEEE Trans. Inf. Theory, vol. 58, pp. 2541–2557, Feb. 2012.
[4] A. Joseph and A. Barron, "Fast sparse superposition codes have exponentially small error probability for R < C," IEEE Trans. Inf. Theory, vol. 60, pp. 919–942, Feb. 2014.
[5] N. Alon and J. H. Spencer, The Probabilistic Method. John Wiley & Sons, 2004.
[6] S. Janson, Random Graphs. Wiley, 2000.
[7] A. Coja-Oghlan and L. Zdeborová, "The condensation transition in random hypergraph 2-coloring," in Proc. 23rd Annual ACM-SIAM Symp. on Discrete Algorithms (SODA), pp. 241–250, 2012.
[8] A. Coja-Oghlan and D. Vilenchik, "Chasing the k-colorability threshold," in Proc. IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), pp. 380–389, 2013.
[9] A. Coja-Oghlan and K. Panagiotou, "Going after the k-SAT threshold," in Proc. 45th Annual ACM Symposium on Theory of Computing, pp. 705–714, 2013.
[10] K. Marton, "Error exponent for source coding with a fidelity criterion," IEEE Trans. Inf. Theory, vol. 20, pp. 197–199, Mar. 1974.
[11] S. Ihara and M. Kubo, "Error exponent for coding of memoryless Gaussian sources with a fidelity criterion," IEICE Trans. Fundamentals, vol. E83-A, Oct. 2000.
[12] F. den Hollander, Large Deviations, vol. 14. American Mathematical Society, 2008.
[13] R. R. Bahadur and R. R. Rao, "On deviations of the sample mean," The Annals of Mathematical Statistics, vol. 31, no. 4, 1960.