Hardness of Approximating the Minimum Distance of a Linear Code

Ilya Dumer∗        Daniele Micciancio†        Madhu Sudan‡

August 30, 1999

Abstract

We show that the minimum distance of a linear code (or equivalently, the weight of the lightest codeword) is not approximable to within any constant factor in random polynomial time (RP), unless NP equals RP. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor 2^{log^{1−ε} n}, for any ε > 0, where n denotes the block length of the code. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length of the code, unless NP equals RP. Our results hold for codes over every finite field, including the special case of binary codes. In the process we show that the nearest codeword problem is hard to solve even under the promise that the number of errors is (a constant factor) smaller than the distance of the code (even if the code is asymptotically good). This is a particularly meaningful version of the nearest codeword problem. Our results strengthen (though using stronger assumptions) a previous result of Vardy who showed that the minimum distance is NP-hard to compute exactly. Our results are obtained by adapting proofs of analogous results for integer lattices due to Ajtai and Micciancio. A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.

∗ College of Engineering, University of California at Riverside, Riverside, CA 92521, USA. Email: [email protected]. Research supported in part by NSF grant NCR-9703844.
† Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, USA. Email: [email protected]. Research supported in part by DARPA grant DABT63-96-C-0018.
‡ Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, USA. Email: [email protected]. Research supported in part by a Sloan Foundation Fellowship, an MIT-NEC Research Initiation Grant and NSF Career Award CCR-9875511.

1 Introduction

In this paper we study the computational complexity of two central problems from coding theory: (1) the complexity of approximating the minimum distance of a linear code and (2) the complexity of error-correction in codes of relatively large minimum distance. An error-correcting code C over a q-ary alphabet Σ of block length n is a collection of strings from Σ^n. The Hamming distance between two strings x, y ∈ Σ^n, denoted Δ(x, y), is the number of coordinates in which x and y differ. The (Hamming) weight of a string x is wt(x) = Δ(x, 0). The (minimum) distance of the code, denoted d(C), is the minimum over all pairs of distinct strings x, y ∈ C of the Hamming distance between x and y. The information content of the code is the quantity log_q |C|, which counts the number of message symbols that can be encoded by an element of C. If q is a prime power, and F_q denotes the finite field on q elements, then by setting Σ = F_q it is possible to think of Σ^n = F_q^n as a vector space. A code over F_q is linear if it is a linear subspace of Σ^n = F_q^n. For such a code, the information content is just its dimension as a vector space and the minimum distance equals the weight of the lightest non-zero codeword. It is customary to refer to a linear code of block length n, dimension k and minimum distance d as an [n, k, d]_q code. We use a k × n matrix A ∈ F_q^{k×n} of rank k to define a linear code C_A = {xA | x ∈ F_q^k} of length n and dimension k.
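As a concrete illustration of these definitions, the following self-contained Python sketch (a toy example, not from the paper; the Hamming-code generator matrix is a standard textbook instance) computes Hamming distance, weight, and the minimum distance of a small binary linear code by enumerating all q^k codewords:

```python
# Brute-force illustration of the definitions above: a linear code
# C_A = {xA : x in F_q^k} given by a generator matrix A, its Hamming
# distance, weight, and minimum distance (= minimum non-zero weight).
from itertools import product

def hamming_distance(x, y):
    """Number of coordinates in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def weight(x):
    """Hamming weight wt(x) = Delta(x, 0)."""
    return hamming_distance(x, [0] * len(x))

def codewords(A, q=2):
    """Enumerate C_A = {xA : x in F_q^k} for a k x n generator matrix A."""
    k, n = len(A), len(A[0])
    for x in product(range(q), repeat=k):
        yield [sum(x[i] * A[i][j] for i in range(k)) % q for j in range(n)]

def min_distance(A, q=2):
    """For linear codes, d(C_A) = minimum weight of a non-zero codeword."""
    return min(weight(c) for c in codewords(A, q) if any(c))

# A generator matrix of the [7, 4, 3]_2 Hamming code.
A = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(A))  # -> 3
```

The enumeration takes q^k steps, which is exactly the kind of exponential cost the hardness results below suggest cannot be avoided in general.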

1.1 The Minimum Distance Problem.

Three of the four central parameters associated with a linear code, namely n, k and q, are evident from its matrix representation. The minimum distance problem (MinDist) is that of evaluating the fourth, namely: given a matrix A ∈ F_q^{k×n}, find the minimum distance of the code C_A = {xA | x ∈ F_q^k}. It is easy to see that a code with minimum distance d can unambiguously correct any error vector of weight ⌊(d − 1)/2⌋ or less. (For details on the computational complexity of the error correction problem see the next paragraph.) Therefore, computing the minimum distance of a code is obviously related to the problem of evaluating its error correction capability. The central nature of this parameter makes this a fundamental computational problem in coding theory. The problem gains even more significance in light of the fact that long q-ary codes chosen at random give the best parameters known for any q < 46¹ (in particular, for q = 2). Such a choice is expected to produce a code of large distance, but no efficient methods are known to lower bound the distance of a code produced in this manner. A polynomial time algorithm to compute the distance would be the ideal solution to this problem, as it could be used to construct good error correcting codes by choosing a matrix at random and checking if the associated code has a large minimum distance. No such algorithm is known. The complexity of this problem (can it be solved in polynomial time or not?) was first explicitly questioned by Berlekamp, McEliece and van Tilborg [7] in 1978, who conjectured it to be NP-complete. This conjecture was finally resolved in the affirmative by Vardy [16] in 1997. ([16] also gives further motivations and a detailed account of prior work on this problem.)
We examine the approximability of this parameter and show that it is hard to approximate the minimum distance to within any constant factor, unless NP = RP (i.e., every problem in NP has a polynomial time probabilistic algorithm that always rejects No instances and accepts Yes instances with high probability). Under the stronger assumption that NP does not have random

¹ For squares q ≥ 49, linear AG codes can perform better than random ones [15] and are constructible in polynomial time. For any q ≥ 46 it is still possible to do better than random codes; however, the best known procedures to construct them run in exponential time [17].


quasi-polynomial time² algorithms (RQP), we get that the minimum distance of a code of block length n is not approximable to within a factor of 2^{log^{1−ε} n} for any constant ε > 0. (This factor is a naturally occurring factor in the study of the approximability of optimization problems; see the survey of Arora and Lund [4].) Our methods adapt the proof of the non-approximability of the shortest lattice vector problem (SVP) due to Micciancio [14], which in turn is based on Ajtai's proof of the hardness of SVP [3].

1.2 The Error Correction Problem.

In the process of obtaining the inapproximability result for the minimum distance problem, we also shed light on the general error-correction problem for linear codes. Informally, the error-correction problem addresses the computational complexity of recovering a codeword from a "received word" that is close to the codeword in Hamming distance. The simplest formulation of the error-correction problem is the Nearest Codeword Problem (NCP) (also known as the "maximum likelihood decoding problem"). Here, the input instance consists of a linear code given by its matrix A ∈ F_q^{k×n} and a received word x ∈ F_q^n, and the goal is to find the nearest codeword y ∈ C_A to x. The NCP is a well-studied problem: Berlekamp et al. [7] showed that it is NP-hard; and more recently Arora, Babai, Stern and Sweedyk [2] showed that the distance of the received word to the nearest codeword is hard to approximate to within a factor of 2^{log^{1−ε} n} for any ε > 0, unless NP ⊆ QP (deterministic quasi-polynomial time). However the NCP only provides a first cut at understanding the error-correction problem. It shows that the error-correction problem is hard if we try to decode every linear code for arbitrary amounts of error. In contrast, the positive results from coding theory show how to perform error-correction in specific linear codes for a small amount of error relative to the distance of the code. Thus the hardness of the NCP may come from one of two factors: (1) the problem attempts to decode every linear code and (2) the problem attempts to recover from too many errors. Both issues have been raised in the literature [16], but only the former has seen some progress [6]. One problem that has been defined to study the latter phenomenon is the "Bounded distance decoding problem" (BDD, see [16]). This is a special case of the NCP where the error is guaranteed (or "promised") to be less than half the minimum distance of the code.
This case is motivated by the fact that within such a distance there may be at most one codeword, and hence decoding is clearly unambiguous. Also, this is the case where many of the classical error-correction algorithms (for, say, BCH codes, RS codes, AG codes, etc.) work in polynomial time. To compare the general NCP and the more specific BDD problem, we introduce a parameterized family of problems that we call the Relatively Near Codeword Problem (RNC). For real ρ, RNC^(ρ) is the following problem: Given a generator matrix A ∈ F_q^{k×n} of a linear code C_A of minimum distance d, an integer t with the promise that t < ρ·d, and a received word x ∈ F_q^n, find a codeword within distance t from x. (The algorithm may fail if the promise is violated, or if no such codeword exists. In other words, the algorithm is expected to work only when the amount of error that occurs is limited in proportion to the error that the code was designed to tolerate.) Both the nearest codeword problem (NCP) and the bounded distance decoding problem (BDD) are special cases of RNC^(ρ): NCP = RNC^(∞) while BDD = RNC^(1/2). Until recently, not much was known about RNC^(ρ) for finite ρ, let alone ρ = 1/2 (i.e., the BDD problem). No finite upper bound on ρ can be easily derived from Arora et al.'s NP-hardness proof for NCP [2]. (In other words, their proof does not seem to hold for RNC^(ρ) for any ρ < ∞.) It turns out, as observed by Jain et al. [10], that Vardy's proof of the NP-hardness of the minimum distance problem also shows the NP-hardness of RNC^(ρ) for ρ = 1 (and actually extends to some ρ = 1 − o(1)). In this paper we significantly improve upon this situation, by showing NP-hardness (for random reductions) of RNC^(ρ) for every ρ > 1/2, bringing us much closer to an eventual (negative?) resolution of the bounded distance decoding problem.

² f(n) is quasi-polynomial in n if it grows slower than 2^{log^c n} for some constant c.
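To make the object of study concrete, here is a minimal brute-force NCP solver (a toy sketch of our own, not from the paper): it enumerates all q^k codewords, which takes time exponential in the dimension and so says nothing about hardness, but it fixes the definitions.

```python
# Maximum-likelihood decoding by exhaustive search: given a generator
# matrix A over F_q and a received word v, return a nearest codeword of
# C_A = {xA : x in F_q^k} together with its distance from v.
from itertools import product

def nearest_codeword(A, v, q=2):
    """Solve the NCP by trying all q^k codewords (exponential time)."""
    k, n = len(A), len(A[0])
    best, best_dist = None, n + 1
    for x in product(range(q), repeat=k):
        c = [sum(x[i] * A[i][j] for i in range(k)) % q for j in range(n)]
        d = sum(a != b for a, b in zip(c, v))
        if d < best_dist:
            best, best_dist = c, d
    return best, best_dist

# Repetition code of length 5: two codewords, 00000 and 11111.
A = [[1, 1, 1, 1, 1]]
print(nearest_codeword(A, [1, 1, 0, 1, 0]))  # -> ([1, 1, 1, 1, 1], 2)
```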

1.3 Results and Techniques.

The main result of this paper (see Theorem 15) is that approximating the minimum distance problem within any constant factor is hard for NP under polynomial reverse unfaithful random reductions (RUR-reductions, [11]), and approximating it within 2^{log^{1−ε} n} is hard under quasi-polynomial RUR-reductions. These are probabilistic reductions that map No instances always to No instances and Yes instances to Yes instances with high probability. The probability that a Yes instance is not mapped to a Yes instance is called the soundness error, and in all reductions presented in this paper it can be made exponentially small in a security parameter s in poly(s) time. Although not a proper NP-hardness result (i.e., hardness under deterministic polynomial reductions), hardness under polynomial RUR-reductions also gives evidence of the intractability of a problem, as the existence of a (random) polynomial time algorithm to solve the hard problem would imply NP = RP (random polynomial time), i.e., every problem in NP would have a probabilistic polynomial time algorithm that always rejects No instances and accepts Yes instances with high probability. Similarly, hardness for NP under quasi-polynomial RUR-reductions implies that the hard problem cannot be solved in RQP unless NP ⊆ RQP (random quasi-polynomial time). In order to prove these results, we first study the Relatively Near Codeword Problem and show that the optimization version of RNC^(ρ) is hard to approximate to within any constant factor

for any ρ > 1/2, unless NP = RP (see Theorem 9). In particular, RNC^(ρ) is hard to approximate to within γ = 1/ρ. This problem immediately reduces to approximating the minimum distance of a code within γ = 1/ρ. This gives a first inapproximability result for the minimum distance problem within some constant factor γ > 1. We then use tensor product constructions to "amplify" the constant and prove the claimed hardness results for the minimum distance problem. The hardness of approximating the relatively near codeword problem RNC^(ρ) for ρ > 1/2 is obtained by adapting a technique of Micciancio [14], which is in turn based on the work of Ajtai [3] (henceforth Ajtai-Micciancio). They consider the analogous problem over the integers (rather than finite fields) with Hamming distance replaced by Euclidean distance. Much of the adaptation is straightforward; in fact, some of the proofs are even easier in our case due to the difference. The main hurdle turns out to be in adapting the following combinatorial problem considered and solved by Ajtai-Micciancio: Given an integer k construct, in poly(k) time, an integer d, a lattice L in Z^k with minimum distance d and a vector v ∈ Z^k such that a (Euclidean) ball of radius ρ·d around v contains at least 2^{k^ε} vectors from L (where ρ < 1 and ε > 0 are some constants independent of k). In our case we are faced with a similar problem with Z^k replaced by F_q^k and Euclidean distance replaced by Hamming distance. The Ajtai-Micciancio solution to the above problem involves number-theoretic methods and does not translate to our setting. Instead we show that if we consider a linear code whose performance (i.e., trade-off between rate and distance) is better than that of a random code, and pick a random light vector in F_q^n, then the resulting construction has the required properties. We first solve this problem over sufficiently large alphabets using high rate Reed-Solomon codes. (This construction has been used in the coding theory literature to demonstrate limitations to the "list-decodability" of Reed-Solomon codes [12].) We then translate the result to small alphabets using the well-known method of concatenating codes [8]. Finally, we extend our methods to address some problems relating to asymptotically good codes. We show that even for such codes, the Relatively Near Codeword problem is hard, unless NP equals RP (see Theorem 17). We then translate this to a result (see Theorem 22) showing that the minimum distance of a code is hard to approximate to within an additive error that is linear in the block length of the code.

2 Notations and problem definitions

All vectors will be assumed to be row vectors. For a vector v ∈ F_q^n and a set S ⊆ F_q^n, let Δ(v, S) = min_{w∈S} Δ(v, w) be the (Hamming) distance between v and S. For a vector v ∈ F_q^n and a positive integer r, let B(v, r) = {w ∈ F_q^n | Δ(v, w) ≤ r} be the ball of radius r centered at v. Given a generator matrix A ∈ F_q^{k×n}, we consider the linear code C_A = {xA | x ∈ F_q^k} of distance d(C_A) = min{wt(xA) | x ≠ 0}. In order to study the computational complexity of coding problems, we formulate them in terms of promise problems. A promise problem is a generalization of the familiar notion of a decision problem. The difference is that in a promise problem not every string is required to be either a Yes or a No instance. Given a string with the promise that it is either a Yes or a No instance, one has to decide which of the two sets it belongs to. The following promise problem captures the hardness of approximating the minimum distance problem within a factor γ.

Definition 1 (Minimum Distance Problem) For a prime power q and γ ≥ 1, an instance of GapDist_{γ,q} is a pair (A, d), A ∈ F_q^{k×n} and d ∈ Z^+, such that

• (A, d) is a Yes instance if d(C_A) ≤ d.
• (A, d) is a No instance if d(C_A) > γ·d.

In other words, given a code A and an integer d with the promise that either d(C_A) ≤ d or d(C_A) > γ·d, one must decide which of the two cases holds true. The relation between approximating the minimum distance of A and the above promise problem is easily explained. On one hand, if one can compute a γ-approximation d' ∈ [d(C_A), γ·d(C_A)] to the minimum distance of the code, then one can easily solve the promise problem above by checking whether d' ≤ γ·d or d' > γ·d. On the other hand, assume one has a decision oracle O that solves the promise problem above³. Then, the minimum distance of a given code A can be easily approximated using the oracle as follows. Notice that O(A, n) always returns Yes while O(A, 0) always returns No. Using binary search, one can efficiently find a d such that O(A, d) = Yes and O(A, d − 1) = No. This means that (A, d) is not a No instance and (A, d − 1) is not a Yes instance⁴, and the minimum distance d(C_A) must lie in the interval [d, γ·d].

Similarly we can define the following promise problem to capture the hardness of approximating RNC^(ρ) within a factor γ.

³ By definition, when the input does not satisfy the promise, the oracle can return any answer.
⁴ Remember that the oracle can give any answer if the input is neither a Yes instance nor a No one. So, one would be wrong to conclude that (A, d − 1) is a No instance and (A, d) is a Yes one.
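The binary-search argument above can be sketched in code. In this toy demonstration (not from the paper) the oracle is simulated by exact, exponential-time computation and answers Yes exactly when d(C_A) ≤ d, which is one legal behavior for a GapDist oracle:

```python
# Approximating the minimum distance with a GapDist decision oracle via
# binary search: find d with O(A, d) = Yes and O(A, d - 1) = No; then
# the minimum distance lies in [d, gamma * d].
from itertools import product

def min_distance(A, q=2):
    """Exact minimum distance by exhaustive search (the 'hard' step)."""
    k, n = len(A), len(A[0])
    return min(sum(1 for j in range(n)
                   if sum(x[i] * A[i][j] for i in range(k)) % q)
               for x in product(range(q), repeat=k) if any(x))

def oracle(A, d):
    # Simulated oracle: Yes exactly when d(C_A) <= d.  (On inputs that
    # violate the promise the oracle may answer anything; this choice
    # is consistent with any gamma >= 1.)
    return min_distance(A) <= d

def approximate_distance(A, n):
    lo, hi = 0, n                  # O(A, 0) = No and O(A, n) = Yes
    while hi - lo > 1:             # binary search for the Yes/No boundary
        mid = (lo + hi) // 2
        if oracle(A, mid):
            hi = mid
        else:
            lo = mid
    return hi                      # O(A, hi) = Yes, O(A, hi - 1) = No

# [7, 4, 3] Hamming code: the returned d satisfies d(C_A) in [d, gamma*d].
A = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
d = approximate_distance(A, 7)
gamma = 1.5
assert d <= min_distance(A) <= gamma * d
print(d)  # -> 3
```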

Definition 2 (Relatively Near Codeword Problem) For a prime power q, ρ > 0 and γ ≥ 1, an instance of GapRNC^(ρ)_{γ,q} is a triple (A, v, t), A ∈ F_q^{k×n}, v ∈ F_q^n and t ∈ Z^+, such that t < ρ·d(C_A)⁵ and

• (A, v, t) is a Yes instance if Δ(v, C_A) ≤ t.
• (A, v, t) is a No instance if Δ(v, C_A) > γ·t.

It is immediate that the problem RNC^(ρ) gets harder as ρ increases. It is hardest when ρ = ∞, in which case we obtain the promise problem associated to approximating the nearest codeword problem:

Definition 3 (Nearest Codeword Problem) For a prime power q and γ ≥ 1, an instance of GapNCP_{γ,q} is a triple (A, v, t), A ∈ F_q^{k×n}, v ∈ F_q^n and t ∈ Z^+, such that

• (A, v, t) is a Yes instance if Δ(v, C_A) ≤ t.
• (A, v, t) is a No instance if Δ(v, C_A) > γ·t.

The promise problem GapNCP_{γ,q} is NP-hard for every constant γ ≥ 1 (cf. [2]⁶), and this result is critical to our hardness result(s).

3 Hardness of the relatively near codeword problem

As outlined in Section 1, our reduction relies on the construction of a linear code C_A and a Hamming sphere of radius r < ρ·d(C_A) (for some ρ < 1) containing exponentially (in the block length) many codewords. Obviously, it must be ρ ≥ 1/2, because any sphere of radius r < d(C_A)/2 can contain at most one codeword. We now prove that for any ρ > 1/2 it is actually possible to build such a code and sphere. After the development of this combinatorial tool, we prove the hardness of approximating the relatively near codeword problem by reduction from the nearest codeword problem.

3.1 Construction of the combinatorial tool

We first show how to construct a linear code and a sphere (with radius smaller than the minimum distance of the code) containing a number of codewords exponential in the alphabet size. Then, we use code concatenation to derive a similar result for a fixed alphabet in which the number of codewords in the sphere is exponential in the block length of the code.

Lemma 4 For any ρ ∈ (0, 1), there exists an algorithm that, on input a prime power q, outputs, in poly(q) time, three integers l, m, r > 0 and a matrix A ∈ F_q^{m×l} such that

• the linear code defined by A has minimum distance d(C_A) > 2(1 − ρ)r,
• the expected number of codewords inside a random sphere B(v, r) (v chosen uniformly at random from F_q^l) is at least q^{ρ⌊q^ρ⌋}/4.

⁵ Strictly speaking, the condition t < ρ·d(C_A) is a promise and hence should be added as a condition in both the Yes and No instances of the problem.
⁶ To be precise, Arora et al. [2] present the result only for binary codes. However, their proof is valid for any alphabet. An alternate way to obtain the result for any prime power is to use a recent result of Hastad [9] who states his result in linear algebra (rather than coding-theoretic) terms. We will state and use some of the additional features of the latter result in Section 5.


Proof: Let r = ⌊q^ρ⌋, l = q and m = q − ⌊2(1 − ρ)r⌋. We let A be a generating matrix of the [q, m, q − m + 1] extended Reed-Solomon code (cf. [5, 13]). For example, let the rows of A correspond to the polynomials x^i (for i = 0, …, m − 1) evaluated on all elements of F_q. Clearly, A can be constructed in time polynomial in q, and the minimum distance satisfies

d(C_A) = q − m + 1 = ⌊2(1 − ρ)r⌋ + 1 > 2(1 − ρ)r.

Now, let us bound the expected number of codewords in B(v, r) when v is chosen uniformly at random in F_q^q. First of all notice that

Exp_{v∈F_q^q} [|C_A ∩ B(v, r)|] = Σ_{x∈C_A} Pr_{v∈F_q^q} {x ∈ B(v, r)}
                               = Σ_{x∈C_A} Pr_{v∈F_q^q} {v ∈ B(x, r)}
                               = |C_A| · |B(0, r)| / q^q
                               = q^{m−q} · |B(0, r)|
                               ≥ q^{−2(1−ρ)r} · |B(0, r)|.

Let us now bound the size of the ball B(0, r):

|B(0, r)| ≥ (q choose r)·(q − 1)^r ≥ (q/r)^r·(q − 1)^r ≥ q^{(1−ρ)r}·(q − 1)^r = q^{(2−ρ)r}·(1 − 1/q)^r ≥ q^{(2−ρ)r}·(1 − 1/q)^q ≥ q^{(2−ρ)r}/4,

where in the last inequality we have used the monotonicity of (1 − 1/q)^q and q ≥ 2. Finally, combining the two inequalities we get

Exp_{v∈F_q^q} [|C_A ∩ B(v, r)|] ≥ q^{−2(1−ρ)r} · q^{(2−ρ)r}/4 = q^{ρr}/4 = q^{ρ⌊q^ρ⌋}/4. □
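The computation in this proof can be checked numerically for toy parameters of our own choosing (q = 5, ρ = 1/2; this concrete instance is an illustration, not from the paper):

```python
# Numerical check of Lemma 4 over F_5 with rho = 1/2: r = floor(q^rho) = 2,
# m = q - floor(2(1-rho)r) = 3, and the [5, 3, 3] extended Reed-Solomon
# code of polynomials of degree < 3 evaluated at all points of F_5.
from itertools import product
from math import comb

q, rho = 5, 0.5
r = int(q ** rho)                      # r = 2
m = q - int(2 * (1 - rho) * r)         # m = 3

# Codewords: evaluations of all polynomials of degree < m.
points = list(range(q))
code = [[sum(c * p ** i for i, c in enumerate(coeffs)) % q for p in points]
        for coeffs in product(range(q), repeat=m)]

min_dist = min(sum(1 for s in w if s) for w in code if any(w))
assert min_dist == q - m + 1 > 2 * (1 - rho) * r   # MDS: d = q - m + 1 = 3

# The expected number of codewords in B(v, r) for uniform v equals
# |C_A| * |B(0, r)| / q^q (the third line of the displayed computation).
ball = sum(comb(q, i) * (q - 1) ** i for i in range(r + 1))
expectation = len(code) * ball / q ** q
assert expectation >= q ** (rho * r) / 4           # the bound of the lemma
print(round(expectation, 2))  # -> 7.24
```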

From the previous lemma, it immediately follows that there exists a sphere B(v, r) containing at least q^{ρ⌊q^ρ⌋}/4 codewords from C_A. However, while the lemma asserts that A and r can be easily computed, it is not clear how to efficiently determine the center of the sphere. Let λ = Exp_{v∈F_q^l} [|B(v, r) ∩ C_A|] be the expected number of codewords in the sphere when the center is chosen uniformly at random from F_q^l. It is fairly easy to find spheres containing a number of codewords not much bigger than λ: in fact, by Markov's inequality, Pr_{v∈F_q^l} {|B(v, r) ∩ C_A| > ελ} < 1/ε when v is chosen uniformly at random from F_q^l. It turns out that if v is chosen uniformly at random from B(0, r) (instead of the whole F_q^l), then a similar lower bound can be proved. Namely, Pr_{v∈B(0,r)} {|B(v, r) ∩ C_A| ≤ ελ} ≤ ε. In fact, this is just a special case of the following quite general fact.

Fact 5 Let G be a group, H ≤ G a subgroup and S ⊆ G an arbitrary subset of G. Let λ be the expected size of H ∩ Sz when z is chosen uniformly at random from G (here Sz denotes the set {s·z | s ∈ S}). Choose x ∈ S^{−1} = {s^{−1} | s ∈ S} uniformly at random. Then for any ε ≤ 1,

Pr_{x∈S^{−1}} {|H ∩ Sx| ≤ ε·λ} ≤ ε.

Proof: First, we compute the expectation:

λ = Exp_{z∈G} [|H ∩ Sz|] = Σ_{y∈H} Pr_{z∈G} {y ∈ Sz} = |H| · |S| / |G|.

Now, pick y ∈ H uniformly at random and independently from x (which is chosen uniformly at random from S^{−1}). We notice that

Pr_{x,y} {xy = z} = |{x ∈ S^{−1}, y ∈ H : y = x^{−1}z}| / (|S| · |H|) = |H ∩ Sz| / (|H| · |S|).

Moreover, since H is a subgroup, Hy = H and |Sx ∩ H| = |Sxy ∩ Hy| = |S(xy) ∩ H|. Therefore, denoting by I(z) the indicator variable that equals 1 if |Sz ∩ H| ≤ ε·λ and 0 otherwise, we can write

Pr_x {|Sx ∩ H| ≤ ε·λ} = Pr_{x,y} {|S(xy) ∩ H| ≤ ε·λ}
                     = Σ_{z∈G} Pr_{x,y} {xy = z} · I(z)
                     = Σ_{z∈G} (|Sz ∩ H| / (|H| · |S|)) · I(z)
                     ≤ ε·λ · |G| / (|H| · |S|)
                     = ε. □
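Fact 5 can be verified exhaustively on a toy instance of our own choosing (not from the paper), say the additive group Z_12:

```python
# Exhaustive verification of Fact 5 in G = Z_12 under addition, with
# subgroup H = {0, 3, 6, 9} and an arbitrary subset S.
from fractions import Fraction

n = 12
G = range(n)
H = {0, 3, 6, 9}
S = {0, 1, 2, 5, 7}

def shift(S, z):               # Sz = {s + z : s in S} in additive notation
    return {(s + z) % n for s in S}

# lam = Exp_z |H ∩ Sz|; the proof shows lam = |H| * |S| / |G|.
lam = Fraction(sum(len(H & shift(S, z)) for z in G), n)
assert lam == Fraction(len(H) * len(S), n)

S_inv = {(-s) % n for s in S}  # S^{-1} in additive notation
for num in range(1, 13):       # check Pr{|H ∩ Sx| <= eps*lam} <= eps
    eps = Fraction(num, 12)
    bad = sum(1 for x in S_inv if len(H & shift(S, x)) <= eps * lam)
    assert Fraction(bad, len(S_inv)) <= eps
```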

Applying Fact 5 to the group G = (F_q^q, +), the subgroup H = (C_A, +) and the set S = B(0, r), we immediately get the following corollary to Lemma 4. (Notice that S^{−1} = −B(0, r) = B(0, r), and that if we set v = x then Sx = v + B(0, r) = B(v, r).)

Corollary 6 For any ρ ∈ (0, 1) there exists a probabilistic algorithm that on input a prime power q outputs, in time polynomial in q, integers l, m, r > 0, a matrix A ∈ F_q^{m×l} and a vector v ∈ F_q^l such that

• A defines a linear code with minimum distance d(C_A) > 2(1 − ρ)r,
• for every λ ≤ 1, the probability that |B(v, r) ∩ C_A| is smaller than λ·q^{ρ⌊q^ρ⌋} is at most 4λ.

It is important to notice that in the previous lemma one must use arbitrarily large alphabets in order to get arbitrarily many codewords in the ball. We would like to prove a similar result in which the alphabet size can be kept fixed and only the block length of the code increases. This can be easily accomplished using the standard construction of concatenating codes [8]. The idea is to apply Corollary 6 to a sufficiently large extension field F_{q^c} and then represent each element of F_{q^c} as a sequence of elements of F_q.

Lemma 7 For ρ ∈ (0, 1) and a finite field F_q, there exists a probabilistic algorithm that on input integers k, s ∈ Z^+, outputs, in poly(k, s) time, integers l, m, r ∈ Z^+, a matrix A ∈ F_q^{m×l} and a vector v ∈ F_q^l such that

• d(C_A) > 2(1 − ρ)r,
• the probability that B(v, r) contains less than q^k codewords is at most q^{−s}.

Proof: Let c be an integer such that ρc⌊q^{ρc}⌋ ≥ k + s + 2 and q' = q^c is polynomial in s and k. For example, let c = ⌈ρ^{−1} · max{log_q(k + s + 2), 1}⌉. Apply Corollary 6 to the prime power q' = q^c to obtain integers l', m', r', a matrix A' ∈ F_{q'}^{m'×l'} and a vector v' ∈ F_{q'}^{l'} such that d(C_{A'}) > 2(1 − ρ)r', and for all λ ≤ 1 the probability that |B(v', r') ∩ C_{A'}| is smaller than λ·(q')^{ρ⌊(q')^ρ⌋} is at most 4λ. In particular, when λ = q^{−(s+2)}, with probability at least 1 − 4·q^{−(s+2)} ≥ 1 − q^{−s} we have

|B(v', r') ∩ C_{A'}| ≥ q^{ρc⌊q^{ρc}⌋} / q^{s+2} ≥ q^{k+s+2} / q^{s+2} = q^k.

So, the sphere B(v', r') contains the required number of codewords with sufficiently high probability. It only remains to reduce the alphabet size from q^c to q. This can be done by concatenating the code C_{A'} with a [q^c, c, q^c − q^{c−1}] linear Hadamard code. Details follow.

Recall that F_{q^c} is a c-dimensional vector space over F_q. Fix a basis b_1, …, b_c ∈ F_{q^c} of F_{q^c} over F_q and let φ_i : F_{q^c} → F_q be the coordinate functions such that x = Σ_{i=1}^c φ_i(x)·b_i. Notice that the φ_i's are linear, i.e., φ_i(ax + by) = a·φ_i(x) + b·φ_i(y) for all a, b ∈ F_q and x, y ∈ F_{q^c}. For all x ∈ F_{q^c}, let now h(x) be the sequence of all F_q-linear combinations of the φ_i(x),

h(x) = ( Σ_{i=1}^c a_i·φ_i(x) )_{a_1,…,a_c ∈ F_q},

and extend h to F_{q^c}^{q^c} componentwise:

h̃(x_1, …, x_{q^c}) = (h(x_1), h(x_2), …, h(x_{q^c})).

Notice that h : F_{q^c} → F_q^{q^c} is linear, h(0) = 0 and wt(h(x)) = q^{c−1}(q − 1) for all x ≠ 0. Therefore wt(h̃(w)) = q^{c−1}(q − 1)·wt(w) and Δ(h̃(w_1), h̃(w_2)) = q^{c−1}(q − 1)·Δ(w_1, w_2). We now define C_A as the concatenation of C_{A'} and h, i.e.,

C_A = h̃(C_{A'}) = {h̃(w) : w ∈ C_{A'}}.


Further, let v = h̃(v'), r = q^{c−1}(q − 1)·r', l = q^c·l' and m = c·m'. A generating matrix A ∈ F_q^{m×l} for C_A can be easily obtained by replacing each element a in A' by the corresponding c × q^c matrix [h(a·b_1) | … | h(a·b_c)] ∈ F_q^{c×q^c}. We claim that these settings satisfy the requirements of the lemma. Notice first that since wt(h̃(w)) = q^{c−1}(q − 1)·wt(w), we have d(C_A) = q^{c−1}(q − 1)·d(C_{A'}) > 2(1 − ρ)r. Further, |C_A ∩ B(v, r)| = |C_{A'} ∩ B(v', r')|, and thus the probability that B(v, r) contains fewer than q^k codewords is at most q^{−s}. □

In the next subsection we will use the codewords inside the ball B(v, r) to represent the solutions to a nearest codeword problem. In order to be able to represent any possible solution, we first need to project the codewords in B(v, r) to the set of all strings over F_q of some shorter length. This is accomplished in the next lemma by another probabilistic argument. Given a matrix T ∈ F_q^{l×k} and a vector y ∈ F_q^l, let T(y) = yT denote the linear transformation from F_q^l to F_q^k. Further, let T(S) = {T(y) | y ∈ S}.
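The inner code h used above can be checked concretely for q = 2 and c = 3 (a toy case of our choosing, not from the paper), where it is the [8, 3, 4] binary Hadamard code:

```python
# Concrete check of the inner code h for q = 2, c = 3: identify F_8 with
# F_2^3 via the coordinate functions phi_i, so h maps x to the list of
# all 2^3 = 8 linear combinations sum_i a_i * phi_i(x).
from itertools import product

q, c = 2, 3

def h(x):
    """x is a length-c tuple over F_q (the coordinates phi_1(x), ..., phi_c(x))."""
    return [sum(a * xi for a, xi in zip(coeffs, x)) % q
            for coeffs in product(range(q), repeat=c)]

for x in product(range(q), repeat=c):
    w = sum(h(x))
    # wt(h(x)) = q^{c-1}(q-1) = 4 for every non-zero x, and h(0) = 0.
    assert w == (q ** (c - 1) * (q - 1) if any(x) else 0)
```

Because every non-zero x has the same weight, distances between concatenated codewords scale by exactly q^{c−1}(q − 1), which is the property the proof relies on.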

Lemma 8 For any ρ ∈ (0, 1) and a finite field F_q there exists a probabilistic polynomial time algorithm that on input (1^k, 1^s) outputs integers l, m, r, matrices A ∈ F_q^{m×l} and T ∈ F_q^{l×k} and a vector v ∈ F_q^l such that

1. d(C_A) > 2(1 − ρ)r.
2. T(B(v, r) ∩ C_A) = F_q^k with probability at least 1 − q^{−s}.

Proof: Run the algorithm of Lemma 7 on input (1^{2k+s+1}, 1^{s+1}). Let l, m, r, A, v be the output of the algorithm and define S = B(v, r) ∩ C_A. The first property directly follows from the previous lemma. Moreover, with probability at least 1 − q^{−(s+1)} we have |S| ≥ q^{2k+s+1}. Choose T ∈ F_q^{l×k} uniformly at random. We want to prove that with very high probability T(S) = F_q^k. Choose a vector t ∈ F_q^k at random and define a new function T'(y) = yT + t. Clearly T'(S) = F_q^k iff T(S) = F_q^k. Notice that the random variables T'(y) (y ∈ S) are pairwise independent and uniformly distributed. Therefore for any vector x ∈ F_q^k, T'(y) = x with probability p = q^{−k}. Let N_x be the number of y ∈ S such that T'(y) = x. By linearity of expectation and pairwise independence of the T'(y) we have Exp[N_x] = |S|·p and Var[N_x] = |S|·(p − p²) < |S|·p. Applying Chebyshev's inequality we get

Pr{N_x = 0} ≤ Pr{|N_x − Exp[N_x]| ≥ Exp[N_x]} ≤ Var[N_x] / Exp[N_x]² < 1/(|S|·p) ≤ q^{−(k+s+1)}.

Therefore, for any x ∈ F_q^k, the probability that T'(y) ≠ x for every y ∈ S is at most q^{−(k+s+1)}. By a union bound, with probability at least 1 − q^{−(s+1)}, for every x ∈ F_q^k there exists a vector y ∈ S such that T'(y) = x. Adding up the error probabilities, we find that with probability at least 1 − (q^{−(s+1)} + q^{−(s+1)}) ≥ 1 − q^{−s}, T(S) = F_q^k, proving the second property. □
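The pairwise independence used in this proof can be verified exhaustively for tiny parameters (q = 2, l = 3, k = 2; a toy check of our own, not from the paper):

```python
# Exhaustive check that the random variables T'(y) = yT + t are pairwise
# independent and uniform: over all choices of T in F_2^{3x2} and t in
# F_2^2, every pair of values (T'(y1), T'(y2)) with y1 != y2 occurs
# equally often.
from itertools import product
from collections import Counter

q, l, k = 2, 3, 2

def apply(T, y, t):
    return tuple((sum(y[i] * T[i][j] for i in range(l)) + t[j]) % q
                 for j in range(k))

vectors = list(product(range(q), repeat=l))
Ts = list(product(product(range(q), repeat=k), repeat=l))  # all l x k matrices
ts = list(product(range(q), repeat=k))

for y1 in vectors:
    for y2 in vectors:
        if y1 == y2:
            continue
        counts = Counter((apply(T, y1, t), apply(T, y2, t))
                         for T in Ts for t in ts)
        # Uniform on (F_2^2)^2: q^{2k} = 16 value pairs, each equally likely.
        assert len(counts) == q ** (2 * k)
        assert len(set(counts.values())) == 1
```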

3.2 The reduction

We can now prove the inapproximability of the relatively near codeword problem.

Theorem 9 For any ρ > 1/2, γ ≥ 1 and any finite field F_q, GapRNC^(ρ)_{γ,q} is hard for NP under polynomial RUR-reductions. Moreover, the error probability can be made exponentially small in a security parameter s while maintaining the reduction polynomial in s.

Proof: Let α be an integer strictly bigger than 1/(2ρ − 1) and let γ' = (α + 1)·γ. We prove the hardness of GapRNC^(ρ)_{γ,q} by reduction from GapNCP_{γ',q}.

Let (C', v', t') be an instance of GapNCP_{γ',q} with C' ∈ F_q^{k×n}. We want to map it to an instance (C, v, t) of GapRNC^(ρ)_{γ,q}. Invoking Lemma 8 on input (1^k, 1^s) with parameter ρ' = 1 − (1 + 1/α)/(2ρ) ∈ (0, 1), we obtain integers l, m, r, a generating matrix A ∈ F_q^{m×l} of a linear code with minimum distance d(C_A) > 2(1 − ρ')r = ((1 + 1/α)/ρ)·r, a matrix T ∈ F_q^{l×k} and a vector w ∈ F_q^l such that T(C_A ∩ B(w, r)) = F_q^k with probability at least 1 − q^{−s}. Notice that ATC' ∈ F_q^{m×n} defines a linear code whose codewords are a subset of C_{C'}. Define C by pasting together αt' copies of A and r copies of ATC':

C = [A, …, A (αt' copies), ATC', …, ATC' (r copies)].

Define the vector v as the concatenation of αt' copies of w and r copies of v':

v = [w, …, w (αt' copies), v', …, v' (r copies)].

Finally, let t = (α + 1)·t'·r. The output of the reduction is (C, v, t). First notice that (regardless of the input instance (C', v', t')) we can establish t < ρ·d(C_C) as follows:

ρ·d(C_C) ≥ ρ·αt'·d(C_A) > ρ·αt'·((1 + 1/α)/ρ)·r = (α + 1)·t'·r = t.

We now prove that if (C', v', t') is a Yes instance of GapNCP_{γ',q}, then (C, v, t) is a Yes instance of GapRNC^(ρ)_{γ,q}, and if (C', v', t') is a No instance of GapNCP_{γ',q}, then (C, v, t) is a No instance of GapRNC^(ρ)_{γ,q}.

Assume (C', v', t') is a No instance, i.e., the distance of v' from C_{C'} is greater than γ'·t'. For all x ∈ F_q^m we have

Δ(xC, v) ≥ r·Δ(x(ATC'), v') ≥ r·Δ(v', C_{C'}) > r·γ'·t' = r·(α + 1)·γ·t' = γ·t,

proving that (C, v, t) is a No instance. (Notice that No instances get mapped to No instances with probability 1, as required.) Conversely, assume (C', v', t') is a Yes instance, i.e., there exists x such that Δ(xC', v') ≤ t'. Let y = zA be a codeword in C_A such that Δ(y, w) ≤ r and yT = x. We know such a codeword will exist with probability at least 1 − q^{−s}. In such a case, we have

Δ(zC, v) = αt'·Δ(zA, w) + r·Δ(zATC', v') ≤ αt'·r + r·t' = t,

proving that (C, v, t) is a Yes instance. □
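The block structure of the reduction can be sanity-checked in code. The following toy instance (all matrices and parameters below are our own illustrative choices, not from the paper) builds C and v by pasting and confirms that distances to v decompose block by block, which is the identity used in both directions of the proof:

```python
# Sanity check of the "pasting" step: C consists of alpha*t' copies of A
# followed by r copies of A T C', and for every message z,
#   Delta(zC, v) = alpha*t' * Delta(zA, w) + r * Delta(zATC', v').
from itertools import product

q = 2
A  = [[1, 0, 1], [0, 1, 1]]          # m x l  (m = 2, l = 3)
T  = [[1, 0], [0, 1], [1, 1]]        # l x k  (k = 2)
Cp = [[1, 1, 0, 1], [0, 1, 1, 1]]    # k x n  (the NCP code C' in the proof)
w  = [1, 0, 0]
vp = [0, 1, 1, 0]
alpha, tp, r = 2, 1, 2               # alpha, t', r

def matmul(X, Y):
    return [[sum(X[i][a] * Y[a][j] for a in range(len(Y))) % q
             for j in range(len(Y[0]))] for i in range(len(X))]

def vecmat(x, M):
    return [sum(x[i] * M[i][j] for i in range(len(M))) % q
            for j in range(len(M[0]))]

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

ATCp = matmul(matmul(A, T), Cp)      # m x n
C = [row_a * (alpha * tp) + row_b * r for row_a, row_b in zip(A, ATCp)]
v = w * (alpha * tp) + vp * r

for z in product(range(q), repeat=len(A)):
    lhs = dist(vecmat(list(z), C), v)
    rhs = (alpha * tp * dist(vecmat(list(z), A), w)
           + r * dist(vecmat(list(z), ATCp), vp))
    assert lhs == rhs
```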

Remark 10 The reduction given here is a randomized many-one reduction (or a randomized Karp reduction) which fails with exponentially small probability. However it is not a Levin reduction: i.e., given a witness for a Yes instance of the source of the reduction we do not know how to obtain a witness to Yes instances of the target in polynomial time. The problem is that given a solution x to the nearest codeword problem, one has to find a codeword y in the sphere B(w, r) such that yT = x. Our proof only asserts that with high probability such a codeword exists, but it is not known how to find it. This was also the case for the Ajtai-Micciancio hardness proof for the shortest vector problem, where the failure probability was only polynomially small.

As discussed in the introduction, hardness under polynomial RUR-reductions easily implies the following corollary.

Corollary 11 For any ρ > 1/2, any γ ≥ 1 and any finite field F_q, GapRNC^{(ρ)}_{γ,q} is not in RP unless NP = RP.

4 Hardness of the Minimum Distance Problem

In this section we prove the hardness of approximating the Minimum Distance Problem. We first derive an inapproximability result to within some constant bigger than one by reduction from GapRNC^{(ρ)}_{γ,q}. Then we use direct product constructions to amplify the inapproximability factor to any constant, and further to any factor 2^{log^{1−ε} n} (ε > 0).

4.1 Inapproximability to within some constant

The inapproximability of GapDist_{γ,q} to within any constant γ ∈ (1, 2) immediately follows from the hardness of GapRNC^{(1/γ)}_{γ,q}.

Lemma 12 For every γ ∈ (1, 2) and every finite field F_q, GapDist_{γ,q} is hard for NP under polynomial RUR-reductions with exponentially small soundness error.

Proof: The proof is by reduction from GapRNC^{(1/γ)}_{γ,q}. Let (C, v, t) be an instance of GapRNC^{(1/γ)}_{γ,q}, and assume without loss of generality that v does not belong to the code generated by C. (One can easily check whether v ∈ C_C by solving a system of linear equations. If v ∈ C_C then Δ(v, C_C) = 0 and (C, v, t) is a Yes instance.) Define the matrix C' obtained by appending v to C as an extra row:

    C' = [ C ]
         [ v ].

We now prove that if (C, v, t) is a Yes instance of GapRNC^{(1/γ)}_{γ,q}, then (C', t) is a Yes instance of GapDist_{γ,q}, and if (C, v, t) is a No instance of GapRNC^{(1/γ)}_{γ,q}, then (C', t) is a No instance of GapDist_{γ,q}. Recall that in either case Δ(C_C) > γ·t.

Assume (C, v, t) is a Yes instance, i.e., there exists an x such that Δ(xC, v) ≤ t. Then xC − v is a non-zero vector of the code generated by C' of weight at most t. Conversely, assume (C, v, t) is a No instance, and let y = xC + α·v be any non-zero vector of C_{C'}. If α = 0, then y = xC is a non-zero element of C_C, and therefore wt(y) > γ·t (using the promise). On the other hand, if α ≠ 0, then wt(y) = wt((−α^{−1}x)C − v) > γ·t, since multiplying by the non-zero scalar −α^{−1} does not change the weight and Δ(v, C_C) > γ·t. □
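This reduction can be checked by brute force on toy instances. The sketch below works over F_2 only (the toy generator and target are hypothetical examples); over F_2 the case analysis α = 0 vs. α ≠ 0 collapses to the exact identity Δ(C_{C'}) = min(Δ(C_C), Δ(v, C_C)) whenever v ∉ C_C, while over a general F_q the same analysis gives the one-sided bounds used in the proof:

```python
import numpy as np
from itertools import product

def min_distance(G):
    """Minimum weight of a non-zero codeword of the binary code generated by G."""
    best = G.shape[1]
    for x in product((0, 1), repeat=G.shape[0]):
        if any(x):
            best = min(best, int(np.sum((np.array(x) @ G) % 2)))
    return best

def dist_to_code(G, v):
    """Hamming distance from v to the binary code generated by G."""
    return min(int(np.sum(((np.array(x) @ G) + v) % 2))   # over F_2, -v = +v
               for x in product((0, 1), repeat=G.shape[0]))

# A hypothetical toy instance (C, v) with v not in the code generated by C.
C = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1]])
v = np.array([1, 1, 1, 1, 1])

Cprime = np.vstack([C, v])   # the matrix C' of the proof: v appended as a row

# The case analysis of the proof, over F_2, gives exactly:
assert min_distance(Cprime) == min(min_distance(C), dist_to_code(C, v))
print(min_distance(C), dist_to_code(C, v), min_distance(Cprime))  # 3 1 1
```

Here the target sits at distance 1 from the code while the code itself has distance 3, so the appended row drops the minimum distance of C' to 1 — exactly the effect the reduction exploits.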

4.2 Inapproximability to within bigger factors

To amplify the hardness result obtained above, we take the direct product of the code with itself. We first define direct products.

Definition 13 For i ∈ {1, 2}, let C_i be a linear code generated by A_i ∈ F_q^{k_i×n_i}. Then the direct product of C_1 and C_2, denoted C_1 ⊗ C_2, is a code over F_q of block length n_1·n_2 and dimension k_1·k_2 whose codewords, when expressed as matrices in F_q^{n_2×n_1}, are the set {A_2^T X A_1 | X ∈ F_q^{k_2×k_1}}.

A generating matrix for the code C_1 ⊗ C_2 can easily be defined as the matrix A_1 ⊗ A_2 ∈ F_q^{(k_1·k_2)×(n_1·n_2)} whose rows (when expressed as matrices) are given by (A_2^{(j_2)})^T·A_1^{(j_1)}, where A_i^{(j_i)} denotes the j_i-th row of A_i and j_i ∈ {1, …, k_i} for i ∈ {1, 2}. Notice that the codewords of C_1 ⊗ C_2 are matrices whose rows are codewords of C_1 and whose columns are codewords of C_2. In our reduction we will need the following fundamental property of direct product codes.

Proposition 14 ([13]) For linear codes C_1 and C_2 of minimum distances d_1 and d_2, their direct product C_1 ⊗ C_2 is a linear code of minimum distance d_1·d_2.

Proof: We now show that for any non-zero matrix X, the matrix A_2^T X A_1 has at least d_1·d_2 non-zero entries. Consider first the matrix XA_1. Since this matrix is non-zero, there must be some row of it which is non-zero. Since every row is a codeword of C_1, this row must have at least d_1 non-zero entries; thus XA_1 has at least d_1 non-zero columns. Now consider the matrix A_2^T(XA_1). At least d_1 columns of this matrix are non-zero, and each non-zero column (being a codeword of C_2) must have at least d_2 non-zero entries. This completes the claim.

Finally, we verify that the minimum distance of C_1 ⊗ C_2 is exactly d_1·d_2. To see this, consider vectors x_i ∈ F_q^{k_i} such that x_i A_i has exactly d_i non-zero coordinates. Then notice that the matrix M = A_2^T x_2^T x_1 A_1 is a codeword of C_1 ⊗ C_2. Expressing M as (x_2 A_2)^T (x_1 A_1), we see that its i-th column is zero if the i-th coordinate of x_1 A_1 is zero, and its j-th row is zero if the j-th coordinate of x_2 A_2 is zero. Thus M is non-zero on at most d_1 columns and d_2 rows (it vanishes on the remaining n_1 − d_1 columns and n_2 − d_2 rows), and so at most d_1·d_2 of its entries are non-zero. □

We can now prove the following theorem.
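Before doing so, we note that Proposition 14 is easy to confirm mechanically on small codes. In the sketch below (toy binary generators, chosen only for illustration), the generator of the direct product is obtained as a Kronecker product, which spans the same code as Definition 13 up to an ordering of the coordinates — a permutation that does not affect distances:

```python
import numpy as np
from itertools import product

def min_distance(G):
    """Brute-force minimum distance of the binary code generated by G."""
    return min(int(np.sum((np.array(x) @ G) % 2))
               for x in product((0, 1), repeat=G.shape[0]) if any(x))

# Two toy binary codes (full-rank generators chosen by hand).
A1 = np.array([[1, 1, 1, 0],
               [0, 1, 1, 1]])
A2 = np.array([[1, 0, 1],
               [0, 1, 1]])

# Rows of kron(A1, A2) are exactly the rank-one products
# (row of A1) x (row of A2), so it generates the direct product code.
G = np.kron(A1, A2) % 2

d1, d2 = min_distance(A1), min_distance(A2)
assert min_distance(G) == d1 * d2   # Proposition 14
print(d1, d2, min_distance(G))      # 2 2 4
```

Iterating the construction squares the gap while raising the block length to a power, which is precisely the amplification used next.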

Theorem 15 For every finite field F_q the following holds:
- For every real γ > 1, GapDist_{γ,q} is hard for NP under polynomial RUR-reductions.
- For every ε > 0, GapDist_{γ,q} is hard for NP under quasi-polynomial RUR-reductions for γ(n) = 2^{log^{1−ε} n}.

In both cases the error probability is exponentially small in a security parameter.

Proof: Let γ₀ ∈ (1, 2) be such that GapDist_{γ₀,q} is hard by Lemma 12. Given an instance (A, d) of GapDist_{γ₀,q}, consider the instance (A^{⊗l}, d^l) of GapDist_{γ₀^l,q}, where

    A^{⊗l} = ((…((A ⊗ A) ⊗ A) …) ⊗ A)    (l factors)

is a generator matrix of

    C_{A^{⊗l}} = ((…((C_A ⊗ C_A) ⊗ C_A) …) ⊗ C_A)    (l factors),

for an integer parameter l ∈ Z⁺. By Proposition 14 it follows that Yes instances map to Yes instances and No instances map to No instances. Setting l = ⌈log γ / log γ₀⌉ yields the first part of the theorem. Notice that for constant l the size of A^{⊗l} is polynomial in the size of A, and A^{⊗l} can be constructed in polynomial time.

To show the second part, just set γ₀ = 2 (hardness for γ₀ = 2 follows from the first part) and l = (log n)^{1/ε−1} in the previous reduction. This time the block length of A^{⊗l} will be N = n^l = 2^{(log n)^{1/ε}}, which is quasi-polynomial in the block length n of the original instance A. The reduction can be computed in quasi-polynomial (in n) time, and the approximation factor achieved is

    γ(N) = 2^l = 2^{(log n)^{1/ε−1}} = 2^{(log N)^{1−ε}}. □
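The exponent bookkeeping in the quasi-polynomial case is a one-line identity: since log N = l·log n and l = (log n)^{1/ε−1}, we get l = (log N)^{1−ε}. A quick numeric sanity check (the values of ε and n below are arbitrary illustrations):

```python
import math

# Bookkeeping for the quasi-polynomial amplification (hypothetical numbers):
# starting gap gamma0 = 2, source block length n, l = (log2 n)^(1/eps - 1).
eps = 1 / 3
n = 2 ** 16
l = math.log2(n) ** (1 / eps - 1)     # number of factors in the direct product

log_N = l * math.log2(n)              # N = n^l, so log2 N = l * log2 n
gap_exponent = log_N ** (1 - eps)     # the claimed form 2^{(log N)^{1-eps}}

# The achieved gap is gamma0^l = 2^l, and indeed l = (log2 N)^{1-eps}:
assert math.isclose(l, gap_exponent)
print(l, log_N)   # 256.0 4096.0
```

So with ε = 1/3 and n = 2^16 the product code has log₂ N = 4096, and the gap 2^256 is exactly 2^{(log₂ N)^{2/3}}.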

As for the relatively near codeword problem, the following corollary can easily be derived from the hardness result under RUR-reductions.

Corollary 16 For every finite field F_q the following holds:
- For every real γ > 1, GapDist_{γ,q} is not in RP unless NP = RP.
- For every ε > 0, GapDist_{γ,q} is not in RQP for γ(n) = 2^{log^{1−ε} n}, unless NP ⊆ RQP.

5 Hardness results for asymptotically good codes

In this section we prove that the relatively near codeword problem is NP-hard (under RUR-reductions) even when restricted to asymptotically good codes. To be more formal, we define a restricted version of GapRNC where the code is required to be asymptotically good. For R, δ ∈ [0, 1], let (R, δ)-restricted GapRNC^{(ρ)}_{γ,q} be the restriction of GapRNC^{(ρ)}_{γ,q} to instances (A ∈ F_q^{k×n}, v, t) satisfying k ≥ R·n and t ≥ δ·n. (I.e., the code represented by A is asymptotically good, with information rate at least R > 0 and distance rate at least δ > 0.) We will show that (R, δ)-restricted GapRNC is NP-hard under RUR-reductions. We will then use this result to prove that the minimum distance of a linear code is hard to approximate additively, even to within an additive error that is linear in the block length of the code. To prove this we use the following formalism. For ε > 0 and prime power q, let GapDistAdd_{ε,q} be the promise problem with instances (A ∈ F_q^{k×n}, d), with Yes instances being those with Δ(C_A) ≤ d and No instances being those with Δ(C_A) > d + ε·n. We will show that GapDistAdd is NP-hard under RUR-reductions.

5.1 Hardness of error-correction in asymptotically-good codes

We start by showing the hardness of the restricted version of the Relatively Near Codeword problem.

Theorem 17 For every prime power q and every γ ≥ 1, there exist a rate R > 0, a distance rate δ > 0 and a ρ < 1 such that (R, δ)-restricted GapRNC^{(ρ)}_{γ,q} is hard for NP under RUR-reductions.

The proof of Theorem 17 follows the same sequence of lemmas as the proof of Theorem 9. The main differences are the following:

- In the construction of the combinatorial tool in Section 3.1 we relied on a code of very small distance. In this section we replace this construction with one based on algebraic-geometry codes (recall that these also perform better than random linear codes), obtaining an asymptotically good code. (See Lemma 18.)
- When using the hardness result for GapNCP, we need a result in which the target vectors of the No instances are at linear distance from the code. We observe that a result of Hastad [9] gives us hardness of NCP with this additional property.

We start with the following lemma, which is analogous to Lemma 4.

Lemma 18 For any square prime power q ≥ 49, there exist constants R, λ, ε > 0 and an algorithm that, on input integers k and n, outputs in poly(k, n) time three integers l, m, r > 0 and a full rank matrix A ∈ F_q^{m×l} such that
- the linear code defined by A has minimum distance Δ(C_A) > (1 + ε)·r,
- the expected number of codewords inside a random sphere B(v, r) (for v chosen uniformly at random from F_q^l) is at least q^k,
- l ≥ n, m ≥ R·l and r ≥ λ·l.

l ? m ? pq l? 1 = l(1 ? R ? pq1? 1 ) = l( 21 + 2 ) = 2l (1 + ) = r(1 + )

thus satisfying the rst condition. It remains to verify the second condition, namely, that the expected number of codewords in a ball of radius r around a randomly chosen vector v is at least qk . Observe that the expected number of codewords in a random sphere is Exp [jB(v; r) \ CA j]  v2Flq

 

l (q ? 1)r q m?l r l

 p2 (q ? 1)l= ql ? = ?= ? p ? 2l We would like to prove that the nal quantity above is at least q k . Taking logarithms of both sides to base q , it suces to prove that 2





( 1 2

2

1

q 1)

l logq 2 + 12 logq (1 ? 1=q) ? =2 ? pq1? 1 ? 21 logq (2l)  k: 15

Using the value of , we nd that it suces to prove l (=2) ? 12 logq (2l)  k: Using the fact that l  l0, we get that the left hand side above is at least l 4 which is at least k by the setting of l above. This concludes the proof. 2 By following the same sequence of claims as used to get from Lemma 4 to Lemma 8, we get an analogous result for the asymptotically good case.
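Concretely, the constants in the proof of Lemma 18 can be tabulated. The helper below uses the constant names as reconstructed in this text (λ = 1/2 throughout; the formulas for ε and R are taken from the proof above) and confirms that both ε and R are positive for several square prime powers:

```python
import math

def lemma18_constants(q):
    """Constants from the proof of Lemma 18 (names as reconstructed here):
    lambda = 1/2, eps = log_q 2 + (1/2) log_q(1 - 1/q) - 1/(sqrt(q) - 1),
    R = 1/2 - eps/2 - 1/(sqrt(q) - 1)."""
    logq = lambda x: math.log(x, q)
    eps = logq(2) + 0.5 * logq(1 - 1 / q) - 1 / (math.sqrt(q) - 1)
    R = 0.5 - eps / 2 - 1 / (math.sqrt(q) - 1)
    return eps, R

# The proof needs eps > 0 and R > 0; this holds for square prime powers q >= 49.
for q in (49, 64, 81, 121, 169):
    eps, R = lemma18_constants(q)
    assert eps > 0 and R > 0
    print(q, round(eps, 4), round(R, 4))
```

At the threshold q = 49 the slack ε is tiny (below 0.01) while the rate R stays near 1/3, which matches the remark that the distance exceeds the dense-sphere radius only mildly.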

Lemma 19 For any finite field F_q, there exist constants ε, R, λ > 0 and a probabilistic polynomial-time algorithm that on input (1^k, 1^n, 1^s) outputs integers l, m, r, matrices A ∈ F_q^{m×l} and T ∈ F_q^{l×k}, and a vector v ∈ F_q^l such that
1. Δ(C_A) > (1 + ε)·r;
2. T(B(v, r) ∩ C_A) = F_q^k with probability at least 1 − q^{−s};
3. l ≥ n, m ≥ R·l and r ≥ λ·l.

Finally, we need the hardness of a restricted version of the NCP in order to prove the main theorem of this section. For μ ∈ [0, 1], let μ-restricted GapNCP_{γ,q} be the restriction of GapNCP_{γ,q} to instances (A ∈ F_q^{k×n}, v, t) satisfying t ≥ μ·n. We will need the fact that μ-restricted GapNCP_{γ,q} is NP-hard. This result does not appear to follow from the proof of [2]. Instead, we observe that it follows easily from a recent and extremely powerful result of Hastad [9].

Theorem 20 ([9]) For every prime p and for every ε > 0, given a system of linear equations modulo p with m constraints, it is NP-hard to distinguish instances in which (1 − ε)·m of the constraints can be satisfied from those in which at most (1/p + ε)·m of the constraints can be satisfied.⁷

Phrased in coding-theoretic terms, this amounts to saying that for every prime p and for every constant γ, by setting μ = (p − 1)/(p·(γ + 1)), the μ-restricted GapNCP_{γ,p} is NP-hard. The result for general prime powers q = p^k follows immediately (using the fact that instances of GapNCP_{γ,p} are also instances of GapNCP_{γ,q}, and that the nearest codeword may as well use coefficients from F_p when both the generator and the target are over F_p). This gives the following corollary to Theorem 20.
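The translation from equation systems to NCP instances can be exercised directly. In the toy sketch below (the system is a hypothetical example), the generator is the transpose of the coefficient matrix, so that for any assignment x the Hamming distance from x·G to the right-hand side b counts exactly the unsatisfied equations:

```python
import numpy as np
from itertools import product

# A hypothetical system of m = 5 equations a_i . x = b_i (mod p), k = 2 unknowns.
p = 3
A = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [1, 2],
              [2, 1]])        # one row per equation
b = np.array([1, 2, 0, 0, 1])
m, k = A.shape

# As an NCP instance: generator G = A^T, target b.  Minimizing the distance
# from x G to b is the same as maximizing the number of satisfied equations.
G = A.T
best_sat = max(m - int(np.sum((np.array(x) @ G - b) % p != 0))
               for x in product(range(p), repeat=k))
print("max satisfiable equations:", best_sat)
```

Theorem 20 says that deciding whether this maximum is close to m or close to m/p is NP-hard, which is what makes the derived NCP instances both hard and far from the code in the No case.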

Corollary 21 For every prime power q and for every constant γ, there exists μ > 0 such that μ-restricted GapNCP_{γ,q} is NP-hard.

The proof of Theorem 17 now follows analogously to the proof of Theorem 9.

Proof: [of Theorem 17] We will pick parameters I and J appropriately, and then paste I copies of the code obtained from Lemma 19 next to J copies of a hard instance of restricted GapNCP. The result will follow by the setting of the parameters. Let ε, R₀, λ > 0 be the constants given by Lemma 19 (with R₀ playing the role of R). Let γ' = 2γ·(1 + 3/ε). Let μ₀ > 0 be such that μ₀-restricted GapNCP_{γ',q} is NP-hard (as given by Corollary 21). Let R = R₀/(6/(ελ) + 2/μ₀), δ = min{λ, μ₀}, and ρ = (1 + ε/4)/(1 + ε/3). We will reduce μ₀-restricted GapNCP_{γ',q} to (R, δ)-restricted

GapRNC^{(ρ)}_{γ,q}. Since R, δ > 0 and ρ < 1, this suffices to prove the theorem.

⁷ Hastad's result has the further property that every linear equation involves only three variables, but we do not need this extra property.


Let (C₀ ∈ F_q^{k×n}, v₀, t₀) be an instance of μ₀-restricted GapNCP_{γ',q}. Invoke the algorithm of Lemma 19 on input (1^k, 1^n, 1^s) to get integers l, m, r, matrices A ∈ F_q^{m×l} and T ∈ F_q^{l×k}, and a vector w ∈ F_q^l as promised by the lemma. Set J = ⌈r/t₀⌉ and I = ⌈2Jt₀/(εr)⌉, and let C be the matrix obtained by pasting I copies of A next to J copies of the matrix ATC₀. Let v be the vector obtained by concatenating I copies of w with J copies of v₀, and let t = I·r + J·t₀. We claim that (C, v, t) is a Yes (resp. No) instance of (R, δ)-restricted GapRNC^{(ρ)}_{γ,q} if (C₀, v₀, t₀) is a Yes (resp. No) instance of μ₀-restricted GapNCP_{γ',q}.

We first derive some useful upper bounds on I and J. First, r/t₀ ≥ λl/n ≥ λ (since l ≥ n). Thus J ≤ (r/t₀) + 1 ≤ (r/t₀)·(1 + 1/λ) = ((λ + 1)/λ)·(r/t₀). Also J ≤ (r/t₀) + 1 ≤ l/(μ₀·n) + 1 ≤ (l/n)·(2/μ₀). Finally, I ≤ 2(t₀/r)(J/ε) + 1 ≤ 3Jt₀/(εr), yielding I ≤ 3(1 + λ)/(ελ) ≤ 6/(ελ). Thus we find that I is upper bounded by a constant, and so is J·n/l. We also note that the setting yields ε/3 ≤ Jt₀/(Ir) ≤ ε/2.

We now verify that C_C is asymptotically good. Notice that the block length of C_C is I·l + J·n ≤ l·(I + (J·n)/l) ≤ l·(6/(ελ) + 2/μ₀), i.e., the block length is O(l). Thus the information rate is at least m/(l·(6/(ελ) + 2/μ₀)) ≥ R₀/(6/(ελ) + 2/μ₀) = R, as required. Further, the ratio of the parameter t to the block length is (I·r + J·t₀)/(I·l + J·n) ≥ min{r/l, t₀/n} ≥ min{λ, μ₀} = δ. Thus the instance satisfies the (R, δ)-restriction.

Next we verify that the instance has large distance (relative to t). Notice that Δ(C_C) ≥ I·Δ(C_A) > I·(1 + ε)·r. Thus

    t/Δ(C_C) < (I·r + J·t₀)/(I·(1 + ε)·r) = (1 + Jt₀/(Ir))/(1 + ε) ≤ (1 + ε/2)/(1 + ε)

< ρ.

Finally, we need to verify that Yes (resp. No) instances map to Yes (resp. No) instances. As in the proof of Theorem 9, we see that if we start with a Yes instance of GapNCP, then Δ(v, C_C) ≤ I·r + J·t₀ = t, as required. On the other hand, if we start with a No instance of GapNCP, then

    Δ(v, C_C) ≥ J·γ'·t₀ = 2γ·(1 + 3/ε)·J·t₀ = 2γ·t·(1 + 3/ε)·(J·t₀)/(I·r + J·t₀) = 2γ·t·(1 + 3/ε)/((I·r)/(J·t₀) + 1) ≥ 2γ·t·(1 + 3/ε)/(3/ε + 1) = 2γ·t

> γ·t,

as required. This completes the proof. □
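The integer rounding in the choice of I and J is what the factor 2 in γ' absorbs. A quick numeric check (with the parameter names as reconstructed above; the tested values of t₀ and r are arbitrary) confirms that the settings J = ⌈r/t₀⌉ and I = ⌈2Jt₀/(εr)⌉ really do pin the ratio Jt₀/(Ir) between ε/3 and ε/2:

```python
import math

# Sanity check of the parameter setting in the proof of Theorem 17:
# for eps <= 1 and r >= t0, the ceilings keep eps/3 <= J*t0/(I*r) <= eps/2.
eps = 0.25
for t0 in (5, 12, 19, 33):
    for r in range(10 * t0, 200 * t0, 37):
        J = math.ceil(r / t0)
        I = math.ceil(2 * J * t0 / (eps * r))
        ratio = J * t0 / (I * r)
        assert eps / 3 <= ratio <= eps / 2, (t0, r, ratio)
print("eps/3 <= J*t0/(I*r) <= eps/2 holds on all tested instances")
```

The upper bound is immediate from I ≥ 2Jt₀/(εr); the lower bound uses Jt₀ ≥ r, exactly as in the chain of inequalities in the proof.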

5.2 Inapproximability of Minimum Distance with linear additive error

Finally, applying the reduction from GapRNC to GapDist given in the proof of Lemma 12, we derive the following hardness of approximating the minimum distance of a code to within a "linear additive error" (i.e., in codes of block length n, it is hard to approximate the minimum distance to within an additive error of ε·n).

Theorem 22 For every prime power q, there exists an ε > 0 such that GapDistAdd_{ε,q} is hard for NP under polynomial RUR-reductions with exponentially small soundness error.

Proof: By Theorem 17 we have that, for some R, δ > 0, ρ < 1 and γ > 1, the (R, δ)-restricted GapRNC^{(ρ)}_{γ,q} is NP-hard. Assume w.l.o.g. that ρ = 1/γ, or else we can use the theorem with the smaller approximation factor min{γ, 1/ρ}; the promise then guarantees Δ(C_C) > γ·t. We will prove the theorem for ε = δ·(γ − 1) > 0. Let (C ∈ F_q^{k×n}, v, t) be an instance of (R, δ)-restricted GapRNC^{(ρ)}_{γ,q}, and assume without loss of generality that v does not belong to the code generated by C. Define the matrix C' = [C; v], i.e., C with v appended as an extra row. As in the proof of Lemma 12, we get that if (C, v, t) is a Yes instance of GapRNC^{(ρ)}_{γ,q}, then Δ(C_{C'}) ≤ t, and thus (C', t) is a Yes instance of GapDistAdd_{ε,q}. Similarly, if (C, v, t) is a No instance of GapRNC^{(ρ)}_{γ,q}, then we get that

    Δ(C_{C'}) > γ·t = t + (γ − 1)·t ≥ t + (γ − 1)·δ·n = t + ε·n,

and thus (C', t) is a No instance of GapDistAdd_{ε,q}. □


6 Other reductions

We proved that approximating the minimum distance problem is hard for NP under RUR-reductions, i.e., probabilistic reductions that map No instances to No instances, and map Yes instances to Yes instances with high probability. (This is similar to the hardness proof for the shortest vector problem in [3, 14].) An obvious question is whether it is possible to remove the randomization and make the reduction deterministic. We notice that our reduction (as well as the Ajtai-Micciancio ones for SVP) uses randomness in a very restricted way. Namely, the only part of the reduction where randomness is used is the proof of Lemma 8. The construction in the lemma depends only on the input size, and not on the particular input instance we are reducing. So, if the construction succeeds, the reduction will faithfully map all Yes instances (of the appropriate size) to Yes instances. Therefore, the statement in Lemma 8 can easily be modified to obtain hardness results for NP under deterministic non-uniform reductions, i.e., reductions that take a polynomially sized advice string that depends only on the input size⁸:

Corollary 23 For every finite field F_q the following holds:
- For any ρ > 1/2 and any γ ≥ 1, GapRNC^{(ρ)}_{γ,q} is hard for NP under non-uniform deterministic polynomial reductions.
- For every real γ > 1, GapDist_{γ,q} is hard for NP under non-uniform deterministic polynomial reductions.
- For every ε > 0 and γ(n) = 2^{log^{1−ε} n}, GapDist_{γ,q} is hard for NP under non-uniform deterministic quasi-polynomial reductions.

⁸ Since our reduction achieves an exponentially small error probability, hardness under non-uniform reductions also follows from general results about derandomization [1]. However, the ad hoc derandomization method we just described is more efficient and intuitive.


We notice also that a uniform deterministic construction satisfying the properties of Lemma 8 would immediately give a proper NP-hardness result (i.e., hardness under deterministic Karp reductions) for the relatively near codeword problem and the minimum distance problem. Finally, we notice that all our results rely on the fact that the code is given as part of the input. Thus it is still conceivable that for every error-correcting code there exists a fast algorithm to correct errors (say, up to the distance of the code), but that this algorithm may be hard to find given a description of the code. A result along the lines of that of Bruck and Naor [6], showing the hardness of the relatively near codeword problem even with preprocessing, would be desirable to close this gap in our knowledge. We are, however, unable to extend our techniques (or those of [6]) to address this problem.

References

[1] L. Adleman, "Two Theorems on Random Polynomial Time", in Proc. 19th Symposium on Foundations of Computer Science, 1978, pp. 75-83.
[2] S. Arora, L. Babai, J. Stern, Z. Sweedyk, "The Hardness of Approximate Optima in Lattices, Codes, and Systems of Linear Equations", Journal of Computer and System Sciences, Vol. 54, 1997, pp. 317-331.
[3] M. Ajtai, "The Shortest Vector Problem is NP-Hard for Randomized Reductions", in Proc. 30th Symposium on Theory of Computing, 1998, pp. 10-19.
[4] S. Arora, C. Lund, "Hardness of Approximations", in D. S. Hochbaum (ed.), Approximation Algorithms for NP-Hard Problems, PWS Publishing, 1997.
[5] R.E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, Reading, Massachusetts, 1983.
[6] J. Bruck, M. Naor, "The Hardness of Decoding Linear Codes with Preprocessing", IEEE Transactions on Information Theory, Vol. IT-36, n. 2, March 1990, pp. 381-385.
[7] E.R. Berlekamp, R.J. McEliece, H.C.A. van Tilborg, "On the Inherent Intractability of Certain Coding Problems", IEEE Transactions on Information Theory, Vol. IT-24, n. 3, May 1978, pp. 384-386.
[8] G.D. Forney, Concatenated Codes, Research Monograph No. 37, The MIT Press, Cambridge, Massachusetts, 1966.
[9] J. Hastad, "Some Optimal Inapproximability Results", Technical Report 97-37, Electronic Colloquium on Computational Complexity, 1997. Preliminary version in Proc. 29th ACM Symposium on Theory of Computing, 1997.
[10] K. Jain, M. Sudan, V.V. Vazirani, personal communication, May 1998.
[11] D. S. Johnson, "A Catalog of Complexity Classes", Chapter 2 in J. van Leeuwen (ed.), Handbook of Theoretical Computer Science, Vol. A (Algorithms and Complexity), Elsevier Science, 1990.
[12] J. Justesen, T. Hoholdt, "Bounds on List Decoding of MDS Codes", manuscript, March 1999.
[13] F.J. MacWilliams, N.J.A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1981.
[14] D. Micciancio, "The Shortest Vector in a Lattice is Hard to Approximate to within Some Constant", in Proc. 39th Symposium on Foundations of Computer Science, 1998, pp. 92-98.
[15] M.A. Tsfasman, S.G. Vladuts, Algebraic-Geometric Codes, Kluwer, Dordrecht, 1991.
[16] A. Vardy, "The Intractability of Computing the Minimum Distance of a Code", IEEE Transactions on Information Theory, Vol. IT-43, n. 6, November 1997, pp. 1757-1766.
[17] V. Zinoviev, S. Litsyn, "On Codes Exceeding the Gilbert Bound", Problems of Information Transmission, Vol. 21, n. 1, 1985, pp. 109-111 (in Russian).
