New Upper Bounds on Error Exponents

Simon Litsyn, Member, IEEE

Manuscript received January 12, 1998; revised October 1, 1998. The author is with the Department of Electrical Engineering-Systems, Tel Aviv University, 69978 Tel Aviv, Israel (e-mail: [email protected]). Communicated by T. Kløve, Associate Editor for Coding Theory.
Abstract—We derive new upper bounds on the error exponents for maximum-likelihood decoding and for error detection in the binary symmetric channel. This is an improvement on the best earlier known bounds by Shannon–Gallager–Berlekamp (1967) and McEliece–Omura (1977). For the probability of undetected error the new bounds are better than the bounds by Levenshtein (1978, 1989) and the recent bound by Abdel-Ghaffar (1997). Moreover, we further extend the range of rates where the undetected error exponent is known to be exact. The new bounds are based on an analysis of possible distance distributions of codes along with some inequalities relating the distance distributions to the error probabilities.

Index Terms—Distance distribution, error exponents, Krawtchouk polynomials, maximum-likelihood decoding, undetected error.

I. INTRODUCTION

A classical problem of information theory is to estimate the probabilities of undetected and decoding errors when a block code is used for information transmission over a binary symmetric channel (BSC). We study here exponential bounds on the performance of the best codes. These bounds are of the form $2^{-n(E + o(1))}$, where $n$ is the length of a code, $o(1)$ tends to zero when $n$ grows, and the error exponent $E$ depends only on the code rate $R$ and the transition probability $p$. The function $E(R, p)$ has been thoroughly studied since 1948, when Shannon proved in his seminal paper [50] that the error exponent for maximum-likelihood decoding is positive for all rates below the capacity of the channel. Thus for all rates less than the capacity, the error probability can be made as small as one wishes by choosing an appropriate $n$. The problem of deriving lower and upper bounds on the error exponents has attracted a great deal of attention since then. The best currently known results for the probability of decoding error were derived in the middle 1960's by Gallager (lower bounds), and Shannon, Gallager, and Berlekamp (upper bounds). In 1977, McEliece and Omura presented an improvement of the upper bounds in the range of small rates, based on the upper bound on the minimum distance of codes due to McEliece, Rodemich, Rumsey, and Welch. A description of these bounds appears in every textbook on information theory, e.g., Blahut [4, ch. 10], Csiszár and Körner [8, ch. 2, par. 5], Gallager [14, ch. 5], Viterbi and Omura [54, ch. 3]. The best known estimates for the exponents of the probability of undetected error are due to Korzhik and Levenshtein (lower bounds), and Leont'ev, Levenshtein, and Abdel-Ghaffar (upper bounds). A discussion of these bounds can be found in Kløve and Korzhik [26, ch. 3].

In this paper, using an analysis of possible distance distributions of codes, we improve the upper bounds on the error exponents. Moreover, we further extend the range where upper and lower bounds on the undetected error exponent coincide. The main results are presented in Theorems 1 and 2.

To state the problem rigorously we need some notation. Let $F^n$, $F = \{0, 1\}$, be the space of binary $n$-tuples endowed with the Hamming metric

$$d(x, y) = |\{i : x_i \ne y_i\}| \qquad (1)$$

where $x = (x_1, \cdots, x_n)$ and $y = (y_1, \cdots, y_n)$. The Hamming weight of an $x \in F^n$ is $\mathrm{wt}(x) = d(x, 0)$.

Let $C \subseteq F^n$ be a code of size $M = |C|$ and minimum distance $d$.
Let $d(n, M)$ be the maximal possible minimum distance of a code of length $n$ and size $M$. The rate $R$ and relative distance $\delta$ of a code are defined as (all logarithms in the paper are base $2$)

$$R = \frac{\log M}{n}, \qquad \delta = \frac{d}{n}.$$

Let

$$H(x) = -x \log x - (1 - x) \log (1 - x)$$

be the binary entropy function, and for $0 \le y \le 1$ let $H^{-1}(y) \in [0, 1/2]$ stand for its inverse function. Define

$$\delta_{GV}(R) = H^{-1}(1 - R).$$
Then, by the Gilbert–Varshamov bound [16], [53], there exist codes with

$$\delta \ge \delta_{GV}(R). \qquad (2)$$

On the other hand, the McEliece–Rodemich–Rumsey–Welch (linear programming) bound [45] (see [38], [39, eq. (18)] for the form of the bound used below) yields that every code of rate $R$ satisfies the upper bound

$$\delta \le \cdots \qquad (3)$$
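For later numerical reference, the following is a minimal Python sketch of the quantities just defined: the binary entropy function, its inverse on $[0, 1/2]$, the Gilbert–Varshamov relative distance $\delta_{GV}(R) = H^{-1}(1-R)$, and the BSC capacity $1 - H(p)$. The function names are our own, not the paper's notation.

```python
import math

def H(x: float) -> float:
    """Binary entropy (base 2); H(0) = H(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def H_inv(y: float, tol: float = 1e-12) -> float:
    """Inverse of H restricted to [0, 1/2], computed by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def delta_GV(R: float) -> float:
    """Gilbert-Varshamov relative distance: H^{-1}(1 - R)."""
    return H_inv(1.0 - R)

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel with crossover probability p."""
    return 1.0 - H(p)

if __name__ == "__main__":
    print(delta_GV(0.5))      # ~0.110, since H(0.110) ~ 0.5
    print(bsc_capacity(0.2))  # ~0.278
```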
Then for rates $R$ such that …, the minimum in (3) is attained at …, and (3) takes the especially simple form

$$\delta \le \cdots \qquad (4)$$

In the axis of relative distance, the two bounds coincide for

$$\cdots \qquad (5)$$

otherwise, the bound (3) is tighter. The bound (4) is also valid for all $R$. A long-standing conjecture is that the Gilbert–Varshamov bound (2) is actually tight.

A BSC is described by the probability $p$ that the input symbol ($0$ or $1$) and the corresponding output symbol ($1$ or $0$, respectively) differ. We assume that $0 < p < 1/2$. Let $\mathcal{C} = 1 - H(p)$ stand for the capacity of the channel.

Given a code $C$ we can partition $F^n$ into decoding regions $D_x$, $x \in C$. A vector $y$ belongs to $D_x$ if there is no other codeword $x'$ such that $d(x', y) \le d(x, y)$. If $y$ is such that there is no codeword strictly closer to it than $x$, but there are several codewords, say $x_1, \cdots, x_j$, at the minimal distance from $y$,
then we assign $y$ arbitrarily to any one of $D_{x_1}, \cdots, D_{x_j}$. Then the average probability of decoding error in the BSC (provided that all codewords are equiprobable) is

$$P_e(C, p) = \frac{1}{M} \sum_{x \in C} \; \sum_{y \notin D_x} p^{d(x, y)} (1 - p)^{n - d(x, y)}. \qquad (6)$$

If, as the result of transmission of a codeword $x$, some other codeword was received, we say that an undetected error has occurred. The average probability of undetected error is

$$P_{ue}(C, p) = \frac{1}{M} \sum_{x \in C} \; \sum_{x' \in C,\, x' \ne x} p^{d(x, x')} (1 - p)^{n - d(x, x')}. \qquad (7)$$

The distance distribution of a code $C$ is an $(n + 1)$-tuple $(B_0, B_1, \cdots, B_n)$, defined by

$$B_i = \frac{1}{M} \left| \{ (x, x') : x, x' \in C, \; d(x, x') = i \} \right|.$$
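As a small illustration of these definitions, the following Python sketch computes the distance distribution $(B_0, \cdots, B_n)$ of a code given as an explicit list of codewords, here a $[7,4]$ Hamming code; the generator matrix and the function names are our own choices for the example.

```python
from itertools import product

def hamming_code_7_4():
    """All 16 codewords of a [7,4] Hamming code (one standard generator matrix)."""
    G = [(1,0,0,0,0,1,1),
         (0,1,0,0,1,0,1),
         (0,0,1,0,1,1,0),
         (0,0,0,1,1,1,1)]
    code = []
    for msg in product((0, 1), repeat=4):
        cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))
        code.append(cw)
    return code

def distance_distribution(code):
    """B_i = (1/M) * number of ordered pairs of codewords at distance i."""
    n, M = len(code[0]), len(code)
    B = [0.0] * (n + 1)
    for x in code:
        for y in code:
            B[sum(a != b for a, b in zip(x, y))] += 1.0 / M
    return B

print(distance_distribution(hamming_code_7_4()))
# [1.0, 0.0, 0.0, 7.0, 7.0, 0.0, 0.0, 1.0] for this (linear) code
```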
For positive … and … we write … if … tends to … when $n$ grows. Analogously, we use the notation … for … and … . Define … and … .

Let …, where … is the only root in … of the equation … . Assume … if …, and … if … (the last condition guarantees continuity at …). Let

$$\cdots \qquad (8)$$

Then … for … when

$$\cdots \qquad (9)$$

Notice that … . We start with the simpler case of the exponent for the probability of undetected error. The known lower bounds can be obtained using random coding arguments (see Korzhik, 1965, and Levenshtein, 1977):

$$\cdots \quad \text{if } \cdots \qquad (10)$$
$$\cdots \quad \text{if } \cdots \qquad (11)$$
The best known upper bounds (12)–(15) on …, shown below, are due to Leont'ev, 1972, Levenshtein, 1977, 1989, and Abdel-Ghaffar, 1997 (the fact that the last bound improves the exponent was mentioned by Barg [2]). Of course, if we had an upper bound on … better than …, it could replace it in these expressions. Clearly, in the range where several bounds are relevant, the minimum should be applied. Notice that the bound (12) is tight, since it coincides with (10) on the interval … . Bound (14) turns out to be a line segment exiting at … from any point of the bound (15). Hence, if the conjecture on the tightness of the Gilbert–Varshamov bound were true, it would imply [39] the tightness of the bounds (10) and (11). For other results on the probability of undetected error see, e.g., [5], [19]–[25], [34], [35], and [55]. Concerning the probability of decoding error the following is known. The lower bounds are due to Elias, 1956, and Gallager, 1963, 1965, 1968 (see (16)–(18) below). Here … .
It is easy to see that … .

$$\cdots \quad \text{if } \cdots \qquad (12)$$
$$\cdots \quad \text{if } \cdots \qquad (13)$$
$$\cdots \quad \text{for any } \cdots \qquad (14)$$
$$\cdots \quad \text{if } \cdots \qquad (15)$$
We refer to the bound (16) as the lower sphere-packing bound, to the bound (17) as the lower straight-line bound, and to the bound (18) as the lower minimum-distance bound. The best known upper bounds are due to Elias, 1956, Shannon, Gallager, and Berlekamp, 1967, and McEliece and Omura, 1977 (based on the bound of McEliece, Rodemich, Rumsey, and Welch, 1977); see (19)–(21) below. Bound (19) proves that (16) is tight in the range … . Bound (21) is an application of the following result due to Shannon, Gallager, and Berlekamp.

Lemma 1: Any straight line between a point on any upper bound and a point on (19) is also an upper bound.

Here it is applied to (20). This argument yields tightness of (16), (17), and (18), modulo the conjecture about the tightness of the Gilbert–Varshamov bound. We refer to the bounds (19), (20), and (21) as the upper sphere-packing, upper minimum-distance, and upper straight-line bounds, respectively. There is still a gap between the lower and upper bounds, and it is a long-standing open problem how to close or, at least, decrease it. Since any improvement in the upper bounds on … leads to an improvement in the upper bounds on the exponents, the main efforts were applied in attempts to improve on (3). We quote Viterbi and Omura [54, p. 184]: “The most likely improvement ... will come about as a result of an improvement in the upper bound on minimum distance.” However, we present here an alternative approach which does not require improving upper bounds on the minimum distance, but nevertheless leads to better upper bounds on the error exponents. The main idea is as follows. We prove in Section II that there necessarily must be a large enough component in the distance distribution. In Sections III and IV we demonstrate that the impact of this component results in an exponential factor, thus improving on the earlier known bounds. The new bounds on the exponents are summarized in the following theorems.
$$\cdots \quad \text{if } \cdots \qquad (16)$$
$$\cdots \quad \text{if } \cdots \qquad (17)$$
$$\cdots \quad \text{if } \cdots \qquad (18)$$
$$\cdots \quad \text{if } \cdots \qquad (19)$$
$$\cdots \quad \text{if } \cdots \qquad (20)$$
$$\cdots \quad \text{for any } \cdots \qquad (21)$$
$$\cdots \quad \text{if } \cdots \qquad (22)$$
$$\cdots \quad \text{if } \cdots \qquad (23)$$
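Lemma 1 is easy to exercise numerically. The sketch below assumes the textbook BSC forms of two classical curves, namely the sphere-packing exponent $D\bigl(H^{-1}(1-R)\,\|\,p\bigr)$ for a bound of the type (19) and the two-codeword exponent $-\delta(R)\log\bigl(2\sqrt{p(1-p)}\bigr)$, with $\delta(R)$ taken from the first linear-programming bound, for a bound of the type (20). These forms and all names are our assumptions for illustration only, not a restatement of (19)–(21).

```python
import math

def H(x):
    return 0.0 if x <= 0 or x >= 1 else -x*math.log2(x) - (1-x)*math.log2(1-x)

def H_inv(y):
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(mid) < y else (lo, mid)
    return (lo + hi) / 2

def D(a, b):
    """Binary divergence D(a || b) in bits."""
    return a*math.log2(a/b) + (1-a)*math.log2((1-a)/(1-b))

def delta_lp(R):
    """delta solving R = H(1/2 - sqrt(delta*(1-delta)))  (assumed LP-bound form)."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if H(0.5 - math.sqrt(mid*(1-mid))) > R else (lo, mid)
    return (lo + hi) / 2

def E_sp(R, p):
    """Sphere-packing exponent for the BSC (assumed standard form)."""
    return D(H_inv(1 - R), p)

def E_md(R, p):
    """Two-codeword exponent at the LP-bound distance (assumed form)."""
    return -delta_lp(R) * math.log2(2 * math.sqrt(p * (1 - p)))

def straight_line(R, p, grid=200):
    """Lemma 1: the best chord from a point on E_md to a point on E_sp
    is again an upper bound at the intermediate rate R."""
    cap = 1 - H(p)
    best = min(E_sp(R, p), E_md(R, p))
    left = [(R*i/grid, E_md(R*i/grid, p)) for i in range(1, grid)]
    right = [(R + (cap-R)*j/grid, E_sp(R + (cap-R)*j/grid, p)) for j in range(1, grid)]
    for R1, E1 in left:
        for R2, E2 in right:
            t = (R - R1) / (R2 - R1)
            best = min(best, (1 - t)*E1 + t*E2)
    return best

print(straight_line(0.1, 0.02))
```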
Theorem 1: …, where … is defined by (22) and (23) above, and … .

Theorem 2: …, where … and … are such that … .
The graphs of the new and old bounds for the exponents of undetected and decoding errors are presented in Figs. 1 and 2, for the BSC with transition probabilities $p = 0.2$ and $p = 0.02$, respectively.
II. BOUNDS ON THE DISTANCE DISTRIBUTION

In this section we prove that in every code of given rate there exists a big enough component of the distance distribution. The notion of binomiality of the distance distribution proves to be useful. It is known that the distance distribution of random codes is normalized binomial, i.e., the number of codewords at distance $i$ from a specific codeword equals $M \binom{n}{i} 2^{-n}$ on the average [12], [28], [48]. An analysis of the interval where the distance distribution can be upper-estimated by the binomial distribution is undertaken in [30], [31]. Here we deal with lower bounds on the distance components. Our estimates improve on results by Kalai and Linial [18].
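The binomial profile is easy to observe experimentally. The short Python sketch below draws a random code and compares its distance distribution with $M\binom{n}{i}2^{-n}$; all names and parameters here are our own, and the experiment is only an illustration of the notion of binomiality.

```python
import random
from math import comb

def random_code(n, M, seed=1):
    rng = random.Random(seed)
    return [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(M)]

def distance_distribution(code):
    n, M = len(code[0]), len(code)
    B = [0.0] * (n + 1)
    for x in code:
        for y in code:
            B[sum(a != b for a, b in zip(x, y))] += 1.0 / M
    return B

n, M = 16, 256
B = distance_distribution(random_code(n, M))
for i in range(n + 1):
    expected = M * comb(n, i) / 2**n   # normalized binomial profile
    print(f"i={i:2d}  B_i={B[i]:8.2f}  M*C(n,i)/2^n={expected:8.2f}")
```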
A. A Generalization of the Bassalygo–Elias Lemma

Let $C$ be a code of length $n$ with the distance distribution $(B_0, \cdots, B_n)$. For $v \in F^n$ define the corresponding coset as

$$C + v = \{x + v : x \in C\}$$

where the summation of vectors is componentwise, modulo two. For any …, let … stand for the subset of vectors of … of Hamming weight $w$.

For $0 \le w \le n$, consider

$$\cdots \qquad (24)$$

Lemma 2:

$$\cdots \qquad (25)$$

Proof: Consider two vectors $x$, $y$ such that $d(x, y) = i$. The number of $v$ such that $\mathrm{wt}(x + v) = w$ and $\mathrm{wt}(y + v) = w$ is exactly … . In the code there are $M B_i$ ordered pairs at distance $i$. Since every such pair occurs in … cosets,

$$\cdots \qquad (26)$$

which proves the claim.

As a corollary of (26) we obtain the following.

Corollary 1: For every … there exists … such that

$$\cdots \qquad (27)$$

Notice that if one chooses $w = \cdots$, (27) reduces to the Bassalygo–Elias inequality [3]. Another generalization of the Bassalygo–Elias lemma (“the three subsets lemma”) can be found in [36].

B. Lower Bounds on the Distance Distribution of Codes

In Section VI-B the necessary properties of the dual Hahn polynomials are summarized. The following relates these polynomials to the distance distribution of constant weight codes.

Let … be a code with all codewords of Hamming weight $w$, and let … be its distance distribution. Then, as was shown by Delsarte [9], the following inequality holds:

$$\cdots \qquad (28)$$

for … .

Lemma 3: Let … be a polynomial of degree at most … such that … for …, and … for … . Then there exists … such that

$$\cdots \qquad (29)$$
Fig. 1. Bounds on the undetected error exponent for $p = 0.2$.
Proof: We use (79) and (28) to obtain the following chain: …, which proves the lemma.

Along with Lemma 2 this gives the following estimate.

Lemma 4: Let … be a polynomial satisfying the conditions of the previous lemma. Then there exists … such that

$$\cdots \qquad (30)$$

Proof: From (25) and (29)

$$\cdots$$

Since …, the minimum of the first sum is attained when all the summands are equal, i.e., … for all … . This proves (30).

Now, since the conditions imposed on the polynomial are the same as in the linear programming bound on the size of codes (see, e.g., [9]), we may choose the same polynomial as in [45]

$$\cdots \qquad (31)$$

where the parameter … is such that … (so, if … stands for the first zero of …, then … is between … and …), and …, … will be chosen later.
Fig. 2. Bounds on the decoding error exponent for $p = 0.02$.
The polynomial (31) is nonpositive in the interval … and the coefficients of its expansion in the basis of polynomials … are nonnegative [45]. For this polynomial, as was shown in [45],

$$\cdots \qquad (32)$$

and

$$\cdots \qquad (33)$$

From (77) we have … . This yields that for every …

$$\cdots \qquad (34)$$

Now we choose … in such a way that the value of the numerator in (30) is positive and is dominated by the first term. To do this we want to find the maximal … such that

$$\cdots \qquad (35)$$

This is equivalent to … . Plugging it into the expressions for … and …, and taking into account that …, we reduce it to … . So

$$\cdots \qquad (36)$$
Substituting the estimate for … into (30) and taking into account (35) we get … . Notice that … . Therefore, … .

Theorem 3: Let …, …, be such that … . Then there exists …, where …, such that … .

Notice that in the statement of the theorem we may assume … and … (of course, these are not the best possible estimates). When $n$ grows, the theorem gives the following.

Theorem 4: In every code of big enough length there exists a component of the distance distribution such that either

i) … for some …; or
ii) … for some … .

Proof: The second claim easily follows from the previous theorem by minimizing … under the imposed restrictions, and comparing it with (3). To prove the first part we do the following. Assume indirectly that … . For … let … . Clearly,

$$\cdots \qquad (37)$$

Let … and … . From (37) we have

$$\cdots \qquad (38)$$

Now we construct a code … as follows:

1. Set … .
2. Pick a word … from … and append it to … .
3. Discard … and … .
4. If … is nonvoid, go to 2; otherwise stop.

By the construction, the obtained code has minimum distance at least … . At every step we are decreasing the size of … by at most …, so, taking into account (38), we get … . Applying Theorem 3 to … we check that the distance distribution of …, and thus of …, satisfies claim ii). Hence, we have proved that if i) is not true then ii) is valid.

The theorem can be rephrased in the following way: every code of big enough length either has relative distance at most equal to the one from the Gilbert–Varshamov bound, or there necessarily exists a distance distribution component that is at least binomial.

Actually, a better (than the binomial) bound can be obtained. We elaborate on (34). To do this, we proceed to estimate the asymptotic behavior of … . Let … and … grow, … . Denote … . Clearly,

$$\cdots \qquad (39)$$

By (34)

$$\cdots$$

yielding

$$\cdots \qquad (40)$$

To obtain better estimates on … and … we use another strategy. As shown in [45, Appendix B],

$$\cdots$$

Denoting …, we rewrite the recurrence relation (76) in the form … . Solving it in … we get, with accuracy up to a factor …, … . Furthermore, by (81) and the last equality we get … . Approximating the sum by the corresponding integral we obtain (41) below. Note that this integral can be computed explicitly using Mathematica.
$$\cdots \qquad (41)$$

Theorem 5: In every code of big enough length there exists a component of the distance distribution such that …, where … is defined by (41), and …, … are such that … .

Lemma 5: Let … be a polynomial of degree at most … such that … for …, … for … (42), and … for … . Then there exists … such that

$$\cdots \qquad (43)$$

Proof: Identical to the proof of Lemma 3.

Choosing as the polynomial … (cp. with [45])

$$\cdots \qquad (45)$$

where the parameter … is such that … (so, if … denotes the first zero of …, then … is between … and …), and …, to be chosen later, we obtain [45] … .

Notice that the upper limit of the interval in (43) is at least … (compare the expression to (3)).

C. Simple Lower Bounds

Here we develop another simple approach to the derivation of lower bounds on the distance distribution components. It gives weaker bounds, but the expressions we obtain are sometimes easier to deal with. The necessary properties of the Krawtchouk polynomials are summarized in Section VI-A.

Let … be a code with a distance distribution … . The distribution satisfies the condition of nonnegativity of the MacWilliams transform [9]

$$\sum_{i=0}^{n} B_i P_k(i) \ge 0 \qquad (44)$$

for … .
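The nonnegativity condition (44) is easy to verify numerically for any explicit code. The sketch below evaluates the Krawtchouk polynomials from their standard definition and checks $\sum_i B_i P_k(i) \ge 0$ for the $[7,4]$ Hamming code; for a linear code these sums are, up to normalization, the weight distribution of the dual code. The generator matrix and all function names are our own.

```python
from itertools import product
from math import comb

def krawtchouk(n, k, x):
    """P_k(x) = sum_j (-1)^j C(x,j) C(n-x, k-j)  (standard binary Krawtchouk)."""
    return sum((-1)**j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

def hamming_code_7_4():
    G = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)]
    return [tuple(sum(m*g for m, g in zip(msg, col)) % 2 for col in zip(*G))
            for msg in product((0, 1), repeat=4)]

def distance_distribution(code):
    n, M = len(code[0]), len(code)
    B = [0.0] * (n + 1)
    for x in code:
        for y in code:
            B[sum(a != b for a, b in zip(x, y))] += 1.0 / M
    return B

code = hamming_code_7_4()
n, M = 7, len(code)
B = distance_distribution(code)
for k in range(n + 1):
    s = sum(B[i] * krawtchouk(n, k, i) for i in range(n + 1))
    print(k, round(s / M, 6))   # nonnegative; here: dual weight distribution 1,0,0,0,7,0,0,0
```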
Denote … . We are interested in upper bounds on … for … . The simplest bound follows from the orthogonality
relation (68), yielding

$$P_k^2(x) \le \frac{2^n \binom{n}{k}}{\binom{n}{x}}. \qquad (46)$$

Another bound uses arguments similar to the derivation of (41) up to terms of order … and gives the exact value of … (see [18])

$$\cdots \qquad (47)$$

Notice that the bounds (46) and (47) coincide at …, and for …, (47) is always less than (46).

Lemma 6: For …

$$\cdots \qquad (48)$$

Proof: Let … , where … are the roots of … . Then … . Thus …, as well as the estimate (47), are …-convex. On the other hand, the estimate (46) is …-convex (the second derivative equals …). Both estimates coincide and are equal to … at … . Their derivatives in … are equal to … and also coincide at … . The line … is a lower bound for (46) and an upper bound for (47). Simple algebraic manipulations accomplish the proof.

Theorem 6: In every code of big enough length there exists a component of the distance distribution such that

$$\cdots \qquad (49)$$

where … and … is defined by (47).

We may use bounds on … other than (47) in (49). Use of (46) proves again the existence of a binomial component in the distance distribution. If we apply (48), we obtain the following weaker bound.

Theorem 7: In every code of big enough length there exists a component of the distance distribution such that

$$\cdots \qquad (50)$$

where … .

It is possible to use the same idea of constructing the “tangential” upper bound for the values of the dual Hahn polynomials.

Lemma 7: For …

$$\cdots \qquad (51)$$

Proof: Recall that by (34)

$$\cdots \qquad (52)$$

Following the lines of Lemma 6, differentiating (52) in …, and computing the value of the derivative at …, we obtain the claimed. Now we may plug the bound on … into (42).

Using arguments similar to those yielding Theorem 5 we get the following result.

Theorem 8: In every code of big enough length there exists a component of the distance distribution such that

$$\cdots \qquad (53)$$
III. UNDETECTED ERROR EXPONENTS

The results obtained in the previous sections yield new bounds on the probability of undetected error in the BSC. Let the transition probability in the BSC be $p$. For a code of rate $R$ with the distance distribution $(B_0, \cdots, B_n)$ the average probability of undetected error is

$$P_{ue}(C, p) = \sum_{i=1}^{n} B_i \, p^i (1 - p)^{n - i}. \qquad (54)$$
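Formula (54) is straightforward to evaluate once the distance distribution is known. A minimal Python sketch, using the $[7,4]$ Hamming code's distribution $(1, 0, 0, 7, 7, 0, 0, 1)$ as a test case (our choice of example, not the paper's):

```python
def prob_undetected_error(B, p):
    """P_ue = sum_{i>=1} B_i p^i (1-p)^(n-i), with B = (B_0, ..., B_n)."""
    n = len(B) - 1
    return sum(B[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))

B_hamming = [1, 0, 0, 7, 7, 0, 0, 1]   # distance distribution of the [7,4] Hamming code
for p in (0.2, 0.05, 0.01):
    print(p, prob_undetected_error(B_hamming, p))
```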
We first use Theorem 7 to obtain a simple bound on the exponent.

Theorem 9: …, where … is defined by

$$\cdots \quad \text{if } \cdots \qquad (55)$$
$$\cdots \quad \text{if } \cdots \qquad (56)$$

Proof: From (54) we derive

$$\cdots \qquad (57)$$

If …, choose in (50) …, and we obtain … . Plugging it into (57) we get

$$\cdots \qquad (58)$$

Differentiating the right-hand side in … we obtain … . The condition for the nonnegativity of the derivative is … . But the last condition always holds, since … . Thus the maximum in (58) occurs when … and equals … . If …, then we choose … and … in such a way that … . Checking the nonnegativity of the derivative in … accomplishes the proof.

Notice that the bounds of Theorems 9 and 1 coincide for … (see (5)); otherwise Theorem 1 provides a tighter bound.

Proof of Theorem 1: Here we use the estimate from Theorem 8. First, assume that … . Then we may choose … and … in such a way that … and … . Take into account that … . If …, choose …, and we obtain … . Differentiating in …, we find that the maximum occurs at …, thus giving the claim.
IV. DECODING ERROR EXPONENTS

Relations between the distance distribution and the error probability were studied earlier. In 1963, Gallager [12] proved that codes with the distance distribution upper-bounded by the normalized binomial one (such a distance distribution occurs in virtually all codes) provide an exponent of the probability of decoding error greater than or equal to (16), (17), or (18). Other proofs of this result can be found in [6], [14], and [32]. Poltyrev [49] developed a different approach to estimating the decoding error probabilities for codes with known distance distribution. Asymptotically, for codes with the binomial distance distribution, it gives the same lower bound on the exponent. In this section we deal with upper bounds on the exponents for the probability of decoding error when a lower bound on the distance distribution of a code is known.
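As a concrete (and much cruder than the results below) illustration of how a known distance distribution controls the decoding error probability, the following sketch evaluates the standard union bound $P_e \le \sum_{i \ge 1} B_i P_2(i)$, where $P_2(i)$ is the exact pairwise error probability between two codewords at distance $i$ on a BSC under maximum-likelihood decoding with ties broken at random. This is a textbook bound, not the bounding technique developed in this section; the test code is our running example.

```python
from math import comb

def pairwise_error(d, p):
    """Exact pairwise error probability between codewords at Hamming distance d
    on a BSC(p) under ML decoding, with ties resolved by a fair coin."""
    prob = 0.0
    for j in range(d + 1):
        pj = comb(d, j) * p**j * (1 - p)**(d - j)
        if 2 * j > d:
            prob += pj
        elif 2 * j == d:
            prob += 0.5 * pj
    return prob

def union_bound(B, p):
    """Union bound on the ML decoding error probability from the distance distribution B."""
    return sum(B[i] * pairwise_error(i, p) for i in range(1, len(B)))

B_hamming = [1, 0, 0, 7, 7, 0, 0, 1]   # [7,4] Hamming code
for p in (0.05, 0.02, 0.01):
    print(p, union_bound(B_hamming, p))
```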
Let $C$ be a code of length $n$ with a distance distribution $(B_0, \cdots, B_n)$. For a coset …, …, we define its weight distribution

$$\cdots$$

Here … equals the number of codewords being at distance … from … . Clearly, … if and only if … is a codeword. We have

$$\cdots \qquad (59)$$

Let … stand for the decoding (Voronoi) region of a codeword …, i.e., the set of all vectors which are closer to … than to any other codeword. Denote by

$$\cdots$$

the error probability given the word … is transmitted. Clearly,

$$\cdots \qquad (60)$$

Lemma 8: Let …, … for some … . Let … for some … . Define …, and … . Then

$$\cdots \qquad (61)$$

Proof: To avoid cumbersome notation assume that … and … are even, and … is an integer. Without loss of generality assume that … is a codeword. By (59) we get the inequality shown below. Let … be some codeword of weight … . Permuting coordinates we may always obtain … . Consider the set of errors having … $1$'s in the first … coordinates and … $1$'s in the last … positions. Every such error has probability

$$\cdots \qquad (62)$$

and there are all in all … such errors. All these errors are at least as close to … as to …, i.e., they do not belong to … . Now we wish to count how many such errors are closer to … than to any other codeword of weight … . If there is another codeword of weight … which is at distance … from …, say …, the errors from … which are at least as close to … as to … are those having no fewer $1$'s in the positions … than the number of $1$'s in the positions … . The number of such vectors is

$$\cdots \qquad (63)$$

This number can only decrease if the distance between … and … is greater than … . Define … . Consider an arbitrary subset of the codewords of weight … containing … and of size … . Since there are … possibilities for the codewords …, we conclude that there are at least … errors from … which are closer to … than to any other codeword (out of …) of weight … . The same arguments hold for every codeword of weight … . So, there are at least … errors from … which are not in … . By (62) their probability is

$$\cdots \qquad (64)$$

Asymptotically in …, … . To analyze … we check that the summand in the second sum achieves its maximum at …, and the summand in the first sum attains its maximum at … . Notice that …, so … . Standard estimates then yield … . Moreover, … . Plugging the estimates into (64) we obtain the claim.
Lemma 9: Let … be a code of big enough length and nonzero rate …, and let … be the number of codewords … such that there exists … for which

$$\cdots \qquad (65)$$

where … is defined by (42), … by (43), and … is in the interval defined by … . Then … .

Proof: Indeed, construct the code … from the codewords which do not satisfy (65). Assuming indirectly that this code has rate …, we get a contradiction, since its distance distribution does not satisfy Theorem 5.

Proof of Theorem 2: Let … be a long enough code of positive rate. Let … be the first distance component satisfying … . Discard all codewords … from … which have … for some … and … . This operation can decrease the size of the code by a factor of at most … . Furthermore, discard from the obtained code all the words that do not satisfy the conditions of Lemma 9. The new code … has asymptotically the same rate as …, the minimum distance …, all the codewords satisfy (65), and for every codeword of … (also belonging to …) we have that … in the code … is less than or equal to … in the initial code … . Assuming the probability of decoding error equal for the codewords belonging to … and not to …, we conclude

$$\cdots$$

Therefore, by Lemma 9, … . Since …, then … . On the other hand, optimizing in … and …, from Theorem 5 and Lemma 8 we obtain the second half of the statement.

Note that the bound of Theorem 2 is tighter for all rates than (20). Indeed, this bound is the exponent of the probability to confuse two codewords being at distance … . It is easy to check that for … and …, we have … . Since, by the proof of Lemma 8, … is always exponentially greater than …, we get an extra negative summand of … in (61).

V. CONCLUSION

In this paper, using an analysis of possible distance distributions of codes, we propose a method for upper-bounding the exponents of the probabilities of undetected and decoding errors in the binary symmetric channel. The new bounds improve on the corresponding results of Shannon, Gallager, and Berlekamp, and of McEliece and Omura for the probability of decoding error, as well as on the bounds by Levenshtein and Abdel-Ghaffar for the probability of undetected error. Apparently, a more thorough analysis allows further elaboration on the exponents for the probability of decoding error. This is the topic of [42]. The results are easily generalized to arbitrary binary-input discrete memoryless channels using the Bhattacharyya distance, which, in this case, is proportional to the Hamming distance (see, e.g., [44], [46], [47], [54]). The bounds can also be generalized to other models, e.g., non-binary-input discrete memoryless channels, spherical codes used on the additive white Gaussian noise channel, etc. This will be published elsewhere.

APPENDIX

In the Appendix we summarize the properties of Krawtchouk and dual Hahn polynomials that we need. The main source is [9]; see also [7], [29], [38], [41], [43], [45], and references therein.

A. Krawtchouk Polynomials

Let $P_k(x)$ stand for the discrete Krawtchouk polynomial of degree $k$

$$P_k(x) = \sum_{j=0}^{k} (-1)^j \binom{x}{j} \binom{n - x}{k - j}. \qquad (66)$$

The polynomials satisfy the following difference equation:

$$\cdots \qquad (67)$$

They are orthogonal with respect to $\binom{n}{x}$, i.e.,

$$\sum_{x=0}^{n} \binom{n}{x} P_k(x) P_l(x) = 2^n \binom{n}{k} \delta_{k, l}. \qquad (68)$$

For any polynomial … of degree at most … there is the unique expansion in the basis of polynomials $P_k$

$$\cdots$$

and the coefficients … can be found from

$$\cdots \qquad (69)$$

Some useful values are

$$\cdots \qquad (70)$$
$$\cdots \qquad (71)$$
$$\cdots \qquad (72)$$
$$\cdots \qquad (73)$$
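The defining sum (66) and the orthogonality relation (68) translate directly into code. The sketch below evaluates $P_k(x)$ from (66) and verifies both (68) and the orthogonality-based estimate of the kind appearing in (46) for a small length; the function name and the choice $n = 24$ are ours.

```python
from math import comb

def krawtchouk(n, k, x):
    """P_k(x) = sum_j (-1)^j C(x,j) C(n-x, k-j), definition (66)."""
    return sum((-1)**j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

n = 24

# Orthogonality (68): sum_x C(n,x) P_k(x) P_l(x) = 2^n C(n,k) delta_{kl}
for k in range(4):
    for l in range(4):
        s = sum(comb(n, x) * krawtchouk(n, k, x) * krawtchouk(n, l, x)
                for x in range(n + 1))
        expected = 2**n * comb(n, k) if k == l else 0
        assert s == expected, (k, l, s, expected)

# Estimate of the type (46): C(n,x) P_k(x)^2 <= 2^n C(n,k), a consequence of (68)
for k in range(n + 1):
    for x in range(n + 1):
        assert comb(n, x) * krawtchouk(n, k, x)**2 <= 2**n * comb(n, k)

print("orthogonality and the simple estimate verified for n =", n)
```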
Denote by … the smallest zero of … . It is known [38] that for … . Let … and … grow in such a way that … . Then

$$\cdots \qquad (74)$$

B. Dual Hahn Polynomials

Let … stand for the discrete dual Hahn polynomial of degree …

$$\cdots \qquad (75)$$

The polynomials satisfy the following difference equation:

$$\cdots \qquad (76)$$

They are orthogonal with respect to …, i.e.,

$$\cdots \qquad (77)$$

For any polynomial … of degree at most … there is the unique expansion in the basis of polynomials …

$$\cdots$$

and the coefficients … can be found from

$$\cdots \qquad (78)$$

Some useful values are

$$\cdots \qquad (79)$$
$$\cdots \qquad (80)$$
$$\cdots \qquad (81)$$
$$\cdots \qquad (82)$$

Denote by … the smallest zero of … . It is known [38] that for … . Let … and … grow in such a way that … . Then

$$\cdots \qquad (83)$$

ACKNOWLEDGMENT

We are grateful to V. Levenshtein and the anonymous referee for comments and suggestions which helped to improve the paper. A. Ashikhmin and A. Barg informed us that a somewhat different approach, using binomial moments of a code, allows one to obtain the same bound on the undetected error exponent as in Theorem 1.

REFERENCES
[1] K. A. S. Abdel-Ghaffar, “A lower bound on the undetected error probability and strictly optimal codes,” IEEE Trans. Inform. Theory, vol. 43, pp. 1489–1502, Sept. 1997.
[2] A. Barg, “Binomial moments of distance distribution and the probability of undetected error,” preprint, 1997.
[3] L. A. Bassalygo, “New upper bounds for codes correcting errors,” Probl. Pered. Inform., vol. 1, no. 4, pp. 41–44, 1965.
[4] R. E. Blahut, Principles and Practice of Information Theory. Reading, MA: Addison-Wesley, 1987.
[5] V. M. Blinovskii, “An estimate for the probability of undetected error,” Probl. Pered. Inform., vol. 32, no. 2, pp. 3–9, 1996.
[6] E. L. Blokh and V. V. Zyablov, Linear Concatenated Codes. Moscow, USSR: Nauka, 1982 (in Russian).
[7] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein, Covering Codes. Amsterdam, The Netherlands: Elsevier, 1997.
[8] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Budapest, Hungary: Akadémiai Kiadó, 1981.
[9] P. Delsarte, An Algebraic Approach to the Association Schemes of Coding Theory (Philips Res. Rep. Suppl., no. 10), 1973.
[10] P. Elias, “Coding for two noisy channels,” in Information Theory. New York: Academic, 1956, pp. 61–74.
[11] R. M. Fano, Transmission of Information. Boston, MA: MIT Press, 1961.
[12] R. G. Gallager, Low-Density Parity-Check Codes. Boston, MA: MIT Press, 1963.
[13] R. G. Gallager, “A simple derivation of the coding theorem and some applications,” IEEE Trans. Inform. Theory, vol. IT-11, pp. 3–18, 1965.
[14] ——, Information Theory and Reliable Communication. New York: Wiley, 1968.
[15] ——, “The random coding bound is tight for the average code,” IEEE Trans. Inform. Theory, vol. IT-19, pp. 244–246, 1973.
[16] E. N. Gilbert, “A comparison of signalling alphabets,” Bell Syst. Tech. J., vol. 31, no. 3, pp. 504–522, 1952.
[17] R. W. Hamming, “Error detecting and error correcting codes,” Bell Syst. Tech. J., vol. 29, pp. 147–160, 1950.
[18] G. Kalai and N. Linial, “On the distance distribution of codes,” IEEE Trans. Inform. Theory, vol. 41, pp. 1467–1472, Sept. 1995.
[19] T. Kasami, T. Kløve, and S. Lin, “Linear block codes for error detection,” IEEE Trans. Inform. Theory, vol. IT-29, pp. 131–136, Jan. 1983.
[20] G. L. Katsman, “Upper bounds on the probability of undetected error,” Probl. Pered. Inform., vol. 31, no. 1, pp. 13–16, 1995.
[21] T. Kløve, “Generalizations of the Korzhik bound,” IEEE Trans. Inform. Theory, vol. IT-30, pp. 771–773, Sept. 1984.
[22] ——, “Using codes for error correction and detection,” IEEE Trans. Inform. Theory, vol. IT-30, pp. 868–870, Dec. 1984.
[23] ——, “Optimal codes for error detection,” IEEE Trans. Inform. Theory, vol. 38, pp. 479–489, Mar. 1992.
[24] ——, “The weight distribution of cosets,” IEEE Trans. Inform. Theory, vol. 40, pp. 911–913, May 1994.
[25] ——, “Bounds on the weight distribution of cosets,” IEEE Trans. Inform. Theory, vol. 42, pp. 2257–2260, 1996.
[26] T. Kløve and V. I. Korzhik, Error Detecting Codes. Boston, MA: Kluwer, 1995.
[27] V. I. Korzhik, “Bounds on undetected error probability and optimum group codes in a channel with feedback,” Radiotech., vol. 20, no. 1, pp. 27–33, 1965.
[28] V. N. Koshelev, “On some properties of random group codes of great length,” Probl. Pered. Inform., vol. 1, no. 4, pp. 35–38, 1965.
[29] I. Krasikov and S. Litsyn, “On integral zeros of Krawtchouk polynomials,” J. Comb. Theory, Ser. A, vol. 74, no. 1, pp. 71–99, 1996.
[30] ——, “Estimates for the range of binomiality in codes' spectra,” IEEE Trans. Inform. Theory, vol. 43, pp. 987–991, 1997.
[31] ——, “Bounds on spectra of codes with known dual distance,” Des., Codes Cryptogr., vol. 13, no. 3, pp. 285–298, 1998.
[32] D. E. Lazić and V. Senk, “A direct geometrical method for bounding the error exponent for any specific family of channel codes—Part I: Cutoff rate lower bound for block codes,” IEEE Trans. Inform. Theory, vol. 38, pp. 1548–1559, July 1992.
[33] V. K. Leont'ev, “Error-detecting encoding,” Probl. Pered. Inform., vol. 8, no. 2, pp. 6–14, 1972. (English translation: Probl. Inform. Transm., vol. 8, no. 2, pp. 86–92, 1972.)
[34] S. K. Leung-Yan-Cheong, E. R. Barnes, and D. U. Friedman, “On some properties of the undetected error probability of linear codes,” IEEE Trans. Inform. Theory, vol. IT-25, pp. 110–112, Jan. 1979.
[35] S. K. Leung-Yan-Cheong and M. E. Hellman, “Concerning a bound on undetected error probability,” IEEE Trans. Inform. Theory, vol. IT-22, pp. 235–237, Mar. 1976.
[36] V. I. Levenshtein, “Minimum redundancy of binary error-correcting codes,” Probl. Pered. Inform., vol. 10, no. 2, pp. 110–123, 1974.
[37] ——, “Bounds on the probability of undetected error,” Probl. Pered. Inform., vol. 13, no. 1, pp. 3–18, 1977.
[38] ——, “Bounds for packings of metric spaces and some applications,” Probl. Kibern., vol. 40, pp. 43–110, 1983 (in Russian).
[39] ——, “On the straight-line bound for the undetected error exponent,” Probl. Pered. Inform., vol. 25, pp. 33–37, 1989.
[40] ——, “Krawtchouk polynomials and universal bounds for codes and designs in Hamming spaces,” IEEE Trans. Inform. Theory, vol. 41, pp. 1303–1321, 1995.
[41] J. H. van Lint, Introduction to Coding Theory. New York: Springer-Verlag, 1992.
[42] S. Litsyn and B. Sudakov, “New upper bounds on error exponents. Part II,” in preparation.
[43] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North-Holland, 1977.
[44] R. J. McEliece and J. K. Omura, “An improved upper bound on the block coding error exponent for binary-input discrete memoryless channels,” IEEE Trans. Inform. Theory, vol. IT-23, pp. 611–613, Sept. 1977.
[45] R. J. McEliece, E. R. Rodemich, H. Rumsey Jr., and L. R. Welch, “New upper bounds on the rate of a code via the Delsarte–MacWilliams inequalities,” IEEE Trans. Inform. Theory, vol. IT-23, pp. 157–166, Mar. 1977.
[46] J. K. Omura, “On general Gilbert bounds,” IEEE Trans. Inform. Theory, vol. IT-19, pp. 661–665, Sept. 1973.
[47] ——, “Expurgated bounds, Bhattacharyya distance, and rate distortion functions,” Inform. Contr., vol. 24, pp. 358–383, 1974.
[48] J. N. Pierce, “Limit distribution of the minimum distance of random linear codes,” IEEE Trans. Inform. Theory, vol. IT-13, pp. 595–599, 1967.
[49] G. Poltyrev, “Bounds on the decoding error probability of binary linear codes via their spectra,” IEEE Trans. Inform. Theory, vol. 40, pp. 1284–1292, July 1994.
[50] C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., vol. 27, pp. 379–423; 623–656, 1948.
[51] C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, “Lower bounds to error probability for coding on discrete memoryless channels,” Inform. Contr., vol. 10, pp. 65–103 (Part I); 522–552 (Part II), 1967.
[52] G. Szegő, Orthogonal Polynomials (Amer. Math. Soc. Colloq. Publ., vol. 23). Providence, RI: 1975.
[53] R. R. Varshamov, “An estimate for the number of signals in codes correcting errors,” Dokl. Akad. Nauk SSSR, vol. 117, no. 5, pp. 739–741, 1957.
[54] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. New York: McGraw-Hill, 1979.
[55] J. K. Wolf, A. M. Michelson, and A. H. Levesque, “On the probability of undetected error for linear block codes,” IEEE Trans. Commun., vol. COM-30, pp. 317–324, Feb. 1982.