
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 6, JUNE 2013

Parity-Check Matrices Separating Erasures From Errors Khaled A. S. Abdel-Ghaffar, Member, IEEE, and Jos H. Weber, Senior Member, IEEE

Abstract—Most decoding algorithms of linear codes, in general, are designed to correct or detect errors. However, many channels cause erasures in addition to errors. In principle, decoding over such channels can be accomplished by deleting the erased symbols and decoding the resulting vector with respect to a punctured code. For any given linear code and any given maximum number of correctable erasures, parity-check matrices are introduced that yield parity-check equations which do not check any of the erased symbols and which are sufficient to characterize all punctured codes corresponding to this maximum number of erasures. These matrices allow for the separation of erasures from errors to facilitate decoding. Several constructions of such separating parity-check matrices are presented. To reduce decoding complexity, separating parity-check matrices with a small number of rows are preferred. The minimum number of rows in a parity-check matrix separating a given maximum number of erasures is called the separating redundancy. Upper and lower bounds on the separating redundancies are derived. In particular, it is shown that the separating redundancies tend to grow linearly with the number of rows in full-rank parity-check matrices of codes. The separating redundancies of some classes of codes are determined for some maximum numbers of erasures. Index Terms—Error and erasure decoding, linear block code, parity-check matrix, separating matrix, separating redundancy.

I. INTRODUCTION

MANY channels cause noise that alters the transmitted symbols. This results in erasures and errors, depending, respectively, on whether or not the positions of the altered symbols are known. Using coding techniques, the receiver attempts to correct the errors and retrieve the erased symbols or to detect the presence of errors. The receiver succeeds if the numbers of erasures and errors are within the capability of the code. Let C be an [n, k, d] linear block code over GF(q), where n, k, and d denote the code's length, dimension, and Hamming distance, respectively, and q is a prime power. Such a code is a k-dimensional subspace of the space of q-ary vectors of length n over GF(q), in which any two different vectors differ in at least d

Manuscript received June 01, 2012; revised January 08, 2013; accepted January 21, 2013. Date of publication February 07, 2013; date of current version May 15, 2013. This work was supported by the National Science Foundation under Grant CCF-1015548. This paper was presented in part at the 2008 IEEE International Symposium on Information Theory, in part at the Symposium on Information Theory in the Benelux, Eindhoven, The Netherlands, 2009, and in part at the 2010 IEEE International Symposium on Information Theory. K. A. S. Abdel-Ghaffar is with the Department of Electrical and Computer Engineering, University of California, Davis, CA 95616 USA (e-mail: [email protected]). J. H. Weber is with Delft University of Technology, 2628 CD Delft, The Netherlands (e-mail: [email protected]). Communicated by G. Cohen, Associate Editor for Coding Theory. Digital Object Identifier 10.1109/TIT.2013.2245939

positions. The set of codewords of C can be defined as the null space of an r × n parity-check matrix H over GF(q) of rank n − k. The row space of H is the dual code of C, denoted C⊥. Since a q-ary vector c is a codeword of C if and only if cH^T = 0, where the superscript T denotes transposition, the parity-check matrix H gives rise to r parity-check equations.

An equation is said to check position j if and only if its coefficient in position j is nonzero. In the most general scenario, if the number of erasures, e, does not exceed d − 1, then the decoder can choose two nonnegative integers t and s satisfying

e + 2t + s ≤ d − 1 (1)

such that the following is true. If the number of errors does not exceed t, then the decoder can correct all errors and erasures. Otherwise, if the number of errors is greater than t but at most t + s, then the decoder can detect the occurrence of more than t errors and, in this case, may request the retransmission of the codeword (see, e.g., [17]). Notice that the above only states the existence of a decoder with such capabilities but does not show a specific algorithm to achieve these capabilities. Typically, decoders are devised for correcting or, alternatively, detecting errors, but not necessarily in combination with erasures. For BCH codes and Reed–Solomon codes, these decoders can be modified to correct erasures as well [20]. This modification is based on the specific structure of the codes. However, known algorithms that are applicable to linear codes in general use trials in which the erasures are replaced by symbols in GF(q) and the resulting word is decoded using a decoder capable of correcting or detecting errors only. Although two trials are sufficient for binary codes, the number of trials grows rapidly with q, rendering the applicability of this algorithm very limited for codes over large fields [20]. On the other hand, to prove the existence of a decoder with the prescribed capabilities, it suffices to show that if all erasures from the received word are deleted, then the errors in the resulting word can be corrected or detected based on the punctured code whose codewords are obtained by deleting all the symbols corresponding to the erased symbols in the received word (see, e.g., [17]).
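The tradeoff in (1), taken here in the standard form e + 2t + s ≤ d − 1 for e erasures, t corrected errors, and s additionally detected errors, can be tabulated with a short sketch (the function name is ours):

```python
def feasible_pairs(d, e):
    """All (t, s) with e + 2*t + s <= d - 1: with e erasures, a distance-d
    code can correct t errors while detecting up to t + s errors."""
    if e > d - 1:
        return []  # more erasures than the code can tolerate
    return [(t, s)
            for t in range((d - 1 - e) // 2 + 1)
            for s in range(d - 1 - e - 2 * t + 1)]

# An [8, 4, 4] extended Hamming code (d = 4) with one erasure: either
# correct one error (t = 1, s = 0), or detect up to two (t = 0, s = 2).
print(feasible_pairs(4, 1))  # → [(0, 0), (0, 1), (0, 2), (1, 0)]
```

Any pair on this list can be targeted by the punctured-code decoder described in the text.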
The crux of the proof is based on (1), which implies that the punctured code has Hamming distance at least 2t + s + 1, and therefore, t errors can be corrected, or up to t + s errors detected, in the word with all erasures deleted. In case of error correction, the erasures can then be recovered since their number is

0018-9448/$31.00 © 2013 IEEE


less than the Hamming distance of the code. Although this proof is actually based on an algorithm, the implementation of the decoder requires a characterization, e.g., a parity-check matrix, of the punctured code, which depends on the positions of the erasures in the received word. For the decoder to computationally characterize the punctured code after receiving the word may require an unacceptable time delay. On the other hand, storing precomputed characterizations of all punctured codes corresponding to all erasure patterns may not be feasible either. In this paper, we propose to use a parity-check matrix, which typically has redundant rows, and which contains, as submatrices, parity-check matrices of all codes punctured up to a fixed number of symbols, denoted by s. This fixed number, ranging from zero to d − 1, is presumably the maximum number of erasures caused by the channel in a codeword. The parity-check matrix we are proposing yields enough parity-check equations that do not check any erased symbol and are sufficient to characterize the punctured code. Having these parity-check equations not checking any of the erased symbols leads to the notion of "separating" the erasures from the errors. To reduce storage requirements, we are interested in finding a parity-check matrix with a minimal number of rows satisfying the above condition. The fact that the same parity-check equations characterize the punctured code allows for error-decoding of this code with all erasures masked. This can be done by applying generic error-decoding techniques, ranging from syndrome decoding to iterative decoding methods based on belief propagation, to the parity-check matrices of the punctured codes [14]. Moreover, if the proposed parity-check matrix of the original code is completely orthogonalizable [14], then the parity-check matrices of the punctured codes, being submatrices of the proposed parity-check matrix, are also completely orthogonalizable.
In this case, majority-logic decoding can be applied to these parity-check matrices to decode the punctured codes. Notice that majority-logic decoding is applied here to correct errors only; as explained later, the erasures can be easily retrieved. Alternatively, we can apply majority-logic decoding to correct both errors and erasures using the unpunctured code. However, to correct erasures, the decoder complexity needs to be twice that of the error-only majority-logic decoder [18]. We will not further consider the decoding aspects of the punctured codes but rather concentrate on the construction of parity-check matrices with a small number of rows that are capable of separating erasures from errors. This study is motivated by the interest shown in the last decade in decoding techniques, such as belief propagation, especially as applied to low-density parity-check codes, that are based on parity-check matrices with a large number of redundant rows. Decoding exploits the redundancy of these matrices to yield good performance. The computational complexity of decoding is reduced at the price of storing parity-check matrices with more rows than necessary to characterize the codes. Actually, decoding techniques based on such parity-check matrices have already been introduced to decode words suffering from erasures only [4], [15]. For this application, the decoder seeks a parity-check equation that checks exactly one erased symbol, whose value can then be determined directly from the equation. A set of positions is called a stopping set if there is no parity-check


equation that checks exactly one symbol in these positions. Erasure decoding fails if and only if erasures fill the positions of a nonempty stopping set. The parity-check matrices we are proposing do not have nonempty stopping sets of sizes less than or equal to the maximum number of erasures, except in the case in which this maximum number equals d − 1 and the code is maximum-distance separable (MDS). Thus, except for this case, for any pattern of s or fewer erasures, not only are there enough parity-check equations not checking any of the erased symbols to characterize the punctured code, but there is also a parity-check equation that checks exactly one of the erased symbols. This greatly facilitates the retrieval of the erased symbols once the errors are corrected. It is not surprising that this work is related to work on stopping sets, especially to [7], [8], [12], [19], and [22]. However, work on stopping sets assumes that the channel does not cause errors, which limits its applicability. In contrast, our work deals with errors in addition to erasures. The basic concept behind the proposed decoding technique is best illustrated by an example. Example 1: Let C be the binary extended Hamming code with parity-check matrix

(2)

Notice that the first four rows form a full-rank parity-check matrix of the code. However, as we show, allowing redundant rows simplifies the decoding of errors and erasures. Assume that the channel causes at most one error. Suppose that a word is received in which "?" denotes an erasure in the second position. Since the first, second, and sixth parity-check equations do not check the erased symbol, we know that the word obtained by deleting the erased symbol from the received word corresponds to a transmitted codeword in the code whose parity-check matrix is

(3)

This parity-check matrix, which is obtained by deleting the second column and the third, fourth, and fifth rows in H, is a parity-check matrix of a Hamming code. Decoding yields a codeword of this punctured code. The erased symbol can then be retrieved from the third parity-check equation of H as 1, and the transmitted codeword is recovered. Finally, suppose that the received word contains two erasures. In this case, our goal is to detect, rather than correct, a single error. From the second and sixth parity-check equations of H, we know that the vector obtained by deleting the


erased symbols from the received word is a codeword in the code with parity-check matrix

(4) In particular, since the vector is in the null space of this matrix, it is a codeword in this code of length 6. This code has Hamming distance 2, and therefore, if the channel caused at most one error, then it did not cause any errors in the nonerased symbols. The erased symbols can be retrieved from the first and then the third parity-check equations of H, and the transmitted codeword is recovered. The success of this decoding technique hinges on the fact that the parity-check matrix H has a sufficient number of rows with zeros in the erased positions such that deleting the zeros in the erased positions from these rows yields a parity-check matrix for the code obtained by puncturing the extended Hamming code in the erased positions. Thus, we say that H separates the erased positions. Typically, separating parity-check matrices have redundant rows. To reduce decoding complexity, parity-check matrices with a small number of rows are preferred. The rest of this paper is organized as follows. Preliminaries and basic definitions are covered in Section II. Sections III and IV introduce the notions of separating matrices and separating redundancy, respectively, and contain the main results of the paper. Section V concludes this paper. II. PRELIMINARIES Let

S be a subset of [n] = {1, 2, ..., n} and R be a subset of [r] = {1, 2, ..., r}. For any r × n matrix H, let H_{R×S} denote the |R| × |S| submatrix of H formed by the entries h_{i,j} with i ∈ R and j ∈ S. For simplicity, we write H_R and H_S to denote H_{R×S} in case S = [n] and R = [r], respectively. We allow for empty matrices, i.e., matrices with no rows or no columns, which is the case if either R or S or both are empty. The rank of an empty matrix is defined to be zero. If v is a vector of length n, then v_S denotes the vector whose components are indexed by S. Furthermore, for the code C of length n, define the punctured code

C_S = { c_{[n]\S} : c ∈ C }

for S ⊆ [n] of size |S| ≤ d − 1, i.e., C_S consists of all codewords in C in which the components in positions belonging to the set S are deleted. Clearly, C_S is a linear code over GF(q) of length n − |S|, dimension k, and Hamming distance at least d − |S|. Furthermore, if |S| ≤ d − 1, then |C_S| = |C|, since the deletion of any number of components less than the Hamming distance of a code from two distinct codewords results in distinct vectors. It follows that if |S| ≤ d − 1, then there is a one-to-one correspondence between the codes C and C_S such that c′ ∈ C_S if and only if there is a unique c ∈ C such that c′ = c_{[n]\S}.

We consider combined erasure/error decoding of the code C. In this scenario, a transmitted codeword c ∈ C is subjected to errors and erasures, resulting in a word y over the alphabet GF(q) ∪ {?}, where "?" denotes an erasure. Let S and T denote the erasure and error patterns, respectively. The decoder picks two nonnegative integers t and s satisfying (1), where |S| is assumed to be at most d − 1. Then, it decodes the word y_{[n]\S} with respect to the code C_S. If |T| ≤ t, the decoder succeeds in correcting all the errors and can then retrieve the erasures. If t < |T| ≤ t + s, the decoder can detect the occurrence of more than t errors. This technique of decoding based on the decoding of C_S is the motivation of this work.

Let H be an r × n matrix over GF(q) and S be a subset of [n]. We define

Z(H, S) = { i ∈ [r] : h_{i,j} = 0 for all j ∈ S } (5)

i.e., H_{Z(H,S)×S} is the largest all-zero submatrix of H with columns indexed by S. Let

H(S) = H_{Z(H,S)} (6)

Lemma 1: Let H be a parity-check matrix of an [n, k, d] linear code C over GF(q) and S ⊆ [n] of size |S| ≤ d − 1. Then, C_S is in the null space of H(S)_{[n]\S}, which has rank at most equal to n − |S| − k. Proof: If c′ ∈ C_S, then there is a codeword c ∈ C such that c′ = c_{[n]\S}. Since cH^T = 0 and H_{Z(H,S)×S} is an all-zero matrix, then c′ (H(S)_{[n]\S})^T = 0. We conclude that C_S is in the null space of H(S)_{[n]\S}. Furthermore, if |S| ≤ d − 1, then H(S)_{[n]\S} has rank at most n − |S| − k as C_S, which is a code of length n − |S|, has dimension k in this case. Throughout this paper, by a nonzero vector we mean a vector that has at least one nonzero component, i.e., it is not the all-zero vector. A nonzero vector v = (v_1, v_2, ..., v_n) over GF(q) is said to be normalized if its leading nonzero term is equal to 1, i.e., if v_j = 0 for all j < i and v_i = 1 for some i. The support of the vector v is the set { j : v_j ≠ 0 } and its weight is the size of its support set. In particular, the set Z(H, S) in (5) is the set of indices of rows of the matrix H whose supports are disjoint from S.

III. SEPARATING MATRICES

A. Definitions and Characterization

We say that a parity-check matrix, H, for the [n, k, d] linear code, C, over GF(q) separates S ⊆ [n] if and only if the submatrix H(S)_{[n]\S} is a parity-check matrix of C_S. For 0 ≤ s ≤ d − 1, we say that H is s-separating for C if it separates every set S of size |S| ≤ s. Clearly, any parity-check matrix of any linear code is 0-separating. The existence of s-separating parity-check matrices for every nonnegative integer s ≤ d − 1 is shown in the next section. Example 2: Let C be the binary extended Hamming code with parity-check matrix H given in (2). Suppose S = {2}. Then, from (5) and (6), we have Z(H, S) = {1, 2, 6} and H(S)_{[n]\S} is as given in (3). This is a parity-check matrix of the code C_S. Hence, H separates the set S = {2}. It can be checked that H is actually 1-separating. If S is the pair of positions erased in the second received word of Example 1, then Z(H, S) = {2, 6} and H(S)_{[n]\S} is the matrix given in (4), which


is a parity-check matrix of the code C_S. Therefore, H separates S. However, H is not 2-separating, since there is a set of size 2 that it does not separate: for such a set S′, the matrix H(S′)_{[n]\S′} is not a parity-check matrix of the code C_{S′}. The following lemma gives a simple necessary and sufficient condition, which does not involve punctured codes, for a parity-check matrix of a code to separate a set. Lemma 2: A parity-check matrix H of an [n, k, d] linear code C separates a set S of size |S| ≤ d − 1 if and only if H(S)_{[n]\S} has rank n − |S| − k. Proof: First, note that C_S is a linear code of length n − |S| and dimension k. From the definition, if H separates S, then H(S)_{[n]\S} is a parity-check matrix of C_S and, therefore, has rank n − |S| − k. On the other hand, from Lemma 1, if H(S)_{[n]\S} has rank n − |S| − k, then it is a parity-check matrix of C_S. In the next result, we show that if s ≤ d − 2 or d ≤ n − k, then it suffices to check that a parity-check matrix separates all sets of size s to conclude that it is s-separating, i.e., that it also separates all sets of size smaller than s. By the Singleton bound (see, e.g., [16]), this condition holds for all codes and all values of s ≤ d − 1 except if s = d − 1 and the code is MDS, i.e., d = n − k + 1. This exceptional case will be addressed in Section IV-C, where it will be shown that any parity-check matrix of an MDS code separates all sets of size d − 1 but not necessarily smaller sets. Lemma 3: Let H be a parity-check matrix of an [n, k, d] linear code C over GF(q), and let s satisfy s ≤ d − 2 or d ≤ n − k. If H separates all sets of size s, then it is s-separating. Proof: We may assume that s ≥ 1. It suffices, by induction, to prove that H separates an arbitrary subset S of size s − 1. If the rank of H(S)_{[n]\S} is zero, then the rank of H(S ∪ {j})_{[n]\(S∪{j})}, being the rank of a submatrix of H(S)_{[n]\S}, is also zero for any j ∉ S. This contradicts Lemma 2, as H separates S ∪ {j}, and hence H(S ∪ {j})_{[n]\(S∪{j})} has rank n − s − k ≥ 1. Therefore, H(S)_{[n]\S} contains at least one nonzero element. Let the column of H(S)_{[n]\S} indexed by j be nonzero. Then, H(S ∪ {j})_{[n]\(S∪{j})} is a submatrix of H(S)_{[n]\S} with the column indexed by j deleted and with every row having a nonzero entry in that column removed. Since the column indexed by j contains at least one nonzero element, it follows that the rank of H(S)_{[n]\S} is at least one more than the rank of H(S ∪ {j})_{[n]\(S∪{j})}. By Lemma 2, the rank of H(S ∪ {j})_{[n]\(S∪{j})} equals n − s − k. Therefore, the rank of H(S)_{[n]\S} is at least equal to n − s − k + 1 = n − |S| − k. By Lemma 1, H(S)_{[n]\S} has rank n − |S| − k. We conclude, from Lemma 2, that H separates S. Parity-check matrices which are s-separating do not contain any nonempty stopping sets of size s or less if s ≤ d − 2 or d ≤ n − k, as shown in the next result. Thus, once the errors are corrected, these matrices can be used to recover the erased symbols one by one without solving systems of equations, as shown in Example 1. Theorem 1: Let H be an s-separating parity-check matrix of an [n, k, d] linear code C, where s ≤ d − 2 or d ≤ n − k. Then, H has no nonempty stopping set of size s or less. Proof: Since the result is trivial in case s = 0, we may assume that s ≥ 1. From Lemma 3, it suffices to show that no arbitrary subset S of size s is a stopping


set. Pick an element j ∈ S and define S′ = S \ {j}. Clearly, |S′| = s − 1. If Z(H, S′) = Z(H, S), i.e., if for all rows i such that h_{i,l} = 0 for all l ∈ S′ we also have h_{i,j} = 0, then the column indexed by j in H(S′)_{[n]\S′} is all-zeros. Hence, H(S)_{[n]\S} can be obtained by deleting the column indexed by j in H(S′)_{[n]\S′}, which is all-zeros. Therefore, H(S)_{[n]\S} has the same rank as H(S′)_{[n]\S′}. This contradicts Lemma 2, since H separates both S and S′. We conclude that Z(H, S′) ≠ Z(H, S). Let i ∈ Z(H, S′) \ Z(H, S). Then, h_{i,l} = 0 for all l ∈ S′ and h_{i,j} ≠ 0, i.e., the i-th parity-check equation checks exactly one symbol in the positions of S. This proves that S is not a stopping set.

B. Constructions

1) Basic Constructions: So far, we defined and characterized separating matrices. However, we did not show how to construct them or even whether or not they exist. Here, we tackle this issue by giving explicit constructions of s-separating parity-check matrices for any linear code, C, over GF(q) and any nonnegative integer s ≤ d − 1. We use H^f to denote a full-rank parity-check matrix of C. Let H* denote a matrix composed of all the nonzero normalized codewords in the dual code C⊥ as rows. Since any row in an s-separating parity-check matrix of the code C is a multiple of a row in H*, we conclude that a necessary and sufficient condition for the code C to have an s-separating parity-check matrix is that H* is s-separating for C. The next theorem shows that H* is indeed an s-separating parity-check matrix for every s ≤ d − 1. Theorem 2: Let C be an [n, k, d] linear code over GF(q) and H* be a matrix consisting of all the nonzero normalized codewords in C⊥ as rows. Then, for any nonnegative integer s ≤ d − 1, H* is an s-separating parity-check matrix for C. Proof: From its definition, H* is a parity-check matrix for C and has (q^{n−k} − 1)/(q − 1) rows. It remains to show that it is s-separating. Let H^f be a full-rank parity-check matrix for C and S be an arbitrary subset of [n] of size s ≤ d − 1. Since s ≤ d − 1, the s columns of H^f indexed by S are linearly independent, so H^f_S has rank s. As H^f has rank n − k, which is at least equal to s, there is a subset T ⊆ [n] \ S of size n − k − s such that H^f_{S∪T} has rank n − k. Hence, the row space of H^f contains n − k vectors such that each vector has exactly a single nonzero element in a unique position indexed by an element in S ∪ T. These vectors are linearly independent. Among these vectors, there are n − k − s vectors with zeros in all positions indexed by S.
These vectors, multiplied possibly by nonzero elements in GF(q) for normalization, are rows in H*. Deleting all positions in S from these vectors, which are occupied by zeros, gives n − s − k linearly independent rows of H*(S)_{[n]\S}, which proves that H* separates S. As S is arbitrary, we conclude that H* is s-separating for every s ≤ d − 1. From the above, it follows that for any linear code, s-separating parity-check matrices do exist for any nonnegative integer s less than the Hamming distance of the code. In particular, H* is such a matrix. However, for large values of n − k and q, the large number of rows in this matrix, (q^{n−k} − 1)/(q − 1), makes it difficult to store and process it for the purpose of decoding. Therefore, we are interested in s-separating parity-check matrices with a smaller number of rows. In the following result, we show that by keeping only the rows in H* of weight at most k + 1, we obtain a parity-check matrix which is s-separating for every s ≤ d − 1.
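Theorem 2 can be verified mechanically for a small code. The sketch below is our own: it uses the [8, 4, 4] extended Hamming code, which is self-dual, so the rows of the matrix of all nonzero normalized dual codewords (over GF(2), every nonzero vector is normalized) are just its own nonzero codewords; separation is then checked with the rank criterion of Lemma 2.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    pivots = []
    for r in rows:
        for p in pivots:
            r = min(r, r ^ p)
        if r:
            pivots.append(r)
    return len(pivots)

n, k = 8, 4
# Full-rank parity-check matrix of the [8,4,4] extended Hamming code:
# column j is (1, j2, j1, j0), an overall-parity bit atop the bits of j.
H = [[1] * 8] + [[(j >> b) & 1 for j in range(8)] for b in (2, 1, 0)]

# Theorem 2's matrix: all 2^4 - 1 nonzero codewords of the dual code
# (the row space of H; the code is self-dual).
Hstar = set()
for m in range(1, 16):
    w = [0] * 8
    for i in range(4):
        if (m >> i) & 1:
            w = [(a + b) % 2 for a, b in zip(w, H[i])]
    Hstar.add(tuple(w))
Hstar = sorted(Hstar)

def separates(M, S):
    """Lemma 2: M separates S iff its rows that vanish on S, restricted
    to the complement of S, have rank n - |S| - k."""
    rows = [r for r in M if all(r[j] == 0 for j in S)]
    masks = [sum(bit << p for p, bit in
                 enumerate(r[j] for j in range(n) if j not in S))
             for r in rows]
    return gf2_rank(masks) == n - len(S) - k

# d = 4, so the 15-row matrix is s-separating for every s <= d - 1 = 3 ...
assert all(separates(Hstar, S)
           for s in range(4) for S in combinations(range(n), s))
# ... while the full-rank H alone is not even 1-separating:
assert not separates(H, (7,))  # no row of H vanishes on position 7
print("all", len(Hstar), "dual codewords: 3-separating")
```

The last assertion illustrates why redundant rows are needed: the full-rank matrix has no equation avoiding the all-ones column.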


Theorem 3: Let C be an [n, k, d] linear code over GF(q) and H** be a matrix consisting of all the nonzero normalized codewords in C⊥ of weight at most k + 1 as rows. Then, for any nonnegative integer s ≤ d − 1, H** is an s-separating parity-check matrix for C. Proof: The proof follows from that of Theorem 2 by noticing that the n − s − k linearly independent vectors, each having exactly a single nonzero element in a unique position indexed by an element in S ∪ T, are of weight at most k + 1. Hence, these vectors, multiplied possibly by nonzero elements in GF(q) for normalization, are rows in H**. The matrix H** may have considerably fewer rows than the matrix H*. However, to determine the exact size of H**, we need to know the number of nonzero codewords in C⊥ of weight less than or equal to k + 1. In the following, we give constructions of s-separating parity-check matrices for any given value of s. Depending on the code and on this value of s, these constructions may give s-separating parity-check matrices with considerably fewer rows than H* and H**. 2) Constructions Based on Covering Designs: In the constructions presented in this section and Section III-B3, let H^f denote a full-rank parity-check matrix for the code C and let 1 ≤ s ≤ d − 1. From Lemma 3, it suffices to show that the constructed parity-check matrices separate all sets of size s to conclude that they are s-separating. (As mentioned before Lemma 3, the condition only excludes the case in which s = d − 1 and the code is MDS. It will be shown in Section IV-C that any (d − 2)-separating parity-check matrix for an MDS code is also (d − 1)-separating. In particular, if the code is MDS, the following constructions for s = d − 2 yield (d − 1)-separating parity-check matrices.) In the constructions presented in this section, we use covering designs. Recall that an (n, m, s) covering design, where s ≤ m ≤ n, is a collection, B, of subsets of [n] of size m, called blocks, such that every subset of [n] of size s is contained in at least one block. Let C(n, m, s) be the minimum size, |B|, of an (n, m, s) covering design.
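Covering designs themselves are easy to generate suboptimally and to check; a greedy sketch (our own, with no claim of minimality):

```python
from itertools import combinations

def greedy_covering_design(n, m, s):
    """An (n, m, s) covering design built greedily: m-subsets (blocks) of
    {0,...,n-1} such that every s-subset lies in at least one block."""
    uncovered = set(combinations(range(n), s))
    blocks = []
    while uncovered:
        # take the m-subset covering the most still-uncovered s-subsets
        best = max(combinations(range(n), m),
                   key=lambda b: sum(set(t) <= set(b) for t in uncovered))
        blocks.append(best)
        uncovered = {t for t in uncovered if not set(t) <= set(best)}
    return blocks

blocks = greedy_covering_design(7, 3, 2)
# every pair out of 7 points is inside some block
assert all(any(set(t) <= set(b) for b in blocks)
           for t in combinations(range(7), 2))
print(len(blocks), "blocks")  # the optimum C(7,3,2) is 7 (a Fano plane)
```

The tables cited below give the known minimum sizes; the greedy output is only an upper bound on C(n, m, s).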
A vast amount of literature and tables regarding the minimum sizes of covering designs is available (see, e.g., [2] and [13]). In order to construct an s-separating parity-check matrix for an [n, k, d] linear code, C, we will use an (n, m, s) covering design B, where m is an integer such that s ≤ m ≤ d − 1. Let S_i, for 1 ≤ i ≤ binomial(n, s), be the distinct subsets of [n] of size s. We assign to each subset S_i a block of the covering design containing it. For each block B_j, let I_j be the set of indices i for which block B_j is assigned to S_i. Since m ≤ d − 1, any m columns of H^f are linearly independent. Therefore, by elementary row operations on H^f, we can obtain an (n − k) × n matrix H_j, of rank n − k, such that its last n − k − m rows have zeros in the positions indexed by B_j. Furthermore, by elementary row operations on the first m rows of H_j, we can obtain, for each i ∈ I_j, an (n − k) × n matrix H_{j,i}, still of rank n − k, with m − s rows having zeros in the positions indexed by S_i and with n − k − m rows having zeros in the positions indexed by B_j. For example, in


the case of small parameters, the resulting matrix takes the block form just described.

Let H′ denote the matrix whose set of rows is the union of the last n − k − m rows in H_j for each block B_j and the m − s rows of H_{j,i} for each i ∈ I_j. Theorem 4: The matrix H′ is an s-separating parity-check matrix for the [n, k, d] linear code C, where s ≤ m ≤ d − 1, and the number of rows in this matrix is at most (n − k − m)|B| + (m − s) binomial(n, s).

Proof: Clearly, the number of rows in H′ is as given in the statement of the theorem. First, we will show that H′ is indeed a parity-check matrix of C. Since every row of H′ is in the row space of the parity-check matrix H^f, the null space of H′ contains C and its rank is at most n − k. We have to show that its rank is not less than n − k. Since H^f has rank n − k, there is a set T of size n − k such that H^f_T has rank n − k. Then, for each i ∈ T, there is a vector, v^(i), in the row space of H^f that has a nonzero entry in position i and zeros in all other positions in T. Let S_l be a subset of T \ {i} of size s, and let B_j be its assigned block. The m − s rows of H_{j,l} having zeros in the positions indexed by S_l, together with the last n − k − m rows of H_j, are n − k − s linearly independent vectors in the row space of H^f vanishing on S_l; hence, they span the space of all such vectors, which contains v^(i). Since these rows are in H′, the vector v^(i) is in the row space of H′. As the vectors v^(i), for i ∈ T, are linearly independent, it follows that the rank of H′ is at least n − k. Hence, H′ is a parity-check matrix of C. Next, we show that H′ separates the set S_i for each i. Notice that H′ contains the last n − k − m rows of H_j, where B_j is the block assigned to S_i, and the m − s rows of H_{j,i}, all of which have zeros in the positions indexed by S_i and which are linearly independent. In particular, |Z(H′, S_i)| ≥ n − k − s and, consequently, the rank of H′(S_i)_{[n]\S_i} is at least equal to n − s − k. The rank cannot be larger because of Lemma 1. We conclude that H′(S_i)_{[n]\S_i} has rank n − s − k, and from Lemma 2, H′ separates S_i. This is true for all i, which proves that H′ separates all sets of size s. To minimize the upper bound on the number of rows in H′, the covering design should be optimal, i.e., |B| = C(n, m, s), the minimum size of an (n, m, s) covering design. As an application of Theorem 4, we set m = s. In this case, an (n, s, s) covering design can be constructed by taking all the subsets of [n] of size s as blocks. This covering design is optimal, i.e., no (n, s, s) covering design has fewer blocks, and C(n, s, s) = binomial(n, s). For this covering design, the number of rows of H′ is upper bounded by

(n − k − s) binomial(n, s) (7)

Better, i.e., smaller, bounds can be obtained by choosing m > s, even for covering designs that are not necessarily optimal. Consider the subsets of [n] of size m of the form X ∪ Y, where X is a subset of


[n − m + s] = {1, ..., n − m + s} of size s and Y = {n − m + s + 1, ..., n} is fixed. These subsets constitute the blocks of an (n, m, s) covering design. The number of blocks is

binomial(n − m + s, s) (8)

which is the number of subsets X of size s. Substituting this number for |B| in the expression given in Theorem 4, we note that

(n − k − m) binomial(n − m + s, s) + (m − s) binomial(n, s) ≤ (n − k − s) binomial(n, s)

where the right-hand side is the expression given in (7) and the inequality follows from the choice m ≥ s. Hence, the upper bound on the number of rows can be reduced by choosing m > s. For s = 1, we may improve this construction as follows. Since the rank of H^f is n − k, it has n − k linearly independent columns. Let T be the set of indices of such columns. Then, by elementary row operations if necessary, we may assume that H^f is such that its restriction to the columns indexed by T is an identity matrix. Let B be a covering design whose blocks are subsets of [n] of size m, where m is an integer such that 1 ≤ m ≤ d − 1, covering every position i not in T. We assign to each i ∉ T a block of B containing i. For each block B_j, let I_j be the set of indices i for which block B_j is assigned to i. Since any m columns in H^f are linearly independent as m ≤ d − 1, by elementary row operations on H^f, we can obtain an (n − k) × n matrix H_j, of rank n − k, such that its last n − k − m rows have zeros in the positions indexed by B_j. Furthermore, by elementary row operations on the first m rows of H_j, we can obtain, for each i ∈ I_j, an (n − k) × n matrix H_{j,i}, still of rank n − k, with m − 1 rows having zeros in the i-th column and with n − k − m rows having zeros in the positions indexed by B_j. Let H″ denote the matrix whose set of rows is the union of the last n − k − m rows in H_j for each j, the m − 1 rows of H_{j,i} for each i ∉ T, and the n − k rows of the matrix H^f. Theorem 5: The matrix H″ is a 1-separating parity-check matrix for the [n, k, d] linear code C, where d ≥ 2, and the number of rows in H″ is at most

Proof: Since the rows of H″ are linear combinations of the rows of H^f, which is itself a submatrix of H″, the matrix H″ is a parity-check matrix of C. As the restriction of H^f to the columns indexed by T is the identity matrix, for every i ∈ T, there are n − k − 1 linearly independent rows in H^f, and therefore also in H″, that have zeros in their i-th component. This proves that H″ separates the set {i} for i ∈ T. If i ∉ T, then i ∈ B_j for the block B_j assigned to i. The last n − k − m rows of H_j and the m − 1 rows of H_{j,i} are linearly independent and have zeros in their i-th component. This proves that H″ separates the set {i} for i ∉ T.
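For s = 1, the row-operation step can be sketched end to end on the [7, 4, 3] Hamming code, using singleton blocks (the m = s case of the covering-design construction); the matrix H and all names below are our own choices, and the result is checked against the criterion of Lemma 2.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2); rows are integer bitmasks."""
    pivots = []
    for r in rows:
        for p in pivots:
            r = min(r, r ^ p)
        if r:
            pivots.append(r)
    return len(pivots)

n, k = 7, 4
H = [[1, 0, 1, 0, 1, 0, 1],   # a full-rank parity-check matrix of the
     [0, 1, 1, 0, 0, 1, 1],   # [7,4,3] Hamming code
     [0, 0, 0, 1, 1, 1, 1]]

def rows_vanishing_at(M, j):
    """Row-reduce against a pivot so the remaining rows vanish on column j."""
    pivot = next(r for r in M if r[j])
    return [[(a + b) % 2 for a, b in zip(r, pivot)] if r[j] else r[:]
            for r in M if r is not pivot]

# Union, over every block {j}, of the n-k-1 rows zeroed there, plus H itself.
Hsep = [r[:] for r in H]
for j in range(n):
    for r in rows_vanishing_at(H, j):
        if r not in Hsep:
            Hsep.append(r)

def separates(M, S):
    """Lemma 2: rows vanishing on S, restricted to the complement of S,
    must have rank n - |S| - k."""
    rows = [r for r in M if all(r[j] == 0 for j in S)]
    masks = [sum(b << p for p, b in
                 enumerate(r[j] for j in range(n) if j not in S))
             for r in rows]
    return gf2_rank(masks) == n - len(S) - k

assert all(separates(Hsep, (j,)) for j in range(n))
print(len(Hsep), "rows give a 1-separating parity-check matrix")
```

Every collected row is a linear combination of rows of H, hence a dual codeword, so the result is indeed a (redundant) parity-check matrix of the code.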


3) Constructions Based on Generic Matrices: The concept of generic erasure-correcting sets was first developed by Hollmann and Tolhuizen [11], [12] to decode iteratively over channels causing erasures only. A generic m-erasure-correcting set, A, is a set of vectors of length n − k such that, given a full-rank parity-check matrix H^f of an [n, k, d] linear code C, the matrix whose rows are of the form aH^f for all a ∈ A is a parity-check matrix for C that allows iterative decoding to retrieve the erased symbols in any correctable erasure pattern of size at most m. The term "generic" refers to the fact that A produces such a parity-check matrix for any code C and any full-rank parity-check matrix H^f for C, as long as the number of rows in H^f equals the length of the vectors in A. The generic set A can be conveniently represented by a "generic" matrix A whose rows are the vectors in A of length n − k. Then, the resulting parity-check matrix is AH^f, and its number of rows is the same as the number of rows of A, which is the same as the size of the set A. In [11], [12], and more recently in [1], constructions of generic sets and bounds on their sizes are presented. Here, we extend the concept of generic matrices to construct separating parity-check matrices. In particular, for a nonnegative integer s, we say that a matrix A over GF(q) with n − k columns is a generic s-separating matrix if, for any full-rank parity-check matrix, H^f, of an [n, k, d] linear code C with s ≤ d − 1, the matrix AH^f is an s-separating parity-check matrix for C. Let A_s be a matrix over GF(q) whose rows are all the nonzero normalized vectors of length n − k and weight at most s + 1. Define H^g = A_s H^f, where H^f is an (n − k) × n full-rank parity-check matrix for the [n, k, d] linear code C. If s ≤ d − 1, the following result shows that H^g is an s-separating parity-check matrix for C and, in particular, it implies that A_s is a generic s-separating matrix. Theorem 6: The matrix H^g is an s-separating parity-check matrix for the [n, k, d] linear code C, where s ≤ d − 1, and the number of rows in this matrix is

Σ_{w=1}^{s+1} binomial(n − k, w) (q − 1)^{w−1}

Proof: The number of rows of is the same as the number of rows of , which is as given in the statement of the theorem. In the following, we use to denote . Since is a submatrix of and all the rows of are linear combinations of the rows of , is a parity-check matrix for . The theorem clearly holds if . So, we assume in the following that . First, we will show that separates any subset of of size . Let be the set of all rows in such that . For each nonzero normalized vector of length over , let be the set of all rows in such that is a nonzero multiple of . Let be the set of nonzero normalized vectors such that is nonempty. Clearly (9) Since contains all normalized vectors of weight 1 as rows, contains, as rows, all the vectors in . For each ,


pick a row in and suppose that is a row in other than . Since contains all normalized vectors of weight 2 as rows, then contains, as a row, a nonzero multiple of the vector for each nonzero element . Clearly, for some , one of these vectors, denoted by , is such that and contains, as a row, a nonzero multiple of this vector. Hence, contains rows, each is a nonzero multiple of , where and . So far we counted vectors in . Since has Hamming distance , then has rank . In particular, there are vectors that form a basis for the -dimensional vector space over . Hence, . If , then every vector , where , for , can be added to a linear combination of such that the resulting vector is such that . As contains as rows all nonzero normalized vectors of weight up to , a nonzero multiple of belongs to . There are such vectors. Thus, from (9), we already counted

vectors in . These vectors are linearly independent since they can be obtained by elementary row operations on the rows of , which has full rank. This proves that , and consequently , have rank at least . From Lemma 1, it follows that has rank . Therefore, from Lemma 2, separates . Since is an arbitrary set of size , is an -separating parity-check matrix of .

In case contains a codeword of weight , we can obtain a 1-separating parity-check matrix with fewer rows. First, let be a full-rank parity-check matrix of containing as its last row. We construct a matrix such that is a 1-separating parity-check matrix of . The matrix is a submatrix of the generic -separating matrix used in the construction of . The matrix contains only normalized rows (instead of the rows of ) of weight one and rows (instead of the rows of ) of weight two. Each of the rows in of weight one has 1 in a position among the first positions. Each of the rows in of weight two has 1 in a position among the first positions and a nonzero element in in the last position.

Theorem 7: The matrix is a 1-separating parity-check matrix for the linear code whose dual code contains a codeword of weight , where , and the number of rows in this matrix is

Proof: For ease of notation, let . Clearly, the matrix has rank , and as has rank , it follows that also has rank and it is a parity-check matrix for . It remains to show that is 1-separating, i.e., that has rank for any given . The matrix consists of the rows in of the form , where is the th row of , , is the th coordinate of this row, and is the multiplicative inverse of . These rows are linearly


independent, and so has rank . Since is obtained by deleting an all-zero column from , it also has rank .

4) A Construction for Cyclic Codes: Continuing with the case , we give a construction of a 1-separating parity-check matrix for cyclic codes. Let be an cyclic code over , i.e., is linear and the cyclic shift of every codeword is a codeword. Then, its dual, , is an code, which is also cyclic and has a codeword such that , , and (see, e.g., [14] and [17]). Let be the circulant matrix whose th row, for , is cyclically shifted to the right times. Then, has the following form:

Since is a cyclic code, all the rows of are in .

Theorem 8: The matrix is a 1-separating parity-check matrix for the cyclic code if .

Proof: For , the th row in has a run of cyclically consecutive zeros, and this run is cyclically followed by a nonzero element in the th column. Hence, for , the th column in has a run of cyclically consecutive zeros, and this run cyclically follows a nonzero element in the th row. It follows that any cyclically consecutive rows are linearly independent. Therefore, is a parity-check matrix for , and for every , the set contains the indices of cyclically consecutive rows in . These rows are linearly independent, and , which is obtained by deleting the th component, which is zero, from each of these rows, has rank . This proves that separates for every .

IV. SEPARATING REDUNDANCY

A. Definition

Let be an linear code over . It has been shown in Section III-B that has a parity-check matrix which is -separating for each , . We define the -separating redundancy, , of to be the minimum number of rows of an -separating parity-check matrix of . Determining the exact value of for an arbitrary code and an arbitrary value of seems to be a difficult problem. Therefore, next, we consider bounds on , and later we determine for some specific codes and values of .

B. Bounds

1) Lower Bound: First, we start by giving a lower bound on the separating redundancy.

Theorem 9: Let be an linear code over . Then, for , we have


Proof: Let be an -separating parity-check matrix of . Consider the collection of submatrices for all subsets of size . The number of distinct nonzero codewords in that appear in these matrices is a lower bound on the number of rows of . Each submatrix has rank and, in particular, has at least rows. Each row in has zeros in the positions indexed by . If is a nonzero codeword in of weight , then it appears in at most such submatrices. This number is at most as . There are matrices, each with rows. Therefore, the total number of distinct rows is at least as given by the lower bound in the statement of the theorem.

2) Constructive Upper Bounds: Based on the explicit constructions of separating parity-check matrices given in Section III-B, we can obtain upper bounds on . The following two results follow directly from Theorems 2 and 3, respectively.

Corollary 1: Let be an linear code over . Then, for , we have

Corollary 2: Let be an linear code over . Then, for , is at most equal to the number of nonzero normalized codewords in of weight at most .

Next, we consider upper bounds on the separating redundancy based on Theorems 4 and 5. To get the sharpest bounds, we minimize the size of the covering design used in the constructions. In particular, we choose to be optimal, i.e., in the construction of in Theorem 4 and , which clearly equals , in the construction of in Theorem 5. We also choose to minimize the upper bound on the number of rows in the matrices and . This yields the following two results, where in the first result we upper bounded by given in (8).

Corollary 3: Let be an linear code over . Then, for , we have

In particular

Corollary 5: Let be an linear code over . Then, for , we have

In case contains a codeword for which all coordinates are nonzero, Theorem 7 is applicable and gives the following corollary. Corollary 6: Let be an linear code over such that . If the dual code has a codeword of weight , then

In case the code is cyclic, the following result follows readily from Theorem 8. Corollary 7: Let be an cyclic code over such that . Then, we have

3) Nonconstructive Upper Bounds: The previous upper bounds on are obtained by giving explicit constructions of parity-check matrices that are -separating. Here, we give an upper bound by proving the existence of an -separating parity-check matrix of , with a prescribed number of rows, without explicitly specifying its construction. Our approach is similar to the probabilistic approach leading to the Hollmann–Tolhuizen, the Han–Siegel, and the Han–Siegel–Vardy bounds on the stopping redundancy of linear codes [7], [10], [11]. However, we replace the probabilistic argument by a combinatorial one as has been done in [1]. Basically, the argument goes as follows. For a given set of size , we count the number of parity-check matrices that separate . Clearly, this number is the same for all sets of the same size . We denote this number by . Next, we show that if is large enough, then by the pigeon-hole principle, there is an parity-check matrix that separates every set of size . From Lemma 3, it follows that is -separating. First, we start with a lemma counting the number of matrices over of full rank that have no all-zero rows. The lemma uses the well-known result (see, e.g., [6]) that the number of matrices over of rank is given by (10)
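The count in (10) is the classical formula for the number of matrices of a given rank over a finite field. A minimal sketch evaluating it and cross-checking it by brute force over GF(2); the function names are our illustration, not the paper's notation:

```python
from itertools import product

def count_matrices_of_rank(q, m, n, r):
    """Number of m x n matrices over GF(q) of rank r, i.e., the classical
    count behind (10):
        prod_{i=0}^{r-1} (q^m - q^i)(q^n - q^i) / prod_{i=0}^{r-1} (q^r - q^i).
    The division is exact only for the full products, so divide once at the end."""
    num = den = 1
    for i in range(r):
        num *= (q**m - q**i) * (q**n - q**i)
        den *= q**r - q**i
    return num // den

def gf2_rank(rows):
    """Rank over GF(2); rows are sequences of 0/1 entries."""
    pivots = {}  # leading column -> reduced row
    rank = 0
    for row in rows:
        row = list(row)
        col = next((c for c, v in enumerate(row) if v), None)
        while col is not None and col in pivots:
            row = [a ^ b for a, b in zip(row, pivots[col])]
            col = next((c for c, v in enumerate(row) if v), None)
        if col is not None:
            pivots[col] = row
            rank += 1
    return rank

def brute_force_count(m, n, r):
    """Exhaustively count m x n binary matrices of rank r (small m, n only)."""
    total = 0
    for bits in product([0, 1], repeat=m * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(m)]
        if gf2_rank(rows) == r:
            total += 1
    return total
```

For instance, `count_matrices_of_rank(2, 2, 2, 2)` recovers the 6 invertible binary 2 × 2 matrices, and the exhaustive count agrees with the formula for all small cases.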

Corollary 4: Let be an linear code over such that . Then, we have

If , then . In this case, it is possible to show that Corollaries 3 and 4 agree if and Corollary 4 improves upon Corollary 3 if . On the other hand, from Theorem 6, we obtain the following result.

For nonnegative integers and , define

(11)

Lemma 4: The number of matrices over of rank without all-zero rows is given by .

Proof: Let be a subset of of size . From (10), the number of matrices of rank such that all rows indexed by are all-zeros is given by . This follows by noticing that deleting the rows indexed by results



in an matrix of rank (which may still contain all-zero rows). The expression in the lemma follows by using the Principle of Inclusion and Exclusion (see, e.g., [3, Th. B, p. 178]).

In the following result, we give an expression for .

Lemma 5: The number of parity-check matrices separating a given set of size , where , is given by

Proof: Let . Consider an parity-check matrix of . Since the columns indexed by are linearly independent, by elementary row operations on , we can obtain an parity-check matrix such that the column indexed by , , has a unique nonzero entry and this entry, which is 1, is in the th row. In particular, the submatrix of whose rows are indexed by and whose columns are indexed by is an identity matrix, and the submatrix of whose rows are indexed by and whose columns are indexed by is of full rank, since itself is of full rank. (For example, in case ,

where is the matrix, is the identity matrix, is an arbitrary all-zero matrix, and is an matrix of rank .) Any parity-check matrix of can be written uniquely as , where is an matrix over of rank . From Lemma 2, the parity-check matrix separates if and only if has rank , where is the set of indices of rows all whose elements in the columns indexed by are zeros. From the structure of , it follows that is a parity-check matrix of separating if and only if has rank , where is an all-zeros matrix (since if it has a nonzero row, then the index of this row does not belong to ), is an matrix of rank , and none of the rows of the matrix is all-zeros (since if it has an all-zero row, then the index of this row belongs to ). (For example, in case and ,

where is the all-zero matrix, is an matrix of rank , is a matrix with no all-zero rows, and is an arbitrary matrix.) With the second and third conditions, the first condition, that the rank of is , is equivalent to the condition that the matrix has rank . We count the number of matrices satisfying the above conditions. First, from (10), there are choices for , and from Lemma 4, there are choices for . There are choices for the matrix . Finally, notice that there are choices for the subset of size . Multiplying the number of all choices and summing over proves the expression for stated in the lemma.

We point out that setting in Lemma 5 and noticing, from (11), that

if
if

we get the number of parity-check matrices of , which equals

(12)

Lemma 6: Let be an linear code over and . There exists an parity-check matrix of which is -separating if

Proof: From Lemma 5, it follows that for every set of size , there are exactly parity-check matrices, of size , for the code that separate . Hence, there are parity-check matrices that do not separate . Since there are subsets of of size , we conclude that there exists a parity-check matrix of size which separates all sets of size if

This inequality is equivalent to the one in the statement of the lemma. As separates all sets of size , it follows from Lemma 3 that is -separating.

Theorem 10: Let be an linear code over and . Then, is upper bounded by the smallest integer satisfying the inequality

Proof: The result follows by substituting the expressions for and given in Lemma 5 and (12), respectively, in the bound of Lemma 6.

Theorem 10 is our main result. However, due to the complicated expression involved, it seems difficult to derive a simple upper bound on the separating redundancy based on this theorem. By weakening the bound, the following result can be obtained.

Theorem 11: Let be an linear code over and . Then



TABLE I: BOUNDS ON , FOR THE BINARY GOLAY CODE

Proof: See the Appendix.

From Theorem 11, it follows that is upper bounded by a linear function of under the conditions that and are fixed and grows at least as fast as . Combined with the lower bound of Theorem 9, we know that under these conditions, tends to increase linearly with . The following result gives a precise expression for this conclusion.

Corollary 8: Consider a sequence of linear codes , , over a fixed finite field where and . Then, for every fixed , the -separating redundancy of , which is lower bounded by for , satisfies

Example 3: As an example of the application of the derived bounds for a specific code, we consider the binary Golay code [16]. The various bounds on the separating redundancy , for , that are applicable to this code are shown in Table I. Recall that the binary Golay code is self-dual, i.e., it is the same as its dual. The number of nonzero codewords in the binary Golay code of weight at most 13 needed for the upper bound of Corollary 2 can be determined from its weight distribution [16]. In case the values of to be used in Corollary 3 are not known, we use the smallest sizes of known covering designs as reported in [13]. The Golay code has an all-ones codeword, and therefore Corollary 6 is applicable, but it is not cyclic, and therefore Corollary 7 is not applicable.

C. Separating Redundancies for Some Codes

1) Binary Extended Hamming Codes: The binary extended Hamming code is a linear code over , where , that has a parity-check matrix of the form

where is an binary matrix whose columns are the distinct binary vectors of length and is the all-ones row vector of length . This code is also known as the Reed–Muller code . We have the following result.

Proposition 1: Let be a binary extended Hamming code, where . Then, .


Proof: It is known that the dual code of is the Reed–Muller code , which is a binary linear code [16]. This code consists of one all-zero codeword, one all-ones codeword, and codewords of weight . Since , we have from Theorem 9. Equality follows from Corollary 6. Theorem 7 gives a construction of a 1-separating parity-check matrix for with rows. This matrix has the form

where is the complement of the matrix , , obtained by adding the all-ones vector to every row in . For , this matrix is given in (2).

2) MDS Codes: Any linear code satisfies the Singleton bound [16]. If equality holds, then the code is said to be MDS [16]. It is well known that the dual code, , of an MDS linear code is an MDS code, and for any subset of size , there is exactly one normalized codeword in whose support is [16]. Therefore, there are exactly

(13)

normalized codewords in of weight . Recall that Lemma 3 and Theorem 1 are applicable to all codes and all values of except if and , i.e., and the code is MDS. The following proposition addresses this exceptional case.

Proposition 2: Let be an MDS linear code over , i.e., . Then, any parity-check matrix, , of separates all sets of size . In particular, any -separating parity-check matrix of is -separating.

Proof: The dual code, , is MDS with Hamming distance . Hence, every nonzero row in , being a codeword in , has weight at least . If , , then no nonzero row in has zeros in all positions indexed by and the rank of is zero. From Lemma 2, it follows that separates . Therefore, the -separating parity-check matrices constructed in Theorems 4–8 or shown to exist in Theorems 10 and 11 are also -separating. More generally, for an MDS code . The following result gives the exact value of these separating redundancies.
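The one-normalized-codeword-per-support property of MDS codes quoted above can be verified exhaustively on a toy code. A sketch assuming a [4, 2] Reed–Solomon code over GF(5), so that the minimum distance is 3; this particular code and the helper names are our illustration, not an example from the paper:

```python
from itertools import product

Q, N, K = 5, 4, 2          # a [4, 2] Reed-Solomon code over GF(5); d = N - K + 1 = 3
POINTS = [0, 1, 2, 3]      # distinct evaluation points in GF(5)

def codewords():
    """All q^k codewords: evaluations of polynomials of degree < K."""
    for coeffs in product(range(Q), repeat=K):
        yield tuple(sum(c * (x ** i) for i, c in enumerate(coeffs)) % Q
                    for x in POINTS)

def normalize(cw):
    """Scale a nonzero codeword so its first nonzero entry is 1."""
    lead = next(v for v in cw if v)
    inv = pow(lead, Q - 2, Q)          # inverse in the prime field GF(5)
    return tuple(v * inv % Q for v in cw)

def support(cw):
    return frozenset(i for i, v in enumerate(cw) if v)

# Collect the normalized minimum-weight (weight d = 3) codewords by support:
# the MDS property says each 3-subset of positions carries exactly one.
by_support = {}
for cw in codewords():
    if len(support(cw)) == 3:
        by_support.setdefault(support(cw), set()).add(normalize(cw))
```

Every one of the four 3-subsets of positions occurs as a support, and each carries exactly one normalized codeword, matching the count in (13) for this toy code.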



Proposition 3: For an MDS linear code over , where ,

Proof: As just stated, Proposition 2 implies that . From Theorem 9, is lower bounded by

as and . From Corollary 2, is upper bounded by the number of normalized nonzero codewords in of weight at most . Since is MDS, those codewords have weight exactly and their number is given in (13).

Next, we consider the separating redundancy of the MDS code . For this purpose, recall that a Turán design, where , is a collection of subsets of of size , called blocks, such that every subset of of size contains at least one block [2]. We use a variation of this concept and define a Turán design to be a collection of distinct subsets of size , also called blocks, such that every subset of of size contains at least blocks. Notice that in this definition we require the blocks of the design to be distinct. Such a design exists if and only if . Provided that this is the case, let be the minimum size of a Turán design.

Proposition 4: For an MDS linear code over , where ,

Proof: From Lemmas 2 and 3, a necessary and sufficient condition for a parity-check matrix to be -separating is that for any subset of size , the set of size contains the supports of two linearly independent rows in . Notice that is the set of indices of rows whose supports are contained in . Let be a -separating parity-check matrix for that has rows. If there is a row, , of weight , then the index of this row belongs to a unique set for some subset of size . We will argue that we can replace by another vector of weight such that the resulting matrix is still a -separating parity-check matrix of . Notice that this replacement does not affect the capability to separate any set other than . The support of is the set of size . The row is not a linear combination of other rows in whose supports are contained in ; otherwise, by deleting from , we obtain a parity-check matrix which is -separating but has fewer rows than . Hence, there is another row, , in whose support is contained in such that and are linearly independent. There is a nonzero element such that is a nonzero vector whose support is of size at most . Since is a codeword in , the size of its support is exactly . As and are linearly independent, and are also linearly independent. By replacing in by , the resulting matrix is still a -separating

parity-check matrix for , where a row of weight is replaced by a row of weight . By repeating this process, we obtain a -separating parity-check matrix for that has no rows of weight . Let be the set of supports of all rows in of weight . Every subset of size contains the supports of at least two linearly independent rows of weight . The supports of these two rows are distinct; otherwise, a linear combination of them gives a nonzero codeword in of weight less than . We conclude that every subset of size contains two subsets in . This gives an Turán design. The size, , of this design is at least equal to , which is the minimum size of an Turán design. Since has at least rows, . Next, we show that if we are given an Turán design, then we can construct a -separating parity-check matrix for whose number of rows equals the number of blocks. The key point is to notice that every subset of of size is the support set of a unique normalized codeword in . This gives a one-to-one correspondence between subsets of size and normalized codewords in of weight . Let be the matrix whose rows are the codewords corresponding to the blocks of the Turán design. If is a subset of of size , then its complement of size contains two blocks, i.e., has two rows with distinct supports that are disjoint from . Hence, these two rows are linearly independent and the rank of is at least two. From Lemma 1, it follows that the rank of is two. It remains to show that is a parity-check matrix of . Let be a block in the Turán design. For every , the subset is of size and hence should contain a block other than . Notice that the blocks and , for , are distinct, and therefore, they correspond to linearly independent rows in . This proves that has rank .

Next, we consider the case for an MDS code over . Since , Theorem 9 gives the lower bound . The following result shows that equality holds if .

Proposition 5: For an linear MDS code over , where , we have .
Proof: For , let be the set of consecutive integers starting with and reduced modulo to be positive integers less than or equal to . Then, there is a normalized codeword in whose support is . Let be the matrix whose th row is this normalized codeword. The rest of the proof is the same as that of Theorem 8 with replaced by .

In all cases considered so far in which is determined for an code , an -separating parity-check matrix is constructed with rows of weight . However, this is not true in general, i.e., a code may have an -separating parity-check matrix whose number of rows is less than the number of rows in any -separating parity-check matrix all of whose rows are of weight . Indeed, consider the binary repetition code, . Suppose that is a 2-separating parity-check matrix for with rows, each having weight two. Then, the


number of ones in is at most . In particular, there is a column with no more than ones. Without loss of generality, assume that it is the first column. Choose and , where , such that each row in with a one in the first column, if any, has a one in either column or column . Then, the matrix has five columns, the first of which is all zeros and the sum of the columns is the all-zero vector as each row in has two ones. We conclude that the rank of is at most three, and therefore, does not separate and hence is not a 2-separating parity-check matrix. On the other hand, it is possible to check that

composed of nine rows of weight two and one row of weight four, is a 2-separating parity-check matrix for the binary repetition code.
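The rank criterion used throughout these proofs (by Lemma 2, a parity-check matrix separates an erasure set S when its rows that vanish on S have rank n − k − |S|, consistent with Propositions 2 and 4 above) can be tested by brute force on small binary codes. A minimal sketch; the helper names and the [4, 1] repetition-code instance are our own illustration, not an example from the paper:

```python
from itertools import combinations

def gf2_rank(rows):
    """Gaussian elimination over GF(2); rows are sequences of 0/1."""
    pivots = {}  # leading column -> reduced row
    rank = 0
    for row in rows:
        row = list(row)
        col = next((c for c, v in enumerate(row) if v), None)
        while col is not None and col in pivots:
            row = [a ^ b for a, b in zip(row, pivots[col])]
            col = next((c for c, v in enumerate(row) if v), None)
        if col is not None:
            pivots[col] = row
            rank += 1
    return rank

def separates(H, k, S):
    """Does H separate the erasure set S?  Keep the rows of H that are zero
    on every position of S and check that they have rank n - k - |S|."""
    n = len(H[0])
    clean = [r for r in H if all(r[i] == 0 for i in S)]
    return gf2_rank(clean) == n - k - len(S)

def is_l_separating(H, k, l):
    """Check separation of every erasure set of size at most l."""
    n = len(H[0])
    return all(separates(H, k, S)
               for size in range(1, l + 1)
               for S in combinations(range(n), size))

# [4, 1] binary repetition code; its dual is the [4, 3] even-weight code,
# so every weight-two vector is a valid parity check.
n, k = 4, 1
def row(i, j):
    return tuple(1 if t in (i, j) else 0 for t in range(n))

H_all_pairs = [row(i, j) for i, j in combinations(range(n), 2)]  # all 6 pairs
H_chain = [row(i, i + 1) for i in range(n - 1)]                  # only 3 rows
```

Here the matrix of all six weight-two checks is 2-separating, while the three-row "chain" matrix already fails to be 1-separating: no row of it avoids the middle position 1 except the single row (0, 0, 1, 1), which falls short of the required rank.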


V. DISCUSSION AND CONCLUSION

In this paper, we introduced the concept of -separating parity-check matrices for decoding over channels causing errors and at most erasures. We presented several constructions of such matrices and defined the -separating redundancies of linear codes to be the minimum number of rows in their -separating parity-check matrices. We derived lower and upper bounds on the -separating redundancies and determined their exact values for a limited number of codes and restricted values of . Probably, one of the important highlights of this study is showing that the -separating redundancy grows linearly for a sequence of linear codes over with , provided that and are fixed and grows at least as fast as . This linear behavior of the separating redundancy is similar to that of the stopping redundancy [11].

As Table I in Example 3 clearly shows, there is a considerable gap between the lower bound and the best upper bounds on the separating redundancies. This situation is reminiscent of that of stopping redundancy, where the known lower and upper bounds on this quantity are rather far apart. Indeed, the stopping redundancy of the binary Golay code is trivially bounded from below by 12, while the best computable upper bound, to our knowledge, is 182 [10], although computer search demonstrated that the stopping redundancy does not exceed 34 [19]. Actually, our work indicates some similarity in the difficulties faced in determining the separating redundancies and the stopping redundancies of some codes. For example, we have shown that the -separating redundancy of MDS codes of Hamming distance equals the minimum size of some variation of Turán designs. It is interesting to note that the stopping redundancy for an MDS code is lower bounded by the minimum size of Turán designs and upper bounded by the minimum size of some special Turán designs called single-exclusion systems [7], [9]. Except for some special cases, the minimum sizes of these Turán designs or their variations are not known.

In all cases considered in Section IV-C for which is determined explicitly, we find that equals the lower bound given in Theorem 9. This is not true in general. For example, let be the binary Hamming code. The dual of this code is the simplex code. For , Theorem 9 gives . However, it is possible to argue that does not have a 1-separating parity-check matrix with fewer than six rows. In fact, the 1-separating redundancy for this code is six. In general, despite the simple structure of binary Hamming codes, which are linear codes for , we do not have an explicit expression for their 1-separating redundancy that holds for all values of , although we know, from Corollary 8, that it grows linearly with . This is again similar to the case faced in determining the stopping redundancy of Hamming codes, which seems to be a notoriously difficult problem [11], [21]. Based on the above, it is quite plausible that codes for which the stopping redundancy has been successfully determined or tightly bounded, e.g., Reed–Muller codes [5], are the same codes for which the separating redundancy can be successfully determined or tightly bounded.

APPENDIX

Starting from Theorem 10, we derive Theorem 11. Let be the left-hand side of the inequality in Theorem 10. Using the upper bound

we obtain

Substituting for from (11), we get

(14)



There are two products in (14). The first product, , can be lower bounded using the bound

where and is a positive integer, and which can be easily verified using induction on . Applying this bound with , we get

(15)

Since the first product in (14) is times the number of matrices of rank without all-zero rows (see Lemma 4), we can use (15) in (14) to lower bound .

Next we express the second product in (14) as a sum. For this purpose, it is convenient to use , where , , and are nonnegative integers, to denote the number of partitions of the integer into distinct nonnegative integers, each less than or equal to . Notice that unless , in which case . Furthermore, if . As an example, since we have , , , and there is no other way to partition 7 into three distinct nonnegative integers, each less than or equal to 6. Considering and to be indeterminates, can be expanded as a sum of terms, where each term is a multiple of for some nonnegative integers and . The coefficient of in this sum is the number of partitions of the integer into distinct nonnegative integers, each less than or equal to . Using the function , we can write

(16)

Setting in (16), it follows that

(17)

for . On the other hand, from the definition, it follows that

(18)

if , and otherwise.

Setting in (16) gives

(19)

Substituting (19) into the second sum in (14) gives, for ,

On the other hand, if , then

In particular, the sum over in (14) can start with 1. Combining this with (14) and (15), we get


The sum over can be evaluated using the binomial theorem to obtain

where in the last equation we made use, again, of the fact that . Setting in (16) and using (18), we get

Hence, we can write

(20)

where

(21)

and

(22)

We lower bound and evaluate . Notice that for ,

Hence, from (21),

To evaluate given in (22), we notice by using (17) and (18) that

(23)

for . Notice that . We conclude that

(24)

where, in the last equation, we made use of the fact that . From (20), (23), and (24), we get

It follows from the definition of , as the left-hand side of the inequality in Theorem 10, that this inequality is satisfied if

Therefore, from Theorem 10, the upper bound on given in Theorem 11 follows, and the proof of Theorem 11 is complete.

ACKNOWLEDGMENT

We wish to thank Ngo Minh Tri for his contributions to the constructions of separating parity-check matrices presented in this paper.



REFERENCES

[1] R. Ahlswede and H. Aydinian, “On generic erasure correcting sets and related problems,” IEEE Trans. Inf. Theory, vol. 58, no. 2, pp. 501–508, Feb. 2012.
[2] The CRC Handbook of Combinatorial Designs, C. J. Colbourn and J. H. Dinitz, Eds. Boca Raton, FL, USA: CRC Press, 1996.
[3] L. Comtet, Advanced Combinatorics. Dordrecht, The Netherlands: Reidel, 1974.
[4] C. Di, D. Proietti, I. E. Telatar, T. J. Richardson, and R. L. Urbanke, “Finite-length analysis of low-density parity-check codes on the binary erasure channel,” IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1570–1579, Jun. 2002.
[5] T. Etzion, “On the stopping redundancy of Reed-Muller codes,” IEEE Trans. Inf. Theory, vol. 52, no. 11, pp. 4867–4879, Nov. 2006.
[6] S. D. Fisher and M. N. Alexander, “Matrices over a finite field,” Amer. Math. Monthly, vol. 73, pp. 639–641, 1966.
[7] J. Han and P. H. Siegel, “Improved upper bounds on stopping redundancy,” IEEE Trans. Inf. Theory, vol. 53, no. 1, pp. 90–104, Jan. 2007.
[8] J. Han and P. H. Siegel, “On ML redundancy of codes,” in Proc. Int. Symp. Inf. Theory, Toronto, ON, Canada, Jul. 6–11, 2008, pp. 280–284.
[9] J. Han, P. H. Siegel, and R. M. Roth, “Single-exclusion number and the stopping redundancy of MDS codes,” IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 4155–4166, Sep. 2009.
[10] J. Han, P. H. Siegel, and A. Vardy, “Improved probabilistic bounds on stopping redundancy,” IEEE Trans. Inf. Theory, vol. 54, no. 4, pp. 1749–1753, Apr. 2008.
[11] H. D. L. Hollmann and L. M. G. M. Tolhuizen, “Generic erasure correcting sets: Bounds and constructions,” J. Combin. Theory, Ser. A, vol. 113, pp. 1746–1759, 2006.
[12] H. D. L. Hollmann and L. M. G. M. Tolhuizen, “On parity check collections for iterative erasure decoding that correct all correctable erasure patterns of a given size,” IEEE Trans. Inf. Theory, vol. 53, no. 2, pp. 823–828, Feb. 2007.
[13] La Jolla Covering Repository, The Institute for Defense Analyses, May 25, 2012 [Online]. Available: http://www.ccrwest.org/cover.html
[14] S. Lin and D. J. Costello Jr., Error Control Coding. Upper Saddle River, NJ, USA: Prentice-Hall, 2004.
[15] M. G. Luby, M. Mitzenbacher, M. A. Shokrollahi, and D. A. Spielman, “Efficient erasure correcting codes,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 569–584, Feb. 2001.
[16] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, The Netherlands: North Holland, 1977.
[17] R. M. Roth, Introduction to Coding Theory. Cambridge, U.K.: Cambridge Univ. Press, 2006.

[18] D. V. Sarwate, “Errors-and-erasures decoding of binary majority-logic-decodable codes,” Electron. Lett., vol. 12, no. 17, pp. 441–442, Aug. 1976.
[19] M. Schwartz and A. Vardy, “On the stopping distance and the stopping redundancy of codes,” IEEE Trans. Inf. Theory, vol. 52, no. 3, pp. 922–932, Mar. 2006.
[20] S. A. Vanstone and P. C. van Oorschot, An Introduction to Error Correcting Codes with Applications. Norwell, MA, USA: Kluwer, 1989.
[21] J. H. Weber and K. A. S. Abdel-Ghaffar, “Stopping set analysis for Hamming codes,” in Proc. Inf. Theory Workshop Cod. Complex., Rotorua, New Zealand, Aug. 28–Sep. 1, 2005, pp. 244–247.
[22] J. H. Weber and K. A. S. Abdel-Ghaffar, “Results on parity-check matrices with optimal stopping and/or dead-end set enumerators,” IEEE Trans. Inf. Theory, vol. 54, no. 3, pp. 1368–1374, Mar. 2008.

Khaled A. S. Abdel-Ghaffar (M’12) received the B.Sc. degree from Alexandria University, Alexandria, Egypt, in 1980, and the M.S. and Ph.D. degrees from the California Institute of Technology, Pasadena, CA, in 1983 and 1986, respectively, all in electrical engineering. In 1988, Dr. Abdel-Ghaffar joined the University of California, Davis, where he is now a Professor of Electrical and Computer Engineering. He did research at the IBM Almaden Research Center, San Jose, CA, Delft University of Technology, The Netherlands, University of Bergen, Norway, and Alexandria University, Egypt. His main interest is coding theory. Dr. Abdel-Ghaffar served as an Associate Editor for Coding Theory for the IEEE TRANSACTIONS ON INFORMATION THEORY from 2002 to 2005, and he is currently serving as an Associate Editor for Algebraic and LDPC Codes for the IEEE TRANSACTIONS ON COMMUNICATIONS. He is a co-recipient of the IEEE Communications Society 2007 Stephen O. Rice Prize paper award.

Jos H. Weber (S’87–M’90–SM’00) was born in Schiedam, The Netherlands, in 1961. He received the M.Sc. (in mathematics, with honors), Ph.D., and MBT (Master of Business Telecommunications) degrees from Delft University of Technology, Delft, The Netherlands, in 1985, 1989, and 1996, respectively. Since 1985 he has been with the Faculty of Electrical Engineering, Mathematics, and Computer Science of Delft University of Technology. Currently, he is an associate professor in the Multimedia Signal Processing Group. He is the chairman of the WIC (Werkgemeenschap voor Informatie- en Communicatietheorie in de Benelux) and the secretary of the IEEE Benelux Chapter on Information Theory. He was a Visiting Researcher at the University of California at Davis, USA, the University of Johannesburg, South Africa, the Tokyo Institute of Technology, Japan, and EPFL, Switzerland. His main research interests are in the areas of channel and network coding.