AN INVARIANT FOR MATRICES AND SETS OF POINTS IN PRIME CHARACTERISTIC

DAVID G. GLYNN

Abstract. There is a polynomial function Xq in the entries of an m × m(q − 1) matrix over a field of prime characteristic p, where q = p^h is a power of p, that has properties very similar to those of the determinant of a square matrix. It is invariant under multiplication on the left by a non-singular matrix, and under permutations of the columns. This gives a way to extend the invariant theory of sets of points in projective spaces of prime characteristic, making visible structure that is otherwise hidden. There are connections with coding theory, permanents, and additive bases of vector spaces.
1. Introduction and some definitions

The determinant of an m × n matrix A = (aij) is usually only defined for square matrices over a field; see [13]. Here we investigate similar functions, called Xq, where q is a power of a prime p, that have almost all the properties of a determinant but also work for non-square matrices. However, the characteristic of the field must be the prime p, not zero. In this case more geometric and algebraic structure becomes visible than with the classical algebraic and geometric methods. In fact, for these fields Xq includes the usual determinant as a special case. For general background on geometries over finite fields (and more generally fields of prime characteristic), see [10, 11].

Section two characterizes Xq by its properties, and shows why q must be a power of a prime. Section three summarizes the main properties of Xq, giving some examples, and shows how it leads to an invariant of polynomials of degree 2(q − 1) over a field of characteristic p. Section four shows how permanents are related to Xq, and, using a result of Alon et al. [1], shows that if Xp of a set of points is non-zero, then that set of points is an additive basis over GF(p). Section five gives more properties of Xq, such as Xq of a direct sum of matrices, and the evaluation of Xq via a sum over all vectors in Fq^m; this also gives another permanent formula. Section six shows that certain coding-theoretic invariants related to the complete weight enumerator come from Xq. Section seven relates Xq to properties of projective space, and shows how repeated points come into the picture. Section eight shows that, in contrast to the determinant function, it is easy to construct covariants from Xq. These are algebraic varieties (such as sets of hyperplanes or hypersurfaces) that are invariantly related to sets of points in projective space over the field of prime characteristic. Many examples are given.
Date: 26 November 2009.
Key words and phrases. invariant; covariant; p-modular; determinant; permanent; prime characteristic; projective geometry; code; complete weight enumerator; additive basis.
2010 Mathematics Subject Classification. 05E99 11C20 11T71 14L24 15A15 15A33 15A72 51A05 94B27.

Finally, Section nine contains
an appendix which shows how to derive Xp from the complete polarization of det(A)^(p−1). This provides another reason why Xp is invariant.

1.1. Multiset notation. We let S := {* a, b, . . . *} be a multiset S, where the entries a, b, . . . of S may be repeated. The ordering of a, b, . . . is irrelevant. If a is repeated k times it may be written a^(k). If there are no repeated elements in S we have a set, written S := { a, b, . . . }. For any set or multiset S, |S| is the number of entries of S.

1.2. Determinant and permanent. If A is an m × m matrix, then the determinant det(A) is the function
det(A) := Σ sgn(s) a_{s1,1} a_{s2,2} . . . a_{sm,m},
where the sum is over the set of all vectors s := (s1, . . . , sm) with {* sj | 1 ≤ j ≤ m *} = { i | 1 ≤ i ≤ m }, and where sgn(s) = ±1 is the sign of the permutation taking j to sj for all 1 ≤ j ≤ m; thus sgn(s) := Π_{1≤k<l≤m, sk>sl} (−1). If the sign sgn(s) is omitted from the formula for the determinant we obtain the permanent of A, given by
per(A) := Σ a_{s1,1} a_{s2,2} . . . a_{sm,m},
where s goes through all m! permutations of { 1, . . . , m } as above.

2. Invariant functions

2.1. Desired properties. Let A be an m × n matrix over F and let A′ be the result of transforming A by the simple operations described below. We want a function f, polynomial in the entries of A, such that for some constants d and e ∈ Z+ there holds:

• f(A′) = f(A) under any permutations of the rows.
• f(A′) = f(A) under permutations of the columns.
• f(A′) = f(A) if a row of A is added to another row of A.
• f(A′) = k^d f(A) if a row is multiplied by a constant k.
• f(A′) = k^e f(A) if a column is multiplied by a constant k.
These properties are all satisfied, for d = e = 2i, by the invariant function λ det(A)^(2i), where λ ∈ F and i is a non-negative integer. They imply invariance under multiplication on the left: f(XA) = det(X)^d f(A), where X is any m × m matrix over F. Invariance under multiplication on both the left and the right by general matrices is too much to expect, except for square matrices and for powers of the determinant function: if we could multiply on the right, and if the number of rows were less than the number of columns, then zero columns could always be introduced, and a zero column forces invariants of our kind to vanish.
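The permutation-sum definitions of det and per from Section 1.2 can be sketched directly in code (an illustrative sketch; the function names are ours, not the paper's):

```python
from itertools import permutations
from math import prod

def sign(s):
    """Sign of the permutation s, counted via inversions."""
    return (-1) ** sum(
        1 for k in range(len(s)) for l in range(k + 1, len(s)) if s[k] > s[l]
    )

def det(M):
    """Determinant as the signed sum over all permutations."""
    n = len(M)
    return sum(sign(s) * prod(M[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def per(M):
    """Permanent: the same sum with the signs omitted."""
    n = len(M)
    return sum(prod(M[s[j]][j] for j in range(n))
               for s in permutations(range(n)))
```

For M = [[1, 2], [3, 4]] this gives det(M) = −2 and per(M) = 10.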
2.2. Search for invariant functions. The properties of 2.1 lead us to consider polynomials in the aij that are homogeneous of degree d in the variables of each row, and homogeneous of degree e in each column. It turns out that we can find a good function with e = 1: that is, the function will be multilinear in the columns of A. Counting the degree of the homogeneous polynomial f in two ways shows that md = ne; when e = 1 this gives n = md. If permuting rows preserves the invariant, note that this is the same as multiplying A on the left by a permutation matrix of determinant ±1. Then (±1)^d = 1 implies that d is even or F has characteristic two.

The hardest property to satisfy is that adding one row to another preserves the invariant. So consider matrices with the smallest row sizes, and let e = 1. The case m = 1 is easy, since adding a row to itself is the same as multiplying by two; in that case we can take f(a1, . . . , ad) = a1 . . . ad (or a constant multiple). Thus the first non-trivial case is m = 2: A is a 2 × 2d matrix. Once this case is confirmed, the case of general m can be finished using a row expansion as in Section 2.3.

Lemma 2.1. An invariant f for an m × md matrix (with e = 1), satisfying the properties of Section 2.1, must be the sum of all products of entries in distinct columns of A such that each of the m possible subscripts is repeated precisely d times in each product. Thus
f(A) = Σ a_{i1,1} a_{i2,2} . . . a_{imd,md},
where the sum is over all vectors (i1, . . . , imd) with {* ij | 1 ≤ j ≤ md *} = {* i^(d) | 1 ≤ i ≤ m *}.

Proof. From the previous discussion any monomial of f contains precisely d entries from each row and one entry from each column of A. Also, f is symmetric under permutations of rows and columns. The only such polynomial is the given one, up to multiplication by a constant in F.

Another crucial lemma is the following:

Lemma 2.2. Let p be a prime and let d ∈ Z+ be fixed. Then
C(c + d, c) ≡ 0 (mod p) for all c with 0 < c ≤ d ⇐⇒ d = p^h − 1 for some h ∈ Z+.

Proof. By the binomial expansion, C(c + d, c) is the coefficient of x^c in (1 + x)^(c+d) over GF(p); write (1 + x)^(c+d) = (1 + x)^(c−1) (1 + x)^(d+1). Let q := p^h. If d = q − 1 then (1 + x)^(c−1) (1 + x)^(d+1) = (1 + x)^(c−1) (1 + x^q) = (1 + x)^(c−1) + (1 + x)^(c−1) x^q. Assuming 0 < c ≤ d = q − 1, this has no term x^c, since the first part has degree less than c and the second part has degree at least q. Hence the binomial coefficients satisfy the property if d + 1 = p^h. On the other hand, let the p-expansion of d be d = α0 p^0 + · · · + α_{h−1} p^(h−1), where each αi satisfies 0 ≤ αi ≤ p − 1. If αi < p − 1 for some i then put c = p^i, and it may be seen that C(p^i + d, p^i) ≠ 0. (A binomial coefficient C(x, y) is non-zero modulo p if and only if the p-expansion of x "dominates" the p-expansion of y; see [4].) Thus, if the stated binomial coefficients are all zero, d + 1 is forced to be a power of p, i.e. p^h, with αi = p − 1 for all 0 ≤ i ≤ h − 1.
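Lemma 2.2 is easy to exercise numerically; a minimal sketch (function names are ours):

```python
from math import comb

def binomials_vanish_mod_p(d, p):
    """Check whether C(c+d, c) == 0 (mod p) for every c with 0 < c <= d."""
    return all(comb(c + d, c) % p == 0 for c in range(1, d + 1))

def is_p_power_minus_one(d, p):
    """Check whether d = p^h - 1 for some h >= 1."""
    n, h = d + 1, 0
    while n % p == 0:
        n //= p
        h += 1
    return n == 1 and h >= 1
```

For p = 3 the values d ≤ 30 with the vanishing property are exactly 2, 8 and 26, that is 3 − 1, 3^2 − 1 and 3^3 − 1.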
Suppose now that m = 2, so that A = (aij) is 2 × 2d over F, which has prime characteristic p. Add the first row of A to the second, obtaining a new matrix A′. We want f(A) = f(A′) for all such matrices A over F. Using the f of Lemma 2.1, consider the coefficient of a_{2,1} . . . a_{2,i} a_{1,i+1} . . . a_{1,2d} in the polynomial f(A′). This must be identically zero if 0 ≤ i < d, since f(A) has degree d in each row of A. The coefficient counts the d-subsets of columns containing the first i columns, so we need
C(2d − i, d − i) = C(c + d, c) ≡ 0 (mod p),
where c := d − i and 0 < c ≤ d. Using Lemma 2.2 it follows that d = p^h − 1. On the other hand, for any p the coefficient of a_{2,1} . . . a_{2,d} a_{1,d+1} . . . a_{1,2d} in the polynomial f(A′) is one, because C(2d − d, d − d) = C(d, 0) = 1. When i > d, we cannot obtain a_{2,1} . . . a_{2,i} a_{1,i+1} . . . a_{1,2d} by multiplying d elements of the second row of A′ with d from the first (in different columns), so such a monomial never appears in the polynomials f(A′) or f(A). Since the formula for f(A) is symmetric in the columns, we have shown that the formula for f(A) is the same as that for f(A′). Thus in the m = 2 case the polynomials f(A′) and f(A) mod p are the same if and only if d = p^h − 1, where p is the prime characteristic of the field.

2.3. Row expansions. Consider f as a function of an m × md matrix A over a field of characteristic p. Just as the ordinary determinant can be calculated by row (or column) expansions, so can the invariant f. As a polynomial in the m^2 d variables that are the elements of A, it is the sum of all monomials that have d entries in each row and one in each column. Thus, if we fix a product of d entries in one row r of A with columns in a set C, then the corresponding monomials of f with that product form the f invariant of the submatrix with the row r and the d columns in C deleted. Let us do this more generally as follows.
Let A.R.C denote the submatrix of A induced by a subset R of rows and a subset C of columns. Let A \ R \ C denote the complementary submatrix, with the rows in R and the columns in C deleted. Then we obtain a row expansion formula for a fixed row r:
f(A) = Σ_{|C|=d} (Π_{c∈C} a_{rc}) f(A \ r \ C).
This generalises to a kind of "Laplace" expansion via any subset of rows R:

Lemma 2.3. f(A) = Σ_{|C|=|R|d} f(A.R.C) f(A \ R \ C).

2.4. The values of d which give an invariant. From Lemmas 2.1 and 2.2 we have shown, by investigating matrices with row sizes m equal to one or two, that for a field F of characteristic p our invariant must satisfy d = q − 1, where q = p^h. So we assume now that the polynomial f corresponds to this case. For larger values of m we can use a row expansion as in Lemma 2.3 to deduce that f is an invariant satisfying all the desired properties of Section 2.1.
For suppose we add the first row of A to the second, thereby obtaining the m × m(q − 1) matrix A′. Expanding f via the first two rows (put R equal to the set of the first two rows) we see immediately that, because A.R.C is a 2 × 2(q − 1) matrix, f(A′.R.C) = f(A.R.C) for all subsets C of columns of size 2(q − 1). Also, A′ \ R \ C = A \ R \ C, because these submatrices are unchanged by the row operation. Thus
f(A′) = Σ_{|C|=|R|d} f(A′.R.C) f(A′ \ R \ C) = Σ_{|C|=|R|d} f(A.R.C) f(A \ R \ C) = f(A).
Thus the case m = 2 can be applied inductively to larger values of m. In the following Section 3 we summarize the conclusion: f, now called Xq, defined for d = q − 1 = p^h − 1 over a field F of characteristic p, is the invariant that we have been searching for.

3. The Main Definition and Properties

Definition 3.1. For a field F of prime characteristic p, for h ∈ Z+, and for q = p^h, Xq has the following formula. Let A be an m × m(q − 1) matrix over F. Then
Xq(A) = Σ a_{i1,1} a_{i2,2} . . . a_{i_{m(q−1)},m(q−1)},
where the sum is over all vectors (i1, . . . , i_{m(q−1)}) with {* ij | 1 ≤ j ≤ m(q − 1) *} = {* i^(q−1) | 1 ≤ i ≤ m *}.

Theorem 3.2. The invariant Xq, q = p^h, satisfies the properties of Section 2.1. Let A be an m × m(q − 1) matrix over a field F of characteristic p. Then:

• Xq(A) ∈ F.
• Xq(A′) = Xq(A) under any permutations of the rows.
• Xq(A′) = Xq(A) under permutations of the columns.
• Xq(A′) = Xq(A) if a row of A is added to another row of A.
• Xq(A′) = k^(q−1) · Xq(A) if a row is multiplied by a constant k.
• Xq(A′) = k · Xq(A) if a column is multiplied by a constant k.
• Xq(BA) = det(B)^(q−1) · Xq(A), for any m × m matrix B over F.

Remark 3.3. There are (m(q − 1))!/((q − 1)!)^m monomials or "terms" in Xq(A) as a polynomial in the m^2(q − 1) variables aij (the multinomial coefficient C(m(q−1); q−1, . . . , q−1)). All coefficients of the monomials are 1 in the field F.

Example 3.4. For an m × m matrix A = (aij) over a field of characteristic two, X2(A) = det(A) = per(A). Thus when m = 2 we have X2(A) = a11 a22 + a12 a21.

Example 3.5. For a 2 × 4 matrix A = (aij) over a field of characteristic three,
X3(A) = a11 a12 a23 a24 + a11 a22 a13 a24 + a11 a22 a23 a14 + a21 a12 a13 a24 + a21 a12 a23 a14 + a21 a22 a13 a14.

Example 3.6. Xq of a 2 × 2(q − 1) matrix over a field F of prime characteristic p, where q = p^h, leads to an invariant of a polynomial g(x) of degree 2(q − 1) as follows.
Let A := ( 1 1 . . . 1 ; λ1 λ2 . . . λ_{2(q−1)} ) be the 2 × 2(q − 1) matrix whose first row consists of ones, where the λi lie in F (or an extension of F). Then Xq(A) is the coefficient of x^(q−1) in g(x) := Π_{i=1}^{2(q−1)} (x − λi). Since the λi are general field elements, g(x) is a general polynomial of degree 2(q − 1). Thus, for any such polynomial, the coefficient of x^(q−1) is invariant. The group of invariance is GL(2, F) acting as the non-singular fractional linear transformations on the roots of g(x). This means that the coefficient of x^(q−1) in g(x) is zero if and only if it is zero in g(ax + b) (a, b ∈ F, a ≠ 0), if and only if it is zero in x^{2(q−1)} g(x^{−1}).

4. Connections with Permanents

4.1. Additive bases. There is a strong connection between Xq, when q = p is prime, and a result about additive bases by Alon et al. [1]. In fact, they consider a set of m(p − 1) vectors in Fp^m and give a sufficient condition, in terms of a permanent, for such a set to be an additive basis of the vector space.

Definition 4.1. An additive basis S for Fp^m is a multiset of vectors such that every vector in the vector space is a sum (with coefficients zero or one) of vectors in the multiset. If S is a multiset of m(p − 1) such vectors, let A = A(S) be the corresponding m × m(p − 1) matrix with these vectors as columns.

Theorem 4.2. [1, Cor. 3.4] A multiset S of vectors in Fp^m is an additive basis for this vector space if the m(p − 1) × m(p − 1) matrix A^(p−1), formed by stacking A = A(S) on top of itself p − 1 times (row-wise), satisfies per(A^(p−1)) ≠ 0.

The rows of the matrix A^(p−1) may be permuted to form a new matrix D so that the first row of A appears in the first p − 1 rows of D, the second row of A in the next p − 1 rows of D, and so on. (The permanent is unchanged by permutations of the rows.) Then every monomial of Xp(A) corresponds to a collection of m disjoint subsets Ci (|Ci| = p − 1) of columns, each Ci corresponding to the elements of the monomial in row i of A. Each monomial of per(D) also corresponds to a subset of p − 1 columns for each of the m row-blocks.
However, one may permute the p − 1 rows within each row-block of D and obtain the same monomial. So each monomial of Xp(A) occurs ((p − 1)!)^m times in per(D). Thus, using Wilson's theorem (p − 1)! ≡ −1 (mod p),
per(A^(p−1)) = ((p − 1)!)^m · Xp(A) ≡ (−1)^m · Xp(A) (mod p).

Corollary 4.3. Given an m × m(p − 1) matrix A over GF(p), its columns form an additive basis over GF(p) if Xp(A) ≠ 0.

There is another result, [1, Lemma 3.5], related to Xp. It says that if there is a basis of m (column) vectors of Fp^m, forming the matrix B, and if each basis vector is repeated p − 1 times, giving an m × m(p − 1) matrix B^(p−1), then
per((B^(p−1))^(p−1)) ≠ 0. This follows immediately from the notes above and from Theorem 7.5, which gives Xq(B^(q−1)) = det(B)^(q−1), one of the most basic invariant properties of Xq. Thus an ordinary basis of Fp^m, repeated p − 1 times, is clearly also an additive basis.

4.2. Characteristic three. Consider a multiset S of 2m points (coordinatised by non-zero vectors vi of length m) in a projective space of dimension m − 1 over a field F of characteristic p = 3. We may assume that the points span the space, and also, by Corollary 7.4, that no point is repeated more than twice, since otherwise X3(S) is zero. Thus, by a linear transformation and a permutation of the points, we can assume that vi = ei, the i'th unit point, for 1 ≤ i ≤ m. If the remaining points v_{m+1}, . . . , v_{2m} are put into the columns of the square m × m matrix M, then from the definition of X3 we see that X3(S) = per(M).

The adjoint (adjugate) of a square matrix A is denoted by adj(A); thus the inverse of a non-singular A is adj(A)/det(A).

Theorem 4.4. Let M be any m × m matrix over F (of characteristic three). Then per(adj(M)) = det(M)^(m−2) per(M), or, when M is non-singular, per(M)/det(M) = per(M^(−1))/det(M^(−1)).

Proof. As above, consider the multiset S of 2m points of which m are the unit vectors and m are the columns of M. If we multiply the points (or vectors) by any m × m matrix A then X3(AS) = det(A)^2 X3(S) = det(A)^2 per(M). When M is non-singular, substitute A := M^(−1). We obtain a new multiset of vectors in which the first m columns are the columns of M^(−1), while the second m columns are the unit vectors. Thus X3(M^(−1)S) = per(M^(−1)) = det(M^(−1))^2 per(M). Rearranging, since det(M^(−1)) = det(M)^(−1), there holds per(M)/det(M) = per(M^(−1))/det(M^(−1)), or equivalently per(M) det(M^(−1)) = per(M^(−1)) det(M). Now per(M^(−1)) = per(adj(M)/det(M)) = det(M)^(−m) per(adj(M)), since per is homogeneous of degree m. Thus per(M) det(M)^(−1) = det(M)^(1−m) per(adj(M)), implying that det(M)^(m−2)
per(M) = per(adj(M)). Each side of this is a homogeneous polynomial of degree m(m − 1) in the m^2 entries of M (adj(M) being of degree m − 1 in the entries of M). This identity of polynomials holds for all non-singular matrices M, and so it holds for all matrices M.

Since per = det for fields of characteristic two, the following result about integers modulo six also holds.
Corollary 4.5. For any m × m integer matrix M there holds per(adj(M)) ≡ det(M)^(m−2) per(M) (mod 6).
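Theorem 4.4 and Corollary 4.5 can be spot-checked on small matrices; an illustrative sketch (all helper names are ours):

```python
from itertools import permutations
from math import prod

def sign(s):
    return (-1) ** sum(1 for k in range(len(s))
                       for l in range(k + 1, len(s)) if s[k] > s[l])

def det(M):
    n = len(M)
    return sum(sign(s) * prod(M[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def per(M):
    n = len(M)
    return sum(prod(M[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def adj(M):
    """Adjugate: transpose of the matrix of cofactors."""
    n = len(M)
    def minor(i, j):
        return [[M[r][c] for c in range(n) if c != j]
                for r in range(n) if r != i]
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)]
            for i in range(n)]
```

For M = [[1, 1, 0], [0, 1, 1], [1, 0, 1]] (m = 3) one finds that per(adj(M)) agrees with det(M)^(m−2) · per(M) both mod 3 (Theorem 4.4) and mod 6 (Corollary 4.5).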
4.3. Sums of permanents. Let P be the set of all partitions of the columns of an m × m(q − 1) matrix A over a field of prime characteristic p into m subsets of size q − 1. Note that
|P| = C(m(q−1); q−1, . . . , q−1)/m! = (m(q − 1))!/((q − 1)!^m m!).
For any partition in P there is an m × m "induced" matrix A_P: each column of A_P corresponds to a particular subset of the partition, that is, to q − 1 columns of A, and is obtained by multiplying together, within each row, the q − 1 entries in those columns.

Theorem 4.6. Xq(A) = Σ_P per(A_P), where per(A_P) is the permanent of the m × m matrix A_P "induced" in A by a column partition P into m subsets of size q − 1.

Example 4.7. The formula X3 for the 2 × 4 matrix A in Example 3.5 splits into the sum of three 2 × 2 permanents:
X3(A) = per( a11 a12, a13 a14 ; a21 a22, a23 a24 ) + per( a11 a13, a12 a14 ; a21 a23, a22 a24 ) + per( a11 a14, a12 a13 ; a21 a24, a22 a23 ).

Remark 4.8. If A = (aij) is a 2 × 4 matrix over a field of characteristic three we may write
X3(A) = det ( a11 0 a13 a14 ; 0 a12 a13 −a14 ; −a21 −a22 a23 0 ; −a21 a22 0 a24 ).
In general it would be interesting to find formulae for Xq as a single determinant, or as a sum of a small number of determinants, as this would lead to efficient methods for calculating Xq. Note that the matrix in Remark 4.8 can be seen as the Hadamard product of a conference matrix (an orthogonal (0, 1, −1)-matrix with a single zero in each row and column; see [15, §18]) with A^(2) (A written below itself twice).

5. More Remarks and Examples

The "direct sum" A ⊕ B of two matrices A and B over the same field is the block matrix ( A 0 ; 0 B ). Let A have size r × r(q − 1) and B have size s × s(q − 1).

Theorem 5.1. Xq(A ⊕ B) = Xq(A) · Xq(B).

Proof. This is a consequence of the Laplace expansion Lemma 2.3.
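Definition 3.1, the partition formula of Theorem 4.6 (with Example 4.7), and the direct-sum rule of Theorem 5.1 can all be exercised on small matrices. An illustrative sketch, with the partition enumeration written out only for the m = 2 case (helper names are ours):

```python
from itertools import combinations, permutations
from math import prod

def Xq(A, p, q):
    """Definition 3.1: sum over all ways of assigning each row index
    to exactly q-1 columns, reduced modulo p (here q is a power of p)."""
    m = len(A)
    assert len(A[0]) == m * (q - 1)
    pattern = [i for i in range(m) for _ in range(q - 1)]
    return sum(prod(A[r][c] for c, r in enumerate(s))
               for s in set(permutations(pattern))) % p

def per(M):
    n = len(M)
    return sum(prod(M[s[j]][j] for j in range(n))
               for s in permutations(range(n)))

def X_by_partitions_m2(A, p, q):
    """Theorem 4.6 for m = 2: sum of permanents of the 2 x 2 matrices
    induced by partitions of the columns into two (q-1)-subsets."""
    n = 2 * (q - 1)
    total = 0
    for rest in combinations(range(1, n), q - 2):
        block1 = (0,) + rest                      # the block holding column 0
        block2 = tuple(c for c in range(n) if c not in block1)
        AP = [[prod(A[i][c] for c in blk) for blk in (block1, block2)]
              for i in range(2)]
        total += per(AP)
    return total % p

def direct_sum(A, B):
    """Block matrix ( A 0 ; 0 B )."""
    ca, cb = len(A[0]), len(B[0])
    return [row + [0] * cb for row in A] + [[0] * ca + row for row in B]
```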
5.1. Evaluation via algebraic functions and sums.

Lemma 5.2. Let S be a multiset of m(q − 1) vectors v in F^m, where F has characteristic p. Then Xq(S) is the coefficient of x1^(q−1) . . . xm^(q−1) in Π_{v∈S} Σ_{i=1}^m xi vi.

Proof. This is immediate from the definition of Xq; see Definition 3.1.
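Lemma 5.2 can be checked for m = 2 by sparse polynomial multiplication; a minimal sketch, assuming q = p (all names are ours):

```python
from itertools import permutations
from math import prod

def Xq(A, p, q):
    """Definition 3.1, reduced modulo p."""
    m = len(A)
    pattern = [i for i in range(m) for _ in range(q - 1)]
    return sum(prod(A[r][c] for c, r in enumerate(s))
               for s in set(permutations(pattern))) % p

def poly_mul(f, g):
    """Multiply sparse bivariate polynomials stored as {(i, j): coeff}."""
    h = {}
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            h[a + d, b + e] = h.get((a + d, b + e), 0) + c * k
    return h

def coeff_from_product(cols, p, q):
    """Lemma 5.2 for m = 2: the coefficient of x1^(q-1) x2^(q-1) in the
    product over the columns v of (v1*x1 + v2*x2), modulo p."""
    f = {(0, 0): 1}
    for v1, v2 in cols:
        f = poly_mul(f, {(1, 0): v1, (0, 1): v2})
    return f.get((q - 1, q - 1), 0) % p
```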
The coefficient in Lemma 5.2 can (up to sign) be calculated by summing the function over all x ∈ Fq^m, where Fq ≅ GF(q). Thus we have:

Theorem 5.3. If S is a multiset of m(q − 1) vectors in Fq^m then
Xq(S) = (−1)^m Σ_{x∈Fq^m} Π_{v∈S} Σ_{i=1}^m xi vi.

Proof. Π_{v∈S} Σ_{i=1}^m xi vi, where x = (x1, . . . , xm), is a polynomial of total degree m(q − 1) in the variables xi. The coefficient of x1^(q−1) . . . xm^(q−1) can be calculated by applying the following well-known lemma m times:
Σ_{y∈Fq} y^i = 0 if 0 ≤ i < q − 1, and = −1 if i = q − 1.

Corollary 5.4. For an m × m matrix M = (aij) over a field of characteristic p,
per(M) = (−1)^m Σ_{x∈(Fq*)^m} Π_{j=1}^m ( xj^(−1) Σ_{i=1}^m xi aij ).

Proof. Let B be the m × m(q − 1) matrix (Im^(q−2) M) consisting of the identity matrix Im repeated q − 2 times together with the matrix M. Then Xq(B) = per(M) from the definition of Xq. Using Theorem 5.3 with S the set of columns of B, and noting that the identity columns contribute Π_j xj^(q−2), which vanishes unless every xj is non-zero and then equals Π_j xj^(−1), the formula is obtained.
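Theorem 5.3 can be verified directly over a prime field (for q = p; extension fields would require GF(q) arithmetic). A sketch with our own helper names:

```python
from itertools import permutations, product
from math import prod

def Xq(A, p, q):
    """Definition 3.1, reduced modulo p."""
    m = len(A)
    pattern = [i for i in range(m) for _ in range(q - 1)]
    return sum(prod(A[r][c] for c, r in enumerate(s))
               for s in set(permutations(pattern))) % p

def Xq_by_point_sum(A, p):
    """Theorem 5.3 with q = p: (-1)^m times the sum over all x in F_p^m
    of the product over the columns v of <x, v>, reduced mod p."""
    m, n = len(A), len(A[0])
    total = sum(
        prod(sum(x[i] * A[i][c] for i in range(m)) for c in range(n))
        for x in product(range(p), repeat=m)
    )
    return ((-1) ** m * total) % p
```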
When q = p = 3 in Corollary 5.4 we have F3* = {±1}; the outer sum can then be restricted to x1 = 1, xi = ±1 for i > 1, with (−1)^m replaced by (−1)^(m−1). One finds that this corollary in fact holds over a field of any characteristic, and it is related to the polarization identity for symmetric m-tensors; see [7, 8]. It gives a formula for the permanent that should in general be comparable in speed to Ryser's formula for larger matrices.

6. Coding theoretic invariants

6.1. Definitions. A linear code C of length n and dimension m over GF(q) is a subspace of dimension m of the vector space GF(q)^n; see [14]. The parameters are usually given in the form [n, m]_q. There is a fixed coordinate system, so that each vector (or "codeword") has a "Hamming weight", given by the number of its non-zero entries. A "generator matrix" for C is any m × n matrix over GF(q) whose m rows form a basis for C.

The complete weight enumerator polynomial cwe_C of a linear code C of length n and dimension m over GF(q) (with parameters [n, m]) is defined by first assigning a variable xi to each of the q elements αi of GF(q). Often α0 = 0 and αi = ρ^i for 1 ≤ i ≤ q − 1, where ρ is a primitive element, or generator of the cyclic multiplicative group of the field. Then
cwe_C := Σ_{w=(w1,...,wn)∈C} Π_{i=0}^{q−1} xi^{μ(w,i)},
where μ(w, i) is the "multiplicity" of the field element αi in w, that is, the number of j for which wj = αi. Note that cwe_C is a homogeneous polynomial (or "form") of degree n in the q variables xi. See [14, §3.1].

6.2. Evaluation via weight enumerator.

Theorem 6.1. For any linear code C of length m(q − 1) and dimension m over GF(q), q a power of the prime p, we have
Xq(C) := Xq(S) = (−1)^m cwe_C(α0, α1, . . . , α_{q−1}),
where S is the multiset of m(q − 1) columns of any m × m(q − 1) generator matrix of C.

Proof. This follows directly from the above definition of the cwe and Theorem 5.3.
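Theorem 6.1 can be illustrated for a small ternary code by enumerating codewords and evaluating the complete weight enumerator at the field elements themselves (a sketch, assuming q = p = 3; helper names are ours):

```python
from itertools import permutations, product
from math import prod

def Xq(A, p, q):
    """Definition 3.1, reduced modulo p."""
    m = len(A)
    pattern = [i for i in range(m) for _ in range(q - 1)]
    return sum(prod(A[r][c] for c, r in enumerate(s))
               for s in set(permutations(pattern))) % p

def cwe_evaluated(G, p):
    """cwe_C evaluated at x_i = alpha_i: each codeword w then contributes
    the product of its own entries, as field elements mod p."""
    m, n = len(G), len(G[0])
    total = 0
    for x in product(range(p), repeat=m):        # all messages
        w = [sum(x[i] * G[i][j] for i in range(m)) % p for j in range(n)]
        total += prod(w)
    return total % p
```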
Example 6.2. Ternary linear codes. Any ternary linear code C = [2m, m]_3 can be assumed (by permuting columns) to have a generator matrix of the form G := (Im A), where Im is the m × m identity matrix and A is a general m × m matrix over F3 := GF(3). The dual code C⊥ has generator matrix H := (−A^t Im), and it also has parameters [2m, m]_3. If G and H correspond to their multisets of column vectors in F3^m, then from first definitions X3(G) = per(A), while X3(H) = per(−A^t) = (−1)^m per(A^t) = (−1)^m per(A). Thus
cwe_C(0, 1, −1) ≡ (−1)^m cwe_{C⊥}(0, 1, −1) (mod 3).
This can also be proved using the MacWilliams relations between the code and its dual: in the ternary case, if the variable xi corresponds to the element i of GF(3), and ω is a complex primitive cube root of unity (ω^3 = 1, ω ≠ 1), there holds
cwe_{C⊥}(x0, x1, x2) = cwe_C(x0 + x1 + x2, x0 + ωx1 + ω̄x2, x0 + ω̄x1 + ωx2)/|C|.
See [14, §3.3].

7. Projective Properties

Consider the projective space PG(n, F) over a field F of characteristic p, and let m := n + 1. Consider a multiset S of m(q − 1) non-zero vectors in F^m. By Theorem 3.2, if we multiply any of these vectors by a non-zero k in F, the value of Xq (acting on the matrix whose columns are the homogeneous coordinates of the points of S) is multiplied by k. Thus "Xq(S) = 0", for a multiset S of m(q − 1) points in (m − 1)-dimensional projective space over F, is a "geometric" or "invariant" property, independent of the choice of coordinates. We shall see that many invariants and covariants of projective spaces can be derived from this observation.

7.1. Projective line and plane in characteristic three.

Theorem 7.1. In a projective line over a field F of characteristic three, a set P of four (distinct) points forms a subline over GF(3) (in other words, a harmonic set) if and only if X3(P) is zero.

Proof. We may use a matrix transformation so that the first three vectors are (1, 0), (0, 1), and (1, 1). Then X3(P) = 0 forces the fourth point (a, b) to satisfy a + b = 0, so it is the unique point (1, −1).
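To exhibit a non-harmonic quadruple of distinct points for Theorem 7.1, an extension field is needed, since over GF(3) itself the only four distinct points of the line form the subline. A sketch using the model GF(9) = GF(3)(t) with t^2 = −1 (x^2 + 1 is irreducible over GF(3); all names are ours):

```python
from itertools import combinations

# GF(9) elements as pairs (a, b) meaning a + b*t, with t^2 = -1, mod 3
def gf9_add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def gf9_mul(u, v):
    return ((u[0] * v[0] - u[1] * v[1]) % 3,
            (u[0] * v[1] + u[1] * v[0]) % 3)

def X3_of_four_points(points):
    """X3 of the 2 x 4 matrix whose columns are four points (x, y) of a
    projective line in characteristic three (coordinates in GF(9))."""
    total = (0, 0)
    for rows2 in combinations(range(4), 2):   # columns assigned to row 2
        term = (1, 0)
        for c, (x, y) in enumerate(points):
            term = gf9_mul(term, y if c in rows2 else x)
        total = gf9_add(total, term)
    return total
```

The harmonic quadruple ∞, 0, 1, −1 gives X3 = 0, while replacing −1 by t gives a non-zero value.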
This invariant is clearly not related to the classical determinant, which classifies basis versus non-basis sets of m points in projective space. For any two-dimensional projective space (a plane) of characteristic three, X3 gives an invariant of six points. It may be shown that if any four of the points are selected, there is a unique non-singular conic in the projective subplane over GF(3) containing them (such a conic in PG(2, 3) contains only four points). The other pair of points A and B (together with the first four) make the invariant X3 = 0 if and only if the polar line of A with respect to that conic contains B (and vice versa).

7.2. Finite projective geometry. If the field F = Fq = GF(q), there is a particular interpretation of Xq in the finite projective geometry PG(m − 1, q) of dimension m − 1 over GF(q). As in Theorem 5.3, consider the product f(x) := Π_{v∈S} Σ_{i=1}^m xi vi. Considered as a function on the points of PG(m − 1, q), it has degree m(q − 1). Since this degree is divisible by q − 1, any point x of PG(m − 1, q) determines a well-defined function value: f(kx) = f(x), where k ∈ GF(q) \ {0}. See [6]. Thus it is clear that Xq(S) = 0 if and only if Σ_{x∈PG(m−1,q)} f(x) = 0.

Corollary 7.2. Let S be a multiset of m(q − 1) points in PG(m − 1, q). Fix dual homogeneous coordinates in PG(m − 1, q), and let P be a set of dual homogeneous coordinate vectors of the hyperplanes of PG(m − 1, q), one representative per hyperplane. Then
Xq(S) = (−1)^(m−1) Σ_{x∈P} Π_{v∈S} Σ_{i=1}^m xi vi.

Proof. This follows from Theorem 5.3, since Σ_{λ∈Fq*} λ^{m(q−1)} = Σ_{λ∈Fq*} 1 = −1. Indeed
Xq(S) = (−1)^m Σ_{x∈Fq^m} Π_{v∈S} v·x = (−1)^m Σ_{x∈P} Σ_{λ∈Fq*} Π_{v∈S} v·(λx) = (−1)^m Σ_{x∈P} Σ_{λ∈Fq*} λ^{m(q−1)} Π_{v∈S} v·x = (q − 1)(−1)^m Σ_{x∈P} Π_{v∈S} v·x = (−1)^(m−1) Σ_{x∈P} Π_{v∈S} v·x.

7.3. Projections. For multisets T ⊆ S of points in a projective space, define S/T to be the projection of S from the subspace generated by T to a complementary subspace.

Theorem 7.3.
Let S be a multiset of m(q − 1) points in a projective space of m − 1 dimensions of characteristic p. Suppose there is a sub-multiset T ⊆ S generating a subspace of n − 1 dimensions. If |T| > n(q − 1) there holds Xq(S) = 0, while if |T| = n(q − 1) then Xq(S) = 0 if and only if Xq(T) = 0 or Xq(S/T) = 0.

Proof. Xq(S) = Xq(A), where A is the m × m(q − 1) matrix with the homogeneous coordinate vectors of the points as columns. Put the columns of T to the left and the others to the right of A. Using a linear transformation (multiplying A on the left by a non-singular matrix) we can ensure that the columns on the left, corresponding to T, are generated by the unit vectors e1, . . . , en. In the formula for Xq(A), any monomial making a non-zero contribution must take a non-zero value in each of the columns corresponding
to T. These values must appear in the first n rows, each of which contains q − 1 elements of the monomial. If |T| > n(q − 1) there are too many columns of T for this to occur, and so every monomial of Xq vanishes. In the case |T| = n(q − 1), the submatrix consisting of the first n rows and the first n(q − 1) columns must intersect each monomial taking a non-zero value in Xq in precisely n(q − 1) places (one in each column and q − 1 in each row). Thus the remaining entries of A in the first n rows and the last (m − n)(q − 1) columns are irrelevant to the calculation of Xq. Setting these entries to zero, A takes the form of a direct sum A = ( B 0 ; 0 C ), where B is n × n(q − 1) and C is (m − n) × (m − n)(q − 1). Using Theorem 5.1 and noting that Xq(S/T) = Xq(C), we obtain the result.

Corollary 7.4. If a point of a projective space of dimension m − 1 over a field F of characteristic p (or a vector of F^m) is repeated more than q − 1 times, then any multiset of m(q − 1) points (or vectors of F^m) containing these repetitions has Xq = 0.

7.4. Repetitions of points.

Theorem 7.5. Let A = B^(q−1) := (B B . . . B) be an m × m(q − 1) matrix over a field F of characteristic p, where the first m columns form an m × m submatrix B, the next m columns form another copy of B, and so on, B occurring q − 1 times in all. Then Xq(A) = det(B)^(q−1).

Proof. A = (B B . . . B) = B (Im Im . . . Im), where Im is the m × m identity matrix over F. Then, using Theorem 3.2, Xq(A) = det(B)^(q−1) Xq(Im Im . . . Im). Using the definition of Xq we see that there is only one way of choosing non-zero entries of (Im Im . . . Im), q − 1 in each row and one in each column. Thus, quite easily, Xq(Im Im . . . Im) = 1, and the result follows.

Corollary 7.6.
If there is a multiset S of m vectors in a vector space of dimension m over a field F of characteristic p, each repeated q − 1 times, then Xq(S) is the (q − 1)-st power of the determinant of the m × m matrix with the vectors as columns.

Corollary 7.7. If S is a set of m points in a projective space of dimension m − 1 over a field F of characteristic p, and we form S^(q−1), that is, we repeat each point q − 1 times, then Xq(S^(q−1)) ≠ 0 if and only if S is a basis, that is, a linearly independent spanning set, of that space.

Thus the geometrical structure induced by Xq reveals finer detail (if q > 2) than the usual structure (a modular lattice) of the projective space.

8. Covariants

Theorem 8.1. Given any n(q − 1) points in a projective space of dimension n over a field F of characteristic p, there is a covariant hypersurface of degree q − 1 (that is, a (q − 1)-ic) containing the points.
Proof. Given the n(q − 1) coordinate vectors vi in F^m, where m := n + 1, adjoin q − 1 copies of a variable vector x := (x1, . . . , xm). Letting S be the multiset {* v1, . . . , v_{n(q−1)}, x^(q−1) *}, we see that Xq(S) = 0 is the equation of a hypersurface of degree q − 1 containing the n(q − 1) points. The hypersurface contains each point vi, since substituting vi for x gives a multiset S′ in which vi occurs at least q times, so that Xq(S′) = 0 by Corollary 7.4.

Note that in general there are C(q − 1 + n, q − 1) − n(q − 1) independent (q − 1)-ics containing n(q − 1) general points in dimension n, but this theorem gives a canonical one that is fixed by the group of linear automorphisms of the multiset.

Example 8.2. Covariants for small values of n and p. For n = 1 the covariant hypersurface is just the product of the q − 1 points of the multiset, considered as hyperplanes or "dual points". This product is obviously covariant for any number of points on any projective line, in any characteristic. When p = 2 a set of n points generates the covariant hyperplane that contains them; this too holds over a field of any characteristic. When p = 3 there is a covariant quadric containing any 2n points of projective space of dimension n. Thus, when n = 2, the quadric is the unique (non-singular) conic of the prime subplane over GF(3) that contains the four (independent) points; here the characteristic genuinely comes into play. When p = 5 there is a covariant quartic containing 4n points of projective space of dimension n. For example, for n = 2, a general set of eight points of a plane of characteristic five is contained in a unique covariant quartic curve.

By the same method, replacing r < q − 1 of the m(q − 1) points in (m − 1)-dimensional space of characteristic p by the same variable point, one obtains a covariant r-ic.
However, if r < q − 1 it is certainly not always true that the r-ic contains the general multiset of m(q − 1) − r points. A special case is noteworthy:

Theorem 8.3. Given any multiset S of (n + 1)(q − 1) − 1 points in a projective space of dimension n of characteristic p, there is a (unique) covariant hyperplane h_q(S). It is the locus of points which, adjoined to the multiset, make the invariant X_q zero. If the multiset is contained in a hyperplane then the covariant hyperplane vanishes.

Example 8.4. More covariant hyperplanes for small p

When p = 3, a general multiset S of 2n + 1 points in n-dimensional space has a (unique) covariant hyperplane h_3(S). When p = 5 there is a covariant hyperplane h_5(S) of a general multiset S of 4n + 3 points in n-dimensional space.

Theorem 8.5. Given any multiset S of (n + 1)(q − 1) points in a projective space of dimension n of characteristic p, there is a covariant multiset of the same number of hyperplanes. It is obtained by finding, for each point v of S, the covariant hyperplane h_q(S\{v}). Thus we call this multiset of hyperplanes S^{⊥q} := {* h_q(S\{v}) | v ∈ S *}.

Remark 8.6. If X_q(S) = 0 then clearly the covariant hyperplane h_q(S\{v}) corresponding to a point v ∈ S contains that point. Thus it is similar to a "tangent hyperplane" at that point.
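For n = 1 and q = 3, Theorem 8.3 can be made completely explicit. The sketch below (not from the paper) computes X_3 of a 2 × 4 matrix through the polarization formula of Theorem 9.2, now with a symbolic fourth column, reads off the covariant hyperplane h_3(S) of the three points (1, 0), (0, 1), (1, 1) of the projective line, and checks that h_3 vanishes identically when S lies in a hyperplane (three equal points).

```python
import sympy as sp

x, y = sp.symbols('x y')
l11, l12, l21, l22 = sp.symbols('l11 l12 l21 l22')

def X3(M):
    """X_3 of a symbolic 2 x 4 matrix via Theorem 9.2's polarization
    formula (sketch; columns 1,3 and 2,4 are the two polarized groups,
    and for m = 2 the sign (-1)^m is +1)."""
    B = M[:, [0, 2]] * sp.diag(l11, l21) + M[:, [1, 3]] * sp.diag(l12, l22)
    return sp.Poly(sp.expand(B.det()**2), l11, l12, l21, l22) \
             .coeff_monomial(l11 * l12 * l21 * l22)

# h_3(S) for S = {(1,0), (0,1), (1,1)}: the locus of (x, y) making X_3 zero
h = X3(sp.Matrix([[1, 0, 1, x], [0, 1, 1, y]]))
assert sp.expand(h + 2 * (x + y)) == 0
# h = -2(x + y) = x + y (mod 3), so h_3(S) is the single point (1, -1) = (1, 2)

# if S lies in a hyperplane (here three copies of (1, 0)), h_3 vanishes
assert sp.expand(X3(sp.Matrix([[1, 1, 1, x], [0, 0, 0, y]]))) == 0
```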
There are more methods to obtain invariants from X_q. If n is an integer satisfying 1 ≤ n ≤ q − 1 and n divides m(q − 1), then for any multiset S of m(q − 1)/n points of PG(m − 1, F), where F is a field of characteristic p, define

X_q^{(n)}(S) := X_q(S^{(n)}),

where S^{(n)} is the multiset of size m(q − 1) induced from S by repeating each element of S n times. Note that the option of repeating points does not apply to the determinant invariant, since repeated points give a determinant of zero. The following is obvious.

Theorem 8.7. For any divisor n of m(q − 1), X_q^{(n)} is a symmetric invariant of any multiset of size k := m(q − 1)/n of PG(m − 1, F). Equivalently, any m × k matrix over F, or $[k, m]_{q^h}$ code, has this invariant.

Remark 8.8. If n > q − 1 then by Corollary 7.4 the invariant X_q^{(n)} is zero. Also, if n = 1 then X_q^{(1)} = X_q, while if n = q − 1, then X_q^{(q−1)} = det^{q−1}, where det acts on the m × m matrix with columns in S.

An interesting case to consider would be n = (q − 1)/2 (q odd), where |S| = 2m in PG(m − 1, F). This should be useful for the theory of self-dual linear codes, for example over GF(q), since a self-dual code always has a generator matrix of type m × 2m. Another case would be n = m: then any set S of q − 1 points spanning PG(m − 1, F) would have the invariant X_q^{(m)}.

Example 8.9. X_q^{((q−1)/2)}, q odd

For the case m = 1 there holds

X_q^{((q−1)/2)}(x_1, x_2) = X_q({* x_1^{((q−1)/2)}, x_2^{((q−1)/2)} *}) = (x_1 x_2)^{(q−1)/2},

which is zero if at least one of x_1 or x_2 is zero. For the case m = 2 we may assume that the four points of the projective line are (1, 0), (0, 1), (1, 1), and (1, x), where x ∈ F of characteristic p. Then X_q^{((q−1)/2)} of this set of four points is f := 1 + x + x² + ... + x^{(q−1)/2}. Let μ be a primitive element (multiplicative generator) of the field GF(q²), which either is in F or is contained in a quadratic extension of F. Using the fact that (x − 1)f = x^{(q+1)/2} − 1, the roots of this polynomial f of degree (q − 1)/2 are seen to be the elements of {μ^{2i(q−1)} | 1 ≤ i ≤ (q − 1)/2}. Thus, given any three distinct points of the projective line of odd characteristic p, there are (q − 1)/2 further points in the subline over GF(q²) containing the three points that make this symmetric invariant zero. In the next Example 8.10, for p = 5, we check this in more detail.
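The m = 2 case above can be confirmed numerically for q = 5, where f = 1 + x + x². The sketch models GF(25) as GF(5)[t]/(t² − 3) (a choice of irreducible polynomial made for this illustration, not taken from the paper), finds a primitive element μ by brute force, and checks that μ^{2i(q−1)}, i = 1, 2, are exactly the nontrivial cube roots of unity, i.e. the roots of f:

```python
# GF(25) modelled as pairs (a, b) ~ a + b*t with t^2 = 3; t^2 - 3 is
# irreducible over GF(5) because 3 is a non-square mod 5 (sketch assumption)
P = 5

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + 3 * b * d) % P, (a * d + b * c) % P)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

def order(u):
    return next(n for n in range(1, 25) if power(u, n) == (1, 0))

# a primitive element mu of GF(25)* (multiplicative order 5^2 - 1 = 24)
nonzero = [(a, b) for a in range(P) for b in range(P) if (a, b) != (0, 0)]
mu = next(u for u in nonzero if order(u) == 24)

q = 5
for i in range(1, (q - 1) // 2 + 1):
    r = power(mu, 2 * i * (q - 1))               # mu^8 and mu^16
    # r is a root of f = 1 + x + x^2 ...
    assert add(add((1, 0), r), mul(r, r)) == (0, 0)
    # ... equivalently a cube root of unity other than 1: (x - 1)f = x^3 - 1
    assert power(r, 3) == (1, 0) and r != (1, 0)
```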
Example 8.10. X_p^{((p−1)/2)}, where p = 5

Consider X_5^{(2)} acting on matrices A of size 2 × 4 over a field F of characteristic five. The columns can therefore be taken to be four points on the projective line over F. Given three of these points, what is the condition on the fourth point that makes X_5^{(2)}(A) = 0? Firstly, by a linear transformation we may assume that the first three points (columns of A) are (1, 0), (0, 1) and (1, 1). They are contained in a unique
subline over GF(5). If the fourth point has coordinates (a, b), then it is easily calculated that

$X_5^{(2)}(A) = X_5\begin{pmatrix} 1&1&0&0&1&1&a&a \\ 0&0&1&1&1&1&b&b \end{pmatrix} = 0$ exactly when a² + 4ab + b² = 0.

Using projective coordinates we may assume that b = 1. Thus the pair of invariant points with respect to the first three are (a, 1), where a² + 4a + 1 = 0; that is, a = 3 ± √3 ∈ GF(25). Let us identify the points (x, 1) on the projective line with x ∈ F, and (1, 0) with ∞. Since (3 + √3)(3 − √3) = 1 = (3 + √3) + (3 − √3), we see that the set {3 ± √3} ⊂ GF(25) is fixed by the linear fractional transformations x ↦ 1 − x and x ↦ x^{−1}, which generate all the linear collineations of the line that fix the first three points as a set. Also, the (non-linear) Frobenius involution x ↦ x⁵ of GF(25) maps 3 + √3 ↔ 3 − √3. Hence these two points are invariant as a set under the linear collineations that fix the first three points (a group isomorphic to the symmetric group S_3), and also under the non-linear Frobenius involution. This confirms the invariant property of X_5^{(2)} acting on matrices of size 2 × 4.

Theorem 8.11. There is a covariant (q − 1)-ic of a multiset S of (m − 1)(q − 1)/i points of (m − 1)-dimensional space of characteristic p (where i is a divisor of (m − 1)(q − 1)), given, with x := (x_1, ..., x_m) variable, by X_q(S^{(i)} ∪ {x^{(q−1)}}) = 0.

For example, in the case p = 5, m = 3, and i = 2, we consider X_5(S^{(2)} ∪ {x^{(4)}}) = 0, where S is a set of (m − 1)(p − 1)/i = 4 points in a plane of characteristic 5. By a linear transformation we can assume that the four points of S are the unit vectors (1, 0, 0), (0, 1, 0), (0, 0, 1), together with (1, 1, 1). Thus they are contained in the prime subplane PG(2, 5) of PG(2, F), and also in the covariant quartic curve. Then, using the definition of X_5, one sees that the quartic curve is

Q : y²z² + x²z² + x²y² − xyz² − xy²z − x²yz = 0.
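The arithmetic in Example 8.10 is easy to confirm by machine. With GF(25) modelled as GF(5)[t]/(t² − 3) (an implementation choice for this sketch, so that t plays the role of √3), the two invariant points 3 ± √3 satisfy a² + 4a + 1 = 0, have product and sum 1, and are preserved as a set by x ↦ 1 − x and x ↦ x^{−1}:

```python
# GF(25) as pairs (a, b) ~ a + b*t with t^2 = 3 (sketch assumption:
# t^2 - 3 is irreducible over GF(5), and t stands for sqrt(3))
P = 5

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + 3 * b * d) % P, (a * d + b * c) % P)

def inv(u):
    r = (1, 0)                    # u^(-1) = u^23, since u^24 = 1 in GF(25)*
    for _ in range(23):
        r = mul(r, u)
    return r

r1, r2 = (3, 1), (3, 4)           # 3 + sqrt(3) and 3 - sqrt(3)
for a in (r1, r2):
    # a^2 + 4a + 1 = 0, i.e. a^2 + 4ab + b^2 = 0 with b = 1
    assert add(add(mul(a, a), mul((4, 0), a)), (1, 0)) == (0, 0)

assert mul(r1, r2) == (1, 0) and add(r1, r2) == (1, 0)    # product = sum = 1
one_minus = lambda a: ((1 - a[0]) % P, (-a[1]) % P)
assert {one_minus(r1), one_minus(r2)} == {r1, r2}         # fixed by x -> 1 - x
assert {inv(r1), inv(r2)} == {r1, r2}                     # fixed by x -> 1/x
```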
This quartic turns out to be degenerate: it can be written as the product of two irreducible conics,

Q = (yz + ωxz + ω²xy)(yz + ω²xz + ωxy),

where ω³ = 1, ω ∈ GF(25), ω ≠ 1. Note that ω² + ω = −1, and ω⁵ = ω². The four points of S are thus double points (singularities) of the quartic Q, and they are the only points of Q in PG(2, 5).

9. Appendix: Complete Polarization

Here we show how the formula for X_p derives from the complete polarization of det^{p−1}. We assume that all vectors come from a vector space V of dimension n over a field F, which at first does not necessarily have prime characteristic. Let f be a homogeneous polynomial of degree d in one vector variable x := (x_1, ..., x_n). If F has prime characteristic p then we also assume that d ≤ p − 1. This is necessary since we need d! ≠ 0 in F. As in [16, pp. 5–6], the polarization or Aronhold operator D_{yx} can be defined by the formula

D_{yx} f := y_1 ∂f/∂x_1 + ... + y_n ∂f/∂x_n.
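The complete polarization built from this operator can be computed mechanically. The following sympy sketch (not from the paper) uses the equivalent coefficient-of-λ formulation, applies it to the polynomial of Example 9.1 below, and recovers the value stated there; the names x1_j are simply this sketch's encoding of the variables x_i^{(j)}.

```python
import sympy as sp

# complete polarization of f(x) in d vector variables, computed as
# (1/d!) * [coefficient of l1*...*ld] f(l1*x^(1) + ... + ld*x^(d))
x1, x2 = sp.symbols('x1 x2')
d = 3
lam = sp.symbols('l1:4')                                  # l1, l2, l3
X = [sp.symbols(f'x1_{j} x2_{j}') for j in (1, 2, 3)]     # x^(j) = (x1_j, x2_j)

f = x1**3 + x1**2 * x2                                    # f from Example 9.1
g = f.subs({x1: sum(lam[j] * X[j][0] for j in range(d)),
            x2: sum(lam[j] * X[j][1] for j in range(d))})
coeff = sp.Poly(sp.expand(g), *lam).coeff_monomial(lam[0] * lam[1] * lam[2])
pol = sp.Rational(1, sp.factorial(d)) * coeff

# the value stated in Example 9.1
expected = (X[0][0] * X[1][0] * X[2][0]
            + sp.Rational(1, 3) * (X[0][0] * X[1][0] * X[2][1]
                                   + X[0][0] * X[1][1] * X[2][0]
                                   + X[0][1] * X[1][0] * X[2][0]))
assert sp.expand(pol - expected) == 0

# substituting x for every x^(j) returns the original polynomial
back = pol.subs({X[j][i]: (x1, x2)[i] for j in range(d) for i in range(2)})
assert sp.expand(back - f) == 0
```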
Thus D_{yx} f is of degree d − 1 in x and linear in the vector y := (y_1, ..., y_n). By applying this operator d times we can linearize f, so that it becomes a multilinear form in d vector variables x^{(j)} := (x_1^{(j)}, ..., x_n^{(j)}) in V, by forming the complete polarization of f with respect to the variable x:

f(P_x) := (1/d!) D_{x^{(1)}x} ... D_{x^{(d)}x} f.

Notice that x disappears completely from this expression and is replaced by the d vectors x^{(j)}. As a multilinear form, f(P_x) is symmetric in these variables, which means that the coefficients of the polarization in the new nd variables x_i^{(j)} can be written as a symmetric n^d hypercube over F. Alternatively, and perhaps more easily, f(P_x) can be calculated as 1/d! times the coefficient of λ_1 ... λ_d in f(λ_1 x^{(1)} + ... + λ_d x^{(d)}), where the λ_j are auxiliary variables.

Example 9.1. In the case n = 2 and d = 3, if x := (x_1, x_2) and f(x) := x_1³ + x_1²x_2, then

f(P_x) = x_1^{(1)} x_1^{(2)} x_1^{(3)} + (1/3)(x_1^{(1)} x_1^{(2)} x_2^{(3)} + x_1^{(1)} x_2^{(2)} x_1^{(3)} + x_2^{(1)} x_1^{(2)} x_1^{(3)}).

Note that if the n variables x_i from the vector space are substituted for the corresponding x_i^{(j)} (for every 1 ≤ j ≤ d) in the polarized form, then the original polynomial is returned.

Theorem 9.2. If p is a prime, then X_p of an m × m(p − 1) matrix is (−1)^m times the complete polarization of det(A)^{p−1}, where A is a general m × m matrix over the field F of prime characteristic p.

Proof. Let A := (a_{ij}) be a general m × m matrix over F. The columns of A are a_j := (a_{1j}, ..., a_{mj})^t, 1 ≤ j ≤ m. Let f(a_1, ..., a_m) := det(A)^{p−1}. Since det(A) is multilinear in its column vectors a_j, f is a homogeneous polynomial of degree p − 1 in each of the a_j. The complete polarization of f can be performed with respect to each vector a_j; thus we want to calculate f(P_{a_1,...,a_m}) := f(P_{a_1})...(P_{a_m}).

Each column vector a_j of A splits in the complete polarization into p − 1 new vectors a_j^{(k)}, 1 ≤ k ≤ p − 1. Hence we define in total m(p − 1) columns of a new m × m(p − 1) matrix M by a_j^{(k)} := (a_{1j}^{(k)}, ..., a_{mj}^{(k)})^t. Thus the general element in the i'th row and (j, k)'th column of M is a_{ij}^{(k)}, which is an independent variable. Using m(p − 1) auxiliary variables λ_{j,k}, for 1 ≤ j ≤ m and 1 ≤ k ≤ p − 1, we see that the complete polarization f(P_{a_1,...,a_m}) is the coefficient of the diagonal term Λ := Π_{1≤j≤m, 1≤k≤p−1} λ_{j,k} in

f(λ_{1,1} a_1^{(1)} + ... + λ_{1,p−1} a_1^{(p−1)}, ..., λ_{m,1} a_m^{(1)} + ... + λ_{m,p−1} a_m^{(p−1)}).
Letting A^{(k)} be the m × m matrix (a_{ij}^{(k)}), for 1 ≤ k ≤ p − 1, this is the coefficient C of Λ in

det(Σ_{k=1}^{p−1} A^{(k)} diag(λ_{1,k}, ..., λ_{m,k}))^{p−1}.

The proof of this theorem now resides in showing that X_p(M) = (−1)^m C. Considering C as a function of the m × m(p − 1) matrix M, it is clear that it is already a
polynomial in the m²(p − 1) variables a_{ij}^{(k)} that is multilinear in the m(p − 1) columns, and of degree p − 1 when taken as a function of any particular row—an important property shared by X_p. Equivalently, the monomials of C take one variable from each of the m(p − 1) columns, and p − 1 variables from each row. Call this type of monomial type T.

Recall the basic result, Theorem 7.5, about X_p: for any m × m matrix B over F, det(B)^{p−1} = X_p(B^{(p−1)}), where B^{(p−1)} is the m × m(p − 1) matrix that has each column of B repeated p − 1 times. We can use this to calculate C, for we define the m × m(p − 1) matrix G := (Σ_{k=1}^{p−1} A^{(k)} diag(λ_{1,k}, ..., λ_{m,k}))^{(p−1)}. Thus G has each column of Σ_{k=1}^{p−1} A^{(k)} diag(λ_{1,k}, ..., λ_{m,k}) repeated p − 1 times. Hence C is the coefficient of Λ in X_p(G).

Consider a general monomial (or term) t in the variables a_{ij}^{(k)} of type T: how can it arise in C? Let n_{ij} be the number of variables a_{ij}^{(k)} of t, 1 ≤ k ≤ p − 1, and let N_t := (n_{ij}) be the corresponding m × m matrix over Z. Clearly 0 ≤ n_{ij} ≤ p − 1 for all i, j. Since t is of type T, each column of N_t has sum p − 1, and so does each row; thus the line-sums of N_t are all p − 1.

Each auxiliary variable λ_{j,k} is paired with a unique variable a_{ij}^{(k)} of t, for some i. But in the calculation of X_p(G) we must only consider monomials that contain each λ_{j,k} once. Hence we consider the basic definition of X_p(G) as a sum of monomials of type T. Let P_j be the set of repeated columns of G of the type λ_{j,1} a_j^{(1)} + ... + λ_{j,p−1} a_j^{(p−1)}, corresponding to the j'th column of A. There are n_{ij} variables of t coming from products within the i'th row intersected with P_j, but these products must come from different columns within P_j, as the definition of X_p(G) states, and as i varies all the columns of P_j must be taken up precisely once by the products. We can obtain the variables of t corresponding to j by using products that take n_{1j} from the first row, n_{2j} from the second, and so on. There are precisely $\binom{p-1}{n_{1j},\ldots,n_{mj}}$ ways of choosing these products, while each such choice gives the correct variables of t precisely n_{1j}! ... n_{mj}! times. Thus we can choose the products independently within each P_j, and the number of ways for P_j is $\binom{p-1}{n_{1j},\ldots,n_{mj}} \cdot n_{1j}! \cdots n_{mj}! = (p-1)! \equiv -1 \pmod{p}$. Multiplying the numbers of ways over all the P_j with 1 ≤ j ≤ m, we obtain the coefficient C = (−1)^m.

Remark 9.3. Is it possible to replace p by the power q = p^h in the complete polarization Theorem 9.2? The present proof only appears to break down in the last few lines, because it is not true that (q − 1)! ≡ −1 (mod p) for q = p^h, h > 1.

References

[1] N. Alon, N. Linial and R. Meshulam, Additive bases of vector spaces over finite fields, J. Combin. Theory Ser. A 57 (1991), 203–210.
[2] R.A. Brualdi and H.J.
Ryser, Combinatorial Matrix Theory, Encyclopedia of Mathematics and its Applications 39, Cambridge University Press, Cambridge, 1991.
[3] A. Cayley, On the theory of determinants, Camb. Phil. Trans. viii (1849), 75. (The paper was actually given at a meeting of the society in 1843.)
[4] L.E. Dickson, History of the Theory of Numbers. Vol. I: Divisibility and Primality, Chelsea Publishing Company, New York, 1966.
[5] D.G. Glynn, The modular counterparts of Cayley's hyperdeterminants, Bull. Austral. Math. Soc. 57 (1998), 479–492.
[6] D.G. Glynn and J.W.P. Hirschfeld, On the classification of geometric codes by polynomial functions, Des. Codes Cryptogr. 6 (1995), 189–204.
[7] D.G. Glynn, The permanent of a square matrix, submitted.
[8] R. Goodman and N.R. Wallach, Representations and Invariants of the Classical Groups, Encyclopedia of Mathematics and its Applications 68, Cambridge University Press, Cambridge, 1968.
[9] R. Hartshorne, Algebraic Geometry, Graduate Texts in Mathematics 52, Springer, New York, 1977.
[10] J.W.P. Hirschfeld, Projective Geometries over Finite Fields, Second Edition, Oxford University Press, Oxford, 1998.
[11] J.W.P. Hirschfeld and J.A. Thas, General Galois Geometries, Oxford University Press, Oxford, 1991.
[12] W.V.D. Hodge and D. Pedoe, Methods of Algebraic Geometry (two volumes), Cambridge University Press, Cambridge, 1968.
[13] E. Pascal, Die Determinanten, H. Leitzmann, Halle, 1900. (The German edition, proof-read among others by the younger H. Grassmann, was preceded by an Italian edition, "I Determinanti", Hoepli, Milan, 1897.)
[14] V.S. Pless and W.C. Huffman, Editors, R.A. Brualdi, Assoc. Editor, Handbook of Coding Theory (two volumes), North-Holland, 1998.
[15] J.H. van Lint and R.M. Wilson, A Course in Combinatorics, Cambridge University Press, Cambridge, 1991.
[16] H. Weyl, The Classical Groups, their Invariants and Representations, Princeton University Press, Princeton, N.J., 1939.

E-mail address: [email protected]

CSEM, Flinders University, P.O. Box 2100, Adelaide, South Australia 5001, Australia, Tel: +61 8 82017582, +61 8 82984274 (hm)