Matrices connected with Brauer's centralizer algebras



Mark D. McKerihan†

Department of Mathematics, University of Michigan, Ann Arbor, MI 48109
e-mail: [email protected]

Submitted: October 9, 1995; Accepted: October 31, 1995
the electronic journal of combinatorics 2 (1995), #R23

Abstract

In a 1989 paper [HW1], Hanlon and Wales showed that the algebra structure of the Brauer centralizer algebra $A_f^{(x)}$ is completely determined by the ranks of certain combinatorially defined square matrices $Z^{\lambda/\mu}$, whose entries are polynomials in the parameter $x$. We consider a set of matrices $M^{\lambda/\mu}$ found by Jockusch that have a similar combinatorial description. These new matrices can be obtained from the original matrices by extracting the terms that are of "highest degree" in a certain sense. Furthermore, the $M^{\lambda/\mu}$ have analogues $\overline{M}^{\lambda/\mu}$ that play the same role that the $Z^{\lambda/\mu}$ play in $A_f^{(x)}$, for another algebra that arises naturally in this context. We find very simple formulas for the determinants of the matrices $M^{\lambda/\mu}$ and $\overline{M}^{\lambda/\mu}$, which prove Jockusch's original conjecture that $\det M^{\lambda/\mu}$ has only integer roots. We define a Jeu de Taquin algorithm for standard matchings, and compare this algorithm to the usual Jeu de Taquin algorithm defined by Schützenberger for standard tableaux. The formulas for the determinants of $M^{\lambda/\mu}$ and $\overline{M}^{\lambda/\mu}$ have elegant statements in terms of this new Jeu de Taquin algorithm.


Subject Classification: 05E15, 05E10

† This research was supported in part by a Department of Education graduate fellowship at the University of Michigan.


Contents

1 Introduction
  1.1 Acknowledgments
2 Determinants of $M$ and $\overline{M}$
  2.1 Column permutations of standard matchings
  2.2 Product formulas for $M$ and $\overline{M}$
  2.3 Eigenvalues of $T_k(x)$ and $T_k(y_1,\ldots,y_n)$
  2.4 The column span of $P$
  2.5 Computation of $\det M$ and $\det \overline{M}$
3 Jeu de Taquin for standard matchings
  3.1 Definition of the algorithm
  3.2 Jeu de Taquin preserves standardness
  3.3 Dual Knuth equivalence with JdT for tableaux
  3.4 The normal shape obtained via JdT
  3.5 An alternate statement of the main theorem


1 Introduction

Brauer's centralizer algebras were introduced by Richard Brauer [Brr] in 1937 for the purpose of studying the centralizer algebras of orthogonal and symplectic groups on the tensor powers of their defining representations. An interesting problem that has been open for many years now is to determine the algebra structure of the Brauer centralizer algebras $A_f^{(x)}$. Some results about the semisimplicity of these algebras were found by Brauer, Brown and Weyl, and have been known for quite a long time (see [Brr], [Brn], [Wl]). More recently, Hanlon and Wales [HW1] have been able to reduce the question of the structure of $A_f^{(x)}$ to finding the ranks of certain matrices $Z^{\lambda/\mu}(x)$. Finding these ranks has proved very difficult in general. They have been found in several special cases, and there are many conjectures about these matrices which are supported by large amounts of computational evidence. One conjecture arising out of this work was that $A_f^{(x)}$ is semisimple unless $x$ is a rational integer. Wenzl [Wz] has used a different approach (involving "the tower construction" due to Vaughan Jones [Jo]) to prove this important result. In our work we take the point of view taken by Hanlon and Wales in [HW1]-[HW4], and we pay particular attention to the case where $x$ is a rational integer.

We consider subsets of $\mathbb{Z}^+ \times \mathbb{Z}^+$, which we will think of as the set of positions in an infinite matrix, whose rows are numbered from top to bottom, and whose columns are numbered from left to right. Thus, the element $(i,j)$ will be thought of as the position in the $i$th row and $j$th column of the matrix. These positions will be called boxes.

Definition 1.1. Define the partial order $\le_s$, "the standard order," on $\mathbb{Z}^+ \times \mathbb{Z}^+$ by $x \le_s y$ if $x$ appears weakly North and weakly West of $y$.

Definition 1.2. Define the total order …

This means that one generally cannot reconstruct $\overline{M}$ by substitution in $M(n)$. The matrix $\overline{M}$ has an interesting algebraic interpretation which we briefly describe here. To do this we give a short description of the algebra $A_f^{(x)}$ and the closely related algebra $\overline{A}_f$. See [HW1] for a more complete description. Both algebras have the same basis, namely the set of 1-factors on $2f$ points. To define the product of two such 1-factors $\delta_1$ and $\delta_2$, we construct a graph $B(\delta_1, \delta_2)$. We can think of this graph as a 1-factor $\bar\delta(\delta_1, \delta_2)$ together with some number $\gamma(\delta_1, \delta_2)$ of cycles of even lengths $2l_1, 2l_2, \ldots, 2l_{\gamma(\delta_1,\delta_2)}$. The product in $A_f^{(x)}$ is given by
$$\delta_1 \cdot \delta_2 = x^{\gamma(\delta_1,\delta_2)}\, \bar\delta(\delta_1,\delta_2).$$
In $\overline{A}_f$, the product is
$$\delta_1 \cdot \delta_2 = \left( \prod_{j=1}^{\gamma(\delta_1,\delta_2)} p_{l_j}(y_1,\ldots,y_n) \right) \bar\delta(\delta_1,\delta_2),$$
where $p_i(y_1,\ldots,y_n)$ is the $i$th power sum. Since $p_i(1,\ldots,1) = n$ for all $i$, the specialization of $\overline{A}_f$ to $y_1 = \cdots = y_n = 1$ is isomorphic to $A_f^{(n)}$. In $\overline{A}_f$ there is an important tower of ideals $\overline{A}_f = \overline{A}_f(0) \supseteq \overline{A}_f(1) \supseteq \overline{A}_f(2) \supseteq \cdots$. Let $D_f(k) = \overline{A}_f(k)/\overline{A}_f(k+1)$. In [HW1], Hanlon and Wales express the multiplication in $D_f(k)$ in terms of a matrix $Z_{m,k} = Z_{m,k}(y_1,\ldots,y_n)$ where $f = m + 2k$. Furthermore, they show that the algebra structure of $D_f(k)$ for particular values of $y_1,\ldots,y_n$ is completely determined by the rank of $Z_{m,k}$. Their work implies that $\det(Z_{m,k}) \ne 0$ for the values $y_1,\ldots,y_n$ if and only if $D_f(k)$ is semisimple for those values.


A typical element of $D_f(k)$ is a sum of terms of the form $f\delta$ where $f$ is a homogeneous symmetric function and $\delta$ is a certain type of 1-factor. Let $\operatorname{gr}(f\delta) = \deg(f) + k$. The multiplication in $D_f(k)$ respects this grading in the sense that $\operatorname{gr}(f_1\delta_1 \cdot f_2\delta_2) \le \operatorname{gr}(f_1\delta_1) + \operatorname{gr}(f_2\delta_2)$. Let $\tilde{D}_f(k)$ be the associated graded algebra. One can construct matrices $\overline{M}_{m,k} = \overline{M}_{m,k}(y_1,\ldots,y_n)$ that play the same role in $\tilde{D}_f(k)$ that the $Z_{m,k}$ play in $D_f(k)$. It turns out that $\overline{M}_{m,k}$ is the matrix obtained from $Z_{m,k}$ by extracting highest degree terms. Using the representation theory of the symmetric group, one can show that the matrix $\overline{M}_{m,k}$ is similar to a direct sum of matrices $\overline{M}^{\lambda/\mu}$ where $\lambda$ and $\mu$ are partitions of $f$ and $m$ respectively. These matrices $\overline{M}^{\lambda/\mu}$ are precisely the matrices $\overline{M}$ defined above. The main result of this paper is a formula for the determinant of $\overline{M}$, which can be interpreted as a discriminant for the algebra $\tilde{D}_f(k)$ in the same way that $\det Z_{m,k}$ is a discriminant for $D_f(k)$.

This paper is split into two main sections. In section 2, we prove several basic facts about standard matchings which are needed to compute the determinant of $M$. In particular, we find an ordering on matchings (defined in 2.4) such that if $\epsilon$ is a standard matching, then any column permutation of $\epsilon$ yields a matching which is weakly greater than $\epsilon$ in this order (see Theorem 2.5). In Theorem 2.11 we show that the standard matchings of shape $\lambda/\mu$ index a basis for an important vector space associated to $\lambda/\mu$. This part of the paper is very similar in flavor to standard constructions of the irreducible representations of the symmetric group. Using these two theorems, we are able to give an explicit product formula for the determinant of $M$ in Theorem 2.14. This formula has the following form:
$$\det M = C \prod_{2\gamma \vdash |\lambda/\mu|} h_{2\gamma}(x)^{c^{\lambda}_{\mu\,2\gamma}}$$
where $C$ is some nonzero real number, and $c^{\lambda}_{\mu\,2\gamma}$ is a Littlewood-Richardson coefficient. The same argument also shows that for $x = n \in \mathbb{Z}^+$,
$$\det \overline{M} = C \prod_{2\gamma \vdash |\lambda/\mu|} z_{\gamma}(y_1,\ldots,y_n)^{c^{\lambda}_{\mu\,2\gamma}}$$
where the constant $C$ is the same as the one above.

In section 3, we introduce a Jeu de Taquin algorithm for standard matchings. Much of this section is devoted to a comparison of this algorithm with the well known Jeu de Taquin algorithm for standard tableaux invented by Schützenberger. This makes sense because there is a natural way to think about any matching as a tableau such that a matching is standard if and only if it is standard as a tableau. In Theorem 3.3 we show that if Jeu de Taquin for matchings is applied to a standard matching of a skew shape with one box removed, then the output is another standard matching. Theorem 3.5 gives a description of how the two Jeu de Taquin algorithms compare in terms of the dual Knuth equivalence for permutations. Theorem 3.5 is used to show in Theorem 3.10 that if Jeu de Taquin is used to bring a standard matching of a skew shape to a standard matching of a normal shape (the shape of a partition), then


both algorithms arrive at the same normal shape, and as a consequence of this, the standard matching of normal shape obtained from any standard matching is independent of the sequence of Jeu de Taquin moves chosen. Finally, using Theorem 3.12, a result of Dennis White [Wh], we find that the number of times the normal shape $\nu$ appears as the shape obtained from a standard matching of shape $\lambda/\mu$ using Jeu de Taquin is the Littlewood-Richardson coefficient $c^{\lambda}_{\mu\nu}$ (Theorem 3.14). Using this theorem we obtain elegant restatements of the main results from section 2 (Theorems 2.14 and 2.16) in terms of the Jeu de Taquin algorithm for standard matchings.

1.1 Acknowledgments

The author would like to thank William Jockusch for suggesting the problem discussed in this paper, and Phil Hanlon for valuable discussions leading to the results described here.


2 Determinants of $M$ and $\overline{M}$

2.1 Column permutations of standard matchings

Definition 2.1. Suppose $\epsilon$ is a matching of shape $\lambda/\mu$. Define $T(\epsilon)$ to be the filling of $[\lambda/\mu]$ that puts $i$ in box $x$ if $\epsilon(x)$ is in row $i$. Define $\bar{T}(\epsilon)$ to be the filling obtained by rearranging the elements in each row of $T(\epsilon)$ in increasing order from left to right.

Example 2.1. Here are $T(\epsilon)$ and $\bar{T}(\epsilon)$ for the matching $\epsilon$ shown below.

[Figure: a matching $\epsilon$ on a skew diagram, together with the fillings $T(\epsilon)$ and $\bar{T}(\epsilon)$.]
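To make Definition 2.1 concrete, here is a small sketch (ours, not from the paper) in Python. A matching is represented as a dictionary sending each box $(i,j)$ to its partner box; the helper names are our own.

```python
# Sketch of Definition 2.1 (our own illustration, not the paper's code).
# A matching eps is a dict sending each box (row, col) to its partner box.

def T(eps):
    """T(eps) puts i in box x if eps(x) lies in row i."""
    return {x: eps[x][0] for x in eps}

def T_bar(eps):
    """T_bar(eps) rearranges each row of T(eps) into increasing order, left to right."""
    t = T(eps)
    rows = {}
    for (r, c), v in t.items():
        rows.setdefault(r, []).append((c, v))
    out = {}
    for r, cells in rows.items():
        cols = sorted(c for c, _ in cells)
        vals = sorted(v for _, v in cells)
        out.update(dict(zip(((r, c) for c in cols), vals)))
    return out

# Example: the matching on boxes (1,2),(1,3),(2,1),(2,2) pairing (1,2)-(2,1) and (1,3)-(2,2).
eps = {(1, 2): (2, 1), (2, 1): (1, 2), (1, 3): (2, 2), (2, 2): (1, 3)}
print(T(eps))      # {(1, 2): 2, (2, 1): 1, (1, 3): 2, (2, 2): 1}
print(T_bar(eps))  # each row of T(eps), sorted increasingly from left to right
```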

Lemma 2.1. If $\epsilon$ is a standard matching then $\bar{T}(\epsilon)$ is a semistandard tableau.

Proof. Suppose $x \in [\lambda/\mu]$. If $y \in [\lambda/\mu]$ is immediately below or to the right of $x$, then $x <_s y$. Hence $\epsilon(x)$ …

… $> -k$. (Recall that $R_l \cap C_j = \emptyset$ unless $j \ge 1 - l$.) Since $\pi$ stabilizes the columns of $[\lambda/\mu]$, this implies that there exists some $z \in C_i$ such that $\pi(z) \in C_j$. But $i + j > k - k = 0$, which contradicts Lemma 2.2 b. We conclude that $w_{\bar{T}(\pi\epsilon)}$ cannot precede $w_{\bar{T}(\epsilon)}$ lexicographically. Observe that this same argument shows that if $x \in R_1$ and $(\pi\epsilon)(x) \in R_{k+1}$, then $x \in C_k$, and $(\pi\epsilon)(x) \in C_{-k}$.


Suppose now that $\bar{T}(\pi\epsilon) = \bar{T}(\epsilon)$, and suppose that $\pi \notin H$. Write the elements of $A$ as $x_1, x_2, \ldots, x_a$ in order from right to left, and let $x_i$ be the rightmost box in $A$ such that $y = (\pi\epsilon)(x_i) \ne \epsilon(x_i)$. In $R_{k+1}$, let $y_1, y_2, \ldots, y_a$ be the first $a$ boxes from left to right. Note that for all $j \in \{1,\ldots,a\}$, $\epsilon(x_j) = y_j$. Furthermore, for each $j \in \{1,\ldots,a\}$, $(\pi\epsilon)(y_j) = x_j \in R_1$, and hence $T(\pi\epsilon)$ has a $1$ in position $y_j$ for all such $j$. This implies that each such $y_j$ must be at the top of its column, and hence in $C_{-k}$. Since $\bar{T}(\pi\epsilon) = \bar{T}(\epsilon)$, $y \in R_{k+1}$. Also, $y \ne y_j$ for any $j \le i$. Thus $y$ is farther right in $R_{k+1}$ than $y_i$. Since $\pi$ stabilizes the columns of $[\lambda/\mu]$, this implies that there is some $z \in C_k$ in the same column as $x_i$, such that $\pi(z)$ is in the same column as $y$. Now, by the comments above, $\pi(z)$ must be in $C_{-k}$ or to its right. Lemma 2.2 b implies that $\pi(z)$ must be in $C_{-k}$ or to its left. Thus $\pi(z) \in C_{-k}$, and now Lemma 2.2 c implies that $\pi(z)$ is exactly $k$ rows below $z$. By Lemma 2.3, $\pi(z)$ must be in the same column as $y_i$, or to its left. But this contradicts the fact that $y$ lies to the right of $y_i$ and $\pi(z)$ is in the same column as $y$. It follows that $\pi \in H$ as desired. We have shown that
$$\pi \notin H \implies \bar{T}(\pi\epsilon) \prec \bar{T}(\epsilon). \tag{2.2}$$
If $\pi \in H$ then let $\hat\mu$ be the partition such that $[\hat\mu] = [\mu] \cup A \cup \epsilon(A)$. Let $\hat\epsilon$ be the standard matching of shape $\lambda/\hat\mu$ that is $\epsilon$ restricted to $[\lambda/\hat\mu]$. Let $\hat\pi \in H$ be a column permutation of $[\lambda/\mu]$ such that $\hat\pi$ fixes $A \cup \epsilon(A)$ pointwise, and $\hat\pi\epsilon = \pi\epsilon$. It is clear that such a $\hat\pi$ exists because of the definition of $H$. We can think of $\hat\pi$ as a column permutation of $[\lambda/\hat\mu]$. By induction, we have
$$\bar{T}(\hat\pi\hat\epsilon) \preceq \bar{T}(\hat\epsilon) \tag{2.3}$$
and
$$\bar{T}(\hat\pi\hat\epsilon) = \bar{T}(\hat\epsilon) \iff \hat\pi\hat\epsilon = \hat\epsilon. \tag{2.4}$$
Now, $\bar{T}(\pi\epsilon) = \bar{T}(\hat\pi\epsilon)$ has the same letters as $\bar{T}(\epsilon)$ in the positions in $A \cup \epsilon(A)$, so it is clear from (2.3) that $\bar{T}(\pi\epsilon) \preceq \bar{T}(\epsilon)$. Moreover, (2.4) implies that $\bar{T}(\pi\epsilon) = \bar{T}(\epsilon)$ if and only if $\pi\epsilon = \epsilon$.

2.2 Product formulas for $M$ and $\overline{M}$

For the remainder of this paper, we assume that $|\lambda/\mu| = 2k$. It will be convenient to think of the matrices $M$ and $\overline{M}$ as products of three matrices,
$$M = P^t\, T_k(x)\, J, \qquad \overline{M} = P^t\, T_k(y_1,\ldots,y_n)\, J. \tag{2.5}$$
Here, $P$ is the $F_{\lambda/\mu} \times A_{\lambda/\mu}$ matrix with $(\delta, \epsilon)$ entry equal to the coefficient of $\delta$ in $e(\epsilon) \in V_{\lambda/\mu}$, where $e \in \mathbb{R}[S_{\lambda/\mu}]$ is given by
$$e = \sum_{\sigma \in C_{\lambda/\mu}} \sum_{\tau \in R_{\lambda/\mu}} \operatorname{sgn}(\sigma)\, \sigma\tau. \tag{2.6}$$


In other words, $P$ is the matrix whose $i$th column is the expansion of $e(\epsilon_i)$ in terms of the basis $F_{\lambda/\mu}$, where $\epsilon_i$ is the $i$th standard matching of shape $\lambda/\mu$. The matrix $T_k(x)$ is the $F_{\lambda/\mu} \times F_{\lambda/\mu}$ matrix with $(\delta_1, \delta_2)$ entry given by
$$T_k(x)_{\delta_1, \delta_2} = x^{\gamma(\delta_1,\delta_2)}. \tag{2.7}$$
We define $T_k(y_1,\ldots,y_n)$ by
$$T_k(y_1,\ldots,y_n)_{\delta_1,\delta_2} = \prod_{j=1}^{\gamma(\delta_1,\delta_2)} p_{l_j}(y_1,\ldots,y_n), \tag{2.8}$$
where $2l_1,\ldots,2l_{\gamma(\delta_1,\delta_2)}$ are the cycle lengths of $B(\delta_1,\delta_2)$. Note that $T_k(x)$ and $T_k(y_1,\ldots,y_n)$ are symmetric matrices. The $F_{\lambda/\mu} \times A_{\lambda/\mu}$ matrix $J$ is defined as follows:
$$J_{\delta,\epsilon} = \begin{cases} 1 & \delta = \epsilon, \\ 0 & \delta \ne \epsilon. \end{cases} \tag{2.9}$$
We want to show that the $(\epsilon_i, \epsilon_j)$ entry of $P^t T_k(x) J$ is equal to the $(\epsilon_i, \epsilon_j)$ entry of $M$. If $Y$ is any matrix, then let $Y_\delta$ denote the column of $Y$ indexed by $\delta$. We have
$$(P^t T_k(x) J)_{\epsilon_i,\epsilon_j} = P_{\epsilon_i}^t\, T_k(x)\, J_{\epsilon_j} = P_{\epsilon_i}^t\, T_k(x)_{\epsilon_j} = \sum_{\sigma\in C_{\lambda/\mu}}\sum_{\tau\in R_{\lambda/\mu}} \operatorname{sgn}(\sigma)\, T_k(x)_{\sigma\tau\epsilon_i,\, \epsilon_j} = \sum_{\sigma\in C_{\lambda/\mu}}\sum_{\tau\in R_{\lambda/\mu}} \operatorname{sgn}(\sigma)\, x^{\gamma(\sigma\tau\epsilon_i,\,\epsilon_j)} = M_{\epsilon_i,\epsilon_j}. \tag{2.10}$$
Exactly the same argument shows that $\overline{M} = P^t\, T_k(y_1,\ldots,y_n)\, J$ as well.

2.3 Eigenvalues of $T_k(x)$ and $T_k(y_1,\ldots,y_n)$

Using the product formulas (2.5) for $M$ and $\overline{M}$, we can find product formulas for $\det M$ and $\det \overline{M}$. We can do this because the matrices $T_k(x)$ and $T_k(y_1,\ldots,y_n)$ are known to have certain nice properties. What follows is a brief discussion of these properties. For a more detailed discussion, see [HW1].

Recall that there is an $S_{\lambda/\mu}$ action on $F_{\lambda/\mu}$ given by
$$(\sigma\delta)(x) = \sigma(\delta(\sigma^{-1}x)). \tag{2.11}$$
This action can be linearly extended to $V_{\lambda/\mu}$, which defines a representation $\rho : S_{\lambda/\mu} \to \operatorname{End}(V_{\lambda/\mu})$ as follows:
$$(\rho(\sigma))(\delta) = \sigma\delta. \tag{2.12}$$
The key fact about $T_k(x)$ and $T_k(y_1,\ldots,y_n)$ that we need is that both commute with the action of $S_{\lambda/\mu}$. In other words, $T_k(x)$ and $T_k(y_1,\ldots,y_n)$ can


be regarded as endomorphisms of $V_{\lambda/\mu}$ satisfying the following equations for all $\sigma \in S_{\lambda/\mu}$:
$$\rho(\sigma)\, T_k(x) = T_k(x)\, \rho(\sigma), \qquad \rho(\sigma)\, T_k(y_1,\ldots,y_n) = T_k(y_1,\ldots,y_n)\, \rho(\sigma). \tag{2.13}$$
This is an easy consequence of the following easy to prove identities (see [HW1]):
$$\gamma(\sigma\delta_1, \sigma\delta_2) = \gamma(\delta_1, \delta_2), \tag{2.14}$$
and likewise the cycle lengths of $B(\sigma\delta_1, \sigma\delta_2)$ are the same as those of $B(\delta_1, \delta_2)$. Furthermore, the $S_{\lambda/\mu}$ action on $F_{\lambda/\mu}$ is equivalent to the action on the conjugacy class of fixed point free involutions of $S_{\lambda/\mu}$. It follows from ([M] Ex. 5, p. 45) that
$$V_{\lambda/\mu} = \bigoplus_{2\gamma} V_{2\gamma}, \tag{2.15}$$

where the sum runs over all even partitions $2\gamma \vdash 2k$. Here, $V_{2\gamma}$ denotes a submodule of $V_{\lambda/\mu}$, isomorphic to the irreducible $S_{\lambda/\mu}$ module indexed by $2\gamma$. In this notation, $V_{(2k)}$ is isomorphic to the trivial representation, while $V_{(1^{2k})}$ is isomorphic to the sign representation. Since $V_{\lambda/\mu}$ decomposes into a direct sum of irreducibles $V_{2\gamma}$, each of which has multiplicity 1, it follows from Schur's Lemma that $T_k(x)$ restricted to $V_{2\gamma}$ is a scalar operator, denoted $h_{2\gamma}(x)I$. Similarly, $T_k(y_1,\ldots,y_n)$ restricted to $V_{2\gamma}$ is a scalar operator. Hanlon and Wales [HW1] compute both of these scalars.

Theorem 2.6. [HW1] Let $2\gamma \vdash 2k$. Then
$$h_{2\gamma}(x) = \prod_{(i,2j)\in[2\gamma]} (x + 2j - i - 1).$$

Theorem 2.7. [HW1] Let $2\gamma \vdash 2k$. Then $T_k(y_1,\ldots,y_n)$ restricted to $V_{2\gamma}$ is the scalar operator $z_{\gamma}(y_1,\ldots,y_n)I$, where $z_{\gamma}(y_1,\ldots,y_n)$ is the zonal polynomial indexed by $\gamma$.
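The product formula of Theorem 2.6 is easy to evaluate directly. The following sketch (ours, using sympy; the function name is our own) computes $h_{2\gamma}(x)$ from the row lengths of the even partition $2\gamma$, assuming only the formula stated above.

```python
# Evaluate h_{2gamma}(x) = product over boxes (i, 2j) of [2gamma] of (x + 2j - i - 1),
# as stated in Theorem 2.6.  two_gamma is the even partition, given by its row lengths.
import sympy as sp

def h(two_gamma, x):
    result = sp.Integer(1)
    for i, row_len in enumerate(two_gamma, start=1):   # rows i = 1, 2, ...
        for j in range(1, row_len // 2 + 1):           # boxes in the even columns 2j
            result *= (x + 2 * j - i - 1)
    return result

x = sp.Symbol('x')
print(sp.factor(h([4], x)))     # x*(x + 2)
print(sp.factor(h([2, 2], x)))  # x*(x - 1)
```

For instance, for the two even partitions of 4 this gives $h_{(4)}(x) = x(x+2)$ and $h_{(2,2)}(x) = x(x-1)$, the factors that appear in Example 2.3 below.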

2.4 The column span of $P$

In order to analyze $M$ and $\overline{M}$, we will consider how $T_k(x)$ and $T_k(y_1,\ldots,y_n)$ act on the column span of $P$. It turns out that this span is the same as the space $e(V_{\lambda/\mu}) = \langle e(\delta) : \delta \in F_{\lambda/\mu} \rangle$, where $e$ is defined in equation (2.6). We will show, in fact, that the columns of $P$ form a basis for this space.

Definition 2.3. We say that a matching $\delta$ of shape $\lambda/\mu$ is row (column) increasing if every pair $x <_s y \in [\lambda/\mu]$ that lie in the same row (column) satisfies $\delta(x) <_h \delta(y)$.

… Let $A$ be the set of boxes in $[\lambda/\mu]$ that are in the same row as $y$, and at least as far West as $y$. Let $B$ be the set of boxes in $[\lambda/\mu]$ that are in the same row as $x$, and at least as far East as $x$. Let $\mathrm{Sym}(A \cup B)$ denote the symmetric group on the set $A \cup B$. It is a well known fact that the following relation, called the Garnir relation, holds in the group algebra $\mathbb{R}S_{\lambda/\mu}$.


[Figure: the boxes $x$ and $y$, and the sets $A$ and $B$, inside the skew diagram.]

Theorem 2.9 (Garnir relation). Let $\lambda/\mu$ be a skew shape with $A, B \subseteq [\lambda/\mu]$ defined as above. Then
$$\sum_{\sigma\in\mathrm{Sym}(A\cup B)} e\sigma = 0.$$

It is useful to observe here that a row increasing matching $\delta$ is completely determined by $T(\delta)$ (see Definition 2.1). Define $\delta_{i,j}$ to be the number of boxes in row $i$ that are attached to a box in row $j$ in $\delta$. Since $\delta$ is row increasing, the boxes in row $i$ that are attached to row $j$ occur together in a block. Thus, it is not hard to see that a row increasing matching $\delta$ is uniquely determined by the numbers $\delta_{i,j}$, or equivalently by $T(\delta)$, because $\delta_{i,j}$ is also equal to the number of $j$'s in row $i$ of $T(\delta)$. Note that for all $i$ and $j$ we have $\delta_{i,j} = \delta_{j,i}$.

Lemma 2.10. Suppose that the matching $\delta$ of shape $\lambda/\mu$ is row increasing, and that $x$ lies immediately above $y$ in $[\lambda/\mu]$ with $\delta(x) >_h \delta(y)$. Define the subsets $A, B \subseteq [\lambda/\mu]$ as above. If $\sigma \in \mathrm{Sym}(A \cup B)$, let $\delta_\sigma$ be the row increasing matching obtained from $\sigma\delta$ using Lemma 2.8. Then $T(\delta_\sigma) \prec T(\delta)$.

Proof. By the definitions of $A$ and $B$, if $a \in A$ and $b \in B$, then $\delta(a)$ … $> \delta_{j,i+1}$, which implies
$$(\delta_\sigma)_{i+1,j} > \delta_{i+1,j}. \tag{2.23}$$


This in turn implies that for some $k < j$, the number of boxes in row $i+1$ that are attached to row $k$ must be smaller in $\delta_\sigma$ than in $\delta$, i.e.
$$(\delta_\sigma)_{k,i+1} = (\delta_\sigma)_{i+1,k} < \delta_{i+1,k} = \delta_{k,i+1}, \tag{2.24}$$
which contradicts the assumption that $j$ is the first row in which $w_{T(\delta_\sigma)}$ and $w_{T(\delta)}$ differ. We conclude that
$$(\delta_\sigma)_{j,i} > \delta_{j,i}, \tag{2.25}$$
which implies that $T(\delta_\sigma) \prec T(\delta)$.

If $j = i$, then we know that for all $k < i$, we have
$$(\delta_\sigma)_{i,k} = (\delta_\sigma)_{k,i} = \delta_{k,i} = \delta_{i,k}, \tag{2.26}$$
and
$$(\delta_\sigma)_{i+1,k} = (\delta_\sigma)_{k,i+1} = \delta_{k,i+1} = \delta_{i+1,k}. \tag{2.27}$$
If
$$(\delta_\sigma)_{i,i} < \delta_{i,i}, \tag{2.28}$$
then there must be at least one box in $B$ which is attached in $\delta_\sigma$ to another box in row $i$. Hence, by the observation in the first paragraph, every box in $A$ must be attached in $\delta_\sigma$ to row $i$ or above. So, for all $k > i+1$, the only boxes in $A \cup B$ that are attached in $\delta_\sigma$ to row $k$ are in $B$. It is easy to see that for all $k > i+1$,
$$(\delta_\sigma)_{i,k} \le \delta_{i,k}. \tag{2.29}$$
It follows that
$$(\delta_\sigma)_{i,i} + (\delta_\sigma)_{i,i+1} \ge \delta_{i,i} + \delta_{i,i+1}. \tag{2.30}$$
If (2.28) holds, then by (2.30) we must have
$$(\delta_\sigma)_{i+1,i} = (\delta_\sigma)_{i,i+1} > \delta_{i,i+1} = \delta_{i+1,i}. \tag{2.31}$$
This implies that for some $k < i$, the number of boxes in row $i+1$ that are attached to row $k$ must be smaller in $\delta_\sigma$ than in $\delta$, i.e.
$$(\delta_\sigma)_{k,i+1} = (\delta_\sigma)_{i+1,k} < \delta_{i+1,k} = \delta_{k,i+1}, \tag{2.32}$$
which contradicts the assumption that $i$ is the first row in which $w_{T(\delta_\sigma)}$ and $w_{T(\delta)}$ differ. We conclude that
$$(\delta_\sigma)_{i,i} \ge \delta_{i,i}. \tag{2.33}$$
If $(\delta_\sigma)_{i,i} > \delta_{i,i}$, then $T(\delta_\sigma) \prec T(\delta)$, so we can assume that $(\delta_\sigma)_{i,i} = \delta_{i,i}$. In this case, by (2.30) we know that $(\delta_\sigma)_{i,i+1} \ge \delta_{i,i+1}$. If $(\delta_\sigma)_{i,i+1} > \delta_{i,i+1}$, then


$T(\delta_\sigma) \prec T(\delta)$, so we can assume that $(\delta_\sigma)_{i,i+1} = \delta_{i,i+1}$. But now, (2.29) implies that $(\delta_\sigma)_{i,k} = \delta_{i,k}$ for all $k > i+1$. This means that $w_{T(\delta_\sigma)}$ and $w_{T(\delta)}$ are the same through row $i$, i.e. $j > i$. If $j = i+1$, then for all $k \le i$ we have
$$(\delta_\sigma)_{i+1,k} = \delta_{i+1,k}. \tag{2.34}$$
Furthermore, for all $k > i+1$, the number of boxes in $A \cup B$ that are attached to row $k$ is the same in both $\delta_\sigma$ and $\delta$. By assumption, the number of boxes in row $i$ that are attached to row $k$ is the same in $\delta_\sigma$ and $\delta$, so the same is true in row $i+1$. It follows that $(\delta_\sigma)_{i+1,i+1} = \delta_{i+1,i+1}$ as well, which means that $w_{T(\delta_\sigma)}$ and $w_{T(\delta)}$ are the same through row $i+1$, contradicting our assumption. Similarly, we can show that $j$ cannot be greater than $i+1$, because in that case we have
$$(\delta_\sigma)_{k,l} = \delta_{k,l} \tag{2.35}$$
whenever either $k \le i+1$ or $l \le i+1$. If $k > i+1$ and $l > i+1$, then (2.35) holds as well, since $\sigma$ has no effect on edges between rows $k$ and $l$ in that case. We conclude that if there is a difference between $w_{T(\delta_\sigma)}$ and $w_{T(\delta)}$, then it must occur in row $i$ or above, and we have already shown that the lemma holds in that case.

Theorem 2.11. The set $\{e(\epsilon) : \epsilon \in A_{\lambda/\mu}\}$ forms a basis for the vector space $e(V_{\lambda/\mu})$.

Proof. We already know that the set of $e(\delta)$ such that $\delta$ is a row increasing matching of shape $\lambda/\mu$ spans $e(V_{\lambda/\mu})$ (see (2.20)). We can improve this slightly as follows:
$$e(V_{\lambda/\mu}) = \langle e(\delta) : \delta \in F_{\lambda/\mu},\ \delta \text{ row increasing},\ e(\delta) \ne 0 \rangle. \tag{2.36}$$
Define $W_{\lambda/\mu} = \langle e(\epsilon) : \epsilon \in A_{\lambda/\mu} \rangle$. We will induct on the order $\preceq$ to show that $e(\delta) \in W_{\lambda/\mu}$ for all row increasing matchings $\delta$ of shape $\lambda/\mu$. If $e(\delta) = 0$ for all such matchings, then this is trivially true. If not, let $\delta_0$ be the row increasing matching that minimizes $T(\delta_0)$ with respect to $\preceq$ among the row increasing matchings $\delta$ satisfying $e(\delta) \ne 0$. We will show that $\delta_0$ is a standard matching of shape $\lambda/\mu$. If not, then there is some box $x$ and a box $y$ immediately below $x$ with $\delta_0(x) >_h \delta_0(y)$. Define the sets $A$ and $B$ as in Lemma 2.10. Let $C$ be the number of permutations $\sigma \in \mathrm{Sym}(A \cup B)$ such that $(\delta_0)_\sigma = \delta_0$. Note that $C \ne 0$, since the identity clearly works. Furthermore, note that for any matching $\delta$ we have
$$e(\sigma\delta) = e(\delta_\sigma). \tag{2.37}$$
This follows from the definition of $e$, and the fact that $\delta_\sigma$ differs from $\sigma\delta$ by a row permutation.


Using the Garnir relation (Theorem 2.9) we now obtain
$$C\, e(\delta_0) = -\sum_{\substack{\sigma\in\mathrm{Sym}(A\cup B)\\ (\delta_0)_\sigma \ne \delta_0}} e(\sigma\delta_0) = -\sum_{\substack{\sigma\in\mathrm{Sym}(A\cup B)\\ (\delta_0)_\sigma \ne \delta_0}} e((\delta_0)_\sigma). \tag{2.38}$$
By Lemma 2.10, each $(\delta_0)_\sigma$ that appears in the right hand side of (2.38) satisfies $T((\delta_0)_\sigma) \prec T(\delta_0)$, so by our choice of $\delta_0$, we have $e((\delta_0)_\sigma) = 0$. Hence,
$$e(\delta_0) = 0, \tag{2.39}$$
which contradicts our choice of $\delta_0$. We deduce that there can be no such boxes $x$ and $y$, and therefore $\delta_0$ must be a standard matching, i.e. $\delta_0 \in A_{\lambda/\mu}$, and $e(\delta_0) \in W_{\lambda/\mu}$.

Now, if $\delta$ is a row increasing matching with $T(\delta) \succ T(\delta_0)$, such that $\delta \notin A_{\lambda/\mu}$, then there is some box $x$ immediately above a box $y$ with $\delta(x) >_h \delta(y)$. Define $A$ and $B$ as before, and let $C$ be the number of permutations $\sigma \in \mathrm{Sym}(A \cup B)$ such that $\delta_\sigma = \delta$. Note that $C \ne 0$. Using the Garnir relation we obtain
$$C\, e(\delta) = -\sum_{\substack{\sigma\in\mathrm{Sym}(A\cup B)\\ \delta_\sigma \ne \delta}} e(\delta_\sigma). \tag{2.40}$$

By Lemma 2.10, every $\delta_\sigma$ that appears in the right hand side of (2.40) satisfies $T(\delta_\sigma) \prec T(\delta)$, so by induction every term on the right hand side is in $W_{\lambda/\mu}$, which implies that $e(\delta) \in W_{\lambda/\mu}$ as well. We have shown that $e(V_{\lambda/\mu}) = W_{\lambda/\mu}$.

It remains only to show that the set $\{e(\epsilon) : \epsilon \in A_{\lambda/\mu}\}$ is linearly independent. Suppose that there is a linear relation
$$\sum_{\epsilon\in A_{\lambda/\mu}} a_\epsilon\, e(\epsilon) = 0. \tag{2.41}$$
We want to show that $a_\epsilon = 0$ for all $\epsilon \in A_{\lambda/\mu}$. If not, then let $\epsilon_0$ be the standard matching that maximizes $T(\epsilon_0)$ among those $\epsilon \in A_{\lambda/\mu}$ with $a_\epsilon \ne 0$. In the proof of Lemma 2.12 below, we show that for any standard matching $\epsilon$, $e(\epsilon)$ has a non-zero $\epsilon$ coefficient, and that if $\epsilon_1$ and $\epsilon_2$ are two standard matchings with $T(\epsilon_1) \prec T(\epsilon_2)$, then the $\epsilon_2$ coefficient of $e(\epsilon_1)$ is zero. In particular, for any $\epsilon \in A_{\lambda/\mu}$ such that $T(\epsilon) \prec T(\epsilon_0)$, the $\epsilon_0$ coefficient of $e(\epsilon)$ is zero. Since the $\epsilon_0$ coefficient of $e(\epsilon_0)$ is not zero, we must have $a_{\epsilon_0} = 0$, contradicting our choice of $\epsilon_0$. Hence, we must have $a_\epsilon = 0$ for all $\epsilon \in A_{\lambda/\mu}$.

Another way to state Theorem 2.11 is that the columns of $P$ form a basis for the space $e(V_{\lambda/\mu})$.

2.5 Computation of $\det M$ and $\det \overline{M}$

The computations of $\det M$ and $\det \overline{M}$ are virtually identical, so we will only compute $\det M$ explicitly, and merely state the corresponding result for $\det \overline{M}$.


Suppose that $v \in V_{2\gamma}$ is an eigenvector of $T_k(x)$ with eigenvalue $h_{2\gamma}(x)$. Since $T_k(x)$ commutes with the action of $S_{\lambda/\mu}$, we have
$$T_k(x)(e(v)) = e(T_k(x)v) = e(h_{2\gamma}(x)v) = h_{2\gamma}(x)\, e(v), \tag{2.42}$$
i.e. $e(v)$ is also an eigenvector of $T_k(x)$ with eigenvalue $h_{2\gamma}(x)$. Furthermore, it is not hard to see that the $h_{2\gamma}(x)$ are all distinct, so we must have $e(v) \in V_{2\gamma}$. We have shown that
$$e(V_{2\gamma}) \subseteq V_{2\gamma}. \tag{2.43}$$
It now follows that
$$e(V_{\lambda/\mu}) = e\left[\bigoplus_{2\gamma\vdash 2k} V_{2\gamma}\right] = \bigoplus_{2\gamma\vdash 2k} e(V_{2\gamma}). \tag{2.44}$$
Let $d_\nu$ be the dimension of $e(V_\nu)$ (thus, if $\nu$ is not even then $d_\nu = 0$). For every even partition $\nu \vdash 2k$, let $B^\nu = \{e(v_1), \ldots, e(v_{d_\nu})\}$ be a basis for $e(V_\nu)$. Let $B_{\lambda/\mu} = \bigcup_\nu B^\nu$. We now have two bases for $e(V_{\lambda/\mu})$, namely $e(A_{\lambda/\mu})$ and $B_{\lambda/\mu}$. Let $Q$ be the $F_{\lambda/\mu} \times B_{\lambda/\mu}$ matrix whose column indexed by $e(v_i)$ is the expansion of $e(v_i) \in V_{\lambda/\mu}$ in terms of the basis $F_{\lambda/\mu}$. Let $S$ be the $A_{\lambda/\mu} \times B_{\lambda/\mu}$ matrix such that
$$PS = Q. \tag{2.45}$$
The matrix $S$ is an invertible transition matrix from the basis $e(A_{\lambda/\mu})$ to the basis $B_{\lambda/\mu}$ for $e(V_{\lambda/\mu})$. The importance of the matrix $Q$ is the fact that its columns are eigenvectors for $T_k(x)$. Thus,
$$(T_k(x)Q)_{e(v_i)} = h_\nu(x)\, Q_{e(v_i)} \qquad \text{for } e(v_i) \in B^\nu. \tag{2.46}$$
This is equivalent to saying that $T_k(x)Q = QD$, where $D$ is a $B_{\lambda/\mu} \times B_{\lambda/\mu}$ diagonal matrix whose diagonal entry in the column indexed by $e(v_i)$ is $h_\nu(x)$. We can use these facts to study the product formulation of the matrix $M$ as follows:
$$S^t M = S^t P^t T_k(x) J = Q^t T_k(x) J = D Q^t J = D S^t P^t J. \tag{2.47}$$


From this we obtain
$$\det M = \det D\, \det(P^t J) = \det(P^t J) \prod_{2\gamma\vdash 2k} h_{2\gamma}(x)^{d_{2\gamma}}. \tag{2.48}$$

Let $N = P^t J$. Then $N$ is an $A_{\lambda/\mu} \times A_{\lambda/\mu}$ matrix, and $\det(N)$ is a scalar because the matrices $P$ and $J$ both have entries in $\mathbb{R}$. It is not hard to see that the $(\epsilon_i, \epsilon_j)$ entry of $N$ is the coefficient of $\epsilon_j$ in $e(\epsilon_i)$. We have
$$N_{ij} = N_{\epsilon_i,\epsilon_j} = \sum_{\sigma\in C_{\lambda/\mu}}\sum_{\tau\in R_{\lambda/\mu}} \operatorname{sgn}(\sigma)\, \chi(\sigma\tau\epsilon_i = \epsilon_j), \tag{2.49}$$
where $\chi(\text{statement}) = 1$ if the statement is true and $0$ otherwise.

Notation. Let $R_\epsilon$ (resp. $C_\epsilon$) be the subgroup of $R_{\lambda/\mu}$ (resp. $C_{\lambda/\mu}$) that fixes the standard matching $\epsilon$.

Lemma 2.12.
$$\det N = \prod_{\epsilon\in A_{\lambda/\mu}} |R_\epsilon||C_\epsilon|.$$

Proof. Renumber the standard matchings so that they appear in increasing order with respect to $\preceq$. Note that for any row permutation $\tau$ of $[\lambda/\mu]$, and any matching $\epsilon$ of shape $\lambda/\mu$, $\bar{T}(\tau\epsilon) = \bar{T}(\epsilon)$. If $N_{ij} \ne 0$, then for some $\tau \in R_{\lambda/\mu}$ and some $\sigma \in C_{\lambda/\mu}$, we have $\sigma\tau\epsilon_i = \epsilon_j$. By Theorem 2.5 we have
$$\bar{T}(\epsilon_j) = \bar{T}(\sigma\tau\epsilon_i) \preceq \bar{T}(\tau\epsilon_i) = \bar{T}(\epsilon_i). \tag{2.50}$$
But if $i < j$, then $\bar{T}(\epsilon_i) \prec \bar{T}(\epsilon_j)$, which contradicts (2.50), so $N_{ij} = 0$. In other words, $N$ is lower triangular.

Suppose that for some $\tau \in R_{\lambda/\mu}$ and some $\sigma \in C_{\lambda/\mu}$, we have
$$\sigma\tau\epsilon_i = \epsilon_i. \tag{2.51}$$
If $\sigma \notin C_{\epsilon_i}$, then by Theorem 2.5,
$$\bar{T}(\sigma\tau\epsilon_i) \prec \bar{T}(\tau\epsilon_i). \tag{2.52}$$
For all $\tau \in R_{\lambda/\mu}$,
$$\bar{T}(\tau\epsilon_i) = \bar{T}(\epsilon_i). \tag{2.53}$$


We conclude that if $\sigma \notin C_{\epsilon_i}$, then $\bar{T}(\sigma\tau\epsilon_i) \ne \bar{T}(\epsilon_i)$ for any $\tau \in R_{\lambda/\mu}$. Hence, if $\sigma \notin C_{\epsilon_i}$, then (2.51) cannot occur. If $\sigma \in C_{\epsilon_i}$, then (2.51) holds if and only if $\tau \in R_{\epsilon_i}$. Since all permutations in $C_{\epsilon_i}$ are even, we have
$$N_{ii} = |R_{\epsilon_i}||C_{\epsilon_i}|. \tag{2.54}$$
The Lemma clearly follows.

Lemma 2.13. $d_{2\gamma}$ is equal to the Littlewood-Richardson coefficient $c^{\lambda}_{\mu\,2\gamma}$.

Proof. We begin with the fact that the operator $e$ affords a skew representation of the symmetric group $S_{\lambda/\mu}$. More explicitly,
$$e\,\mathbb{R}S_{\lambda/\mu} \cong S^{\lambda/\mu} = \bigoplus_{\nu\vdash 2k} c^{\lambda}_{\mu\nu}\, S^{\nu}, \tag{2.55}$$
where $S^{\nu}$ is the irreducible Specht module indexed by $\nu$. It is a well known fact (see [M]) that the coefficient $c^{\lambda}_{\mu\nu}$ is equal to the number of LR fillings of shape $\lambda/\mu$ whose Hebrew word has weight $\nu$ (see Definition 3.6). The following are isomorphisms of vector spaces:
$$e(V_{2\gamma}) \cong e\,\mathbb{R}S_{\lambda/\mu}(V_{2\gamma}) \cong e\,\mathbb{R}S_{\lambda/\mu} \otimes_{\mathbb{R}S_{\lambda/\mu}} V_{2\gamma} \cong \bigoplus_{\nu\vdash 2k} c^{\lambda}_{\mu\nu}\, S^{\nu} \otimes_{\mathbb{R}S_{\lambda/\mu}} V_{2\gamma}. \tag{2.56}$$
But Schur's Lemma says that $S^{\nu} \otimes_{\mathbb{R}S_{\lambda/\mu}} V_{2\gamma}$ is one dimensional if $\nu = 2\gamma$, and zero dimensional otherwise. Thus,
$$e(V_{2\gamma}) \cong c^{\lambda}_{\mu\,2\gamma}\, S^{2\gamma} \otimes_{\mathbb{R}S_{\lambda/\mu}} V_{2\gamma}. \tag{2.57}$$
That is, $e(V_{2\gamma})$ has dimension $c^{\lambda}_{\mu\,2\gamma}$, which is what we wanted to show.

Lemma 2.13, together with Lemma 2.12 and equation (2.48), completes the proof of the main theorem of this paper.

Theorem 2.14.
$$\det M = \prod_{\epsilon\in A_{\lambda/\mu}} |R_\epsilon||C_\epsilon| \prod_{2\gamma\vdash 2k} h_{2\gamma}(x)^{c^{\lambda}_{\mu\,2\gamma}}.$$

Corollary 2.15 (Jockusch's Conjecture). $\det M$ has only integer roots.

Proof. This is immediate from Theorem 2.14, since the polynomials $h_{2\gamma}(x)$ have only integer roots.

The arguments in this section can be used nearly word for word to prove the following generalization of Theorem 2.14. The only changes that need to be made are $M \to \overline{M}$, $T_k(x) \to T_k(y_1,\ldots,y_n)$ and $h_{2\gamma}(x) \to z_{\gamma}(y_1,\ldots,y_n)$.


Theorem 2.16.
$$\det \overline{M} = \prod_{\epsilon\in A_{\lambda/\mu}} |R_\epsilon||C_\epsilon| \prod_{2\gamma\vdash 2k} z_{\gamma}(y_1,\ldots,y_n)^{c^{\lambda}_{\mu\,2\gamma}}.$$

Example 2.3. Although the determinants of these matrices are nice, in general their eigenvalues are not. The shape $(4,2)/(2)$, which is in some sense the smallest non-trivial example (because there are two standard matchings of this shape), already has eigenvalues that aren't nice. Here are the matrices one gets for that shape. The reader can verify that the determinants factor according to Theorems 2.14 and 2.16, but that this is not reflected in the eigenvalues.
$$M = \begin{pmatrix} 4x^2 & 4x \\ 4x & 2x^2 + 2x \end{pmatrix} \tag{2.58}$$
$$\overline{M} = \begin{pmatrix} 4p_1^2 & 4p_2 \\ 4p_2 & 2p_1^2 + 2p_2 \end{pmatrix} \tag{2.59}$$
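As a quick sanity check (ours, not part of the paper), one can verify with sympy that the determinants of (2.58) and (2.59) factor as Theorems 2.14 and 2.16 predict; here $h_{(4)}(x) = x(x+2)$, $h_{(2,2)}(x) = x(x-1)$, and the constant in front is $8$.

```python
# Verify the factorizations of det M and det Mbar for the shape (4,2)/(2).
import sympy as sp

x, p1, p2 = sp.symbols('x p1 p2')
M = sp.Matrix([[4*x**2, 4*x],
               [4*x,    2*x**2 + 2*x]])
Mbar = sp.Matrix([[4*p1**2, 4*p2],
                  [4*p2,    2*p1**2 + 2*p2]])

print(sp.factor(M.det()))     # 8*x**2*(x - 1)*(x + 2)
print(sp.factor(Mbar.det()))  # 8*(p1**2 - p2)*(p1**2 + 2*p2)
# Setting every p_i = n (i.e. y_1 = ... = y_n = 1) turns the two factors of det Mbar
# into n*(n - 1) and n*(n + 2), matching h_(2,2)(n) and h_(4)(n).
```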


3 Jeu de Taquin for standard matchings

3.1 Definition of the algorithm

Assume that $D$ is a skew shape with one box $z$ removed (note that any skew shape can be considered a skew shape with one box removed by simply attaching a corner on either the Northwest or Southeast edge and then removing that box; thus these definitions and results apply to plain skew shapes as well). Suppose $\epsilon$ is a standard matching of $D$. Denote the box to the right of $z$ by $x$, and the box below by $y$. Assume that either $x \in D$ or $y \in D$. We define a Northwest Jeu de Taquin (NW-JdT) move (for matchings) of $\epsilon$ at $z$ as follows. If only one of $x, y$ lies in $D$ then let $w$ be that one. If both lie in $D$ then let $w$ be the element of $\{x, y\}$ that minimizes $\epsilon(w)$ with respect to the total order of Definition 1.2. Let $\epsilon'$ agree with $\epsilon$ on all other boxes, and set
$$\epsilon'(b) = \epsilon(w) \text{ if } b = z, \qquad \epsilon'(b) = z \text{ if } b = \epsilon(w). \tag{3.2}$$

In terms of 1-factors, $\epsilon'$ has an edge between the boxes $z$ and $\epsilon(w) = \epsilon'(z)$, whereas $\epsilon$ has an edge between $w$ and $\epsilon(w)$. This one edge "moves," and all the others remain fixed. The move $\epsilon \to \epsilon'$ is called a NW-JdT move of $\epsilon$ at $z$.

Example 3.1. The following is a sequence of NW-JdT moves starting with a standard matching of shape $(6,4,2)/(2)$ and ending with a standard matching of shape $(6,3,2)/(1)$. The shaded box is the box $z$ removed from the skew shape.

[Figure: three successive NW-JdT moves applied to the matching.]
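The following is a minimal sketch (ours) of a single NW-JdT move on a matching stored as a dictionary from boxes to partner boxes. The tie-break between the box to the right of $z$ and the box below $z$ is passed in as a key function, since it depends on the total order of Definition 1.2, which is not reproduced here; all names, and the default row-major order, are our own assumptions.

```python
# One NW-JdT move (a sketch of Section 3.1, under the assumptions stated above).
# eps: dict mapping each box (row, col) of D to its partner box; z: the removed box.
# order_key: key function standing in for the total order used to break ties.

def nw_jdt_move(eps, z, order_key=lambda box: box):
    i, j = z
    # Candidates are the box to the right of z and the box below z.
    candidates = [b for b in ((i, j + 1), (i + 1, j)) if b in eps]
    if not candidates:
        return eps, z                          # no move is possible
    # Choose w so that eps(w) is minimal in the given order.
    w = min(candidates, key=lambda b: order_key(eps[b]))
    partner = eps[w]
    # The edge w -- eps(w) becomes z -- eps(w); every other edge is unchanged.
    new_eps = {b: p for b, p in eps.items() if b != w and p != w}
    new_eps[z] = partner
    new_eps[partner] = z
    return new_eps, w                          # w is the new empty box
```

Iterating such moves slides the empty box one step East or South at a time, as in Example 3.1.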

A SE-JdT move of $\epsilon$ is defined similarly. In this case, let $x$ be the box above $z$, and $y$ the box to the left of $z$. Assume that either $x \in D$ or $y \in D$. If only one of $x, y$ lies in $D$ then let $w$ be that one. If both lie in $D$ then let $w$ be the element of $\{x, y\}$ that maximizes $\epsilon(w)$ with respect to the total order of Definition 1.2.