Random dense bipartite graphs and directed graphs with specified degrees

Catherine Greenhill
School of Mathematics and Statistics
University of New South Wales
Sydney, Australia 2052
[email protected]

Brendan D. McKay∗
Department of Computer Science
Australian National University
Canberra ACT 0200, Australia
[email protected]

26 November 2008

Abstract Let s and t be vectors of positive integers with the same sum. We study the uniform distribution on the space of simple bipartite graphs with degree sequence s in one part and t in the other; equivalently, binary matrices with row sums s and column sums t. In particular, we find precise formulae for the probabilities that a given bipartite graph is edge-disjoint from, a subgraph of, or an induced subgraph of a random graph in the class. We also give similar formulae for the uniform distribution on the set of simple directed graphs with out-degrees s and indegrees t. In each case, the graphs or digraphs are required to be sufficiently dense, with the degrees varying within certain limits, and the subgraphs are required to be sufficiently sparse. Previous results were restricted to spaces of sparse graphs. Our theorems are based on an enumeration of bipartite graphs avoiding a given set of edges, proved by multidimensional complex integration. As a sample application, we determine the expected permanent of a random binary matrix with row sums s and column sums t. ∗

Research supported by the Australian Research Council.

1

1

Introduction

P Let s = (s1 , . . . , sm ) and t = (t1 , . . . , tn ) be vectors of positive integers with m j=1 sj = Pn k=1 tk . Define B(s, t) to be the set of simple bipartite graphs with vertices {u1 , . . . , um }∪ {v1 , . . . , vn }, such that vertex uj has degree sj for j = 1, . . . , m and vertex vk has degree tk for k = 1, . . . , n. Equivalently, we may think of B(s, t) as the set of all m × n matrices over {0, 1} with jth row sum equal to sj for j = 1, . . . , m and kth column sum equal to tk for k = 1, . . . , n. In addition, let H be a fixed bipartite graph on the same vertex set. In this paper we find precise formulae for the probabilities that H is edge-disjoint from G ∈ B(s, t), that H is a subgraph of G, and that H is an induced subgraph of G. These probabilities are defined for the uniform distribution on B(s, t). In general, whenever we refer to a random element of a set, we always mean an element chosen uniformly at random. These formulae are obtained when the graphs in B(s, t) are sufficiently dense, the graph H is sufficiently sparse and the entries of s and t only vary within certain limits. The exact conditions are stated in Section 2. The starting point of the calculations is an enumeration of the set B(s, t, H) of graphs in B(s, t) which are edge-disjoint from H; see Theorem 2.1. In the case m = n, the n × n binary matrix associated with the bipartite graph can also be interpreted as the adjacency matrix of a digraph which has no multiple edges but may have loops. By excluding the diagonal we obtain a parallel series of results for simple digraphs (digraphs without multiple edges or loops). These are presented in Section 3. These subgraph probabilities enable the development of a theory of random graphs and digraphs in these classes. As examples of computations made possible by this theory, we calculate the expected number of subgraphs isomorphic to a given regular subgraph. 
A particular case of interest is the permanent of a random 0-1 matrix with row sums s and column sums t. Now we briefly review the history of this problem. All previous precise asymptotics were restricted to sparse graphs. Define g = max{s1 , . . . , sm , t1 , . . . , tn }, x = P max{x1 , . . . , xm , y1 , . . . , yn } and N = j sj . Asymptotic estimates for bounded g were found by Bender [2] and Wormald [21]. This was extended by Bollob´as and McKay [3] to  the case g, x = O min{log m, log n}1/3 and by McKay [12] to the case g 2 + xg = o(N 1/2 ). Estimates which are sometimes more widely applicable were given by McKay [11]. The best enumerative results for B(s, t) in the sparse domain appear in [9, 16]. Although results about sparse digraphs with specified in-degree and out-degree sequences can be deduced from the above, we are not aware of this having been done. Some

2

results using the pairings model have appeared [6]. For digraphs in the dense regime, some related work includes enumeration of tournaments by score sequence with possible forbidden subgraph [13, 15, 14, 7], Eulerian digraphs [13, 18], Eulerian oriented graphs [13, 20], and digraphs with a given excess sequence [19]. For the case of dense bipartite graphs with specified degrees, an asymptotic formula for the case of empty H was given by Canfield and McKay [5] for semiregular graphs and by Canfield, Greenhill and McKay [4] for irregular graphs. The latter study is the inspiration for the present one. In related work using different methods, Barvinok [1] gives upper and lower bounds for |B(s, t, H)| which hold very generally (from sparse to dense graphs) but which can differ by a factor of (mn)O(m+n) . Barvinok’s results also give insight into the structure of a “typical” element of B(s, t, H), which he proves is close to a certain “maximum entropy” matrix. The paper is structured as follows. The results for bipartite graphs are presented in Section 2 and the corresponding results for digraphs can be found in Section 3. Then Section 4 presents a proof of the fundamental enumeration result, Theorem 2.1, from which everything else follows. Throughout the paper, the asymptotic notation O(f (m, n)) refers to the passage of m e (m, n)), which is to be taken as a and n to ∞. We also use a modified notation O(f  shorthand for O f (m, n)nO(1)ε , where the O(1) factor is uniform over ε provided ε is small enough.

2

Subgraphs of random bipartite graphs

In this section we state our results for bipartite graphs. The starting point of the investigation is the enumeration formula given in the following theorem. Define m, n, s, t as in the Introduction and further define m n X X −1 −1 s=m sj , t = n tk , λ = s/n = t/m, A = 21 λ(1 − λ). j=1

k=1

Note that s is the average degree on one side of the vertex bipartition, t is the average degree on the other side, and λ is the edge density (the number of edges divided by mn). Let H be a fixed bipartite graph on the same vertex set that defines B(s, t), namely {u1 , . . . , um } ∪ {v1 , . . . , vn }. For j = 1, . . . , m, k = 1, . . . , n, let xj and yk be the degrees of vertices uj and vk of H, respectively, and further define δj = sj − s + λxj ,

ηk = tk − t + λyk . 3

Also define X= R=

m X j=1 m X

xj =

n X

yk ,

Y =

k=1

(sj − s)2 ,

C=

j=1

X

δj ηk ,

jk∈H n X

(tk − t)2 .

k=1

In the case of Y and similar notation used in this section, the summation is over all j ∈ {u1 , . . . , um } and k ∈ {v1 , . . . , vn } such that uj vk is an edge of H. Theorem 2.1. For some ε > 0, suppose that sj − s, xj , tk − t and yk are uniformly O(n1/2+ε ) for 1 ≤ j ≤ m and 1 ≤ k ≤ n, and X = O(n1+2ε ), for m, n → ∞. Let a, b > 0 be constants such that a + b < 21 . Suppose that m, n → ∞ with n = o(m1+ε ), m = o(n1+ε ) and   5m 5n (1 − 2λ)2 1+ + ≤ a log n. 8A 6n 6m Then, provided ε > 0 is small enough, we have    n  −1 Y m  mn−X n−xj Y m−yk |B(s, t, H)| = tk λmn sj j=1 k=1    1 R  C  Y −b × exp − 1 − + O(n ) . 1− − 2 2Amn 2Amn 2Amn The proof of Theorem 2.1 will be presented in Section 4. As in the special case of empty H proved in [4], the formula for |B(s, t, H)| has an intuitive interpretation. The first binomial and the two products of binomials are, respectively, the number of graphs with λmn edges that avoid H, the number of such graphs with row sums s, and the number of such graphs with column sums t. Therefore, the exponential factor measures the nonindependence of the events of having row sums s and having column sums t. Another expression for the product of binomials in the theorem is given below in equation (16). We can now employ Theorem 2.1 to explore the uniform probability space over B(s, t). First we need a little more notation. For all nonnegative integers h, ` define Rh,` =

m X

δjh x`j ,

Ch,` =

j=1

n X

ηkh yk` .

k=1

We will abbreviate Rh,0 = Rh and Ch,0 = Ch . Also note that R1 = C1 = λX and R0,1 = C0,1 = X. Finally, let X X X Y1,1 = xj yk , Y0,1 = δj yk , Y1,0 = x j ηk . jk∈H

jk∈H

4

jk∈H

Theorem 2.2. Under the conditions of Theorem 2.1, the following are true for a random graph G ∈ B(s, t) provided ε > 0 is small enough: (i) the probability that G is edge-disjoint from H is (1 − λ)X miss(m, n); (ii) the probability that G contains H as a subgraph is λX hit(m, n), where λX  1 1 λX 2 1  R1,1 C1,1  + + − + 2(1 − λ) n m 2(1 − λ)mn 1 − λ n m   R C Y λ λ(1 − 2λ)  R0,3 C0,3  0,2 0,2 − + + + + 2 λ(1 − λ)mn 2(1 − λ) n m 6(1 − λ)2 n2 m      R2,1 C2,1 1 1 − 2λ R1,2 C1,2 −b + 2 − + 2 + O(n ) − 2(1 − λ)2 n2 m 2(1 − λ)2 n2 m

 miss(m, n) = exp

and  (1 − λ)X  1 1  (1 − λ)X 2 1  R1,1 C1,1  hit(m, n) = exp + + + + 2λ n m 2λmn λ n m     1 R2,1 C2,1 1 + 2λ  R1,2 C1,2  1 + λ R0,2 C0,2 − 2 + + + − + 2 2λ n m 2λ n2 m2 2λ2 n2 m  (1 + λ)(1 + 2λ)  R0,3 C0,3  Y − Y0,1 − Y1,0 + Y1,1 + 2 − + O(n−b ) . − λ(1 − λ)mn 6λ2 n2 m Proof. The first probability in the statement of Theorem 2.2 is |B(s, t, H)| |B(s, t)| which can be expanded using Theorem 2.1. (One method is to apply (16) below.) The second probability can be derived in similar fashion, or can be deduced from the first on noting that the probability that G includes H is the probability that the complement of G avoids H. In the standard model of random bipartite graphs on m + n vertices with expected edge density λ, each of the mn possible edges is present independently with probability λ. The probability that a random bipartite graph taken from the standard model is disjoint from or contains a given set of X edges is (1 − λ)X or λX , respectively. Therefore, the quantities miss(m, n) and hit(m, n) given in Theorem 2.2 can be interpreted as a measure of how far these probabilities differ in B(s, t) compared to the standard model. Suppose that in addition to the conditions of Theorem 2.2, we also have X max |sj − s| + λR0,2 = o(An), j

X max |tk − t| + λC0,2 = o(Am). k

5

(1)

Then miss(m, n) = 1 + o(1) and hit(m, n) = 1 + o(1). These extra requirements are met, for example, if X = O(n1/2−2ε ). Another interesting case is when sj − s, xj , tk − t and yk are uniformly O(nε ) and X = O(n1−2ε ). To assist with the application of Theorem 2.2, we will give the simplifications that result when the graphs in B(s, t) are semiregular or when the graph H is semiregular. Corollary 2.1. In addition to the conditions of Theorem 2.2, assume that sj = s and tk = t for all j, k. Then  λY1,1 λX  1 1 λX 2 miss(m, n) = exp + + − 2(1 − λ) n m 2(1 − λ)mn (1 − λ)mn   R0,2 C0,2  λ(2 − λ)  R0,3 C0,3  λ −b + 2 + O(n ) + − − 2(1 − λ) n m 6(1 − λ)2 n2 m and  (1 − λ)X  1 1  (1 − λ)X 2 (1 − λ)Y1,1 hit(m, n) = exp + + − 2λ n m 2λmn λmn  2 1 − λ  R0,2 C0,2  1 − λ  R0,3 C0,3  −b + 2 + O(n ) . + − − 2λ n m 6λ2 n2 m Corollary 2.2. In addition to the conditions of Theorem 2.2, assume that xj = x and yk = y for all j, k. (Note that Theorem 2.2 requires x, y = O(n2ε ) in that case.) Then   λ(xy − x − y) yR + xC Yb −b miss(m, n) = exp − − − + O(n ) 2(1 − λ) 2(1 − λ)2 mn λ(1 − λ)mn and   (1 − λ)(xy − x − y) yR + xC Yb −b hit(m, n) = exp − − − + O(n ) , 2λ λ(1 − λ)mn 2λ2 mn P where Yb = jk∈H (sj − s)(tk − t). The next question we will address is the probability of H appearing as an induced subgraph. To be precise, suppose that H has no edges outside {u1 , . . . , uJ } × {v1 , . . . , vK } and let HJ,K denote the subgraph of H induced by those vertices. We will only consider the situation when the graphs in B(s, t) are semiregular. The corresponding result for irregular graphs can also be obtained using the same approach. The probability that HJ,K is an induced subgraph of G ∈ B(s, t) is simpler to state in terms of some new variables. For ` = 1, 2, 3, define ω` =

J X

ω`0

`

(xj − λK) ,

j=1

=

K X k=1

Note that ω1 = ω10 = X − λJK. 6

(yk − λJ)` .

Theorem 2.3. Adopt the assumptions of Theorem 2.1 with sj = s and tk = t for all j, k, and assume that J, K = O(n1/2+ε ). Then the probability that a random graph in B(s, t) has HJ,K as an induced subgraph is X

JK−X

λ (1 − λ)

 1 JK (1 − 2λ)ω1  1 ω12 + + − exp 2 4A m n 4Amn  0 (n + K)ω2 (m + J)ω2 1 − 2λ  ω3 ω30  −b − − − + + O(n ) . 4An2 4Am2 24A2 n2 m2

Proof. Let H ∗ be the complete bipartite graph on the parts {u1 , . . . , uJ } and {v1 , . . . , vK }. Then the probability that a random graph in B(s, t) has HJ,K as an induced subgraph is |B(s − x, t − y, H ∗ )| . |B(s, t)| This ratio can be estimated using Theorem 2.1 (or by combining Theorems 2.1 and 2.2).

The argument of the exponential in Theorem 2.3 is o(1) if JK 2 = o(An) and J 2 K = o(Am). So, in those circumstances, the probabilities of induced subgraphs asymptotically match the standard bipartite random graph model for edge probability λ. A related question asks for the distribution of the number of subgraphs of given type in a random graph in B(s, t). This deserves a serious study, which we will only just initiate here. A colour-preserving isomorphism of two bipartite graphs on {u1 , . . . , um } ∪ {v1 , . . . , vn } is an isomorphism that preserves the sets {u1 , . . . , um } and {v1 , . . . , vn }. Let I(H) be the set of all graphs isomorphic to H by a colour-preserving isomorphism. We know that m! n! |I(H)| = , aut(H) where aut(H) is the number of colour-preserving automorphisms of H. When the graphs in B(s, t) are semiregular, the expected number of elements of I(H) that are contained in or edge-disjoint from a random graph in B(s, t) is clearly just |I(H)| times the probability given by Theorem 2.2 and Corollary 2.1. If this regularity condition does not hold, the calculation is more complex. Here we consider the case that the graph H is semiregular and leave the most general case for a future paper. We will need the following averaging lemma.

7

(0)

(0)

Lemma 2.1. Let z (0) = (z1 , z2 , . . . , zn(0) ) be a vector in [−1, 1]n such that (1)

(0) j=1 zj

Pn

= 0.

(r) zi

(2)

Form z , z , . . . as follows: for each r ≥ 0, if is the first of the smallest elements (r) (r) of z and z` is the first of the largest elements of z (r) , then z (r+1) is the same as z (r) (r+1) (r+1) (r) (r) except that zi and z` are both equal to (zi + z` )/2. Then z (n) ∈ [− 21 , 12 ]n . Proof. If z (r) ∈ / [− 21 , 12 ]n , then the fact that

(r) j=1 zj

Pn

(r)

= 0 implies that zi

< 0 and

(r) z`

> 0. Therefore z (r+1) has at least one fewer element outside [− 21 , 21 ] than z (r) does. The lemma follows. (In fact, z (b(2n−1)/3c) ∈ [− 21 , 21 ]n , but this improvement is not necessary for our application.) Theorem 2.4. Suppose that the conditions of Theorem 2.1 apply with xj = x and yk = y for all j, k. Then the following is true of a random graph G in B(s, t): (i) the expected number of graphs in I(H) that are subgraphs of G is   (1 − λ)(xy − x − y) yR + xC X −b − λ |I(H)| exp − + O(n ) ; 2λ 2λ2 mn (ii) the expected number of graphs in I(H) that are edge-disjoint from G is   yR + xC λ(xy − x − y) −b X − + O(n ) . (1 − λ) |I(H)| exp − 2(1 − λ) 2(1 − λ)2 mn (0)

Proof. Define z (0) , z (1) , . . . as in Lemma 2.1, with z j = sj − s for 1 ≤ j ≤ n. For r ≥ 0, define m X X (r) (r) Y (g, h) = zj g Tj,h , where Tj,h = (tkh − t), j=1

k : uj vk ∈E(H)

and F

(r)

=

X

  Y (r) (g, h) exp − . 2Amn

(g,h)∈Sm ×Sn

For a permutation pair (g, h) ∈ Sm × Sn , define H g,h to be the isomorph of H with edge set {uj g vkh | uj vk ∈ E(H)}. As (g, h) runs over Sm ×Sn , each isomorph of H appears as H g,h exactly aut(H) times. Therefore, by Corollary 2.2, the expectation required in part (i) of the theorem is   λX F (0) (1 − λ)(xy − x − y) yR + xC −b exp − − + O(n ) . aut(H) 2λ 2λ2 mn

8

(r)

For some r ≥ 0, suppose that z (r+1) is formed from z (r) by averaging zi in Lemma 2.1. Then {(i`)g | g ∈ Sn } = Sn , so F

(r)

(r)

and z`

as

 Y (r) (g, h)   Y (r) ((i`)g, h)  = + exp − exp − 2Amn 2Amn (g,h)   (r) (r) (r)  P X zj Tj,h  zi Ti,h + z` T`,h  j ∈{i,`} / 1 exp − =2 exp − 2Amn 2Amn 1 2

X

(g,h)

=

X (g,h)

=

X (g,h)

(r)  P zj Tj,h j ∈{i,`} / exp − 2Amn

 z (r) T + z (r) T  `,h i,h ` + exp − i 2Amn (r+1) (r+1)  zi Ti,h + z` T`,h −2 e − + O(n ) 2Amn

  Y (r+1) (g, h) e −2 ) + O(n exp − 2Amn

 e −2 ) . = F (r+1) exp O(n By Lemma 2.1 there is some r0 = O(n log n) such that z (r0 ) ∈ [−n−1/2 , n−1/2 ]n . By the   e −1 ) , so F (0) = m! n! exp O(n e −1 ) by definition of F (r0 ) , we have F (r0 ) = m! n! exp O(n induction. Part (i) of the theorem follows. Part (ii) is proved in identical fashion. A simple example of Theorem 2.4 at work is the enumeration of perfect matchings in the case m = n. Equivalently, this is the permanent of the corresponding n × n binary matrix. Most previous research has focussed on the case that the matrix has constant row and column sums. For s = t = o(n1/3 ), the asymptotic expectation and variance are known, while for s = t = n − O(n1− ), the asymptotic expectation is known [3]. In the intermediate range of densities covered by the current paper, it appears that only bounds are known. The van der Waerden lower bound n! λn (proved independently by Egorychev n−s and Falikman) was improved by Gurvits [10] to s! (s − 1)s−1 /ss−2 . The best upper 1/λ n+1/(2λ) (1−λ)/(2λ) bound is s! ∼ n! λ (2πn) conjectured by Minc and proved by Bregman. See Timash¨ev [17] for references and discussion. Applying Theorem 2.4(i) with x = y = 1 gives the following. Theorem 2.5. Suppose that m = n and s, t, λ satisfy the requirements of Theorem 2.1. Then the expected permanent of a random n × n matrix over {0, 1} with row sums s and column sums t is   1−λ R+C n −b n! λ exp − + O(n ) . 2λ 2λ2 n2

9

It is interesting to note that in the regular case R = C = 0, the average given in Theorem 2.5 is only higher than Gurvits’s lower bound [10] by a factor of λ−1/2 (1 + o(1)).

3

Subdigraphs of random digraphs

The adjacency matrix of a simple digraph is a square {0, 1}-matrix with zero diagonal. Therefore, Theorem 2.1 can be applied to enumerate digraphs with specified degrees, and the result can then be used to explore the corresponding uniform probability space. In this section, H denotes a fixed simple digraph on the vertices {w1 , . . . , wn }. Let D(s, t) be the set of all simple digraphs on vertices {w1 , . . . , wn } with out-degrees s and in-degrees t, and let D(s, t, H) be the subset of D(s, t) containing those digraphs which are arc-disjoint from H. For 1 ≤ j ≤ n let xj , yj denote the out-degree and in-degree of vertex wj in H, respectively. The quantities s = t, λ, δj , ηj , X, Y , and so forth are all defined by the same formulae as in Section 2 with m = n. In the definition of Y , the summation over jk ∈ H should now be interpreted as summation over j, k ∈ {1, . . . , n} such that wj wk is an arc of H. But note that λ does not represent the arc-density of a digraph in D(s, t). Instead the arc-density of a digraph in D(s, t) is given by p = s/(n − 1). We begin with the basic enumeration result for digraphs. Theorem 3.1. For some ε > 0, suppose that sj − s, xj , tj − s and yj are uniformly O(n1/2+ε ) for 1 ≤ j, k ≤ n, and X = O(n1+2ε ), for n → ∞. Let a, b > 0 be constants such that a + b < 12 . Suppose that n → ∞ with (1 − 2λ)2 ≤ a log n. 3A Then, provided ε > 0 is small enough, we have |D(s, t, H)|  2 −1 n    n −X−n Y n−xj −1 n−yj −1 = λn2 sj tj j=1 Pn    1 R  C  Y + j=1 (sj − s)(tj − s) −b × exp − 1 − 1− − + O(n ) . 2 2An2 2An2 2An2 e be the bipartite graph obtained from H by replacing each vertex wj by two Proof. Let H e and finally adding the vertices uj , vj , replacing each arc wj wk of H by the edge uj vk of H, 10

e Then the degree sequences perfect matching {uj vj | j = 1, . . . , n} to the edge set of H. e and the total number of edges in H e are given by on the left and right of H x ej = xj + 1,

yej = yj + 1,

e = X + n, X

e satisfies respectively (1 ≤ j ≤ n). The quantity Ye for H Ye = Y +

n X

(sj − s)(tj − s) + O(n2−b )

j=1

e for any positive constant b < 1/2. Using this fact while applying Theorem 2.1 to H completes the proof. This formula for |D(s, t, H)| has an intuitive interpretation which is analogous to that given after Theorem 2.1 for the bipartite graph case. Using this enumeration theorem, we can explore the uniform probability space over D(s, t). In each case, the proof is analogous to that of the corresponding theorem for bipartite graphs in Section 2. Theorem 3.2. Under the conditions of Theorem 3.1, the following are true for a random digraph G ∈ D(s, t) if ε > 0 is small enough: (i) the probability that G is arc-disjoint from H is (1 − p)X miss(n, n); (ii) the probability G contains H as a subdigraph is pX hit(n, n), where miss(m, n) and hit(m, n) are defined in Theorem 2.2. The special cases of miss(m, n) and hit(m, n) provided by Corollaries 2.1 and 2.2 apply here as well, as do the sufficient conditions (1) for the probabilities in Theorem 3.2 to asymptotically match those in the standard random digraph model with arc probability p. Next suppose that each arc of H has both ends in {w1 , . . . , wJ }. Let HJ be the subdigraph of H induced by those vertices. For ` = 1, 2, 3, define χ` =

J X

χ0`

`

(xj − p(J−1)) ,

j=1

=

J X

(yk − p(J−1))` .

k=1

Note that χ1 = χ01 = X − pJ(J − 1). Theorem 3.3. Adopt the assumptions of Theorem 3.2 with sj = s and tk = t for all j, k, and assume that J = O(n1/2+ε ). The probability that a random digraph in D(s, t) has HJ 11

as an induced subdigraph is  2 J (1 − 2λ)χ1 χ21 X J(J−1)−X p (1 − p) exp + − n 2An 4An2  (n + J)(χ2 + χ02 ) (1 − 2λ)(χ3 + χ03 ) −b + O(n ) . − − 4An2 24A2 n2 The argument of the exponential in Theorem 3.3 is o(1) if J 3 = o(An). So in that case, the probabilities of induced subdigraphs asymptotically matches the standard random digraph model for arc probability p. Let I(H) be the isomorphism class of H and note that |I(H)| = n!/aut(H), where aut(H) is the number of automorphisms of H. By the same averaging technique as used to prove Theorem 2.4, we obtain the following. Theorem 3.4. Suppose that the conditions of Theorem 3.1 apply with xj = yj = x for all j. Then the following is true of a random digraph G in D(s, t): (i) the expected number of digraphs in I(H) that are subgraphs of G is   (1 − λ) x(x − 2) (R + C)x X −b p |I(H)| exp − − + O(n ) ; 2λ 2λ2 n2 (ii) the expected number of digraphs in I(H) that are arc-disjoint from G is   (R + C)x λx(x − 2) −b X + O(n ) . − (1 − p) |I(H)| exp − 2(1 − λ) 2(1 − λ)2 n2

4

Proof of Theorem 2.1

In the remainder of the paper we give the proof of Theorem 2.1. The overall method and many of the calculations will parallel [4], albeit with extra twists at each step, so we acknowledge our considerable debt to Rod Canfield. Outline of proof of Theorem 2.1. The basic idea is to identify |B(s, t, H)| as a coefficient in a multivariable generating function and to extract that coefficient using the saddlepoint method. In Subsection 4.1, we write |B(s, t, H)| = P (s, t, H)I(s, t, H), where P (s, t, H) is a rational expression and I(s, t, H) is an integral in m+n complex dimensions. Both depend on the location of the saddle point, which is the solution of some nonlinear equations. Those equations are solved in Subsection 4.2, and this leads to the value of P (s, t, H) in (17). In Subsections 4.3–4.6, the integral I(s, t, H) is estimated in a small 12

region R0 defined in (26). The result is given by Lemma 4.2 together with (20). Finally, in Subsection 4.7, it is shown that the integral I(s, t, H) restricted to the exterior of R0 is negligible. Theorem 2.1 then follows from (2), (17), Lemmas 4.2–4.4 and (20). In presenting the proof we will omit many details that are similar to those in [4]. Readers who wish to see them are invited to consult the preprint version of this paper [8]. We will use a shorthand notation for summation over doubly subscripted variables. If zjk is a variable for 1 ≤ j ≤ m and 1 ≤ k ≤ n, then zj• =

n X

zjk ,

z•k =

zj∗ =

zjk ,

z•• =

j=1

k=1 n−1 X

m X

zjk ,

z∗k =

m−1 X

zjk ,

j=1 k=1

zjk ,

j=1

k=1

m X n X

z∗∗ =

m−1 n−1 XX

zjk ,

j=1 k=1

for 1 ≤ j ≤ m and 1 ≤ k ≤ n. For 1 ≤ j ≤ m and 1 ≤ k ≤ n define hjk = 1 if uj vk is an edge of H and hjk = 0 otherwise. Then define the sets Xj = { k | 1 ≤ k ≤ n, hjk = 1 },

Xj = { k | 1 ≤ k ≤ n, hjk = 0 },

Yk = { j | 1 ≤ j ≤ n, hjk = 1 },

Y k = { j | 1 ≤ j ≤ n, hjk = 0 },

for 1 ≤ j ≤ m and 1 ≤ k ≤ n. P P The notations jk∈H and jk∈H indicate sums over the sets {(j, k) | 1 ≤ j ≤ m, 1 ≤ k ≤ n, hjk = 1} and {(j, k) | 1 ≤ j ≤ m, 1 ≤ k ≤ n, hjk = 0}, respectively, and similarly for products. We also define summations whose domain is limited by H. X X X zj•|H = zjk , z•k|H = zjk , z••|H = zjk , zj•|H =

k∈Xj

j∈Yk

X

X

k∈Xj

zjk ,

z•k|H =

j∈Y k

jk∈H

zjk ,

z••|H =

X

zjk .

jk∈H

e e Under the assumptions of Theorem 2.1, we have m = O(n) and n = O(m). We also −1 −1 c1 c2 +c3 ε c4 +c5 ε e have that 8 ≤ A ≤ O(log n), so A = O(1). More generally, A m n = c2 +c4 e O(n ) if c1 , c2 , c3 , c4 , c5 are constants.

13

4.1

Expressing the desired quantity as an integral

In this subsection we express |B(s, t, H)| as a contour integral in (m + n)-dimensional complex space, then begin to estimate its value using the saddle-point method. Firstly, notice that |B(s, t, H)| is the coefficient of us11 · · · usmm w1t1 · · · wntn in the function Y (1 + uj wk ). jk∈H

By Cauchy’s coefficient theorem this equals Q I I 1 jk∈H (1 + uj wk ) ··· du1 · · · dum dw1 · · · dwn , |B(s, t, H)| = m+n s1 +1 (2πi) u1 · · · usmm +1 w1t1 +1 · · · wntn +1 where each integral is along a simple closed contour enclosing the origin anticlockwise. It will suffice to take each contour to be a circle; specifically, we will write uj = qj eiθj

and

wk = rk eiφk

for 1 ≤ j ≤ m and 1 ≤ k ≤ n. Also define λjk =

qj r k 1 + qj r k

for 1 ≤ j ≤ m and 1 ≤ k ≤ n. Then |B(s, t, H)| = P (s, t, H)I(s, t, H) where Q jk∈H (1 + qj rk ) P (s, t, H) = Q sj Q n tk , (2π)m+n m j=1 qj k=1 rk  Z π Z π Q i(θj +φk ) − 1) jk∈H 1 + λjk (e Pm Pn ··· dθdφ, I(s, t, H) = −π −π exp(i j=1 sj θj + i k=1 tk φk )

(2)

θ = (θ1 , . . . , θm ) and φ = (φ1 , . . . , φn ). We will choose the radii qj , rk so that there is no linear term in the logarithm of the integrand of I(s, t, H) when expanded for small θ, φ. This gives the equation X

λjk (θj + φk ) −

m X j=1

jk∈H

sj θ j −

n X

tk φk = 0.

k=1

For this to hold for all θ, φ, we require λj•|H = sj

(1 ≤ j ≤ m),

λ•k|H = tk

(1 ≤ k ≤ n).

(3)

The quantities λjk have an interesting interpretation. If edge uj vk is chosen with probability λjk independently for all j, k ∈ H, then the expected degrees are s, t. 14

In addition to the quantities defined before the statement of Theorem 2.2 we define for j = 1, . . . , m, k = 1, . . . , n, X X Jj = ηk , Kk = δj . j∈Yk

k∈Xj

4.2

Locating the saddle-point

In this subsection we solve (3) and derive some of the consequences of the solution. n Change variables to {aj }m j=1 , {bk }k=1 as follows:

qj = r

1 + aj , 1 − r2 aj

where

rk = r

r r=

1 + bk , 1 − r 2 bk

(4)

λ . 1−λ

Equation (3) is slightly underdetermined, which we will exploit to impose an additional condition. If {qj }, {rk } satisfy (3) and c > 0 is a constant, then {cqj }, {rk /c} also satisfy (3). From this we can see that, if there is a solution to (3) at all, there is one P Pn for which m j=1 (n − xj )aj < 0 and k=1 (m − yk )bk > 0, and also a solution for which Pn Pm j=1 (n − xj )aj > 0 and k=1 (m − yk )bk < 0. It follows from the Intermediate Value Theorem that there is a solution for which m X

(n − xj )aj =

j=1

n X

(m − yk )bk ,

(5)

k=1

so we will seek a common solution to (3) and (5). From (4) we find that where Zjk

λjk /λ = 1 + aj + bk + Zjk ,

(6)

aj bk (1 − r2 − r2 aj − r2 bk ) = , 1 + r 2 aj b k

(7)

and that equations (3) can be rewritten as X δj = (n − xj )aj + bk + Zj•|H λ k∈Xj X ηk = (m − yk )bk + aj + Z•k|H . λ j∈Y k

15

(8)

Summing (8) over all j, k, respectively, we find in both cases that that X=

m X

(n − xj )aj +

j=1

n X

(m − yk )bk + Z••|H .

(9)

k=1

Equations (5) and (9) together imply that m X

(n − xj )aj =

j=1

n X

(m − yk )bk = 21 (X − Z••|H ) .

k=1

Substituting back into (8), we obtain P Pn Zj•|H Z••|H δj aj xj y b X k∈Xj bk aj = − + − k=1 k k + − + , λn 2mn n n 2mn Pn Pmmn Z•k|H Z••|H η X b y j∈Yk aj j=1 xj aj + k k− + − + bk = k − λm 2mn m mn m m 2mn

(10)

for 1 ≤ j ≤ m, 1 ≤ k ≤ n. By the same argument as in [4], the equations (10) can be used to define a convergent iteration. Start with aj = bk = 0 for all j, k, and substitute these values into the right hand sides of (10) to obtain the next approximation to aj and bk . Four iterations give the following estimate of aj . The value of bk follows by symmetry, while Zjk follows from (7). δj δj xj δj x2j δj x3j (1 − 2λ)δj X δj2 X X aj = + + + − − + λn λn2 2mn λn3 λn4 4Amn2 4Amn3 2 2 xj X λ(7 − 10λ)X 3(1 − 2λ)δj xj X xj X (1 − 2λ)Y − − − 2 − 3 + 2 2 3 mn mn 16Am n 4Amn 4λAm2 n2 2 (1 − 2λ)δj C2 δj xj C2 δj C2 3XC2 XR2 + 2 2 + 2 2 3 + 2 3 − 3 2 − 2λAm n 4λA m n λAm n 8Am n 8Am2 n3 xj C1,1 C1,1 (1 − 2λ)δj C1,1 (1 − 2λ)R2 C2 xj R1,1 − − − − − 8λA2 m3 n3 λmn3 λm2 n 2λAm2 n2 λm2 n2 2 Y0,1 C1,2 Jj (1 − 2λ)δj Jj δj Jj x j Jj + − − 2 2 − 3 + 2 3 + λm n λm n λmn 2λAmn 2λAmn λmn2 2 2 x j Jj (1 − 2λ)δj xj Jj 3(1 − 2λ)XJj (1 − 2λ)Jj + − + 3 + 3 λmn λAmn 4Am2 n2 2λAm2 n2  R X Jj C 1 1 X 2 + 2 + δj 0 yk0 + ηk yk2 2 2 + 2 2 3 n m 2λAm n λm n 0 0 λm n k∈X (j ,k )



j

X 2 δj (1 − 2λ) X X X yk − ηk + δ 0η 0 2 2 2 2 m n k∈X 2λAm n k∈X 2λAm2 n2 0 0 j k j

(j ,k )

j

X  (1 − 2λ)δj xj  X 1 + + n δj 0 + ηk y k + m λm2 n2 2λAm2 n3 λm2 n3 0 0 k∈X 

j

16

(j ,k )

+

X X 1 1 X e −5/2 ), δj 0 xj 0 + ηk00 + O(n 3 2 2 λmn 0 0 λm n 0 0 00 (j ,k ) k ∈X

(j ,k )

where the notation

P

0

0

(j ,k )

j

means

P

0

k ∈Xj

P

0

j ∈Y

k

0

0

.

A sufficient approximation of λjk is given by substituting this estimate into (6). In evaluating the integral I(s, t, H), the following approximations will be required: δj2 (1 − 2λ)δj (1 − 2λ)ηk ηk2 + − 2− 2 λjk (1 − λjk ) = λ(1 − λ) + n m n m (1 − 12A)δj ηk (1 − 2λ)δj xj (1 − 2λ)ηk yk + + + 2Amn n2 m2 (1 − 2λ)(Jj + Kk − λX) e −3/2 + + O(n ), mn (1 − 12A)δj λjk (1 − λjk )(1 − 2λjk ) = λ(1 − λ)(1 − 2λ) + n (1 − 12A)ηk e −1 ), + O(n + m 2 e −1/2 ). λjk (1 − λjk )(1 − 6λjk + 6λjk ) = λ(1 − λ)(1 − 12A) + O(n

(11)

(12)

(13)

We now estimate the factor P (s, t, H). If Y λ Λ= λjkjk (1 − λjk )1−λjk , jk∈H

then Λ−1 =

Y

(1 + qj rk )

m Y j=1

jk∈H

−sj

qj

n Y

rk−tk

k=1

using (3). Therefore, the factor P (s, t, H) defined in (2) is given by P (s, t, H) = (2π)−(m+n) Λ−1 . Writing λjk = λ(1 + zjk ), we have  log

λ

λjkjk (1 − λjk )1−λjk λλ (1 − λ)1−λ





λ = λzjk log 1−λ



  5 zjk λ(1 − 2λ) 3 λ(1 − 3λ + 3λ2 ) 4 λ 2 + z − z + zjk + O . 2(1 − λ) jk 6(1 − λ)2 jk 12(1 − λ)3 (1 − λ)4

(14)

We know from (3) that λ••|H = λmn, which implies that z••|H = X, hence the first term on the right side of (14) contributes λλX (1 − λ)−λX to Λ. Now using (6) we can write 17

zjk = aj + bk + Zjk and apply the above estimates to obtain mn Λ = λλ (1 − λ)1−λ (1 − λ)−X  R2 λ2 X 2 C3  C2 R2 C2 (1 − 2λ)  R3 × exp − + 2 (15) + + − 4An 4Am 8A2 m2 n2 4Amn 24A2 n2 m  C2,1 R2,1 C4  Y (1 − 6A)  R4 −1/2 e + 3 + + + O(n ) . + + 2Amn 4An2 4Am2 96A3 n3 m As in [4], our answer will be simpler when written in terms of binomial coefficients. Using an accurate approximation of the binomial coefficients (such as [4, Equation 18]), we obtain that  −1 Y  n   m  mn−X n−xj Y m−yk (λλ (1 − λ)1−λ )−mn (1 − λ)X = λmn sj tk (4πA)(m+n−1)/2 m(n−1)/2 n(m−1)/2 j=1 k=1  R2 C2 1 − 2A  m n  1 − 4A  R2 C2  × exp − − − + + + 2 4An 4Am 24A n m 16A2 n2 m (16)     1 − 6A R4 1 − 2λ R3 C3 C4 + 2 − + 3 + 24A2 n2 m 96A3 n3 m  2 C R λ X(m + n + X) 2,1 2,1 −1/2 e + − + O(n ) . − 4Amn 4An2 4Am2 Putting (15) and (16) together, we find that P (s, t, H) =

4.3

  n  n−xj Y m−yk tk sj 2π (m+n+1)/2 j=1 k=1  1 − 2A  m C2  1 − 4A  R2 n R2 C2 − × exp + + − 24A n m 8A2 m2 n2 16A2 n2 m2  λ2 X  1 Y 1 −1/2 e − + + O(n ) . − 4A m n 2Amn

A(m+n−1)/2 m(n−1)/2 n(m−1)/2



mn−X λmn

−1 Y m 

(17)

Evaluating the integral

Our next task is to evaluate the integral I(s, t, H) given in (2). Let C be the ring of real numbers modulo 2π, which we can interpret as points on a circle, and let z be the canonical mapping from C to the real interval (−π, π]. An open half-circle is Ct = (t − π/2, t + π/2) ⊆ C for some t. Now define bN = { v = (v1 , . . . , vN ) ∈ C N | v1 , . . . , vN ∈ Ct for some t ∈ R }. C

18

If v = (v1 , . . . , vN ) ∈ C0N then define ¯=z v

−1



 N 1 X z(vj ) . N j=1

¯ = t + (v1 − t, . . . , vN − t). The function v → v ¯ More generally, if v ∈ CtN then define v N b . is well-defined and continuous for v ∈ C bm × C bn such that Let R denote the set of vector pairs (θ, φ) ∈ C ¯ + φ| ¯ ≤ (mn)−1/2+2ε , |θ |θˆj | ≤ n−1/2+ε (1 ≤ j ≤ m), |φˆk | ≤ m−1/2+ε (1 ≤ k ≤ n),

(18)

¯ and φˆk = φk − φ. ¯ In this definition, values are considered in C. The where θˆj = θj − θ constant ε is the sufficiently-small value required by Theorem 2.1. Let IR00 (s, t, H) denote the integral I(s, t, H) restricted to any region R00 . We next estimate IR0 (s, t, H) in a certain region R0 ⊇ R. In Subsection 4.7 we will show that the remaining parts of I(s, t, H) are negligible. We begin by analysing the integrand in R, but for future use when we expand the region to R0 (to be defined in (26)), note that all the approximations we establish for the integrand in R also hold in the superset of R0 defined by ¯ + φ| ¯ ≤ 3(mn)−1/2+2ε , |θ |θˆj | ≤ 3n−1/2+ε (1 ≤ j ≤ m − 1), |θˆm | ≤ 2n−1/2+3ε ,

(19)

|φˆk | ≤ 3m−1/2+ε (1 ≤ k ≤ n − 1), |φˆn | ≤ 2m−1/2+3ε . ˆ = (θˆ1 , . . . , θˆm−1 ) and φ ˆ = (φˆ1 , . . . , φˆn−1 ). Let T1 be the transformation Define θ ˆ φ, ˆ ν, ψ) = (θ, φ) defined by T1 (θ, ¯ + φ, ¯ ν=θ

¯ − φ, ¯ ψ=θ

¯ (1 ≤ j ≤ m − 1) and φˆk = φk − φ ¯ (1 ≤ k ≤ n − 1). We also together with θˆj = θj − θ define the 1-many transformation T1∗ by [ ˆ φ, ˆ ν) = ˆ φ, ˆ ν, ψ). T1∗ (θ, T1 (θ, ψ

After applying the transformation T1 to IR (s, t, H), the new integrand is easily seen to be independent of ψ, so we can multiply by the range of ψ and remove it as an independent 19

variable. Therefore, we can continue with an (m + n − 1)-dimensional integral over S such that R = T1∗ (S). More generally, if S 00 ⊆ (− 12 π, 12 π)m+n−2 × (−2π, 2π] and R00 = T1∗ (S 00 ), we have Z ˆ φ, ˆ ν) dθd ˆ φdν, ˆ IR00 (s, t, H) = 2πmn G(θ, (20) S

00

 ˆ φ, ˆ ν) = F T1 (θ, ˆ φ, ˆ ν, 0) with F (θ, φ) being the integrand of I(s, t, H). The where G(θ, factor 2πmn combines the range of ψ, which is 4π, and the Jacobian of T1 , which is mn/2. Note that S is defined by the same inequalities (18) as define R. The first inequality is now |ν| ≤ (mn)−1/2+2ε and the bounds on θˆm = −

m−1 X

θˆj and φˆm = −

j=1

n−1 X

φˆk

k=1

still apply even though these are no longer variables of integration. In the region S, the integrand of (20) can be expanded as  X m X m X n n X ˆ φ, ˆ ν) = exp − G(θ, (A + αjk )(ν + θˆj + φˆk )2 − i (A3 + βjk )(ν + θˆj + φˆk )3 +

j=1 k=1 m n XX

j=1 k=1

(A4 + γjk )(ν + θˆj + φˆk ) + 4

j=1 k=1

X

e A(ν + θˆj + φˆk ) + O(n 2

−1/2

 ) .

jk∈H

Here αjk , βjk , and γjk are defined by 1 λ (1 − λ ) jk 2 jk 1 λ (1 − λ )(1 − 2λ ) jk jk 6 jk 1 λ (1 − λ )(1 − 6λ jk jk 24 jk

= A + αjk , = A3 + βjk ,

(21)

+ 6λ2jk ) = A4 + γjk ,

where 1 λ(1 − λ)(1 − 6λ + 6λ2 ). A3 = 61 λ(1 − λ)(1 − 2λ), and A4 = 24 Approximations for αjk , βjk , γjk were given in (11)–(13). Note that αjk in this paper is e −1/2 ) uniformly slightly different from in [4], but it is still true that αjk , βjk , γjk = O(n over j, k.

20

4.4

Another change of variables

ˆ φ, ˆ ν) = T2 (ζ, ξ, ν), where ζ = (ζ1 , . . . , ζm−1 ) We now make a second change of variables (θ, and ξ = (ξ1 , . . . , ξn−1 ), whose purpose is to almost diagonalize the quadratic part of G. The transformation T2 is defined as follows. For 1 ≤ j ≤ m − 1 and 1 ≤ k ≤ n − 1 let θˆj = ζj + cπ1 , where c=−

1 m+m

1/2

φˆk = ξk + dρ1 ,

and d = −

1 n + n1/2

and, for 1 ≤ h ≤ 4, πh =

m−1 X

ζjh ,

ρh =

j=1

n−1 X

ξkh .

k=1

The Jacobian of the transformation is (mn)−1/2 . By summing the equations θˆj = ζj + cπ1 and φˆk = ξk + dρ1 , we find that π1 = m1/2

m−1 X

θˆj ,

|π1 | ≤ m1/2 n−1/2+ε ,

φˆk ,

1/2

j=1

ρ1 = n

1/2

(22)

n−1 X

|ρ1 | ≤ n

m

−1/2+ε

,

k=1

where the inequalities come from the bounds on θˆm and φˆn . This implies that e −1 ) (1 ≤ j ≤ m − 1), ζj = θˆj + O(n e −1 ) (1 ≤ k ≤ n − 1). ξk = φˆk + O(n The transformed region of integration is T2−1 (S), but for convenience we will expand it a little to be the region defined by the inequalities |ζj | ≤ 32 n−1/2+ε

(1 ≤ j ≤ m − 1),

|ξk | ≤ 23 m−1/2+ε

(1 ≤ k ≤ n − 1),

|π1 | ≤ m |ρ1 | ≤ n

1/2 −1/2+ε

,

−1/2+ε

,

−1/2+2ε

.

1/2

n

m

|ν| ≤ (mn)

(23)

We now consider the new integrand E1 = exp(L1 ) = G ◦ T2 . The semiregular parts of L1 (those not involving αjk , βjk , γjk or H) are diagonalised. To see the effect of the transformation on the irregular parts of L1 , write ζm = θˆm − cπ1 and ξn = θˆn − dρ1 . 21

e −1/2 ) and ξn = O(n e −1/2 ). Thus we have, for all From (22) we can see that ζm = O(n e −1/2 ) and cπ1 + dρ1 + ν = O(n e −1 ). Recall also 1 ≤ j ≤ m and 1 ≤ k ≤ n, ζj + ξk = O(n e −1/2 ). Using these bounds we find that, after transformation, that αjk , βjk , γjk = O(n L1 = −Amnν 2 − Anπ2 − Amρ2 − 3iA3 nνπ2 − 3iA3 mνρ2 + 6A4 π2 ρ2 − iA3 nπ3 − iA3 nρ3 − 3iA3 cnπ1 π2 − 3iA3 dmρ1 ρ2 + A4 nπ4 + A4 mρ4 −

m−1 n−1 XX

 αjk (ζj + ξk )2 + 2(ζj + ξk )(ν + cπ1 + dρ1 )

(24)

j=1 k=1

−i

m−1 n−1 XX

βjk (ζj + ξk )3 + A

j=1 k=1

4.5

X

e −1/2 ). (ζj + ξk )2 + O(n

jk∈H

Completing the diagonalization

The quadratic form in L1 is the following function of the m + n − 1 variables ζ, ξ, ν: X Q = −Amnν 2 − Anπ2 − Amρ2 + A (ζj + ξk )2 jk∈H



m−1 n−1 XX

(25) αjk

2

 (ζj + ξk ) + 2(ζj + ξk )(ν + cπ1 + dρ1 ) .

j=1 k=1

We will make a third change of variables, (ζ, ξ, ν) = T3 (σ, τ , µ), that diagonalizes this quadratic form, where σ = (σ1 , . . . , σm−1 ) and τ = (τ1 , . . . , τn−1 ). This is achieved using a slight extension of [15, Lemma 3.2]. Lemma 4.1. Let U and Y be square matrices of the same order, such that U −1 exists and all the eigenvalues of U −1 Y are less than 1 in absolute value. Then (I + Y U −1 )−1/2 (U + Y ) (I + U −1 Y )−1/2 = U , where the fractional powers are defined by the binomial expansion. Note that U −1 Y and Y U −1 have the same eigenvalues, so the eigenvalue condition on U −1 Y applies equally to Y U −1 . If we also have that both U and Y are symmetric, then (I + Y U −1 )−1/2 is the transpose of (I + U −1 Y )−1/2 , as proved in [4]. Let V be the symmetric matrix associated with the quadratic form Q. Write V = Vd + Vnd where Vd has all off-diagonal entries equal to zero and matches V on the diagonal entries, and Vnd has all diagonal entries zero and matches V on the off-diagonal entries. We will apply Lemma 4.1 with U = Vd and Y = Vnd . Note that Vd is invertible and that both Vd and 22

Vnd are symmetric. Let T3 be the transformation given by T3 (σ, τ , µ)T = (ζ, ξ, ν)T = (I +Vd−1 Vnd )−1/2 (σ, τ , µ)T . If the eigenvalue condition of Lemma 4.1 is satisfied then this transformation diagonalizes the quadratic form Q, keeping the diagonal entries unchanged. e −3/2 ), except for the column correspondAll the off-diagonal entries of Vd−1 Vnd are O(n e −1/2 ), and the entries corresponding to ing to ν, which has off-diagonal entries of size O(n e −1 ). Similarly, the off-diagonal entries of Vnd Vd−1 ζj ξk for hjk = 1, which have size O(n e −3/2 ), except for the row corresponding to ν, which has off-diagonal entries of are all O(n e −1/2 ), and the entries corresponding to ζj ξk for hjk = 1, which have size O(n e −1 ). size O(n To see that these conditions imply that the eigenvalues of Vd−1 Vnd are less than one, recall that the value of any matrix norm is greater than or equal to the greatest absolute value of an eigenvalue. The ∞-norm (maximum row sum of absolute values) of Vd−1 Vnd e −1/2 ), so the eigenvalues are all O(n e −1/2 ). is O(n Arguing as in [4], the Jacobian of T3 is  e −1/2 ). det (I + Vd−1 Vnd )−1/2 = 1 + O(n To derive T3 explicitly, we can expand (I +Vd−1 Vnd )−1/2 while noting that αj∗ = O(n1/2+ε ) for all j, α∗k = O(m1/2+ε ) for all k, α∗∗ = O(mn2ε + nm2ε ), R ≤ mn1+2ε and C ≤ nm1+2ε . This gives σj = ζj +

m−1 X 0

 c(αj∗ + αj 0 ∗ ) −2 e + O(n ) ζj 0 2An

j =1 n−1  X αjk + dαj∗ + cα∗k

+

τk = ξk +

k=1 m−1 X j=1

+

 αjk + dαj∗ + cα∗k e −2 ) ζj + O(n 2Am

n−1  X d(α∗k + α

∗k

0

µ=ν+

2An

k =1 m−1 X j=1

   e −2 ) ξk + αj∗ + O(n e −1 ) ν + O(n e −3/2 ), + O(n 2An

2Am

0

)

   e −2 ) ξ 0 + α∗k + O(n e −1 ) ν + O(n e −3/2 ), + O(n k 2Am

n−1    X αj∗ α∗k −2 e e −2 ) ξk + O(n e −1 )ν, + O(n ) ζj + + O(n 2Amn 2Amn k=1

for 1 ≤ j ≤ m − 1, 1 ≤ k ≤ n − 1. The transformation T3−1 perturbs the region of integration in an irregular fashion that e −1 ) for we must bound. From the explicit form of T3 above, we have σj = ζj + O(n  e −1 ) for 1 ≤ k ≤ n − 1, and µ = ν + o (mn)−1/2+2ε . 1 ≤ j ≤ m − 1, τk = ξk + O(n Thus σ, τ and µ are only slightly different from ζ, ξ and ν. This shows that the bound |ν| ≤ (mn)−1/2+2ε is adequately covered by |µ| ≤ 2(mn)−1/2+2ε . 23

For 1 ≤ h ≤ 4, define µh =

m−1 X

h

σj ,

νh =

j=1

n−1 X

τk h .

k=1

From (23), we see that |π1 | ≤ m1/2 n−1/2+ε and |ρ1 | ≤ m−1/2+ε n1/2 are the remaining constraints that define the region of integration. Applying these constraints we obtain µ1 = π1 + o(m1/2 n−1/2+5ε/2 ). Since our region of integration has |π1 | ≤ m1/2 n−1/2+ε , this implies the bound |µ1 | ≤ m1/2 n−1/2+3ε . By a parallel argument, we have ν1 = ρ1 +o(m−1/2+5ε/2 n1/2 ), which implies |ν1 | ≤ n1/2 m−1/2+3ε . Putting together all the bounds we have derived, we see that T3−1 (T2−1 (S)) ⊆ Q ∩ M, where Q = { |σj | ≤ 2n−1/2+ε , j = 1, . . . , m − 1 } ∩ { |τk | ≤ 2m−1/2+ε , k = 1, . . . , n − 1 } ∩ {|µ| ≤ 2(mn)−1/2+2ε }, M = { |µ1 | ≤ m1/2 n−1/2+3ε } ∩ { |ν1 | ≤ n1/2 m−1/2+3ε }. Now define S 0 = T2 (T3 (Q ∩ M)), R0 = T1∗ (S 0 ).

(26)

We have proved that S 0 ⊇ S. Also notice that R0 is contained in the region defined by the inequalities (19). As we forecast at that time, our estimates of the integrand have been valid inside this expanded region. It remains to apply the transformation T3−1 to the integrand (24) so that we have it in terms of (σ, τ , µ). The explicit form of T3−1 is similar to the explicit form for T3 , namely: m−1 X

n−1    X c(αj∗ + αj 0 ∗ ) αjk + dαj∗ + cα∗k e −2 ) σ 0 − e −2 ) τk + O(n + O(n j 2An 2An 0 k=1 j =1  α j∗ e −1 ) µ + O(n e −3/2 ), + O(n − 2An m−1 n−1   X  αjk − dαj∗ + cα∗k X d(α∗k + α∗k0 ) e −2  −2 e ξk = τk − + O(n ) σj − + O(n ) τk0 2Am 2Am 0 j=1 k =1   α ∗k −1 −3/2 e e − + O(n ) µ + O(n ), 2Am m−1 n−1    X  αj∗ X α∗k −2 e e −2 ) τk + O(n e −1 )µ, ν =µ− + O(n ) σj − + O(n 2Amn 2Amn j=1 k=1

ζj = σj −

24

for 1 ≤ j ≤ m−1, 1 ≤ k ≤ n−1. In addition to the relationships between the old and new e −1/2 ), ρ2 = ν2 + O(n e −1/2 ), variables that we proved before, we can note that π2 = µ2 + O(n e −1 ), ρ3 = ν3 + O(n e −1 ), π4 = µ4 + O(n e −3/2 ), and ρ4 = ν4 + O(n e −3/2 ). π3 = µ3 + O(n Define x0j = xj − hjn for 1 ≤ j ≤ m − 1, and yk0 = yk − hmk for 1 ≤ k ≤ n − 1. The quadratic part of L1 , which we called Q in (25), loses its off-diagonal parts according to our design of T3 . Thus, what remains is 2

−Amnµ − Anµ2 − Amν2 −

m−1 X

(αj∗ −

Ax0j )σj2



j=1

n−1 X

e −1/2 ). (α∗k − Ayk0 )τk2 + O(n

k=1

In order to transform the cubic terms of L1 , we calculate the following in Q ∩ M:  m−1  n−1 X 3iA3 µ2 X e −1/2 ), αj∗ σj + α∗k τk + O(n −3iA3 nνπ2 = −3iA3 nµµ2 + 2Am j=1 k=1  m−1 3iA3 X −iA3 nπ3 = −iA3 nµ3 + c(αj∗ + αj 0 ∗ )σj2 σj 0 , 2A 0 j,j =1

+

m−1 n−1 XX

(αjk + dαj∗ +

cα∗k )σj2 τk



e −1/2 ), + O(n

j=1 k=1

−3iA3 cnπ1 π2 = −3iA3 cnµ1 µ2 + −i

m−1 n−1 XX

3

βjk (ζj + ξk ) = −i

m−1 3iA3 c2 mµ2 X e −1/2 ), αj∗ σj + O(n 2A j=1

m−1 n−1 XX

e −1/2 ). βjk (σj + τk )3 + O(n

j=1 k=1

j=1 k=1

The remaining cubic terms are each parallel to one of those. Finally, the quartic part of L1 transforms to e −1/2 ). 6A4 µ2 ν2 + A4 nµ4 + A4 mν4 + O(n  e −1/2 ) , In summary, the value of the integrand for (σ, τ , µ) ∈ Q ∩ M is exp L2 + O(n where L2 = −Amnµ2 − Anµ2 − Amν2 −

m−1 X

(αj∗ − Ax0j )σj2 −

j=1

n−1 X

(α∗k − Ayk0 )τk2 + 6A4 µ2 ν2

k=1

+ A4 nµ4 + A4 mν4 − iA3 nµ3 − iA3 mν3 − 3iA3 cnµ1 µ2 − 3iA3 dmν1 ν2 − 3iA3 nµµ2 − 3iA3 mµν2 − i

m−1 X

βj∗ σj3

j=1

+i

m−1 X 0

j,j =1

gjj 0 σj σj20 + i

n−1 X

−i

n−1 X

β∗k τk3

k=1

hkk0 τk τk20 + i

0

m−1 n−1 XX j=1 k=1

k,k =1

25

 ujk σj τk2 + vjk σj2 τk ,

with  3A3 (1 + cm + c2 m2 )αj∗ + cmαj 0 ∗ = O(n−1/2+ε ), 2Am  3A3 (1 + dn + d2 n2 )α∗k + dnα∗k0 = O(m−1/2+ε ), = 2An  3A3 nαjk + (1 + dn)αj∗ + cnα∗k − 3βjk = O(m−1/2+2ε + n−1/2+2ε ), = 2An  3A3 mαjk + (1 + cm)α∗k + dmαj∗ − 3βjk = O(m−1/2+2ε + n−1/2+2ε ). = 2Am

gjj 0 = hkk0 ujk vjk

Note that the O(·) estimates in the last four lines are uniform over j, j 0 , k, k 0 .

4.6

Estimating the main part of the integral

Define E2 = exp(L2 ). We have shown that the value of the integrand in Q ∩ M is  e −1/2 ) . Denote the complement of the region M by Mc . We can E1 = E2 1 + O(n approximate our integral as follows: Z Z Z −1/2 e E1 = E2 + O(n ) |E2 | Q∩M Q∩M Q∩M Z Z −1/2 e = E2 + O(n ) |E2 | Q∩M Q Z Z Z −1/2 e = E2 + O(1) |E2 | + O(n ) |E2 |. (27) Q

c

Q∩M

Q

It suffices to estimate the value of each integral in (27). This can be done using the same calculation as in Section 4.3 of [4], using α ˆ jk = αjk − Ahjk in place of the variable αjk used in that paper. A potential problem with this analogy is that the variable αjk used in [4] e −1/2 ), whereas it is not true that α e −1/2 ). However, has the property αjk = O(n ˆ jk = O(n a careful look at Section 4.3 of [4] confirms that only the properties α ˆ j∗ = αj∗ − Ax0j = e 1/2 ), α e 1/2 ), and the bounds on g 0 , h 0 , ujk , vjk , are required. O(n ˆ ∗k = α∗k − Ayk0 = O(n jj kk The result is that Z  π 1/2  π (m−1)/2  π (n−1)/2 E2 = Amn An Am Q  2 9A 3A4  m n  3A4 15A23  × exp − 33 + + + − n m 4A2 16A3 8A 2A2 n−1  1 1  1 X + α ˆ ∗∗ + (ˆ α∗k )2 − 2 2 2Am 2An 4A m k=1  m−1 1 X 2 −b e (ˆ αj∗ ) + O(n ) . + 4A2 n2 j=1 26

(28)

Using (11), we calculate that α ˆ ∗∗ m−1 X

1  R2 C2  1 2 e 1/2 ), =− + − 2 λ X + O(n 2 n m

e 3/2 ), (ˆ αj∗ )2 = 41 (1 − 2λ)2 R2 + O(n

j=1 n−1 X

e 3/2 ). (ˆ α∗k )2 = 14 (1 − 2λ)2 C2 + O(n

k=1

Substituting these values into (28) together with the actual values of A, A3 , A4 , we conclude that Z  π 1/2  π (m−1)/2  π (n−1)/2 E2 = Amn An Am Q   C2  1 1 − 2A m n  1 − 4A  R2 + × exp − − + + (29) 2 24A n m 16A2 n2 m2  R2 + C2 λ2 X  1 1 −b + + + + O(n ) . 4Amn 4A m n R By the same argument as in [4], the other two terms in (27) have value O(n−b ) Q E2 . Multiplying (29) by the Jacobians of the transformations T2 and T3 , we have proved the following. Lemma 4.2. The region S 0 given by (26) contains S and Z  π 1/2  π (m−1)/2  π (n−1)/2 −1/2 ˆ ˆ ˆ ˆ G(θ, φ, ν) dθdφdν = (mn) 0 Amn An Am S    1 1 − 2A m n 1 − 4A  R2 C2  × exp − − + + + 2 24A n m 16A2 n2 m2  R + C2 λ2 X  1 1 + 2 + + + O(n−b ) . 4Amn 4A m n

4.7

Bounding the remainder of the integral

In the previous subsections, we estimated the value of the integral IR0 (s, t, H), which is the same as I(s, t, H) except that it is restricted to a certain region R0 ⊇ R. In this subsection, we extend this to an estimate of I(s, t, H) by showing that the remainder of the region of integration contributes negligibly. For 1 ≤ j ≤ m, 1 ≤ k ≤ n, let Ajk = A + αjk = 21 λjk (1 − λjk ) (recall (21)), and e −1/2 ). We begin with a technical lemma whose proof define Amin = minjk Ajk = A + O(n is omitted. 27

Lemma 4.3. |F (θ, φ)| =

Y

fjk (θj + φk ),

jk∈H

where fjk (z) =

p

1 − 4Ajk (1 − cos z) .

Moreover, for all real z, 1 A z 4 . 0 ≤ fjk (z) ≤ exp −Ajk z 2 + 12 jk

Lemma 4.4. Let F (θ, φ) be the integrand of I(s, t, H). Then Z Z −1 |F (θ, φ)| dθdφ = O(n ) F (θ, φ) dθdφ, 0

c

R

R

where Rc denotes the complement of R. R Proof. Our approach is to bound |F (θ, φ)| over a variety of regions whose union covers R Rc . To make the comparison of these bounds with R0 F (θ, φ) easier, we note that Z  F (θ, φ) dθdφ = exp A−1 O(mε + nε ) I0 , (30) 0

R

where

m  n  π 1/2 Y π 1/2 Y π 1/2 I0 = . A•• Aj• A•k j=1 k=1

Let κ = π/300 and define the region  X = (θ, φ) |θj |, |φk | ≤ 15κ for 1 ≤ j ≤ m, 1 ≤ k ≤ n . Using arguments as in [4, Section 5] we can prove that Z Z  |F (θ, φ| dθdφ ≤ 300 |F (θ, φ| dθdφ + O e−c1 Am + e−c1 An I0 c

R

(31)

X −R

for some c1 > 0. It remains to bound the integral over X − R. bm+n (which includes X ) that By Lemma 4.3, we have for (θ, φ) ∈ C m X n   X X 1 Ajk (θˆj + φˆk + ν)2 + 12 Ajk (θˆj + φˆk + ν)4 , |F (θ, φ)| ≤ exp − j=1 k=1

jk∈H

¯ φˆk = φk − φ ¯ and ν = θ ¯ + φ. ¯ As before, the integrand is independent where θˆj = θj − θ, ¯−φ ¯ and our notation will tend to ignore ψ for that reason; for our bounds it of ψ = θ will suffice to remember that ψ has a bounded range. 28

We proceed by exactly diagonalizing the (m+n+1)-dimensional quadratic form. Since Pn ˆ Pm ˆ θ = j j=1 k=1 φk = 0, we have X

Ajk (θˆj + φˆk + ν)2 =

m X

Aj•|H θˆj2 +

j=1

jk∈H

+2

n X

A•k|H φˆ2k + A••|H ν 2

k=1 m X n X

(αjk − Ajk hjk )θˆj φˆk

j=1 k=1 m X

(αj• − Aj•|H )θˆj + 2ν

+ 2ν

j=1

n X

(α•k − A•k|H )φˆk .

k=1

e −1/2 ), Aj•|H = O(1), e e This is almost diagonal, because αjk = O(n A•k|H = O(1). The e coefficients −2Ajk hjk can be larger but only in the O(n) places where hjk = 1. We can make the quadratic form exactly diagonal using the slight additional transformation (I + U −1 Y )−1/2 described by Lemma 4.1, where U is a diagonal matrix with diagonal entries Aj•|H , A•k|H and A••|H . The matrix Y has zero diagonal and other entries of e −1/2 ) apart from the row and column indexed by ν, which have entries magnitude O(n e 1/2 ), and the O(n) e e of magnitude O(n just-mentioned entries of order O(1). By the same −1 e −1/2 ), argument as used in Subsection 4.5, all eigenvalues of U Y have magnitude O(n so the transformation is well-defined. The new variables {ϑˆj }, {ϕˆk } and ν˙ are related to the old by (θˆ1 , . . . , θˆm , φˆ1 , . . . , φˆn , ν)T = (I + U −1 Y )−1/2 (ϑˆ1 , . . . , ϑˆm , ϕˆ1 , . . . , ϕˆn , ν) ˙ T. We will keep the variable ψ as a variable of integration but, as noted before, our notation will generally ignore it. e −3/2 ), we have uniformly over More explicitly, for some d1 , . . . , dm , d01 , . . . , d0n = O(n j = 1, . . . , m, k = 1, . . . , n that θˆj = ϑˆj + φˆk = ϕˆk + ν = ν˙ +

m X q=1 m X

e −2 )ϑˆq + O(n

n X

e −3/2 + n−1 hjk )ϕˆk + O(n e −1/2 )ν, O(n ˙

k=1

e −3/2 + n−1 hjk )ϑˆj + O(n

e −2 )ϕˆq + O(n e −1/2 )ν, O(n ˙

(32)

q=1

j=1 m X

n X

j=1

k=1

dj ϑˆj +

n X

e −1 )ν. d0k ϕˆk + O(n ˙

e in (32) represent values that depend on m, n, s, t but not Note that the expressions O(·) on {ϑˆj }, {ϕˆk }, ν. ˙ 29

The region of integration X is (m+n)-dimensional. In place of the variables (θ, φ) ˆ φ, ˆ ν, ψ) by applying the identities θˆm = − Pm−1 θˆj and φˆn = − Pn−1 φˆk . we can use (θ, j=1 k=1 ˆ and φ ˆ don’t include θˆm and φˆn .) The additional transformation (32) (Recall that θ maps the two just-mentioned identities into identities that define ϑˆm and ϕˆn in terms of ˆ ϕ, ˆ = (ϑˆ1 , . . . , ϑˆm−1 ) and ϕ ˆ ν), ˆ = (ϕˆ1 , . . . , ϕˆn−1 ). These have the form (ϑ, ˙ where ϑ ϑˆm = −

m−1 X

n−1 X  e −1 ) ϑˆj + e −1/2 )ϕˆk + O(n e 1/2 )ν, 1 + O(n O(n ˙

j=1

ϕˆn =

m−1 X j=1

e −1/2 )ϑˆj − O(n

k=1 n−1 X

(33)

 e −1 ) ϕˆk + O(n e 1/2 )ν. 1 + O(n ˙

k=1

ˆ ϕ, ˆ ν, Therefore, we can now integrate over (ϑ, ˙ ψ). The Jacobian of the transformation ˆ φ, ˆ ν, ψ) is mn/2. from (θ, φ) to (θ, ˆ ϕ, ˆ φ, ˆ ν) defined by (32). The matrix ˆ ν) Next consider the transformation T4 (ϑ, ˙ = (θ, of partial derivatives can be obtained by substituting (33) into (32). Without loss of e generality, we can suppose that xm , yn = O(1). Recall that the Frobenius norm of a matrix is the square root of the sum of squares of absolute values of the entries. After multiplying by n1/2 the row indexed by ν and dividing by n1/2 the column indexed by ν˙ (these two operations together not changing the determinant), the Frobenius norm of the e −1/2 ). Since the Frobenius norm bounds the eigenvalues, we can argue as matrix is O(n e −1/2 ). in [4, Section 5] to find that the Jacobian of this transformation is 1 + O(n e −1/2 ) The transformation T4 changes the region of integration only by a factor 1 + O(n in each direction, since the inverse of (32) has exactly the same form except that the e −3/2 ), may be different. Therefore, the constants {dj }, {d0k }, while still of magnitude O(n image of region X lies inside the region Y=



ˆ ϕ, ˆ ν) (ϑ, ˙ |ϑˆj |, |ϕˆk | ≤ 31κ (1 ≤ j ≤ m, 1 ≤ k ≤ n), |ν| ˙ ≤ 31κ .

We next bound the value of the integrand in Y. By repeated application of the inequality xy ≤ 12 x2 + 12 y 2 , we find that 1 12

m X n X

Ajk (θˆj + φˆk + ν)4 ≤ 23 10

m X

j=1 k=1

j=1

30

Aj• ϑˆ4j +

n X k=1

A•k ϕˆ4k

+ A•• ν˙

4



.

Now define h(z) = −z 2 +

23 4 z . 10

ˆ ϕ, ˆ ν) Then, for (ϑ, ˙ ∈ Y,

m n X  X ˆ |F (θ, φ)| ≤ exp Aj•|H h(ϑj ) + A•k|H h(ϕˆk ) + A••|H h(ν) ˙ j=1

k=1

 m−1 X

≤ exp

Aj•|H h(ϑˆj ) +

n−1 X

j=1

 A•k|H h(ϕˆk ) + A••|H h(ν) ˙

(34)

k=1

 = exp A••|H h(ν) ˙

m−1 Y

Y  n−1  ˆ exp Aj•|H h(ϑj ) exp A•k|H h(ϕˆk ) ,

j=1

k=1

where the second line holds because h(z) ≤ 0 for |z| ≤ 31κ. Define  ˆ ϕ, ˆ ν) W0 = (ϑ, ˙ ∈ Y |ϑˆj | ≤ 12 n−1/2+ε

(1 ≤ j ≤ m−1),

|ϕˆk | ≤ 21 m−1/2+ε

(1 ≤ k ≤ n−1), |ν| ˙ ≤ 12 (mn)−1/2+2ε ,

W1 = Y − W0 , n−1 m−1 o n X X ˆ ˆ ϕ, ˆ ν) d0k ϕˆk ≤ n−5/4 . dj ϑj + W2 = (ϑ, ˙ ∈Y j=1

k=1

Also define similar regions W00 , W10 , W20 by omitting the variables ϑˆ1 , ϕˆ1 instead of ϑˆm , ϕˆn starting at (34). (Note that without loss of generality we can also assume that x1 , y1 = e O(1).) Using (32), we see that T4 , and the corresponding transformation that omits ϑˆ1 and ϕˆ1 , map R to a superset of W0 ∩ W2 ∩ W00 ∩ W20 . Therefore, X − R is mapped to a subset of W1 ∪ (W0 − W2 ) ∪ W10 ∪ (W00 − W20 ). Arguments as in [4, Section 5] show that Z 2ε 2ε  |F (θ, φ)| dθdφ = O e−c2 Am + e−c2 An I0 0

0

0

W1 ∪(W0 −W2 )∪W1 ∪(W0 −W2 )

for some c2 > 0. Adding the bounds from (31) and (35) we conclude that Z 2ε 2ε  |F (θ, φ)| dθdφ = O e−c3 Am + e−c3 An I0 c

R

for some c3 > 0, which implies Lemma 4.4 by (30).

31

(35)

References [1] A. Barvinok, On the number of matrices and a random matrix with prescribed row and column sums and 0-1 entries, preprint (2008), available at http://arxiv.org/abs/0806.1480. [2] E. A. Bender, The asymptotic number of non-negative integer matrices with given row and column sums, Discrete Math., 10 (1974) 217–223. [3] B. Bollob´as and B. D. McKay, The number of matchings in random regular graphs and bipartite graphs, J. Combin. Theory Ser. B, 41 (1986) 80–91. [4] E. R. Canfield, C. Greenhill and B. D. McKay, Asymptotic enumeration of dense 0-1 matrices with specified line sums, J. Combin. Theory Ser. A, 115 (2008) 32–66. [5] E. R. Canfield and B. D. McKay, Asymptotic enumeration of dense 0-1 matrices with equal row sums and equal column sums, Electron. J. Combin. 12 (2005), #R29. [6] C. Cooper, A. Frieze and M. Molloy, Hamilton cycles in random regular digraphs, Combin. Probab. Comput. 3 (1994) 39–49. [7] Z. Gao, B. D. McKay and X. Wang, Asymptotic enumeration of tournaments with a given score sequence containing a specified digraph, Random Structures and Algorithms, 16 (2000) 47–57. [8] C. Greenhill and B. D. McKay, Random dense bipartite graphs and directed graphs with specified degrees, preprint (2008), available at http://arxiv.org/abs/math/0701600. [9] C. Greenhill, B. D. McKay and X. Wang, Asymptotic enumeration of sparse 0-1 matrices with irregular row and column sums, J. Combin. Theory Ser. A, 113 (2006) 291–324. [10] L. Gurvits, Van der Waerden/Schrijver-Valiant like conjectures and stable (aka hyperbolic) homogeneous polynomials: one theorem for all, Electron. J. Combin. 15 (2008) #R66 (26 pages). [11] B. D. McKay, Subgraphs of random graphs with specified degrees, Congr. Numer., 33 (1981) 213–223. [12] B. D. McKay, Asymptotics for 0-1 matrices with prescribed line sums, in Enumeration and Design, (Academic Press, 1984) 225–238. [13] B. D. 
McKay, The asymptotic numbers of regular tournaments, eulerian digraphs and eulerian oriented graphs, Combinatorica, 10 (1990) 367–377. 32

[14] B. D. McKay and R. W. Robinson, Asymptotic enumeration of Eulerian circuits in the complete graph, Combin. Prob. Comput., 7 (1998) 437–449. [15] B. D. McKay and X. Wang, Asymptotic enumeration of tournaments with a given score sequence, J. Combin. Theory Ser. A, 73 (1996) 77–90. [16] B. D. McKay and X. Wang, Asymptotic enumeration of 0-1 matrices with equal row sums and equal column sums, Linear Algebra Appl., 373 (2003) 273–288. [17] A. N. Timash¨ev, On permanents of random doubly stochastic matrices and on asymptotic estimates for the number of Latin rectangles and Latin squares (Russian), Diskret. Mat., 14 (2002) 65–86; translation in Discrete Math. Appl., 12 (2002) 431– 452. [18] X. Wang, Asymptotic enumeration of Eulerian digraphs with multiple edges, Australas. J. Combin. 5 (1992) 293–298. [19] X. Wang, Asymptotic enumeration of digraphs by excess sequence, in Graph theory, combinatorics, and algorithms, Vol. 1, 2 (Wiley-Intersci. Publ., Wiley, New York, 1995) 1211–1222. [20] X. Wang, The asymptotic number of Eulerian oriented graphs with multiple edges, J. Combin. Math. Combin. Comput. 24 (1997) 243–248. [21] N. C. Wormald, Some problems in the enumeration of labelled graphs, Ph. D. Thesis, Department of Mathematics, University of Newcastle (1978).

33