Resolvent of Large Random Graphs

Charles Bordenave* and Marc Lelarge†

May 11, 2009

* Institut de Mathématiques, Université de Toulouse & CNRS, France. Email: [email protected]
† INRIA-ENS, France. Email: [email protected]
Abstract. We analyze the convergence of the spectrum of large random graphs to the spectrum of a limiting infinite graph. We apply these results to graphs converging locally to trees and derive a new formula for the Stieltjes transform of the spectral measure of such graphs. We illustrate our results on uniform regular graphs, Erdős-Rényi graphs and preferential attachment graphs. We sketch examples of application for weighted graphs, bipartite graphs and the uniform spanning tree on n vertices. MSC-class: 05C80, 15A52 (primary), 47A10 (secondary).
1 Definition and Main Results
1.1 Convergence of the spectral measure of random graphs
We denote a (multi-)graph $G$ with vertex set $V$ and undirected edge set $E$ by $G = (V, E)$. The degree of the vertex $v \in V$ in $G$ is $\deg(G, v)$. In this paper, a network is a graph $G = (V, E)$ together with a complete separable metric space $H$, called the mark space, and a map from $V$ to $H$. When needed, we use the following notation: $G$ will denote a graph and $\mathbf G$ a network with underlying graph $G$. A rooted network $(G, o)$ is a network $G$ with a distinguished vertex $o$ of $G$, called the root. A rooted isomorphism of rooted networks is an isomorphism of the underlying networks that takes the root of one to the root of the other. $[G, o]$ will denote the class of rooted networks that are rooted-isomorphic to $(G, o)$. We shall use the following notion introduced by Benjamini and Schramm [5] and Aldous and Steele [2]. Let $\mathcal G^*$ (respectively $G^*$) denote the set of rooted isomorphism classes of rooted connected locally finite networks (respectively graphs). Define a metric on $\mathcal G^*$ by letting the distance between $(G_1, o_1)$ and $(G_2, o_2)$ be $1/\alpha$, where $\alpha$ is the supremum of those $r > 0$ such that there is some rooted isomorphism of the balls of (graph-distance) radius $\lfloor r \rfloor$ around the roots of $G_1$ and $G_2$ such that each mark has distance less than $1/r$. We define the same metric on $G^*$ by considering that a graph is a network with a constant mark attached to each vertex.
$\mathcal G^*$ and $G^*$ are separable and complete metric spaces [1]. For probability measures $\rho, \rho_n$ on $\mathcal G^*$, we write $\rho_n \Rightarrow \rho$ when $\rho_n$ converges weakly with respect to this metric. For a finite network $G$, let $U(G)$ denote the distribution on $\mathcal G^*$ obtained by choosing a uniform random vertex of $G$ as root (see [1]). We also define $U_2(G)$ as the distribution on $\mathcal G^* \times \mathcal G^*$ of the pair of rooted networks $((G, o_1), (G, o_2))$, where $(o_1, o_2)$ is a uniform random pair of vertices of $G$. If $(G_n)$, $n \in \mathbb N$, are finite networks and $\rho$ is a probability measure on $\mathcal G^*$, we say that the random weak limit of $G_n$ is $\rho$ if $U(G_n) \Rightarrow \rho$. If $(G, o)$ is a random rooted network whose equivalence class in $\mathcal G^*$ has distribution $\rho$, then we shall also say that the random weak limit of $G_n$ is $G$. We shall also consider a sequence of random finite networks $(G_n)$, $n \in \mathbb N$; the expectation with respect to the randomness of the graph is denoted by $E(\cdot)$. The measure $\rho_n = U(G_n)$ is now a random measure on $\mathcal G^*$ and, following Aldous and Steele [2], we will say that the random weak limit of $G_n$ is $\rho$ if $E\rho_n \Rightarrow \rho$ (recall that the average measure $E\rho_n$ is defined by $(E\rho_n)(A) = E(\rho_n(A))$ for all measurable events $A$ on $\mathcal G^*$). Note that the notion of random weak convergence for random graphs involves averaging with respect to the randomness of the graph. In the first part of this paper, we consider a sequence of finite graphs $G_n = ([n], E_n)$, where $[n] = \{1, \dots, n\}$, whose measures $\rho_n = U(G_n)$ converge to a random weak limit $\rho$. We will also consider a sequence of random finite graphs $G_n = ([n], E_n)$ converging to a random weak limit $\rho$.
We denote by $A^{(n)} = A(G_n)$ the $n \times n$ adjacency matrix of $G_n$, in which $A^{(n)}_{ij} = |\{(ij) \in E_n\}|$ is the number of edges between $i$ and $j$. The Laplace matrix of $G_n$ is $L^{(n)} = D^{(n)} - A^{(n)}$, where $D^{(n)}$ is the degree diagonal matrix, in which $D^{(n)}_{ii} = \deg(G_n, i) = \sum_{j \in [n]} A^{(n)}_{ij}$ is the degree of $i$ in $G_n$ and $D^{(n)}_{ij} = 0$ for all $i \neq j$. The main object of this paper is to study the convergence of the empirical measures of the eigenvalues of $A^{(n)}$ and $L^{(n)}$ respectively when the sequence of graphs converges weakly. Note that the spectra of $A^{(n)}$ and $L^{(n)}$ do not depend on the labeling of the graph $G_n$: if we label the vertices of $G_n$ differently, the resulting matrices are unitarily equivalent to $A^{(n)}$ and $L^{(n)}$, and it is well known that spectra are unitarily invariant. For ease of notation, we define
$$\Delta^{(n)}_\alpha = A^{(n)} - \alpha D^{(n)},$$
with $\alpha \in \{0, 1\}$, so that $\Delta^{(n)}_0 = A^{(n)}$ and $\Delta^{(n)}_1 = -L^{(n)}$. The spectral measure of $\Delta^{(n)}_\alpha$ is denoted by $\mu^{(n)}_\alpha = n^{-1} \sum_{i=1}^n \delta_{\lambda^{(n)}_{\alpha,i}}$, where $(\lambda^{(n)}_{\alpha,i})_{1 \le i \le n}$ are the eigenvalues of $\Delta^{(n)}_\alpha$.
We endow the set of measures on $\mathbb R$ with the usual topology of weak convergence. This convergence is metrizable by the Lévy distance $L(\mu, \nu) = \inf\{h \ge 0 : \forall x \in \mathbb R,\ \mu((-\infty, x-h]) - h \le \nu((-\infty, x]) \le \mu((-\infty, x+h]) + h\}$. We will consider two assumptions, one, denoted by (D), for a given sequence of finite graphs and another, denoted by (R), for a sequence of random finite graphs.

D. As $n$ goes to infinity, $U(G_n)$ converges weakly to $\rho$.

R. As $n$ goes to infinity, $U_2(G_n)$ converges weakly to $\rho \otimes \rho$.
A uniform integrability assumption will also be needed in this work. Let $\deg(G_n, o) := \deg(U(G_n), o)$ denote the degree of the root under $\rho_n$ or under $E\rho_n$, according to the case (D) or (R).

A. The sequence of variables $(\deg(G_n, o))$, $n \in \mathbb N$, is uniformly integrable.

Assumption (A) is satisfied, for example, if for some $\gamma > 1$, $\limsup_n E \deg(G_n, o)^\gamma < \infty$.

Theorem 1 (i) Let $G_n = ([n], E_n)$ be a sequence of graphs satisfying assumptions (D-A). Then there exists a probability measure $\mu_\alpha$ on $\mathbb R$ such that $\lim_{n\to\infty} \mu^{(n)}_\alpha = \mu_\alpha$ (weakly).
(ii) Let $G_n = ([n], E_n)$ be a sequence of random graphs satisfying assumptions (R-A). Then there exists a probability measure $\mu_\alpha$ on $\mathbb R$ such that $\lim_{n\to\infty} E L(\mu^{(n)}_\alpha, \mu_\alpha) = 0$.
In (ii), note that the stated convergence implies the weak convergence of the law of $\mu^{(n)}_\alpha$ to $\delta_{\mu_\alpha}$: for all bounded continuous functions $f$ on the set of measures on $\mathbb R$, $\lim_n E f(\mu^{(n)}_\alpha) = f(\mu_\alpha)$.

We now sketch the method of proof. We denote $\mathbb C_+ = \{z \in \mathbb C : \Im z > 0\}$. Let $H$ be the set of holomorphic functions $f$ from $\mathbb C_+$ to $\mathbb C_+$ such that $|f(z)| \le \frac{1}{\Im z}$. We introduce the resolvent of the matrix $\Delta^{(n)}_\alpha$:
$$R^{(n)}(z) = (\Delta^{(n)}_\alpha - z I_n)^{-1}.$$
Recall [4] that the Stieltjes transform of the empirical spectral distribution is given by
$$m^{(n)}_\alpha(z) = \int_{\mathbb R} \frac{1}{x - z}\, d\mu^{(n)}_\alpha(x) = \frac{1}{n} \operatorname{tr} R^{(n)}(z),$$
where $z = u + iv$ with $v > 0$. We will see that for any $i \in [n]$ we have $R^{(n)}_{ii} \in H$, and that the space $H$, equipped with the topology induced by uniform convergence on compact sets, is a compact, complete, separable metrizable space. $H$ will be our mark space: to each vertex $i \in [n]$ we attach the mark $R^{(n)}_{ii}$. Then $\mathbf G_n = \{G_n, (R^{(n)}_{ii})_{i\in[n]}\}$ is a finite network; let $\rho_n$ be the probability measure on $\mathcal G^*$ of $U(\mathbf G_n)$. Note that we have
$$m^{(n)}_\alpha(z) = \frac{1}{n} \sum_{i=1}^n R^{(n)}_{ii}(z) = \int \langle R(z) o, o\rangle \, d\rho_n[G, o].$$
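As a quick numerical illustration of this identity (not part of the paper's argument), the following Python sketch builds a small Erdős-Rényi adjacency matrix and checks that the normalized trace of the resolvent equals the Stieltjes transform of the empirical spectral measure; the graph size and test point $z$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3.0 / 200           # assumed test values, not from the paper

# adjacency matrix of a sparse Erdos-Renyi graph (alpha = 0, Delta = A)
mask = rng.random((n, n)) < p
A = np.triu(mask, 1).astype(float)
A = A + A.T

z = 0.5 + 1.0j                  # any z with positive imaginary part
R = np.linalg.inv(A - z * np.eye(n))            # resolvent (Delta_0 - z I)^{-1}
stieltjes_trace = np.trace(R) / n               # n^{-1} tr R(z)

eigs = np.linalg.eigvalsh(A)                    # eigenvalues of Delta_0
stieltjes_spectral = np.mean(1.0 / (eigs - z))  # int (x - z)^{-1} dmu_n(x)

print(stieltjes_trace, stieltjes_spectral)      # the two quantities agree
assert abs(stieltjes_trace - stieltjes_spectral) < 1e-10
```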
Under a hypothesis of uniformly bounded degrees in $G$, it is proved in [15] that the adjacency operator of $G$ is self-adjoint. We will thus be able to define the resolvent $R$ of the possibly infinite graph $(G, o)$ with law $\rho$. We will then obtain a network with law $\rho$ and prove the following convergence result: $\rho_n \Rightarrow \rho$. As a corollary, we will get
$$\lim_{n\to\infty} m^{(n)}_\alpha(z) = \lim_{n\to\infty} \int \langle R(z) o, o\rangle\, d\rho_n[G, o] = \int \langle R(z) o, o\rangle\, d\rho[G, o].$$
To relax the hypothesis of uniformly bounded degrees and prove Theorem 1(i), we will use a standard difference inequality. In order to prove Theorem 1(ii), some more work is needed, since the argument depicted above only leads to $\lim_{n\to\infty} E m^{(n)}_\alpha(z) = \int \langle R(z)o, o\rangle\, d\rho[G, o]$. The correlation assumption (R) will be used to obtain $\lim_{n\to\infty} E\left| m^{(n)}_\alpha(z) - \int \langle R(z)o, o\rangle\, d\rho[G, o] \right| = 0$.
1.2 Random graphs with trees as local weak limit
We now consider a sequence of random graphs $(G_n, n \in \mathbb N^*)$ which converges, as $n$ goes to infinity, to a possibly infinite tree. A Galton-Watson Tree (GWT) with offspring distribution $F$ is the random tree obtained by a standard Galton-Watson branching process with offspring distribution $F$. For example, the infinite $k$-ary tree is a GWT with offspring distribution $\delta_k$, see Figure 1. A GWT with degree distribution $F_*$ is a rooted random tree obtained by a Galton-Watson branching process where the root has offspring distribution $F_*$ and all other genitors have offspring distribution $F$, where for all $k \ge 1$, $F(k-1) = k F_*(k) / \sum_k k F_*(k)$ (we assume $\sum_k k F_*(k) < \infty$). For example, the infinite $k$-regular tree is a GWT with degree distribution $\delta_k$, see Figure 1. It is easy to check that a GWT with degree distribution $F_*$ defines a unimodular probability measure on $G^*$ (for a definition and properties of unimodular measures, refer to [1]).
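To make this construction concrete, here is a short Python sketch (an illustration of ours, not taken from the paper) that computes the size-biased offspring distribution $F$ from a given degree distribution $F_*$ and samples a GWT with degree distribution $F_*$ down to a fixed depth; the truncation depth and the particular $F_*$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def offspring_from_degree(F_star):
    """Size-biased shift: F(k-1) = k F_*(k) / sum_k k F_*(k)."""
    k = np.arange(len(F_star))
    F = k * F_star
    return F[1:] / F[1:].sum()        # index j corresponds to j offspring

def sample_gwt(F_star, depth):
    """Degrees of a GWT with degree distribution F_*, generated for `depth` generations."""
    F = offspring_from_degree(F_star)
    root_children = rng.choice(len(F_star), p=F_star)
    generation, degrees = [root_children], [root_children]
    for _ in range(depth - 1):
        next_gen = []
        for parent_children in generation:
            for _ in range(parent_children):
                c = rng.choice(len(F), p=F)   # number of children of this vertex
                degrees.append(c + 1)         # +1 for the edge to the parent
                next_gen.append(c)
        generation = next_gen
    return degrees

# hypothetical degree distribution F_* supported on {0,...,4}
F_star = np.array([0.05, 0.25, 0.35, 0.25, 0.10])
print(offspring_from_degree(F_star))
print(sample_gwt(F_star, depth=3)[:10])
```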
Figure 1: Left: representation of a 3-ary tree. Right: representation of a 3-regular tree.

Let $n \in \mathbb N^*$ and let $G_n = ([n], E_n)$ be a random graph on the finite vertex set $[n]$ with edge set $E_n$. As above, let $E\rho_n$ be the probability measure of the rooted graph $U(G_n)$ with root $o$ uniformly drawn on $[n]$. We will assume that assumption (A) holds together with the following.

RT. As $n$ goes to infinity, $U_2(G_n)$ converges weakly to $\rho \otimes \rho$, where $\rho$ is the probability measure on $G^*$ of a GWT with degree distribution $F_*$.

We mention three important classes of graphs which converge locally to a tree and which satisfy our assumptions.
Example 1 Uniform regular graph. The uniform $k$-regular graph on $n$ vertices satisfies these assumptions with the infinite $k$-regular tree as local limit. This follows easily from, for example, Bollobás [6]; see also the survey of Wormald [19].

Example 2 Erdős-Rényi graph. Similarly, consider the Erdős-Rényi graph on $n$ vertices, where there is an edge between two vertices with probability $p/n$, independently of everything else. This sequence of random graphs satisfies the assumptions with limiting tree the GWT with degree distribution $\mathrm{Poi}(p)$.

Example 3 Graphs with given asymptotic degree distribution and preferential attachment graphs. More generally, the usual random graph with asymptotic degree distribution $F_*$ satisfies this set of hypotheses provided that $\int x F_*(dx) < \infty$ (see Chapter 3 in Durrett [8]).
Recall that $\Delta^{(n)}_\alpha = A^{(n)} - \alpha D^{(n)}$, with $\alpha \in \{0,1\}$; the spectral measure of $\Delta^{(n)}_\alpha$ is denoted by $\mu^{(n)}_\alpha$. The resolvent of $\Delta^{(n)}_\alpha$ is $R^{(n)}(z) = (\Delta^{(n)}_\alpha - z I_n)^{-1}$. For all $i \in [n]$, the mapping $z \mapsto R^{(n)}_{ii}(z)$ belongs to $H$ and, under $E\rho_n$, we define the random variable in $H$
$$X^{(n)}_\alpha = R^{(n)}_{oo}.$$
Finally, the Stieltjes transform of $\mu^{(n)}_\alpha$ is denoted by $m^{(n)}_\alpha(z) = \int_{\mathbb R} (x - z)^{-1}\, d\mu^{(n)}_\alpha(x) = n^{-1} \operatorname{tr} R^{(n)}(z)$. Our main result is the following.
Theorem 2 Under assumptions (RT-A),

(i) There exists a unique probability measure $Q_\alpha \in P(H)$ such that for all $z \in \mathbb C_+$,
$$Y(z) \overset{d}{=} -\left(z + \alpha(N+1) + \sum_{i=1}^{N} Y_i(z)\right)^{-1}, \qquad (1)$$
where $N$ has distribution $F$ and $Y$ and the $Y_i$ are iid copies with law $Q_\alpha$, independent of $N$.

(ii) For all $z \in \mathbb C_+$, $m^{(n)}_\alpha(z)$ converges as $n$ tends to infinity in $L^1$ to $E X_\alpha(z)$, where for all $z \in \mathbb C_+$,
$$X_\alpha(z) \overset{d}{=} -\left(z + \alpha N_* + \sum_{i=1}^{N_*} Y_i(z)\right)^{-1}, \qquad (2)$$
where $N_*$ has distribution $F_*$ and the $Y_i$ are iid copies with law $Q_\alpha$, independent of $N_*$.
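Theorem 2 suggests a simple "population dynamics" numerical scheme for approximating $E X_\alpha(z)$. The sketch below is our illustration, under assumptions not made in the paper (a Poisson degree distribution, a finite pool size and a fixed number of iterations); it iterates the map defined by (1) on a pool of samples of $Y(z)$ and then applies (2) once.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_dynamics(z, alpha, mean_deg, pool_size=5000, n_iter=100):
    """Approximate the RDE (1)-(2) for a GWT with degree distribution Poi(mean_deg);
    for Poisson, the size-biased offspring distribution F is again Poi(mean_deg)."""
    Y = np.full(pool_size, -1.0 / z, dtype=complex)        # arbitrary starting pool in C_+
    for _ in range(n_iter):
        N = rng.poisson(mean_deg, size=pool_size)          # N ~ F
        # sum of N samples drawn with replacement from the current pool
        S = np.array([Y[rng.integers(0, pool_size, size=k)].sum() for k in N])
        Y = -1.0 / (z + alpha * (N + 1) + S)               # update: equation (1)
    N_star = rng.poisson(mean_deg, size=pool_size)         # N_* ~ F_* = Poi(mean_deg)
    S = np.array([Y[rng.integers(0, pool_size, size=k)].sum() for k in N_star])
    X = -1.0 / (z + alpha * N_star + S)                    # equation (2)
    return X.mean()                                        # approximates E X_alpha(z)

z = 0.3 + 1.0j
print(population_dynamics(z, alpha=0, mean_deg=3.0))       # approximates lim_n m_0^(n)(z)
```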
Equation (1) is a Recursive Distributional Equation (RDE). Such equations appear often in combinatorial optimization and branching processes; see Aldous and Bandyopadhyay [3] for a survey on max-type RDEs. In random matrix theory, the Stieltjes transform classically appears as a fixed point of a mapping on $H$. For example, in the Wigner case (i.e. the matrix $W_n = (A_{ij}/\sqrt n)_{1 \le i,j \le n}$, where the $A_{ij}$ are iid copies of $A$ with $\mathrm{var}(A) = \sigma^2$), the Stieltjes transform $m(z)$ of the limiting spectral measure, the semicircular law with radius $2\sigma$, satisfies for all $z \in \mathbb C_+$,
$$m(z) = -\left(z + \sigma^2 m(z)\right)^{-1}. \qquad (3)$$
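For concreteness (and outside the paper's argument), one can check numerically that iterating the map $m \mapsto -(z + \sigma^2 m)^{-1}$ converges to the Stieltjes transform of the semicircular law; the test point, tolerance and iteration count below are arbitrary.

```python
import numpy as np

z, sigma = 0.7 + 0.5j, 1.0      # arbitrary test point in C_+ and variance

# iterate the fixed point equation (3): m = -(z + sigma^2 m)^{-1}
m = -1.0 / z
for _ in range(500):
    m = -1.0 / (z + sigma**2 * m)

# closed-form Stieltjes transform of the semicircle law of radius 2*sigma
# (principal square root; adequate for this particular test point)
m_exact = (-z + np.sqrt(z**2 - 4 * sigma**2)) / (2 * sigma**2)

print(m, m_exact)
assert abs(m - m_exact) < 1e-10
```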
Here, in Theorem 2, the fixed point is at the level of $P(H)$. This is due to the fact that the weak limit of $X^{(n)}_\alpha(z)$ is not necessarily a deterministic variable, and $m^{(n)}_\alpha(z) = n^{-1} \operatorname{tr} R^{(n)}(z)$ has to converge to the mean of the weak limit of $X^{(n)}_\alpha(z)$.
Note that for all $z \in \mathbb C_+$, $m^{(n)}_\alpha(z)$ and $X_\alpha(z)$ have bounded support, hence the convergence in Theorem 2(ii) of $m^{(n)}_\alpha(z)$ to $E X_\alpha(z)$ holds in $L^p$ for all $p \ge 1$. It is tempting to conjecture that, on $H$, $X^{(n)}_\alpha$ converges weakly to $X_\alpha$. In this paper, we only prove that $X^{(n)}_\alpha$ converges weakly to $X_\alpha$ if the degree $\deg(G_n, o)$ is uniformly bounded by some constant $M$.

Example 2 If $G_n$ is the Erdős-Rényi graph on $[n]$ with parameter $p/n$, then $G_n$ converges to the GWT with degree distribution $\mathrm{Poi}(p)$. In this case, $F$ and $F_*$ have the same distribution, and then the law of $X_0$ is $Q_0$. Theorem 2 sheds a new light on a result of Khorunzhy, Shcherbina and Vengerovsky [12], Theorems 3 and 4, Equations (2.17), (2.24). Indeed, for all $z \in \mathbb C_+$, we may easily find a fixed point equation for the Fourier transform of $Y(z)$ using the formula
$$e^{i\theta w} = 1 - \sqrt\theta \int_0^\infty \frac{J_1(2\sqrt{\theta t})}{\sqrt t}\, e^{-it/w}\, dt,$$
valid for all $\theta \ge 0$ and $w \in \mathbb C_+$, where $J_1(t) = \frac t2 \sum_{k=0}^\infty \frac{(-t^2/4)^k}{k!(k+1)!}$ is the Bessel function of the first kind.

Example 1 If $G_n$ is the uniform $k$-regular graph on $[n]$, with $k \ge 2$, then $G_n$ converges to the GWT with degree distribution $\delta_k$. We consider the case $\alpha = 0$. Looking for deterministic solutions of (1), we find $Y(z) = -(z + (k-1)Y(z))^{-1}$; hence, in view of (3), $Y(z) = -\frac{1}{2(k-1)}\left(z - \sqrt{z^2 - 4(k-1)}\right)$, and $Y$ is simply the Stieltjes transform of the semicircular law with radius $2\sqrt{k-1}$. For $X_0(z)$ we obtain
$$X_0(z) = -(z + kY(z))^{-1} = -\frac{2(k-1)}{(k-2)z + k\sqrt{z^2 - 4(k-1)}}.$$
In particular, $\Im X_0(z) = \Im(z + kY(z))/|z + kY(z)|^2$. Using the formula $\mu_0[a,b] = \lim_{v \to 0^+} \frac1\pi \int_a^b \Im X_0(x + iv)\, dx$, valid for all continuity points $a < b$ of $\mu_0$, we deduce easily that $\mu^{(n)}_0$ converges weakly to the probability measure $\mu_0(dx) = f(x)\,dx$, which has a density $f$ on $[-2\sqrt{k-1}, 2\sqrt{k-1}]$ given by
$$f(x) = \frac{k}{2\pi}\,\frac{\sqrt{4(k-1) - x^2}}{k^2 - x^2},$$
and $f(x) = 0$ if $x \notin [-2\sqrt{k-1}, 2\sqrt{k-1}]$. This formula for the density of the spectral measure is already known and is due to McKay [13]. This probability measure first appeared in Kesten [11] in the context of simple random walks on groups. To the best of our knowledge, the proof of Theorem 2 gives the first proof of McKay's theorem using the resolvent method. It is interesting to notice that this measure and the semicircle distribution are simply related through their Stieltjes transforms.
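As a sanity check (ours, not the paper's), the empirical Stieltjes transform of a large random $k$-regular graph can be compared with the closed form for $X_0(z)$ above. The sketch builds an approximate $k$-regular multigraph by a simple configuration-model pairing, which may contain a few loops or multiple edges that are negligible for the comparison; $k$, $n$ and $z$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 3, 2000                      # assumed parameters for the experiment

# configuration model: pair up k*n half-edges uniformly at random
stubs = np.repeat(np.arange(n), k)
rng.shuffle(stubs)
A = np.zeros((n, n))
for u, v in zip(stubs[0::2], stubs[1::2]):
    A[u, v] += 1.0
    A[v, u] += 1.0                  # loops/multi-edges are rare and ignored here

z = 0.4 + 0.8j
eigs = np.linalg.eigvalsh(A)
m_empirical = np.mean(1.0 / (eigs - z))       # (1/n) tr R^(n)(z)

# closed form from the text: X_0(z) = -2(k-1) / ((k-2) z + k sqrt(z^2 - 4(k-1)))
sq = np.sqrt(z**2 - 4 * (k - 1))              # principal branch, valid at this z
X0 = -2 * (k - 1) / ((k - 2) * z + k * sq)

print(m_empirical, X0)              # the two values should be close for large n
```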
The remainder of this paper is organized as follows: in Section 2 we prove Theorem 1, in Section 3 we prove Theorem 2, and finally, in Section 4, we extend and apply our results to related graphs.
2 Proof of Theorem 1

2.1 Definition of the random finite networks $\rho_n$
In this section, we check the definition of the finite networks $\mathbf G_n = \{G_n, (R^{(n)}_{ii})_{i \in [n]}\}$ defined in Section 1.1. First note that, by standard linear algebra, we have
$$R^{(n)}_{ii}(z) = \frac{1}{\Delta^{(n)}_{ii} - z - \alpha_i^t (\Delta^{(n)}_i - z I_{n-1})^{-1} \alpha_i},$$
where $\alpha_i$ is the $((n-1)\times 1)$ $i$-th column vector of $\Delta^{(n)}$ with the $i$-th element removed and $\Delta^{(n)}_i$ is the matrix obtained from $\Delta^{(n)}$ with the $i$-th row and column deleted (corresponding to the graph with vertex $i$ deleted). An easy induction on $n$ shows that $R^{(n)}_{ii} \in H$ for all $i \in [n]$.

Lemma 2.1 The space $H$, equipped with the topology induced by uniform convergence on compact sets, is a compact, complete, separable metrizable space.

Proof. This lemma follows from Chapter 7 of [7]. The compactness follows from the following observation: for any compact $K \subset \mathbb C_+$, define $d(K) = \min\{|y - z| : y \in \mathbb R, z \in K\} > 0$; then
$$f \in H \ \Rightarrow\ \sup_{z \in K} |f(z)| \le d(K)^{-1}.$$
By Montel's Theorem, $H$ is a normal family in the set of holomorphic functions on $\mathbb C_+$. Since $H$ is closed, the compactness follows. □
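The displayed identity for $R^{(n)}_{ii}$ is the standard Schur-complement formula for a diagonal resolvent entry; the following sketch (ours, not from the paper) verifies it numerically on a small random symmetric matrix, with $\alpha = 0$ so that $\Delta^{(n)}$ is just an adjacency-type matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n, z, i = 8, 0.2 + 1.3j, 0

# a small symmetric 0/1 matrix playing the role of Delta^(n) (alpha = 0)
M = rng.integers(0, 2, size=(n, n)).astype(float)
Delta = np.triu(M, 1) + np.triu(M, 1).T

R = np.linalg.inv(Delta - z * np.eye(n))

# Schur complement: 1 / (Delta_ii - z - a^t (Delta_i - z I)^{-1} a)
mask = np.arange(n) != i
a = Delta[mask, i]                               # i-th column with the i-th entry removed
Delta_i = Delta[np.ix_(mask, mask)]              # Delta with row/column i deleted
schur = 1.0 / (Delta[i, i] - z - a @ np.linalg.inv(Delta_i - z * np.eye(n - 1)) @ a)

print(R[i, i], schur)
assert abs(R[i, i] - schur) < 1e-10
```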
2.2 Linear operators associated with a graph of bounded degree
We first recall some standard results that can be found in Mohar and Woess [16]. Let $(G, o)$ be the random weak limit of $G_n$. In this section we assume that $\deg(G) = \sup\{\deg(G, u), u \in V\} < \infty$. Let $A$ be the adjacency matrix of $G$. We define the matrix $\Delta(\alpha) = \Delta = A - \alpha D$, where $D$ is the degree diagonal matrix and $\alpha \in \{0, 1\}$. Let $e_k = \{\delta_{ik} : i \in \mathbb N\}$ be the specified complete orthonormal system of $L^2(\mathbb N)$. Then $\Delta$ can be interpreted as a linear operator over $L^2(\mathbb N)$, defined on the basis vector $e_k$ by $\langle \Delta e_k, e_j\rangle = \Delta_{kj}$. Since $G$ is locally finite, $\Delta e_k$ is an element of $L^2(\mathbb N)$ and $\Delta$ can be extended by linearity to a dense subspace of $L^2(\mathbb N)$ spanned by the basis vectors $\{e_k, k \in \mathbb N\}$. Denote this dense subspace by $H_0$ and the corresponding operator by $\Delta_0$. The operator $\Delta_0$ is symmetric on $H_0$ and thus closable (Section VIII.2 in [17]). We will denote the closure of $\Delta_0$ by the same symbol $\Delta$ as the matrix. The operator $\Delta$ is by definition a closed symmetric transformation: the coordinates of $y = \Delta x$ are
$$y_i = \sum_j \Delta_{ij} x_j, \qquad i \in \mathbb N,$$
whenever these series converge. The following lemma is proved in [15] for the case $\alpha = 0$; the case $\alpha = 1$ follows by the same argument.

Lemma 2.2 Assume that $\deg(G) = \sup\{\deg(G, u), u \in V\} < \infty$. Then $\Delta$ is self-adjoint.
2.3 Proof of Theorem 1 (i) - bounded degree
We assume that $\deg(G) = \sup\{\deg(G, u), u \in V\} < \infty$. Let $(G, o)$ be the random weak limit of $G_n$. Let $\Delta^{(n)}$ also denote the (random) operator associated to the graph $U(G_n)$ as in Section 2.2. Since $G_n$ is finite, $\Delta^{(n)}$ is a bounded self-adjoint operator. By the Skorokhod representation theorem, we can assume that $U(G_n)$ and $G$ are defined on a common probability space such that a.s. $G_n$ has random weak limit $G$. The random operators $\Delta^{(n)}$ are defined on the same probability space and we have, by definition,
$$\Delta^{(n)} \varphi \to \Delta \varphi \quad \text{a.s.}, \qquad (4)$$
for each $\varphi \in H_0$, the subspace of $L^2(\mathbb N)$ spanned by the basis vectors $\{e_k, k \in \mathbb N\}$. By Theorem VIII.25(a) in [17], the convergence (4) and the fact that $\Delta$ is a self-adjoint operator imply the convergence of $\Delta^{(n)}$ to $\Delta$ in the strong resolvent sense:
$$\|R^{(n)}(z)x - R(z)x\| \to 0,$$
for any $x \in L^2(\mathbb N)$ and for all $z \in \mathbb C_+$. This last statement implies $\rho_n \Rightarrow \rho$. If we consider the same probability space as above, we can write
$$m^{(n)}(z) = \frac1n \sum_{i=1}^n \langle R^{(n)}(z) e_i, e_i\rangle \to \int \langle R(z) o, o\rangle\, d\rho[G, o] =: \mathrm{Tr}(R(z)),$$
since we have the domination $|\langle R^{(n)}(z) e_i, e_i\rangle| \le (\Im z)^{-1}$, and where the trace $\mathrm{Tr}$ was defined in [1]. Hence, under the assumption $\deg(G) = \sup\{\deg(G, u), u \in V\} < \infty$, we have proved
$$m^{(n)}(z) \to \mathrm{Tr}(R(z)), \qquad (5)$$
since the limit is deterministic.
2.4 Proof of Theorem 1 (i) - general case
Let $M \in \mathbb N$. We define the graph $\tilde G_n$ on $[n]$ obtained from $G_n$ by removing all edges adjacent to a vertex $i$ with $\deg(G_n, i) > M$. The adjacency matrix of $\tilde G_n$, denoted by $B^{(n)}$, is therefore equal to
$$B^{(n)}_{ij} = \begin{cases} A^{(n)}_{ij} & \text{if } \max\{\deg(G_n, i), \deg(G_n, j)\} \le M, \\ 0 & \text{otherwise.}\end{cases}$$
The empirical measure of the eigenvalues of $B^{(n)}$ is denoted by $\mu^{(n,M)}$. By the Difference Inequality Lemma (Lemma 2.3 in [4]),
$$L^3(\mu^{(n)}, \mu^{(n,M)}) \le \frac1n \operatorname{tr}(A^{(n)} - B^{(n)})^2,$$
where $L$ denotes the Lévy distance. We denote $\xi^{(n)}_i = \deg(G_n, i)$. We get
$$\operatorname{tr}(A^{(n)} - B^{(n)})^2 \le \sum_{1 \le i,j \le n} (A^{(n)}_{ij})^2\, 1(\max(\xi^{(n)}_i, \xi^{(n)}_j) > M) \le \sum_{1 \le i,j \le n} A^{(n)}_{ij}\left(1(\xi^{(n)}_i > M) + 1(\xi^{(n)}_j > M)\right)$$
$$\le 2 \sum_{i=1}^n 1(\xi^{(n)}_i > M) \sum_{j=1}^n A^{(n)}_{ij} \le 2 \sum_{i=1}^n \xi^{(n)}_i\, 1(\xi^{(n)}_i > M),$$
and therefore
$$L^3(\mu^{(n)}, \mu^{(n,M)}) \le 2 \int \deg(G, o)\, 1(\deg(G, o) > M)\, d\rho_n[G, o] =: p_{n,M}.$$
We now define $m^{(n,M)}(z)$ as the Stieltjes transform of $\mu^{(n,M)}$. By Lemma 6.2, we deduce that
$$|m^{(n,M)}(z) - m^{(n)}(z)| \le \frac{C}{\Im z}\left((p_{n,M})^{1/3} + (p_{n,M})^{2/3} + p_{n,M}\right). \qquad (6)$$
By assumptions (D-A), we have
$$p_{n,M} \to 2 \int \deg(G, o)\, 1(\deg(G, o) > M)\, d\rho[G, o],$$
where the right-hand term tends to $0$ as $M$ tends to infinity. Fix $\epsilon > 0$; for $M$ sufficiently large and $n \ge N(M)$, we have
$$|m^{(n,M)}(z) - m^{(n)}(z)| \le \frac{C\epsilon}{\Im z}.$$
Hence we get, for $n \ge N(M)$ and $q \in \mathbb N$,
$$|m^{(n+q)}(z) - m^{(n)}(z)| \le \frac{2C\epsilon}{\Im z} + |m^{(n+q,M)}(z) - m^{(n,M)}(z)|.$$
By (5), we have $\lim_{n\to\infty} m^{(n,M)}(z) = m^M(z)$ for some $m^M \in H$. Hence the sequence $m^{(n)}(z)$ is Cauchy and the proof of Theorem 1(i) is complete.
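As a numerical aside (not part of the proof), the sketch below removes edges at high-degree vertices exactly as in the definition of $B^{(n)}$ and checks the inequality $\frac1n \operatorname{tr}(A^{(n)} - B^{(n)})^2 \le 2 \int \deg\, 1(\deg > M)\, d\rho_n$ on a random graph; the graph model and the value of $M$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, M = 500, 8.0 / 500, 10        # assumed parameters

U = np.triu(rng.random((n, n)) < p, 1).astype(float)
A = U + U.T                         # Erdos-Renyi adjacency matrix
deg = A.sum(axis=1)

keep = (deg <= M)
B = A * np.outer(keep, keep)        # B_ij = A_ij iff both endpoints have degree <= M

lhs = np.trace((A - B) @ (A - B)) / n
rhs = 2.0 * np.mean(deg * (deg > M))          # 2 * int deg 1(deg > M) d rho_n
print(lhs, rhs)
assert lhs <= rhs + 1e-9
```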
2.5 Proof of Theorem 1 (ii) - bounded degree
We first assume the following.

A'. There exists $M \in \mathbb N$ such that for all $n \in \mathbb N$ and $i \in [n]$, $\deg(G_n, i) < M$.

With this extra assumption, $\Delta^{(n)}$ and $\Delta$ are self-adjoint operators and, as in §2.3, we deduce that $E\rho_n \Rightarrow \rho$. In particular, we have
$$E[m^{(n)}(z)] \to \int \langle R(z) o, o\rangle\, d\rho[G, o].$$
Hence, in order to prove Theorem 1(ii) with the additional assumption (A'), it is sufficient to prove that, for all $z \in \mathbb C_+$,
$$\lim_{n\to\infty} E\left| m^{(n)}(z) - E[m^{(n)}(z)]\right|^2 = 0. \qquad (7)$$
Take $z \in \mathbb C_+$ with $v := \Im z > M$. Notice that, by Vitali's convergence theorem, it is sufficient to prove (7) for all $z$ such that $\Im z > M$. Since $|(\Delta^{(n)})^k_{ii}| \le M^k$, we can write
$$R^{(n)}_{ii}(z) = -\sum_{k=0}^{\ell-1} \frac{(\Delta^{(n)})^k_{ii}}{z^{k+1}} - \sum_{k=\ell}^{\infty} \frac{(\Delta^{(n)})^k_{ii}}{z^{k+1}} =: X^{(n)}_i(\ell) - \epsilon^{(n)}_i(\ell).$$
Note that $|\epsilon^{(n)}_i(\ell)| \le \epsilon(\ell) := \sum_{k=\ell}^\infty \frac{M^k}{v^{k+1}}$. We denote $\bar X^{(n)}_i(\ell) = X^{(n)}_i(\ell) - E\left[X^{(n)}_i(\ell)\right]$. Since $|X^{(n)}_i(\ell) - R^{(n)}_{ii}| = |\epsilon^{(n)}_i(\ell)| \le \epsilon(\ell)$, we have
$$\left| m^{(n)}(z) - E[m^{(n)}(z)]\right| \le \left|\frac1n \sum_{i=1}^n \bar X^{(n)}_i(\ell)\right| + 2\epsilon(\ell).$$
Therefore, if for all $\ell \in \mathbb N$ we prove that, in $L^2$,
$$\lim_{n\to\infty} \frac1n \sum_{i=1}^n \bar X^{(n)}_i(\ell) = 0, \qquad (8)$$
then the proof of (7) will be complete. We now prove (8):
$$E\left(\frac1n \sum_{i=1}^n \bar X^{(n)}_i(\ell)\right)^2 = \frac1{n^2}\left(\sum_{i \ne j} E\, \bar X^{(n)}_i(\ell) \bar X^{(n)}_j(\ell) + \sum_{i=1}^n E\, \bar X^{(n)}_i(\ell)^2\right) = E\, \bar X^{(n)}_{o_1}(\ell) \bar X^{(n)}_{o_2}(\ell),$$
where $(o_1, o_2)$ is a uniform pair of vertices. We then notice that $\bar X^{(n)}_i(\ell)$ is a measurable function of the ball of radius $\ell$ and center $i$. Thus, by assumption (R), $\lim_n E\, \bar X^{(n)}_{o_1}(\ell) \bar X^{(n)}_{o_2}(\ell) = \lim_n E\, \bar X^{(n)}_{o_1}(\ell)\, E\, \bar X^{(n)}_{o_2}(\ell) = 0$, and (8) follows. Hence we have proved (7) under assumption (A').
2.6 Proof of Theorem 1 (ii) - general case
We now relax assumption (A') to assumption (A). By the same argument as in §2.4, we get
$$E\left| m^{(n,M)}(z) - m^{(n)}(z)\right| \le \frac{C}{\Im z}\left((p_{n,M})^{1/3} + (p_{n,M})^{2/3} + p_{n,M}\right),$$
where $p_{n,M} = 2E\left[\deg(G_n, o)\, 1(\deg(G_n, o) > M)\right]$. By assumptions (R-A),
$$p_{n,M} \to 2\int \deg(G, o)\, 1(\deg(G, o) > M)\, d\rho[G, o],$$
where the right-hand term tends to $0$ as $M$ tends to infinity. The end of the proof follows by the same argument, since we now have, for $n \ge N(M)$ and $q \ge 1$,
$$E|m^{(n+q)}(z) - m^{(n)}(z)| \le \frac{2C\epsilon}{\Im z} + E|m^{(n+q,M)}(z) - m^{(n,M)}(z)|.$$
3 Proof of Theorem 2

3.1 Proof of Theorem 2 (i)
In this paragraph, we check the existence and uniqueness of the solution of the RDE (1). Let $\Theta = \mathbb N \times H^\infty$, where $H^\infty$ is the usual infinite product space. We define a map $\psi : \Theta \to H$ as follows:
$$\psi(n, (h_i)_{i\in\mathbb N}) : \mathbb C_+ \to \mathbb C_+, \qquad z \mapsto -\left(z + \alpha(n+1) + \sum_{i=1}^n h_i(z)\right)^{-1}.$$
Let $\Psi$ be the map from $P(H)$ to itself where $\Psi(P)$ is the distribution of $\psi(N, (Y_i)_{i\in\mathbb N})$, where (i) $(Y_i, i \ge 1)$ are independent with distribution $P$; (ii) $N$ has distribution $F$; (iii) the families in (i) and (ii) are independent. We say that $Q \in P(H)$ is a solution of the RDE (1) if $Q = \Psi(Q)$.

Lemma 2.3 There exists a unique measure $Q_\alpha \in P(H)$ solution of the RDE (1).

Proof. Let $\Omega$ be a bounded open set in $\mathbb C_+$ with empty intersection with the ball of center $0$ and radius $\sqrt{EN} + 1$. Let $P(H)$ be the set of probability measures on $H$. We define the distance on $P(H)$
$$W(P, Q) = \inf E \int_\Omega |X(z) - Y(z)|\, dz,$$
where the infimum is over all possible couplings of the distributions $P$ and $Q$, where $X$ has law $P$ and $Y$ has law $Q$. The fact that $W$ is a distance follows from the fact that two holomorphic functions which are equal on a set containing a limit point are equal. The space $P(H)$ equipped with the metric $W$ is a complete metric space.

Let $X$ with law $P$ and $Y$ with law $Q$ be coupled so that $W(P, Q) = E \int_\Omega |X(z) - Y(z)|\, dz$. We consider $(X_i, Y_i)_{i\in\mathbb N}$ iid copies of $(X, Y)$, independent of the variable $N$. By definition, we have the following:
$$W(\Psi(P), \Psi(Q)) \le E \int_\Omega |\psi(N, (X_i); z) - \psi(N, (Y_i); z)|\, dz \le E \int_\Omega \frac{\left|\sum_{i=1}^N (X_i(z) - Y_i(z))\right|}{\left|z + \alpha(N+1) + \sum_{i=1}^N X_i(z)\right|\,\left|z + \alpha(N+1) + \sum_{i=1}^N Y_i(z)\right|}\, dz$$
$$\le \int_\Omega (\Im z)^{-2}\, E \sum_{i=1}^N |X_i(z) - Y_i(z)|\, dz \le EN\, (\inf_{z\in\Omega} \Im z)^{-2}\, W(P, Q).$$
Then, since $\inf_{z\in\Omega} \Im z > \sqrt{EN}$, $\Psi$ is a contraction and, by the Banach fixed point theorem, there exists a unique probability measure $Q_\alpha$ on $H$ such that $\Psi(Q_\alpha) = Q_\alpha$. □
3.2 Resolvent of a tree
In this paragraph, we prove the following proposition.

Proposition 2.1 Let $F_*$ be a distribution with finite mean, and let $(T_n, 1)$ be a GWT rooted at $1$ with degree distribution $F_*$, stopped at generation $n$. Let $A^{(n)}$ be the adjacency matrix of $T_n$, and $\Delta^{(n)}_\alpha = A^{(n)} - \alpha D^{(n)}$. Let $R^{(n)}(z) = (\Delta^{(n)}_\alpha - z I_n)^{-1}$ and $X^{(n)}_\alpha(z) = R^{(n)}_{11}(z)$. For all $z \in \mathbb C_+$, as $n$ goes to infinity, $X^{(n)}_\alpha(z)$ converges weakly to $X_\alpha(z)$ defined by Equation (2).

We start with the following lemma, which explains where the RDE (1) comes from.

Lemma 2.4 Let $F$ be a distribution with finite mean, and let $(T_n, 1)$ be a GWT rooted at $1$ with offspring distribution $F$, stopped at generation $n$. Let $A^{(n)}$ be the adjacency matrix of $T_n$. We define the matrix $\bar\Delta^{(n)}_\alpha = A^{(n)} - \alpha(D^{(n)} + V^{(n)})$, where $V^{(n)}_{11} = 1$ and $V^{(n)}_{ij} = 0$ for all $(i, j) \ne (1, 1)$. Let $R^{(n)}(z) = (\bar\Delta^{(n)}_\alpha - z I_n)^{-1}$ and $Y^{(n)}(z) = R^{(n)}_{11}(z)$. For all $z \in \mathbb C_+$, as $n$ goes to infinity, $Y^{(n)}(z)$ converges weakly to $Y(z)$ given by the RDE (1).

Proof. We start with the standard formula:
$$Y^{(n)}(z) = -\Big(z + \alpha(D^{(n)}_{11} + 1) + \sum_{2 \le i,j \le n} A^{(n)}_{1i} A^{(n)}_{1j} \tilde R^{(n-1)}_{ij}\Big)^{-1},$$
where $\tilde R^{(n-1)} = (\tilde\Delta^{(n-1)} - z I_{n-1})^{-1}$ and $\tilde\Delta^{(n-1)}$ is the matrix obtained from $\bar\Delta^{(n)}_\alpha$ with the first row and column deleted. Since $T_n$ is a tree, $\tilde R^{(n-1)}_{ij} A^{(n)}_{1i} A^{(n)}_{1j} = 0$ if $i \ne j$, so that we get
$$Y^{(n)}(z) = -\Big(z + \alpha(N + 1) + \sum_{i=1}^{N} Y^{(n-1)}_i(z)\Big)^{-1},$$
where $N$ has distribution $F$ and the $Y^{(n-1)}_i$ are iid copies of $Y^{(n-1)}$, independent of $N$. In other words, the law of $Y^{(n)}$ is $\Psi$ applied to the law of $Y^{(n-1)}$, where the mapping $\Psi$ was defined in §3.1. The end of the proof follows directly from Lemma 2.3. □

Proof of Proposition 2.1. Again, we use the decomposition formula:
$$X^{(n)}_\alpha(z) = -\Big(z + \alpha D^{(n)}_{11} + \sum_{2 \le i,j \le n} \tilde R^{(n-1)}_{ij} A^{(n)}_{1i} A^{(n)}_{1j}\Big)^{-1},$$
where $\tilde R^{(n-1)} = (\tilde\Delta^{(n-1)} - z I_{n-1})^{-1}$ and $\tilde\Delta^{(n-1)}$ is the matrix obtained from $\Delta^{(n)}_\alpha$ with the first row and column deleted. Since $T_n$ is a tree, $\tilde R^{(n-1)}_{ij} A^{(n)}_{1i} A^{(n)}_{1j} = 0$ if $i \ne j$, so that we get
$$X^{(n)}_\alpha(z) = -\Big(z + \alpha N_* + \sum_{i=1}^{N_*} Y^{(n-1)}_i(z)\Big)^{-1},$$
where $N_*$ has distribution $F_*$ and the $Y^{(n-1)}_i$ are iid copies of $Y^{(n-1)}$, defined in Lemma 2.4, independent of $N_*$. Proposition 2.1 follows easily from Lemma 2.4. □
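The decomposition used above can be checked directly on a concrete finite tree. The following sketch (ours, not from the paper) verifies, for a small explicitly constructed tree and $\alpha = 0$, that the resolvent at the root equals $-(z + \sum_{\text{children } c} \tilde R^{(c)}_{cc})^{-1}$, where each $\tilde R^{(c)}$ is the resolvent of the subtree of descendants of the child $c$.

```python
import numpy as np

# a small rooted tree: vertex 0 is the root, edges given as (parent, child)
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

z = 0.1 + 1.0j
R_root = np.linalg.inv(A - z * np.eye(n))[0, 0]

def subtree_resolvent(child):
    """Diagonal resolvent entry at `child` of the subtree of its descendants (alpha = 0)."""
    children = {u: [] for u in range(n)}
    for u, v in edges:
        children[u].append(v)
    stack, comp = [child], []
    while stack:
        v = stack.pop()
        comp.append(v)
        stack.extend(children[v])
    idx = {v: i for i, v in enumerate(comp)}
    B = np.zeros((len(comp), len(comp)))
    for u, v in edges:
        if u in idx and v in idx:
            B[idx[u], idx[v]] = B[idx[v], idx[u]] = 1.0
    return np.linalg.inv(B - z * np.eye(len(comp)))[idx[child], idx[child]]

recursion = -1.0 / (z + sum(subtree_resolvent(c) for c in (1, 2)))
print(R_root, recursion)
assert abs(R_root - recursion) < 1e-10
```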
3.3 Proof of Theorem 2 (ii) - bounded degree
We first assume that assumption (A') holds. Then, as in §2.2 and §2.5, $\Delta^{(n)}_\alpha$ and $\Delta_\alpha$ are self-adjoint operators and $E\rho_n \Rightarrow \rho$. From Theorem 1, we have
$$m^{(n)}(z) \overset{L^1}{\longrightarrow} \int \langle R(z) o, o\rangle\, d\rho[G, o].$$
Proposition 2.1 and Theorem VIII.25(a) in [17] imply that $\langle R(z) o, o\rangle \overset{d}{=} X_\alpha(z)$. The proof of Theorem 2(ii) is complete under the extra assumption (A').
3.4 Proof of Theorem 2 (ii) - general case
We now relax assumption (A') to assumption (A). From Theorem 1, it is sufficient to prove that $E m^{(n)}(z)$ converges to $E X_\alpha(z)$, where $X_\alpha$ is defined by Equation (2). By the same argument as in §2.4, it is sufficient to prove that, for the weak convergence on $H$,
$$\lim_{M\to\infty} X^{(M)}_\alpha = X_\alpha,$$
where $X^{(M)}_\alpha$ is defined by Equation (2) with a degree distribution $F^{(M)}_*$ which converges weakly to $F_*$ as $M$ goes to infinity. This continuity property is established in Lemma 6.3 (in the Appendix). The proof of the theorem is complete.
4 Motivated applications and discussion

4.1 Weighted graphs
A weighted graph is a graph $G = (V, E)$ with weights attached to its edges. As in §1.1, we consider a sequence of graphs $G_n$ and a sequence of symmetric matrices $W^{(n)} = (w_{ij})_{1\le i,j\le n}$, with $(w_{ij})_{1\le i \le j}$ iid real variables, independent of $G_n$, and $w_{ij} = w_{ji}$. We define $C^{(n)} = W^{(n)} \circ A^{(n)}$, where $\circ$ denotes the Hadamard product (for all $i, j$, $(A \circ B)_{ij} = A_{ij} B_{ij}$), and $T^{(n)}$ is the diagonal matrix whose entry $ii$ is equal to $\sum_j w_{ij} A^{(n)}_{ij}$. Finally, we define $\nu^{(n)}_\alpha$ as the spectral measure of the matrix $\Xi^{(n)}_\alpha = C^{(n)} - \alpha T^{(n)}$. We consider a new assumption:

B. The sequence of variables $\left(\sum_{i=1}^n A^{(n)}_{io} |w_{io}|^2\right)$, $n \in \mathbb N$, is uniformly integrable.
A straightforward extension of Theorem 1 is the following.

Theorem 3 (i) Let $G_n = ([n], E_n)$ be a sequence of graphs satisfying assumptions (D-B). Then there exists a probability measure $\nu_\alpha$ on $\mathbb R$ such that a.s. $\lim_{n\to\infty} \nu^{(n)}_\alpha = \nu_\alpha$.
(ii) Let $G_n = ([n], E_n)$ be a sequence of random graphs satisfying assumptions (R-B). Then there exists a probability measure $\nu_\alpha$ on $\mathbb R$ such that $\lim_{n\to\infty} E L(\nu^{(n)}_\alpha, \nu_\alpha) = 0$.
We may also state an analog of Theorem 2 for the case $\alpha = 0$, that is, $\Delta^{(n)}_0 = A^{(n)}$. We denote by $s^{(n)}$ the Stieltjes transform of $\nu^{(n)}_0$ and by $X^{(n)}(z) = \left((\Xi^{(n)}_0 - z I_n)^{-1}\right)_{oo}$ the value of the resolvent of $\Xi^{(n)}_0$ at the root. The proof of the next result is a straightforward extension of the proof of Theorem 2.

Theorem 4 Assume that assumptions (RT-B) hold. Then

(i) There exists a unique probability measure $P \in P(H)$ such that for all $z \in \mathbb C_+$,
$$Y(z) \overset{d}{=} -\left(z + \sum_{i=1}^N |w_i|^2 Y_i(z)\right)^{-1},$$
where $N$ has distribution $F$, the $w_i$ are iid copies with distribution $w_{11}$, $Y$ and the $Y_i$ are iid copies with law $P$, and the variables $N, w_i, Y_i$ are independent.

(ii) For all $z \in \mathbb C_+$, $s^{(n)}(z)$ converges as $n$ tends to infinity in $L^1$ to $EX(z)$, where for all $z \in \mathbb C_+$,
$$X(z) \overset{d}{=} -\left(z + \sum_{i=1}^{N_*} |w_i|^2 Y_i(z)\right)^{-1},$$
where $N_*$ has distribution $F_*$, the $w_i$ are iid copies with distribution $w_{11}$, the $Y_i$ are iid copies with law $P$, and the variables $N_*, w_i, Y_i$ are independent.
The case $\alpha = 1$ is more complicated: the diagonal term $T^{(n)}$ introduces a dependence within the matrix which breaks the nice recursive structure of the RDE.
4.2 Bipartite graphs
In §1.2, we considered a sequence of random graphs converging weakly to a GWT. Another important class of random graphs is that of bipartite graphs. A graph $G = (V, E)$ is bipartite if there exist two disjoint subsets $V^a, V^b$, with $V^a \cup V^b = V$, such that every edge in $E$ has one endpoint in $V^a$ and the other in $V^b$. The analysis of random bipartite graphs finds strong motivation in coding theory; see for example Richardson and Urbanke [18]. The natural limit for random bipartite graphs is the following Bipartite Galton-Watson Tree (BGWT) with degree distribution $(F_*, G_*)$ and scale $p \in (0,1)$. The BGWT is obtained from a Galton-Watson branching process with alternating offspring distributions. With probability $p$, the root has offspring distribution $F_*$, all odd generation genitors have offspring distribution $G$, and all even generation genitors (apart from the root) have offspring distribution $F$. With probability $1 - p$, the root has offspring distribution $G_*$, all odd generation genitors have offspring distribution $F$, and all even generation genitors have offspring distribution $G$. We now consider a sequence $(G_n)$ of random bipartite graphs satisfying assumptions (R-A) with weak limit a BGWT with degree distribution $(F_*, G_*)$ and scale $p \in (0,1)$. The weak convergence of a natural ensemble of bipartite graphs toward a BGWT with degree distribution $(F_*, G_*)$ and scale $p \in (0,1)$ follows from [18], $p$ being the proportion of vertices in $V^a$, $F_*$ the asymptotic degree distribution of vertices in $V^a$ and $G_*$ the asymptotic degree distribution of vertices in $V^b$. As usual, we denote by $\mu^{(n)}_\alpha$ the spectral measure of $\Delta^{(n)}_\alpha$, by $m^{(n)}_\alpha$ the Stieltjes transform of $\mu^{(n)}_\alpha$, and by $X^{(n)}_\alpha = R^{(n)}_{oo}$ the value of the resolvent taken at the uniformly chosen root. We give without proof the following theorem, which is a generalization of Theorem 2.

Theorem 5 Under the foregoing assumptions,

(i) There exists a unique pair of probability measures $(R^a_\alpha, R^b_\alpha) \in P(H) \times P(H)$ such that for all $z \in \mathbb C_+$,
$$Y^a(z) \overset{d}{=} -\left(z + \alpha(N^a + 1) + \sum_{i=1}^{N^a} Y^b_i(z)\right)^{-1}, \qquad Y^b(z) \overset{d}{=} -\left(z + \alpha(N^b + 1) + \sum_{i=1}^{N^b} Y^a_i(z)\right)^{-1},$$
where $N^a$ (resp. $N^b$) has distribution $F$ (resp. $G$), $Y^a, Y^a_i$ (resp. $Y^b, Y^b_i$) are iid copies with law $R^a_\alpha$ (resp. $R^b_\alpha$), and the variables $N^a, Y^b_i, N^b, Y^a_i$ are independent.

(ii) For all $z \in \mathbb C_+$, $m^{(n)}_\alpha(z)$ converges as $n$ tends to infinity in $L^1$ to $p E X^a_\alpha(z) + (1-p) E X^b_\alpha(z)$, where for all $z \in \mathbb C_+$,
$$X^a_\alpha(z) \overset{d}{=} -\left(z + \alpha N^a_* + \sum_{i=1}^{N^a_*} Y^b_i(z)\right)^{-1}, \qquad X^b_\alpha(z) \overset{d}{=} -\left(z + \alpha N^b_* + \sum_{i=1}^{N^b_*} Y^a_i(z)\right)^{-1},$$
where $N^a_*$ (resp. $N^b_*$) has distribution $F_*$ (resp. $G_*$) and the $Y^b_i$ (resp. $Y^a_i$) are iid copies with law $R^b_\alpha$ (resp. $R^a_\alpha$), independent of $N^a_*$ (resp. $N^b_*$).

In the case $\alpha = 0$ and for bi-regular graphs (i.e. a BGWT with degree distribution $(\delta_k, \delta_l)$ and parameter $p$), the limiting spectral measure is already known; it was first derived by Godsil and Mohar [9], see also Mizuno and Sato [14] for an alternative proof.
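In the bi-regular case the pair of RDEs in Theorem 5 collapses to a deterministic $2\times 2$ fixed point, which can be solved by simple iteration. The Python sketch below is our illustration (not from [9] or [14]); the degrees $k, l$ are arbitrary and $p$ is taken as $l/(k+l)$ so that the two sides of the bipartition carry matching numbers of edge endpoints.

```python
import numpy as np

k, l = 3, 4                 # assumed bi-regular degrees
p = l / (k + l)             # scale consistent with pk = (1-p)l
z = 0.2 + 0.9j              # test point in C_+ (alpha = 0)

# deterministic fixed point of the bipartite RDEs:
#   Y^a = -(z + (k-1) Y^b)^{-1},   Y^b = -(z + (l-1) Y^a)^{-1}
Ya = Yb = -1.0 / z
for _ in range(1000):
    Ya, Yb = -1.0 / (z + (k - 1) * Yb), -1.0 / (z + (l - 1) * Ya)

Xa = -1.0 / (z + k * Yb)    # X^a_0 with N^a_* = k
Xb = -1.0 / (z + l * Ya)    # X^b_0 with N^b_* = l
m = p * Xa + (1 - p) * Xb   # limiting Stieltjes transform at z
print(m)
```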
4.3 Uniform random trees
The uniformly distributed tree on $[n]$ converges weakly to the Skeleton tree $T_\infty$, which is defined as follows. Consider a sequence $T_0, T_1, \dots$ of independent GWTs with offspring distribution the Poisson distribution with intensity $1$ and let $v_0, v_1, \dots$ denote their roots. Then add all the edges $(v_i, v_{i+1})$ for $i \ge 0$. The distribution in $G^*$ of the corresponding infinite tree is the Skeleton tree. See [2] for further properties and Grimmett [10] for the original proof of the weak convergence of the uniformly distributed tree on $[n]$ to the Skeleton tree $T_\infty$.

Let $\mu^{(n)}_0$ denote the spectral measure of the adjacency matrix of the random spanning tree $T^{(n)}$ on $[n]$ drawn uniformly (for simplicity of the statement, we restrict ourselves to the case $\alpha = 0$). We denote by $X^{(n)}(z) = \left((A^{(n)} - z I_n)^{-1}\right)_{oo}$ the value of the resolvent of $A^{(n)}$ taken at the uniform root and by $m^{(n)}(z)$ the Stieltjes transform of $\mu^{(n)}_0$. As an application of Theorems 1 and 2, we have the following.
Theorem 6 (i) There exists a unique probability measure $R \in P(H)$ such that for all $z \in \mathbb C_+$,
$$X(z) \overset{d}{=} \left(W(z)^{-1} - X_1(z)\right)^{-1}, \qquad (9)$$
where $W \in H$ is the resolvent taken at the root of a GWT with offspring distribution $\mathrm{Poi}(1)$, $X$ and $X_1$ have law $R$, and the variables $W$ and $X_1$ are independent.

(ii) For all $z \in \mathbb C_+$, $m^{(n)}(z)$ converges as $n$ tends to infinity in $L^1$ to $EX(z)$.

Sketch of Proof. The sequence $T^{(n)}$ satisfies (R-A); thus, from Theorem 1, it is sufficient to show that the resolvent operator $R = (A - zI)^{-1}$ of the Skeleton tree, taken at the root, satisfies the RDE (9). The number of offspring of the root $v_0$ in $T_0$ is a Poisson random variable with intensity $1$, say $N$; we denote the offspring of $v_0$ in $T_0$ by $v^0_1, \dots, v^0_N$. From the decomposition formula, we have
$$R(z)_{v_0 v_0} = -\left(z + \tilde R(z)_{v_1 v_1} + \sum_{i=1}^N R^i(z)_{v^0_i v^0_i}\right)^{-1},$$
where $\tilde R(z)$ is the resolvent of the infinite tree obtained by removing $T_0$ and $v_0$, and $R^i(z)$ is the resolvent of the subtree of the descendants of $v^0_i$ in $T_0$. Now, by the distributionally invariant structure of $T_\infty$, $\tilde R(z)_{v_1 v_1}$ has the same distribution as $X(z) = R(z)_{v_0 v_0}$, and the $R^i(z)_{v^0_i v^0_i}$ are independent copies of $W(z)$, independent of $\tilde R(z)$. Thus we obtain
$$X(z) \overset{d}{=} -\left(z + X'(z) + \sum_{i=1}^N W_i(z)\right)^{-1} \qquad (10)$$
$$\overset{d}{=} -\left(-W(z)^{-1} + X_1(z)\right)^{-1}, \qquad (11)$$
where in (11) we have applied (1) for a GWT with Poisson offspring distribution. The existence and uniqueness of the solution of (9) follow from (10), using the same proof as in Lemma 2.3. □
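The two-stage structure of (9)-(10) also lends itself to a population-dynamics approximation: first approximate the law of $W$ via the RDE (1) with $\mathrm{Poi}(1)$ offspring, then iterate (10) for $X$. The sketch below is our illustration (pool sizes and iteration counts are arbitrary); it estimates $EX(z)$, which approximates the limit of $m^{(n)}(z)$.

```python
import numpy as np

rng = np.random.default_rng(6)
z, pool, iters = 0.3 + 1.2j, 5000, 100

def resample_sum(Y, counts):
    """Sum of counts[j] samples drawn with replacement from the pool Y, for each j."""
    return np.array([Y[rng.integers(0, len(Y), size=c)].sum() for c in counts])

# Step 1: pool of samples of W, the root resolvent of a Poi(1) Galton-Watson tree,
# via the RDE (1) with alpha = 0 and offspring distribution Poi(1).
W = np.full(pool, -1.0 / z, dtype=complex)
for _ in range(iters):
    N = rng.poisson(1.0, size=pool)
    W = -1.0 / (z + resample_sum(W, N))

# Step 2: pool of samples of X via (10): X = -(z + X' + sum_{i<=N} W_i)^{-1}.
X = np.full(pool, -1.0 / z, dtype=complex)
for _ in range(iters):
    N = rng.poisson(1.0, size=pool)
    Xp = X[rng.integers(0, pool, size=pool)]      # independent copy X'
    X = -1.0 / (z + Xp + resample_sum(W, N))

print(X.mean())     # approximates E X(z), the limit of m^(n)(z) for uniform random trees
```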
Appendix

Some elementary facts on the resolvent of a Hermitian matrix.

Lemma 6.1 Let $n \ge 1$, $z \in \mathbb C$ such that $\Im z = v > 0$, $A$ be a Hermitian $n \times n$ matrix, $R = (A - zI_n)^{-1}$ and $\alpha$ be a complex vector in $\mathbb C^n$. Then,
(i) For all $1 \le i \le n$, $|R_{ii}| \le v^{-1}$;
(ii) $\Im(\alpha^* R \alpha) \ge 0$;
(iii) $\left|(z + \alpha^* R \alpha)^{-1} - \left(z + \sum_{i=1}^n |\alpha_i|^2 R_{ii}\right)^{-1}\right| \le v^{-2} \left|\sum_{i \ne j} \alpha_i^* \alpha_j R_{ij}\right|$.

Proof of Lemma 6.1. We start with (ii). Write $z = u + iv$; since $\Im(a^{-1}) = -\Im(a)/|a|^2$, we have $\Im(\alpha^*(A - zI_n)^{-1}\alpha) = v\, \alpha^*\left((A - uI_n)^2 + v^2 I\right)^{-1}\alpha \ge 0$.

We now prove (iii). We write
$$\left|(z + \alpha^* R \alpha)^{-1} - \Big(z + \sum_{i=1}^n |\alpha_i|^2 R_{ii}\Big)^{-1}\right| = \frac{\left|\sum_{i \ne j} \alpha_i^* \alpha_j R_{ij}\right|}{\left|z + \alpha^* R \alpha\right|\,\left|z + \sum_{i=1}^n |\alpha_i|^2 R_{ii}\right|}.$$
From (ii), $|z + \alpha^* R\alpha| \ge \Im(z + \alpha^* R\alpha) \ge v$. Similarly, (ii) applied to a vector whose only non-zero entry is the $i$-th one gives $\Im(R_{ii}) \ge 0$, and therefore $|z + \sum_{i=1}^n |\alpha_i|^2 R_{ii}| \ge v$. We thus obtain (iii).

It remains to prove (i). It is sufficient to check it for $i = 1$. Let $\tilde R$ be the resolvent of the matrix obtained from $A$ by setting the first column and row to $0$. We write
$$R_{11} = -\Big(z + \sum_{2 \le i,j \le n} A_{1i} A_{1j} \tilde R_{ij}\Big)^{-1} = -(z + \alpha^* \tilde R \alpha)^{-1},$$
with $\alpha^* = (0, A_{12}, \dots, A_{1n})$. By (ii), $\Im(z + \alpha^* \tilde R \alpha) \ge v$, hence $|R_{11}| = |z + \alpha^* \tilde R \alpha|^{-1} \le v^{-1}$. □
Lemma 6.2 Let $F$, $G$ be two distribution functions on $\mathbb R$ with Stieltjes transforms $m_F$ and $m_G$. There exists $C > 0$ such that for all $z \in \mathbb C_+$ with $\Im z > 1$,
$$|m_F(z) - m_G(z)| \le \frac{C}{\Im z}\left(L(F, G) + L^2(F, G) + L^3(F, G)\right).$$
Proof. Fix $z = u + iv$ and $h = L(F, G)$; then $F(x - h) - h \le G(x) \le F(x + h) + h$. Noticing that $m_G(z) = \int_{\mathbb R} \frac{1}{(x-z)^2}\, G(x)\, dx$, it follows that
$$|m_F(z) - m_G(z)| = \left|\int_{\mathbb R} \frac{F(x) - G(x)}{(x - z)^2}\, dx\right| \le \int_{\mathbb R} \frac{|F(x) - G(x)|}{|x - z|^2}\, dx$$
$$\le \int_{\mathbb R} \frac{h}{(x - u)^2 + v^2}\, dx + \int_{\mathbb R} \frac{\max\left(F(x) - F(x - h),\, F(x + h) - F(x)\right)}{(x - u)^2 + v^2}\, dx$$
$$\le \pi h v^{-1} + \int_{\mathbb R} \frac{F(x + h) - F(x)}{(x - u)^2 + v^2}\, dx + \int_{\mathbb R} \frac{F(x) - F(x - h)}{(x - u)^2 + v^2}\, dx$$
$$\le \pi h v^{-1} + 4\int_0^\infty \left|\frac{1}{(x - h)^2 + v^2} - \frac{1}{x^2 + v^2}\right| dx \le \pi h v^{-1} + 4\int_0^\infty \frac{|2xh - h^2|}{\left((x - h)^2 + v^2\right)\left(x^2 + v^2\right)}\, dx,$$
and a direct computation gives the result. □
Lemma 6.3 Let $(F^{(n)}_*)$, $n \in \mathbb N$, be a sequence of probability measures on $\mathbb N$ converging to $F_*$ and such that $\sup_n \sum_k k F^{(n)}_*(k) < \infty$. Denote by $X^{(n)}_\alpha$ and $X_\alpha$ the variables defined by (2) with degree distribution $F^{(n)}_*$ and $F_*$ respectively. Then, for the weak convergence on $H$, $\lim_n X^{(n)}_\alpha = X_\alpha$.

Proof. Let $F_*$ and $F'_*$ be two probability measures on $\mathbb N$ with finite mean, and let $d_{TV}(F_*, F'_*) = \sup_{A \subset \mathbb N} \left|\int_A F_*(dx) - \int_A F'_*(dx)\right| = \frac12 \sum_k |F_*(k) - F'_*(k)|$ be the total variation distance. Let $N_*, N'_*, N, N'$ denote variables with law $F_*, F'_*, F, F'$ respectively, coupled so that $2P(N_* \ne N'_*) = d_{TV}(F_*, F'_*)$ and $2P(N \ne N') = d_{TV}(F, F')$ (the existence of these variables is guaranteed by the coupling inequality). We now reintroduce the distance defined in the proof of Lemma 2.3. Let $\Omega$ be a bounded open set in $\mathbb C_+$ with empty intersection with the ball of center $0$ and radius $\sqrt{EN} + 1$. Let $P(H)$ be the set of probability measures on $H$. We define the distance on $P(H)$
$$W(P, P') = \inf E \int_\Omega |X(z) - X'(z)|\, dz,$$
where the infimum is over all possible couplings of the distributions $P$ and $P'$, where $X$ has law $P$ and $X'$ has law $P'$. With our assumptions, we may introduce the variables $X := X_\alpha$ (with law $P$) and $X' := X'_\alpha$ (with law $P'$) defined by (2) with degree distribution $F_*$ and $F'_*$ respectively. The proof of the lemma will be complete if we prove that there exists $C$, not depending on $F_*$ and $F'_*$, such that
$$W(P, P') \le C \max\left(d_{TV}(F_*, F'_*),\, d_{TV}(F, F')\right). \qquad (12)$$
We denote by $Y$ (with law $Q$) and $Y'$ (with law $Q'$) the variables defined by (1) with offspring distribution $F$ and $F'$, coupled so that $W(Q, Q') = E \int_\Omega |Y(z) - Y'(z)|\, dz$. We consider $(Y_i, Y'_i)_{i\in\mathbb N}$ iid copies of $(Y, Y')$, independent of the variable $N_*$. By definition, we have the following:
$$W(P, P') \le E \int_\Omega |X'(z) - X(z)|\, 1(N_* \ne N'_*)\, dz + E \int_\Omega \left|\Big(z + \alpha N_* + \sum_{i=1}^{N_*} Y'_i(z)\Big)^{-1} - \Big(z + \alpha N_* + \sum_{i=1}^{N_*} Y_i(z)\Big)^{-1}\right| dz$$
$$\le d_{TV}(F_*, F'_*) \int_\Omega (\Im z)^{-1}\, dz + \int_\Omega (\Im z)^{-2}\, E \sum_{i=1}^{N_*} |Y_i(z) - Y'_i(z)|\, dz$$
$$\le d_{TV}(F_*, F'_*) \int_\Omega (\Im z)^{-1}\, dz + E N_*\, (\inf_{z\in\Omega} \Im z)^{-2}\, W(Q, Q'). \qquad (13)$$
We then argue as in the proof of Lemma 2.3. Since $\Psi(Q) = Q$ and $\Psi'(Q') = Q'$ (where $\Psi'$ is defined as $\Psi$ with the distribution $F'$ instead of $F$), we get
$$W(Q, Q') = W(\Psi(Q), \Psi'(Q')) \le E \int_\Omega |\psi(N, (Y_i); z) - \psi(N', (Y'_i); z)|\, dz$$
$$\le d_{TV}(F, F') \int_\Omega (\Im z)^{-1}\, dz + E \int_\Omega |\psi(N, (Y_i); z) - \psi(N, (Y'_i); z)|\, dz$$
$$\le d_{TV}(F, F') \int_\Omega (\Im z)^{-1}\, dz + E N\, (\inf_{z\in\Omega} \Im z)^{-2}\, W(Q, Q').$$
Then, since $\inf_{z\in\Omega} \Im z > \sqrt{EN}$, we deduce that
$$W(Q, Q') \le \frac{d_{TV}(F, F') \int_\Omega (\Im z)^{-1}\, dz}{1 - EN\, (\inf_{z\in\Omega} \Im z)^{-2}}.$$
This last inequality, together with (13), implies (12). □
Acknowledgment

The authors thank Noureddine El Karoui for fruitful discussions and his interest in this work.
References

[1] D. Aldous and R. Lyons. Processes on unimodular random networks. Electronic Journal of Probability, 12:1454–1508, 2007.
[2] D. Aldous and J. M. Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on discrete structures, volume 110 of Encyclopaedia Math. Sci., pages 1–72. Springer, Berlin, 2004.
[3] D. J. Aldous and A. Bandyopadhyay. A survey of max-type recursive distributional equations. Ann. Appl. Probab., 15(2):1047–1110, 2005.
[4] Z. D. Bai. Methodologies in spectral analysis of large-dimensional random matrices, a review. Statist. Sinica, 9(3):611–677, 1999. With comments by G. J. Rodgers and Jack W. Silverstein, and a rejoinder by the author.
[5] I. Benjamini and O. Schramm. Recurrence of distributional limits of finite planar graphs. Electron. J. Probab., 6: no. 23, 13 pp. (electronic), 2001.
[6] B. Bollobás. A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. European J. Combin., 1(4):311–316, 1980.
[7] J. B. Conway. Functions of One Complex Variable. Graduate Texts in Mathematics, 11. Springer-Verlag, New York, 1973.
[8] R. Durrett. Random Graph Dynamics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2007.
[9] C. D. Godsil and B. Mohar. Walk generating functions and spectral measures of infinite graphs. In Proceedings of the Victoria Conference on Combinatorial Matrix Analysis (Victoria, BC, 1987), volume 107, pages 191–206, 1988.
[10] G. R. Grimmett. Random labelled trees and their branching networks. J. Austral. Math. Soc. Ser. A, 30(2):229–237, 1980/81.
[11] H. Kesten. Symmetric random walks on groups. Trans. Amer. Math. Soc., 92:336–354, 1959.
[12] O. Khorunzhy, M. Shcherbina, and V. Vengerovsky. Eigenvalue distribution of large weighted random graphs. J. Math. Phys., 45(4):1648–1672, 2004.
[13] B. D. McKay. The expected eigenvalue distribution of a large regular graph. Linear Algebra Appl., 40:203–216, 1981.
[14] H. Mizuno and I. Sato. The semicircle law for semiregular bipartite graphs. J. Combin. Theory Ser. A, 101(2):174–190, 2003.
[15] B. Mohar. The spectrum of an infinite graph. Linear Algebra Appl., 48:245–256, 1982.
[16] B. Mohar and W. Woess. A survey on spectra of infinite graphs. Bull. London Math. Soc., 21(3):209–234, 1989.
[17] M. Reed and B. Simon. Methods of Modern Mathematical Physics. I. Functional Analysis. Academic Press, New York, 1972.
[18] T. Richardson and R. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.
[19] N. C. Wormald. Models of random regular graphs. In Surveys in Combinatorics, 1999 (Canterbury), volume 267 of London Math. Soc. Lecture Note Ser., pages 239–298. Cambridge Univ. Press, Cambridge, 1999.