On random graph homomorphisms into Z

Itai Benjamini
Mathematics Department, The Weizmann Institute of Science, Rehovot 76100, Israel
E-mail: [email protected]

Olle Häggström
Department of Mathematical Statistics, Chalmers University of Technology, S-412 96 Göteborg, Sweden
E-mail: [email protected]

Elchanan Mossel
Institute of Mathematics, The Hebrew University, Jerusalem 91904, Israel
E-mail: [email protected]

Given a bipartite connected finite graph $G = (V,E)$ and a vertex $v_0 \in V$, we consider the uniform probability measure on the set of graph homomorphisms $f : V \to \mathbb{Z}$ satisfying $f(v_0) = 0$. This measure can be viewed as a $G$-indexed random walk on $\mathbb{Z}$, generalizing both the usual time-indexed random walk and the tree-indexed random walk. Several general inequalities for $G$-indexed random walks are derived, including an upper bound on fluctuations implying that the distance $d(f(u), f(v))$ between $f(u)$ and $f(v)$ is stochastically dominated by the distance to $0$ of a simple random walk on $\mathbb{Z}$ having run for $d(u,v)$ steps. Various special cases are studied. For instance, when $G$ is an $n$-level regular tree with all vertices on the last level wired to an additional single vertex, we show that the expected range of the walk is $O(\log n)$. This result can also be rephrased as a statement about conditional branching random walk. To prove it, a power-type Pascal triangle is introduced and exploited.

1. INTRODUCTION

The study of Lipschitz functions on graphs and metric spaces is rather advanced. Uniform measure on graph homomorphisms into $\mathbb{Z}$ provides a model for looking at typical Lipschitz functions, and it is natural to ask what the properties of such random Lipschitz functions are. For instance, is it true that concentration inequalities for a typical Lipschitz function are stronger than those which hold for all Lipschitz functions? The research reported in this paper makes some initial steps in that direction.

We start with the definition of the measure. Let $G = (V_G, E_G)$ be a finite graph. We assume that $G$ is connected and bipartite. Let $v_0 \in V_G$ be a specified vertex of $G$. Let $X_{G,v_0}$ denote the set of all mappings $f : V_G \to \mathbb{Z}$ with the property that (i) $f(v_0) = 0$, and (ii) $|f(u) - f(v)| = 1$ for all $u, v \in V_G$ such that $\{u,v\} \in E_G$ (property (ii) asserts that $f$ is a graph homomorphism from $G$ to $\mathbb{Z}$). Let $P_{G,v_0}$ be the uniform probability measure on $X_{G,v_0}$, i.e.
$$P_{G,v_0}(f) = \frac{1}{|X_{G,v_0}|}$$
for each $f \in X_{G,v_0}$; here $|X_{G,v_0}|$ denotes the cardinality of $X_{G,v_0}$. We also write $E_{G,v_0}$ for expectation with respect to $P_{G,v_0}$.

Note that the assumptions of connectedness and bipartiteness of $G$ are necessary and sufficient for $P_{G,v_0}$ to be well-defined: the bipartiteness ensures that $X_{G,v_0}$ is nonempty, and the connectedness ensures that it is finite.

Note that when we take $G$ to be a path of length $n$ starting at $v_0$, i.e.
$$V_G = \{v_0, \ldots, v_n\}, \qquad E_G = \{\{v_i, v_{i+1}\} : 0 \le i < n\}, \tag{1}$$
then the model reduces to the usual simple random walk (SRW) on $\mathbb{Z}$ up to time $n$. If we instead take $G$ to be some tree rooted at $v_0$, then we obtain the usual model for a tree-indexed random walk on $\mathbb{Z}$; see Benjamini and Peres [3]. Hence it is natural to use the term $G$-indexed random walk for our model.

Much of our interest is in the distributions of the range
$$R(f) = |\{f(v) : v \in V_G\}| \tag{2}$$
and of the difference $|f(u) - f(v)|$ for $u, v \in V_G$. Note that these distributions are independent of the choice of $v_0$, because for any $v_0, v_1 \in V_G$ there is a natural bijection between $X_{G,v_0}$ and $X_{G,v_1}$ which preserves $|f(u) - f(v)|$ for all $u, v \in V_G$. We will look at some examples of such walks when $G$ is large in the sense that $|V_G|$ is exponentially large in the diameter of the graph. Related models (such as the solid-on-solid model and Shlosman's random staircases) where $G$ is $\mathbb{Z}^2$ have been studied in the physics literature; see e.g. Georgii [6].

One might suspect that the model presented here is just a discrete version of the graph-indexed Gaussian field as defined e.g. in Janson [12], and thus has similar properties. At least for some properties, this does not seem to be the case: Janson [12, page 133] proved that the variance of the Gaussian field at a vertex $v$ equals the electrical resistance in the graph (viewed as a network with unit resistors) from $v$ to the fixed vertex $v_0$ whose value is fixed to be $0$. In particular, the variance of the field value is monotone decreasing in adding edges. The remark following Proposition 2.4 below shows that this monotonicity fails in our model. It is an interesting task to figure out what properties are common to these two models.

In Section 2, we shall obtain some basic correlation and other inequalities for $G$-indexed random walks. For instance, we will see in Theorem 2.1 that for any $u, v \in V_G$ at distance $d$ from each other, we have
$$E_{G,v_0}\!\left(|f(u) - f(v)|^2\right) \le d, \tag{3}$$

thus providing a subdiffusive estimate for the fluctuations. The example in (1) shows that this bound is sharp. More generally, Theorem 2.1 shows that for all $n$ and all increasing functions $g$,
$$\sup_{G,\ u,v \in V_G :\ d(u,v)=n} E_{G,v_0}\!\left[g\big(|f(u) - f(v)|\big)\right] \tag{4}$$
is attained by $G$ as in example (1).

The subsequent Sections 3-6 are devoted to particular cases. Section 3 deals with the case where $G$ consists of two endpoints connected by $m$ parallel paths of length $k$. In Section 4, we treat the more intricate case where $G$ is an $n$-level regular tree wired at the $n$'th level, i.e. with all leaves on the last level connected to an additional single vertex. This is tantamount to conditioning a branching random walk (see e.g. Asmussen and Hering [2] or Ney [17]) on the event that all particles occupy the same location at time $n+1$. Somewhat surprisingly, it turns out (Theorem 4.1) that the expected range of this process is as small as $O(\log n)$; in contrast, it is well known and easy to see that the unconditional branching random walk (i.e. with free boundary) has an expected range of order $n$. As a key tool in the analysis of the conditional branching random walk, we will introduce the power-type Pascal triangle, which is a natural generalization of the usual Pascal triangle. The short Section 5 concerns the case where $G$ is the $k$-dimensional discrete hypercube. We expect (Conjecture 5.1) the concentration of measure for random Lipschitz functions to be much stronger than the usual concentration of measure phenomenon for the hypercube; in particular, we believe that the expected range of the $G$-indexed walk is $o(k)$. In Section 6, we indicate the richness of the $G$-indexed random walk model by showing how it can be used to emulate the famous Ising model through a particular choice of $G$. Finally, in Section 7, we make some concluding remarks about open problems and natural directions of generalization.
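As a concrete illustration of the measure $P_{G,v_0}$ (our own sketch, not part of the paper), the following Python snippet enumerates $X_{G,v_0}$ by brute force for a small bipartite graph and draws a uniform sample; the function names and the example graph are ours. Brute force is of course only feasible for very small graphs, since $|X_{G,v_0}|$ grows exponentially.

```python
import random

def homomorphisms(adj, v0):
    """Enumerate all f: V -> Z with f(v0) = 0 and |f(u)-f(v)| = 1 across edges.

    adj maps each vertex to a list of its neighbours; the graph must be
    connected and bipartite for the set to be nonempty and finite."""
    order, seen = [v0], {v0}
    for v in order:                       # BFS order: every later vertex has
        for w in adj[v]:                  # an already-labelled neighbour
            if w not in seen:
                seen.add(w)
                order.append(w)
    maps = [{v0: 0}]
    for v in order[1:]:
        new_maps = []
        for f in maps:
            labelled = [f[w] for w in adj[v] if w in f]
            for val in (labelled[0] - 1, labelled[0] + 1):
                if all(abs(val - x) == 1 for x in labelled):
                    g = dict(f)
                    g[v] = val
                    new_maps.append(g)
        maps = new_maps
    return maps

# Example: a 6-cycle rooted at vertex 0.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
X = homomorphisms(adj, 0)
f = random.choice(X)                      # a uniform sample from P_{G,v0}
print(len(X), f, len(set(f.values())))    # |X_{G,v0}|, one walk, its range R(f)
```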

2. CORRELATION AND OTHER INEQUALITIES

This section contains some general inequalities for $G$-indexed random walks. These inequalities provide information about unimodality and correlations under $P_{G,v_0}$, as well as comparisons between $G$-indexed random walks for different choices of $G$. For $u, v \in V_G$, let $d(u,v)$ denote the graph-theoretic distance between $u$ and $v$.

We begin with a simple result concerning the marginal distribution of $f(v)$ for a given vertex $v \in V_G$. The distribution of $f(v)$ under $P_{G,v_0}$ is obviously symmetric around $0$. Furthermore, $f(v)$ is either $P_{G,v_0}$-a.s. even or $P_{G,v_0}$-a.s. odd, depending on whether $d(v_0,v)$ is even or odd. The following result tells us that if we restrict to the even or the odd integers, then the distribution of $f(v)$ is in fact unimodal.

Proposition 2.1. Fix any bipartite connected finite graph $G$ and any $v_0, v \in V_G$. For any non-negative integers $s, t$ such that $s < t$ and $t - s$ is even, we have
$$P_{G,v_0}[f(v) = t] \le P_{G,v_0}[f(v) = s]. \tag{5}$$

Proof. Set $A_s = \{f \in X_{G,v_0} : f(v) = s\}$ and define $A_t$ similarly. Since $P_{G,v_0}$ assigns the same probability to each $f \in X_{G,v_0}$, it suffices to show that $|A_s| \ge |A_t|$, and to do this we shall describe an injective mapping from $A_t$ to $A_s$. For any $f \in A_t$, we define the vertex set $\Gamma_f \subseteq V_G$ as follows. For each $w \in V_G$ we take $\Gamma_f$ to contain $w$ if and only if (i) $f(w) = \frac{s+t}{2}$, and (ii) there exists a path from $v_0$ to $w$ such that all vertices $u$ on the path (except $w$) satisfy $f(u) < \frac{s+t}{2}$ (note that $\frac{s+t}{2}$ is a strictly positive integer). Pictorially, $\Gamma_f$ is a "cutset" separating $v_0$ from $v$, and moreover $\Gamma_f$ is the cutset "closest" to $v_0$ with the property that all vertices in the cutset take value $\frac{s+t}{2}$. Take $\tilde{\Gamma}_f$ to be the set of vertices that can be reached from $v_0$ through paths that only contain vertices $u$ with $f(u) < \frac{s+t}{2}$. Finally, define $f' \in X_{G,v_0}$ by setting
$$f'(w) = \begin{cases} f(w) & \text{if } w \in \tilde{\Gamma}_f \cup \Gamma_f, \\ t + s - f(w) & \text{otherwise,} \end{cases}$$
for each $w \in V_G$. Clearly, $f' \in A_s$, and moreover it is easy to see that the mapping is invertible, so that any two elements of $A_t$ are mapped to different elements of $A_s$.

Remark: The proof is easily extended to show that the inequality in (5) is strict whenever $P_{G,v_0}[f(v) = s] > 0$.

For the remaining results in this section, we need to recall a couple of general inequalities which are widely used in statistical mechanics: variants of Holley's Theorem [11] and the FKG inequality [5]. For a finite set $V$ and a finite set $S$ of reals, we consider two random elements $X$ and $X'$ taking values in $S^V$, and write $\mu$ and $\mu'$ for their respective distributions. $S^V$ is equipped with the usual coordinatewise partial order $\preceq$. A function $g : S^V \to \mathbb{R}$ is said to be increasing if $g(\sigma) \le g(\eta)$ whenever $\sigma \preceq \eta$. The probability measure $\mu$ on $S^V$ is said to have positive correlations if all increasing functions from $S^V$ to $\mathbb{R}$ are positively correlated under $\mu$. We write $\preceq_d$ for the usual stochastic domination, i.e. $\mu \preceq_d \mu'$ if all increasing $g : S^V \to \mathbb{R}$ have greater expectation under $\mu'$ than under $\mu$. We say that $\mu$ is irreducible if, for any $\sigma, \eta \in S^V$ such that both $\sigma$ and $\eta$ have positive $\mu$-probability, we can move from $\sigma$ to $\eta$ through single-site flips without passing through any element of zero $\mu$-probability.

Lemma 2.1 (Holley). Suppose that the probability measures $\mu$ and $\mu'$ on $S^V$ are irreducible, and that there exists $\sigma \in S^V$ such that $\mu(\sigma) > 0$ and $\mu'(\sigma) > 0$. If for all $v \in V$, all $s \in S$, $\mu$-a.e. $\sigma \in S^{V \setminus \{v\}}$ and $\mu'$-a.e. $\eta \in S^{V \setminus \{v\}}$ such that $\sigma \preceq \eta$ we have
$$\mu\big(X(v) \ge s \mid X(V \setminus \{v\}) = \sigma\big) \le \mu'\big(X'(v) \ge s \mid X'(V \setminus \{v\}) = \eta\big), \tag{6}$$
then $\mu \preceq_d \mu'$.

Lemma 2.2 (FKG). Suppose that $\mu$ is irreducible, and for all $v \in V$, all $s \in S$, and $\mu$-a.e. $\sigma, \eta \in S^{V \setminus \{v\}}$ such that $\sigma \preceq \eta$, we have
$$\mu\big(X(v) \ge s \mid X(V \setminus \{v\}) = \sigma\big) \le \mu\big(X(v) \ge s \mid X(V \setminus \{v\}) = \eta\big). \tag{7}$$
Then $\mu$ has positive correlations.

Proofs of these results appear e.g. in Georgii et al. [7]; the same proofs under slightly different conditions can be found in Liggett [15]. As a first application, we have the following result.

Proposition 2.2. For any bipartite connected finite graph $G$ and any $v_0 \in V_G$, the measure $P_{G,v_0}$ has positive correlations.

Proof. This is a trivial matter of checking that $P_{G,v_0}$ satisfies the conditions in Lemma 2.2.

Next, we let $P^*_{G,v_0}$ be the probability measure on $X_{G,v_0}$ corresponding to picking $f^* \in X_{G,v_0}$ as follows: pick $f$ according to $P_{G,v_0}$, and let $f^*(v) = |f(v)|$ for each $v \in V_G$. Define $X^*_{G,v_0} = \{f \in X_{G,v_0} : f(v) \ge 0 \text{ for all } v \in V_G\}$, and note that $P^*_{G,v_0}$ is concentrated on $X^*_{G,v_0}$. For $f^* \in X^*_{G,v_0}$, let $k(f^*)$ denote the number of connected components of the vertex set $\{v \in V_G : f^*(v) > 0\}$. By simply counting the number of $f \in X_{G,v_0}$ that give rise to a given $f^* \in X^*_{G,v_0}$, we get that
$$P^*_{G,v_0}(f^*) = \frac{2^{k(f^*)}}{|X_{G,v_0}|} \tag{8}$$
for each $f^* \in X^*_{G,v_0}$ (note the similarity with the Fortuin-Kasteleyn random-cluster model; see e.g. Grimmett [8]). It turns out that not only $P_{G,v_0}$, but also $P^*_{G,v_0}$, has positive correlations:

Proposition 2.3. For any bipartite connected finite graph $G$ and any $v_0 \in V_G$, the measure $P^*_{G,v_0}$ has positive correlations.

Proof. Again, it is just a matter of checking that the conditions in Lemma 2.2 hold. To check that (7) holds for $P^*_{G,v_0}$ is slightly less trivial than for $P_{G,v_0}$, so we do this explicitly. For $v = v_0$, (7) holds trivially (with equality), so we take $v \in V_G \setminus \{v_0\}$, and some $\xi \in \mathbb{N}^{V_G \setminus \{v\}}$ which arises as a projection on $\mathbb{N}^{V_G \setminus \{v\}}$ of some element of $X^*_{G,v_0}$. Define
$$N(v,\xi) = \{\xi(w) : w \text{ is a nearest neighbor of } v\},$$
and furthermore let $\kappa(v,\xi)$ be the number of connected components of the vertex set $\{w \in V_G \setminus \{v\} : \xi(w) > 0\}$ that intersect the neighborhood of $v$. If $\xi$ arises as such a projection, then $N(v,\xi)$ is either $\{i\}$ or $\{i, i+2\}$ for some $i \in \mathbb{N}$. Write $P_{v|\xi}$ for the conditional distribution, under $P^*_{G,v_0}$, of $f^*(v)$ given that $f^*(V_G \setminus \{v\}) = \xi$. $P_{v|\xi}$ can be determined directly from (8), and we get the following. If $N(v,\xi) = \{i, i+2\}$ for some $i \in \mathbb{N}$, then
$$P_{v|\xi}(i+1) = 1.$$
If $N(v,\xi) = \{0\}$, then
$$P_{v|\xi}(1) = 1,$$
while if $N(v,\xi) = \{1\}$, then
$$P_{v|\xi}(0) = \frac{2^{\kappa(v,\xi)}}{2^{\kappa(v,\xi)} + 2}, \qquad P_{v|\xi}(2) = \frac{2}{2^{\kappa(v,\xi)} + 2}. \tag{9}$$
Finally, if $N(v,\xi) = \{i\}$ for $i > 1$, then
$$P_{v|\xi}(i-1) = \tfrac{1}{2}, \qquad P_{v|\xi}(i+1) = \tfrac{1}{2}.$$
Since $\kappa(v,\xi)$ is decreasing in $\xi$, we see that $P_{v|\xi}$ is stochastically increasing in $\xi$, as needed.

Next, we give a couple of results that allow us to compare $P_{G,v_0}$ for different choices of $G$. Intuitively, one might think that adding edges would make the $G$-indexed random walk more concentrated around $0$. This is true if we add an edge incident to $v_0$:

Proposition 2.4. Let $G$ be a bipartite connected finite graph, and let $v_0$ and $v_1$ be two vertices in $V_G$ at odd distance from each other. Let $G'$ be the graph obtained from $G$ by adding an edge between $v_0$ and $v_1$. We then have

$$P^*_{G',v_0} \preceq_d P^*_{G,v_0}. \tag{10}$$

Proof. The proof is by applying Lemma 2.1; we need to check that (6) holds with $\mu = P^*_{G',v_0}$ and $\mu' = P^*_{G,v_0}$. From the proof of Proposition 2.3, we know that the conditional distribution of $f^*(v)$ given that $f^*(V_G \setminus \{v\}) = \xi$ is stochastically increasing in $\xi$, both for $P^*_{G',v_0}$ and for $P^*_{G,v_0}$. It is therefore enough to show for any (feasible) $\xi$ that the conditional distribution of $f^*(v)$ given that $f^*(V_G \setminus \{v\}) = \xi$ is stochastically greater for $P^*_{G,v_0}$ than for $P^*_{G',v_0}$. For $v \ne v_1$ this holds with equality, and it also holds for $v = v_1$ because the effect of adding the edge $\{v_0, v_1\}$ is to force $f^*(v_1)$ to be $1$, which is the smallest possible value for a vertex at odd distance from $v_0$.

Remark: Unfortunately, Proposition 2.4 cannot be extended in such a way that (10) can be deduced whenever $G'$ is obtained by adding some (arbitrary) edge that does not destroy the bipartiteness. A simple counterexample is as follows. Define $G$ by taking
$$V_G = \{v_0, \ldots, v_4\}, \qquad E_G = \{\{v_0,v_1\}, \{v_0,v_3\}, \{v_1,v_2\}, \{v_1,v_4\}, \{v_3,v_4\}\},$$
and take $G'$ to be the same except that the edge $\{v_2, v_3\}$ is added. A calculation shows that the $P_{G,v_0}$-probability of having a nonzero value at $v_4$ is $1/3$, whereas the $P_{G',v_0}$-probability of having a nonzero value at $v_4$ is larger: $2/5$. The intuitive reason behind this example is that when the values at $v_1$ and $v_3$ are different, the value at $v_4$ must be zero, whereas when the values at $v_1$ and $v_3$ are identical, the value at $v_4$ is nonzero with probability $1/2$. Adding the edge $\{v_2, v_3\}$ strengthens the bond between $v_1$ and $v_3$ and thus increases the probability that the value at $v_4$ is nonzero.
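The probabilities $1/3$ and $2/5$ in the remark are easy to verify by brute-force enumeration. The following short Python check is our own illustration (vertex labels $0,\ldots,4$ stand for $v_0,\ldots,v_4$); it lists all homomorphisms with $f(v_0) = 0$ and counts those with a nonzero value at $v_4$.

```python
from itertools import product

def prob_nonzero_at_v4(edges):
    """P_{G,v0}[f(v4) != 0] by brute force; vertices are 0..4, v0 = 0."""
    count = total = 0
    # f(0) = 0; every other value lies in {-4,...,4} on this small graph
    for vals in product(range(-4, 5), repeat=4):
        f = (0,) + vals
        if all(abs(f[u] - f[v]) == 1 for u, v in edges):
            total += 1
            count += f[4] != 0
    return count, total

G_edges  = [(0, 1), (0, 3), (1, 2), (1, 4), (3, 4)]
Gp_edges = G_edges + [(2, 3)]
print(prob_nonzero_at_v4(G_edges))    # (4, 12)  -> probability 1/3
print(prob_nonzero_at_v4(Gp_edges))   # (4, 10)  -> probability 2/5
```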

A different way of modifying $G$ into a new graph $G'$ is to glue together all neighbors $v_1, \ldots, v_m$ of $v_0$ into a single vertex. This is equivalent to conditioning $P_{G,v_0}$ on the event that $f(v_1) = \cdots = f(v_m)$. Write $\tilde P_{G,v_0}$ for this conditional distribution; the advantage of considering $\tilde P_{G,v_0}$ rather than $P_{G',v_0}$ is that $\tilde P_{G,v_0}$ is defined on the same space $X_{G,v_0}$ as $P_{G,v_0}$. Define $\tilde P^*_{G,v_0}$ from $\tilde P_{G,v_0}$ in the same way that $P^*_{G,v_0}$ was defined from $P_{G,v_0}$ (i.e. by taking vertexwise absolute values). Also define
$$\tilde X_{G,v_0} = \{f \in X_{G,v_0} : f(v_1) = \cdots = f(v_m)\}.$$

Proposition 2.5. For any bipartite connected finite graph $G$ and any $v_0 \in V_G$, we have
$$P^*_{G,v_0} \preceq_d \tilde P^*_{G,v_0}.$$

Proof. This is another application of Lemma 2.1. For the same reason as in the proof of Proposition 2.4, it is enough to show for any (feasible) $\xi$ that the conditional distribution of $f^*(v)$ given that $f^*(V_G \setminus \{v\}) = \xi$ is stochastically greater for $\tilde P^*_{G,v_0}$ than for $P^*_{G,v_0}$. Analogously to (8), $\tilde P^*_{G,v_0}$ satisfies
$$\tilde P^*_{G,v_0}(f^*) = \frac{2^{\tilde k(f^*)}}{|\tilde X_{G,v_0}|}.$$
Here $\tilde k(f^*)$ is defined as the number of connected components of the set of nonzeroes of $f^*$, except that all connected components intersecting $\{v_1, \ldots, v_m\}$ count as a single one. Single-site conditional distributions under $\tilde P^*_{G,v_0}$ become identical to those obtained for $P^*_{G,v_0}$ in the proof of Proposition 2.3, except in (9), where $\kappa(v,\xi)$ is replaced by $\tilde\kappa(v,\xi)$. The latter quantity is defined as the number of connected components of nonzeroes in $\xi$ that intersect the neighborhood of $v$, again counting all connected components intersecting $\{v_1, \ldots, v_m\}$ as just a single one. Clearly, $\tilde\kappa(v,\xi) \le \kappa(v,\xi)$, and it follows that the conditional distribution of $f^*(v)$ given that $f^*(V_G \setminus \{v\}) = \xi$ is stochastically greater for $\tilde P^*_{G,v_0}$ than for $P^*_{G,v_0}$, as desired.

Proposition 2.5 is a key ingredient in proving the following upper bound on the fluctuations under $P_{G,v_0}$. The diffusive bound (3) is an immediate consequence.

Theorem 2.1. Let $G$ be a bipartite connected finite graph and fix $v_0, v \in V_G$. Let $\{S(k)\}_{k=0,1,\ldots}$ denote a SRW on $\mathbb{Z}$ starting with $S(0) = 0$. Then the distribution of $|f(v)|$ under $P_{G,v_0}$ is stochastically dominated by the distribution of $|S(d(v_0,v))|$.

For the proof, it is convenient to isolate the following lemma. A random variable $X$ is said to be symmetric if $-X$ has the same distribution as $X$.

Lemma 2.3. Let $X$ and $Y$ be symmetric random variables taking values in $2\mathbb{Z}$. Suppose that $|X|$ is stochastically dominated by $|Y|$. Let $Z$ be a $\pm 1$-valued random variable which is independent of $X$ and $Y$. Then $|X + Z|$ is stochastically dominated by $|Y + Z|$. The same thing holds if $X$ and $Y$ take values in $2\mathbb{Z} + 1$ rather than in $2\mathbb{Z}$.

Proof. The fact that $|X|$ is stochastically dominated by $|Y|$ is equivalent to the existence of a coupling $P$ of $X$ and $Y$ such that
$$P[|X| \le |Y|] = 1 \tag{11}$$
(this is Strassen's Theorem; see e.g. Lindvall [16]). Since both $X$ and $Y$ are symmetric, (11) implies that there exists a coupling which assigns probability 1 to the event
$$\{0 \le X \le Y\} \cup \{Y \le X \le 0\}. \tag{12}$$
We now look at $X + Z$ and $Y + Z$ under such a coupling. If $X = Y$ we must have $|X + Z| = |Y + Z|$. If $X \ne Y$ then we have $|X| \le |Y| - 2$, and this again implies $|X + Z| \le |Y + Z|$, since $Z$ is $\pm 1$-valued.

Proof (of Theorem 2.1). Let $d = d(v_0, v)$. We prove the theorem by induction on $d$. If $d = 0$ there is nothing to prove. Suppose that $d > 0$. Let $G'$ be the graph obtained from $G$ by gluing together all the neighbours of $v_0$ into a single vertex $v_0'$. By the induction hypothesis we know that the distribution of $|f(v)|$ under $P_{G',v_0'}$ is dominated by the distribution of $|S(d-1)|$. Therefore, if $X$ is a random variable which takes each of the values $-1$ and $1$ with probability $1/2$ and is independent of the $G'$-indexed walk, then by Lemma 2.3 the distribution of $|X + f(v)|$ under $P_{G',v_0'}$ is dominated by the distribution of $|X + S(d-1)|$. However, the distribution of $|X + S(d-1)|$ is nothing but the distribution of $|S(d)|$. Moreover, by Proposition 2.5 the distribution of $|f(v)|$ under $P_{G,v_0}$ is stochastically dominated by the distribution of $|X + f(v)|$ under $P_{G',v_0'}$. Putting these observations together, we conclude that the distribution of $|f(v)|$ under $P_{G,v_0}$ is dominated by the distribution of $|S(d)|$, as desired.
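Theorem 2.1 is also easy to sanity-check numerically on small graphs. The sketch below (our own illustration; the choice of graph is arbitrary) enumerates $X_{G,v_0}$ for a 6-cycle and compares the tail probabilities of $|f(v)|$ at the antipodal vertex with those of $|S(3)|$; the first value should never exceed the second.

```python
from itertools import product
from math import comb

def walk_tail(d, t):
    """P[|S(d)| >= t] for simple random walk on Z after d steps."""
    return sum(comb(d, k) for k in range(d + 1)
               if abs(2 * k - d) >= t) / 2 ** d

# G = 6-cycle, v0 = 0, v = 3 (so d(v0, v) = 3).
edges = [(i, (i + 1) % 6) for i in range(6)]
homs = []
for vals in product(range(-6, 7), repeat=5):       # f(0) = 0, values at 1..5
    f = (0,) + vals
    if all(abs(f[u] - f[v]) == 1 for u, v in edges):
        homs.append(f)

for t in range(4):
    graph_tail = sum(abs(f[3]) >= t for f in homs) / len(homs)
    print(t, graph_tail, walk_tail(3, t))           # first never exceeds second
```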

Another way to state Theorem 2.1 is the following. Fix a positive integer $d$ and any increasing function $g$ (taking $g(x) = x^2$ corresponds to (3)). The supremum of $E_{G,v_0}(g(|f(v)|))$ among all choices of bipartite connected finite $G$ and $v_0, v \in V_G$ with $d(v_0,v) \le d$ is attained when $G$ is simply a path of length $d$, and $v_0$ and $v$ are the two endpoints of the path. This maximum is clearly not unique; it is e.g. attained whenever $G$ is a tree. Somewhat related is the following conjecture.

Conjecture 2.2. The supremum of the expected range $E_{G,v_0}(R(f))$ among all bipartite connected finite graphs $G$ on $n$ vertices is attained when $G$ is a path of length $n-1$. Perhaps the same is even true for $E_{G,v_0}(g(R(f)))$ for any increasing $g$.

3. PARALLEL PATHS

In this section, we investigate the series-parallel behavior of the $G$-indexed random walk model by considering the case where $G = G_{k,m}$ has vertex set $\{0\} \cup (\{1,\ldots,k\} \times \{1,\ldots,m\}) \cup \{k+1\}$, with edges between $(i,s)$ and $(i+1,s)$ for all $1 \le i < k$ and $1 \le s \le m$, edges between $0$ and $(1,s)$ for all $s$, and edges between $(k,s)$ and $k+1$ for all $s$. See Figure 1. Note that when $m = 2$ we get a SRW bridge.

FIG. 1. A typical $G_{7,3}$-indexed walk.

We are interested in the range of the walk and in the $P_{G_{k,m},0}$-distribution of $f(k+1)$, which we call the top (despite the orientation of Figure 1!). We consider the asymptotic behavior as $k \to \infty$, where $m = m(k)$ may depend on $k$ in various ways. When $m(k)$ is small we have the following result. Note that this result includes as a special case the well-known result for simple random walk when $m(k) = 1$ for all $k$.

Proposition 3.1. If $m(k)$ satisfies
$$\lim_{k \to \infty} \frac{m(k)}{k+1} = 0, \tag{13}$$
then the distribution of $f(k+1)\sqrt{\frac{m(k)}{k+1}}$ under $P_{G_{k,m},0}$ converges to a standard normal distribution.

Proof. Let $p_{k+1,x}$ be the probability that a SRW is at site $x$ at time $k+1$. Assume first that $k+1$ is even. We then have
$$P_{G_{k,m},0}[f(k+1) \in [a,b]] = \frac{\sum_{x \in [a,b]} p_{k+1,x}^{m(k)}}{\sum_{y \in \mathbb{Z}} p_{k+1,y}^{m(k)}}. \tag{14}$$
Fix $\epsilon > 0$. By the CLT, we have a finite $A > 0$ such that
$$\sum_{x \in [-A\sqrt{k+1},\, A\sqrt{k+1}]} p_{k+1,x} > 1 - \epsilon. \tag{15}$$
By (14) and the monotonicity properties of $\{p_{k+1,x}\}$ in $x$, (15) implies that
$$P_{G_{k,m},0}\big[f(k+1) \in [-A\sqrt{k+1},\, A\sqrt{k+1}]\big] > 1 - \epsilon. \tag{16}$$
By the local CLT (see e.g. Lawler [14]) we have for all even $x \in [-A\sqrt{k+1}, A\sqrt{k+1}]$ that
$$p_{k+1,x} = \sqrt{\frac{2}{\pi(k+1)}}\; e^{-\frac{x^2}{2k+2}} \left(1 + O\!\left(\frac{1}{k+1}\right)\right),$$
so that if (13) holds, then
$$p_{k+1,x}^{m(k)} = \left(\frac{2}{\pi(k+1)}\right)^{\!m(k)/2} e^{-\frac{m(k)\,x^2}{2k+2}}\,(1 + o(1)). \tag{17}$$
By (16), (14) and (17) we have
$$\begin{aligned}
P_{G_{k,m},0}&\left[f(k+1)\sqrt{\tfrac{m(k)}{k+1}} \in [a,b]\right] \\
&= P_{G_{k,m},0}\left[f(k+1)\sqrt{\tfrac{m(k)}{k+1}} \in [a,b] \;\Big|\; f(k+1) \in [-A\sqrt{k+1},\, A\sqrt{k+1}]\right] + O(\epsilon) \\
&= \frac{\sum_{x \in [a\sqrt{(k+1)/m(k)},\, b\sqrt{(k+1)/m(k)}] \cap [-A\sqrt{k+1},\, A\sqrt{k+1}] \cap 2\mathbb{Z}}\; p_{k+1,x}^{m(k)}}{\sum_{y \in [-A\sqrt{k+1},\, A\sqrt{k+1}] \cap 2\mathbb{Z}}\; p_{k+1,y}^{m(k)}} + O(\epsilon) \\
&= \frac{(1 + o(1)) \sum_{x \in [a\sqrt{(k+1)/m(k)},\, b\sqrt{(k+1)/m(k)}] \cap [-A\sqrt{k+1},\, A\sqrt{k+1}] \cap 2\mathbb{Z}}\; e^{-\frac{m(k)x^2}{2k+2}}}{(1 + o(1)) \sum_{y \in [-A\sqrt{k+1},\, A\sqrt{k+1}] \cap 2\mathbb{Z}}\; e^{-\frac{m(k)y^2}{2k+2}}} + O(\epsilon) \\
&= \frac{\int_a^b e^{-x^2/2}\, dx}{\int_{-A}^{A} e^{-y^2/2}\, dy} + O(\epsilon)
= \frac{\int_a^b e^{-x^2/2}\, dx}{\int_{-\infty}^{\infty} e^{-y^2/2}\, dy} + O(\epsilon),
\end{aligned} \tag{18}$$
where we have used the assumption (13) in the first equality in (18). The case with $k+1$ odd is treated similarly.
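Formula (14) makes the top distribution easy to tabulate exactly for moderate $k$, since $p_{k+1,x}$ is a scaled binomial coefficient. The following Python sketch is our own illustration (not part of the paper); it can be used to watch the crossover between the spread-out regime of Proposition 3.1 and the tight and degenerate regimes of Propositions 3.2 and 3.3 below.

```python
from math import comb

def top_distribution(k, m):
    """P_{G_{k,m},0}[f(k+1) = x] via formula (14): proportional to p_{k+1,x}^m."""
    n = k + 1
    # p_{n,x} = C(n, (n+x)/2) / 2^n for x of the same parity as n;
    # the common factor 2^{-nm} cancels in the normalization below.
    weights = {x: comb(n, (n + x) // 2) ** m
               for x in range(-n, n + 1, 2)}
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

# m fixed: spread-out top; m of order k: tight; m >> k: concentrated at 0.
for m in (1, 10, 200):
    dist = top_distribution(k=19, m=m)
    print(m, {x: round(p, 3) for x, p in sorted(dist.items()) if p > 1e-3})
```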

When $m(k)$ is larger we get a tight family of distributions. We say that a family $\{P_n\}$ of distributions on $\mathbb{R}$ is tight (or tight as $n \to \infty$) if for all $\epsilon > 0$ there exists a number $A$ such that for all $n$ we have $P_n[-A,A] > 1 - \epsilon$ (roughly speaking, the mass does not escape to infinity as $n$ goes to $\infty$).

Proposition 3.2. The distribution of $f(k+1)$ under $P_{G_{k,m},0}$ is tight as $k \to \infty$ if and only if
$$\liminf_{k \to \infty} \frac{m(k)}{k+1} > 0. \tag{19}$$

Proof. Assume first that $k+1 = 2r$ is even. As before, we denote by $p_{k+1,x}$ the probability that SRW is at $x$ at time $k+1$. Note that we have
$$P_{G_{k,m},0}[f(k+1) = x] = \frac{p_{k+1,x}^{m(k)}}{\sum_{y \in \mathbb{Z}} p_{k+1,y}^{m(k)}}. \tag{20}$$
Using the representation of $p_{k+1,x}$ as a binomial coefficient, we see that for $r \ge t \ge 0$ we have
$$\frac{p_{k+1,2t+2}}{p_{k+1,2t}} = \frac{r-t}{r+t+1}. \tag{21}$$
Thus, by (20) we have
$$1 \ge \frac{P_{G_{k,m},0}[f(k+1)=2]}{P_{G_{k,m},0}[f(k+1)=0]} \ge \frac{P_{G_{k,m},0}[f(k+1)=4]}{P_{G_{k,m},0}[f(k+1)=2]} \ge \cdots \tag{22}$$
Therefore, the distributions are tight if and only if there exists an integer $t$ such that
$$\limsup_{k \to \infty} \frac{P_{G_{k,m},0}[f(k+1)=2t+2]}{P_{G_{k,m},0}[f(k+1)=2t]} < 1. \tag{23}$$
Using (20) we see that (23) is equivalent to
$$\limsup_{k \to \infty} \left(\frac{r-t}{r+t+1}\right)^{m(k)} < 1.$$
This, in turn, is equivalent to (19). The case where $k+1$ is odd is similar.

Proposition 3.3. The distribution of $f(k+1)$ under $P_{G_{k,m},0}$ converges to the point mass at $0$ as $k = 2r \to \infty$ (i.e. $\lim_{k=2r \to \infty} P_{G_{k,m},0}[f(k+1) = 0] = 1$) if and only if
$$\liminf_{k=2r \to \infty} \frac{m(k)}{k+1} = \infty. \tag{24}$$

Proof. We use the same notation as in the proof of Proposition 3.2. Note that by (22) the distribution converges to the point mass at $0$ if and only if
$$\limsup_{k=2r \to \infty} \frac{P_{G_{k,m},0}[f(k+1)=2]}{P_{G_{k,m},0}[f(k+1)=0]} = 0,$$
which is equivalent to
$$\limsup_{k=2r \to \infty} \left(\frac{r}{r+1}\right)^{m(k)} = 0,$$
which is equivalent to (24).

Remark: Similarly, for odd $k$, condition (24) is equivalent to convergence of the distribution of $f(k+1)$ to $\frac{1}{2}(\delta_1 + \delta_{-1})$.

We next consider the range $R(f)$ of the $G_{k,m}$-indexed random walk; recall the definition in (2).

Proposition 3.4. If $m(k) \ge C\lambda^k$ for some $C > 0$ and $\lambda > 1$, then there exists a constant $D > 0$ such that
$$\lim_{k \to \infty} P_{G_{k,m},0}[R(f) > Dk] = 1, \tag{25}$$
where $f$ is a $G_{k,m}$-indexed walk. If $\lim_{k \to \infty} \frac{\log(m(k))}{k} = 0$, then for all $D > 0$
$$\lim_{k \to \infty} P_{G_{k,m},0}[R(f) > Dk] = 0. \tag{26}$$

Proof. We let $\{S^k(n)\}_{n=0,\ldots,k}$ denote SRW, and $\{S^k_x(n)\}_{n=0,\ldots,k}$ denote $S$ conditioned on $S(k) = x$. Assume first that $m(k) \ge C\lambda^k$ for some $C > 0$, $\lambda > 1$. By Proposition 3.3 and the remark following that proposition, in this case we have for $k$ odd and for all $D$
$$\lim_{k \to \infty} P_{G_{k,m},0}[R(f) > Dk] = \lim_{k \to \infty} P_{G_{k,m},0}[R(f) > Dk \mid f(k+1) = 0], \tag{27}$$
and for $k$ even and all $D$
$$\lim_{k \to \infty} P_{G_{k,m},0}[R(f) > Dk] = \lim_{k \to \infty} \tfrac{1}{2} P_{G_{k,m},0}[R(f) > Dk \mid f(k+1) = 1] + \lim_{k \to \infty} \tfrac{1}{2} P_{G_{k,m},0}[R(f) > Dk \mid f(k+1) = -1]. \tag{28}$$
On the other hand, from well-known results on SRW bridges, there exist $C' > 0$, $D > 0$ and $\epsilon > \lambda^{-1}$ such that for $x \in \{-1, 0, 1\}$,
$$P\Big[\max_{n \in \{0,\ldots,k+1\}} |S^{k+1}_x(n)| > Dk\Big] > C' \epsilon^k. \tag{29}$$
Moreover, $P_{G_{k,m},0}[R(f) > Dk \mid f(k+1) = x]$ is the probability that, if we take $m(k)$ independent copies of $S^{k+1}_x$, at least one of them satisfies $\max_{n \in \{0,\ldots,k+1\}} |S^{k+1}_x(n)| > Dk$. Since
$$m(k)\, C' \epsilon^{k} \ge C C' (\epsilon\lambda)^{k} \to \infty,$$
the probability that none of the $m(k)$ independent copies exceeds $Dk$ tends to $0$, so (29), (27) and (28) imply (25).

In order to prove (26), note that if $S^{k+1,1}, \ldots, S^{k+1,m(k)}$ are $m(k)$ independent copies of SRW, and if $\lim_{k \to \infty} \frac{\log(m(k))}{k} = 0$, then for all $D > 0$
$$P\Big[\max_{n \in \{0,\ldots,k+1\},\; i \in \{1,\ldots,m(k)\}} |S^{k+1,i}(n)| > Dk\Big] \to 0.$$
However, if we set $B^{k+1,i} = (f(0), f((1,i)), f((2,i)), \ldots, f((k,i)), f(k+1))$, i.e. $B^{k+1,i}$ is the $i$'th of the parallel paths, then
$$\begin{aligned}
P_{G_{k,m},0}[R(f) > Dk]
&\le P_{G_{k,m},0}\Big[\max_{0 \le n \le k+1,\; 1 \le i \le m(k)} |B^{k+1,i}(n)| > Dk\Big] \\
&= \sum_{j=1}^{m(k)} P_{G_{k,m},0}\Big[\max_{0 \le n \le k+1} |B^{k+1,j}(n)| > Dk,\ \max_{0 \le n \le k+1,\; 0 \le i \le j-1} |B^{k+1,i}(n)| \le Dk\Big] \\
&= \sum_{j=1}^{m(k)} P\Big[\max_{0 \le n \le k+1} |S^{k+1,j}(n)| > Dk,\ \max_{0 \le n \le k+1,\; 0 \le i \le j-1} |S^{k+1,i}(n)| \le Dk\Big] \\
&= P\Big[\max_{0 \le n \le k+1,\; 1 \le i \le m(k)} |S^{k+1,i}(n)| > Dk\Big] \to 0,
\end{aligned}$$
as needed.

4. WIRED REGULAR TREES

4.1. Main results

In this section we discuss the case where $G^d_k$ is a $k$-level $d$-ary tree ($d \ge 2$) rooted at $v_0$, with all the leaves at the last ($k$'th) level connected to a single node $v^*$ (which is distinct from all the nodes of the tree). This process may be described as a conditional branching random walk (with deterministic branching mechanism, so that all the randomness is in the displacement of the particles), where the condition is that all particles occupy the same location at time $k+1$.

When $T^d_k$ is the $k$-level $d$-ary tree rooted at $h_0$ (with no additional vertices), the behavior of $T^d_k$-indexed walks is well known. If $f$ is a $T^d_k$-indexed random walk and $h \in T^d_k$ is at level $l$, then it is trivial that $f(h)$ has the same distribution as a SRW started at $0$ and run for $l$ steps. Moreover, using e.g. the second moment arguments of Benjamini and Peres [3], one may see that there exists a constant $D > 0$ such that for $T^d_k$-indexed walks
$$\lim_{k\to\infty} P_{T^d_k,h_0}[R(f) > Dk] = 1. \tag{30}$$
Note that (30) also holds for $d^k$ parallel paths (by Proposition 3.4). However, we will see that (30) does not hold for $G^d_k$-indexed walks. The first result we have is:

Proposition 4.1. For all $k$, we have for $G^d_k$-indexed walks $f$ that
$$P_{G^d_k,v_0}[|f(v^*)| > n] \le 2\,t^{d^n}$$
for some $t = t(d) \le 2^{-d+1}$. In particular, the distribution of $f(v^*)$ is tight as $k \to \infty$.

The proof of this proposition is based on properties of power-type Pascal triangles, which are developed in the next subsection. Our main result is:

Theorem 4.1. For all $c > 0$, we have that for $G^d_k$-indexed walks
$$\lim_{k\to\infty} P_{G^d_k,v_0}\left[\frac{(1-c)\log k}{2\log d} < R(f) < \frac{(1+c)\log k}{\log d}\right] = 1. \tag{31}$$

Remarks:

1. Proposition 4.1 holds if we replace the tree of $G^d_k$ by any $k$-level tree in which the degrees of the internal vertices are at least $d$. In particular, consider the following two-step process. In the first step, a supercritical branching process whose child distribution is supported on the integers greater than or equal to $2$ is used to produce a $k$-level tree. All the leaves of that tree are connected to some vertex $v^*$ to obtain a (random) graph $G$. In the second step we consider a $G$-indexed walk on the graph obtained. Then Proposition 4.1 holds (with $t^{d^n}$ replaced by $t^{2^n}$). The proof of these generalizations follows the lines of the proof given below.

2. Theorem 4.1 holds if we replace the tree of $G^d_k$ by any $k$-level tree in which the degrees of the internal vertices are bounded below by $d$ and above by $M$. More formally, there exist constants $C_1$, $C_2$, depending on $d$ and $M$, such that for any sequence of such trees
$$\lim_{k\to\infty} P[C_1 \log k < R(f) < C_2 \log k] = 1.$$
Again, this implies the result for supercritical branching processes in which the child distribution is supported on $\{2,\ldots,M\}$. The proof is similar to the proof of the theorem given below.

3. If we consider supercritical branching processes in which the child distribution is supported on $\{1,\ldots,M\}$ with positive probability on $1$, then Theorem 4.1 is no longer true. Instead, we have for a positive constant $D$
$$\lim_{k\to\infty} P[R(f) > Dk] = 1. \tag{32}$$
This follows from the fact that in such a tree, with high probability, there is an exponential number of pipes of linear length. If the child distribution of the supercritical process is supported on $\{0,\ldots,M\}$ with positive probability of $0$, and we consider the backbone of the tree, then (32) is still true, where $P$ denotes the probability conditioned on survival. We omit the details.

Recall that $X_{G^d_k,v_0}$ is the set of all $G^d_k$-indexed walks. What can be said about the cardinality of $X_{G^d_k,v_0}$? The corresponding question for the discrete cube is well known; see Kahn [13]. Since nearest neighbours in $G^d_k$ are mapped to nearest neighbours in $\mathbb{Z}$, we must have $|X_{G^d_k,v_0}| \le 2^{|G^d_k|-1}$. On the other hand, by mapping all the vertices in odd (even) levels to the same element of $\mathbb{Z}$, and all vertices in even (odd) levels to one of the two neighbours of this element, we get $|X_{G^d_k,v_0}| \ge \max\{2^{\mathrm{even}(G^d_k)}, 2^{\mathrm{odd}(G^d_k)}\}$, where $\mathrm{even}(G)$ ($\mathrm{odd}(G)$) denotes the number of vertices in even (odd) levels of $G$, excluding the root. It is easy to see that this bound is not optimal: if we fix every $4$th level to be mapped to $0$ we get a somewhat better result, if we fix every $8$th level to be mapped to $0$ we do even better, and so on. However, using entropy methods (as in Kahn [13]) and Proposition 4.1, we improve the trivial upper bound. For a discrete random variable $X$ taking $k$ different values with probabilities $p_1, \ldots, p_k$ we define the entropy $H(X)$ as
$$H(X) = H(p_1,\ldots,p_k) = -\sum_{i=1}^{k} p_i \log_2 p_i$$
(see e.g. [1] for basic properties of entropy).

Proposition 4.2. We have
$$|X_{G^d_k,v_0}| \le 4 \max\left\{2^{\mathrm{odd}(G^d_k) + \mathrm{even}(G^d_k)\,h(d)},\; 2^{\mathrm{even}(G^d_k) + \mathrm{odd}(G^d_k)\,h(d)}\right\}, \tag{33}$$
where $h(d) = H\!\left(\frac{1}{1+t(d)},\, \frac{t(d)}{1+t(d)}\right)$ and $t(d) \le 2^{-d+1}$ (note that $\lim_{d\to\infty} h(d) = 0$).

4.2. Power-type Pascal triangles

Definition 4.1. Fix an integer $d \ge 1$. The power-$d$ Pascal triangle is the array $\tilde P_d = \{\tilde P_d(k,n)\}_{k=0,1,\ldots;\; n \in \mathbb{Z}}$ defined by the recursion
$$\tilde P_d(k,n) = \left(\tilde P_d(k-1,n-1) + \tilde P_d(k-1,n+1)\right)^{\!d}, \tag{34}$$
with initial values
$$\tilde P_d(0,n) = \begin{cases} 1 & \text{for } n \in \{-1,1\}, \\ 0 & \text{for } n \notin \{-1,1\}. \end{cases} \tag{35}$$

In the usual ($d = 1$) Pascal triangle each term is the sum of the two terms above it. In the power-$d$ Pascal triangle, each term is the $d$'th power of the sum of the two terms above it. See Figure 2.

                1       1
            1       4       1
        1      25      25       1
    1      676    2500     676      1

                1       1
            1       8       1
        1      729     729      1

FIG. 2. The first few elements in the power-2 and power-3 Pascal triangles. (The numbers quickly become too large to fit typographically in such arrays!)

The connection between $G^d_k$-indexed walks and power-type Pascal triangles is given in the following proposition.

Proposition 4.3. For all $d, n, k$, we have
$$P_{G^d_k,v_0}[f(v^*) = n] = \frac{\tilde P_d(k,n)}{\sum_{j=-\infty}^{\infty} \tilde P_d(k,j)}.$$

Proof. Immediate by induction.
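Definition 4.1 and Proposition 4.3 translate directly into code. The following Python sketch (our own illustration) computes $\tilde P_d(k,n)$ with exact integer arithmetic and the resulting distribution of $f(v^*)$; it can be used to check Proposition 4.1 and Lemma 4.2 numerically for small $d$ and $k$.

```python
from fractions import Fraction

def power_pascal(d, k):
    """Rows 0..k of the power-d Pascal triangle, as dicts n -> P~_d(row, n)."""
    row = {-1: 1, 1: 1}                              # initial values (35)
    rows = [row]
    for _ in range(k):
        new = {}
        for n in range(min(row) - 1, max(row) + 2):
            s = row.get(n - 1, 0) + row.get(n + 1, 0)
            if s:
                new[n] = s ** d                      # recursion (34)
        row = new
        rows.append(row)
    return rows

def wired_top_distribution(d, k):
    """P_{G^d_k, v0}[f(v*) = n] via Proposition 4.3."""
    row = power_pascal(d, k)[k]
    total = sum(row.values())
    return {n: Fraction(w, total) for n, w in sorted(row.items())}

dist = wired_top_distribution(d=2, k=5)
print({n: float(p) for n, p in dist.items() if p > 1e-9})
```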

Next, we give the main tool for the proof of Proposition 4.1. Define $P_d(k,n) = P_{G^d_k,v_0}[f(v^*) = n]$.

Lemma 4.1. There exists a constant $t(d) \le 2^{-d+1} < 1$ such that for all $k \ge 1$ and $n \ge 0$,
$$P_d(k,n+2) \le t(d)\, P_d(k,n). \tag{36}$$
Similarly, for $n \le 0$,
$$P_d(k,n-2) \le t(d)\, P_d(k,n). \tag{37}$$

Proof. We will prove the lemma for $k = 2m+1$, $m \ge 0$. The proof for even $k$ is similar. By Proposition 4.3 we may prove (36) and (37) for $\tilde P_d$ instead of $P_d$. We prove these inequalities by induction on $m$, for $t(d) = 2^{-d+1}$. For $m = 0$ we have $\tilde P_d(1,-2) = \tilde P_d(1,2) = 1$ and $\tilde P_d(1,0) = 2^d$, so (36) and (37) hold. We now deduce (36) and (37) for $k = 2m+3$ from (36) and (37) for $k = 2m+1$. Iterating (34) we have
$$\tilde P_d(k,n) = \left(\big(\tilde P_d(k-2,n-2) + \tilde P_d(k-2,n)\big)^d + \big(\tilde P_d(k-2,n) + \tilde P_d(k-2,n+2)\big)^d\right)^{\!d}. \tag{38}$$
Assume first that $n > 0$. In this case, by the induction hypothesis we have
$$\tilde P_d(k-2,n) \le t(d)\, \tilde P_d(k-2,n-2), \qquad \tilde P_d(k-2,n+2) \le t(d)\, \tilde P_d(k-2,n), \qquad \tilde P_d(k-2,n+4) \le t(d)\, \tilde P_d(k-2,n+2),$$
so (38) generates
$$\tilde P_d(k,n+2) \le \left(t(d)^d\right)^{d}\, \tilde P_d(k,n) \le t(d)\, \tilde P_d(k,n). \tag{39}$$
The critical case is when $n = 0$. There we get
$$\begin{aligned}
\tilde P_d(k,2) &= \left(\big(\tilde P_d(k-2,0) + \tilde P_d(k-2,2)\big)^d + \big(\tilde P_d(k-2,2) + \tilde P_d(k-2,4)\big)^d\right)^{\!d} \\
&\le \left(\frac{1+t(d)^d}{2}\right)^{\!d} \left(\big(\tilde P_d(k-2,-2) + \tilde P_d(k-2,0)\big)^d + \big(\tilde P_d(k-2,0) + \tilde P_d(k-2,2)\big)^d\right)^{\!d} \\
&\le t(d)\, \tilde P_d(k,0).
\end{aligned}$$
We have proved (36); (37) follows since $P_d(k,-n) = P_d(k,n)$.

Now we use Lemma 4.1 to obtain tail estimates:

Lemma 4.2. There exists a constant $t(d) \le 2^{-d+1} < 1$ such that for all $k \ge 1$ and $n \ge 0$,
$$P_d(k,n+2) \le t(d)^{d^n}\, P_d(k,n). \tag{40}$$
Similarly, for $n \le 0$,
$$P_d(k,n-2) \le t(d)^{d^{-n}}\, P_d(k,n). \tag{41}$$

Proof. Once more we prove the case $k = 2m+1$ by induction on $m$. Here also we may prove (40) and (41) for $\tilde P_d$ instead of $P_d$. When $m = 0$, $k = 1$, the inequalities hold. We now deduce the claim for $k+2$ from the claim for $k$. The case $n = 0$ is covered by Lemma 4.1. Hence, we may assume $n > 0$. By the induction hypothesis,
$$\tilde P_d(k,n) \le t(d)^{d^{n-2}}\, \tilde P_d(k,n-2),$$
$$\tilde P_d(k,n+2) \le t(d)^{d^{n}}\, \tilde P_d(k,n) \le t(d)^{d^{n-2}}\, \tilde P_d(k,n),$$
$$\tilde P_d(k,n+4) \le t(d)^{d^{n+2}}\, \tilde P_d(k,n+2) \le t(d)^{d^{n-2}}\, \tilde P_d(k,n+2),$$
so (38) generates
$$\tilde P_d(k+2,n+2) \le \left(t(d)^{d^{n-2}}\right)^{d \cdot d}\, \tilde P_d(k+2,n) = t(d)^{d^{n}}\, \tilde P_d(k+2,n),$$
as needed.

4.3. The range and the top

We now prove Proposition 4.1 and Theorem 4.1.

Proof (of Proposition 4.1). Immediate from Proposition 4.3 and Lemma 4.2.

In order to prove Theorem 4.1 we need some more lemmas.


Lemma 4.3. Letting $v^l_1, \ldots, v^l_{d^l}$ denote the vertices of the $l$'th level of $G^d_k$ (or $T^d_k$), we have
$$P_{G^d_k,v_0}\big[(f(v^l_1),\ldots,f(v^l_{d^l})) = (x_1,\ldots,x_{d^l}) \;\big|\; f(v^*) = x\big]
= \frac{1}{Z}\; P_{T^d_l,v_0}\big[(f(v^l_1),\ldots,f(v^l_{d^l})) = (x_1,\ldots,x_{d^l})\big] \prod_{i=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = x - x_i]$$
for some positive constant $Z$.

Proof. Immediate.

To lighten the notation in what follows, we write $Q_k$ for $P_{T^d_k,v_0}$. For an integer $t \ge 0$, we also write $Q^t_k$ for $Q_k$ conditioned on the event that $|f(v^k_1)| = t$.

Lemma 4.4. For $t \ge s$ there exists a coupling $Q^{t,s}_k$ of the measures $Q^t_k$ and $Q^s_k$ satisfying
$$Q^{t,s}_k\big[\{(f,g) : |f(v)| \ge |g(v)| \text{ for all } v \in V_{T^d_k}\}\big] = 1. \tag{42}$$

Proof. The result follows by considering $P_{T^d_k,v_0}$ conditioned on $f(v^k_1) = t$ (or on $f(v^k_1) = s$), calculating conditional probabilities as in the proof of Proposition 2.3, and applying Lemma 2.1.

Proof (of Theorem 4.1). We first claim that it suffices to prove (31) for even $k$, conditioned on $f(v^*) = 0$. Indeed, suppose we have proven (31) under these conditions, and that for some $c > 0$,
$$\lim_{k\to\infty} P_{G^d_k,v_0}\left[R(f) > \frac{(1-c)\log k}{2\log d}\right] \ne 1.$$
Then, by Proposition 4.1, there exist an integer $r$, an $\epsilon > 0$, and an infinite number of $k_i$'s such that
$$P_{G^d_{k_i},v_0}\left[R(f) \le \frac{(1-c)\log k_i}{2\log d} \;\Big|\; f(v^*) = r\right] > \epsilon. \tag{43}$$
For such $k_i$, let $l_i \in \{k_i + |r|,\, k_i + |r| + 1\}$ be even. We claim that
$$P_{G^d_{l_i},v_0}\left[R(f) \le \frac{(1-c)\log k_i}{2\log d} + |r| + 1 \;\Big|\; f(v^*) = 0\right] > 2^{-d^{|r|+2}}\, y^{d^{|r|+1}}\, \epsilon^{d^{|r|+1}} \tag{44}$$
for some $0 < y < 1$. This implies that for $c' = c/2$ we have
$$\lim_{k\to\infty} P_{G^d_k,v_0}\left[R(f) > \frac{(1-c')\log k}{2\log d} \;\Big|\; f(v^*) = 0\right] \ne 1,$$
where the limit is taken over even $k$, in contradiction to our assumption.

In order to show that (43) implies (44), let $A_{l_i}$ be the event that the $G^d_{l_i}$-indexed walk maps every $v$ at level $j \le l_i - k_i$ to $j$. From Lemma 4.3 we have that
$$P_{G^d_{l_i},v_0}[A_{l_i} \mid f(v^*) = 0] > 2^{-d^{|r|+2}}\, y^{d^{|r|+1}} \tag{45}$$
for some $y > 0$ (which depends on $r$ but not on $k_i$ or $l_i$), and it is clear that
$$P_{G^d_{l_i},v_0}\left[R(f) \le \frac{(1-c)\log k_i}{2\log d} + |r| + 1 \;\Big|\; A_{l_i},\, f(v^*) = 0\right] > \epsilon^{d^{|r|+1}}. \tag{46}$$
Combining (45) and (46) we see that (43) implies (44). The proof that for the other bound it is enough to assume that $f(v^*) = 0$ and that $k$ is even is similar (but easier).

It remains to prove that for even $k$ and for all $c > 0$ we have
$$\lim_{k\to\infty} P_{G^d_k,v_0}\left[R(f) < \frac{(1+c)\log k}{\log d} \;\Big|\; f(v^*) = 0\right] = 1 \tag{47}$$
and
$$\lim_{k\to\infty} P_{G^d_k,v_0}\left[R(f) > \frac{(1-c)\log k}{2\log d} \;\Big|\; f(v^*) = 0\right] = 1. \tag{48}$$

We start with a proof of (47). Let $v$ be any vertex. We will show that there exists $r \in (0,1)$ such that if $t, s \in \mathbb{Z}$ and $t > s$, then
$$P_{G^d_k,v_0}[|f(v)| = t \mid f(v^*) = 0] \le r^{d^t}\, P_{G^d_k,v_0}[|f(v)| = s \mid f(v^*) = 0]. \tag{49}$$
From this it follows that
$$P_{G^d_k,v_0}[|f(v)| > s \mid f(v^*) = 0] \le r^{d^s} \tag{50}$$
(for some other $r \in (0,1)$), and therefore if $s(k)$ satisfies
$$\lim_{k\to\infty} d^k\, r^{d^{s(k)}} = 0,$$
then
$$\lim_{k\to\infty} P_{G^d_k,v_0}[R(f) < s(k) \mid f(v^*) = 0] = 1.$$
In particular, (47) holds for all $c > 0$.

In order to prove (49), assume that $v = v^l_i$ is at level $l$, at index $i$. We write $\bar w = (w_1,\ldots,w_{d^l})$ and $\bar v = (v^l_1,\ldots,v^l_{d^l})$. Lemmas 4.2, 4.3 and 4.4 imply that
$$\begin{aligned}
\frac{P_{G^d_k,v_0}[|f(v^l_i)| = t \mid f(v^*) = 0]}{P_{G^d_k,v_0}[|f(v^l_i)| = s \mid f(v^*) = 0]}
&= \frac{Z^{-1} \sum_{\bar w :\; |w_i| = t} Q_l[f(\bar v) = \bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]}
       {Z^{-1} \sum_{\bar w :\; |w_i| = s} Q_l[f(\bar v) = \bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]} \\
&= \frac{Q_l[|f(v^l_i)| = t] \sum_{\bar w} Q^t_l[\bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]}
       {Q_l[|f(v^l_i)| = s] \sum_{\bar w} Q^s_l[\bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]}
\;\le\; r^{d^t}.
\end{aligned}$$
The last equality follows from the fact that for all $\bar w$ with $|w_i| = t$ we have
$$Q_l[f(\bar v) = \bar w] = Q_l[|f(v^l_i)| = t]\; Q^t_l[f(\bar v) = \bar w],$$
whereas the last inequality follows from the fact that, since $t > s > 0$,
$$Q_l[|f(v^l_i)| = t] \le Q_l[|f(v^l_i)| = s].$$
Moreover, using the coupling of Lemma 4.4, we get
$$\frac{\sum_{\bar w} Q^t_l[\bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]}
       {\sum_{\bar w} Q^s_l[\bar w] \prod_{j=1}^{d^l} P_{G^d_{k-l},v_0}[f(v^*) = -w_j]}
\;\le\; \frac{P_{G^d_{k-l},v_0}[f(v^*) = t]}{P_{G^d_{k-l},v_0}[f(v^*) = s]} \;\le\; r^{d^t}.$$

The proof of the upper bound (47) is now complete. We turn to the proof of the lower bound. For the moment, fix $h$. Let $A_k$ be the event that $R(f) \ge h$. We denote the set of nodes at level $i$ by $L_i$, and let $B_k$ be the following event:
$$B_k = \Big\{\max\{f(v) : v \in \textstyle\cup_i L_{2ih}\} - \min\{f(v) : v \in \textstyle\cup_i L_{2ih}\} < h\Big\}.$$
Clearly,
$$P_{G^d_k,v_0}[A_k] = P_{G^d_k,v_0}[B_k]\, P_{G^d_k,v_0}[A_k \mid B_k] + P_{G^d_k,v_0}[\bar B_k]\, P_{G^d_k,v_0}[A_k \mid \bar B_k]
= P_{G^d_k,v_0}[\bar B_k] + P_{G^d_k,v_0}[B_k]\, P_{G^d_k,v_0}[A_k \mid B_k]
\ge P_{G^d_k,v_0}[A_k \mid B_k].$$
We now estimate $P_{G^d_k,v_0}[A_k \mid B_k]$. We note that
$$P_{G^d_k,v_0}[A_k \mid B_k] \ge \min P_{G^d_k,v_0}\big[A_k \;\big|\; B_k,\, \{f(v)\}_{v \in \cup_i L_{2ih}}\big], \tag{51}$$
where the minimum is taken over all $\{f(v)\}_{v \in \cup_i L_{2ih}}$ for which $B_k$ holds. For each $v \in \cup_i L_{2ih}$, let $T_{2h}(v)$ be the subtree of $2h$ levels rooted at $v$, and let
$$A^v_k = \big\{|\{f(w) : w \in T_{2h}(v)\}| \ge h\big\}.$$
The events $A^v_k$ are independent given $\{f(v)\}_{v \in \cup_i L_{2ih}}$. Moreover, it is easy to see that for all $v \in \cup_i L_{2ih}$,
$$P_{G^d_k,v_0}\big[A^v_k \;\big|\; B_k,\, \{f(v)\}_{v \in \cup_i L_{2ih}}\big] \ge 2^{-d^{2h+1}}.$$
Therefore, if we have for $h = h(k)$ that
$$\lim_{k\to\infty} d^{k-h}\, 2^{-d^{2h+1}} = \infty,$$
then also
$$\lim_{k\to\infty} P_{G^d_k,v_0}[A_k \mid B_k] = 1.$$
Taking
$$h(k) = \frac{(1-c)\log k}{2\log d},$$
we obtain the desired result.

4.4. Number of $G^d_k$ walks

Proof (of Proposition 4.2). We will prove the proposition for odd $k$. The proof for even $k$ is similar. Since $k$ is odd, the task is to prove that
$$|X_{G^d_k,v_0}| \le 4 \cdot 2^{\mathrm{odd}(G^d_k) + \mathrm{even}(G^d_k)\,h(d)}. \tag{52}$$
Let $X^0_{G^d_k,v_0}$ be the set of $G^d_k$-indexed walks which satisfy $f(v^*) = 0$. From Lemma 4.2 it follows that
$$|X_{G^d_k,v_0}| \le 4\,|X^0_{G^d_k,v_0}|.$$
Therefore, in order to prove (52) it suffices to show that
$$|X^0_{G^d_k,v_0}| \le 2^{\mathrm{odd}(G^d_k) + \mathrm{even}(G^d_k)\,h(d)}. \tag{53}$$
Let $X$ be a uniform random variable on $X^0_{G^d_k,v_0}$, and let $H$ be the entropy function. It is clear that (53) is equivalent to
$$H(X) \le \mathrm{odd}(G^d_k) + \mathrm{even}(G^d_k)\,h(d). \tag{54}$$
However,
$$H(X) \le \sum_{l=1}^{k} \sum_{i=1}^{d^l} H\big(X(v^l_i) \mid X((v^l_i)')\big), \tag{55}$$
where $v'$ denotes the parent of $v$. Since, given $X((v^l_i)')$, the value $X(v^l_i)$ has two possible values, we have for all $v^l_i$
$$H\big(X(v^l_i) \mid X((v^l_i)')\big) \le 1. \tag{56}$$
Moreover, if $l$ is even, then from Lemma 4.1 we have that if $X((v^l_i)') > 0$, then
$$P\big[X(v^l_i) = X((v^l_i)') + 1\big] \le t(d)\, P\big[X(v^l_i) = X((v^l_i)') - 1\big]. \tag{57}$$
Similarly, if $X((v^l_i)') < 0$, then
$$P\big[X(v^l_i) = X((v^l_i)') - 1\big] \le t(d)\, P\big[X(v^l_i) = X((v^l_i)') + 1\big]. \tag{58}$$
Equations (57) and (58) imply that for $l$ even,
$$H\big(X(v^l_i) \mid X((v^l_i)')\big) \le H\!\left(\frac{1}{1+t(d)},\; \frac{t(d)}{1+t(d)}\right). \tag{59}$$
In (55) we now take the bound (56) for odd $l$ and (59) for even $l$, to obtain (54).

5. THE DISCRETE CUBE

In this short section we discuss the case of the $k$-dimensional discrete cube: $G_k = (V_{G_k}, E_{G_k})$, where $V_{G_k} = \{0,1\}^k$, $E_{G_k} = \{(x,y) : h(x,y) = 1\}$ and $h$ denotes Hamming distance. In this case we let $v_0 = (0,\ldots,0)$. By a direct application of Theorem 2.1 and the well-known large deviations behavior of SRW (see e.g. Durrett [4], p. 76), we get the following.

Corollary 5.1. For any integer $k$ and any $t > 0$, we have for $G_k$-indexed walks that
$$P_{G_k,v_0}[|f(v)| \ge tk] \le 2e^{-kt^2/4}$$
for all $v \in V_{G_k}$.

Remark: Instead of using Theorem 2.1, one may utilize measure concentration results for the discrete cube (see e.g. Talagrand [18]) to obtain a similar result (with somewhat worse constants). We outline the argument below. Fix $v$ and define $S(f)(u) = S_v(f)(u) = f(u) - f(u \oplus v)$, where $\oplus$ is addition in the group $(\mathbb{Z}/2\mathbb{Z})^k$. It is easy to see that for all $f \in X_{G_k,v_0}$, $S(f)$ is a Lipschitz function with constant $2$. A moment's reflection reveals that for all $w_1, w_2 \in V_{G_k}$ and all $t \in \mathbb{Z}$, we have
$$P_{G_k,v_0}[S(f)(w_1) = t] = P_{G_k,v_0}[S(f)(w_2) = t]. \tag{60}$$
On the other hand, from measure concentration results for the discrete cube (see e.g. Talagrand [18]) we have, for all fixed $f \in X_{G_k,v_0}$, that
$$\frac{|\{x : |S(f)(x)| > tk\}|}{2^k} \le \frac{1}{2}\, e^{-kt^2/8}. \tag{61}$$
Combining (60) with (61) we have
$$P_{G_k,v_0}[|f(v)| > tk] = P_{G_k,v_0}[|S(f)(v)| > tk] = \frac{1}{2^k} \sum_{u \in V_{G_k}} P_{G_k,v_0}[|S(f)(u)| > tk]
= E_{G_k,v_0}\left[\frac{1}{2^k}\, |\{u \in V_{G_k} : |S(f)(u)| > tk\}|\right] \le \frac{1}{2}\, e^{-kt^2/8},$$

as desired.

We conjecture that the concentration of measure for a typical $G_k$-indexed random walk should be much stronger than the deterministic bound $R(f) \le k+1$. In particular, a modest achievement in that direction would be to prove the following.

Conjecture 5.1. For all $t > 0$, we have
$$\lim_{k\to\infty} P_{G_k,v_0}[R(f) > tk] = 0.$$

Remark: We note that the analogue of Conjecture 5.1 for the Gaussian field model holds. Since the resistance between any two vertices is bounded by some global constant (independent of $k$), the variance of $f(v)$ is also bounded by some global constant. However, $f(v)$ is Gaussian, and it therefore follows that for all $k$ and all $v \in G_k$,
$$P_k[|f(v)| \ge t] \le C_1 e^{-C_2 t^2}$$
for some positive $C_1, C_2$, where $P_k$ denotes the Gaussian field measure on the $k$-dimensional discrete cube. Therefore, for the $G_k$ Gaussian fields we have
$$\lim_{k\to\infty} P_k[R(f) \ge C\sqrt{k}] = 0$$
for some positive constant $C$.

An obvious attempt to bound $R(f)$ would be to use Corollary 5.1 to bound the expected number of vertices taking value above $tk$, but unfortunately this does not give any useful bound. Kahn [13] gives bounds on the number of $G_k$-indexed walks. We do not see how to use these bounds for our purpose.

6. EMULATING THE ISING MODEL

Propositions 2.1, 2.2 and 2.3 are all indications that $P_{G,v_0}$ is, in various respects, well-behaved.

A pessimistic interpretation would be to conclude that $G$-indexed random walks are "dull". As an argument that this is not the case, we will now demonstrate how the ferromagnetic Ising model on any finite graph $H$ can be emulated by a graph-indexed walk on a different graph $G$. The Ising model is one of the most fundamental models in statistical mechanics. It has been the subject of countless studies, and many intricate phenomena have been revealed; the reader may turn e.g. to Liggett [15], Georgii [6] or Georgii et al. [7] for a start.

Let $H = (V_H, E_H)$ be any finite graph. The Gibbs measure $\mu^\beta_H$ for the Ising model on $H$ at reciprocal temperature $\beta \ge 0$ is the probability measure on $\{-1,1\}^{V_H}$ which to each $\omega \in \{-1,1\}^{V_H}$ assigns probability
$$\mu^\beta_H(\omega) = \frac{1}{Z^\beta_H} \exp\left(\beta \sum_{\{u,v\}} \omega(u)\,\omega(v)\right). \tag{62}$$
Here $\{u,v\}$ means that we sum over all (unordered) pairs of vertices sharing an edge, and $Z^\beta_H$ is a normalizing constant.

Given $H$, we define another graph $G = (V_G, E_G)$ from $H$ by (i) replacing each edge in $H$ by two edges in series, (ii) adding an additional vertex $v_0$, and (iii) including an edge between $v_0$ and $v$ for each $v \in V_H$. In other words, $V_G = V_H \cup E_H \cup \{v_0\}$ and
$$E_G = \{\{v,e\} : v \in V_H,\ e \in E_H,\ e \text{ is incident to } v\} \cup \{\{v_0,v\} : v \in V_H\}.$$
A direct counting argument shows the following.

Proposition 6.1. With $G$ and $H$ as above, the $P_{G,v_0}$-distribution of $f(V_H)$ equals the Ising model Gibbs measure $\mu^\beta_H$ with $\beta = \frac{1}{2}\log 2$.

If we modify $G$ further by placing $k$ paths of length $2$ in parallel between $v_0$ and each $e \in E_H$, then the $P_{G,v_0}$-distribution of $f(V_H)$ instead equals $\mu^\beta_H$ with $\beta = \frac{1}{2}\log(1 + 2^{-k})$. By placing $n$ such "decorations" in parallel between each pair of vertices $u,v \in V_H$ with $\{u,v\} \in E_H$, we get the distribution $\mu^\beta_H$ with $\beta = \frac{n}{2}\log(1 + 2^{-k})$. The set of reciprocal temperatures for which we can emulate the Ising model on $H$ is therefore dense in $(0,\infty)$. This construction has some resemblance to the subshift of finite type imitations of Gibbs models obtained by Häggström [9, 10]. Since there are only countably many ways to construct $G$, the restriction to a countable dense set of $\beta$-values cannot be removed. One may also ask whether it is possible to do the same thing for $\beta < 0$ (this is the so-called antiferromagnetic Ising model), but it follows from Proposition 2.2 that this cannot be done.
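The construction behind Proposition 6.1 is purely combinatorial and easy to write down. The following Python sketch (our own illustration; the vertex naming scheme is ours) builds the basic, undecorated graph $G$ from $H$, i.e. the case $\beta = \frac{1}{2}\log 2$.

```python
def ising_emulator_graph(H_vertices, H_edges):
    """Build G from H as in Proposition 6.1: subdivide every edge of H once
    and join a new apex vertex 'v0' to every original vertex of H.

    Returns (V_G, E_G); the P_{G,v0}-distribution of f restricted to
    H_vertices is then the Ising measure on H at beta = (1/2) log 2."""
    apex = "v0"
    V_G = list(H_vertices) + [("e", u, v) for u, v in H_edges] + [apex]
    E_G = []
    for u, v in H_edges:
        mid = ("e", u, v)                 # subdivision vertex for edge {u, v}
        E_G += [(u, mid), (v, mid)]
    E_G += [(apex, u) for u in H_vertices]
    return V_G, E_G

# Example: H = a triangle on vertices a, b, c (G is bipartite even though H is not).
V, E = ising_emulator_graph(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
print(len(V), len(E))                     # 7 vertices, 9 edges
```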

7. FINAL REMARKS

We expect that a lot remains to be revealed about $G$-indexed random walks. Among open problems, we have already mentioned Conjectures 2.2 and 5.1. Another problem which may be of interest is the following.

Open problem: Let the graphs $G$ and $H$ satisfy the usual assumptions (finite, connected, bipartite) and suppose that $G$ and $H$ are roughly isometric with constant $k < \infty$ (that is, there is a function $g$ from $V_G$ to $V_H$ such that $k^{-1} d(x,y) - k \le d(g(x), g(y)) \le k\, d(x,y) + k$ for any $x,y \in V_G$, and for every $z \in V_H$ there is some $x \in V_G$ so that $d(g(x), z) \le k$). What is the relationship between $G$- and $H$-indexed random walks? In particular, suppose that we have two families of graphs $\{G_n\}$ and $\{H_n\}$, and that each $G_n$ is roughly isometric to $H_n$ with the same constant $k$. Can it happen that
$$\lim_{n\to\infty} \frac{E_{G_n,v_0} R(f)}{E_{H_n,v_0} R(f)} = \infty,$$
or is there some constant $C = C(k)$ bounding $\frac{E_{G_n,v_0} R(f)}{E_{H_n,v_0} R(f)}$?

There are of course also various ways in which our model may be extended. The image $\mathbb{Z}$ of our graph homomorphisms may be replaced by any other graph. For instance, if we replace it by a complete graph on $k$ vertices, then we obtain the usual random $k$-coloring model. Generalizing further, the underlying simple random walk can be replaced by any reversible Markov chain. The uniform measure is then replaced by a weighted measure in which each $f$ gets a weight proportional to $\prod_{\{u,v\} \in E_G} C(f(u), f(v))$ for some interaction function $C$, thus putting us in the familiar generality of Gibbs measures with nearest-neighbor pair interactions.

Acknowledgment. Thanks to Yuval Peres, Jeff Steif and Oded Schramm for useful discussions, and to Johan Jonasson for helpful comments on the manuscript.

REFERENCES

1. Ash, R. B. (1965) Information Theory, Dover, New York.
2. Asmussen, S. and Hering, H. (1983) Branching Processes, Birkhäuser, Boston.
3. Benjamini, I. and Peres, Y. (1994) Tree-indexed random walks on groups and first passage percolation, Probab. Th. Relat. Fields 98, 91-112.
4. Durrett, R. (1995) Probability: Theory and Examples (2nd edition), Duxbury Press, Belmont.
5. Fortuin, C. M., Kasteleyn, P. W. and Ginibre, J. (1971) Correlation inequalities on some partially ordered sets, Commun. Math. Phys. 22, 89-103.
6. Georgii, H.-O. (1988) Gibbs Measures and Phase Transitions, de Gruyter, New York.
7. Georgii, H.-O., Häggström, O. and Maes, C. (1998) The random geometry of equilibrium phases, Phase Transitions and Critical Phenomena (C. Domb and J. L. Lebowitz, eds), Academic Press, London, to appear.
8. Grimmett, G. R. (1995) The stochastic random-cluster process, and the uniqueness of random-cluster measures, Ann. Probab. 23, 1461-1510.
9. Häggström, O. (1995) A subshift of finite type that is equivalent to the Ising model, Ergod. Th. Dynam. Sys. 15, 543-556.
10. Häggström, O. (1995) On the relation between finite range potentials and subshifts of finite type, Probab. Th. Relat. Fields 101, 469-478.
11. Holley, R. (1974) Remarks on the FKG inequalities, Commun. Math. Phys. 36, 227-231.
12. Janson, S. (1997) Gaussian Hilbert Spaces, Cambridge University Press.
13. Kahn, J. (1998) In preparation.
14. Lawler, G. F. (1995) Intersections of Random Walks, Birkhäuser, Boston.
15. Liggett, T. M. (1985) Interacting Particle Systems, Springer, New York.
16. Lindvall, T. (1992) Lectures on the Coupling Method, Wiley, New York.
17. Ney, P. (1991) Branching random walk, Spatial Stochastic Processes (K. Alexander and J. Watkins, eds), pp. 3-22, Birkhäuser, Boston.
18. Talagrand, M. (1995) Concentration of measure and isoperimetric inequalities in product spaces, Publ. Math. IHES 81, 73-205.