MARTIN CAPACITY FOR MARKOV CHAINS AND RANDOM WALKS IN VARYING DIMENSIONS

Itai Benjamini$^1$   Robin Pemantle$^{2,3}$   Yuval Peres$^4$
1 Introduction

Kakutani (1944) discovered that a compact set $\Lambda \subset \mathbb{R}^d$ is hit with positive probability by a $d$-dimensional Brownian motion ($d \ge 3$) if and only if $\Lambda$ has positive Newtonian capacity. When capacity criteria were transferred to the discrete setting (by Itô and McKean (1960) and Lamperti (1963)) it was in the form of a "Wiener test" (cf. Corollary 2.4). This kind of summability condition is quite effective in deciding whether a given subset of a lattice is hit infinitely often by a random walk, but does not yield estimates of the probability of ever hitting the set. Such estimates are obtained from the discrete analogue of the following.
Proposition 1.1  Let $\{B_d(t)\}$ denote standard $d$-dimensional Brownian motion with $B_d(0) = 0$ and $d \ge 3$. Let $\Lambda \subset \mathbb{R}^d$ be any compact set. Then
$$\frac{1}{2}\,\mathrm{Cap}_K(\Lambda) \;\le\; \mathbf{P}[\exists t > 0 : B_d(t) \in \Lambda] \;\le\; \mathrm{Cap}_K(\Lambda) \qquad (1)$$
$^1$ Mathematical Sciences Institute, 409 College Ave., Ithaca, NY 14853. Research partially supported by the U.S. Army Research Office through the Mathematical Sciences Institute of Cornell University.
$^2$ Research supported in part by National Science Foundation grant # DMS 9300191, by a Sloan Foundation Fellowship, and by a Presidential Faculty Fellowship.
$^3$ Department of Mathematics, University of Wisconsin-Madison, Van Vleck Hall, 480 Lincoln Drive, Madison, WI 53706.
$^4$ Department of Statistics, 367 Evans Hall, University of California, Berkeley, CA 94720.
where
$$K(x,y) = \frac{\|y\|^{d-2}}{\|x-y\|^{d-2}}$$
for $x, y \in \mathbb{R}^d$. Here $\|x - y\|$ is the Euclidean distance and
$$\mathrm{Cap}_K(\Lambda) = \left[\inf_{\mu(\Lambda)=1} \int\!\!\int K(x,y)\, d\mu(x)\, d\mu(y)\right]^{-1}.$$
Remarks: 1. More detailed definitions will be given later.
2. The constant $1/2$ in (1) is sharp.

The classical criterion $\mathbf{P}[\exists t > 0 : B_d(t) \in \Lambda] > 0 \iff \mathrm{Cap}_G(\Lambda) > 0$, where $G(x,y) = \|x-y\|^{2-d}$, is clearly contained in Proposition 1.1; passing from the Green kernel $G(x,y)$ to the Martin kernel $K(x,y) = G(x,y)/G(0,y)$ yields sharper estimates which are then useful in a discrete setting. By applying such estimates to an appropriate space-time Markov chain, we obtain criteria for a set $A$ of integers to contain infinitely many times of return of a random walk to the origin (Corollary 2.5); in particular, an infinite expected number of returns is not sufficient (Examples 1 and 2 below; this is well known). Lamperti's Wiener test and a theorem of Lyons about percolation on trees are also obtained as corollaries.

Our initial motivation for these criteria was understanding random walks in varying dimension. Let $F_2$ and $F_3$ be two distributions with mean zero and finite variance on the lattice $\mathbb{Z}^3$, where $F_2$ is supported on a plane (but not on any line) and $\mathrm{supp}(F_3)$ is not contained in any plane. Given an increasing sequence of positive integers $\{a_n\}$ we consider the inhomogeneous random walk $\{S_k\}$ whose independent increments $S_k - S_{k-1}$ have distribution $F_3$ if $k \in \{a_n\}$ and distribution $F_2$ otherwise. Theorem 5.4 shows that the process $\{S_k\}$ is recurrent if
$a_n = \exp(\exp(n^{1/2}))$ but transient if $a_n = \exp(\exp(n^\beta))$ for any $\beta \in (0, 1/2)$. Here recurrence means that the number of $k$ for which $S_k = 0$ is almost surely infinite, and transience means that this number is almost surely finite. (These alternatives are exhaustive; cf. Lemma 5.1.) An easy calculation shows that the expected number of visits to the origin by $\{S_k\}$ is infinite when $\beta < 1/2$ as well as when $\beta = 1/2$.

We also consider variants in other dimensions. For instance, there exists a recurrent random walk which interlaces two-dimensional, four-dimensional and six-dimensional steps (but the four-dimensional steps are indispensable here; see Corollary 5.3). Conversely, there is a transient process obtained by alternating blocks of one-dimensional and two-dimensional random walk steps (Proposition 6.1). Durrett, Kesten and Lawler (1991) analyze a random walk in one dimension that interlaces several increment distributions all having mean zero. In that setting, distributions without second moments are necessary in order to obtain transience. Another inhomogeneous model was analyzed by D. Scott (1990).

The rest of this paper is organized as follows. Martin capacity for Markov chains is the focus of Section 2. Several examples are given, including an interesting relation between simple random walk in three dimensions and the time-space chain arising from simple random walk in the plane. Section 3 shows how to derive Lyons' percolation theorem from the general capacity estimate for Markov chains (Theorem 2.2). In Section 4 we give the easy proofs of Proposition 1.1 and related results concerning Brownian motion. Random walks in varying dimension are analyzed in Section 5. This section is written so it can be read independently of the rest of the paper. However, it is connected to the previous sections both in the methods of proof and in that in both settings the number of returns to the origin can be almost surely finite but have infinite expectation. Finally, Section 6 contains the examples of transient walks which interlace one-dimensional and two-dimensional steps.
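The interlaced construction just described is easy to sketch in code. The increment laws below are illustrative stand-ins (uniform nearest-neighbour steps) rather than the paper's general $F_2$ and $F_3$; only the schedule, a truly three-dimensional step exactly at the times $a_n$ and a planar step otherwise, matches the text.

```python
import random

def rwvd_step_dims(a, num_steps):
    # Step k uses the 3-dimensional law F3 iff k lies in the sparse
    # increasing sequence {a_n}; otherwise the planar law F2.
    special = set(a)
    return [3 if k in special else 2 for k in range(1, num_steps + 1)]

def rwvd_path(a, num_steps, rng):
    # Sample the inhomogeneous walk S_k in Z^3.  Illustrative choices:
    # F3 = uniform on the six unit vectors, F2 = uniform on the four
    # planar unit vectors (both mean zero, finite variance).
    moves3 = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    moves2 = moves3[:4]
    pos, path = (0, 0, 0), [(0, 0, 0)]
    for d in rwvd_step_dims(a, num_steps):
        dx, dy, dz = rng.choice(moves3 if d == 3 else moves2)
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        path.append(pos)
    return path
```

Note that the $z$-coordinate of the path can move only at the times $a_n$, which is exactly the mechanism exploited in Section 5.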
2 Polarity for Markov chains

First we recall some potential theory notions.
Definition 2.1  Let $\Lambda$ be a set and $\mathcal{B}$ a $\sigma$-field of subsets of $\Lambda$. Given a measurable function $F : \Lambda \times \Lambda \to [0, \infty]$ and a finite measure $\mu$ on $(\Lambda, \mathcal{B})$, the $F$-energy of $\mu$ is
$$I_F(\mu) = \int\!\!\int F(x,y)\, d\mu(x)\, d\mu(y).$$
The capacity of $\Lambda$ in the kernel $F$ is
$$\mathrm{Cap}_F(\Lambda) = \left[\inf_\mu I_F(\mu)\right]^{-1}$$
where the infimum is over probability measures $\mu$ on $(\Lambda, \mathcal{B})$ and, by convention, $\infty^{-1} = 0$.

If $\Lambda$ is contained in Euclidean space, we always take $\mathcal{B}$ to be the Borel $\sigma$-field; if $\Lambda$ is countable, we take $\mathcal{B}$ to be the $\sigma$-field of all subsets. When $\Lambda$ is countable we also define the asymptotic capacity of $\Lambda$ in the kernel $F$:
$$\mathrm{Cap}_F^{(\infty)}(\Lambda) = \inf_{\Lambda' \text{ finite}} \mathrm{Cap}_F(\Lambda \setminus \Lambda'). \qquad (2)$$
Let $\{p(x,y) : x, y \in Y\}$ be transition probabilities on the countable set $Y$, i.e. $\sum_y p(x,y) = 1$ for every $x \in Y$. Let $\rho \in Y$ be a distinguished starting state and let $\{X_n : n \ge 0\}$ be a Markov chain with $\mathbf{P}[X_{n+1} = y \mid X_n = x] = p(x,y)$. Define the Green function
$$G(x,y) = \sum_{n=0}^{\infty} p^{(n)}(x,y) = \sum_{n=0}^{\infty} \mathbf{P}_x[X_n = y]$$
where $p^{(n)}(x,y)$ are the $n$-step transition probabilities and $\mathbf{P}_x$ is the law of the chain $\{X_n : n \ge 0\}$ when $X_0 = x$. We assume that the Markov chain $\{X_n\}$ is transient, or equivalently that $G(x,y) < \infty$ for all $x, y \in Y$, and want to estimate the probability that a sample path $\{X_n\}$ hits a set $\Lambda \subset Y$.
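When the relevant states form a small finite set, the series defining $G$ can be evaluated by brute force: represent escape to infinity as mass leaking out of a substochastic matrix $Q$ and truncate $\sum_{n \ge 0} Q^n$. A minimal sketch (the matrices and truncation depth are arbitrary illustrative choices):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def green_function(Q, terms=200):
    # G(x, y) = sum_{n>=0} p^(n)(x, y), truncated after `terms` powers.
    # Q is substochastic: row sums below 1 model a transient chain, the
    # missing mass being absorbed "at infinity" and never returning.
    n = len(Q)
    G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # n = 0 term
    P = [row[:] for row in Q]
    for _ in range(terms):
        for i in range(n):
            for j in range(n):
                G[i][j] += P[i][j]
        P = mat_mul(P, Q)
    return G
```

For a single state with return probability $q < 1$ this gives $G = \sum_n q^n = 1/(1-q)$, matching the fact that the number of visits to a transient state is geometric.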
Theorem 2.2  Let $\{X_n\}$ be a transient Markov chain on the countable state space $Y$ with initial state $\rho$ and transition probabilities $p(x,y)$. For any subset $\Lambda$ of $Y$ we have
$$\frac{1}{2}\,\mathrm{Cap}_K(\Lambda) \;\le\; \mathbf{P}[\exists n \ge 0 : X_n \in \Lambda] \;\le\; \mathrm{Cap}_K(\Lambda) \qquad (3)$$
and
$$\frac{1}{2}\,\mathrm{Cap}_K^{(\infty)}(\Lambda) \;\le\; \mathbf{P}[X_n \in \Lambda \text{ infinitely often}] \;\le\; \mathrm{Cap}_K^{(\infty)}(\Lambda) \qquad (4)$$
where $K$ is the Martin kernel
$$K(x,y) = \frac{G(x,y)}{G(\rho,y)} \qquad (5)$$
defined using the initial state $\rho$.
Remarks: 1. The Martin kernel $K(x,y)$ can obviously be replaced by the symmetric kernel $\frac{1}{2}(K(x,y) + K(y,x))$ without affecting the energy of measures or the capacity of sets.
2. If the Markov chain starts according to an initial measure $\pi$ on the state space, rather than from a fixed initial state, the theorem may be applied by adding an abstract initial state $\rho$ with transition probabilities $p(\rho, y) = \pi(y)$ for $y \in Y$.

Proof: (i) The right-hand inequality in (3) follows from an entrance time decomposition. Let $\tau$ be the first hitting time of $\Lambda$ and let $\nu$ be the (possibly defective) hitting measure $\nu(x) = \mathbf{P}[X_\tau = x]$ for $x \in \Lambda$. Then
$$\nu(\Lambda) = \mathbf{P}[\exists n \ge 0 : X_n \in \Lambda] \qquad (6)$$
and
$$\int G(x,y)\, d\nu(x) = \sum_{x \in \Lambda} \mathbf{P}[X_\tau = x]\, G(x,y) = G(\rho, y).$$
Therefore $\int K(x,y)\, d\nu(x) = 1$ for every $y \in \Lambda$, whence $I_K(\nu) = \nu(\Lambda)$. Consequently
$$I_K\!\left(\frac{\nu}{\nu(\Lambda)}\right) = \frac{I_K(\nu)}{\nu(\Lambda)^2} = \nu(\Lambda)^{-1},$$
so that $\mathrm{Cap}_K(\Lambda) \ge \nu(\Lambda)$. By (6), this proves half of (3).
To establish the left-hand inequality in (3) we use the second moment method. Given a probability measure $\mu$ on $\Lambda$, consider the random variable
$$Z = \int_\Lambda G(\rho, y)^{-1} \sum_{n=0}^{\infty} 1_{\{X_n = y\}}\, d\mu(y).$$
By Tonelli and the definition of $G$,
$$\mathbf{E}Z = 1.$$
Now we bound the second moment:
$$\mathbf{E}Z^2 \;=\; \mathbf{E}\int\!\!\int \sum_{m,n=0}^{\infty} G(\rho,x)^{-1} G(\rho,y)^{-1}\, 1_{\{X_m=x,\,X_n=y\}}\, d\mu(x)\, d\mu(y)$$
$$\le\; 2\,\mathbf{E}\int\!\!\int \sum_{0 \le m \le n} G(\rho,x)^{-1} G(\rho,y)^{-1}\, 1_{\{X_m=x,\,X_n=y\}}\, d\mu(x)\, d\mu(y). \qquad (7)$$
For each $m$ we have
$$\mathbf{E}\sum_{n=m}^{\infty} 1_{\{X_m=x,\,X_n=y\}} \;=\; \mathbf{P}[X_m = x]\, G(x,y),$$
and summing over $m$ gives
$$\mathbf{E}\sum_{0 \le m \le n} 1_{\{X_m=x,\,X_n=y\}} \;=\; G(\rho,x)\, G(x,y).$$
Therefore
$$\mathbf{E}Z^2 \;\le\; 2\int\!\!\int \frac{G(x,y)}{G(\rho,y)}\, d\mu(x)\, d\mu(y) \;=\; 2\,I_K(\mu).$$
By the Cauchy-Schwarz inequality,
$$\mathbf{P}[\exists n \ge 0 : X_n \in \Lambda] \;\ge\; \mathbf{P}[Z > 0] \;\ge\; \frac{(\mathbf{E}Z)^2}{\mathbf{E}Z^2} \;\ge\; \frac{1}{2\,I_K(\mu)}.$$
Since the left-hand side does not depend on $\mu$, we conclude that $\mathbf{P}[\exists n \ge 0 : X_n \in \Lambda] \ge \frac{1}{2}\mathrm{Cap}_K(\Lambda)$, as claimed. $\Box$
To infer (4) from (3), observe that since $\{X_n\}$ is a transient chain, almost surely every state is visited only finitely often, and therefore
$$\{X_n \in \Lambda \text{ infinitely often}\} \;=\; \bigcap_{\Lambda' \text{ finite}} \{\exists n \ge 0 : X_n \in \Lambda \setminus \Lambda'\} \quad \text{a.s.}$$
Applying (3) and the definition (2) of asymptotic capacity yields (4). $\Box$
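The sandwich (3) can be checked numerically on a toy chain. The sketch below uses the nearest-neighbour walk on $\mathbb{Z}$ with rightward drift ($p = 2/3$), initial state $\rho = 0$, and target set $\Lambda = \{-1, -2\}$; the closed-form Green function of a drifted walk is classical, while the two-point grid search for the minimal energy is an illustrative shortcut, not the paper's method.

```python
def green_biased(x, y, p=2/3):
    # G(x, y) = expected number of visits to y at times n >= 0 for the
    # walk on Z with P(step = +1) = p > 1/2, started at x.  Classical
    # facts: total visits starting from y equal 1/(p-q); the chance of
    # ever reaching y is 1 with the drift and (q/p)^(x-y) against it.
    q = 1 - p
    if x == y:
        return 1 / (p - q)
    hit = 1.0 if y > x else (q / p) ** (x - y)
    return hit / (p - q)

def martin_capacity_two_points(x1, x2, rho=0, p=2/3):
    # Cap_K(Lambda) for Lambda = {x1, x2}: minimise the K-energy over
    # two-point measures mu = (t, 1-t) by a crude grid scan.
    K = {(u, v): green_biased(u, v, p) / green_biased(rho, v, p)
         for u in (x1, x2) for v in (x1, x2)}
    best = float("inf")
    for i in range(1001):
        t = i / 1000
        e = (t * t * K[(x1, x1)] + t * (1 - t) * (K[(x1, x2)] + K[(x2, x1)])
             + (1 - t) * (1 - t) * K[(x2, x2)])
        best = min(best, e)
    return 1 / best
```

Here $\mathbf{P}[\exists n \ge 0 : X_n \in \Lambda]$ equals the chance of ever reaching $-1$, namely $q/p = 1/2$, and the computed capacity is also $1/2$: the upper bound in (3) is attained, while the lower bound $\frac{1}{2}\mathrm{Cap}_K(\Lambda)$ holds with room to spare.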
The remainder of this section is devoted to deriving some consequences of Theorem 2.2. The first involves the notion of intersection-equivalence. Say that two random subsets $W_1$ and $W_2$ of a countable space are intersection-equivalent (or that their laws are intersection-equivalent) if for every subset $A$ of the space, $\mathbf{P}[W_1 \cap A \ne \emptyset]$ and $\mathbf{P}[W_2 \cap A \ne \emptyset]$ are bounded by constant multiples of each other. It is easy to see that if $W_1$ and $W_2$ are intersection-equivalent then $\mathbf{P}[|W_1 \cap A| = \infty]$ and $\mathbf{P}[|W_2 \cap A| = \infty]$ are also bounded by the same constant multiples of each other. An immediate corollary of Theorem 2.2 is the following, one instance of which is given in Corollary 2.6.

Corollary 2.3  Suppose the Green functions for two Markov chains on the same state space are bounded by constant multiples of each other. Then their ranges are intersection-equivalent.
Lamperti (1963) gave an alternative criterion for $\{X_n\}$ to visit the set $\Lambda$ infinitely often. Fix $b > 1$. With the notation of Theorem 2.2, denote $Y(n) = \{x \in Y : b^{-n-1} < G(\rho, x) \le b^{-n}\}$.

Corollary 2.4 (Lamperti's Wiener Test)  Assume that the set $\{x \in Y : G(\rho, x) > 1\}$ is finite and that for some constant $C$ and all $x \in Y(m)$ and $y \in Y(m+n)$ we have
$$G(x,y) \;\le\; C\, b^{-(m+n)}, \qquad (8)$$
provided that $m$ and $n$ are sufficiently large. Then
$$\mathbf{P}[X_n \in \Lambda \text{ infinitely often}] > 0 \;\iff\; \sum_{n=1}^{\infty} b^{-n}\,\mathrm{Cap}_G(\Lambda \cap Y(n)) = \infty. \qquad (9)$$

Remark: Clearly $\sum_{n=1}^{\infty} b^{-n}\,\mathrm{Cap}_G(\Lambda \cap Y(n)) = \infty$ if and only if $\sum_n \mathrm{Cap}_K(\Lambda \cap Y(n)) = \infty$. The equivalence then follows from a version of the Borel-Cantelli lemma proved in Lamperti's paper (a better proof is in Kochen and Stone (1964)). This corollary is useful in many cases; however, the condition (8) excludes some natural transient chains such as simple random walk on a binary tree.
Next, we deduce a criterion for a recurrent Markov chain to visit its initial state infinitely often within a prescribed time set.
Corollary 2.5  Let $\{X_n\}$ be a recurrent Markov chain on the countable state space $Y$ with initial state $X_0 = \rho$ and transition probabilities $p(x,y)$. For nonnegative integers $m \le n$ denote
$$\widetilde{G}(m,n) = \mathbf{P}[X_n = \rho \mid X_m = \rho] = p^{(n-m)}(\rho, \rho)$$
and
$$\widetilde{K}(m,n) = \frac{\widetilde{G}(m,n)}{\widetilde{G}(0,n)}.$$
Then for any set of times $A \subset \mathbb{Z}^+$:
$$\frac{1}{2}\,\mathrm{Cap}_{\widetilde{K}}(A) \;\le\; \mathbf{P}[\exists n \in A : X_n = \rho] \;\le\; \mathrm{Cap}_{\widetilde{K}}(A) \qquad (10)$$
and
$$\frac{1}{2}\,\mathrm{Cap}_{\widetilde{K}}^{(\infty)}(A) \;\le\; \mathbf{P}\Big[\sum_{n \in A} 1_{\{X_n = \rho\}} = \infty\Big] \;\le\; \mathrm{Cap}_{\widetilde{K}}^{(\infty)}(A). \qquad (11)$$

Proof: Consider the space-time chain $\{(X_n, n) : n \ge 0\}$ on the state space $Y \times \mathbb{Z}^+$. This chain is obviously transient; let $G$ denote its Green function. Since $G((\rho,m),(\rho,n)) = \widetilde{G}(m,n)$ for $m \le n$, applying Theorem 2.2 with $\Lambda = \{\rho\} \times A$ shows that (10) and (11) follow respectively from (3) and (4). $\Box$
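For concreteness, $\widetilde{K}$ can be computed exactly for simple random walk on $\mathbb{Z}$, restricting to even times to sidestep periodicity (the corollary's examples assume aperiodic increments):

```python
from math import comb

def p_return(n):
    # P[S_n = 0] for simple random walk on Z, n even: exact binomial.
    assert n % 2 == 0
    return comb(n, n // 2) / 2 ** n

def martin_kernel_time(m, n):
    # K~(m, n) = G~(m, n) / G~(0, n) = P[S_{n-m} = 0] / P[S_n = 0].
    return p_return(n - m) / p_return(n)
```

By the local limit theorem, $\widetilde{K}(m,n) \approx \sqrt{n/(n-m)}$ for large $n$, which is the gauge appearing in Example 1 below.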
Example 1: Random walk on $\mathbb{Z}$.  Let $S_n$ be the partial sums of mean-zero, finite-variance, i.i.d. integer random variables. By the local limit theorem (cf. Spitzer 1964),
$$\widetilde{G}(0,n) = \mathbf{P}[S_n = 0] \sim c\, n^{-1/2}$$
provided that the summands $S_n - S_{n-1}$ are aperiodic. Therefore
$$\mathbf{P}\Big[\sum_{n \in A} 1_{\{S_n = 0\}} = \infty\Big] > 0 \;\iff\; \mathrm{Cap}_F^{(\infty)}(A) > 0, \qquad (12)$$
with $F(m,n) = \big(n^{1/2}/(n-m)^{1/2}\big)\, 1_{\{m < n\}}$.

Let $\tau = \inf\{t > 0 : B_d(t) \in \Lambda\}$. The distribution $\nu$ of $B_d(\tau)$ on the event $\{\tau < \infty\}$ is a possibly defective distribution satisfying
$$\nu(\Lambda) = \mathbf{P}[\tau < \infty] = \mathbf{P}[\exists t > 0 : B_d(t) \in \Lambda]. \qquad (17)$$
Now recall the standard formula, valid when $\epsilon < \|y\|$:
$$\mathbf{P}[\exists t > 0 : \|B_d(t) - y\| < \epsilon] = \frac{\epsilon^{d-2}}{\|y\|^{d-2}}. \qquad (18)$$
By a first entrance decomposition, the probability in (18) is at least
$$\mathbf{P}[\|B_d(\tau) - y\| > \epsilon \text{ and } \exists t > \tau : \|B_d(t) - y\| < \epsilon] \;=\; \int_{x : \|x-y\| > \epsilon} \frac{\epsilon^{d-2}}{\|x-y\|^{d-2}}\, d\nu(x).$$
Letting $\epsilon$ go to zero we obtain
$$\int \frac{d\nu(x)}{\|x-y\|^{d-2}} \;\le\; \frac{1}{\|y\|^{d-2}},$$
i.e. $\int K(x,y)\, d\nu(x) \le 1$ for all $y \in \Lambda$. Therefore $I_K(\nu) \le \nu(\Lambda)$ and thus
$$\mathrm{Cap}_K(\Lambda) \;\ge\; \big[I_K(\nu/\nu(\Lambda))\big]^{-1} \;\ge\; \nu(\Lambda),$$
which by (17) yields the upper bound on the hitting probability of $\Lambda$.

To obtain a lower bound for this probability, a second moment estimate is used. For $\epsilon > 0$ and $y \in \mathbb{R}^d$ let $D(y,\epsilon)$ denote the Euclidean ball of radius $\epsilon$ about $y$ and let $h_\epsilon(\|y\|)$ denote the probability that a Brownian path will hit this ball:
$$h_\epsilon(r) = \min\{1,\ \epsilon^{d-2} r^{2-d}\}. \qquad (19)$$
Define $h_\epsilon(r) = 1$ for $r < 0$. Given a probability measure $\mu$ on $\Lambda$, and $\epsilon > 0$, consider the random variable
$$Z = \int 1_{\{\exists t > 0 : B_d(t) \in D(y,\epsilon)\}}\, h_\epsilon(\|y\|)^{-1}\, d\mu(y).$$
Clearly $\mathbf{E}Z = 1$. We compute the second moment of $Z$ in order to apply Cauchy-Schwarz as in the proof of Theorem 2.2. By symmetry,
$$\mathbf{E}Z^2 \;=\; 2\,\mathbf{E}\int\!\!\int \frac{1_{\{\exists t>0 : B_d(t) \in D(x,\epsilon) \text{ and } \exists s>t : B_d(s) \in D(y,\epsilon)\}}}{h_\epsilon(\|x\|)\, h_\epsilon(\|y\|)}\, d\mu(x)\, d\mu(y)$$
$$\le\; 2\int\!\!\int \frac{h_\epsilon(\|y-x\| - \epsilon)}{h_\epsilon(\|x\|)\, h_\epsilon(\|y\|)}\,\mathbf{P}[\exists t>0 : B_d(t) \in D(x,\epsilon)]\, d\mu(x)\, d\mu(y)$$
$$\le\; 2\int\!\!\int \frac{h_\epsilon(\|y-x\| - \epsilon)}{h_\epsilon(\|y\|)}\, d\mu(x)\, d\mu(y).$$
Since the last integrand is bounded by $2^{d-2} K(x,y)$ if $y \notin D(0,\epsilon)$ and by $1$ if $y \in D(0,\epsilon)$, we get
$$\mathbf{E}Z^2 \;\le\; 2^{d-1}\int\!\!\int 1_{\{\|x-y\| \le 2\epsilon\}}\, K(x,y)\, d\mu(x)\, d\mu(y) \;+\; 2\,\mu(D(0,\epsilon)) \;+\; 2\int\!\!\int 1_{\{\|x-y\| > 2\epsilon\}} \left(\frac{\|y\|}{\|y-x\| - \epsilon}\right)^{d-2} d\mu(x)\, d\mu(y).$$
The first two terms drop out as $\epsilon \to 0$ (by dominated convergence), leaving
$$\limsup_{\epsilon \downarrow 0}\, \mathbf{E}Z^2 \;\le\; 2\, I_K(\mu) \qquad (20)$$
provided $\mu(\{0\}) = 0$. Clearly the hitting probability $\mathbf{P}[\exists t > 0,\ y \in \Lambda : B_d(t) \in D(y,\epsilon)]$ is at least
$$\mathbf{P}[Z > 0] \;\ge\; \frac{(\mathbf{E}Z)^2}{\mathbf{E}Z^2} \;=\; (\mathbf{E}Z^2)^{-1}.$$
Transience of Brownian motion implies that if the Brownian path visits every $\epsilon$-neighborhood of the compact set $\Lambda$ then it almost surely intersects $\Lambda$ itself. Therefore, by (20):
$$\mathbf{P}[\exists t > 0 : B_d(t) \in \Lambda] \;\ge\; \limsup_{\epsilon \downarrow 0}\, (\mathbf{E}Z^2)^{-1} \;\ge\; \frac{1}{2\, I_K(\mu)}.$$
Since this is true for all probability measures $\mu$ on $\Lambda$, we get the desired conclusion:
$$\mathbf{P}[\exists t > 0 : B_d(t) \in \Lambda] \;\ge\; \frac{1}{2}\,\mathrm{Cap}_K(\Lambda). \qquad (21)$$
$\Box$
Remark: To see that the constant $1/2$ in (21) cannot be increased, consider the spherical shell $\Lambda_R = \{x \in \mathbb{R}^d : 1 \le \|x\| \le R\}$; it is easy to check that $\lim_{R \to \infty} \mathrm{Cap}_K(\Lambda_R) = 2$.

Next, we pass from the local to the global behavior of Brownian paths. Barlow and Taylor (1991) noted that for $d \ge 2$ the set of nearest-neighbor lattice points to a Brownian path in $\mathbb{R}^d$ is a subset of $\mathbb{Z}^d$ with dimension 2, using their definition of dimension which is equivalent to (16). This is a property of the path near infinity; another such property is given by
Proposition 4.1  Let $B_d(t)$ denote $d$-dimensional Brownian motion. Let $\Lambda \subset \mathbb{R}^d$ with $d \ge 3$ and let $\Lambda_1$ be the cubical fattening of $\Lambda$ defined by
$$\Lambda_1 = \{x \in \mathbb{R}^d : \exists y \in \Lambda \text{ s.t. } \|y - x\|_\infty \le 1\}.$$
Then a necessary and sufficient condition for the almost sure existence of times $t_j \uparrow \infty$ at which $B_d(t_j) \in \Lambda_1$ is that $\mathrm{Cap}^{(\infty)}_{d-2}(\Lambda_1 \cap \mathbb{Z}^d) > 0$.
The proof is very similar to the proof of Theorem 2.2 and is omitted.
5 Random walk in varying dimension

To give meaning to the terms "recurrent" and "transient", we prove a "folklore" lemma which implies a 0-1 law for recurrence of RWVD.
Lemma 5.1  Let $\{F_j : 1 \le j \le l\}$ be distributions on the abelian group $Y$ and let $(n(1), n(2), \ldots) \in \{1, 2, \ldots, l\}^{\mathbb{Z}^+}$ be any sequence in which each value $1, \ldots, l$ occurs infinitely often. Let $\{X_k\}$ be independent random variables with distributions $F_{n(k)}$. Then any tail event for the sequence of partial sums $S_N = \sum_{k=1}^{N} X_k$ has probability 0 or 1.
Proof: Let $B$ be a tail event for the partial sums. If $l = 1$, the claim is a consequence of the Hewitt-Savage 0-1 law. If $l > 1$, assume for induction that the result is true for smaller values of $l$, and let $\mathcal{F}_{l-1}$ denote the $\sigma$-field generated by
$$\{X_k : n(k) \le l - 1\}. \qquad (22)$$
Conditional on $\mathcal{F}_{l-1}$, the event $B$ is exchangeable in the remaining variables $\{X_k : n(k) = l\}$; since these variables are identically distributed, the Hewitt-Savage 0-1 law shows that $\mathbf{P}[B \mid \mathcal{F}_{l-1}] \in \{0, 1\}$ almost surely. The set $\widetilde{B} := \{\mathbf{P}[B \mid \mathcal{F}_{l-1}] = 1\}$ is $\mathcal{F}_{l-1}$-measurable, and it is a tail event for the partial sums of the variables in (22). By induction, $\mathbf{P}[\widetilde{B}] \in \{0, 1\}$, which shows that $\mathbf{P}[B] \in \{0, 1\}$. $\Box$

Next, recall the random walk in varying dimension considered in the introduction: a process $\{S_k\}$ in $\mathbb{Z}^3$ with independent increments $S_k - S_{k-1}$ distributed according to a truly 3-dimensional distribution $F_3$ if $k \in \{a_n : n \ge 1\}$, and according to the projection $F_2$ of $F_3$ to the $x$-$y$ plane if $k \notin \{a_n\}$. We assume that:
$$F_3 \text{ makes the three coordinates independent, and } F_3 \text{ has mean zero and finite variance.} \qquad (23)$$
We first state an easy qualitative proposition which is sharpened in Theorem 5.4 below.
Proposition 5.2  If $\{a_n\}$ grow sufficiently fast, then the RWVD in 2 and 3 dimensions is recurrent.
Proof: Denote by $\pi_z$ the projection to the $z$-axis and by $\pi_{xy}$ the projection map to the $x$-$y$ plane. Since $\{\pi_{xy}(S_n)\}$ is a recurrent planar random walk, we may select $a_n$ inductively to satisfy
$$\mathbf{P}[\exists k \in (a_n, a_{n+1}] : \pi_{xy}(S_k) = 0] \;\ge\; 1/2. \qquad (24)$$
The process $\{\pi_z(S_{a_n})\}$ is a recurrent one-dimensional random walk, so there is almost surely a random infinite sequence $N(1), N(2), \ldots$ for which $\pi_z(S_{a_{N(j)}}) = 0$ for all $j \ge 1$. Independence of $\{\pi_{xy}(S_n)\}$ and $\{\pi_z(S_n)\}$ implies that the set of $j$ for which there exists a $k \in (a_{N(j)}, a_{N(j)+1}]$ such that $S_k = 0$ stochastically dominates the random set of positive integers gotten by including each one independently with probability $1/2$. In particular, there are almost surely infinitely many such $j$, and for each such $j$ there is some $k \in (a_{N(j)}, a_{N(j)+1}]$ with $S_k = 0$, proving recurrence. $\Box$
The argument above is quite general and extends in an obvious way to the product of two recurrent Markov chains. Iterating this argument yields the next corollary.
Corollary 5.3  If $d_1 < d_2 < \cdots < d_N$ and
$$\max\{d_{j+1} - d_j : 1 \le j \le N-1\} \;\le\; 2, \qquad (25)$$
then there exists a recurrent process $\{S_n\}$ with independent increments which interlaces infinitely many $d_j$-dimensional steps for each $j$. More precisely, $S_{k+1} - S_k$ has a truly $D(k)$-dimensional distribution for each $k$, and the sequence $\{D(k)\}$ takes on only the values $d_1, \ldots, d_N$, each one infinitely often. If (25) is violated then (clearly) any such process $\{S_n\}$ must be transient.
Next, we give the quantitative version, Theorem 5.4, of Proposition 5.2. This will be proved in detail. We also state similar theorems for RWVD in 2 and 4 dimensions and RWVD in 1 and 3 dimensions and give the necessary modifications to the proof of Theorem 5.4. Define
$$\theta(n) = \frac{\log(a_{n+1}/a_n)}{\log a_{n+1}}, \qquad (26)$$
$$\theta_1(n) = \sqrt{\frac{a_{n+1} - a_n}{a_{n+1}}}. \qquad (27)$$
Theorem 5.4  For the "$\mathbb{Z}^2$ in $\mathbb{Z}^3$" random walk in varying dimension $\{S_n\}$ considered in Proposition 5.2, we have:

(i) If $\sum_n n^{-1/2}\theta(n) < \infty$ then $\{S_n\}$ is transient.
(ii) If $\sum_n n^{-1/2}\theta(n) = \infty$ and the sequence $\{\theta(n)\}$ is nonincreasing, then $\{S_n\}$ is recurrent.
Remarks: 1. In particular, $S_n$ is recurrent for $a_n = \exp(e^{n^{1/2}})$ and transient for $a_n = \exp(e^{n^\beta})$ when $\beta < 1/2$.

2. The monotonicity assumption in (ii) is far from necessary, and may be weakened in several ways. If $\theta$ is bounded below, $S_n$ is recurrent and the proof is easier. If
$$\sup_{m > n} \theta(m)/\theta(n) \;<\; \infty, \qquad (28)$$
then $S_n$ is still recurrent when $\sum_n n^{-1/2}\theta(n) = \infty$. On the other hand, the hypothesis may not be discarded completely. To see this, let $A \subset \{1, 2, 3, \ldots\}$ be a set of times such that a simple random walk $\{Y_n\}$ on $\mathbb{Z}^1$ will have $Y_n = 0$ for only finitely many $n \in A$ almost surely, even though $\sum_{n \in A} \mathbf{P}[Y_n = 0] = \infty$ (cf. Example 1). Define the sequence $\{a_n\}$ by $a_{n+1} = 2a_n - 1$ if $n \notin A$ and $a_{n+1} = a_n^2$ if $n \in A$. For $n \in A$, $\theta(n) = 1/2$, so the sum in (ii) is infinite by the assumption $\sum_{n \in A} \mathbf{P}[Y_n = 0] = \infty$. But with probability one, $S_{a_n}$ is in the $x$-$y$ plane for only finitely many $n \in A$, while by Lemma 5.9, $\{S_k\}$ visits the origin finitely often in time intervals $[a_n, a_{n+1}]$ for $n \notin A$.

3. To see the connection to Theorem 2.2, let $W$ be any subset of the positive integers and define
$$\Phi(W) = \bigcup\, \{[a_n, a_{n+1} - 1] : n \in W\}.$$
If $W_1$ is the set of times a one-dimensional random walk is at the origin and $W_2$ is the set of times an independent two-dimensional random walk is at the origin, then $\Phi(W_1)$ intersects $W_2$ infinitely often if and only if the RWVD is recurrent. Exact capacity criteria are available for which sets intersect $W_2$ infinitely often, as well as for which gauges give $W_1$ positive capacity, but the complication introduced by the map $\Phi$ makes it easier to use the second moment method directly.
Theorem 5.5  For the "$\mathbb{Z}^2$ in $\mathbb{Z}^4$" random walk in varying dimension,

(i) If $\sum_n n^{-1}\theta(n) < \infty$ then $\{S_n\}$ is transient.
(ii) If $\sum_n n^{-1}\theta(n) = \infty$ and the sequence $\{\theta(n)\}$ is nonincreasing, then $\{S_n\}$ is recurrent.
Theorem 5.6  For the "$\mathbb{Z}^1$ in $\mathbb{Z}^3$" random walk in varying dimension,

(i) If $\sum_n n^{-1}\theta_1(n) < \infty$ then $\{S_n\}$ is transient.
(ii) If $\sum_n n^{-1}\theta_1(n) = \infty$ and the sequence $\{\theta_1(n)\}$ is nonincreasing, then $\{S_n\}$ is recurrent.

The proofs begin with some elementary estimates on the probability of returning to the origin in a specified time interval.
Lemma 5.7  Let $\{S_n\}$ be the partial sums of an aperiodic random walk on the one-dimensional integer lattice with mean zero and finite variance. Then there exist constants $c_1$ and $c_2$, depending only on the distribution of the increments, such that for sufficiently large integers $0 < a < b$,
$$c_1 \sqrt{\frac{b-a}{b}} \;\le\; \mathbf{P}[S_n = 0 \text{ for some } a \le n < b] \;\le\; c_2 \sqrt{\frac{b-a}{b}}. \qquad (29)$$
Lemma 5.8  Let $\{S_n\}$ be the partial sums of an aperiodic random walk on the two-dimensional integer lattice with mean zero and finite variance. Then there exist constants $c_1$ and $c_2$, depending only on the distribution of the increments, such that for sufficiently large integers $0 < a < b$,
$$c_1\, \frac{\log(b/a)}{\log b} \;\le\; \mathbf{P}[S_n = 0 \text{ for some } a \le n < b], \qquad (30)$$
and, in the case that $b > 2a$,
$$\mathbf{P}[S_n = 0 \text{ for some } a \le n < b] \;\le\; c_2\, \frac{\log(b/a)}{\log b}. \qquad (31)$$
Proof of Lemma 5.7: The local central limit theorem (cf. Spitzer 1964) gives
$$\mathbf{P}[S_n = 0] = \frac{c}{\sqrt{n}}\,(1 + o(1)) \qquad (32)$$
for some constant $c$ as $n \to \infty$. Write $\Gamma$ for the event that $S_n = 0$ for some $n \in [a, b-1]$. Then
$$\mathbf{P}[\Gamma] \;=\; \frac{\mathbf{E}\,\#\{k : a \le k < b \text{ and } S_k = 0\}}{\mathbf{E}\big(\#\{k : a \le k < b \text{ and } S_k = 0\} \mid \Gamma\big)}.$$
The numerator is $(2c + o(1))(\sqrt{b} - \sqrt{a})$ as $a \to \infty$, according to the local CLT. To get an upper bound on the denominator, let $T = \min\{a \le k < b : S_k = 0\}$ be the (possibly infinite) hitting time and condition on $T$ to get
$$\mathbf{E}\big(\#\{k : a \le k < b \text{ and } S_k = 0\} \mid \Gamma\big) \;\le\; \sup_{a \le t < b} \mathbf{E}\big(\#\{k : a \le k < b \text{ and } S_k = 0\} \mid T = t\big) \;\le\; 1 + \sum_{k=1}^{b-a} \mathbf{P}[S_k = 0] \;\le\; c'\sqrt{b-a}.$$
Since $\sqrt{b} - \sqrt{a} \ge (b-a)/(2\sqrt{b})$, these estimates combine to give the lower bound in (29); the upper bound is similar. The corresponding argument in two dimensions, using the estimate $\mathbf{P}[S_n = 0] \asymp 1/n$, yields Lemma 5.8; there the upper bound requires $b > 2a$, proving (31). $\Box$
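The quantity bounded in Lemma 5.7 can be computed exactly for simple random walk by evolving the law of $S_n$ with absorption at the origin switched on during $[a, b)$; this is a numerical cross-check, not part of the proof.

```python
def hit_zero_prob(a, b):
    # P[S_n = 0 for some a <= n < b] for simple random walk from 0:
    # exact dynamic programming.  `dist` is the sub-probability law of
    # the not-yet-absorbed walk; mass at 0 is absorbed at times in [a, b).
    dist = {0: 1.0}
    absorbed = 0.0
    for n in range(b):
        if n >= a and 0 in dist:
            absorbed += dist.pop(0)
        new = {}
        for x, pr in dist.items():
            new[x - 1] = new.get(x - 1, 0.0) + 0.5 * pr
            new[x + 1] = new.get(x + 1, 0.0) + 0.5 * pr
        dist = new
    return absorbed
```

For instance, with $[a, b) = [2, 4)$ the only possible return is at time 2, so the routine gives exactly $\mathbf{P}[S_2 = 0] = 1/2$; widening a window $[100, b)$ increases the probability, in line with the $\sqrt{(b-a)/b}$ behaviour in (29).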
Proof of Theorem 5.4: The second moment method will be used, as in the proof of Theorem 2.2. It is possible to get a good second moment estimate on the number of intervals $[a_n, a_{n+1} - 1]$ that contain a return to zero, but only after throwing out some of them. We must first prove:

Lemma 5.9  The number of $k$ for which $S_k = 0$ and $a_n \le k < a_{n+1}$ for some $n$ satisfying $a_{n+1} < 2a_n$ is almost surely finite.
Proof: Let $m(1) < m(2) < \cdots$ enumerate the integers $m$ for which $[2^{m-1}, 2^{m+1} - 1]$ contains some $a_n$. It suffices to show that only finitely many intervals of the form $[2^{m(j)-1}, 2^{m(j)+1} - 1]$ contain values of $k$ for which $S_k = 0$, since these cover all intervals of the form $[a_n, a_{n+1} - 1]$ satisfying $a_{n+1} < 2a_n$. Fix $j$ and let $n(j)$ denote the least $n$ such that $a_n \in [2^{m(j)-1}, 2^{m(j)+1} - 1]$. By the independence of the coordinates of $\{S_k\}$, and by the local CLT in one and two dimensions, one sees that for each $k \in [2^{m(j)-1}, 2^{m(j)+1} - 1]$, the probability of $S_k = 0$ is at most $c/(k\sqrt{n(j)})$. Summing this over all $k$ in the interval gives
$$\mathbf{P}[S_k = 0 \text{ for some } 2^{m(j)-1} \le k < 2^{m(j)+1}] \;\le\; \frac{c}{\sqrt{n(j)}}.$$
Another way to get an upper bound on this is to observe that the probability of this event is at most the product of the probability that the walk returns to the $x$-$y$ plane during the interval with the probability that it returns to the $z$-axis during the interval. Lemmas 5.7 and 5.8, applied to the intervals $[n(j), n(j+1) - 1]$ and $[2^{m(j)-1}, 2^{m(j)+1} - 1]$ respectively, show this product to be at most
$$c\,\sqrt{\frac{n(j+1) - n(j)}{n(j+1)}}\;\frac{1}{m(j)}.$$
Since $m(j) \ge j$, these two upper bounds may be written as
$$\mathbf{P}[S_k = 0 \text{ for some } 2^{m(j)-1} \le k < 2^{m(j)+1}] \;\le\; c\,\min\!\left(\frac{1}{\sqrt{n(j)}},\ \sqrt{\frac{n(j+1) - n(j)}{n(j+1)}}\;\frac{1}{j}\right).$$
Lemma 5.10 with $b_j = n(j+1) - n(j)$ now shows that these probabilities are summable in $j$, and Borel-Cantelli finishes the proof. For continuity's sake, the lemma (which is a fact about deterministic integer sequences) is given at the end of the section. $\Box$
2an and Sk = 0 for some
k 2 [an ; an+1 1], and let In = 0 otherwise. Part (i) of the theorem is just Borel-Cantelli: the hypothesis in (i) and the estimate (31) in the case b > 2a together imply that EIn n 1=2(n) is summable. Thus the random walk visits zero nitely often in intervals [an ; an+1 1] for which an+1 2an; this, together with Lemma 5.9, proves (i). To prove (ii), it suces, by the 0-1 law (Lemma 5.1), to show that the probability of Sk returning to the origin in nitely often is at least some c > 0. This follows from the two 23
propositions: Stone).
P1 EI = 1, and E(PM I )2 c(PM EI )2 (c.f. something like Kechenn=1 n n=1 n n=1 n
P EI = 1 is easy, since P1 n 1=2(n) is assumed to be in nite; the Seeing that 1 n=1 n n=1 dierence between the two sums is 1 (n) X 1=2 1an 0 and that r and n are not consecutive among numbers k with EIk > 0. Then
EIn Ir = (EIn)E(Ir j In = 1) max E(Ir j St = 0): c n(1n=2) t