Applications of the continuous-time ballot theorem to Brownian motion and related processes, by Jason Schweinsberg. Technical Report No. 583, Department of Statistics, University of California, 367 Evans Hall #3860, Berkeley, CA 94720-3860. November 8, 2000.
Abstract
Motivated by questions related to a fragmentation process which has been studied by Aldous, Pitman, and Bertoin, we use the continuous-time ballot theorem to establish some results regarding the lengths of the excursions of Brownian motion and related processes. We show that the distribution of the lengths of the excursions below the maximum for Brownian motion conditioned to first hit $\lambda > 0$ at time $t$ is not affected by conditioning the Brownian motion to stay below a line segment from $(0, c)$ to $(t, \lambda)$. We extend a result of Bertoin by showing that the length of the first excursion below the maximum for a negative Brownian excursion plus drift is a size-biased pick from all of the excursion lengths, and we describe the law of a negative Brownian excursion plus drift after this first excursion. We then use the same methods to prove similar results for the excursions of more general Markov processes.
1 Introduction

We use a continuous-time analog of the classical ballot theorem to prove some results pertaining to the lengths of the excursions of Brownian motion. We also extend these results to other Markov processes. This work was motivated by questions raised by an alternative construction, described by Bertoin in [6], of a fragmentation process introduced by Aldous and Pitman in [4].
Before reviewing the descriptions of this process, we recall the definition of a fragmentation process, as given in [6]. For $l \geq 0$, define
$$\Lambda_l = \Big\{ (x_i)_{i=1}^{\infty} : x_1 \geq x_2 \geq \cdots \geq 0, \ \sum_{i=1}^{\infty} x_i = l \Big\}.$$
Let $\Lambda = \bigcup_{l \geq 0} \Lambda_l$. Suppose $\mu_t(l)$ is a probability measure on $\Lambda_l$ for all $l \geq 0$ and all $t \geq 0$. For each $L = (l_1, l_2, \ldots) \in \Lambda$, let $\mu_t(L)$ denote the distribution of the decreasing rearrangement of the terms of independent sequences $L_1, L_2, \ldots$, where $L_i$ has distribution $\mu_t(l_i)$ for all $i \in \mathbb{N}$. Then, for each $t \geq 0$, denote by $\mu_t$ the family of distributions $(\mu_t(L), L \in \Lambda)$, which we call the fragmentation kernel generated by $(\mu_t(l), l \geq 0)$. If the fragmentation kernels $(\mu_t, t \geq 0)$ form a semigroup, then any $\Lambda$-valued Markov process with $(\mu_t, t \geq 0)$ as its transition semigroup is called a fragmentation process. In [7], Bertoin characterizes all fragmentation processes that satisfy a kind of invariance under scaling. In [8], Bertoin extends this characterization to a class of fragmentation processes having a weaker self-similarity property.

The self-similar fragmentation that has been studied the most thoroughly is the fragmentation process introduced by Aldous and Pitman in [4] and constructed another way by Bertoin in [6]. If $X = (X_t)_{t \geq 0}$ is a stochastic process such that $Z = \{t : X_t = 0\}$ is almost surely a closed set of zero Lebesgue measure, then $(0, l) \setminus Z$ almost surely consists of a finite or countable collection of disjoint open intervals whose lengths sum to $l$. The sequence consisting of the lengths of these intervals in decreasing order is almost surely in $\Lambda_l$, and we denote this sequence by $V_l(X)$. The distribution of $V_l(X)$ when $X$ is Brownian motion or a Bessel process of dimension $\delta \in (0, 2)$ is studied in [23], [25], and [27]. In this case, it was shown in [21] that $Z$ is the closure of the range of a stable subordinator of index $\alpha$, where $\alpha = 1 - \delta/2$. See also [10] for a discussion of two $\Lambda_1$-valued coalescent processes.

Proposition 3 Let $\tilde{e} = (\tilde{e}_t)_{0 \leq t \leq 1}$ be a Brownian excursion of length 1. Fix $\lambda > 0$, and let $H = \inf\{t : \lambda t - \tilde{e}_t > 0\}$. Then, $H$ is a size-biased pick from the sequence $V_1(\tilde{e})$.
1.3 The process $(\lambda t - \tilde{e}_t)_{H \leq t \leq 1}$
We know from Proposition 3 that $H = \inf\{t : \lambda t - \tilde{e}_t > 0\}$ is a size-biased pick from the sequence $V_1(\tilde{e})$. It follows from results in [12] (see Theorem 2.6 and the discussion in subsection 6.3) that conditional on $H = h$, the process $(\lambda t - \tilde{e}_t)_{0 \leq t \leq H}$ has the same law as a Brownian excursion of length $h$. The following proposition describes the process $(\lambda t - \tilde{e}_t)_{H \leq t \leq 1}$.

Proposition 4 Let $\tilde{e} = (\tilde{e}_t)_{0 \leq t \leq 1}$ be a Brownian excursion of length 1. Fix $\lambda > 0$, and let $H = \inf\{t : \lambda t - \tilde{e}_t > 0\}$. For all $r > 0$, let $W^{\lambda,r} = (W^{\lambda,r}_t)_{t \geq 0}$ be a process with the same law as a Brownian motion $B$ conditioned on $T_\lambda = r$, where $T_\lambda = \inf\{t : B_t > \lambda\}$. Then, the law of the process $(\lambda(t+H) - \tilde{e}_{t+H})_{0 \leq t \leq 1-H}$ conditioned on $H = h$ is the same as the conditional law of $(W^{\lambda,1-h}_t)_{0 \leq t \leq 1-h}$ given the event $\{W^{\lambda,1-h}_t \leq \lambda(t+h) \text{ for all } t \in [0, 1-h]\}$.

The following heuristic argument suggests that Proposition 4 should be true. A negative Brownian excursion $(-\tilde{e}_t)_{0 \leq t \leq 1}$ can be viewed as a Brownian bridge from $(0,0)$ to $(1,0)$ conditioned to stay below the line segment from $(0,0)$ to $(1,0)$. Therefore, $(\lambda t - \tilde{e}_t)_{0 \leq t \leq 1}$ is a Brownian bridge from $(0,0)$ to $(1,\lambda)$ conditioned to stay below the line segment from $(0,0)$ to $(1,\lambda)$. Conditional on $H = h$, the process $(\lambda t - \tilde{e}_t)_{0 \leq t \leq 1}$ can be split into independent pieces $(\lambda t - \tilde{e}_t)_{0 \leq t \leq h}$ and $(\lambda t - \tilde{e}_t)_{h \leq t \leq 1}$, and the process $(\lambda t - \tilde{e}_t)_{h \leq t \leq 1}$ is a Brownian bridge from $(h, 0)$ to $(1, \lambda)$ conditioned to stay below the line segment from $(0,0)$ to $(1,\lambda)$. Equivalently, the process $(\lambda(t+h) - \tilde{e}_{t+h})_{0 \leq t \leq 1-h}$ is a Brownian bridge from $(0,0)$ to $(1-h, \lambda)$ conditioned to stay below the line segment from $(0, \lambda h)$ to $(1-h, \lambda)$, which is equivalent to Proposition 4. We show in section 4 that Proposition 4 follows from the continuous-time ballot theorem and a result in [22] pertaining to size-biased sampling from Poisson point processes.
2 The continuous-time ballot theorem

We first recall the classical ballot theorem. Suppose in an election, candidate A receives $a$ votes and candidate B receives $b$ votes, where $a > b$. The classical ballot theorem states that if the votes are counted in random order, then the probability that, for all $n \geq 1$, candidate A leads candidate B after $n$ votes have been counted is $(a-b)/(a+b)$. A short proof using the reflection principle is given in section 3.3 of [14]. Another proof is given in [29].

To reformulate this result, let $\xi_i = 0$ if candidate A receives the $i$th vote, and let $\xi_i = 2$ if candidate B receives the $i$th vote. Let $X_n = \sum_{i=1}^n \xi_i$, and let $N = a + b$. Then, the classical ballot theorem states that for all even integers $k$ less than $N$, we have
$$P(X_n < n \text{ for all } 1 \leq n \leq N \mid X_N = k) = \frac{a-b}{a+b} = 1 - \frac{2b}{N} = 1 - \frac{k}{N}. \tag{11}$$
As shown in [29] and [19], equation (11) holds whenever the vector $(\xi_1, \ldots, \xi_N)$ has nonnegative integer-valued components and its distribution is invariant under the $N$ cyclic permutations of its components.
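As a concrete check, the discrete statement (11) can be verified by exhaustive enumeration over all counting orders. The vote counts $a = 4$, $b = 2$ below are an illustrative choice, not taken from the text.

```python
# Exact check of equation (11): with xi_i = 0 for an A-vote and xi_i = 2
# for a B-vote, the probability that X_n < n for all 1 <= n <= N under a
# uniformly random counting order is (a - b)/(a + b).
from fractions import Fraction
from itertools import permutations

a, b = 4, 2
votes = [0] * a + [2] * b          # xi_i = 0 for A, xi_i = 2 for B
total = good = 0
for order in permutations(votes):  # all counting orders (with multiplicity)
    total += 1
    partial = 0                    # running value of X_n
    ok = True
    for n, xi in enumerate(order, start=1):
        partial += xi
        if partial >= n:           # A fails to lead strictly after n votes
            ok = False
            break
    good += ok
prob = Fraction(good, total)
assert prob == Fraction(a - b, a + b)
print(prob)                        # prints 1/3
```

Each distinct arrangement of the votes appears with the same multiplicity among the permutations, so the fraction of "good" orders is exactly the probability in (11).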
The ballot theorem has a natural generalization to continuous-time processes with cyclically interchangeable increments. Namely, if $T > 0$ is fixed and $(X_t)_{0 \leq t \leq T}$ is a nondecreasing process with cyclically interchangeable increments such that the derivative of $t \mapsto X_t$ is almost surely zero Lebesgue almost everywhere, then
$$P(X_t \leq t \text{ for all } 0 \leq t \leq T \mid X_T) = \max\Big\{ 0, \ 1 - \frac{X_T}{T} \Big\}. \tag{12}$$
In [29], Takács studies this generalization extensively and discusses applications to queuing processes and storage processes. See [19] for another proof of (12). See also [18] for a recent extension of the result to processes with stationary, but not necessarily cyclically interchangeable, increments. The result (12) follows immediately from Proposition 6 below, which is a restatement of the theorem on p. 1 of [29]. To state Proposition 6, we need the following definition.
Definition 5 Fix $T > 0$. Let $f : [0,T] \to \mathbb{R}$ be a function, and fix $w \in [0,T]$. Let $r_w f : [0,T] \to \mathbb{R}$ be the function defined by
$$r_w f(t) = \begin{cases} f(w+t) - f(w) & \text{if } 0 \leq t < T - w \\ f(T) - f(w) + f(w+t-T) & \text{if } T - w \leq t \leq T. \end{cases}$$
That is, $r_w f$ is the function obtained by cutting the function $f$ at the point $w$ and interchanging the segment of $f$ from $0$ to $w$ with the segment of $f$ from $w$ to $T$. If $(X_t)_{0 \leq t \leq T}$ is a stochastic process and $w \in [0,T]$, then we can define a stochastic process $r_w X = (r_w X_t)_{0 \leq t \leq T}$ by replacing $f$ with $X$ in the definition above.
Proposition 6 Fix $T > 0$. Let $f : [0,T] \to \mathbb{R}$ be a nondecreasing function such that $f(0) = 0$, $f(T) \leq T$, and $f'(t) = 0$ for Lebesgue almost every $t$. Then the set $\{w \in [0,T] : r_w f(t) \leq t \text{ for all } 0 \leq t \leq T\}$ has Lebesgue measure $T - f(T)$.

From Proposition 6, we easily obtain the following corollary, which we will apply in section 3.
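Proposition 6 can be illustrated numerically for a step function. The jump locations and sizes below are an arbitrary choice satisfying the hypotheses ($f(0) = 0$, $f(T) \leq T$, zero derivative almost everywhere); the sketch uses the fact that for a step function, $r_w f(t) \leq t$ for all $t$ holds if and only if it holds at every jump location of $r_w f$, so only $w$ needs to be discretized.

```python
# Numerical illustration of Proposition 6 for a step function.
# f has jumps of sizes 0.5, 0.8, 0.4 at times 0.3, 1.1, 2.6, with T = 3,
# so f(T) = 1.7 and the set of "good" w should have measure T - f(T) = 1.3.
T = 3.0
jumps = [(0.3, 0.5), (1.1, 0.8), (2.6, 0.4)]   # (location u_i, size j_i)
fT = sum(j for _, j in jumps)                  # f(T) = 1.7

def m(u, w):                                   # jump location of r_w f
    return u - w if u > w else u - w + T

def good(w):
    # r_w f(t) <= t for all t iff, at each jump location m1 of r_w f,
    # the value of r_w f there is at most m1.
    locs = [(m(u, w), j) for u, j in jumps]
    return all(sum(j2 for m2, j2 in locs if m2 <= m1) <= m1 + 1e-9
               for m1, _ in locs)

n = 200_000                                    # grid resolution in w only
measure = T * sum(good(T * k / n) for k in range(n)) / n
assert abs(measure - (T - fT)) < 0.01
print(round(measure, 3))
```

Only the variable $w$ is gridded here, so the approximation error comes solely from the finitely many boundary points of the good set and is far below the asserted tolerance.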
Corollary 7 Fix $T > 0$. Let $f : [0,T] \to \mathbb{R}$ be a nondecreasing function such that $f(0) = 0$, $f(T) > 0$, and $f'(t) = 0$ for Lebesgue almost every $t$. Fix $c \in [0, T)$. Then, the set
$$\Big\{ w \in [0,T] : r_w f(t) \geq (t - c) \frac{f(T)}{T - c} \text{ for all } 0 \leq t \leq T \Big\} \tag{13}$$
has Lebesgue measure $c$.
[Figure 1: the graph of a nondecreasing step function $f(t)$ on $[0,T]$, with the line of (13) through $(c, 0)$. Figure 2: the graph of the corresponding function $g(t)$ on $[0,T]$.]
Proof. Let $K = (T-c)/f(T)$. Define the function $g : [0,T] \to \mathbb{R}$ by $g(t) = K(f(T) - f(T-t))$. (See Figures 1 and 2 above, and note that Figure 2 can be obtained from Figure 1 by a 180 degree rotation.) We have $g'(t) = Kf'(T-t)$ for all $0 < t < T$, so $g'(t) = 0$ for Lebesgue almost every $t$. Also, $g(0) = 0$ and $g(T) = T - c \leq T$. Therefore, by Proposition 6, the set
$$\{w \in [0,T] : r_w g(t) \leq t \text{ for all } 0 \leq t \leq T\} \tag{14}$$
has Lebesgue measure $T - g(T) = c$.

Suppose $0 \leq t < T - w$. Then $r_{T-w} f(T-t) = f(T) - f(T-w) + f(T-t-w)$. Therefore,
$$r_w g(t) = g(w+t) - g(w) = K(f(T-w) - f(T-t-w)) = K(f(T) - r_{T-w} f(T-t)).$$
Suppose instead $T - w \leq t \leq T$. Then $r_{T-w} f(T-t) = f(2T-t-w) - f(T-w)$, using the assumption that $f(0) = 0$ in the case $t = T - w$. Therefore,
$$r_w g(t) = g(T) - g(w) + g(w+t-T) = K(f(T) + f(T-w) - f(2T-t-w)) = K(f(T) - r_{T-w} f(T-t)).$$
Thus, for $0 \leq t \leq T$, we have $r_w g(t) \leq t$ if and only if $K(f(T) - r_{T-w} f(T-t)) \leq t$ or, equivalently, if and only if $r_{T-w} f(T-t) \geq f(T) - K^{-1} t = K^{-1}(T - t - c)$. Thus, $r_w g(t) \leq t$ for all $0 \leq t \leq T$ if and only if $r_{T-w} f(t) \geq K^{-1}(t - c)$ for all $0 \leq t \leq T$. Since the set in (14) has Lebesgue measure $c$, the set in (13) has Lebesgue measure $c$, which proves the corollary.
3 Results for processes with interchangeable increments

In this section, we establish two theorems which apply to all nondecreasing right-continuous processes $(X_a)_{0 \leq a \leq T}$ with interchangeable increments for which the closure of the range has zero Lebesgue measure. The first theorem states that the probability that the inverse of such a process does not cross a line from $(0, c)$ to $(X_T, T)$ is $c/T$, and that this "crossing event" is independent of the jump sizes of $(X_a)_{0 \leq a \leq T}$.
Theorem 8 Fix $T > 0$. Let $X = (X_a)_{0 \leq a \leq T}$ be a nondecreasing right-continuous process with interchangeable increments such that $X_0 = 0$ a.s. Let $S = X_T$, and assume $S > 0$ a.s. Let $Z$ be the closure of $\{t : X_a = t \text{ for some } 0 \leq a \leq T\}$, and assume $Z$ has Lebesgue measure zero. Let $J = (J_i)_{i=1}^{\infty}$ be the sequence consisting of the lengths, in decreasing order, of the disjoint open intervals whose union is $(0,S) \setminus Z$. Let $Y = (Y_t)_{0 \leq t \leq S}$ be the right-continuous inverse of $X$, defined by $Y_t = \inf\{a : X_a > t\}$ for $0 \leq t < S$ and $Y_S = T$. Let $c \in [0,T]$. Then
$$P\Big( Y_t \leq c + \frac{T-c}{S}\, t \text{ for all } 0 \leq t \leq S \Big) = \frac{c}{T}. \tag{15}$$
Moreover, $J$ is independent of the event $\big\{ Y_t \leq c + \frac{T-c}{S}\, t \text{ for all } 0 \leq t \leq S \big\}$.

[Figure 3: the graph of $Y_t$ for $0 \leq t \leq S$, together with the line from $(0, c)$ to $(S, T)$.]
Proof. Since $X$ is nondecreasing and $Z$ has Lebesgue measure zero, one can deduce from Kallenberg's characterization of processes with interchangeable increments (see Theorem 2.1 of [17]) that there exists a sequence $(U_i)_{i=1}^{\infty}$ of independent random variables with a uniform distribution on $[0,T]$ such that $(U_i)_{i=1}^{\infty}$ is independent of $(J_i)_{i=1}^{\infty}$ and
$$X_a = \sum_{i=1}^{\infty} J_i 1_{\{U_i \leq a\}} \tag{16}$$
for all $0 \leq a \leq T$. For all $i \in \mathbb{N}$, let $g_i = X_{U_i-}$ and $d_i = X_{U_i}$. Note that almost surely $d_i - g_i = J_i$ for all $i$, and $(0,S) \setminus Z = \bigcup_{i=1}^{\infty} (g_i, d_i)$. Let $U$ be a random variable, independent of $X$, with a uniform distribution on $[0,T]$. Since $X$ has interchangeable increments, we have $r_U X =_d X$. Let $r_U Z$ denote the closure of $\{t : r_U X_a = t \text{ for some } 0 \leq a \leq T\}$. If $U < U_i$, then we can see from Definition 5 that $(g_i - X_U, d_i - X_U)$ is one of the disjoint open intervals of $(0,S) \setminus r_U Z$. If $U \geq U_i$, then $(g_i + S - X_U, d_i + S - X_U)$ is one of the disjoint open intervals of $(0,S) \setminus r_U Z$. Therefore, the sequence of lengths, in decreasing order, of the disjoint open intervals whose union is $(0,S) \setminus r_U Z$ is $(J_i)_{i=1}^{\infty}$. Thus, $(X, J)$ and $(r_U X, J)$ have the same distribution.

Since $Y_t \leq T$ for all $0 \leq t \leq S$, the conclusions of the theorem hold when $c = T$. Assume for the rest of the proof that $c < T$, and let $K = S/(T-c)$. We claim that $Y_t \leq c + K^{-1} t$ for all $0 \leq t \leq S$ if and only if $X_a \geq K(a-c)$ for all $0 \leq a \leq T$. If $Y_t \leq c + K^{-1} t$, then $X_{c + K^{-1} t} \geq t$ by the right continuity of $X$. If $Y_t > c + K^{-1} t$ for some $0 \leq t < S$, then there exists $\epsilon > 0$ such that $Y_t > c + K^{-1}(t + \epsilon)$ and thus $X_{c + K^{-1}(t+\epsilon)} \leq t < t + \epsilon$. Therefore, $Y_t \leq c + K^{-1} t$ for all $0 \leq t \leq S$ if and only if $X_{c + K^{-1} t} \geq t$ for all $0 \leq t \leq S$. By making the substitution $a = c + K^{-1} t$, we see that $X_{c + K^{-1} t} \geq t$ for all $0 \leq t \leq S$ if and only if $X_a \geq K(a-c)$ for all $0 \leq a \leq T$, which proves the claim.

Let $A$ be a Borel subset of $\Lambda$. Then, using Fubini's Theorem for the fourth equality,
$$\begin{aligned}
P(\{Y_t \leq c + K^{-1} t \text{ for all } 0 \leq t \leq S\} \cap \{J \in A\})
&= P(\{X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\} \cap \{J \in A\}) \\
&= P(\{r_U X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\} \cap \{J \in A\}) \\
&= E\Big[ 1_{\{r_U X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\}} 1_{\{J \in A\}} \Big] \\
&= E\Big[ \frac{1}{T} \int_0^T 1_{\{r_w X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\}} 1_{\{J \in A\}} \, dw \Big] \\
&= E\Big[ 1_{\{J \in A\}} \frac{1}{T} \int_0^T 1_{\{r_w X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\}} \, dw \Big].
\end{aligned} \tag{17}$$
It follows from (16) that the derivative of the function $a \mapsto X_a$ is almost surely zero Lebesgue almost everywhere (see the Corollary on p. 529 of [16]). Therefore, Corollary 7 implies that the set $\{w \in [0,T] : r_w X_a \geq K(a-c) \text{ for all } 0 \leq a \leq T\}$ has Lebesgue measure $c$ almost surely. Thus, (17) implies that for all Borel subsets $A$ of $\Lambda$, we have
$$P(\{Y_t \leq c + K^{-1} t \text{ for all } 0 \leq t \leq S\} \cap \{J \in A\}) = E\Big[ 1_{\{J \in A\}} \frac{c}{T} \Big] = \frac{c}{T} P(J \in A). \tag{18}$$
Equation (15) now follows by taking $A = \Lambda$ in (18). It is then immediate from (15) and (18) that $J$ is independent of $\{Y_t \leq c + K^{-1} t \text{ for all } 0 \leq t \leq S\}$.
Remark 9 In Theorem 8, it is possible to replace the assumption that $X$ has interchangeable increments with the weaker assumption that $X$ has cyclically interchangeable increments. However, we will not need this generalization for any of the results that follow.

We now state the second theorem of this section.
Theorem 10 Fix $T > 0$. Let $X = (X_a)_{0 \leq a \leq T}$ be a nondecreasing right-continuous process with interchangeable increments such that $X_0 = 0$ a.s. Let $S = X_T$, and assume $S > 0$ a.s. Let $\lambda = T/S$. Let $Z$ be the closure of $\{t : X_a = t \text{ for some } 0 \leq a \leq T\}$, and assume $Z$ has Lebesgue measure zero. Let $J = (J_i)_{i=1}^{\infty}$ be the sequence consisting of the lengths, in decreasing order, of the disjoint open intervals whose union is $(0,S) \setminus Z$. Let $U = (U_i)_{i=1}^{\infty}$ be a sequence of independent random variables with a uniform distribution on $[0,T]$ such that $U$ is independent of $J$ and (16) holds. Let $Y = (Y_t)_{0 \leq t \leq S}$ be the right-continuous inverse of $X$, defined by $Y_t = \inf\{a : X_a > t\}$ for $0 \leq t < S$ and $Y_S = T$. Then

(a) The process $(Y_t - \lambda t)_{0 \leq t < S}$ almost surely attains its maximum at a unique time, which we denote by $K$. Almost surely $K = g_i$ for some $i$, where $g_i = X_{U_i-}$.

(b) Let $H = \inf\{t : r_K Y_t > 0\}$. Then $H$ is a size-biased pick from $(J_i)_{i=1}^{\infty}$.

[Figure 4: the graph of $Y_t$, showing a flat interval $[g_i, d_i)$ of length $J_i$ and the time $K = g_i$ at which $Y_t - \lambda t$ attains its maximum. Figure 5: the graph of $r_K Y_t$, whose first flat interval has length $H = J_i$.]
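Part (b) of Theorem 10 can likewise be checked by simulation when the jump sizes are fixed; the values of $J$ and $T$ below are an illustrative choice. Since $Y_t = U_i$ on the flat interval $[g_i, d_i)$, the maximum of $Y_t - \lambda t$ over that interval is $U_i - \lambda g_i$, so the index attaining the overall maximum should appear with the size-biased frequencies $J_i/S$.

```python
# Monte Carlo sketch of part (b) of Theorem 10 with fixed jump sizes
# J = (0.5, 0.3, 0.2) and T = 1, so lambda = T/S and the time K should
# fall at the left endpoint g_i of flat interval i with probability J_i/S.
import random

random.seed(1)
J = [0.5, 0.3, 0.2]
T = 1.0
S = sum(J)
lam = T / S
trials = 100_000
counts = [0] * len(J)
for _ in range(trials):
    U = [random.uniform(0.0, T) for _ in J]
    # g_i = X_{U_i-} is the sum of the jumps arriving before U_i
    g = [sum(J[k] for k in range(len(J)) if U[k] < U[i])
         for i in range(len(J))]
    winner = max(range(len(J)), key=lambda i: U[i] - lam * g[i])
    counts[winner] += 1
for i in range(len(J)):
    assert abs(counts[i] / trials - J[i] / S) < 0.01
print([round(c / trials, 3) for c in counts])
```

The empirical frequencies should be close to $(0.5, 0.3, 0.2)$, the size-biased distribution on the jump lengths.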
In Figure 4 above, we have labeled the time $K$ at which $(Y_t - \lambda t)_{0 \leq t < S}$ attains its maximum. Part (a) of Theorem 10 states that such a time must exist and be unique. Since the jump of $X$ having size $J_i$ is associated with a flat interval of $Y$ having length $J_i$, part (b) of Theorem 10 implies that the length of the flat interval of $Y$ starting at $K$ is a size-biased pick from the lengths of all of the flat intervals of $Y$. Equivalently, part (b) implies that the length of the first flat interval of $r_K Y$ (see Figure 5) is a size-biased pick from the lengths of all flat intervals of $r_K Y$.

We now outline our strategy for proving Theorem 10. We first show in Lemma 11 that if $w \in [0,S]$, then $(Y_t - \lambda t)_{0 \leq t < S}$ attains a unique maximum at $w$ if and only if $r_w Y_t < \lambda t$ for all $0 < t < S$. Then we show in Lemma 14 that if the distribution of the lengths $(J_i)_{i=1}^{\infty}$ of the flat intervals of $Y$ is concentrated at one point (i.e. the lengths are fixed) and $g_i$ is the left endpoint of the flat interval of length $J_i$, then
$$P(r_{g_i} Y_t < \lambda t \text{ for all } 0 < t < S) = J_i/S. \tag{19}$$
Since $\sum_{i=1}^{\infty} J_i = S$, equation (19) implies part (a) of Theorem 10, and part (b) follows from the fact that $J_i = \inf\{t : r_{g_i} Y_t > 0\}$. We then extend the results to the case in which $(J_i)_{i=1}^{\infty}$ has an arbitrary distribution by elementary conditioning arguments.

To see how (19) follows from the continuous-time ballot theorem, consider Figure 5 above. Note that $r_{g_i} Y_t = 0$ for $0 \leq t \leq J_i$, so it suffices to check that we have $r_{g_i} Y_t < \lambda t$ for $J_i < t < S$. Thus, (19) holds if and only if $(r_{g_i} Y_t)_{J_i \leq t \leq S}$ stays below the line from $(J_i, \lambda J_i)$ to $(S, T)$. Since the portion of Figure 5 to the right of the dashed line looks like Figure 3, the probability of this event is $\lambda J_i / T = J_i / S$.

We now begin the formal proof. For Lemmas 13 and 14, we use the notation of Theorem 10 and assume that the hypotheses of Theorem 10 hold. We first establish two deterministic lemmas.

Lemma 11 Fix $s > 0$, $T > 0$, and $w \in [0, s]$. Let $\lambda = T/s$. Suppose $f : [0,s] \to [0,T]$ is a function such that $f(0) = 0$ and $f(s) = T$. Then $r_w f(t) - \lambda t \leq 0$ for all $0 \leq t \leq s$ if and only if
$$f(w) - \lambda w = \max_{0 \leq t \leq s} (f(t) - \lambda t).$$
Also, $r_w f(t) - \lambda t < 0$ for all $0 < t < s$ if and only if $f(w) - \lambda w > f(t) - \lambda t$ for all $t$ such that $0 < |t - w| < s$.

Proof. If $0 \leq t < s - w$, then
$$r_w f(t) - \lambda t = f(w+t) - f(w) - \lambda t = \big( f(w+t) - \lambda(w+t) \big) - \big( f(w) - \lambda w \big). \tag{20}$$
If $s - w \leq t \leq s$, then
$$\begin{aligned}
r_w f(t) - \lambda t &= f(s) - f(w) + f(w+t-s) - \lambda t \\
&= (f(s) - \lambda s) - (f(w) - \lambda w) + \big( f(w+t-s) - \lambda(w+t-s) \big) \\
&= \big( f(w+t-s) - \lambda(w+t-s) \big) - \big( f(w) - \lambda w \big),
\end{aligned} \tag{21}$$
since $f(s) - \lambda s = 0$. Equations (20) and (21) imply both statements of the lemma.

Lemma 12 Fix $T > 0$. Choose a sequence $j = (j_i)_{i=1}^{\infty}$ in $\Lambda$ and a sequence $u = (u_i)_{i=1}^{\infty}$ in $[0,T]^{\infty}$. Let $s = \sum_{i=1}^{\infty} j_i$. Define a function $f : [0,T] \to [0,s]$ by
$$f(t) = \sum_{i=1}^{\infty} j_i 1_{\{u_i \leq t\}}.$$
Define a function $m : [0,T] \times [0,T] \to (0,T]$ such that $m(u,w) = u - w$ if $u > w$ and $m(u,w) = u - w + T$ if $u \leq w$. Then,
$$r_w f(t) = \sum_{i=1}^{\infty} j_i 1_{\{m(u_i, w) \leq t\}} \tag{22}$$
for all $w \in [0,T]$.
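The identity (22) can be spot-checked numerically for a finite jump configuration; the jump data below are an arbitrary illustrative choice, and the check compares the two sides at randomly sampled points.

```python
# Spot-check of the identity (22) in Lemma 12: the cyclic shift r_w f of
# the step function f(t) = sum_i j_i 1{u_i <= t} is the step function
# whose jump of size j_i sits at m(u_i, w).
import random

T = 2.0
data = [(0.4, 1.0), (1.3, 0.25), (1.9, 0.5)]     # (u_i, j_i), all u_i > 0

def f(t):
    return sum(j for u, j in data if u <= t)

def m(u, w):                                      # as defined in Lemma 12
    return u - w if u > w else u - w + T

def r_direct(w, t):                               # Definition 5
    if t < T - w:
        return f(w + t) - f(w)
    return f(T) - f(w) + f(w + t - T)

def r_formula(w, t):                              # right-hand side of (22)
    return sum(j for u, j in data if m(u, w) <= t)

random.seed(2)
ws = [random.uniform(0.0, T) for _ in range(200)]
ts = [random.uniform(0.0, T) for _ in range(200)]
ok = all(abs(r_direct(w, t) - r_formula(w, t)) < 1e-9
         for w in ws for t in ts)
assert ok
print(ok)
```

Random sample points are used rather than a regular grid so that the comparison avoids the discontinuity points of the two step functions, where floating-point ties would be ambiguous.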
Proof. If $0 \leq t < T - w$, then
$$r_w f(t) = f(w+t) - f(w) = \sum_{i=1}^{\infty} j_i 1_{\{w < u_i \leq w+t\}} = \sum_{i=1}^{\infty} j_i 1_{\{m(u_i,w) \leq t\}}. \tag{23}$$
If $T - w \leq t \leq T$, then
$$r_w f(t) = f(T) - f(w) + f(w+t-T) = \sum_{i=1}^{\infty} j_i 1_{\{u_i > w\}} + \sum_{i=1}^{\infty} j_i 1_{\{u_i \leq w+t-T\}} = \sum_{i=1}^{\infty} j_i 1_{\{m(u_i,w) \leq t\}}. \tag{24}$$
Equations (23) and (24) imply (22).

Lemma 13 For all $i \in \mathbb{N}$, define $g_i = X_{U_i-}$ and $d_i = X_{U_i}$. Then, almost surely, $r_{g_i} Y_t = 0$ for $0 \leq t < J_i$ and $r_{g_i} Y_{t+J_i} = \inf\{a : r_{U_i} X_a > t\}$ for $0 \leq t < S - J_i$ a.s.

Proof. It follows from (16) that $d_i = X_{U_i} = J_i + g_i$ and $Y_t = U_i$ for $g_i \leq t < d_i$. Also, $X_0 = 0$ a.s. because $U_j \neq 0$ a.s. for all $j \in \mathbb{N}$. We now prove the lemma by considering three cases.

Case 1: Suppose $0 \leq t < J_i$. Then $g_i + t < J_i + g_i = d_i \leq S$, so $r_{g_i} Y_t = Y_{t+g_i} - Y_{g_i} = U_i - U_i = 0$, as claimed.

Case 2: Suppose $0 \leq t < S - d_i$. Then $X_T - X_{U_i} = S - d_i > t$, so $\inf\{a : X_{U_i+a} - X_{U_i} > t\} \leq T - U_i$. It follows from this inequality and the fact that $X_0 = 0$ a.s. that $\inf\{a : r_{U_i} X_a > t\} = \inf\{a : X_{U_i+a} - X_{U_i} > t\} = \inf\{b - U_i : X_b > t + d_i\} = Y_{t+d_i} - U_i = Y_{g_i+t+J_i} - Y_{g_i} = r_{g_i} Y_{t+J_i}$, as claimed.

Case 3: Suppose $S - d_i \leq t < S - J_i$. If $a < T - U_i$, then $X_{U_i+a} - X_{U_i} \leq S - d_i \leq t$. Therefore, $\inf\{a : r_{U_i} X_a > t\} = \inf\{a : S - X_{U_i} + X_{U_i+a-T} > t\} = \inf\{b + T - U_i : X_b > t + d_i - S\} = Y_{t+d_i-S} + T - U_i = T - Y_{g_i} + Y_{g_i+t+J_i-S} = r_{g_i} Y_{t+J_i}$, as claimed.

Lemma 14 Assume that all hypotheses of Theorem 10 hold. For all $i \in \mathbb{N}$, define $g_i = X_{U_i-}$ and $d_i = X_{U_i}$. Assume the distribution of $(J_i)_{i=1}^{\infty}$ is concentrated at one point. Then, for all $i \in \mathbb{N}$ such that $J_i > 0$, we have $P(r_{g_i} Y_t < \lambda t \text{ for all } 0 < t < S) = J_i/S$.

Proof. It is easy to verify the lemma if $J_i = 0$ for all $i \geq 2$, so we will assume $J_2 > 0$. Fix $i \in \mathbb{N}$ such that $J_i > 0$. Define a process $R^i = (R^i_t)_{0 \leq t \leq S-J_i}$ by $R^i_t = r_{g_i} Y_{t+J_i}$. By Lemma 13, we have $r_{g_i} Y_t = 0 < \lambda t$ for $0 < t < J_i$. Therefore $r_{g_i} Y_t < \lambda t$ for all $0 < t < S$ if and only if $r_{g_i} Y_t < \lambda t$ for all $J_i \leq t < S$ or, equivalently, if and only if $R^i_t < \lambda(t + J_i)$ for all $0 \leq t < S - J_i$. Define $X^i = (X^i_a)_{0 \leq a \leq T}$ such that $X^i_a = r_{U_i} X_a$ if $0 \leq a < T$ and $X^i_T = S - J_i$. By Lemma 12, we have
$$X^i_a = \sum_{j \neq i} J_j 1_{\{m(U_j, U_i) \leq a\}} \tag{25}$$
for all $0 \leq a \leq T$, where $m$ is the function defined in Lemma 12. Since the random variables $m(U_j, U_i)$ for $j \neq i$ are independent and have a uniform distribution on $[0,T]$, equation (25) implies that $X^i$ is a nondecreasing, right-continuous process with interchangeable increments such that $X^i_0 = 0$ a.s. and $X^i_T = S - J_i > 0$. Also, the closure of $\{t : X^i_a = t \text{ for some } 0 \leq a \leq T\}$ has Lebesgue measure zero. By Lemma 13, $R^i_t = r_{g_i} Y_{t+J_i} = \inf\{a : r_{U_i} X_a > t\} = \inf\{a : X^i_a > t\}$ for all $0 \leq t < S - J_i$, and $R^i_{S-J_i} = r_{g_i} Y_S = T$. Let $0 \leq \epsilon < 1$, and let $c_\epsilon = (1-\epsilon) J_i T/S$. By Theorem 8,
$$P\Big( R^i_t \leq c_\epsilon + \frac{T - c_\epsilon}{S - J_i}\, t \text{ for all } 0 \leq t \leq S - J_i \Big) = \frac{c_\epsilon}{T} = \frac{(1-\epsilon) J_i}{S}. \tag{26}$$
Note that
$$c_\epsilon + \frac{T - c_\epsilon}{S - J_i}\, t = \frac{(1-\epsilon) J_i T}{S} + \frac{T - (1-\epsilon) J_i T/S}{S - J_i}\, t = \lambda (1-\epsilon) J_i + \lambda\, \frac{S - (1-\epsilon) J_i}{S - J_i}\, t,$$
which equals $\lambda(t + J_i)$ if $\epsilon = 0$ and is less than $\lambda(t + J_i)$ if $\epsilon > 0$ and $0 \leq t < S - J_i$. Therefore, equation (26) when $\epsilon = 0$ becomes
$$P(R^i_t \leq \lambda(t + J_i) \text{ for all } 0 \leq t \leq S - J_i) = \frac{J_i}{S}.$$
Since we always have $R^i_t \leq \lambda(t + J_i)$ when $t = S - J_i$, it follows that
$$P(R^i_t < \lambda(t + J_i) \text{ for all } 0 \leq t < S - J_i) \leq \frac{J_i}{S}. \tag{27}$$
Using (26) when $\epsilon > 0$, we obtain
$$P(R^i_t < \lambda(t + J_i) \text{ for all } 0 \leq t < S - J_i) \geq P\Big( R^i_t \leq c_\epsilon + \frac{T - c_\epsilon}{S - J_i}\, t \text{ for all } 0 \leq t \leq S - J_i \Big) = \frac{(1-\epsilon) J_i}{S}.$$
Letting $\epsilon \downarrow 0$ gives $P(R^i_t < \lambda(t + J_i) \text{ for all } 0 \leq t < S - J_i) \geq J_i/S$, which, combined with (27), implies the lemma.
Proof of Theorem 10. First assume that $(J_i)_{i=1}^{\infty}$ is constant. By Lemma 11, $(Y_t - \lambda t)_{0 \leq t < S}$ attains its maximum only at time $g_i$ if and only if $r_{g_i} Y_t < \lambda t$ for all $0 < t < S$. Since $\sum_{i=1}^{\infty} J_i = S$, it follows from Lemma 14 that $(Y_t - \lambda t)_{0 \leq t < S}$ almost surely attains its maximum at a unique time $K$ and that $P(K = g_i) = J_i/S$. Therefore, part (a) of Theorem 10 holds, and part (b) will follow if we can show that $H = J_i$ on the event $\{K = g_i\}$ whenever $J_i > 0$. By Lemma 13, $r_{g_i} Y_t = 0$ for $0 \leq t < J_i$ a.s. If $d_i = S$, then $r_{g_i} Y_{J_i} = Y_S - Y_{g_i} + Y_0 \geq T - U_i > 0$ a.s. because almost surely $U_i < T$ for all $i$. Therefore, $\inf\{t : r_{g_i} Y_t > 0\} = J_i$. If instead $d_i < S$, then for $0 < t < S - d_i$, we have $r_{g_i} Y_{J_i+t} = Y_{d_i+t} - Y_{g_i} > 0$, since $Y_{d_i+t} > U_i$ by the right continuity of $X$. Thus, $\inf\{t : r_{g_i} Y_t > 0\} = J_i$, as claimed.

Now, consider the general case in which the distribution of $(J_i)_{i=1}^{\infty}$ is not necessarily concentrated at one point. Let $\Lambda' = \Lambda \setminus \{(0, 0, \ldots)\}$. Since the sequences $(J_i)_{i=1}^{\infty}$ and $(U_i)_{i=1}^{\infty}$ determine the processes $X$ and $Y$, there exists a subset $A$ of $\Lambda' \times [0,T]^{\infty}$ such that $(Y_t - \lambda t)_{0 \leq t < S}$ attains its maximum at a unique time $K$, where $K = g_i$ for some $i$, if and only if $(J, U) \in A$. Let $\nu_T$ denote Lebesgue measure on $[0,T]$, normalized by $1/T$. Since part (a) of Theorem 10 holds when $(J_i)_{i=1}^{\infty}$ is constant, we have $\nu_T^{\infty}(\{u : (j, u) \in A^c\}) = 0$ for all $j \in \Lambda'$. Thus, by Fubini's Theorem and the fact that $J$ and $U$ are independent, we have $P((J,U) \in A^c) = 0$, which proves part (a) in the general case.

Since part (b) involves conditioning on $(J_i)_{i=1}^{\infty}$, the fact that (b) holds when $(J_i)_{i=1}^{\infty}$ is fixed implies that (b) holds when $(J_i)_{i=1}^{\infty}$ is random. We can define a function $\varphi : \Lambda' \times [0,T]^{\infty} \to \mathbb{R}$ such that $\varphi(j, u)$ is the value of $H$ when $J = j$ and $U = u$. Since (b) holds when $(J_i)_{i=1}^{\infty}$ is fixed, we have that for all $j = (j_i)_{i=1}^{\infty} \in \Lambda'$, the distribution of $\varphi(j, U)$ is $\sum_{i=1}^{\infty} (j_i/s) \delta_{j_i}$, where $s = \sum_{i=1}^{\infty} j_i$ and $\delta_{j_i}$ is a unit mass at $j_i$. Thus, since $J$ and $U$ are independent, we can deduce part (b) from the next lemma by taking $E_1 = \Lambda'$, $E_2 = [0,T]^{\infty}$, $X = J$, $Y = U$, and $Z = H$.
Lemma 15 Let $E_1$ and $E_2$ be standard Borel spaces. Let $X$ be an $E_1$-valued random variable, and let $Y$ be an $E_2$-valued random variable that is independent of $X$. Let $\varphi : E_1 \times E_2 \to \mathbb{R}$ be a measurable function, and let $Z = \varphi(X, Y)$. For each $x \in E_1$, define $\nu(x, \cdot)$ to be the distribution of $\varphi(x, Y)$. Then $\{\nu(x, \cdot) : x \in E_1\}$ is a family of conditional distributions for $Z$ given $X = x$.

Note that Lemma 15 is intuitively obvious and can be proved by applying Lemma 1 in chapter 22 of [15].
4 Proofs of Propositions 1, 3, and 4

In this section, we prove Propositions 1, 3, and 4 in the introduction by applying Theorems 8 and 10. We first introduce some notation. Let $B^{|br|,r} = (B^{|br|,r}_t)_{0 \leq t \leq r}$ be a reflecting Brownian bridge from $(0,0)$ to $(r,0)$, and let $(L^{|br|,r}_t)_{0 \leq t \leq r}$ be its local time at zero, meaning that $L^{|br|,r}_t$ is the local time of $B^{|br|,r}$ at zero up to time $t$. By Lemma 12 of [24], for each $r > 0$ there exists, on the path space of continuous functions defined on $[0,r]$, a unique family of conditional laws $(P^{\lambda,r}, \lambda \geq 0)$ for $B^{|br|,r}$ given $L^{|br|,r}_r = \lambda$ that is weakly continuous in $\lambda$. Let $A^{\lambda,r} = (A^{\lambda,r}_t)_{0 \leq t \leq r}$ be a process with law $P^{\lambda,r}$, and let $L^{\lambda,r} = (L^{\lambda,r}_t)_{0 \leq t \leq r}$ denote its local time at zero. Define another process $W^{\lambda,r} = (W^{\lambda,r}_t)_{0 \leq t \leq r}$ by $W^{\lambda,r}_t = L^{\lambda,r}_t - A^{\lambda,r}_t$.

We claim that for $\lambda > 0$, the process $W^{\lambda,r}$ has the same law as Brownian motion conditioned to first hit $\lambda$ at time $r$. To prove this claim, let $(B_t)_{t \geq 0}$ be Brownian motion, and let $(L_t)_{t \geq 0}$ be its local time at zero. Let $M_t = \sup_{0 \leq s \leq t} B_s$ for all $t \geq 0$. For all $\lambda \geq 0$, let $T_\lambda = \inf\{t : B_t > \lambda\}$ and let $\tau_\lambda = \inf\{t : L_t > \lambda\}$. By equation (5.a) of [25], the conditional law of $(B^{|br|,r}_t)_{0 \leq t \leq r}$ given
$L^{|br|,r}_r = \lambda$ is the same as the conditional law of $(|B_t|)_{0 \leq t \leq r}$ given $\tau_\lambda = r$. By Lévy's Theorem, $(M_t - B_t, M_t)_{t \geq 0} =_d (|B_t|, L_t)_{t \geq 0}$. Therefore, for all $\lambda > 0$, the conditional law of the process $(M_t - B_t, M_t)_{0 \leq t \leq r}$ given $T_\lambda = r$ is the same as the conditional law of $(|B_t|, L_t)_{0 \leq t \leq r}$ given $\tau_\lambda = r$. Thus, the process $(A^{\lambda,r}_t, L^{\lambda,r}_t)_{0 \leq t \leq r}$ has the same law as the conditional law of $(M_t - B_t, M_t)_{0 \leq t \leq r}$ given $T_\lambda = r$. It follows that $(W^{\lambda,r}_t, L^{\lambda,r}_t)_{0 \leq t \leq r}$ has the same law as the conditional law of $(M_t - (M_t - B_t), M_t)_{0 \leq t \leq r} = (B_t, M_t)_{0 \leq t \leq r}$ given $T_\lambda = r$, which implies the claim.

A construction of $A^{\lambda,r}$ for $\lambda \geq 0$ and $r > 0$ is sketched in the proof of Lemma 12 of [24]. This construction in the case when $r = 1$ is described in section 6 of [25] and subsection 6.3 of [12]. We record the construction below.
Construction 16 Fix $\lambda \geq 0$ and $r > 0$. Let $J = (J_i)_{i=1}^{\infty}$ be a random sequence having the same distribution as $V_r(A^{\lambda,r})$. For a description of this distribution when $r = 1$, see subsection 6.3 of [12]. Independently of $J$, let $U = (U_i)_{i=1}^{\infty}$ be a sequence of i.i.d. random variables with a uniform distribution on $[0, \lambda]$. Define a process $X = (X_a)_{0 \leq a \leq \lambda}$ by
$$X_a = \sum_{i=1}^{\infty} J_i 1_{\{U_i \leq a\}}. \tag{28}$$
For all $i \in \mathbb{N}$, let $d_i = X_{U_i}$ and $g_i = X_{U_i-}$. Independently of $(J, U)$, let $(e^i)_{i=1}^{\infty}$ be a sequence of independent Brownian excursions of length 1. Define $A^{\lambda,r} = (A^{\lambda,r}_t)_{0 \leq t \leq r}$ such that
$$A^{\lambda,r}_t = \sqrt{d_i - g_i}\; e^i_{(t - g_i)/(d_i - g_i)}$$
for $t \in (g_i, d_i)$ and $A^{\lambda,r}_t = 0$ for $t \in (0, r) \setminus \bigcup_{i=1}^{\infty} (g_i, d_i)$. Then, $A^{\lambda,r}$ has law $P^{\lambda,r}$. Moreover, it is stated in [12] and [25] that if $L^{\lambda,r}$ is the local time of $A^{\lambda,r}$ at zero, then $L^{\lambda,r}_t = U_i$ for $t \in (g_i, d_i)$.
Using the notation of Construction 16, note that if $t \in (g_i, d_i)$ for some $i \in \mathbb{N}$, then we have $\inf\{a : X_a > t\} = U_i = L^{\lambda,r}_t$. Since $t \mapsto L^{\lambda,r}_t$ is a.s. nondecreasing and continuous, it follows that
$$L^{\lambda,r}_t = \inf\{a : X_a > t\} \ \text{ for all } 0 \leq t < r \text{ a.s.} \tag{29}$$
Likewise, if we define $W^{\lambda,r} = L^{\lambda,r} - A^{\lambda,r}$ and $M^{\lambda,r}_t = \sup_{0 \leq s \leq t} W^{\lambda,r}_s$, then $M^{\lambda,r}_t = U_i$ for all $g_i < t < d_i$ and $t \mapsto M^{\lambda,r}_t$ is a.s. nondecreasing and continuous. Therefore,
$$L^{\lambda,r}_t = M^{\lambda,r}_t \ \text{ for all } 0 \leq t \leq r \text{ a.s.,} \tag{30}$$
and so
$$(J_i)_{i=1}^{\infty} = V_r(A^{\lambda,r}) = V_r(L^{\lambda,r} - W^{\lambda,r}) = V_r(M^{\lambda,r} - W^{\lambda,r}) \ \text{ a.s.} \tag{31}$$
It follows from (31) that the terms of the sequence $(J_i)_{i=1}^{\infty}$ are both the lengths of excursions of $A^{\lambda,r}$ away from zero and the lengths of excursions of $W^{\lambda,r}$ below its current maximum.

It is clear from (28) that $(X_a)_{0 \leq a \leq \lambda}$ is a nondecreasing right-continuous process with interchangeable increments such that $X_0 = 0$ a.s. and $X_\lambda = r > 0$ a.s. Also, (28) implies that if $Z$ is the closure of $\{t : X_a = t \text{ for some } 0 \leq a \leq \lambda\}$, then $Z$ has Lebesgue measure zero a.s. Therefore, $X$ satisfies the hypotheses of Theorems 8 and 10. From (29) and the fact that $L^{\lambda,r}_r = \lambda$, we see that the process $L^{\lambda,r}$ plays the role of $Y$ in those theorems. Also, $J$ is the sequence of ranked lengths of the open intervals whose union is $(0, r) \setminus Z$. Therefore, by applying Theorem 8, we obtain Proposition 17 below. From Proposition 17, we can deduce Proposition 1 by putting $\lambda + 1$ in place of $\lambda$ and setting $r = c = 1$.
Proposition 17 Fix $\lambda > 0$ and $r > 0$. Let $(W^{\lambda,r}_t)_{0 \leq t \leq r}$ be a process with the same law as a Brownian motion $B$ conditioned on $T_\lambda = r$, where $T_\lambda = \inf\{t : B_t > \lambda\}$. Fix $c \in [0, \lambda]$, and define $M^{\lambda,r}_t = \sup_{0 \leq s \leq t} W^{\lambda,r}_s$. Then
$$P\Big( W^{\lambda,r}_t \leq c + \frac{\lambda - c}{r}\, t \text{ for all } t \in [0, r] \Big) = \frac{c}{\lambda}. \tag{32}$$
Moreover, $V_r(M^{\lambda,r} - W^{\lambda,r})$ is independent of the event $\big\{ W^{\lambda,r}_t \leq c + \frac{\lambda - c}{r}\, t \text{ for all } t \in [0, r] \big\}$.

Proof. From Theorem 8 and equations (29) and (31), we obtain the conclusions of the proposition with $L^{\lambda,r}_t$ in place of $W^{\lambda,r}_t$ in (32) and in the definition of the event at the end of the statement of the proposition. The conclusions of the proposition then follow from (30) and the fact that the events $\{M^{\lambda,r}_t \leq c + ((\lambda-c)/r) t \text{ for all } t \in [0,r]\}$ and $\{W^{\lambda,r}_t \leq c + ((\lambda-c)/r) t \text{ for all } t \in [0,r]\}$ are the same.
Our next goal is to prove Propositions 3 and 4. For the rest of this section, we will fix $\lambda > 0$ and we will use the notation of Construction 16 and the discussion preceding Proposition 17. Also, we will define $A = A^{\lambda,1}$, $L = L^{\lambda,1}$, $W = W^{\lambda,1}$, and $M = M^{\lambda,1}$. By part (a) of Theorem 10, there is almost surely a unique time $K$ at which $(L_t - \lambda t)_{0 \leq t < 1}$ attains its maximum, and, as in part (b), we define $H = \inf\{t : r_K L_t > 0\}$. Note that $J_1 > J_2 > \cdots > 0$ a.s. Therefore
$$\{H = J_i\} = \{K = g_i\} \tag{33}$$
up to a null set. Since $A_{g_i} = 0$ for all $i \in \mathbb{N}$, we have $L_K = W_K$ a.s. Since $W_t \leq L_t$ for all $t$, it follows that $K$ is also the unique time at which $(W_t - \lambda t)_{0 \leq t < 1}$ attains its maximum. Moreover,
$$H = \inf\{t : r_K L_t > 0\} = \inf\{t : r_K W_t > 0\} = \inf\{t : \lambda t - \tilde{e}_t > 0\}.$$
By part (b) of Theorem 10, $H$ is a size-biased pick from $(J_i)_{i=1}^{\infty}$. Therefore, to prove Proposition 3, it suffices to prove that $(J_i)_{i=1}^{\infty} = V_1(\tilde{e})$ a.s., where $\tilde{e}$ is as defined in (2). By (31), it suffices to show that $V_1(M - W) = V_1(\tilde{e})$ a.s. Note that $M_t - W_t = 0$ if and only if $W_t = \sup_{0 \leq s \leq t} W_s$, and $\tilde{e}_t = 0$ if and only if $\lambda t - \tilde{e}_t = \sup_{0 \leq s \leq t} (\lambda s - \tilde{e}_s)$ or, equivalently, if and only if $r_K W_t = \sup_{0 \leq s \leq t} r_K W_s$. Since $M_0 = W_0 = 0$, $M_K = W_K$, and $M_1 = W_1 = \lambda$ a.s., Definition 5 implies that the following hold up to a null set:
$$\Big\{ t \leq 1 - K : r_K W_t = \sup_{0 \leq s \leq t} r_K W_s \Big\} = \Big\{ t - K : t \geq K, \ W_t = \sup_{0 \leq s \leq t} W_s \Big\} \tag{36}$$
$$\Big\{ t \geq 1 - K : r_K W_t = \sup_{0 \leq s \leq t} r_K W_s \Big\} = \Big\{ t + 1 - K : t \leq K, \ W_t = \sup_{0 \leq s \leq t} W_s \Big\} \tag{37}$$
Equations (36) and (37) imply that $V_1(M - W) = V_1(\tilde{e})$, which proves the proposition.

To prove Proposition 4, we will need the following lemma, which can be deduced from equation (4.i) of [22].
Lemma 20 Let $(Z_i)_{i=1}^{\infty}$ be the points of a Poisson point process $N$ on $(0, \infty)$ with mean measure $\nu$. Assume that $\nu$ is $\sigma$-finite and $\nu((0,\infty)) = \infty$. Also, assume $T = \sum_{i=1}^{\infty} Z_i$ is a.s. finite. Let $N'$ be a point process obtained by deleting a point $Z'$ from $N$, where $Z'$ is a size-biased pick from $(Z_i)_{i=1}^{\infty}$. Let $T' = T - Z'$. Then, the conditional distribution of $N'$ given $T = t$ and $T' = t'$ is the same as the conditional distribution of $N$ given $T = t'$.

Remark 21 Recall that the jump sizes of a subordinator run for time $t$ have the same distribution as the points of a Poisson point process on $(0, \infty)$ whose mean measure is $t$ times the Lévy measure of the subordinator. The Lévy measure of a stable subordinator of index $1/2$ is given by $\nu(dx) = Cx^{-3/2} \, dx$, where $C$ is a constant. Therefore, $t\nu$ is $\sigma$-finite and $t\nu((0,\infty)) = \infty$.
Proof of Proposition 4. For all $l \in \mathbb{N}$, let $J^{(-l)} = (J^{(-l)}_i)_{i=1}^{\infty}$ be a sequence of random variables such that $J^{(-l)}_i = J_i$ for $i < l$ and $J^{(-l)}_i = J_{i+1}$ for $i \geq l$. Define $(U^{(-l)}_i)_{i=1}^{\infty}$ by $U^{(-l)}_i = m(U_i, U_l)$ for $i < l$ and $U^{(-l)}_i = m(U_{i+1}, U_l)$ for $i \geq l$, where $m$ is as defined in Lemma 12 with $\lambda$ in place of $T$. It follows from Lemma 12 and the fact that $m(U_l, U_l) = \lambda$ for all $l \in \mathbb{N}$ that
$$r_{U_l} X_a = \sum_{i=1}^{\infty} J^{(-l)}_i 1_{\{U^{(-l)}_i \leq a\}} \tag{38}$$
for all $l \in \mathbb{N}$ and $0 \leq a < \lambda$. Now define $J' = (J'_i)_{i=1}^{\infty}$ and $U' = (U'_i)_{i=1}^{\infty}$ such that for all $i \in \mathbb{N}$ and $l \in \mathbb{N}$, we have $J'_i = J^{(-l)}_i$ and $U'_i = U^{(-l)}_i$ on $\{K = g_l\}$. Define $X' = (X'_a)_{0 \leq a \leq \lambda}$ by
$$X'_a = \sum_{i=1}^{\infty} J'_i 1_{\{U'_i \leq a\}}. \tag{39}$$
By Lemma 13, we have $r_{g_l} L_{t+J_l} = \inf\{a : r_{U_l} X_a > t\}$ for all $l \in \mathbb{N}$ and $0 \leq t < 1 - J_l$. Since $H = J_l$ and $X' = r_{U_l} X$ a.s. on $\{K = g_l\}$, it follows that
$$r_K W_{t+H} = r_K L_{t+H} - r_K A_{t+H} = \inf\{a : X'_a > t\} - r_K A_{t+H} \tag{40}$$
for all $0 \leq t < 1 - H$. Define $W^{\lambda,1-h} = L^{\lambda,1-h} - A^{\lambda,1-h}$, where $A^{\lambda,1-h}$ and $L^{\lambda,1-h}$ are obtained from $\tilde{J} = (\tilde{J}_i)_{i=1}^{\infty}$, $\tilde{U} = (\tilde{U}_i)_{i=1}^{\infty}$, and a sequence of Brownian excursions $(\tilde{e}^i)_{i=1}^{\infty}$ as in Construction 16. Define
$$\tilde{X}_a = \sum_{i=1}^{\infty} \tilde{J}_i 1_{\{\tilde{U}_i \leq a\}} \tag{41}$$
for all $0 \leq a \leq \lambda$. Then, using (29), we obtain
$$W^{\lambda,1-h}_t = L^{\lambda,1-h}_t - A^{\lambda,1-h}_t = \inf\{a : \tilde{X}_a > t\} - A^{\lambda,1-h}_t \tag{42}$$
for 0 ≤ t < 1 − h. We now claim that the conditional distribution of (J̃, Ũ) given the event {W_t^{λ,1−h} ≤ λ(t + h) for all t ∈ [0, 1 − h]}, which we denote hereafter by E_h, is the same as the conditional distribution of (J′, U′) given H = h. Recall that (r_K A_{t+H})_{0≤t≤1−H} and (A_t^{λ,1−h})_{0≤t≤1−h} were constructed from independent Brownian excursions over the flat intervals of (r_K L_{t+H})_{0≤t≤1−H} and (L_t^{λ,1−h})_{0≤t≤1−h} respectively. Therefore, by equations (39), (40), (41), and (42), the claim implies that the conditional law of W^{λ,1−h} given E_h is the same as the conditional law of (r_K W_{t+H})_{0≤t≤1−H} given H = h. Thus, by Lemma 18, the claim proves Proposition 4. To prove the claim, it suffices to prove the following two statements:

(a) The conditional distribution of J̃ given E_h is the same as the conditional distribution of J′ given H = h.

(b) The conditional distribution of Ũ given E_h and given J̃ = j is the same as the conditional distribution of U′ given H = h and J′ = j.

We first prove (a). Let B be a Brownian motion, and let (L_t)_{t≥0} be the local time of B at zero. Define τ_λ = inf{t : L_t > λ}. As shown in the second paragraph of this section, the law of M − W is the same as the conditional law of |B| given τ_λ = 1. Therefore, using (31), we see that J has the same distribution as the conditional distribution of V_1(B) given τ_λ = 1, which is the same as the conditional distribution of V_{τ_λ}(B) given τ_λ = 1. By (33) and part (b) of Theorem 10, J′ is obtained from J by deleting the point H, where H is a size-biased pick from J. Therefore, if V′(B) is a sequence obtained by removing a size-biased pick Z from the sequence V_{τ_λ}(B), then the conditional distribution of J′ given H = h is the same as the conditional distribution of V′(B) given τ_λ = 1 and Z = h, which is the same as the conditional distribution of V′(B) given τ_λ = 1 and τ_λ − Z = 1 − h. Recall that V_{τ_λ}(B) consists of the jump sizes of a stable subordinator of index 1/2 run for time λ, and τ_λ is the sum of these jump sizes. Therefore, by Lemma 20 and Remark 21, the conditional distribution of V′(B) given τ_λ = 1 and τ_λ − Z = 1 − h is the same as the conditional distribution of V_{τ_λ}(B) given τ_λ = 1 − h, which by (31) is the same as the distribution of J̃. By Proposition 17, J̃ is independent of E_h. Therefore, the distribution of J̃ is the same as the conditional distribution of J̃ given E_h, which establishes (a).

We now prove (b). For all h ∈ (0, 1) and all j ∈ ∆_{1−h}, there exists a subset D_{j,h} of [0, λ]^∞ such that if J̃ = j then E_h occurs if and only if Ũ ∈ D_{j,h}. Let ν denote Lebesgue measure on [0, λ], normalized by 1/λ, and let A be a Borel subset of [0, λ]^∞. Proposition 17 implies that P(E_h) = h and that E_h is independent of J̃. Fix j ∈ ∆_{1−h}. Since Ũ has distribution ν^∞ and is independent of J̃, we have

    P(Ũ ∈ A | E_h, J̃ = j) = P({Ũ ∈ A} ∩ E_h | J̃ = j) / P(E_h | J̃ = j) = P(Ũ ∈ A ∩ D_{j,h} | J̃ = j) / h = ν^∞(A ∩ D_{j,h}) / h.    (43)

Fix l ∈ N such that l − 1 is the number of terms in the sequence j greater than h, and let j^{(+h)} be the sequence in ∆_1 whose terms include h and all of the terms of j. By Lemma 11, we have K = g_l if and only if r_{g_l} L_t < t for all 0 < t < 1.

Lemma 22 Let B = (B_t)_{t≥0} be a Brownian motion, and let R = (R_t)_{t≥0} be a three-dimensional Bessel process started at zero.

(a) The process ((1 − t)B_{t/(1−t)})_{0≤t≤1} is a Brownian bridge.

(b) The process ((1 − t)R_{t/(1−t)})_{0≤t≤1} is a Brownian excursion.

(c) Fix λ > 0, and define T = inf{t : B_t = λ} and L = sup{t : R_t = λ}. Then, the processes (R_t)_{0≤t≤L} and (λ − B_{T−t})_{0≤t≤T} have the same law.
Part (a) of Lemma 22 is part of Exercise 3.10 in Chapter I of [28]. The fact that (tR_{(1−t)/t})_{0≤t≤1} is a Brownian excursion is stated in the proof of Proposition 10 in [6]. It then follows from the invariance of Brownian excursions under time reversal (see Corollary 4.3 in Chapter XII of [28]) that ((1 − t)R_{t/(1−t)})_{0≤t≤1} is a Brownian excursion. Part (c) is a time-reversal theorem proved by Williams in [30] and is also Corollary 4.4 in Chapter XII of [28]. Le Gall gives an alternative approach to this result in [20].

We begin with the following corollary pertaining to the Brownian bridge.

Corollary 23 Let B^br = (B^br_t)_{0≤t≤1} be a Brownian bridge. Fix λ > 0 and c ∈ [0, λ]. Let τ = inf{t : B^br_t = λ(1 − t)}. Then

    P( B^br_t ≤ c + ((λ(1 − τ) − c)/τ) t for all 0 ≤ t ≤ τ ) = c/λ.    (45)

Likewise, let ρ = sup{t : B^br_t = λt}. Then

    P( B^br_t ≤ (ρ(λ − c) + (c − λρ)t)/(1 − ρ) for all ρ ≤ t ≤ 1 ) = c/λ.    (46)

Equation (45) states that if (τ, λ(1 − τ)) is the point at which B^br first crosses the line segment from (0, λ) to (1, 0), then B^br stays below the line segment from (0, c) to (τ, λ(1 − τ)) with probability c/λ (see Figure 6 below). Equation (46) states that if ρ is the last time that B^br crosses the line segment from (0, 0) to (1, λ), then B^br stays below the line segment from (ρ, λρ) to (1, c) with probability c/λ (see Figure 7 below).
Figure 6: the bridge B^br, the line from (0, λ) to (1, 0), and the segment from (0, c) to (τ, λ(1 − τ)).
Figure 7: the bridge B^br, the line from (0, 0) to (1, λ), and the segment from (ρ, λρ) to (1, c).
Proof. Let B = (B_t)_{t≥0} be a Brownian motion, and let T = inf{t : B_t = λ}. By part (a) of Lemma 22, we may assume that B^br_t = (1 − t)B_{t/(1−t)} for all 0 ≤ t ≤ 1. Then, we have B^br_t = λ(1 − t) if and only if B_{t/(1−t)} = λ, so T = τ/(1 − τ). Furthermore, since

    B^br_t ≤ c + ((λ(1 − τ) − c)/τ) t for all 0 ≤ t ≤ τ if and only if B_{t/(1−t)} ≤ c + ((λ − c)/T)(t/(1 − t)) for all 0 ≤ t ≤ τ,

it follows from Proposition 17 that

    P( B^br_t ≤ c + ((λ(1 − τ) − c)/τ) t for all 0 ≤ t ≤ τ ) = P( B_t ≤ c + ((λ − c)/T) t for all 0 ≤ t ≤ T ) = c/λ,

which proves (45). We can deduce (46) from (45) by time reversal, replacing t by 1 − t. Alternatively, we can establish (46) by assuming B^br_t = tB_{(1−t)/t} and giving another argument similar to that above.
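Equation (45) lends itself to a Monte Carlo sanity check. The sketch below is entirely ours (grid size, seed, and tolerances are arbitrary choices): it simulates discretized Brownian bridges with λ = 1 and c = 0.3, locates the first grid time at which the bridge meets the line λ(1 − t), and records how often the path stays below the chord from (0, c) to (τ, λ(1 − τ)) before that time. The frequency should be near c/λ = 0.3, up to discretization bias.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, c = 1.0, 0.3
n, n_paths = 2000, 4000
t = np.linspace(0.0, 1.0, n + 1)

hits = 0
stays = 0
for _ in range(n_paths):
    # Brownian bridge on a grid: W_t - t W_1
    w = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(1.0 / n), n))))
    b = w - t * w[-1]
    above = b >= lam * (1.0 - t)
    if not above.any():
        continue            # crossing not detected on this grid; skip the path
    k = above.argmax()      # first grid index at or above the line lam*(1 - t)
    tau = t[k]
    hits += 1
    # chord from (0, c) to (tau, lam*(1 - tau)), checked strictly before tau
    chord = c + (lam * (1.0 - tau) - c) / tau * t[:k]
    if (b[:k] <= chord + 1e-9).all():
        stays += 1
p_hat = stays / hits        # should be near c / lam = 0.3
```

The estimate is only a sanity check: the first-crossing time and the staying-below event are both monitored on a finite grid, so a bias of a few percent is expected.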
Figure 8: the three-dimensional Bessel process R, with R_r = λ, staying above the line segment from (0, 0) to (r, c).
We now use Williams' time-reversal theorem to obtain a result about the three-dimensional Bessel process. See Figure 8 above for the associated picture.

Corollary 24 Let (R_t)_{t≥0} be a three-dimensional Bessel process started at zero. Fix λ > 0 and r > 0, and fix c ∈ [0, λ]. Then, P( R_t ≥ ct/r for all 0 ≤ t ≤ r | R_r = λ ) = (λ − c)/λ.

Proof. Let (B_t)_{t≥0} be a Brownian motion. Define T = inf{t : B_t = λ} and L = sup{t : R_t = λ}. Let a = λ − c. Then, B_t ≤ a + ((λ − a)/T) t for all 0 ≤ t ≤ T if and only if λ − B_{T−t} ≥ ((λ − a)/T) t = ct/T for all 0 ≤ t ≤ T. It follows from Proposition 17 and part (c) of Lemma 22 that

    P( R_t ≥ ct/r for all 0 ≤ t ≤ r | L = r ) = P( λ − B_{T−t} ≥ ct/r for all 0 ≤ t ≤ r | T = r )
    = P( B_t ≤ a + ((λ − a)/r) t for all 0 ≤ t ≤ r | T = r ) = a/λ = (λ − c)/λ.    (47)

It is stated in the proof of Theorem 3 of [11] that (R_t)_{0≤t≤r} conditioned on L = r has the same law as (R_t)_{0≤t≤r} conditioned on R_r = λ. This result, combined with (47), establishes the corollary.
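Corollary 24 can be probed numerically by using the standard fact that a three-dimensional Bessel process started at 0 and conditioned on R_r = λ has the law of the modulus of a three-dimensional Brownian bridge from the origin to a point at distance λ. The sketch below is ours (grid, seed, and tolerance are arbitrary; we take λ = 1, c = 0.5, r = 1); the staying-above frequency should be near (λ − c)/λ = 0.5, up to discretization bias.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, c, r = 1.0, 0.5, 1.0
n, n_paths = 2000, 4000
t = np.linspace(0.0, r, n + 1)

stays = 0
for _ in range(n_paths):
    # 3-d Brownian bridge from the origin to (lam, 0, 0)
    w = np.cumsum(rng.normal(0, np.sqrt(r / n), (n, 3)), axis=0)
    w = np.vstack(([0.0, 0.0, 0.0], w))
    bb = w - np.outer(t / r, w[-1])       # bridge to the origin, componentwise
    bb[:, 0] += t / r * lam               # shift endpoint to (lam, 0, 0)
    R = np.linalg.norm(bb, axis=1)        # Bessel(3) bridge from 0 to lam
    if (R[1:] >= c * t[1:] / r - 1e-9).all():
        stays += 1
p_hat = stays / n_paths                   # should be near (lam - c) / lam = 0.5
```

As with the previous check, monitoring the path only at grid points slightly inflates the estimate, since dips below the line between grid points go undetected.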
Remark 25 Let (m_t)_{0≤t≤1} be a normalized Brownian meander. It is proved in [11] that if f is a nonnegative measurable function whose domain is the set of all continuous [0, ∞)-valued functions defined on [0, 1], then

    E[ f((m_t)_{0≤t≤1}) ] = E[ f((R_t)_{0≤t≤1}) √(π/2) (1/R_1) ].

Therefore, the process (m_t)_{0≤t≤1} conditioned on m_1 = λ has the same law as (R_t)_{0≤t≤1} conditioned on R_1 = λ. Thus, Corollary 24 gives P( m_t ≥ ct for all 0 ≤ t ≤ 1 | m_1 = λ ) = (λ − c)/λ for all c ∈ [0, λ].

We now show how Corollary 24 gives rise to a result for Brownian excursions. See Figures 9 and 10 below for the pictures associated with equations (48) and (49) respectively.
Corollary 26 Let e = (e_t)_{0≤t≤1} be a normalized Brownian excursion. Fix λ > 0 and u ∈ (0, 1). Fix c ∈ [0, λ]. Then

    P( e_t ≥ ct/u for all 0 ≤ t ≤ u | e_u = λ ) = (λ − c)/λ    (48)

and

    P( e_t ≥ (c/(1 − u))(1 − t) for all u ≤ t ≤ 1 | e_u = λ ) = (λ − c)/λ.    (49)

Proof. Let (R_t)_{t≥0} be a three-dimensional Bessel process. By part (b) of Lemma 22, we may assume that e_t = (1 − t)R_{t/(1−t)} for all 0 ≤ t ≤ 1. Then e_u = λ if and only if R_{u/(1−u)} = λ/(1 − u). Therefore, using Corollary 24 for the next-to-last equality, we have

    P( e_t ≥ ct/u for all 0 ≤ t ≤ u | e_u = λ )
    = P( (1 − t)R_{t/(1−t)} ≥ ct/u for all 0 ≤ t ≤ u | R_{u/(1−u)} = λ/(1 − u) )
    = P( R_{t/(1−t)} ≥ ct/(u(1 − t)) for all 0 ≤ t ≤ u | R_{u/(1−u)} = λ/(1 − u) )
    = P( R_s ≥ cs/u for all 0 ≤ s ≤ u/(1 − u) | R_{u/(1−u)} = λ/(1 − u) )
    = P( R_s ≥ (c/(1 − u)) (u/(1 − u))^{−1} s for all 0 ≤ s ≤ u/(1 − u) | R_{u/(1−u)} = λ/(1 − u) )
    = ( λ/(1 − u) − c/(1 − u) ) / ( λ/(1 − u) ) = (λ − c)/λ,

which proves (48). Equation (49) follows easily from the symmetry of the Brownian excursion under time reversal. Alternatively, (49) can be proved by assuming e_t = tR_{(1−t)/t} and following steps similar to those above.
Figure 9: the excursion e, with e_u = λ, staying above the line segment from (0, 0) to (u, c).
Figure 10: the excursion e, with e_u = λ, staying above the line segment from (u, c) to (1, 0).
6 Excursions of Markov processes

In this section, we show how Theorems 8 and 10 lead to results pertaining to the excursions of more general Markov processes. We consider a Markov process ξ = (ξ_t)_{t≥0} which is "nice" in the sense defined at the beginning of Chapter IV of [5]. That is, we assume ξ is an R^d-valued stochastic process with right-continuous sample paths that is adapted to a complete right-continuous filtration (F_t)_{t≥0} and satisfies a Markov property. The Markov property is defined in [5] as the property that there exists a family of probability measures (P^x, x ∈ R^d) such that for every stopping time T < ∞, the shifted process (ξ_{T+t})_{t≥0} conditional on ξ_T = x is independent of F_T and has law P^x. As noted in [5], Feller processes satisfy these conditions.

We assume that ξ_0 = 0 a.s. We also assume that 0 is a regular point, which means that inf{t > 0 : ξ_t = 0} = 0 a.s., and an instantaneous point, meaning that inf{t > 0 : ξ_t ≠ 0} = 0 a.s. Thus, ξ does not hold in its initial state, but it returns to that state at arbitrarily small positive times. We also assume that 0 is recurrent, meaning that sup{t : ξ_t = 0} = ∞ a.s.

Let Z denote the closure of {t : ξ_t = 0}. Then, (0, ∞) \ Z consists of a collection of disjoint open intervals, which we call the excursion intervals of ξ away from 0. In Section 2 of Chapter IV of [5], Bertoin constructs a process (L_t)_{t≥0} called the local time of ξ, which is determined up to an arbitrary positive constant. By Theorem 4 in Chapter IV of [5], the process (L_t)_{t≥0} is continuous and nondecreasing and satisfies L_0 = 0. The same theorem states that Z is the support of the Stieltjes measure dL, so L is constant on the excursion intervals of ξ. Still following [5], we define the inverse local time process τ = (τ_a)_{a≥0} by τ_a = inf{t : L_t > a}. Then, by Proposition 7 in Chapter IV of [5], the following two equations hold for all t > 0:

    τ_{L_t} = inf{s > t : ξ_s = 0},    (50)
    τ_{L_t−} = sup{s < t : ξ_s = 0}.    (51)

By Theorem 8 in Chapter IV of [5], the process (τ_a)_{a≥0} is a subordinator. For this result, we need the assumption that 0 is recurrent, which ensures that lim_{t→∞} L_t = ∞ almost surely.
Lemma 27 The set Z is the closure of {t : τ_a = t for some a}.

Proof. Since ξ_0 = 0, clearly 0 ∈ Z. If L_t = 0 for some t > 0, then the Stieltjes measure dL is supported on [t, ∞), which contradicts the fact that Z is the support of dL. Thus, τ_0 = 0. Now suppose t > 0 and ξ_t = 0. By (50), if 0 < ε < t, then τ_{L_{t−ε}} = inf{s > t − ε : ξ_s = 0}, which is in the interval [t − ε, t]. Therefore, t is in the closure of {t : τ_a = t for some a}. It follows that Z is contained in the closure of {t : τ_a = t for some a}. Next, suppose t > 0 and τ_a = t for some a. Since (L_s)_{s≥0} is continuous and lim_{s→∞} L_s = ∞ a.s., there exists u > 0 such that L_u = a. By (50), we have τ_a = τ_{L_u} = inf{s > u : ξ_s = 0} ∈ Z. It follows that {t : τ_a = t for some a} ⊆ Z, so the closure of {t : τ_a = t for some a} is contained in Z.
Corollary 28 Fix T > 0. Let ξ = (ξ_t)_{t≥0} be a Markov process which is "nice" in the sense defined at the beginning of this section. Assume ξ_0 = 0 a.s. and that 0 is regular, instantaneous, and recurrent. Let Z be the closure of {t : ξ_t = 0}, and assume Z has Lebesgue measure zero a.s. Let (L_t)_{t≥0} be the local time of ξ at zero. Let S = inf{t : L_t > T}, and let θ = T/S. Let (J_i)_{i=1}^∞ be the sequence consisting of the lengths, in decreasing order, of the disjoint open intervals whose union is (0, S) \ Z. Then for all c ∈ [0, T],

    P( L_t ≤ c + ((T − c)/S) t for all 0 ≤ t ≤ S ) = c/T,

and (J_i)_{i=1}^∞ is independent of the event {L_t ≤ c + ((T − c)/S) t for all 0 ≤ t ≤ S}. Moreover, (L_t − θt)_{0≤t<S} almost surely attains its maximum at a unique time, which we denote by K. If H = inf{t : r_K L_t > 0}, then H is a size-biased pick from (J_i)_{i=1}^∞.
Proof. Define τ = (τ_a)_{a≥0} by τ_a = inf{t : L_t > a}. Since τ is a subordinator, τ has interchangeable increments. Recall that τ_0 = 0, as shown in the proof of Lemma 27, and S = τ_T > 0 a.s. because (L_t)_{t≥0} is continuous. By Lemma 27, the closure of {t : τ_a = t for some a} equals Z, which has Lebesgue measure zero by assumption. By (51), τ_{L_t−} ≤ t for all t > 0, so L_t ≤ inf{a : τ_a > t} for all t > 0. By the continuity of (L_t)_{t≥0}, we have τ_{L_t+ε} > t for all t > 0 and all ε > 0, so L_t + ε ≥ inf{a : τ_a > t} for all t > 0 and all ε > 0. Also, τ_a > 0 for all a > 0, so L_0 = 0 = inf{a : τ_a > 0}. Hence, L_t = inf{a : τ_a > t} for all 0 ≤ t < S, and L_S = T by the continuity of (L_t)_{t≥0}. Thus, Corollary 28 follows from Theorems 8 and 10.
Note that (J_i)_{i=1}^∞ consists of the lengths of the excursions of ξ away from 0 that are completed before local time T. Corollary 28 thus states that the event that the local time process stays below the line segment from (0, c) to (S, T) occurs with probability c/T and is independent of the excursion lengths. Also, note that H is the length of the excursion of ξ that begins at K, so the corollary shows that the length of the excursion beginning at the unique time at which (L_t − θt)_{0≤t<S} attains its maximum is a size-biased pick from all of the excursion lengths.
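The probability c/T in Corollary 28 is the continuous-time analogue of the classical ballot theorem: if candidate A receives a votes and candidate B receives b < a votes, then A stays strictly ahead throughout the count with probability (a − b)/(a + b). The discrete statement can be verified by brute force (our own sanity check, not part of the paper):

```python
from itertools import permutations

def ballot_count(a, b):
    """Count vote orders in which A stays strictly ahead throughout."""
    votes = [1] * a + [-1] * b
    orderings = set(permutations(votes))   # distinct orders of the a + b votes
    good = 0
    for order in orderings:
        tally, ahead = 0, True
        for v in order:
            tally += v
            if tally <= 0:                 # A not strictly ahead at this point
                ahead = False
                break
        good += ahead
    return good, len(orderings)

good, total = ballot_count(4, 2)           # (a - b)/(a + b) = 2/6 = 1/3
```

For a = 4 and b = 2 the enumeration gives 5 favorable orderings out of 15, matching (a − b)/(a + b) = 1/3.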
Acknowledgments The author thanks Jim Pitman for helpful discussions and for many comments on earlier drafts of this work. He also thanks Marc Yor for some suggestions.
References

[1] D. Aldous. The continuum random tree I. Ann. Probab., 19:1-28, 1991.
[2] D. Aldous. The continuum random tree II: an overview. In M. Barlow and N. Bingham, editors, Stochastic Analysis, pages 23-70. Cambridge University Press, 1991.
[3] D. Aldous. The continuum random tree III. Ann. Probab., 21:248-289, 1993.
[4] D. Aldous and J. Pitman. The standard additive coalescent. Ann. Probab., 26:1703-1726, 1998.
[5] J. Bertoin. Lévy Processes. Cambridge University Press, 1996.
[6] J. Bertoin. A fragmentation process connected to Brownian motion. Probab. Theory Related Fields, 117:289-301, 2000.
[7] J. Bertoin. Homogeneous fragmentation processes. Preprint, 2000.
[8] J. Bertoin. Self-similar fragmentations. Preprint, 2000.
[9] J. Bertoin and J. Pitman. Path transformations connecting Brownian bridge, excursion, and meander. Bull. Sci. Math., 118:147-166, 1994.
[10] J. Bertoin and J. Pitman. Two coalescents derived from the ranges of stable subordinators. Electron. J. Probab., 5:1-17, 2000.
[11] P. Biane, J.-F. Le Gall, and M. Yor. Un processus qui ressemble au pont brownien. In Séminaire de Probabilités XXI, Lecture Notes in Mathematics, Vol. 1247, pages 270-275. Springer, Berlin, 1987.
[12] P. Chassaing and S. Janson. A Vervaat-like path transformation for the reflected Brownian bridge conditioned on its local time at 0. Available via http://altair.iecn.u-nancy.fr/~chassain, 1999.
[13] P. Chassaing and G. Louchard. Phase transition for parking blocks, Brownian excursion, and coalescence. Available via http://altair.iecn.u-nancy.fr/~chassain, 2000.
[14] R. Durrett. Probability: Theory and Examples. 2nd ed. Duxbury Press, Belmont, CA, 1996.
[15] B. Fristedt and L. Gray. A Modern Approach to Probability Theory. Birkhäuser, Boston, 1997.
[16] F. Jones. Lebesgue Integration on Euclidean Space. Jones and Bartlett, Boston, 1993.
[17] O. Kallenberg. Canonical representations and convergence criteria for processes with interchangeable increments. Z. Wahrscheinlichkeitstheorie und verw. Gebiete, 27:23-36, 1973.
[18] O. Kallenberg. Ballot theorems and sojourn laws for stationary processes. Ann. Probab., 27:2011-2019, 1999.
[19] T. Konstantopoulos. Ballot theorems revisited. Statist. Probab. Lett., 24:331-338, 1995.
[20] J.-F. Le Gall. Une approche élémentaire des théorèmes de décomposition de Williams. In Séminaire de Probabilités XX, Lecture Notes in Mathematics, Vol. 1204, pages 447-464. Springer, Berlin, 1986.
[21] S.A. Molchanov and E. Ostrovski. Symmetric stable processes as traces of degenerate diffusion processes. Theory Probab. Appl., 14:128-131, 1969.
[22] M. Perman, J. Pitman, and M. Yor. Size-biased sampling of Poisson point processes and excursions. Probab. Theory Related Fields, 92:21-39, 1992.
[23] J. Pitman. Partition structures derived from Brownian motion and stable subordinators. Bernoulli, 3:79-96, 1997.
[24] J. Pitman. The SDE solved by local times of a Brownian excursion or bridge derived from the height profile of a random tree or forest. Ann. Probab., 27:261-283, 1999.
[25] J. Pitman and M. Yor. Arcsine laws and interval partitions derived from a stable subordinator. Proc. London Math. Soc., 65:326-356, 1992.
[26] J. Pitman and M. Yor. Random discrete distributions derived from self-similar random sets. Electron. J. Probab., 1:1-28, 1996.
[27] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab., 25:855-900, 1997.
[28] D. Revuz and M. Yor. Continuous Martingales and Brownian Motion. 3rd ed. Springer-Verlag, Berlin, 1999.
[29] L. Takács. Combinatorial Methods in the Theory of Stochastic Processes. Wiley, New York, 1967.
[30] D. Williams. Path decomposition and continuity of local time for one-dimensional diffusions, I. Proc. London Math. Soc., 28:738-768, 1974.